Don’t attack policies/ideas because of what they symbolise

(Epistemic Status: Uncertain. Many obvious counterarguments. At best a general rule with many, many exceptions)

In competitive debating a few years ago there was a controversy over gender pronouns. At the start of a debate, speakers introduce themselves. For example, the first speaker would say something like “I’m Bob, speaking first”. The British and American contingents on the world debating council pushed for gender pronoun introductions, meaning speakers would also state their preferred pronouns. There was a backlash, with many non-Anglo nations objecting. What happened next isn’t important. What is important is that those objecting to the policy made a fundamental mistake. Instead of objecting to what they actually disagreed with, creeping liberal bias and the imposition of Western far-left norms on debating, they objected to a visible symbol of those things and to the faction that pushed them. This was wrong, both morally and tactically.

If there is something you think is wrong, it is usually better to attack that thing rather than other things tangentially associated with it. (We usually associate policies because they are proposed by the same faction or seem to come from the same political ideology.) The original policy you disagree with probably has flaws, and you can explain rationally and clearly why you think it’s bad. The related policy, on the other hand, may well be a good idea, and by attacking it you stop a good thing from coming to pass, make yourself look like an idiot and waste time and political capital on a hard target when a soft one is readily available. Don’t do it. You’re choosing to lose. You’re also contributing to the problem of ideas being judged by which side supports them rather than on their utility.

The general principle: Judge ideas on their own merits.

Blood like Water

I met a girl at work who wanted to join the reserves. When I asked her why, she talked about why it was good for her. Why she would enjoy the challenge. Why the skills would be useful. She never once mentioned serving the greater good. I told her that by joining she could be called into active service at any time, as many were during the Iraq war, and required to kill or die. She hadn’t considered it.

Looking back, what’s shocking is not the lack of thought and research that went into such a serious decision. It’s the amorality. Being a soldier means being a warrior. It means taking life. Even in a non-frontline role, what you do will kill. Translate the right enemy communication and an airstrike hits a house full of enemy fighters. Give your troops better intelligence and they kill the enemy more effectively. What’s chilling is the amorality, the fact that the decision to kill was made so lightly and based entirely on self-interest.

I can understand people with different morals. I can respect them, even if I find their ethics objectionable or even evil. What I can never respect is a person without morals. Anyone willing to spill blood like water without a second thought, without any consideration of the moral weight of that decision, is not a just person.

Two kinds of bad people

There are two kinds of bad people. There are people who are bad because they lack morals, and there are people who are bad because they have the wrong morals. They are very different.


Most people who are bad are bad because they lack morals. Criminals. Drug dealers. Thieves. Etc. These people don’t do bad things out of a genuine belief that stealing or hurting others is the right thing to do. They do it because they have no morals and/or because they are too weak to live by the morals they have. They may well have circumstantial justifications for their crimes, but these are usually paper-thin and only serve as rationalisations. These people are like animals. They respond to simple stimuli: pleasure and pain. Hurt them when they do evil and they will stop. Establish dominance over them and make them fear you and you will have peace. If you’re faithful, try to change them, but changing people’s character is hard.


Some people who are bad are bad because their morals are evil. Jihadis. Ideological killers. Etc. They do evil knowingly and willingly, believing it to be good. They differ from the average criminal. The criminal’s motivation is selfish. The ideologically motivated evildoer’s is not, and some will be willing to pay for their beliefs with their lives if that’s what it takes. Dealing with them is hence a different matter. Force alone is seldom sufficient, and bloodshed alone can’t kill an idea.


That’s a lie. It can and has. The Soviets killed whole peoples. The Mongols put Baghdad to the sword. The CCP smothered Falun Gong. If you’re willing to be brutal enough, to spill enough blood, you can certainly kill an idea. Modern democratic states are not willing, so that option isn’t worth discussing. (And for those who think it is, remember that a state which can liquidate millions of its citizens at will may well be more horrific than whatever ideology or group you personally dislike.)


I don’t know what else to say, so I won’t. One other thing: it’s not discrete. Most things aren’t. It’s a scale, and many monsters fall somewhere in the middle. Still, I’ve found the model useful.

Large quality differentials are exponential force multipliers

In military terms, quality and quantity can be conceptualised as convertible: an army of 1000 tribal warriors may be roughly equivalent to an army of 800 similar warriors with slightly better weapons. The assumption of convertibility holds only up to a point. As the quality differential increases, quality’s multiplier on force effectiveness gradually goes from being multiplicative to exponential. In short, past a certain threshold no amount of quantity can compensate for a difference in quality. It doesn’t matter how many tribal warriors you have if your opponent has a self-replicating army of probes.
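To make that concrete, here’s a rough toy sketch (my own illustration, not a real combat model): compare a world where quality is just a linear multiplier on numbers against one where a unit’s ability to hurt a stronger opponent decays exponentially with the quality gap. The function names and the decay constant k are arbitrary choices made for illustration.

```python
import math

def linear_power(n, quality):
    # Straight conversion: quality is a simple multiplier on numbers.
    return n * quality

def gapped_power(n, quality, enemy_quality, k=2.0):
    # Sketch of the "past a threshold" claim: each unit's ability to hurt a
    # stronger opponent falls off exponentially with the quality gap.
    gap = max(0.0, enemy_quality - quality)
    return n * quality * math.exp(-k * gap)

# Small gap: quantity can compensate (1000 tribal warriors vs 800 better-armed ones).
print(linear_power(1000, 1.0), linear_power(800, 1.3))   # 1000.0 vs 1040.0

# Huge gap: a million warriors against 100 vastly superior units.
print(gapped_power(1_000_000, 1.0, 10.0), gapped_power(100, 10.0, 1.0))
# ~0.015 vs 1000.0: no realistic quantity closes the gap.
```

The exact functional form doesn’t matter; the point is only that once quality enters through something steeper than a linear term, adding more of the weaker units stops mattering.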

(One response is that having a larger quantity of forces does increase your comparative strength because it means that wiping you out requires more resources. This is true even if your forces are so comparatively weak as to be unable to fight. I think the claim that quantity usually makes you harder to wipe out is true. The problem is that being hard to exterminate does not mean you have a fighting force with non-zero strength. Your opponent can still act as they wish, go where they wish and interfere with your political structures as they wish. Your ability to impede them is non-existent, and that is the ultimate measure of military strength. Being hard to kill is not equivalent to military strength. If it were, cockroaches would be stronger than any human superpower.)

The quantity/quality interchangeability fallacy often comes up in discussions about alien life, artificial intelligence and other X-risk scenarios. Those discussions often descend into talk of war and its feasibility or rationality given the costs involved. How could an alien civilisation attack us? Why would they, given the resource expenditure involved in crossing interstellar distances? How could an AI take over the world? Surely we’d see it doing that and just bomb it. How could a single techno-authoritarian regime win against the rest of the world combined? These questions are fundamentally mistaken because they presume a conflict, and a conflict requires at least two sides. The force differentials involved in many X-risk situations are likely so large as to make any notion of resistance, war or adversarial struggle laughable. Instead of an invasion, maybe an alien species would launch a probe carrying a Planck worm, converting our whole solar system, and an ever-growing sphere stemming from it, to novo-vacuum. Instead of building killer robots, a.k.a. the Terminator scenario, maybe an AI cracks molecular nanotech and creates self-replicating grey goo which quickly converts the Earth and then the solar system to computronium. In short, any vastly more advanced agent will likely have such a large qualitative advantage over us that

  1. The fact that they have a quantitatively weaker starting position is irrelevant.
  2. The costs to them of destroying us are close to zero.

Stalin was wrong. When you follow the curve far enough, quantity does not have a quality all of its own.

(Side-note: It could be the case that there’s a tech ceiling beyond which further innovation is impossible. It could be the case that in the far future better tech again becomes a linear multiplier. Neither seems to be the case now, so this logic *should* be relevant for the present and near future.)

Leaking Meaning

There’s so much meaning in the world. I always think back to all the stories I’ve read. To Ursula Le Guin and the worlds she made. To how much they shaped me. I’m scared that even if we keep the paper, over time the meaning leaks out of stories. As culture changes, language shifts and the implicit concept map we have of the world alters over generations, stories no longer mean the same thing. Sometimes they mean something entirely new, and from the death of one story another rises. Other times, most times, the meaning just drains away. Most people reading the Iliad today can’t understand it, not really. They see the events and the order they happen in, but the characters’ minds and the world are so alien that little meaning remains. The struggle of fighting against a better king than yours because your tribe is against their tribe. The age-old question of how to come to terms with war knowing that some of your enemies are good people, that their struggles to fight for their families and people are as just as yours. The tastes and sounds and sights words bring to us. The web of social norms through which we interpret characters’ actions. I was watching The Wire today and realised that I saw so much more now than I had a few years ago, largely because I’m less autistic and can better understand what’s happening between and within people. Maybe we all have autism when trying to read stories from distant cultures.

There’s so much meaning lost and it’s so hard to preserve. Even if one day we have god-tech and can alter our minds so that we inhabit the same mindspace as a contemporary reader, it still won’t help. Changing yourself that much means it’s not you any more. Maybe. Maybe not. It’s sad to think of how much we’ll lose and how much we’ve already lost.

Systematising Foreign Policy

(Epistemic status: Low confidence.)

(Bias: High. Only giving one side of the argument.)

The more I think about foreign policy, the more I think it’s a strange extra-legal backwater. In all other aspects of statecraft, from economics to healthcare, there are harsh constraints on what leaders can and cannot do, and huge amounts of law enforcing transparency and holding decision makers accountable for failing to comply with statutory requirements. In international relations, there seem to be almost no constraints. A politician can declare war and kill 500'000 people, sponsor murderous regimes and Islamist militias, subvert democratic governments, etc. All of this can be done at essentially the executive level, with only the most serious decisions, like war, requiring legislative approval. The public is seldom informed, as these policies are secret, and when things go wrong there is no domestic legal framework to hold people to account. Whenever I read The American Conservative, Jacobin, Chomsky or any other dissident source, I hear the same refrains about how horrific some aspects of Western foreign policy have been and how shocking it is that none of the instigators were held accountable. I seldom hear workable solutions. Looking at other areas of society, I think one of the major reasons institutions such as the police and health services are more functional, less corrupt and more accountable today than in the 1800s is that they have been systematised. The legal system was extended to cover them, and a variety of rules, regulations and targets dictate what conduct is acceptable and what conduct is illegal and constitutes gross misconduct or a felony. Foreign policy is still a legal Wild West. Changing that may be a good way to ensure better decisions are made.

Sex Redistribution & Does anyone hold equality as a basic value?

I went to a rationalist meetup this weekend. We spent a long time talking about Robin Hanson’s article on sex redistribution, sex inequality, relationship inequality and potential policies to fix both. I think my greatest problems with comparing income redistribution to sex redistribution are:

  • Redistributing income is less morally abhorrent than redistributing sex/relationships because the latter requires dictating to people what relationships they must, can and cannot have.
  • Redistributing relationships seems to require giving the state/society too much power over individuals.
  • Marginal utility for relationships/sexual encounters diminishes far faster than for money. Most people can only maintain a set number of relationships at a time, whereas our capacity to spend money on anything from shelter to education to life extension to yachts seems near limitless. Hence we’d hit a ceiling very quickly. (Although I am conflicted about this. It’s possible that total supply is lower than demand, in which case this criticism is irrelevant.)

(Note: None of these are criticisms of Hanson, as his article does not talk about redistribution in the sense of taking from some and giving to others, but rather in the sense of changing a given distribution to make it more equal.)

I think my problem with debates about equality is that I don’t see equality as a terminal value. I suspect that most people don’t either. The simplest test for whether we value equality innately is to see whether there is any other innate value we would be willing to trade away for some amount of equality, no matter how small. Let’s say I innately value people being happy (utilitarianism; it’s stupid, I know). If equality is something I also value innately, I should be willing to trade off some amount of utility for some amount of equality. Say there are two possible worlds: Equalistan, which has 100 people with 50 units of utility each, and its arch-enemy Slightly-less-Equalistan, which has 99 people with 50 units of utility and 1 person with 51. I find the slightly less equal world, where people are better off, preferable. If the second world had 99 people with 50 units and 1 person with 99'999'999'999, I’d still prefer it. No matter how big the numbers get or what innately valued thing I substitute for utility, my intuitions are the same. There is no amount of the thing I value that I would give up for an increase in equality, no matter how lopsided the tradeoff. To me that suggests that equality has no value in and of itself. I instinctively think that most people agree with me, and that those who don’t just aren’t capable of engaging in thought experiments that strip away their instrumental reasons for valuing equality. I don’t have a high degree of confidence in this view. It could very well be a case of the mind projection fallacy, and other people may have different axiomatic beliefs/moral intuitions from mine. Empirical questions require empirical answers.
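For concreteness, here’s a small sketch of the comparison (my own framing of the thought experiment, using total utility and a Gini coefficient as a stand-in inequality measure; the world names follow the ones above):

```python
def total_utility(world):
    return sum(world)

def gini(world):
    # Rough inequality measure: 0 is perfect equality, values near 1 mean
    # almost everything is concentrated in one person.
    n = len(world)
    mean = sum(world) / n
    diffs = sum(abs(a - b) for a in world for b in world)
    return diffs / (2 * n * n * mean)

equalistan = [50] * 100
slightly_less_equalistan = [50] * 99 + [51]
very_unequal_world = [50] * 99 + [99_999_999_999]

for name, world in [("Equalistan", equalistan),
                    ("Slightly-less-Equalistan", slightly_less_equalistan),
                    ("Very unequal world", very_unequal_world)]:
    print(f"{name}: total = {total_utility(world)}, Gini = {gini(world):.4f}")
```

If equality were a terminal value, there should be some pair of worlds where I’d accept a lower total in exchange for a lower Gini; the claim above is that, at least for me, there isn’t.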