In military terms, quality and quantity can be conceptualised as convertible. For example, an army of 1,000 tribal warriors may be roughly equivalent to an army of 800 similar warriors with slightly better weapons. The assumption of convertibility holds only up to a point. As the quality differential increases, quality's contribution to force effectiveness shifts from a modest multiplier to something closer to exponential. In short, past a certain threshold no amount of quantity can compensate for a difference in quality. It doesn't matter how many tribal warriors you have if your opponent has a self-replicating army of probes.
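To make the threshold intuition concrete, here is a toy attrition sketch in Python. It is my own illustration, loosely Lanchester-flavoured, not anything from the argument above: each round a side inflicts casualties proportional to its size times its quality, except that a side whose quality falls below some floor fraction of the enemy's cannot damage them at all. Every number, including the `floor` and `k` parameters, is an invented assumption.

```python
def battle(n_a, q_a, n_b, q_b, floor=0.2, k=0.001, rounds=100_000):
    """Toy Lanchester-style attrition: returns 'A', 'B', or 'stalemate'.

    Casualties per round = k * (attacker size) * (attacker quality),
    but a side whose quality is below `floor` times the enemy's
    inflicts no damage at all -- the hard-counter regime.
    """
    a, b = float(n_a), float(n_b)
    for _ in range(rounds):
        dmg_to_b = k * a * q_a if q_a >= floor * q_b else 0.0
        dmg_to_a = k * b * q_b if q_b >= floor * q_a else 0.0
        a, b = max(a - dmg_to_a, 0.0), max(b - dmg_to_b, 0.0)
        if a == 0.0 or b == 0.0:
            break
    if a > 0.0 and b == 0.0:
        return "A"
    if b > 0.0 and a == 0.0:
        return "B"
    return "stalemate"

# Small quality gap: quantity converts into strength as expected.
print(battle(1000, 1.0, 800, 1.25))      # -> 'A': numbers carry the day
# Huge quality gap: below the damage floor, no amount of quantity helps.
print(battle(10**6, 1.0, 100, 1000.0))   # -> 'B': a million warriors lose to 100 probes
```

In the first regime the model reproduces convertibility: 1,000 mediocre warriors beat 800 slightly better ones. In the second, the weaker side's damage floors out at zero, so its numbers never enter the outcome at all.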
(One response is that a larger quantity of forces does increase your comparative strength, because it means that wiping you out requires more resources. This is true even if your forces are so comparatively weak as to be unable to fight. I think the claim that quantity usually makes you harder to wipe out is true. The problem is that being hard to exterminate does not mean that you have a fighting force with non-zero strength. Your opponent can still act as they wish, go where they wish and interfere with your political structures as they wish. Your ability to impede them is non-existent, and that is the ultimate measure of military strength. Being hard to kill is not equivalent to military strength. If it were, cockroaches would be stronger than any human superpower.)
The quantity/quality interchangeability fallacy often comes up in discussions about alien life, artificial intelligence and other X-risk scenarios. Those discussions often descend into talk of war and its feasibility or rationality given the costs involved. How could an alien civilisation attack us? Why would they, given the resource expenditure involved in crossing interstellar distances? How could an AI take over the world? Surely we'd see it doing that and just bomb it. How could a single techno-authoritarian regime win against the rest of the world combined? The answer is that these questions are fundamentally mistaken because they presume a conflict, and a conflict requires at least two sides. The force differentials involved in many X-risk situations are likely so large as to make any notion of resistance, war or adversarial struggle generally laughable. Instead of an invasion, maybe an alien species would launch a probe carrying a Planck worm which would convert our whole solar system, and an ever-growing sphere stemming from it, to novo-vacuum. Instead of building killer robots, a.k.a. the Terminator scenario, maybe an AI cracks molecular nano-tech and creates self-replicating grey goo which quickly converts the Earth and then the solar system to computronium (the back-of-the-envelope sketch after the list below gives a sense of how quickly). In short, any vastly more advanced agent will likely have such a large qualitative advantage over us that:
- The fact that they have a quantitatively weaker starting position is irrelevant.
- The costs to them of destroying us are close to zero.
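How quickly is "quickly"? A back-of-the-envelope sketch, with every number an assumption of mine purely for illustration (real replicators would be limited by energy and material transport long before this): a replicator population that doubles hourly from a 1 kg seed reaches Earth's mass in a few days.

```python
import math

EARTH_MASS_KG = 5.97e24   # approximate mass of the Earth
seed_kg = 1.0             # assumed starting mass of replicators
doubling_hours = 1.0      # assumed doubling time (pure illustration)

# Number of doublings needed to go from the seed to Earth's mass.
doublings = math.log2(EARTH_MASS_KG / seed_kg)
print(f"{doublings:.0f} doublings ≈ {doublings * doubling_hours / 24:.1f} days")
# -> about 82 doublings, i.e. roughly 3.4 days at one doubling per hour
```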
Stalin was wrong. When you follow the curve far enough, quantity does not have a quality all its own.
(Side-note: it could be the case that there's a tech ceiling beyond which no further innovation is possible. It could be the case that in the far future better tech again becomes a linear multiplier. That doesn't seem to be the case now, so this logic *should* be relevant for the present and near future.)