Beware brittle arguments
Often there is a tension between a simple argument suggesting that a trend is positive on average, and more subtle arguments suggesting it might be negative at the moment. For example, all of the following are arguments I have encountered in the last few months:
- Economic development. In general, if people get more of what they want and are richer I expect the world to improve. But richer people might eat more meat, which might be the most important impact of their actions and might cause the net impacts of economic development to be negative. Or richer societies might have more researchers working on any given problem in parallel, which might increase the probability of accidents.
- Well-informed philanthropists. In general, if philanthropists know more I expect them to make better decisions. Nevertheless it is often argued that people being wiser in one area or another would actually be bad, due to Gettier-style antics: someone doing the right thing for partly mistaken reasons may do worse once only one of those mistakes is corrected. For example, philanthropists might support AI research too much because they underestimate the possible costs, but might nevertheless support AI research much more if they had more accurate estimates of the benefits (which they also underestimate), so better-informed philanthropists might do worse. Similarly, philanthropists might support catastrophic risk reduction too little because they are not sufficiently concerned with human extinction, but might nevertheless support such interventions even less after obtaining more accurate estimates of that risk, so that publishing more accurate estimates could cause harm.
- Functional markets. In general, when there aren’t externalities I expect that allowing people to trade things will make them richer. But it is often argued that particular trades would have indirect negative effects. For example, functional organ markets might allow vendors to charge the poor higher prices and effectively coerce them into selling organs, increasing inequality. Allowing $20/gallon gas prices during disasters might reward environmentally or socially irresponsible behavior with outsized profits.
I am generally skeptical of such arguments, for two related reasons:
These arguments tend to be of the form “A implies B implies C, which is bad,” and to rest not on one but on several tentative propositions (A implies B, B implies C, C outweighs the positive effects in question; each of those propositions is often itself conjunctive). If any one of these propositions is wrong, the argument loses all force, so the argument requires a relatively detailed and accurate picture of the world. The argument for the general goodness of economic progress or better information seems much more robust, and applies even if our model of the future is badly wrong.
In many contexts very uncertain arguments are being weighed against other very uncertain arguments, and figuring out the sign of an expected effect is important despite uncertainty. But the situation is different when we can appeal to simple arguments based on relatively well-tested generalizations. The complicated arguments can only win if they are particularly solid or deal with much bigger effects.
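A toy calculation, with purely illustrative probabilities (not estimates from the text), shows why chaining tentative propositions is so costly: even if each link is fairly likely, their conjunction quickly becomes less credible than a single well-tested generalization.

```python
# Sketch: why conjunctive "A implies B implies C" arguments are brittle.
# All probabilities below are illustrative assumptions.

def joint_confidence(step_probs):
    """Probability that every step in a chain of propositions holds,
    assuming the steps are independent."""
    p = 1.0
    for prob in step_probs:
        p *= prob
    return p

# A subtle argument resting on three tentative propositions, each fairly likely:
brittle = joint_confidence([0.7, 0.7, 0.7])

# A simple argument resting on one well-tested generalization:
robust = joint_confidence([0.9])

print(f"brittle chain: {brittle:.2f}")          # 0.34
print(f"robust generalization: {robust:.2f}")   # 0.90
```

Three steps at 70% confidence leave the whole argument at roughly 34%, well below the single 90% generalization; in practice the steps are often positively correlated, which softens the decay somewhat, but the basic asymmetry remains.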
These arguments tend to focus on a single step in a long line of changes, and claim that this step is negative even though the average change is positive. But this overlooks the fact that the marginal effect of taking one more step down the road is not the same as the effect of that next step considered in isolation.
If developments are literally arranged on a line and proceed at a constant pace, then the effect of taking an extra step now is to take each future step that much sooner. The impact is the same as speeding up the average step by that much, and so is practically independent of the characteristics of the current step. This is roughly the situation with respect to economic progress or technological progress within a narrow area.
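A minimal simulation of this point, with made-up step effects and a `benefit_of_extra_step` helper that is my own illustration rather than anything from the text:

```python
# Toy model: developments lie on a line, one step per period, and step i has
# effect step_effects[i]. Taking one extra step now shifts the whole future
# schedule one period earlier. All numbers are illustrative.

def cumulative_progress(step_effects, extra_step_now=False):
    """Progress level at the end of each period; an extra step now means the
    schedule runs one step ahead for the rest of the horizon."""
    shift = 1 if extra_step_now else 0
    return [sum(step_effects[:t + shift]) for t in range(1, len(step_effects) + 1)]

def benefit_of_extra_step(step_effects):
    """Total extra progress-periods gained from one extra step taken now."""
    base = cumulative_progress(step_effects)
    sped = cumulative_progress(step_effects, extra_step_now=True)
    return sum(s - b for s, b in zip(sped, base))

# Two roads that differ only in the size of the *current* step:
print(benefit_of_extra_step([5, 1, 1, 1]))  # 3
print(benefit_of_extra_step([2, 1, 1, 1]))  # 3
```

The benefit of the extra step depends only on the steps still ahead, not on the current step's own effect, matching the claim that the impact is practically independent of the characteristics of the current step.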
Normally things aren’t so straightforward, but similar considerations often apply:
- Developments in the same direction often depend on or facilitate one another.
- The same resources and people will often push the same kinds of developments (so replacement effects mix the impacts together, and your marginal impact ends up being the same as that of the average development of that kind).
- The most important impact of investing in many interventions may be building up the infrastructure to carry out similar interventions in the future.
- The same costs may afflict many otherwise good activities; accepting those costs once can open up many opportunities. (For example: if people are doing the right thing for the wrong reasons there are likely to be many things you don’t want to tell them. If they are doing a slightly worse thing but for the right reasons, then they can continue to improve their choices as they learn more.)
All of these push the effect of particular changes closer to the average effect of changes of that kind.
Of course this is not intended as a universal counterargument. I find many claims of exceptionality persuasive, sometimes because they are supported by empirical evidence and sometimes because the analytical considerations have been worked out in enough detail. But in general I find that they don’t meet the burden of proof needed to upset the general trend.