Beware brittle arguments
by paulfchristiano
Often there is a tension between a simple argument suggesting that a trend is positive on average, and more subtle arguments suggesting it might be negative at the moment. For example, all of the following are arguments I have encountered in the last few months:
- Economic development. In general, if people get more of what they want and are richer I expect the world to improve. But richer people might eat more meat, which might be the most important impact of their actions and might cause the net impacts of economic development to be negative. Or richer societies might have more researchers working on any given problem in parallel, which might increase the probability of accidents.
- Well-informed philanthropists. In general, if philanthropists know more I expect them to make better decisions. Nevertheless it is often argued that people being wiser in one area or another would actually be bad, due to Gettier-style antics. For example, philanthropists might support AI research too much because they underestimate the possible costs, but might nevertheless support AI research much more if they had more accurate estimates of the benefits (which they also underestimate), so better-informed philanthropists might do worse (a toy illustration of this dynamic follows the list). Similarly, philanthropists might support catastrophic risk reduction too little because they are not sufficiently concerned with human extinction, but might nevertheless support such interventions even less after obtaining more accurate estimates of that risk, so that publishing more accurate estimates could cause harm.
- Functional markets. In general, when there aren’t externalities I expect that allowing people to trade things will make them richer. But it is often argued that particular trades would have indirect negative effects. For example, functional organ markets might let vendors charge the poor higher prices and effectively coerce them into selling organs, increasing inequality. Allowing $20/gallon gas prices during disasters might let environmentally or socially irresponsible actors capture outsized profits.
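As promised above, a toy calculation of the Gettier-style dynamic, with made-up numbers (nothing in the post pins these down): suppose some research has a true benefit of 5 and a true cost of 8, and a philanthropist initially underestimates both, perceiving a benefit of 2 and a cost of 1.

$$\underbrace{2 - 1 = +1}_{\text{both errors}} \quad\longrightarrow\quad \underbrace{5 - 1 = +4}_{\text{benefit corrected only}} \quad\text{vs.}\quad \underbrace{5 - 8 = -3}_{\text{fully informed}}$$

Fixing only the benefit estimate quadruples the perceived case for funding, even though full information would reverse the decision; two compensating errors can leave someone closer to the right choice than one corrected error does.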
I am generally skeptical of such arguments, for two related reasons:
1. Brittleness
These arguments tend to be of the form “A implies B implies C, which is bad” and to rest on not one but several tentative propositions (A implies B, B implies C, and C is more important than the positive effects in question; each of those propositions is often itself conjunctive). If any one of these propositions is wrong, the argument loses all force, so such arguments require a relatively detailed picture of the world to be accurate. The argument for the general goodness of economic progress or better information seems much more robust, and applies even if our model of the future is badly wrong.
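As a rough illustration of how quickly conjunctions decay (with made-up probabilities): if the three links are roughly independent and each is credible with probability 0.7, the full chain holds only about a third of the time:

$$P(\text{chain}) = P(A \Rightarrow B)\cdot P(B \Rightarrow C)\cdot P(C \text{ dominates}) = 0.7^3 \approx 0.34.$$

On these numbers, a single well-tested generalization held with probability 0.8 outweighs the chain in expectation unless the effect the chain points to is more than about 2.3 times as large.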
In many contexts very uncertain arguments are being weighed against other very uncertain arguments, and figuring out the sign of an expected effect is important despite uncertainty. But the situation is different when we can appeal to simple arguments based on relatively well-tested generalizations. The complicated arguments can only win if they are particularly solid or deal with much bigger effects.
2. Dilution
These arguments tend to focus on a single step in a long line of changes, and claim that this change is negative despite the average change being positive. But this overlooks the fact that the marginal effect of taking one more step down the road is not the same as the isolated effect of the next step itself.
If developments are literally arranged on a line and proceed at a constant pace, then the effect of taking an extra step now is to take each future step that much sooner. The impact is the same as speeding up the average step by that much, and so is practically independent of the characteristics of the current step. This is roughly the situation with respect to economic progress or technological progress within a narrow area.
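One way to formalize this (a toy model with notation of my own, not anything from the post): let $d(t)$ be the position on the development curve at time $t$, and let $u(\cdot)$ be the rate at which value accrues at each position. Accelerating progress at time $t_0$ by a small amount $\epsilon$ shifts the whole remaining trajectory earlier, so over a horizon $T$:

$$\Delta V = \int_{t_0}^{T} \bigl[ u(d(t+\epsilon)) - u(d(t)) \bigr]\, dt \;\approx\; \epsilon \bigl[ u(d(T)) - u(d(t_0)) \bigr].$$

The marginal value depends only on the endpoints of the trajectory, not on the character of the particular step being accelerated, which is why one extra step behaves like speeding up the average step.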
Normally things aren’t so straightforward, but similar considerations often apply:
- Developments in the same direction often depend on or facilitate one another.
- The same resources and people will often push the same kinds of developments (so replacement mixes the effects together, and your impact ends up being the same as that of the average development of that kind).
- The most important impact of investing in many interventions may be building up the infrastructure to carry out similar interventions in the future.
- The same costs may afflict many otherwise good activities; accepting those costs once can open up many opportunities. (For example: if people are doing the right thing for the wrong reasons there are likely to be many things you don’t want to tell them. If they are doing a slightly worse thing but for the right reasons, then they can continue to improve their choices as they learn more.)
All of these push the effect of particular changes closer to the average effect of changes of that kind.
Conclusion
Of course this is not intended as a universal counterargument. I find many claims of exceptionality persuasive, sometimes because they are supported by empirical evidence and sometimes because the analytical considerations have been worked out in enough detail. But in general I find that they don’t meet the burden of proof needed to upset the general trend.
Comments

I understand that your argument is a more general one, but since it’s the first example you give and also something I feel is important, could you explain what you make of the economic growth -> more meat eating argument? It doesn’t seem to me to be much undermined by either of the weaknesses you point to:
‘Increased GDP will lead to increased meat eating’ seems a direct and empirically extremely well-supported prediction; the counter-arguments (e.g. that increased GDP will lead to faster development of meat substitutes, or to lower populations such that total consumption falls even as per-capita consumption rises) seem much more conjunctive and brittle, though they might be true. ‘Increased meat-eating will have bad effects that outweigh the good effects of increased GDP’ involves debatable moral judgements, but from certain moral perspectives it might be obviously true at first glance, though subject to revision on further consideration (as obviously true, say, as ‘increased GDP has good effects for human beings’).
I would imagine that people arguing that + GDP -> + meat eating -> bad outweighs good would also argue that previous increases in GDP which led to increased meat eating were bad, so there is no claim that this change is negative despite the average change being positive.
I’m not sure myself what I make of the argument, but it seems an important one to me and it doesn’t seem to fit with the rest of what you wrote here.
I think this is a relatively robust argument, amongst the examples I gave. But it still seems pretty flimsy:
1. Factory farming is one random effect of prosperity, and it’s not clear why it would be a big part of the picture. Indeed, even if we are concerned exclusively with animal welfare, it seems that the positive effects of economic growth on wild animal suffering (via habitat destruction) are much larger.
2. It’s still an argument about the very short term, which appears to lose all force if you take a more reasonable view and consider the effects of progress across a broader period. It seems quite clear that in the long run of human development farm animal suffering falls to low levels. Acceleration just moves us along a development curve; no matter what animal suffering looks like at each point on that curve, if it eventually drops to something quite low, then speeding up progress reduces total suffering. And even if suffering doesn’t go to zero along the curve, the value of such acceleration is still more or less independent of the present situation w.r.t. factory farming.
3. It would be odd if the incidental side-effects of current activity on animals were the most important effect. A priori this isn’t very likely (since so much optimization power goes into other things); almost all people also consider it unlikely; and on actual detailed arguments it seems implausible (since effects on the future seem likely to dominate).
The case of withholding information that would make people oppose your view more is interesting. Often it happens to me due to value divergences rather than incomplete factual information, and in those cases we don’t expect the divergence to eventually go away.
That said, I still think there’s a strong quasi-deontological (i.e., rule utilitarian) case for sharing information whether or not it hurts your cause because (a) this makes you more credible in the long run and (b) there’s a strong prior that more information is better, maybe even in ways you didn’t anticipate.
Still, maybe it could be argued that you don’t have to invest major resources in producing findings that seem likely to turn people against your position.