The value of prosperity
“Making the world richer” seems like a useful abstraction for thinking about impact; I’m quite interested in understanding how valuable making the world richer actually is.
For concreteness and simplicity, I want to think about an across-the-board change, where everyone’s real income increases by 1% (due to some exogenous increase in supply, improvements in efficiency, or whatever). Of course no intervention would have such a simple effect, but it nevertheless seems like a good starting point for similar analyses. [Note: this is half of an older post which I recently divided into two pieces for the sake of modularity.]
The effects of prosperity
The biggest effect of a 1% income increase seems to be speeding up everything that people are paying for (by about 1%). People will do more research, manufacture more stuff, conduct more charitable activities, etc. I think that speeding everything up would have little effect, so I’m going to focus on the gap between what gets sped up and what doesn’t. I see two major effects:
- Of all of the stuff that happens in the world, people are only paying for some of it. They aren’t paying for disease and natural disaster, and so we don’t expect those processes to be sped up by the same amount. (Those processes may still get sped up, if they are a byproduct of other processes that people are paying for.)
- Marginal consumption is not equal to average consumption. People buy more luxury goods and fewer basic necessities, they save more, and they change their behavior in subtler ways. So increasing income by 1% changes what kind of thing society does.
I think effect #1 carries most of the expected benefit, and is the source of the common intuition that economic activity makes the world better. So I think this is the most important effect to understand and evaluate. The critical question is: what gets sped up when people get more of what they want? For the most part we expect that people want things which are good for them. However, (1) many negative events are byproducts of productive activity, and (2) people’s individual interests may not be aligned with social value. So it requires a closer look to say that effect #1 is positive, and even if it is positive it will certainly require a much closer look to see just how positive it is. I’ll discuss this in the next section.
Effect #2 seems ambiguously signed. If people buy fewer crappy TVs and more good TVs, is that better or worse than effect #1 alone would suggest? I don’t know. People seem to save slightly more and donate slightly less as they get richer. (This is surprising to me, but here are some second-hand numbers, indicating that modern estimates for the income elasticity of giving are uniformly less than 1.) If we had robust estimates of the goodness of investment and giving then this might be a significant consideration (but I expect big differences in giving elasticity between different donors). Overall, I could imagine learning more and concluding that this consideration is important, but at the moment this effect seems small in expectation.
Beyond effects #1 and #2 there are other behavior changes as people get richer, but for the moment I’m going to set those aside (the effects seem uncertain and ambiguously signed). Maybe richer folk are happier (or less happy) and happier folk make slightly more (or less) altruistic decisions or something, but unless I had a particular robust line of argument in mind, that kind of reasoning seems pretty tenuous to me. I’m sure there are other effects as well that would be turned up in a more careful analysis, but I’m even more at a loss to estimate their signs, and I think that to first order they probably wash out.
So the critical question is: which of the processes in the world today are being driven by people getting what they want? Does giving people what they want accelerate the good stuff more, and if so by how much?
Setting up the comparison
Here is a simple, bad model: each year society does some useful work, and there is some probability of irreversible social collapse. As more work is done, the probability of collapse eventually falls (in the short run it may go up, but in the long run it eventually drops to 0, at least in all of the possibly valuable worlds). Given this model, what is the value of increasing the quantity of work by 1% this year? Doing this year’s work 1% faster removes 1% of a year of exposure to the current risk, so the value is simply 1% times the current annual risk of collapse.
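This toy model can be checked numerically. In the sketch below (my own illustration, not from the post; the hazard rate and work quantities are made-up), collapse is a constant annual hazard that lasts until a fixed stock of work is finished, so 1% extra work this year removes 1% of a year of exposure:

```python
import math

def survival_prob(work_per_year, total_work, hazard):
    """Chance of never collapsing, with a constant annual hazard
    applied until the total stock of work is finished."""
    years_at_risk = total_work / work_per_year
    return math.exp(-hazard * years_at_risk)

r = 0.001          # assumed 0.1%/yr collapse risk (made-up)
W, w = 100.0, 1.0  # a century of work at the baseline pace (made-up)

baseline = survival_prob(w, W, r)
# 1% extra work in the first year finishes everything 0.01 years sooner
boosted = math.exp(-r * (W / w - 0.01))

gain = boosted - baseline
# The gain matches "1% of a year of collapse risk" to first order
print(gain, 0.01 * r * baseline)
```

The match holds to first order regardless of the particular hazard rate chosen, which is the point of the model.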
In general, there may be many processes which are increasing or decreasing the value of the far future. Nevertheless, the picture seems basically the same. As long as most of the value is in the future, the positive and negative processes are guaranteed to be approximately balanced. If we want to know how good speeding up the positive stuff is, we can just estimate how bad all the bad stuff is.
Moreover, if we know that an intervention will accelerate all of the good stuff by 1%, and will accelerate most (but not all) of the negative stuff by 1% as well, then we can estimate its effects by looking only at the negative effects which are being accelerated by some p < 1%. The goodness of the intervention is equal to the total badness of those effects (each multiplied by (1 – p)%).
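As a concrete illustration of this accounting (all numbers below are hypothetical placeholders, not estimates from the post): the gain from the intervention is the badness of each lagging process, weighted by how far it lags behind the 1% speedup.

```python
# name: (annual badness, acceleration p it actually receives)
# All figures are invented for illustration only.
bad_processes = {
    "war":       (1e-4, 0.002),  # sped up only 0.2% instead of 1%
    "disease":   (5e-5, 0.000),  # not sped up at all
    "accidents": (2e-5, 0.010),  # fully sped up: contributes nothing
}

speedup = 0.01  # the 1% across-the-board acceleration

# Each bad process contributes badness * (1% - p); processes that are
# fully accelerated drop out of the sum entirely.
gain = sum(badness * (speedup - p)
           for badness, p in bad_processes.values())
print(gain)
```

Note how the fully-accelerated "accidents" entry contributes nothing, matching the claim that only the lagging processes matter.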
The good stuff
I think that most good things are a result of people getting what they want. A priori this seems plausible, according to the picture in which value is relatively fragile (randomly changing something is bad) and so most good things happen as a result of someone explicitly trying to do good. Moreover, I can’t think of any serious exceptions. I can think of a handful of plausible counterexamples, but they don’t seem too significant:
- The sun shines. Some sources of energy aren’t driven by human activity. If you are already using all of the available space for wind power, you can’t easily scale up your production by 1%. At the moment this is a very small consideration, since the bottleneck for renewable energy is infrastructure and tech (which people are paying to scale up). Far enough in the future it might be an issue. I suppose that over the very long run coal and gas are being replenished by natural processes, but that is so slow as to not be worth mentioning.
- Morally relevant stuff happens all of the time in nature, and it will be cut short if human activity accelerates. I’m not much worried about this because I’m mostly concerned with the quality of the far future.
- Societies change. Much of this change is driven by deliberate effort by people trying to improve the situation, or by technological progress, but some of it may depend on psychological processes that people aren’t pushing on. If we think those changes are significant and positive, this could be a disadvantage to accelerating human activity. I think this effect is relatively modest and quite ambiguous. I believe that society is probably getting better over time, but only because of the active efforts of people trying to improve it. I don’t see much reason to expect random social drift to be either good or bad. ETA: that said, social changes currently underway might take time to percolate through society, as Carl observes in the comments. That percolation may be driven by aging and turnover of people, so if in-progress social changes are good then this is a positive effect which wouldn’t be sped up. I don’t know if this is a big consideration, but it might be and is certainly worth keeping in mind.
- Environments recover independently of human activity. Surely some of this goes on, but I don’t know of any really significant examples.
The bad stuff
So now we are left with the question: what bad events aren’t purely byproducts of things we value? Note that if something is partly a byproduct of good things and partly exogenous, then it will be accelerated by less than 1%. Accelerating something by only 0.5% is half as good as stopping it altogether. If something is a complicated product of many inputs, some of which are being increased by 1% and some of which are being increased by less than 1%, it will tend to be increased by less than 1%, and that will capture many of the gains of stopping it altogether.
Here are my best guesses about bad things that won’t be sped up:
- Disease burden wouldn’t increase much.
- Robbery, rape, and many other forms of crime probably wouldn’t increase significantly (hard to predict whether they would increase or decrease).
- Resource consumption would increase, but probably not 1-for-1. Complementarity might drive up demand for natural resources, but not on a 1-for-1 basis.
- Traffic wouldn’t increase much. Neither commuting nor leisure would increase. Shipping traffic would increase though.
- Natural disasters wouldn’t increase. The economic damage caused by each would increase 1-for-1.
- Terrorism probably wouldn’t increase significantly (hard to predict whether it would increase or decrease).
- Rate of war wouldn’t increase 1-for-1. War seems complicated and I don’t understand the causes well. But it seems almost certain that some of the inputs to war are governed by the same sorts of considerations as the items above, and none of the inputs will scale up more than 1-for-1, so the net effect will be less than 1-for-1.
- The risk of social collapse wouldn’t increase 1-for-1. Many relevant processes of decay are not sped up, and the argument above applies.
Notably absent from the list are industrial and technological accidents, which I would expect to increase nearly 1-for-1.
How bad is the bad stuff?
The question we ultimately care about is: what is the total impact of all of the bad events each year that aren’t sped up? Of course I won’t be able to do this question justice here, and I hope to return to it in future posts. My guess for the biggest single impact is from war, and so I want to spend the rest of the section taking a stab at estimating just how bad war is. As usual, I’m focused on the long-term impact. So for me, the biggest cost of a war is probably the possibility of destruction from which it will be difficult for society to recover.
The first note is that the actual probability of a massive war over the coming years looks quite low. But this does not mean that we don’t need to think about war now; if war is plausible in the future but not now, then presumably some changes will have to occur before war is plausible, and those changes are probably underway now.
In this case, the cost of one year of moving towards war is roughly the same as the expected annual cost of war over the whole future. A crude way to estimate this is to look at the historical severity of wars and try to extrapolate. The 20th century contained two wars that killed 1% of the population. The severity of wars is roughly power-law distributed, with wars that kill 100 times more people roughly 10% as likely. (Though I’m pretty skeptical about whether this is really a power law, and it is hard to tell whether it holds up at the very high end because we have few data points. See e.g. second-hand data reported here. The dubious statistics there seem to be par for the course.) That would suggest that there is maybe a 0.1% chance per year of a war at such a large scale that it could plausibly kill everyone.
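The back-of-the-envelope extrapolation above can be written out explicitly (the inputs are the rough numbers in the text; the single-step power-law scaling is a simplifying assumption of mine):

```python
# Two wars killing ~1% of the population in the 100 years of the
# 20th century gives a base rate for wars at that scale.
wars_killing_1pct_per_year = 2 / 100

# Power-law assumption from the text: wars 100x as deadly are ~10% as likely.
factor_per_100x = 0.10

# Treat a war ~100x the 1%-of-population benchmark as one "at such a
# large scale that it could plausibly kill everyone".
annual_prob_huge_war = wars_killing_1pct_per_year * factor_per_100x
print(annual_prob_huge_war)  # 0.002/yr, i.e. on the order of 0.1% per year
```

The product lands at 0.2% per year, the same order of magnitude as the ~0.1% figure in the text; any extra discounting of whether such a war really ends civilization would push it downward.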
The actual risk of such a war, and the probability of recovery from such a massive war, seems to be very sensitive to improvements in military technology. Again, a big question which I don’t really have a handle on. I would guess that destructive tech will be much better, just based on the crudest trend extrapolation. Sufficiently advanced technology could increase the destructive potential even of smaller wars.
Based on arguments of this kind, I would eyeball the per-annum cost of wars to be about 0.01% of the total value of civilization.
I wouldn’t yet feel comfortable estimating the value of economic progress based on reasoning along these lines, and I wouldn’t trust any estimate that I produced (in particular, I think that producing a number from such a rough analysis would be worse than using some much simpler heuristic). I’m just hoping to clarify the necessary inputs to such a calculation, and maybe to give a crude sense of how large an answer we should expect. There are many possible routes forward, and I’m optimistic that some of them will yield relatively robust estimates.
If prosperity means more people get what they want, and if they want others to suffer, more victims will suffer.
For example, if in the future prosperity means being able to simulate sentient minds (like Sims, but really sentient), then torturing them for fun could become a common pastime, with real suffering as a consequence.
A lot of sadists will want their private simulated rape and torture, with really sentient victims.
I don’t really know if a 1% increase in the economy would mean more or less than a 1% increase in war likelihood / severity. This seems to be a central question. One might think that more prosperity means less international conflict. But it also means more technological power coming faster, before society knows how to respond.