How useful is “progress”?
by paulfchristiano
Most of the things that are happening in the world seem valuable to me: we understand science and engineering better, we acquire more expertise, the productive workforce grows, we invest in infrastructure and capital faster than it degrades, and so on. If I make the world of today richer or more technologically sophisticated, it seems like those gains will persist and compound for quite a while. On the other hand, people who work at cross-purposes to progress seem to get little traction. So naturally, when I consider trying to make the world better, I’m inclined to try to accelerate progress. Unfortunately I think that our intuitions overstate the value of speeding up progress (of all kinds), and that in the aggregate I don’t much care whether human progress goes faster or slower.
The basic issue is that accelerating progress doesn’t change where we are going, it only changes how quickly we get there. So unless you are in a rush, speeding things up doesn’t make the world much better. Of course, there are some cases where speeding up society does change things—for example when society is racing against destructive natural processes—but I suspect those effects are small.
The world of today vs. the world of tomorrow
Consider the comparison between the world of tomorrow (or rather, a distribution over possible worlds of tomorrow) and the world of today. At face value it looks like the world of tomorrow is much better off—as I said above, civilization seems richer, more knowledgeable, more populous, etc. I basically buy arguments about the usefulness of technology, the value of a larger workforce, and the usefulness of the things we build.
But let’s look more closely at the “world of today.” I don’t care about what the world looks like per se—I care about how the universe evolves, starting from that world. And if we let the universe evolve, starting from the world of today, what we get is one day’s worth of morally valuable events + whatever we would have gotten, had we started from the world of tomorrow. (After we wait one day, the world of today turns into the world of tomorrow.) Of course I’m normally thinking about how I can change the world of tomorrow, but here I am trying to figure out the value of the developments around the world as I expect them to unfold, in order to reason about how useful it would be to make them unfold faster without trying to change their direction.
So we can say informally:
The value of the world of today = the value of the world of tomorrow + the value the events of today have for their own sake.
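The informal identity above can be written as a short recursion (a sketch in my own notation, not from the original post: $V(w_t)$ is the value of the universe's evolution starting from the world at day $t$, and $v_t$ is the intrinsic value of day $t$'s events):

```latex
\[
  V(w_t) = v_t + V(w_{t+1}),
  \qquad\text{so, unrolling over } T \text{ days,}\qquad
  V(w_0) = \sum_{t=0}^{T-1} v_t + V(w_T).
\]
```

Acceleration moves us along the sequence $w_0, w_1, \dots$ without changing where it is headed, so its value shows up only through the per-day terms $v_t$ that get skipped or compressed.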
My view, and I think the current median view, is that the events of today have positive value for their own sake. Thus the value of the world of today is in fact higher than the value of the world of tomorrow. On net, the changes that took place between the world of today and the world of tomorrow have negative value.
Personally, I have aggregative, time-insensitive values. I expect the future to be big, and I care about big things more. By comparison, the world of today looks pretty small. So I don’t think that the changes occurring each day have much negative value on net. I would be happy to accelerate change, and forego some morally relevant experiences today, if it had even a very small impact on the quality of the future. So I approximate the changes that occur each day as morally neutral on net.
What changes?
Between the world of today and the world of tomorrow, very many things change. All that I’ve said is that in total they have small or negative value. Here is one way of slicing things up:
- Natural processes, not directed by humans. We endure some constant risk of catastrophe, the stars burn down and the universe beyond Earth runs its course, and so on.
- Things humans explicitly do to make the world better. Sometimes humans do stuff because they anticipate it will lead to a safer world, a happier world, or a more sustainable world, with the intention of improving the future.
- Things humans do for other reasons, particularly to make themselves happier. I think that most things people do, and I would normally say most valuable things that humans do, are done to sustain themselves, to amuse themselves, to enjoy themselves, etc. It turns out trade works pretty well.
Each of these changes has some value to me, let’s say V1, V2, V3. Let’s suppose we are in a regime (say a day) where we can ignore non-linear effects. What I’m claiming is that V1 + V2 + V3 ~ 0 (and similarly for any other way of breaking down the events of a day into buckets). One bucket can only be good if the other buckets together are correspondingly bad.
Personally, I think that V1 is pretty small. The risk of catastrophe today, with our current technological capabilities, seems to be modest. The stars burn themselves down, but they do so oh-so-slowly on human timescales (though I have lots of uncertainty here, which tends to amplify the benefits of haste). I think that the other categories are likely to have much larger impacts than V1, even if I don’t know their sign. (But this is a tough and somewhat contentious empirical claim, which I’ll try to get some leverage on in a future post.)
So that leaves V2 = -V3. That is, I believe that one of two things is true:
- People are so badly mistaken (or their values so misaligned with mine) that they systematically do harm when they intend to do good, or
- Other (particularly self-interested) activities are harmful on average.
I believe that (2) is more likely than (1), and so faced with the fact that one or the other is true, I expect it is (2).
Differential progress
Saying that a difference is “small” is not itself meaningful. If I am a small part of the world, any change in my behavior will have a small effect, but I should still care about those effects. What matters are the relative sizes of the different impacts of my decisions, so if I say that “progress” has a small impact, this is only meaningful if I provide alternative mechanisms by which our actions have a larger impact. I think the most important intervention is differential development.
Informally, any change to the world can be divided into two components: a part which is “parallel” to the direction the world is heading, and a part which is “perpendicular” (and if the change is small, we can model the value of the change as approximately the sum of the values of those two components). For example, if I were to speed up progress in physics, I could say that this has two effects: one is to push the world farther on the trajectory it was going anyway, and the other is to shift which activities humans do (though the latter effect may be much reduced by replaceability). What I am suggesting is that it’s the “perpendicular” component which is most meaningful. Rather than viewing technological progress as a general contribution to the human project, if we are concerned with its long-term value we should instead consider it as a zero-sum shuffling of energy from one project to another.
It seems to me that different projects do have significantly different effects on the entire future, for example by changing the probability of humanity’s survival or by changing the character of human civilization. If so, shifting energy from one of these projects to another, e.g. by working on projects with positive effects, could have a large impact.
[…] I suspect that the events of each year are morally neutral, when taken altogether. This is not because I know anything about the future. It’s because I think the world of tomorrow is as valuable in expectation as the world of today as a tautology. I wouldn’t pay any money to transform the world of today into the world of tomorrow—I’d rather just wait a year. Unfortunately, in light of concerns about replaceability, many of our actions may (essentially) have the effect of accelerating progress in one domain or another. If that’s the case, it behooves us to have an understanding of which changes in the world we like and which are negative. […]
So (assuming the changes are small enough to neglect non-linear effects): V1 + V2 + V3 + V4 + V5 ~ 0.
Something feels a little odd about this argument (which is not itself enough to say it’s wrong, but encourages me to be suspicious). A couple of thoughts follow.
You’re valuing the world of today and that of tomorrow by total future histories. Practically we would tend to value them by estimated future histories. This is mostly for tractability, but with some interpretations of quantum mechanics it can also be the case that the world of today does not determine its future history. Either way with this approach it’s quite possible for the value of the world tomorrow to be significantly higher or lower than the value of the world today. Your argument then would instead show that the expected value of the world tomorrow will be close to the value of the world today — it’s just that if particularly good or bad events materialise or fail to materialise, that changes things. But the rough equality of expectations could still be enough to get the rest of your argument to go through.
A more major point is that it seems you can’t evaluate the size of V1 just by thinking about it in isolation; it’s necessary to consider our likely future path. You can see this because otherwise we could hold the natural processes fixed, and your argument would seem to imply that it’s not just a fact about our society, but a necessary condition that V2 is roughly -V3. We could conclude that not only do our actions fail to make the world much better in aggregate, but that it is impossible for them to do so. This seems highly implausible.
More explicitly, here’s why I think V1 will be sizeable (in expectation). While scenarios in which we spread out through the universe and persist on the order of the lifetime of the universe (i.e. expansion is faster than collapse) have a minority of my subjective probability mass, they carry a majority of my subjective expected value. So the stars dying earlier, or simply our failing to reach as many other stars before expansion prevents us, can have a large negative effect in expectation. In many possible histories, these effects will be totally irrelevant (in which case after the fact it will turn out that for our universe’s history V1 was small and V2+V3 close to zero), but in the ones we may care most about, V1 could be large.
[An aside: of course I don’t think that this means that pushing for progress is automatically good. The benefit of acceleration could easily be outweighed by a chance of pushing us onto a path with lower odds of realising a significant chunk of the total available value.]
I agree that I need to talk about subjective distributions over outcomes, so that it is very easy for the events of the next 100 years to be great ex post (so long as they are balanced by some unrealized probability of events going badly).
I also agree that scenarios where humans stick around for a long time have a lot of the value. I don’t agree that natural processes have an unusually large negative impact in such worlds, though I’m very open to the possibility that I’m wrong on this one. (I was planning to write a future post about this after thinking a bit more about it.)
I don’t know how fast stars become inaccessible. They burn down very slowly (the characteristic timescale is around a billion years, so each year of delay destroys ~1 part in a billion of the value). I think they become inaccessible on a similar timescale, and that other sources of entropy that are burning down burn down more slowly.
Uncertainty about different types of decay does tend to drive up the cost of natural processes—if there are 10 types of resources, and most of them decay at 1 part / billion / year, but one of them decays at 1% / year, then we lose 1 part / thousand / year of the total resources. But even in light of that, it doesn’t seem like these effects are large relative to the effects of differential progress.
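The arithmetic in the paragraph above can be checked directly (a small sketch; the ten-resource portfolio and the decay rates are the hypothetical numbers from the text, not real estimates):

```python
# Hypothetical setup from the text: 10 equally weighted resources,
# nine decaying at 1 part per billion per year, one at 1% per year.
decay_rates = [1e-9] * 9 + [0.01]

# Fraction of the total resource pool lost per year, assuming equal weights.
total_loss_per_year = sum(decay_rates) / len(decay_rates)

print(total_loss_per_year)  # ~0.001, i.e. about 1 part per thousand per year
```

The fast-decaying resource dominates: averaging one 1%/year rate across ten resources gives roughly 1 part per thousand per year of the total, as claimed.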
I think in either case, the largest negative impact of natural processes is decay and disaster here on Earth that reduces our chances of sticking it out.
Paul, when you use the word value, do you mean “marginal value relative to business as usual”? (For instance in the sentence “in total they have small or negative value”).
Yes, when I talk about the value of X, I mean the value of X in the current world. But I think the argument I gave would go through in other times or counterfactuals as well, and applies to marginal changes or larger changes, as long as the different setting is applied consistently.
Hm for some reason the comment here is different from the one in my e-mail feed…
Anyways, “in the current world” wasn’t the thing in question, it was the “marginal” and “relative to business as usual” parts. Assuming I’m correct that this is the intended meaning, you may want to clarify, as I was only able to figure that out after querying my model of you, which most readers won’t have =P.
(I apparently can’t reply more than 3 deep).
I think the analysis applies to big changes, not just marginal changes.
I don’t know what “relative to business as usual” means–what is an alternative interpretation?
I’ll change the wording to emphasize that I mean “more progress” as a change / consequence of our actions, rather than trying to assign value to stuff that is happening anyway.
(The comment probably differs from the one you got in email because I often edit comments right afterwards for tone.)
It seems to me that the majority of conscious experiences today are either close to neutral or outright negative. Some people enjoy psychological health, but when you look at conscious experiences, and not merely people, you see that life in factory farms and in the wild is packed with disembowelment, sickness, cold, etc.
You do not need to be a negative utilitarian to recognize that the world produces massive amounts of suffering every second. Is this compensated, somehow, by the pleasure of the sentient beings who happen to be in a state of happiness or bliss? Even under classical utilitarianism, one would be hard pressed to make such a case.
As such, the net value of every second today, when considered for the aggregate of conscious experiences, aside from the causal effects it has on the future, is very probably on the negative side.
But the future need not be like this. Technology will allow us to recalibrate the hedonic treadmill and never need to suffer again. With biotech and ecosystem design, we can also eradicate wildlife suffering, and by passing progressive laws and modifying human nature we can make sure that no animal is ever harmed for the mild and transient pleasure of another animal.
So I would say that I would pay a lot to get things done. The world of tomorrow will be worth more, but only because the net value of today (without considering tomorrow) is negative.
[…] stuff, conduct more charitable activities, etc. I think that speeding everything up would have little effect, so I’m going to focus on the gap between what gets sped up and what doesn’t. I see two […]