How useful is “progress”?
Most of the things that are happening in the world seem valuable to me: we understand science and engineering better, we acquire more expertise, the productive workforce grows, we invest in infrastructure and capital faster than it degrades, and so on. If I make the world of today richer or more technologically sophisticated, it seems like those gains will persist and compound for quite a while. On the other hand, people who work at cross-purposes to progress seem to get little traction. So naturally, when I consider trying to make the world better, I'm inclined to try to accelerate progress. Unfortunately, I think our intuitions overstate the value of speeding up progress (of all kinds); in the aggregate, I don't much care whether human progress goes faster or slower.
The basic issue is that accelerating progress doesn’t change where we are going, it only changes how quickly we get there. So unless you are in a rush, speeding things up doesn’t make the world much better. Of course, there are some cases where speeding up society does change things—for example when society is racing against destructive natural processes—but I suspect those effects are small.
The world of today vs. the world of tomorrow
Consider the comparison between the world of tomorrow (or rather, a distribution over possible worlds of tomorrow) and the world of today. At face value it looks like the world of tomorrow is much better off—as I said above, civilization seems richer, more knowledgeable, more populous, etc. I basically buy arguments about the usefulness of technology, the value of a larger workforce, and the usefulness of the things we build.
But let’s look more closely at the “world of today.” I don’t care about what the world looks like per se—I care about how the universe evolves, starting from that world. And if we let the universe evolve, starting from the world of today, what we get is one day’s worth of morally valuable events + whatever we would have gotten, had we started from the world of tomorrow. (After we wait one day, the world of today turns into the world of tomorrow.) Of course I’m normally thinking about how I can change the world of tomorrow, but here I am trying to figure out the value of the developments around the world as I expect them to unfold, in order to reason about how useful it would be to make them unfold faster without trying to change their direction.
So we can say informally:
The value of the world of today = the value of the world of tomorrow + the value the events of today have for their own sake.
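One way to make this bookkeeping explicit (the symbols W_t and u_t are my own notation, not anything introduced above):

```latex
% W_t: the world on day t;  u_t: the intrinsic value of day t's events.
V(W_t) = u_t + V(W_{t+1})
% Unrolling the recursion from today (t = 0):
V(W_0) = u_0 + u_1 + u_2 + \dotsb
% Pure acceleration reschedules the u_t without changing the terms, so for
% a time-insensitive evaluator V(W_0) is unchanged unless the trajectory
% (the sequence of events itself) changes.
```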
My view, and I think the current median view, is that the events of today have positive value for their own sake. Thus the value of the world of today is in fact higher than the value of the world of tomorrow: on net, the changes that take place between the world of today and the world of tomorrow have negative value.
Personally, I have aggregative, time-insensitive values. I expect the future to be big, and I care about big things more. By comparison the world of today looks pretty small. So I don't think that the changes occurring each day have much negative value on net. I would be happy to accelerate change, and forgo some morally relevant experiences today, if doing so had even a very small positive impact on the quality of the future. For me, the changes that occur each day are approximately morally neutral on net.
Between the world of today and the world of tomorrow, very many things change. All that I’ve said is that in total they have small or negative value. Here is one way of slicing things up:
- Natural processes, not directed by humans. We endure some constant risk of catastrophe, the stars burn down and the universe beyond Earth runs its course, and so on.
- Things humans explicitly do to make the world better. Sometimes humans act with the intention of improving the future, because they anticipate their actions will lead to a safer, happier, or more sustainable world.
- Things humans do for other reasons, particularly to make themselves happier. I think that most things people do, and I would normally say most valuable things that humans do, are done to sustain themselves, to amuse themselves, to enjoy themselves, etc. It turns out trade works pretty well.
Each of these changes has some value to me; call these values V1, V2, V3. Let's suppose we are in a regime (say, a single day) where we can ignore non-linear effects. What I'm claiming is that V1 + V2 + V3 ~ 0 (and similarly for any other way of breaking down the events of a day into buckets). One bucket can only be good if the other buckets together are correspondingly bad.
Personally, I think that V1 is pretty small. The risk of catastrophe today, with our current technological capabilities, seems modest. The stars burn themselves down, but they do so oh-so-slowly on human timescales (though I have a lot of uncertainty here, which tends to amplify the benefits of haste). I think that the other categories are likely to have much larger impacts than V1, even if I don't know their sign. (But this is a tough and somewhat contentious empirical claim, which I'll try to get some leverage on in a future post.)
So that leaves V2 ~ -V3. That is, I believe that one of two things is true:

1. People are so badly mistaken (or their values so misaligned with mine) that they systematically do harm when they intend to do good, or
2. Other (particularly self-interested) activities are harmful on average.

I believe that (2) is a priori more likely than (1), and so, faced with the fact that one or the other must be true, I expect that it is (2).
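Spelled out, the algebra behind this dichotomy (using the bucket values defined above) is just:

```latex
V_1 + V_2 + V_3 \approx 0
  \quad\text{(the changes of a day are roughly neutral on net)}
V_1 \approx 0
  \quad\text{(natural processes matter little on a one-day timescale)}
\;\Rightarrow\; V_2 \approx -V_3
```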
Saying that a difference is “small” is not itself meaningful. If I am a small part of the world, any change in my behavior will have a small effect, but I should still care about those effects. What matters are the relative sizes of the different impacts of my decisions, so if I say that “progress” has a small impact, this is only meaningful if I provide alternative mechanisms by which our actions have a larger impact. I think the most important intervention is differential development.
Informally, any change to the world can be divided into two components: a part which is “parallel” to the direction the world is heading, and a part which is “perpendicular” (and if the change is small, we can model the value of the change as approximately the sum of the values of those two components). For example, if I were to speed up progress in physics, I could say that this has two effects: one is to push the world farther on the trajectory it was going anyway, and the other is to shift which activities humans do (though the latter effect may be much reduced by replaceability). What I am suggesting is that it’s the “perpendicular” component which is most meaningful. Rather than viewing technological progress as a general contribution to the human project, if we are concerned with its long-term value we should instead consider it as a zero-sum shuffling of energy from one project to another.
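The "parallel/perpendicular" picture above is just vector projection, and can be sketched numerically. A minimal illustration in Python (the two-dimensional vectors and their numbers are made up for the example; real-world change has vastly many dimensions):

```python
import math

# Hypothetical "change to the world" vectors (illustrative numbers only).
trajectory = (3.0, 1.0)    # the direction the world is heading anyway
intervention = (1.0, 2.0)  # the effect of some small intervention

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Unit vector along the world's existing trajectory.
norm = math.sqrt(dot(trajectory, trajectory))
unit = tuple(x / norm for x in trajectory)

# Component of the intervention that merely pushes the world farther
# along its existing trajectory...
scale = dot(intervention, unit)
parallel = tuple(scale * u for u in unit)
# ...and the component that shifts which direction the world heads.
perpendicular = tuple(i - p for i, p in zip(intervention, parallel))

print(tuple(round(x, 6) for x in parallel))       # (1.5, 0.5)
print(tuple(round(x, 6) for x in perpendicular))  # (-0.5, 1.5)
```

The two components sum back to the original intervention, and the perpendicular part is orthogonal to the trajectory; on the view above, it is this second component that carries most of the long-term value.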
It seems to me that different projects do have significantly different effects on the entire future, for example by changing the probability of humanity’s survival or by changing the character of human civilization. If so, shifting energy from one of these projects to another, e.g. by working on projects with positive effects, could have a large impact.