One way to influence the future is to improve human decision-making—to make people smarter, encourage metacognition, improve institutional decision-making, etc. Any of these changes will probably have an impact on how future folk manage the problems they face, and on the sorts of infrastructure and capabilities they in turn build for the farther future. Even if we don’t know what those problems will be, or what exactly we would want smarter or better-educated people to do, it seems like a safe bet that there will be opportunities for them to apply their increased capabilities.
But at the same time, I think most of the problems humans face are caused by humans. So if you make humans better at doing whatever they do, you speed up the creation of problems as well as their resolution.
Nevertheless, I tend to suspect that increasing human capabilities is a positive change on balance. I’m not sure about this, or about the magnitude of the impact; since it looks like capability improvements might be leading contenders for altruistic interventions, it seems like an important question. Depending on the answer, I may decide to work directly on the biggest problems I can see, or instead to help prepare future folk to do the same.
For concreteness, I’d like to specify some examples of capability-building interventions before getting into a more abstract discussion.
- Cognitive enhancement. Experimentation with training programs, with genetic engineering to increase intelligence, and with drugs that might make people smarter. These interventions look to me like they would be very good investments if your goal were doing more of what society is already doing.
- Institutional decision-making. Experimentation with different organizational structures and institutions (for example, decision markets), and additional experience (distributed across society) with organizing, assembling, and training large groups of people. Technology for making decisions seems, on casual inspection, to be improving, and it looks like there is still a lot of room to improve (and many remaining ideas that haven't yet been tested).
Should I root for other people to succeed?
An important factor is the question: what are people trying to accomplish, anyway? If you make people better at getting what they want, is that good or bad? Settling this issue alone wouldn’t tell you whether making people smarter is better. But intuitively, it is a useful lemma (and it will play a role in the next section): if people are basically working to make the world better, the world may be better off if people become more capable. If people are basically engaged in counterproductive competition, working for their own profit at the expense of others, then it might even be worse off if people are better informed and more capable.
The punchline: I think that people’s plans would mostly make the world better off, to the extent they are successfully implemented.
In a world where everyone was a perfectly cutthroat competitor, working for their own benefit without concern for others, I think that people’s goals would mostly be orthogonal to social impact. Sometimes they would kill each other and engage in destructive competition, sometimes they would trade with each other and engage in useful enterprises, but for the most part they wouldn’t do things that were either very good or very bad for the world.
In the real world there are a few factors that cause people’s plans to be more aligned with my values (than they would be in the cutthroat world):
- On net, people care about other people. My values are not typical, but there are nevertheless lots of people who actively work towards a better world, at least with some of their energy. I’m a person, and my values are shaped by the same forces that shape others’ values. Moreover, it is a very small minority of people who invest much energy in making the world actively worse.
- Society creates incentives to "play nice." My values are produced not just by thinking alone, but by a social process whereby some acts (and arguments) are condoned and others sanctioned. In part because of (1), and in part because of how self-interest plays out in a world made of humans, society is organized to reward pro-social behavior. Whether contemporary actors who appear to be altruistic really care about the world or just care about looking like nice people, they still do nice things. To the extent that the quality of public discourse is rising and intuitive perceptions of virtue are coming more into line with enlightened preferences about behavior, this coupling (between what looks nice and what is nice) becomes stronger.
- Because my values resemble the aggregated preferences of the existing humans, their self-interest is itself aligned (somewhat) with my values. There are big divergences, particularly with respect to future people and population ethics. But in the modern world, when current people coordinate to make their lives better, they tend to make the world better in a way that has a lasting positive impact (similarly, it would be hard for a random set of 1 billion people to coordinate to make each others’ lives better without making everyone’s lives better).
I think that ongoing natural processes (disaster, decay, resource depletion) are negative, but not nearly as large as the problems resulting from human activity. So I consider accelerating human activity by itself to be only very slightly positive (all of the links in this post are links to the same thing, just so you know), even if I am rooting for people to get what they want. Acceleration is the first-order effect of making people more capable in any way: whatever people are doing already, they will do it better if they are smarter or wiser. So for me the question is: what are the second-order deviations? How large are they, and are they positive?
Here are the largest second-order corrections that I can see:
1. People make decisions about what to do, not just how to do it.
To first order, making better decisions about how to run a hot dog stand will just cause me to run a hot dog stand better—to make more hot dogs, etc. But at a high level, I also made a decision about whether to run a hot dog stand at all (and as part of running a hot dog stand, I make other choices that control what impact I have and not just how much impact I have). To the extent that people are working towards good things, if they make better choices about which projects to pursue, we will get more of what they value.
Basically, if we split human activity into "things that happen according to plan" and "things that happen not according to plan," then if we think that humans' plans have good goals, boosting "things that happen according to plan" is a good thing. Similar arguments appear to apply with comparable force to regulators.
2. Some tasks depend more on good decision-making than others.
Some problems, like running a large multi-national organization, rely more heavily on good decision-making than others, like singing. So if you boost decision-making, this causes acceleration which isn’t equally spread across tasks. This differential progress will have a significant effect on outcomes—if the decision-making loaded problems are unusually good for the world the effect will be good, and if they are unusually destructive then the effect will be bad.
I tend to suspect that the problems that depend most on good decision-making are unusually important, including things like coordination, planning for the future and mitigating risks, etc. Of course, designing increasingly complex and potentially fragile systems also depends on improved decision-making (and there are many other potential harms). I think on balance, this effect is uncertain and not large.
This effect may be different for different kinds of improved decision-making. For example, an intervention which improves human intelligence may improve decision-making, but may also have a large impact on intelligence-loaded activities like tech progress. Nevertheless, it looks like the sign of the effect is highly uncertain, and I don’t see a systematic reason to suspect it to go one way or the other.
3. Some people will benefit from improvements more than others.
This effect is also quite nebulous and uncertain. It is hard to know what groups will benefit most from improvements, and it is very hard to know which groups’ activities have a positive influence and which have a negative influence. I would guess in general that benefits from enhancement will accrue most to the young, but I don’t have strong feelings about whether this is a positive or negative change for the world.
Combining these two sources of uncertainty, this effect seems very uncertain. I also don’t think it is a huge effect, and so I’m inclined to focus on the other deviations.
An important special case is that institutional decision-making will tend to improve the performance of large organizations more than the performance of individuals. So to the extent that large organizations are engaged in destructive activities while individuals are doing good, this would be a harm of improving organizational decision-making. I am personally very skeptical of this narrative, and don’t see good reasons to support it.
Conclusions and magnitude
I think that the second-order correction #1 above is the most important effect of improving decision-making, and because I think that people’s goals are mostly aligned with my own values I suspect this impact is positive.
It’s hard to reason quantitatively about the magnitude of the effect. One way to try to get a handle on it is to talk about proportional changes in the value of the world vs. proportional changes in people’s ability to do things. So we can ask: how much better off is the world if people’s decision-making is improved to such an extent that they get 1% better at doing tasks? Of course this picture is going to be unrealistically simple, but I think it might give a basic picture of what’s going on.
I suspect that the impact of improving decision-making by 1% is substantial, and the biggest uncertainty is about the sign. This is a topic for another blog post, but the way I think about this involves a (very uncertain) parameter controlling the impact of human behavior today on the margin, call it “value up for grabs.” I expect improving decision-making by 1% to contribute roughly 1% of the “value up for grabs,” given that the effect is positive, and I think the value up for grabs is a significant fraction of the whole value of the world. There is then some further discounting based on uncertainty. (Are there big factors I am missing?)
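This back-of-the-envelope reasoning can be sketched numerically. Everything below is a toy model with illustrative placeholder numbers, not estimates from the post; the function name and parameters are my own labels for the quantities discussed above:

```python
# Toy model of the "value up for grabs" estimate above.
# All parameter values are illustrative placeholders, not claims from the post.

def expected_value_gain(improvement, value_up_for_grabs, p_positive):
    """Expected proportional change in the world's value from a
    proportional improvement in decision-making.

    improvement: fractional boost to decision-making (e.g. 0.01 for 1%)
    value_up_for_grabs: fraction of the world's value that marginal
        human behavior can affect (very uncertain)
    p_positive: credence that the net effect is positive rather than negative
    """
    # The post's rough rule: an X% improvement captures ~X% of the value
    # up for grabs, discounted by uncertainty about the sign of the effect.
    sign_discount = p_positive - (1 - p_positive)  # expected sign, in [-1, 1]
    return improvement * value_up_for_grabs * sign_discount

# A 1% improvement, with (say) a third of the world's value up for grabs
# and 75% confidence that the effect is positive:
gain = expected_value_gain(0.01, 1 / 3, 0.75)
print(f"{gain:.5f}")  # ~0.00167, i.e. ~0.17% of the world's value

# Benefits scale down roughly linearly, so one part in a million is tiny:
tiny = expected_value_gain(1e-6, 1 / 3, 0.75)
print(f"{tiny:.2e}")
```

The main point the sketch makes concrete is that the uncertainty about the sign acts as a multiplicative discount: at 50% credence the expected gain is zero, and it grows linearly as confidence in a positive effect rises.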
Of course, improving humans’ ability to do things by 1% would be a colossal change; but I expect the benefits to scale down roughly linearly, and improving humans’ abilities by one part in a million is not so unlikely. I’ll eventually write in more detail about approaches to improving human capabilities, and provide some reasons for skepticism about other interventions. But I wanted to have this post written down, since I often find myself discussing this topic.
Most people are working towards futures I don’t want to live in (and most of them don’t want to live in either). A few people are working towards futures I do want to live in. Accelerating everyone accelerates both groups. But I think accelerating everyone is on net a negative since the first group outnumbers the second by such a large margin.
But that argument would apply to literally accelerating everything in the universe, which would clearly have no effect other than changing the speed with which things happened. (Unless you are saying that the future is just likely to be bad, so that delaying it is a good thing.) Is there some reason that you think accelerating everyone would shift outcomes, contra this basic picture?
(Personally, I think you also give others too little credit. I think they are working towards good outcomes, and sometimes making errors—though less often than you imply.)
I don’t entirely follow your post (my own fault I presume).
I always assume that, having thought about my goals deeply, if people were to increase their decision-making ability, their goals would come to align with mine.
If you take resolutions as things that push the world closer to your goals, and problems as things that push it farther from them (as I’m taking it), then I would expect the net change to be clearly positive.
[…] leaves us with the question: how good is it for people to get more of what they want? (This post is related.) For concreteness, I now want to think about an across-the-board improvement, where […]
[…] pick I would probably prefer replace someone very effective than someone very altruistic. (Note relevant assumptions though; if you thought most people were actively making the world worse, you would […]
Which institutional decision making fora are improving? Any thoughts on why you think they are? Thanks!