This will be a relatively short post, sketching my overall view of valuable altruistic endeavors.
I think there is a good chance that civilization will last a very long time and grow to a very large scale. For the most part, the lives of future people are as valuable as the lives of present people, and any non-negligible impact we can have on their welfare is very important. (I think there are also strong reasons to care about the welfare of existing people, but I am most interested in the welfare of future people, in part because they seem most badly neglected.)
I suspect that contemporary choices will have a significant impact on the character of this future civilization, and therefore that this impact is a primary consideration for contemporary decisions. In descending order of plausibility, I think the most important impacts are:
- Increasing the relative influence of human values over the future. I think there is a realistic prospect that automation and competition will push social values towards maximizing that which is easily quantified rather than maximizing anything we care about.
- Increasing the probability of some civilization surviving and prospering in the long-term. I think there are plausible (but not particularly likely) collapse or disaster scenarios on which civilization itself does not survive for a very long time.
- Changing the distribution of human values. Shifting influence to people who share values I like, or changing the values of those with influence. I tend to find optimizing for this unattractive based on decision-theoretic considerations (i.e., “be a nice person”) coupled with a general preference for non-idiosyncratic human values—if another person disagrees with me about what is good after they’ve thought about it at sufficient length, I’m inclined to accept their view.
- Increasing the quality of the far future. In principle there may be some way to have a lasting impact by making society better off for the indefinite future. I tend to think this is very unlikely; it would be surprising if a social change (other than a values change or extinction) had an impact lasting for a significant fraction of civilization’s lifespan, and indeed I haven’t seen any plausible examples of such a change.
Changes in categories 1-3 can plausibly have very long-lasting impacts because social values might become increasingly entrenched over time. If I value something, I will tend to use my influence to increase the probability that people in the future will also value that thing. The fidelity of this process seems likely to increase over time, so that strong values today might exert an influence over the entire future of civilization. [People often find this scenario upsetting because it seems parochial to “lock in” values in this way, but I think that such objections tend to rely on a narrow construction of “value.” A value for diversity and freedom of thought is still a value, and those who hold it can work to be sure that future people will both be free and work to preserve that freedom.]
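To make the entrenchment point slightly more precise, here is a toy model (purely illustrative; the setup and notation are mine, not anything standard): suppose a value held today survives each successive generation t independently with fidelity f_t, and that f_t approaches 1. Then the value persists through the entire future with probability

$$\Pr[\text{persistence}] \;=\; \prod_{t=1}^{\infty} f_t \;>\; 0 \quad\Longleftrightarrow\quad \sum_{t=1}^{\infty} (1 - f_t) \;<\; \infty,$$

so if transmission fidelity improves quickly enough (for example, if $1 - f_t$ shrinks geometrically), a value held today has a non-trivial chance of influencing every future generation rather than washing out.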
I suspect that the best interventions are in categories 1 & 2. However, decreasing the risk of disaster may not mean mitigating predictable disasters. I think it is quite likely that the most efficient way to reduce the risk of disaster is to change society in ways that increase our collective ability to reliably get what we collectively want (via various forms of human empowerment, growth, institutional innovation, etc.) This is particularly likely insofar as there are few plausible risks today.
I think the most promising interventions at the moment are:
- Increase the profile of effective strategies for decision-making, particularly with respect to policy-making and philanthropy. At the moment the most important qualities seem to be a basic quantitative mindset, a high degree of reflectiveness and metacognition, and a commitment to responding to compelling arguments. These qualities could be promoted by directly taking a quantitative, reflective attitude towards aid or other altruistic projects to set an example, or by promoting and trying to form a community around the ideals of “effective altruism” or “rationality” (or what have you).
- Better understand the long-run impacts of contemporary decisions in general, for example the long-run impacts of contemporary economic progress, poverty alleviation, environmental preservation, etc. I believe there are a large number of concrete questions in this space, and that it is relatively easy to come up with answers that outperform the naive intuition “good begets good.” The primary positive impacts of such work would be concrete recommendations for funders and policy-makers, backed by compelling arguments. Hopefully it would also encourage further intellectual work in this direction by serious thinkers.
- Better understand the impacts of automation of knowledge work, which looks likely to be disruptive (though potentially many decades in the future). This seems to require both understanding the social implications of the plausible technological developments, and understanding the most important technical aspects of these technologies (as an input to better understanding their impact, and as an input into making technical contributions which maximize the probability of a positive impact).
- Work directly on existing projects for human empowerment, particularly on human enhancement and collective decision-making.
- Work directly on existing projects for increasing the stability of society, particularly on efforts to increase social robustness to catastrophes and on efforts to promote peace and international stability.
These strategies generally fit into a hierarchy of justification. There are object-level plans which I suspect would significantly improve long-run welfare and which are not currently being optimally pursued (even given that long-run aggregative welfare is only a small piece of most decision-makers’ values). This justifies meta-level work to improve decision-making on these issues, by addressing particular shortcomings of existing decision-making. This in turn justifies outreach, recruitment, funding, and training, to build up an adequate infrastructure of thinkers and influencers who understand these potential shortcomings and have enough concern for long-run aggregative welfare that they are willing to invest effort in resolving them (though such meta-meta-level work is probably dependent on doing manifestly productive work at the meta or object level).
In general there is complementarity between influence (money, political capital, skilled labor) which is being directed by good arguments (in service of some goal), and intellectual capital which is being spent on producing correct arguments (about how to achieve that goal). I suspect that we currently have a deficit of influence being directed by “pretty good” arguments, and a deficit of intellectual capital being spent on assembling “very good” arguments. I think one problem here is that many of the thinkers who understand the pretty good arguments conclude that funders are not moved by arguments, often in part due to overestimating the strength of their own arguments. That is, there are plausible arguments for many causes—suggesting that those causes are much more efficient than standard giving or funding, even funding by very rational and quantitatively minded philanthropists—but there are few extremely strong arguments. This situation could be resolved either by finding funding which is willing to follow more speculative arguments, or by finding stronger arguments.
As you seem to be highly concerned with the welfare of people in the far future who do not yet exist, I was wondering what you thought about comparing outcomes in which different numbers of people are in existence. For example, are situations in which there are only a few very fulfilled people preferable to situations with many marginally happy people?
I don’t have confident enough views on population ethics to directly inform my decisions. Instead, I tend to reason about the value of the future by thinking about the total resources accessible to people with altruistic values broadly like my own. (See https://rationalaltruist.com/2013/02/27/why-will-they-be-happy/ etc.)
The main important takeaway from my ethics is that the future is big, and massively more important than the present.
My tentative population ethics is average utilitarianism over “spots where people could be,” which I mean in the sense of “descriptions that could plausibly pick out people” rather than physical locations. So I decide between “someone existing” and “someone not existing” using my own intuitions about how I value existing vs. not existing.
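A rough formalization of this (my own notation, offered only to pin down the idea): let S be the set of spots, let w(s) be the welfare of the person filling spot s, and let $u_{\emptyset}$ be my intuitive valuation of a spot going unfilled. Then I am approximately ranking outcomes by

$$V \;=\; \frac{1}{|S|} \sum_{s \in S} u(s), \qquad u(s) \;=\; \begin{cases} w(s) & \text{if } s \text{ is occupied}, \\ u_{\emptyset} & \text{if } s \text{ is empty}. \end{cases}$$

On this accounting, adding a person is good exactly when their welfare would exceed $u_{\emptyset}$, which is how the “existing vs. not existing” comparison gets settled by my own intuitions.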
“Social values become entrenched in time […] [S]trong values today might exert an influence over the entire future of civilization.” Really? It seems like your image is of society slowly settling on various values and keeping them for long periods of time. But alternatively, one could see history in terms of waves of intellectual thought that sweep over and replace each other. Or one could see socioeconomic forces eventually forcing certain views to be accepted over others because they make people more successful. You might come up with something that helps people become more successful faster, but they would have gotten there eventually anyway.
When reasoning about the far future, it’s hard for me to imagine things that you or I could do that aren’t completely wiped out by future generations, whether because a different wave of ideas replaced our ideas, or because what we wanted people to believe wasn’t economically advantageous to them.
I already see this at Caltech, where I graduated from in 2012. The culture of the things I was a part of (my house, my church, the frisbee team, the Christian fellowship) seems to still bear some similarities to the ideals I tried to introduce/advance, but they’re starting to fade, and fade quickly. Of course, I ideally wanted to institute change that would be permanent, but it’s hard to do that with such high turnover. I imagine that the world is like college in that way, except maybe 10-20 times slower since people graduate that much faster than they die. Correspondingly, it’s hard to imagine having a large impact on values of people living 100 years later.
Adding in our tendency to be overly optimistic about the future, especially regarding our own actions, it seems better to me to aim for more time-local objectives.
I agree that social attitudes, popular ideas, etc. drift quite rapidly, and I’m not very sympathetic to plans like making more people vegetarian today so that the whole future will care more about animal suffering. This was category 3 above.
In terms of having a long-run impact at all, the main question seems to be whether you think civilization has much of a chance of prospering for the very long term. If you think it doesn’t, then of course nothing can have a very long-term impact. If you think it does, then actions today can have an effect by changing the probability that civilization survives and prospers (for example by slightly reducing the probability of social collapse in the next 100 years; this was category 2 above). I would still call this things getting “entrenched,” because on its current trajectory civilization seems very unlikely to survive indefinitely without somehow doing itself in; in order to last a billion years, it would have to dramatically increase its stability. I think this is plausible.
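The arithmetic behind this is simple expected value (the numbers here are placeholders, not estimates I endorse): if the future conditional on survival is worth V, then an action that raises the probability of survival by $\Delta p$ is worth roughly

$$\Delta \mathbb{E}[\text{value}] \;\approx\; \Delta p \cdot V.$$

Because V plausibly dwarfs everything at stake in the present, even a tiny $\Delta p$, say shaving $10^{-6}$ off the probability of collapse this century, can outweigh time-local alternatives.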
In terms of values getting entrenched, I think that I am imagining much larger time scales than you, and also talking about a much weaker notion of “values.” For example, I would say that human preferences and values are basically fixed for the moment, and they get modified only within a certain range by social norms and expectations and so on.
If we consider the very long term, it won’t be long before society reaches a state where each generation can exert very detailed control over the properties of the next generation: so detailed that they can ensure the next generation will “pay it forward,” ad infinitum. For example, you could imagine a world where people with property X supported genetic engineering to increase the prevalence of property X, leading to a lock-in of property X. Of course, we might not *want* this sort of thing to happen. But not wanting to lock in values is itself a value, and if you don’t take any precautions to ensure that your descendants are also opposed to that sort of thing, one day one of them will do it.
To speak to the analogy with undergrad activities: as an undergrad you try to build organizations and cultural institutions that will persist, but they are implemented by future generations of undergraduates, who gradually pull those institutions back to their own preferences. If you were trying to change the admissions policy at Caltech you would have more luck, as you would be able to recruit people who shared your views (including your views about how the admissions policy should work). This would still fade with time, though, because you would gradually get pulled back towards the median for society. But if you were able to design the next class of Caltech students from scratch, things would be quite different. (Though again, many people would likely object to that.)
Could you comment further on why you don’t believe it’s important to focus on getting future people to care about animal suffering? In particular, could you distinguish between
1. Animal suffering isn’t important
2. It’s not feasible to get future people to care about animal suffering
3. This could be pretty valuable, but other things are more important
or something else, if none of these accurately describe your position?
It seems to me like “getting distant future people to care about animal suffering” has a lot going against it. It’s not that any one of these things is the reason for my belief; it’s that “what to do” reflects the balance of many considerations. Even removing one of these considerations does not seem like it would tip the balance.
1. The connection between present-day actions and future values is more tenuous than the connection with survival. Counterfactual differences between possible cultures seem to mostly dissipate with time. I find most contrary examples offered by animal advocates to be unconvincing. My guess is that in the long run attitudes towards animals won’t be massively path-sensitive. One way to think about this is that there are many generations over the course of history, and they can’t all have a very big impact. We have strong reasons to think that the present generation’s decisions are extremely important from the perspective of survival, while we have only modest reason to think they are important for establishing long-term values.
2. I don’t see a strong case for future people’s caring-about-animals having a huge effect on the goodness of the world. Today it is possible for a tiny share of economic activity to correspond to a large share of conscious experience, and this makes it easy for relatively modest preferences to have big impacts. But that doesn’t seem likely to continue. The main case seems to be via concern for “suffering subroutines” and this is very speculative.
3. What is the actual value proposition here? Is the claim that we can share considerations that will cause others to better understand what they want, or is it that you can systematically change what people want by intervention? In the former case I am even more inclined to expect that the outcome won’t be path-dependent (though activism might accelerate a change). In the latter case, I think there are relatively strong reasons to try to collaborate across different-things-people-want rather than engaging in zero-sum conflict.
Overall, it seems clear to me that if animal activism is the right thing to do, it is because peoples’ attitudes towards animals will be important for object level issues (like how to make food) over the coming centuries. But these object level issues would probably be about the suffering of contemporary animals, and I tend to be more interested in future creatures.