This will be a relatively short post, sketching my overall view of valuable altruistic endeavors.
I think there is a good chance that civilization will last a very long time and grow to a very large scale. For the most part, the lives of future people are as valuable as the lives of present people, and any non-negligible impact we can have on their welfare is very important. (I think there are also strong reasons to care about the welfare of existing people, but I am most interested in the welfare of future people, in part because they seem most badly neglected.)
I suspect that contemporary choices will have a significant impact on the character of this future civilization, and therefore that this impact is a primary consideration for contemporary decisions. In descending order of plausibility, I think the most important impacts are:
1. Increasing the relative influence of human values over the future. I think there is a realistic prospect that automation and competition will push social values towards maximizing that which is easily quantified rather than maximizing anything we care about.
2. Increasing the probability of some civilization surviving and prospering in the long-term. I think there are plausible (but not particularly likely) collapse or disaster scenarios on which civilization itself does not survive for a very long time.
3. Changing the distribution of human values. Shifting influence to people who share values I like, or changing the values of those with influence. I tend to find optimizing for this unattractive based on decision-theoretic considerations (i.e., “be a nice person”) coupled with a general preference for non-idiosyncratic human values—if another person disagrees with me about what is good after they’ve thought about it at sufficient length, I’m inclined to accept their view.
4. Increasing the quality of the far future. In principle there may be some way to have a lasting impact by making society better off for the indefinite future. I tend to think this is unlikely; it would be surprising if a social change (other than a values change or extinction) had an impact lasting for a significant fraction of civilization’s lifespan, and indeed I haven’t seen any plausible examples of such a change.
Changes in categories 1-3 can plausibly have very long-lasting impacts because social values might become increasingly entrenched over time. If I value something, I will tend to use my influence to increase the probability that people in the future will also value that thing. The fidelity of this process seems likely to increase over time, so that strong values today might exert an influence over the entire future of civilization. [People often find this scenario upsetting because it seems parochial to “lock in” values in this way, but I think that such objections tend to rely on a narrow construction of “value.” A value for diversity and freedom of thought is still a value, and those who hold it can work to ensure that future people will both be free and work to preserve that freedom.]
I suspect that the best interventions are in categories 1 & 2. However, decreasing the risk of disaster may not mean mitigating predictable disasters. I think it is quite likely that the most efficient way to reduce the risk of disaster is to change society in ways that increase our collective ability to reliably get what we collectively want (via various forms of human empowerment, growth, institutional innovation, etc.). This is particularly likely insofar as there are few plausible risks today.
I think the most promising interventions at the moment are:
- Increase the profile of effective strategies for decision-making, particularly with respect to policy-making and philanthropy. At the moment the most important qualities seem to be a basic quantitative mindset, a high degree of reflectiveness and metacognition, and a commitment to responding to compelling arguments. These qualities could be promoted by directly taking a quantitative, reflective attitude towards aid or other altruistic projects to set an example, or by promoting and trying to form a community around the ideals of “effective altruism” or “rationality” (or what have you).
- Better understand the long-run impacts of contemporary decisions in general, for example the long-run impacts of contemporary economic progress, poverty alleviation, environmental preservation, etc. I believe there are a large number of concrete questions in this space, and that it is relatively easy to come up with answers that outperform the naive intuition “good begets good.” The primary positive impacts of such work would be concrete recommendations for funders and policy-makers, backed by compelling arguments. Hopefully it would also encourage further intellectual work in this direction by serious thinkers.
- Better understand the impacts of automation of knowledge work, which looks likely to be disruptive (though potentially many decades in the future). This seems to require both understanding the social implications of the plausible technological developments, and understanding the most important technical aspects of these technologies (as an input to better understanding their impact, and as an input into making technical contributions which maximize the probability of a positive impact).
- Work directly on existing projects for human empowerment, particularly on human enhancement and collective decision-making.
- Work directly on existing projects for increasing the stability of society, particularly on efforts to increase social robustness to catastrophes and on efforts to promote peace and international stability.
These strategies generally fit into a hierarchy of justification. There are object-level plans which I suspect would significantly improve long-run welfare and which are not currently being optimally pursued (even given that long-run aggregative welfare is only a small piece of most decision-makers’ values). This justifies meta-level work to improve decision-making on these issues, by addressing particular shortcomings of existing decision-making. This in turn justifies outreach, recruitment, funding, and training, to build up an adequate infrastructure of thinkers and influencers who understand these potential shortcomings and have enough concern for long-run aggregative welfare that they are willing to invest effort in resolving them (though such meta-meta-level work is probably dependent on doing manifestly productive work at the meta or object level).
In general there is complementarity between influence (money, political capital, skilled labor) which is being directed by good arguments (in service of some goal), and intellectual capital which is being spent on producing correct arguments (about how to achieve that goal). I suspect that we currently have a deficit of influence being directed by “pretty good” arguments, and a deficit of intellectual capital being spent on assembling “very good” arguments. I think one problem here is that many of the thinkers who understand the pretty good arguments conclude that funders are not moved by arguments, often in part due to overestimating the strength of their own arguments. That is, there are plausible arguments for many causes—suggesting that those causes are much more efficient than standard giving or funding, even funding by very rational and quantitatively minded philanthropists—but there are few extremely strong arguments. This situation could be resolved either by finding funding which is willing to follow more speculative arguments, or by finding stronger arguments.