We can probably influence the far future
I believe that the (very) far future is likely to be very important, and that my decisions today probably have (very) long-lasting effects. This suggests that I ought to be mindful of the effects of my actions on the very far future, and indeed I am most interested in activities that I think will have positive long-term effects. In contrast, many or most thoughtful people consider it unproductive to think more than a few decades ahead, to say nothing of millions of years.
When I express this view, I often encounter intense skepticism about the possibility of having any predictable influence on the far future. Even if our decisions have some effect, can we reason about it in any useful way? I believe that the answer is yes, and that we can in fact have a fairly detailed and specific effect on the very far future. In this post I want to make a very simple argument for this conclusion: if we think there is a good chance that we will ever have an opportunity to have a predictable long-term influence, as I suspect we should, then interventions which improve our own capacity (such as investment) will have an indirect, significant, and predictable long-term effect.
The basic argument
It is easy to see that in principle there are opportunities for interventions to have very long-lasting effects. In the most extreme case, we can imagine the emergence of a pressing existential risk, an event which threatens to permanently cut short or curtail human history. If possible, averting such a risk would have a massive effect; it could be the difference between an essentially barren future and one full of rich experiences and valuable lives.
The practical question is: are these opportunities actually available? Surely there is some probability with which we might avert an existential risk over the coming few decades (say), but this probability might be very small. Many people would balk at the suggestion that they prioritize a one in a billion reduction in extinction risk over a very significant improvement in the quality of life of 1% of the existing population. Although I am unusually sympathetic to the aggregative utilitarian position, I would also find such an extreme tradeoff problematic even on purely moral grounds.
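To see why an aggregative utilitarian might take even a tiny risk reduction seriously, here is a rough expected-value sketch. Every figure in it is an assumption chosen for illustration (a stylized 10^16 potential future lives and a current population of 8 billion), not a claim from this post:

```python
# Illustrative expected-value comparison; all figures are assumed, not argued for.
future_lives = 1e16        # assumed number of potential future lives
current_population = 8e9   # approximate current world population

# Option A: a one-in-a-billion reduction in extinction risk.
risk_reduction = 1e-9
expected_lives_saved = risk_reduction * future_lives   # roughly 1e7 lives in expectation

# Option B: a significant quality-of-life improvement for 1% of people alive today.
people_helped = 0.01 * current_population              # roughly 8e7 people

# Under these (contestable) numbers the two options are within an order of
# magnitude of each other, which is why the tradeoff feels so uncomfortable.
print(expected_lives_saved, people_helped)
```

The point of the sketch is only that the comparison is sensitive to the assumed size of the future; with a larger estimate of `future_lives`, option A dominates by many orders of magnitude, which is exactly the kind of extreme tradeoff the paragraph above finds problematic.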
There is some debate today about whether there are currently good opportunities to reduce existential risk. The general consensus appears to be that serious extinction risks are much more likely to exist in the future, and it is ambiguous whether we can do anything productive about them today.
However, there does appear to be a reasonable chance that such opportunities will exist in the future, with significant rather than tiny impacts. Even if we don’t do any work to identify them, the technological and social situation will change in unpredictable ways. Even foreseeable technological developments over the coming centuries present plausible extinction risks. If nothing else, there seems to be a good chance that the existence of machine intelligence will provide compelling opportunities to have a long-term impact unrelated to the usual conception of existential risk (this will be the topic of a future post).
If we believe this argument, then we can simply save money (and build other forms of capacity) until such an opportunity arises. If nothing looks promising and we have waited long enough, we may eventually fall back to what we would have done anyway–my own view is that this wouldn’t be so bad–or we can reevaluate the situation and perhaps hold out for an opportunity even further in the future.
I don’t think this is necessarily (or even probably) the best thing to do with resources today; I merely want to make the point that we can have a predictable long-term impact.
Given that we can have a predictable long-term impact, I think that in order to justify any other intervention on aggregate welfare grounds we must argue that it has an even better long-term impact—in the same way that supporters of an anti-poverty intervention arguably have an obligation to argue that it compares favorably to an unconditional cash transfer. I do believe that many interventions can meet this bar, but I think that the case is not often made (even for many interventions which are explicitly targeted at long-term effects).
Viewed in light of this obligation, I think that the (aggregative utilitarian) case for many contemporary philanthropic projects is much more speculative than it at first appears, resting on a tentative chain of hypotheses about the relationship between “generally good-seeming” stuff today and very long-term outcomes. For many of these common-sensical projects, I am skeptical that they meet the bar I’ve described here. They might be justified on other (also morally compelling) grounds, but are unlikely to be cost-effective when considered exclusively in terms of long run aggregate welfare.
Will the opportunities be taken?
In addition to the existence of opportunities, we’d like to know whether those opportunities will have any “room for more funding”: if they are such compelling opportunities for a long-term impact, will other people take them regardless of what we do?
In this respect, the situation seems similar to more familiar philanthropic projects. For example, many people today are interested in mitigating the negative impacts of poverty, improving education, or accelerating technological progress. But these are large projects, and so a funder interested in them today can have a significant marginal impact even though they already receive substantial attention. I don’t see any reason to expect far-future-influencing to be fundamentally different from more conventional objectives, except perhaps by being somewhat more unusual and consequently somewhat more neglected.
It is sometimes argued that in general the future will be richer and more capable, so that the low-hanging fruit will be more reliably taken and the remaining philanthropic opportunities will be less good. I find this argument plausible, but am skeptical about the magnitude of the effect. First, I would observe that insofar as the world is getting richer, a philanthropist who saves should expect to get richer as well (in fact slightly faster). And I think it is unlikely that philanthropic spending (as a fraction of the world economy) will rapidly increase over the coming generations. So the effect is being driven primarily by overall improvements in the quality of decision-making around the world. But the sign of this increased decision-making-quality effect is not entirely clear: similar gains might also accrue to the individual philanthropist, who certainly benefits from the wisdom of the world around her. Again, on balance I expect that such improvements will tend to reduce the impact of an individual thoughtful philanthropist, but I am not convinced this is a large effect.
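The claim that a philanthropist who saves grows slightly faster than the world around her can be sketched with assumed rates. The 5% investment return and 3% world growth rate below are purely illustrative, not figures from the post:

```python
# Illustrative compounding sketch; both rates are assumptions.
investment_return = 0.05   # assumed annual return on saved resources
world_growth = 0.03        # assumed annual growth of the world economy

# Track the philanthropist's resources as a normalized fraction of world output.
share = 1.0
for _ in range(50):  # fifty years of waiting for a good opportunity
    share *= (1 + investment_return) / (1 + world_growth)

# With these rates, the saver's share of world resources roughly
# compounds at (1.05 / 1.03) per year, about 2.6x after 50 years.
print(round(share, 2))
```

Under these assumptions, saving does not merely preserve the philanthropist's position relative to a richer future; it modestly improves it, which is the sense in which the effect described above is driven by decision-making quality rather than by wealth alone.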
Concretely, looking at the historical record, I don’t believe that a philanthropist 100 years ago should have expected to have a much larger effect on the world (using the same fraction of the world’s resources) than a similarly-positioned philanthropist today, and I am highly skeptical that we are at the unique moment of maximally impactful philanthropy.