The best reason to give later
by paulfchristiano
I’ve written about saving vs. giving before, focusing on the issue of interest rates vs. returns on good deeds. But for now, I think there is a much more compelling reason to save: there is a very good chance that the best giving opportunities we can identify in the near future will be better than the best giving opportunities we can identify this year.
The basic reason for optimism is that only a very small fraction of all existing charitable opportunities—much less all possible altruistic interventions—have been rigorously assessed. Activities like political advocacy and differential tech development seem like they could easily have very large impacts, but at the moment those impacts are poorly understood. So today we have a choice between focusing on relatively simple, well-understood interventions, or going out on a limb and supporting more speculative work.
The speculative projects seem like they probably have much larger impact, but speculative projects probably vary greatly in their cost-effectiveness in ways we don’t yet understand: in those areas we do understand, we observe large gaps between the most and least effective interventions which are often hard to notice without more detailed analysis, and it seems likely that similar gaps will exist in harder-to-quantify areas. So at the moment it seems like we will be able to do much more with our money in the future.
This situation should be familiar to many donors, and especially to “effective altruists”: our current best guesses (in most cases) are not only different from our past best guesses, they are also likely to be several times more cost-effective.
There are many positive effects from giving now: it enhances the credibility and coherence of effective altruism (EA) as a movement and increases the reach of effective altruist organizations. My current view is that the gains to giving now are significant and in many cases dominate the direct impacts; however, because the cost of giving large amounts now is so high, it is very valuable to explore alternatives that can produce similar amounts of value without being so wasteful. I suspect that appropriate arrangements involving large EA donor-advised funds would be able to capture many of these gains.
An even more important consideration is that if we all stopped giving we would mostly stop learning as well—most of what we learn comes from our current activities, rather than the passage of time itself. However, a small fraction of activities today seems to be responsible for much of the learning, and so I believe that the great majority of giving by effective altruists could be better spent either on activities better designed to help learn or on saving for the future. If learning is the main goal of giving, then you should be engaged in a different kind of cost-benefit analysis, and you are much more likely to support research and exploratory projects than scaling up proven health interventions (though there are also challenges to scaling up proven interventions which are important to learn more about).
I’m not going to lay out the case for giving later in great detail here, because I think most readers will already believe it. Instead I will look at the most compelling reasons to give now anyway, and explain my response to these considerations.
(ETA: Giving What We Can has just set up a Charitable Trust in the UK, which functions in the same way as a DAF, and will probably have a DAF in the US at some point in the future. My main concern with writing this post is discouraging people from giving at all, which is definitely not what I want to do!)
The credibility of the effective altruism movement depends on giving now
Through the growth of organizations like GiveWell, 80,000 Hours, and Giving What We Can, it is clear that there is a growing contingent of donors who are willing to direct their donations according to considerations of effectiveness. The fact that the community actually does donate is important, for a number of reasons:
- It increases the credibility, coherence, and influence of the community itself. I have only a weak idea of what it means to be a “rationalist,” but donating significant amounts to cost-effective charities is a costly signal of “effective altruism.” It grounds the community and reduces the probability that it will lose its focus or direction. Moreover, it significantly increases the external credibility of the movement and makes participation more respectable and attractive. It also increases the persuasiveness of the movement’s ideas, and the ability of people within the movement to influence funders.
- It increases the access of effective altruists to charities. If GiveWell moved $100,000 a year, few charities would want to deal with them or go through the effort to become extremely transparent about their own effectiveness (unless they were intrinsically motivated to do so). If GiveWell moves $10M a year, the situation is different.
- It increases the perceived demand for effective charities. Some causes may be particularly interesting to effective altruists while being relatively unattractive to normal donors. Those causes face a chicken-and-egg problem: until charities are doing work on them there is little evidence for their impact and so EAs don’t want to donate, and until EAs are donating to those causes or similar ones, would-be charities are discouraged by the lack of available funding and so must court a more mainstream audience. If the effective altruism movement is currently directing millions of dollars to our current best guesses and being transparent about what exactly it is looking for, this may help us get out of the chicken-and-egg situation by making it clear there is a large pot of money supporting the most demonstrably cost-effective interventions.
I think each of these considerations is significant on its own, and taken together they are definitely a big deal. But I am not convinced that giving now is necessary in order to realize these benefits. In particular, I think that an appropriately organized donor-advised fund (DAF) could capture many or most of them. Giving to a charity that is 1/10 as effective as what you expect to find in 10 years is only marginally better than wasting money altogether, and so it seems like a relatively high priority to figure out how to capture these benefits while giving later.
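The trade-off in the paragraph above can be sketched numerically. All figures below are illustrative assumptions for the sake of the example, not estimates from the post:

```python
# Hypothetical give-now vs. give-later comparison.
# Assumed (not from the post): a 5%/year investment return, a 10-year
# horizon, and a future opportunity 5x as cost-effective as today's.

def future_impact(donation, annual_return, years, effectiveness_multiplier):
    """Impact of investing a donation and giving it later to a
    more effective opportunity, in units of today's best charity."""
    return donation * (1 + annual_return) ** years * effectiveness_multiplier

donation = 10_000
impact_now = donation * 1.0  # baseline: give to today's best-known charity

impact_later = future_impact(donation, 0.05, 10, 5.0)
print(round(impact_later))  # roughly 81445 under these assumptions
```

Under these (made-up) numbers, waiting multiplies impact roughly eightfold; conversely, a gift today to a charity 1/10 as effective as the future opportunity recovers only a small fraction of that value, which is the sense in which it is "only marginally better than wasting money altogether."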
There are a number of ways that EA DAFs might be organized; here are three that seem to me like they might be able to capture a significant fraction of the total value of giving now:
- Directly under the control of EA organizations, such as GiveWell or Giving What We Can, which have mission statements and track records that make it clear that they will give to demonstrably cost-effective charities.
- Under the supervision and perhaps nominal control of EA organizations, such that grant-making is still at donors’ discretion but there is moderate social and logistical pressure to make donations to demonstrably cost-effective charities (particularly those which receive broad support within the EA community, or those backed by strong cost-effectiveness estimates).
- Under the control of individual donors, but (1) loosely affiliated with the EA community via some explicit arrangement such as pledging or registering, and (2) periodically making small grants which indicate the nature of the donor’s interests.
Effect on credibility, coherence, etc. of the EA movement: Any of these options would probably work to increase the coherence of effective altruism (and establish habits of giving, signal commitment to altruism, etc.). There are disadvantages to such indirect schemes, but if anything a large community of donors publicly setting up donor-advised funds and investing in the search for more cost-effective giving appears to be more distinctive (and more easily justified) than a community giving to their current best guesses. In terms of appeal to normal folk who might consider affiliating with the EA movement, giving later appeals to a very different audience than giving to the developing world now: a smaller audience, but one more likely to pursue higher-impact interventions discovered down the line. Depending on which of these audiences is more valuable, the difference could be a significant cost to completely switching from giving now to giving later. But I think that at a minimum redirecting a large share of funding to giving later would make the movement attractive to a broader range of people.
Effect on EA organizations: These options have quite different effects on the influence and prestige of effective altruist organizations. Option 1 may well have a larger positive impact than recommendation-based giving. Option 2 seems to be more nebulous, and I don’t know whether the effect would be larger or smaller (it would probably be sensitive to details of the arrangement). Option 3 would have a smaller impact on the credibility of EA organizations; it would be significantly larger than the effect of e.g. pledges for Giving What We Can, but probably significantly smaller than the impact of actual donations. Funds which are publicly committed to the recommendations of a particular organization might have a large effect on that organization’s credibility, but if the commitment is over a longer timescale than most fundraising charities are concerned with (which seems to be relatively short) then this effect would be muted.
Effect on perceived demand for effective charities: If relatively well-planned and backed by a larger effective altruism movement, I think that any of these proposals would have a large effect on the perceived demand for efficient charities. As above, the primary difficulty is that funders might be willing to let funds sit for decades, while fundraisers are not really concerned with the demand for effective charities in 20 years. For charities which are not plausibly top recommendations for effective altruists this may not be a huge deal; the biggest concern is if there is ambiguity over whether any opportunity could earn this designation in a realistic timeframe. Periodic small grants would help with this issue somewhat.
Learning depends on giving now
I think the most important impact of giving now is probably that it accelerates the process of learning. At the level of the EA movement, the main reason to be optimistic about better giving opportunities emerging in the future is that we will actively seek out such opportunities, and discover through experience what directions are most fruitful to explore. (As an individual you can expect your money to go further if you wait and do nothing, but only because you can benefit from the work of others.)
However, I think that most causes that EAs currently donate to are not responsible for this learning, except indirectly for the reasons explored in the last section (e.g., giving to AMF is not a cost-effective way of learning in and of itself, but may facilitate GiveWell’s other activities, which are a big driver of current learning). A relatively small set of activities seems to be responsible for most learning that is occurring (for example, much of GiveWell’s work, some work within the Centre for Effective Altruism, some strategy work within MIRI, hopefully parts of this blog, and a great number of other activities that can’t be so easily sliced up). The argument I’ve given definitely doesn’t justify delaying any of this funding: I’m recommending delaying object-level do-gooding relative to learning, not delaying do-gooding altogether.
However, it may be that some of these activities produce info much more efficiently than others, and depending on the relative importance of funding and haste it may be worthwhile to stall some of these activities while the most important info-gathering proceeds. To me it currently looks like the value of getting information faster is significantly higher than the value of money, and on the current margin I think most of these learning activities are underfunded. A more serious concern is that there seems to currently be a significant deficit of human capital specialized for this problem and willing to work on it (without already being committed to work on it), so barring some new recruitment strategies (e.g. paying market wages for non-EAs to do EA strategy research) there are significant issues with room for more funding.
These issues seem important to me, and I’ll certainly return to them in future posts. For now, I’d leave it at: a small fraction of activities EAs fund are directly producing relevant info, and those are probably important and worth scaling up. However, the majority of EA funding does not fall into this category.
Some activities have very high returns
The final compelling reason to fund work now, despite expecting better interventions to be available in the future, is that some activities have very high rates of return. In general I am quite skeptical about most such claims. But I know of two plausible cases for interventions whose returns are high enough to justify early spending (I have looked at many other possibilities but haven’t conducted an exhaustive inquiry):
- Outreach and movement building. See Giving What We Can, 80,000 Hours, and the Center for Applied Rationality. It seems plausible that investments in outreach can speed up the growth of some of these movements, hastening the point where they are saturated in the population. If these movements are currently quite small compared to their long-term potential, such speed-ups could result in large multipliers on effort invested today. Moreover, these movements might produce human and financial capital which can be used in the medium term for learning. So even if the returns to these interventions are only severalfold, they could still be justified as an indirect way of learning more. My main concern with movement building is that the communities it produces may end up not being useful for exploring new interventions, either because the values or intellectual standards of these communities decline, because they irrevocably specialize in order to grow, or because they are not able to inspire very many strong researchers who would not otherwise have been interested.
- Research on impacts of artificial intelligence. See MIRI. It looks like a transition to machine intelligence will very likely occur sometime in the next few centuries, and the transition itself may be quite disruptive. I think there is room for valuable work understanding what such a transition might look like and understanding how to influence it. Because there is a modest probability of a surprising transition relatively soon, it might be worth engaging in this project now even without a reliable estimate of its impact. This is especially true if we expect philanthropists of 30 years from now to more reliably pursue high-impact interventions, so that there will be fewer good opportunities in the future.
Although both of these activities could have very high returns and so could in principle be worth supporting instead of learning more, I don’t think this is a realistic position. Instead, I consider [1] a potentially efficient way to accelerate learning, and expect that most of the value from movement building comes from the possibility of setting up a community which can learn much faster. I think the most valuable parts of [2] are targeted strategy work which will shed light on the overall landscape and help us learn whether it is worth investing much more heavily in research on AI impacts. That is, although in both of these cases there is potentially high-impact object-level work, there are analogous high-impact approaches to learning, and so that remains the most important output.
Is there any progress being made on evaluating more speculative interventions? Is our understanding of them substantially different from what it used to be? If not, why and how do we expect that our understanding will change in the future? I’m trying and failing to imagine a world in which we’re as good at judging, say, x-risk interventions as we are now at judging global health ones.
I think our understanding of x-risk interventions is materially better than it was 5 years ago, and it seems to be getting better at a steady clip for mundane reasons (people are thinking about it more and building up a stock of knowledge, and there isn’t yet enough known that anyone can forget).
GiveWell is getting closer to making meaningful recommendations on speculative projects, and CEA is getting closer to mobilizing some of the human capital it is accumulating to work on the problem. MIRI is generally improving significantly as an organization and will probably be in a position to communicate their current understanding more effectively and start actually making progress in the relatively near future (and they are already starting to do this more seriously, though there is a very long way to go).
In terms of what could be, I think I have a better imagination than you 🙂 Perhaps the important point is what we mean by “good.” I agree that we can’t have such tight predictions of the ex post value created by an intervention, but we might be comparably good at building credible cases and building consensus amongst moderately large groups of people.
Okay, that seems reasonable. I agree that it’s probably a problem with my imagination 😛 To help with my imagination, can you give some concrete examples of things we understand now that we didn’t five years ago? (I actually just haven’t been following x-risk stuff for that long!) Thanks!
I haven’t been involved with this sort of thing for long either. But some examples:
– GiveWell is beginning to try evaluating more speculative causes, see e.g. their shallow investigations [http://www.givewell.org/shallow], and there has been much discussion of relevant considerations around GiveWell. Most concretely, there was recently a slightly better investigation of asteroid risk.
– MIRI, FHI, and Robin have all thought and written a great deal about the transition to machine intelligence. Many of the relevant arguments have been laid out much more clearly and expanded, though this could have been done much more effectively and could definitely be organized much better. We have a slightly better sense of AI timelines, of plausible outcomes and useful taxonomies, of plausible interventions.
– A bunch of basic arguments have been made, questioned, corrected, etc. Many of the posts here are of this flavor (though not this one), e.g. https://rationalaltruist.com/2013/02/27/why-will-they-be-happy/, and this blog is naturally a pretty small piece of the total picture.
Providing concrete examples is tough when the insights are so small. But look through the last 5 years of Nick Bostrom’s publications, Luke or Wei Dai’s submitted tab on LW, this blog’s archives, GiveWell’s conversations page, etc., and you will find a large number of small ideas and incremental steps forward. If you cast a wider net you’ll catch a lot more stuff. I think the main question is whether these little steps are building on each other or just retreading the same first few considerations. I think this is a particular concern w.r.t. the machine intelligence inquiry to date, whereas I am relatively confident that GiveWell is making headway (but going very slowly).
It appears to me that an awful lot of the foundational work to be done on stably self-modifying AI looks plausibly serial-time-limited or not-easily-parallelizable – that is, it will turn out to require insights that must occur in sequence. This seems like the most important thing to start on ASAP rather than delaying.
Maybe, but even if there are a bunch of steps that need to follow each other, more and better people with the aid of interim progress in AI generally could still produce a lot more marginal steps per unit time.
And investment returns could be converted into those things.
I feel like multiple distinct issues are being invoked here.
My original point is about serial dependencies. The degree to which one commonly reads papers and finds that equation 16, an obvious-seeming improvement over equation 15, has a citation date 2 or 10 years later, is a source of very grave concern to me when it comes to doing FAI work.
One can directly assert that this is the wrong concern because parallel human resources can easily overcome an advantage of serial depth.
Separately, one could argue about later-starting work requiring fewer resources per unit progress due to field boosting – but this might not overcome a loss of serial advantage, especially if the field’s avenues of investigation are not naturally very similar to your own, and early publications could serve to steer a field.
Similarly, one could argue that investment returns can result in more resources being invested later, but again the original thesis is that this may not overcome the difficulties associated with lack of serial depth, 9 women cannot have a baby in 1 month etc.
“The degree to which one commonly reads papers and finds that equation 16, an obvious-seeming improvement over equation 15, has a citation date 2 or 10 years later”
I don’t know much about academia, but would 2 years be plausible if someone came up with the improved equation after reading the paper as soon as it came out?
This is certainly plausible, but I think this particular scenario (not-easily-parallelizable foundational work on AI determines whether the development of AI has a positive impact, we can foresee the relevant research directions, this happens in the near future, and there aren’t any more important considerations bearing on the question) is not very likely at all. Given enough urgency or fast enough returns it might still be reasonable, but it is taking a big hit from improbability.
“I suspect that appropriate arrangements involving large EA donor-advised funds would be able to capture many of these gains.”
Giving What We Can has recently set up a tax-deductible Charitable Trust in the UK, and is considering setting up a donor-advised fund in the US. The Trust allows donors to pay in and get gift aid now, and can pay out to charities either immediately or in the future.
[…] You can also expect to benefit from the research produced by groups within the effective altruism, strategic philanthropy and evidence-based policy movements. A serious research program to work out which causes have the most impact is relatively new, so we can expect more discoveries in the next few years.2 For more, see Paul’s essay on this topic. […]