Four flavors of time-discounting I endorse, and one I do not
by paulfchristiano
(I apologize in advance for a bit of a long post. There is a more concise summary at the end.)
We often choose between doing good for the people of today and doing good for the people of the future, and are thus faced with a question: how do we trade off good now vs. good later? A standard answer to this question is to invoke exponential time discounting with one discount rate or another.
When I consult my intuition, I find that at least over the next few thousand years, I don’t much care about whether someone gets happier today or happier tomorrow—it’s all the same to me. (See also here for a much more thorough and correct discussion of this issue, and see here for a much more poetic description.)
Nevertheless, there are a few senses in which I do discount the future, and I think it’s important to bring those up to clarify what I do (and don’t) mean by saying that I have weak time preferences.
Also, a tangential fact about numbers: if something grows at a rate r per year, with r << 1, then it will take about 0.7 / r years to double (since ln(2) ≈ 0.69).
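(A quick numerical check of this rule of thumb; a minimal sketch, with arbitrary sample growth rates.)

```python
import math

def exact_doubling_time(r):
    """Years for a quantity growing at rate r per year to double: solves (1 + r)**t = 2."""
    return math.log(2) / math.log(1 + r)

def rule_of_70(r):
    """The approximation from the text: about 0.7 / r years, since ln(2) ~= 0.69."""
    return 0.7 / r

for r in (0.01, 0.05, 0.15):
    print(f"r = {r:.0%}: exact {exact_doubling_time(r):.1f}y, rule of 70 gives {rule_of_70(r):.1f}y")
```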
Interest and investing
If I want to make people’s lives better today, I have access to a standard array of philanthropic options (poverty alleviation, etc.). If I want to make people’s lives better in 100 years, I have access to a (different) standard array of options (technology, education, etc.), but I have an additional option: invest my money for the next 100 years, then in 100 years try to make people’s lives better.
So if market interest rates are 5% and the array of present-do-gooding options is roughly stable over time, I should be indifferent between pursuing an intervention that saves 100 lives for $1M today, and investing that $1M for a year and then pursuing an intervention that saves 105 lives next year. In some sense I would be right to exhibit a “discount rate” of 5%.
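As a toy version of that comparison (a sketch; the 5% rate and the 100-lives-per-$1M intervention are just the hypothetical numbers above):

```python
def lives_saved(budget_dollars, lives_per_million=100):
    """Hypothetical intervention from the text: 100 lives saved per $1M, stable over time."""
    return budget_dollars / 1e6 * lives_per_million

def lives_saved_after_investing(budget_dollars, years, interest=0.05):
    """Invest at the market rate first, then buy the same (stable) intervention later."""
    return lives_saved(budget_dollars * (1 + interest) ** years)

print(lives_saved(1e6))                     # 100.0 lives by giving today
print(lives_saved_after_investing(1e6, 1))  # 105.0 lives by investing for a year first
```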
But this is not a statement about what I value, just about how best to get it. If I can get 5% inflation-adjusted returns (where inflation is indexed by an appropriate basket of good deeds), and I care about the future as much as about today, it just means that I should only do good today when doing so is more efficient than investing and doing a good deed with a delayed payoff.
My discount rate is lower than the apparent social discount rate, and so I should be keen to buy influence in the future from those who are willing to sell it. Robin Hanson has repeatedly made this point. It seems to not be taken seriously enough by people who want to do good in the future. Time-insensitive altruistically motivated folks control a relatively small share of wealth today, but weighted by patience and degree of concern for the future, an efficient market might let them control a rather large share of wealth in the future.
Unfortunately, as others have observed, it is very hard to create institutions which will accumulate capital over a period longer than my lifetime. I don’t see any formal way to do it in the US (due to a combination of 5% required giving for foundations, huge estate taxes, and no legal room for corporations that will plausibly honor your wishes). That said, if investing for the long haul would be a good idea, investing for the short haul is as well (i.e., set up or support a foundation which will be spent down once suitably motivated successor leadership can’t be identified). Also, to be honest, no one seems to have worked particularly hard on this problem, and I wouldn’t be surprised if you could make significant headway on it.
If forward-looking altruists would be investing for the future if only it were possible, then developing tools to do that is highly leveraged even if you personally won’t make too much use of them.
What is the interest rate?
There are different interest rates in society, depending on the level of risk an investor is willing to take.
In their own money-making endeavors, altruists want to be fairly risk-loving, since twice as much money is nearly twice as good for many altruistic purposes. In investment decisions, this is less obvious. The problem is that all risk in an efficient market is correlated, since all uncorrelated risks would get bundled together to produce less risky assets. (This isn’t true for e.g. founding a startup, where issues of moral hazard keep startup founders from insuring against their risk. But financial institutions don’t normally invest in a single startup because they don’t have to: they invest in a large portfolio of startups which is only correlated with the overall environment for startups—and then they invest in a large portfolio of portfolios, so that their investments are primarily correlated with the overall health of the market.)
This means that when an altruistic investor takes risk, they are typically also correlating their investments with the market as a whole. A normal investor mostly cares about correlation with their own other investments, but an altruist has other concerns. In worlds where markets do well, much more charitable money is available, and fewer low-hanging opportunities are passed up. So altruistic money is more useful in poorer worlds, and in particular in worlds where other rich donors (whose wealth is particularly closely tied to market performance) have less money. So it’s not obvious on balance whether an altruist should be more or less risk-averse than the average investor (who appears to be awfully risk-averse). I think this should be a topic for another post.
In terms of actually estimating the rate, the clearest resource I’ve found (though I don’t know how accurate it is—it doesn’t look like they had much room to monkey around) is this 2011 update from the authors of the millennium book. We have to take care to correct for survivorship bias, and to generally be careful about systematic biases in the numbers we can’t find (there are some really spectacular examples of this—they point out that some retrospectives looked at the historical performance of equities without considering railroads, which are today a negligible share of the market but in 1900 were over 30%! Even using the US as opposed to other countries can lead to a significant bias, as its ex post salience is related to the strength of its equity markets.)
Real returns to equities appear to be about 5-6% around the world, and real returns to bonds are around 1-2%. The estimate for equities is quite noisy—the variance of stock market returns is 15-20%, so if we treat each year as an independent draw from the same barrel, the error bars are 2-3%. In fact the situation is worse than this, because nearby years are significantly correlated and the distribution is not normal (the existence of large spikes makes it hard to estimate the real risk from the realized risk, and uncertainty about the possibility of such spikes should surely itself be counted as a risk). Things improve a bit when we can average across many nations, but given the correlation of shocks across countries this is little help. (A consequence of the error being so large is that it matters a great deal whether we are doing this exercise in 2007 or 2010.)
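For the curious, here is the naive independent-draws arithmetic behind error bars of that size (a sketch: the 111-year sample length is my assumption for a 1900–2010 dataset, and as noted, correlations and fat tails make the true uncertainty worse):

```python
import math

sigma = 0.175    # annual volatility of equity returns, midpoint of the 15-20% range
n_years = 111    # assumed sample length (1900 through 2010)

se = sigma / math.sqrt(n_years)  # standard error of the estimated mean return
print(f"standard error of the mean return: {se:.1%}")  # ~1.7%, hence ~2-3% error bars
# Correlated years and fat tails shrink the effective sample, widening these bars.
```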
The high rate of returns to equities, given the returns on bonds, is often considered a bit of a puzzle. If we accept that, then we should probably regress a bit to the mean, which would put real returns to equities well under 5%.
Returns to other investments seem much higher than returns to equities. (E.g., Wikipedia reports annualized returns of 35% [after management fees] for Renaissance Technologies’ $5 billion Medallion fund.) These numbers are much more seriously contaminated by survivorship bias and noisy estimates + correlations across similar instruments. Moreover, many of the real expected returns should also be accounted as wages for wily investors rather than interest. I think that the existence of these investment opportunities should cause us to be a bit more optimistic about interest rates, but not substantially so (though I could be convinced to revise this position completely).
So I’m inclined to accept an estimate of 5% for the market interest rate, and 2% for risk-free (or rather, market-uncorrelated) interest rates. I don’t know which of these altruists should be using, but I think it is somewhere in this range. For comparison, the world economy seems to grow by 3.5–4.5% per year (though an actual economist can correct me on these numbers). I had expected a much larger gap between interest rates and growth rates. If such a gap exists, it must be because of investment opportunities much better than the stock markets. I don’t have good evidence for this, but it does seem plausible.
It’s also worth pointing out that there seems to be complementarity between money and the time of motivated smart people, which increases the returns to investors who put their own time in (this wouldn’t happen in an efficient world, but there are big principal-agent problems + reputational problems which make these markets quite inefficient).
Uncertainty
However I try to do good, I face uncertainty about the impact of my actions. As those impacts get further in the future, they typically become more uncertain.
Re: Doom
Most simply, there is uncertainty about whether humans survive, whether society remains essentially stable, whether the structure of the world stays basically in line with expectations, etc. (almost any intervention is predicated on a fairly extensive picture of how the world works, which is typically reasonably but not infinitely robust).
My estimates for the probability of extinction or social collapse are in the regime of 0.1 – 0.3% per year, based on making stuff up and inspired by the modern and impending development of the first technologies which could plausibly kill everyone. (If I write enough blog posts, justification for estimates in this regime will definitely appear eventually, but I think this is not far from most reasonable estimates.) This is quite slow discounting relative to the other sorts I’m discussing here, but over the very long term it could make the difference between not caring at all about the distant future or caring primarily about the distant future.
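To see how slowly this discounts the near term but how much it matters over millennia, here is the compounding arithmetic (a sketch using the risk range above):

```python
for annual_risk in (0.001, 0.003):
    for horizon in (100, 1000, 10000):
        survival = (1 - annual_risk) ** horizon
        print(f"{annual_risk:.1%}/yr over {horizon:>5} years: P(no collapse) = {survival:.3g}")
# At 0.1%/yr, ~90% of value survives a century but only ~0.005% survives 10,000 years.
```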
In general
The specific stories I tell about what I’m doing typically only apply to the world of today which I understand well, and they become increasingly shaky as they are extrapolated. It isn’t quite clear what it means to say that one effect is 1% less certain than another, but it feels safe to say that my impacts on the future become at least 1% less certain with each passing year. Planning horizons for successful human activities seem to be very rarely longer than 20 years, suggesting discount rates due to uncertainty that are more like 5% (or maybe simply reflecting real interest rates of about 5%…)
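One way to see the correspondence between planning horizons and rates (a sketch: if impacts decay exponentially at rate r per year, the expected-value-weighted horizon is roughly 1/r years):

```python
# With exponential decay at rate r, the mean lifetime of a plan's expected impact
# is 1/r years, so ~20-year planning horizons suggest r of roughly 5%.
for r in (0.01, 0.05):
    print(f"decay rate {r:.0%}/yr -> effective horizon ~{1 / r:.0f} years")
```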
But…
It doesn’t make sense to keep discounting at a constant rate indefinitely based on uncertainty, or to apply a constant discount rate across all interventions. For example, I ought to assign much more than 2^(-1000) probability to civilization lasting for a million years, as long as I can imagine any possible world in which such a stable civilization comes to exist. Likewise, though I should generally be more uncertain of events farther in the future, it would be an error to claim to know literally nothing about my impacts on the world centuries hence, especially if I haven’t yet thought hard about it.
In fact I think there are available interventions which have reliable influences far into the future. Preventing extinction is the most straightforward, but I think it is safe to say that even more subtle interventions like making the world wealthier or more peaceful, or changing population characteristics, or etc. have a modest effect projecting even a million years hence, and that it is possible to reason productively about these effects. (Or at least worth trying—personally, that’s a lot of why I’m here.)
The fact that humans can rarely construct successful plans on very long scales does not mean that they cannot successfully anticipate any of the consequences of their actions on long scales, merely that they do not have sufficiently detailed anticipations to usefully plan.
Decision theory, bargaining, and the golden rule
(This section is directed only at the hardcore utilitarians among us; others should probably pass. Also see Carl Shulman, again, with a more careful explanation of similar ideas.)
I often do things to make my own life better, even though I think it is clear there are more efficient ways to make other people’s lives better. I could characterize this as a failure of will, and not spend any time thinking about how to make my own life better, but pragmatically I think that would be an error—it would make my life quite a bit worse, and it would make me much less enthusiastic about pursuing my own explicit plans. It might be a useful tactic in some internal struggle amongst my desires for influence over my actions, but overall it seems clearly destructive (and probably if I decide not to take this tactic I can locate proper Pareto improvements).
A similar argument applies at the level of societies. We could declare our altruistic priority to be concern for the future without time-discounting and not spend any of our altruistic energy on making our collective lives better today, but I think that would be an error pragmatically. In light of our individual preferences to lead better and richer lives, we might all be better served by adopting altruistic priorities which paid some heed to our own lives, in addition to the lives of distant future folk—doing so will let us more efficiently use available resources to have a good life today, and make us collectively more enthusiastic about pursuing our explicit values.
If I wanted to venture into more exotic waters, I could offer what I find compelling decision-theoretic/Kantian/Rawlsian/Drescherian reasons to be nice to other people in proportion to their influence, even when you aren’t making explicit bargains with them. Distinguishing between these arguments and moral intuitions is complicated, and there seems to be some double-counting of motivations for altruism, but overall I take these considerations seriously.
I think the relevant moral intuitions are well illustrated by the example of a utilitarian deciding whether to treat the people around them with respect. The proto-utilitarian may reason: “The selfish driver costs other drivers minutes, so that he might save himself seconds. If that behavior isn’t objectionable, I don’t know what is!” Hardcore utilitarians sometimes come full circle, reasoning: “If I am selfish, I can save myself seconds at the expense of others’ minutes. But each of my seconds will be used to further my values, while others’ minutes are used for furthering their values instead. Surely my seconds are worth many times others’ minutes, since their values aren’t well-aligned with mine?” Of course this reasoning is literally identical to the selfish driver’s, and is subject to the same intuitive sanctions.
Of course this argument applies not only to time discounting—it also applies to people in the first world doing nice things for each other at the expense of the developing world which they could more easily help, and so on. Basically, it seems a bit problematic to split our energies up between “doing good for myself” and “doing good for everyone,” and then to fail to put any collective reasoning into any of the goals at intermediate levels of selfishness. (I think this is an error only hardcore utilitarians are likely to make.)
In summary: we could fight amongst ourselves and jockey to have as much influence on the world as possible, or we could just agree to settle on values which reflect our respective influence (without actually engaging in the jockeying). I’d prefer the latter, and the people of today have much more influence on the world than the people of tomorrow. To the extent these people have selfish values, it seems worth giving those values some weight. (Note: this policy is best combined with incentives and collusion to discourage jockeying for influence.)
These arguments don’t recommend temporal discounting per se. I’ll write more about normative uncertainty and aggregating value in the future, but I think the appropriate response is much closer to allocating a fixed share of resources to improving the lives of people today and a fixed share of resources to improving the lives of people across all time. (And we can split this up further into buckets whose granularity is limited only by our ability to find bargains.)
Social returns
If I invest in education today, I do so hoping to improve the capabilities of affected students. Those students, I hope, will then spend the coming years using their improved capabilities to make the world slightly better than it would otherwise have been. They might in turn support further educational efforts, which will in turn create even more capable students etc.
In general many forms of do-gooding aim to create resources that themselves do more good, and so creating those resources earlier has a larger impact. Here again, it doesn’t make sense to talk about a discount rate which is constant across interventions. However, it might make sense to talk about a general “efficient marginal social rate of return” (for investors with particular values), in the same way it makes sense to talk about the market interest rate. That is, all of the projects that merit investment today should have comparable rates of return on the margin, or it would be better to delay investment in some of them. I have never seen anyone whose understanding of the world was so good that they could justifiably talk about such a unified rate of social return, but sometimes in narrower contexts (for example, within the context of a particular altruistic project) it makes sense.
Note: for goods that compound in kind, for example increased quality of education leading to future increases in quality of education, it makes sense to talk about growth rates in kind. But in general resources of type A generate resources of type B (or a complicated bundle), and to assess rates of return we need to use some rate of exchange between different resources. The rate of exchange between resources A and B depends on their marginal value, which generally depends both on how useful they are and how easy they are to produce. I’m going to set that aside for a moment, but it’s worth keeping in mind that outputs which are easily produced by a compounding process tend to be less valuable by virtue of their easy production, and the existence of such a compounding process tends to drive up the value of complementary goods. (For example, the ability of an organization to grow quickly reduces the marginal value of future members of that organization by decreasing their price, while increasing the value of complementary goods like “good ideas about what the organization should do.”)
Local growth
Some of these resources may be very well-localized, for example membership in a particular organization or support for a particular cause, which might tend to grow exponentially (often at far above market rates) as new members of the organization (or enthusiasts for the cause) invest their resources in further growing the organization or supporting the cause. These growth rates are often very large, say in the regime of 50-100% / year. However, the apparent rate is typically highly deceptive: such growth comes at a cost, namely, the exhaustion of an exogenously produced stock of people sympathetic to the relevant ideas (even if we assume that the resulting redistribution of resources, from whatever people were supporting before, is costless).
Thus such growth typically consists of a phase of rapid increase, followed by saturation in the relevant population. At this point the resources acquired by an organization (or marshaled in support of a cause) are turned to some other opportunity. It may be that they find a new cause with comparable rates of return. But much more often they will engage in activities with much lower apparent rates of return—after my pro-education group has doubled for a few years, we stop working on outreach and (with luck) buckle down to actually improve education.
If an organization is poised to double each year for some time, and will eventually invest its resources into projects with social returns of 10% / year, then it is usually wrong to say that the social rate of return on investment in that organization is 100%. Speeding up the progress of such an organization by a year increases its size by 100% at first, but in the long run it only has the effect of speeding the organization’s activities by a year. And the value of that speed-up is a 10% increase in produced value rather than a 100% increase. (We can reason about adding 5 people to a 5-person organization that doubles each year as speeding up the organization by a year. Of course it also has other effects which are orthogonal to its growth, for example changing its composition, and these need to be reasoned about separately.)
One situation in which such local growth can make a huge difference is when the growth will be cut off for some reason. For example, if there is an exogenously determined 5-year window during which an organization can double each year, after which it is guaranteed to stop growing, then adding a person at the beginning of the period really can correspond to adding 32 people at the end of the period. Similarly, if an organization is doubling each year but exists to address some opportunity which will come to pass before the organization saturates its potential for growth, then increasing the size of that organization by 1% early on may increase its capacity by 1% at the time when that opportunity arises.
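Two toy calculations contrasting these cases (a sketch; the doubling, the 10% long-run return, and the 5-year window are the hypothetical numbers from the last two paragraphs):

```python
# Case 1 (saturating growth): speeding a doubling organization up by one year just
# delivers its eventual activities a year earlier, worth roughly one year of its
# long-run social return rather than one year of its growth rate.
long_run_return = 0.10
value_of_one_year_speedup = long_run_return          # ~10%, not ~100%

# Case 2 (growth cut off after a fixed window): one person added at the start of a
# 5-year doubling window really does become 2**5 = 32 people at the end.
people_at_window_end = 1 * 2 ** 5

print(value_of_one_year_speedup, people_at_window_end)  # 0.1 32
```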
A second situation in which social returns are actually equal to growth rates is when a change in the character of an organization (or cause) will spread with the organization (or cause). If I expect that this group of 10 people will double in size until it is a group of 10,000, then by joining the organization I might be able to shift its aggregate values by 10% now but only 0.01% in the future. If the values of the organization are locked in as it grows, then my early impact can have an outsized effect.
Diffuse growth
Most investments produce a very wide range of outputs. The basic reason for this is that even if each investment has relatively localized returns, these returns are then reinvested, and at each stage of reinvestment the returns are spread increasingly diffusely. (Also, for most complicated systems substitution effects become an important consideration.)
So for example, investing in education does have a small impact on improving educational outcomes further in the future, but almost all of the gains are spread out much farther across society. Putting money in the hands of poor people today seems to increase the amount of money in the hands of poor people tomorrow, but over time these gains are distributed quite broadly throughout the world, as the recipients consume, invest, have children, etc. (and their trading partners, borrowers, relations, and children go on to do their own thing in the world…)
The world economy grows at roughly 4% a year, and the world population grows at more like 1% a year. In the US the numbers are slightly lower, with economic growth more like 2-3%. I expect that social rates of return for most resources fall somewhere in this range, depending on the type of investment. This seems like a fairly robust result across many simple models. For example, any investment that creates economic value that can be exchanged on a market will compound somewhere between growth rates and interest rates due to substitution effects. If population characteristics next generation are determined in the straightforward way by population characteristics this generation, then changes in population characteristics compound at the rate of population growth. And so on.
Conversely, any changes that compound significantly faster than this are very unstable—if something is increasing by 15% a year, it’s hard for it to do that for long. It doesn’t take many years of 15% growth to overtake the world economy itself. (Over a century, 15% growth corresponds to 23,000x the growth of the world economy.) Such processes are typically either exogenously driven, or will soon reach a limit where they are exogenously limited. (And then we are back in the previous case.)
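Checking the century figure in parentheses above:

```python
fast, world, years = 0.15, 0.04, 100
relative = ((1 + fast) / (1 + world)) ** years
print(f"{relative:,.0f}x the growth of the world economy")  # ~23,000x
```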
These “discount rates” mostly reflect the fact that as the world grows bigger it becomes increasingly hard to change using constant resources. If we want to understand the impact of an investment, it makes sense to think about that investment as a proportion of all of the activity going on in the world, rather than an absolute number. A similar absolute investment in the future represents a smaller proportion.
Conclusion
My views:
- Financial investment should be considered another intervention, which is worth keeping in mind. Market interest rates do not change the value of other interventions, but if other interventions don’t have good payoffs when time-discounted at market rates of return, that is a big clue that investment is likely to be a more effective intervention.
- Society should invest a significant share of our resources into making present people’s lives better, and some of this support should be reasoned about in the framework of altruistic activity. But this is orthogonal to issues of discounting, and should not crowd out investment in the very long view.
- Uncertainty makes influencing the future hard, but this should be reasoned about on a case-by-case basis. Severe uncertainty should push us to pursue interventions in which we are confident, rather than ignoring the future.
- Social rates of return make acting early attractive, but when the rate of return looks much larger than the rate of economic growth it should be treated with skepticism. Like uncertainty, this should be treated on a case-by-case basis, and should be taken as a positive reason to support the particular interventions which exhibit such compounding rather than applied in general as a discount factor. (Sometimes there really are huge opportunities.) But the future is projected to be bigger, and this should be taken into account when evaluating interventions that will have their impact in the future.
Also note: except for uncertainty, each of these forms of discounting is closely tied to the rate at which the modern world is growing, and extrapolating them to the future via constant-rate exponential discounting is an error. Discounted values of the long run will be dominated by the possibility that interest rates fall (and it seems certain they will fall eventually, when available resources actually run out).
Nice post.
I especially like the point about complementary inputs.
Regarding diffuse returns, since the Industrial Revolution there have been sustained differences in economic growth and population growth across countries, at least over periods of decades to centuries.
Great post, thanks. Lots of nuggets of insight here.
Re. Uncertainty about the future, I somewhat agree with tackling it on a case-by-case basis, but I think there are also effects which are likely to be common across a lot of cases. It’s all quite hard to reason about, so there seems to be some merit in doing your thinking once and then using that as a default in other cases unless you have particular reasons to adjust. You’re quite right that a uniform exponential is a bad model for the long term, though, because part of the uncertainty is a chance to move to more or less stable systems.
Were planning horizons longer in the past? People built a lot of cathedrals and castles which took longer than twenty years. I’m not sure off-hand whether this is a selection effect where I just don’t know about the many failures, or if it’s really tied to interest rates, or if uncertainty just used to be lower. Could there be natural reasons to expect uncertainty to be tied to growth rates? It seems a little implausible (certainly no growth doesn’t mean no uncertainty).
I agree that it’s worth factoring out some types of uncertainty (especially things like “How much will X change in expectation over the next decade?”) and considering them separately from interventions. I haven’t seen this done seriously, though it probably has been in at least a few cases. To find out just what X should be (in order to make a useful question) it is probably best to consider a few interventions and see what kinds of structural assumptions their long-term impacts depend on.
I think that the rate at which uncertainty accumulates should be expected to be similar (at least roughly) to the rate of growth, since many processes occur on timescales commensurate with each other and with growth. (At least this is my general sense, though I don’t understand the empirical issue very well.)
Historical time horizons are an interesting question, though I know little about them besides the stylized fact that people have done things like build cathedrals which take a pretty long time. Those projects don’t involve too much uncertainty, though, so it could well be tied to collective discount rates, which are more closely tied to growth.
a) Do you (generally) use ‘consequentialist’ and ‘utilitarian’ interchangeably?
b) Like your ‘2)’ this isn’t time discounting per se, but Johann Frick has some interesting fairness-based arguments about the allocation of risk to individuals and the preference for helping identified victims over statistical victims. I don’t think they quite work, and the framework is contractualist, but they’re potentially interesting for consequentialists who arrive at their consequentialism via considerations about fairness (unlike those who arrive at their consequentialism via considerations about the intrinsic awesomeness of aggregate hedonium / aggregate preference-satisfactium):
The link failed to lock on to the right minute for some reason: 50m into the video.
a) No–I would describe myself as a utilitarian-leaning committed consequentialist. The claim “this is a mistake a utilitarian would be most likely to make” is sociological/psychological rather than philosophical (in principle it should afflict all consequentialists comparably).
b) I think this is a bit of an unfair characterization of serious utilitarians. There is a complicated set of intuitions that support utilitarianism (as well as consequentialism), and fairness is unequivocally one of them. I am sympathetic to (what I understand as) the general contractualist perspective though.
a) Can you give an informal definition of what you mean by ‘utilitarian’? I ask because for some people utilitarian = hedonic utilitarian, for others utilitarian = consequentialism + value monism, for yet others utilitarian = consequentialism + welfarism (for any definition of welfare) + aggregation, and so on.
b) I think that at least many hedonic utilitarians base their utilitarianism on beliefs about the intrinsic value of certain qualia ‘from the point of view of the universe,’ and that for these utilitarians an anti-utilitarian argument from fairness is a non-starter because (per such utilitarians) the value of hedonium has nothing to do with human notions like fairness. At the very least David Pearce would be an example of a very smart person who falls into this category.
Thanks for this post. It discusses a lot of things that I have also thought seriously about and talked extensively about with others at Oxford, and it does it very well. A few points:
1) You are right to point out that the ‘discount rates’ across different project types are different, and that they are not constant rates (i.e. not exponential discounting). The same goes, as you say, for uncertainty. I find that pretty much all the confusion ends if you stop discounting and instead just assess the value of each option (e.g. give to cause A now, invest then give to cause A in 10 years, give to cause B now, etc.; see the sketch after these points). This can also flush out some of the subtle and pernicious effects of discounting. For example, in cases where your hands are tied so you can’t invest and give later (which is true for many government departments), sometimes a great available option gets neglected because it is Pareto-dominated by an even better but unavailable option that involves investing.
2) A huge factor in discussions of whether to give now or later is how quickly better spending opportunities are uncovered through research/thought. This seems to have happened a lot in the last 20 years or so and to have plausibly been much greater than the growth rate etc.
3) Other key factors are the extent to which you are less likely to give in the future (maybe not relevant to your audience, but important in general) and the drying up of the best opportunities to spend as time goes on.
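A minimal sketch of the “assess each option directly” approach from point 1 (all values hypothetical):

```python
options = {
    "give to cause A now":               100,  # assessed value in some common unit
    "invest, give to cause A in 10 yrs": 125,
    "give to cause B now":               110,
}
# No discount rate needed: rank options by assessed value, restricted to what is
# actually available, so that an unavailable Pareto-dominating option (here, the
# invest-then-give option for a department that can't invest) can't crowd out the
# best available choice.
available = {k: v for k, v in options.items()
             if k != "invest, give to cause A in 10 yrs"}
print(max(available, key=available.get))  # give to cause B now
```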
If you haven’t already, you may want to look at:
http://www.givingwhatwecan.org/about-us/our-research/donating-vs-investing
discounting-health.pdf
Yes, I wrote this post largely to explain my view of “discount only on a case by case basis” (though then many other ideas got dragged in).
It seems you have covered many of the same issues I talk about in that piece with Robert Wiblin. I mostly agree, but disagree with some of the analysis re: “it will get solved whatever we do,” because solving a problem today that would have been solved tomorrow anyway also does good, by freeing up the resources that would have been used to solve it tomorrow.
I agree that a changing landscape of available interventions is probably the most important consideration concerning the passage of time. Hopefully this will get covered more in future posts!
Also, it seems that making “unavailable options” involving investment more available may be a very important intervention. (Harder for governments than philanthropists given political realities, but still not obviously impossible.)