Altruism and profit

by paulfchristiano

When I suggest that supporting technological development may be an efficient way to improve the world, I often encounter the reaction:

Markets already incentivize technological development; why would we expect altruists to have much impact working on it?

When I talk about more extreme cases, like subsidizing corporate R&D or tech startups, I seem to get this reaction even more strongly and with striking regularity: “But that’s a for-profit enterprise, right? If it were worthwhile to spend any more money on R&D, then they’d do it.” Recently I’ve encountered this argument again, in the context of working to improve governance broadly. I sympathize with the sentiment, but the actual arguments don’t seem strong enough to carry the conclusion. Ultimately this is an empirical question about which I’m uncertain, but at this point it seems very unwise to take profitable opportunities off the table.

There are many good arguments on both sides of this discussion, but I’m going to run through what I consider the five strongest points in favor of being open to profitable opportunities. Some of these are responses to common counterarguments, and some stand on their own.

1. “Invested” ≠ “Invested as much as possible”

The simplest argument implicit in “but that’s a profitable opportunity” is that it’s already being done, so why bother?

The world is a big place; if we were inclined to look at things coarsely we might conclude that everything useful is already being done. But just because someone somewhere does X, it hardly means that X has “been done.” X can typically be done faster, better, at a larger scale, with a higher probability of success, in a wider variety of ways, etc. Additional investment can typically have a positive effect in many ways (for example, expanding existing projects, setting up parallel projects, or doing supporting work).

Yes, there is a multiplier to be had for working on a neglected project. But that multiplier doesn’t always dominate. It seems unlikely that every project should have fewer than 10, or even fewer than 1,000, people working on it. And as long as I’m working towards the same goals as someone else somewhere, it doesn’t seem that I should be particularly troubled by working alongside people who don’t share all of my values, as long as they care about the success of the project (e.g. they care about improving governance, preventing disaster, or advancing technology).

2. “Significant externalities” ≠ “all externalities”

If I notice a promising opportunity and believe that I can capture all of the gains from pursuing it, I might suspect that the market would have scooped it up if it were really as good as it seems. But if I can only capture some of the gains, this reasoning falls apart. If the opportunity involves diminishing returns and allows capturing only a tiny fraction of all of the social gains, then it might be worth investing a tiny bit for profit, leaving significant room for further altruistic investment. I think the existence of diminishing returns is really doing the work in this argument; it will generally cause even very good altruistic opportunities to be very profitable at first, despite a huge gap between social value and profit potential.

For example, Google stands to profit from helping develop self-driving cars. But their potential profits seem to represent a small fraction of the social value they might create. And so the fact that Google is not interested in investing more in self-driving cars provides little evidence that more investment wouldn’t be useful on altruistic grounds. If Google is rational and is only able to capture 1% of the value of marginal investment, then spending $1 to increase their spending by $1 (which seems like a plausible result of a matching grant) would still generate 100:1 returns.
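
To make the arithmetic concrete, here is a minimal sketch in Python; the capture fraction echoes the hypothetical 1% above, but the functional form and every other number are invented for illustration. With diminishing returns, a rational firm stops investing exactly where its captured marginal return hits $1 per $1, which is where the marginal social return is 1/(capture fraction) dollars per dollar:

```python
# Toy model of the argument above; every number and functional form is
# invented for illustration. Social value of cumulative R&D spending x is
# concave (diminishing returns), and the firm captures only a small fraction
# of it as profit. A rational firm stops where its captured marginal return
# falls to $1 per $1 spent, which is exactly where the marginal *social*
# return is 1 / capture_fraction dollars per dollar.

import math

capture_fraction = 0.01  # firm keeps 1% of the social value it creates

def social_value(x: float) -> float:
    """Social value of cumulative investment x, with diminishing returns."""
    return 2_000.0 * math.sqrt(x)

def marginal(f, x: float, dx: float = 1e-6) -> float:
    """Numerical derivative of f at x."""
    return (f(x + dx) - f(x)) / dx

# The firm invests until marginal profit per dollar hits 1:
#   capture_fraction * 1000 / sqrt(x) = 1  =>  x* = (1000 * capture_fraction)**2
x_star = (1_000.0 * capture_fraction) ** 2

print(round(marginal(social_value, x_star) * capture_fraction, 2))  # ~1: firm stops here
print(round(marginal(social_value, x_star), 1))  # ~100: social value per marginal dollar
```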

Just because you can capture some of the gains from a project doesn’t mean that there aren’t externalities, and doesn’t even exclude the possibility that there are massive externalities. Indeed, there are good reasons to expect that the projects which create the largest social value will be the kinds of projects from which you can make some money. (It may still be the case that, as an altruist, it isn’t worth actually collecting everything you could collect.)

If I create a lot of value, I can generically capture some of it for myself. It is surprising if you can’t extract 1% of someone’s willingness to pay, or if only 1% of the people you affect will pay you back. So in many cases we should expect creating a lot of value to be associated with capturing some of it.

There are a few possible exceptions, when the value you create is accruing exclusively to a group that can’t pay it back. Today, the most common examples seem to be:

  1. The extremely poor
  2. Non-human animals
  3. Future folk

But I think there are a few reasons we should expect to create value for existing people (some of which we might then capture), even when we are targeting these groups.

A big impact is a big impact

In these cases I often think about total impact as being the product of two components:

  1. The value of transfers to the targeted group (e.g. how valuable is exerting a fixed amount of influence on behalf of the future)
  2. The multiplier on our effectiveness (e.g. how much influence can I exert on behalf of the future for $1)

I think the best opportunities involve big multipliers in step 2, as well as big multipliers in step 1. I think this is what we should generically expect, given that there is significant variability in each and we want to maximize the product, and I think this is basically what the current landscape looks like.
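
As a minimal sketch of why this is what we should expect (all numbers invented): when both components vary a lot across opportunities, maximizing their product favors options that score well on both, rather than options that are extreme on one component.

```python
# Toy illustration (all numbers invented): total impact is the product of
# (1) the value of transfers to the targeted group and (2) the multiplier
# on our effectiveness. When both factors vary a lot across opportunities,
# maximizing the product favors options that score well on both.

opportunities = {
    "high value, low multiplier": (100.0, 1.0),
    "low value, high multiplier": (1.0, 100.0),
    "good on both":               (30.0, 30.0),
}

for name, (value, multiplier) in opportunities.items():
    print(f"{name}: impact = {value * multiplier:.0f}")

best = max(opportunities, key=lambda n: opportunities[n][0] * opportunities[n][1])
print("best:", best)  # -> "good on both" (900 vs. 100 for the lopsided options)
```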

If step 2 is doing much of the work, then even when we are targeting a group which cannot plausibly pay, it still seems quite likely that someone will care about what we are doing. In general they would be willing to pay, either for it or to prevent it. (And if someone is willing to pay to prevent it, there are natural problems.) So then we are back in the situation described above.

For example, there is a plausible case that poverty reduction doesn’t let us capture any significant fraction of the value it creates. But even there, the best opportunities in global health are believed to get very large multipliers on their impact, in addition to transferring resources to poorer people. Saving someone’s life for $2000 creates a lot of value in the ordinary sense even when the recipient is extremely poor, and it is only because of further market failures that this isn’t a profitable opportunity.

I think a core thing going on in many cases is that the impact of these activities is pretty linear over the relevant regime. So we rarely observe an opportunity which is profitable for the first billion dollars but then becomes unprofitable, whereas such scenarios might be quite common in domains like tech development where there are more steeply diminishing returns.

Cooperation is useful

If we are engaged in a project that many other people are in favor of and are attempting to encourage (for example by paying for it), we should expect to have some extra leverage from their cooperation. We should expect to meet less active resistance, to find more willing allies, to be able to strike more compromises, etc. Casually, I’d expect this to be a significant factor, and this seems to be consistent with my own experience.

3. Altruists still care about money

Money can be applied, by design, towards a wide variety of ends. And so whether we are interested in making our own lives better or making the world better, we are interested in acquiring more money. All else equal, if a charitable opportunity can make a profit, that’s a feature, not a bug. If the profit is not competitive with the market, then it may not be too surprising to find the opportunity lying around untaken.

If we face a landscape of opportunities, each of which makes some amount of money and creates some amount of social value, then it seems very likely to me that an altruist ought not choose the option all the way at the “maximum profit, minimum social value” end of the spectrum. But almost by the same token it seems very unlikely that we ought to pick the option at the “minimum profit, maximum social value” end. Given that both resources are valuable, it would require some significant extra arguments to focus exclusively on one or the other.
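
Here is a minimal sketch of that point (the options, payoffs, and conversion rate are all invented): if retained profit can later be converted into social value, say by funding further work, then the altruist’s objective is a weighted sum of the two, and on a typical frontier the best option is an interior one rather than either extreme.

```python
# Toy model (options, numbers, and conversion rate all invented): each option
# yields (profit, direct social value). If retained profit can later buy
# social value at `conversion_rate` (e.g. by funding other work), an altruist
# maximizes a weighted sum, and the best choice is typically an interior
# option, not the "minimum profit, maximum social value" extreme.

options = {
    "max profit, min social value": (100.0, 0.0),
    "balanced":                     (60.0, 70.0),
    "min profit, max social value": (0.0, 100.0),
}

conversion_rate = 0.8  # social value per dollar of profit donated later

def total_value(profit: float, social_value: float) -> float:
    return social_value + conversion_rate * profit

for name, (profit, social) in options.items():
    print(f"{name}: total value = {total_value(profit, social):.0f}")

best = max(options, key=lambda name: total_value(*options[name]))
print("best:", best)  # -> "balanced": 118 vs. 100 and 80 for the extremes
```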

The difference between altruistic and egoistic motivations is more like a modest quantitative correction than a qualitative change; it seems very likely that it will change which options are best for an altruist, but I don’t see why it would make the difference between caring a lot about profit and caring not at all. On top of that, there are many other differences of a similar magnitude that bear on these tradeoffs (e.g. a perfect altruist’s risk neutrality, weak time preferences, higher willingness to endure suffering or long hours, etc.), many of which seem to point strongly in the other direction, and I’m not even sure on balance whether altruists should care more or less about earning money.

4. Attention is a resource

A common argument against for-profit altruistic endeavors seems to be skepticism about the prospect of finding a $20 bill lying on the ground. But attention and time are resources, and thinking long and hard about a thing before identifying a promising opportunity is not the same as finding it lying on the ground. In reality the $20 bill argument applies to charities as well; the difference is quantitative rather than qualitative. It’s not the case that no one is looking to do good in the world, and the way you find better opportunities is by putting in some additional effort, or being cleverer or better informed, or what have you.

If you are willing to put in the extra effort, or are cleverer or better informed, you should expect to be able to identify marginal entrepreneurial opportunities that will make money. So you probably shouldn’t be so surprised if you can find opportunities that make money (though less of it) and also have social returns.

The situation is somewhat different for donors than for opportunity-hunters. Once an opportunity has been identified, the necessary attention has been invested, and all that remains is to put in capital, the above analysis doesn’t apply as well. The entrepreneur shouldn’t be dissuaded from looking for a $20 bill on sale for $1, but when they try to sell it to you for $1 they’d better have a good explanation for why the price hasn’t been bid up to $19.99.

5. Replacing people has some value

Replaceability seems to be a serious consideration in altruistic activities. When working on a project that would be done anyway, it is tempting to reason “this will be done anyway in one year, so I should be thinking about the impact of doing it one year sooner.” This may be an improvement over the naive view, but it’s still not the complete picture. If you complete a high-impact technology project which other technologists would have tackled in a year, the counterfactual doesn’t just involve that project finishing a year sooner; it also involves those technologists going off to do whatever else they would have done.
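
As a toy piece of accounting (all numbers invented), the naive view counts only the speedup, while the fuller counterfactual also credits whatever the displaced people produce elsewhere:

```python
# Toy accounting (all numbers invented). Completing a project that others
# would otherwise have done in a year has two effects: the project exists
# a year sooner, AND the would-be developers do something else instead.

project_value_per_year = 10.0    # value of the project existing, per year
years_sooner = 1.0               # how much we accelerate it
displaced_output = 4.0           # value the displaced people create elsewhere

naive_impact = project_value_per_year * years_sooner   # 10.0
full_impact = naive_impact + displaced_output          # 14.0
print(naive_impact, full_impact)  # the naive view understates the impact
```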

In general replacing altruists is better than replacing random people, because we are more confident that the projects they go on to do will be useful. But replacing, e.g., entrepreneurs who are selected for attacking the most effective social project we can think of (de re, but still) is not such a bad deal:

  1. There is a good chance that such entrepreneurs are either implicitly or explicitly pursuing social value. It’s not generally an accident when people do things to make the world better, and I think there is a real hazard of overestimating the extent of one’s own uniqueness.
  2. Those entrepreneurs will tend to be unusually capable, have good judgment, etc. As long as you are going to replace someone, it’s extremely important to replace someone good. (This seems to be one of the strongest arguments for focusing on one’s comparative advantage.) This is related to the comparison between effectiveness and altruism: it would be best to replace very effective altruists, but if you have to take your pick I would probably prefer to replace someone very effective rather than someone very altruistic. (Note relevant assumptions though; if you thought most people were actively making the world worse, you would necessarily have a different view.)
  3. If you are basing your altruistic decisions on relatively broad considerations, then you might be more optimistic about the displaced entrepreneurs because they are poised to work on important problems, and have a much higher likelihood of working on closely related problems (or advancing further in the same direction, etc.)