Rational Altruist

Adventures of a would-be do-gooder.

Certificates of Impact

0. Introduction

In this post I describe a simple institution for altruistic funding and decision-making, characterized by the creation and exchange of “certificates of impact.”

Typically an effectiveness-minded altruist would try to do as much good as possible. Instead, users of the certificates system try to collect certificates for as much good as possible.

Whenever anyone does anything, they can declare themselves to own an associated certificate of impact. Users of the system treat owning a certificate for X as equivalent to doing X themselves. In the case where certificates never change hands, this reduces precisely to the status quo.

The primary difference is that certificates can also be bought, sold, or bartered; an altruist can acquire certificates through any combination of doing good themselves, and purchasing certificates from others.
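The mechanics described above — anyone who does X may declare a certificate for X, holders treat owning the certificate as equivalent to doing X, and certificates can change hands — can be sketched as a toy ledger. Everything here (the `Registry` class and its methods) is a hypothetical illustration of the bookkeeping, not a proposed implementation.

```python
# Toy sketch of a certificate-of-impact ledger. All names here are
# hypothetical illustrations of the mechanics described above.

class Registry:
    def __init__(self):
        self._owners = {}  # certificate description -> current owner

    def issue(self, doer, description):
        """Whoever does X may declare themselves owner of a certificate for X."""
        if description in self._owners:
            raise ValueError("certificate already issued")
        self._owners[description] = doer

    def transfer(self, description, seller, buyer):
        """Certificates can change hands, by sale or barter."""
        if self._owners.get(description) != seller:
            raise ValueError("seller does not own this certificate")
        self._owners[description] = buyer

    def holdings(self, person):
        """Users credit themselves with the good on certificates they hold."""
        return [d for d, o in self._owners.items() if o == person]


reg = Registry()
reg.issue("alice", "distributed 100 bednets")
reg.transfer("distributed 100 bednets", "alice", "bob")
# Bob now gets credit for the bednets, exactly as if he had done it himself;
# if no transfer ever happens, the system reduces to the status quo.
assert reg.holdings("bob") == ["distributed 100 bednets"]
assert reg.holdings("alice") == []
```

Note that in this sketch a certificate is issued once and has a single owner at a time, which is what makes "collecting certificates for as much good as possible" a well-defined objective.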


The golden rule

Most of my moral intuitions are well-encapsulated by the maxim: “do unto others as you would have them do unto you.” This is a principle which has extremely broad intuitive appeal, and so it seems worth exploring how I end up with a relatively unusual ethical perspective.


Three impacts of machine intelligence

I think that the development of human level AI in my lifetime is quite plausible; I would give it more than a 1-in-5 chance. In this post I want to briefly discuss what I see as the most important impacts of AI. I think these impacts are the heavy hitters by a solid margin; each of them seems like a big deal, and I think there is a big gap to #4.

Machine intelligence and capital accumulation

The distribution of wealth in the world 1000 years ago appears to have had a relatively small effect—or more precisely an unpredictable effect, whose expected value was small ex ante—on the world of today. I think there is a good chance that AI will fundamentally change this dynamic, and that the distribution of resources shortly after the arrival of human-level AI may have very long-lasting consequences.


We can probably influence the far future

I believe that the (very) far future is likely to be very important, and that my decisions today probably have (very) long-lasting effects. This suggests that I ought to be mindful of the effects of my actions on the very far future, and indeed I am most interested in activities that I think will have positive long-term effects. In contrast, many or most thoughtful people consider it unproductive to think more than a few decades ahead, to say nothing of millions of years.

When I express this view, I often encounter intense skepticism about the possibility of having any predictable influence on the far future. Even if our decisions have some effect, can we reason about it in any useful way? I believe that the answer is yes, and that we can in fact have a fairly detailed and specific effect on the very far future. In this post I want to make a very simple argument for this conclusion: if we think there is a good chance that we will ever have an opportunity to have a predictable long-term influence, as I suspect we should, then interventions which improve our own capacity (such as investment) will have an indirect, significant, and predictable long-term effect.


Altruism and profit

When I suggest that supporting technological development may be an efficient way to improve the world, I often encounter the reaction:

Markets already incentivize technological development; why would we expect altruists to have much impact working on it?

When I talk about more extreme cases, like subsidizing corporate R&D or tech startups, I seem to get this reaction even more strongly and with striking regularity: “But that’s a for-profit enterprise, right? If it were worthwhile to spend any more money on R&D, then they’d do it.” Recently I’ve encountered this argument again, in the context of working to improve governance broadly. I sympathize with the sentiment, but the actual arguments don’t seem strong enough to carry the conclusion. Ultimately this is an empirical question about which I’m uncertain, but at this point it seems very unwise to take profitable opportunities off the table.


Against moral advocacy

Sometimes people talk about changing long-term social values as an altruistic intervention; for example, trying to make people care more about animals, or God, or other people, or ancestors, etc., in the hopes that these changes might propagate forward (say because altruistic people work to create a more altruistic world) and eventually have a direct effect on how society uses available resources. I think this is unlikely to be a reasonable goal, not necessarily because it is not possible (though it does seem far-fetched), but because even if it were possible it would not be particularly desirable. I wanted to spend a post outlining my reasoning.

Disclaimer: this is a bit of an odd post. The impatient reader is recommended to skip it.

The best reason to give later

I’ve written about saving vs. giving before, focusing on the issue of interest rates vs. returns on good deeds. But for now, I think there is a much more compelling reason to save: there is a very good chance that the best giving opportunities we can identify in the near future will be better than the best giving opportunities we can identify this year.


My outlook

This will be a relatively short post, sketching my overall view of valuable altruistic endeavors.

Contributing to tech progress

I think that contributing to technological progress may be one of the most efficient ways to make the world richer. I don’t yet have much quantitative justification, so this impression could easily be wrong. In any case it seems worth asking: “How effectively can we increase the pace of tech progress?”