Rational Altruist

Adventures of a would-be do-gooder.

Tag: methodology

Four flavors of time-discounting I endorse, and one I do not

(I apologize in advance for a bit of a long post. There is a more concise summary at the end.)

We often choose between doing good for the people of today and doing good for the people of the future, and are thus faced with a question: how do we trade off good now vs. good later? A standard answer to this question is to invoke exponential time discounting with one discount rate or another.
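
To make that standard answer concrete, here is a minimal sketch in Python of what exponential discounting with a fixed annual rate implies; the function name and the 3% rate are my own illustrative assumptions, not something from the post.

```python
# Minimal sketch of exponential time discounting (illustrative; the 3% rate is an assumption).
# With an annual discount rate r, a benefit delivered t years from now is weighted by
# 1 / (1 + r)**t relative to the same benefit delivered today.

def discounted_weight(years_from_now: float, annual_rate: float) -> float:
    """Weight assigned to a fixed benefit delivered `years_from_now` years in the future."""
    return 1.0 / (1.0 + annual_rate) ** years_from_now

if __name__ == "__main__":
    for years in (1, 10, 100, 1000):
        weight = discounted_weight(years, annual_rate=0.03)
        print(f"benefit in {years:>4} years at a 3% rate: weight ~ {weight:.3g}")
    # Even a modest 3% rate weights a benefit 1000 years out at roughly 1e-13 of a
    # benefit today, which is the kind of verdict the rest of the post pushes back on.
```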

When I consult my intuition, I find that at least over the next few thousand years, I don’t much care about whether someone gets happier today or happier tomorrow—it’s all the same to me. (See also here for a much more thorough and correct discussion of this issue, and see here for a much more poetic description.)

Nevertheless, there are a few senses in which I do discount the future, and I think it’s important to bring those up to clarify what I do (and don’t) mean by saying that I have weak time preferences.

Replaceability

Suppose I am trying to evaluate the consequences of taking job X. Here is a sequence of (hopefully decreasingly) naive ways to think about the impact of my decision.

(I don’t know if the later analyses are actually reasonable, but I’d really like to see more intellectually serious discussion of this issue by people who understand the world, and particularly economics, better than I do. I wouldn’t be surprised if more sophisticated versions of this analysis are already well understood amongst economists, and simply haven’t been noticed by altruists trying to understand this issue. In that case, hopefully someone can point that out to me. Some of these analyses, and more sophisticated elaborations on them, appear in Ben Todd’s master’s thesis.)

Guesswork, feedback, and impact

The way we get most complicated things done might be described as trial and error: we have some model of how a plan will lead to our desired goal, we try to implement the plan and discover that some aspect of our model was wrong, and then we refine the model and try again.

For example, if you write a large program it will have bugs in it (unless you have written very many programs before). If you try to run a business, your initial plans will probably fail (though a simple business plan might stick around as many details of the business are tweaked). If you try to build a machine, it won’t work unless you have quite a bit of relevant experience (e.g. building a similar machine before).

Unless a plan’s success rests on very simple arguments (for example, comparisons with similar plans that have worked before), it is likely to get thwarted by some unanticipated detail. (If there are implicitly N things that could go wrong with a plan, most of which you may not have thought of, then each one needs to go wrong with probability of at most around 1/N for the whole thing to hold together. That requires being pretty confident about each item, in complicated domains where many things might go wrong.) However, if we can try a plan and implicitly ask Nature “What were we wrong about? How will we fail now?” then the situation is changed. We can determine where our model of the world is wrong, patch that particular error, and repeat. Even if our model was wrong in many places, and even if we can never hope to build a complete model, at least we can eventually get a model which is right in the relevant ways.
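
To make the 1/N point more concrete, here is a small illustrative calculation in Python (my own, not from the post); the choice of twenty failure points and the per-point probabilities are arbitrary assumptions.

```python
# Illustrative calculation of the "N things could go wrong" argument (assumed numbers).
# If a plan has n independent failure points, each going wrong with probability p,
# the plan only succeeds when none of them does: success probability = (1 - p)**n.

def plan_success_probability(n_failure_points: int, p_each_goes_wrong: float) -> float:
    """Probability that none of n independent failure points goes wrong."""
    return (1.0 - p_each_goes_wrong) ** n_failure_points

if __name__ == "__main__":
    n = 20  # hypothetical number of ways the plan could fail
    for p in (0.5, 0.2, 1.0 / n, 0.01):
        success = plan_success_probability(n, p)
        print(f"n={n}, per-point failure probability {p:.3f}: success probability ~ {success:.3f}")
    # With p = 1/n the success probability is roughly (1 - 1/n)**n, i.e. about 1/e ~ 0.37,
    # so each individual assumption already has to be quite reliable for the plan to hold up.
```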

Unfortunately, if we want to have a positive impact on the world, we almost never get to test all of the relevant aspects of our world model. I think it’s useful to split up plans into two parts: