Rational Altruist

Adventures of a would-be do-gooder.

Category: Ethics

Against moral advocacy

Sometimes people talk about changing long-term social values as an altruistic intervention; for example, trying to make people care more about animals, or God, or other people, or ancestors, etc., in the hopes that these changes might propagate forward (say because altruistic people work to create a more altruistic world) and eventually have a direct effect on how society uses available resources. I think this is unlikely to be a reasonable goal, not necessarily because it is not possible (though it does seem far-fetched), but because even if it were possible it would not be particularly desirable. I wanted to spend a post outlining my reasoning.

Disclaimer: this is a bit of an odd post. The impatient reader is recommended to skip it.


Consequentialist-Recommendation Consequentialism

An act consequentialist evaluates possible acts by the goodness of their consequences. In some situations this leads to bad consequences. For example, I may decline to trust a consequentialist because I am (justifiably) concerned that they will betray my trust whenever it is in their interest. This outcome is widely considered unsatisfactory, and is often taken to imply that a person should not willingly become an act consequentialist.

Four flavors of time-discounting I endorse, and one I do not

(I apologize in advance for a bit of a long post. There is a more concise summary at the end.)

We often choose between doing good for the people of today and doing good for the people of the future, and are thus faced with a question: how do we trade off good now vs. good later? A standard answer to this question is to invoke exponential time discounting with one discount rate or another.

When I consult my intuition, I find that, at least over the next few thousand years, I don’t much care whether someone gets happier today or happier tomorrow—it’s all the same to me. (See also here for a much more thorough and careful discussion of this issue, and see here for a much more poetic description.)

Nevertheless, there are a few senses in which I do discount the future, and I think it’s important to bring those up to clarify what I do (and don’t) mean by saying that I have weak time preferences.

Pressing ethical questions

In general I spend surprisingly little time thinking about ethics. My thoughts tend to go like this: even if I don’t know exactly what I want now or what I will want in the future, there are some convergent instrumental goals worth pursuing anyway, so I can mostly postpone ethical deliberation. (Here I am going to set aside my self-interest and focus on my altruistic interest.)

In particular, for a broad range of values, the first thing to do is to establish a stable, technologically sophisticated civilization at a large scale, which can then direct its action on the basis of careful argument and reflection. When I need to make a tradeoff between clarifying my ethics and increasing the probability of such a civilization existing, I’m not inclined to reflect on ethics. This might be an error, but it’s my current well-intentioned best guess.

However, there are a few decisions I face today that do require that I have some idea what I value. So it seems worth putting in a bit of time to get a clearer picture. Here are some ethical questions that seem to bear on immediate practical issues (albeit, often in a roundabout way):