Rational Altruist

Adventures of a would-be do-gooder.

Category: Looking ahead

Machine intelligence and capital accumulation

The distribution of wealth in the world 1000 years ago appears to have had a relatively small effect—or more precisely an unpredictable effect, whose expected value was small ex ante—on the world of today. I think there is a good chance that AI will fundamentally change this dynamic, and that the distribution of resources shortly after the arrival of human-level AI may have very long-lasting consequences.



Against moral advocacy

Sometimes people talk about changing long-term social values as an altruistic intervention; for example, trying to make people care more about animals, or God, or other people, or ancestors, etc., in the hopes that these changes might propagate forward (say because altruistic people work to create a more altruistic world) and eventually have a direct effect on how society uses available resources. I think this is unlikely to be a reasonable goal, not necessarily because it is not possible (though it does seem far-fetched), but because even if it were possible it would not be particularly desirable. I wanted to spend a post outlining my reasoning.

Disclaimer: this is a bit of an odd post. Impatient readers are advised to skip it.

Astronomical waste

Warning: this is an unusually unusual post.

Previously I argued that human activity is only useful on net to the extent that non-human activity is harmful. This raises the question: how harmful are events in nature? I think that the big things to have in mind are disasters, resource use, and decay. But another possible problem is that natural resources throughout the universe may be slowly running out through no fault of humanity’s. Galaxies are receding, time is running out, stars are burning down. Intuitively this doesn’t seem like a big deal. The universe is very old, and our entire history is less than an instant from its perspective, while our behavior on earth is having big effects on short timescales. But it seems worth checking anyway.

Why might the future be good?

When talking about the future, I often encounter two (quite different) stories describing why the future might be good:

  1. Decisions will be made by people whose lives are morally valuable and who want the best for themselves. They will bargain amongst each other and create a world that is good to live in. Because my values are roughly aligned with their aggregate preferences, I expect them to create a rich and valuable world (by my lights as well as theirs).
  2. Some people in the future will have altruistic values broadly similar to my own, and will use their influence to create a rich and valuable world (by my lights as well as theirs).

Which of these pictures we take more seriously has implications for what we should do today. I often have object level disagreements which seem to boil down to disagreement about which of these pictures is more important, but rarely do I see serious discussion of that question. (When there is discussion, it seems to turn into a contest of political ideologies rather than facts.)

Improving decision-making

One way to influence the future is to improve human decision-making—to make people smarter, encourage metacognition, improve institutional decision-making, etc. Any of these changes will probably have an impact on how future folk manage the problems they face, and on the sorts of infrastructure and capabilities they in turn build for the farther future. Even if we don’t know what those problems will be, or what exactly we would want smarter or better-educated people to do, it seems like a safe bet that there will be opportunities for them to apply their increased capabilities.

But at the same time, I think most of the problems humans face are caused by humans. So if you make humans better at doing whatever they do, you speed up the creation of problems as well as their resolution.

Nevertheless, I tend to suspect that increasing human capabilities is a positive change on balance. I’m not sure about this, or about the magnitude of the impact; since it looks like capability improvements might be leading contenders for altruistic interventions, it seems like an important question. Depending on the answer, I may decide to work directly on the biggest problems I can see, or instead to help prepare future folk to do the same.

Taxonomy of change

I suspect that the events of each year are morally neutral, when taken altogether. This is not because I know anything about the future. It’s because it seems nearly tautological that the world of tomorrow is, in expectation, as valuable as the world of today: I wouldn’t pay any money to transform the world of today into the world of tomorrow—I’d rather just wait a year. Unfortunately, in light of concerns about replaceability, many of our actions may (essentially) have the effect of accelerating progress in one domain or another. If that’s the case, it behooves us to understand which changes in the world are positive and which are negative.

The observation that the total of all changes is neutral may help us pin down the impact of some kinds of change. For example, suppose A, B, and C are all changing. If we have no good arguments about whether A and B are changing in a good way, but we can tell that C is changing in a negative way, then we can conclude that A and B together are changing in a positive way (and the default presumption should be that each of them is positive).
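The arithmetic behind this elimination argument can be made explicit. As a minimal formalization (assuming the values of simultaneous changes simply add, and writing $\Delta A$, $\Delta B$, $\Delta C$ for the signed value of each change), the neutrality claim says the changes sum to zero, so:

```latex
\Delta A + \Delta B + \Delta C = 0
\quad\text{and}\quad
\Delta C < 0
\;\Longrightarrow\;
\Delta A + \Delta B = -\Delta C > 0 .
```

That is, a confidently negative verdict on one component, together with aggregate neutrality, forces the remaining components to be positive in sum, even when we can say nothing about them individually.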

I am particularly curious about whether economic and technological progress are good, and how good they are. But in order to attack that question, I first have to ask: how good are the other events taking place over the same time? Should we be happy that faster technological change leaves less time for other developments, or should we be concerned? Here I’ll give a more elaborate taxonomy than I have in the past, and in future posts I’ll flesh out some of these categories further.

This is not an exhaustive taxonomy, but I’ve tried to include the categories that seem most significant to me.

How useful is “progress”?

Most of the things that are happening in the world seem valuable to me: we understand science and engineering better, we acquire more expertise, the productive workforce grows, we invest in infrastructure and capital faster than it degrades, and so on. If I make the world of today richer or more technologically sophisticated, it seems like those gains will persist and compound for quite a while. On the other hand, people who work at cross-purposes to progress seem to get little traction. So naturally, when I consider trying to make the world better, I’m inclined to try to accelerate progress. Unfortunately I think that our intuitions overstate the value of speeding up progress (of all kinds), and that in the aggregate I don’t much care whether human progress goes faster or slower.

The basic issue is that accelerating progress doesn’t change where we are going; it only changes how quickly we get there. So unless you are in a rush, speeding things up doesn’t make the world much better. Of course, there are some cases where speeding up society does change things—for example, when society is racing against destructive natural processes—but I suspect those effects are small.
