Machine intelligence and capital accumulation
by paulfchristiano
The distribution of wealth in the world 1000 years ago appears to have had a relatively small effect—or more precisely an unpredictable effect, whose expected value was small ex ante—on the world of today. I think there is a good chance that AI will fundamentally change this dynamic, and that the distribution of resources shortly after the arrival of human-level AI may have very long-lasting consequences.
Disclaimer: I want to stress that throughout this post I’m not making any normative claims about what ought to be or what would be nice or what kind of world we should try to have; I’m just trying to understand what is likely to happen.
A naïve model of capital accumulation
Here is a naïve (and empirically untenable) model of capital accumulation.
For the most part, the resources available in the world at time t+1 are produced using the resources available at time t. By default, whoever controls the resources at time t is able to control the new resources which are produced. The notions of “who” and “controls” are a bit dubious, so I’d actually like to cut them out of the picture. Instead, I want to think of people (and organizations, and agents of all sorts) as soups of potentially conflicting values. When I talk about “who” controls what resources, what I really want to think about is what values control what resources. And when I say that some values “control” some resources, all I mean is that those resources are being applied in the service of those values. “Values” here is broad enough to include not only things like “aggregative utilitarianism” but also things like “Barack Obama’s self-interest.” The kinds of things that are idealistic enough that we usually think of them as “values” may get only a relatively small part of the pie.
Some values mostly care about the future, and so will recommend investing some of the resources they currently control, foregoing any other use of those resources at time t in order to control more resources at time t+1. If all resources were used in this way, the world would be growing but the distribution of resources would be perfectly static: whichever values were most influential at one time would remain most influential (in expectation) across all future times.
Some values won’t invest all of their resources in this way; the share of resources controlled by non-investors will gradually fall, until the great majority of resources are held by extremely patient values. At this point the distribution of resources becomes static, and may be preserved for a long time (perhaps until some participants cease to be patient).
On this model, a windfall of 1% of the world’s resources today may lead to owning 1% of the world’s resources for a very long time. But in such a model, we also never expect to encounter such a windfall, except as the product of investment.
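To make the naïve model concrete, here is a minimal simulation sketch (all numbers are invented for illustration, not taken from the argument above): agents keep whatever their resources produce, patient values reinvest everything, impatient values consume half of their output, and the patient values’ share of the world drifts toward 1.

```python
# Minimal sketch of the naive model (illustrative numbers only): resources at
# time t+1 are produced from resources at time t, and whoever controls resources
# keeps what they produce. Patient values reinvest everything; impatient values
# consume half of their output.

return_on_resources = 0.05  # new resources produced per unit of resources, per step

agents = {
    "patient values": {"resources": 10.0, "reinvest": 1.0},
    "impatient values": {"resources": 90.0, "reinvest": 0.5},
}

for step in range(1000):
    for a in agents.values():
        output = a["resources"] * return_on_resources
        a["resources"] += a["reinvest"] * output  # unreinvested output is consumed

total = sum(a["resources"] for a in agents.values())
for name, a in agents.items():
    print(name, round(a["resources"] / total, 3))
# Patient values start with 10% of the world and end with essentially all of it;
# once non-investors are negligible, the distribution is (in expectation) frozen.
```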
Why is this model so wrong?
We don’t seem to see long-term interests dominating the global economy, with savings rates approaching 1 and risk profiles tuned to maximize investors’ long-run share of it. So what’s up?
In fact there are many gaps between the simple model above and reality. To me, most of them seem to flow from a key observation: the most important resources in the world are people, and no matter how much of the world you control at time t you can’t really control the people at time t+1. For example:
1. If 1% of the people in the current generation share my values, this does not mean that 1% of the people in the next generation will necessarily share my values. Each generation has an influence over the values of its successors, but a highly imperfect and unpredictable influence; human values are also profoundly influenced by human nature and by the unpredictable consequences of individual lives. (Actually, the situation is much more severe, since the values of individual humans likewise shift unpredictably over their lives.) Over time, society seems to approach an equilibrium nearly independent of any single generation’s decisions.
2. If I hold 1% of the capital at time t, I only get to capture about 0.3% of the gross world product as rents—most of the world product is paid as wages instead. So unless I can somehow capture a similar share of all wages, my influence on the world will decay. (See the toy sketch after this list.)
3. Even setting aside 2, if I were making 1% of gross world product in rents, those rents would probably be aggressively taxed or otherwise confiscated and redistributed more equitably. So owning 1% of the stuff at time t does not entitle me to hold 1% of the stuff at time t+1.
4. Even setting aside 2 and 3, if I hold 1% of the resources at time t, I have some probability of dying before time t+1. In light of this risk, I need to identify managers who can make decisions to further my values. It’s hard to find managers who precisely share my values, and so with each generation those resources will be controlled by slightly different values.
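As a toy illustration of point 2 (the numbers here are my own assumptions, not the post’s): suppose capital earns 30% of gross world product and labor earns 70%, I hold 1% of the capital stock and reinvest every rent I receive, and everything else, including all wages, accrues to people I can’t own. Then my share of total resources drifts down toward 0.3% rather than staying at 1%.

```python
# Toy sketch of point 2 (all parameters are illustrative assumptions): I own 1%
# of the capital stock and reinvest all of my rents, but wages (70% of output)
# go to workers I can't own, so I capture only 0.3% of each year's new resources.

g = 0.03              # annual growth of the world's total resources
capital_share = 0.30  # fraction of output paid to capital
my_capital = 0.01     # assume I manage to keep owning 1% of the capital stock

my_share = 0.01       # my initial share of *all* of the world's resources
for year in range(300):
    my_income_share = my_capital * capital_share      # = 0.3% of output
    # new resources this year amount to a fraction g of existing resources;
    # I capture my_income_share of them, everyone else captures the rest
    my_share = (my_share + my_income_share * g) / (1 + g)

print(round(my_share, 4))  # converges toward 0.003: a 0.3% share, not 1%
```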
The fact that each generation wields so little control over its successors seems to be a quirk of our biological situation: human biology is one of the most powerful technologies on Earth, but it is a relic passed down to us by evolution about which we have only the faintest understanding (and over which we have only the faintest influence). I doubt this will remain the case for long; eventually, the most useful technologies around will be technologies that we developed for ourselves. In most cases, I expect we will have a much deeper understanding of, and a much greater ability to control, the technologies we develop for ourselves.
Machine intelligence
I believe that the development of machine intelligence may move the world much closer to this naïve model.
Consider a world where the availability of cheap machine intelligence has driven human wages below subsistence, an outcome which seems not only inevitable but desirable if properly managed. In this world, humans rapidly cease to be a meaningful resource; they are relevant only as actors who make decisions, not as workers who supply their labor (not even as managers who supply economically useful decisions).
In such a world, value is concentrated in non-labor resources: machines, land, natural resources, ideas, and so on. Unlike people, these resources are likely to have the characteristic that they can be owned and controlled by the person who produced them. Returning to the list of deviations from the naïve model given above, we see that the situation has reversed:
1. The values of machine intelligences can (probably, eventually) be directly determined by their owners or predecessors. If at time t 1% of the world’s machine intelligences share my values and own 1% of the world’s resources, then roughly 1% of new machine intelligences will also share my values, and at time t+1 it’s likely to again be the case that 1% of the world’s machine intelligences share my values and own 1% of the world’s resources.
2. A capital holder with 1% of the world’s resources owns about 1% of the world’s machine intelligences, and so also captures about 1% of the world’s labor income. (See the sketch after this list.)
3. In a world where most “individuals” are machine intelligences, who can argue as persuasively as humans and appear as sympathetic as humans, there is a good chance that (at least in some states) machine intelligences will be able to secure significant political representation. Indeed, in this scenario, denying machine intelligences any political representation would amount to a surprisingly oppressive regime. If machine intelligences secure equal representation, and if 1% of machine intelligences share my values, then there is no particular reason to expect redistribution or other political maneuvering to reduce the prevalence of my values.
4. In a world where machine intelligences are able to perfectly replace a human as a manager, the challenge of finding a successor with similar values may be much reduced: it may simply be possible to design a machine intelligence that exactly shares its predecessor’s values and can serve as a manager. Once technology is sufficiently stable, the same manager (or copies thereof) may persist indefinitely without significant disadvantage.
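Repeating the earlier toy calculation (same invented numbers) with point 2 reversed: if “labor” is machine intelligence owned in proportion to capital, a holder of 1% of the world’s resources captures 1% of all income, and the naïve model’s arithmetic is restored.

```python
# Same toy model as before, but now labor is machine intelligence owned in
# proportion to capital, so a 1% resource holder also captures 1% of labor income.

g = 0.03
my_resources = 0.01   # my share of the world's resources (capital plus machine labor)

my_share = 0.01
for year in range(300):
    my_income_share = my_resources * 1.0   # capital income plus machine-labor income
    my_share = (my_share + my_income_share * g) / (1 + g)

print(round(my_share, 4))  # stays at 0.01: 1% of the world remains 1% of the world
```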
So at least on a very simple analysis, I think there is a good chance that a world with human-level machine intelligence would be described by the naïve model.
Another possible objection is that a capital owner who produces some resources exerts imperfect control over the outputs–apart from the complications introduced by humans, there are also random and hard-to-control events that prevent us from capturing all of the value we create. But on closer inspection this does not seem to be such a problem:
- If these “random losses” are real losses, which are controlled by no one, then this can simply be factored into the growth rate. If every year the world grows 2% but 1% of all stuff is randomly destroyed, then the real growth rate is 1%. This doesn’t really change the conclusions.
- If these “random losses” are lost to me but recouped by someone else, then the question is “who is recouping them?” Presumably we have in mind something like “a random person is benefiting.” But that just means that the returns to being a random person, on the lookout for serendipitous windfalls at the expense of other capital owners, have been elevated. And in this world, a “random person” is just another kind of capital you can own. A savvy capital owner with x% of the world’s resources will also own x% of the world’s random people. The result is the same as the last case: someone who starts with x% of the resources can maintain x% of the resources as the world grows.
Implications
If we believe this argument, then it suggests that the arrival of machine intelligence may lead to a substantial crystallization of influence. By its nature, this would be an event with long-lasting consequences. Incidentally, it would also provide the kind of opportunity for influence I was discussing in my last post.
I find this plausible though very far from certain, and I think it is an issue that deserves more attention. Perhaps most troubling is the possibility that in addition to prompting such crystallization, the transition to machine intelligences may also be an opportunity for influence to shift considerably—perhaps in large part to machines with alien values. In Nick Bostrom’s taxonomy, this suggests that we might be concerned about the world ending in a “whimper” rather than a “bang”: even without a particular catastrophic or disruptive event, we may nevertheless irreversibly and severely limit the potential of our future.
It is tempting to be cosmopolitan about the prospect of machine intelligences owning a significant share of the future, asserting their fundamental right to autonomy and self-determination. But our cosmopolitan attitude is itself an artifact of our preferences, and I think it is unwise to expect that it (or anything else we value) will be automatically shared by machine intelligences any more than it is automatically shared by bacteria, self-driving cars, or corporations.
Well, yes, this is all relatively obvious. Now write the post about how we actually ensure a good outcome.
I don’t think this claim is widely regarded as obvious in the broader world, and I don’t think it’s too likely to be true. Admittedly the arguments I give here may be obvious, and I’m glad you already agreed with the claim 🙂
One way to try to have a positive impact is to be on the lookout for leverage points and to build relevant capacity. Another is to improve our understanding of the situation and to foster higher-quality discussions amongst practitioners. Another is to look for particular scenarios where doing things in advance may be important, for example speculative scenarios in which the development of AI is rapid or surprising. I think all of these things are reasonable, along with others, and I do hope to write a lot more about them over time. But for now I also think that working out basic things in more detail is worthwhile.
I’m not sure that points 2 and 3 currently weigh as much against the naive model as you suggest.
For 2: most wages are consumed rather than invested; furthermore, empirically, large amounts of capital can achieve somewhat greater returns than small amounts. So if you own 1% of the world’s capital and you’re reinvesting nearly all of the returns, your wealth may well grow faster than laborers’ wealth does.
For 3: empirically, large fortunes do not seem to be very aggressively taxed and redistributed. Currently it seems fairly easy to escape high taxes via various international-law hacks. Furthermore, most of what is taxed and redistributed goes to consumption rather than investment (e.g. welfare programs and defense), so again this may not be a proportional drag on your share.
One factor pushing us away from the naive model that I don’t think you mentioned explicitly is exogenous shocks: for instance, the two World Wars destroyed a lot of capital (and a lot of capitalists). You went in a sort of similar direction with your point about random losses, but the difference is that the losses due to exogenous shocks are infrequent, large when they do occur, and very unequally distributed across the population, such that you can’t amortize them out to a small yearly loss for everyone.
For 2 I agree (I initially included a discussion of this issue, but thought it bogged down the post too much)—market interest rates are higher than growth rates for now, but probably won’t remain so. At any rate, it’s certainly a divergence from the naive model, which would predict that patient investors would rapidly outcompete anyone who was spending anything on consumption.
For 3 I agree only partially. You are right that taxes can be mostly avoided by tax-efficient management (at the moment even within the US, though current political trends suggest this may not be the case for too long). But a large, long-lived investment fund still seems quite likely to be dismantled for the good of the people alive today, one way or another. Gwern and Robin have written some about historical analogies I think. Moreover, I think tax-efficient management does exert fairly substantial drag, though it can be pretty easily masked by the high returns to capital at the moment. This means that a majority who consumed (say) 10% of their income could potentially compete with a concentrated interest that consumed only 1% of their income.
I’m not convinced about the world wars and similar exogenous shocks. If you wiped out 50% of the capitalists at random, it wouldn’t really matter: the expected share of the world owned by each capitalist is not affected. So I don’t know if infrequent / stochastic / large losses really make the difference. I agree that these shocks tend to exacerbate issues 1-4, e.g. they might be downward redistributive (though I’ve often heard it suggested that they redistribute upwards) or might kill a lot of people and exacerbate difficulties with finding like-minded replacements. But aside from those issues, I don’t see why the unequal distribution of losses would modify the conclusions.
Conflict has another interesting implication, which is that e.g. even if 10% of people shared values X, you might end up with a predictable coordinated effort by everyone else to stamp out value X, which would totally mess with the model (and wouldn’t be changed by the arrival of machine intelligence).
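A quick numerical check of the “wipe out 50% of the capitalists at random” claim above (a toy Monte Carlo with made-up numbers): each capitalist’s expected share of the surviving capital is the same as their share before the shock.

```python
# Toy Monte Carlo (illustrative only): 100 capitalists with equal holdings, and a
# shock that destroys a random half of them along with their capital. A given
# capitalist's expected share of the surviving capital is unchanged at 1/100.
import random

trials = 100_000
n = 100
expected_share = 0.0
for _ in range(trials):
    survivors = set(random.sample(range(n), n // 2))
    # if capitalist 0 survives, they now hold 1 of the 50 surviving equal stakes
    share_after = (1 / (n // 2)) if 0 in survivors else 0.0
    expected_share += share_after / trials

print(round(expected_share, 3))  # ~0.01, the same as before the shock
```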
> market interest rates are higher than growth rates for now, but probably won’t remain so.
Why do you think market interest rates will fall below growth rates? Haven’t they been higher for most of history?
> Gwern and Robin have written some about historical analogies I think.
Do you have references? Sorry, I don’t know what to Google for.
> [The world wars] might be downward redistributive (though I’ve often heard it suggested that they redistribute upwards)
Piketty claims that the world wars were downwards redistributive, in particular because they increased growth during a “catch-up” period, yielding a (temporary) period where the growth rate was greater than the return on capital. I’m not sure whether this is right or whether other large shocks will have the same effect, though. For instance, it seems likely that in a world dominated by machine intelligences, thinking about a capital-labor dichotomy is not particularly helpful.
If you’re thinking more about the dynamics of wealth over time, though, I do recommend reading Piketty or at least a summary (since it’s something of a brick).
Under the status quo, with interest rates much higher than growth rates, investors would continue to get richer until they drove down interest rates (without much stronger redistributive policy, which is perhaps what Piketty recommends?). My understanding is that interest rates have been falling considerably faster than growth rates over the last century, say, and though returns to capital are harder to measure it sure looks like the same is true (I assume Piketty analyzes this issue in much more depth than I have; is the conclusion not right?). I would never necessarily expect them to be exactly equal, but would just expect a diminishing gap. After taking into account labor’s share of earnings, I would actually expect returns to capital to keep falling until they were considerably smaller than growth rates: you need a pretty low savings rate to keep the returns to capital above growth, and such low savings rates look pretty unstable (since saving in that regime so reliably leads to increasing wealth).
Re: trusts being dismantled, see the discussion of property rights here http://www.gwern.net/The%20Narrowing%20Circle, for example (I remember reading this but haven’t checked). Hopefully it will provide search terms. It doesn’t make quite the point I’d want, but it is what I had in mind.
I don’t see why a period of catch-up growth would tend to redistribute down. Why wouldn’t it boost returns to capital along with growth rates?
So my source for most of what I know about historical interest rates and growth rates is Piketty. This chart shows his findings on long-term return to capital vs. growth rate: http://www.motherjones.com/files/blog_piketty_r_vs_g.jpg (My impression is that the data after 1700 for return to capital are pretty solid, and before that they are pretty hazy.)
Empirically (at least according to these findings), returns to capital fell below growth in the aftermath of the World Wars. I don’t recall why that was at the moment; in retrospect I should have been more surprised by this. I’ll see what I can find about it.
OK, I think Piketty is probably better-informed than I am about historical returns to capital 🙂 They do seem like surprising data. In the very long run I still intuitively expect returns to capital to be below growth, because the alternative seems so fundamentally unstable. But I agree that the empirical record doesn’t show this trend (though it does show some weak signs of instability), so at a minimum I should be very uncertain.
Nevertheless, it seems very strange to consider a future where growth rates stay below interest rates indefinitely. I imagine that it would require a much stronger and more explicit anti-investment political stance than we have seen to date, probably universally or near-universally (or else wars against countries with higher growth rates and/or massive trade surpluses). Otherwise it would eventually become clear to everyone that the future is up for purchase and is going cheap, and I don’t see 100% buy-in to the sentiment “I do expect that I could buy the future for a few million dollars, but I’ll take a pass.” (With a 2% gap between returns and growth, it takes about a millennium for a million dollars to literally compound to be 100% of the economy.)
Probably the best response here is “if things literally kept going just as they are, the gap would probably eventually close, but the time scales are so long that all sorts of other stuff is going to happen in the interim,” followed by revisiting just why we care about what happens in the very long run anyway.
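For the parenthetical arithmetic above, here is the rough calculation (the dollar figures are my own assumptions): a $1 million stake compounding 2% per year faster than a roughly $100 trillion world economy takes on the order of a millennium to grow to the size of the whole economy.

```python
# Rough check of the "about a millennium" parenthetical (figures are assumptions):
# how long does a $1 million stake earning 2%/year more than the economy grows
# take to catch up with a ~$100 trillion world economy?
import math

stake = 1e6           # dollars
world = 1e14          # rough size of the world economy, dollars
excess_return = 0.02  # gap between the stake's return and economic growth

years = math.log(world / stake) / math.log(1 + excess_return)
print(round(years))   # ~930 years
```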
If we consider machine intelligence based on brain emulation, it isn’t clear how exactly you are thinking that situation changes from today. Perhaps you are assuming that emulations could be enslaved while humans cannot? While ems need not randomly mix genes to create kids with differing values, their values would still change over time with experience, and the economy would select an unpredictable subset of prior ems to be in high demand later.
First, I expect the creators of ems to have more control over their values. Yes, it’s true that people’s values change over time. But the situation seems qualitatively different than for children, if I can choose to create emulations of individuals who apparently have values similar to mine (and for some kinds of values, this really is a workable strategy). At a minimum, some values of some people are fairly stable, and that is probably sufficient.
I don’t find it very likely that values correlate particularly strongly with productivity, above and beyond “if I care about the project I’m doing, I’ll do it better.” I expect this even less given the ability to experiment more extensively with human psychology and to make modest adjustments to emulations.
Second, I expect emulations personally to receive much less of the value they create than humans do. This is closely related to the above point. But even if we could not find employees with similar values, note that we can choose to selectively emulate minds which are willing to work for particularly low wages. And if we try to politically enforce rights for ems, I expect enforcement to be a nightmare and to be associated with very large competitive disadvantages if there is variation across countries. I don’t think “has the desire to save” would be at the top of the list of rights our society would try to guarantee for ems. And on top of everything else, slavery or virtual slavery is simply not that rare historically. So overall I think there is a good chance that the owners of em IP and hardware will capture most of the value ems create.
Third, note that the existence of emulations allows us to identify relatively reliable and long-lived managers, namely copies of ourselves or of a single known manager, and it significantly reduces the plausible political asymmetry between the long-dead founder of a dynasty and the other people around today.
I don’t understand why you are talking about the correlation between values and productivity, and I don’t see much competitive disadvantage to places that forbid em slavery, though there would be big disadvantages to requiring high min wages. I agree that ems allow immortality, which allows long term consistent control over investment funds. I also agree that the fraction of income going to capital would increase, if we include the hardware that runs em minds. However, there would still be a substantial fraction of income that doesn’t go to distant capital owners. In particular there would be agency costs, paid to local managers of enterprises, and wages paid to em clans with especially productive workers. Having a substantial fraction of income go to things that distant generic investment funds can’t own or control seems to me sufficient to prevent the ability of such funds to own and control the future.
(I have remaining objections even assuming em/AI values are hard-to-influence, but this thread is probably not the easiest place to hash them out. For now I can leave my position as “At whatever level of granularity we can predictably determine em values, we should expect society’s values to be stable.”)
Doesn’t Piketty (Capital in the 21st Century) argue that the historical default resembles your naive model, with dynastic wealth serving as a stable value?
I agree that machine intelligence could greatly exacerbate the concentration of wealth.
I have not read Piketty’s book. My understanding is that he thinks history looks a bit like the naive model, but would presumably agree that it doesn’t look *too* much like it (dynasties don’t seem particularly long-lived, and the values of today’s dynasties seem to me to have a weak causal connection with the values of yesteryear’s dynasties). If I understood Piketty’s opinion better I might have made more of the relationship between the ideas, but I’ve only encountered it second hand.
Also, note that I’m not necessarily arguing that capital will be concentrated (it’s not even clear if the notion of “concentration” will make sense in a world with many machine intelligences); I’m fairly agnostic on that question.
Just a data point here: I liked this post and would like to see more about the intersection of AI and economics. In particular, this:
“…the availability of cheap machine intelligence [could drive] human wages below subsistence, an outcome which seems not only inevitable but desirable if properly managed. ”
caught my eye, and I’d like to see some expansion on it.
Presumably when you talk about a “crystallisation of influence”, you don’t mean a static distribution of influence, but rather a distribution that looks more stochastic, but which is more stable than the distribution today?
It seems to me that unless there was significantly more risk management done in the post-AI world than is done among well-regarded investors today, investments would have a range of possible outcomes. Thus we would see some investments having significantly higher payoffs than others, resulting in individuals’ wealth following trended random-walk-like behaviours. So while the expectation for each person is that they would maintain a constant share of total wealth, the global distribution of wealth would not be static and we would not see a “crystallisation of influence”.
If you were assuming that most agents would be risk averse, then an agent that was risk neutral would be able to capture more influence in expectation, and this would presumably be a preferable strategy for some agents.
That seems right–the argument here only implies “static in expectation.” The actual share of influence could vary considerably. As you point out, in the very long run we would expect to see not only vanishing time preferences but also a vanishing risk premium.
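A minimal sketch of what “static in expectation” looks like (parameters invented): ten investors start with equal wealth and face identical, independent risky returns each year; averaged over many runs the first investor’s share stays near 1/10, but in any single run it can wander far from that.

```python
# Illustrative simulation (made-up parameters): equal starting wealth, identical
# independent risky returns. Shares are constant in expectation (by symmetry)
# but individual outcomes spread out over time.
import random

n, years, trials = 10, 200, 2000
shares = []
for _ in range(trials):
    wealth = [1.0] * n
    for _ in range(years):
        wealth = [w * random.lognormvariate(0.0, 0.1) for w in wealth]
    shares.append(wealth[0] / sum(wealth))

print(round(sum(shares) / trials, 3))                # average share stays near 0.1
print(round(min(shares), 3), round(max(shares), 3))  # but single runs vary widely
```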
[…] Christiano outlined long-term resource inequality as a possible consequence of developing advanced machine […]
[…] or ever. Bostrom (2003), Scott Alexander, Hanson (forthcoming), Fabiano & Caleiro (2015), and Christiano (2014) make the case for us having reason to suspect that incentive structures will not lead to […]
[…] There is nothing inevitable about the polarizing impact of AI as some have argued any more than there was anything inherently polarizing for society with the invention of the steam engine or electricity, except in so far as technology is a part of a class-based economy bound to disadvantage the lower classes in the race for capital accumulation. The issue is how the new science technology will operate under the capitalist system as an instrument of capital accumulation and how politicians, from the populist right wing that may oppose AI to the progressive left that may favor it under a certain regulatory regime intended to benefit the broader population. https://rationalaltruist.com/2014/05/14/machine-intelligence-and-capital-accumulation/ […]
[…] Of course, this is not an open-and-shut case. In particular, if one believes that radical change will happen soon, e.g. due to transformative AI, then we are now at a unique window of opportunity for s-risk reduction as well as other longtermist goals. In this case, hingeyness is extremely high for many value systems, and investing arguably does not make much sense.12 […]