Contributing to tech progress
I think that contributing to technological progress may be one of the most efficient ways to make the world richer. I don’t yet have much quantitative justification for this, so the impression could easily be wrong. In any case it seems worth asking: “How effectively can we increase the pace of tech progress?” (It’s also worth asking, “How much does technological progress help?”, but I’ll ignore that question for now.)
Over a few upcoming posts, I want to consider some of the plausible ways that a philanthropist could directly push on tech progress:
1. Support/create tech companies. There may be some tech companies, or opportunities for tech companies, whose returns are slightly (or significantly) below market returns, but which create significant positive externalities. Such companies might exist with a smaller probability, or might accomplish less, without investment from a philanthropist.
2. Subsidize corporate R&D. If you reimburse a company a fraction of their R&D budget (or of their budget on a particular project) above a given threshold, you can get them to do more R&D.
3. Subsidize academia. Fund academic research, create academic positions, offer fellowships, etc.
4. Offer prizes.
At the moment I expect indirect approaches (especially capacity-building) to be more systematically neglected and consequently higher impact, but it seems worth getting a better handle on direct interventions for comparison. Moreover, many of the indirect ways to contribute to tech progress are themselves technological projects, and so we would still be left with the question of how to directly push on those efforts.
In the rest of this post I’ll discuss some general considerations which affect all of 1-4. In future posts I’ll look at particular options in more detail.
If we fund one area of tech progress, we typically bid up the price of inputs (especially human capital), so the area we fund gets more attention at the expense of other areas. If we provide funding which causes 1 extra person to join project Q, working on problem P in subfield S of field F using skills K, we should expect that this person is not taking a similar job working on problem P, in subfield S, in field F, or using skills K. Some fraction of a person is lost from each of these other areas to compensate for the extra person on project Q. These fractions are almost certainly increasing as we consider larger and larger groups: the probability that someone leaves a job in field F is necessarily at least as large as the probability that they leave a job in subfield S.
(Of course in reality this isn’t literally what is going on. The person who takes a job on project Q is very likely to pass up a quite similar job. But someone else is likely to take the job they passed up, and someone else will take their job in turn, etc. etc., and ultimately the issue is one of economics.)
Exactly how people redistribute is an empirical question. A naive starting point would be to guess that the replacements are distributed roughly evenly across different levels of specialization: perhaps 1/8 of a person substitutes away from the 10 most closely related jobs, 2/8 of a person substitutes away from the 100 most closely related jobs, up to 7/8 of a person substituting away from the 10M most closely related jobs and a whole person substituting away from the 100M closest jobs. There are straightforward natural experiments that could shed light on the issue (particularly studying the effect of exogenous shocks to demand for laborers of a certain type); I’m sure economists have done some of this work but I’m not familiar with it.
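The naive guess above can be made concrete with a small sketch. (The 1/8-per-order-of-magnitude schedule is the post’s illustrative guess, not an empirical estimate.)

```python
import math

def fraction_displaced(group_size: int) -> float:
    """Fraction of a person drawn away from the `group_size` most closely
    related jobs, under the illustrative 1/8-per-order-of-magnitude guess."""
    decades = math.log10(group_size)  # 10 -> 1, 100 -> 2, ..., 1e8 -> 8
    return min(decades / 8.0, 1.0)

for size in (10, 100, 10_000_000, 100_000_000):
    print(f"{size:>11,} closest jobs: {fraction_displaced(size):.3f} of a person")
```

Under this schedule the 10 closest jobs lose 1/8 of a person while the 100M closest jobs lose a full person, matching the figures in the text.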
We could imagine drawing a diagram consisting of concentric circles representing successively broader pools of laborers, with project Q at the center and “all laborers” at the outermost level. Our intervention increases the size of project Q by 1 person, and increases the size of successively larger circles by successively smaller amounts, having essentially no effect on the largest circle.
So the largest impact will be the most localized impact. We should be happiest if we can find a particular project which has an unusually large impact given the problem it is working on and the approach it is taking. All else equal, it’s slightly less good if we can only find a particular problem which is particularly important, and less good still if we can only identify a broad area which is particularly important, etc.
Consuming low-hanging fruit
In general I expect that the most attractive problems in a research area will be the first targets for research (and the most attractive technologies will be the first to be developed). If the availability of additional researchers depends on the attractiveness of unsolved problems, then this creates a negative feedback: putting more effort into a research area today decreases the attractiveness of that field tomorrow by consuming the low-hanging fruit in that area. This causes slightly fewer researchers to go into that research area tomorrow, offsetting our impact.
Suppose we are interested in the impact of doing 1 extra unit of research on X in 2012. To think about the impact we compare two scenarios: in one we did the extra unit of research, in the other we didn’t. In 2012, area X is 1 unit more developed in the first scenario. In 2013 there will be some slight offsetting drop in research on X, so area X may only be (say) 0.99 units more developed. In 2014 there will be a similar offsetting drop, and the gap will close further.
Assuming that this process is roughly time-invariant in expectation, it is sensible to talk about the half-life of a contribution in field X. After how many years will a contribution of 1 unit of research have deteriorated into 0.5 units of research because of negative feedbacks?
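As a sketch, assuming (purely for illustration) that a constant fraction of the remaining gap survives each year, as in the 0.99 example above, the half-life follows directly from the annual retention rate:

```python
import math

def half_life(annual_retention: float) -> float:
    """Years until a 1-unit contribution decays to 0.5 units, assuming a
    constant fraction `annual_retention` of the gap survives each year."""
    return math.log(0.5) / math.log(annual_retention)

print(round(half_life(0.99), 1))  # the 0.99/year example gives roughly 69 years
```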
In general I think that we should expect the half-life to be roughly comparable to the characteristic timescale of the decisions which mediate this negative feedback. So the characteristic timescale for labor on a very narrow problem would be quite short, perhaps a few years, since researchers often move between narrow problems. The characteristic timescale for labor on a broad area like biology may be much longer, perhaps a few decades, since the process of students moving into a field and becoming productive researchers is much slower to respond to changes in demand (there is typically a long lag between the point when someone decides to do biology and the point when they start contributing usefully to research in the area, and a long lag between low-hanging fruit being consumed and people becoming less excited about an area).
Personally, I expect the negative feedback from using up low-hanging fruit to be very large, based on informal models of science. I think within an academic field this claim is fairly clear; more broadly it is hard to say, but I think it is much more obvious if we imagine scaling down large changes.
If progress on a problem has a half-life of N years, it means that exogenously introducing an extra person to work on that problem indefinitely causes about N person-years of progress on that problem in total.
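A quick numerical check of this rule of thumb, under the same illustrative constant-decay assumption: summing each year’s surviving contribution gives a total within a small constant factor (1/ln 2 ≈ 1.44) of N, so “about N person-years” is right as an order of magnitude.

```python
def total_progress(half_life_years: float, horizon: int = 10_000) -> float:
    """Total extra progress from adding one person indefinitely, if the
    marginal contribution made in each year later decays with the given
    half-life (illustrative constant-decay assumption)."""
    return sum(0.5 ** (t / half_life_years) for t in range(horizon))

for n in (5, 20, 50):
    print(n, round(total_progress(n), 1))  # roughly 1.4-1.5x n in each case
```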
The conclusion of this line of reasoning is that it is significantly more robust to support a field when you think that work being done in that field today will have a lasting positive impact than when you think that the field will eventually lead to valuable results.
In addition to the negative feedback from exhausting low-hanging fruit, there may be positive feedbacks to research on a problem. In particular, having more researchers or engineers in an area may:
- Resolve “bottlenecking” problems, thereby increasing the attractiveness of the field.
- Increase the visible successes of the field, thereby increasing its prestige and making it more attractive (to practitioners, funders, and profit-motivated companies)
- Increase infrastructure for training new researchers or engineers (particularly via increased opportunities for mentorship and advising).
In general I would expect these effects to partially offset the negative feedbacks. In a very small number of cases the result may be that the amount of work in an area is relatively stable, and the sensitivity to an external intervention is roughly 1. But a priori we should expect such a precise balance to be relatively rare without a compelling mechanism which balances the positive and negative feedbacks. So most of the time we should expect an exogenous change to research in an area to either compound or decay at a rate which is roughly comparable to the rate at which negative feedbacks alone would cause it to decay. A balance between positive and negative factors might double or triple those timescales, but would be unlikely to multiply them by a factor of 10.
So the most interesting question is typically whether positive or negative feedbacks dominate.
When the positive feedbacks dominate, we expect a high (>1) sensitivity to exogenous forcing. But unless current levels of investment are precisely balanced, we also expect to see either rapidly increasing or rapidly decreasing investment in an area. When we observe rapid changes without exogenous drivers, we can reason relatively straightforwardly about the effects of an intervention: if investment in an area is increasing by 25% a year and is not being driven by exogenous changes (like growth in a larger industry which drives investment), then a 1% increase in next year’s investment will accelerate this development trajectory by about 1/25 of a year. Similarly, investment in a declining area would be expected to delay the decline (unless the decline is exogenously driven).
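The acceleration arithmetic can be checked directly. The exact figure depends on how growth compounds; the compound-growth view below gives about 0.045 years, close to the 1/25 ≈ 0.04 back-of-the-envelope figure.

```python
import math

def years_accelerated(boost: float, annual_growth: float) -> float:
    """How far ahead a one-time fractional `boost` to investment pushes a
    field growing at `annual_growth` per year (compound-growth view)."""
    return math.log(1 + boost) / math.log(1 + annual_growth)

print(round(years_accelerated(0.01, 0.25), 3))  # ~0.045 years
```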
The total impact of such changes depends on the anticipated long-run behavior of the field. For example, if we expect field X to grow 10-fold over the next decade, after which growth will significantly slow, then an investment in field X today would be roughly 10 times as good as you might otherwise expect (modulo time preferences).
In a very small class of cases we might have very high sensitivity to exogenous changes but not observe high rates of change, most likely when rapid growth is occurring but is difficult to detect (because it is occurring at a small scale and/or because the things which are growing are hard to isolate). It may be possible to recognize such cases by making accurate arguments about the potential for growth, but it should be recognized that this is necessarily an unstable situation which obtains for a small fraction of a field’s life, and there is a strong prior presumption against such conditions persisting. Cases of this kind may be particularly easy to identify ex ante based on expert opinion.
One possible counterbalance to the “low-hanging fruit effect” you mentioned is that the people who don’t go into X research (as a result of the effect) probably go into another field of research instead. So even though you only add N person-years to X research, you also add some person-years to other fields. The effect probably isn’t huge, because people probably allocate themselves to research badly. However, if incentives were sufficiently well aligned to encourage self-allocation to high-impact research fields (which I hope will happen anyway), this could become an important consideration.
[…] extra causes we considered including are: boosting technological progress, especially R&D to increase crop yields and green energy R&D (highly rated by the […]