To illustrate the claims in this book, I rely on three primary metaphors throughout.... The second is of history as molten glass. At present, society is still malleable and can be blown into many shapes. But at some point, the glass might cool, set, and become much harder to change. The resulting shape could be beautiful or deformed, or the glass could shatter altogether, depending on what happens while the glass is still hot.--William MacAskill (2022) What We Owe the Future, p. 6
This is the third post on MacAskill's book. (The first one is here, which also lists some qualities of the book that I admire; the second one is here.)
A key strand of MacAskill's argument rests on three contentious claims: (i) that society is currently relatively plastic; (ii) that the future values of society can be shaped; and (iii) that there is a dynamic in history of “early plasticity, later rigidity” (p. 43). MacAskill also calls such rigidity "lock-in," and he is especially interested in (iv) "value lock-in."
Before I get to criticizing these three claims, it's worth restating a point from a previous post: from (ii), MacAskill and the social movement he is creating (pp. 243-246) have (v) claimed for themselves the authority to act as humanity's legislators without receiving any mandate or consent to do so. It's quite odd that MacAskill doesn't reflect on the dangers in the vicinity here, because most of the examples he offers of relatively long-lasting 'value lock-in' are, by his own lights, the effects of "conquest" (p. 92). Not to put too fine a point on it, but in general the project of 'value lock-in' is team evil (as MacAskill notes, this is the project of imperialists, colonialists, religious monopolists, etc.). You would hope that one lesson to take from this fact is that it's not a good idea to be on team lock-in. I will return to this in a future post.
On (i), I don't think MacAskill ever offers a metric of social plasticity or even really provides thorough evidence that our age is genuinely molten. (And, in fact, I have noted that at times he undermines this claim by suggesting that our age is characterized by "homogeneity" (p. 96) and the effects of "modern secular culture" or a "single global culture" (p. 158).) But it's worth looking at how MacAskill articulates the first claim:
In China, the Hundred Schools of Thought was a period of plasticity. Like still-molten glass, during this time the philosophical culture of China could be blown into one of many shapes. By the time of the Song dynasty, the culture was more rigid; the glass had cooled and set. It was still possible for ideological change to occur, but it was much more difficult than before.
We are now living through the global equivalent of the Hundred Schools of Thought. Different moral worldviews are competing, and no single worldview has yet won out; it’s possible to alter and influence which ideas have prominence. But technological advances could cause this long period of diversity and change to come to an end.--(p. 79)
Not unlike MacAskill, I am fascinated by the period of the Hundred Schools of Thought. Each year I spend some time on it with my undergrads. But I never fail to point out that this era is also known as the 'Warring States period.' (In fact, I observe, as a puzzle, that intellectual fertility amidst relentless war within a relatively fractured political system is something it shares with the Italian Renaissance of Machiavelli and, perhaps, Kautilya's age in India.) The absence of empire is beneficial to value pluralism.
I don't mean to suggest that value pluralism is a necessary effect of a multi-polar world. Presumably there are other social and institutional sources: Max Weber thought such pluralism was the effect of the advanced division of labor; Plato and Al-Farabi seem to have thought it was the effect of the diversity of human passions that flourish in democratic societies alongside freedom of speech and a lack of educational uniformity. That is to say, one may obtain value pluralism without a context of permanent war.
So, if one thinks cultural plasticity is worth having -- or at least if one thinks rigidity is something threatening -- then one should be thinking about the practices and institutions that prevent empire and promote enduring cultural diversity. MacAskill is rather fond of thinking about society in terms of cultural evolution. But, as I have repeatedly noted, he is thoroughly uninterested in thinking about the role of institutions -- as selection mechanisms or ecological structures -- in generating and sustaining such pluralism.* And the effect of this is to flatten (to use his lingo) the cultural fitness space. For MacAskill treats technology as a determining cause of value lock-in.
Now, I don't want to suggest technology never shapes values, but it is peculiar that MacAskill doesn't notice that technology can be neutral among competing values and that technology is often shaped by values. I call it peculiar because (a) the hot topic in AI ethics today is that even 'neutral' algorithms often reflect and amplify existing structural (that is, institutional) injustice(s); and (b) lots of military technology gets used for competing ends. We can observe this peculiarity in the very next paragraph:
When thinking about lock-in, the key technology is artificial intelligence. Writing gave ideas the power to influence society for thousands of years; artificial intelligence could give them influence that lasts millions. I’ll discuss when this might occur later; for now let’s focus on why advanced artificial intelligence would be of such great longterm importance.--(p.79)
Let's stipulate that it's true that "writing gave ideas the power to influence society for thousands of years"; but writing itself does not limit the number of ideas that can be expressed. From the perspective of cultural evolution, the invention of writing creates an explosion of cultural variation. And while, surely, some technologies may be homogenizing along some dimensions (including as instruments of empire), it is simply not intrinsic to technology to be value-homogenizing. (As an aside, it is notable that in his work MacAskill draws on economists who think about productivity, but he has ignored the rich area of philosophy of technology and what we might call science and technology studies (STS). MacAskill is engaged in (what Nathan Ballantyne calls) epistemic trespassing without realizing, it seems, which fields he has ignored.)
In fairness, MacAskill cites this paper (here). But a key premise in the argument is this: "If a large majority of the world’s economic and military powers agreed to set-up such an institution, and bestowed it with the power to defend itself against external threats, that institution could pursue its agenda for at least millions of years (and perhaps for trillions)." The dangerousness of AGI would be a possible effect of (near) world peace. So, even if one grants that the probability of AGI in the next fifty years is "no lower than 10%" (p. 91), the whole argument for (iv) relies on the utopian thought that, amidst the stress of rising climate change, the great powers of humanity opt for world peace!
In fact, MacAskill's argument for (ii/iii) rests on the idea that stagnation is inevitable because scientific and technological innovation become harder and harder (pp. 150-151) and because, as countries grow wealthier, fertility drops (and there is an implied absolute plateau to the population on Earth (pp. 152-155)). MacAskill is clearly influenced by Tyler Cowen's work (pp. 147-148), but he cites as authority research by Stanford University's Chad Jones on "longer timescales" (p. 150). And since I try to keep up in philosophy of economics, I thought it useful to take a look at some of Jones' papers (which present, as MacAskill himself notes in an endnote, a dressed-up Solow-Swan growth model; such models leave considerable known uncertainty in long-range forecasting).+
Jones assumes that "a larger population means more researchers which in turn leads to more new ideas and to higher living standards," something MacAskill also embraces (p. 152). Here's a passage from the conclusion of one of the key papers MacAskill cites:
Of course, the results in this paper are not a forecast—the paper is designed to suggest that a possibility we have until now not considered carefully deserves more attention. There are ways in which this model could fail to predict the future even though the forces it highlights are operative. Automation and artificial intelligence could enhance our ability to produce ideas sufficiently that growth in living standards continues even with a declining population, for example. Or new discoveries could eventually reduce the mortality rate to zero, allowing the population to grow despite low fertility. Or evolutionary forces could eventually favor groups with high fertility rates (Galor and Moav 2002). Nevertheless, the emergence of negative population growth in many countries and the possible consequences for the future of economic growth make this a topic worthy of further exploration.--"The End of Economic Growth? Unintended Consequences of a Declining Population" American Economic Review, November 2020
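The mechanism Jones assumes -- researchers produce ideas, but ideas get harder to find -- can be sketched in a toy simulation. This is a minimal illustration in the spirit of semi-endogenous growth models, not Jones's actual model; the function name and all parameter values are invented for the example:

```python
# Toy discrete-time sketch of an "ideas" production function in the
# spirit of semi-endogenous growth theory. All parameter values
# (alpha, lam, phi, growth rates) are illustrative assumptions,
# not calibrated estimates from Jones's paper.

def simulate(pop_growth, years=500, alpha=0.02, lam=1.0, phi=0.5):
    L, A = 1.0, 1.0                            # research labor, stock of ideas
    for _ in range(years):
        A += alpha * (L ** lam) * (A ** phi)   # new ideas need researchers;
                                               # phi < 1: ideas get harder to find
        L *= 1.0 + pop_growth                  # population = research labor
    return A

expanding = simulate(pop_growth=0.01)    # 1% annual population growth
shrinking = simulate(pop_growth=-0.01)   # 1% annual population decline

# With a shrinking research population, the stock of ideas levels off
# at a finite ceiling instead of growing without bound.
print(f"ideas after 500 yrs, growing pop:   {expanding:,.0f}")
print(f"ideas after 500 yrs, shrinking pop: {shrinking:,.0f}")
```

Under these made-up parameters the growing-population path keeps accumulating ideas, while the shrinking-population path converges to a fixed plateau -- the stagnation scenario the quoted passage describes.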
The reason I quote this passage is not to refute MacAskill (although MacAskill is not sufficiently attentive to the implied difference between a model-driven scenario and a forecast), but because it helps explain why MacAskill is so focused on artificial intelligence. (Some critics of longtermism suggest that it is the effect of the values of Silicon Valley and its donors on the EA movement; while one cannot rule that out, I like to think it's model-driven.) At one point MacAskill writes:
Think of the innovation happening today in a single, small country—say, Switzerland. If the world’s only new technologies were whatever came out of Switzerland, we would be moving at a glacial pace. But in a future with a shrinking population—and with progress being even harder than it is today because we will have picked more of the low-hanging fruit—the entire world will be in the position of Switzerland. Even if we have great scientific institutions and a large proportion of the population works in research, we simply won’t be able to drive much progress. (p. 155)
Neither Jones nor MacAskill really considers the benefits of educating a large part of the world's population at, say, Switzerland's levels. (Go look up, say, its patents or education spending per capita.) Presumably this is because in the Solow-Swan model such benefits are a one-off and don't generate a permanent productivity spiral. But it seems to have the perverse effect on MacAskill's program/longtermism that the economic development of poor countries (and, say, opening markets to their products) does not figure in What We Owe the Future as an especially important end worth pursuing. As an aside, in another paper (also cited by MacAskill) Jones and his co-authors note the significance of the fact that ideas are non-rivalrous. Their model implies that 'educating a large part of the world's population at, say, Switzerland's levels' would be worth doing.
I quote Jones for two other reasons. First, Solow-Swan does not imply that technology-driven future productivity or intensive growth is impossible. It's important to MacAskill's general argument that something like "past [scientific/technological] progress makes future progress harder" (p. 151) is true (this is Cowen's influence on MacAskill). And the main empirical argument for it is the record of declining productivity growth of the last half century or so (which gets accentuated by the drop in fertility in countries with good education systems). We are at risk of reaching what in the eighteenth century was called a 'stationary state.' But even if we were really to understand what caused the scientific and industrial revolutions to happen, there is no reason to think a future leap in productivity would necessarily have to follow the same underlying causal structure.
As a non-trivial aside, for MacAskill a civilizational plateau is (if we don't destroy ourselves) inevitable due to the physical constraints of the universe. He thinks the number of atoms puts an absolute upper limit on growth (p. 27). But I really don't understand the argument for why increasing value-added per atom is impossible on his view. Again, it is noticeable that institutions are irrelevant to MacAskill's argument.
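The atoms-as-upper-bound argument only yields a plateau if value per atom is capped, and a back-of-the-envelope calculation makes that contested premise explicit. (The atom count and growth rate below are round numbers I have assumed for illustration; they are not MacAskill's own figures.)

```python
import math

# If value per atom is capped at some constant, total output can grow
# only until every accessible atom is "used up". How long would a
# steady 2% growth path take to exhaust an assumed stock of atoms,
# starting from one unit of output per atom-equivalent?

ATOMS_ACCESSIBLE = 1e70   # rough order of magnitude, assumed here
growth_rate = 0.02        # 2% annual growth in total output

years = math.log(ATOMS_ACCESSIBLE) / math.log(1 + growth_rate)
print(f"{years:,.0f} years")   # on the order of 8,000 years
```

So on these assumptions the cap binds within roughly ten thousand years -- but only given the premise that value per atom cannot keep rising, which is exactly the step MacAskill leaves unargued.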
In a future post, I will explore MacAskill's "ideal" that shapes his argument for (ii). Here I just want to close with the observation that it is odd to see a model, Solow-Swan, which (let's stipulate) is "foundational for all of modern growth theory" (note 18, p. 304) but which has known problems as a forecasting device because there is considerable room for uncertainty,+ be presented as a reliable guide to very long-term developments. This is a scientific field that is still in its infancy. And the known uncertainty in the error margins of the models doesn't get eliminated over the very long term; rather, the sensitivity to even minor modeling mistakes gets worse. To sum up, any collective decision for the long-term future made on the prospect of world peace and this model is an expression of a lovely faith.
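The point about compounding sensitivity can be made concrete with simple arithmetic: a growth-rate estimate that is off by a mere 0.1 percentage points produces a modest level error at fifty years and an enormous one at five thousand. (The numbers below are illustrative, not drawn from any calibrated model.)

```python
# How a tiny error in an estimated growth rate compounds over long
# forecasting horizons. Compare projections at 2.0% vs 2.1% annual
# growth -- a discrepancy well within any model's error margin.

def projected_level(growth_rate, years):
    return (1 + growth_rate) ** years

for horizon in (50, 500, 5000):
    ratio = projected_level(0.021, horizon) / projected_level(0.020, horizon)
    print(f"after {horizon:>5} years, a 0.1pp growth-rate error "
          f"changes the projected level by a factor of {ratio:,.2f}")
```

At fifty years the two projections differ by a few percent; at five thousand years they differ by two orders of magnitude. That is why a model built for medium-run analysis is a shaky guide to "very long-term developments."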
+I thank John Quiggin for this reference.
*Yes, on p. 86 MacAskill mentions the significance of institutional design. But it plays no role in his actual argument.