Olaf Stapledon (1886–1950) is known now only to aficionados of science fiction (Last and First Men and Star Maker are classics that shaped the genre throughout the twentieth century, and arguably kept Spinozistic ideas alive in a positivist age), but in his writings he also helped invent 'futurology' and 'transhumanism.' I hasten to add, in case you are of more discerning taste, that J.L. Borges was an admirer of Stapledon. Stapledon obtained a PhD in philosophy from the University of Liverpool under Alexander Mair (hitherto unknown to me). Stapledon's first book (1929), A Modern Theory of Ethics: A Study of The Relations of Ethics and Psychology, was clearly based on his PhD. Judging by citations, it made no impact on the scholarly world of his day.
The quoted passage is from a short book written in preparation for a post-Second World War age. In the book, Stapledon is committed to renewed internationalism (he clearly regrets the failures of the League of Nations) and to a form of democratic socialism that can avoid dictatorship (even though Stapledon is clearly very impressed by the achievements of the Soviet Union). More about his politics some other time.
The passage caught my attention because it entertains the possibility of technology-induced extinction of the very species that deploys a new technology. This extinction possibility is in the news again thanks to the (recall) impact of Parfit's writing on longtermism and (the publicity surrounding) rapid developments in AI/AGI.
Contemplating civilizational collapse or the extinction of species is not new in the history of philosophical writing or social theory. As I have noted (recall), in book 3 of Plato's Laws (677a-678a), Plato's Athenian stranger entertains the possibility that the Earth has experienced frequent catastrophic natural events (he explicitly mentions massive floods and pandemics, but it's easy to imagine he also assumes earthquakes and perhaps even meteorite impacts) that have wiped out whole populations of humans and other species of animals, and have set back the technological capability of civilizations for millennia. (We know from the Hebrew Bible's account of the flood and other ancient writings that Plato is not unique in this.)
It's also not new to think that civilizational collapse is endogenous to civilizational life. Arguably this is one of the natural ways to understand the Hebrew Bible's account of the scattering after the tower of Babel; in (recall) The Great Endarkenment, Elijah Millgram suggests that the linguistic diversification and differentiation (and subsequent cultural implosion and scattering) are themselves effects of the advanced division of labor that makes the building of the tower possible. (This is the great set-piece that opens Millgram's (2015) book.) Social theory going back to Ibn Khaldun is rife with cyclical, civilizational processes in which the key drivers of expansion and success also become self-undermining.
It's also not wholly unusual to recognize the potential for civilizational collapse that is endogenous to one's own civilizational life. The writings of a number of elite authors of the final century of the Roman republic are pretty clear on the fact that the endurance of the republic is threatened by its own past history of success (its conquests imported habits of luxury and/or empowered generals to aim at political power). Of course, often the point of recognizing the potential for civilizational collapse is to create a call to political arms, or a philosophical prophecy, to prevent the collapse from occurring (we can see hints of this in Plato's story of Atlantis). But sometimes the doom is inevitable or fated (as some of the Roman authors clearly came to think).
But, as Parfit de facto notes in Reasons and Persons, there is a non-trivial difference between foreseeing the collapse of one's own civilization and foreseeing the collapse of one's whole species. Now, absent a certain kind of ethical or axiological cosmopolitanism, it seems obvious to me that most authors of the past lamented the possible collapse of their own civilization and showed remarkable indifference to the possible collapse of humanity at large. (This is an option that Parfit does not consider, but if he had, he would clearly have suggested that it reveals a clear error in their implied axiology.)
Now, Stapledon's writings suggest he is a cosmic cosmopolitan. And in this respect, he clearly fits in a philosophical tradition tracing back to Fontenelle, Huygens, and Kant. So, it's no surprise he contemplates the possibility of endogenous, technologically induced extinction among aliens as an example worth avoiding in the harnessing of atomic power. There are some hints in Youth and Tomorrow that it was written during wartime, and interestingly enough he does not mention the use of atomic weapons. But just before the passage quoted at the top of this post, he claims that "the manufacture of atomic bombs necessitates organisation of production on a national scale and the danger of the misuse of these terrific weapons forces world-wide organisation upon us." (p. 73; emphasis added)
It's a bit unfortunate that Stapledon does not clarify what he means by 'misuse' of atomic weapons here. (He was a pacifist during World War I, but not during the second world war.) Does he mean their deployment, or does he worry about them falling into the hands of rogue states/actors, or does he worry about nuclear world war? But he clearly thinks that atomic power isn't merely a technological problem, but also a political problem on a global scale. After all, he thinks that a world-wide organization is required to handle it.
In the quoted passage at the top of this post, Stapledon is discussing what we now call the 'Kuiper belt' (or 'Edgeworth-Kuiper belt'). [UPDATED: see Ken McLeod's comment below.] It had only then recently been proposed by K.E. Edgeworth in a 1943 paper in the Journal of the British Astronomical Association. It would be worth trying to figure out whether Stapledon knew Edgeworth personally. In Edgeworth there is no mention of an alien civilization. I am not claiming that Stapledon is the first to propose the hypothesis that the asteroids are themselves the effect of a technologically induced existential collapse, but given this timeline he has to be among the first.* Clearly the function of Stapledon's hypothesis is to alert us to avoid this fate (not, perhaps, to create a testable hypothesis for planetary explorers of the solar system).*
It is, of course, an open question to what degree global institutional control of atomic power mitigates or accelerates existential risk. But since this post is long enough as is, I close with the following observation: the possibility of technology-induced (endogenous) extinction is, thus, not new with the possibility of AGI. Our society has lived with the possibility of technology-induced (endogenous) extinction of humanity for almost eighty years now. To say that is not to underestimate the existential risk that follows from the introduction of new technology. But it is worth noting that we have a history of living with, and perhaps managing, such risk, and perhaps we can learn from this history. For, while it would be nice if there were a technological fix to the existential extinction risk of AGI, it's more likely than not that we'll need institutional and social mitigation of such risk.