



Michael Kates

Very interesting post. I must say that I find the last two pages of Reasons and Persons extremely moving. It's quite beautiful, in my view.

And I can't recall at the moment, but I believe Parfit gives a reason for including the Nietzsche quote as the epigraph of the book that doesn't relate to what you discuss in the post. Indeed, I believe he included it for a somewhat odd reason. Perhaps someone else knows.

John Quiggin

I don't like the cardinal scales required to say that diff(2,3) > diff(1,2). But I'm comfortable with uncertainty.

So, let's posit that a nuclear war wiping out 99 per cent of humanity would also destroy nuclear weapons and the knowledge of how to make them, and create a durable taboo against ever looking into this question again. And let's say (my best guess) that there is something like a 10 per cent chance of nuclear extinction this century. If you could press the 99 per cent button, would you do so? I wouldn't.

Having said that, I agree that there is something special about total extinction, so I guess I must place some weight on potential future people.


You mention a gap in Parfit's reasoning for the conclusion that the difference between 3 and 2 is greater than the difference between 2 and 1.

This gap is (mostly) filled by preceding discussion in the fourth section of Reasons and Persons. Namely, Parfit defends the claim that future people are axiological equals to present people (assuming they are guaranteed to exist). Separately, he argues against person-affecting views. Additionally, he argues that mere addition always constitutes an improvement.

These, combined with the Egyptology objection he discusses in R&P, entail that the gap between 3 and 2 is greater than that between 2 and 1, granting reasonable empirical assumptions about humanity.


In reference to my previous comment, a condensed argument based on Parfit's reasoning would go as follows.

1 - When a person exists does not vastly affect the moral value of their existence.

Argument for 1 - If 1 were false, the world would be made impartially better if present people received a minor benefit while future people suffered a massive reduction in quality of life. Example: burying nuclear waste more cheaply, even though doing so will increase the cancer rates of future people (assume they will still have lives worth living).

2 - The fact that the identity of future people is up in the air does not undermine their value.

Argument for 2 - It would seem that a mother makes the world better if she takes on minor inconveniences so that her child won't have an ailment, even if doing so changes the child's identity.

3 - If a population has extra happy people, it is better by an amount that does not vary greatly with facts about other, unrelated people.

Argument for 3 - If 3 weren't true, the badness of many humans being harmed would depend on facts about greatly disconnected people (Martians or ancient Egyptians, for example).

By 1, 2, and 3, and because the future contains vastly many people, the loss in value in going from 99% dying to 100% dying is much greater than the loss in going from nobody dying to 99% dying, because going from 99% to 100% eliminates all future people, who matter to a comparable degree as present people.


Hi Chris,
The question is about relative distances between 1&2 and 2&3, such that 2&3 > 1&2. But you don't seem to take the suffering of a 99% death rate from nuclear war really seriously (and the many centuries of subsequent suffering from nuclear fallout). That's not a "minor inconvenience." Anyway, we're now entering repugnant conclusion territory, and I have nothing to add to that discussion that hasn't been explored in the literature.

Michael Kates

I remember! According to Jeff McMahan, Parfit wanted to use the picture he took of a ship in Venice as the cover (which, of course, he did), but he needed a way to explain it to the publisher. The Nietzsche quote then dawned on him, and so he wrote the last chapter to justify it! (This is from a wonderful interview of McMahan by Simon Cushing: http://profiles.cognethic.org/McMahan.pdf)


Thank you for that. Here's what McMahan says:
"Well, I'll tell you the story that Derek told me. I think Dave Edmonds has discovered something a bit different, but I'm quite sure that what Derek told me, I remember very clearly what Derek told me, which was, he wanted to put that particular picture of the ship in the harbor at Venice on the cover, so he thought, "how can I get this on the cover?" And he knew of this quotation from Nietzsche about "our horizons are open and we can set sail," and all this kind of thing, so he thought well, I'll use that quotation as an epigram, but then he thought well why would I have that quotation as an epigram, so he thought I'll write a last chapter that says, "moral philosophy is in its early days. Everything is open to us now as a result of the shift away from the dominance of religion in moral thinking." And, so that's what he said to me. Now, I think Dave has a different story, but basically, what Derek told me was, he wanted to get the photograph on the cover, for that he needed the Nietzsche quotation, and to rationalize the Nietzsche quotation, he had to write this last chapter."

Chris Minge

Hi Eric,

Here is the more thorough reasoning I have in mind. 99% of humanity dying would mean 7.92 billion people die, leaving 80 million. If we assume the human population returns to 8 billion in the time it first took us to go from 80 million to 8 billion (4000 years), and that the human species will ultimately last as long as the average mammal species (1 million years, 200,000 already lived), and we assume 80 year average life-span, then there will be 9950*8 billion future people.
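[The back-of-the-envelope arithmetic above can be checked with a short script. All figures are the commenter's stated assumptions (1-million-year average mammal species lifespan, 200,000 years already lived, 4,000-year recovery period, 80-year average lifespan), not established facts:]

```python
# Chris Minge's estimate of future "populations", in units of 8 billion people.
# All parameters are the commenter's assumptions, not established facts.
species_lifespan = 1_000_000   # years: average mammal species lifespan
years_lived = 200_000          # years humanity has already existed
recovery_time = 4_000          # years to rebound from 80 million to 8 billion
avg_lifespan = 80              # years per "generation"

years_remaining = species_lifespan - years_lived - recovery_time
future_populations = years_remaining // avg_lifespan
print(future_populations)      # 9950 successive populations of 8 billion
```

[So on these assumptions the future holds 9950 further populations of 8 billion, which is where the factor of ~9950 in the comparison below comes from.]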

So to compare the numbers [in units of 8 billion people].
1 - nobody dies
2 - 0.99 of a population that otherwise would exist doesn't exist.
3 - 9951 populations who otherwise would have existed won’t exist.

If future people are axiologically comparable to present people without large factors (my previous comment discussing how Parfit argues for this), then the gap between (2) and (3) is ~9950, while the gap between (1) and (2) is ~1, making 2&3 larger than 1&2 according to this analysis.

You are right to point out that there is more than just lives being lost in these scenarios. 99% die painfully in (2) and (3), and in (3), 1%, and many of their descendants, will likely have very poor lives because of nuclear fallout and related chaos. For Parfit's point, I'm sure he would have been happy to revise it to a claim about a scenario in which the survivors aren't harmed (less realistic, but it contains the relevant longtermist claim). Regarding the initial claim, because 99% die of nuclear war in both (2) and (3), we can ignore that for the comparison 2&3. However, survivor-suffering is unique to (3).

However, I still think the conclusion follows from reasonable assumptions, even granting this detail. If we grant Parfit’s other arguments (controversial of course), then the original scenarios should be roughly equivalent (in terms of axiological evaluation) to the following.

There are 9951 isolated earths, each with 8 billion people, [all of whom are sterile], and everyone will live the average quality of life of the future people of scenario (1), likely better than present-day earth’s quality of life due to technological advancement.
1+: nobody dies.
2+: On one earth 80% die, 20% live in nuclear fallout, and the remaining 9950 earths are unaffected.
3+: All 9951 earths are eliminated in nuclear war.

The fallout-afflicted lives on the one affected earth in (2+) match how many people would live through nuclear fallout in the original (2), assuming the fallout lasted 1000 years. If the moral arguments go through, and 2+&3+ > 1+&2+, then 2&3 > 1&2. Reasonable judgments about the quality of life in nuclear fallout, and the significance of death, support 2+&3+ > 1+&2+.

Regarding calling it a “minor inconvenience”, I do not think Parfit’s arguments imply 99% of earth dying is a minor inconvenience. I would take it to show that 100% is an incredibly massive tragedy, while 99% dying is an incredibly massive, but not nearly as massive, tragedy (not an inconvenience). Considering that it is impossible to put into words how horrible 99% of earth dying is, it is naturally impossible to use words which can distinguish 99% and 100% while still accurately describing both (assuming Parfit is right).

Lastly, I should just note that this longtermist conclusion is compatible with rejecting the repugnant conclusion. Of note - Parfit (2016) “Can We Avoid the Repugnant Conclusion?”, Hajek & Rabinowicz (2021) “Degrees of Commensurability and the Repugnant Conclusion” (I do not know whether or not Hajek or Rabinowicz endorse other claims which rule out Parfit’s longtermist axiological conclusion, but this paper does not). I myself am inclined to accept such a view.

Eric Schliesser

I am not especially moved by the idea that possible distant future people are what you call axiological equals to present people (and find the arguments for this commitment unpersuasive). So, I feel bad that somehow my post made you spell out in such detail an argument that I don't think should even get off the ground.
But I will note that you don't really address my observation that I think that (2) is itself objectively a far worse situation than (3). And the only way you seem to get out of that is to start positing lots of possible lives lost (or lots of possible artworks never created) that somehow counterbalance that claim.
Anyway, thank you for taking the time to do so.

Chris Minge

Yes, arguing that distant future people are axiological equals is no easy task. Much of section 4 of R&P is just trying to do this (as is my brief second most recent comment), and it constitutes a long discussion. Not surprising that debate continues to this day.

Your observation that (2) is worse than (3) is addressed by 1+, 2+, and 3+, conditional on the judgment that 3+ is worse than 2+ (which you very well may not share), together with the axiological equality of future people (which you are not on board with, but which, if true, would make 2+ and 3+ analogous to 2 and 3). You're right that the many future possible lives are important for counterbalancing. This is precisely the empirical core of why Parfit thinks 3&2 > 1&2.

Aljosa Kra

It seems that for Parfit the death of God plays a double role here. On the one hand, the death of God enables rational morality. A belief in moral progress (or rational hope) is part of that rational morality.

On the other hand, the death of God is itself an event in moral progress.

Taken together, this means that the death of God enables us to recognize the death of God as a step in the morally desirable direction. It's a kind of revelation, then.

Matthew Adelstein

I think that it would make sense to address the many arguments Parfit presents for the conclusion in the book rather than merely these few pages summarizing the conclusion. For a quicker presentation of some arguments for this conclusion, see here. https://benthams.substack.com/p/longtermism-is-correct-part-1

Eric Schliesser

Hi Matthew,
Those arguments have been addressed by many others. I did start reading your substack account. I stopped reading when you claimed "If you are going to create a person with a utility of 100, it is good to increase their utility by 50 at the cost of 40 units of suffering." It is pretty clear that (a) you are not restricting who is the subject of this suffering, and whether they have a say in the matter; (b) this kind of reasoning is easily abused (as history has shown). Objections to this move are well known. (As I also note above, the near symmetry you posit between actually existing people and possible humans in the very distant future is also something I reject--again for well known reasons.)

John Quiggin

The argument about distant people is critical and represents a misunderstanding of utilitarianism. It's a political philosophy, based on the idea that everyone *in a given society* should count equally. Bentham is quite clear on this.

That's why the early utilitarians were Malthusians: if you accept Malthus' economics, population restriction is the only way to raise average living standards.

Eric Schliesser

Hi John,
Longtermists tend to reject the Malthusian restrictions -- in fact, MacAskill is all in on population growth -- because they think any non-suffering addition to the population is worth having on its own terms and has good effects on innovation, economic growth, and values.



