The death penalty has various serious problems, and lifetime imprisonment is extremely expensive.
I guess we should be happy every time someone so thoroughly bad we want them out of society forever (like a serial murderer) does us the favour of killing themselves. Nothing of value is lost, and the justice system saves money. Right?
It seems to me that it logically follows that we should incentivize such suicides. Like: 5,000 dollars to a person of your choice if you're dead within the first year of your lifetime sentence, wink, wink, nudge, nudge.
It feels very wrong and is clearly outside the Overton window. But is there any reason to expect this wouldn't be a net benefit?
People keep accusing me of being "uncharitable" when I claim that utilitarianism is fundamentally incompatible with human flourishing because, in order to remain logically consistent, it must inevitably devolve into either mindless wire-heading or a fucking death cult. And yet here we are, with one of my "strawmen" made flesh asking "do we really need to value life? Really?" and "why can't we just euthanize the people [user] doesn't like?"
On the off chance you're just naive and ignorant rather than a troll, the answer to your question in the OP is: no, we shouldn't, because we've already established that this is a slippery slope, and at the bottom of this slope lies a mountain of skulls.
You as a utilitarian may see that as a fair trade for some nebulous benefit, but I do not.
You might as well take a look at the posts in the main thread here and conclude that anti-woke purity spiraling inevitably turns into Stormfront.
Dogmatic adherence to utilitarianism doesn't lead to any better outcomes than dogmatic adherence to the Old Testament. Any moral system has to be alloyed with common sense and some flexibility for appeals to emotion; otherwise you end up with one Repugnant Conclusion or another. TRVDITION is all fun and games and human flourishing until you have to disown your daughter for marrying the Wrong Sort (oops, guess I've been reading too many of SecureSignals' posts).
There's also an aspect of status signaling, whereby emotion is low status. The more coldblooded, the more points you get for being purely rational and high IQ - thus, nuke the GPUs/kill the degenerates/forced sterilization and eugenics.
Precisely.
My position is essentially that none of this is a coincidence because nothing is ever a coincidence.
Why do you think this has anything to do with utilitarianism? Utilitarianism doesn't value the lives and well-being of mass-murderers any less than it values anyone else. It only recommends harming them as an instrumental goal to serve a more important purpose, such as saving the lives of others. A 20-year-old who raped and killed a dozen children still has plenty of potential QALYs to maximize, even adjusting his life-quality downward to account for being in prison. It's expensive, but governments spend plenty of money on things with lower QALY returns than keeping prisoners alive. Also, the OP only differs from conventional death-penalty advocacy in that he seems concerned with the prisoners consenting, proposing to incentivize suicide instead of just executing them normally, and once again that is not something utilitarianism is particularly concerned with except in instrumental terms.
The utilitarian approach would be to estimate the deterrent and removal-from-public effect of execution/suicide-incentivization/life-in-prison/etc. and then act accordingly to maximize the net welfare of both criminals and their potential victims. It doesn't terminally value punishing evil people like much of the population does, though I think rule-utilitarianism would recommend such punishment as a good guideline for when it's difficult to estimate the total consequences. (In Scott's own Unsong the opposition of utilitarians to the existence of Hell is a plot point, reflecting how utilitarianism doesn't share the common tendency towards valuing punishment as a terminal goal.)
But neither is utilitarianism like BLM in that it cares more about a couple dozen unarmed black people getting shot in conflicts with police than about thousands of additional murder victims and fatal traffic accidents per year from a pullback in proactive policing. That's just classic trolley-problem material: if one policy causes a dozen deaths at the hands of law-enforcement, and the other policy causes thousands of deaths but they're "not your fault", then it's still your responsibility to make the choice with the best overall consequences. There are of course secondary consequences to consider, like the policy's effect on police PR and thus on cooperation with police, but once you're paying attention to the numbers I think it's very difficult to argue that they change the balance, especially when PR is driven more by media narratives than by whether the number is 12 or 25 annually.
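For concreteness, here's a toy version of that trolley-problem accounting. Every number below is an invented placeholder (loosely echoing the "12 or 25" and "thousands" figures above), not a real statistic; the only point is that the tally works the same way regardless of whose "fault" the deaths are.

```python
# Toy welfare comparison, in the spirit of the paragraph above.
# All figures are hypothetical placeholders, not real statistics.
policies = {
    # deaths in police encounters, plus hypothesized excess deaths per year
    # from increased crime and traffic fatalities under that policy
    "proactive_policing": {"police_caused": 25, "other_excess": 0},
    "policing_pullback":  {"police_caused": 12, "other_excess": 3000},
}

for name, deaths in policies.items():
    total = deaths["police_caused"] + deaths["other_excess"]
    print(f"{name}: {total} total deaths per year")

# The naive utilitarian choice is simply the smaller total; secondary effects
# like PR and community cooperation get layered on top of this comparison.
```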
Notably, when utilitarians have erred regarding prisoners, it seems to have been in the exact opposite direction from the one you're concerned about. A while back someone here linked a critical analysis of an EA organization's criminal-justice-reform funding. They were primarily concerned with the welfare of the criminals rather than with secondary effects like the crime rate; the effect on the welfare of the criminals is easier to estimate, after all, which makes it an easy mistake to fall into and underscores why utilitarians need to avoid the streetlight effect. It was also grossly inefficient compared to other EA causes like third-world health interventions. They did end up jettisoning it (by spinning it off into an independent organization without Open Philanthropy funding), but not before spending $200 million, including $50 million on seed funding for the new organization. However, I think a lot of that can be blamed on the influence of social-justice politics rather than on utilitarian philosophy, and at least they ultimately ended up getting rid of it. (How many other organizations blowing money on "criminal justice reform" that turns out to be ineffective or harmful have done the same?) In any case, they hardly seem like they're about to start advocating for OP's proposal.
I think there are a lot of definitions of "utilitarianism" and they get kind of incorrectly smooshed together. On one level there's "human pleasure is the only goal and we should optimize for human pleasure". If you're optimizing solely for the short run, yes, it leads to that; if you're optimizing solely for the long run, then in the long run it perhaps leads to that; but, somewhat counterintuitively, optimizing solely for the long run doesn't lead to that in the short run, because in order to have the most humans to eventually be happy we need to accomplish a lot of other things before exterminating humanity.
Another thing that it's used to mean, though, is "any philosophy that optimizes for something", and there's plenty of somethings that don't result in that at all.
If I had to distil "utilitarianism", as I understand and use the term here on theMotte, down to one or two sentences, it would be the confluence of the two ideas that "Moral behavior is behavior that optimizes for pleasure" and "Moral behavior is behavior that optimizes for the absence of suffering".
...and I stand by my position that the confluence of these two ideas is fundamentally incompatible with human flourishing.
Given a robust background in game theory, I'd say that utility functions can be whatever it is that you think ought to be optimized for. If maximizing pleasure leads to "bad" outcomes, then obviously your utility function contains room for other things. If you value human flourishing, then define your utility function to be "human flourishing", and whatever maximizes that is utilitarian with respect to that utility function. And if that's composed of a complicated combination of fifty interlocking parts, then you have a complicated utility function, that's fine.
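For illustration only, here's a minimal sketch of what such a composite utility function might look like; the component names and weights are invented, and a real "fifty interlocking parts" version would just have more entries.

```python
# Sketch of a composite utility function: a weighted sum of components,
# each scoring one aspect of a world-state. Names and weights are invented
# purely for illustration.
from typing import Callable, Dict

Component = Callable[[dict], float]

def make_utility(components: Dict[str, Component],
                 weights: Dict[str, float]) -> Callable[[dict], float]:
    """Combine several partial measures of 'flourishing' into one utility function."""
    def utility(world: dict) -> float:
        return sum(weights[name] * fn(world) for name, fn in components.items())
    return utility

# Hypothetical components -- stand-ins for the "fifty interlocking parts".
components = {
    "pleasure":  lambda w: w.get("average_pleasure", 0.0),
    "suffering": lambda w: -w.get("average_suffering", 0.0),
    "autonomy":  lambda w: w.get("autonomy_index", 0.0),
}
weights = {"pleasure": 1.0, "suffering": 1.5, "autonomy": 0.8}

u = make_utility(components, weights)
print(u({"average_pleasure": 0.6, "average_suffering": 0.2, "autonomy_index": 0.7}))
```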
Now, taking this too broadly, you could classify literally everything as utilitarianism and render the term meaningless. So to narrow things down a bit, here's what I think are the broad distinguishers of utilitarianism.
1: Consequentialism. The following of specific rules, or the motivations behind actions, matters less than their actual outcomes. Whatever rules exist should exist in service of the greater good as measured by results (in expectation), and the results are the outcome we actually care about and should be measuring. A moral system that says you should always X, no matter whether it helps people or hurts people, because X is itself a good action, is non-consequentialist and thus not utilitarian. (Technically you can define a utility function that increases the more action X is taken, but we're excluding weird stuff like that to avoid literally everything counting, as stated above.)
2: Moral value of all people. All people (defined as humans, or conscious beings, or literally all living creatures, or some vague definition of intelligence) have moral value, and the actual moral utility function is whatever increases that for everyone (you can define this as an average, or a sum, or some complicated version that tries to avoid repugnant conclusions). The point being that all the people matter and you don't define your utility function to be "maximize the flourishing of Fnargl the Dictator". And you don't get to define a subclass of slaves who have 0 value and then maximize the utility of all of the nonslaves. All the people matter.
3: Shut up and multiply. You should be using math in your moral philosophy, and expected values. If you're not using math you're doing it wrong. If something has a 1% chance of causing 5300 instances of X, then that's approximately 53 times as good/bad as causing 1 instance of X (depending on what X is and whether multiple instances synergize with each other); a toy version of this calculation is sketched just after this list. If you find a conclusion where the math leads to some horrible result, then you're using the math wrong, either because you misunderstand what utilitarianism means, you're using a bad utility function, or your moral intuitions themselves are wrong. If you think that torturing someone for an hour is worse than 3↑↑↑3 people getting splinters, it's because your tiny brain can't grasp the Lovecraftian horror of what 3↑↑↑3 means.
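Here's that expected-value arithmetic spelled out as a toy calculation; it assumes the instances of X are independent and simply additive, with no synergies.

```python
# "Shut up and multiply": a 1% chance of causing 5300 instances of X has an
# expected count of 0.01 * 5300 = 53, so in expectation it's ~53 times as
# good/bad as causing one instance (assuming instances just add up).
p = 0.01           # probability the outcome occurs
n = 5300           # instances of X if it does occur
expected = p * n   # expected number of instances

print(expected)        # 53.0
print(expected / 1)    # ratio to a single certain instance: ~53x
```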
Together this means that utilitarianism is a broad but not all encompassing collection of possible moral philosophies. If you think that utilitarianism means everyone sitting around being wireheaded constantly then you've imagined a bad utility function, and if you switch to a better utility function you get better outcomes. If you have any good moral philosophy, then my guess is that there is a version of utilitarianism which closely resembles it but does a better job because it patches bugs made by people being bad at math.
Sometimes, I like to throw out my own robust background in game theory. This is one of those times. Behold!
It's the same bullshit though.
Any multi-agent game is going to be by nature anti-inductive, because "the one weird trick" stops working the moment other players start factoring it into their decisions. As such, it is the optimizing impulse itself, i.e. the idea that morality can somehow be "solved" like a mathematical equation, that ultimately presents the problem. All the jargon about qualia, QALYs, and multiplication is just that, jargon: epicycles tacked on to a geocentric model of the solar system to try and explain away the inconsistencies between your theory and the observed reality.
Better than a geocentric model of the solar system with no epicycles, which is what I'd compare most other moral philosophies to.
The over-optimization is largely solved by epistemic humility. Assume that whatever is actually good is some utility function, but that you don't know what it actually is in proper detail, so any properly defined utility function you write down might be wrong in some way, and therefore shouldn't be over-optimized to the exclusion of all else. I don't think this is somehow distinct from any other moral philosophy, all of which also lead to horrible results if taken to extremes.
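As a rough illustration of that kind of epistemic humility (a sketch, not anyone's canonical method): instead of maximizing one possibly-wrong utility function, you can score options against several candidate utility functions and avoid anything that any candidate rates as disastrous. The candidates, options, and numbers below are all invented.

```python
# Epistemic humility about the utility function, sketched as a worst-case
# aggregation over several candidate utility functions. All candidates,
# options, and scores are invented for illustration.
candidate_utilities = [
    lambda option: option["pleasure"],
    lambda option: -option["suffering"],
    lambda option: option["pleasure"] - 2 * option["suffering"],
]

options = {
    "wirehead_everyone": {"pleasure": 100.0, "suffering": 50.0},
    "moderate_policy":   {"pleasure": 10.0,  "suffering": 1.0},
}

def cautious_score(option):
    # Take the worst score across the candidates rather than the best, so
    # over-optimizing any single candidate utility function is penalized.
    return min(u(option) for u in candidate_utilities)

best = max(options, key=lambda name: cautious_score(options[name]))
print(best)  # "moderate_policy" under these made-up numbers
```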
Is that what you think you have? To repeat myself from another thread: what predictions does your model make? In what way are they better than the alternatives? If, as Yud, Singer, and Caplan allege, "properly understood, utilitarianism approaches virtue ethics", why are you even wasting your time with utilitarianism instead of trying to teach your children virtue?
I'm a moral absolutist, not a relativist. I believe that there is one actual objective morality that describes the thing we are talking about when we mean "right" and "wrong", and each action is either right or wrong in some universal sense. Moral philosophies that people come up with should be viewed as attempts at approximating this thing, not as actual competing definitions of the words "right" and "wrong". That's why, when someone comes up with an edge case where a moral philosophy extrapolates to some horrific result, the most common response is denial ("no it doesn't lead to that"), or an attempt to patch the theory, or "that result is actually good because X, Y, Z", where X, Y, Z are good in some other sense (usually utilitarian). Whereas if you had relativist morality, or mere competing definitions, the response would be "yep, I believe that that horrific result is right, because that's how I've defined 'right'".
As a result, it's perfectly logical that properly understood and robust versions of any moral philosophy should approach each other. So I could make an equal claim that, properly understood, virtue ethics approaches utilitarianism (is it virtuous to cause misery and death to people, which decreases their utility?). And if someone constructed a sufficiently robust version of virtue ethics that defined virtues in a way that massively increased utilities and covered all the weird edge cases, then I would be happy to endorse it. I'm not familiar with the specific works of Yud, Singer, or Caplan you're referring to, but if they're arguing that utilitarianism eventually just turns into standard virtue ethics then I would disagree. If they're making a claim more similar to mine then I probably agree.
But again, I think utilitarianism isn't meaningless as a way of viewing right and wrong, because people are bad at math and need to use more of it. And I think fewer epicycles need to be added to most utilitarian constructions to fix them than would need to be added to virtue ethics or other systems, so it's more useful as a starting point.
Why does it have to remain logically consistent? I haven't seen a single alternative belief system that meets that standard yet.
Because, unlike many other belief systems, they actually try to justify their positions on the basis of being "logical" and "correct".
You see, this might be because the problem is wider than utilitarianism. It's the whole of sufficiently deeply considered consequentialism, optimizing over global outcomes. Utilitarianism is an ethical decision theory; something like deontology is a set of heuristics.
Good point.