Should lifetime prisoners be incentivized to kill themselves?

The death penalty has various serious problems, and lifetime imprisonment is extremely expensive.

I guess we should be happy every time someone so thoroughly bad we want them out of society forever (like a serial murderer) does us the favour of killing themselves. Nothing of value is lost, and the justice system saves money. Right?

It seems to me it logically follows that we should incentivize such suicides. Like: 5000 dollars to a person of your choice if you're dead within the first year of your lifetime sentence, wink, wink, nudge, nudge.

It feels very wrong and is clearly outside the Overton window. But is there any reason to expect this wouldn't be a net benefit?

It's the same bullshit though.

Any multi-agent game is going to be anti-inductive by nature, because "the one weird trick" stops working the moment other players start factoring it into their decisions. As such, it is the optimizing impulse itself, i.e. the idea that morality can somehow be "solved" like a mathematical equation, that ultimately presents the problem. All the jargon about qualia, QALYs, and multiplication is just that: jargon. Epicycles tacked onto a geocentric model of the solar system to try to explain away the inconsistencies between your theory and the observed reality.

Better than a geocentric model of the solar system with no epicycles, which is what I'd compare most other moral philosophies to.

The over-optimization is largely solved by epistemic humility. Assume that whatever is actually good is some utility function, but that you don't know it in proper detail, so any utility function you explicitly write down might be wrong in some way, and therefore shouldn't be over-optimized to the exclusion of all else. I don't think this is somehow distinct from any other moral philosophy; they all lead to horrible results if taken to extremes.
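Here is a toy numerical sketch of that over-optimization worry (my own illustration, not part of the comment above; the model and all numbers are assumptions): if the utility function you write down equals the true utility plus some independent error, then whichever option scores best on paper tends to be one whose error happens to be large and positive, and the more aggressively you search, the more of the apparent win is error rather than real value.

```python
# Illustrative sketch only: a hypothetical toy model of the "optimizer's curse".
# Assumptions: true utility of each option ~ N(0, 1); the written-down utility
# function reports the true utility plus independent N(0, noise_sd) error.
import random

random.seed(0)

def selection_gap(n_options, noise_sd=1.0, trials=2000):
    """Average amount by which the paper score of the top-ranked option
    overstates its true utility."""
    gap = 0.0
    for _ in range(trials):
        true = [random.gauss(0, 1) for _ in range(n_options)]
        paper = [u + random.gauss(0, noise_sd) for u in true]
        best = max(range(n_options), key=lambda i: paper[i])
        gap += paper[best] - true[best]
    return gap / trials

for n in (2, 10, 100, 1000):
    print(f"{n:>4} options searched -> winner overstated by ~{selection_gap(n):.2f}")
```

The gap grows with the size of the search, which is the quantitative version of "your written-down utility function might be wrong, so don't optimize it to the exclusion of all else."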

Aren’t you tired of accusing rationalists of not caring about the things they care the most about?

Is that what you think you have? To repeat myself from another thread: what predictions does your model make? In what way are they better than the alternatives? If, as Yud, Singer, and Caplan allege, "properly understood, utilitarianism approaches virtue ethics," why are you even wasting your time with utilitarianism instead of trying to teach your children virtue?

I'm a moral absolutist, not a relativist. I believe there is one actual objective morality that describes the thing we are talking about when we mean "right" and "wrong", and that each action is either right or wrong in some universal sense. Moral philosophies that people come up with should be viewed as attempts at approximating this thing, not as competing definitions of the words "right" and "wrong". That's why, when someone comes up with an edge case where a moral philosophy extrapolates to some horrific result, the most common responses are denial ("no, it doesn't lead to that"), an attempt to patch the theory, or "that result is actually good because X, Y, Z", where X, Y, and Z are good in some other sense (usually utilitarian). Whereas if morality were relativist, or just a matter of definitions, the response would be "yep, I believe that horrific result is right, because that's how I've defined 'right'".

As a result, it's perfectly logical that properly understood and robust versions of any moral philosophy should approach each other. So I could make an equal claim that, properly understood, virtue ethics approaches utilitarianism (is it virtuous to cause misery and death to people, decreasing their utility?). And if someone constructed a sufficiently robust version of virtue ethics that defined virtues in a way that massively increased utility and covered all the weird edge cases, I would be happy to endorse it. I'm not familiar with the specific works of Yud, Singer, or Caplan you're referring to, but if they're arguing that utilitarianism eventually just turns into standard virtue ethics, then I would disagree. If they're making a claim more similar to mine, then I probably agree.

But again, I think utilitarianism isn't meaningless as a way of viewing right and wrong, because people are bad at math and need to use more of it. And I think fewer epicycles need to be added to most utilitarian constructions to fix them than would need to be added to virtue ethics or other systems, so it's more useful as a starting point.