Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?
This is your opportunity to ask questions. No question too simple or too silly.
Culture war topics are accepted, and proposals for a better intro post are appreciated.
If you have a secular ethical framework that is not utilitarianism or something utilitarian-adjacent (e.g. consequentialism), what is it? I’m having a difficult time imagining a system that can’t be understood through some broadly conceived utilitarian underpinning.
Rule-consequentialism is reasonably close to deontology in actual practice. Kant is (widely considered to be) a deontologist, but his formulation of the golden rule could pass for R-C.
So if you can understand R-C, then you're most of the way to Kant.
The first thing I can think of is that utilitarianism doesn’t have much to say about what happiness actually is beyond subjectively defined well-being. It would seem hard for a utilitarian to say that a state where everyone deems themselves happy should not be pursued, no matter what this happy life actually consists of, but Western philosophy tackles this question very early on (I’ll try and find the quote from Socrates where he flat out denies that the interlocutor claiming to be happy is really happy). If someone thinks that true happiness requires certain prerequisites (say freedom from ignorance or a well moderated character), then schemes for promoting happiness which have the force of moral obligation under utilitarianism can be dismissed as misguided, shallow or evil.
As a thought experiment you could imagine a world where technology has granted the ability to shape the wants of humanity such that everyone can attain maximum subjective well-being. The catch is that this is achieved by a numbing of the feelings that make man dissatisfied with who he is and his lot in life. The current mix of virtues and vices will become all that a man could ever expect from himself and he will be satisfied.
It’s hard to see how a utilitarian would object to this, but it brings to prominence the question of “what are the proper things to want?”. Ironically, it was Mill who put this best when he said it is “better to be Socrates dissatisfied than a fool satisfied”. It seems like there is ethical import to wanting the proper things, and a person who is well ordered in this way is on an ethically superior path even if he is subjectively suffering from the difficulty of it.
My framework tends to be a sort of virtue ethics derived from a mix of Aristotelian teleology and Xunzi's atheistic strain of Confucianism, further reinforced by a sort of "natural law" theory with emphasis on the concept of evolutionary adaptations. In particular, beyond Xunzi himself, I find Alexander Eustice-Corwin's "Confucianism After Darwin" particularly insightful.
I don't really live by this system, but for several years now I've believed all ethical frameworks are bunk. That is, for all frameworks, if you keep asking "But why do you believe that's good?" you eventually arrive at "Well, it just is, okay!" or "Because God says so!" without sufficient proof that God actually exists.
But I don't think that means we should be pure nihilists for whom no action is any better than any other. Just because we don't know of any ethical framework with actual evidence behind it does not mean that no such framework exists. So I think the only ethical thing to do, in a world where we don't know what is ethical, is to search for what is ethical.
And it's possible that we can discover what is ethical through the standard philosophy we've been doing since Ancient Greece. But I think it's more likely that any new breakthroughs will come from breakthroughs in physics and mathematics. So in practice, the most ethical actions are whatever most quickly leads to our total understanding of physics and mathematics.
You might ask, what makes me think there's any possibility that mathematics and physics could lead to ethical knowledge? What makes those spaces better to search than just choosing a random spot in the ground, digging, and hoping I somehow find a note with a complete explanation? It's that physics and mathematics have often found knowledge I would have thought unknowable, and have proven things true beyond a shadow of a doubt. The nature of atoms, the nature of galaxies, imaginary numbers: all sorts of things that are true, and that we can use to real effect in the world, like making planes fly or creating nigh-unbreakable codes, have been found with physics and math. So while it may seem impossible that they could discover an ethical framework, I don't consider "seeming impossible" a guarantee that it is impossible.
But in real life I don't want to be too weird so I just live as a rules-based utilitarian.
Consequentialism is generally misunderstood to mean "consequences matter." Really it means "only consequences matter." Pretty much all other ethical systems still care about consequences to some extent.
I think consequentialism is self-evidently wrong--why should an action's morality not take into account the mindset of the actor? If someone tries to kill you, but happens to stab you in a tumor and save your life, does their action become ethical? If someone genocides an entire race out of hatred, but due to the butterfly effect this ends up saving n+1 lives, does this render their action ethical? Actions must be judged, not based on their consequences, but based on their expected consequences, and since humans are not omniscient this necessarily leads to ethical systems such as deontology.
There is a difference between saying "The world is incidentally a better place because Alice stabbed Bob in a tumor" (what Utilitarianism is happy to say) and "we shouldn't punish Alice for stabbing Bob" (what Utilitarianism does not say).
This is because Utilitarianism doesn't justify punishment on the basis of right/wrong or, indeed, even intent. It justifies it on whether the punishment would increase utility (yes, shocking).
It happens to be true, in this universe, that punishing based on intent often yields better societies than punishing based on results. But if you lived in an upside-down universe (or were governing a weird species, say one that didn't arise from evolution) where punishing Alice increased her propensity for violence, then Utilitarianism gives you the tools to realize your moral intuitions are leading you astray: that the deontological rules that work sensibly in our universe would be actively detrimental if applied to the other one.
So no, punishing based on intent doesn't necessarily lead away from consequentialism, because it's plain that we live in a world where punishing people who merely try to inflict harm (and mitigating punishment when the perpetrator's intent is good) is a more effective social policy (or parenting policy, etc.) than ignoring people's intentions.
Sure, but I didn't mention punishment, what I mentioned was morality. Morality has nothing to do with game theory or with the results of what society decides to call moral vs immoral. Something is either moral or it isn't.
A pure utilitarian view would generally decide an action's morality based on the consequences, whatever the intent.
I disagree pretty strongly with that -- I think that "Bob is a moral person" and "people who are affected by Bob's actions generally would have been worse off if Bob's actions didn't affect them" are, if not quite synonymous, at least rhyming. The golden rule works pretty alright in simple cases without resorting to game theory, but I think game theory can definitely help in terms of setting up incentives such that people are not punished for doing the moral thing / incentivized to do the immoral thing, and that properly setting up such incentives is itself a moral good.
To be clear, there's morality, which is sort of the end goal ideal state we're working towards, and there's game theory/policy, which is how we get to that ideal state.
Penalizing murder may reduce murder, or may increase it for some reason, but either way has very little to no bearing on whether murder is immoral.
Punishing intent happens to work, and if it didn't then I'd probably agree that we shouldn't punish intent, but either way I do think intent is one ingredient of morality.
Game theory can help people be moral, sure, but it can't actually define morality.
I think the relationship between game theory and morality is more like the one between physics and engineering. You can't look at physics alone to decide what you want to build, but if you try to do novel engineering without understanding the underlying physics you're going to have a bad time. Likewise, game theory doesn't tell you what is moral and immoral, but if you try to make some galaxy-brained moral framework and you don't pay attention to how your moral framework plays out when multiple people are involved, you're also going to have a bad time.
Though in both cases, if you stick to common-sense stuff that's worked out in the past in situations like yours, you'll probably do just fine.
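To make that concrete, here's a minimal sketch, assuming a standard symmetric prisoner's dilemma; the payoff numbers and the defection_penalty knob are just illustrative, not anything anyone here committed to. It checks whether "everyone does the nice thing" is actually a stable outcome under a given incentive structure, which is the kind of question game theory can answer even though it can't tell you that cooperating is the moral thing.

```python
# Toy, purely illustrative check: is mutual cooperation stable under a given
# incentive structure? Standard symmetric prisoner's dilemma payoffs; the
# numbers and the defection_penalty knob are made up for illustration.

BASE_PAYOFF = {            # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def payoff(my_move, their_move, defection_penalty=0):
    """Row player's payoff, optionally minus a penalty for defecting."""
    base = BASE_PAYOFF[(my_move, their_move)]
    return base - defection_penalty if my_move == "D" else base

def best_response(their_move, defection_penalty=0):
    """The move that maximizes my payoff against a fixed opponent move."""
    return max("CD", key=lambda m: payoff(m, their_move, defection_penalty))

def mutual_cooperation_is_stable(defection_penalty=0):
    """Symmetric game: (C, C) is an equilibrium iff C is a best response to C."""
    return best_response("C", defection_penalty) == "C"

print(mutual_cooperation_is_stable())                     # False: defecting pays
print(mutual_cooperation_is_stable(defection_penalty=3))  # True: the incentive change makes cooperation stable
```

The numbers don't mean anything; the point is that the same "be nice" rule can be an equilibrium under one incentive structure and a sucker's bet under another, and that's the part worth checking before rolling out a framework.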
Yeah, I like that comparison more.
You say consequentialism is self-evidently wrong, and then you define morality as “the end goal ideal state we're working towards”? And you say you support punishing intent because “it works”, i.e. because of its consequences.
It seems to me you agree with the underlying framework of consequentialism; you just insist that the label “morality” apply simultaneously to both states and actions, whereas Utilitarians throw an InvalidTypeError for the latter.
If you agree that morality is the end state we want to achieve, how can you apply the same word to actions and not have it be about achieving that state?
I agree that consequences matter, but don't believe they're the only thing that matter, so I disagree with consequentialism. That was the whole point of my original comment.
Pretty much all deontologists, virtue ethicists, etc. will agree that good governance is important. Very few will assert that consequences are entirely irrelevant to morality.
“the end goal ideal state we're working towards” was poorly worded. I just meant to gesture vaguely towards morality and terminal values.
Consequentialism does not demand ignoring intent, because intent is frequently important and not treating it as such would lead to bad consequences in many cases.
That’s the great thing about consequentialism: when it leads to bad consequences you can adjust it to lead to better ones.
Your own thought experiments bear this out. Moral uncertainty and the fundamental randomness and contingency of future events plague all systems.
Rule utilitarianism, or something like Cowen’s “economic growth plus human rights”, attempts to strike a balance between baseline rules and considering the effects of any given act. The US Constitution sets forth rules, limitations, and rights in a framework of promoting the general welfare, directly in line with rule utilitarianism.
If your god inspired the US Constitution, he’s clearly a fan of rule utilitarianism.
I am a deontologist primarily because I do not think that I'm smart and prescient enough to compute the consequences of my actions: many times actions that I considered immoral brought me success [1], and vice versa, actions that I considered good were rejected [2]. My personal rules are generally derived both from my own experiences and from "historical" experiences, both fictional and not, by looking at my emotions and reactions to those second-hand experiences. The consequences of these rules are irrelevant because, as I said, I do not think that I can predict which things will be good for me, only which have been. And even then, it is possible that what was good for me will not be in the future, but at least it is more probable.
[1] During a Physics Lab I started treating everyone like crap (insulting them, overworking them) because they didn't meet my expectations. While I regretted it, at the end of the semester my Lab mates thanked me for my "leadership", go figure.
[2] It has happened at least three times that I can remember: I volunteer to help someone and they reply that the only reason I want to help them is that I want to brag about how much more capable than them I am. Go. Figure.
With what framework do you establish your deontological rules? How are you smart enough to establish them?
The optimal level of asshole leadership is very far from zero, but workers tend to only tolerate it willingly when there’s a cult of personality.
If the term "utilitarianism" can be extended to cover any ethical system that a reasonable person might adopt, then we run the risk of making the term vacuous.
Suppose we have a person who has to choose between two mutually exclusive options. He can spend his life becoming a great novelist, or he can spend his life working in tech and making a lot of money to spend on malaria nets for Africans. If he becomes a novelist, his work will be regarded by future generations of literature aficionados as one of the pivotal novels of the 21st century, although it will have limited impact outside of academic circles. If he instead spends his life buying malaria nets, he will save some non-trivial number of lives in the DRC (although of course the future impact of these individuals is impossible to calculate).
According to the brand of utilitarianism endorsed by Peter Singer and a number of Effective Altruists, it would be morally blameworthy of the person to not spend his life buying malaria nets and saving other people. I on the other hand think he is equally free from a moral perspective to choose either option, and in fact I'd be inclined to say that becoming a great novelist is the better option, because it would be a shame to waste a genuinely unique talent. How can utilitarianism accommodate my position?
You could say "well you're still basing your decision off of what you think maximizes The Good, and utilitarianism is just maximizing The Good, so it's still utilitarian". But the claim that we should pursue The Good is uncontroversial, perhaps even tautological. The purpose of a moral system is to describe, in explicit terms, what The Good is in the first place.
There is room for both. This scenario also presupposes accurate forecasting of outcomes. There is no way to know if you'll be able to write the seminal novel of a generation or even be a great programmer. Your position only exists in the fictitious past. If your choice is between working and saving some people and doing nothing, then you should work and save some people. That is something we can predict the outcome of.