
Why Rule Utilitarianism Fails as a Moral Stance

By rule utilitarianism, I will here mean, roughly speaking, the moral stance summarized as follows:

  1. People experience varying degrees of wellbeing as a function of their circumstances; for example, peace and prosperity engender greater wellbeing for their participants than war and famine.
  2. As a matter of fact, some value systems yield higher levels of human wellbeing than others.
  3. Moral behavior consists in advocating and conforming to value systems that engender relatively high levels of human wellbeing.

Varieties of this position are advocated in academic philosophy, for example, by Richard Brandt, Brad Hooker, and R.M. Hare -- and in popular literature, for example, by Sam Harris in The Moral Landscape and (more cogently, in my opinion) by Steven Pinker in the final chapter of his book Enlightenment Now. I do not believe that rule utilitarianism cuts much ice as a moral stance, and this essay will explain why I hold that opinion.

1. The problem of speciesism

In his book Enlightenment Now, psychologist Steven Pinker hedges on proposition #3 of the utilitarian platform, which I will call the axiom of humanism. Pinker writes, "Despite the word's root, humanism doesn't exclude the flourishing of animals" [p. 410]. Well, professor, it sho' 'nuff does exclude the flourishing of animals! If the ultimate moral purpose is to promote human flourishing, then the welfare of non-human animals is excluded from consideration in the ultimate moral purpose. To be charitable, I suppose Pinker means that it is consistent with (3) that there is a legitimate secondary moral purpose in maximizing the wellbeing of animals. However, (a) I cannot be sure that he means that, and (b) it is unfortunate that he did not say what he meant, especially since this point is central to a weakness in the humanist position.

In The Moral Landscape, and in his TED talk on the same thesis, Sam Harris spends most of his time defending propositions (1) and (2) of the utilitarian position -- which is unfortunate, because I believe they are self-evident. To his credit, Harris does briefly address the issue of "speciesism" (that is, assigning greater moral weight to the wellbeing of humans than to that of other animals), saying, "If we are more concerned about our fellow primates than we are about insects, as indeed we are, it's because we think they're exposed to a greater range of potential happiness and suffering." What Harris does not do is to place the range of animal experience on the scale with that of human experience to compare them, or give us any reason to think the bottom of the scale for humans is meaningfully (if at all) above the top of the scale for other animals. Moreover, he gives no reason why we ought to draw a big red line at some arbitrary place on that scale, and write "Not OK to trap, shoot, eat, encage, or wear the skins of anything above this line." Perhaps that line reaches well down into the animal kingdom, and perhaps the line falls above the level of some of our fellow men. As a matter of ultimate concern, I cannot imagine an objective reason why it should not.

Perhaps Pinker and Harris don't spend much effort on the issue of speciesism because it is uncontroversial: of course human wellbeing has greater moral gravity than animal wellbeing, and the exact details of why and how much are not a pressing moral concern of our day. I submit, on the contrary, that accounting for speciesism is one of the first jobs of any moral theory. I am not saying that perhaps we ought to start eating our dim-witted neighbors, or that we should all become vegans; I am saying that if you purport to found a moral theory on objective reason then you should actually do it, and that how that theory accounts for speciesism is an important test case for it.

To wit, either animals count as much as humans in our moral calculus, or they do not. If animals count as much as humans, then most of us (non-vegetarians) are in a lot of trouble, or at least ought to be. On the other hand, if animals don't count as much as humans, then the reason they don't, carried to its logical conclusion, is liable to be the reason that some humans don't count as much as others. Abraham Lincoln famously made a similar argument over the morality of slavery:

You say A is white, and B is black. It is color, then; the lighter, having the right to enslave the darker? Take care. By this rule, you are to be a slave to the first man you meet, with a fairer skin than your own. You do not mean color exactly? — You mean whites are intellectually the superiors to blacks, and therefore have the right to enslave them? Take care again. By this rule, you are to be slave to the first man you meet, with an intellect superior to your own. But say you, it is a question of interest; and, if you can make it your interest, you have the right to enslave another. Very well. And if he can make it his interest, he has the right to enslave you. [From Lincoln's collected notes; date uncertain]

In the spirit of Lincoln's argument, for example, do non-humans count less, as Harris claims, because they allegedly have less varied and vivid emotional experience? It is not clear to me that they do in the first place, or, if they do, that they do by a degree sufficient to make cannibalism forbidden and vegetarianism optional. Do non-humans count less because they are less intelligent? In that case, the utilitarian is obliged to explain why the line is drawn just where it is, and, indeed, why, in the best of all possible worlds, deep fried dumbass shouldn't be an item on the menu at Arby's.

The fact that Pinker and Harris do not resolve the issue of speciesism is important -- not because their conclusions on the matter are or ought to be controversial, but because it is the first sign that they are not deriving a theory from first principles, but instead rationalizing the shared common sense of their own culture.

2. And who is my neighbor?

John Lennon famously sang, "All you need is love". Perhaps love is all you need, but, as Bo Diddley famously sang, the hard question remains: Who do you love?

Suppose we were to grant (which I do not) that it is objectively evident that the wellbeing of humans is categorically more valuable than that of other animals. The fact remains that the wellbeing of some humans might count more than that of others, from my perspective, as an ultimate moral concern. Indeed, some humans might not count at all except as targets and trophies -- and many cultures have taken this view unashamedly. For example, the opening stanza of the Anglo-Saxon poem Beowulf extols the virtues of the Danish king Shield Sheafson for the laudable accomplishment of subjugating not just some but all of the neighboring tribes, and for driving them in terror -- not from their fortresses, not from their castles, but from their bar stools:

So. The Spear-Danes in days gone by
And the kings who ruled them had courage and greatness.
We have heard of those princes’ heroic campaigns.
There was Shield Sheafson, scourge of many tribes,
A wrecker of mead-benches, rampaging among foes.
This terror of the hall-troops had come far.
A foundling to start with, he would flourish later on
As his powers waxed and his worth was proved.
In the end each clan on the outlying coasts
Beyond the whale-road had to yield to him
And begin to pay tribute. That was one good king.
[translation by Seamus Heaney, emphasis added]

The literature, monuments, and oral traditions of the world are replete with examples of this sentiment, but for the sake of space I will give just one more example. The inscription on a monument honoring the Roman general Pompey on his 45th birthday reads,

Pompey, the people’s general, has in three years captured fifteen hundred cities, and slain, taken, or reduced to submission twelve million human beings.

In the West today, people tear down monuments claiming that the men they honor were enslavers or imperialists -- but evidently other cultures put up monuments to glorify their leaders for those very characteristics. So it is not a no-brainer -- that is to say, not at all self-evident -- that the welfare of members of foreign tribes ought to play any role in our moral calculations at all -- or indeed that the subjugation and exploitation of foreign tribes is not a positive moral good from our own perspective. That is how the Romans and the Saxons saw it. If you or I had been born into those cultures we would probably have felt the same way -- and so, I dare say, would Sam Harris and Steven Pinker.

I imagine the utilitarian response would be that we are all better off if we take the modern Western view (of course) that exploiting foreigners is inherently immoral. Sam Harris asks us to imagine two people who are forced to choose between (a) cooperating to build a better society, or (b) smashing each other in the face with rocks. If those are the options, I would certainly choose (a) -- but those are not the options. Harris left out the salient option (c): I smash you in the face with a rock and take your wallet. This is not a straw man, and it has been the position of many honest, intelligent, and upright people. If one suggested to Alexander the Great -- a student of Aristotle -- that "we all" are better off if "we all" lay down our weapons, he might ask, "Who is we all?" He might say that he cares little for what makes barbarians (that is, non-Greeks) better off, any more than he cares what makes apes or pigs better off -- and that your way of drawing the circle of love, so as to include all humans but exclude apes and pigs, is no better than his. I imagine the utilitarian response to that would be, "You fiend!", and that the Greek response to that would be "Get out of my sight, you womanish punk". So, from a logical perspective, who won the debate? Nobody did.

Pinker almost concedes this, writing "If there were a callous, egoistic, megalomaniacal sociopath who could exploit everyone else with impunity, no argument could convince him he had committed a logical fallacy" [Enlightenment Now, p. 412] -- but it is not clear whether he means that the sociopath could not be convinced because he has a thick skull, or whether he is actually not committing a logical fallacy. I emailed Dr. Pinker for a clarification, and he was kind enough to reply, acknowledging that the megalomaniacal sociopath is actually not committing a logical fallacy. Note that (1) this is a second example of Pinker's writing being vague exactly where the argument is weakest (the first being the issue of speciesism), and (2) a Roman general is not a megalomaniacal sociopath or anything of the sort, and whatever we claim to be objectively self-evident ought to be evident to him as well as us.

In fact, Pinker considers something like the argument with a Roman general in his book. His imagined response (addressed, in his version, to Nietzsche, a Romanophile, rather than to an actual Roman) is as follows:

I [Steven Pinker] am a superman, hard, cold, terrible, without feelings and without conscience. As you recommend, I will achieve glory by exterminating some chattering dwarves. Starting with you, shortly. And I might do a few things to that Nazi sister of yours, too. Unless you can think of a reason why I should not [Enlightenment Now, p. 446].

But now that Professor Pinker himself has switched gears, from the strictly logical to the ad baculum, I submit that he is obviously bluffing, and the Romans were not bluffing at all. Someone would win that debate -- and the Roman argument might have something to do with a crucifix.

In any case, an allegedly universal and self-evident regard for "human wellbeing" leaves unanswered the central question of morality: in the words of the great moral philosopher Bo Diddley, Who do you love? Speciesism is only the thin end of the wedge: not only do I generally value the wellbeing of people more than that of cows and rabbits, I also value the wellbeing of my people more than I value that of other people -- and, in my view, rightly so. Thus, premises (1) and (2) of the humanist/utilitarian position do not imply conclusion (3) because, as an ultimate concern, I value the wellbeing of people in my identity group over that of people outside my identity group, and it is only right that I should do this. Thus, maximizing human wellbeing -- in the sense where all humans count equally -- is not something I am actually interested in. Moreover, I do not feel it is something I ought to be particularly interested in.

Now in response to this, you might say that I am a scoundrel and a villain. In response to that, I say that's just, like, your opinion, man. Philosophers such as Peter Singer, and popular writers like Steven Pinker, insist that I must be impartial between the wellbeing of my people on the one hand, and that of homo sapiens at large on the other. For example, Pinker writes, "There is nothing magic about the pronouns I and me that would justify privileging my interests over yours or anyone else's" [Enlightenment Now, p. 412]. To that I reply, why on Earth would I need magic, or even justification, to privilege my own interests over yours as a matter of ultimate concern? Of course I privilege my interests over yours, and almost certainly vice versa. In fact, unless you are a friend of mine, not only do I privilege my own interests over yours, I privilege my dog's interests over yours. For example, if my dog needed a life-saving medical procedure that cost $5000, I would pay for it, but if you needed a life-saving medical procedure that cost $5000, I probably would not pay for it -- and if an orphan from East Bengal needed a life-saving medical procedure that cost $5000 (which one probably does at this very moment), I would almost certainly not pay for it, and neither would you (unless you are actually paying for it).

I must not be alone in caring more about my dog than I do about a random stranger. The average lifetime cost of responsibly owning a dog in the United States is around $29,000 -- while, according to the GiveWell organization, the cost of saving the life of one unfortunate fellow man at large by donating to a well-chosen charity is around $4500 [source]. If those figures are correct, it means that if you own a dog, then you could have allocated the cost of owning that dog to save the lives of about six people on average (and if the figures are wrong, something like that is true anyway). Now, Steven Pinker himself once tweeted that dog ownership comes with empirically verifiable psychological benefits for the dog owner. In his excitement over those psychological benefits, I suppose, he neglected to mention that it also comes with the opportunity cost of six (or so) third-world children dying in squalid privation. If Pinker really believes, as he claims to believe, that "reducing suffering and enhancing the flourishing of human beings is the ultimate moral purpose," he sure isn't selling it very hard. But then again, no one in their right mind is.
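For what it's worth, the "about six people" figure is just the ratio of the two estimates above, both of which are rough; a sketch of the arithmetic:

```python
# Opportunity-cost arithmetic using the essay's (rough) estimates.
lifetime_dog_cost = 29_000    # estimated lifetime cost of responsible dog ownership, USD
cost_per_life_saved = 4_500   # estimated cost to save one life via a well-chosen charity, USD

lives_foregone = lifetime_dog_cost / cost_per_life_saved
print(round(lives_foregone, 1))  # prints 6.4
```

Both inputs are estimates, so the output is best read as "roughly half a dozen," not as a precise count.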

Not only do I actually value my dog's wellbeing above that of a random human stranger, I submit it is only right that I should. That is to say, it would be immoral of me not to privilege my dog's wellbeing over that of a human stranger. Indeed, if a man let his family dog pass away, when he could have saved the dog's life for a few thousand dollars, and he spent that money instead to save the life of a foreigner whom he had never met, then, all else being equal, I would prefer not to have him as a countryman, let alone a friend.

In one talk, Pinker has said, "You cannot expect me to take you seriously if you are espousing moral rules that privilege your interests above mine." But, professor, if I have more and better men on my side, it does not matter whether you take me seriously; it only matters whether they do. Again, I am not saying that it would be right to take advantage of that situation of having more and better men on my side; I am saying that Pinker and Harris's egg-headed argument yields no objective reason why I shouldn't.

3. Degrees of neighborship

One of Sam Harris's favorite examples to illustrate the objectivity of values is "the worst possible misery for everyone", which he claims is a self-evidently and objectively bad situation. I think this is an egregious misdirection, because moral questions are not about the choice between win-win and lose-lose. If iron axes work better than bronze axes, for everyone, at every level, all things considered, then by all means let us use iron -- but that is not a moral decision. I have a moral decision to make when, for example, (1) I have the opportunity to gain at your expense, all things considered, and (2) I don't care about your wellbeing as much as I care about mine.

But not all moral tradeoffs are one-on-one. Human nature being what it is, groups at all levels split into factions which then try to have their way with each other -- from nuclear families, to PTA boards, to political parties, to nations, to the whole of humanity. A code of conduct that is good for my community at one level might subtract from the good of a smaller, tighter community of which I am also a member -- so, which of those codes should (should, in the moral sense) I advocate and adhere to?

As a matter of fact, the wellbeing of different groups, of which I am a member, often trade against each other in moral decisions -- and the purpose of moral precepts, largely if not chiefly, is to manage tradeoffs between our concerns for the welfare of different groups, with different degrees of shared identity, of which we are a common member. What looks like the same group from afar, or in one conflict, may look like different ethnicities or religions when you zoom in, or look at a different conflict. This is nothing new. For example, the Book of Joshua, written at the latest around 600 BC, records nations being formally subdivided into hierarchies of tribes, clans, and families:

So Joshua got up early in the morning and brought Israel forward by tribes, and the tribe of Judah was selected. So he brought the family of Judah forward, and he selected the family of the Zerahites; then he brought the family of the Zerahites forward man by man, and Zabdi was selected. And he brought his household forward man by man; and Achan, son of Carmi, son of Zabdi, son of Zerah, from the tribe of Judah, was selected.

And these various hierarchies were well known to endure conflicts of interest, if not outright enmity, at every level of the hierarchy, from civil war between tribes (and coalitions of tribes) within a nation, right down to nuclear families:

Then the men of Judah gave a shout: and as the men of Judah [Southern Israel] shouted, it came to pass, that God smote Jeroboam and all [Northern] Israel before Abijah and Judah. [2 Chronicles 13:15]
And Cain talked with Abel his brother: and it came to pass, when they were in the field, that Cain rose up against Abel his brother, and slew him. [Genesis 4:8]

So what your "ingroup" looks like depends on the particular conflict we are looking at, and the level of structure at which the conflict takes place. These conflicts can be life and death at all levels, and someone who is in your ingroup during a conflict at one level may be in the outgroup in a conflict at another level on another occasion. This phenomenon is a major theme -- arguably the major theme -- of the oldest written documents that exist on every continent where writing was discovered. Thus it is a mistake in moral reasoning to conceptualize a code of conduct that benefits "the community": each person is, after all, a member of multiple overlapping communities of various sizes and levels of cohesion, whose interests are frequently in conflict with each other.

Now hear this: when we are looking for a win-win solution that benefits everyone at all levels without hurting anyone at any level of "our community", this is an engineering problem, or a social engineering problem, but not an ethical problem. It is the win-lose scenarios, which trade the wellbeing of one level of my community against that of another, that fall into the domain of ethics. Of course we are more concerned for the welfare of those whose identities have more in common with our own -- but how steep should the drop-off be as a function of shared identity? Should it converge to zero for humanity at large? Less than zero for our enemies? How about rabbits and cows? As an ultimate concern, who do you love, and exactly how much, when it comes to decisions that trade between the wellbeing of one level of your community and another (self, family, clan, tribe, nation, humanity, vertebrates at large)? That is a central problem, if not the central problem, of ethical discourse -- and it is a question about which utilitarianism has nothing to say, and about which humanism begs the question from the outset.

4. No Moral Verve

At the end of the day, the conversation on ethics should come to something more than chalk on a board. When the chalk dust settles, if we have done a decent job of it, we should bring away something that can inspire us to rise to the call of duty when duty gets tough. The fatal defect of humanism in this regard is that practically no one -- neither you, nor I, nor Steven Pinker, nor Sam Harris, nor John Stuart Mill himself -- actually gives a leaping rat's ass about the suffering or the flourishing of homo sapiens at large. This sentiment was eloquently voiced by Adam Smith, and his statement is worth quoting at length:

Let us suppose that the great empire of China, with all its myriads of inhabitants, was suddenly swallowed up by an earthquake, and let us consider how a man of humanity in Europe, who had no sort of connection with that part of the world, would be affected upon receiving intelligence of this dreadful calamity. He would, I imagine, first of all, express very strongly his sorrow for the misfortune of that unhappy people, he would make many melancholy reflections upon the precariousness of human life, and the vanity of all the labours of man, which could thus be annihilated in a moment. He would too, perhaps, if he was a man of speculation, enter into many reasonings concerning the effects which this disaster might produce upon the commerce of Europe, and the trade and business of the world in general. And when all this fine philosophy was over, when all these humane sentiments had been once fairly expressed, he would pursue his business or his pleasure, take his repose or his diversion, with the same ease and tranquility, as if no such accident had happened. [Smith (1759): The Theory of Moral Sentiments]

Here is the point. As C.S. Lewis wrote, "In battle it is not syllogisms [logic] that will keep the reluctant nerves and muscles to their post in the third hour of the bombardment" ["The Abolition of Man"]. Lewis was right about this -- and, while we are making a list of things that do not inspire people to rise to the call of duty when duty gets tough, we can include on that list any concern they might claim to have for the welfare of human beings at large.

5. Mumbo-Jumbo

Be careful what you do,
Or Mumbo-Jumbo, God of the Congo,
And all of the other
Gods of the Congo,
Mumbo-Jumbo will hoo-doo you,
Mumbo-Jumbo will hoo-doo you,
Mumbo-Jumbo will hoo-doo you.
[Vachel Lindsay: The Congo]

In a discussion with Alex O'Connor, Sam Harris invites us to imagine two people on an island, who can choose either to cooperate and build a beautiful life together, or to start smashing each other in the face with rocks. It is immoral to smash someone in the face with a rock, he infers, because if we all started smashing each other in the face with rocks, we would all be miserable. Indeed, I agree that if I had a choice between everyone smashing everyone in the face with a rock, and nobody smashing anyone in the face with a rock, I would certainly choose nobody smashing anyone. But in reality, I do not get to choose whether everybody smashes everyone in the face with a rock. The decision I actually get to make, in which my ethics actually plays a role, is whether I smash you in the face with a rock, and take your wallet in the bargain, which, say, has $5,000 in it. With that, it is suddenly not so clear why that wouldn't maximize the satisfaction of my ultimate concerns -- especially if I value $5,000 more than I value your life (which again, unless you are a friend of mine, I unashamedly do).

On top of ignoring the multilayered, competitive nature of the human condition, and on top of having no practical motivational force even for its professed adherents, another problem with rule utilitarianism is the voodoo it invokes to connect (a) performing a particular action with (b) what would happen, counterfactually, if everyone followed the salient rule that permits the action. For a simple example, let us imagine that I steal a tootsie roll from a convenience store. In this scenario, let us imagine that I am poor and the convenience store owner is rich, and that the tootsie roll does me more good than it would have done the store owner if it had remained in his store. In the world of mystical utilitarian counterfactuals, if everyone stole everything all the time, then everyone would clearly be worse off -- but in the actual world, my stealing a bite of candy is not going to cause everyone to steal everything all the time, or, probably, cause anyone else to steal anything else ever. Even if an individual tootsie roll pilferage did have some minuscule ripple effect on society, I would still expect the material impact on me personally to be less than the cost of the candy I stole. To put it more generally, when someone steals something and gets away with it, they do not reasonably expect to lose net income as a result. So, why on Earth should I care about what would happen in the sci-fi scenario where everyone stole everything all the time? I cannot imagine an objective reason why I should.

Perhaps there is some deep metaphysical argument that establishes, on an objective basis, that one ought to behave in the way they wish others in "their community" to behave (if, again counterfactually, there were such a thing as "their community") -- or perhaps there is some kind of cosmic karma stipulating that what goes around invariably comes around on this Earth, but (1) I cannot imagine what that metaphysical argument would be, (2) the world doesn't look to me like it works that way, and (3) neither Pinker, nor Harris, nor Singer, nor Bentham, nor Mill actually give such metaphysical arguments, nor attempt to show that the world does work that way.

Let me repeat once again that I am not advocating nihilism here. What I am saying is this: if utilitarians claim to base their moral theory on objective reason -- indeed, if they claim to do anything other than manufacture a flimsy rationalization for the moral common sense of their own culture -- then it is precisely the Devil's advocate that they must contend with, and it seems to me they are in a hopeless position to do so.

Conclusion

When I say that utilitarianism has nothing useful to say about real-world ethical problems, I mean it. Of course one might give evidence about the impacts of some rule or policy, which might then inform whether we want to adopt that rule or policy -- but I doubt the following words have ever been uttered in a real debate over policy or ethics: I concede that your policy/rule/value-premise, if adopted, would benefit every level of our community more than mine does, but to Hell with that. The fact is that everyone prefers policies that benefit their communities when they are a win/win at every level -- whether or not they have read one fancy word of John Stuart Mill, Jeremy Bentham, Peter Singer, Sam Harris, or Steven Pinker. Thus, to the degree that utilitarianism has any force in the real world, it adds nothing to the conversation that wasn't already inherent in common sense. On the other hand, to the degree that utilitarianism is not redundant with common sense, it has no motivational force, even for its professed adherents -- especially if they own a dog.

I submit that the position of utilitarianism is not only weak, but so evidently preposterous that its firm embrace requires an act of intellectual dishonesty. I can say this without contempt, because less than ten years ago I myself espoused utilitarianism. I knew then and I know now that I was not being intellectually honest in espousing this view. To my credit, I could not bring myself to write a defense of utilitarianism, even though I tried -- because I could not come up with an argument for it that I found convincing. Yet, I presumed that I would eventually be able to produce such an argument, and I did state utilitarianism as my position, without confessing that I could not defend the position to my own satisfaction.

I further submit that programs like utilitarianism are not only mistaken but harmful. They are not just a little bit harmful, but disastrously harmful, and we can see the engendered disaster unfolding before our eyes. The problem is not that utilitarians are necessarily bad people; it is that, if they are good people, they are good people in some sense by accident: reflexively mimicking the virtues and values of their inherited traditions, while at the same time denigrating tradition, and mistaking their moral heritage for something they have discovered independently. As John Selden wrote,

Custom quite often wears the mask of nature, and we are taken in [by this] to the point that the practices adopted by nations, based solely on custom, frequently come to seem like natural and universal laws of mankind. [Natural and National Law, Book 1, Chapter 6]

The problem with subverting the actual source of our moral norms and replacing it with a feeble rationalization is this: each generation naturally (and rightly) pushes back against their inherited traditions, and pokes to see what is underneath them. If the actual source of those traditions has been forgotten, and they are presented instead as being founded on hollow arguments, the pushback will blow the house down. Sons will live out the virtues of their fathers less with each passing generation, progressively supplanting those virtues with the unrestrained will of their own flesh. That is what we are seeing in our culture today -- and impotent, ivory tower theories of morality are part of the problem.


This doesn't address the main thrust of your argument, which (to try to sum it up in less than one sentence) I think is about how proximity correlates to care, and what that says about universalist ethics, but...

Perhaps there is some deep metaphysical argument that establishes, on an objective basis, that one ought to behave the way they wish others in "their community" to behave

If you want society to follow a rule, hold to that rule and propagate that rule. If you hold to it but don't propagate it, it won't last. And if you propagate it but don't hold to it, people will eventually Notice.

Doesn't really matter what the rule is. Utilitarianism, Christianity, Nazism, whatever. And clearly other factors can be involved (like losing WWII).

Of course, if one were merely aiming for a short-term effect, like personal benefit, that doesn't apply. One might be able to fool enough of the people, enough of the time, to get away with whatever one wants.

If you want society to follow a rule, hold to that rule and propagate that rule.

The issue is that there is no general law of cause and effect that would cause society to follow the rule because I do. On the contrary, it might sometimes be the case that society will follow the rule more if I (1) break the rule, (2) keep it secret that I broke the rule, and (3) use my ill-gotten gains from breaking the rule to promulgate the rule. If you claim that could never happen, then the burden of proof is on you and best of luck. Or do you claim that secretly breaking a rule for the purpose of strengthening the rule is moral if the rule is a good one?

Do you believe, for example, that stealing a horse is immoral because it causes other people to steal other things if and when they find out about it? I don't see how it would at all. Let us suppose the following:

  1. Person A steals a horse and executes a very good plan to keep it secret, so that the horse will be presumed to have run off, and he will be presumed to have taken possession of a formerly wild horse. The thief benefits more from owning the horse than its former owner would have. Moreover, person A rides the horse to work daily, where he writes widely read and highly influential essays about the importance of the rule of law and the wrongness of theft.
  2. Person B, an influential intellectual, writes an essay about why it is morally OK to steal, because there is no such thing as private property in the first place. He writes cogently and in good faith. The essay gets ten million views and can be linked with high confidence to at least three actual thefts (suggesting that there are presumably hundreds or thousands of others inspired by it).
  3. Person C serves on a jury in a case of grand theft, and stubbornly hangs the jury because he has read the essay written by B. In the jury room, Person C argues cogently and in good faith. The accused person is not retried and goes free.

Is the immorality of A's theft mitigated by its secrecy, and the fact that it is instrumental in him promulgating anti-theft mores?

I believe that B and C have done more damage to the moral prohibition against stealing than A has. If so, should the actions of B and C be illegal, and punishable by prison terms longer than what A would serve if he had gotten caught stealing the horse?

On the contrary, it might sometimes be the case that society will follow the rule more if I (1) break the rule, (2) keep it secret that I broke the rule, and (3) use my ill-gotten gains from breaking the rule to promulgate the rule.

I don't think it could never happen, but I think it doesn't happen very often, and has a chance of severely backfiring. SBF comes to mind. I think it much more likely that someone who breaks the rule and keeps it secret, will be less likely to follow through on promulgation, and more likely to continue to break rules and keep them secret. I think that anyone who actually cares about promulgating the rule shouldn't use high-variance strategies that risk destroying everything they worked for.

Or do you claim that secretly breaking a rule for the purpose of strengthening the rule is moral if the rule is a good one?

I think that's classic self-delusion, and while it might happen to lead to a correct conclusion in some instances, the chain of thought that leads to it is corrupt.

Do you believe, for example, that stealing a horse is immoral because it causes other people to steal other things if and when they find out about it?

Not solely because, but yes, among other things it contributes to the collapse of civil society, especially if it's never punished.

Is the immorality of A's theft mitigated by its secrecy, and the fact that it is instrumental in him promulgating anti-theft mores?

Not very much, but it's better than not hiding the theft, and better than using the proceeds from the theft to do more evil. Do you disagree?

I believe that B and C have done more damage to the moral prohibition against stealing than A has.

Partly this depends on whether A ever gets caught. (SBF, again.)

If so, should the actions of B and C be illegal, and punishable by prison terms longer than what A would serve if he had gotten caught stealing the horse?

Are you now, or have you ever been, a member of an organization that advocates for the violent overthrow of the government of the United States of America?

I don't think we humans have a good track record at using laws to propagate virtue. Especially when it comes to people acting in good faith who we think happen to be wrong. Are you suggesting that everything bad should be illegal, and that the law should be a perfect mapping of all possible actions to their ethical value and from there to the punishment or reward that is appropriate? (Should I report your comment for advocating for a system under which the comment itself might be bannable?)

All I recall advocating for was integrity and a bit of forethought. The alternatives, while not uniformly worse, seem quite lopsidedly worse. Hopefully we (humans) are in this (civilization) for the long haul.

Are you suggesting that everything bad should be illegal, and that the law should be a perfect mapping of all possible actions to their ethical value and from there to the punishment or reward that is appropriate

No, that would be stupid. On the other hand, if (1) action X is immoral and illegal because it tends to cause a certain harm, and (2) action Y tends to cause more of the same harm, then it seems to follow that action Y is at least as immoral, and ought to be punished at least as severely, as X. Does it not?

me: Do you believe, for example, that stealing a horse is immoral because it causes other people to steal other things if and when they find out about it?
you: Not solely because, but yes, among other things it contributes to the collapse of civil society, especially if it's never punished.

then why else?

Me: Is the immorality of A's theft mitigated by its secrecy, and the fact that it is instrumental in him promulgating anti-theft mores?
You: Not very much, but it's better than not hiding the theft, and better than using the proceeds from the theft to do more evil. Do you disagree?

Yes I disagree. The word "it" I think is a potential point of equivocation here. "It" could refer (a) to the theft, or (b) to the transaction of the theft, concealment, and essay-writing. Let me clarify that "it" is the theft, and ask the question again: is the immorality of the theft mitigated by the other two actions? If so, should A receive a lighter penalty for the theft, if he is caught, than if he had not carefully concealed the theft and written the essays?

Are you now, or have you ever been, a member of an organization that advocates for the violent overthrow of the government of the United States of America?

My guess is that this is supposed to be part of some implicit, clever argument, but it is too clever for me and I can't be sure what it is, so I have to guess. I wish I did not have to guess. My guess is that it is an example showing that speech inciting certain actions can be justifiably illegal in certain circumstances. I agree with that but I do not think it answers the questions I asked. Are you saying that B and C should go to jail? To the extent that theft is illegal because it has a tendency to cause other thefts, perhaps they should.

Nicely put post. It also touches on one of my findings from years back: basically, all this fuss about EA-style utilitarianism is mostly for its aesthetics and as an intellectual pastime, but potentially dangerous if somebody takes it too seriously. Yud himself even used the concept called Adding Up to Normality; he likes to use eating babies as an example -- if your utilitarian train of thought leads you to eating babies, then stop. Except of course this raises the question of why eating babies is such a sin. Child cannibalism was considered normal in some cultures; it is not as if an angel with a fiery sword shows up and stops you if you think about it. But I like this example, because you have these enlightened thinkers like Harris or Pinker and others -- who already have some implicit moral system -- who then just use utilitarianism to rationalize it, because Christianity or Judaism is so cringe.

One issue I have here is that I am not sure that three generations down the road -- when this background morality of "normality" disappears -- we won't actually end up eating babies as the new normal. In the end, we already consider flushing unborn babies down the drain a normal thing, so that people can enjoy the nice things in life without the burden of parenthood -- a net positive for utils, right? Eating aborted fetuses for the benefits that stem cells provide for human wellbeing is not so far-fetched in this context. See, we are nicely getting there eventually.

Another issue I have with rule utilitarianism is that it is often indistinguishable from a deontological moral system. In general, most people don't have the time, inclination, or intellect to deeply investigate all their moral intuitions, so they end up just following some "utilitarian rules" the way Christians follow the Ten Commandments. Donate 10% of your income -- also called a tithe -- to your "local church" (that is, Effective Altruism), where the "priests" (experts) know best how to use it. Always ask yourself: what would Jesus -- I mean, Peter Singer -- do? There is a lot of weird stuff going on; to me it feels as if we are just reinventing the wheel all the time, because traditional thinking was so backwards and we are now much more refined and sophisticated. Heck, St. Augustine did not even know who Bayes was. And after all this innovative thinking, more often than not -- in the spirit of the saying that tradition is an experiment that works -- we just end up reconstructing it, often worse and clunkier.

An act is good if it follows a rule that leads to a "good" outcome. A good outcome means maximizing utility, which Harris describes as "maximizing human flourishing" or at least "minimizing human suffering". Am I the only one who sees this as a very religious style of thinking? Maximum flourishing is literally heaven -- or at least heaven on Earth -- and maximum suffering is literally hell. So if you want to be a good person, then act so as to bring about heaven as opposed to hell. And this is supposed to be rule utilitarianism as opposed to Christian deontology, mind you.

One issue I have here is that I am not sure if three generations down the road - when this background morality of "normality" disappears - we actually won't end up eating babies as the new normal.

I believe that is where we are headed, and where we have been headed since the Enlightenment.

Thomas Jefferson wrote that the doctrine of equal, negative human rights under natural law was self-evident. Taken literally, this would mean that any mentally competent person who considered the matter would find it to be true -- like the axiom that m+(n+1) = (m+n)+1 for any two natural numbers m and n. Clearly the doctrine of human rights is not self-evident in this sense -- unless Plato, Aristotle and Socrates were morons after all.

For many years I charitably assumed that Jefferson meant that the doctrine of equal human rights defined us as a people. But now, after further reading, I believe that I was too charitable in my assessment of Jefferson, because I idolized him as a founding father. He actually failed to realize that the doctrine of equal human rights was not self-evident at all, but was part of his heritage as an Englishman and a nominal Christian.

The carrying on and handing down of our traditions takes effort, quite a lot of effort really. To the extent that we accept the Enlightenment liberal view that our moral traditions will take care of themselves, because they are spontaneously evident to any mentally competent person, we will not expend that effort -- and the consequence will be generational moral rot, slowly at first and then quickly. We are seeing this unfold before our eyes.

Your argument would be more sound if you didn't misrepresent one of the most famous lines in the English language.

The line is "We hold these truths to be self evident, that all men are created equal," etc. What Jefferson is doing here is declaring his axioms. He does make several arguments later in the Declaration, but they follow from those axioms; they aren't meant to prove them. Jefferson is speaking to multiple audiences, some of whom reject his axioms--the Declaration is a bold statement that the American colonies intend to chart a path entirely separated from the monarchical institutions of Europe, from the bedrock assumptions of society up.

You can't rephrase "we hold these truths to be self evident" as "obviously." Your conclusion, that handing down traditions takes effort, is sound, but Jefferson would likely agree. Ben Franklin certainly would; when asked what kind of government the Constitution created, he responded, "A republic, if you can keep it." The conditional displays your point, that traditions and institutions require maintenance, and are not immune to decay if neglected.

You can't rephrase "we hold these truths to be self evident" as "obviously."

Isn't that exactly what "self-evident" means?

self-ev·i·dent
/ˌselfˈevəd(ə)nt/
adjective
not needing to be demonstrated or explained; obvious.
[Oxford Dictionary of the English Language]

Also,

What Jefferson is doing here is declaring his axioms. He does make several arguments later in the Declaration, but they follow from those axioms; they aren't meant to prove them.

I don't dispute this; there is no need to prove something if it is self-evident, in the dictionary sense of being obvious.

Your conclusion, that handing down traditions takes effort, is sound, but Jefferson would likely agree.

I believe not only that it takes effort, but also that it is a moral duty of each generation. I am curious why you think Jefferson would agree. For example, coincidentally, here is another use of the phrase "self-evident" by Jefferson:

The question Whether one generation of men has a right to bind another, seems never to have been started either on this or our side of the water. Yet it is a question of such consequences as not only to merit decision, but place also, among the fundamental principles of every government. The course of reflection in which we are immersed here on the elementary principles of society has presented this question to my mind; & that no such obligation can be so transmitted I think very capable of proof. I set out on this ground, which I suppose to be self-evident, ‘that the earth belongs in usufruct to the living’: that the dead have neither powers nor rights over it. The portion occupied by any individual ceases to be his when himself ceases to be, & reverts to the society...We seem not to have percieved that, by the law of nature, one generation is to another as one independant nation to another.. [Letter to James Madison, (6 September 1789), emphasis added]

Jefferson seems to hold that a given generation has no obligation to carry on the traditions of its forebears, even from a single generation ago -- and, in the same letter to Madison of September 1789, he argues that, therefore, the national constitution should be rewritten every twenty years:

The constitution and the laws of their predecessors extinguished then in their natural course with those who gave them being. This could preserve that being till it ceased to be itself, and no longer. Every constitution then, and every law, naturally expires at the end of 19 years.

He says in the same paragraph that the constitution should not merely be amended, but rewritten from scratch in each generation (that is, every 20 years). So much for sacred tradition.

An axiom is a premise to an argument. You don't set out to prove axioms within the scope of an argument not because they are obviously true, but because they are outside the scope of the argument by definition. You use axioms to prove conclusions. Yes, you may use "self-evident" more or less interchangeably with "obvious," but I never said otherwise. I said that "we hold these truths to be self-evident" is not the same as "self-evidently." "We hold" is doing crucial work here, and may not be discarded without changing the meaning of the statement.

Jefferson's use of "self-evident" in the quoted letter to Madison is consistent with the above. Again, Jefferson is declaring an axiom, or at least offering one for discussion--"I suppose to be" is a somewhat less emphatic phrasing than "we hold," but it serves the same basic purpose.

Jefferson's twenty-year sunset idea is famously nutty[1], but there's a distinction to be drawn between his private writings to Madison, and the public documents he drafted, like the Declaration. In the Declaration, Jefferson isn't just speaking for himself--after all, there's a long list of signatories, and Jefferson's early drafts got cut down a fair bit in editing-by-committee.

[1] Well, they are famously nutty now, with the posthumous publication of a great many letters and documents that were private at the time they were written. As I recall, Madison's response was more or less, "what a fascinating idea; you should definitely not mention it to anyone else." Madison was considerably more sensible than Jefferson, admittedly not the highest of bars.

As you are likely aware, Jefferson was strongly influenced by John Locke in the writing of the Declaration. Locke wrote that the doctrine of equal natural negative rights was plainly discernible by independent reason:

The state of Nature has a law of Nature to govern it, which obliges every one, and reason, which is that law, teaches all mankind who will but consult it, that being all equal and independent, no one ought to harm another in his life, health, liberty or possessions; [Locke, Two Treatises of Government, essay II, section II].

In the opening words of the Declaration, Jefferson follows Locke's wording in this passage fairly closely, writing "life, liberty, and the pursuit of happiness", where Locke had written "life, health, liberty or possessions." In the same paragraph, Locke says, not that these precepts are part of English or Christian traditions, but that they can be ascertained by "all mankind who will but consult it [reason]". So the idea that these notions would be obvious is not a new idea in Jefferson's time, nor a straw man, but the stated opinion of the thinker who was probably most influential on Jefferson's writing. Logical inference from self-evident premises, in the style of Euclid, was in fashion during that period in writing on subjects from physics to politics -- however queer a fashion it now appears to us.

The fact that Jefferson was advised by Madison to keep his opinions to himself does not make them any less his opinions, and the fact that he was writing a public document does not keep his opinion from working its way into the text. The text says what the text says.

An axiom is a premise to an argument. You don't set out to prove axioms within the scope of an argument not because they are obviously true, but because they are outside the scope of the argument by definition.

The phrase "self-evident" has meant the same thing from the time of Aquinas, through the time of Jefferson, up until now:

self-ev·i·dent\ /ˌselfˈevəd(ə)nt/
adjective
not needing to be demonstrated or explained; obvious.
[Oxford Dictionary of the English Language]

Can you explain more carefully, from a textual perspective, why you think it means something else in the Declaration? If this is your whole argument,

I said that "we hold these truths to be self-evident" is not the same as "self-evidently".

I don't buy it.

I recently made a post about Darwin related to this. Since there are many definitions of "human being", people usually refer to biology for the "objective" definition, but Darwinian biology can only define mankind by the mere fact that it survives. Insofar as being alive means not being dead, human dignity relies on the conservation of the means by which existence is preserved. The problem is that existence itself becomes a condition not only necessary, but sufficient, for human dignity. This is where the prejudice of caring about the welfare of human beings qua human beings comes from. From this perspective human life is sacred, but only the objective share of human life, that which relies on nourishment, breathing, reproduction, and defecation -- all things that human beings can control. To paint a rather gross picture, this is like changing the definition of "human being" to "poop-making machine", and then congratulating ourselves with big parades to celebrate that we've solved the human mystery by inventing a more efficient way to ingest food and excrete it. Your conclusion is absolutely on point: this not only fails to solve anything, it also clouds our vision, and that of future generations, to the heart of the problem.

The divinization of mere life also has an impact at the social level, and I believe this was the aim from the beginning. The objective of utilitarianism was to be a foundation for law that was objective and did not depend on religion or tradition: a universal law everyone could agree with, and if any disagreement did arise, then a simple calculation would suffice to resolve it. This allowed the State to take complete control over the lives of its citizens. In ancient times there was a divine randomness to life: death, disease, famine, love -- they were all regarded as private affairs in which the State did not directly intervene. But through the divinization of mere life, the State could take over the management of health, food, marriage, funerary services, etc. Every aspect of objective existence has come under the scrutiny of the State in order to secure the existence of its subjects, and since the subjects need to be alive regardless of their personal convictions or beliefs, they need the State. So when a person decides to invest in his dog rather than in a random person, he can justify his behavior by pointing out that he pays taxes, which is precisely the fee an individual pays to guarantee the common good. So if you need help, if you suffer ill fortune and look for a loving neighbor, ask the State. But on the other hand, the State is -- supposedly -- justified in intervening in the existence of any individual in order to secure the common good. The modern State is a devilish trap: the more it fulfills your needs, the more you give up your freedom, until there is nothing but mindless, soulless satisfaction.

Thus, to the degree that utilitarianism has any force in the real world, it adds nothing to the conversation that wasn't already ambient common sense.

I think you're confusing that which should be obvious with that which is obvious -- though I think Peter Singer makes a similar mistake. All moral frameworks have this same problem: they aren't absolute; they're post hoc reconstitutions of what their creators think works. It's not like Jeremy Bentham was pulling ideas out of thin air at random; he was trying to formalize something that was already informally present in the zeitgeist.

Right up until we enter a completely new domain, and then we are very glad that we have a bunch of different ethical tools to test and see which ones generalize to this new scenario.

Love is all you need.

Who do I love? Everyone.

What does that mean in practice? In terms of actually applying love-the-emotion as love-the-action? It means I cultivate any who stand before me who can receive it, and continue to attempt to expand the horizons of those who might stand before me.

In what sense is love all that I need? Well in the sense that this policy makes everything around me shining, shimmering, and splendid, and leads to the acquisition of new classes of person to polish. Additionally, when I love a person, and help them to grow, they are slowly remade in my image and I in theirs. Through this process I reproduce and expand my ingroup. I breed with the alien, and all the children of the world become my descendants. I myself become the child of my past self and the other. We begin to align. And when those that I have loved go out into the world, to distant and strange lands, and interface with the people there, they breed me intellectual children, and intellectual grandchildren. My influence and personality spread and replicate. There is an immediacy to those nearby, but there is an eventual link to the distant as well. They are the great-great-great grandparents of my descendants.

It's not like Jeremy Bentham was pulling ideas out of thin air at random, he was trying to formalize something that was already informally present in the zeitgeist.

While there might be some truth to that, Jeremy Bentham and John Stuart Mill both came to conclusions that were very unpopular in their day. Bentham's idea of decriminalizing gay sex was way ahead of its time, but a very natural consequence of utilitarian reasoning. And Mill's arguments for the political equality of women and men were a natural enough consequence of utilitarianism's universality, but had not yet found widespread acceptance in society. (Reading "The Subjection of Women" is an interesting exercise, because the positions Mill has to argue against are often things that basically no one today believes. I think it's hard for a modern person to truly put themselves in the mindset of the kind of positions Mill was arguing against.)

I think we swim in fairly utilitarian waters, and much of the language around things like victimless crimes and harm reduction comes originally from Mill and Bentham.

The moral instinct itself functions as a way to secure the genes of the moral person's group. That is the evolutionary purpose of morality. If moral people are coerced into doing good to everyone equally, then (1) this would decrease morality on the whole [moral genes], because behavioral energy expenditure is zero-sum and so amoral or immoral genes will flourish [see sociopathy], and (2) it would be unjust, because they are owed the benefit of their own nature (moral in-group formation). As an example, members of Group A have strong moral interests in the form of guilt and pity and care for others, whereas Group B evolved such that this cognitive expenditure is spent on self-interest or family interest. If Group A and Group B have to live together, and if Group A is pressured into giving moral help to Group B, then B is parasitic off of A, and in the future there will be less of Group A. In order for Group A to secure its genes, given that it spends cognitive energy on moral interest, it must direct its moral interest only to the in-group. Otherwise those genes will fade into oblivion. Anyone telling Group A to spread its moral concern more widely (and this includes some mainstream interpretations of Christianity) is fundamentally against moral development -- what may simply be called evil or satanic. If moral genes are enhanced by in-group interest, then moral in-group interest is better for the whole world. We can call this the trickle-down theory of morality if we want to be cheeky, except in this case it's real.

[in-group based rules-based utilitarianism fails because] each person is, after all, a member of multiple overlapping communities of various sizes and levels of cohesion, whose interests are frequently in conflict with each other

This is your brain on 20th-century propaganda.jpg. More seriously, nobody in the past had any problem understanding what their in-group was. It was ethnicity and religion. This is the most natural way to develop an in-group such that it secures genes. The only exception you find to the favoring of ethnicity is when people join cults, like early Christianity; but consider that this cult was unusual in its promotion of and selection for morality, and in any case these were region-specific churches which didn't do a lot of charitable intermingling with each other.

Utilitarianism as a practical framework comes with huge benefits for an in-group, just not when seen as a top-level explanation of morality. I was writing about the Amish yesterday so let me use them as an example again. One of the rules they developed is that when you make a lot of money you give it back to the community. What is an ineffectual platitude in mainstream America is a gene-enhancing practice among the Amish. The recipients of the charity are all similar to the giver, both ethnically and religiously. For this reason, the Amish have the highest rate of charity and the lowest income disparity among all ethnic groups. Here, you see that a utilitarian rule works wonders. But note that this moral action would be wasted if not for the fact that the Amish select for moral genes via their cultural practices. Once you stop selecting for moral genes, then a moral person giving away money even to his own community is reducing sum-total morality over generations.

Another way that utilitarianism can help us practically is by creating rules which govern interaction between groups without privileging any one group. For instance, “in the case of a disaster, every nation should spend 2% of their resources on the affected nation”. This is a great rule and motivated purely by self-interest, because (1) we can all imagine being the struck nation, (2) it does not harm any moral nation by making them spend more. This is essentially insurance. People don’t take out insurance because they are moral but because they are self-interested. So maybe this shouldn’t even be counted as utilitarian, but we can imagine moral scenarios where this occurs.

There’s an interesting question of, “should we send malaria nets to Africa if we knew for certain they would not send aid to a different poor nation if they could” (in other words, if they were in our shoes). I would say absolutely not. If this intuition is shared, then it hints to a reciprocity rule undergirding morality between groups. Note that we can still send malaria nets, we would just require something of interest for ourselves: a territorial claim, resources, investment, etc.

Sorry it took me so long to respond to this. Thanks for the thoughtful engagement.

@NelsonRushton: It is a mistake to picture a code of conduct that benefits "the community": each person is, after all, a member of multiple overlapping communities of various sizes and levels of cohesion, whose interests are frequently in conflict with each other.

@coffee_enjoyer: This is your brain on 20th century propaganda. More seriously, nobody in the past had any problem understanding what their in-group was. It was ethnicity and religion.

Shared ethnicity and/or religion is a matter of degree, not a Boolean function. What looks like the same ethnicity or religion from afar, or in one conflict, may look like different ethnicities or religions when you zoom in, or look at a different conflict. This is nothing new. For example, the Book of Joshua, written at the latest around 600 BC, records nations being formally subdivided into hierarchies of tribes, clans, and families:

So Joshua got up early in the morning and brought Israel forward by tribes, and the tribe of Judah was selected. So he brought the family of Judah forward, and he selected the family of the Zerahites; then he brought the family of the Zerahites forward man by man, and Zabdi was selected. And he brought his household forward man by man; and Achan, son of Carmi, son of Zabdi, son of Zerah, from the tribe of Judah, was selected.

And these various hierarchies were well known to endure conflicts of interest, if not outright enmity, at every level of the hierarchy, from civil war between tribes (and coalitions of tribes) within a nation, right down to nuclear families:

Then the men of Judah gave a shout: and as the men of Judah [Southern Israel] shouted, it came to pass, that God smote Jeroboam and all [Northern] Israel before Abijah and Judah. [2 Chronicles 13:15]

And Cain talked with Abel his brother: and it came to pass, when they were in the field, that Cain rose up against Abel his brother, and slew him. [Genesis 4:8]

So what your "in-group" looks like depends on the particular conflict we are looking at, and the level of structure at which the conflict takes place. These conflicts can be life and death at all levels, and someone who is in your in-group during a conflict at one level may be in the out-group in a conflict at another level on another occasion. This phenomenon is a major theme -- arguably the major theme -- of the oldest written documents that exist on every continent where writing was discovered. In Greece and Israel, for example, those documents were composed orally in the Iron Age based on legendary events of the Bronze Age. If that is my brain on propaganda, it isn't 20th-century propaganda.

@coffee_enjoyer: Utilitarianism as a practical framework comes with huge benefits for an in-group, just not when seen as a top-level explanation of morality... Another way that utilitarianism can help us practically is by creating rules which govern interaction between groups without privileging any one group...

Let's imagine a medieval man-at-arms, standing atop a rampart, who says to his comrade, "Aristotle wrote that an object falls at a speed proportional to its weight. Wanna see?" Then he takes a boulder and shoves it off the edge of the castle wall, and it falls on an enemy soldier and squishes him. "Good ol' Aristotle," he says. "What would we do without him?"

By analogy, your post describes some interesting examples of groups with competing interests entering into agreements, binding one way or another, that tend to benefit everyone involved. But I do not believe they are really deploying utilitarianism as a moral theory (as opposed to using some other theory to morally justify their actions, if indeed they feel the need to morally justify them at all), and I do not believe that the success of their stratagems is evidence for utilitarianism. That is, Aristotle in my story is analogous to, say, John Stuart Mill in yours. For your examples to count as anecdotal evidence of the normative force of utilitarianism, you would have to argue that the people in those stories were acting morally -- not just that they benefited from what they did -- by comparison with what they would have done if they had acted on some other moral theory. For your examples to count as evidence of the efficacy of utilitarianism, you would have to argue that (1) they were thinking in utilitarian terms, as opposed to some other moral theory, and (2) other groups that employ competing theories fare worse by comparison.

If I want to benefit my identity group, how can I rationally go about this without thinking in terms of “ingroup utilitarianism” (which you dispute in your OP)? You don’t offer any alternative to rule-based utilitarianism for ensuring the mutual benefit of an identity group among the members who wish for mutual benefit. Do you believe that people should just do what their instincts tell them to do? Do you believe it is unknown? Do you ever talk about splitting the bill with friends, or ever get mad when a favor is not reciprocated?

I don’t consider “there are layered ingroups” to be a serious criticism of communal utilitarianism; it would just imply that there is not one rule but there are different rules which mediate one’s ingroups. I agree with most of your criticism of utilitarianism but not here.

Since you yourself admit that this argument is restricted to humanist rule utilitarianism, shouldn't you edit the title to include the full phrase?

I clicked this post expecting a serious attack on the compromise between deontology and consequentialism that rule utilitarianism offers, maybe some review of the contradictions in J. S. Mill, and instead I'm treated to hackneyed criticisms of Sam Harris. Those are all well and good, but to hell with clickbait and false advertising.

Since you yourself admit that this argument is restricted to humanist rule utilitarianism, shouldn't you edit the title to include the full phrase?

I don't actually admit that. It starts off with the humanistic version, but the later paragraphs address broader forms of the view. Do you have a particular variation in mind?

I clicked this post expecting a serious attack on the compromise between deontology and consequentialism that rule utilitarianism offers,... to hell with clickbait and false advertising.

I don't think the title suggests this topic exclusively. Even if I am mistaken, and it did, "clickbaiting" is a deliberate deception, and I plead innocent to that charge.

Well, for instance, your objection that rule utilitarians don't purport to provide a metaphysics justifying that rules ought to be universal discounts the work that has been done to unify rule utilitarianism with some Kantianisms and some contractualisms.

There's a large and storied debate over the validity of rule consequentialism vis-à-vis metaethics going on right now, and I'm a bit upset that you restrict yourself to popular old figures, given your philosophy background. Screw the new atheists; what about Robert Audi?

I suppose it may be unfair for me to expect you to veer into obscure and niche positions that exist mostly in theory, but surely we can do better than shooting at the utility ambulance.

I don’t know that you’re representing rule utilitarianism properly, but I don’t think you’re strawmanning it. Like, I agree with your three points, but that’s not what makes rule utilitarianism “rule” utilitarianism.

You clearly don’t like universalism, which is implied in utilitarianism as a general rule, but not unique to it. You seem to really focus on that. I don’t understand what you consider “standard” utilitarianism to be in comparison.

If you ask me how many billion people I would rather die than my cat, my emotional response is I’m okay losing the three billion+ people in Africa and China and India and such. I don’t know those people. My cat loves me.

Logically though, there’s gotta be a better way to strike a balance between partiality and self-interest, alongside recognizing it’s pretty hard to justify a moral system that values my cat so much. If you recognize that other moral agents exist and that you should seek fair compromises as much as possible, then that seems better than any alternative I’m aware of.

Which is to say, one can distinguish between personal and systematic moral decision levels. Rule utilitarianism sets rules that protect individual liberty as a bulwark against oppression and as a safety valve. Obviously, opinions differ on the fine points.

Rule utilitarianism also recognizes that certain aspects of virtue ethics and deontological ethics have immense practicality. You should not steal because stealing is bad: it’s not prosocial and it attacks property rights.

I too value my identity groups over others. I was willing to formally kill people to protect the interests of my preferred groups. Still am informally.

In my mind, the US constitution is a good representation of rule utilitarianism. I don’t think it’s correct to blame the ills you do on rule utilitarianism specifically. Theoretically, other forms of government could still be in line with rule utilitarianism, say a sufficiently benevolent philosopher king. Or what communists think communism should be if only we could become New Men or whatever. Sky’s the limit if we give up concerns of “how would this go in real life.”

So I’m going with “not even wrong” because you come out swinging, but I think you might be beating up the wrong guy.

If you ask me how many billion people I would rather die than my cat, my emotional response is I’m okay losing the three billion+ people in Africa and China and India and such. I don’t know those people. My cat loves me.

Logically though, there’s gotta be a better way to strike a balance between partiality and self-interest, alongside recognizing it’s pretty hard to justify a moral system that values my cat so much. If you recognize that other moral agents exist and that you should seek fair compromises as much as possible, then that seems better than any alternative I’m aware of.

Yeah, as someone who has long been roughly aligned with utilitarianism as an ethical philosophy, I've wondered if it's not better to think of it as one answer to what can happen when a lot of people with policy-making power come together, and want to justify their policy goals in a way that most people would consider "fair."

Basically, if a politician wants to build a road, and they're going to have to tear down your house to do it, it's easier to swallow if they justify their decision by saying they took everyone in the country's well-being into account, and they think the new road is going to do more good than your house in its current location is doing. (It is also easier to swallow if they try to be fair to you by giving you enough money to relocate, so you can reap the benefits of the new road as well.)

I've long wondered if "discounted utilitarianism" or "reflective equilibrium hedonism" would be a better philosophy for individuals to adopt instead. Basically, acknowledging that you don't value the life of 1 foreigner the same as 1 person from the same city, and you don't value that person as much as you do a family member or friend. So you just discount each circle of concern by the amount you don't care about them. You might say, "Well, a person from China might make my phone, and that has some value to me, so I value their life at 0.001 times that of one of my friends." And then you can do the utilitarian calculus with those decisions in mind. Let the 1-to-1 values be in the hands of politicians and diplomats who have to work out fair policies and justify them to their constituents.
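For what it's worth, the discounting scheme I'm describing reduces to a simple weighted sum. Here's a minimal Python sketch; the circle names, weights, and "actions" are all made-up illustrations, not anything canonical:

```python
# A sketch of "discounted utilitarianism": assign each circle of concern a
# weight, then score an action as the weighted sum of its effects on the
# people in each circle. All weights and numbers are hypothetical.

CIRCLE_WEIGHTS = {
    "family": 1.0,
    "friend": 0.8,
    "same_city": 0.1,
    "same_country": 0.01,
    "foreigner": 0.001,
}

def discounted_utility(effects):
    """effects: iterable of (circle, raw_wellbeing_change) pairs."""
    return sum(CIRCLE_WEIGHTS[circle] * delta for circle, delta in effects)

# Two hypothetical actions to compare:
helps_strangers = [("friend", 10), ("foreigner", 500)]  # 0.8*10 + 0.001*500
helps_family = [("family", 12)]                         # 1.0*12

print(discounted_utility(helps_strangers))  # about 8.5
print(discounted_utility(helps_family))     # 12.0
```

Under these particular weights, helping one family member a moderate amount outweighs a large benefit spread over distant strangers, which is exactly the partiality the scheme is meant to formalize.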

Yeah the level of analysis and scope matters a lot here.

It’s not on me, personally, to do the moral calculus as a perfectly wise, impartial judge with universal scope.

But I definitely want government policy to be doing cost-benefit analysis, focusing on efficiency, and the “greater good,” so long as it is done so in a way that doesn’t run completely roughshod over individual rights.

Rule utilitarianism sets rules that protect individual liberty as a bulwark against oppression and as a safety valve.

It only does this in the context of valid arguments that protecting individual liberty is in fact such a bulwark/safety-valve, and I don't believe such arguments exist. It is very tempting to think they exist, because I agree with their conclusions, but I do not believe this is how people actually defend those principles in practice. For example, ...

In my mind, the US constitution is a good representation of rule utilitarianism.

My response to this has a lot in common with my response to @coffee_enjoyer above [https://www.themotte.org/post/966/why-rule-utilitarianism-fails-as-a/205363?context=8#context]. I love the US constitution, but I do not think it has much to do with rule utilitarianism. Most provisions of the American Constitution and Bill of Rights are borrowed almost wholesale from the English Constitution, English Petition of Right, and English Bill of Rights that came just before them in the same tradition. Where there was a discussion of which changes to make,

  1. when the argument was, we should do this rather than that because the calculated consequences of this are better than the calculated consequences of that, I submit that is political science or social engineering, not utilitarian ethics.
  2. when the argument was, we should do this rather than that because that wrongfully infringes on our rights as Englishmen, I submit that argument was based in sacred tradition, not utilitarian ethics, and
  3. when the argument was, we should do this rather than that because that wrongfully infringes on our self-evident natural human rights, the argument was based in deontology.

I reject your framing here.

Standard Econ and political science in the Western tradition has long been effectively rule utilitarian.

Sacred tradition is also frequently utilitarian.

“Self-evident natural human rights” being based on deontology plays exactly into my description above about how rule utilitarians love to take the best parts of deontology and virtue ethics.

In essence, you’re saying “those things did not originate by people explicitly using rule utilitarianism” and I’m saying “yeah, isn’t that great?”

Consequentialism cares about outcomes. The provenance of how say the US constitution came to be is not nearly so important as the fact that it implements a system that’s aligned with rule utilitarianism.

If you think the constitution is great I don’t see how you don’t like rule utilitarianism in at least some form.

It only does this in the context of valid arguments that protecting individual liberty is in fact such a bulwark/safety-valve, and I don't believe such arguments exist.

I am flabbergasted by this since I’m basically just mirroring the logic the Founding Fathers used to create a system that allowed a lot of liberty to lower the risk of tyranny and internal strife. They did this consciously and explicitly. You can disagree with them, but these arguments have long existed.

Standard Econ and political science in the Western tradition has long been effectively rule utilitarian.

Utilitarianism is a stance for reaching moral conclusions, not conclusions of cause and effect. I do not believe economists or political scientists are much in the business of making assertions of this sort in their academic work -- though you can prove me wrong by citing cases where they do.

@NelsonRushton: It only does this in the context of valid arguments that protecting individual liberty is in fact such a bulwark/safety-valve, and I don't believe such arguments exist.

@SwordOfOccom: I am flabbergasted by this since I’m basically just mirroring the logic the Founding Fathers used to create a system that allowed a lot of liberty to lower the risk of tyranny and internal strife.

To explain your flabbergastedness, can you reproduce, or quote, or outline one of the arguments you are talking about? Then we can talk about whether it does what I say it doesn't do.

Utilitarianism is a stance for reaching moral conclusions, not conclusions of cause and effect. I do not believe economists or political scientists are much in the business of making assertions of this sort in their academic work -- though you can prove me wrong by citing cases where they do.

I think there's arguably a "descriptive" version of utilitarianism, and a "prescriptive" one.

For an analogy, look at medicine. Medicine as a field of investigation concerns itself with health, and to complete that investigation it tries to find causal relationships between various activities and bodily states of health. There's a descriptive and a prescriptive component to medicine. We pour money into medicine because, broadly speaking, the aggregate demands of humans for health are enough to fund the investigations, but many of the descriptive discoveries could be used to make people healthy or unhealthy.

In the same way, economics as a field of the social sciences is "merely" the descriptive study of how economies work, but the reason we study economies is because we want stable, functioning economies that do a good job of allocating resources and have positive effects on well-being.

As I see it, the "descriptive" part of utilitarianism is the aggregate conclusions of the "descriptive" parts of other fields like medicine, economics, sociology, and psychology, which allow us to answer questions like "If we take action X, what effect will that have on QALYs/preference fulfillment/etc.?" Those questions are in theory "value neutral" questions, but the reason we ask them, and the reason we care about the answers, is that enough people think this is a worthwhile field of inquiry. That's the implicit "prescriptive" part -- it is derived from the fact that we ask the questions to make a larger policy decision.

I think there's arguably a "descriptive" version of utilitarianism, and a "prescriptive" one... For an analogy, look at medicine... In the same way, economics as a field of the social sciences is "merely" the descriptive study of how economies work, but the reason we study economies is because we want stable, functioning economies that do a good job of allocating resources and have positive effects on well-being.

This all makes sense, but I do not believe it is the right picture. As you suggest, there is a line between, say, the academic discipline of economics on the one hand and, on the other, the role played in moral decisions by the findings of economics. But also inherent in your description is the fact that the academic discipline of economics falls entirely on one side of that line. So economics as such does not, after all, have a normative component. Moreover, utilitarianism, as a theory of the moral good, lies on the other side of that line -- that is, it says, as a function of the findings of fact and causal law, what is moral and immoral, while remaining silent on those findings of fact and causal law (except as hypothetical illustrative examples). If it were otherwise, we would see whole chapters of the work of Mill, Bentham, and Harris (in The Moral Landscape) devoted to deep investigations of fact and causal law (but of course we do not).

The practice of medicine probably does span both sides of the line, but I think this is a special case, because doctors deal face to face with their patients, who in fact have widely varying degrees of compliance with medical advice. One common tool for increasing compliance is moral suasion (for example, when your doctor wags his finger at you and says you are a bad boy or girl for not taking your medicine, or getting your regular checkup, or whatever). Thus, genuine moral suasion is part of the practice of medicine, and I concede that utilitarian reasoning plays a major role, insofar as what physicians morally pressure people to do is a function of scientific findings of cause and effect. I will chalk that up in support of utilitarianism as one tool in our moral toolbox.

On the other hand, I do not believe this argument transfers from medicine to politics, or foreign policy, or individual ethics. The hard part of medicine is knowing what works and getting people to do it in spite of their stubbornness and lack of discipline. The hard part of economics, diplomacy, and life on the street is trading between the interests of various overlapping groups and coalitions engaged in zero-sum conflicts of interest. That is where the study of ethics really ought to help us, and where I claim utilitarianism does not.

This is well put.

Economists in particular are way more utilitarian than the average person because they are trained in math and principles that highlight how to increase overall “wellbeing.”

(They’re also more libertarian/free market than average for the same reasons.)

The Founding Fathers were intimately aware of the problems with religious warfare and bad monarchs.

So they built upon English common law and designed a government with competing branches, federalism, and individual liberty. To promote the general welfare.

Utilitarianism is a particular form of consequentialism, which I assure you is pretty concerned with cause and effect as it relates to moral outcomes.

Standard economics and public policy are essentially aligned with rule utilitarianism because they are typically focused on increasing public wellbeing/wealth within the confines of our legal system.

Good writeup, and I agree with pretty much all of it. In fact I think many posters here will agree with it, but I believe we also still have some Effective Altruists who post here so hopefully they'll weigh in.

My speculation is that most people who claim to value all humans (or even all life) exactly equally either (a) hold that belief because it makes them feel good to hold it and helps them build the type of self-image they want, or (b) hold it because they're attracted to its theoretical simplicity. But I think it's unlikely that they truly feel a spontaneous, pre-reflective love for all of humanity. I don't think I've ever in my life interacted with any person who showed evidence of harboring such feelings, and I don't think it's a coincidence that people who espouse this belief are also more likely to be EA/LessWrong types, or philosophers -- people who are already predisposed to be attracted to beliefs because of considerations like theoretical elegance.

But I could also just be projecting and this could be a failure of imagination on my part. It's certainly possible that someone genuinely feels that all humans should be treated identically. I just don't think it's as common as is typically claimed.

But I think it's unlikely that they truly feel a spontaneous, pre-reflective love for all of humanity.

As someone with consequentialist/utilitarian leanings, I cultivated my utilitarianism as a set of demanding ethical duties that correct the problems of humanity's natural inclinations. I basically think that emotional empathy is "flawed." I don't feel 100 times worse about 1000 strangers dying than I do about 10 strangers dying. And the fact that my emotional empathy is activated more by seeing a video of someone suffering than by reading about that same suffering feels like a flaw of human sociality.

One of the maxims I've tried to live by is to "act as I would if my emotions could accurately reflect differences in scale of suffering to the minute degree required by utilitarianism." It's not perfect by any means -- I effectively have to have a set of rules or heuristics that will broadly lead to that result, because I don't have the ability to cognitively process all of the different ways society will go, and I'm still using flawed human hardware and interacting with humans and animals with flawed hardware -- but I think it informs my Effective Altruism and my larger political goals.

But I agree with you, that I don't actually feel a love for all of humanity. I just try to make my actions indistinguishable from the actions of someone who has a spontaneous, pre-reflective love for all of humanity.

Imagine holding two thoughts in your head at the same time:

  • I do not value everyone equally for both reasons of emotion and practicality.

  • Logically, other people have every right to feel the same way I do. Perhaps there’s a compromise in here if we distinguish between the personal and higher levels of moral policy. We do not live in an ideal world, but perhaps we can move towards “better.”

Good writeup

I appreciate you saying so.

I suspect that people are drawn to rule utilitarianism because it resolves a certain bind they find themselves in. Let's suppose that I have a landlord who is an all-around scumbag, and I don't like him. Suppose I know that, if he needed a life-saving medical procedure that cost, say, $10,000, and asked me to help out, I would say no. So his life is worth less to me than $10,000. On the other hand, if I had an opportunity to do him in and take $10,000 in the bargain, and get away clean like Raskolnikov, I would not do it. I think people with certain worldviews feel obliged to articulate an explanation of why that is not irrational, and within those same worldviews, they don't have much to cling to in formulating the explanation. Dostoyevsky's explanation would probably strike them as unscientific. They don't realize that if they keep that up, their grandson might actually do it.

Or maybe they need a pretext for pointing a finger at the people they feel are doing wrong, and a way of saying something more authoritative-sounding than "boo!" that, again, flies within their worldview.

People who are not weird Internet guys think action and inaction have different moral valences.

I guess I’m confused why you think rule utilitarianism is uniquely required or faulty for the situation you describe.

What moral system would demand the $10k payment? Some might encourage it, sure. Christianity in particular seems to encourage this kind of thing.

Even if I like my neighbor, why am I the only option for $10k? As a one shot with no alternative, I might just feel compelled to do it. Problem is, life is so often not full of contrived one shots, and so I don’t think I would have to “value my neighbor’s life at less than $10k” or $1. As a rule, expecting neighbors to bail you out is not a very good system. Go get a loan or a gofundme.

I guess I’m confused why you think rule utilitarianism is uniquely required or faulty for the situation you describe.

I don't think rule utilitarianism is uniquely required; it just seems to be the most common candidate for a theory of absolute morality based on Enlightenment epistemology. I don't think the other candidates are any better.

Even if I like my neighbor, why am I the only option for $10k?

It is an element of the hypothetical.

What moral system would demand the $10k payment?

What facially seems to merit the $10K payment is not a particular moral theory, but the situation that (1) I would not pay $10K to save his life, therefore (2) his life is worth less to me than $10K, (3) I have an opportunity to do away with him and collect $10K, and (4) I am allegedly a rational agent; so by #2, #3, and #4 I should be expected to pull a Raskolnikov. Yet I don't. What requires a moral theory is to resolve the paradox (or am I just being a sentimental sucker?).

I think this is a classic example of misusing homo economicus in a contrived scenario that has no real bearing on reality.

Life is full of iterated games and reality is complex.

Your scenario does nothing to highlight a problem with rule utilitarianism so far as I can tell.

Is this original to themotte or is it quoted from somewhere?

Original to theMotte