By rule utilitarianism, here I will mean, roughly speaking, the moral stance summarized as follows:
- People experience varying degrees of wellbeing as a function of their circumstances; for example, peace and prosperity engender greater wellbeing for their participants than war and famine do.
- As a matter of fact, some value systems yield higher levels of human wellbeing than others.
- Moral behavior consists in advocating and conforming to value systems that engender relatively high levels of human wellbeing.
Varieties of this position are advocated in academic philosophy, for example, by Richard Brandt, Brad Hooker, and R.M. Hare -- and in popular literature, for example, by Sam Harris in The Moral Landscape and (more cogently, in my opinion) by Steven Pinker in the final chapter of his book Enlightenment Now. I do not believe that rule utilitarianism cuts much ice as a moral stance, and this essay will explain why I hold that opinion.
1. The problem of speciesism
In his book Enlightenment Now, psychologist Steven Pinker hedges on proposition #3 of the utilitarian platform, which I will call the axiom of humanism. Pinker writes, "Despite the word's root, humanism doesn't exclude the flourishing of animals" [p. 410]. Well, professor, it sho' 'nuff does exclude the flourishing of animals! If the ultimate moral purpose is to promote human flourishing, then the welfare of non-human animals is excluded from consideration in the ultimate moral purpose. To be charitable, I suppose Pinker means that it is consistent with (3) that there is a legitimate secondary moral purpose in maximizing the wellbeing of animals. However, (a) I cannot be sure that he means that, and (b) it is unfortunate that he did not say what he meant, especially since this point is central to a weakness in the humanist position.
In The Moral Landscape, and in his TED talk on the same thesis, Sam Harris spends most of his time defending propositions (1) and (2) of the utilitarian position -- which is unfortunate, because I believe they are self-evident. To his credit, Harris does briefly address the issue of "speciesism" (that is, assigning greater moral weight to the wellbeing of humans than to that of other animals), saying, "If we are more concerned about our fellow primates than we are about insects, as indeed we are, it's because we think they're exposed to a greater range of potential happiness and suffering." What Harris does not do is place the range of animal experience on the scale with that of human experience to compare them, or give us any reason to think the bottom of the scale for humans is meaningfully (if at all) above the top of the scale for other animals. Moreover, he gives no reason why we ought to draw a big red line at some arbitrary place on that scale and write "Not OK to trap, shoot, eat, encage, or wear the skins of anything above this line." Perhaps that line reaches well down into the animal kingdom, and perhaps the line falls above the level of some of our fellow men. As a matter of ultimate concern, I cannot imagine an objective reason why it should not.
Perhaps Pinker and Harris don't spend much effort on the issue of speciesism because it is uncontroversial: of course human wellbeing has greater moral gravity than animal wellbeing, and the exact details of why and how much are not a pressing moral concern of our day. I submit, on the contrary, that accounting for speciesism is one of the first jobs of any moral theory. I am not saying that perhaps we ought to start eating our dim-witted neighbors, or that we should all become vegans; I am saying that if you purport to found a moral theory on objective reason then you should actually do it, and that how that theory accounts for speciesism is an important test case for it.
To wit, either animals count as much as humans in our moral calculus, or they do not. If animals count as much as humans, then most of us (non-vegetarians) are in a lot of trouble, or at least ought to be. On the other hand, if animals don't count as much as humans, then the reason they don't, carried to its logical conclusion, is liable to be the reason that some humans don't count as much as others. Abraham Lincoln famously made a similar argument over the morality of slavery:
You say A is white, and B is black. It is color, then; the lighter, having the right to enslave the darker? Take care. By this rule, you are to be a slave to the first man you meet, with a fairer skin than your own. You do not mean color exactly? — You mean whites are intellectually the superiors to blacks, and therefore have the right to enslave them? Take care again. By this rule, you are to be slave to the first man you meet, with an intellect superior to your own. But say you, it is a question of interest; and, if you can make it your interest, you have the right to enslave another. Very well. And if he can make it his interest, he has the right to enslave you. [From Lincoln's collected notes; date uncertain]
In the spirit of Lincoln's argument, for example, do non-humans count less, as Harris claims, because they allegedly have less varied and vivid emotional experience? It is not clear to me that they do in the first place, or, if they do, that they do by a degree sufficient to make cannibalism forbidden and vegetarianism optional. Do non-humans count less because they are less intelligent? In that case, the utilitarian is obliged to explain why the line is drawn just where it is, and, indeed, why, in the best of all possible worlds, deep fried dumbass shouldn't be an item on the menu at Arby's.
The fact that Pinker and Harris do not resolve the issue of speciesism is important -- not because their conclusions on the matter are or ought to be controversial, but because it is the first sign that they are not deriving a theory from first principles, but instead rationalizing the shared common sense of their own culture.
2. And who is my neighbor?
John Lennon famously sang, "All you need is love". Perhaps love is all you need, but, as Bo Diddley famously sang, the hard question remains: Who do you love?
Suppose we were to grant (which I do not) that it is objectively evident that the wellbeing of humans is categorically more valuable than that of other animals. The fact remains that the wellbeing of some humans might count more than that of others, from my perspective, as an ultimate moral concern. Indeed, some humans might not count at all except as targets and trophies -- and many cultures have taken this view unashamedly. For example, the opening stanza of the Anglo-Saxon poem Beowulf extols the virtues of the Danish king Shield Sheafson for the laudable accomplishment of subjugating not just some but all of the neighboring tribes, and for driving them in terror -- not from their fortresses, not from their castles, but from their bar stools:
So. The Spear-Danes in days gone by
And the kings who ruled them had courage and greatness.
We have heard of those princes’ heroic campaigns.
There was Shield Sheafson, scourge of many tribes,
A wrecker of mead-benches, rampaging among foes.
This terror of the hall-troops had come far.
A foundling to start with, he would flourish later on
As his powers waxed and his worth was proved.
In the end each clan on the outlying coasts
Beyond the whale-road had to yield to him
And begin to pay tribute. That was one good king.
[translation by Seamus Heaney, emphasis added]
The literature, monuments, and oral traditions of the world are replete with examples of this sentiment, but for the sake of space I will give just one more example. The inscription on a monument honoring the Roman general Pompey, erected for his 45th birthday, reads,
Pompey, the people’s general, has in three years captured fifteen hundred cities, and slain, taken, or reduced to submission twelve million human beings.
In the West today, people tear down monuments claiming that the men they honor were enslavers or imperialists -- but evidently other cultures put up monuments to glorify their leaders for those very characteristics. So it is not a no-brainer -- that is to say, not at all self-evident -- that the welfare of members of foreign tribes ought to play any role in our moral calculations at all -- or indeed that the subjugation and exploitation of foreign tribes is not a positive moral good from our own perspective. That is how the Romans and the Saxons saw it. If you or I had been born into those cultures we would probably have felt the same way -- and so, I dare say, would Sam Harris and Steven Pinker.
I imagine the utilitarian response would be that we are all better off if we take the modern Western view (of course) that exploiting foreigners is inherently immoral. Sam Harris asks us to imagine two people who are forced to choose between (a) cooperating to build a better society, or (b) smashing each other in the face with rocks. If those are the options, I would certainly choose (a) -- but those are not the options. Harris left out the salient option (c): I smash you in the face with a rock and take your wallet. This is not a straw man, and it has been the position of many honest, intelligent, and upright people. If one suggested to Alexander the Great -- a student of Aristotle -- that "we all" are better off if "we all" lay down our weapons, he might ask, "Who is 'we all'?" He might say that he cares little for what makes barbarians (that is, non-Greeks) better off, any more than he cares what makes apes or pigs better off -- and that your way of drawing the circle of love, so as to include all humans but exclude apes and pigs, is no better than his. I imagine the utilitarian response to that would be, "You fiend!", and that the Greek response to that would be "Get out of my sight, you womanish punk". So, from a logical perspective, who won the debate? Nobody did.
Pinker almost concedes this, writing "If there were a callous, egoistic, megalomaniacal sociopath who could exploit everyone else with impunity, no argument could convince him he had committed a logical fallacy" [Enlightenment Now, p. 412] -- but it is not clear whether he means that the sociopath could not be convinced because he has a thick skull, or whether he is actually not committing a logical fallacy. I emailed Dr. Pinker for a clarification, and he was kind enough to reply, acknowledging that the megalomaniacal sociopath is actually not committing a logical fallacy. Note that (1) this is a second example of Pinker's writing being vague exactly where the argument is weakest (the first being the issue of speciesism), and (2) a Roman general is not a megalomaniacal sociopath or anything of the sort, and whatever we claim to be objectively self-evident ought to be evident to him as well as us.
In fact, Pinker considers something like the argument with a Roman general in his book. His imagined response (addressed, in his version, to Nietzsche, a Romanophile, rather than to an actual Roman) is as follows:
I [Steven Pinker] am a superman, hard, cold, terrible, without feelings and without conscience. As you recommend, I will achieve glory by exterminating some chattering dwarves. Starting with you, shortly. And I might do a few things to that Nazi sister of yours, too. Unless you can think of a reason why I should not [Enlightenment Now, p. 446].
But now that Professor Pinker himself has switched gears, from the strictly logical to the ad baculum, I submit that he is obviously bluffing, and that the Romans were not bluffing at all. Someone would win that debate -- and the Roman argument might have something to do with a crucifix.
In any case, an allegedly universal and self-evident regard for "human wellbeing" leaves unanswered the central question of morality: in the words of the great moral philosopher Bo Diddley, Who do you love? Speciesism is only the thin end of the wedge: not only do I generally value the wellbeing of people more than that of cows and rabbits, I also value the wellbeing of my people more than I value that of other people -- and, in my view, rightly so. Thus, premises (1) and (2) of the humanist/utilitarian position do not imply conclusion (3) because, as an ultimate concern, I value the wellbeing of people in my identity group over that of people outside my identity group, and it is only right that I should do this. Thus, maximizing human wellbeing -- in the sense where all humans count equally -- is not something I am actually interested in. Moreover, I do not feel it is something I ought to be particularly interested in.
Now in response to this, you might say that I am a scoundrel and a villain. In response to that, I say that's just, like, your opinion, man. Philosophers such as Peter Singer, and popular writers like Steven Pinker, insist that I must be impartial between the wellbeing of my people on the one hand, and that of homo sapiens at large on the other. For example, Pinker writes, "There is nothing magic about the pronouns I and me that would justify privileging my interests over yours or anyone else's" [Enlightenment Now, p. 412]. To that I reply, why on Earth would I need magic, or even justification, to privilege my own interests over yours as a matter of ultimate concern? Of course I privilege my interests over yours, and almost certainly vice versa. In fact, unless you are a friend of mine, not only do I privilege my own interests over yours, I privilege my dog's interests over yours. For example, if my dog needed a life-saving medical procedure that cost $5,000, I would pay for it, but if you needed a life-saving medical procedure that cost $5,000, I probably would not pay for it -- and if an orphan from East Bengal needed a life-saving medical procedure that cost $5,000 (which one probably does at this very moment), I would almost certainly not pay for it, and neither would you (unless you are actually paying for it).
I must not be alone in caring more about my dog than I do about a random stranger. The average lifetime cost of responsibly owning a dog in the United States is around $29,000 -- while, according to the GiveWell organization, the cost of saving the life of one unfortunate fellow man at large by donating to a well-chosen charity is around $4,500 [source]. If those figures are correct, it means that if you own a dog, then you could have allocated the cost of owning that dog to save the lives of about six people on average (and if the figures are wrong, something like that is true anyway). Now, Steven Pinker himself once tweeted that dog ownership comes with empirically verifiable psychological benefits for the dog owner. In his excitement over those psychological benefits, I suppose, he neglected to mention that it also comes with the opportunity cost of six (or so) third world children dying in squalid privation. If Pinker really believes, as he claims to believe, that "reducing the suffering and enhancing the flourishing of human beings is the ultimate moral purpose," he sure isn't selling it very hard. But then again, no one in their right mind is.
Not only do I actually value my dog's wellbeing above that of a random human stranger, I submit it is only right that I should. That is to say, it would be immoral of me not to privilege my dog's wellbeing over that of a human stranger. Indeed, if a man let his family dog pass away, when he could have saved the dog's life for a few thousand dollars, and he spent that money instead to save the life of a foreigner whom he had never met, then, all else being equal, I would prefer not to have him as a countryman, let alone a friend.
In one talk, Pinker has said, "You cannot expect me to take you seriously if you are espousing moral rules that privilege your interests above mine." But, professor, if I have more and better men on my side, it does not matter whether you take me seriously; it only matters whether they do. Again, I am not saying that it would be right to take advantage of that situation of having more and better men on my side; I am saying that Pinker and Harris's egg-headed arguments yield no objective reason why I shouldn't.
3. Degrees of neighborship
One of Sam Harris's favorite examples to illustrate the objectivity of values is "the worst possible misery for everyone", which he claims is a self-evidently and objectively bad situation. I think this is an egregious misdirection, because moral questions are not about the choice between win-win and lose-lose. If iron axes work better than bronze axes, for everyone, at every level, all things considered, then by all means let us use iron -- but that is not a moral decision. I have a moral decision to make when, for example, (1) I have the opportunity to gain at your expense, all things considered, and (2) I don't care about your wellbeing as much as I care about mine.
But not all moral tradeoffs are one-on-one. Human nature being what it is, groups at all levels split into factions which then try to have their way with each other -- from nuclear families, to PTA boards, to political parties, to nations, to the whole of humanity. A code of conduct that is good for my community at one level might subtract from the good of a smaller, tighter community of which I am also a member -- so, really: which of those codes should (should, in the moral sense) I advocate and adhere to?
As a matter of fact, the wellbeing of different groups, of which I am a member, often trades against itself in moral decisions -- and the purpose of moral precepts, in large part if not primarily, is to manage tradeoffs between our concerns for the welfare of different groups, with different degrees of shared identity, of which we are a common member. What looks like the same group from afar, or in one conflict, may look like different ethnicities or religions when you zoom in, or look at a different conflict. This is nothing new. For example, the Book of Joshua, written at the latest around 600 BC, records nations being formally subdivided into hierarchies of tribes, clans, and families:
So Joshua got up early in the morning and brought Israel forward by tribes, and the tribe of Judah was selected. So he brought the family of Judah forward, and he selected the family of the Zerahites; then he brought the family of the Zerahites forward man by man, and Zabdi was selected. And he brought his household forward man by man; and Achan, son of Carmi, son of Zabdi, son of Zerah, from the tribe of Judah, was selected.
And these various hierarchies were well known to endure conflicts of interest, if not outright enmity, at every level of the hierarchy, from civil war between tribes (and coalitions of tribes) within a nation, right down to nuclear families:
Then the men of Judah gave a shout: and as the men of Judah [Southern Israel] shouted, it came to pass, that God smote Jeroboam and all [Northern] Israel before Abijah and Judah. [2 Chronicles 13:15]
And Cain talked with Abel his brother: and it came to pass, when they were in the field, that Cain rose up against Abel his brother, and slew him. [Genesis 4:8]
So what your "ingroup" looks like depends on the particular conflict we are looking at, and on the level of structure at which the conflict takes place. These conflicts can be life and death at all levels, and someone who is in your ingroup during a conflict at one level may be in the outgroup in a conflict at another level on another occasion. This phenomenon is a major theme -- arguably the major theme -- of the oldest written documents that exist on every continent where writing was discovered. Thus it is a mistake in moral reasoning to conceptualize a code of conduct that benefits "the community": each person is, after all, a member of multiple overlapping communities of various sizes and levels of cohesion, whose interests are frequently in conflict with each other.
Now hear this: when we are looking for a win-win solution that benefits everyone at all levels without hurting anyone at any level of "our community", this is an engineering problem, or a social engineering problem, but not an ethical problem. It is the win-lose scenarios, which trade the wellbeing of one level of my community against that of another, that fall into the domain of ethics. Of course we are more concerned for the welfare of those whose identities have more in common with our own -- but how steep should the drop-off be as a function of shared identity? Should it converge to zero for humanity at large? Less than zero for our enemies? How about rabbits and cows? As an ultimate concern, who do you love, and exactly how much, when it comes to decisions that trade between the wellbeing of one level of your community and another (self, family, clan, tribe, nation, humanity, vertebrates at large)? That is a central problem, if not the central problem, of ethical discourse -- and it is a question about which utilitarianism has nothing to say, and about which humanism begs the question from the outset.
4. No Moral Verve
At the end of the day, the conversation on ethics should come to something more than chalk on a board. When the chalk dust settles, if we have done a decent job of it, we should bring away something that can inspire us to rise to the call of duty when duty gets tough. The fatal defect of humanism in this regard is that practically no one -- neither you, nor I, nor Steven Pinker, nor Sam Harris, nor John Stuart Mill himself -- actually gives a leaping rat's ass about the suffering or the flourishing of homo sapiens at large. This point was eloquently voiced by Adam Smith, in a statement worth quoting at length:
Let us suppose that the great empire of China, with all its myriads of inhabitants, was suddenly swallowed up by an earthquake, and let us consider how a man of humanity in Europe, who had no sort of connection with that part of the world, would be affected upon receiving intelligence of this dreadful calamity. He would, I imagine, first of all, express very strongly his sorrow for the misfortune of that unhappy people, he would make many melancholy reflections upon the precariousness of human life, and the vanity of all the labours of man, which could thus be annihilated in a moment. He would too, perhaps, if he was a man of speculation, enter into many reasonings concerning the effects which this disaster might produce upon the commerce of Europe, and the trade and business of the world in general. And when all this fine philosophy was over, when all these humane sentiments had been once fairly expressed, he would pursue his business or his pleasure, take his repose or his diversion, with the same ease and tranquility, as if no such accident had happened. [Smith (1759): The Theory of Moral Sentiments]
Here is the point. As C.S. Lewis wrote, "In battle it is not syllogisms [logic] that will keep the reluctant nerves and muscles to their post in the third hour of the bombardment" ["The Abolition of Man"]. Lewis was right about this -- and, while we are making a list of things that do not inspire people to rise to the call of duty when duty gets tough, we can include on that list any concern they might claim to have for the welfare of human beings at large.
5. Mumbo-Jumbo
Be careful what you do,
Or Mumbo-Jumbo, God of the Congo,
And all of the other
Gods of the Congo,
Mumbo-Jumbo will hoo-doo you,
Mumbo-Jumbo will hoo-doo you,
Mumbo-Jumbo will hoo-doo you.
[Vachel Lindsay: The Congo]
In a discussion with Alex O'Connor, Sam Harris invites us to imagine two people on an island, who can choose either to cooperate and build a beautiful life together, or to start smashing each other in the face with rocks. It is immoral to smash someone in the face with a rock, he infers, because if we all started smashing each other in the face with rocks, we would all be miserable. Indeed, I agree that if I had a choice between everyone smashing everyone in the face with a rock, and nobody smashing anyone in the face with a rock, I would certainly choose nobody smashing anyone. But in reality, I do not get to choose whether everybody smashes everyone in the face with a rock. The decision I actually get to make, in which my ethics actually plays a role, is whether I smash you in the face with a rock, and take your wallet in the bargain, which, say, has $5,000 in it. With that, it is suddenly not so clear why that wouldn't maximize the satisfaction of my ultimate concerns -- especially if I value $5,000 more than I value your life (which again, unless you are a friend of mine, I unashamedly do).
On top of ignoring the multilayered, competitive nature of the human condition, and on top of having no practical motivational force even for its professed adherents, another problem with rule utilitarianism is the voodoo it invokes to connect (a) performing a particular action with (b) what would happen, counterfactually, if everyone followed the salient rule that permits the action. For a simple example, let us imagine that I steal a Tootsie Roll from a convenience store. In this scenario, let us imagine that I am poor and the convenience store owner is rich, and that the Tootsie Roll does me more good than it would have done the store owner if it had remained in his store. In the world of mystical utilitarian counterfactuals, if everyone stole everything all the time, then everyone would clearly be worse off -- but in the actual world, my stealing a bite of candy is not going to cause everyone to steal everything all the time, or, probably, cause anyone else to steal anything else ever. Even if an individual Tootsie Roll pilferage did have some minuscule ripple effect on society, I would still expect the material impact on me personally to be less than the cost of the candy I stole. To put it more generally, when someone steals something and gets away with it, they do not reasonably expect to lose net income as a result. So, why on Earth should I care about what would happen in the sci-fi scenario where everyone stole everything all the time? I cannot imagine an objective reason why I should.
Perhaps there is some deep metaphysical argument that establishes, on an objective basis, that one ought to behave in the way they wish others in "their community" to behave (if, again counterfactually, there were such a thing as "their community") -- or perhaps there is some kind of cosmic karma stipulating that what goes around invariably comes around on this Earth. But (1) I cannot imagine what that metaphysical argument would be, (2) the world doesn't look to me like it works that way, and (3) neither Pinker, nor Harris, nor Singer, nor Bentham, nor Mill actually gives such a metaphysical argument, nor attempts to show that the world does work that way.
Let me repeat once again that I am not advocating nihilism here. What I am saying is this: if utilitarians claim to base their moral theory on objective reason -- indeed, if they claim to do anything other than manufacture a flimsy rationalization for the moral common sense of their own culture -- then it is precisely this Devil's advocate that they must contend with, and it seems to me they are in a hopeless position to do so.
Conclusion
When I say that utilitarianism has nothing useful to say about real world ethical problems, I mean it. Of course one might give evidence about the impacts of some rule or policy, which might then inform whether we want to adopt that rule or policy -- but I doubt the following words have ever been uttered in a real debate over policy or ethics: I concede that your policy/rule/value-premise, if adopted, would benefit every level of our community more than mine does, but to Hell with that. The fact is that everyone prefers policies that benefit their communities when they are a win/win at every level -- whether or not they have read one fancy word of John Stuart Mill, Jeremy Bentham, Peter Singer, Sam Harris, or Steven Pinker. Thus, to the degree that utilitarianism has any force in the real world, it adds nothing to the conversation that wasn't already inherent in common sense. On the other hand, to the degree that utilitarianism is not redundant with common sense, it has no motivational force, even for its professed adherents -- especially if they own a dog.
I submit that the position of utilitarianism is not only weak, but so evidently preposterous that its firm embrace requires an act of intellectual dishonesty. I can say this without contempt, because less than ten years ago I myself espoused utilitarianism. I knew then and I know now that I was not being intellectually honest in espousing this view. To my credit, I could not bring myself to write a defense of utilitarianism, even though I tried -- because I could not come up with an argument for it that I found convincing. Yet, I presumed that I would eventually be able to produce such an argument, and I did state utilitarianism as my position, without confessing that I could not defend the position to my own satisfaction.
I further submit that programs like utilitarianism are not only mistaken but harmful. They are not just a little bit harmful, but disastrously harmful, and we can see the engendered disaster unfolding before our eyes. The problem is not that utilitarians are necessarily bad people; it is that, if they are good people, they are good people in some sense by accident: reflexively mimicking the virtues and values of their inherited traditions, while at the same time denigrating tradition, and mistaking their moral heritage for something they have discovered independently. As John Selden wrote,
Custom quite often wears the mask of nature, and we are taken in [by this] to the point that the practices adopted by nations, based solely on custom, frequently come to seem like natural and universal laws of mankind. [Natural and National Law, Book 1, Chapter 6]
The problem with subverting the actual source of our moral norms and replacing it with a feeble rationalization is this: each generation naturally (and rightly) pushes back against their inherited traditions, and pokes to see what is underneath them. If the actual source of those traditions has been forgotten, and they are presented instead as being founded on hollow arguments, the pushback will blow the house down. Sons will live out the virtues of their fathers less with each passing generation, progressively supplanting those virtues with the unrestrained will of their own flesh. That is what we are seeing in our culture today -- and impotent, ivory tower theories of morality are part of the problem.
Notes -
I don’t know that you’re espousing rule utilitarianism properly, but I don’t think you’re straw manning it. Like, I agree with your three points, but that’s not what makes rule utilitarianism rule.
You clearly don’t like universalism, which is implied in utilitarianism as a general rule, but not unique to it. You seem to really focus on that. I don’t understand what you consider “standard” utilitarianism to be in comparison.
If you ask me how many billion people I would rather die than my cat, my emotional response is I’m okay losing the three billion+ people in Africa and China and India and such. I don’t know those people. My cat loves me.
Logically though, there’s gotta be a better way to strike a balance between partiality and self-interest, alongside recognizing it’s pretty hard to justify a moral system that values my cat so much. If you recognize that other moral agents exist and that you should seek fair compromises as much as possible, then that seems better than any alternative I’m aware of.
Which is to say, one can distinguish between personal and systematic moral decision levels. Rule utilitarianism sets rules that protect individual liberty as a bulwark against oppression and as a safety valve. Obviously, opinions differ on the fine points.
Rule utilitarianism also recognizes that certain aspects of virtue and deontological ethics have immense practicality. You should not steal because stealing is bad, because it’s not prosocial and attacks property rights.
I too value my identity groups over others. I was willing to formally kill people to protect the interests of my preferred groups. Still am informally.
In my mind, the US constitution is a good representation of rule utilitarianism. I don’t think it’s correct to blame the ills you do on rule utilitarianism specifically. Theoretically, other forms of government could still be in line with rule utilitarianism, say a sufficiently benevolent philosopher king. Or what communists think communism should be if only we could become New Men or whatever. Sky’s the limit if we give up concerns of “how would this go in real life.”
So I’m going with “not even wrong” because you come out swinging, but I think you might be beating up the wrong guy.
Yeah, as someone who has long been roughly aligned with utilitarianism as an ethical philosophy, I've wondered if it's not better to think of it as one answer to what can happen when a lot of people with policy-making power come together, and want to justify their policy goals in a way that most people would consider "fair."
Basically, if a politician wants to build a road, and they're going to have to tear down your house to do it, it's easier to swallow if they justify their decision by saying they took everyone in the country's well-being into account, and they think the new road is going to do more good than your house in its current location is doing. (It is also easier to swallow if they try to be fair to you by giving you enough money to relocate, so you can reap the benefits of the new road as well.)
I've long wondered if "discounted utilitarianism" or "reflective equilibrium hedonism" would be a better philosophy for individuals to adopt instead. Basically, acknowledging that you don't value the life of 1 foreigner the same as 1 person from the same city, and you don't value that person as much as you do a family member or friend. So you just discount each circle of concern by the amount you don't care about them. You might say, "Well a person from China might make my phone, and that has some value to me, so I value their life at 0.001 times that of one of my friends." And then you can do the utilitarian calculus with those discounts in mind. Let the 1-to-1 values be in the hands of politicians and diplomats who have to work out fair policies and justify them to their constituents.
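For what it's worth, the weighted calculus described above is simple enough to sketch in a few lines of Python. All of the groups, weights, and well-being numbers here are hypothetical, invented purely to illustrate the arithmetic of discounting each circle of concern:

```python
# Illustrative sketch of "discounted utilitarianism" as described above.
# Every group name, weight, and utility figure is hypothetical.

def discounted_utility(effects, weights):
    """Sum each group's change in well-being, scaled by how much
    the decision-maker cares about that group."""
    return sum(weights[group] * delta for group, delta in effects.items())

# Discount weights by circle of concern (hypothetical values).
weights = {"family": 1.0, "friends": 0.8, "same_city": 0.1, "foreigners": 0.001}

# Two candidate actions and their (hypothetical) effects on each group.
buy_local = {"same_city": 50.0, "foreigners": -10.0}
buy_imported = {"same_city": -10.0, "foreigners": 200.0}

print(discounted_utility(buy_local, weights))     # 50*0.1 + (-10)*0.001 = 4.99
print(discounted_utility(buy_imported, weights))  # (-10)*0.1 + 200*0.001 = -0.8
```

Note how the heavy discount on distant circles flips the verdict relative to an impartial (all-weights-equal) calculus, which is exactly the partiality the comment is trying to make explicit rather than hide.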
Yeah the level of analysis and scope matters a lot here.
It’s not on me, personally, to do the moral calculus as a perfectly wise, impartial judge with universal scope.
But I definitely want government policy to be doing cost-benefit analysis, focusing on efficiency, and the “greater good,” so long as it is done so in a way that doesn’t run completely roughshod over individual rights.
It only does this in the context of valid arguments that protecting individual liberty is in fact such a bulwark/safety valve, and I don't believe such arguments exist. It is very tempting to think they exist, because I agree with their conclusions, but I do not believe this is how people actually defend those principles in practice. For example, ...
My response to this has a lot in common with my response to @coffee_enjoyer above [https://www.themotte.org/post/966/why-rule-utilitarianism-fails-as-a/205363?context=8#context]. I love the US constitution, but I do not think it has much to do with rule utilitarianism. Most provisions of the American Constitution and Bill of Rights are borrowed almost wholesale from the English Constitution, English Petition of Right, and English Bill of Rights that came just before them in the same tradition. Where there was a discussion of which changes to make,
I reject your framing here.
Standard Econ and political science in the Western tradition has long been effectively rule utilitarian.
Sacred tradition is also frequently utilitarian.
“Self-evident natural human rights” being based on deontology plays exactly into my description above about how rule utilitarians love to take the best parts of deontology and virtue ethics.
In essence, you’re saying “those things did not originate by people explicitly using rule utilitarianism” and I’m saying “yeah, isn’t that great?”
Consequentialism cares about outcomes. The provenance of how, say, the US constitution came to be is not nearly so important as the fact that it implements a system that's aligned with rule utilitarianism.
If you think the constitution is great I don’t see how you don’t like rule utilitarianism in at least some form.
I am flabbergasted by this since I’m basically just mirroring the logic the Founding Fathers used to create a system that allowed a lot of liberty to lower the risk of tyranny and internal strife. They did this consciously and explicitly. You can disagree with them, but these arguments have long existed.
Utilitarianism is a stance for reaching moral conclusions, not conclusions of cause and effect. I do not believe economists or political scientists are much in the business of making assertions of this sort in their academic work -- though you can prove me wrong by citing cases where they do.
To explain your flabbergastedness, can you reproduce, or quote, or outline one of the arguments you are talking about? Then we can talk about whether it does what I say it doesn't do.
I think there's arguably a "descriptive" version of utilitarianism, and a "prescriptive" one.
For an analogy, look at medicine. Medicine as a field of investigation concerns itself with health, and to complete that investigation it tries to find causal relationships between various activities and bodily states of health. There's a descriptive and a prescriptive component to medicine. We pour money into medicine because, broadly speaking, the aggregate demands of humans for health are enough to fund the investigations, but many of the descriptive discoveries could be used to make people healthy or unhealthy.
In the same way, economics as a field of the social sciences is "merely" the descriptive study of how economies work, but the reason we study economies is because we want stable, functioning economies that do a good job of allocating resources and have positive effects on well-being.
As I see it, the "descriptive" part of utilitarianism is the aggregate conclusions of the "descriptive" parts of other fields like medicine, economics, sociology, and psychology, which allow us to answer questions like "If we take action X, what effect will that have on QALYs/preference fulfillment/etc.?" Those questions are in theory "value neutral" questions, but the reason we are asking the question, and the reason we care about the answer, is that enough people think it is a worthwhile field of inquiry. That's the implicit "prescriptive" part - it is derived from the fact that we ask the questions to make a larger policy decision.
This all makes sense, but I do not believe it is the right picture. As you suggest, there is a line between, say, the academic discipline of economics on the one hand, and, on the other hand, the role played in moral decisions by the findings of economics. But also inherent in your description is the fact that the academic discipline of economics falls entirely on one side of that line. So economics as such does not, after all, have a normative component. Moreover, utilitarianism, as a theory of the moral good, lies on the other side of that line -- that is, it says, as a function of the findings of fact and causal law, what is moral and immoral, while remaining silent on those findings of fact and causal law (except as hypothetical illustrative examples). If it were otherwise, we would see whole chapters of the work of Mill, Bentham, and Harris (in The Moral Landscape) devoted to deep investigations of fact and causal law (but of course we do not).
The practice of medicine probably does span both sides of the line, but I think this is a special case, because doctors deal face to face with their patients, who in fact have widely varying degrees of compliance with medical advice. One common tool for increasing compliance is moral suasion (for example, when your doctor wags his finger at you and says you are a bad boy or girl for not taking your medicine, or getting your regular checkup, or whatever). Thus, genuinely moral suasion is part of the practice of medicine, and I concede that utilitarian reasoning plays a major role, insofar as what physicians morally pressure people to do is a function of scientific findings of cause and effect. I will chalk that up in support of utilitarianism as one tool in our moral toolbox.
On the other hand, I do not believe this argument transfers from medicine to politics, or foreign policy, or individual ethics. The hard part of medicine is knowing what works and getting people to do it in spite of their stubbornness and lack of discipline. The hard part of economics, diplomacy, and life on the street is trading between the interests of various overlapping groups and coalitions engaged in zero-sum conflicts of interest. That is where the study of ethics really ought to help us, and where I claim utilitarianism does not.
This is well put.
Economists in particular are way more utilitarian than the average person because they are trained in math and principles that highlight how to increase overall “wellbeing.”
(They’re also more libertarian/free market than average for the same reasons.)
The Founding Fathers were intimately aware of the problems with religious warfare and bad monarchs.
So they built upon English common law and designed a government with competing branches, federalism, and individual liberty. To promote the general welfare.
Utilitarianism is a particular form of consequentialism, which I assure you is pretty concerned with cause and effect as it relates to moral outcomes.
Standard economics and public policy are essentially aligned with rule utilitarianism because they are typically focused on increasing public wellbeing/wealth within the confines of our legal system.