Utilitarianism is a philosophical epidemic in contemporary social and political dialogue. In one form or another, the notion of a “greater good” above the good of individual agents has taken root in group-centric ideologies. Dictators have invoked it on nationalistic or ethnocentric grounds; leftists have promoted it in the name of “mankind”; even the most individualistic of nations, such as the United States, are overrun by policy concerns for the “national interest.” Part of its appeal is intuitive: if happiness is good, and we want more of good things, then the greatest happiness is what we ought to look for. This looks decent enough in words, but what holds it together is little more than an association of similar-sounding terms. What is meant by “happiness,” and in what sense is it “good”? To whom does the “greatest happiness” pertain, and why “ought” we to pursue it? Utilitarianism, even in its most “complete” form, fails to address these questions with valid answers.
Overall, utilitarianism is a philosophical mess. Its moral implications are strange; its practice is filled with difficulties and absurdities; and its justification arises from, principally, nowhere. Mill’s treatment of utilitarianism, in particular, has egregious flaws. Its essential characteristics can be listed as follows: pleasure (in the non-degrading sense) is all that is intrinsically valuable; each person’s pleasure is valued equally, with allowance for “kind”; an action’s morality is measured by how much pleasure it produces (for all concerned); and agents must be disinterested spectators in order to consider their decisions correctly.[1] Problems with utilitarianism can be generally summarized in three often interrelated categories: the alienation of agents from themselves, problems of moral calculation, and lack of justification.
Consequentialist Utilitarianism
The doctrine of consequentialism[2] is the principal cause of the bizarre moral demands expressed by utilitarianism. As Philippa Foot notes, there is a very compelling idea behind it: no one would ever consider it morally acceptable to choose a worse state of affairs over a better one.[3] Using that idea as an argument against non-consequentialist theories, however, either begs the question or rests on nothing stronger than a weak, pre-existing intuition, depending on the senses of the words “better” and “worse.” If they are used normatively, clearly no deontological[4] moral theorist would reject a morally “better” state of affairs (more duties fulfilled) for a “worse” one – to do so would be antithetical to having a moral theory in the first place, and thus this case is trivial.[5] If they are used in some sense that is non-normative but still value-based, then the argument essentially ignores the moral theory under criticism, instead imposing a standard from a foreign hierarchy of values, i.e. a floating abstraction. For example, one might criticize Ayn Rand’s Objectivism for providing no political guarantee of care for the physically handicapped (beyond protection from coercion), indicating an alleged problem with the consequences of Objectivism. However, Objectivism establishes a direct relationship between its principles (e.g., non-interventionism) and concretes (e.g., the nature of reality, the nature of humans, the validity of the senses, etc.), so this criticism merely involves some relativistic view of a “good” state of affairs being used as a standard to judge a reasoned moral theory.
Consequentialism affects the role of the agent in moral decision-making quite severely. Under consequentialist utilitarianism, an action’s morality can be defined thus: an action is moral if it brings about the intrinsically desirable state of affairs. An agent is as morally responsible for those states which arise from his action as for those which arise from his inaction. Therefore, at any given point in time, he must decide which action will produce the proper state from that point forward, regardless of the past. An important observation here is that the actions of other agents are no greater factors in the agent’s decision than anything else, i.e. they are simply part of the environment in which he must bring about the best state of affairs.
An ethical puzzle for consequentialists
In other words, all considerations for an agent’s decision are exogenous: his concern ought to be to take what is given, and to produce the best outcome from it. Suppose a man, a proficient utilitarian named John Stuart, is walking down the street of a very happy society. Along comes a man of great evil, a member of an alliance of reputable wrong-doers, who presents the utilitarian with an ultimatum: “kill yourself, or I will kill ten people very much like you in this very happy society.” The evil alliance of wrong-doing does, indeed, have a strict policy of honoring its sinister promises when they are made, and has yet to break it. J.S., as a good utilitarian, knows that if he does not kill himself, he will be morally responsible for the deaths of ten very happy people, since he could have prevented them.[6]
Clearly, there is something amiss in this example. A determined utilitarian could say that the evil man is morally blameworthy for presenting J.S. with such a terrible situation, since either outcome (minus one person or minus ten people) reduces the greatest overall happiness. Nonetheless, this does not change the nature of J.S.’s decision: he must weigh the utilities at hand, which obviously point away from his continued existence, and make the right or wrong choice. The relevant implication is that J.S., as an agent, is alienated from his own person. He is responsible not only for his own actions but also for the actions of others (the evil man and the sinister organization). As Bernard Williams states, consequentialism implies that “from the moral point of view, there is no comprehensible difference which consists just in my bringing about a certain outcome rather than someone else’s producing it.” Because the focus of utilitarian moral action is to produce a specific end-state for everyone, J.S. (or anyone in his position) must put aside all of his projects (being happy, being a philosopher, being alive) when the ultimatum is issued to him and act as the projects of others demand.[7]
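The structure of J.S.’s predicament can be made explicit with a rough sketch of the calculus he is expected to perform. The notation is an illustrative assumption rather than anything Mill provides; it treats each person’s happiness as interpersonally comparable and treats the dead as contributing no further happiness:

\[
U(\text{comply}) = \sum_{i \neq \text{J.S.}} h_i, \qquad
U(\text{refuse}) = h_{\text{J.S.}} + \sum_{i \notin V,\, i \neq \text{J.S.}} h_i,
\]
\[
U(\text{comply}) - U(\text{refuse}) = \sum_{i \in V} h_i - h_{\text{J.S.}},
\]

where \(V\) is the set of ten threatened victims and \(h_i\) the happiness of person \(i\). Since the ten are “very much like” J.S., the difference is positive by construction, and the calculus points away from his continued existence no matter what his own projects happen to be.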
The calculation problem
The utilitarian calculus is also the source of much trouble. How practical is it to fully integrate all requisite information (the happiness of all others, the results of an action on it, etc.) into a properly weighted scale in order to make the right decision? Mill deflects this problem by suggesting that it is one shared by all moral theories:
There is no ethical creed which does not temper the rigidity of its laws by giving a certain latitude, under the moral responsibility of the agent, for accommodation to peculiarities of circumstances; and under every creed, at the opening thus made, self-deception and dishonest casuistry get in. There exists no moral system under which there do not arise unequivocal cases of conflicting obligation.
This contention does not suffice, however. It is indeed true that all moral theories encounter difficulties, but to invoke that as an absolution of utilitarianism’s difficulties is analogous to telling an upstart typewriter company in the 21st century, “every business has trouble and needs to invest at a loss when it gets started; don’t worry, go ahead and invest.” Utilitarianism requires a tremendous amount of input in order to properly discover the right decision, across many persons in both the short run and the long run. The sheer volume of particulars in the world must be considered in every case, as opposed to the fewer particulars that would be necessary in, say, a natural law framework of morality. The conclusion is that the “calculated” utilitarian choice is often wildly divergent from the ideal, morally correct choice. By contrast, in a natural law theory, someone makes a contract and knows to keep it because, via reason, he has concluded that contracts ought to be kept, and he knows that it applies in this specific situation (for the obvious reason that he signed his name on a physical contract with another person). In the context of the theory, there is only a minuscule chance that he has made an error in his judgment.
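To see how heavy that informational burden is, the aggregation the calculus demands can be sketched roughly as follows; the notation, the time horizon, and the weighting term are illustrative assumptions, not anything Mill specifies:

\[
U(a) = \sum_{i=1}^{N} \sum_{t=0}^{T} w_t \, h_i(a, t),
\]

where \(a\) is the action under consideration, \(N\) the number of persons affected, \(T\) the relevant time horizon, \(w_t\) whatever weight nearer consequences receive against more distant ones, and \(h_i(a, t)\) the happiness that \(a\) yields person \(i\) at time \(t\). Acting rightly means estimating every term for every available action and ranking the totals; an error in any single estimate can flip the ranking, which is why the “calculated” choice so often diverges from the ideal one. The natural law agent, by comparison, needs only the premise that contracts ought to be kept and the fact that he signed one.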