Useful Utility: Consequences (Part 1)
This is a heavily-edited version of a post I wrote in December.
Since then, I’ve decided I want this to be the first of a four-part series defending a much more restrained, useful version of utilitarian philosophy. I’ve deleted the original post, and I’m replacing it with this.
Utilitarians have it rough.
While their philosophy has an intuitive appeal, almost everybody I know regards it as crude. It’s the philosophy that suggests that forced organ harvesting is ethical. The one people appeal to when they rationalize human rights abuses. The one a comic book writer might give to the villain, if they want the motives to be “complicated”.1
And of course, the utilitarians you meet in the real world are obnoxious. They’re smug, they act like they think they’re more rational than you are, they do things like scold you for donating to less-than-optimal charities (never mind that they are behind one of the most impactful philanthropic movements in recent memory)...it’s a lot. To top it all off, Sam Bankman-Fried is one of them.
It’s too bad, because it’s a remarkably useful theory. And I think what’s happened is that some people have expected it to do things it cannot, saddled it with more than it can support, and then drawn extreme conclusions from it. Others see that, find the conclusions to be absurd, and then jettison the entire thing – not just the conclusions, but utility functions, consequentialism, and sometimes the very concept of trying to compare two things.
Many utilitarian conclusions are absurd, and the critics have a point. But the absurd conclusions don’t undermine the foundational ideas – I’d even say those ideas are the only sensible way to think about how to live a life and make decisions in it. This foundation needs a defense, one that separates the core of utilitarian thinking from the conclusions that get derived from it. This foundation is useful, and it deserves to be written about far more than the abstract hypotheticals that have come to define the genre.
This is a big topic, so I’m going to spread it out over a few posts (for now it’s four, but who knows where it ends up):
Consequentialism: Why outcomes are the only thing that matters when we make decisions
Tastes: Where wanting to “do good” comes from
Utility functions: How we evaluate what outcomes are “good” in the first place
Heuristics: Why principles are relevant (and necessary), even when focused on consequences
So let’s get into the first topic: consequentialism.
Utilitarianism’s less-popular, more important parent
Behind utilitarianism is a simple idea: actions don’t matter, their consequences do. This is consequentialism. I would say “the ends justify the means”, but that implies that means are something that can be justified in the first place. To a consequentialist, justifying the means makes as much sense as justifying a hammer – it is not something that is just or unjust, it is merely a tool that can be used.
Let’s take a hypothetical: you have a hundred dollars, and I have a shovel. I’m thinking about hitting you with the shovel and taking your money – then I would have a hundred dollars. I need to decide what I should do.
A common misconception is that if I’m a consequentialist, I ought to take your money. That’s incorrect. The best decision depends entirely on my values.
If all I value is my own material gain, it looks like I should hit you with the shovel. And obviously, if I value your health, I shouldn't. But there’s plenty of room for nuance. If I value my overall welfare – not just my property, but also my state of mind, my conscience, etc. – then perhaps I still shouldn’t. The act of beating and robbing somebody might put me in a worse state than I would be otherwise, even with an extra hundred dollars.
There are a few points I want to make about this scenario:
First, there is no judgment. Consequentialism is not so much a system of morals as it is a method of decision-making. It dictates what should be taken into consideration, but not how to actually weigh those considerations. The values come from somewhere else; consequentialism doesn’t have any of its own. While it’s usually discussed in the context of a selfish value or a utilitarian one, it’s very flexible. If you value filial piety, pick the choice that leaves your parents best-off.
A related point is about the scope of the consequences – they are not limited to material realities. “I feel guilty” is as valid a consequence as “I have a hundred dollars”. This might seem strange, and potentially circular (I might only feel guilty if my actions are unjust, and at the same time I’m deciding if my actions are just based on how I will feel). It’s necessary, though, because in some sense all well-being comes back to psychological experience.
Also note that we aren’t concerned with what actually happens. Initially, this seems strange – consequentialism is about the consequences, how can we possibly ignore those? But this is a decision-making framework. We are thinking about the anticipated consequences. Walking up to a roulette table and putting all my money on 17 is a worse choice than not playing, and it doesn’t retroactively become a good choice if I happen to win. To the extent that you evaluate the quality of a decision, you should evaluate it based on what you knew at the time.
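To put a number on that roulette claim, here’s a quick back-of-the-envelope calculation in Python. It assumes a standard American wheel (38 pockets, with a winning single-number bet paying 35 to 1); the exact house edge isn’t the point, just that the anticipated outcome is negative before the wheel ever spins:

```python
# Expected value of a $100 straight-up bet on 17, assuming an American
# roulette wheel: 38 pockets, and a winning single-number bet pays 35 to 1.
stake = 100
p_win = 1 / 38
expected_value = p_win * (35 * stake) - (1 - p_win) * stake
print(f"Expected value: {expected_value:.2f}")  # about -5.26
```

In expectation you lose about $5.26 per $100 wagered, and that was true the moment you placed the bet – whether or not the ball happens to land on 17.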
My final point is that choices cannot be “good” or “bad”, only “better” and “worse”. In our shovel example, I only listed two choices, but of course there are more. I could ask you nicely for the money. I could try and trade you my shovel for it. I could see if you will wager the cash on a bet. There are an infinite number of options,2 and you will not find the “optimal” one. Make your peace with uncertainty; the best you can do is evaluate a good number of choices, and take the best one out of those.
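To show just how value-agnostic the framework is, here’s a minimal sketch in Python. Every specific in it is hypothetical – the options, the outcome attributes, the numbers – because those are exactly the things consequentialism doesn’t supply:

```python
# A sketch of consequentialist decision-making: enumerate some options,
# predict each outcome, score the outcomes under *your* values, pick the best.
# The options, attributes, and numbers here are all made up for illustration.

def my_values(outcome):
    # Stand-in for a personal value function. Consequentialism doesn't
    # tell you what goes here; it only says to apply it to outcomes.
    return outcome["money"] + outcome["conscience"]

anticipated_outcomes = {
    "hit with shovel": {"money": 100, "conscience": -500},
    "ask nicely":      {"money": 10,  "conscience": 0},
    "offer a trade":   {"money": 40,  "conscience": 0},
    "do nothing":      {"money": 0,   "conscience": 0},
}

best = max(anticipated_outcomes, key=lambda o: my_values(anticipated_outcomes[o]))
print(best)  # "offer a trade" -- under these made-up values
```

Swap in a value function that only counts money and “hit with shovel” wins; the machinery doesn’t care. That’s the point: consequentialism does the comparing, and something else entirely decides what counts as better.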
Why bother with this?
So we’ve defined consequentialism. Why is this the system we should pin our decisions on?
What else could we use? If you don’t spend a ton of time thinking about moral philosophy, consequentialism might sound so obvious that you aren’t sure how else you could even think. But you probably agree with one of the alternatives, to some extent. Generally speaking, they are some version of “deontological ethics”, which is an abstruse (but brief!) way of saying: actions are inherently good or bad, based on some criteria. A deontologist would say that hitting somebody with a shovel is wrong, whether or not you prefer the outcomes. That sounds pretty reasonable.
But consider: In any context outside of ethical decision-making, you almost certainly don’t decide anything deontologically. Decisions are made to accomplish goals, and goals are just desired outcomes. You have goals when you order a sandwich. The goal might not be as straightforward as “eat something tasty” – perhaps you’re trying to lose weight, and calories are also a factor – but you still have one. When you choose a book to read, or a plumber to hire, or a school to send your kid to, you have a goal in mind.
This goes deep. It’s in your primal wiring; you have a goal to check the boxes on Maslow’s pyramid, starting with food and shelter.
Reducing all decision-making to “evaluate what’s best for your goals” sounds clinical, and of course nobody’s literally making pro/con lists for everything they do in life. We all follow heuristics and principles and other shortcuts that save time. Most of it is subconscious. But it is all in service of a goal. Nobody advocates for picking between Chinese and Thai food based on some deontological characteristic of the cuisine. It’s about what outcome you would prefer.
Only in the realm of ethics are we tempted to abandon consequentialist reasoning. Why should the system we use to make every other decision in our lives not apply in this specific area? I understand why the temptation is there.
It gets back to the heuristics and principles and shortcuts. There are too many decisions in life to evaluate all of them, and most of them are near-inconsequential. So our subconscious brain follows patterns that usually get us the outcomes we want. Every now and then, the rule-of-thumb leads us down the wrong path, but for the most part it’s right. If we rigorously evaluated every decision, our outcomes would actually be worse, because we’d be too paralyzed to act most of the time.
If a decision is really important, though, we override those instincts. Imagine you’re on Let’s Make a Deal. The host shows you three closed doors, and tells you there’s a Mercedes behind one of the doors and a goat behind the other two. You get to pick a door, and you get whatever’s behind it. You make your choice.
The host opens one of the other doors, showing that there’s a goat behind it. And he offers you a chance to switch your guess to the last door. Do you take it?
Your instincts say no. All those subconscious processes that usually make your life livable tell you that nothing has changed. But of course, the “best” decision (by “best” I mean “the one that gives you the best chance of winning a car”) is to take the host’s offer and switch your guess to the third door. Your first pick wins the car one time in three, so switching wins two times in three. This is the now-famous Monty Hall problem, and the counter-intuitive strategy has been proven optimal.
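If the proof doesn’t convince you, brute force might. Here’s a quick Monte Carlo sketch (mine, not anything canonical) that plays the game many times with each strategy:

```python
import random

# Monte Carlo check of the Monty Hall problem: switching should win the
# car about 2/3 of the time, staying only about 1/3.
def win_rate(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's first pick
        # Host opens a door that is neither the contestant's pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {win_rate(switch=False):.3f}")  # ~0.333
print(f"switch: {win_rate(switch=True):.3f}")   # ~0.667
```

Staying wins about a third of the time; switching wins about two thirds.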
And if you’re on TV, and you’re talking to Monty Hall (well actually, whoever replaced him – he’s dead now), and you know about this problem, you will suppress your instincts and pick the other door. Because at the end of the day, you care about the consequences.
Ethics are no different. We’re filled with all kinds of subconscious tastes and biases and wants that we don’t really have control over, that have taken root because of our genes, or our upbringing, or our education, or who knows what. And we have a strong impulse to satisfy these, and that’s what feels “moral” to us. Asking ourselves to throw all that away feels wrong. But so does switching your choice when you’re talking to Monty Hall. The fact is, sometimes our impulses are wrong, and the whole point of our frontal lobes, our rational centers, is to keep those impulses in check.
Indeed, raise the stakes high enough, and most of us find consequentialism to be intuitively correct. If millions of lives are at stake, almost all of us would agree that it is acceptable to lie, or cheat, or steal, to save those lives. We might have issues with this at smaller scales, but at a sufficiently large one, consequentialism becomes obviously correct.
After all, if we set aside the consequences, what else is there? Abstract ideas about what actions are logical, or just, or righteous, lack any force without consequences. Explaining why stealing is wrong, without examining the material impact to the person who has been stolen from, is near-impossible. The best argument I’ve read in this vein is Kant’s, and it’s thoroughly unconvincing (as an example, he recommends that you never tell a lie, even to an ax murderer, because Lying is Wrong™).3 Even traditional religious ethics fall back on consequentialist theory, damning sinners to hell and rewarding the virtuous with heaven.
The fact is, everything we perceive, everything we feel, everything we know is the product of consequences. Embracing this does not mean embracing the conclusion of every utilitarian thought experiment you’ve ever read; it just means that you should be honest with yourself about what you’re trying to do, and then ask yourself if what you’re doing is really the best way to accomplish it.
Now if you’re interested in what you should try to do, stick around for Part 2.
This has become so commonplace that utilitarianism is almost de facto evil in these stories. Magneto, Ozymandias, Doc Ock, Ra’s al Ghul…the list goes on. They don’t mention utilitarianism, but whenever they get to the part where they monologue about their grand plans, you can almost see Jeremy Bentham’s ghost in the background.
Technically this is maybe not an infinite number, just a very large finite one. But for our purposes, the number might as well be infinite.
This is the “categorical imperative”, which dictates that you only take actions that could be made universal laws; the test is whether universalizing an action produces a logical contradiction. Consider stealing: “stealing” as a concept relies on taking somebody else’s property. If everybody were allowed to steal, the very concept of personal property would be meaningless, because any personal property you have could just as easily become somebody else’s if they steal it.
I don’t think the framework really makes sense (you could imagine some very specific, complicated rules that both allow stealing and could be universalized, and Kant never really explains where the line is), but that critique will have to wait for another post.