9 Comments

Your argument is closely related to the 'ticking time bomb' question: a terrorist has placed a nuclear weapon in the middle of New York City; it's due to go off in an hour, killing millions; if you torture him, he'll tell you where it is and you can defuse it. Should you do so?

If you say 'yes', you may find the same problem posed with a less-than-nuclear weapon, and fewer than millions of victims, on a sliding scale right down to a captured kidnapper who will not reveal where his victim is tied up unless he is tortured. (And 'torture' can be slid along the scale too, from electrodes on the private parts down to twisting an arm.) And instead of death for the innocent victim, we can posit mere extended extreme discomfort for someone who will eventually be found anyway. So ... when is torture justified?

I have always thought that the answer to this question is ... not to answer. In the case of the hidden atom bomb, there is obviously only one sane answer (according to me).

But if you make that answer 'public', you have provided intellectual justification for the police giving the third degree to a teenage shoplifter. You have established that torture may, in principle, be justified. (And, as the punchline to an old joke goes, 'Now we're just quibbling about the price.')

So ... refuse to answer. A concrete instance of the wise saying that 'there are things which are said, and not done. And there are things which are done, and not said.'

author

This is a nice argument. Here's a related thought: even if most reasonable people accept that torture is *in principle* justifiable, you might still favor a complete legal prohibition, on the grounds that there's no way to make the law sufficiently discriminating, so any legal permission to torture would do more harm than good. Perhaps it's a good idea to have an inflexible social norm against torture for the same reason?


I also propose (or maybe this is merely restating your point) that to build a world that doesn't generate cases that might "require" torture, we should reduce violence as much as possible, including by not torturing.


I think my gripe with utilitarianism is the same one I have with applying the concept of a reward function from reinforcement learning to human beings:

Any reasonable function for utility or reward that matches human experience is so complex as to be unknowable. It may be useful as a reduction, to provide guidance in thinking, but any explicitly described function that substitutes for the “real” or “ideal” utility/reward is necessarily limited and correspondingly “wrong.”
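
To make that concrete, here's a toy sketch of my own (nothing from the post; the dimensions, weights, and the "income-only" proxy are all invented for illustration). The "true" utility depends on a tangle of factors we can't write down, while any proxy we *can* write down captures only a slice, so optimizing the proxy can look great by its own lights while telling us little about the real thing:

```python
import random

random.seed(0)

# Stand-in for the unknowable "true" utility: a tangle of 1000
# interacting factors, faked here with random weights.
WEIGHTS = [random.gauss(0, 1) for _ in range(1000)]

def true_utility(x):
    """The 'real' utility over a 1000-dimensional life situation."""
    return sum(w * xi for w, xi in zip(WEIGHTS, x))

def proxy_utility(x):
    """A describable substitute: just one measurable factor."""
    return x[0]

# Greedily optimize the proxy.
x = [0.0] * 1000
for _ in range(100):
    x[0] += 1.0  # the only move the proxy ever rewards

print(proxy_utility(x))  # 100.0 by construction: the proxy looks great
print(true_utility(x))   # WEIGHTS[0] * 100: could be good, bad, or awful
```

A caricature, of course, but that's the gripe: the substitute is "wrong" in exactly the dimensions it leaves out.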

I think your point is that the critiques are actually critiquing a limited, substitute utility function, and then applying that to the concept of utilitarianism in general.

From another point of view though, it seems valid to say, well fine, but if you can’t ever know the complexities of what utility really is, why bother at all? You’re back to where your article ends up: following heuristics and calling it utility. But that’s also a limited and wrong definition!

My answer is to sit in the uncertainty. Think problems through from multiple frameworks, knowing none of them is likely right, but all may have something to tell me.

author

I'm pretty consequentialist myself, but I think your gripe is the strongest criticism. In practice, utilitarian calculations might be so hard that they're almost impossible to do. (Certainly, we can't go about our days doing these calculations for each movement we make!) My instinct is that utilitarianism is most useful in situations where you can actually attempt an explicit calculation, e.g. public health or similar.


totally this. maybe in simpler terms: the utility function (output) and context (input) are very high-dimensional. combine that with an already high-dimensional action space and you've created these fuzzy "irrationalities" and moral instincts/common sense.

these thought experiments are vast dimension reductions of this high-dim space, parsimonious for conversational purposes but not for real-life/practical ones. having said that, continuing to talk about it in all its nuances is still an important mechanism for society to coordinate our values and ethics.

you often see this show up in how legal systems work: they're meant to be a cut-and-dried rule of law that, in practice, actually carries a ton of discretion and enough wiggle room to be the wiggly/inconsistent overfit moral partitioning you describe.

i'd like to think that we're not actually that irrational; we just have a set of hyper-endogenous experiences, contexts, utilities, and actions that aren't low-dimensional/parsimonious enough to discuss more broadly or economically.

ps huge fan of your writing dynomight, and the very thoughtful commenters here too!


> Finally, you might object that just killing grandma one time won’t make it a routine practice, or that just lying to one person on their deathbed won’t destroy trust.

This is the strongest argument against your defense of utilitarianism. I think your addition of second- and third-order effects changes utilitarianism into deontology. You are fighting the hypothetical, in the sense that the hypothetical posits that *these helpful things* will be the only effects of your action: there won't be any widespread adoption of murderous organ harvesting, and there won't be changing ethical norms in society. But your argument for why utilitarianism wouldn't prescribe killing grandma relies entirely on second- and third-order effects. That is, it's addressed to a different hypothetical.

My take is that people in the utilitarian camp will unashamedly choose to kill grandma in the hypothetical. If you are against killing grandma in the pure hypothetical, then you aren't a utilitarian (or you prefer your grandma living over the charity assistance). You're probably a deontologist.

I definitely do see that there's a large-ish group of people who espouse utilitarianism without accounting for any higher-order effects at all. Like, even in the grandma scenario, but in the real world, without any of the hypothetical's stipulations, they'll still suggest killing the grandma. I think those people are just weird and like killing grandmas to signal how moral yet coldly logical they are.

I think a different way to phrase your post is that utilitarianism requires updateless decision theory (or functional or timeless decision theory; I can't figure out the difference between them) to function as a moral framework. Utilitarian agents need to account for how their decisions are evidence of what decisions other utilitarian agents will make.

author

I think it's a telling empirical fact that we evolved so that we tend to (by default) "think deontologically", because evolution is definitely a decision theorist interested in maximizing expected values.

My guess is that the game theory works out so that this is typically the right way to behave, both because of the need to use heuristics instead of calculating expected utility, and for the kind of meta-reasoning reasons that Parfit explores. E.g., even if you're totally self-interested, it's often a good policy to be honest, because otherwise, in the long term, you'll be excluded from many mutually beneficial pacts.

But I still think utilitarianism is the way to go when you have lots of time to do very careful calculations. E.g. I think I'd like criminal justice reform or public health to be driven by a utilitarian-type calculation.

Jun 12, 2022 · edited Jun 13, 2022

Killing a grandma to use her fortune to better the world is quite literally the plot of one of the greatest literary works of humankind: Dostoevsky's Crime and Punishment. (Also one of my favorite books of all time.)

Highly recommend.
