
Your argument is closely related to the 'ticking time bomb' question: a terrorist has placed a nuclear weapon in the middle of New York City; it's due to go off in an hour, killing millions; if you torture him, he'll tell you where it is and you can defuse it. Should you do so?

If you say 'yes', you may find the same problem being posed with a less-than-nuclear weapon, and fewer than millions of victims, on a sliding scale right down to a captured kidnapper who will not reveal where his victim is tied up unless he is tortured. (And 'torture' can be slid along the scale too, from electrodes on the private parts to twisting an arm.) And from death for the innocent victim, we can posit mere extended extreme discomfort for a victim who will eventually be found anyway. So ... when is torture justified?

I have always thought that the answer to this question is ... not to answer. In the case of the hidden atom bomb, there is obviously only one sane answer (according to me).

But if you make that answer 'public', you have provided intellectual justification for the police giving the third degree to a teenage shoplifter. You have established that torture may, in principle, be justified. (And, as the punchline to an old joke goes, 'Now we're just quibbling about the price.')

So ... refuse to answer. A concrete instance of the wise saying that 'there are things which are said, and not done. And there are things which are done, and not said.'


I think my gripe with utilitarianism is the same one I have with applying the concept of the reward function from reinforcement learning to human beings:

Any reasonable function for utility or reward that matches human experience is so complex as to be unknowable. It may be useful as a reduction that guides thinking, but any function that is actually written down and substituted for the "real" or "ideal" utility/reward is necessarily limited and correspondingly "wrong."
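
To make that concrete, here's a toy sketch (everything in it, the functions and numbers included, is invented purely for illustration): a "true" utility with interacting terms that no simple formula captures, and a legible proxy that scores only one factor. Optimizing the proxy lands somewhere the true utility hates.

```python
import random

random.seed(0)  # deterministic toy run

def true_utility(state):
    """Stand-in for the unknowable 'real' utility: many interacting terms.
    Any closed form like this is of course a caricature; that's the point."""
    work, leisure, relationships = state
    return (
        work ** 0.5 + leisure ** 0.5 + relationships ** 0.5
        - 0.1 * work * (10 - leisure)  # overwork without leisure backfires
    )

def proxy_reward(state):
    """A legible, describable substitute: just score hours worked."""
    work, _, _ = state
    return work

# Search random candidate life-states and optimize each function.
candidates = [
    tuple(random.uniform(0, 10) for _ in range(3)) for _ in range(10_000)
]
best_by_proxy = max(candidates, key=proxy_reward)
best_by_true = max(candidates, key=true_utility)

print("proxy's pick, scored by true utility:",
      round(true_utility(best_by_proxy), 2))
print("true utility's own best:",
      round(true_utility(best_by_true), 2))
```

The proxy-optimal state maxes out "work" while ignoring the interaction terms, which is exactly the sense in which any described substitute function is "wrong."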

I think your point is that the critiques are actually critiquing a limited, substitute utility function, and then applying that to the concept of utilitarianism in general.

From another point of view though, it seems valid to say, well fine, but if you can’t ever know the complexities of what utility really is, why bother at all? You’re back to where your article ends up: following heuristics and calling it utility. But that’s also a limited and wrong definition!

My answer is to sit in the uncertainty. Think problems through from multiple frameworks, knowing that none of them is likely right but that each may have something to tell me.


> Finally, you might object that just killing grandma one time won’t make it a routine practice, or that just lying to one person on their deathbed won’t destroy trust.

This is the strongest argument against your defense of utilitarianism. I think your addition of second- and third-order effects changes utilitarianism into deontology. You are fighting the hypothetical, in the sense that the hypothetical posits that *these helpful things* will be the only effects of your action: there won't be any widespread adoption of murderous organ harvesting, and there won't be any shift in society's ethical norms. But your argument for why utilitarianism wouldn't prescribe killing grandma relies entirely on second- and third-order effects. I.e., it's addressed to a different hypothetical.

My take is that people in the utilitarian camp will unashamedly choose to kill grandma in the hypothetical. If you are against killing grandma in the pure hypothetical, then you aren't a utilitarian (or you prefer your grandma's life to the charity assistance). You're probably a deontologist.

I definitely do see that there's a largish group of people who espouse utilitarianism without accounting for any higher-order effects at all. Even in the grandma scenario transplanted into the real world, without any of the hypothetical's guarantees, they'll still suggest killing the grandma. I think those people are just weird and like killing grandmas to signal how moral yet coldly logical they are.

I think a different way to phrase your post is that utilitarianism requires updateless decision theory (or functional or timeless decision theory; I can't figure out the difference between them) to function as a moral framework. Utilitarian agents need to account for how their decisions evidence what decisions other utilitarian agents will make.
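
A back-of-the-envelope version of that last point (all numbers invented; "policy-level" is my crude stand-in for the updateless-style reasoning above): evaluate the act in isolation, then evaluate it as the policy that every similarly-reasoning agent adopts.

```python
# All numbers invented for illustration. "Policy-level" here means:
# choosing the act means choosing the policy that every
# similarly-reasoning agent also enacts.

GAIN_PER_ACT = 5.0        # net utility the hypothetical promises per act
N_SIMILAR_AGENTS = 1_000  # agents whose reasoning mirrors this one
TRUST_COST_RATE = 0.02    # assumed diffuse cost, growing with prevalence

def act_level_utility():
    """The pure hypothetical: one act, no ripples."""
    return GAIN_PER_ACT

def policy_level_utility():
    """Evaluate the act as a policy adopted by all similar agents."""
    total_gain = GAIN_PER_ACT * N_SIMILAR_AGENTS
    # Assumption: trust erosion compounds superlinearly with prevalence.
    total_cost = TRUST_COST_RATE * N_SIMILAR_AGENTS ** 2
    return (total_gain - total_cost) / N_SIMILAR_AGENTS  # per-agent share

print("act-level:   ", act_level_utility())     # 5.0   -> kill grandma
print("policy-level:", policy_level_utility())  # -15.0 -> don't
```

Inside the pure hypothetical only the first number exists, which is why the committed utilitarian bites the bullet; the second number is what the second- and third-order defense smuggles back in.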


Killing a grandma to use her fortune to better the world is quite literally the plot of one of the greatest literary works of humankind: Dostoevsky's Crime and Punishment. (Also one of my favorite books of all time.)

Highly recommend.
