Economists are wedded to utilitarianism as their collective moral compass. This is why we speak of social planners, welfare, utility maximization, and quality of life. The essence of utilitarianism is that moral judgments are reserved for final outcomes, not the means via which those outcomes are achieved (unless people have preferences over those means). As Bentham said, it is about the greatest happiness of the greatest number of people. In modern jargon, classic utilitarianism is about getting the highest number of total happy life years.
The quiz has 4 questions. My ‘classical utilitarian’ answers and discussion will follow on Friday:
- To which identifiable group should society allocate its scarce supply of life-saving donor organs? I am thinking here of gender, age, race, area, anything that is a potential basis for an administrative allocation.
- There is a potential terrorist who, with some probability, will cause a million deaths, and who can only be stopped by being killed. How high should the probability of the threat materializing be for you to agree that your society should have institutions (such as drone programs) that kill him off pre-emptively? And how high should the probability be for you yourself to be willing to kill him off pre-emptively, presuming no other consequences for yourself of that act?
- Suppose you are in the position whereby you alone can choose to make it statistically visible what socially-unwanted things are done to pets by people in their own homes, but no-one knows you have that ability. In this hypothetical, making the data available would in no way change outcomes. Would you make that information visible?
- Suppose you are in the position to decide on whether to have an institution that saves the lives of an identified group of patients, say with a particular genetic or childhood disease. With the same money you could set up an institution that prevents 10% more deaths in the general population, for instance by inoculation or investments in road quality that reduce accident rates. Hence the second institution saves more lives, but the lives saved are not visible, either beforehand or afterwards: even afterwards, you do not know who was saved, so the lives saved are ‘statistical’. Would you invest in the first or the second institution? More generally, what is the ratio of ‘statistical lives saved’ to ‘identified lives saved’ you implicitly choose via your policies?
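To make that last ratio concrete (the notation is mine, added only for illustration): let $N_I$ be the identified lives saved and $N_S$ the statistical lives saved, and $v_I$, $v_S$ your implicit valuations of each kind of life. If you are just indifferent between the two institutions, then

$$v_I N_I = v_S N_S \quad\Longrightarrow\quad \frac{v_I}{v_S} = \frac{N_S}{N_I},$$

so the ratio you implicitly choose is the $N_S/N_I$ at which you would switch. In the quiz $N_S = 1.1\,N_I$; picking the identified group therefore reveals that you value an identified life at more than 1.1 statistical lives.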
Paul, you do love simple binaries!
My offerings:
1. Best left to individual patients/doctors to decide on a case-by-case basis, for both ethical and pragmatic reasons.
2. The question is loaded. If “the probability of the threat materializing” is sufficiently high, then you would have real hard evidence of ‘intention’, i.e. a stockpile of nerve gas, a long string of intercepts, purchases of ammonium nitrate, and so on, and therefore taking action falls into the old ethical conundrum: defense of the innocent vs the path of non-violence. (On the other hand, if the question was ‘is it OK to kill somebody simply because they might be planning mass murder, or really don’t like us?’, then it’s not a hard question to answer.)
3. “Socially-unwanted”: cruelty, or what? And again there is a framing to the question: “making the data available would in no way change outcomes”? An “if you could throw a rock in a pond but create no ripples” question, no? And faith would say, if you are talking about definite cruelty, do the right thing no matter what.
4. Not sure; statistics are often BS, so the answer would depend upon the circumstances and the strength of the particular case.
1/ More or less random allocation excluding people with finite use for it (i.e., likely to die soon anyway). This way you get around any biases apart from the universal one that happens to us all. Allocation based on need would be worth thinking about (e.g., people likely to die soon if they don’t get it) but obviously more open to bias — I think I’d put up with a fairly unbiased needs-based method.
More radically, you could allow the giver to choose (I think you can already do this within families), although obviously this creates arguments about the public resources used beyond the actual organ (i.e., transplanting it and so on). It also creates arguments about the legitimacy of creating groups that are potentially excluded due to social biases that are not based on the individual (e.g., “I’m willing to give my organs to anyone, unless they are XXX”).
I’ll have to think harder on these latter two points!
2/ We need some estimate of the probability. Presumably the guy with a PhD in virology working on influenza in a chemistry lab is especially dangerous, and has some probability of wiping out a good chunk of the world. But the probability is so close to zero. Can we work out a probability * lives value function? Possibly, since some (perhaps all) countries do it implicitly now anyway. Clearly, however, it has to have some absolute threshold before getting triggered, because if you can wipe out 10% of the population with a very small chance, any simple additive model will make you worse than the guy who can kill 100 with some decent probability (a rough sketch of that arithmetic follows after point 4/). Nevertheless, absolute thresholds can be set quite high, because the chance of getting hit by a comet etc. is actually not that terrible, and we don’t even think about it.
3/ No, I wouldn’t bother. We’re cruel to any number of animals we eat and destroy the environment of, and clearly killing and eating animals is worse than rather minor cases of pet cruelty, so if nothing is to be gained there is no point in wasting energy on it; I could use the energy on something more productive. There are also cases of socially unwanted things that the pet doesn’t mind (I’ll let you think of those ones), and socially unwanted things the pet does mind. I don’t care at all about the first of these.
4/ The first possibility: we should spend far more money on preventive health than we do. I’m willing to use statistical definitions that evaluate lives in terms of years saved rather than actual deaths caused, and even willing to get into the much greyer areas (as in 1/) where there are probably some points in people’s lives that are worth less than others, though you would want them to be fairly obvious. Most people already implicitly assume this. We don’t, for example, try and stop 88-year-olds from smoking.
That being said, I’m somewhat swayed by what is known as the repugnant conclusion in philosophy, so if you couldn’t find evidence that the general preventive measures made any discernible difference to people’s lives apart from living a small amount longer, you should help the individuals where it does.
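A rough sketch of the additive-model arithmetic in point 2/, with illustrative numbers of my own (a one-in-ten-thousand chance of the mass attack, a population of 25 million): expected deaths are just probability times casualties, so

$$\underbrace{10^{-4} \times 2{,}500{,}000}_{\text{10\% of the population, tiny chance}} = 250 \;>\; \underbrace{0.5 \times 100}_{\text{100 deaths, even odds}} = 50,$$

and a purely additive rule ranks the near-impossible mass threat above the likely small one, which is why an absolute probability threshold is needed before the calculus kicks in.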
It is not unreasonable to have a utilitarian position in areas where it is relatively easy to make these sorts of calculations, but to resort to other principles in other areas.
For 1) and 4), which are both about resource allocation, I would be close to the utilitarian position (marginal benefit). For 2), I would invoke other principles (e.g. a process-based justice principle) and want a much higher than one-in-a-million chance for such a life and death decision. I’m not sure of your point in 3). If nothing would change, I would probably do nothing. Does that count as utilitarian?
“If nothing would change”
Truth is, even doing nothing has effects. Question 3 is a trick question; Paul probably got it in a Christmas cracker.
How would you measure that marginal benefit? For 1), would you measure it in economic terms based on the productivity of those saved (or how much they were willing to pay), or base it simply on increased lifetime?
Could I ask what is the extent of the society you mention?
Can I assume nationhood is the basic unit?
Could be language group?
Or is this a hypothetical society and each utilitarian outcome will evolve within each unit only?
Think current Australians. Please pretend for the hypothetical that whatever we do has no effect on anyone outside Australia (or that we just don’t care).
1. Sick people.
2. Shoot the hostage.
3. Lolcats already exists.
4. Tough one. I’m going with c) tax cuts.
1. Australia already allocates available transplants so as to exclude poor prospects in longevity terms (too old, too sick), i.e. inherently utilitarian principles already apply.
2. Inherently unlikely scenario. People like Alan Dershowitz invented similar preposterous examples in attempted justification of torture warrants to prevent terrorism. Many of the contrary arguments also apply to this example. Moreover, a more sophisticated rule utilitarian approach might well result in a conclusion against resort to pre-emptive execution in any event for anything much short of prevention of certain mass murder. For example, how much “collateral damage” aka innocent deaths do the drones cause? How much resentment does that cause? What effect does that have on overall terrorism levels?
3. If “socially-unwanted” implies harm to the animal then the answer seems clear to me, unless you exclude consequences for the animal from the utilitarian calculus (which I would see as obnoxious). If it is socially unwanted then almost by definition there is SOME effect/change i.e. social disapprobation. Accordingly, presumably this example is positing that the animal molesters are all sociopaths, and the rest of society though disapproving chooses not to impose any concrete sanctions on the offender. Leaving aside the unlikelihood of the scenario, I would still choose to “name and shame” on Kantian grounds.
4. The choice here also seems clear, certainly on a utilitarian calculus but on several other moral philosophical bases as well. Moreover, both your categories are actually statistically bounded rather than individual, unless you are positing that the first institution will treat only people diagnosed with a particular genetic or childhood disease at a specific moment in time, in which case it would be difficult to find any approach to moral philosophy which would favour that option. I would opt for supporting institutions which dispense help in a fairer way rather than play the cheap populist sympathy game. However I acknowledge that most politicians would be likely to adopt the opposite choice, on personal utilitarian grounds (to avoid electoral punishment by non-reflective sentimental voters).
For 1, I basically go for saving the young. I’m mid-50s and have had a life. If there is a 15-year-old, they should definitely get an organ before me.
2. That is not hypothetical; it was a group headed by Rumsfeld and Cheney. I didn’t kill them, restricting myself to rather pitiably marching through the centre of Perth with a lot of other people.
3. If it makes no difference at all, don’t do it.
4. Phrase this one in reverse. We willingly accept that a couple of thousand people a year will die on the roads in Australia. We put a lot of money and effort into reducing these deaths, but because they are random, we find them much more acceptable than allowing (say) a ship with 2000 people aboard to sink because it was too expensive to save it. And I have this prejudice, so I’ll save the identified group, even though it’s not the sensible thing to do.
[1] Administrative allocation is inappropriate, but even in cases where administrative allocation is practised, the decision is made by individual people, not by “society”.
[2] You are saying that some group of people told me that this person is a terrorist and I should just trust them on that matter, but now they put the noose in my hands and ask me to do the dirty deed. From experience I have very little faith left in what governments tell me, and if I’m going to hang a man I want to see fairly direct evidence of wrongdoing.
[3] What is your objective measurement methodology for deciding “socially-unwanted things”? Since government sits in the position of most authority, transparency in government is far more important than transparency in some largely unimportant private affairs, and that transparency cannot be filtered on the basis of “socially-unwanted” or other vague, undefinable terms; it should be as transparent as possible.
If you are asking whether government should set up surveillance just in case someone somewhere might kick their dog, then the answer is no, because in about two minutes that surveillance will be misused. I guarantee.
[4] The answer is in this sentence: “the lives saved are not visible, either beforehand or afterwards: even afterwards, you do not know who was saved, so the lives saved are ‘statistical’.”
We have seen “statistical” deaths from the Chernobyl reactor, but they are based on models and no one knows what’s real any more. We have the same problem with modelled global warming (and yes, even the so-called thermometer data includes a large component of statistical fudge, which you can tell because every year the adjusted past decades get colder, while real thermometer readings stay constant after they are taken). Once again, I’m trusting someone who waves around a chart and says, “This is going to save lives.”
In no case is this a test of utilitarianism, it’s a test of faith in authority. I’ll also point out that utility itself is subjective, but that’s perhaps not the “classic” position.
1. I don’t see a basis for allocating to “an identifiable group”, unless the group of people urgently requiring transplants is classed as such a group. It seems to me that the inability to store live organs will tend to make allocation to the person with the most urgent need at the time any given organ becomes available the only feasible option. From a utilitarian point of view, I suppose a calculus based on happy life-years would be likely to lead to the young being favoured over the old. I haven’t done enough thinking about this to have a firm view about it.
2. It depends, first, on whether or not the murder of the 1 million people would reduce the number of happy life-years. Although it seems intuitively obvious that a terrorist attack that killed 1 million people would reduce the number of happy life-years, it depends on the circumstances. For example, if the terrorist was seeking to prevent a group of people (the leaders of a national government, say) from plotting a military operation that would kill an even larger number of people (say, 2 million), but that group was so well protected from individual assassination that the only feasible way to neutralise them was via a terrorist attack that would kill up to a million people as collateral damage, then that terrorist attack might well increase, rather than decrease, the number of happy life-years. In those circumstances, a utilitarian would have no basis for interfering and ought to support the attack.
If it is established that the attack would reduce the number of happy life-years, then from a utilitarian point of view, presumably the cut-off would be the probability at which the expected reduction in happy life-years caused by the terrorist attack exceeds the reduction in happy life-years caused by the terrorist’s assassination (a rough sketch of that threshold is below, after point 4).
3. I have no idea what this means. Does that imply I am or am not a utilitarian?
4. I’d go for the institution that saves the greater number of lives.
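A rough sketch of the cut-off in point 2 above, using symbols and an illustrative figure of my own: write $L_T$ for the terrorist’s remaining happy life-years and $L_V$ for an average victim’s. Killing him costs $L_T$ for certain, while the attack costs $10^6 L_V$ with probability $p$, so the cut-off sits where the two expected losses are equal:

$$p^{*} \times 10^{6} \times L_V = L_T \quad\Longrightarrow\quad p^{*} = \frac{L_T}{10^{6} L_V} \approx \frac{40}{10^{6} \times 40} = 10^{-6},$$

if, say, both have about 40 happy years left. Above that probability the assassination is the utilitarian choice; below it, it is not.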
1. In principle, allocate to people who get the biggest increase in quality-adjusted life years.
2. This is not a sensible hypothetical. The idea that a probability could be meaningfully assigned to the event is fanciful. We may as well be saying ‘imagine a world with no gravity’. People’s responses are therefore uninformative. Having said all that, on the principle of minimising expected years of life lost, if the suspect is of average age, I guess a probability greater than one-millionth would be the threshold. If the suspect is younger, the probability needs to be higher; if older, lower (a rough version of that arithmetic is sketched after point 4). However, this is complicated by other costs and benefits of pre-emptive action, so I don’t have the information to determine the threshold probability.
3. Another silly hypothetical. If taken literally, there is absolutely no point to making the information visible. In fact, if there is effort involved, pointless costs are involved, so I’d have to say I would not bother to make it visible.
4. I’d go for the latter, I’d hope. The magic ratio is 1. However, if I actually knew anyone in the identified group (or they were made known to me), I can see that the soundness of my decision-making could be compromised.
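A rough sketch of the age adjustment in point 2 above, with illustrative remaining-life figures of my own: on a pure expected-years-of-life-lost rule, the threshold probability scales with the suspect’s own remaining years relative to an average victim’s,

$$p^{*} = \frac{R_{\text{suspect}}}{10^{6} \times R_{\text{victim}}}, \qquad p^{*}_{\text{age }20} \approx \frac{60}{10^{6} \times 40} = 1.5 \times 10^{-6}, \qquad p^{*}_{\text{age }80} \approx \frac{8}{10^{6} \times 40} = 2 \times 10^{-7},$$

so a younger suspect needs a higher probability to justify pre-emption and an older one a lower, as the comment says.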