The Xmas quiz answers and discussion

Last Monday I posted 4 questions to see who thought like a classic utilitarian and who adhered to a wider notion of ethics, suspecting that in the end we all subscribe to ‘more’ than classical utilitarianism. There are hence no ‘right’ answers, merely classic utilitarian ones and other ones.

The first question was to whom we should allocate a scarce supply of donor organs. Let us first briefly discuss the policy reality and then the classic utilitarian approach.

The policy reality is murky. Australia has guidelines on this that advocate taking various factors into account, including the expected benefit to the organ recipient (relevant to the utilitarian) but also the time spent on the waiting list (not so relevant). Because organs deteriorate quickly once removed, many incidental factors also matter, such as which potential recipient answers the phone (relevant to a utilitarian). In terms of priorities though, the guidelines supposedly take no account of “race, religion, gender, social status, disability or age – unless age is relevant to the organ matching criteria.” To the utilitarian this form of equity is in fact inequity: the utilitarian does not care who receives an extra year of happy life, but by caring about the total number of additional happy years, the utilitarian would use any information that predicts those additional happy years, including race and gender.

In other countries, the practices vary. In some countries the allocation is more or less on the basis of expected benefit, and in others it is all about ‘medical criteria’, which in reality include the possibility that donor organs go to people with a high probability of a successful transplant but a very low number of expected additional years. Some countries leave the decision entirely up to individual doctors and hospitals, placing huge discretion in the hands of an individual doctor, which raises the fear that their allocations are not purely on the grounds of societal gain.

What would the classic utilitarian do? Allocate organs where they yield the highest expected number of additional happy life years. This involves a judgement on who is going to live long and who is going to live happily. Such things are not knowable with certainty, so a utilitarian would turn to statistical predictors of both, using whatever indicator could be administered.

As to length of life, we generally know that rich young women have the highest life expectancy. And amongst rich young women in the West, white/Asian rich young women live even longer. According to some studies in the US, the difference with other ethnic groups (Black) can be up to 10 years (see the research links in this wikipedia page on the issue). As to who is happy, again the general finding is that rich women are amongst the happiest groups. Hence the classic utilitarian would want to allocate the organs to rich white/Asian young women.

I should note that the classic utilitarian would thus have no qualms about ending up with a policy that violates the anti-discrimination laws of many societies. Our societies shy away from using such broad observable characteristics as information on which to base allocations, which implicitly means that the years of life of some groups are weighed higher than the years of life of others. The example thus points to a real tension between, on the one hand, classic utilitarianism and its acceptance of statistical discrimination on the basis of gender and perceived ethnicity and, on the other hand, the dominant moral positions within our society. Again, I have no wish to say which one is ‘right’ but merely note the discrepancy. As to myself, I have no problem with the idea that priority in donor organs should be given to young women, though I also see a utilitarian argument for a bit of positive discrimination in the form of a blind eye to ethnicity (ie, there is utilitarian value in maintaining the idea that allocations should not be made on the basis of perceived ethnicity, even though in this case that comes at a clear loss of expected life years).
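The classic utilitarian allocation rule can be made concrete with a small sketch (the predictor and all figures below are invented for illustration, not taken from any guideline or study):

```python
# Hypothetical classic-utilitarian allocator: rank candidates by the
# statistically predicted number of additional happy life years and
# give the scarce organ to the top-ranked candidate.

def predicted_happy_life_years(candidate):
    # Stand-in for a fitted statistical model combining survival and
    # happiness predictors; here simply a stored estimate.
    return candidate["expected_extra_happy_years"]

def allocate_organ(candidates):
    """Allocate to whoever is predicted to gain the most happy years."""
    return max(candidates, key=predicted_happy_life_years)

candidates = [
    {"name": "A", "expected_extra_happy_years": 12.0},
    {"name": "B", "expected_extra_happy_years": 31.5},
    {"name": "C", "expected_extra_happy_years": 24.0},
]
print(allocate_organ(candidates)["name"])  # B
```

Any characteristic correlated with future happy life years would feed into such a predictor, which is exactly where the tension with anti-discrimination norms arises.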

The second question surrounded the willingness to pre-emptively kill off threats to the lives of others.

The policy reality here is, again, murky. In order to get a conviction on the basis of ‘attempted’ acts of terrorism or murder, the police would have to have pretty strong evidence of a high probability that the acts were truly going to happen. A 1-in-a-million chance of perpetrating an act that would cost a million lives would certainly not be enough. Likely, not even a 10% chance would be enough, even though the expected costs of a 10% chance would be 100,000 lives, far outweighing the life of the one person (and I know that the example is somewhat artificial!).

When it concerns things like the drone-program of the west though, under which the US, with help from its allies (including Australia), kills off potential terrorist threats and accepts the possibility of collateral damage, the implicit accepted burden of proof seems much lower. I am not saying this as a form of endorsement, but simply stating what seems to go on. Given the lack of public scrutiny it is really hard to know just how much lower the burden of proof is and where in fact the information is coming from to identify targets, but being a member of a declared terrorist organisation seems to be enough cause, even if the person involved hasn’t yet harmed anybody. Now, it is easy to be holier-than-thou and dismissive about this kind of program, but the reality is that this program is supported by our populations: the major political parties go along with this, both in the US and here (we are not abandoning our strategic alliance over it with the Americans, are we, nor denying them airspace?), implying that the drone program happens, de facto, with our society’s blessing, even if some of us as individuals have mixed feelings about it. So the drone program is a form of pre-emptively killing off potential enemies because of a perceived probability of harm. The cut-off point on the probability is not known, but it is clearly lower than used in criminal cases inside our countries.

To the classic utilitarian, if all one knew were the odds of damage and the extent of damage, then the utilitarian would want to kill off anyone who represented a net expected loss. Hence the classic utilitarian would indeed accept any odds just above 1 in a million when the threat is to a million lives: the life of the potential terrorist is worth the expected costs of his possible actions (which is one life). If one starts to include the notion that our societies derive benefit from the social norm that strong proof of intended harm is needed before killing anyone, then even the classic utilitarian would increase the threshold odds to reflect the disutility of being seen to harm those social norms, though the classic utilitarian would quickly reduce the thresholds if there were many threats and hence the usefulness of the social norm became less and less relevant. To some extent, this is exactly how our society functions: in a state of emergency or war, the burden of proof required to shoot a potential enemy drastically reduces as the regular rule of law and ‘innocent till proven guilty’ norms give way to a more radical ‘shoot now, agonize later’ mentality. If you like, we have recognised mechanisms for ridding ourselves of the social norm of a high burden of proof when the occasion calls for it.

As to personally pulling the trigger, the question to a utilitarian becomes entirely one of selfishness versus the public good and thus dependent on the personal pain of the person who would have to pull the trigger. To the utilitarian person who is completely selfless but who experiences great personal pain from pulling the trigger, the threshold probability becomes 2 in a million (ie, his own life and that of the potential terrorist), but to a more selfish person the threshold could rise very high such that even with certainty the person is not willing to kill someone else to save a million others. That might be noble under some moral codes, but to a utilitarian it would represent extreme selfishness.
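The threshold arithmetic in the two paragraphs above can be sketched as follows (a minimal illustration with invented function names; the ‘cost of acting’ covers whichever lives are lost by pulling the trigger):

```python
# Classic-utilitarian rule for pre-emptive killing: act when the
# expected lives lost from inaction exceed the lives lost by acting.

def expected_lives_lost(probability, lives_at_risk):
    """Expected deaths if the potential attacker is left alone."""
    return probability * lives_at_risk

def should_act(probability, lives_at_risk, cost_of_acting):
    """Act iff expected harm strictly exceeds the cost of acting."""
    return expected_lives_lost(probability, lives_at_risk) > cost_of_acting

# Threat to a million lives: the impersonal threshold sits just above
# 1 in a million (cost of acting = the attacker's one life)...
print(should_act(2e-6, 1_000_000, cost_of_acting=1))  # past the threshold
print(should_act(1e-6, 1_000_000, cost_of_acting=1))  # not past it

# ...while the selfless trigger-puller who also loses his own life
# needs odds above 2 in a million (cost of acting = 2 lives).
print(should_act(2e-6, 1_000_000, cost_of_acting=2))  # exactly at it, so no
```

A selfish trigger-puller simply carries a much larger `cost_of_acting`, which is how the threshold can rise beyond any probability at all.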

So the example once again shows the gulf between how our societies normally function when it concerns small probabilities of large damages, and what the classic utilitarian would do. A utilitarian is happy to act on small probabilities, though of course eager to purchase more information if the possibility is there. Our societies are less trigger-happy: only when there is actual experienced turmoil and damage do our societies gradually shift to a cost-benefit frame of mind and suspend other social norms. A classic utilitarian is thus much more pro-active and willing to act on imperfect information than is normal in our societies.

The third question was about divulging information that would cause hurt but that did not lead to changes in outcomes. In the case of the hypothetical, the information was about the treatment of pets. To the classic utilitarian, this one is easy: information itself is not a final outcome and, since the hypothetical was set up in that way, the choice was between a lower state of utility with more information, versus a higher state of utility with less information. The classic utilitarian would choose the higher utility and not make the information available.

The policy reality in this case is debatable. One might argue that the hypothetical, ie that more information would not lead to changes but merely to hurt, is so unrealistic that it basically does not resemble any real policies. Some commentators made that argument, saying they essentially had no idea what I was asking, and I am sympathetic to it.

The closest one comes to the hypothetical is the phenomenon of general flattery, such as where populations tell themselves they are god’s chosen people with a divine mission, or where whole populations buy into the idea that no-one is to blame for their individual bad choices (like their smoking choices). One might see the widespread phenomenon of keeping quiet when others are enjoying flattery as a form of suppressing information that merely hurts and would have no effect. Hence one could say that ‘good manners’ and ‘tact’ are in essence about keeping information hidden that hurts others. Personally, though I hate condoning the suppression of truth for any cause, I have to concede the utilitarian case for it.

The fourth and final question is perhaps the most glaring example of a difference between policy reality and classic utilitarianism, as it is about the distinction between an identified saved life and a statistically saved life. As one commenter already noted (Ken), politicians find it expedient to go for the identified life rather than the un-identified statistical life, and this relates to the lack of reflection amongst the population.

To the classic utilitarian, it should not matter whose life is saved: all saved lives are to the classic utilitarian ‘statistical’. Indeed, it is a key part of utilitarianism that there is no innate superiority of this person over that one. Hence, the classic utilitarian would value an identified life equally to a statistical one and would thus be willing to pour the same resources into preventing the loss of a life (via inoculations, safe road construction, etc.) as into saving a particular known individual.

The policy practice is miles apart from classic utilitarianism, not just in Australia but throughout the Western world. For statistical lives, the Australian government more or less uses the rule of thumb that it is willing to spend some 50,000 dollars per additional happy year. This is roughly the cut-off point for new medicines onto the Pharmaceutical Benefits Scheme. It is also pretty much the cut-off point in other Western countries for medicines (as a rule of thumb, governments are willing to pay about a median income for another year of happy life of one of their citizens).

For identified lives, the willingness to pay is easily ten times this amount. Australia thus has a ‘Life Saving Drugs’ program for rare life-threatening conditions. This includes diseases like Gaucher disease, Fabry disease, and Pompe disease. Openly available estimates of the implied cost of a life vary and it is hard to track down the exact prices, but each year of treatment for a Pompe patient was said, at a Canadian conference for instance, to cost about 500,000 dollars. In New Zealand, the same figure of 500,000 is cited in the media. Here in Australia, the treatment involved became available in 2008 and I understand it indeed costs about 500,000 per patient per year. There will be around 500 patients born with Pompe on this program in Australia (inferred from the prevalence statistics). Note that this treatment does not in fact mean the difference between life and death: rather, it means the difference between a shorter life and a longer one. Hence the cost per year of life saved is actually quite a bit higher than 500,000 for this disease.

What does this mean? It means, quite simply, that instead of saving one person with Pompe disease, one could save at least 10 others. In order for the person born with Pompe to live, 10 others in his society die. It is a brutal reality that is difficult to talk about, but that does not change the reality. Why is the price so high? Because the pharmaceutical companies can successfully bargain with governments for an extremely high price on these visible lives saved. They hold politicians to ransom over it, successfully in the case of Australia.
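The factor-of-ten claim is simple division on the figures quoted above (a back-of-the-envelope sketch; the variable names are mine):

```python
# Implied price of a happy life year under the two regimes (rough
# Australian-dollar figures from the post).

statistical_cost_per_year = 50_000   # rule-of-thumb PBS cut-off
identified_cost_per_year = 500_000   # reported Pompe treatment cost

# Each treatment year bought for one identified patient forgoes this
# many statistical life years at the PBS threshold:
forgone_statistical_years = identified_cost_per_year / statistical_cost_per_year
print(forgone_statistical_years)  # 10.0
```

And since the treatment extends rather than fully restores life, the true ratio is higher still.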

Saving one identified life rather than ten unidentified ones is not merely non-utilitarian. It also vastly distorts incentives. It pushes researchers and pharmaceutical companies away from finding solutions to the illnesses suffered by the anonymous many and towards finding improvements in the lives of the identifiable few. It creates incentives to find distinctions between patients so that new ‘small niches’ of identified patients can be carved out from which to make a lot of money. Why bother trying to find cures for malaria and cancer when it is so much more lucrative to find a drug that saves a small but identifiable fraction of the population of a rich country?

So kudos to those willing to say they would go for the institution that saved the most lives. I agree with you, but your society, as witnessed by its actions, does not yet agree, which opens up the question of what can be done to decide such matters more rationally.

Thanks to everyone who participated in the quiz and merry X-mas!

4 Comments
desipis
10 years ago

but also the time spent on the waiting list (not so relevant).

However, spending long times on the waiting list may cause suffering and unhappiness. Minimising suffering is also something that can be approached in a utilitarian fashion. In fact if we take your definition of classic utilitarianism, it’s something we must take into account:

…classic utilitarianism is about getting the highest number of total happy life years.

The question becomes how to balance the harm to ‘total happy life years’ from waiting on the list for a long time against the benefit from maximising raw life years by selecting a better person. Deciding how to weight those factors is likely subjective and hence falls outside the scope of a utilitarian approach; however, the weighting will still affect the outcome of a broader utilitarian decision. You can’t just decide to make the suffering factor zero because it makes the utilitarian decision simpler.

Patrick Caldon
10 years ago

Other questions:

Suppose there is a trolley, heading down some tracks, and you’re standing next to a switch which can direct the trolley down the right hand track. On the right hand track is a utilitarian who can optimally calculate how many citizens ought to die in exchange for other citizens’ lives. He’s the world’s foremost Professor of Utilitarianism and the best person out there at being utilitarian, uniquely qualified to make this calculation. On the left hand track are five ordinary citizens without this talent for utilitarian calculus. You can push the trolley on the left or right hand track – either kill the utilitarian or the five ordinary citizens. But you have to choose one way or the other.

1) If you were an ordinary citizen and not particularly talented at utilitarian calculus, how do you throw the switch?

2) Suppose you’re the world’s second best utilitarian. How do you throw the switch in this case?

Patrick Caldon
10 years ago

Ooops – you can direct the trolley down the right hand track or the left hand track.

Gummo Trotsky
10 years ago

Hence the classic utilitarian would indeed accept any odds just above 1 in a million when the threat is to a million lives: the life of the potential terrorist is worth the expected costs of his possible actions (which is one life).

This just doesn’t stack up. Especially given your answer to the question of allocating transplants, which is determined by the prospect/likelihood that an organ transplant will result in the maximum number of additional happy lives.

Below the probability of 1 in 500,000 (where the statistically predicted number of deaths from the potential terrorist attack is two) there is no significant gain – in probabilistic terms – in killing the suspected terrorist. With rounding up the threshold probability is 1 in 750,000. Maybe.

Maybe, because we’re assuming equal life expectancies in the suspected terrorist and his target population. But suppose the suspected terrorist’s future life expectancy is 60 years, while that of his target population is only 30 years. Then, even at a probability of success of 1 in 500,000 you gain nothing, in terms of life years, from killing the terrorist. Although there’s an expectation that two people will die from his potential attack you’ve gained nothing in terms of life years.

In terms of happy life years you may well be worse off: the potential terrorist is entitled, in your utilitarian calculation of happiness, to a self-righteous gloating bonus of happiness if he actually kills any infidels. That needs to be offset with a self-loathing, oh-what-a-fool-was-I guilt penalty, but if the former outweighs the latter, the utilitarian conclusion must be: leave the potential terrorist alone.
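The commenter’s life-years arithmetic in the paragraphs above can be restated in a short sketch (my framing of his numbers, not his own calculation):

```python
# Net life years saved by killing the suspect: expected life years
# lost to the attack minus the suspect's own remaining years.

def net_life_years_saved(p_attack, population, victim_years, suspect_years):
    expected_victim_years_lost = p_attack * population * victim_years
    return expected_victim_years_lost - suspect_years

# Suspect expects 60 more years, each potential victim only 30; at a
# 1-in-500,000 probability against a million people (two expected
# deaths) the gain in life years is exactly zero.
print(net_life_years_saved(1 / 500_000, 1_000_000, 30, 60))
```

On this framing the kill-or-not threshold depends on the relative life expectancies, not just on the head count, which is the commenter’s point.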