Ethics Committees

The excesses of ethics committees are a pet hate of mine, but I'd always thought that, for instance, Stanley Milgram's obedience experiment was an example of the kind of experiment where genuine ethical issues arose that might justify not going ahead.

But now I read on Wikipedia that:

In Milgram’s defense, 84 percent of former participants surveyed later said they were “glad” or “very glad” to have participated, 15 percent chose neutral responses (92% of all former participants responding). Many later wrote expressing thanks. Milgram repeatedly received offers of assistance and requests to join his staff from former participants. Six years later (at the height of the Vietnam War), one of the participants in the experiment sent correspondence to Milgram, explaining why he was glad to have participated despite the stress:

While I was a subject in 1964, though I believed that I was hurting someone, I was totally unaware of why I was doing so. Few people ever realize when they are acting according to their own beliefs and when they are meekly submitting to authority… To permit myself to be drafted with the understanding that I am submitting to authority’s demand to do something very wrong would make me frightened of myself… I am fully prepared to go to jail if I am not granted Conscientious Objector status. Indeed, it is the only course I could take to be faithful to what I believe. My only hope is that members of my board act equally according to their conscience…

Milgram argued that the ethical criticism provoked by his experiments was because his findings were disturbing and revealed unwelcome truths about human nature.

7 Comments
conrad
10 years ago

I doubt the good/bad trade-off is the main thing most workplaces truly care about, even if they pretend to make decisions in part based on it. Let’s do the very simple maths here. Unless we have a rounding error, 84 + 15 = 99. That leaves 1% of people to take court action against you or your workplace.

Paul frijters
10 years ago

I am a fan of the Milgram experiments too, but Conrad is right to point to the incentives of the hierarchy: the incentives of ethics committees and those at the top of hierarchies are skewed. They get part of the blame for complaints, but little of the glory. So they behave in a risk-averse way. Indeed, I believe they act in contravention of human rights acts and legislation on freedom of speech in that they put non-legal barriers in the way of both doing research and disseminating it. They are effectively pseudo-legal and use that power to minimise the risks to themselves. Entirely understandable from the point of view of their own interests, but detrimental to social science and national debate. The obvious solution would be to have no ethics committees in social science and instead have clearer national laws as to what goes and what does not.

paul frijters
10 years ago
Reply to  Nicholas Gruen

Perhaps one should think of a national committee as an alternative to local ones. The problem with local ones is that the potential benefits of the information generated by the research are national (or even international) whilst the potential risks of litigation are local. One wants the trade-off to be made at the level where these spillovers get internalised. National laws are then one option, but perhaps a single national ethics committee for social science is another?

conrad
10 years ago
Reply to  paul frijters

The guidelines are already set by a single body nationally (see here), so I don’t really see how a national committee will solve anything. There is already some really stupid stuff in there, even with a great committee of experts thinking up the rules, like not allowing blanket ethics approval for permutations of entirely harmless procedures. In your area, for example, you couldn’t (or at least shouldn’t be able to, given the rules) get blanket approval to run as many experiments as you wanted over time looking at attitudes to happiness, no matter how utterly harmless it is. You also can’t get approval to get rid of the stupid crap you need to put on everything you do about what to do if you are harmed by the experiment, which counsellor to see, the fact the questions might be illegal if you are accessing them outside Australia, etc. (say, for example, when running a questionnaire designed to see if pets make people happier).

So in this respect, local rules might be better, because you could have a committee which understands what individuals do (e.g., Paul does harmless experiments looking at happiness; Conrad does harmless experiments looking at language; etc.) and they could just give you blanket coverage for anything within reasonably large bounds without having to put up with national guidelines.

Paul frijters
10 years ago
Reply to  paul frijters

Depending on who is on these local committees, they might indeed have local expertise in the relevant area (though in practice this is often not true; often the majority of members are not even academics). But even then, the weighing of risks and benefits locally is hampered by these skewed incentives. So you want the weighing of those risks and benefits to occur at the smallest level of aggregation where the majority of those benefits and risks are captured. As knowledge is a clear public good, that would be national. The issue of expertise is then important for the design of how the national committee (which is going to be rather busy, probably having a fair few subcommittees to cater for a whole country) functions.

paul walter
10 years ago

Intriguing, both with that subtle thread starter, and with the comments section.
Significantly, am not commenting further, but oh yes much of it makes sense and I want to savour and absorb before ruining a set of good comments from others in that section… (buzz) Eeeeooouuuchhhh!!!!!!!!!