Morality of the herd?

There are few things we enjoy more here at Club Troppo than a good rant about morality and values. Some even think we’re a bit precious about it. Anyway, I was mightily pleased to see bipartisan agreement between The Bomber and The Rodent about the desirability of making immigrants (and maybe even tourists) sign a pledge in blood to subscribe to Australian values.

Of course, although both of them (and presumably most of the rest of us) agree that treating non burka-clad women as slabs of meat in a butcher’s shop is un-Australian, neither was terribly specific about exactly what Australian values actually are, nor about how we should work out how universally they’re held, still less how they’re formed.

That’s why I was even more pleased the other day when I stumbled across this draft paper (MS Word document) by moral psychologists Jonathan Haidt and Fredrik Bjorklund. Not only do Haidt and Bjorklund summarise in some depth the current state of research about moral behaviour in the cognitive sciences, but they also develop their own relatively new theory, the Social Intuitionist Model, first presented in “The Emotional Dog and its Rational Tail” (2001).

Theoretical approaches to moral psychology (and to a considerable  extent moral philosophy) can be broadly grouped into empiricist, rationalist and intuitionist/moral sense  approaches.  

Utilitarianism in its various guises is an example of an empiricist approach, as is behaviourist psychology (e.g. Pavlov, Skinner)  with its claims that moral (and general) behaviour is almost entirely malleable and shaped by social/cultural conditioning.    Adam Smith’s Theory of Moral Sentiments is also predominantly an example of an empiricist approach to moral behaviour, with much in common with modern behaviourism, although grounded in an innate intuition of “sympathy”.

Modern exponents of rationalist moral psychology  include Piaget and Kohlberg.  

Strangely enough, arguably the father of the intuitionist/moral sense  approach was that archetypal empiricist (at least for non-moral purposes) David Hume:

There has been a controversy started of late … concerning the general foundation of Morals; whether they be derived from Reason, or from Sentiment; whether we attain the knowledge of them by a chain of argument and induction, or by an immediate feeling and finer internal sense; whether, like all sound judgments of truth and falsehood, they should be the same to every rational intelligent being; or whether, like the perception of beauty and deformity, they be founded entirely on the particular fabric and constitution of the human species. (Enquiry Concerning the Principles of Morals, 1960/1777, p.2)

Kant’s moral philosophy includes elements of empiricism, rationalism and intuitionism, as do the approaches of 20th century and contemporary Kantian-influenced moral theorists like WD Ross, John Rawls and Ronald Dworkin.

Haidt and Bjorklund are decisively in the intuitionist camp, though  basing  their arguments very much on empirical research in cognitive psychology (including their own research).   Their theory certainly seems (at least to this non-expert)  solidly grounded in current cognitive science research.   They claim that human moral behaviour emanates from a set of moral intuitions that are hard-wired into the brain and therefore identifiable across all human cultures, albeit that their precise shape is strongly influenced and moulded by social and cultural factors during a child’s development.   They also argue that the initial moral flash of intuition that precedes every individual moral “decision” may be modified by social factors at the time.   However, that social influence is anything but a process of intellectual reasoning in the vast majority of cases.   The process is  little more than the outworking of our desire to fit our moral decision-making into a consensus of the community or peer group of which we  see ourselves  as part: the morality of the herd.

The vast majority of what passes for moral “reasoning” is in reality no more than post hoc justification of decisions actually already reached on an intuitive basis, a conclusion that doesn’t look promising for idealistic political theories like Habermas’s concepts of “communicative rationality” and “deliberative democracy”. That won’t come as a huge surprise to readers of political blogs, a domain where (like political discourse generally) bloggers and commenters mostly just shout past each other (however civilly) from entrenched, predetermined positions.

According to Haidt and Bjorklund, the brain’s  intuitive “innate moral modules” are also  eminently susceptible to triggering and therefore manipulation by the way in which a moral (or political) issue is “framed” (ref Tversky and Kahneman in economics, Sunstein from a more specifically moral perspective), a proposition that also won’t come as a surprise to marketing gurus or political spin doctors.

Haidt and Bjorklund identify 5 specific “innate moral modules” in the human brain:

  • Harm (a sensitivity to or dislike of signs of pain and suffering in others);
  • Reciprocity (a set of emotional responses related to playing tit-for-tat, such as negative responses to those who fail to repay favors);
  • Hierarchy (a set of concerns about navigating status hierarchies, for example anger towards those who fail to display proper signs of deference and respect);
  • Concerns about purity (related to the emotion of disgust, necessary for explaining why so many moral rules relate to food, sex, menstruation, and the handling of corpses); and
  • Concerns about boundaries between ingroup and outgroup.

Fairly clearly, the Social Intuitionist Model, even if only partly correct, has enormous relevance for a wide range of fields, including law and politics. As Haidt and Bjorklund themselves observe:

If the  Social Intuitionist Model  is right and moral reasoning is usually post-hoc rationalization, then moral philosophers who think they are reasoning their way impartially to conclusions may often be incorrect. Even if philosophers are better than most people at reasoning, a moment’s reflection by practicing philosophers should bring to mind many cases where another philosopher was clearly motivated to reach a conclusion, and was just being clever in making up reasons to support her already-made-up mind. A further moment of reflection should point out the hypocrisy in assuming that it is only other philosophers who do this, not oneself. The practice of moral philosophy may be improved by an explicit acknowledgment of the difficulties and biases involved in moral reasoning.

I’ve copied and pasted a condensed version of Haidt and Bjorklund’s paper below for time-challenged readers (although  it would still print out to 10 pages or so):

———————————————————

When God began to recede from scientific explanations in the 16th century, some philosophers began to wonder if God was really needed to explain morality either. In the 17th and 18th centuries, English and Scottish philosophers such as the third Earl of Shaftesbury, Francis Hutcheson, and Adam Smith surveyed human nature and declared that people are innately sociable, and that they are both benevolent and selfish. However, it was David Hume who worked out the details and implications of this approach most fully:

There has been a controversy started of late … concerning the general foundation of Morals; whether they be derived from Reason, or from Sentiment; whether we attain the knowledge of them by a chain of argument and induction, or by an immediate feeling and finer internal sense; whether, like all sound judgments of truth and falsehood, they should be the same to every rational intelligent being; or whether, like the perception of beauty and deformity, they be founded entirely on the particular fabric and constitution of the human species. (Enquiry Concerning the Principles of Morals, 1960/1777, p.2)

We added the italics above to show which side Hume was on. This passage is extraordinary for two reasons. First, it is a succinct answer to Question 1: Where do moral beliefs and motivations come from? They come from sentiments which give us an immediate feeling of right or wrong, and which are built into the fabric of human nature. Hume’s answer to Question 1 is our answer too, and much of the rest of our essay is an elaboration of this statement, using evidence and theories that Hume did not have available to him. But this statement is also extraordinary as a statement about the controversy “started of late.” Hume’s statement is just as true in 2005 as it was in 1776.

There really is a controversy started of late (in the 1980s), a controversy between rationalist approaches (based on Piaget and Kohlberg) and moral sense or intuitionist theories (e.g., Kagan, 1984; Frank, 1988; Haidt, 2001; Shweder & Haidt, 1993; J. Q. Wilson, 1993). We will not try to be fair and unbiased guides to this debate (indeed, our theory says you should not believe us if we tried to be). Instead, we will make the case for a moral sense approach to morality, based on a small set of innately prepared, affectively valenced moral intuitions. We will contrast this approach to a rationalist approach, and we will refer the reader to other views when we discuss limitations of our approach. The contrast is not as stark as it seems: the Social Intuitionist Model includes reasoning at several points, and rationalist approaches often assume some innate moral knowledge, but there is a big difference in emphasis. Rationalists say the real action is in reasoning; intuitionists say it’s in quick intuitions, gut feelings and moral emotions. …

The conclusion at the end of this string is that the human mind is always evaluating, always judging everything it sees and hears along a “good-bad” dimension (see Kahneman, 1999). It doesn’t matter whether we are looking at men’s faces, lists of appetizers, or Turkish words; the brain has a kind of gauge (sometimes called a “like-ometer”) that is constantly moving back and forth, and these movements, these quick judgments, influence whatever comes next. The most dramatic demonstration of the like-ometer in action is the recent finding that people are slightly more likely than chance to marry others whose first name shares its initial letter with their own; they are more likely to move to cities and states that resemble their names (Phil moves to Philadelphia; Louise to Louisiana); and they are more likely to choose careers that resemble their names (Dennis finds dentistry more appealing; Lawrence is drawn to law; Pelham, Mirenberg, & Jones, 2002). Quick flashes of pleasure, caused by similarity to the self, make some options “just feel right.”
This perspective on the inescapably affective mind is the foundation of the social intuitionist model (SIM), presented in Figure 1 (from Haidt, 2001). The model is composed of 6 links, or psychological processes, which describe the relationships among an initial intuition of good versus bad, a conscious moral judgment, and conscious moral reasoning. The first four links are the core of the model, intended to capture the great majority of judgments for most people.

Link 1: The Intuitive Judgment Link

The SIM is founded on the idea that moral judgment is a ubiquitous product of the ever-evaluating mind. Like aesthetic judgments, moral judgments are made quickly, effortlessly, and intuitively. We see an act of violence, or hear about an act of gratitude, and we experience an instant flash of evaluation, which may be as hard to explain as the affective response to a face or a painting. That’s the intuition. Moral intuition is defined as: the sudden appearance in consciousness, or at the fringe of consciousness, of an evaluative feeling (like-dislike, good-bad) about the character or actions of a person, without any conscious awareness of having gone through steps of search, weighing evidence, or inferring a conclusion (modified from Haidt, 2001, p.818). This is the “finer internal sense” that Hume talked about. In most cases this flash of feeling will lead directly to the conscious condemnation (or praise) of the person in question, often including verbal thoughts such as “what a bastard” or “wow, I can’t believe she’s doing this for me!” This conscious experience of praise or blame, including a belief in the rightness or wrongness of the act, is the moral judgment. Link 1 is the tight connection between flashes of intuition and conscious moral judgments. However, this progression is not inevitable: often a person has a flash of negative feeling, for example toward stigmatized groups (easily demonstrated through implicit measurement techniques such as the Implicit Association Test; Greenwald, McGhee, & Schwartz, 1998), yet because of one’s other values, one resists or blocks the normal tendency to progress from intuition to consciously endorsed judgment.
These flashes of intuition are not dumb; as with the superb mental software that runs visual perception, they often hide a great deal of sophisticated processing occurring behind the scenes. Daniel Kahneman, one of the leading researchers of decision making, puts it this way:  

We become aware only of a single solution — this is a fundamental rule in perceptual processing. All other solutions that might have been considered by the system — and sometimes we know that alternative solutions have been considered and rejected — we do not become aware of. So consciousness is at the level of a choice that has already been made. (Kahneman, 2004, p.26)

Even if moral judgments are made intuitively, however, we often feel a need to justify them with reasons, much more so than we do for our aesthetic judgments. What is the relationship between the reasons we give and the judgments we reach?

Link 2: The Post-Hoc Reasoning Link

Studies of reasoning describe multiple steps, such as searching for relevant evidence, weighing evidence, coordinating evidence with theories, and reaching a decision (Kuhn, 1989; Nisbett & Ross, 1980). Some of these steps may be performed unconsciously, and any of the steps may be subject to biases and errors, but a key part of the definition of reasoning is that it has steps, at least two of which are performed consciously. Galotti (1989, p.333), in her definition of everyday reasoning, specifically excludes “any one-step mental processes” such as sudden flashes of insight, gut reactions, and other forms of “momentary intuitive response.” Building on Galotti (1989), moral reasoning can be defined as: conscious mental activity that consists of transforming given information about people in order to reach a moral judgment (Haidt, 2001, p.818). To say that moral reasoning is a conscious process means that the process is intentional, effortful, controllable, and that the reasoner is aware that it is going on (Bargh, 1994).
The SIM says that moral reasoning is an effortful process (as opposed to an automatic process), usually engaged in after a moral judgment is made, in which a person searches for arguments that will support an already-made judgment. This claim is consistent with Hume’s famous claim that reason is “the slave of the passions, and can pretend to no other office than to serve and obey them” (Hume, 1969/1739, p.462). Nisbett and Wilson (1977) demonstrated such post-hoc reasoning for causal explanations. When people are tricked into doing a variety of things, they readily make up stories to explain their actions, stories that can often be shown to be false. People often know more than they can tell, but when asked to introspect on their own mental processes people are quite happy to tell more than they can know, expertly crafting plausible-sounding explanations from a pool of cultural theories about why people generally do things (see Wilson [2002] on the limits of introspection).
 

Link 3: The Reasoned Persuasion Link

The glaring one-sidedness of everyday human reasoning is hard to understand if you think that the goal of reasoning is to reach correct conclusions, or to create accurate representations of the social world. However, many thinkers, particularly in evolutionary psychology, have argued that the driving force in the evolution of language was not the value of having an internal truth-discovering tool; it was the value of having a tool to help a person track the reputations of others, and to manipulate those others by enhancing one’s own reputation (Dunbar, 1996). People are able to re-use this tool for new purposes, including scientific or philosophical inquiry, but the fundamentally social origins of speech and internal verbal thought affect our other uses of language.
Links 3 and 4 are the social part of the social intuitionist model. People love to talk about moral questions and violations, and one of the main topics of gossip is the moral and personal failings of other people (Dunbar, 1996; Hom & Haidt, in prep.). In gossip people work out shared understandings of right and wrong, they strengthen relationships, and they engage in subtle or not-so-subtle acts of social influence to bolster the reputations of themselves and their friends (Hom & Haidt, in prep.; Wright, 1994). …

People strive to reach consensus on normative issues within their “parish,” that is, within the community they participate in. People who can do so can reap the benefits of coordination and cooperation. Moral discourse therefore serves an adaptive biological function, increasing the fitness of those who do it well.
Some evolutionary thinkers have taken this adaptive view to darker extremes. In an eerie survey of moral psychology, Robert Wright (1994, p.280) wrote:

The proposition here is that the human brain is a machine for winning arguments, a machine for convincing others that its owner is in the right — and thus a machine for convincing its owner of the same thing. The brain is like a good lawyer: given any set of interests to defend, it sets about convincing the world of their moral and logical worth, regardless of whether they in fact have any of either. Like a lawyer, the human brain wants victory, not truth.

This may offend you. You may feel the need to defend your brain’s honor. But the claim here is not that human beings can never think rationally, or that we are never open to new ideas. Lawyers can be very reasonable when they are off duty, and human minds can be too. The problem comes when we find ourselves firmly on one side of a question, either because we had an intuitive or emotional reaction to it, or because we have interests at stake. It is in those situations, which include most acts of moral judgment, that conscious verbal moral reasoning does what it may have been designed to do: argue for one side.
 

Link 4: The Social Persuasion Link

There are, however, means of persuasion that don’t involve giving reasons of any kind. The most dramatic studies in social psychology are the classic studies showing just how easily the power of the situation can make people do and say extraordinary things. Some of these studies show obedience without persuasion (e.g., Milgram’s [1963] “shock” experiments); some show conformity without persuasion (e.g., Asch’s [1956] line-length experiments). But many show persuasion. Particularly when there is ambiguity about what is happening, people look to others to help them interpret what is going on, and what they should think about what is going on. …

 

The only other ultrasocial mammals are the naked mole rats of East Africa, but they, like the bees and the ants, accomplish their ultrasociality by all being siblings and reaping the benefits of kin altruism. Only human beings cooperate widely and intensely with non-kin, and we do it in part through a set of social psychological adaptations that make us extremely sensitive to and influenceable by what other people think and feel. We have an intense need to belong and to fit in (Baumeister & Leary, 1995), and our moral judgments are strongly shaped by what others in our “parish” believe, even when they don’t give us any reasons for their beliefs.

…

Question 4: What Exactly Are The Intuitions?

If we want to rebuild moral psychology on an intuitionist foundation, we had better have a lot more to say about what intuitions are, and about why people have the particular intuitions they have. We look to evolution to answer these questions. One could perfectly well be an empiricist intuitionist: one might believe that children simply develop whatever intuitions or reactions for which they are reinforced; or one might believe that children have a general tendency to take on whatever values they see in their parents, their peers, or the media. Of course social influence is important, and the social links of the SIM are intended to capture such processes. However, we see two strong arguments against a fully empiricist approach in which intuitions are entirely learned. The first, pointed out by Tooby, Cosmides, & Barrett (in press), is that children routinely resist parental efforts to get them to care about, value, or desire things. It is just not very easy to shape children, unless one is going with the flow of what they already like. It takes little or no work to get 8-year-old children to prefer candy to broccoli, to prefer being liked by their peers to being approved of by adults, or to prefer hitting back to loving their enemies. Socializing the reverse preferences would be difficult or impossible. The resistance of children to arbitrary or unusual socialization has been the downfall of many utopian efforts. Even if a leader can select a group of unusual adults able to believe in universal love while opposing all forms of hatred and jealousy, nobody has ever been able to raise the next generation of children to take on such unnatural beliefs.
The second argument is that despite the obvious cultural variability of norms and practices, there is a small set of moral intuitions that is easily found in all societies, and even across species. “¦

Might there be a small set of moral intuitions underlying the enormous diversity of moral “cuisines”? Just such an analogy was made by the Chinese philosopher Mencius 2400 years ago:

There is a common taste for flavor in our mouths, a common sense for sound in our ears, and a common sense of beauty in our eyes. Can it be that in our minds alone we are not alike? What is it that we have in common in our minds? It is the sense of principle and righteousness. The sage is the first to possess what is common in our minds. Therefore moral principles please our minds as beef and mutton and pork please our mouths. (Mencius, quoted in Chan, 1963, p. 56).

Elsewhere Mencius specifies that the roots, or common principles of human morality are to be found in moral feelings such as commiseration, shame, respect, and reverence (Chan, 1963, p. 54).
Haidt and Joseph (2004) set out to list these common principles a bit more systematically, reviewing five works that were rich in detail about moral systems. Two of the works were written to capture what is universal about human cultures: Donald Brown’s (1991) catalogue “Human Universals,” and Alan Fiske’s (1992) grand integrative theory of the four models of social relations. Two of the works were designed primarily to explain differences across cultures in morality: Schwartz and Bilsky’s (1990) widely used theory of 15 values, and Richard Shweder’s theory of the “big 3” moral ethics: autonomy, community, and divinity (Shweder et al., 1997). The fifth work was Frans de Waal’s (1996) survey of the roots or precursors of morality in other animals, primarily chimpanzees, Good Natured. We (Haidt & Joseph) simply listed all the cases where some aspect of the social world was said to trigger approval or disapproval; that is, we tried to list all the things that human beings and chimpanzees seem to value or react to in the behavior of others. We then tried to group the elements that were similar into a smaller number of categories, and finally we counted up the number of works (out of 5) that each element appeared in. The winners, showing up clearly in all five works, were harm (a sensitivity to or dislike of signs of pain and suffering in others), reciprocity (a set of emotional responses related to playing tit-for-tat, such as negative responses to those who fail to repay favors), and hierarchy (a set of concerns about navigating status hierarchies, for example anger towards those who fail to display proper signs of deference and respect). We believe these three issues are excellent candidates for being the “taste buds” of the moral domain. In fact, Mencius specifically included emotions related to harm (commiseration) and hierarchy (shame, respect, and reverence) as human universals.
We tried to see how much moral work these three sets of intuitions could do, and found that we could explain most but not nearly all of the moral virtues and concerns that are common in the world’s cultures. There were two additional sets of concerns that were widespread, but that had only been mentioned in three or four of the five works: concerns about purity (related to the emotion of disgust, necessary for explaining why so many moral rules relate to food, sex, menstruation, and the handling of corpses), and concerns about boundaries between ingroup and outgroup. Liberal moral theorists may dismiss these concerns as matters of social convention (for purity practices) or as matters of prejudice and exclusion (for ingroup concerns), but we believe that many or most cultures see matters of purity, chastity, in-group loyalty, and patriotism as legitimate parts of their moral domain.
We (Haidt, Joseph, and Bjorklund) believe these five sets of intuitions should be seen as the foundations of intuitive ethics. For each one, a clear evolutionary story can be told, and has already been told many times. We hope nobody will find it controversial to suppose that evolution has built in to humans (and to some extent chimpanzees, bonobos, and other social mammals) an emotional sensitivity to issues related to harm/suffering, reciprocity/fairness, hierarchy/social-order, and ingroup/outgroup. The only set of intuitions with no clear precursor in other animals is purity/pollution. But concerns with purity and pollution require the emotion of disgust and its cognitive component of contamination sensitivity, which only human beings older than the age of 7 have fully mastered (Rozin, Fallon, & Augustoni-Ziskind, 1985). We think it is quite sensible to suppose that most of the foundations of human morality are many millions of years old, but that some aspects of human morality have no precursors in other animals.

 

Now that we have identified five promising areas or clusters of intuition, how exactly are they encoded in the human mind? There are a great many ways to think about innateness. At the mildest extreme is a general notion of “preparedness,” the claim that animals are prepared (by evolution) to learn some associations more easily than others (Seligman, 1971). For example, rats can more easily learn to associate nausea with a new taste than with a new visual stimulus (Garcia & Koelling, 1966), and monkeys (and humans) can very quickly acquire a fear of snakes from watching another monkey (or human) reacting with fear to a snake, but it is very hard to acquire a fear of flowers by such social learning (Mineka & Cook, 1988). The existence of preparedness as a product of evolution is uncontroversial in psychology. Everyone accepts at least that much writing on the slate at birth. So the mildest version of our theory is that the human mind has been shaped by evolution so that children can very easily be taught or made to care about harm, reciprocity, hierarchy, ingroups, and purity; however, they have no innate moral knowledge, just a preparedness to acquire certain kinds of moral knowledge, and a resistance to acquiring other kinds (e.g., that all people should be loved and valued equally).
At the other extreme is the idea of the massively modular mind, championed by evolutionary psychologists such as Pinker (1997) and Cosmides and Tooby (1994). On this view the mind is like a Swiss army knife with many tools, each one an adaptation to “the long-enduring structure of the world.” If every generation of human beings faced the threat of disease from bacteria and parasites that spread by physical touch, minds that had a contamination-sensitivity module built in (i.e., feel disgust towards feces and rotting meat, and also towards anything that touches feces or rotting meat) were more likely to run bodies that went on to leave surviving offspring than minds that had to learn everything from scratch using only domain-general learning processes. As Pinker (2002, p. 192) writes, with characteristic flair: “The sweetness of fruit, the scariness of heights, and the vileness of carrion are fancies of a nervous system that evolved to react to those objects in adaptive ways.”
Modularity is controversial in cognitive science. Most psychologists accept Fodor’s (1983) claim that many aspects of perceptual and linguistic processing are the output of modules, which are informationally encapsulated special purpose processing mechanisms. Informational encapsulation means that the module works on its own proprietary inputs. Knowledge contained elsewhere in the mind will not affect the output of the module. For example, knowing that two lines are the same length in the Müller-Lyer illusion does not alter the percept that one line is longer. However, Fodor himself rejects the idea that much of higher cognition can be understood as the output of modules. On the other hand, Dan Sperber (1994) has pointed out that modules for higher cognition do not need to be as tightly modularized as Fodor’s perceptual modules. All we need to say is that higher cognitive processes are modularized “to some interesting degree,” that is, higher cognition is not one big domain-general cognitive workspace. There can be many bits of mental processing that are to some degree module-like. For example, quick, strong, and automatic rejection of anything that seems like incest suggests the output of an anti-incest module, or modular intuition (see the work of Debra Lieberman, this volume). Even when the experimenter explains that the brother and sister used two forms of birth control, and that the sister was adopted into the family at age 14, many people still say they have a gut feeling that it is wrong for the siblings to have consensual sex. The output of the module is not fully revisable by other knowledge, even though some people overrule their intuition and say, uneasily, that consensual adult sibling incest is OK.
We do not know what point on the continuum from simple preparedness to hard and discrete modularity is right, so we tentatively adopt Sperber’s intermediate position that there are a great many bits of mental processing that are modular “to some interesting degree.” (We see no reason to privilege the blank slate side of the continuum as the default or “conservative” side.) Each of our five foundations can be thought of as a module, or set of functionally related modules. We particularly like Sperber’s point that “because cognitive modules are each the result of a different phylogenetic history, there is no reason to expect them all to be built on the same general pattern and elegantly interconnected” (Sperber, 1994, p. 46). We are card-carrying anti-parsimonists. We believe that psychological theories should have the optimum amount of complexity, not the minimum that a theorist can get away with. The history of moral psychology is full of failed attempts to derive all of morality from a single source (e.g., non-contradiction, harm, empathy, or internalization). We think it makes more sense to look at morality as a set of multiple concerns about social life, each one with its own evolutionary history and psychological mechanism. There is not likely to be one unified moral module, or moral organ.

 

Question 5: How Does Morality Develop?

 

Once you see morality as grounded in a set of innate moral modules (Sperber modules, not Fodor modules), the next step is to explain how children develop the morality that is particular to their culture, and to themselves. The first of two main tools we need for an intuitionist theory of development is assisted externalization (see Fiske, 1991). The basic idea is that morality, like sexuality or language, is better described as emerging from the child (externalized) on a particular developmental schedule, rather than being placed into the child from outside (internalized) on society’s schedule. However, as with linguistic and sexual development, morality requires guidance and examples from the local culture to externalize and configure itself properly, and children actively seek out role models to guide their development. Each of the five moral modules matures at a different point in development: for example, two-year-olds are sensitive to suffering in people and animals (Zahn-Waxler & Radke-Yarrow, 1982), but they show few concerns for fairness and equal division of resources until some time after the third birthday (Haidt et al., in preparation a), and they do not have a full understanding of purity and contagion until around the age of 7 or 8 (Rozin, Fallon, & Augustoni-Ziskind, 1986). When their minds are ready, children will begin showing concerns about and emotional reactions to various patterns in their social world (e.g., suffering, injustice, moral contamination). These reactions will likely be crude and inappropriate at first, until they learn the application rules for their culture (e.g., share evenly with siblings, but not parents), and until they develop the wisdom and expertise to know how to resolve conflicts among intuitions. …

 

Philosophical Implications

 

The social intuitionist model draws heavily on the work of philosophers (Hume, Gibbard, Aristotle), and we think it can give back to philosophy as well. There is an increasing recognition among philosophers that there is no firewall between philosophy and psychology, and that philosophical work is often improved when it is based on psychologically realistic assumptions (Flanagan, 1991). The social intuitionist model is intended to be a statement of the most important facts about moral psychology. Here we list six implications that this model may have for moral philosophy. …

3) Monistic theories are likely to be wrong. If there are many independent sources of moral value (i.e., the five modules), then moral theories that value only one source and set to zero all others are likely to produce psychologically unrealistic systems that most people will reject. Traditional utilitarianism, for example, does an admirable job of maximizing moral goods derived from the suffering module. But it often runs afoul of moral goods derived from the reciprocity module (e.g., rights), to say nothing of its violations of the ingroup module (why treat outsiders as equal to insiders?), the hierarchy module (it respects no tradition or authority that demands anti-utilitarian practices), and the purity module (spiritual pollution is discounted as superstition). A Kantian or Rawlsian approach might do an admirable job of developing intuitions about fairness and justice, but each would violate many other virtues and ignore many other moral goods. An adequate normative ethical theory should be pluralistic, even if that introduces endless difficulties in reconciling conflicting sources of value. (Remember, we are antiparsimonists. We do not believe there is any particular honor in creating a one-principle moral system.) Of course, a broad enough consequentialism can acknowledge the plurality of sources of value within a particular culture, and then set about maximizing the total. Our approach may be useful to such consequentialists, who generally seem to focus on goods derived from the first two modules only (that is, the “liberal” modules of harm and reciprocity).

4) Relativistic and skeptical theories go too far. Meta-ethical moral relativists say that “there are no objectively sound procedures for justifying one moral code or one set of moral judgments as against another” (Nielsen, 1967). If relativism is taken as a claim that no one code can be proven superior to all others then it is correct, for given the variation in human minds and cultures there can be no one moral code that is right for all people, places, and times. A good moral theory should therefore be pluralistic in a second sense, in stating that there are multiple valid moral systems (Shweder & Haidt, 1993; Shweder et al., 1997). Relativists and skeptics often go further, however, and say that no one code can be judged superior to any other code, but we think this is wrong. If moral truths are anthropocentric truths, then moral systems can be judged on the degree to which they violate important moral truths held by members of that society. For example, the moral system of Southern White slave holders radically violated the values and wants of a large proportion of the people involved. The system was imposed by force, against the victims’ will. In contrast, many Muslim societies place women in roles that outrage some egalitarian Westerners, but that the great majority within the culture, including the majority of women, endorse. A well-formed moral system is one that is endorsed by the great majority of its members, even those who appear, from the outside, to be its victims. An additional test would be to see how robust the endorsement is. If Muslim women quickly reject their society when they learn of alternatives, the system is not well-formed. If they pity women in America or think that American ways are immoral, then their system is robust against the presentation of alternatives.

5) The methods of philosophical inquiry may be tainted. If the SIM is right and moral reasoning is usually post-hoc rationalization, then moral philosophers who think they are reasoning their way impartially to conclusions may often be incorrect. Even if philosophers are better than most people at reasoning, a moment’s reflection by practicing philosophers should bring to mind many cases where another philosopher was clearly motivated to reach a conclusion, and was just being clever in making up reasons to support her already-made-up mind. A further moment of reflection should point out the hypocrisy in assuming that it is only other philosophers who do this, not oneself. The practice of moral philosophy may be improved by an explicit acknowledgment of the difficulties and biases involved in moral reasoning. As Greene (this volume) has shown, flashes of emotion followed by post-hoc reasoning about rights may be the unrecognized basis of deontological approaches to moral philosophy.

About Ken Parish

Ken Parish is a legal academic, with research areas in public law (constitutional and administrative law), civil procedure and teaching & learning theory and practice. He has been a legal academic for almost 20 years. Before that he ran a legal practice in Darwin for 15 years and was a Member of the NT Legislative Assembly for almost 4 years in the early 1990s.

6 Responses to Morality of the herd?

  1. Rafe Champion says:

    Thanks Ken, you have done a great service in drawing our attention to this work and cutting and pasting so we can get the gist without reading the whole thing.

    Actually I don’t even have time to read all of your condensed version so will make do with some off the cuff comments which may (possibly) bear on the issues.

    (a) The idea of five (or some number of) sources of moral value is great; too much effort has been spent trying to justify some particular source of moral authority. The same thing happened in the theory of knowledge, where rival schools battled to establish the single touchstone of truth. The answer is that there are numerous sources of knowledge (intuition, tradition, evidence, reason) but none have authority. So let it be with moral values.

    (b) The philosophers have been pretty useless: who has gained anything from the arguments about the is/ought problem, once you have grasped that oughts cannot be derived from ‘is’ statements?

    The positivists did a lot of damage by declaring that moral statements are meaningless because they cannot be verified.

    The Marxists were equally damaging with the claim that moral values only reflect class interests.

    (c) The relativists and skeptics are wrong because all groups and communities operate with some kind of spoken or unspoken framework of values, and quite clearly some moral principles are robust in the sense of generating better outcomes for everyone. Who is going to knock the principles of honesty, courage, service, loyalty, compassion and a few others that most people would be prepared to nominate?

    Anyway, you have laid the foundation for a great discussion, especially for those of us who came home early instead of going to the pub.

  2. Thanks very much for this Ken,

    Your cuts and pastes are duly cut and pasted onto my ‘to read’ word file. (But when to read it, when today’s AFR Review looks so good?)

  3. meika says:

    goodo Ken, now how do we get our most prominent ideologues to read this type of thing, I mean if, say, Andrew Bolt read it, would it be like a bucket of water on the wicked witch of the west, would he just melt away?

    thinking aloud now, the next questions are: 1) why do we each differ in how we prioritise each source/module within our moral communities, despite our environment and despite our inheritance, i.e. what survival value does an idiosyncratic diversity confer on each of us, and 2)[built up from individual choices in 1] constrained as they are in part by 2)(yes that is recursive) why don’t moral frameworks disappear at the group level (even if they differ greatly in detail they do not disappear altogether despite moral panic merchants). (I am biased to intuit morality is robust you see, despite the baboons down the road stealing my three year old’s tricycle)(yes they are my out group but I don’t torch their houses even though I know exactly who they are).

    I propose it is the frictional differences that maintain the pool of intuitions/priorities, which then allows a choice of survival options that confers a survival advantage. The biases provide answers by maintaining current bad/weird fits that may be useful in the future (don’t eat pigs, the Egyptians said; why not, said the hairy barbarians up north).

    Winning itself does not confer social survival, whatever it does personally for the great men of history, but the competition does confer advantages for the group, unless it goes troppo like the differences in present day Iraq. Few moral systems that States have built up realise it is the competition that confers survival advantage, indeed in proto-would-be States of fundamentalism, and conservative hidebound autocratic old states specifically exclude diversity, they exclude that which gives them strength. They rule in spite of themselves.

    Of course, because it is unconscious at the levels of leadership in most societies, (an appreciative society would be conscious of this) then it is unconscious at the personal level, and individuals have to be conned into wanting to win at all costs, (join the Navy and see the world before you bomb it) or be shanghaied into it.

    Lordy mother, what a mess.

    Liberalism goes closest, but it’s built on toleration and not appreciation of difference. Fundamentalism is no doubt always with us, but it is in part a reflection of toleration. I wonder what horror we will have to live through to gain an appreciation of difference; if toleration of different beliefs and liberalism was born out of the wars of religion in Europe, what the hell is going to happen now with the current lack of leadership I see in anglophonie.

    We have a range of potential answers kept in circulation through ideological stoushes (hijabs are bad for you, though I think ties are far worse, very unhygienic) because we lack an organ for truth.

    I think Doktor Nietzsche said that first.

  4. Ken Parish says:

    meika

    Your question “1) why do we each differ in how we prioritise each source/module within our moral communities, despite our environment and despite our inheritance, i.e. what survival value does an idiosyncratic diversity confer on each of us … ?” is one that Haidt and Bjorklund answer at some length. I’ll copy what they say below. Your other questions/observations I’ll leave to a later comment (or maybe another post):

    Question 6: Why Do People Vary In Morality?

    If virtues are learned skills of social perception, reflection, and behavior, then the main question for an intuitionist approach to moral personality is to explain why people vary in their virtues. The beginning of the story must surely be innate temperament. The “first law of behavioral genetics”

  5. meika says:

    The “Big five”

  6. meika says:

    Bremmer’s target–
