We are born with morality

The moral gene: is the good innate in humans? Scientists say: yes

Carelessly pass by an injured, bloodied woman just because you don't want to stain your new suit? Cheat a good friend out of two thousand euros? Or kill a frail aunt to get her inheritance early? When confronted with such scenarios, most people think: “Hardly possible.” And that's a good thing. Because without moral scruples, it would be difficult for us to get along.

But what does this reluctance mean? Maybe it's just the internalized fear of punishment. Or an unconscious, self-serving calculation according to which humane behavior ultimately pays off. The belief that one ought to do good, then, could be little more than a subtle illusion of evolutionary history.

We may just be “survival machines”, programmed for the aimless reproduction of our genes, as the British evolutionary biologist Richard Dawkins put it in his 1976 classic “The Selfish Gene”. With his provocative view that even the altruism of the individual can be traced back to the egoism of his genes, he shaped the view of human nature for decades.

Meanwhile, a different wind is blowing through laboratories, classrooms and popular science books. Behavioral researchers, psychologists, animal experts and even mathematicians are presenting, with growing emphasis, a more optimistic version of the human condition: basically, humans are far better than their reputation.

Basic moral rules may already be embedded in us when we first see the light of day. Regardless of culture, creed or gender, there is a fundamental framework of norms common to all people. A special network in the brain keeps this ethical awareness running. Accordingly, people are not driven solely by the blind urge to assert themselves against competitors as the most capable, but also want to cooperate. In short: maxims like Thomas Hobbes' “homo homini lupus est” (man is a wolf to man) need not be the last word in wisdom.

Even our hairy cousins, the primates, have a sense of fairness and empathy, central components of human morality. But whether they think in terms of good and bad is doubtful. Humans, on the other hand, condemn the self-enrichment of corporate bosses as well as con artists, are appalled by the sexual abuse of children and reprimand notorious liars. The moral instinct that guides our sense of right and wrong is not shaped by the environment. So argues the cognitive psychologist Marc Hauser of Harvard University in Cambridge, USA. He claims to have discovered that morality is in our genes.

Moral constants. When Hauser was researching the human language instinct with the linguist Noam Chomsky five years ago, he came up with the idea that morality could work in a similar way. Both systems are based on rules. And just as all languages share basic grammatical structures, there are also constants in morality, such as the command not to kill. Armed with this admittedly rather simple insight, Hauser, a renowned expert on animal intelligence, turned to his specialist colleagues and asked which scientists had previously taken a closer look at the analogy.

In fact, he wasn't the first to venture into this terrain. The moral philosopher John Rawls had already dedicated a few lines to the subject in his classic “A Theory of Justice” in the early 1970s. The idea was later taken up by two legal philosophers who had studied with Chomsky in the early 1990s. Seven years ago, Matthias Mahlmann of the Free University of Berlin postulated a “universal grammar of morality” in a monograph. And John Mikhail of Georgetown University Law Center in Washington, D.C., is pursuing similar hypotheses today. But no natural scientist had previously put this theory to the test with sophisticated experiments. Hauser rose to the challenge. Last September, he presented the preliminary, controversial results of his research in a book entitled “Moral Minds”.

“When children learn their mother tongue, they don't think about it much,” explains Hauser in his office on the ninth floor of the William James building overlooking the Harvard campus. “It happens quite naturally, similar to how their arms grew in the womb.” It is similar with the moral compass. Its mechanics, however, are so obscure that even thousands of years of intense pondering about morality could not shed light on them.

There are essentially two approaches to judging actions morally: the utilitarian (only the consequences count) and the deontological (what matters is the action itself, not its consequences). But we don't think the way philosophers would like. Hauser found in experiments that our ethical judgments follow rules unknown to us. The traditional concepts of the philosophers, by contrast, are merely models with which to reconstruct those intuitions after the fact.

Test on the Internet. Hauser penetrates to the core of moral thought by confronting people with scenarios that they have to judge ethically. He has put a morality test on the Internet, in which around 300,000 people have taken part so far (http://moral.wjh.harvard.edu/index2.html). The responses of the participants were surprisingly consistent - regardless of religion, age, gender, education or country of origin. This suggests that people judge based on the same set of rules and imperatives, such as “Be fair!”. Hundreds of thousands of Internet surfers do not represent all of humanity, but Hauser is now also carrying out tests with nomadic peoples - with comparable results.

At the center of the classic example of a moral dilemma is a railroad car. Imagine the following scenario: You are standing next to a railroad track at a switch. A runaway car comes speeding toward you. On the track branching off to the left, a group of five railway workers is busy; on the right-hand track, a single one. If you do nothing, the car swings left and kills the five men. You can save the five by flipping the switch - but then the single worker dies. Most people answer that they would reroute the car. In another scenario, you can push a burly man with a heavy backpack off a bridge and onto the tracks to stop the car. This time, almost all respondents state that such an act would be unjustifiable, even though the result would be the same in both cases.

The classic explanatory models cannot clear up this supposed contradiction. A utilitarian would look only at the result, i.e. approve both actions; a deontologist would reject both options, since killing is always wrong. The knot unravels once one sees that our moral sense subliminally follows an unwritten rule: we evidently distinguish between intended and foreseen harm. Whoever flips the switch foresees that the single worker will die, but does not intend it. Whoever pushes the man off the bridge, on the other hand, wants to kill him in order to save the others.

Euthanasia dilemma. Of course, this is not Hauser's only discovery in the unconscious set of rules. People also consider harm caused through physical contact to be far more reprehensible than harm caused without contact. And an action with negative consequences strikes people as worse than an omission with the same result. The latter is illustrated by the debate on active and passive euthanasia. Doctors may face the decision of giving a terminally ill patient a fatal overdose or turning off the life-support systems. In both cases the result would be the same: the patient dies.

However, in almost all countries - the Netherlands is an exception - it is a criminal offense to administer fatal drugs to the terminally ill. According to Hauser, the reason lies in our moral intuition: We consider the active induction of death to be morally more serious than a largely passive stance that leads to the same result. In terms of evolutionary psychology, this makes sense: if someone fails to act, we cannot be sure whether he did so on purpose. That is why we hesitate to judge him unequivocally. But as the euthanasia example shows, inherited morality is not always made for the modern world. “This also shows the value of this research,” says Hauser. “In this way lawmakers can identify the sources from which our moral thinking is fed.”
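How such subliminal principles might interact can be made concrete with a toy model. The following sketch in Python scores each scenario against the three rules described above - intended versus foreseen harm, physical contact, action versus omission. The scenarios, rule names and equal weighting are illustrative assumptions for the sake of the example, not Hauser's actual model:

    # A minimal, purely illustrative sketch of the unconscious rules described
    # above. The equal weighting of the three rules is an assumption.
    from dataclasses import dataclass

    @dataclass
    class Scenario:
        name: str
        harm_intended: bool    # is the victim's death a means to the end?
        physical_contact: bool # does the agent touch the victim?
        is_action: bool        # active intervention rather than omission?

    def moral_aversion(s: Scenario) -> int:
        """Count how many aversion-triggering rules the scenario activates."""
        score = 0
        if s.harm_intended:     # intended harm weighs heavier than foreseen harm
            score += 1
        if s.physical_contact:  # harm through direct contact feels worse
            score += 1
        if s.is_action:         # acting feels worse than omitting
            score += 1
        return score

    scenarios = [
        Scenario("flip the switch", harm_intended=False, physical_contact=False, is_action=True),
        Scenario("push man off bridge", harm_intended=True, physical_contact=True, is_action=True),
        Scenario("administer overdose", harm_intended=True, physical_contact=False, is_action=True),
        Scenario("withhold treatment", harm_intended=True, physical_contact=False, is_action=False),
    ]

    for s in scenarios:
        print(f"{s.name}: aversion score {moral_aversion(s)} of 3")

Run as written, the bridge scenario collects the highest aversion score and the switch a low one - mirroring the pattern of intuitions Hauser's test subjects report, with active euthanasia landing above its passive counterpart.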

On the basis of these rules and moral imperatives, the variety of values that we are familiar with emerges. An example: All societies believe that one should not kill. But there are exceptions everywhere. Honor killings, for example, are considered despicable in Western democracies. In other societies, such crimes are regarded as justified. Likewise, Eskimos consider child murder permissible when resources are scarce. And while the states of Western Europe reject the death penalty, in the USA it is still practiced to this day.

Moral concepts. While all the decisive phases of language acquisition fall within the first years of life, moral ideas seem to solidify around and beyond the threshold of adolescence, between the ages of nine and fifteen. If Japanese children live in the United States during this period, they develop decidedly American values and customs. If, on the other hand, they return to their home country before this phase, the stay leaves only superficial traces in their behavior. And those who emigrate to America after the age of 15 experience a culture shock there without ever fully adopting the moral thinking of their new environment.

The rules mentioned so far do not yet amount to a “grammar” of morality. There is no doubt that research in this area is still in its infancy. When the linguist Steven Pinker, who also teaches at Harvard University, wrote the book “The Language Instinct” in the early 1990s, he could look back on almost a century of modern grammar research. The moral grammar project, by contrast, was largely uncharted territory until five years ago. That is why Hauser is optimistic that in 30 years researchers will likewise be able to present a complex set of ethical rules.

However, other researchers are not quite as optimistic. The philosopher Richard Rorty, who previously taught at Stanford University, objects that Hauser does not draw a clear line between morality and social conventions. And the psychologists Paul Bloom and Izzat Jarudi of Yale University criticize in the journal “Nature” that the analogy to language does not carry as far as Hauser would like.

All languages, despite structural variations, have verbs, for example, but in some the object follows the verb, in others it precedes it. In morality, on the other hand, there are only differences in weighting: all cultures consider sincerity and fairness important, but some place more emphasis on sincerity, others on fairness. In addition, a moral system within a culture is not as uniform as its language. In Pakistan there are honor killings that many people in that culture consider morally justified - but not all Pakistanis agree on this. There is, by contrast, no debate about what a grammatically correct sentence in Urdu should look like. Still, Bloom admits: “Of course, this does not fundamentally refute Hauser's thesis; it is just that the parallel to language probably does not go that far.”

Moral judgments. The search for the unconscious moral set of rules may still be a novelty, but neuropsychologists have for several years been using imaging methods to investigate what happens in the brain when we make moral judgments. The researchers want to shed light on an old dispute: whether moral judgments come from the gut, as the Scottish philosopher David Hume claimed, or are the result of rational insight, as Immanuel Kant held. Initial findings make one thing clear: there is no single data center in the cerebral cortex that spits out moral judgments on command. Rather, it is a network of brain areas responsible for emotions, abstract thinking and interpersonal relationships.

Hauser's Harvard colleague Joshua Greene, who out of interest in ethics turned from philosopher into neuropsychologist, has been analyzing this network for years. The researcher has had test subjects solve a series of moral dilemmas in the MRI scanner. An intriguing picture emerged. When confronted with the train example, the idea of pushing a man off the bridge triggers inhibitory feelings in test subjects that keep them from sending a person to his death with their bare hands. Tiny spots in the frontal lobe, in the parietal lobe and at the base of the cerebral cortex, which light up yellow and red on the brain images, testify to this. These are the same regions that flash when people are afraid or sad.

But Greene got completely different results on the question of whether one may flip the switch lever to save five track workers. In this case, it is above all the cognitive areas that light up: the prefrontal cortex behind the forehead and the anterior cingulate cortex deep behind the middle of the forehead - both areas involved in the cognitive solution of problem situations. “There are two different processes in the brain that serve different types of moral problems,” explains Greene. “If they affect us directly, we react primarily emotionally; if not, we look at moral questions more rationally.”

Accordingly, two systems in the brain would be involved in moral questions, which in turn reflect the two classic ethical positions: utilitarianism, which judges actions from a distance according to their consequences, and deontology, which ascribes intrinsic moral value to actions. “The philosophers' dispute may be rooted in their brains,” jokes Greene. “Deontology concerns society's absolute prohibitions, such as murder - here we react emotionally. Less essential moral questions, on the other hand, we consider in a more utilitarian way, rationally weighing the consequences.”

A study that Hauser recently completed together with the neuropsychologist Antonio Damasio fits these results. The researchers tested a group of sociopaths whose ventromedial prefrontal cortex, a brain region behind the nose and forehead considered to be the mediator between feeling and reason, had been destroyed. In scenarios like the wagon example, when personal violence was at stake, they decided like utilitarians: without hesitation, they pushed the man off the bridge. Normal people, by contrast, are held back from such a decision by their feelings - a restraint the sociopaths lack. But to moral questions that involved no direct violence, as in the case of the switch, they gave the same answers as most other people.

Emotional brake. The rational assessment of moral problems was still intact in the sociopaths - but the emotional brake that would have kept them from approving of cruel acts was missing. According to Greene, this demonstrates that emotions and reason both play a central role in our ethical perception. Hauser, of course, would like to see reason weighted a little more heavily. For if a universal moral grammar really does govern our morality, an unconscious analysis would have to precede every judgment. This question will probably only be settled by new imaging techniques. For the time being, imaging methods provide only snapshots, not a film of thought. It is therefore difficult to decide which brain area calls the tune to which the others dance.

But sure moral judgment alone does not make us good people. As Hauser and Greene repeatedly point out, why we act is quite different from how we judge an act. Moral rules and commandments that make us consider something good or bad say nothing about how we ultimately behave in a situation. Other factors come into play here: egoism, well-worn habits, affects. Yet our social actions also rest on building blocks that show us humans in a good light, at least at first glance. “Mirror neurons” operating in our brains, for instance, form the basis for human compassion. And recent experiments with small children have shown that helpfulness is innate.

But the utopia of peaceful coexistence fails above all at one hurdle: the deep-seated fear of and aggression toward strangers. Even people who claim to have no prejudice against others give themselves away in experiments. A test by the neuropsychologist Elizabeth Phelps of New York University demonstrated as early as 2000 that whites who see pictures of blacks in quick succession unconsciously react with strong reservations, while they perceive people of their own skin color far more positively.

Such prejudices against strangers have consequences. They come into play subliminally in job interviews as well as in encounters on the street or in bars. And they are almost inevitable. When confronted with other people, humans classify them by race, age and gender within milliseconds. In doing so, they perceive those they count as part of their own group more quickly and more positively, while they initially meet outsiders with instinctive hostility. And when groups within a society quarrel over resources, or when countries even go to war, the lofty values of internalized ethics apply preferentially to one's own group. No wonder: in phylogenetic terms, morality served to regulate the social life of a group of manageable size. The utopia of world peace was unfortunately never the goal.

But humans do not have to let evolution fool them. On the contrary: they are capable of learning. This can be seen in the steady progress on equality, the fight against racism, and animal rights. On the basis of the universal commandment “do not cause unnecessary suffering”, we can recognize through rational discourse that we should reduce the suffering of animals as much as possible. Three decades ago there were no significant animal welfare laws anywhere in the world; today things look very different. Cultural evolution thus sometimes advances rapidly - and expands the human sense of the good.

By Hubertus Breuer