On Determining Whom to Kill: The Challenge of Moral Decision Making

You may have heard of the tragic accident that took place in May 2016 in Florida, widely labeled the first fatal accident involving an autonomous vehicle. The accident occurred when the car crashed into a tractor-trailer while its ‘Autopilot’ mode was engaged. This spurred a public debate about the reliability of autonomous vehicles, with some even arguing that autonomous cars should be banned from the road. In fact, due to the limits of current technology, autonomous vehicles on the market are not yet ‘fully’ autonomous, because they still require the driver’s attention. But will everyone be content if technology develops enough to manufacture full-fledged self-driving cars?

The answer is probably no, because a central question remains unanswered even if all technical problems are solved: how should an algorithm react when confronted with an inevitable, possibly deadly conflict? One such scenario is depicted in Figure 1. Should an algorithm be utilitarian and kill the driver of a car to rescue several pedestrians, or should the driver be saved at all costs? This may ring a bell for those of you who have heard of the ‘trolley problem’, which was first introduced by Foot in 1967 and asks whether it is moral to kill one person for the sake of saving the lives of five other people, when you have to be the one to make the decision.

(Did you know? There is a flash game version of the trolley problem which you can try out, available here! It is only available on PC.)

Figure 1. Possible Dilemmas of Self-driving Cars (Bonnefon et al., 2016): (a) killing several pedestrians or one passerby, (b) killing one pedestrian or its own passenger, and (c) killing several pedestrians or its own passenger.

The advent of autonomous vehicles has created a need to establish maxims of moral decision-making in real life. However, people’s complicated responses when asked about this issue imply that there is more to programming autonomous vehicles than just maximizing the number of lives saved, as in the trolley problem.

According to a recent study by Bonnefon et al., people tend to give rather ambivalent responses when asked about the autonomous car conflict outlined above, agreeing that there should be self-sacrificing autonomous vehicles as long as they don’t have to ride in them. In other words, participants in the study rated self-sacrificing cars as highly moral, while they themselves were more willing to purchase cars that protect their drivers at all costs.

Here comes the dilemma: if self-driving cars are programmed not to self-sacrifice in this situation, people who are not riding in them will not like the idea, to say the very least. And if autonomous cars are designed to self-sacrifice in order to maximize utilitarian outcomes, then no one will oppose them, but at the same time no one will ride in or buy them. Thus, the problem does not simply boil down to calculating and programming the most rational and utilitarian choice into these cars. In the following paragraphs, I will explore whether and how moral psychology and neuroscience can inform us about possible solutions to this dilemma. (For more information about self-sacrificing autonomous vehicles, please refer here.)
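To see why the purely utilitarian framing is tempting yet incomplete, here is a minimal sketch of what a "just maximize the number of lives saved" rule would look like in code. Everything in it (the `Option` class, the fatality estimates, the scenario labels) is hypothetical and purely illustrative; no real vehicle is programmed this way.

```python
# A minimal sketch of a purely utilitarian decision rule -- hypothetical and
# illustrative only, not the code of any real autonomous vehicle.
from dataclasses import dataclass
from typing import List

@dataclass
class Option:
    label: str                 # e.g. "swerve" or "stay on course"
    expected_fatalities: int   # naive estimate of lives lost if this option is chosen

def utilitarian_choice(options: List[Option]) -> Option:
    """Pick the option that minimizes expected fatalities, and nothing else."""
    return min(options, key=lambda o: o.expected_fatalities)

# Scenario (c) from Figure 1: several pedestrians vs. the car's own passenger.
options = [
    Option("stay on course (hit several pedestrians)", expected_fatalities=5),
    Option("swerve into barrier (sacrifice passenger)", expected_fatalities=1),
]
print(utilitarian_choice(options).label)   # always sacrifices the passenger
```

This rule always sacrifices the passenger in scenario (c), which is exactly the outcome survey respondents praised in the abstract but refused to buy; nothing in the function leaves room for those moral sentiments.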

The Emotional Dog and its Rational Tail

Traditionally, humans were considered rational beings and were therefore assumed to make moral judgments based on deliberate reasoning. However, psychologist Jonathan Haidt claims that the reality is actually the opposite: he proposes that moral decision-making is first determined by emotion (intuition), and that the reasoning process we think we base our decisions on is just a post-hoc process that justifies fast, automatic, emotional gut feelings. “No, I am sure that I make moral decisions based on careful reasoning!” you may still insist. Then let us see how you answer this tricky question.

     Julie and Mark, who are sister and brother, are traveling together in France. They are both on summer vacation from college. One night they are staying alone in a cabin near the beach. They decide that it would be interesting and fun if they tried making love. At the very least it would be a new experience for each of them. Julie is already taking birth control pills, but Mark uses a condom too, just to be safe. They both enjoy it, but they decide not to do it again. They keep that night as a special secret between them, which makes them feel even closer to each other. So what do you think about this? Was it wrong for them to have sex? 

Do you think what they did is morally wrong? If so, why? You would probably get stuck, because you run out of rationales: nobody is hurt or deprived of their rights, there is virtually no chance of the sister becoming pregnant since they used two forms of contraception, and most importantly both of them were perfectly fine after the event. But even if you cannot come up with clear reasons, it just ‘feels’ morally wrong to imagine having sex with your sibling, and I bet nothing is going to change your mind.

Haidt called this phenomenon “moral dumbfounding,” which refers to having strong negative gut feelings about certain situations while being unable to explain why they seem immoral. This is the essence of the “social intuitionist” view, which argues that during moral judgment a decision is first made automatically via intuition, and reason is then recruited to find logical arguments that support that decision. Haidt captured this idea in a well-known phrase: the “emotional dog and its rational tail”.

The Emotional Dog Supported by Science

But guess what? The social intuitionist view is not merely a philosophical position; it has also been supported by some cool scientific research. A study by neuroscientist Antonio Damasio demonstrates the importance of emotions in decision-making. He found that patients with damage to the ventromedial prefrontal cortex (vmPFC), a region involved in regulating and inhibiting emotion, not only showed deficits in expressing emotion but also made terrible decisions in real life, even though their intellectual abilities, such as IQ and knowledge of social norms, remained intact. In Damasio’s study, the patients were asked to perform the Iowa Gambling Task, which is designed to simulate real-life decision-making. The patients performed poorly on the Iowa Gambling Task (the ‘real-life’ condition), although they performed as well as healthy controls on other laboratory tasks.

Figure 2. Iowa Gambling Task (IGT): Participants repeatedly draw cards from four decks and win or lose money accordingly. The goal of the task is to win as much money as possible by flipping cards. (Please watch this video for a detailed description.) PaulWicks (CC0 1.0)
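For readers who prefer code to prose, here is a rough, self-contained simulation of the task’s payoff structure. The exact reward and penalty schedules vary across versions of the IGT, so the numbers below are only illustrative: decks A and B pay well per card but lose money on average, while decks C and D pay less but win in the long run. Healthy participants gradually drift toward the good decks, whereas the vmPFC patients kept making the disadvantageous choice.

```python
import random

# Rough simulation of the Iowa Gambling Task's payoff structure.
# The schedules below are illustrative, not the exact values of any one version:
# decks A and B look tempting but lose money on average, C and D win on average.
DECKS = {
    "A": {"reward": 100, "penalty": 250,  "penalty_prob": 0.5},  # net ~ -25 per card
    "B": {"reward": 100, "penalty": 1250, "penalty_prob": 0.1},  # net ~ -25 per card
    "C": {"reward": 50,  "penalty": 50,   "penalty_prob": 0.5},  # net ~ +25 per card
    "D": {"reward": 50,  "penalty": 250,  "penalty_prob": 0.1},  # net ~ +25 per card
}

def draw(deck_name: str) -> int:
    """Return the net payoff of flipping one card from the given deck."""
    deck = DECKS[deck_name]
    payoff = deck["reward"]
    if random.random() < deck["penalty_prob"]:
        payoff -= deck["penalty"]
    return payoff

# Flipping 100 cards from a "bad" deck vs. a "good" deck:
print(sum(draw("A") for _ in range(100)))  # tends to be negative
print(sum(draw("C") for _ in range(100)))  # tends to be positive
```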

The Iowa Gambling Task showed that emotion is crucial in general real-life decision-making, but is the same true in the moral domain? Another study corroborates the view that intuition dominates moral decisions as well. In 2015, psychologist Pärnamets and colleagues showed that simply directing eye gaze towards one option over another can bias moral decisions. In this experiment, participants were first presented with a moral issue and two possible responses to it: for example, the statement “murder is sometimes justifiable” was presented, and the subjects were free to look at the two possible answers, “sometimes justifiable” and “never justifiable”, which appeared on either side of the screen.

Figure 3. How gaze biases moral decision-making (Pärnamets et al., 2015)

Here is the trick: the experimenters used an eye-tracker to measure how long the participant’s gaze rested on each option, and interrupted the trial as soon as the participant had looked at one randomly predetermined option for at least 750 milliseconds (ms) and at the other for at least 250 ms. At that moment the participants were prompted to make their decision; afterwards they were asked to rate the perceived importance of the moral items, to ensure that the result was not driven by pre-existing interest or other factors.
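The stopping rule is easy to state precisely in code. Below is a schematic, self-contained simulation of one trial: `simulate_gaze_sample` is a hypothetical stand-in for a real eye-tracker reading, and the 20 ms sampling interval is an assumption made purely for illustration.

```python
import random

def simulate_gaze_sample(options):
    """Stand-in for an eye-tracker reading: which option the eyes are on right now."""
    return random.choice(options + (None,))   # None = looking elsewhere on the screen

def run_trial(options=("sometimes justifiable", "never justifiable"),
              target_threshold_ms=750, other_threshold_ms=250, sample_ms=20):
    target = random.choice(options)            # randomly predetermined target option
    other = options[0] if target == options[1] else options[1]
    gaze_ms = {opt: 0 for opt in options}

    # Keep sampling gaze until the target has been viewed for >= 750 ms
    # and the other option for >= 250 ms, then interrupt and prompt the decision.
    while gaze_ms[target] < target_threshold_ms or gaze_ms[other] < other_threshold_ms:
        fixated = simulate_gaze_sample(options)
        if fixated is not None:
            gaze_ms[fixated] += sample_ms

    return target, gaze_ms                     # the participant would be prompted here

target, gaze_ms = run_trial()
print(f"prompted with gaze times {gaze_ms}; manipulated target was '{target}'")
```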

The results are striking: although participants rated the two alternatives as equally important, their moral decisions were directed towards the gaze-manipulated target at above-chance levels (over 50%). This implies that even sensory inputs as simple as gaze can bias moral decision-making, further supporting Haidt’s hypothesis that moral decision-making is driven by an unconscious, automatic and fast process, with the reasoning process merely following it.

Reasoning can Override the Initial Intuition

A different perspective comes from neuroscientist Joshua Greene, who argues that rational processes are stronger than Haidt has claimed. (Phew! We are not emotional dogs any more!) In this study, subjects were given two kinds of moral dilemmas: easy and hard. An example of an easy moral dilemma is the ‘infanticide dilemma,’ in which a mother wants to kill her baby for her own good. In this easy problem there is almost no reasonable excuse to kill the baby, so one quickly reaches the conclusion that it is morally inappropriate. The hard problem, by contrast, is a dilemma in which a crying baby has to be killed in order to save five other people hiding from their enemy; it is harder because one has to weigh the cost of a personal moral violation (killing the baby) against the utilitarian value of saving the five other people. The result of this study is that cognitive processes override the initial emotional response when the utilitarian benefit (saving the lives of five people over one) outweighs the personal moral violation (killing the baby). Greene and colleagues found that the anterior cingulate cortex (ACC), a brain region associated with detecting conflict, and the dorsolateral prefrontal cortex (DLPFC), a brain region associated with the deployment of cognitive control, become activated when people make utilitarian decisions in these hard personal moral dilemmas. In conclusion, although the initial automatic response is determined by intuition, the cognitive and reasoning processes that follow can alter it.
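As a toy caricature of this override idea (not a model anyone has fitted to data; the weights and threshold are invented for the example), the decision can be pictured as an initial intuitive veto that cognitive control overturns only when the utilitarian benefit is large enough:

```python
# Toy caricature of the dual-process claim above. All numbers are invented;
# this is an illustration, not a published computational model.
def moral_judgment(lives_saved: int, violation_cost: float, override_threshold: float = 1.0):
    """Initial intuition says 'no' to the personal violation; deliberate reasoning
    overrides it only if the utilitarian benefit sufficiently outweighs the cost."""
    utilitarian_benefit = lives_saved                 # crude proxy: each life counts as 1
    if utilitarian_benefit - violation_cost > override_threshold:
        return "override: act (utilitarian choice)"   # controlled, ACC/DLPFC-style override
    return "follow intuition: refuse"                 # fast, automatic response stands

print(moral_judgment(lives_saved=5, violation_cost=3))  # hard dilemma -> override
print(moral_judgment(lives_saved=1, violation_cost=3))  # easy dilemma -> follow intuition
```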

Weaving It Together

So it seems that both intuition and reasoning play important roles in moral decision-making. How exactly do intuition and reasoning interact, and what role does each play during moral decision-making? To answer this, I would like to introduce a 2015 study by Gui and colleagues, which investigated the temporal dynamics (time course) of intuitive, emotional and cognitive processes in moral decision-making. They did this using electroencephalography (EEG), a neuroscience research method that measures electrical activity in the brain and is ideal for tracking the precise time course of brain activity.

Figure 4. Electroencephalogram (EEG)

Gui and colleagues found that moral decision-making involves four distinct stages in the EEG signal. The first stage occurs 80-150 ms after the stimulus is shown and can be interpreted as moral intuition (N1; the letter indicates whether the peak is positive or negative, and the number indicates the order in which the peak occurred, so N1 stands for the first negative peak in the EEG signal). The second stage (N2) happens 150-200 ms after stimulus onset and reflects emotional arousal, which does not affect the earlier moral intuition. In the third stage, 200-400 ms after onset, emotional processing arises and influences the cognitive processes that follow (LPP; late positive potential). In the final stage, after 450 ms, complex moral reasoning, represented by a slow wave, takes over. The contribution of this study is that it mapped the time course of moral decision-making and provided evidence that an independent ‘moral intuition’ component really exists. It also supports the idea that cognitive processing occurs after intuition and emotion, and that moral judgment is an overall integration of intuitional, emotional, and cognitive processes.
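The four stages are easiest to see laid out as a simple timetable. The sketch below restates them as a lookup from post-stimulus latency to the interpreted process; the window boundaries come from the summary above, and everything else is illustrative scaffolding.

```python
# The four EEG stages summarized above, as a lookup from latency (ms) to process.
STAGES = [
    ((80, 150), "N1", "moral intuition"),
    ((150, 200), "N2", "emotional arousal"),
    ((200, 400), "LPP", "emotional processing influencing later cognition"),
    ((450, float("inf")), "slow wave", "complex moral reasoning"),
]

def stage_at(latency_ms: float):
    """Return the (component, interpretation) active at a given post-stimulus latency,
    or None for latencies that fall between the reported windows (e.g. 400-450 ms)."""
    for (start, end), component, interpretation in STAGES:
        if start <= latency_ms < end:
            return component, interpretation
    return None

print(stage_at(100))   # ('N1', 'moral intuition')
print(stage_at(500))   # ('slow wave', 'complex moral reasoning')
```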

Take Home Message: Going Back to the Autonomous Car Dilemma

So, let us go back to the autonomous car problem. The studies above show that intuition and reasoning are both important in moral decision-making. If programmers considered only the utilitarian, reasoning side of moral decisions and ignored the emotional side, this could lead to disastrous decisions on the road. In other words, the problem with autonomous cars cannot be reduced to a yes/no question of ‘to self-sacrifice or not?’; the cars should be flexible enough to take into account complex and varying situations. This is reflected in Greene’s view on autonomous cars: what is more important than programming autonomous vehicles to minimize harm is meeting people’s moral sentiments.

 
