Chances are last week’s article about Brain-to-Brain interfaces left you worried about the technology of the future. But are you afraid of a robot apocalypse? Not yet? Still, it’s likely that humanlike robots make you feel uneasy. Probably not Pixar’s WALL-E with his kind eyes, nor the Star Wars hero C-3PO; we’re speaking of the artificial twins of author Philip K. Dick (Fig. 1, Fig. 4: #80) and roboticist Hiroshi Ishiguro (Fig. 4: #74). But what causes this reaction of repulsion? Why should we care? And what does it all have to do with neuroscience?
Fig. 1. Philip K. Dick’s artificial twin wants to put his friend into a “people zoo”.
The Uncanny
Echoing Sigmund Freud’s notion of “The Uncanny” (1919), Masahiro Mori coined a similar term in 1970 for a hypothesis describing how humanlikeness has a positive effect on the perception of an object, except for a sharp rupture when the resemblance comes too close to a real human being. Such anthropomorphic beings create an atmosphere of eeriness and thus fall into the “Uncanny Valley” (Fig. 2).
Fig. 2. An early visualization of Mori’s Uncanny Valley. The first translation rendered the vertical axis as “familiarity”; the authorized translation by Mori, MacDorman, and Kageki (2012) adapted it to “affinity”. Illustration by Smurrayinchester, CC BY-SA 3.0.
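Mori never gave his curve a mathematical form. As a purely illustrative toy model (all parameters below are invented for the plot), one can sketch the hypothesis as a rising affinity trend with a sharp Gaussian dip just before full human likeness:

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy model of Mori's curve: affinity rises with human likeness,
# but a sharp "valley" (modeled here as a Gaussian dip) appears
# just before full human likeness. All parameters are invented.
likeness = np.linspace(0, 1, 500)   # 0 = industrial robot, 1 = healthy human
trend = likeness ** 2               # gradually rising affinity
valley = 1.6 * np.exp(-((likeness - 0.85) ** 2) / (2 * 0.04 ** 2))  # the dip
affinity = trend - valley

plt.plot(likeness, affinity)
plt.axhline(0, color="grey", linewidth=0.5)
plt.xlabel("human likeness")
plt.ylabel("affinity (shinwakan)")
plt.title("Illustrative sketch of the Uncanny Valley")
plt.show()
```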
Since the original formulation1, technological development has only made the topic of humanlikeness more pressing. Today, the Uncanny Valley is discussed not only in the context of robots, dolls and prosthetic limbs, but also in animation for movies, games and the Internet. Movies using animated versions of their actors, such as “The Polar Express,” have been criticized for their creepy characters. With the rise of Virtual Reality, hyper-realistic animation will face even greater pressure to create a convincing artificial world. But recent developments in robotics let us focus on the real-life Uncanny Valley for now.
Hotels managed by robots
Robotics, too, has advanced by leaps and bounds in recent years: humanoid robots have been introduced as receptionists in department stores and hotel lobbies, and as adult entertainment products. They are even used as a therapy tool for children with autism2. Because such children have trouble understanding social behavior and other people’s emotions, they seem to feel more comfortable playing with a predictable, straightforward machine. At the same time, the humanlike shape helps them adapt and transfer the newly learned skills to real humans.
Fig. 3. How will Atlas by Boston Dynamics react to being mistreated?
Earlier this year, Boston Dynamics presented their state-of-the-art robot Atlas, a next-generation machine capable, among other things, of moving nimbly over unstable ground. The video (Fig. 3), which shows Atlas being pushed around by its creators, evoked highly emotional responses in viewers all over the internet. The reactions were split: whereas some expressed alarm at the potential dangers of artificial intelligence, others revealed a certain degree of empathy for this non-sentient being. Both reactions illustrate the complexity of the relationship between humans and machines, and our classic tendency to anthropomorphize objects in order to make sense of the world4.
In a recent article in the renowned science journal Nature3, Daniela Rus, head of the Computer Science and Artificial Intelligence Laboratory at MIT, stated that in the near future robots will be as “common as cars and phones, a world where everybody can have a robot and robots are pervasively integrated in the fabric of life.” So it’s about time to understand how to make robots trust-evoking and approachable for everybody. To do that, we first have to understand why they sometimes seem so scary.
There have been many attempts to explain the feeling of revulsion evoked by something that falls into the Uncanny Valley. The diversity of experimental methods, ranging from pictures and videos to live human-robot interaction, together with the mistranslation of the original article, has fueled wide debate5. The mechanisms behind the phenomenon are not yet fully understood, but recent research points to four main explanatory approaches: categorization ambiguity, perceptual mismatch, inhibitory devaluation, and mind perception.
Categorization ambiguity
When looking at a modern humanoid, our brain has to categorize it as a humanlike object, as opposed to a real human, in order to react accordingly. With increasing detail in appearance and movement, it takes more mental effort to distinguish the two, even if this happens unconsciously. In their 2016 study, Mathur and Reichling5 found that participants took longer to decide on the “humanness” of robots that were less liked according to explicit ratings on a continuous scale (Fig. 4). Even though the researchers could not statistically rule out other contributing factors, people generally dislike it when things become too ambiguous6. There is something uncanny about that.
Fig. 4. Real robot faces sorted from mechanical to humanlike, as used by Mathur and Reichling (2016)5: #45 was rated the eeriest; #74 is the Geminoid twin of Hiroshi Ishiguro and #80 that of Philip K. Dick. CC BY-NC-ND 4.0.
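To make the logic of such a study concrete, here is a minimal sketch with invented data (not Mathur and Reichling’s actual ratings): if categorization is slower for ambiguous faces, categorization reaction time should correlate negatively with likability across stimuli:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical data for 80 robot faces: likability ratings on a
# -100..100 scale and mean categorization reaction times in ms,
# constructed so that disliked (ambiguous) faces are judged slower.
likability = rng.uniform(-100, 100, size=80)
rt = 900 - 1.5 * likability + rng.normal(0, 80, size=80)

r, p = pearsonr(rt, likability)
print(f"r = {r:.2f}, p = {p:.4f}")  # expect a clear negative correlation
```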
Perceptual Mismatch
The human mind makes predictions about the environment in order to facilitate quick and appropriate reactions. We know from experience how facial expressions and gestures usually look, how humans move, how they behave in a certain context and how they might react to certain situations. Looking at a humanlike object automatically evokes these same expectations. Humanoid robots are not yet perfect enough to trick all the senses; a mismatch can occur on many levels: the eyes may not show the same emotion as the mouth, the movements may lack fluidity or fail to match the sounds the robot makes. Humans are excellent at error detection, so we naturally experience negative affect in response to such incongruences. Yet being confronted with something artificial and still humanlike is nothing we are evolutionarily prepared for, which makes it difficult to attribute the “uncanniness” to anything specific.
While the phenomenon is hard to grasp, it can be probed with neuroscientific methods that offer objective data instead of relying solely on self-report. Measuring electrical brain activity (with an electroencephalogram, or EEG) over the parietal cortex, Urgen and colleagues7 compared their participants’ brain activity in response to videos versus pictures. They showed them (a) a woman with biological movement, (b) an in-between android, which looked like the woman but moved like a robot, and (c) a robot with mechanical movement. The researchers wanted to see whether the incongruence between appearance and movement in the video condition would be reflected neurophysiologically. The EEG makes it possible to detect reactions to violated expectations, referred to as prediction errors, via a characteristic landmark in the signal called the N400. In the video condition, only the android (b) elicited this signal; the human (a) and the robot (c) did not, and neither did any of the three in the picture condition. These results provide a strong argument in favor of the perceptual-mismatch interpretation of the Uncanny Valley effect.
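As a rough illustration of the analysis step behind such a finding, the following sketch simulates EEG epochs (purely synthetic data with invented effect sizes), averages them into an event-related potential (ERP), and measures the mean amplitude in the N400 time window for each condition:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 250                            # sampling rate in Hz
t = np.arange(-0.2, 0.8, 1 / fs)    # epoch time axis in seconds

def simulate_epochs(n400_amplitude, n_trials=60):
    """Simulate parietal EEG epochs; an N400 is a negative deflection
    peaking around 400 ms after stimulus onset (synthetic data)."""
    n400 = n400_amplitude * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
    return n400 + rng.normal(0, 5, size=(n_trials, t.size))

conditions = {"human": 0.0, "android": -4.0, "robot": 0.0}  # invented sizes
window = (t >= 0.35) & (t <= 0.45)

for name, amp in conditions.items():
    erp = simulate_epochs(amp).mean(axis=0)   # average epochs -> ERP
    print(f"{name:8s} mean amplitude 350-450 ms: {erp[window].mean():+.2f} uV")
```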
Fig. 5. Presented material illustrating human-human interaction vs. human-robot interaction in Wang and Quadflieg (2015)8, CC BY 4.0 (detail from Fig. 1).
This interpretation extends to more complex processes such as the reading of social interactions. In a recent study using functional magnetic resonance imaging (fMRI), Wang and Quadflieg (2015)8 measured the brain activation of participants looking at pictures of human-human pairs compared to human-robot pairs in exactly the same depicted situations (Fig. 5). They found less activation for the human-robot pairs in the left temporoparietal junction, a brain region seemingly responsible for judging the intentions and beliefs behind others’ interactions, and one that is also important for the self-other distinction and for moral decisions in general. In contrast, the medial perirhinal cortex, involved in memory and meaning processing, and the ventromedial prefrontal cortex, engaged in emotion regulation and decision making9, showed more activation for the human-robot pictures. Together, these areas are thought to build up norms about stereotypical situations, so-called scripts. This is in line with the general observation that it is difficult to read social information from a robot, as it lacks expression. Hence, people fall back on a generalized concept they have memorized and internalized before, instead of judging the social situation individually, as they do with human pairs.
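Under the hood, such condition comparisons typically rest on a voxel-wise general linear model. The sketch below (toy data and invented regressors, not the study’s actual pipeline) shows the core idea: fit one regressor per condition to a voxel’s time series and test a contrast between them:

```python
import numpy as np

rng = np.random.default_rng(2)
n_scans = 200
condition = rng.integers(0, 3, n_scans)          # 0 = HHI, 1 = HRI, 2 = rest

# Design matrix: one regressor per condition plus a baseline term.
X = np.column_stack([
    (condition == 0).astype(float),  # human-human interaction scans
    (condition == 1).astype(float),  # human-robot interaction scans
    np.ones(n_scans),                # baseline
])
beta_true = np.array([1.0, 0.4, 10.0])           # voxel responds more to HHI
y = X @ beta_true + rng.normal(0, 1, n_scans)    # simulated BOLD time series

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
contrast = np.array([1.0, -1.0, 0.0])            # HHI minus HRI
print(f"contrast estimate: {contrast @ beta_hat:.2f}")  # > 0: more HHI activity
```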
Inhibitory devaluation
Combining categorization ambiguity and perceptual mismatch, the inhibitory devaluation hypothesis by Ferrey and colleagues proposes that “negative evaluations will be triggered by any stimulus that activates multiple, competing stimulus representations during recognition”10. In other words, any kind of competing information, whether at the level of category or of perception, inhibits ongoing cognitive processing and is therefore experienced as disturbing. The resulting negative affect is then attributed to the triggering object, in this case the robot. However, this hypothesis is not restricted to the Uncanny Valley effect with robots: the same devaluation occurs for morphs (images blended gradually between two sources) from human to animal or from one animal to another, just as from human to robot. The authors argue that the Uncanny Valley is merely one instance of a broader phenomenon not limited to humanlikeness, and that anything specific to robots must be explained by additional mechanisms.
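Here is a minimal sketch of that competition idea (a toy construction with invented parameters, not the authors’ actual model): two category units, “human” and “robot”, inhibit each other, and ambiguous input keeps both partially active, prolonging the conflict:

```python
import numpy as np

def settle(evidence_human, evidence_robot, inhibition=1.2, steps=200, dt=0.05):
    """Two mutually inhibiting category units (toy model, invented
    parameters). Returns the accumulated unresolved competition."""
    h = r = 0.0
    conflict = 0.0
    for _ in range(steps):
        dh = evidence_human - h - inhibition * r
        dr = evidence_robot - r - inhibition * h
        h = max(0.0, h + dt * dh)
        r = max(0.0, r + dt * dr)
        conflict += h * r * dt   # both units active at once = competition
    return conflict

print(settle(1.0, 0.0))   # clearly human: almost no competition
print(settle(0.5, 0.5))   # ambiguous android: prolonged competition
```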
Mind perception
Another psychological hypothesis states that the Uncanny Valley effect is grounded in the fear that robots have minds of their own, just like us. Gray and Wegner (2012)11 claim that it is especially the notion of robots possessing emotions and sensations that makes us anxious. When robots communicate some sort of motivation and experience without actually having the ability to act or plan on their own, we may feel that something is not right. And no amount of rationalizing spares us the gut feeling that robots might eventually overpower humans by taking control of their own actions.
More studies are needed to explore this idea further. And although robot designers may choose to avoid too close a resemblance to human features (Fig. 6), the algorithms behind robots’ actions, currently evolving at lightning speed, are independent of their appearance.
So, should we be afraid of robots? Well, that mostly depends on their programmers, for sure. Max Tegmark, President of the Future of Life Institute, explains12: “You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. A key goal of AI [artificial intelligence] safety research is to never place humanity in the position of those ants.” Since January 2015, over 8,600 researchers, including the famous physicist Stephen Hawking, have signed an open letter aimed at preventing artificial intelligence from stepping over certain boundaries. Yet who knows whether robot enterprises will abide by the proposed rules in the future?
At least there is some hope for bridging the Uncanny Valley: a study13 focusing on familiarity and eeriness showed that repeated interaction with a real robot can decrease the sense of revulsion, whether the robot is more or less humanlike. Nevertheless, the authors point out that, in the end, a robot’s actions will determine our attitude towards it.
Fig. 6. Hanson’s robot Sophia says she wants to make art and will destroy humans.
Recommended for further reading: Collection of “uncanny material” by Dr. Stephanie Lay
Sources
[1] Mori, M., MacDorman, K. F., & Kageki, N. (2012). The Uncanny Valley [From the Field]. IEEE Robotics & Automation Magazine, 19(2), 98–100.
[2] Ueyama, Y., & Eapen, V. (2015). A Bayesian Model of the Uncanny Valley Effect for Explaining the Effects of Therapeutic Robots in Autism Spectrum Disorder. PLOS ONE, 10(9), e0138642. doi:10.1371/journal.pone.0138642
[3] Butler, D. (2016). A world where everyone has a robot: why 2040 could blow your mind. Nature Publishing Group. Retrieved (27.02.16) from: http://www.nature.com/news/a-world-where-everyone-has-a-robot-why-2040-could-blow-your-mind-1.19431
[4] Waytz, A., Epley, N., & Cacioppo, J. T. (2010). Social Cognition Unbound: Insights Into Anthropomorphism and Dehumanization. Current Directions in Psychological Science, 19(1), 58–62. doi:10.1177/0963721409359302
[5] Mathur, M. B., & Reichling, D. B. (2016). Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley. Cognition, 146, 22–32. doi:10.1016/j.cognition.2015.09.008
[6] Kätsyri, J., Förger, K., Mäkäräinen, M., & Takala, T. (2015). A review of empirical evidence on different uncanny valley hypotheses: Support for perceptual mismatch as one road to the valley of eeriness. Frontiers in Psychology, 6, 390. doi:10.3389/fpsyg.2015.00390
[7] Urgen, B. A., et al. (2015). Predictive coding and the Uncanny Valley hypothesis: Evidence from electrical brain activity. Paper presented at Cognition: A Bridge Between Robotics and Interaction, a workshop at the 10th ACM/IEEE International Conference on Human-Robot Interaction, Portland, Oregon.
[8] Wang, Y., & Quadflieg, S. (2015). In our own image?: Emotional and neural processing differences when observing human–human vs human–robot interactions. Social Cognitive and Affective Neuroscience, nsv043. doi:10.1093/scan/nsv043
[9] van den Bos, W., & Güroğlu, B. (2009). The role of the ventral medial prefrontal cortex in social decision making. The Journal of Neuroscience, 29(24), 7631–7632. doi:10.1523/JNEUROSCI.1821-09.2009
[10] Ferrey, A. E., Burleigh, T. J., & Fenske, M. J. (2015). Stimulus-category competition, inhibition, and affective devaluation: A novel account of the uncanny valley. Frontiers in Psychology, 6, 249. doi:10.3389/fpsyg.2015.00249
[11] Gray, K., & Wegner, D. M. (2012). Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition, 125(1), 125–130. doi:10.1016/j.cognition.2012.06.007
[12] Tegmark, M. (n. d.). Benefits & Risks Of Artificial Intelligence. Future of Life. Retrieved (27.02.16) from: http://futureoflife.org/background/benefits-risks-of-artificial-intelligence/
[13] Złotowski, J. A., Sumioka, H., Nishio, S., Glas, D. F., Bartneck, C., & Ishiguro, H. (2015). Persistence of the uncanny valley: The influence of repeated interactions and a robot’s attitude on its perception. Frontiers in Psychology, 6, 883. doi:10.3389/fpsyg.2015.00883