Zoozie herself

Zoozie is the female of the Oogieloves and is hyperfluent in every language. She can even communicate with animals, which leads to odd incidents when meat is in question. She is a very stereotypical girl in whom the males of her species have no interest.

Oogieloves as a species

In aesthetics, the uncanny valley is the hypothesis that human replicas that appear almost, but not exactly, like real human beings elicit feelings of eeriness and revulsion among some observers.[2] "Valley" denotes a dip in the human observer's affinity for the replica, a relation that otherwise increases with the replica's human likeness.[3] Examples can be found in robotics and 3D computer animation, among others.

Etymology

The concept was identified by the robotics professor Masahiro Mori as Bukimi no Tani Genshō (不気味の谷現象) in 1970.[4] The term was first translated as uncanny valley in the 1978 book Robots: Fact, Fiction, and Prediction, written by Jasia Reichardt,[5] thus forging an unintended link to Ernst Jentsch's concept of the uncanny,[6] introduced in a 1906 essay "On the Psychology of the Uncanny."[7][8][9] Jentsch's conception was elaborated by Sigmund Freud in a 1919 essay entitled "The Uncanny" ("Das Unheimliche").[10]

Hypothesis

Mori's original hypothesis states that as the appearance of a robot is made more human, some observers' emotional response to the robot will become increasingly positive and empathic, until a point is reached beyond which the response quickly becomes that of strong revulsion. However, as the robot's appearance continues to become less distinguishable from that of a human being, the emotional response becomes positive once again and approaches human-to-human empathy levels.[11]

This area of repulsive response aroused by a robot with appearance and motion between a "barely human" and "fully human" entity is called the uncanny valley. The name captures the idea that an almost human-looking robot will seem overly "strange" to some human beings, will produce a feeling of uncanniness, and will thus fail to evoke the empathic response required for productive human-robot interaction.[11]
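Mori's hypothesis is usually illustrated as a curve of affinity against human likeness with a sharp dip just short of full likeness. A minimal numerical sketch in Python (the functional form, the Gaussian "valley" term, and its parameters are illustrative assumptions, not Mori's formula):

```python
import numpy as np

def affinity(likeness):
    """Toy affinity curve: rises with human likeness, dips sharply
    near (but not at) full likeness, then recovers.
    The Gaussian 'valley' term is an illustrative assumption."""
    likeness = np.asarray(likeness, dtype=float)
    baseline = likeness  # affinity otherwise grows with likeness
    valley = 1.6 * np.exp(-((likeness - 0.85) ** 2) / (2 * 0.05 ** 2))
    return baseline - valley

x = np.linspace(0.0, 1.0, 101)
y = affinity(x)
dip = x[np.argmin(y)]  # likeness value at the bottom of the valley
print(f"valley bottom near likeness = {dip:.2f}")
```

With these assumed parameters the curve bottoms out near a likeness of 0.85 and recovers toward full human likeness, matching the qualitative shape Mori described.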

Theoretical basis

Hypothesized emotional response of subjects is plotted against anthropomorphism of a robot, following Mori's statements. The uncanny valley is the region of negative emotional response towards robots that seem "almost" human. Movement amplifies the emotional response.[12]

A number of theories have been proposed to explain the cognitive mechanism underlying the phenomenon:

    • Mate selection. Automatic, stimulus-driven appraisals of uncanny stimuli elicit aversion by activating an evolved cognitive mechanism for the avoidance of selecting mates with low fertility, poor hormonal health, or ineffective immune systems based on visible features of the face and body that are predictive of those traits.[13][14]
    • Mortality salience. Viewing an "uncanny" robot elicits an innate fear of death and culturally supported defenses for coping with death’s inevitability.... "[P]artially disassembled androids ... play on subconscious fears of reduction, replacement, and annihilation: (1) A mechanism with a human facade and a mechanical interior plays on our subconscious fear that we are all just soulless machines. (2) Androids in various states of mutilation, decapitation, or disassembly are reminiscent of a battlefield after a conflict and, as such, serve as a reminder of our mortality. (3) Since most androids are copies of actual people, they are doppelgängers and may elicit a fear of being replaced, on the job, in a relationship, and so on. (4) The jerkiness of an android’s movements could be unsettling because it elicits a fear of losing bodily control."[15]
    • Pathogen avoidance. Uncanny stimuli may activate a cognitive mechanism that originally evolved to motivate the avoidance of potential sources of pathogens by eliciting a disgust response. "The more human an organism looks, the stronger the aversion to its defects, because (1) defects indicate disease, (2) more human-looking organisms are more closely related to human beings genetically, and (3) the probability of contracting disease-causing bacteria, viruses, and other parasites increases with genetic similarity."[14][16] The visual anomalies of androids, robots, and other animated human characters cause reactions of alarm and revulsion, similar to corpses and visibly diseased individuals.[17]
    • Sorites paradoxes. Stimuli with human and nonhuman traits undermine our sense of human identity by linking qualitatively different categories, human and nonhuman, by a quantitative metric, degree of human likeness.[18]
    • Violation of human norms. The uncanny valley may "be symptomatic of entities that elicit a model of a human other but do not measure up to it".[19] If an entity looks sufficiently nonhuman, its human characteristics will be noticeable, generating empathy. However, if the entity looks almost human, it will elicit our model of a human other and its detailed normative expectations. The nonhuman characteristics will be noticeable, giving the human viewer a sense of strangeness. In other words, a robot stuck inside the uncanny valley is no longer being judged by the standards of a robot doing a passable job at pretending to be human, but is instead being judged by the standards of a human doing a terrible job at acting like a normal person. This has been linked to perceptual uncertainty and the theory of predictive coding.[20][21]
    • Religious definition of human identity. The existence of artificial but humanlike entities is viewed by some as a threat to the concept of human identity.[22] An example can be found in the theoretical framework of psychiatrist Irvin Yalom. Yalom explains that humans construct psychological defenses in order to avoid existential anxiety stemming from death. One of these defenses is "specialness", the irrational belief that aging and death as central premises of life apply to all others but oneself.[23] The experience of the very humanlike "living" robot can be so rich and compelling that it challenges humans' notions of "specialness" and existential defenses, eliciting existential anxiety. In folklore, the creation of human-like, but soulless, beings is often shown to be unwise, as with the golem in Judaism, whose absence of human empathy and spirit can lead to disaster, however good the intentions of its creator.[24]
    • Conflicting perceptual cues. The negative effect associated with uncanny stimuli is produced by the activation of conflicting cognitive representations. Perceptual tension occurs when an individual perceives conflicting cues to category membership, such as when a humanoid figure moves like a robot, or has other visible robot features. This cognitive conflict is experienced as psychological discomfort (i.e., "eeriness"), much like the discomfort that is experienced with cognitive dissonance.[25][26] Several studies support this possibility. Mathur & Reichling found that the time subjects took to gauge a robot face's human- or mechanical-resemblance peaked for faces deepest in the Uncanny Valley, suggesting that perceptually classifying these faces as "human" or "robot" posed a greater cognitive challenge.[27] However, they found that while perceptual confusion coincided with the Uncanny Valley, it did not mediate the effect of the Uncanny Valley on subjects' social and emotional reactions—suggesting that perceptual confusion may not be the mechanism behind the Uncanny Valley effect. Burleigh and colleagues demonstrated that faces at the midpoint between human and non-human stimuli produced a level of reported eeriness that diverged from an otherwise linear model relating human-likeness to affect.[28] Yamada et al. found that cognitive difficulty was associated with negative affect at the midpoint of a morphed continuum (e.g., a series of stimuli morphing between a cartoon dog and a real dog).[29] Ferrey et al. demonstrated that the midpoint between images on a continuum anchored by two stimulus categories produced a maximum of negative affect, and found this with both human and non-human entities.[25] Schoenherr and Burleigh provide examples from history and culture that evidence an aversion to hybrid entities, such as the aversion to genetically modified organisms ("Frankenfoods") and transgender individuals.[30] Finally, Moore developed a Bayesian mathematical model that provides a quantitative account of perceptual conflict.[31] There has been some debate as to the precise mechanisms that are responsible. It has been argued that the effect is driven by categorization difficulty,[28][29] perceptual mismatch,[32] frequency-based sensitization,[33] and inhibitory devaluation.[25]
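The categorization-conflict accounts above can be given a quantitative sketch. The toy model below assumes a two-category Bayesian classifier with Gaussian likelihoods over a single "human likeness" cue and measures classification uncertainty by binary entropy, which peaks at the category boundary; the parameters and the entropy measure are illustrative assumptions, not Moore's published model:

```python
import math

def posterior_human(x, mu_robot=0.2, mu_human=0.8, sigma=0.15):
    """P(human | cue x) under equal priors and Gaussian likelihoods
    centered on a 'robot' prototype and a 'human' prototype."""
    def gauss(x, mu):
        return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))
    lr, lh = gauss(x, mu_robot), gauss(x, mu_human)
    return lh / (lr + lh)

def entropy(p):
    """Binary entropy in bits: maximal (1.0) when p = 0.5."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# Uncertainty is low near the prototypes and peaks at the midpoint cue,
# mirroring the classification difficulty reported for mid-valley faces.
for cue in (0.2, 0.5, 0.8):
    p = posterior_human(cue)
    print(f"cue={cue:.1f}  P(human)={p:.2f}  uncertainty={entropy(p):.2f} bits")
```

In this sketch a stimulus halfway between the prototypes is maximally ambiguous, which is the shape of the category-boundary confusion Mathur & Reichling measured.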

Research

A series of studies experimentally investigated whether Uncanny Valley effects exist for static images of robot faces. Mathur and Reichling[27] used two complementary sets of stimuli spanning the range from very mechanical to very human-like: first, a sample of 80 objectively chosen robot face images from Internet searches, and second, a morphometrically and graphically controlled 6-face series. They asked subjects to explicitly rate the likability of each face. To measure trust toward each face, subjects completed a one-shot investment game to indirectly measure how much money they were willing to "wager" on a robot's trustworthiness. Both stimulus sets showed a robust Uncanny Valley effect on explicitly rated likability and a more context-dependent Uncanny Valley on implicitly rated trust. Their exploratory analysis of one proposed mechanism for the Uncanny Valley, perceptual confusion at a category boundary, found that category confusion occurs in the Uncanny Valley but does not mediate the effect on social and emotional responses.
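A valley in rating data of this kind can be located by fitting a low-order polynomial to likability as a function of human likeness and looking for an interior local minimum of the fit. A minimal sketch with synthetic ratings (both the data and the cubic-fit approach are illustrative assumptions, not Mathur and Reichling's dataset or published analysis):

```python
import numpy as np

# Synthetic likability ratings over a mechano-humanness score in [0, 1]
# (illustrative valley-shaped data, not the published dataset).
score = np.linspace(0.0, 1.0, 11)
rating = np.array([3.0, 3.3, 3.6, 3.4, 2.6, 1.9, 1.7, 2.2, 3.1, 4.0, 4.6])

# Fit a cubic and find interior critical points of the fitted curve.
coeffs = np.polyfit(score, rating, 3)
deriv = np.polyder(coeffs)
roots = [r.real for r in np.roots(deriv)
         if abs(r.imag) < 1e-9 and 0 < r.real < 1]

# An interior local minimum (positive second derivative) marks the
# estimated valley bottom.
valley = [r for r in roots if np.polyval(np.polyder(deriv), r) > 0]
print("estimated valley bottom:", [round(v, 2) for v in valley])
```

For this synthetic data the fit recovers a single interior minimum a little past the midpoint of the likeness scale, where the ratings dip.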

An empirically estimated Uncanny Valley for static robot face images[27]

One study conducted in 2009 examined the evolutionary mechanism behind the aversion associated with the uncanny valley. A group of five monkeys were shown three images: two different 3D monkey faces (realistic, unrealistic), and a real photo of a monkey's face. The monkeys' eye-gaze was used as a proxy for preference or aversion. Since the realistic 3D monkey face was looked at less than either the real photo or the unrealistic 3D monkey face, this was interpreted as an indication that the monkey participants found the realistic 3D face aversive, or otherwise preferred the other two images. As one would expect with the uncanny valley, more realism can lead to less positive reactions, and this study demonstrated that neither human-specific cognitive processes nor human culture explains the uncanny valley. In other words, this aversive reaction to realism can be said to be evolutionary in origin.[34]

As of 2011, researchers at University of California, San Diego and California Institute for Telecommunications and Information Technology are measuring human brain activations related to the uncanny valley.[35][36] In one study using fMRI, a group of cognitive scientists and roboticists found the biggest differences in brain responses for uncanny robots in parietal cortex, on both sides of the brain, specifically in the areas that connect the part of the brain’s visual cortex that processes bodily movements with the section of the motor cortex thought to contain mirror neurons. The researchers say they saw, in essence, evidence of mismatch or perceptual conflict.[20] The brain "lit up" when the human-like appearance of the android and its robotic motion "didn’t compute". Ayşe Pınar Saygın, an assistant professor from UCSD, says "The brain doesn’t seem selectively tuned to either biological appearance or biological motion per se. What it seems to be doing is looking for its expectations to be met – for appearance and motion to be congruent."[37][38][39]

Viewer perception of facial expression and speech and the uncanny valley in realistic, human-like characters intended for video games and film is being investigated by Tinwell et al., 2011.[40] Consideration is also given by Tinwell et al. (2010) as to how the uncanny may be exaggerated for antipathetic characters in survival horror games.[41] Building on the body of work already undertaken in android science, this research intends to build a conceptual framework of the uncanny valley using 3D characters generated in a real-time gaming engine. The goal is to analyze how cross-modal factors of facial expression and speech can exaggerate the uncanny. Tinwell et al., 2011[42] have also introduced the notion of an unscalable uncanny wall that suggests that a viewer’s discernment for detecting imperfections in realism will keep pace with new technologies in simulating realism. A summary of Dr Angela Tinwell's research on the Uncanny Valley, the psychological reasons behind it, and how designers may overcome the uncanny in human-like virtual characters is provided in her book, The Uncanny Valley in Games and Animation by CRC Press.[43]

Design principles

A number of design principles have been proposed for avoiding the uncanny valley:

    • Design elements should match in human realism. A robot may look uncanny when human and nonhuman elements are mixed.[44] For example, both a robot with a synthetic voice and a human being with a human voice have been found to be less eerie than a robot with a human voice or a human being with a synthetic voice.[45] For a robot to give a more positive impression, its degree of human realism in appearance should also match its degree of human realism in behavior.[46] If an animated character looks more human than its movement, this gives a negative impression.[47] Human neuroimaging studies also indicate matching appearance and motion kinematics are important.[20][48][49]
    • Reducing conflict and uncertainty by matching appearance, behavior, and ability. In terms of performance, if a robot looks too appliance-like, people will expect little from it; if it looks too human, people will expect too much from it.[46] A highly human-like appearance leads to an expectation that certain behaviors will be present, such as humanlike motion dynamics. This likely operates at a subconscious level and may have a biological basis. Neuroscientists have noted "when the brain's expectations are not met, the brain... generates a 'prediction error'. As human-like artificial agents become more commonplace, perhaps our perceptual systems will be re-tuned to accommodate these new social partners. Or perhaps, we will decide 'it is not a good idea to make [robots] so clearly in our image after all'."[20][49][50]
    • Human facial proportions and photorealistic texture should only be used together. A photorealistic human texture demands human facial proportions, or the computer generated character can fall into the uncanny valley. Abnormal facial proportions, including those typically used by artists to enhance attractiveness (e.g., larger eyes), can look eerie with a photorealistic human texture. Avoiding a photorealistic texture can permit more leeway.[51]

Criticism

A number of criticisms have been raised concerning whether the uncanny valley exists as a unified phenomenon amenable to scientific scrutiny:

    • Good design can lift human-looking entities out of the valley. David Hanson has criticized Mori's hypothesis that entities approaching human appearance will necessarily be evaluated negatively.[52] He has shown that the uncanny valley that Karl MacDorman and Hiroshi Ishiguro[53] generated – by having participants rate photographs that morphed from humanoid robots to android robots to human beings – could be flattened out by adding neotenous, cartoonish features to the entities that had formerly fallen into the valley.[52]
    • The uncanny appears at any degree of human likeness. Hanson has also pointed out that uncanny entities may appear anywhere in a spectrum ranging from the abstract (e.g., MIT's robot Lazlo) to the perfectly human (e.g., cosmetically atypical people).[52] Capgras syndrome is a relatively rare condition in which the sufferer believes that people (or, in some cases, things) have been replaced with duplicates. These duplicates are rationally accepted to be identical in physical properties, but the irrational belief is held that the "true" entity has been replaced with something else. Some sufferers of Capgras syndrome claim that the duplicate is a robot. Ellis and Lewis argue that the syndrome arises from an intact system for overt recognition coupled with a damaged system for covert recognition, which leads to conflict over an individual being identifiable but not familiar in any emotional sense.[54] This supports the view that the uncanny valley could arise due to issues of categorical perception that are particular to the manner in which the brain processes information.[49][55]
    • The uncanny valley is a heterogeneous group of phenomena. Phenomena labeled as being in the uncanny valley can be diverse, involve different sense modalities, and have multiple, possibly overlapping causes, which can range from evolved or learned circuits for early face perception[51][56] to culturally-shared psychological constructs.[57] People's cultural backgrounds may have a considerable influence on how androids are perceived with respect to the uncanny valley.[58]
    • The uncanny valley may be generational. Younger generations, more used to CGI, robots, and such, may be less likely to be affected by this hypothesized issue.[59]

Similar effects

An effect similar to the uncanny valley was noted by Charles Darwin as early as 1839, in The Voyage of the Beagle.

A similar "uncanny valley" effect could, according to the ethical-futurist writer Jamais Cascio, show up when humans begin modifying themselves with transhuman enhancements (cf. body modification), which aim to improve the abilities of the human body beyond what would normally be possible, be it eyesight, muscle strength, or cognition.[61] So long as these enhancements remain within a perceived norm of human behavior, a negative reaction is unlikely, but once individuals supplant normal human variety, revulsion can be expected. However, according to this theory, once such technologies gain further distance from human norms, "transhuman" individuals would cease to be judged on human levels and instead be regarded as separate entities altogether (this point is what has been dubbed "posthuman"), and it is here that acceptance would rise once again out of the uncanny valley.[61] Another example comes from "pageant retouching" photos, especially of children, which some find disturbingly doll-like.[62]

In computer animation and special effects

A number of films that use computer-generated imagery to show characters have been described by reviewers as giving a feeling of revulsion or "creepiness" as a result of the characters looking too realistic. Examples include the following:

    • According to roboticist Dario Floreano, the baby character Billy in Pixar's groundbreaking 1988 animated short film Tin Toy provoked negative audience reactions, which first led the film industry to take the concept of the uncanny valley seriously.[63][64]
    • The 2001 film Final Fantasy: The Spirits Within, the first photorealistic computer-animated feature film, provoked negative reactions from some viewers due to its near-realistic yet imperfect visual depictions of human characters.[65][66][67] The Guardian critic Peter Bradshaw stated that while the film's animation is brilliant, the "solemnly realist human faces look shriekingly phoney precisely because they're almost there but not quite".[68] Rolling Stone critic Peter Travers wrote of the film, "At first it's fun to watch the characters, […] But then you notice a coldness in the eyes, a mechanical quality in the movements".[69]
    • Several reviewers of the 2004 animated film The Polar Express called its animation eerie. Reviewer Paul Clinton wrote, "Those human characters in the film come across as downright... well, creepy. So The Polar Express is at best disconcerting, and at worst, a wee bit horrifying".[70] The term "eerie" was used by reviewers Kurt Loder[71] and Manohla Dargis,[72] among others. Newsday reviewer John Anderson called the film's characters "creepy" and "dead-eyed", and wrote that "The Polar Express is a zombie train".[73] Animation director Ward Jenkins wrote an online analysis describing how changes to the Polar Express characters' appearance, especially to their eyes and eyebrows, could have avoided what he considered a feeling of deadness in their faces.[74]
    • In a review of the 2007 animated film Beowulf, New York Times technology writer David Gallagher wrote that the film failed the uncanny valley test, stating that the film's villain, the monster Grendel, was "only slightly scarier" than the "closeups of our hero Beowulf’s face... allowing viewers to admire every hair in his 3-D digital stubble".[75]
    • Some reviewers of the 2009 animated film A Christmas Carol criticized its animation as being creepy. Joe Neumaier of the New York Daily News said of the film, "The motion-capture does no favors to co-stars [Gary] Oldman, Colin Firth and Robin Wright Penn, since, as in 'Polar Express,' the animated eyes never seem to focus. And for all the photorealism, when characters get wiggly-limbed and bouncy as in standard Disney cartoons, it's off-putting".[76] Mary Elizabeth Williams wrote of the film, "In the center of the action is Jim Carrey -- or at least a dead-eyed, doll-like version of Carrey".[77]
    • In the 2010 live-action film The Last Airbender, the character Appa, the flying bison, has been called "uncanny". Geekosystem's Susana Polo found the character "really quite creepy", noting "that prey animals (like bison) have eyes on the sides of their heads, and so moving them to the front without changing rest of the facial structure tips us right into the uncanny valley".[78]
    • The 2010 live-action film Tron: Legacy features a computer-generated young version of actor Jeff Bridges (as a young Kevin Flynn and Clu) which reviewers have criticized as being creepy. Vic Holtreman of Screen Rant wrote, "Finally we get to the CGI recreation of Jeff Bridges as a young man. Have we finally gotten past the 'uncanny valley' (where the mind/eye discerns that something is just not quite 'real')? Sadly, no. As long as young Kevin Flynn wasn’t talking, the face looked great – but as soon as he spoke, the creepy factor pops up. He looked like he had a face full of Botox […] One could argue that Clu was a computer program and should have been 'stiff' compared to a human, but even in the opening scene of the film where we see the real-world young Kevin Flynn, the same effect is present".[79] Manohla Dargis of The New York Times wrote, "Mr. Bridges mostly amuses by throwing a little Lebowski into his performance as the older Kevin, which partly makes up for the creepiness of his computer-enhanced turn as both the younger Kevin and the rebellious program Clu. This youthful version was achieved by digitally translating the actor’s facial movements into data for a simulacrum that here looks like an animated death mask".[80] Amy Biancolli of the Houston Chronicle wrote, "Regarding Bridges' digital de-aging: It's creepy. It's a little less creepy on Clu's face (he's not human, anyway) than on Kevin's in a scene from 1989, but on either of them it has the waxen look of storefront mannequins - or over-Botoxed socialites".[81]
    • The 2011 animated film Mars Needs Moms was widely criticized for being creepy and unnatural because of its style of animation. The film was the second biggest box office bomb in history, which may have been due in part to audience revulsion.[82][83][84][85] (Mars Needs Moms was produced by Robert Zemeckis's production company, ImageMovers, which had previously produced The Polar Express, Beowulf, and A Christmas Carol.)
    • Reviewers had mixed opinions regarding whether the 2011 animated film The Adventures of Tintin: The Secret of the Unicorn was affected by the uncanny valley. Daniel D. Snyder of The Atlantic wrote, "Instead of trying to bring to life Herge’s beautiful artwork, Spielberg and co. have opted to bring the movie into the 3D era using trendy motion-capture technique to recreate Tintin and his friends. Tintin’s original face, while barebones, never suffered for a lack of expression. It’s now outfitted with an alien and unfamiliar visage, his plastic skin dotted with pores and subtle wrinkles […]. While all the characters sport some kind of cartoonish features—especially their ears and noses—their photorealistic eyes are somehow blank. […] In bringing them to life, Spielberg has made the characters dead. […] the cast of Tintin, sit comfortably at the bottom of the fabled uncanny valley […] Spielberg decided not to follow. Instead he’s stuck at the bottom of the uncanny valley with Tintin staring back at him through his dead eyes".[86] N.B. of The Economist wrote, "This distinction is referred to as 'the uncanny valley' […] In 'The Adventures of Tintin', too, the effect can be grotesque. Tintin, Captain Haddock and the others exist in settings that are almost photo-realistic, and nearly all of their features are those of flesh-and-blood people. And yet they still have the sausage fingers and distended noses of comic-strip characters. It's not so much 'The Secret of the Unicorn' as 'The Invasion of the Body Snatchers'".[87] However, other reviewers felt that the film avoided the uncanny valley despite its animated characters' realism. Critic Dana Stevens of Slate wrote, "With the possible exception of the title character, the animated cast of Tintin narrowly escapes entrapment in the so-called 'uncanny valley'".[88] Wired magazine editor Kevin Kelly wrote of the film, "we have passed beyond the uncanny valley into the plains of hyperreality".[89]
    • The 2015 live-action film Terminator Genisys contains a scene featuring a computer-generated young version of actor Arnold Schwarzenegger (as a young T-800 Terminator) which some reviewers saw as being affected by the uncanny valley. Eric Mungenast of the East Valley Tribune wrote of the film, "One notable technological problem stems from the attempt to make old Schwarzenegger look young again with some digital manipulation — it definitely doesn’t cross over the uncanny valley".[90] Writer and reviewer Simon Prior wrote of the film, "Even the 'Synthespian' version of Arnie isn’t as horrific as you might expect, although it’s still clear that CGI hasn’t yet conquered the Uncanny Valley issue".[91] (The film's predecessor, 2009's Terminator Salvation, also has a scene with a CG young Schwarzenegger which provoked similar reactions from some viewers,[92][93] but unlike in the newer film, this earlier version does not speak.)

In fiction

The fear aroused by contemplating a "person" with small aberrations, and its intensification by the figure's movement, were noted in 1818 by the writer Mary Shelley in the novel Frankenstein; or, The Modern Prometheus:

"How can I describe my emotions at this catastrophe, or how delineate the wretch whom with such infinite pains and care I had endeavoured to form? His limbs were in proportion, and I had selected his features as beautiful. Beautiful! Great God! His yellow skin scarcely covered the work of muscles and arteries beneath; his hair was of a lustrous black, and flowing; his teeth of a pearly whiteness; but these luxuriances only formed a more horrid contrast with his watery eyes, that seemed almost of the same colour as the dun-white sockets in which they were set, his shrivelled complexion and straight black lips. <…> A mummy again endued with animation could not be so hideous as that wretch. I had gazed on him while unfinished; he was ugly then, but when those muscles and joints were rendered capable of motion, it became a thing such as even Dante could not have conceived." — Mary Shelley, Frankenstein; or, The Modern Prometheus.

The 1977 Doctor Who serial "The Robots of Death" describes a mental illness called "Grimwade's Syndrome" or "robophobia": a condition where the lack of body language from humanoid robots provokes in certain people the feeling that they are "surrounded by walking, talking dead men."

In the 2008 30 Rock episode "Succession", Frank Rossitano explains the uncanny valley concept, using a graph and Star Wars examples, to try to convince Tracy Jordan that his dream of creating a pornographic video game is impossible. He also references the computer-animated film The Polar Express.[94]

In the 2010 Criminal Minds episode "The Uncanny Valley", a woman turns women into living dolls to try to reclaim the porcelain dolls she had owned when she was younger.

In the 2014 The Leftovers season 1 finale "The Prodigal Son Returns," the Guilty Remnant places dozens of lifesize humanoid replicas of departed loved ones into the homes of their family members who are still alive. This action unnerves so many people that it sparks a riot in the town.