We Do Not Anthropomorphize a Robot Based Only on Its Cover: Context Matters Too!
Abstract
1. Introduction
2. Different Conceptions of What Anthropomorphism Is
3. The Importance of Contextual Effects in the Models of Anthropomorphism
3.1. A Contextless Model: The Mere Appearance Hypothesis
3.2. Why We Anthropomorphize: The Three Psychological Determinants of Anthropomorphism
3.3. How We Anthropomorphize: The Theory of Mind
3.3.1. Anthropomorphism as a Process Dependent on the Theory of Mind
3.3.2. Anthropomorphism Independent of the Theory of Mind
4. Anthropomorphizing Factors: Robotic Factors Are Not Enough
4.1. Robotic Factors: The Design of the Robot
4.1.1. The Robot’s Appearance Has a Strong Impact on Anthropomorphism
4.1.2. A Human-like Voice Helps, but It Is Not Enough
4.1.3. Behavior Is a Crucial Factor
4.1.4. The Quality of Movements Can Reinforce Anthropomorphism
4.2. Situational Factors: The Situation Itself Can Change the Level of Anthropomorphism
4.2.1. Anthropomorphic Framing Increases Robot Acceptance and Anthropomorphism
4.2.2. Giving a Robot the Role of a Companion Increases Acceptance
4.2.3. The Frequency of the Interaction Decreases Anxiety and Anthropomorphism
4.2.4. A Robot Perceived as Autonomous Is Anthropomorphized More
4.3. Human Factors: Anthropomorphism Also Depends on the Users Themselves, Not Just the Robots
4.3.1. The Older We Get, the Less We Anthropomorphize
4.3.2. A Same-Gender Robot Promotes Acceptance in Children, but Not Anthropomorphism
4.3.3. Personality Traits Impact Anthropomorphism
4.3.4. Cultural Differences Regarding Anthropomorphism
4.3.5. But There Is More
5. Limits
5.1. Intrinsic Limit for Robotic Factors: The Uncanny Valley
5.2. The Measure of Anthropomorphism and Its Limits
5.2.1. Questionnaires and Implicit Measures
5.2.2. The Pragmatic Limits of Anthropomorphic Measures
5.3. General Methodological Limitations
Is Anthropomorphizing a Robot Even a Good Thing?
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
SEEK: sociality, effectance, and elicited agent knowledge
ToM: theory of mind
ASD: autism spectrum disorders
References
- Jamet, F.; Masson, O.; Jacquet, B.; Stilgenbauer, J.L.; Baratgin, J. Learning by teaching with humanoid robot: A new powerful experimental tool to improve children’s learning ability. J. Robot. 2018, 2018, 4578762. [Google Scholar] [CrossRef]
- Dubois-Sage, M.; Jacquet, B.; Jamet, F.; Baratgin, J. The mentor-child paradigm for individuals with autism spectrum disorders. In Proceedings of the Workshop Social Robots Personalisation at the Crossroads between Engineering and Humanities (Concatenate) at the 18th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Stockholm, Sweden, 13–16 March 2023. [Google Scholar]
- Baratgin, J.; Dubois-Sage, M.; Jacquet, B.; Stilgenbauer, J.L.; Jamet, F. Pragmatics in the false-belief task: Let the robot ask the question! Front. Psychol. 2020, 11, 593807. [Google Scholar] [CrossRef] [PubMed]
- Duffy, B. Anthropomorphism and the social robot. Robot. Auton. Syst. 2003, 42, 177–190. [Google Scholar] [CrossRef]
- Epley, N.; Waytz, A.; Cacioppo, J.T. On seeing human: A three-factor theory of anthropomorphism. Psychol. Rev. 2007, 114, 864–886. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Bartneck, C.; Kulić, D.; Croft, E.; Zoghbi, S. Measurement Instruments for the Anthropomorphism, Animacy, Likeability, Perceived Intelligence, and Perceived Safety of Robots. Int. J. Soc. Robot. 2009, 1, 71–81. [Google Scholar] [CrossRef] [Green Version]
- Reeves, B.; Nass, C.I. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places; Center for the Study of Language and Information, Ed.; Cambridge University Press: New York, NY, USA, 1996. [Google Scholar]
- Cullen, H.; Kanai, R.; Bahrami, B.; Rees, G. Individual differences in anthropomorphic attributions and human brain structure. Soc. Cogn. Affect. Neurosci. 2014, 9, 1276–1280. [Google Scholar] [CrossRef] [Green Version]
- Gray, H.M.; Gray, K.; Wegner, D.M. Dimensions of mind perception. Science 2007, 315, 619. [Google Scholar] [CrossRef] [Green Version]
- Meltzoff, A.N.; Brooks, R.; Shon, A.P.; Rao, R.P.N. “Social” robots are psychological agents for infants: A test of gaze following. Neural Netw. Off. J. Int. Neural Netw. Soc. 2010, 23, 966–972. [Google Scholar] [CrossRef]
- Urgen, B.A.; Plank, M.; Ishiguro, H.; Poizner, H.; Saygin, A.P. EEG theta and Mu oscillations during perception of human and robot actions. Front. Neurorobotics 2013, 7, 19. [Google Scholar] [CrossRef] [Green Version]
- De Graaf, M.M.; Malle, B.F. People’s explanations of robot behavior subtly reveal mental state inferences. In Proceedings of the 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Daegu, Republic of Korea, 11–14 March 2019; pp. 239–248. [Google Scholar] [CrossRef]
- Fussell, S.R.; Kiesler, S.; Setlock, L.D.; Yew, V. How people anthropomorphize robots. In Proceedings of the 3rd ACM/IEEE International Conference on Human Robot Interaction, Amsterdam, The Netherlands, 12–15 March 2008; Association for Computing Machinery: New York, NY, USA, 2008. HRI ’08. pp. 145–152. [Google Scholar] [CrossRef]
- Ranjbartabar, H.; Richards, D. Should we use human-human factors for validating human-agent relationships? A look at rapport. In Proceedings of the Workshop on Methodology and the Evaluation of Intelligent Virtual Agents (ME-IVA) at the Intelligent Virtual Agent Conference (IVA2018), Sydney, NSW, Australia, 5–8 November 2018; pp. 1–4. [Google Scholar]
- Thellman, S.; Ziemke, T. The intentional stance toward robots: Conceptual and methodological considerations. In Proceedings of the 41st Annual Conference of the Cognitive Science Society, Montreal, QC, Canada, 24–27 July 2019; Proceedings of the CogSci’19. Goel, A.K., Seifert, C.M., Freksa, C., Eds.; Cognitive Science Society Inc.: Seattle, WA, USA, 2019; pp. 1097–1103. [Google Scholar]
- Thellman, S.; Giagtzidou, A.; Silvervarg, A.; Ziemke, T. An implicit, non-verbal measure of belief attribution to robots. In Proceedings of the Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, UK, 23–26 March 2020; ACM: New York, NY, USA, 2020; pp. 473–475. [Google Scholar] [CrossRef]
- Thellman, S.; de Graaf, M.; Ziemke, T. Mental State Attribution to Robots: A Systematic Review of Conceptions, Methods, and Findings. ACM Trans. Hum.-Robot Interact. 2022, 11, 1–51. [Google Scholar] [CrossRef]
- Guthrie, S.E. Faces in the Clouds: A New Theory of Religion; Oxford University Press: New York, NY, USA, 1995. [Google Scholar]
- Dacey, M. Anthropomorphism as Cognitive Bias. Philos. Sci. 2017, 84, 1152–1164. [Google Scholar] [CrossRef]
- Dacey, M.; Coane, J.H. Implicit measures of anthropomorphism: Affective priming and recognition of apparent animal emotions. Front. Psychol. 2023, 14, 1149444. [Google Scholar] [CrossRef] [PubMed]
- Caporael, L.R. Anthropomorphism and mechanomorphism: Two faces of the human machine. Comput. Hum. Behav. 1986, 2, 215–234. [Google Scholar] [CrossRef]
- Zanatto, D.; Patacchiola, M.; Cangelosi, A.; Goslin, J. Generalisation of anthropomorphic stereotype. Int. J. Soc. Robot. 2020, 12, 163–172. [Google Scholar] [CrossRef]
- Tversky, A.; Kahneman, D. Judgment under uncertainty: Heuristic and biases. Science 1974, 185, 1124–1131. [Google Scholar] [CrossRef]
- Złotowski, J.; Sumioka, H.; Eyssel, F.; Nishio, S.; Bartneck, C.; Ishiguro, H. Model of Dual Anthropomorphism: The Relationship Between the Media Equation Effect and Implicit Anthropomorphism. Int. J. Soc. Robot. 2018, 10, 701–714. [Google Scholar] [CrossRef]
- Airenti, G. The Development of Anthropomorphism in Interaction: Intersubjectivity, Imagination, and Theory of Mind. Front. Psychol. 2018, 9, 2136. [Google Scholar] [CrossRef]
- Zhao, X.; Malle, B.F. Spontaneous perspective taking toward robots: The unique impact of humanlike appearance. Cognition 2022, 224, 105076. [Google Scholar] [CrossRef]
- Atherton, G.; Cross, L. Seeing more than human: Autism and anthropomorphic theory of mind. Front. Psychol. 2018, 9, 528. [Google Scholar] [CrossRef] [Green Version]
- Chaminade, T.; Franklin, D.; Oztop, E.; Cheng, G. Motor interference between Humans and Humanoid Robots: Effect of Biological and Artificial Motion. In Proceedings of the 4th International Conference on Development and Learning, Hong Kong, China, 31 July–3 August 2005; Volume 2005, pp. 96–101. [Google Scholar] [CrossRef]
- Chaminade, T.; Zecca, M.; Blakemore, S.J.; Takanishi, A.; Frith, C.; Micera, S.; Dario, P.; Rizzolatti, G.; Gallese, V.; Umiltà, M. Brain response to a humanoid robot in areas implicated in the perception of human emotional gestures. PLoS ONE 2010, 5, e11577. [Google Scholar] [CrossRef] [Green Version]
- Heyes, C.M.; Frith, C.D. The cultural evolution of mind reading. Science 2014, 344, 1243091. [Google Scholar] [CrossRef]
- Shepard, R.N. Toward a universal law of generalization for psychological science. Science 1987, 237, 1317–1323. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Zhao, X.; Cusimano, C.; Malle, B. Do people spontaneously take a robot’s visual perspective? In Proceedings of the 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, New Zealand, 7–10 March 2016; ACM: New York, NY, USA, 2016; pp. 335–342. [Google Scholar] [CrossRef]
- Paepcke, S.; Takayama, L. Judging a bot by its cover: An experiment on expectation setting for personal robots. In Proceedings of the 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Osaka, Japan, 2–5 March 2010; ACM: New York, NY, USA, 2010; pp. 45–52. [Google Scholar] [CrossRef] [Green Version]
- Banks, J. Theory of mind in social robots: Replication of five established human tests. Int. J. Soc. Robot. 2020, 12, 403–414. [Google Scholar] [CrossRef]
- Heider, F.; Simmel, M. An Experimental Study of Apparent Behavior. Am. J. Psychol. 1944, 57, 243–259. [Google Scholar] [CrossRef]
- Zlotowski, J.; Sumioka, H.; Nishio, S.; Glas, D.; Bartneck, C.; Ishiguro, H. Persistence of the uncanny valley: The influence of repeated interactions and a robot’s attitude on its perception. Front. Psychol. 2015, 6, 883. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Flanagan, T.; Wong, G.; Kushnir, T. The minds of machines: Children’s beliefs about the experiences, thoughts, and morals of familiar interactive technologies. Dev. Psychol. 2023, 59, 1017–1031. [Google Scholar] [CrossRef]
- Spatola, N. L’interaction homme-robot, de l’anthropomorphisme à l’humanisation. L’Année Psychol. 2019, 119, 515–563. [Google Scholar] [CrossRef]
- Kim, M.J.; Kohn, S.; Shaw, T. Does Long-Term Exposure to Robots Affect Mind Perception? An Exploratory Study. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2020, 64, 1820–1824. [Google Scholar] [CrossRef]
- Nijssen, S.R.R.; Müller, B.C.N.; Bosse, T.; Paulus, M. You, robot? The role of anthropomorphic emotion attributions in children’s sharing with a robot. Int. J. Child-Comput. Interact. 2021, 30, 332–336. [Google Scholar] [CrossRef]
- Barsante, L.S.; Paixão, K.S.; Laass, K.H.; Cardoso, R.T.N.; Eiras, Ã.E.; Acebal, J.L. A model to predict the population size of the dengue fever vector based on rainfall data. arXiv 2014, arXiv:1409.7942. [Google Scholar]
- Waytz, A.; Gray, K.; Epley, N.; Wegner, D.M. Causes and consequences of mind perception. Trends Cogn. Sci. 2010, 14, 383–388. [Google Scholar] [CrossRef] [PubMed]
- Waytz, A.; Morewedge, C.K.; Epley, N.; Monteleone, G.; Gao, J.H.; Cacioppo, J.T. Making sense by making sentient: Effectance motivation increases anthropomorphism. J. Personal. Soc. Psychol. 2010, 99, 410–435. [Google Scholar] [CrossRef] [PubMed]
- Waytz, A.; Cacioppo, J.; Epley, N. Who sees human? The stability and importance of individual differences in anthropomorphism. Perspect. Psychol. Sci. 2010, 5, 219–232. [Google Scholar] [CrossRef] [PubMed]
- Spatola, N.; Wykowska, A. The personality of anthropomorphism: How the need for cognition and the need for closure define attitudes and anthropomorphic attributions toward robots. Comput. Hum. Behav. 2021, 122, 106841. [Google Scholar] [CrossRef]
- Bartz, J.A.; Tchalova, K.; Fenerci, C. Reminders of Social Connection Can Attenuate Anthropomorphism: A Replication and Extension of Epley, Akalis, Waytz, and Cacioppo (2008). Psychol. Sci. 2016, 27, 1644–1650. [Google Scholar] [CrossRef] [PubMed]
- Lee, K.M.; Jung, Y.; Kim, J.; Kim, S.R. Are physically embodied social agents better than disembodied social agents?: The effects of physical embodiment, tactile interaction, and people’s loneliness in human-robot interaction. Int. J. Hum.-Comput. Stud. 2006, 64, 962–973. [Google Scholar] [CrossRef]
- Jung, Y.; Hahn, S. Social Robots as Companions for Lonely Hearts: The Role of Anthropomorphism and Robot Appearances. arXiv 2023, arXiv:2306.02694. [Google Scholar]
- Wang, W. Smartphones as social actors? Social dispositional factors in assessing anthropomorphism. Comput. Hum. Behav. 2017, 68, 334–344. [Google Scholar] [CrossRef]
- Premack, D.; Woodruff, G. Does the chimpanzee have a theory of mind? Behav. Brain Sci. 1978, 1, 515–526. [Google Scholar] [CrossRef] [Green Version]
- Hortensius, R.; Kent, M.; Darda, K.M.; Jastrzab, L.; Koldewyn, K.; Ramsey, R.; Cross, E.S. Exploring the relationship between anthropomorphism and theory-of-mind in brain and behavior. Hum. Brain Mapp. 2021, 42, 4224–4241. [Google Scholar] [CrossRef]
- Tahiroglu, D.; Taylor, M. Anthropomorphism, social understanding, and imaginary companions. Br. J. Dev. Psychol. 2019, 37, 284–299. [Google Scholar] [CrossRef] [PubMed]
- Marchetti, A.; Manzi, F.; Itakura, S.; Massaro, D. Theory of mind and humanoid robots from a lifespan perspective. Z. Psychol. 2018, 226, 98–109. [Google Scholar] [CrossRef]
- Woo, B.M.; Tan, E.; Hamlin, J.K. Theory of mind in context: Mental-state representations for social evaluation. Behav. Brain Sci. 2021, 44, e176. [Google Scholar] [CrossRef]
- Hortensius, R.; Cross, E.S. From automata to animate beings: The scope and limits of attributing socialness to artificial agents. Ann. N. Y. Acad. Sci. 2018, 1426, 93–110. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Carrington, S.J.; Bailey, A.J. Are there theory of mind regions in the brain? A review of the neuroimaging literature. Hum. Brain Mapp. 2009, 30, 2313–2335. [Google Scholar] [CrossRef]
- Schurz, M.; Radua, J.; Aichhorn, M.; Richlan, F.; Perner, J. Fractionating theory of mind: A meta-analysis of functional brain imaging studies. Neurosci. Biobehav. Rev. 2014, 42, 9–34. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Schurz, M.; Tholen, M.G.; Perner, J.; Mars, R.B.; Sallet, J. Specifying the brain anatomy underlying temporo-parietal junction activations for theory of mind: A review using probabilistic atlases from different imaging modalities. Hum. Brain Mapp. 2017, 38, 4788–4805. [Google Scholar] [CrossRef] [Green Version]
- Spunt, R.P.; Ellsworth, E.; Adolphs, R. The neural basis of understanding the expression of the emotions in man and animals. Soc. Cogn. Affect. Neurosci. 2017, 12, 95–105. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Chaminade, T.; Hodgins, J.; Kawato, M. Anthropomorphism influences perception of computer-animated characters’ actions. Soc. Cogn. Affect. Neurosci. 2007, 2, 206–216. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Wykowska, A.; Chellali, R.; Al-Amin, M.M.; Müller, H. Implications of robot actions for human perception. How do we represent actions of the observed robots? Int. J. Soc. Robot. 2014, 6, 357–366. [Google Scholar] [CrossRef]
- Kühn, S.; Brick, T.R.; Müller, B.C.N.; Gallinat, J. Is This Car Looking at You? How Anthropomorphism Predicts Fusiform Face Area Activation when Seeing Cars. PLoS ONE 2014, 9, e113885. [Google Scholar] [CrossRef] [PubMed]
- Quesque, F.; Rossetti, Y. What do theory-of-mind tasks actually measure? Theory and practice. Perspect. Psychol. Sci. 2020, 15, 384–396. [Google Scholar] [CrossRef] [PubMed]
- Ruijten, P.A.; Haans, A.; Ham, J.; Midden, C.J. Perceived human-likeness of social robots: Testing the Rasch model as a method for measuring anthropomorphism. Int. J. Soc. Robot. 2019, 11, 477–494. [Google Scholar] [CrossRef] [Green Version]
- Nijssen, S.R.R.; Müller, B.C.N.; Baaren, R.B.v.; Paulus, M. Saving the robot or the human? Robots who feel deserve moral care. Soc. Cogn. 2019, 37, 41–56. [Google Scholar] [CrossRef]
- Mubin, O.; Stevens, C.; Shahid, S.; Mahmud, A.; Dong, J.J. A review of the applicability of robots in education. Technol. Educ. Learn. 2013, 1, 13. [Google Scholar] [CrossRef] [Green Version]
- Barco, A.; de Jong, C.; Peter, J.; Kühne, R.; van Straten, C.L. Robot Morphology and Children’s Perception of Social Robots: An Exploratory Study. In Proceedings of the Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, UK, 23–26 March 2020; Association for Computing Machinery: New York, NY, USA, 2020. HRI ’20. pp. 125–127. [Google Scholar] [CrossRef] [Green Version]
- Broadbent, E.; Kumar, V.; Li, X.; Sollers, J.; Stafford, R.; Macdonald, B.; Wegner, D. Robots with display screens: A Robot with a more humanlike face display is perceived to have more mind and a better personality. PLoS ONE 2013, 8, e72589. [Google Scholar] [CrossRef] [PubMed]
- Burdett, E.R.R.; Ikari, S.; Nakawake, Y. British children’s and adults’ perceptions of robots. Hum. Behav. Emerg. Technol. 2022, 2022, 3813820. [Google Scholar] [CrossRef]
- Carpinella, C.M.; Wyman, A.B.; Perez, M.A.; Stroessner, S.J. The Robotic Social Attributes Scale (RoSAS): Development and Validation. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, Vienna, Austria, 6–9 March 2017; Association for Computing Machinery: New York, NY, USA, 2017. HRI ’17. pp. 254–262. [Google Scholar] [CrossRef]
- Disalvo, C.; Gemperle, F.; Forlizzi, J.; Kiesler, S. All robots are not created equal: The design and perception of humanoid robot heads. In Proceedings of the 4th Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques, London, UK, 25–28 June 2002; Proceedings of the DIS’02. ACM: New York, NY, USA, 2002; pp. 321–326. [Google Scholar] [CrossRef]
- Goldman, E.J.; Baumann, A.E.; Poulin-Dubois, D. Preschoolers’ anthropomorphizing of robots: Do human-like properties matter? Front. Psychol. 2023, 13, 1102370. [Google Scholar] [CrossRef]
- Haring, K.S.; Silvera-Tawil, D.; Takahashi, T.; Watanabe, K.; Velonaki, M. How people perceive different robot types: A direct comparison of an android, humanoid, and non-biomimetic robot. In Proceedings of the 2016 8th International Conference on Knowledge and Smart Technology (KST), Chiangmai, Thailand, 3–6 February 2016; pp. 265–270. [Google Scholar] [CrossRef]
- Kiesler, S.; Powers, A.; Fussell, S.R.; Torrey, C. Anthropomorphic interactions with a robot and robot-like agent. Soc. Cogn. 2008, 26, 169–181. [Google Scholar] [CrossRef]
- Krach, S.; Hegel, F.; Wrede, B.; Sagerer, G.; Binkofski, F.; Kircher, T. Can machines think? Interaction and perspective taking with robots investigated via fMRI. PLoS ONE 2008, 3, e2597. [Google Scholar] [CrossRef]
- Malle, B.F.; Scheutz, M.; Forlizzi, J.; Voiklis, J. Which robot am I thinking about? The impact of action and appearance on people’s evaluations of a moral robot. In Proceedings of the 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, New Zealand, 7–10 March 2016; pp. 125–132. [Google Scholar] [CrossRef]
- Manzi, F.; Peretti, G.; Di Dio, C.; Cangelosi, A.; Itakura, S.; Kanda, T.; Ishiguro, H.; Massaro, D.; Marchetti, A. A robot is not worth another: Exploring children’s mental state attribution to different humanoid robots. Front. Psychol. 2020, 11, 2011. [Google Scholar] [CrossRef]
- Manzi, F.; Massaro, D.; Di Lernia, D.; Maggioni, M.A.; Riva, G.; Marchetti, A. Robots Are Not All the Same: Young Adults’ Expectations, Attitudes, and Mental Attribution to Two Humanoid Social Robots. Cyberpsychol. Behav. Soc. Netw. 2021, 24, 307–314. [Google Scholar] [CrossRef] [PubMed]
- Onnasch, L.; Hildebrandt, C.L. Impact of anthropomorphic robot design on trust and attention in industrial human-robot interaction. ACM Trans. Hum.-Robot Interact. 2021, 11, 1–24. [Google Scholar] [CrossRef]
- Powers, A.; Kiesler, S. The advisor robot: Tracing people’s mental model from a robot’s physical attributes. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, Salt Lake City, UT, USA, 2–3 March 2006; Proceedings of the HRI ’06. ACM: New York, NY, USA, 2006; pp. 218–225. [Google Scholar] [CrossRef]
- Riek, L.D.; Rabinowitch, T.C.; Chakrabarti, B.; Robinson, P. How anthropomorphism affects empathy toward robots. In Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction—HRI’09, La Jolla, CA, USA, 9–13 March 2009; ACM Press: La Jolla, CA, USA, 2009; pp. 245–246. [Google Scholar] [CrossRef] [Green Version]
- Sacino, A.; Cocchella, F.; De Vita, G.; Bracco, F.; Rea, F.; Sciutti, A.; Andrighetto, L. Human- or object-like? Cognitive anthropomorphism of humanoid robots. PLoS ONE 2022, 17, e0270787. [Google Scholar] [CrossRef]
- Sommer, K.; Nielsen, M.; Draheim, M.; Redshaw, J.; Vanman, E.; Wilks, M. Children’s perceptions of the moral worth of live agents, robots, and inanimate objects. J. Exp. Child Psychol. 2019, 187, 104656. [Google Scholar] [CrossRef] [PubMed]
- Tung, F.W. Child perception of humanoid robot appearance and behavior. Int. J. Hum.-Comput. Interact. 2016, 32, 493–502. [Google Scholar] [CrossRef]
- Zanatto, D.; Patacchiola, M.; Goslin, J.; Cangelosi, A. Priming anthropomorphism: Can the credibility of humanlike robots be transferred to non-humanlike robots? In Proceedings of the 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, New Zealand, 7–10 March 2016; pp. 543–544. [Google Scholar] [CrossRef]
- Baxter, P.; Ashurst, E.; Read, R.; Kennedy, J.; Belpaeme, T. Robot education peers in a situated primary school study: Personalisation promotes child learning. PLoS ONE 2017, 12, e0178126. [Google Scholar] [CrossRef] [Green Version]
- Boladeras, M.; Nuño, N.; Saez-Pons, J.; Pardo, D.; Angulo, C. Building up child-robot relationship for therapeutic purposes: From initial attraction towards long-term social engagement. In Proceedings of the 2011 IEEE International Conference on Automatic Face & Gesture Recognition (FG), Santa Barbara, CA, USA, 21–23 March 2011; pp. 927–932. [Google Scholar] [CrossRef]
- Breazeal, C.; Harris, P.L.; DeSteno, D.; Kory Westlund, J.M.; Dickens, L.; Jeong, S. Young children treat robots as informants. Top. Cogn. Sci. 2016, 8, 481–491. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Henkemans, O.A.B.; Bierman, B.P.; Janssen, J.; Looije, R.; Neerincx, M.A.; van Dooren, M.M.; de Vries, J.L.; van der Burg, G.J.; Huisman, S.D. Design and evaluation of a personal robot playing a self-management education game with children with diabetes type 1. Int. J. Hum.-Comput. Stud. 2017, 106, 63–76. [Google Scholar] [CrossRef] [Green Version]
- Horstmann, A.C.; Krämer, N.C. Expectations vs. actual behavior of a social robot: An experimental investigation of the effects of a social robot’s interaction skill level and its expected future role on people’s evaluations. PLoS ONE 2020, 15, e0238133. [Google Scholar] [CrossRef]
- Huang, C.M.; Thomaz, A.L. Joint attention in human-robot interaction. In Proceedings of the 2010 AAAI Fall Symposium Series, Arlington, VA, USA, 11–13 November 2010. [Google Scholar]
- Kanda, T.; Shimada, M.; Koizumi, S. Children learning with a social robot. In Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction, Boston, MA, USA, 5–8 March 2012; Proceedings of the HRI ’12. ACM: New York, NY, USA, 2012; pp. 351–358. [Google Scholar] [CrossRef]
- Kruijff-Korbayová, I.; Oleari, E.; Bagherzadhalimi, A.; Sacchitelli, F.; Kiefer, B.; Racioppa, S.; Pozzi, C.; Sanna, A. Young users’ perception of a social robot displaying familiarity and eliciting disclosure. In Proceedings of the Social Robotics; Tapus, A., André, E., Martin, J.C., Ferland, F., Ammi, M., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 380–389. [Google Scholar] [CrossRef]
- Kumar, S.; Itzhak, E.; Edan, Y.; Nimrod, G.; Sarne-Fleischmann, V.; Tractinsky, N. Politeness in Human-Robot Interaction: A Multi-Experiment Study with Non-Humanoid Robots. Int. J. Soc. Robot. 2022, 14, 1805–1820. [Google Scholar] [CrossRef]
- Li, M.; Guo, F.; Wang, X.; Chen, J.; Ham, J. Effects of robot gaze and voice human-likeness on users’ subjective perception, visual attention, and cerebral activity in voice conversations. Comput. Hum. Behav. 2023, 141, 107645. [Google Scholar] [CrossRef]
- Looije, R.; Neerincx, M.A.; Hindriks, K.V. Specifying and testing the design rationale of social robots for behavior change in children. Cogn. Syst. Res. 2017, 43, 250–265. [Google Scholar] [CrossRef] [Green Version]
- Manzi, F.; Ishikawa, M.; Di Dio, C.; Itakura, S.; Kanda, T.; Ishiguro, H.; Massaro, D.; Marchetti, A. The understanding of congruent and incongruent referential gaze in 17-month-old infants: An eye-tracking study comparing human and robot. Sci. Rep. 2020, 10, 11918. [Google Scholar] [CrossRef]
- Nitsch, V.; Glassen, T. Investigating the effects of robot behavior and attitude towards technology on social human-robot interactions. In Proceedings of the 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Kobe, Japan, 31 August–4 September 2015; pp. 535–540. [Google Scholar] [CrossRef]
- Obaid, M.; Sandoval, E.; Złotowski, J.; Moltchanova, E.; Basedow, C.; Bartneck, C. Stop! That is close enough. How body postures influence human-robot proximity. In Proceedings of the 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), New York, NY, USA, 26–31 August 2016; pp. 354–361. [Google Scholar] [CrossRef]
- Okumura, Y.; Hattori, T.; Fujita, S.; Kobayashi, T. A robot is watching me!: Five-year-old children care about their reputation after interaction with a social robot. Child Dev. 2023, 94, 865–873. [Google Scholar] [CrossRef]
- Rossignoli, D.; Manzi, F.; Gaggioli, A.; Marchetti, A.; Massaro, D.; Riva, G.; Maggioni, M. Attribution of mental state in strategic human-robot interactions. Res. Sq. 2022. [Google Scholar] [CrossRef]
- Tozadore, D.C.; Pinto, A.H.; Romero, R.A. Variation in a Humanoid Robot Behavior to Analyse Interaction Quality in Pedagogical Sessions with Children. In Proceedings of the 2016 XIII Latin American Robotics Symposium and IV Brazilian Robotics Symposium (LARS/SBR), Recife, Brazil, 8–12 October 2016; pp. 133–138. [Google Scholar] [CrossRef]
- Wigdor, N.; Greeff, J.; Looije, R.; Neerincx, M. How to improve human-robot interaction with Conversational Fillers. In Proceedings of the 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), New York, NY, USA, 26–31 August 2016; pp. 219–224. [Google Scholar] [CrossRef]
- Simmons, R.; Knight, H. Keep on dancing: Effects of expressive motion mimicry. In Proceedings of the 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Lisbon, Portugal, 28 August–1 September 2017; pp. 720–727. [Google Scholar] [CrossRef]
- Castro-González, À.; Admoni, H.; Scassellati, B. Effects of form and motion on judgments of social robots’ animacy, likability, trustworthiness and unpleasantness. Int. J. Hum.-Comput. Stud. 2016, 90, 27–38. [Google Scholar] [CrossRef]
- Kuz, S.; Mayer, M.P.; Müller, S.; Schlick, C.M. Using Anthropomorphism to Improve the Human-Machine Interaction in Industrial Environments (Part I). In Proceedings of the Digital Human Modeling and Applications in Health, Safety, Ergonomics, and Risk Management. Human Body Modeling and Ergonomics; Duffy, V.G., Ed.; Springer: Berlin/Heidelberg, Germany, 2013; Lecture Notes in Computer Science; pp. 76–85. [Google Scholar] [CrossRef]
- Salem, M.; Eyssel, F.; Rohlfing, K.; Kopp, S.; Joublin, F. To Err is Human-like: Effects of Robot Gesture on Perceived Anthropomorphism and Likability. Int. J. Soc. Robot. 2013, 5, 313–323. [Google Scholar] [CrossRef]
- Tremoulet, P.D.; Feldman, J. Perception of animacy from the motion of a single object. Perception 2000, 29, 943–951. [Google Scholar] [CrossRef]
- Eyssel, F.; Kuchenbrandt, D.; Hegel, F.; de Ruiter, L. Activating elicited agent knowledge: How robot and user features shape the perception of social robots. In Proceedings of the 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, 9–13 September 2012; pp. 851–857. [Google Scholar] [CrossRef]
- Kuriki, S.; Tamura, Y.; Igarashi, M.; Kato, N.; Nakano, T. Similar impressions of humanness for human and artificial singing voices in autism spectrum disorders. Cognition 2016, 153, 1–5. [Google Scholar] [CrossRef]
- Masson, O.; Baratgin, J.; Jamet, F. NAO robot as experimenter: Social cues emitter and neutralizer to bring new results in experimental psychology. In Proceedings of the International Conference on Information and Digital Technologies, IDT 2017, Zilina, Slovakia, 5–7 July 2017; pp. 256–264. [Google Scholar] [CrossRef]
- Niculescu, A.; Dijk, B.; Nijholt, A.; Li, H.; See, S. Making social robots more attractive: The effects of voice pitch, humor and empathy. Int. J. Soc. Robot. 2013, 5, 171–191. [Google Scholar] [CrossRef] [Green Version]
- Tielman, M.; Neerincx, M.; Meyer, J.J.; Looije, R. Adaptive emotional expression in robot-child interaction. In Proceedings of the 2014 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Bielefeld, Germany, 3–6 March 2014; ACM: New York, NY, USA, 2014; pp. 407–414. [Google Scholar] [CrossRef] [Green Version]
- Torre, I.; Goslin, J.; White, L.; Zanatto, D. Trust in artificial voices: A “congruency effect” of first impressions and behavioral experience. In Proceedings of the Technology, Mind, and Society, Washington, DC, USA, 5–7 April 2018; Proceedings of the TechMindSociety’ 18. ACM: New York, NY, USA, 2018; Volume 40, pp. 1–6. [Google Scholar] [CrossRef] [Green Version]
- Arnheim, R. Visual Thinking; University of California Press: Berkeley, CA, USA, 1969. [Google Scholar]
- Carey, S.; Spelke, E. Domain-specific knowledge and conceptual change. In Mapping the Mind: Domain Specificity in Cognition and Culture; Cambridge University Press: New York, NY, USA, 1994; pp. 169–200. [Google Scholar] [CrossRef]
- Yee, N.; Bailenson, J.N.; Rickertsen, K. A meta-analysis of the impact of the inclusion and realism of human-like faces on user experiences in interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, San Jose, CA, USA, 28 April–3 May 2007; Association for Computing Machinery: New York, NY, USA, 2007. CHI ’07. pp. 1–10. [Google Scholar] [CrossRef] [Green Version]
- Yin, R.K. Looking at upside-down faces. J. Exp. Psychol. 1969, 81, 141–145. [Google Scholar] [CrossRef]
- Leder, H.; Bruce, V. When inverted faces are recognized: The Role of configural information in face recognition. Q. J. Exp. Psychol. Sect. 2000, 53, 513–536. [Google Scholar] [CrossRef] [PubMed]
- Huijnen, C.A.G.J.; Lexis, M.A.S.; Jansens, R.; de Witte, L.P. How to implement robots in interventions for children with autism? A co-creation study involving people with autism, parents and professionals. J. Autism Dev. Disord. 2017, 47, 3079–3096. [Google Scholar] [CrossRef] [Green Version]
- Masson, O.; Baratgin, J.; Jamet, F. NAO robot, transmitter of social cues: What impacts? In Proceedings of the Advances in Artificial Intelligence: From Theory to Practice; Benferhat, S., Tabia, K., Ali, M., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 559–568. [Google Scholar] [CrossRef]
- Kahn, P.H.; Kanda, T.; Ishiguro, H.; Freier, N.G.; Severson, R.L.; Gill, B.T.; Ruckert, J.H.; Shen, S. “ROBOVIE, you’ll have to go into the closet now”: Children’s social and moral relationships with a humanoid robot. Dev. Psychol. 2012, 48, 303–314. [Google Scholar] [CrossRef] [Green Version]
- Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319. [Google Scholar] [CrossRef] [Green Version]
- Minato, T.; Sakai, K.; Uchida, T.; Ishiguro, H. A study of interactive robot architecture through the practical implementation of conversational android. Front. Robot. AI 2022, 9, 905030. [Google Scholar] [CrossRef]
- Lacroix, D.; Wullenkord, R.; Eyssel, F. I Designed It, So I Trust It: The Influence of Customization on Psychological Ownership and Trust Toward Robots. In Proceedings of the Social Robotics (ICSR 2022); Cavallo, F., Cabibihan, J.J., Fiorini, L., Sorrentino, A., He, H., Liu, X., Matsumoto, Y., Ge, S.S., Eds.; Springer Nature: Cham, Switzerland, 2023; Lecture Notes in Computer Science; Volume 13818, pp. 601–614. [Google Scholar] [CrossRef]
- Grice, H.P. Logic and conversation. In Speech Acts; Cole, P., Morgan, J.L., Eds.; Academic Press: New York, NY, USA, 1975; Syntax and Semantics; Volume 3, pp. 43–58. [Google Scholar]
- Ducrot, O. Dire et ne Pas Dire: Principes de Sémantique Linguistique, 3rd ed.; Collection Savoir; Hermann: Paris, France, 2008. [Google Scholar]
- Sperber, D.; Wilson, D. Relevance: Communication and Cognition, 2nd ed.; Blackwell Publishers: Oxford, UK; Cambridge, MA, USA, 2001. [Google Scholar]
- Jacquet, B.; Baratgin, J.; Jamet, F. The Gricean Maxims of Quantity and of Relation in the Turing Test. In Proceedings of the 2018 11th International Conference on Human System Interaction (HSI), Gdansk, Poland, 4–6 July 2018; pp. 332–338. [Google Scholar] [CrossRef]
- Jacquet, B.; Masson, O.; Jamet, F.; Baratgin, J. On the Lack of Pragmatic Processing in Artificial Conversational Agents. In Proceedings of the Human Systems Engineering and Design; Ahram, T., Karwowski, W., Taiar, R., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 394–399. [Google Scholar] [CrossRef]
- Jacquet, B.; Baratgin, J.; Jamet, F. Cooperation in Online Conversations: The Response Times as a Window Into the Cognition of Language Processing. Front. Psychol. 2019, 10, 727. [Google Scholar] [CrossRef]
- Jacquet, B.; Hullin, A.; Baratgin, J.; Jamet, F. The Impact of the Gricean Maxims of Quality, Quantity and Manner in Chatbots. In Proceedings of the 2019 International Conference on Information and Digital Technologies (IDT), Zilina, Slovakia, 25–27 June 2019; pp. 180–189. [Google Scholar] [CrossRef]
- Jacquet, B.; Jaraud, C.; Jamet, F.; Guéraud, S.; Baratgin, J. Contextual Information Helps Understand Messages Written with Textisms. Appl. Sci. 2021, 11, 4853. [Google Scholar] [CrossRef]
- Kumazaki, H.; Muramatsu, T.; Yoshikawa, Y.; Matsumoto, Y.; Ishiguro, H.; Kikuchi, M.; Sumiyoshi, T.; Mimura, M. Optimal robot for intervention for individuals with autism spectrum disorders. Psychiatry Clin. Neurosci. 2020, 74, 581–586. [Google Scholar] [CrossRef]
- Bailenson, J.; Swinth, K.; Hoyt, C.; Persky, S.; Dimov, A.; Blascovich, J. The Independent and Interactive Effects of Embodied-Agent Appearance and Behavior on Self-Report, Cognitive, and Behavioral Markers of Copresence in Immersive Virtual Environments. Presence 2005, 14, 379–393. [Google Scholar] [CrossRef]
- Barchard, K.A.; Lapping-Carr, L.; Westfall, R.S.; Fink-Armold, A.; Banisetty, S.B.; Feil-Seifer, D. Measuring the Perceived Social Intelligence of Robots. ACM Trans. Hum.-Robot Interact. 2020, 9, 1–29. [Google Scholar] [CrossRef]
- Darling, K.; Nandy, P.; Breazeal, C. Empathic concern and the effect of stories in human-robot interaction. In Proceedings of the 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Kobe, Japan, 31 August–4 September 2015; pp. 770–775. [Google Scholar] [CrossRef] [Green Version]
- Kory Westlund, J.; Martinez, M.; Archie, M.; Das, M.; Breazeal, C. Effects of framing a robot as a social agent or as a machine on children’s social behavior. In Proceedings of the 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), New York, NY, USA, 26–31 August 2016; pp. 688–693. [Google Scholar] [CrossRef]
- Mara, M.; Appel, M. Science fiction reduces the eeriness of android robots: A field experiment. Comput. Hum. Behav. 2015, 48, 156–162. [Google Scholar] [CrossRef]
- Mou, W.; Ruocco, M.; Zanatto, D.; Cangelosi, A. When would you trust a robot? A study on trust and theory of mind in human-robot interactions. In Proceedings of the 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples, Italy, 31 August–4 September 2020; pp. 956–9437. [Google Scholar] [CrossRef]
- Nijssen, S.R.R.; Heyselaar, E.; Müller, B.C.N.; Bosse, T. Do we take a robot’s needs into account? The effect of humanization on prosocial considerations toward other human beings and robots. Cyberpsychology Behav. Soc. Netw. 2021, 24, 332–336. [Google Scholar] [CrossRef] [PubMed]
- Onnasch, L.; Roesler, E. Anthropomorphizing robots: The effect of framing in human-robot collaboration. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2019, 63, 1311–1315. [Google Scholar] [CrossRef] [Green Version]
- Rosenthal-von der Pütten, A.; Straßmann, C.; Mara, M. A long time ago in a galaxy far, far away…The effects of narration and appearance on the perception of robots. In Proceedings of the 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Lisbon, Portugal, 28 August–1 September 2017; pp. 1169–9437. [Google Scholar] [CrossRef]
- Ruocco, M.; Mou, W.; Cangelosi, A.; Jay, C.; Zanatto, D. Theory of Mind improves human’s trust in an iterative human-robot game. In Proceedings of the 9th International Conference on Human-Agent Interaction, Virtual Event, Japan, 9–11 November 2021; ACM: New York, NY, USA, 2021; pp. 227–234. [Google Scholar] [CrossRef]
- Schömbs, S.; Klein, J.; Roesler, E. Feeling with a robot—The role of anthropomorphism by design and the tendency to anthropomorphize in human-robot interaction. Front. Robot. AI 2023, 10, 1149601. [Google Scholar] [CrossRef] [PubMed]
- Söderlund, M. Service robots with (perceived) theory of mind: An examination of humans’ reactions. J. Retail. Consum. Serv. 2022, 67, 102999. [Google Scholar] [CrossRef]
- Sturgeon, S.; Palmer, A.; Blankenburg, J.; Feil-Seifer, D. Perception of social intelligence in robots performing false-belief tasks. In Proceedings of the 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), New Delhi, India, 14–18 October 2019; pp. 1–7. [Google Scholar] [CrossRef]
- Chernyak, N.; Gary, H.E. Children’s cognitive and behavioral reactions to an autonomous versus controlled social robot dog. Early Educ. Dev. 2016, 27, 1175–1189. [Google Scholar] [CrossRef]
- Haas, M.; Aroyo, A.M.; Barakova, E.; Haselager, W.; Smeekens, I. The effect of a semi-autonomous robot on children. In Proceedings of the 2016 IEEE 8th International Conference on Intelligent Systems (IS), Sofia, Bulgaria, 4–6 September 2016; pp. 376–381. [Google Scholar] [CrossRef]
- Lee, H.; Choi, J.J.; Kwak, S.S. Will you follow the robot’s advice?: The impact of robot types and task types on people’s perception of a robot. In Proceedings of the Second International Conference on Human-Agent Interaction, Tsukuba, Japan, 29–31 October 2014; ACM: New York, NY, USA, 2014; pp. 137–140. [Google Scholar] [CrossRef]
- Tozadore, D.; Pinto, A.; Romero, R.; Trovato, G. Wizard of Oz vs. autonomous: Children’s perception changes according to robot’s operation condition. In Proceedings of the 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Lisbon, Portugal, 28 August–1 September 2017; pp. 664–669. [Google Scholar] [CrossRef]
- Van Straten, C.L.; Peter, J.; Kühne, R.; Barco, A. The wizard and I: How transparent teleoperation and self-description (do not) affect children’s robot perceptions and child-robot relationship formation. AI Soc. 2022, 37, 383–399. [Google Scholar] [CrossRef]
- Bartneck, C.; Suzuki, T.; Kanda, T.; Nomura, T. The influence of people’s culture and prior experiences with Aibo on their attitude towards robots. AI Soc. 2007, 21, 217–230. [Google Scholar] [CrossRef]
- De Graaf, M.M.A.; Ben Allouch, S.; van Dijk, J.A.G.M. Long-term evaluation of a social robot in real homes. Interact. Stud. Soc. Behav. Commun. Biol. Artif. Syst. 2016, 17, 461–490. [Google Scholar] [CrossRef] [Green Version]
- De Jong, C.; Peter, J.; Kühne, R.; Barco, A. Children’s acceptance of social robots: A narrative review of the research 2000–2017. Interact. Stud. Soc. Behav. Commun. Biol. Artif. Syst. 2019, 20, 393–425. [Google Scholar] [CrossRef]
- Nishio, S.; Ogawa, K.; Kanakogi, Y.; Itakura, S.; Ishiguro, H. Do robot appearance and speech affect people’s attitude? Evaluation through the Ultimatum Game. In Proceedings of the 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, 9–13 September 2012; pp. 809–814. [Google Scholar] [CrossRef]
- Ribi, F.; Yokoyama, A.; Turner, D. Comparison of children’s behavior toward Sony’s robotic dog AIBO and a real dog: A pilot study. Anthrozoos Multidiscip. J. Interact. People Anim. 2008, 21, 245–256. [Google Scholar] [CrossRef]
- Sinnema, L.; Alimardani, M. The attitude of elderly and young adults towards a humanoid robot as a facilitator for social interaction. In Social Robotics; Springer International Publishing: Cham, Switzerland, 2019; pp. 24–33. [Google Scholar] [CrossRef]
- Tanaka, F.; Cicourel, A.; Movellan, J.R. Socialization between toddlers and robots at an early childhood education center. Proc. Natl. Acad. Sci. USA 2007, 104, 17954–17958. [Google Scholar] [CrossRef]
- Al-Taee, M.A.; Kapoor, R.; Garrett, C.; Choudhary, P. Acceptability of Robot Assistant in Management of Type 1 Diabetes in Children. Diabetes Technol. Ther. 2016, 18, 551–554. [Google Scholar] [CrossRef]
- Banthia, V.; Maddahi, Y.; May, M.; Blakley, D.; Chang, Z.; Gbur, A.; Tu, C.; Sepehri, N. Development of a graphical user interface for a socially interactive robot: A case study evaluation. In Proceedings of the 2016 IEEE 7th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), Vancouver, BC, Canada, 13–15 October 2016; pp. 1–8. [Google Scholar] [CrossRef]
- Ray, C.; Mondada, F.; Siegwart, R. What do people expect from robots? In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, 22–26 September 2008; pp. 3816–3821. [Google Scholar] [CrossRef] [Green Version]
- Wiese, E.; Weis, P.P.; Bigman, Y.; Kapsaskis, K.; Gray, K. It’s a Match: Task Assignment in Human–Robot Collaboration Depends on Mind Perception. Int. J. Soc. Robot. 2022, 14, 141–148. [Google Scholar] [CrossRef]
- Baratgin, J.; Jamet, F. Le paradigme de “l’enfant mentor d’un robot ignorant et naïf” comme révélateur de compétences cognitives et sociales précoces chez le jeune enfant. In Proceedings of the WACAI 2021; Centre National de la Recherche Scientifique [CNRS]: Saint Pierre d’Oleron, France, 2021. [Google Scholar]
- Baratgin, J.; Jacquet, B.; Dubois-Sage, M.; Jamet, F. “Mentor-child and naive-pupil-robot” paradigm to study children’s cognitive and social development. In Proceedings of the Workshop: Interdisciplinary Research Methods for Child-Robot Relationship Formation, HRI-2021, Boulder, CO, USA, 8–11 March 2021. [Google Scholar]
- Masson, O.; Baratgin, J.; Jamet, F. NAO robot and the “endowment effect”. In Proceedings of the 2015 IEEE International Workshop on Advanced Robotics and its Social Impacts (ARSO), Lyon, France, 30 June–2 July 2015; pp. 1–6. [Google Scholar] [CrossRef]
- Masson, O.; Baratgin, J.; Jamet, F.; Ruggieri, F.; Filatova, D. Use a robot to serve experimental psychology: Some examples of methods with children and adults. In Proceedings of the International Conference on Information and Digital Technologies (IDT-2016), Rzeszow, Poland, 5–7 July 2016; pp. 190–197. [Google Scholar] [CrossRef]
- Graaf, M.M.A.d.; Allouch, S.B. Exploring influencing variables for the acceptance of social robots. Robot. Auton. Syst. 2013, 61, 1476–1486. [Google Scholar] [CrossRef]
- Baxter, P.; De Jong, C.; Aarts, R.; de Haas, M.; Vogt, P. The Effect of Age on Engagement in Preschoolers’ Child-Robot Interactions. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, Vienna, Austria, 6–9 March 2017; Association for Computing Machinery: New York, NY, USA, 2017. HRI ’17. pp. 81–82. [Google Scholar] [CrossRef] [Green Version]
- Beran, T.; Ramirez-Serrano, A.; Kuzyk, R.; Fior, M.; Nugent, S. Understanding how children understand robots: Perceived animism in child-robot interaction. Int. J. Hum.-Comput. Stud. 2011, 69, 539–550. [Google Scholar] [CrossRef]
- Di Dio, C.; Manzi, F.; Peretti, G.; Cangelosi, A.; Harris, P.L.; Massaro, D.; Marchetti, A. Shall I trust you? From child-robot interaction to trusting relationships. Front. Psychol. 2020, 11, 469. [Google Scholar] [CrossRef] [Green Version]
- Flanagan, T.; Rottman, J.; Howard, L.H. Constrained Choice: Children’s and Adults’ Attribution of Choice to a Humanoid Robot. Cogn. Sci. 2021, 45, e13043. [Google Scholar] [CrossRef]
- Leite, I.; Lehman, J. The Robot Who Knew Too Much: Toward Understanding the Privacy/Personalization Trade-Off in Child-Robot Conversation. In Proceedings of the The 15th International Conference on Interaction Design and Children, Manchester, UK, 21–24 June 2016; Association for Computing Machinery: New York, NY, USA, 2016. Proceedings of the IDC’ 16. pp. 379–387. [Google Scholar] [CrossRef]
- Martin, D.U.; MacIntyre, M.I.; Perry, C.; Clift, G.; Pedell, S.; Kaufman, J. Young children’s indiscriminate helping behavior toward a humanoid robot. Front. Psychol. 2020, 11, 239. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Martin, D.U.; Perry, C.; MacIntyre, M.I.; Varcoe, L.; Pedell, S.; Kaufman, J. Investigating the nature of children’s altruism using a social humanoid robot. Comput. Hum. Behav. 2020, 104, 106149. [Google Scholar] [CrossRef]
- Okanda, M.; Taniguchi, K.; Wang, Y.; Itakura, S. Preschoolers’ and adults’ animism tendencies toward a humanoid robot. Comput. Hum. Behav. 2021, 118, 106688. [Google Scholar] [CrossRef]
- Pulido, J.C.; González, J.C.; Suárez-Mejías, C.; Bandera, A.; Bustos, P.; Fernández, F. Evaluating the child–robot interaction of the NAOTherapist platform in pediatric rehabilitation. Int. J. Soc. Robot. 2017, 9, 343–358. [Google Scholar] [CrossRef] [Green Version]
- Serholt, S.; Basedow, C.; Barendregt, W.; Obaid, M. Comparing a humanoid tutor to a human tutor delivering an instructional task to children. In Proceedings of the 2014 IEEE-RAS International Conference on Humanoid Robots, Madrid, Spain, 18–20 November 2014; pp. 1134–1141. [Google Scholar] [CrossRef] [Green Version]
- Tozadore, D.C.; Pinto, A.M.H.; Ranieri, C.; Batista, M.R.; Romero, R. Tablets and humanoid robots as engaging platforms for teaching languages. In Proceedings of the 2017 Latin American Robotics Symposium (LARS) and 2017 Brazilian Symposium on Robotics (SBR), Curitiba, Brazil, 8–11 November 2017; pp. 1–6. [Google Scholar] [CrossRef]
- Zhang, Y.; Song, W.; Tan, Z.; Zhu, H.; Wang, Y.; Lam, C.M.; Weng, Y.; Hoi, S.P.; Lu, H.; Man Chan, B.S.; et al. Could social robots facilitate children with autism spectrum disorders in learning distrust and deception? Comput. Hum. Behav. 2019, 98, 140–149. [Google Scholar] [CrossRef]
- Choi, J.; Jongyun, L.; Han, J. Comparison of cultural acceptability for educational robots between Europe and Korea. J. Inf. Process. Syst. 2008, 4, 97–102. [Google Scholar] [CrossRef] [Green Version]
- Dang, J.; Liu, L. Do lonely people seek robot companionship? A comparative examination of the Loneliness—Robot anthropomorphism link in the United States and China. Comput. Hum. Behav. 2023, 141, 107637. [Google Scholar] [CrossRef]
- Eyssel, F.; Kuchenbrandt, D. Social categorization of social robots: Anthropomorphism as a function of robot group membership: Social categorization and social robots. Br. J. Soc. Psychol. 2012, 51, 724–731. [Google Scholar] [CrossRef]
- Haring, K.; Silvera-Tawil, D.; Watanabe, K.; Velonaki, M. The influence of robot appearance and interactive ability in HRI: A cross-cultural study. Proc. Soc. Robot. 2016, 9979, 392–401. [Google Scholar] [CrossRef]
- Li, D.; Rau, P.L.; Li, Y. A cross-cultural study: Effect of robot appearance and task. Int. J. Soc. Robot. 2010, 2, 175–186. [Google Scholar] [CrossRef]
- Abel, M.; Kuz, S.; Patel, H.J.; Petruck, H.; Schlick, C.M.; Pellicano, A.; Binkofski, F.C. Gender Effects in Observation of Robotic and Humanoid Actions. Front. Psychol. 2020, 11, 797. [Google Scholar] [CrossRef] [PubMed]
- Bryant, D.; Borenstein, J.; Howard, A. Why Should We Gender?: The Effect of Robot Gendering and Occupational Stereotypes on Human Trust and Perceived Competency. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, UK, 23–26 March 2020; ACM: New York, NY, USA, 2020; pp. 13–21. [Google Scholar] [CrossRef] [Green Version]
- Kraus, M.; Kraus, J.; Baumann, M.; Minker, W. Effects of gender stereotypes on trust and likability in spoken human-robot interaction. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018); European Language Resources Association (ELRA): Miyazaki, Japan, 2018. [Google Scholar]
- Kuchenbrandt, D.; Häring, M.; Eichberg, J.; Eyssel, F. Keep an eye on the task! How gender typicality of tasks influence human–robot interactions. In Proceedings of the Social Robotics; Ge, S.S., Khatib, O., Cabibihan, J.J., Simmons, R., Williams, M.A., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; Lecture Notes in Computer Science; pp. 448–457. [Google Scholar] [CrossRef]
- Lücking, P.; Rohlfing, K.; Wrede, B.; Schilling, M. Preschoolers’ engagement in social interaction with an autonomous robotic system. In Proceedings of the 2016 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob), Cergy-Pontoise, France, 19–22 September 2016; pp. 210–216. [Google Scholar] [CrossRef]
- Robben, D.; Fukuda, E.; De Haas, M. The effect of gender on perceived anthropomorphism and intentional acceptance of a storytelling robot. In Proceedings of the Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, Stockholm, Sweden, 13–16 March 2023; Association for Computing Machinery: New York, NY, USA, 2023. HRI’23. pp. 495–499. [Google Scholar] [CrossRef]
- Sandygulova, A.; O’Hare, G.M. Investigating the impact of gender segregation within observational pretend play interaction. In Proceedings of the 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, New Zealand, 7–10 March 2016; pp. 399–406. [Google Scholar] [CrossRef]
- Sandygulova, A.; O’Hare, G.M.P. Age- and Gender-Based Differences in Children’s Interactions with a Gender-Matching Robot. Int. J. Soc. Robot. 2018, 10, 687–700. [Google Scholar] [CrossRef]
- Schermerhorn, P.; Scheutz, M.; Crowell, C.R. Robot social presence and gender: Do females view robots differently than males? In Proceedings of the 3rd ACM/IEEE International Conference on Human Robot Interaction, Amsterdam, The Netherlands, 12–15 March 2008; Association for Computing Machinery: New York, NY, USA, 2008. HRI’08. pp. 263–270. [Google Scholar] [CrossRef]
- Siegel, M.; Breazeal, C.; Norton, M.I. Persuasive robotics: The influence of robot gender on human behavior. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 10–15 October 2009; pp. 2563–2568. [Google Scholar] [CrossRef]
- Suzuki, T.; Nomura, T. Gender preferences for robots and gender equality orientation in communication situations. AI Soc. 2022. [Google Scholar] [CrossRef]
- Tung, F.W. Influence of Gender and Age on the Attitudes of Children towards Humanoid Robots. In Proceedings of the Human-Computer Interaction. Users and Applications; Jacko, J.A., Ed.; Springer: Berlin/Heidelberg, Germany, 2011; Lecture Notes in Computer Science; pp. 637–646. [Google Scholar] [CrossRef]
- Kędzierski, J.; Muszyński, R.; Zoll, C.; Oleksy, A.; Frontkiewicz, M. EMYS—Emotive head of a social robot. Int. J. Soc. Robot. 2013, 5, 237–249. [Google Scholar] [CrossRef] [Green Version]
- Bernstein, D.; Crowley, K. Searching for signs of intelligent life: An investigation of young children’s beliefs about robot intelligence. J. Learn. Sci. 2008, 17, 225–247. [Google Scholar] [CrossRef]
- Heerink, M. Exploring the influence of age, gender, education and computer experience on robot acceptance by older adults. In Proceedings of the 2011 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Lausanne, Switzerland, 6–9 March 2011; ACM: New York, NY, USA, 2011; pp. 147–148. [Google Scholar] [CrossRef] [Green Version]
- Nakano, T.; Tanaka, K.; Endo, Y.; Yamane, Y.; Yamamoto, T.; Nakano, Y.; Ohta, H.; Kato, N.; Kitazawa, S. Atypical gaze patterns in children and adults with autism spectrum disorders dissociated from developmental changes in gaze behavior. Proc. Biol. Sci. 2010, 277, 2935–2943. [Google Scholar] [CrossRef] [PubMed]
- Van Straten, C.L.; Peter, J.; Kühne, R. Child-robot relationship formation: A narrative review of empirical research. Int. J. Soc. Robot. 2020, 12, 325–344. [Google Scholar] [CrossRef] [Green Version]
- Benenson, J.F.; Apostoleris, N.H.; Parnass, J. Age and sex differences in dyadic and group interaction. Dev. Psychol. 1997, 33, 538–543. [Google Scholar] [CrossRef]
- Wood, W.; Rhodes, N. Sex Differences in Interaction Style in Task Groups. In Gender, Interaction, and Inequality; Springer: Berlin/Heidelberg, Germany, 1992; pp. 97–121. [Google Scholar] [CrossRef]
- Martinez, M.A.; Osornio, A.; Halim, M.L.D.; Zosuls, K.M. Gender: Awareness, identity, and stereotyping. In Encyclopedia of Infant and Early Childhood Development, 2nd ed.; Benson, J.B., Ed.; Elsevier: Oxford, UK, 2020; pp. 1–12. [Google Scholar] [CrossRef]
- Mehta, C.M.; Strough, J. Sex segregation in friendships and normative contexts across the life span. Dev. Rev. 2009, 29, 201–220. [Google Scholar] [CrossRef]
- Berghe, R.; Haas, M.; Oudgenoeg-Paz, O.; Krahmer, E.; Verhagen, J.; Vogt, P.; Willemsen, B.; Wit, J.; Leseman, P. A toy or a friend? Children’s anthropomorphic beliefs about robots and how these relate to second-language word learning. J. Comput. Assist. Learn. 2021, 37, 396–410. [Google Scholar] [CrossRef]
- Nomura, T.; Suzuki, T.; Kanda, T.; Kato, K. Measurement of negative attitudes toward robots. Interact. Stud. 2006, 7, 437–454. [Google Scholar] [CrossRef]
- Gaertner, S.L.; Dovidio, J.F. Reducing Intergroup Bias: The Common Ingroup Identity Model; Psychology Press: New York, NY, USA, 2000; pp. 13–212. [Google Scholar]
- Levine, M.; Prosser, A.; Evans, D.; Reicher, S. Identity and emergency intervention: How social group membership and inclusiveness of group boundaries shape helping behavior. Personal. Soc. Psychol. Bull. 2005, 31, 443–453. [Google Scholar] [CrossRef] [PubMed]
- Annaz, D.; Campbell, R.; Coleman, M.; Milne, E.; Swettenham, J. Young children with autism spectrum disorder do not preferentially attend to biological motion. J. Autism Dev. Disord. 2012, 42, 401–408. [Google Scholar] [CrossRef] [PubMed]
- Mori, M. The uncanny valley. Energy 1970, 7, 33–35. [Google Scholar]
- Mori, M.; MacDorman, K.F.; Kageki, N. The uncanny valley [from the field]. IEEE Robot. Autom. Mag. 2012, 19, 98–100. [Google Scholar] [CrossRef]
- Gray, K.; Wegner, D.M. Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition 2012, 125, 125–130. [Google Scholar] [CrossRef] [PubMed]
- Kim, B.; Bruce, M.; Brown, L.; de Visser, E.; Phillips, E. A comprehensive approach to validating the uncanny valley using the anthropomorphic RoBOT (ABOT) database. In Proceedings of the 2020 Systems and Information Engineering Design Symposium (SIEDS), Charlottesville, VA, USA, 24 April 2020; pp. 1–6. [Google Scholar] [CrossRef]
- Lee, M.K.; Forlizzi, J.; Rybski, P.; Crabbe, F.; Chung, W.; Finkle, J.; Glaser, E.; Kiesler, S. The snackbot: Documenting the design of a robot for long-term human-robot interaction. In Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction, La Jolla, CA, USA, 9–13 March 2009; ACM: New York, NY, USA, 2009; pp. 7–14. [Google Scholar] [CrossRef]
- Spatola, N.; Wudarczyk, O. Ascribing emotions to robots: Explicit and implicit attribution of emotions and perceived robot anthropomorphism. Comput. Hum. Behav. 2021, 124, 106934. [Google Scholar] [CrossRef]
- Mathur, M.B.; Reichling, D.B. Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley. Cognition 2016, 146, 22–32. [Google Scholar] [CrossRef] [Green Version]
- Mara, M.; Appel, M.; Gnambs, T. Human-like robots and the uncanny valley: A meta-analysis of user responses based on the godspeed scales. Z. Psychol. 2022, 230, 33–46. [Google Scholar] [CrossRef]
- Fong, T.; Nourbakhsh, I.; Dautenhahn, K. A survey of socially interactive robots. Robot. Auton. Syst. 2003, 42, 143–166. [Google Scholar] [CrossRef] [Green Version]
- MacDorman, K.F.; Ishiguro, H. The uncanny advantage of using androids in cognitive and social science research. Interact. Stud. Soc. Behav. Commun. Biol. Artif. Syst. 2006, 7, 297–337. [Google Scholar] [CrossRef]
- Laakasuo, M.; Palomäki, J.; Köbis, N. Moral uncanny valley: A robot’s appearance moderates how its decisions are judged. Int. J. Soc. Robot. 2021, 13, 1679–1688. [Google Scholar] [CrossRef]
- MacDorman, K. Subjective ratings of robot video clips for human likeness, familiarity, and eeriness: An exploration of the uncanny valley. In ICCS/CogSci-2006 Long Symposium: Toward Social Mechanisms of Android Science; Indiana University: Bloomington, IN, USA, 2006. [Google Scholar]
- Woods, S. Exploring the design space of robots: Children’s perspectives. Interact. Comput. 2006, 18, 1390–1418. [Google Scholar] [CrossRef]
- Lewkowicz, D.J.; Ghazanfar, A.A. The development of the uncanny valley in infants. Dev. Psychobiol. 2012, 54, 124–132. [Google Scholar] [CrossRef] [Green Version]
- Brink, K.A.; Gray, K.; Wellman, H.M. Creepiness creeps in: Uncanny valley feelings are acquired in childhood. Child Dev. 2019, 90, 1202–1214. [Google Scholar] [CrossRef]
- Baxter, P.; Kennedy, J.; Senft, E.; Lemaignan, S.; Belpaeme, T. From characterising three years of HRI to methodology and reporting recommendations. In Proceedings of the 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, New Zealand, 7–10 March 2016; pp. 391–398. [Google Scholar] [CrossRef] [Green Version]
- Torta, E.; van Dijk, E.; Ruijten, P.A.M.; Cuijpers, R.H. The Ultimatum Game as Measurement Tool for Anthropomorphism in Human–Robot Interaction. In Social Robotics; Herrmann, G., Pearson, M.J., Lenz, A., Bremner, P., Spiers, A., Leonards, U., Eds.; Springer International Publishing: Cham, Switzerland, 2013; pp. 209–217. [Google Scholar] [CrossRef]
- Amirova, A.; Rakhymbayeva, N.; Yadollahi, E.; Sandygulova, A.; Johal, W. 10 years of human-NAO interaction research: A scoping review. Front. Robot. AI 2021, 8, 744526. [Google Scholar] [CrossRef]
- Sandoval, E.B.; Brandstatter, J.; Yalcin, U.; Bartneck, C. Robot likeability and reciprocity in human robot interaction: Using ultimatum game to determinate reciprocal likeable robot strategies. Int. J. Soc. Robot. 2021, 13, 851–862. [Google Scholar] [CrossRef]
- Mubin, O.; Henderson, J.; Bartneck, C. You just do not understand me! Speech recognition in human robot interaction. In Proceedings of the 23rd IEEE International Symposium on Robot and Human Interactive Communication, Edinburgh, UK, 25–29 August 2014; pp. 637–642. [Google Scholar] [CrossRef] [Green Version]
- Di Dio, C.; Manzi, F.; Itakura, S.; Kanda, T.; Ishiguro, H.; Massaro, D.; Marchetti, A. It does not matter who you are: Fairness in pre-schoolers interacting with human and robotic partner. Int. J. Soc. Robot. 2020, 12, 1045–1059. [Google Scholar] [CrossRef]
- Belpaeme, T.; Baxter, P.; de Greeff, J.; Kennedy, J.; Read, R.; Looije, R.; Neerincx, M.; Baroni, I.; Zelati, M.C. Child-Robot Interaction: Perspectives and Challenges. In Social Robotics; Herrmann, G., Pearson, M.J., Lenz, A., Bremner, P., Spiers, A., Leonards, U., Eds.; Springer International Publishing: Cham, Switzerland, 2013; pp. 452–459. [Google Scholar] [CrossRef]
- Fisher, R. Social desirability bias and the validity of indirect questioning. J. Consum. Res. 1993, 20, 303–315. [Google Scholar] [CrossRef]
- Phillips, E.; Zhao, X.; Ullman, D.; Malle, B.F. What is human-like? Decomposing robots’ human-like appearance using the anthropomorphic roBOT (ABOT) database. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’18), Chicago, IL, USA, 5–8 March 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 105–113. [Google Scholar] [CrossRef]
- Li, J. The benefit of being physically present: A survey of experimental works comparing copresent robots, telepresent robots and virtual agents. Int. J. Hum.-Comput. Stud. 2015, 77, 23–37. [Google Scholar] [CrossRef]
- Leyzberg, D.; Spaulding, S.; Toneva, M.; Scassellati, B. The physical presence of a robot tutor increases cognitive learning gains. In Proceedings of the Annual Meeting of the Cognitive Science Society, Sapporo, Japan, 1–4 August 2012; Volume 34. [Google Scholar]
- Kose-Bagci, H.; Ferrari, E.; Dautenhahn, K.; Syrdal, D.S.; Nehaniv, C.L. Effects of embodiment and gestures on social interaction in drumming games with a humanoid robot. Adv. Robot. 2009, 23, 1951–1996. [Google Scholar] [CrossRef]
- Roesler, E.; Manzey, D.; Onnasch, L. Embodiment Matters in Social HRI Research: Effectiveness of Anthropomorphism on Subjective and Objective Outcomes. ACM Trans. Hum.-Robot Interact. 2022. [Google Scholar] [CrossRef]
- Richards, D.; Vythilingam, R.; Formosa, P. A principlist-based study of the ethical design and acceptability of artificial social agents. Int. J. Hum.-Comput. Stud. 2023, 172, 102980. [Google Scholar] [CrossRef]
- Malle, B.; Fischer, K.; Young, J.; Moon, A.; Collins, E. Trust and the discrepancy between expectations and actual capabilities of social robots. In Human-Robot Interaction: Control, Analysis, and Design; Cambridge Scholars Publishing: Newcastle upon Tyne, UK, 2021; pp. 3–23. [Google Scholar]
- Bickmore, T.W.; Puskar, K.; Schlenk, E.A.; Pfeifer, L.M.; Sereika, S.M. Maintaining reality: Relational agents for antipsychotic medication adherence. Interact. Comput. 2010, 22, 276–288. [Google Scholar] [CrossRef]
- Tahan, K.; Cayrier, A.; Baratgin, J.; N’Kaoua, B. ZORA Robot to Assist a Caregiver in Prospective Memory Tasks. Appl. Neuropsychol. Adult, accepted under minor revision. Available online: https://hidrive.ionos.com/lnk/1SLOFXwX (accessed on 6 July 2023).
- Scassellati, B.; Admoni, H.; Matarić, M. Robots for use in autism research. Annu. Rev. Biomed. Eng. 2012, 14, 275–294. [Google Scholar] [CrossRef] [Green Version]
- Scibilia, A.; Pedrocchi, N.; Fortuna, L. Modeling Nonlinear Dynamics in Human–Machine Interaction. IEEE Access 2023, 11, 58664–58678. [Google Scholar] [CrossRef]
- Roesler, E.; Manzey, D.; Onnasch, L. A meta-analysis on the effectiveness of anthropomorphism in human-robot interaction. Sci. Robot. 2021, 6, eabj5425. [Google Scholar] [CrossRef]
- Broadbent, E. Interactions with robots: The truths we reveal about ourselves. Annu. Rev. Psychol. 2017, 68, 627–652. [Google Scholar] [CrossRef] [Green Version]
Factor | Article | Variable | Effect | Effect p-Value | Robot | Sample Size | Mean Age (Standard Deviation) | Country |
---|---|---|---|---|---|---|---|---|
Appearance | Banks [34] | Mentalizing explanations | human/android > low/mid robots | OZOBOT, COZMO, NAO | 469 | USA | ||
Barco et al. [67] | Anthropomorphism score | NAO > COZMO > PLEO | p < 0.05 | NAO, PLEO, COZMO | 35 | The Netherlands | ||
Broadbent et al. [68] | Preference; mind attribution | face > no face | p < 0.01; p < 0.001 | PEOPLEBOT | 30 | USA | ||
Burdett et al. [69] | Willingness to play with | iconic > humanoid > abstract | NAO, TITAN, MINDAR | 110 | ; ; | UK | ||
Carpinella et al. [70] | Perceived warmth; Competence; Comfort | human-like > machine-like | Team-built robot face | 252 | not specified | not specified | ||
Disalvo et al. [71] | Perception of humanness | many facial features > none | 48 Different robots | 60 | not specified | not specified | ||
Goldman et al. [72] | Biological properties attribution | NAO = DASH | not significant | NAO, DASH | 89 | 42 months (3 y.o.); 65 months (5 y.o.) | USA, Canada | |
Haring et al. [73] | Intelligence | android > humanoid > abstract | GEMINOID-F, ROBI, KEEPON | 335 | (Japan); (Australia) | Japan, Australia | ||
Kiesler et al. [74] | Personality attribution | present > projected | NURSEBOT | 113 | 26 | USA | ||
Krach et al. [75] | Fun; Perceived Intelligence | anthropomorphic robot > functional | BARTHOC JR, LEGO MINDSTORMS | 20 | Germany | |||
Malle et al. [76] | Blame | humanoid = human > mechanical | Mechanical or humanoid | 633 | not specified | |||
Manzi et al. [77] | Mental state attribution | NAO > ROBOVIE | NAO, ROBOVIE | 189 | 5, 7 and 9 | Italy | ||
Manzi et al. [78] | Mental state attribution | PEPPER > NAO | NAO, PEPPER | 174 | (NAO); (PEPPER) | Italy | ||
Nijssen et al. [65] | Sacrifice | humanness + | GEMINOID, KOJIRO | 54 | 19.43(2.69) | The Netherlands | ||
Nijssen et al. [40] | Sharing | iconic = abstract robot | not significant | NAO, LEGO MINDSTORMS | 120 | ; | The Netherlands | |
Onnasch and Hildebrandt [79] | Number of fixations | anthropomorphic > non-anthropomorphic | SAWYER | 40 | Germany | |||
Powers and Kiesler [80] | Cooperation | human-like > machine like | Animated Robot | 98 | not specified | not specified | ||
Riek et al. [81] | Empathy | humanoid > mechanical | ROOMBA, AUR, ANDREW, ALICIA | 120 | UK | |||
Sacino et al. [82] | Inversion effect for robots | high level of humanness > low-level | Multiple robots | 99, 94, 109 | ; ; | Italy | ||
Sommer et al. [83] | Perceived moral worth | NAO = PLEO | not significant | NAO, PLEO | 126 | Australia | ||
Tung [84] | Social and physical attraction | anthropomorphic > non-anthropomorphic | 12 Robots (pictures), 9 Robots (videos) | 267 | Taiwan | |||
Zanatto et al. [85] | Change rate | primed robot > nonprimed | SCITOS G5, iCUB | 15 | not specified | UK | ||
Zanatto et al. [22] | Likability; Trust; Anthropomorphism | NAO > BAXTER | NAO, BAXTER | 30 | UK | |||
Zhao and Malle [26] | Perspective taking | head/face > no head/no face; ERICA > NAO and BAXTER > THYMIO | NAO, BAXTER, ERICA, THYMIO | 1729, 1431 | ; | not specified | ||
Zlotowski et al. [36] | Likability; Eeriness | iconic > android; android > iconic | GEMINOID HI-2, ROBOVIE R2 | 58 | Japan | |||
Behavior | Baxter et al. [86] | Enjoyment | perceived competence + (personalized) | NAO | 59 | not specified | UK | |
Boladeras et al. [87] | Preference | slow > agitated | not specified | PLEO | 4 | not specified | Spain | |
Breazeal et al. [88] | Preference; Gaze | attentive = non attentive; attentive > non attentive | not significant; | DRAGONBOTS | 17 | USA | ||
Henkemans et al. [89] | Perceived fun | personalized > neutral | NAO | 45 | ; | The Netherlands | ||
Horstmann and Krämer [90] | Perceived sociability; Competence | high level of interaction > low level | NAO | 162 | not specified | |||
Huang and Thomaz [91] | Intelligence | joint attention > without joint attention | SIMON | 20 | not specified | USA | ||
Kanda et al. [92] | Appreciation | social behavior > non-social behavior | ROBOVIE, LEGO MINDSTORMS | 31 | not specified | Japan | ||
Kruijff-Korbayová et al. [93] | Perceived friendship | familiar > neutral | NAO | 19 | not specified | Italy | ||
Kumar et al. [94] | Satisfaction and trust | polite > rude | TurtleBot3 Burger, Robotic Arm | 203 | 26 (Young adults) vs. 70 (Seniors) | not specified | ||
Li et al. [95] | Fun; Likability | with eye gaze > without | Alpha2 | 27 | China | |||
Looije et al. [96] | Smiling; Questionnaire | affective > non affective; affective = non affective | NAO | 18 | 9 | The Netherlands | ||
Manzi et al. [97] | Duration of fixation on the face | with eye contact > without | ROBOVIE | 32 | not specified | not specified | ||
Nitsch and Glassen [98] | Interaction score | animated robot > apathetic robot | NAO | 48 | not specified | Germany | ||
Obaid et al. [99] | Proximity | standing robot > sitting | NAO | 22 | New Zealand | |||
Okumura et al. [100] | Perceived intelligence; Emotion attribution | interactive > still robot | SOTA | 36 | 62.08 months (6.42) | Japan | ||
Rossignoli et al. [101] | attribution of mental states | earnest robot > misleading | NAO | 126 | not specified | Italy | ||
Tozadore et al. [102] | Correct answers | high interactivity > low | not specified | NAO | 30 | not specified | Brazil | |
Tung [84] | Social and physical attraction | movement > static | 12 Robots (pictures), 9 Robots (videos) | 311 | Taiwan | |||
Wigdor et al. [103] | Free play selection; Perceived human-likeness | fillers = no fillers; fillers > no fillers | not significant; | NAO | 26 | The Netherlands | ||
Simmons and Knight [104] | Diversity in motions | mimicry > control | KEEPON | 45 | not specified | Portugal | ||
Waytz et al. [43] | Anthropomorphism | unpredictable > predictable | ASIMO | 55 | USA | |||
Zanatto et al. [85] | Change rate | social gaze > non social | SCITOS G5, iCUB | 15 | not specified | UK | ||
Zhao and Malle [26] | Perspective taking | reach > gaze > side-look | NAO, BAXTER, ERICA, THYMIO | 1219 | not specified | |||
Zlotowski et al. [36] | Likability | positive behavior > negative | GEMINOID HI-2, ROBOVIE R2 | 58 | Japan | |||
Movement | Castro-González et al. [105] | Likability | soft movement > mechanical | BAXTER | 42 | not specified | USA | |
Kuz et al. [106] | Movement prediction | human > robotic | Robotic Arm | 24 | Germany | |||
Salem et al. [107] | Mental state attribution; Likability | gesture > no gesture | HONDA | 62 | 30.90(9.82) | Germany | ||
Tremoulet and Feldman [108] | Animacy ratings | aligned > misaligned; fast > slow; large direction change > small | None | 34 | not specified | USA | ||
Voice | Eyssel et al. [109] | Likability | human > robot voice | FLOBI | 58 | Germany | ||
Flanagan et al. [37] | Mental state attribution | NAO = Alexa | NAO, ROOMBA, Alexa | 127 | USA | |||
Kuriki et al. [110] | Perceived humanness; positive feelings | human voice > artificial | 14 | Japan | ||||
Li et al. [95] | Fun; Likability | human-like voice > non human-like | Alpha2 | 27 | China | |||
Masson et al. [111] | Endowment effect | vocal intonation > non-vocal | not specified | NAO | 30 | not specified | France | |
Niculescu et al. [112] | Likability | high pitch > low | OLIVIA, CYNTHIA | 28 | not specified | Singapore | ||
Tielman et al. [113] | Expressions; Valence | affective > non affective | NAO | 18 | The Netherlands | |||
Torre et al. [114] | Investment | synthetic voice > natural (generous condition), natural > synthetic (mean condition) | NAO | 120 | not specified | UK |
Factor | Article | Variable | Effect | Effect p-Value | Robot | Sample Size | Mean Age (Standard Deviation) | Country |
---|---|---|---|---|---|---|---|---|
Anthropomorphic framing | Barchard et al. [136] | Positive feelings | social competence score+ | ROBOVIE, NAO, PR2, DRAGONBOT | 296 | USA | ||
Darling et al. [137] | Reluctance to hit | story > no story | HEXBUG NANO | 101 | USA | |||
Kory Westlund et al. [138] | Eye gaze | friend > machine | TEGA | 22 | USA | |||
Mara and Appel [139] | Perceived human-likeness and attractiveness; Perceived eeriness | narrative > non-narrative; narrative < non-narrative | ; p = 0.001 | TELENOID | 72 | Austria | ||
Mou et al. [140] | Trust | high-level ToM > low-level | PEPPER | 32 | not specified | UK | ||
Nijssen et al. [65] | Sacrifice | anthropomorphic framing < neutral | GEMINOID, KOJIRO | 54 | The Netherlands | |||
Nijssen et al. [40] | Sharing | affective robot > non affective | NAO, LEGO MINDSTORMS | 120 | (4–5 y.o.); (8–9 y.o.) | The Netherlands | ||
Nijssen et al. [141] | Socially mindful choices | anthropomorphic framing = neutral | not significant | KOJIRO | 128 | The Netherlands | ||
Onnasch and Roesler [142] | Anthropomorphism | anthropomorphic framing = neutral | not significant | NAO | 40 | (I); (II) | Germany | |
Rosenthal-von der Pütten et al. [143] | Likability; Anthropomorphism | story > no story | Papero, Icat, GEMINOID, HRP-4c, Justin, Mika | 249 | not specified | |||
Ruocco et al. [144] | Investment | high-level ToM > low-level | PEPPER | 32 | not specified | |||
Schömbs et al. [145] | Likability, Perceived competence | anthropomorphic framing = technical | not significant | PEPPER, PANDA | 180 | Germany | ||
Söderlund et al. [146] | Perceived quality | high ToM > low ToM | HIWONDER | 51 | (I); (II) | Sweden | ||
Sturgeon et al. [147] | Intelligence | ToM > no ToM | NAO | 53 | 20–79 | not specified | ||
Autonomous degree | Chernyak and Gary [148] | Emotional state attribution | autonomous > controlled | AIBO | 80 | (5 y.o.); (7 y.o.) | “Mostly Euro-American” | |
Haas et al. [149] | Likability | remotely controlled = autonomous | not significant | NAO | 20 | The Netherlands | ||
Lee et al. [150] | Social presence; Trust | teleoperated > autonomous; autonomous > teleoperated | RA-I | 30 | not specified | South Korea | ||
Tozadore et al. [151] | Perceived intelligence; Preference | autonomous > teleoperated | NAO | 82 | Brazil | |||
van Straten et al. [152] | Perceived autonomy; Anthropomorphism | covert teleoperation > overt | NAO | 168 | The Netherlands | |||
Frequency of Interaction | Bartneck et al. [153] | Positive attitude | interaction+ | AIBO | 467 | not specified | China, Germany, Japan, Mexico, The Netherlands, UK, USA | |
Baxter et al. [86] | Enjoyment | Interaction 1 = Interaction 3 | not significant | NAO | 59 | not specified | UK | |
de Graaf et al. [154] | Attitude toward robots | Interaction 6 > Interaction 1 | KAROTZ | 102 | The Netherlands | |||
de Jong et al. [155] | Anxiety | pre- > post-interaction | NAO | 52 | (elderly); (students) | The Netherlands | ||
Kim et al. [39] | Perception of spirit | Interaction 1 and 2 > Interaction 3 | 251 different robots | 41 | USA | |||
Nishio et al. [156] | Acceptance rate for android | after interaction > before | ROBOVIE R2 and GEMINOID HI-1 | 21 | Japan | ||
Ribi et al. [157] | Frequency of interaction | Time+ | not specified | AIBO | 14 | not specified | Switzerland | |
Sinnema and Alimardani [158] | Anxiety | pre- > post-interaction | NAO | 52 | (elderly); (students) | The Netherlands | ||
Tanaka et al. [159] | Quality of interaction | Time− | QRIO | not specified | 18–24 months | USA | ||
Zlotowski et al. [36] | Eeriness | Interaction 1 > Interaction 3 | GEMINOID HI-2, ROBOVIE R2 | 58 | Japan | |||
Robot role | Al-Taee et al. [160] | Acceptability level | companion, education teacher > calculator | not specified | NAO | 37 | 6–16 y.o. | UK |
Banthia et al. [161] | Enjoyment | storyteller > interaction partner (3–5 y.o.); storyteller < interaction partner (5–8 y.o.) | not specified | ZENO | not specified | 3–13 y.o. | Canada | |
Burdett et al. [69] | Willingness to be prayed for by robots | young > older children, adults | NAO, TITAN, MINDAR | 110 | (I); (II); (III) | UK | ||
Horstmann and Krämer [90] | Perceived sociability | assistant > competitor | NAO | 162 | not specified | |||
Kory Westlund et al. [138] | Gaze time | friend > machine | TEGA | 110 | USA | |||
Ray et al. [162] | Acceptability for cooking; for cleaning | no > yes; yes > no | not specified | ROBOX, ALICES | 240 | not specified | Switzerland |
Factor | Article | Variable | Effect | Effect p-Value | Robot | Sample Size | Mean Age (Standard Deviation) | Country |
---|---|---|---|---|---|---|---|---|
Age | Al-Taee et al. [160] | Acceptability level | young > old children | NAO | 37 | 6–16 y.o. | UK | |
Banthia et al. [161] | Enjoyment | storyteller > interaction partner (3–5 y.o.); storyteller < interaction partner (5–8 y.o.) | not specified | ZENO | not specified | 3–8 y.o. | Canada | |
Baxter et al. [169] | Gaze time | young > old | NAO | 32 | months | The Netherlands | ||
Beran et al. [170] | Mental state attribution | young > old children | robotic arm | 184 | Canada | |||
Burdett et al. [69] | Helpfulness; Kindness | young > older children, adults; children > adults | NAO, TITAN, MINDAR | 110 | (4–8 y.o.); (9–13 y.o.); (adults) | UK | ||
Di Dio et al. [171] | Trust | human > robot (3 y.o.); robot > human (7 y.o.) | NAO | 94 | not specified | Italy | ||
Flanagan et al. [172] | Free choice attribution | human > robot (adults), robot = human (children) | (adults); not significant (children) | ROBOVIE | 32 (children), 60 (adults) | (children), (adults) | USA | |
Flanagan et al. [37] | Mind attribution | young children > old | NAO, ROOMBA, Alexa | 127 | USA | |||
Goldman et al. [72] | Biological properties attribution | 3 y.o. > 5 y.o. | NAO, DASH | 44 (3 y.o.), 45 (5 y.o.) | 42 months (3 y.o.), 65 months (5 y.o.) | USA and Canada | ||
Kahn et al. [122] | Mental state attribution | 9–12 y.o. > 15 y.o. | ROBOVIE | 90 | not specified | USA | ||
Kumar et al. [94] | Trust | seniors > young adults | TurtleBot3 Burger, robotic arm | 203 | (young); (old) | USA | ||
Leite and Lehman [173] | Affective response | young children = old | not significant | Abstract Robot | 28 | not specified | ||
Manzi et al. [77] | Mental state attribution | 5 y.o. > 7–9 y.o. | NAO, ROBOVIE | 189 | months (5 y.o.); (7 y.o.); (9 y.o.) | Italy | ||
Martin et al. [174] | Helping | high autonomy = low autonomy | NAO | 82 | Australia | |||
Martin et al. [175] | Latency to help | looks at target < looks away | NAO | 40 | Australia | |||
Nijssen et al. [40] | Anthropomorphism | young children > old | NAO and LEGO MINDSTORMS | 120 | (4–5 y.o.); (8–9 y.o.) | The Netherlands | ||
Okanda et al. [176] | Anthropomorphism | 3 y.o. > 5 y.o. and adults | KIROBO | 79 | months (3 y.o.); (5 y.o.); (adults) | Japan | ||
Pulido et al. [177] | Willingness to have this robot at home | of children want it at home | not specified | NAO | 120 | Spain | ||
Serholt et al. [178] | Success rate | robot = human | not significant | NAO | 27 | Sweden | ||
Sinnema and Alimardani [158] | Anxiety post interaction; Usefulness | old > young; old > young | all | NAO | 52 | (young); (old) | The Netherlands | |
Sommer et al. [83] | Moral concern for PLEO | age− | NAO, PLEO | 126 | Australia | |||
Tozadore et al. [179] | Enjoyment; Interest for the other platform | tablet = robot; robot > tablet | p > 0.05; p < 0.01 | NAO | 22 | Brazil | ||
Simmons and Knight [104] | Time of interaction | young = old | KEEPON | 45 | Portugal | |||
Zhang et al. [180] | Mental state attribution | TD > ASD | NAO | 40 | (neurotypical); (autistic) | China | ||
Culture | Bartneck et al. [153] | Positive attitude | USA > Mexico | AIBO | 463 | not specified (adults) | Various | |
Choi et al. [181] | Robot as friend | Koreans > Spanish | 160 | not specified | Korea, Spain | |||
Dang and Liu [182] | Attribution of mental abilities | Chinese: loneliness− | Description of a social robot | 397 | 31.12(8.91) (Americans); 29.95(7.62) (Chinese) | China, USA | ||
Eyssel and Kuchenbrandt [183] | Attribution of mental abilities | same culture > different | FLOBI | 78 | Germany | |||
Haring et al. [184] | Mental state attribution | Japanese > Australian | ROBI, KEEPON | 126 | (Japan); (Australia) | Japan, Australia | ||
Li et al. [185] | Likability, engagement and Satisfaction; Trust | Korean = Chinese > German; Chi > Ko > Ger | p < 0.01; p < 0.05 | LEGO MINDSTORMS NXT | 108 | (China); (Korea); (Germany) | China, Korea, Germany | |
Gender | Abel et al. [186] | Anthropomorphic rating | men > women | all | Gantry robot | 40 | (Male); (Female) | Germany |
Bryant et al. [187] | Perceived competence | female = neutral = male | all | PEPPER | 50 | USA | ||
Carpinella et al. [70] | Perceived warmth and competence; Discomfort | female > male; female < male | p < 0.05; p < 0.001 | Team-built robot face | 252 | Not specified | Not specified | |
Eyssel et al. [109] | Mind attribution among men; among women; Perceived psychological proximity among men | male voice > female, female voice > male (human voice only); male voice > female | p < 0.01, p < 0.05 | FLOBI | 58 | Germany | ||
Kraus et al. [188] | Perceived trustworthy and competence; Likability | men > women; women > men | NAO | 38 | Germany | |||
Kuchenbrandt et al. [189] | Duration of task | female robot > male (male participants), male = female (female participants) | NAO | 73 | Germany | |||
Leite and Lehman [173] | Affective response | girls = boys | Abstract Robot | 28 | - | |||
Lücking et al. [190] | Interaction level | boys > girls | NAO | 12 | 58 months | Germany | ||
Niculescu et al. [112] | Likability | men > women | OLIVIA, CYNTHIA | 28 | Adults | Singapore | ||
Robben et al. [191] | Anthropomorphism; Enjoyment | same gender = different | NAO | 62 | The Netherlands | |||
Sandygulova and O’Hare [192] | Playing time | same gender > different | NAO | 74 | (girls); (boys) | Ireland | ||
Sandygulova and O’Hare [193] | Same gender preference | boys > girls, young > old | NAO | 107 | 5–12 y.o. | Ireland | ||
Schermerhorn et al. [194] | Response bias | alone > with robot (women); alone < robot (men) | Abstract Robot | 47 | not specified | not specified | ||
Siegel et al. [195] | Credibility; Trust; Engagement | opposite sex > same | all | MOBILE DEXTEROUS SOCIAL ROBOT | 134 | USA | ||
Simmons and Knight [104] | Time of interaction | girls > boys | KEEPON | 45 | Portugal | |||
Suzuki and Nomura [196] | Chosen gender | either > male, women | 80% of participants | not specified | 1000 | not specified | Japan | |
Tung [197] | Social and physical attraction to robots | girls > boys | 12 different robots | 267 | 10–15 y.o. | Taiwan | ||
Personality | Bartz et al. [46] | Anthropomorphic attributions | attachment anxiety > non attachment anxiety | Gadgets and pets | 178 | North America | ||
Darling et al. [137] | Reluctance to hit | empathetic > non empathetic | HEXBUG NANO | 101 | USA | |||
Kędzierski et al. [198] | Willingness to interact | openness to new experiences + | EMYS | 45 | Poland | |||
Spatola and Wykowska [45] | Anthropomorphic attribution; Attitudes | need for cognition−, need for prediction+; cognition+, prediction− | HOSPI, PERSONAL ROBOT, ARMAR, NIMBRO, NADINE | 1141 | France | |||
Others | Bartz et al. [46] | Anthropomorphic attributions | loneliness > non loneliness | Gadgets and pets | 178 | North America | ||
Bernstein and Crowley [199] | Attribution of psychological characteristics | inexperienced > experienced | QRIO and Exploration Rover Personnel | 60 | months (4–5 y.o.); months (6–7 y.o.) | USA | ||
Dang and Liu [182] | Attribution of mental abilities | Chinese: loneliness− | Description of a social robot | 397 | (Americans); (Chinese) | China, USA | ||
de Graaf et al. [154] | Evaluation before interaction | rejecters > other groups | KAROTZ | 102 | The Netherlands | |||
Heerink [200] | Perception as social entity | education− | ROBOCARE | 66 | not specified | Switzerland | ||
Kuriki et al. [110] | Perceived humanness and positive feelings | artificial voice = human (ASD group) | 28 | (autistic); (neurotypical) | Japan | |||
Lee et al. [47] | Social attractiveness; Positive evaluation | lonely > non-lonely | p < 0.05; p < 0.01 | AIBO, APRIL | 32 | not specified | USA | |
Nakano et al. [201] | Fixation to eyes and mouth | TD > ASD | 104 | (autistic); (neurotypical) | Japan | |||
Niculescu et al. [112] | Positive feelings | no experience > experience | all | OLIVIA, CYNTHIA | 28 | not specified | Singapore | |
Paepcke and Takayama [33] | Perceived competence after interaction | low expectations > high | PLEO and AIBO | 24 | USA | |||
Zhang et al. [180] | Mental state attribution | TD > ASD | NAO | 40 | (autistic); (neurotypical) | China |
| Factor Categories | Factors | Effect on Acceptance | Effect on Anthropomorphism |
|---|---|---|---|
| Robotic | Human-like appearance | 13+, 0=, 0− | 12+, 3=, 0− |
| | Human-like voice | 5+, 0=, 1− | 2+, 0=, 0− |
| | Human-like behavior | 19+, 4=, 1− | 5+, 0=, 0− |
| | Movements | 3+, 0=, 1− | 2+, 0=, 0− |
| Situational | Anthropomorphic framing | 8+, 1=, 0− | 5+, 2=, 0− |
| | Human-like role | 2+, 1=, 2− | 0+, 0=, 0− |
| | Interaction frequency | 6+, 3=, 1− | 0+, 0=, 1− |
| | Perceived autonomy | 1+, 1=, 1− | 3+, 0=, 0− |
| Human | Age | 4+, 2=, 3− | 0+, 0=, 8− |
| | Gender ★ | | |
| | - Female | 2+, 2=, 2− | 1+, 3=, 0− |
| | - Male | 2+, 2=, 2− | 2+, 2=, 0− |
| | Personality | 2+, 0=, 1− | 3+, 0=, 1− |
| | Culture | 3+, 0=, 3− | 2+, 0=, 2− |
| | Others | | |
| | - Experience with technology, education | 0+, 0=, 1− | 0+, 0=, 2− |
| | - Expectations | 0+, 0=, 2− | 0+, 0=, 0− |
| | - Social isolation | 1+, 0=, 0− | 1+, 0=, 0− |
| | - Developmental type | 3+, 0=, 0− | 1+, 0=, 0− |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).