We Do Not Anthropomorphize a Robot Based Only on Its Cover: Context Matters too!
Round 1
Reviewer 1 Report
The paper was a pleasure to read. It is very well written with a captivating introduction and motivation. The discussion of anthropomorphism is also clear and engaging.
I enjoyed the paper and found it very informative. I was surprised, however, by the structure. I did not see a research question or methodology. There was no discussion of how the articles were identified - search terms, exclusions, etc. - as in a systematic review following the PRISMA method. But I feel the number of articles included and the range of the review are superior to many systematic reviews I have reviewed in recent times that end up with 5-20 articles and lack the comprehensiveness of this study. Specific comments and suggestions follow.
p.2 Concerning strong and weak attribution of mental states, I note similar observations concerning development of rapport or working alliance with virtual agents. For example, see Ranjbartabar H, Richards D, editors. Should we use human-human factors for validating human-agent relationships? A look at rapport. Workshop on Methodology and the Evaluation of Intelligent Virtual Agents (ME-IVA) at the Intelligent Virtual Agent Conference (IVA2018); 2018. That work found that while humans spoke in terms of humanlike qualities and behaviours of the agent (e.g. empathy), when asked to answer questions used for measuring rapport and the therapist-patient relationship, respondents were almost angry and rang up to complain at being asked such questions.
Theories from HCI such as gestalt theory and human information processing model are also likely to provide some explanation. People naturally seek to make sense of their world. They fill in the gaps and draw on their existing understanding/experience to interpret and project meaning onto what they become exposed to.
p. 4 "For this approach, it is the need to interact with and explain one’s environment that prompts individuals to anthropomorphize an object" - this statement harks back to the theories underpinning HCI in general, as mentioned in my previous comment.
p. 4 "(for in stance" --> "(for instance"
The literature on believability (see Emma Norling's definition of believability versus realism) and animation (see Bailenson and other Disney animators) would also be relevant to consider.
These more recent articles build on that work.
Yee N, Bailenson JN, Rickertsen K. A meta-analysis of the impact of the inclusion and realism of human-like faces on user experiences in interfaces. 2007. In: Proceedings of the SIGCHI conference on Human factors in computing systems [Internet]. [1-10].
Bailenson JN, Swinth K, Hoyt C, Persky S, Dimov A, Blascovich J. The independent and interactive effects of embodied-agent appearance and behavior on self-report, cognitive, and behavioral markers of copresence in immersive virtual environments. Presence: Teleoperators and Virtual Environments. 2005;14(4):379-93.
In contrast to ToM, individuals become willing to "suspend disbelief" due to the emotions aroused by interaction (which can be aroused merely by watching, as in a movie) with an inanimate object or animal, which becomes anthropomorphised because it arouses emotions such as fear, love, anger, and joy that we experience in our interactions with humans.
p. 10 "Although many studies focus on the appearance of the robot, other characteristics of the robot influence its perception by individuals" - is this very short paragraph finished? The sentence does not have a full stop. Also, why is "Intrinsic limit" in red?
p. 15 "The more often an individual interacts with a robot (AIBO), the more they express a positive attitude toward robots in general [150] which can be interpreted as a simple exposure effect". I find this interesting because it is the opposite effect to the Technology Novelty Effect, where people score the technology high because it is new and interesting, but this effect wears off. This is one of the concerns for robots also. It is consistent with the finding from [156] and [151].
p. 22 "it could also depends" -- > "it could also depend"
Uncanny valley is discussed as a potential concern/limitation, but I think there are other concerns also, which I believe are more important as they have greater consequences. The point should also be made that even if being perceived as humanlike leads to greater liking, it does not necessarily lead to better outcomes, for example in terms of learning gains, following of advice or health improvement. The paper is lacking an important discussion of ethical use of robots, particularly the ethics of anthropomorphisation and potential misuse of such deception.
Just as much of the work on believability in intelligent virtual agents saw believability as the holy grail, as researchers and practitioners we should ensure that the technology improves human lives. Having someone see the robot as a human, if that is what the researcher/developer is aiming at, should be questioned as a goal. This goal has ethical considerations. I find it very telling that you could not find a study on the roles of robots. As a community we need to ask ourselves why we are building this technology - do we want to replace a human who plays that role - why? Is it to be more available, less judgmental or biased, etc.? It should be to improve learning or health outcomes, or even to teach social skills - to address gaps/limitations which exist - but not to replace humans in this role.
Please raise the issue of ethics and the importance of asking why we want robots in certain roles and whether it is necessary to anthropomorphise. In the design of a virtual agent by Bickmore for adherence to antipsychotic medication, it was imperative that the human did not think they were dealing with a person.
Bickmore T, Puskar K, Schlenk EA, Pfeifer LM, Sereika SM. Maintaining reality: relational agents for antipsychotic medication adherence. Interacting with Computers. 2010;22(4):276-88.
We should be careful in particular with vulnerable populations, including children, and ensure that they do not come to prefer interacting with robots over other people, or fail to learn how to socialise and make friends, as painful as that can be. I am not aware of studies specifically with robots, but this study considers artificial social agents.
Richards, D., Vythilingam, R., & Formosa, P. (2023). A principlist-based study of the ethical design and acceptability of artificial social agents. International Journal of Human-Computer Studies, 172, 102980.
The limitations section is very helpful and the recommendation of standardised measures to allow proper comparisons is important.
The tables are very helpful, including the summary final table that seeks to draw the findings together.
Author Response
Author’s reply to Review Report 1
Dear Reviewer,
We are grateful that you have read and reviewed our article. We have revised the manuscript following your review, and we thank you very much for your support in improving its quality. Text changes have been highlighted in blue. We hope you will find this new version satisfactory.
Reviewer's comment:
The paper was a pleasure to read. It is very well written with a captivating introduction and motivation. The discussion of anthropomorphism is also clear and engaging.
I enjoyed the paper and found it very informative. I was surprised, however, by the structure. I did not see a research question or methodology. There was no discussion of how the articles were identified - search terms, exclusions, etc. - as in a systematic review following the PRISMA method. But I feel the number of articles included and the range of the review are superior to many systematic reviews I have reviewed in recent times that end up with 5-20 articles and lack the comprehensiveness of this study. Specific comments and suggestions follow.
Author's reply:
We thank the reviewer for their comments. Concerning the methodology used for article selection, we have added some clarifications to line 254. However, as this review is not intended to be systematic, we have not followed the PRISMA method.
Reviewer's comment:
p.2 Concerning strong and weak attribution of mental states, I note similar observations concerning development of rapport or working alliance with virtual agents. For example, see Ranjbartabar H, Richards D, editors. Should we use human-human factors for validating human-agent relationships? A look at rapport. Workshop on Methodology and the Evaluation of Intelligent Virtual Agents (ME-IVA) at the Intelligent Virtual Agent Conference (IVA2018); 2018. That work found that while humans spoke in terms of humanlike qualities and behaviours of the agent (e.g. empathy), when asked to answer questions used for measuring rapport and the therapist-patient relationship, respondents were almost angry and rang up to complain at being asked such questions.
Author's reply:
We thank the reviewer for making this useful suggestion. We have added this reference to line 58.
Reviewer's comment:
p.4 "(for in stance" --> "(for instance"
Author's reply:
We thank the reviewer for raising this. We have corrected the typographical error (line 171).
Reviewer's comment:
The literature on believability (see Emma Norling's definition of believability versus realism) and animation (see Bailenson and other Disney animators) would also be relevant to consider.
These more recent articles build on that work.
Yee N, Bailenson JN, Rickertsen K. A meta-analysis of the impact of the inclusion and realism of human-like faces on user experiences in interfaces. 2007. In: Proceedings of the SIGCHI conference on Human factors in computing systems [Internet]. [1-10].
Bailenson JN, Swinth K, Hoyt C, Persky S, Dimov A, Blascovich J. The independent and interactive effects of embodied-agent appearance and behavior on self-report, cognitive, and behavioral markers of copresence in immersive virtual environments. Presence: Teleoperators and Virtual Environments. 2005;14(4):379-93.
Author's reply:
We thank the reviewer for making these points. We have added these references to lines 297 and 834 for Yee et al. (2007) and to line 438 for Bailenson et al. (2005).
Reviewer's comment:
p.10 "Although many studies focus on the appearance of the robot, other characteristics of the robot influence its perception by individuals" - is this very short paragraph finished. The sentence does not have a full stop. Also why is Intrinsic limit in red?
Author's reply:
We thank the reviewer for raising this. We have corrected the mistakes (respectively line 356 and line 764).
Reviewer's comment:
p.22 "it could also depends" -- > "it could also depend"
Author's reply:
We thank the reviewer for raising this. We have corrected the mistake (line 742).
Reviewer's comment:
Uncanny valley is discussed as a potential concern/limitation, but I think there are other concerns also, which I believe are more important as they have greater consequences. The point should also be made that even if being perceived as humanlike leads to greater liking, it does not necessarily lead to better outcomes, for example in terms of learning gains, following of advice or health improvement. The paper is lacking an important discussion of ethical use of robots, particularly the ethics of anthropomorphisation and potential misuse of such deception.
Just as much of the work on believability in intelligent virtual agents saw believability as the holy grail, as researchers and practitioners we should ensure that the technology improves human lives. Having someone see the robot as a human, if that is what the researcher/developer is aiming at, should be questioned as a goal. This goal has ethical considerations. I find it very telling that you could not find a study on the roles of robots. As a community we need to ask ourselves why we are building this technology - do we want to replace a human who plays that role - why? Is it to be more available, less judgmental or biased, etc.? It should be to improve learning or health outcomes, or even to teach social skills - to address gaps/limitations which exist - but not to replace humans in this role.
Please raise the issue of ethics and the importance of asking why we want robots in certain roles and whether it is necessary to anthropomorphise. In the design of a virtual agent by Bickmore for adherence to antipsychotic medication, it was imperative that the human did not think they were dealing with a person.
Bickmore T, Puskar K, Schlenk EA, Pfeifer LM, Sereika SM. Maintaining reality: relational agents for antipsychotic medication adherence. Interacting with Computers. 2010;22(4):276-88.
We should be careful in particular with vulnerable populations, including children, and ensure that they do not come to prefer interacting with robots over other people, or fail to learn how to socialise and make friends, as painful as that can be. I am not aware of studies specifically with robots, but this study considers artificial social agents.
Richards, D., Vythilingam, R., & Formosa, P. (2023). A principlist-based study of the ethical design and acceptability of artificial social agents. International Journal of Human-Computer Studies, 172, 102980.
The limitations section is very helpful and the recommendation of standardised measures to allow proper comparisons is important.
The tables are very helpful, including the summary final table that seeks to draw the findings together.
Author's reply:
We thank the reviewer for making these useful suggestions. We agree that ethical considerations are relevant to include in this review in order to offer a broader perspective on anthropomorphism in human-robot interactions. To this end, we have included them in the paper by adding Section 5.3.1 (line 971). We have also added the suggested references, on lines 979, 981 and 983 for Bickmore et al. (2010), and on line 973 for Richards et al. (2023).
Thank you again for your invaluable advice.
Sincerely.
Reviewer 2 Report
As robots become increasingly integrated into our daily lives, understanding how individuals perceive these machines is of paramount importance. The human tendency to attribute human-like qualities to robots, known as anthropomorphism, has drawn significant attention from researchers and practitioners alike.
This manuscript titled "We don't anthropomorphize a robot based only on its cover: Context matters too!" presents a comprehensive review that delves into the multifaceted factors influencing anthropomorphism in the context of human-robot interactions.
The review synthesizes the findings of 156 experimental studies conducted between 2002 and 2023, exploring the varying attributions of human capabilities to robots. Contrary to intuitive assumptions, the study reveals that anthropomorphism is not solely dependent on robotic factors, but rather influenced by a complex interplay of contextual elements. These elements encompass not only characteristics inherent to the robot itself but also situational aspects surrounding the interaction and individual user traits.
Two prominent theories, the "mere appearance hypothesis" and the SEEK (Sociality, Effectance, and Elicited agent Knowledge) theory, are examined in the context of explaining anthropomorphism. The SEEK theory, in particular, emerges as a more robust explanation for the observed phenomena, underscoring the pivotal role of contextual factors in shaping perceptions of robots. However, it is noteworthy that even the SEEK theory falls short in fully elucidating all the factors involved, such as the autonomy of the robot, warranting further investigation.
One notable outcome of the review is the revelation of significant methodological variability in the studies of anthropomorphism. This diversity poses challenges in generalizing results and calls for the establishment of more standardized research practices in this domain. Consequently, the article presents valuable recommendations for future studies to enhance the rigor and reliability of research on human-robot interactions.
Considering the increasing significance of human-robot interactions in various domains, ranging from healthcare to service industries, the findings of this review hold relevance for researchers, designers, and policymakers.
The comprehensive analysis sheds light on the intricacies of anthropomorphism, emphasizing the crucial role of context in shaping users' perceptions of robots.
As such, this article is well-suited for publication in Applied Sciences, contributing valuable insights to the field and guiding the development of more effective and intuitive human-robot interactions in the future.
The manuscript submitted for review does not present serious problems in terms of its writing in English.
Author Response
Author’s reply to Review Report 2
Dear Reviewer,
We are grateful that you have read and reviewed our article. We thank you very much for your support in improving the quality of the manuscript.
Thank you again for your invaluable advice.
Sincerely.