A Pattern Approach to Comprehensible and Pleasant Human–Robot Interaction
Abstract
1. Introduction
2. Related Work
2.1. Human–Robot Interaction Design
- Robot-centered HRI bases design decisions on the assumption that the robot is an agent with its own goals. Behavior design aims to equip the robot for “surviving in the environment” or “fulfilling internal needs” (e.g., emotions) [16].
- Robot cognition-centered HRI thinks of the robot as an intelligent system that can make decisions and solve problems. The behavior of the robot is equated with the behavior of the software system. Behavior design thus revolves around questions of machine learning, problem solving, and cognitive robot architectures [16].
- Design-centered HRI refers to the robot as a product that was created to provide the user with a certain experience. To this end, form, modality, social norms, autonomy and interactivity are defined as relevant characteristics of the robot [5].
- Human-centered HRI puts people’s reactions and interpretations of the robot’s appearance and behavior in the center of attention. It focuses on the goal of designing robot behavior that is acceptable and comfortable for the human interaction partner [16].
2.2. Behavioral Expressions for Social Robots
- Robot-specific expressions: Social robots can have different appearances—humanoid (human-like), animoid (animal-like) and abstract robots (not inspired by the form of any living organism). Studies in HRI often concentrate on a particular robot, which can be self-built or commercially available. Examples of humanoid robots are SoftBank’s Nao (SoftBank Robotics), Honda’s Asimo (American Honda Motor Co. Inc.) or Furhat (Furhat Robotics). Prominent examples of animoid robots in HRI research are the robots iCat [17], Aibo (Sony Corporation), Paro (Paro Robotics) or Kismet [4]. Abstract robots are hardly ever the object of investigation in social robot studies.
- Modality-specific expressions: Reviewing the body of research, we found that scientific studies on the behavioral design for social robots tend to focus on selected communication modalities (instead of using the full range available as displayed in Table 1). There is, for example, extensive research about gaze behavior of humanoid robots [9,18,19]. Furthermore, gestures, sound and light are often researched in combinations, especially in connection with emotional expressions for humanoid robots [7,20,21]. Other modalities that have received increased interest are proxemics (physical distance between human and robot) [10,22] and speech.
- Use Case-specific expressions: Social robots can be applied in different settings and assume various roles. Common application areas for social robots in HRI research are playing games [7,23], teaching [11,24] or treating autistic children [25,26]. Simple collaboration tasks are also often used to investigate the interaction between humans and robots [27,28].
2.3. Design Patterns
- A structured, user-centered design process that provides step-by-step guidance for how to create non-verbal behavioral expressions that are perceived as comprehensible and pleasant by the users,
- A format and conventions for documenting the created patterns and connecting them to form a shared HRI design language, and
- A pattern language for social robots that demonstrates how the design process and documentation format can be used in practice and contains 40 patterns with concrete recommendations for the design of non-verbal behaviors for robots that offer companionship and entertainment.
3. How to Create Behavioral Design Patterns for Human–Robot Interaction
3.1. Example Use Case
3.2. Phase 1: Analysis
3.2.1. Decomposing Use Cases
3.2.2. Identifying Recurring Robot Interaction Steps
3.2.3. Specifying Communication Goals
3.3. Phase 2: Creation
- “I am recording spoken information from you.” (action: listen to user);
- “My full attention is on you.” (sub-action: direct attention to user);
- “I am listening to what you are currently saying.” (sub-action: record user input).
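The relation between actions, sub-actions and communication goals can also be written down in a machine-readable form. The following minimal Python sketch is our own illustration (the class and function names are not part of the paper’s tooling) and models the decomposition of “listen to user” shown above:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InteractionStep:
    """A robot interaction step together with the message it should convey to the user."""
    action: str
    communication_goal: str
    sub_steps: List["InteractionStep"] = field(default_factory=list)

# Decomposition of "listen to user" as described above
listen_to_user = InteractionStep(
    action="listen to user",
    communication_goal="I am listening to what you are currently saying.",
    sub_steps=[
        InteractionStep("direct attention to user", "My full attention is on you."),
        InteractionStep("record user input", "I am recording spoken information from you."),
    ],
)

def print_goals(step: InteractionStep, indent: int = 0) -> None:
    """Print the communication goals of a step and all of its sub-steps."""
    print(" " * indent + f'{step.action}: "{step.communication_goal}"')
    for sub in step.sub_steps:
        print_goals(sub, indent + 2)

print_goals(listen_to_user)
```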
3.3.1. Collecting Relevant Insights
3.3.2. Ideating Design Solutions
- Presentation of Insight Board: The team member who conducted the desktop research presents the Insight Board; the others listen actively, noting down the aspects they find most relevant and inspiring. They also write down additional ideas that come to mind.
- Sharing and Clustering Insights: The aspects that were noted down are discussed by the design team and clustered.
- Ideation: Based on the clusters, the design team brainstorms ideas for mapping the gathered insights to the behavior of robots in order to express the communication goal at hand.
- Sharing and Clustering Ideas: The resulting ideas are, again, shared within the design team and organized in idea clusters. These idea clusters form the basis for the next step, the pattern specification.
- The communication goal “I am recording spoken information from you.” addresses the auditory attention expressed by the robot towards verbal user input, or—speaking in more technical terms—the voice recording by the robot’s microphone. In the course of the ideation workshop, the designers came to the conclusion that, depending on the capabilities and appearance of the robot, this auditory attention could be expressed through a dedicated dynamic body expression (turning the ears towards the user) or through a specific light signal that augments the robot’s ear or microphone.
- The communication goal “My full attention is on you.” refers to visual attentiveness towards the user, as shown in social interaction by facing another person and maintaining eye contact. Similarly, to achieve this communication goal, the robot’s front and eyes can be turned towards the user. For robots without a torso or eyes, the communication goal could be expressed by pointing the camera towards the user.
- The overarching communication goal “I am listening to what you are currently saying.” combines design ideas for the other two communication goals into one behavioral expression.
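Several of these design ideas are conditional on the capabilities and appearance of the robot. The sketch below is a hypothetical illustration in plain Python (the capability flags and returned descriptions are our own, not taken from the paper) of how such capability-dependent mappings from communication goals to expressions could be encoded:

```python
from dataclasses import dataclass

@dataclass
class RobotCapabilities:
    """Capability profile used to pick an appropriate expression for a communication goal."""
    has_movable_ears: bool = False
    has_ear_or_mic_light: bool = False
    has_eyes: bool = False
    has_movable_torso: bool = False
    has_camera: bool = True

def express_auditory_attention(robot: RobotCapabilities) -> str:
    """Expression for 'I am recording spoken information from you.'"""
    if robot.has_movable_ears:
        return "turn the ears towards the user while recording"
    if robot.has_ear_or_mic_light:
        return "show a light signal at the ear/microphone while recording"
    return "fall back to a generic listening cue (e.g., a screen animation)"

def express_visual_attention(robot: RobotCapabilities) -> str:
    """Expression for 'My full attention is on you.'"""
    if robot.has_movable_torso and robot.has_eyes:
        return "turn the front and eyes towards the user and maintain eye contact"
    if robot.has_camera:
        return "point the camera towards the user and follow the user's head"
    return "orient the robot's front towards the user"

miro_like = RobotCapabilities(has_movable_ears=True, has_eyes=True, has_movable_torso=True)
print(express_auditory_attention(miro_like))
print(express_visual_attention(miro_like))
```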
3.3.3. Specifying Multimodal Behavioral Patterns
- Decision cards contain the name and a short description of the modality. Modalities are color-coded and labeled with an expressive icon so that they are easy to distinguish from one another. As a first step, the design team puts all ten decision cards on the table and chooses the categories that are required to realize the design ideas produced in the ideation workshop. For the selected categories, the three cards described below are placed on the table and their instructions are followed. All other cards can be put aside.
- Investigation cards provide one or two questions that further specify the design space for a category, thus directing the attention of the designers towards specific aspects that are important when using this communication modality.
- Parameter Cards provide a list of parameters that need to be specified in order to create precise and implementable behavioral patterns.
- Idea Cards are empty cards that can be used to document the specifications for the selected modality and the communication goal at hand, based on the questions and parameters proposed by the two previous cards.
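To give a concrete picture of what a filled-in set of cards might yield, the following hypothetical Python sketch records one modality specification. The field names loosely mirror the four card types described above, while the parameter names and values are purely illustrative and not taken from the paper:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ModalitySpecification:
    """One selected modality from the card deck and its specification for a communication goal."""
    modality: str                       # decision card: name of the modality
    investigation_questions: List[str]  # investigation card: design-space questions
    parameters: Dict[str, str]          # parameter card: values that make the pattern implementable
    idea_notes: str = ""                # idea card: free-form documentation of the design idea

# Example specification for the light modality of "record user input"
# (parameter names and values are illustrative assumptions)
light_spec = ModalitySpecification(
    modality="Light",
    investigation_questions=["Where on the robot's body should the light signal appear?"],
    parameters={
        "color": "blue",
        "location": "ears/microphone",
        "duration": "while the user is talking",
    },
    idea_notes="Illuminate the ears to visualize that speech is being recorded.",
)
print(light_spec)
```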
3.4. Phase 3: Evaluation
3.4.1. Prototyping Patterns
3.4.2. Gathering User Feedback in a User Study
- In the first part, participants were asked about their demographic data (age and gender) as well as their attitude towards and previous experiences with social robots.
- In the second part, they were shown a 6-minute video of a user playing the quiz game with the MiRo robot. Throughout the course of the game, the robot displayed a number of different patterns to communicate its state and intentions to the user, including the pattern for “listen to user” (see Video S2 in the Supplementary Materials or refer to the Robot Behavior Pattern Wiki for Human–Robot Interaction described in Section 5.1). This video was included to provide a meaningful context for the patterns and immerse the participants in the experience of interacting with the robot. After the video, participants were asked to report their first impression of the robot using the Robot Attitude Scale [49].
- The third part was the actual pattern evaluation. Participants watched short video snippets, each of which showed MiRo expressing one specific pattern. After each snippet, participants were asked to evaluate the comprehensibility and pleasantness of the pattern based on four questions:
- In your opinion, what does the robot shown in this video snippet want to express?
- Which behavior of the robot led to your opinion?
- How comprehensible was the behavior of the robot to you? (scale from 0 = not at all to 3 = very), and
- How pleasant was your experience of the behavior of the robot? (scale from 0 = not at all to 3 = very).
After watching all 13 videos, the participants were thanked for their participation, and the survey was ended.
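For illustration, the ratings collected on the two 0–3 scales can be aggregated per pattern as in the following sketch; the response data shown here are invented for the example and are not the study’s results:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical responses: one entry per participant per video snippet,
# with comprehensibility and pleasantness rated on the 0-3 scale described above.
responses = [
    {"pattern": "Listen to User", "comprehensibility": 3, "pleasantness": 2},
    {"pattern": "Listen to User", "comprehensibility": 2, "pleasantness": 3},
    {"pattern": "Attentive", "comprehensibility": 1, "pleasantness": 2},
    {"pattern": "Attentive", "comprehensibility": 2, "pleasantness": 2},
]

# Group responses by pattern and report mean ratings
by_pattern = defaultdict(list)
for response in responses:
    by_pattern[response["pattern"]].append(response)

for pattern, rows in by_pattern.items():
    c = mean(r["comprehensibility"] for r in rows)
    p = mean(r["pleasantness"] for r in rows)
    print(f"{pattern}: comprehensibility {c:.2f}/3, pleasantness {p:.2f}/3")
```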
3.4.3. Improving Patterns
- Methodology of the user test: How participants experience the comprehensibility and pleasantness of the robot is influenced by the way it is presented in the user test. Despite careful preparation, the interaction with the robot or general study procedure might turn out less smooth than intended, e.g., due to the testing environment or technical setup. Participants’ ratings are also influenced by their current mood or general attitude towards robots. Similarly, the perception of individual robot behaviors may depend on the use case or scenario. Thus, it should first be checked whether the study results reveal any hints of potential methodological limitations of the user test. If such shortcomings are revealed, the study results need to be interpreted with care, and the design team might want to consider repeating the user study with improved methodology.
- Implementation on the robotic platform: The prototypical implementation on the robot is often only one way to realize the design recommendation proposed by the pattern description. Low ratings might indicate low comprehensibility and pleasantness for this concrete implementation, rather than for the pattern itself. The design team should therefore make sure that the implementation incorporates all relevant aspects proposed by the pattern and consider whether there might be other, more appropriate ways to realize the expression on this robot.
- Behavioral expression: If neither the study methodology nor the implementation accounts for the low ratings, it can be assumed that the pattern itself needs revision. In this case, the design team needs to go back to the work they conducted in the Creation phase and revise it in light of the results of the user study. Improving the pattern description could mean any of the following: making changes to the specifications of a single communication modality, altering the combination of modalities, or modifying the timing of the modalities. Additionally, the results of the user test might also reveal ideas for new patterns or variants of the same pattern. For example, there might be alternative ways to express the defined communication goal in the context of the specific use case. Given the iterative nature of the design process, revised patterns need to be tested again.
4. How to Document Behavioral Design Patterns in a Structured Way
5. How to Use Behavioral Design Patterns in Human–Robot Interaction Design
5.1. Making Patterns Accessible through a Pattern Language
5.2. Tailoring HRI to Individual Users Using Pattern Variants
6. Summary and Contribution
- A three-phase design process that provides step-by-step guidance for HRI designers to generate reusable, high-quality behavioral patterns for social robots. To this end, it describes methods to identify recurring interaction situations, define communication goals for the robot, gather relevant insights from the existing body of research and transfer them into specific behavioral robot expressions.
- A pattern documentation format that provides a clear, standardized structure to note down all relevant aspects of a pattern, so that others can understand its design recommendations and use it for their own robot and use cases. Thereby, the pattern documentation format promotes knowledge exchange between HRI designers and supports the development of a shared design language for robot behavior.
- A pattern language of 40 behavioral patterns that can be used for HRI design. Future research and social robot applications can build upon this pattern language and extend it by applying the proposed design process and documentation format.
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Acknowledgments
Conflicts of Interest
Abbreviations
AAIM | Abstract Application Interaction Model |
EUC | essential use cases |
HCD | human-centered design |
HRI | human–robot interaction |
HTI | human–technology interaction |
ROS | Robot Operating System |
UX | user experience |
Appendix A
Appendix B
Name | Listen to User |
---|---|
Type | Composed Pattern |
Ranking | ** (initially validated) |
Version | 1 |
Author | Kathrin Pollmann |
Interaction Situation | The user is telling the robot something. The robot provides feedback that it is aware of the user talking and that it is recording the spoken information. |
Communication Goal | “I am listening to what you are currently saying.” |
Solution | Let the robot express that: its full attention is on the user (Atomic Pattern Attentive), that it is recording the spoken information (Atomic Pattern Speech Recording) and that it is switched on and operating (Atomic Pattern Operation Mode on). |
Illustration | |
Rationale | see Atomic Patterns Attentive, Speech Recording and Operation Mode on |
Examples | see Supplementary Material, Videos S1 and S2 |
References and Context | Needs: Attentive AND Speech Recording AND Operation Mode on; Opposed patterns: all other composed patterns—two composed patterns can never be executed at the same time
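As a rough illustration of how such a composed pattern could be assembled from its atomic patterns in software, consider the following sketch; this is our own example, not the implementation used in the paper, and the callables simply print what a real controller would trigger on the robot:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class AtomicPattern:
    """An atomic behavioral pattern with a callable that triggers it on the robot."""
    name: str
    execute: Callable[[], None]

@dataclass
class ComposedPattern:
    """A composed pattern that runs all of its required atomic patterns together."""
    name: str
    requires: List[AtomicPattern]

    def execute(self) -> None:
        # Only one composed pattern may run at a time (see "Opposed patterns" above),
        # so a real controller would first stop any other composed pattern here.
        for atomic in self.requires:
            atomic.execute()

attentive = AtomicPattern("Attentive", lambda: print("face the user, keep eye contact"))
speech_recording = AtomicPattern("Speech Recording", lambda: print("highlight ears/microphone"))
operation_mode_on = AtomicPattern("Operation Mode on", lambda: print("show 'I am switched on' cue"))

listen_to_user = ComposedPattern("Listen to User", [attentive, speech_recording, operation_mode_on])
listen_to_user.execute()
```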
Name | Speech Recording |
---|---|
Type | Atomic Pattern |
Ranking | ** (initially validated) |
Version | 1 |
Author | Kathrin Pollmann |
Interaction Situation | The user has provided some speech input. The robot needs to indicate that it is perceiving and processing the input. |
Communication Goal | “I am recording spoken information from you.” |
Solution | To provide feedback that the user input is recorded, you should visualize the act of listening. To do so, you can highlight the recording “sensory” organ (microphone or ears) for as long as the user is talking. Depending on the robot, this can be achieved in different ways:
For robots with movable ears, put the ears upright and/or turn them towards the user for the duration of the recording. For robots with light elements at the ears or microphone, display a light signal (e.g., blue or green) while the user is talking. |
Illustration | |
Rationale | In social human interaction, facing and looking at the other person is often interpreted as taking in what the person is saying. However, this behavior is, in fact, rather ambiguous, as we can never be quite sure whether someone is actually listening or, for example, daydreaming. Animals communicate by making the activity of listening more explicit, by putting their ears upright and/or turning them towards a (potential) auditory cue in the environment. This behavior can be copied by robots with movable ears. Analogies can also be found using other communication modalities. Technical devices often use light signals or animations to visualize speech recording (compare Amazon Echo, mobile phone audio recording apps, cameras). This design approach can also be transferred to robots. For example, SoftBank Robotics designed their Pepper robot with ears that can be illuminated by light. Research on light signals suggests using cold colors such as blue to make the light signal more attention-grabbing. Alternatively, a green light color could be used to emphasize that the speech recording functionality is active. Green is generally associated with a technical device being turned on or working. References and further reading: [58,59,60] |
Examples | see Supplementary Material, Videos S3 and S4 |
References and Context | Needed by: Listening; Works well together with: Attentive |
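A minimal sketch of how this pattern could be realized in code, assuming a hypothetical light API and a voice-activity signal (neither is a real MiRo or ROS interface), might look as follows:

```python
import time
from typing import Callable

class RobotLights:
    """Hypothetical stand-in for a robot's light API (e.g., LEDs near the ears/microphone)."""
    def set_ear_color(self, rgb) -> None:
        print(f"ear LEDs -> {rgb}")

    def off(self) -> None:
        print("ear LEDs -> off")

def speech_recording_feedback(lights: RobotLights, user_is_talking: Callable[[], bool]) -> None:
    """Highlight the recording 'sensory organ' (here: ear LEDs) while the user is talking.

    The pattern suggests a cold color such as blue to grab attention, or green to
    signal that the speech recording functionality is active.
    """
    lights.set_ear_color((0, 0, 255))  # blue while speech is being recorded
    while user_is_talking():
        time.sleep(0.1)                # poll the (robot-specific) voice-activity signal
    lights.off()                       # remove the cue once the user stops talking

# Minimal demo with a fake voice-activity signal that reports "talking" three times
ticks = iter([True, True, True, False])
speech_recording_feedback(RobotLights(), lambda: next(ticks))
```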
Name | Attentive |
---|---|
Type | Atomic Pattern |
Ranking | ** (initially validated) |
Version | 2 |
Author | Kathrin Pollmann |
Interaction Situation | Robot and user are engaged in interaction. The robot communicates that its full attention is focused on the user. This pattern should be used in two types of situations:
|
Communication Goal | “My full attention is on you.” |
Solution | Attentiveness towards the user can be expressed with a user-oriented positioning in the room. Position the robot at a social distance from the user (approximately 1.2 m) and let the front of the robot face the user. It should always be clear where the frontal part of the robot is. If possible, use active user tracking to let the robot maintain eye contact with the user. Eye contact can also be achieved with robots that do not have eyes: in this case, the visual sensory organ, the camera, should be pointed at the user and (if possible) follow the user’s head based on user tracking. |
Illustration | |
Rationale | In the interaction between different agents, attentiveness can be signaled through coming closer, decreasing the distance between oneself and the other agent. Research in human–robot interaction shows that a robot is perceived as attentive when its front is facing the user. From social interaction, we have also learned to interpret eye contact as a sign that someone’s attention is on us. Establishing eye contact is used as an unvoiced agreement to engage in social interaction with each other. These two concepts can easily be transferred to robots, as demonstrated by two examples: In the movie “Robot and Frank”, the robot turns towards the user when talking to him. Pixar’s robot Wall-E moves its head closer to objects of interest. References and further reading: [57,61,62,63,64] |
Examples | see Supplementary Material, Video S5 |
References and Context | Needed by: Active, Becoming active, Becoming inactive, Processing, Not understanding, Showing, Listening, Explaining, Encouraging good performance, Joyful positive feedback, Displeased positive feedback, Empathic negative feedback, Gloating negative feedback Opposed patterns: Inside turn, Passively available |
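The positioning part of this pattern can be sketched as follows; the motion interface is a hypothetical stand-in, and the user position is assumed to come from some robot-specific tracking component not modeled here:

```python
import math

class RobotBase:
    """Hypothetical stand-in for a mobile robot's motion interface."""
    def rotate_to(self, heading_rad: float) -> None:
        print(f"rotate front to {math.degrees(heading_rad):.1f} deg")

    def move_forward(self, distance_m: float) -> None:
        print(f"move forward {distance_m:.2f} m")

SOCIAL_DISTANCE_M = 1.2  # user-oriented positioning proposed by the pattern

def attend_to_user(base: RobotBase, user_x: float, user_y: float) -> None:
    """Turn the robot's front towards the user and keep roughly social distance.

    (user_x, user_y) is the tracked user position in the robot's own coordinate frame;
    how it is obtained (camera, laser, ...) is robot-specific.
    """
    heading = math.atan2(user_y, user_x)
    base.rotate_to(heading)                              # face the user
    distance = math.hypot(user_x, user_y)
    if distance > SOCIAL_DISTANCE_M:
        base.move_forward(distance - SOCIAL_DISTANCE_M)  # close the gap to ~1.2 m

attend_to_user(RobotBase(), user_x=2.0, user_y=1.0)
```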
Appendix C
References
- Forlizzi, J. How robotic products become social products: An ethnographic study of cleaning in the home. In Proceedings of the 2nd ACM/IEEE International Conference on Human-Robot Interaction (HRI), Arlington, VA, USA, 10–12 March 2007; pp. 129–136. [Google Scholar]
- Tanaka, F.; Ghosh, M. The implementation of care-receiving robot at an English learning school for children. In Proceedings of the 6th International Conference on Human-Robot Interaction (HRI), Lausanne, Switzerland, 6–9 March 2011; pp. 265–266. [Google Scholar]
- Wada, K.; Shibata, T. Living with seal robots—Its sociopsychological and physiological influences on the elderly at a care house. IEEE Trans. Robot. 2007, 23, 972–980. [Google Scholar] [CrossRef]
- Breazeal, C. Emotion and sociable humanoid robots. Int. J. Hum.-Comput. Stud. 2003, 59, 119–155. [Google Scholar] [CrossRef]
- Bartneck, C.; Forlizzi, J. A design-centred framework for social human-robot interaction. In Proceedings of the RO-MAN 2004, 13th IEEE International Workshop on Robot and Human Interactive Communication (IEEE Catalog No. 04TH8759), Kurashiki, Japan, 22 September 2004; pp. 591–594. [Google Scholar]
- de Graaf, M.M.A.; Allouch, S.B. The influence of prior expectations of a robot’s lifelikeness on users’ intentions to treat a zoomorphic robot as a companion. Int. J. Soc. Robot. 2017, 9, 17–32. [Google Scholar] [CrossRef] [Green Version]
- Johnson, D.O.; Cuijpers, R.H.; Pollmann, K.; van de Ven, A.J. Exploring the entertainment value of playing games with a humanoid robot. Int. J. Soc. Robot. 2016, 8, 247–269. [Google Scholar] [CrossRef]
- Kahn, P.H.; Freier, N.G.; Kanda, T.; Ishiguro, H.; Ruckert, J.H.; Severson, R.L.; Kane, S.K. Design patterns for sociality in human-robot interaction. In Proceedings of the 3rd ACM/IEEE International Conference on Human Robot Interaction (HRI), Amsterdam, The Netherlands, 12–15 March 2008; pp. 97–104. [Google Scholar]
- Mutlu, B. Designing Gaze Behavior for Humanlike Robots. Doctoral Dissertation, University of Pittsburgh, Pittsburgh, PA, USA, 2009. Unpublished. [Google Scholar]
- Takayama, L.; Pantofaru, C. Influences on proxemic behaviors in human-robot interaction. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 10–15 October 2009; pp. 5495–5502. [Google Scholar]
- Mubin, O.; Stevens, C.J.; Shahid, S.; Al Mahmud, A.; Dong, J.J. A review of the applicability of robots in education. J. Technol. Educ. Learn. 2013, 1, 13. [Google Scholar] [CrossRef] [Green Version]
- Robins, B.; Dautenhahn, K.; Te Boekhorst, R.; Billard, A. Robotic assistants in therapy and education of children with autism: Can a small humanoid robot help encourage social interaction skills? Univers. Access Inf. Soc. 2005, 4, 105–120. [Google Scholar] [CrossRef]
- ISO 9241-210. Ergonomics of Human-System Interaction—Part 210: Human-Centred Design for Interactive Systems; Beuth: Berlin, Germany, 2019. [Google Scholar]
- Hassenzahl, M. User experience (UX): Towards an experiential perspective on product quality. In Proceedings of the 20th Conference on l’Interaction Homme-Machine, Metz, France, 2–5 September 2008; pp. 11–15. [Google Scholar]
- Fronemann, N.; Peissner, M. User experience concept exploration. In Proceedings of the 8th Nordic Conference on Human-Computer Interaction: Fun, Fast, Foundational, Helsinki, Finland, 26–30 October 2014; Roto, V., Ed.; ACM: New York, NY, USA, 2014; pp. 727–736. [Google Scholar] [CrossRef]
- Dautenhahn, K. Socially intelligent robots: Dimensions of human–robot interaction. Philos. Trans. R. Soc. Lond. Ser. Biol. Sci. 2007, 362, 679–704. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- van Breemen, A.; Yan, X.; Meerbeek, B. iCat: An animated user-interface robot with personality. In Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems, Utrecht, The Netherlands, 25–29 July 2005; pp. 143–144. [Google Scholar]
- Andrist, S.; Tan, X.Z.; Gleicher, M.; Mutlu, B. Conversational gaze aversion for humanlike robots. In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction (HRI), Bielefeld, Germany, 3–6 March 2014; pp. 25–32. [Google Scholar]
- Mutlu, B.; Shiwa, T.; Kanda, T.; Ishiguro, H.; Hagita, N. Footing in human-robot conversations: How robots might shape participant roles using gaze cues. In Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction (HRI), La Jolla, CA, USA, 9–13 March 2009; pp. 61–68. [Google Scholar]
- Häring, M.; Bee, N.; André, E. Creation and evaluation of emotion expression with body movement, sound and eye color for humanoid robots. In Proceedings of the 20th IEEE International Symposium on Robot and Human Interactive Communication (2011 RO-MAN), Atlanta, GA, USA, 31 July–3 August 2011; IEEE Press: Atlanta, GA, USA, 2011. [Google Scholar]
- Löffler, D.; Schmidt, N.; Tscharn, R. Multimodal expression of artificial emotion in social robots using color, motion and sound. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; pp. 334–343. [Google Scholar]
- Walters, M.L.; Dautenhahn, K.; Te Boekhorst, R.; Koay, K.L.; Syrdal, D.S.; Nehaniv, C.L. An empirical framework for human-robot proxemics. In Proceedings of the New Frontiers in Human-Robot Interaction: Symposium at the AISB09 Convention, Edinburgh, UK, 8–9 April 2009; pp. 144–149. [Google Scholar]
- Barakova, E.I.; Lourens, T. Expressing and interpreting emotional movements in social games with robots. Pers. Ubiquitous Comput. 2010, 14, 457–467. [Google Scholar] [CrossRef] [Green Version]
- Kennedy, J.; Baxter, P.; Belpaeme, T. The robot who tried too hard: Social behaviour of a robot tutor can negatively affect child learning. In Proceedings of the 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Portland, OR, USA, 2–5 March 2015; pp. 67–74. [Google Scholar]
- Pennisi, P.; Tonacci, A.; Tartarisco, G.; Billeci, L.; Ruta, L.; Gangemi, S.; Pioggia, G. Autism and social robotics: A systematic review. Autism Res. 2016, 9, 165–183. [Google Scholar] [CrossRef] [PubMed]
- Robins, B.; Dautenhahn, K.; Dickerson, P. From isolation to communication: A case study evaluation of robot assisted play for children with autism with a minimally expressive humanoid robot. In Proceedings of the 2009 Second International Conferences on Advances in Computer-Human Interactions, Washington, DC, USA, 1–7 February 2009; pp. 205–211. [Google Scholar]
- Breazeal, C.; Brooks, A.; Chilongo, D.; Gray, J.; Hoffman, G.; Kidd, C.; Lee, H.; Lieberman, J.; Lockerd, A. Working collaboratively with humanoid robots. In Proceedings of the 4th IEEE/RAS International Conference on Humanoid Robots, Santa Monica, CA, USA, 10–12 November 2004; Volume 1, pp. 253–272. [Google Scholar]
- Hoffman, G.; Breazeal, C. Collaboration in human-robot teams. In Proceedings of the AIAA 1st Intelligent Systems Technical Conference, Chicago, IL, USA, 20–22 September 2004; p. 6434. [Google Scholar]
- Alexander, C. A Pattern Language: Towns, Buildings, Construction; Oxford University Press: Oxford, UK, 1977. [Google Scholar]
- Gamma, E.; Helm, R.; Johnson, R.; Vlissides, J. Design Patterns: Elements of Reusable Object-Oriented Software; Addison-Wesley: Reading, MA, USA, 1995. [Google Scholar]
- Tidwell, J. Designing Interfaces: Patterns for Effective Interaction Design, 2nd ed.; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2010. [Google Scholar]
- Borchers, J.O. A pattern approach to interaction design. In Proceedings of the 3rd International Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques, New York, NY, USA, 17–19 August 2000; ACM: New York, NY, USA, 2000; pp. 369–378. [Google Scholar]
- Peltason, J.; Wrede, B. Pamini: A framework for assembling mixed-initiative human-robot interaction from generic interaction patterns. In Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue, Tokyo, Japan, 24–25 September 2010; pp. 229–232. [Google Scholar]
- Peltason, J. Modeling Human-Robot-Interaction Based on Generic Interaction Patterns. Ph.D. Thesis, Bielefeld University, Bielefeld, Germany, 2013. [Google Scholar]
- Sauppé, A.; Mutlu, B. Design Patterns for Exploring and Prototyping Human-Robot Interactions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI’14, Toronto, ON, Canada, 26 April–1 May 2014; Association for Computing Machinery: New York, NY, USA, 2014; pp. 1439–1448. [Google Scholar] [CrossRef]
- Pollmann, K. Behavioral Design Patterns for Social, Assistive Robots-Insights from the NIKA Research Project. In Mensch und Computer 2019-Workshopband; Gesellschaft für Informatik e.V.: Bonn, Germany, 2019. [Google Scholar] [CrossRef]
- Constantine, L.L.; Lockwood, L.A.D. Software for Use: A Practical Guide to the Models and Methods of Usage-Centered Design; Addison-Wesley Professional: Englewood Cliffs, NJ, USA, 1999. [Google Scholar]
- Galvan, J.L.; Galvan, M.C. Writing Literature Reviews: A Guide for Students of the Social and Behavioral Sciences; Taylor & Francis: London, UK, 2017. [Google Scholar]
- Lucero, A. Framing, aligning, paradoxing, abstracting, and directing: How design mood boards work. In Proceedings of the Designing Interactive Systems Conference (DIS ’12), Newcastle Upon Tyne, UK, 11–15 June 2012; pp. 438–447. [Google Scholar]
- Canfield, J. How to Create an Empowering Vision Board. 2017. Available online: https://www.jackcanfield.com/blog/how-to-create-an-empowering-vision-book/ (accessed on 16 August 2021).
- Doorley, S.; Holcomb, S.; Klebahn, P.; Segovia, K.; Utley, J. Design Thinking Bootleg; Stanford University: Stanford, CA, USA, 2018. [Google Scholar]
- Pollmann, K. The Modality Card Deck: Co-Creating Multi-Modal Behavioral Expressions for Social Robots with Older Adults. Multimodal Technol. Interact. 2021, 5, 33. [Google Scholar] [CrossRef]
- Consequential Robotics. MiRoCODE—Miro-E. Available online: https://www.miro-e.com/mirocode (accessed on 16 August 2021).
- Collins, E.C.; Prescott, T.J.; Mitchinson, B.; Conran, S. MIRO: A versatile biomimetic edutainment robot. In Proceedings of the 12th International Conference on Advances in Computer Entertainment Technology-ACE ‘15, Iskandar, Malaysia, 16–19 November 2015; ACM Press: New York, NY, USA, 2015; pp. 28:1–28:4. [Google Scholar] [CrossRef]
- SoftBank Robotics. Choregraphe Suite—Aldebaran 2.5.11.14a Documentation. Available online: http://doc.aldebaran.com/2-5/software/choregraphe/index.html (accessed on 16 August 2021).
- SoftBank Robotics. Pepper the Humanoid and Programmable Robot|SoftBank Robotics. Available online: https://www.softbankrobotics.com/emea/en/pepper (accessed on 16 August 2021).
- Peissner, M.; Häbe, D.; Janssen, D.; Sellner, T. MyUI: Generating accessible user interfaces from multimodal design patterns. In Proceedings of the 4th ACM SIGCHI Symposium on Engineering Interactive Computing Systems-EICS ’12, Copenhagen, Denmark, 25–26 June 2012; ACM Press: New York, NY, USA, 2012. [Google Scholar] [CrossRef]
- Ziegler, D.; Peissner, M. Modelling of Polymorphic User Interfaces at the Appropriate Level of Abstraction. In Advances in Intelligent Systems and Computing; Springer International Publishing: Cham, Switzerland, 2018; pp. 45–56. [Google Scholar] [CrossRef]
- Broadbent, E.; Stafford, R.; MacDonald, B. Acceptance of Healthcare Robots for the Older Population: Review and Future Directions. Int. J. Soc. Robot. 2009, 1, 319–330. [Google Scholar] [CrossRef]
- Mayring, P. Qualitative Inhaltsanalyse. In Handbuch Qualitative Forschung in der Psychologie; VS Verlag für Sozialwissenschaften: Wiesbaden, Germany, 2010; pp. 601–613. [Google Scholar]
- Dearden, A.; Finlay, J. Pattern Languages in HCI: A Critical Review. Hum.–Comput. Interact. 2006, 21, 49–102. [Google Scholar] [CrossRef]
- Tidwell, J. Common Ground: A Pattern Language for Human-Computer Interface Design. 1999. Available online: http://www.mit.edu/~jtidwell/interaction_patterns.html (accessed on 16 August 2021).
- Conley, K. REP Purpose and Guidelines. 2010. Available online: https://www.ros.org/reps/rep-0001.html (accessed on 16 August 2021).
- Dautenhahn, K. Robots we like to live with?!—A developmental perspective on a personalized, life-long robot companion. In Proceedings of the RO-MAN 2004. 13th IEEE International Workshop on Robot and Human Interactive Communication (IEEE Catalog No.04TH8759), Kurashiki, Japan, 20–22 September 2004; pp. 17–22. [Google Scholar] [CrossRef] [Green Version]
- Syrdal, D.S.; Lee Koay, K.; Walters, M.L.; Dautenhahn, K. A personalized robot companion?—The role of individual differences on spatial preferences in HRI scenarios. In Proceedings of the RO-MAN 2007—The 16th IEEE International Symposium on Robot and Human Interactive Communication, Jeju Island, Korea, 26–29 August 2007; pp. 1143–1148. [Google Scholar] [CrossRef] [Green Version]
- Pollmann, K.; Ziegler, D. Personal Quizmaster: A Pattern Approach to Personalized Interaction Experiences with the MiRo Robot. In Proceedings of the Conference on Mensch und Computer (MuC ’20), Magdeburg, Germany, 6–9 September 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 485–489. [Google Scholar] [CrossRef]
- Mizoguchi, H.; Takagi, K.; Hatamura, Y.; Nakao, M.; Sato, T. Behavioral expression by an expressive mobile robot-expressing vividness, mental distance, and attention. In Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robot and Systems. Innovative Robotics for Real-World Applications (IROS ’97), Grenoble, France, 11 September 1997; Volume 1, pp. 306–311. [Google Scholar]
- Fronemann, N.; Pollmann, K.; Loh, W. Should my Robot Know What’s Best for me? Human–Robot Interaction between User Experience and Ethical Design. AI Soc. 2021. [Google Scholar] [CrossRef]
- Baraka, K.; Veloso, M.M. Mobile Service Robot State Revealing Through Expressive Lights: Formalism, Design, and Evaluation. Int. J. Soc. Robot. 2018, 10, 65–92. [Google Scholar] [CrossRef]
- Choi, Y.; Kim, J.; Pan, P.; Jeung, J. The Considerable Elements of the Emotion Expression Using Lights in Apparel Types. In Proceedings of the 4th International Conference on Mobile Technology, Applications, and Systems and the 1st International Symposium on Computer Human Interaction in Mobile Technology (Mobility ’07), Singapore, 10–12 September 2007; Association for Computing Machinery: New York, NY, USA, 2007; pp. 662–666. [Google Scholar] [CrossRef]
- Goffman, E. Behavior in Public Places: Notes on the Social Organization of Gatherings; The Free Press: New York, NY, USA, 1963. [Google Scholar]
- Jan, D.; Traum, D.R. Dynamic Movement and Positioning of Embodied Agents in Multiparty Conversations. In Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS ’07), Honolulu, HI, USA, 14–18 May 2007; Association for Computing Machinery: New York, NY, USA, 2007. [Google Scholar] [CrossRef]
- Cassell, J.; Bickmore, T.; Billinghurst, M.; Campbell, L.; Chang, K.; Vilhjálmsson, H.; Yan, H. Embodiment in Conversational Interfaces: Rea. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’99), Pittsburgh, PA, USA, 15–20 May 1999; Association for Computing Machinery: New York, NY, USA, 1999; pp. 520–527. [Google Scholar] [CrossRef]
- Cassell, J.; Thorisson, K.R. The power of a nod and a glance: Envelope vs. emotional feedback in animated conversational agents. Appl. Artif. Intell. 1999, 13, 519–538. [Google Scholar] [CrossRef]
Actuators | Communication Modalities |
---|---|
Multidirectional joints in neck, torso, arms, legs, ears | Whole body motions, posture |
Multidirectional wheels | Proxemics, movements within the room |
Movable legs | Proxemics, movements within the room |
LEDs | Light signal |
Speakers | Sound, speech |
Manipulable eyes | Facial Expressions, gaze behavior |
Main Actions of Robot | Communication Goals |
---|---|
explain quiz game | “I am explaining something to you. Stay focused on me!” |
provide information | “I am showing you information. Please pay attention to me and to this information.” |
listen to user | “I am listening to what you are currently saying.” |
process confirmation | “I am processing what I just learned from you. This will take some time—I will tell you when I am ready.” |
load quiz game | “I am loading the game and preparing to play it with you. I will let you know when I am ready.” |
motivate good performance | “I believe in you and I’ll support you to show good performance in the upcoming action!” |
direct attention to user | “My full attention is on you.” |
demand user’s attention | “You need to focus your attention on this. It is important.” |
demonstrate own readiness | “I can start acting straight away.” |
demand user input | “I am expecting you to provide your input now.” |
record user input | “I am recording spoken information from you.” |
turn attention away from user | “My attention is not on you. I am currently focused on internal processing or nothing at all.” |
indicate progress of processing | “I am currently processing or loading information.” |
confirm successful data processing | “I have successfully processed or loaded information.” |
indicate correctness | “What you did/said is correct.” |
indicate incorrectness | “What you did/said is incorrect.” |
energize user | “I am strong and so are you! Let’s go!” |
Component | Function |
---|---|
Name | Short, to the point description of the core of the solution |
Preamble | |
Type | Provides keywords referring to the type and application domain of the pattern |
Ranking | Indication of how valid the pattern is, ranging from * (tentative) through ** (initially validated) to *** (well validated)
Version | Version number of the pattern |
Author | Authors’ names and e-mail addresses |
Design Challenge | |
Interaction Situation | Describes the general recurring interaction situation in which the pattern occurs, including the expectations and needs in this situation from the user’s point of view
Communication Goal | Describes the design problem at hand, focusing on the communication goal, i.e., the message that the robot behavior should communicate to the human interaction partner |
Design Solution | |
Solution | Text description of the concrete behavioral expressions that can be realized on a social robot in order to solve the design challenge; phrased as an instruction for the designer |
Illustration | Visual representation of the solution |
Rationale | Reasoning behind the proposed behavioral expression: inspirational examples and scientific references |
Examples | Concrete examples of how the pattern can be used, visualized as videos or story boards |
References and Context | Connections with other patterns |
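To illustrate how this documentation format could be turned into a machine-readable structure (our own sketch, not part of the paper’s tooling), the components above map naturally onto a small data class; the example instance paraphrases the Speech Recording pattern from Appendix B:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BehavioralPattern:
    """Machine-readable rendering of the documentation format described in the table above."""
    # Preamble
    name: str
    type: str                 # e.g., "Atomic Pattern" or "Composed Pattern"
    ranking: str              # "*", "**" or "***"
    version: int
    author: str
    # Design Challenge
    interaction_situation: str
    communication_goal: str
    # Design Solution
    solution: str
    illustration: str         # path or URL to the visual representation
    rationale: str
    examples: List[str]
    references_and_context: List[str]

speech_recording = BehavioralPattern(
    name="Speech Recording",
    type="Atomic Pattern",
    ranking="**",
    version=1,
    author="Kathrin Pollmann",
    interaction_situation="The user has provided some speech input.",
    communication_goal="I am recording spoken information from you.",
    solution="Highlight the recording sensory organ while the user is talking.",
    illustration="speech_recording.png",
    rationale="Animals turn their ears towards auditory cues; devices use light signals.",
    examples=["Video S3", "Video S4"],
    references_and_context=["Needed by: Listening", "Works well together with: Attentive"],
)
print(speech_recording.name, speech_recording.ranking)
```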