Case Report

The Modality Card Deck: Co-Creating Multi-Modal Behavioral Expressions for Social Robots with Older Adults

Fraunhofer Institute for Industrial Engineering IAO, Nobelstr. 12, 70569 Stuttgart, Germany
Multimodal Technol. Interact. 2021, 5(7), 33; https://doi.org/10.3390/mti5070033
Submission received: 4 May 2021 / Revised: 7 June 2021 / Accepted: 23 June 2021 / Published: 29 June 2021

Abstract:
Robots have been proposed as intelligent technology that can support the independent living and health of older adults. While significant advances are being made regarding hardware and intelligent software to support autonomous actions of robots, less emphasis has been put on designing robot behavior that is comprehensible and pleasant for older adults. However, good usability and user experience are crucial factors for acceptance and long-term use. One way to actively engage older adults in behavioral design for social robots is participatory design. The Modality Card Deck is proposed, a tool that helps to engage older adults in the human-robot interaction design process and lets them participate in design decisions for robot behavior. The cards guide the users towards creating ideas for design solutions which are detailed enough to be implemented by interaction designers and software developers. This paper provides a detailed description of the Modality Card Deck and presents an evaluation of the tool in the scope of a case study. In the case study, the card deck was used in participatory design workshops with older adults to develop multi-modal robot behaviors for the Pepper robot and a quiz game application. After describing the procedure of the case study, the workshop results and learnings about working with the Modality Card Deck and older adults are presented.

1. Introduction

There is an increased interest in applying intelligent robotic systems not only to industrial contexts, but bringing them to spaces such as homes [1], schools [2], or nursing homes [3]. In particular, they are discussed as a solution to support and assist older adults in their daily lives. To this end, technical software and hardware solutions are developed and constantly optimized to enable robots to record and gather context and user information, as well as navigate, move, and interact autonomously. Recent advances in Artificial Intelligence (AI) can further accelerate these developments and help us to create robots that are capable of making their own decisions and tailoring their actions to the individual user and situational context.
However, to be successfully employed in use cases revolving around elder care, a robot not only has to be equipped with the technical features described above, but must also act as a social interaction partner for the user. The term social robot has been introduced to describe robots that engage in everyday interactions with us. Robots are considered social if they communicate social affordances [4] and if the communication is perceived as intuitive by the users [5]. This can be achieved by using different communication modalities such as body posture, facial expression, speech, and gaze [5]. These communication modalities or components of the robot behavior can be regarded as the user interface of human-robot interaction—the visible output of the intelligent system that takes care of reasoning and steering the robot’s actions. Comparable to other user interfaces, it is hence important to carefully think about how to design the behavior of the robot and how to make use of the different communication modalities to ensure an effective (i.e., high usability and positive user experience) interaction between humans and social robots.
Commercially available robots often come with preprogrammed behavior. However, this behavior does not necessarily meet the expectations and preferences of certain user groups, not to mention the needs of the individual user. It has been argued that the acceptance of and willingness to use a robot is closely connected to how well it addresses the user’s characteristics, needs, and preferences [6,7]—an approach that has also been summarized under the term of personalized human-robot interaction (HRI) [8].
A promising approach towards such personalized experiences with technology is participatory design (PD), which actively involves (future) users in the design process [9]. PD assumes that users are “experts by experience” or “experts of their lifeworld” [10] and can thus provide valuable input for expert designers, developers, or researchers [11]. With this mindset, PD not only emphasizes the partnership of users and design experts in the design process, but also demonstrates a clear focus on experience-centered and positive design, as opposed to problem- and deficit-centered design. PD has been successfully used with older adults to inform the design of interactive technologies, especially in the context of ambient assisted living and elder home care. In this paper, the terms PD and Co-Design are used interchangeably.
In HRI, the responsibility for specifying appropriate robot behavior is mostly taken on by designers. While some efforts have been made to employ co-creation methods in HRI design, they focus on the appearance and character of the robot, rather than the interaction with the user. Interaction-related results are usually of a rather inspirational nature, and it is not very often that PD activities lead to design solutions that are concrete enough to be implemented by interaction designers or developers right away. HRI-related PD activities that explicitly involve older adults as co-designers are scarce.
The present research makes a contribution to PD with older adults in HRI, focusing on the following question: How can we enable older adults to participate in the design of multi-modal behaviors for social robots? We introduce the Modality Card Deck, a workshop tool that enriches PD approaches in HRI. The workshop tool is designed as a haptic card deck that guides novices in the domain of HRI design through the design process towards very specific design solutions which are detailed enough to be implemented by interaction designers and software developers. The Modality Card Deck thus aims to support user-driven design of a robotic user interface without requiring the users to have a detailed understanding about the intelligent technology behind it. The present paper describes the Modality Card Deck and its application. It also presents a case study that was conducted with older adults to evaluate the usefulness and usability of the tool as well as the quality of the design solutions developed with the tool in terms of specificity and technical feasibility.

2. Related Work

2.1. Co-Designing with Older Adults

In human-technology interaction (HTI) design, it is common practice to frequently engage older adults in design processes for products that specifically target this user group. The user involvement is, however, not always realized as direct involvement of the target group: Sometimes secondary user groups like caregivers or relatives are involved instead of the primary user group of older adults [11]. It has also been argued that older adults are only involved to legitimize design choices [12]. With regard to intelligent technologies, co-design activities mostly revolve around the field of ambient assisted living (AAL), for use cases such as fall detection, communication with care givers, or the management of medication [11]. End user involvement usually takes place in one of the following development stages [11,13]:
  • Ideation: PD activities are employed with the objective to generate ideas for new products or find new use cases for existing products. However, this is more often realized through traditional user research methods such as interviews or focus groups than through active co-creation;
  • Device (re-) design and prototyping: In this phase the engagement of users in the actual design task is most prominent. Co-creation is mostly carried out in workshop formats, although related tasks can, in principle, also be worked on by single users and in the context of use (compare my previous work [14]). The process can be facilitated by using mock-ups and scenarios [15,16,17];
  • Product testing: Co-creation can be a means to let users evaluate design solutions and the term is sometimes used in this connection, but mostly to describe user testing activities in laboratory environments.
One might assume that the specific target group of older adults needs specific tools and methods to be engaged in co-design. However, this is not necessarily the case, as described, e.g., by Zenella and colleagues [18]: They mention focus groups, user testing with mock-ups, multi-modal prototypes, and questionnaires as suitable means for letting older users participate in design activities—all common methods known from user-centered design that do not need to be adjusted for this specific target group (provided that participants do not have significant motor or cognitive impairments [18]). It should, however, be taken into consideration that older adults might be part of a group of participants who are not very familiar with PD methodology and/or with state-of-the-art technology. To prepare for this case, it is advisable to introduce such novices to the design process and technology in a way that is easy to follow. Similarly, members of the design team who are inexperienced with PD might benefit from methods that provide guidance on how to accompany and facilitate design activities of older adults [13].

2.2. Participatory Design for HRI

While the field of HTI has long embraced PD as a valuable approach for designing interactive applications [19], PD is not yet a well-established and commonly used method in HRI design. However, PD activities are occasionally used, mainly to generate user requirements or general insights, either to gather design inspiration for a specific robot or to derive ideas for new types of robots and their application areas.
Leong and Johnston [20] conducted co-design workshops with eight older adults to assess their opinions and ideas about a robot dog. From the workshops they deduced user requirements, insights about the behavior of the human interaction partner towards the robot dog, as well as general design guidelines. Other researchers focused more on the sociability implied by HRI design. Lee and colleagues [21] investigated the value of participatory methods for the design of social robots in social contexts. In a series of user research interventions including interviews and PD workshops, they collected user requirements, evaluations of existing social robots, and design ideas for new robots. Azenkot, Feng, and Cakmak [22] explored how service robots need to be designed to serve as guides for blind people. They organized co-creation sessions with visually impaired users and designers, during which they created storyboards to document relevant interaction situations. With the help of a human guide, the participants also defined and evaluated the desired behavior for a robot in specific context situations in a Wizard-of-Oz set-up (e.g., movement speed, feedback). The goal of the study was to identify meaningful use cases for the robot guide. Ostrowski et al. [23] propose a PD approach that combines qualitative and quantitative methods to engage older adults in the design of social robots and their functionalities. They placed the robot Jibo in an assisted-living community for three weeks, where 19 participants interacted with it on a daily basis. The researchers recorded and analyzed these interactions and also employed a card-based design kit [24] to gather participants’ feedback about their preferences regarding social functions of the robot such as facilitating remote communication or providing reminders.
As the interaction design of a robot heavily relies on its hardware, it is not surprising that PD is also employed to include users in the design of robot appearance and functionality. To this end, different prototyping methods can be used—ranging from low-fidelity paper-and-pen prototypes to more advanced prototyping tools. In this connection, users are often asked to create their ideal robot, as proposed, e.g., by Caleb-Solly et al. [25]. The researchers engaged users in embodiment and scenario workshops in which they were asked to design and elaborate on their ideal robot. With the workshops they wanted to discover desired functionalities as well as aspects that influence acceptance of social robots in the home environment. Frederiks et al. [5] take a slightly different approach to the same question and present the Do-It-Yourself platform Opsoro, which was developed to enable non-experts to build and customize social robot characters. The platform was successfully used in a series of PD workshops. The focus lies on crafting the physical appearance of the robot (e.g., with cardboard) and programming its actuators, but not on reflecting on and specifying how the robot communicates with the user. Eftring and Frennert [26] combined co-creation activities with qualitative user research methods and prototyping activities for the design of a social assistant robot for older adults, with the goal to strengthen the mutual learning between designers and (future) users. With this mixed-method approach they gathered user requirements regarding the robot’s form factors, appearance, and functionality. The designers documented the design ideas of the five participants with sketches and scenarios. Participants were also enabled to contribute to the technical design of the robot by letting them experiment with and reflect on different types of sensors. Thus, they provided valuable insights about which type of sensor data could be interesting for HRI.
Similarly, Björling and Rose [27] developed a set of PD methods focused on prototyping activities, based on their studies about robots for stress reduction in teenagers. They let the teens describe their dream robot through sketching, storyboarding, and scenario writing, then prototyped it and iteratively improved it through user feedback. They also introduced a robot design challenge where participants created prototypes of robots using cardboard and other handicraft materials. As a third method, they tested interaction scenarios with role-play and virtual reality prototypes.
Another angle to behavior co-creation has been introduced through methods like motion capture, puppeteering, and learning from demonstration. Louie and Nejat [28], for example, developed a program for caregivers to create their own behaviors for a social robot that played a Bingo game with older adults. These approaches offer an easy way for designing and programming novices to create robot applications. However, they require substantial work up front to build the required software and are thus not well suited to creating early, low-fidelity prototypes. More importantly, the resulting behavioral expressions only work with humanoid robots and cannot be applied to other robot appearances (e.g., animoid or abstract robots).
Despite positive experiences with PD in the HRI community, the full potential of this design approach is not yet leveraged. PD activities are not standardized and often considered time-consuming [5]. While the design of the robot’s appearance receives much attention, the behavior of the robot and its interaction with the user is often dealt with implicitly or as a side issue, or addressed solely for humanoid robots.
Thus, there is a need to extend current PD approaches in HRI with methods and tools that provide guidance for users and designers alike to support user involvement in design decisions for interaction and behavior design for different types of robots.

2.3. Behavioral Expressions for Social Robots

When developing a new tool for co-creating robot behavior, it is necessary to take a closer look at what behavioral design for social robots entails. A robot typically consists of a software system and a physical body. Robots thus fall into the category of embodied technology, which poses new challenges for interaction design. Consensus has been reached among HRI researchers that the preferred communication modality for user input is speech. Defining the right output modality for the robot is, however, more complex. A robot could, by design, be able to communicate verbally, but it could also make use of its body for non-verbal communication. The observation of human social interaction suggests that it is the combination of verbal and non-verbal behavior that makes communication effective [29]. It has also been argued that when non-verbal and verbal communications are in conflict, people tend to rely on non-verbal cues for their interpretations [30]. In addition, it is well known that non-verbal communication is not solely related to dynamic body expressions, but that even a static body posture is interpreted as serving a certain communication goal by sending a certain message [31]. When designing robots, it is therefore crucial to make well thought-out decisions regarding their verbal as well as non-verbal behaviors. To do so, a number of modalities can be considered. These modalities can either be similar to mechanisms of human social communication (human-like communication modalities) or defined by additional actuators of the robot (machine-like communication modalities). Breazeal [32] mentions the following human-like modalities that play a role in behavioral design for social robots:
  • Whole body motion,
  • Proxemics,
  • Facial expressions,
  • Gaze behavior,
  • Head orientation and shared attention,
  • Touch,
  • (Para-)linguistic cues, and
  • Verbal output.
These human-like means of communication can be replenished with machine-like modalities as described for example by Embgen [33]:
  • Sound,
  • Color,
  • Light, and
  • Shape.
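Taken together, the human-like and machine-like modalities span the design space for robot behavior. As a minimal illustration, the combined space can be expressed as a small lookup over a robot's actual capabilities; the groupings follow Breazeal [32] and Embgen [33], while the function and variable names are hypothetical, not part of any robot framework:

```python
# Human-like modalities after Breazeal [32], machine-like after Embgen [33].
HUMAN_LIKE = [
    "whole body motion",
    "proxemics",
    "facial expressions",
    "gaze behavior",
    "head orientation and shared attention",
    "touch",
    "(para-)linguistic cues",
    "verbal output",
]

MACHINE_LIKE = ["sound", "color", "light", "shape"]

def design_space(robot_modalities):
    """Return the modalities a given robot can actually express,
    split into human-like and machine-like channels."""
    available = set(robot_modalities)
    return {
        "human-like": [m for m in HUMAN_LIKE if m in available],
        "machine-like": [m for m in MACHINE_LIKE if m in available],
    }

# Example: a robot with speech, gestures, and LED output
print(design_space(["verbal output", "whole body motion", "light"]))
```

The point of such a split is that a design tool can present only the channels the target robot actually offers, which is exactly what the Modality Card Deck does on paper.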
A new tool for PD in HRI that focuses on behavior and interaction design should consider this design space and provide support for the users and design team to embrace its complexity.

2.4. Card-Based Design Tools

There are many tools and methods that can be used to support PD activities. Cards are a very common tool for design workshops to enrich design activities in a tangible and engaging way. They are often used in PD to actively engage users in the design process, as they make the design process visible [34] and help to explain complex concepts to novices in a specific domain [35]. Due to their playful nature, working with cards appears to be less intimidating than the task of designing an interactive system [35]. At the same time, cards help to make ideas explicit and to develop theoretical ideas into concrete, practical design guidelines [36]. Thus, they can act as a communication tool between users and designers [37].
In design research, cards often act as inspirational sources to stimulate new ideas, e.g., in ideation sessions [35]. In addition, card decks can also be employed to guide the design process. This is usually done by providing step-by-step instructions on how to use the card deck [34]. Instructions can, for example, define a workflow and workspace for the cards (e.g., [38]). A predefined workflow can also be supported by introducing different categories of cards (e.g., [39]). By withholding certain cards and showing them later in the process, the complexity of the task is reduced and it is ensured that the users are not overwhelmed.
Studies show that cards are a suitable tool to conduct PD activities with older adults in the field of HRI [23,24]. Singh [24] developed a card-based design kit that helps users to describe and provide feedback on different aspects of voice-based agents. To this end, four types of cards are proposed: Action cards that describe functionalities of the robot, personality cards that enable assessment of one’s own and the agent’s personality, theme cards that stimulate reflection on a high level (e.g., about the impact of the robot on one’s life), and design cards that guide the specification of different aspects of the robot and the interaction experience (form, materiality, inputs, outputs, location, connections via the internet of things, personality and gender, ethics, and a final rating). The cards have been developed in the context of a number of user studies and demonstrated to be helpful for various age groups. Due to their initial focus on voice-based agents, however, they are not well suited to specify non-verbal behavioral expressions for social robots.

3. What Is the Modality Card Deck and How Is It Used?

The Modality Card Deck is a workshop and co-creation tool that supports people in creating multi-modal behavioral expressions for robots. It builds upon previous insights about PD with the specific user group of older adults and in the specific domain of HRI (as outlined above). Previous research showed that PD can yield valuable insights for HRI design. However, the PD activities described above mostly aim to uncover general user requirements, identify use cases, specify the robot’s appearance, or deduce universal design guidelines for HRI. The Modality Card Deck extends this body of research by providing a tool for generating and documenting concrete design decisions about how a robot should communicate with the user, in order for the communication to be experienced as comprehensible and pleasant. The methodology of the Modality Card Deck puts a clear emphasis on enabling older adults to produce concrete multi-modal robot expressions, detailed enough to be implemented by interaction designers and software developers. To this end, a card deck is proposed that is both inspirational and structured, providing step-by-step instructions for users. It guides novice users through the process of understanding their design options, choosing and combining their preferred communication modalities, and documenting them in a complete and precise way.
The Modality Card Deck consists of 40 cards and features 10 categories, one for each communication modality (Figure 1). These modalities were derived from the literature on multi-modal robot behavior described above. Each modality is represented by an expressive icon and color-coded to ensure that the categories are easily distinguishable.
The cards guide the users through the design process in three steps (as described below). To do so, the card deck contains four different cards for each modality. Figure 2 illustrates the three steps and related cards.
Step 1: Select your preferred communication modalities. For this step each modality category has a so-called decision card (marked <D>). This card shows the name and icon of the modality on the front and provides a short description of the modality on the back. The users place all decision cards on the table to get an overview of the communication modalities that are available to design the robot behavior. They can then select the modalities they find appropriate for the robot expression they want to design. The chosen modalities remain on the table, while the other cards are put aside. A specific type of decision card is the twin card. Twin cards are connected by a colored circle (see cards for “Static Posture” and “Dynamic Expression” in Figure 2) and represent communication modalities that are mutually exclusive. This means that these two modalities cannot be used in the same behavioral expression. A robot cannot, for example, maintain a static posture while simultaneously moving its joints.
Step 2: Understand the design challenge. For each of the selected modalities, the users pick the corresponding investigation card (marked <?>). This card contains one or two questions that specify the design challenge that needs to be addressed when including the modality in the behavioral expression of a robot. It guides the users’ attention to specific aspects that need to be taken into account when designing with this communication modality. As an example, the guiding questions for the modality category “Static Posture” are: “Which joints are involved? How are the joints positioned for the posture?”. For “Sound” it first needs to be defined: “What type of sound is used? (Single tone, chord, sequence of tones, melody, mechanical sound)”.
Step 3: Specify the communication modality. In this step, the users add the two remaining cards for each modality: The parameter card and the idea card (both marked <!>). The parameter card contains a list of parameters that need to be specified in order to realize the behavioral expression on a robot. The idea card is an empty card on which the users can note down the parameter specifications. Parameter cards and idea cards are used together in one step, so that the users can go over each parameter one-by-one and directly note down their ideas on how to address the parameter. For example, to specify the “Static Posture” of the robot, the parameter card instructs the users to consider two parameters for each joint: Rotation and pitch. To define the “Sound” the users are asked to consider volume, pitch, pleasantness of the composition, annoyance/noisiness, and rhythm.
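The three steps above can be summarized as a small data model: each modality category carries its four cards, twin cards enforce mutual exclusivity at selection time, and the idea card collects the parameter values. The following Python sketch is purely illustrative; the class and function names are invented here and are not part of the tool:

```python
from dataclasses import dataclass, field

@dataclass
class ModalityCards:
    """The four cards of one modality category in the Modality Card Deck."""
    modality: str            # e.g., "Static Posture"
    description: str         # back of the decision card <D>
    guiding_questions: list  # investigation card <?>
    parameters: list         # parameter card <!>
    twin: str = None         # mutually exclusive twin modality, if any
    idea: dict = field(default_factory=dict)  # idea card <!> (filled by users)

def select(deck, chosen):
    """Step 1: keep only the chosen modalities, enforcing twin exclusivity."""
    selected = [c for c in deck if c.modality in chosen]
    for card in selected:
        if card.twin is not None and card.twin in chosen:
            raise ValueError(f"{card.modality} and {card.twin} are twin "
                             "modalities and cannot be combined")
    return selected

def specify(card, **values):
    """Step 3: note parameter specifications on the idea card."""
    for name, value in values.items():
        card.idea[name] = value
    return card

# Example: specifying "Sound" for a behavioral expression
sound = ModalityCards(
    modality="Sound",
    description="Non-verbal auditory output",
    guiding_questions=["What type of sound is used?"],
    parameters=["volume", "pitch", "pleasantness", "noisiness", "rhythm"],
)
specify(sound, volume="medium", pitch="rising", rhythm="short jingle")
```

The twin-card check mirrors what the colored circle on the physical cards communicates visually: a behavioral expression cannot contain both a static posture and a dynamic expression.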
The Modality Cards can be used in various types of workshops to engage (future) users in PD activities regarding HRI and robot behavior design.

4. Case Study: Designing a Quizmaster Robot For and with Older Adults

The Modality Card Deck was used and evaluated as part of the NIKA project (user-centered interaction design for context sensitive, acceptable robots; German: Nutzerzentrierte Interaktionsgestaltung für Kontext-sensitive, akzeptable Roboter). The project examines how robots can support older adults’ health, well-being, and independent living with a focus on the usability and user experience of the interactive robot behavior. To this end, different applications have been ideated that are suitable to promote older adults’ health in a playful and engaging way. The Modality Card Deck was used to actively involve older adults in the design process of a robot-based quiz game. Quiz games are frequently proposed as entertainment applications for older adults, and also contribute to the users’ well-being and health, serving as a regular brain training activity. The quiz game was developed based on the human-centered design process and inspired by user research activities [40]. In the project team, we developed the course and intelligent software for the game and were then faced with the challenge of designing the behavior shown by the robot in the different phases of the game (Figure 3). For this design stage, we decided to involve older adults directly in the design process, in order to generate behavioral expressions that matched their needs and expectations. The goal was to support them in creating behavioral expressions by themselves, instead of assessing their requirements and developing the behavioral expressions in the design team. Thus, our future users got the chance to influence and shape the way the robot would communicate with them during the quiz game, especially in those phases that are related to user engagement and motivation.

4.1. Use Case

In the quiz game the robot acts as a quiz master who challenges the player with questions. Figure 3 summarizes the most important interaction steps in the course of the game. The player has to select the correct answer to the question out of three potential answers and receives feedback from the quiz master. The goal of the quiz game is to activate and entertain older adults by engaging them in brain training activities. In this set-up, the robot can take different roles to motivate the user to regularly play the game: It could, for example, take the role of a coach who persistently encourages the user in an empathic way. It could also act as an opponent that continuously pushes the user to a better performance by challenging their knowledge [41]. Which role the robot takes is expressed through its behavior and the different communication modalities. Part of the goal of the case study was to discover which roles the robot should take, in order to motivate the player of the quiz game. To this end, participants designed their own preferred behavior for the robot, which—implicitly—also yielded information about the role participants would assign to the robot quizmaster. This role could potentially be a coach, opponent, or a completely different character.
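The interaction steps summarized in Figure 3 can be read as a simple game loop in which the robot's feedback phase is the hook for the co-created behaviors. A minimal sketch, with function names invented for illustration (the actual game software is not described at this level in the paper):

```python
def quiz_round(question, options, correct_index, get_answer):
    """One round of the quiz: the robot asks a question, the player picks
    one of three answers, and the robot reacts with feedback.
    Returns which co-created behavioral expression should be triggered."""
    answer = get_answer(question, options)  # player selects an option index
    if answer == correct_index:
        return "positive"  # trigger the positive-feedback expression
    return "negative"      # trigger the negative-feedback expression

# Example with a scripted player who answers correctly
feedback = quiz_round(
    "Which planet is closest to the sun?",
    ["Venus", "Mercury", "Earth"],
    correct_index=1,
    get_answer=lambda q, opts: 1,
)
print(feedback)  # "positive"
```

Separating the game logic from the feedback expressions is what makes it possible to swap in behaviors designed by the participants without touching the course of the game.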

4.2. Participants and Workshop Procedure

Three Co-Creation Workshops were conducted with a total of 13 participants. The participants were between 60 and 81 years old (Mean age = 68.07, SD = 4.78) and all retired.
The Modality Card Deck was employed as a tool to enable the workshop participants to create their own concrete multi-modal behaviors for the robot. While the methodology of the Modality Card Deck and the Co-Creation Workshop can be used independently of the robot appearance, we chose to let participants develop behaviors for one specific robot: The Pepper robot by SoftBank Robotics ([42]; Figure 4). For novices in the field of HRI design, it is easier to create design solutions for a specific robot with predefined communication modalities.
The workshop consisted of three parts: Warm-Up, Use Case Exploration, and Robot Behavior Co-Creation, which concluded with a feedback round regarding the co-creation methodology and the Modality Card Deck. The tasks in the three parts were constructed in such a way that participants always documented their thoughts and ideas using the templates and materials that were specifically designed for this purpose. The discussions and explanations during the workshops were documented by the workshop facilitator by taking notes and photos. In this paper, the focus lies on the last part, which will be described in detail. The first two parts are based on established PD tasks (compare Section 2) and are only described in a superficial way. Each of the parts was scheduled for one hour, so that the workshop lasted three hours in total.
Participants were welcomed and signed the informed consent form. They were then introduced to the topic and goal of the workshop. The goal of the Warm-Up phase was to make participants familiar with the topic of playing a quiz game and create a pleasant atmosphere as well as a creative mindset for the rest of the workshop. To do so, participants were first asked to act out a gesture that expresses how they experience playing games. They were then introduced to the method of Lego® Serious Play®, which facilitates the expression of thoughts and ideas through metaphorical modeling with Lego® bricks. Participants were instructed to build and then present a model of a memorable positive experience they once had while playing. As a third task, participants visualized and characterized their own dream robot for playing games using either Lego® bricks, play-doh®, or drawings.
In the Use Case Exploration part, participants were made familiar with the course of the quiz game (as depicted in Figure 3). Using Emotion Cards [43] they were asked to reflect upon their feelings when playing the quiz game with a computer as compared to a robot and in different social contexts (in the company of a close, well-known, or unknown person). This exercise encouraged participants to not only reflect on their own emotions in the use cases, but also on how the quiz master robot could influence these emotions.
Users can only participate effectively in co-design if they first gain some understanding of the technology they are designing for [20]. Thus, in the Robot Behavior Co-Creation part, participants were introduced to the Pepper robot and its communication modalities. To do so, they watched a video of the robot and then received a detailed description of the modalities the robot can use to communicate with the user. We then showed them the Modality Card Deck and instructed them in how to use it to create their own multi-modal robot behaviors in groups of two. Participants were asked to design behavioral expressions for the part of the quiz game where the robot provides feedback about the user’s answer. Thus, they had to design two different behavioral expressions:
  • Behavior of the robot when it tells the user that the answer is correct (positive feedback);
  • Behavior of the robot when it tells the user that the answer is incorrect (negative feedback).
We chose these interaction situations because they are the ones that contribute most to engaging and motivating the user, and are thus most revealing about the role and character of the robot. Participants were instructed to create behavioral expressions that they would perceive as comprehensible and pleasant, because we wanted them to focus on design solutions that provide a positive user experience. After presenting their design solutions to the group, participants were engaged in a short group discussion to give them the opportunity to provide feedback on the Modality Card Deck. Figure 5 provides impressions of how the participants worked with the modality cards, following the three steps.
The following instructions were used to introduce the Modality Card Deck: “You have now watched the Pepper robot in action. I would like you to now consider that this robot is the quizmaster and you have the task to design its behavior so that it will be the perfect quiz-playing companion to you. To do so, please take a look again at the course of the quiz game and the related emotions you noted down. Please focus on the situation in which you have provided the answer to the question and the robot will now reveal to you whether your answer is correct or not. It will now be your task to describe the behavior of the robot in these two situations. How would it react if you answered correctly or incorrectly? To support you in specifying the behavior, I provided you with a card deck. It includes all the different modalities the robot could use to communicate with you. Let’s take a look at how it works. First, you can put the cards with the symbol <D> on the table and choose the modalities that you would like the robot to use. For the selected cards, you then take from the card deck all other cards which have the same color and icon and place them below the modality cards in the following order: The card with the symbol <?>, the card with the symbol <!>, and the empty card with the symbol <!>. The first two provide questions and parameters that you should consider when specifying the behavior for the robot. You can note down your ideas on the empty cards. Please specify one behavior for the case that your answer is correct and one for the case that your answer is incorrect. Specify the behavior in such a way that it will be pleasant for you and increase your motivation during the quiz game”.
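The three-step process described in these instructions can be expressed as a simple data model. The following sketch is purely illustrative; the class and field names are assumptions and not part of the published card deck:

```python
from dataclasses import dataclass

# Illustrative model of the card deck: each communication modality comes
# with a decision card <D>, an investigation card <?> with guiding
# questions, a parameter card listing the attributes to specify, and an
# empty idea card <!> for the participants' notes.

@dataclass
class ModalityCardSet:
    modality: str        # e.g. "Speech", "Light", "Dynamic Expression"
    questions: list      # investigation card <?>: guiding questions
    parameters: list     # parameter card: attributes to specify
    idea_notes: str = "" # idea card <!>: participants' free-text description

def select_modalities(deck, chosen):
    """Step 1: keep only the card sets for the chosen modalities,
    reducing the number of cards the group has to work with."""
    return [cards for cards in deck if cards.modality in chosen]

deck = [
    ModalityCardSet("Speech", ["What should the robot say?"], ["volume", "speed"]),
    ModalityCardSet("Light", ["Which color fits the situation?"], ["color", "brightness"]),
    ModalityCardSet("Sound", ["Which sound fits?"], ["volume", "duration"]),
]

# A group decides to use speech and light for the positive feedback situation.
selected = select_modalities(deck, {"Speech", "Light"})
```

Steps two and three then correspond to answering each selected set's `questions`, working through its `parameters`, and filling in `idea_notes`.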

4.3. Workshop Results

The participants produced six behavioral expressions for the negative feedback situation and five behavioral expressions for the positive feedback situation. One behavioral expression for the latter was not finished in time and was thus removed from the data set. The following sections present some examples of the produced design solutions and show the resulting behavioral expressions on the Pepper robot. Similarities and differences of the design solutions are then discussed, as well as the main insights from the robot behavior co-creation phase.

4.3.1. Resulting Behavioral Expressions

Figure 6 and Figure 7 provide examples for the behavioral expressions of the Pepper robot produced by the workshop participants for the positive and negative feedback situations. The workshop participants documented their specifications of the robot behavior using the idea cards (Figure 6A and Figure 7A). As the behavioral expressions were documented as text descriptions on a number of idea cards, no additional effort was required by the design team to interpret the results. If the descriptions were detailed and precise enough, they were implemented on the Pepper robot by a team of interaction designers after the workshops. Whether a description was detailed enough was determined by the design team during the implementation task. A description for a modality was labeled as “very detailed” if the design team could implement it on the Pepper robot without asking further questions. The label “somewhat detailed” was used when the description could be implemented by consulting the notes taken by the facilitator during the workshops. Descriptions were defined as “incomplete” if the design team lacked sufficient information for implementing the described ideas. Figure 8 and Figure 9 provide an overview of the assessment of the clarity of the descriptions produced by the workshop participants.
The results were documented as videos, presented as a series of stills in Figure 6B and Figure 7B.

4.3.2. Use of Different Communication Modalities

The workshop results show that the Modality Card Deck successfully guided the users to create behavioral expressions for the Pepper robot that combine different communication modalities. Following the different steps with the help of the different types of cards, they were able to make well-considered design decisions on which modalities to include (and which to leave aside). For the positive feedback situation, all five groups independently decided to let the robot show a “Dynamic Expression” in a “Fixed Position”, accompanied by speech. The described “Dynamic Expressions” ranged from nodding and a thumbs-up to extending both arms towards the user or into the air. Two groups also used the display to accompany the verbal feedback with pictures or color. The other groups included green lights in their design solutions and one of them also added sound. Figure 8 provides an overview of the modalities used for the positive feedback situation.
For the negative feedback situation, participants created more diverse behavioral expressions (see Figure 9). Across the six groups, each communication modality was used at least once.

4.3.3. Completeness and Precision

The specification of the different communication modalities varied regarding their completeness and precision. Figure 8 and Figure 9 indicate this variety:
  • Speed (of movements, dynamic expressions or speech): Participants used imprecise wordings like “slowly”, “normal speed”, “medium speed”, “not fast”, or described the speed in relation to other references “faster than...”;
  • Volume (speech and sounds): Participants used imprecise wordings like “not too loud” or “normal volume”;
  • Brightness (light): Participants used imprecise wordings like “not too bright” and “rather low brightness”.
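One way a design team might handle such imprecise wordings during implementation is an explicit lookup table that maps each recurring phrase to a concrete value, with unknown phrases flagged for clarification. The mapping and scales below are illustrative assumptions, not values used in the case study:

```python
# Hypothetical translation of participants' imprecise wordings into
# implementable parameter values (fractions of the robot's defaults).

SPEED_TERMS = {
    "slowly": 0.5,         # fraction of the robot's default movement/speech speed
    "not fast": 0.7,
    "medium speed": 0.8,
    "normal speed": 1.0,
}

VOLUME_TERMS = {
    "not too loud": 0.5,   # fraction of the maximum output volume
    "normal volume": 0.7,
}

def resolve(term, mapping, default=None):
    """Look up a participant's wording; an unknown wording yields the
    default and should be clarified with the participants."""
    return mapping.get(term.strip().lower(), default)

speech_speed = resolve("Not fast", SPEED_TERMS)
brightness = resolve("rather low brightness", VOLUME_TERMS)  # unknown -> None
```

Such a table makes the interpretation of each wording explicit and reviewable, instead of leaving it to the ad hoc judgment of whoever programs the robot.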

4.3.4. Additional Insights for the Behavioral Design of the Quiz Master Robot

The design solutions proposed by the workshop participants also provide insights that can help interaction designers in HRI with their task of creating behavioral expressions for social robots. In this case study, we deduced some general requirements for the behavioral design of the robot as a quiz master. These requirements were deduced by looking for similarities between the design propositions of the different groups. We first examined which role of the quiz master robot the participants conveyed through their proposed design solutions. A large majority described the robot as encouraging and kind, sometimes even funny. When comparing the design ideas for the positive and negative feedback situations, we noted that the participants chose more expressive behavior for the positive situation, reflected in modality specifications that were longer in duration and more visible to bystanders. In this situation, they wanted the robot to be rather emotional and accentuate their success. To this end, the participants specified longer sentences for the robot, often including appreciation or praise of the quiz player’s performance, as well as more extensive gestures (dynamic expressions) and light signals. For the negative feedback situation, on the other hand, they designed more discreet behaviors to play down their failure. This was realized by short verbal expressions which only stated the incorrectness of the answer. Dynamic expressions were specified to be shorter and more subtle than for the positive feedback situation. In addition, we learned that people might find a distance of 1 to 1.5 m between robot and user appropriate. The participants also had similar ideas regarding the speech output: Although it became obvious during the design sessions that different people have different preferences regarding the sound of the robot’s voice, all groups decided that their robot should have a human voice rather than a mechanical one.
They emphasized that the sound of the voice should be natural and comprehensible, and that the robot should be able to copy the intonation of a human voice so as to be able to accentuate certain words. Interestingly, the sentences they formulated for the robot were always short and did not include any subordinate clauses. It also became clear that participants preferred their robot to show meaningful gestures when providing feedback on the quiz answers. Only once was a static posture chosen (for the negative feedback situation). This preference for meaningful gestures is in line with previous research (e.g., [44]).

4.4. Feedback on the Modality Card Deck

During the discussion round, two of the three workshop groups agreed that they experienced the initial task of designing a behavioral expression for the robot and the number of cards as overwhelming and confusing. This feeling did, however, disappear once they were introduced to the three steps and the different types of cards and started selecting their preferred communication modalities. Before undertaking this step, four out of the six groups called upon the workshop facilitator for re-assurance that they understood the task correctly. After that, all groups worked on their own. During the feedback round, the participants reported that the first step was helpful in reducing the complexity of the design task and helped them to focus on single aspects of the behavior. Thus, they could occupy themselves with small individual building blocks of the robot behavior instead of having to think about the complete behavior straight away. Participants mentioned that the cards invite you to start on the task right away, without overthinking it. They discussed that the Modality Card Deck provides good structure, guidance, and instruction for working towards the design of the behavioral robot expression. They appreciated how quickly they were able to produce their own, very concrete design solutions without any prior knowledge of HRI design. Step one was perceived as especially helpful, as it reduces the number of cards to work with. One group emphasized that the paired cards were especially helpful for choosing the modalities in step one. Participants also appreciated the initial overview of all modalities, which they perceived as stimulating and inspiring for developing their own ideas.
The effort for the facilitator of the co-creation workshop was evaluated as comparatively low. Once participants became familiar with them, the different types of cards of the Modality Card Deck were self-explanatory, and the participants could guide themselves through the process of generating robot behavior.

4.5. Learnings and Ideas for Improving the Modality Card Deck

To sum up, using the Modality Card Deck, the participants were able to independently produce concepts of multi-modal behavioral expressions for the Pepper robot. They also experienced working with the cards as positive and helpful. The cards can hence be regarded as a suitable tool to engage older adults in the design of communication and interaction between users and social robots. Still, the three workshops also revealed some ideas for improvement. The Modality Card Deck in its current state does not provide any guidance for putting the chosen communication modalities in a chronological order. This could be supported by introducing a fourth step: timing the interplay of communication modalities. While the tangible cards can easily be arranged on the table in the desired order to indicate the time course, some additional visual support for arranging the cards would be helpful. This could, for example, be a timeline as presented in Figure 10, on which the users can mark the duration, as well as the starting and end time points for the different modalities. Moreover, not all behavioral expressions were described in enough detail to be implemented on the Pepper robot. More specifically, this mostly concerns the design propositions for the modalities “Sound”, “Speech”, and “Dynamic Expression”. It became obvious that some parameters cannot easily be specified by non-experts without the opportunity of experiencing them. For “Dynamic Expression”, some clarification could be provided during the workshops when participants acted out the expression themselves. A video recording of the users performing the gesture could, together with the textual specification, provide sufficient guidance for the implementation of the dynamic expression. This is, of course, only an option when designing for a humanoid robot. To specify sound and light, it could be helpful to provide participants with the opportunity to experiment with different settings for the communication modality.
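The proposed fourth step, placing the chosen modalities on a shared timeline, could be captured in a structure like the following sketch. The field names and the duration helper are illustrative assumptions, not part of the card deck:

```python
from dataclasses import dataclass

# Hypothetical record for one modality placed on the timeline (Figure 10):
# start and end mark when the modality is active within the expression.

@dataclass
class TimedModality:
    modality: str
    start: float  # seconds from the beginning of the behavioral expression
    end: float    # seconds from the beginning of the behavioral expression

def total_duration(timeline):
    """Overall duration of the multi-modal behavioral expression."""
    return max(t.end for t in timeline) if timeline else 0.0

# Example: a positive feedback expression where the gesture starts slightly
# after the speech and the green light spans the whole expression.
timeline = [
    TimedModality("Speech", 0.0, 2.5),
    TimedModality("Dynamic Expression", 0.5, 2.0),
    TimedModality("Light", 0.0, 3.0),
]
```

Recording start and end points per modality would give the implementation team exactly the ordering information that the current three-step process leaves unspecified.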
Naturally, the best option would be to have a robot present at the PD workshop and let participants work directly on the robot, for example by using rapid prototyping techniques. However, this is often not feasible due to transportation expenditure, time constraints, and the availability of the required number of robots. As an alternative, one could include other tools that allow participants to specify the parameters for single communication modalities, such as manipulable light sources and robot speech synthesizers.
The lack of completeness and precision of some of the developed behavioral expressions only became obvious at the end of each workshop, when there was no time for the workshop groups to refine or further detail their design propositions. This suggests that the step-by-step process proposed by the Modality Card Deck could be improved. It might be necessary to conduct two rounds of the Robot Behavior Co-Creation phase and let participants present their preliminary ideas, so that they can receive feedback from the facilitator on where to provide additional detail and can iterate on their design ideas. In addition, the whole design process could benefit from letting more members of the design team take part in the co-creation workshop, especially those entrusted with the programming of the robot. Direct involvement in the workshop would provide them with the opportunity to interview users about their design choices and thus help them to gain valuable insights for the implementation of the behaviors on the robot.

5. Conclusions and Future Work

The Modality Card Deck was proposed as a workshop tool that guides through the process of designing multi-modal behavioral expressions for social robots. The case study showed how the tool was used in PD workshops with older users to design behaviors for a humanoid quiz master robot. With the help of the Modality Card Deck, the participants were able to reflect on which modalities they found appropriate in a given interaction situation and to specify their ideas. While the main goal of the Modality Card Deck is to enable (future) users to create design solutions that are specific enough to be implemented by designers and software developers, this was not achieved by all workshop groups. This shortcoming could be overcome by adjusting the Modality Card Deck and workshop procedure as described above. The co-creation workshop using the Modality Card Deck produced a number of different versions for the same interaction situation (in this case study, the positive and negative feedback situations). These different design propositions can be valuable input for the next steps of the human-centered design process. When described precisely enough, the behavioral expressions can be implemented on a robot. Thus, testable prototypes are created that can be evaluated with a larger group of users in iterative user testing. Having a broad variety of robot behaviors to test is a huge advantage when striving for personalized HRI design. Different users have different needs and abilities and, as mentioned earlier, might experience different robot behaviors as pleasant. A large-scale user test with a number of different behavioral expressions for the same interaction situation can reveal which type of user prefers which type of robot behavior. The goal should hence not be to choose one of the proposed behaviors, but rather to find out how to best match different user types with different robot behaviors.
This matching can ultimately be done by intelligent system components and, in the long run, provide a personalized interaction experience that can contribute to the increased acceptance of a social robot in elderly care. To fully leverage the potential of personalized HRI for older adults, more co-creation workshops will be planned, and behavioral robot expressions for different interaction situations as well as different types of robots (humanoid, animal-like, abstract) will be generated. Our final goal is to create a database with variants of behavioral expressions for social robots for a broad variety of interaction situations. This database can then be used to tailor the behavior of the robot to the individual user’s needs and preferences, thus providing a personalized experience during HRI.
In the case study, a particular use case and a humanoid robot were used to employ the Modality Card Deck. The cards are, however, designed and phrased in a generic way, thus allowing them to be applied to different scenarios and different robots. They could, in fact, also be used to co-design other intelligent technology that uses multi-modal output to communicate with the user. Thus, they provide an interesting tool to co-create interaction experiences with tangible interfaces of AI applications. Similar to the Design Kit for voice-based agents by Singh [24], the cards can be used in any context without having to provide technical equipment or an actual robot. Thus, they can easily be applied to engage participants in PD activities in their daily life context, e.g., in a care home. They can also be used to develop ideas for robots that do not exist yet or only exist as form prototypes. Nonetheless, the potential impact of the Modality Card Deck could be further improved by combining it with methods that allow users to also realize their ideas for the different modalities on a physical or simulated robot, as, e.g., proposed by Tian and colleagues [45]. Their PD approach includes prototyping behaviors for the Pepper robot using the graphical programming tool Choregraphe with a simulated and later a physical robot. With more time and the required technical set-up, the co-creation methodology proposed above could be extended by a fourth workshop phase during which participants prototype the developed behavioral expression and iteratively improve it themselves. Still, this would probably require multiple sessions, in order for them to learn how to use the prototyping software.
So far, the Modality Card Deck only supports the co-creation of the robot’s side of the interaction. The behavior of the user and their way of communicating with the robot is not addressed. Thus, a next step should be to extend the proposed methodology with a component that supports older adults in reflecting on and documenting design ideas for how to realize user input to the robot system. To this end, the Modality Card Deck could be combined with other existing approaches that explain sensory capabilities of the robot and make them graspable for the PD participants (e.g., [26]). This addition appears to be crucial, especially for older adults who might have special requirements based on their motor or sensory abilities.

Funding

This research was conducted as part of the NIKA project and funded by the German Federal Ministry of Education and Research (BMBF 16SV7941).

Institutional Review Board Statement

Ethical review and approval were not required for the study in accordance with local legislation and institutional requirements.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Acknowledgments

I thank all participants who took part in the co-creation workshops and Tamara Schwarz for the graphical design of the Modality Card Deck.

Conflicts of Interest

The author declares no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
HTI: Human-technology interaction
HRI: Human-robot interaction
PD: Participatory design
AAL: Ambient assisted living

References

  1. Forlizzi, J. How robotic products become social products: An ethnographic study of cleaning in the home. In Proceedings of the 2nd ACM/IEEE International Conference on Human-Robot Interaction (HRI), Arlington, VA, USA, 10–12 March 2007; pp. 129–136. [Google Scholar]
  2. Tanaka, F.; Ghosh, M. The implementation of care-receiving robot at an English learning school for children. In Proceedings of the 6th International Conference on Human-Robot Interaction, Lausanne, Switzerland, 8–11 March 2011; pp. 265–266. [Google Scholar]
  3. Wada, K.; Shibata, T. Living with seal robots—its sociopsychological and physiological influences on the elderly at a care house. IEEE Trans. Robot. 2007, 23, 972–980. [Google Scholar] [CrossRef]
  4. McArthur, L.Z.; Baron, R.M. Toward an ecological theory of social perception. Psychol. Rev. 1983, 90, 215. [Google Scholar] [CrossRef]
  5. Frederiks, A.D.; Octavia, J.R.; Vandevelde, C.; Saldien, J. Towards Participatory Design of Social Robots. In Proceedings of the IFIP Conference on Human-Computer Interaction, Paphos, Cyprus, 2–6 September 2019; Springer: Cham, Switzerland, 2019; pp. 527–535. [Google Scholar]
  6. Chen, K.; Chan, A. Use or non-use of gerontechnology—A qualitative study. Int. J. Environ. Res. Public Health 2013, 10, 4645–4666. [Google Scholar] [CrossRef] [PubMed]
  7. Künemund, H. Chancen und Herausforderungen assistiver Technik. Nutzerbedarfe und Technikakzeptanz im Alter. Tatup-Z. FÜR Tech. Theor. Und Prax. 2015, 24, 28–35. [Google Scholar] [CrossRef] [Green Version]
  8. Syrdal, D.S.; Koay, K.L.; Walters, M.L.; Dautenhahn, K. A personalized robot companion?-The role of individual differences on spatial preferences in HRI scenarios. In Proceedings of the RO-MAN 2007-The 16th IEEE International Symposium on Robot and Human Interactive Communication, Jeju, Korea, 26–29 August 2007; pp. 1143–1148. [Google Scholar]
  9. Sanders, E.B.N.; Stappers, P.J. Co-creation and the new landscapes of design. Co-Design 2008, 4, 5–18. [Google Scholar] [CrossRef] [Green Version]
  10. Beimborn, M.; Kadi, S.; Köberer, N.; Mühleck, M.; Spindler, M. Focusing on the human: Interdisciplinary reflections on ageing and technology. In Ageing and Technology; Transcript Verlag: Bielefeld, Germany, 2016; p. 311. [Google Scholar]
  11. Merkel, S.; Kucharski, A. Participatory Design in Gerontechnology: A systematic literature review. Gerontologist 2018, 59, e16–e25. [Google Scholar] [CrossRef] [PubMed]
  12. Östlund, B.; Olander, E.; Jonsson, O.; Frennert, S. STS-inspired design to meet the challenges of modern ageing. Welfare technology as a tool to promote user driven innovations or another way to keep older users hostage? Technol. Forecast. Soc. Chang. 2015, 93, 82–90. [Google Scholar] [CrossRef]
  13. Sumner, J.; Chong, L.S.; Bundele, A.; Lim, Y.W. Co-designing technology for ageing in place: A systematic review. Gerontologist 2020, gnaa064. [Google Scholar] [CrossRef] [PubMed]
  14. Pollmann, K.; Fronemann, N.; Krüger, A.E.; Peissner, M. PosiTec—How to Adopt a Positive, Need-Based Design Approach. In Design, User Experience, and Usability: Users, Contexts and Case Studies; Marcus, A., Wang, W., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 52–66. [Google Scholar]
  15. Frennert, S.; Eftring, H.; Östlund, B. Older People’s Involvement in the Development of a Social Assistive Robot. In Social Robotics; Herrmann, G., Pearson, M.J., Lenz, A., Bremner, P., Spiers, A., Leonards, U., Eds.; Springer International Publishing: Cham, Switzerland, 2013; pp. 8–18. [Google Scholar]
  16. Giorgi, S.; Ceriani, M.; Bottoni, P.; Talamo, A.; Ruggiero, S. Keeping “InTOUCH”: An Ongoing Co-design Project to Share Memories, Skills and Demands through an Interactive Table. In Human Factors in Computing and Informatics; Holzinger, A., Ziefle, M., Hitz, M., Debevc, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 633–640. [Google Scholar]
  17. Lindsay, S.; Jackson, D.; Schofield, G.; Olivier, P. Engaging Older People Using Participatory Design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Austin, TX, USA, 5–10 May 2012; Association for Computing Machinery: New York, NY, USA, 2012. CHI ’12. pp. 1199–1208. [Google Scholar]
  18. Zanella, A.; Mason, F.; Pluchino, P.; Cisotto, G.; Orso, V.; Gamberini, L. Internet of Things for Elderly and Fragile People. arXiv 2020, arXiv:2006.05709. [Google Scholar]
  19. Muller, M.J. Participatory design: The third space in HCI. In The human-computer interaction handbook; CRC Press: Boca Raton, FL, USA, 2007; pp. 1087–1108. [Google Scholar]
  20. Leong, T.W.; Johnston, B. Co-design and robots: A case study of a robot dog for aging people. In Proceedings of the International Conference on Social Robotics, Kansas City, MO, USA, 1–3 November 2016; Springer: Cham, Switzerland, 2016; pp. 702–711. [Google Scholar]
  21. Lee, H.R.; Šabanović, S.; Chang, W.L.; Hakken, D.; Nagata, S.; Piatt, J.; Bennett, C. Steps toward participatory design of social robots: Mutual learning with older adults with depression. In Proceedings of the 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Vienna, Austria, 6–9 March 2017; pp. 244–253. [Google Scholar]
  22. Azenkot, S.; Feng, C.; Cakmak, M. Enabling building service robots to guide blind people a participatory design approach. In Proceedings of the 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, New Zealand, 7–10 March 2016; pp. 3–10. [Google Scholar]
  23. Ostrowski, A.K.; DiPaola, D.; Partridge, E.; Park, H.W.; Breazeal, C. Older Adults Living With Social Robots: Promoting Social Connectedness in Long-Term Communities. IEEE Robot. Autom. Mag. 2019, 26, 59–70. [Google Scholar] [CrossRef]
  24. Singh, N. Talking Machines: Democratizing the Design of Voice-Based Agents for the Home. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2018. [Google Scholar]
  25. Caleb-Solly, P.; Dogramadzi, S.; Ellender, D.; Fear, T.; van den Heuvel, H. A mixed-method approach to evoke creative and holistic thinking about robots in a home environment. In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction, Bielefeld, Germany, 3–6 March 2014; ACM: New York, NY, USA, 2014; pp. 374–381. [Google Scholar]
  26. Eftring, H.; Frennert, S. Designing a social and assistive robot for seniors. Z. FÜR Gerontol. Und Geriatr. 2016, 49, 274–281. [Google Scholar] [CrossRef]
  27. Björling, E.A.; Rose, E. Participatory Research Principles in Human-Centered Design: Engaging Teens in the Co-Design of a Social Robot. Multimodal Technol. Interact. 2019, 3, 8. [Google Scholar] [CrossRef] [Green Version]
  28. Louie, W.Y.G.; Nejat, G. A Social Robot Learning to Facilitate an Assistive Group-Based Activity from Non-expert Caregivers. Int. J. Soc. Robot. 2020, 12, 1159–1176. [Google Scholar] [CrossRef]
  29. Argyle, M. Non-verbal communication in human social interaction. In Non-Verbal Communication; Cambridge University Press: Cambridge, UK, 1972; Volume 2. [Google Scholar]
  30. Phutela, D. The importance of non-verbal communication. Iup J. Soft Ski. 2015, 9, 43. [Google Scholar]
  31. Watzlawick, P.; Beavin, J.; Jackson, D. Some Tentative Axioms of Communication; Routledge: London, UK, 2017. [Google Scholar]
  32. Breazeal, C. Role of Expressive Behaviour for Robots that Learn from People. Philos. Trans. R. Soc. Lond. Ser. B 2009, 364, 3527–3538. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  33. Embgen, S.; Luber, M.; Becker-Asano, C.; Ragni, M.; Evers, V.; Arras, K.O. Robot-specific social cues in emotional body language. In Proceedings of the IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, 9–13 September 2012; pp. 1019–1025. [Google Scholar]
  34. Wölfel, C.; Merritt, T. Method card design dimensions: A survey of card-based design tools. In Proceedings of the IFIP Conference on Human-Computer Interaction, Cape Town, South Africa, 2–6 September 2013; Springer: Berlin/Heidelberg, Germany, 2013; pp. 479–486. [Google Scholar]
  35. Mora, S.; Gianni, F.; Divitini, M. Tiles: A card-based ideation toolkit for the internet of things. In Proceedings of the 2017 Conference on Designing Interactive Systems, Edinburgh, UK, 10–14 June 2017; pp. 587–598.
  36. Deng, Y.; Antle, A.N.; Neustaedter, C. Tango cards: A card-based design tool for informing the design of tangible learning games. In Proceedings of the 2014 Conference on Designing Interactive Systems, Vancouver, BC, Canada, 21–25 June 2014; pp. 695–704.
  37. Beck, E.; Obrist, M.; Bernhaupt, R.; Tscheligi, M. Instant card technique: How and why to apply in user-centered design. In Proceedings of the Tenth Anniversary Conference on Participatory Design, Bloomington, IN, USA, 30 September–4 October 2008; pp. 162–165.
  38. Alves, V.; Roque, L. A deck for sound design in games: Enhancements based on a design exercise. In Proceedings of the 8th International Conference on Advances in Computer Entertainment Technology, Lisbon, Portugal, 8–11 November 2011; pp. 1–8.
  39. Dibitonto, M.; Tazzi, F.; Leszczynska, K.; Medaglia, C.M. The IoT design deck: A tool for the co-design of connected products. In Proceedings of the International Conference on Applied Human Factors and Ergonomics, Los Angeles, CA, USA, 17–21 July 2017; Springer: Cham, Switzerland, 2017; pp. 217–227.
  40. Ziegler, D.; Pollmann, K.; Fronemann, N.; Tagalidou, N. HCD4Personalization: Human-Centered Interaction Design Based on Individual User Characteristics (in German). In Proceedings of the Mensch und Computer 2019-Workshop, Hamburg, Germany, 8–11 September 2019.
  41. Pollmann, K.; Ziegler, D. Personal Quizmaster: A Pattern Approach to Personalized Interaction Experiences with the MiRo Robot. In Proceedings of the Conference on Mensch und Computer, Magdeburg, Germany, 6–9 September 2020.
  42. Pepper. Available online: https://www.softbankrobotics.com/emea/de/pepper (accessed on 7 June 2021).
  43. Yoon, J.; Pohlmeyer, A.E.; Desmet, P.M. Positive Emotional Granularity Cards; Delft Institute of Positive Design: Delft, The Netherlands, 2015.
  44. Ruijten, P.A.; Cuijpers, R.H. Does a friendly robot make you feel better? In Proceedings of the 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), New Delhi, India, 14–18 October 2019; pp. 1–6.
  45. Tian, L.; Carreno-Medrano, P.; Allen, A.; Sumartojo, S.; Mintrom, M.; Coronado Zuniga, E.; Venture, G.; Croft, E.; Kulic, D. Redesigning Human-Robot Interaction in Response to Robot Failures: A Participatory Design Methodology. In Proceedings of the Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; Association for Computing Machinery: New York, NY, USA, 2021.
Figure 1. Categories of the Modality Card Deck representing different modalities a robot can use to communicate.
Figure 2. The set of decision cards (<D>) (front: first row; back: second row), investigation cards (<?>) (front: third row), parameter cards (front: fourth row), and idea cards (<!>) (front: fifth row) for the examples "Static Posture", "Dynamic Expression", and "Sound" (left) and the three steps of the design process (right).
Figure 3. The phases of the quiz game.
Figure 4. The Pepper robot.
Figure 5. Participants designing multimodal behavioral expressions for the Pepper robot with the Modality Card Deck, following the three phases (from left to right): Select your preferred communication modality, understand the design challenge, and specify the communication modality.
Figure 6. Example for a behavioral expression for the positive feedback situation: (A) Idea cards documenting the design solution. (B) Implementation on the Pepper robot.
Figure 7. Example for a behavioral expression for the negative feedback situation: (A) Idea cards documenting the design solution. (B) Implementation on the Pepper robot.
Figure 8. Overview of communication modalities used by the groups to create multi-modal behaviors for the positive feedback situation.
Figure 9. Overview of communication modalities used by the groups to create multi-modal behaviors for the negative feedback situation.
Figure 10. Timeline template to document timing of the different modalities that form the behavioral expression of the robot.