Article

Robot Partner Development Platform for Human-Robot Interaction Based on a User-Centered Design Approach

1 Department of Mechanical Engineering, School of Engineering, Tokyo University of Technology, Hachioji 192-0982, Japan
2 Graduate School of Systems Design, Tokyo Metropolitan University, Hino 191-0065, Japan
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(22), 7992; https://doi.org/10.3390/app10227992
Submission received: 31 August 2020 / Revised: 21 October 2020 / Accepted: 29 October 2020 / Published: 11 November 2020
(This article belongs to the Special Issue Selected Papers from The Conference ISIS & ICBAK 2019)

Abstract: This paper presents a robot partner development platform based on smart devices. Humans communicate with others based on the basic motivations of human cooperation, and they have communicative motives grounded in social attributes. Understanding and applying these communicative motives is important in the development of socially embedded robot partners. It is therefore increasingly important to develop robots that can be adapted to specific needs while taking these elements of human communication into consideration. Robot partners are becoming important not only in the industrial sector but also in households, although their widespread adoption will likely take time. In the field of service robots, development according to various needs is essential, and the system integration of hardware and software becomes crucial. Therefore, in this paper, we propose a robot partner development platform for human-robot interaction. Firstly, we propose a modularized architecture of robot partners using a smart device to realize flexible updates based on the re-usability of hardware and software modules. In addition, we show examples of implementing a robot system using the proposed architecture. Next, we focus on the development of various robots using the modular robot partner system. Finally, we discuss the effectiveness of the proposed robot partner system through social implementation and experiments.

1. Introduction

Populations are aging in countries at all levels of development in every region of the world. Among them, the Republic of Korea is known as one of the fastest-aging societies in the world [1]; as it has achieved rapid economic growth, its demographic structure is also changing rapidly. In Japan, the aging rate has already exceeded 28%. Social isolation of the elderly, driven by the increase in elderly people living alone, is being highlighted as a social problem [2]. This isolation also contributes to major problems such as losing the will to live, the progression of dementia, and lonely deaths. Additionally, the shortage of caregivers is a serious problem.
In 2018, the job openings-to-applicants ratio in the caregiving service sector reached 3.9, indicating an absolute shortage of labor [3]. With research showing that happiness can spread through social relations [4], now is the time to actively support socially isolated people. Since these problems will tend to grow in the future, preventive measures are strongly required.
Generally, daily conversation and proper exercise are important for the elderly to stay physically and mentally healthy. For example, groups with less frequent conversations have been shown to be in poorer health [3]. By encouraging healthy elderly people to participate in society, we believe that their healthy life expectancy can be extended efficiently. Therefore, it is important to research and develop a human-friendly communication system, that is, a system that evokes motivation for health through human-like conversation. If elderly users can relate to the robot as they would to a human, and the robot can respond according to each user's abilities, this can be an effective method for reducing their loneliness.
As the number of elderly people increases, government care alone is not enough, so elderly people must also consider self-care. The concept of user-centered design is very important for supporting self-management and improving quality of life [5]. User-centered systems can improve the accessibility and usability of complex systems and devices that support human activities, communication, and interaction. Therefore, we consider the use of robot partners to support users based on a user-centered design approach.
Communication skills are indispensable for robot partners, which have many opportunities to interact with people. For smooth communication, the robot system needs to take the human interaction process into account [6]. One point deserves particular attention: human communication is social in nature, serving to help others or to share information, and this point can be applied to robot communication. As a basic development direction, it will be important to examine how robots support people and provide information while playing the role of a human partner; we envisage that this will be the "killer application" of future robot development. Therefore, we developed a human-robot interaction system inspired by the "three general types of evolved communicative motives" [7].
Nowadays, robots are being developed on various platforms, such as smart devices, and systems that end users can design themselves are also emerging. However, market research with various users during a demonstration experiment [8] revealed companies (as customers) that wanted to introduce robot systems for corporate information services but were unable to do so because of the following problems. Firstly, robot system design options are limited: there are many robots with excellent performance on the market, but little choice in hardware and software design. In our previous demonstration experiments with the elderly, the feedback showed that preferred robot designs differed according to individual needs. Furthermore, costs do not end with the introduction of a robot; to keep a service running, the contents must be updated periodically.
Through our research and development of robots, we have seen that individuals use robots according to their purpose of use and that developing a robot accordingly takes time. Therefore, we began developing a robot partner development platform capable of constructing a robot system and creating content according to needs, such as robot-based care support for the elderly.
To develop robots for a variety of needs, various aspects must be considered, from the perspective of the client to that of the end user, as shown in Table 1. Here, we consider a five-tiered hierarchy of robot development. In our study, we examine how to construct the architecture of the robot through the integration of hardware, software, and utterance content design.
Perspectives ranging from the researcher to the vendor must be considered when developing robots for various needs. First, from the vendor's viewpoint, one must decide which architecture to use, how to modularize the design for various needs, and how to complete the developed system according to those needs. Next, one must consider how users can customize the robot and how the service content should be designed. Therefore, we examine how to construct a cognitive model spanning hardware, software, and content design from the perspectives of customers and users, as shown in Figure 1. In other words, the purpose is to complete the system through the configuration of components and subsystems so that content can be designed easily.
To provide support for the elderly, the needs of customers must first be reviewed, as shown in Figure 1, because actual end users prioritize using the platform over developing it. To increase the efficiency of the customer's work and to confirm the feasibility of robot development from the customer's point of view, we evaluated the system with visitors and conducted development interviews with students. If services addressing these problems can be realized, they will also be applicable to the elderly.
This paper is organized as follows. Section 2 reviews the literature related to the development of a robot partner. Section 3 describes the robot partner system for human-robot interaction; Section 3.1 covers the robot's hardware and Section 3.2 its software. Section 4 describes the conversation system of our robot partner. Section 5 explains persona analysis and its method, and implementation examples of the robot partner are shown in Section 6. Conclusions and future work are drawn in Section 7.

2. Literature Review

Recently, smart devices have gained high specifications and are equipped with various functions such as AI (Artificial Intelligence) elements, voice recognition, and voice synthesis. We conducted research to utilize such a high-performance and familiar interface as a robot platform. Over time, even elderly people unfamiliar with information appliances have begun to use smartphones and tablet PCs, partly thanks to easy, icon-based interfaces that can be used visually and intuitively. Therefore, we developed a robot partner system based on smart devices to keep costs low. Commonly sold smart devices have built-in sensors such as an accelerometer, a gyroscope, front and back cameras, a touch interface, an illuminance sensor, and a compass. Using such a device reduces the budget needed to develop a robot system. In addition, with the smart device connected to a network, service contents can be managed through application updates, and proper content management helps improve the quality of the services provided to users.
According to the "Executive Summary World Robotics 2019 Service Robots" [9] of the International Federation of Robotics, the total number of personal and home service robots was about 16.3 million units in 2018, an increase of 59% over the previous year, and is expected to keep rising. As described above, the demand for communication robots that provide various services at home is increasing, and system development for realizing natural communication with humans is actively progressing. Accordingly, various robots have been developed to help people in their daily lives, not only in elderly care but also in other fields; for example, robots offer customer services in places such as hotels and restaurants [10,11]. A robot communication system can target a variety of people, such as young people living alone and children whose parents are out, and it can serve as a practice partner for people who find it difficult to talk with others.
There are various robot partners offering support in various places. "Shimi" is a robotic musical companion [12] with a stereo speaker whose body can shake to a beat; its main attractions are music playback and embodied gesture, and it can play and search music categories via voice recognition. "BUDDY" is a companion robot [13] that can communicate with family members, offers an open-source technology platform for developers, and can extend its interaction ability by adding an arm structure to its body. The "humanoid buddy" robot was developed using a 3D printer [14] and can dance and demonstrate various facial expressions.
Demonstration experiments using real robots are also being actively conducted; here we introduce some well-known robot partners. The communication robot "PALRO" (FUJISOFT Inc., Yokohama, Japan) has various capabilities specialized in information support for the elderly [15]; its use in facilities and hospitals for the elderly is increasing, where it plays an auxiliary role through recreation. "Pepper" (SoftBank Inc., Tokyo, Japan) is a robot that can communicate while considering a person's emotional state [16] and is used in place of people to explain and guide products. "RoBoHoN" (Sharp Inc., Osaka, Japan) is also a robot that can communicate with humans [17]; it contains the functions of a smart device, so it can connect with IoT (Internet of Things) devices and home appliances such as televisions and air conditioners. As such, various types of robots are being developed for various needs.
In previous research, we also conducted a questionnaire through empirical experiments to develop a robot partner system matched to human needs [18]. Through a seven-day experiment, we found that the needs of the elderly and of caregivers differ and that their desired robot designs differ slightly as well. Taking these factors into account, we propose robot partners with a modular cognitive model based on a modularized architecture of hardware and software. In addition, we propose a smart device-based robot partner development platform that supports the development and sharing of services suited to each situation by combining modules such as voice recognition, vision, and various smartphone sensors.

3. The Robot Partner System: iPhonoid-C

Previously, we proposed a smart device-based robot partner system that meets the various needs of users, as shown in Figure 2 [19,20,21]. Using the various sensors inside the smart device, external information can be obtained, analyzed, and used as data for human-robot interaction. To build a robot system around such a smart device, we constructed an architecture of hardware and software.

3.1. Development of Robot Hardware

Previously, developing even a single prototype robot was expensive: the more parts and the more detailed the design, the higher the development cost. However, since the original patents for the Fused Deposition Modeling (FDM) printing process expired in 2009 [22], the price of 3D printers has dropped and printing performance has improved. Therefore, 3D printers are also used in the development of our hardware modules.
Nonverbal communication, such as human facial expressions and gestures, carries emotion; it complements verbal communication and plays an important role in human communication. Human-friendly robots need to interact through their embodiment. However, a smartphone by itself has only a screen, a speaker, and a microphone, with no robot body. Therefore, we use a charging stand as the robot's body: a device that recharges the smart device when it is docked on a table and also serves as a robot, as shown in Figure 3. When a smart device is attached to the robot's body, Bluetooth communication with the body is initialized. These structures are designed to accommodate various smart devices, as shown in Figure 4. Our robot partner design selection system consists of three parts: a support to fix the smart device, an upper body to support the robot's arms, and a lower body to attach a moving mechanism. Figure 5 shows an example of printing from the 3D drawings of a selected design. To confirm robotic parts, the robot design selection uses AR (Augmented Reality) technology on iOS devices.

3.2. Development of Robot Software

To develop a robot partner according to various needs, the robot has a modular system structure that allows development by combination, as shown in Figure 6. The application layer sits at the top, and the modules of each layer are assembled to form the structure of the layer above. Based on this modular configuration, a robot system that meets the required needs can be built [20]. The robot's communication model is inspired by the human language acquisition model: in "Origins of Human Communication" [7], three general types of communicative motives are presented, as shown in Table 2.
Inspired by these motives, we apply them as the foundation of conversation. The utterance sentence is selected stochastically from the group according to the user state. The selection probability $p_i$ of the i-th sentence is calculated using a Boltzmann selection scheme as follows:
$$ p_i = \frac{\psi_i \, e^{s_i/\tau}}{\sum_{j=1}^{J} \psi_j \, e^{s_j/\tau}} \qquad (1) $$
where the utterance control parameter $s_i$ is defined as:
$$ s_i = \tanh\!\left( \frac{(t_c - t_i)^2}{\beta} \right) \qquad (2) $$
where $\psi_i$ is the validity flag of the i-th sentence; $\tau$ is a temperature parameter; $J$ is the current number of utterance sentences in the database; $t_i$ is the last time the i-th sentence was spoken; $t_c$ is the current time; and $\beta$ is a coefficient used to control the selection range of utterance sentences. Using these equations, a sentence is selected with a probability based on the selection strength of the individual content in order to avoid a monotonous conversation.
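To make the scheme concrete, the following Python sketch implements Equations (1) and (2). It is a minimal reading under stated assumptions: the dictionary field names and the roulette-wheel sampling are our own illustrative choices, not the authors' implementation.

```python
import math
import random
import time

def utterance_strength(t_c, t_i, beta):
    # Eq. (2): strength approaches 1 as the time since sentence i was
    # last spoken grows, which suppresses immediate repetition.
    return math.tanh((t_c - t_i) ** 2 / beta)

def select_utterance(sentences, tau=0.5, beta=3600.0):
    """Boltzmann selection of one sentence index (Eq. (1)).

    Each record holds 'valid' (the validity flag psi_i, 0 or 1) and
    'last_time' (t_i, the epoch time the sentence was last spoken).
    """
    t_c = time.time()
    weights = [s["valid"] * math.exp(utterance_strength(t_c, s["last_time"], beta) / tau)
               for s in sentences]
    total = sum(weights)
    if total == 0.0:
        return None  # no valid sentence in the database
    r = random.uniform(0.0, total)  # roulette-wheel sampling of p_i
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            sentences[i]["last_time"] = t_c  # reset t_i for the chosen sentence
            return i
    return len(sentences) - 1
```

A smaller temperature $\tau$ concentrates the choice on sentences that have not been spoken recently, while a larger $\tau$ flattens the distribution toward uniform selection.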

3.3. Emotional Empathy for Human-Robot Interaction

Human emotions play an important role in social communication; in fact, a human's emotional state is involved in generating facial and gestural expressions. In our model, the robot's emotional state comprises "Neutral" plus eight feelings: "Happy", "Surprise", "Angry", "Disgusted", "Sad", "Frightened", "Fearful", and "Thrilled". In addition, two mood parameters, "Positive" and "Negative", are used to change the robot's face. On this basis, we developed the emotion model shown in Figure 7 for the development of a human-friendly social robot [23].
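A minimal sketch of how this state could be represented follows; the enum members and the face-naming scheme are illustrative assumptions rather than the model's actual data structures.

```python
from dataclasses import dataclass
from enum import Enum

class Feeling(Enum):
    NEUTRAL = 0
    HAPPY = 1
    SURPRISE = 2
    ANGRY = 3
    DISGUSTED = 4
    SAD = 5
    FRIGHTENED = 6
    FEARFUL = 7
    THRILLED = 8

class Mood(Enum):
    POSITIVE = 1
    NEGATIVE = -1

@dataclass
class EmotionalState:
    feeling: Feeling = Feeling.NEUTRAL
    mood: Mood = Mood.POSITIVE

    def face_design(self) -> str:
        # The on-screen face is switched by the current feeling and
        # biased by the mood polarity (cf. Figures 7 and 8).
        return f"face_{self.feeling.name.lower()}_{self.mood.name.lower()}"
```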
The robot uses the smart device's sensors to infer the user's emotional state. Depending on the inferred state, the robot's utterance sentence, facial expression, and gesture shape are determined. The robot's facial expression is rendered on the screen of the smart device, as shown in Figure 8.
There is a close relationship between emotion and body movement [24]. Accordingly, we consider the robot's emotional state when generating its facial and gestural expressions. The robot has eight degrees of freedom for generating gestures, and each motor ID is assigned to a predetermined position. The motor control for generating the robot's gestures is divided as shown in Figure 9. The design of a gesture changes according to the robot's feeling; for example, there are several gesture designs related to "Happy", and each gesture consists of four segments of 10 values each, with a delay after the last segment. All 40 values in this structure can be designed by customers and users [25]. In gesture generation, the parameters are designed in consideration of the emotional state, as sketched below.
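The sketch below illustrates one way to hold such gesture designs as data. The nested-list layout, the meaning of the 10 values per segment, and the delay value are illustrative assumptions; only the 4 x 10 structure and the per-feeling grouping come from the description above.

```python
import random

NUM_SEGMENTS = 4
VALUES_PER_SEGMENT = 10  # the interpretation of each value is design-specific

# A gesture design: 4 segments x 10 values = 40 designable numbers,
# plus a delay after the last segment. Several designs can be
# registered per feeling, e.g., for "Happy".
gesture_library = {
    "Happy": [
        {
            "segments": [[0.0] * VALUES_PER_SEGMENT for _ in range(NUM_SEGMENTS)],
            "delay_after_last": 0.5,  # seconds; illustrative value
        },
    ],
}

def pick_gesture(feeling):
    """Randomly pick one of the gesture designs registered for a feeling."""
    candidates = gesture_library.get(feeling, [])
    return random.choice(candidates) if candidates else None
```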

4. Conversation System for Human-Robot Interaction

Our conversation system was developed as part of a modular cognitive model [21], and we have discussed the structure of the conversation system for the iPhonoid in different situations, as illustrated in Figure 10 [26,27]. Basic robot conversation combines predefined contents with contents drawn from internet information such as social network services [28].
The selection strength of the i-th utterance is defined by $p_i$:
$$ p_i = \frac{f_{1,i} \cdot f_{2,i}}{\sum_{j=1}^{N} f_{1,j} \cdot f_{2,j}} \qquad (3) $$
where the utterance control parameter $f_{1,i}$ is defined as:
$$ f_{1,i} = \tanh\!\left( \frac{(t_c - t_{l,i})^2}{\beta} \right) \qquad (4) $$
where $t_{l,i}$ is the last time the i-th sentence was spoken; $t_c$ is the current time; and $\beta$ is a coefficient controlling the selection range of the robot partner's utterance sentences. The selection strength of a sentence is changed based on Equation (4) in order to avoid repeating the same content immediately after the robot's speech. Furthermore, the Gaussian-distribution term $f_{2,i}$ is defined as:
$$ f_{2,i} = \begin{cases} e^{-(t_c - t_{d,i})^2/\alpha} & \text{if } (\text{Symmetric}) \lor \left[ (\text{Left side}) \land (t_c < t_{d,i}) \right] \lor \left[ (\text{Right side}) \land (t_c > t_{d,i}) \right] \\ 1 & \text{if } (\text{On time}) \land (t_c = t_{d,i}) \\ 0 & \text{if } \left[ (\text{Left side}) \land (t_c \ge t_{d,i}) \right] \lor \left[ (\text{Right side}) \land (t_c \le t_{d,i}) \right] \lor \left[ (\text{On time}) \land (t_c \ne t_{d,i}) \right] \end{cases} \qquad (5) $$
where $t_{d,i}$ is the scheduled time of the i-th utterance sentence, and the number of utterances can be controlled by $\alpha$. The graphs of the time-dependent conversation mode are illustrated in Figure 11.
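A minimal sketch of Equations (3)-(5) follows, assuming simple dictionary records for utterances and string labels for the four membership modes; both are our own illustrative choices.

```python
import math

def f1(t_c, t_last, beta):
    # Eq. (4): suppress a sentence for a while after it was spoken.
    return math.tanh((t_c - t_last) ** 2 / beta)

def f2(t_c, t_sched, alpha, mode="Symmetric"):
    # Eq. (5): Gaussian membership around the scheduled time t_d,i.
    gauss = math.exp(-((t_c - t_sched) ** 2) / alpha)
    if mode == "Symmetric":
        return gauss
    if mode == "Left side":   # active only before the scheduled time
        return gauss if t_c < t_sched else 0.0
    if mode == "Right side":  # active only after the scheduled time
        return gauss if t_c > t_sched else 0.0
    if mode == "On time":     # fires exactly at the scheduled time
        return 1.0 if t_c == t_sched else 0.0
    raise ValueError(f"unknown mode: {mode}")

def selection_strengths(utterances, t_c, alpha=900.0, beta=3600.0):
    """Eq. (3): normalized product of f1 and f2 over all N utterances."""
    raw = [f1(t_c, u["last_time"], beta) * f2(t_c, u["scheduled_time"], alpha, u["mode"])
           for u in utterances]
    total = sum(raw)
    return [r / total for r in raw] if total > 0 else [0.0] * len(raw)
```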
From the viewpoint of conversational purpose, conversation systems are classified into task-oriented and non-task-oriented systems, as shown in Figure 12. A task-oriented conversation includes information support and guidance; a non-task-oriented conversation reduces stress in daily life, brings enjoyment, and chats with users.

4.1. Daily Conversation Mode

The daily conversation mode is based on content related to daily information support. This mode mainly includes notifications such as weather, news, and local community event information. In general, it comprises event-driven utterances triggered by voice recognition or by responses from environmental sensors.

4.2. Content Conversation Mode

The content conversation mode corresponds to the human "requesting" communicative motive and is based on the interaction design template. Basic content is included in the robot system, and users can add content according to their needs using the format in Table 3. After an interaction design template is written to the database, the robot can select an utterance based on parameters such as "robot_id", "scenario_id", "context_id", "face", "gesture", "time", "language", and "sentences".
Table 3 shows the structure of a robot utterance. "robot_id" stores the ID number assigned to each robot for multi-robot mode; "scenario_id" stores the scenario categorization of utterances; and "context_id" categorizes utterances by context. "face" selects the facial expression of the robot based on emotional factors, and "gesture" selects a robot gesture based on the robot's embodiment. In our research, we defined eight feelings and two mood parameters that change the robot's utterances and face; depending on the feeling parameters, the utterance contents and face design change. "time" is used when the utterance content depends on the time of day; "language" stores a language code of the utterance sentence for multilingual mode; and "sentences" stores the robot's utterance sentences.
In service industries, manuals are usually provided for customer-response situations. When such a service manual is applied as the robot's content, the robot can provide guidance following the manual's sequence. This data structure also makes it possible for robots to share content using the communication function of the smart device (Figure 13), which is useful when supporting work across multiple places such as hotels [29]. In addition, updating utterance content from the vendor layer takes time; if the content can be edited in a text file format such as CSV, it can be updated immediately from the customer or user layer, as sketched below.
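As an illustration of this workflow, the sketch below reads rows laid out as in Table 3 from a CSV file. The file name, the absence of a header row, and the exact column order are assumptions for the example.

```python
import csv

FIELDS = ["robot_id", "scenario_id", "context_id", "face",
          "gesture", "time", "language", "sentences"]

def load_interaction_template(path):
    """Read utterance rows in the Table 3 layout from a CSV file,
    so content can be edited and redeployed without a vendor release."""
    with open(path, newline="", encoding="utf-8") as f:
        return [dict(zip(FIELDS, row)) for row in csv.reader(f)]

def utterances_for(rows, robot_id, scenario_id, context_id):
    """Filter the template down to one robot, scenario, and context."""
    return [r for r in rows
            if r["robot_id"] == robot_id
            and r["scenario_id"] == scenario_id
            and r["context_id"] == context_id]

# Example: rows = load_interaction_template("contents.csv")
#          utterances_for(rows, "robot_A", "0", "1")
```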

4.3. Information Conversation Mode

The information conversation mode corresponds to the human "informing" communicative motive. First, in notification support, the robot partner notifies a person, e.g., when an event is detected by a sensor. Second, in daily information services, the robot provides services after several greetings; weather forecasts and news are gathered from the internet through the smart device. Third, information retrieval is a service concept that allows users to search the internet when there is something they want to find.

4.4. Scenario Conversation Mode

The scenario conversation mode corresponds to the human "sharing" communicative motive. According to the emotional state set in each utterance scenario, the robot can produce an appropriate emotional expression. We have previously developed emotional models for the robot partner system [23]. Depending on the result, the robot can show various facial expressions, as in Figure 8, with the normal state presented as neutral. For each emotional expression, gestural expression types are selected randomly based on the given emotional parameter.

5. Personas for Robot Interaction Design in the User-Centered Design Approach

Human-robot interaction has been studied from the standpoint of human psychology [30], and the interaction between humans and robots has been studied for a long time [31]. At present, a robot partner usually communicates by selecting sentences prepared in advance in a conversation database, which makes it difficult to personalize interaction to an individual's situation. Consequently, we apply the concept of the user persona to human-robot interaction [32]. To apply a persona, a multi-layer neural network classifies the user based on the user's input information, and the robot partner selects the classified persona, which is then used in interaction. For example, greetings can be chosen according to the user's living area, as shown in Table 4. Here, we apply personas for the Kansai (west) and Kanto (east) areas of Japan to human-robot interaction with Japanese users, as shown in Table 5.
Natural communication requires responding appropriately to the personality and situation of the user, so the persona is chosen by classifying the user's information. For example, the East (Kanto) region and the West (Kansai) region of Japan differ in their cultural characteristics [33,34]: the East tends to prefer a lifestyle that emphasizes individual privacy, while the West places more emphasis on relationships between people. Accordingly, in this study users are classified into the four personas shown in Table 5. Using the age and living-area information available in the system, the selection of the robot partner's uttered content changes according to the classification result, as shown in Figure 14, where the x-axis is age and the y-axis is area.
Whether persona classification by such regional differences is performed depends on the presence or absence of personal information stored inside the smart device.
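The sketch below stands in for that classification step. The paper trains a multi-layer perceptron on the user's age and living area; here a direct rule plays that role. Since the printed Table 5 repeats two area/age combinations, the Persona C/D assignment below is one consistent reading, and the age threshold is an illustrative assumption.

```python
def classify_persona(age, area, elderly_threshold=65):
    """Stand-in for the trained multi-layer perceptron: map (age, area)
    onto the four personas. Personas A and B follow Table 5; the C/D
    assignment and the threshold are illustrative assumptions."""
    elderly = age >= elderly_threshold
    if area == "Kansai":  # West
        return "Persona C" if elderly else "Persona A"
    return "Persona B" if elderly else "Persona D"  # Kanto (East)

GREETINGS = {  # regional "Good morning" variants drawn from Table 4
    "Kansai": "Ohayosan",  # Kyoto prefecture entry, used for the West
    "Kanto": "Ohayo",      # Tokyo entry, used for the East
}

def greet(age, area):
    """Select a persona, then a region-appropriate greeting."""
    return classify_persona(age, area), GREETINGS[area]

# Example: greet(70, "Kansai") -> ("Persona C", "Ohayosan")
```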

6. Experimental Results

This section shows an example of the design and development of the robot partner system. First, the mode conversion process of our system is shown through graphs in Section 6.1. Next, an open campus event guide supported by our robot partner system is described in Section 6.2. Finally, interviews conducted after testing the robot partner development platform are reported in Section 6.3. Here, we show an example of development based on the robot partner development platform for human-robot interaction (Figure 15 and Figure 16). The purpose of our experiment is to provide information support for ordinary citizens and university students.

6.1. Results of the Numerical Experiment

Here, we show how the robot's service proceeds according to changes in the robot's emotional state. Figure 17 shows the numerical experiment results of the human-robot interaction; the robot is designed to provide different information support according to the number of people gathered at the event place, and T denotes the time stamp on the x-axis. In Figure 17, section (a), people were approaching, so the "Greeting" or "Event announcement" mode was used to present event information. In section (b), a person approached and interacted, so the robot introduced laboratory information in self-introduction mode. When people moved away from the robot, the conversation mode changed to "Self-introduction" and "Research introduction", as shown in section (c). The robot's facial expression shows "Happy" emotional states, but the robot partner felt sad when people left. As in section (d), the conversation mode starts anew from "Greeting" according to the presence or absence of visitors. The next section shows the results of the actual service provided by the robot at the event.
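The transitions described above can be summarized by a rule of the following form. The exact conditions used in the deployed system are not specified in the paper, so this sketch is illustrative only.

```python
def next_mode(current_mode, people_nearby, interacting):
    """Illustrative conversation-mode transition consistent with the
    behavior described for Figure 17, sections (a)-(d)."""
    if not people_nearby:
        return "Greeting"            # (d) reset when nobody is around
    if interacting:
        return "Self-introduction"   # (b) engage the visitor directly
    if current_mode == "Greeting":
        return "Event announcement"  # (a) attract approaching visitors
    return "Research introduction"   # (c) continue as people drift away
```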

6.2. Interaction System in Event Place

As an example of using the robot system, the robot partner served as a university guide during an open campus event, as shown in Figure 16. The main aim of this robot system was to guide guests and first-time visitors before the laboratory tour started, with the robot partner giving information during the event. As shown in Table 6, content such as greetings and explanations of research information can be applied by using the time parameter.
The robot's conversation content was designed in advance in cooperation with students; Table 6 shows only a portion of the content used. If more detailed personal information were available, the robot could choose a greeting according to the user persona.
The robot was mainly responsible for guiding guests from outside and explaining the prepared demonstration system; after interacting with the robot, guests tended to enter the laboratory naturally. Consequently, the robot system can take suitable interactions according to the current situation based on its interaction content. Guests who usually found it difficult to enter the laboratory showed interest in the robot, as well as considerable interest in the laboratory explanations.

6.3. Interview on the Use of the Robot Partner Development Platform

To collect opinions on our robot platform, we interviewed undergraduate students after they used the robot development platform. A total of eight participants in their 20s took part, and interviews were conducted before and after the experiment, as shown in Figure 18. When asked whether "It was a good guide to knowing the process of making a robot by experiencing this system," the eight students' responses were positive, with an average score of 3.5 (Figure 19). From Table 7, we can see that the robot partner development platform is helpful for the progress of robot development; however, the system needs improvement in details such as the order and method of design selection.

7. Conclusions and Future Work

In this paper, we proposed a robot partner development platform for human-robot interaction. We explained the overall system for robot development and evaluated it through practical use. The platform design is based on a modularized architecture of robot partners using a smart device, realizing flexible updates based on the re-usability of hardware and software modules. The robot's body was made with 3D printers, and the robot hardware was controlled through software libraries.
The results of the numerical experiment showed that our robot partner can conduct interaction according to different human behavior patterns. We presented interview comments from undergraduate students as an evaluation of the difficulty of system development in human-robot interaction design; the results showed that the proposed method makes development easy for students. The proposed method can change the robot's conversation mode based on the state of human-robot interaction, and by classifying personas, the robot can select utterance contents for human-robot interaction. Therefore, user-designed interactions can be realized by selecting the robot's control mode. However, the robot system still needs to be improved through evaluations by the elderly and by those who support them, which will be the direction of our development.
In future work, we will focus on improving the robot partner interaction system by setting detailed parameters to collect various types of information, such as personal hobbies and preferences. In addition, to let the robot learn interaction abilities from humans, we will draw on the human development process; for example, infants and children learn by judging situations from their mother's eyes and facial expressions (Figure 20). Since it is hard to infer the emotions of an elderly person, the communicative functions of robots are still limited, and further research and development is required. Consequently, as the next step of research, we aim to develop an "affect attunement" system based on content design for elderly-caregiver interaction. We consider that more natural interaction can be achieved by giving the robot the concept of emotionally attuning to humans through such a system.

Author Contributions

Conceptualization, J.W., Y.O. and N.K.; Software, J.W.; Writing—original draft, J.W.; Supervision, Y.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank the students for their help in testing the robot partner development platform and participating in the interviews.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Kim, S.H.; Kim, D.H.; Kim, W.S. Long-term care needs of the elderly in Korea and elderly long-term care insurance. Soc. Work Public Health 2010, 25, 176–184.
2. Gardiner, C.; Geldenhuys, G.; Gott, M. Interventions to reduce social isolation and loneliness among older people: An integrative review. Health Soc. Care Community 2018, 26, 147–157.
3. Cabinet Office, Government of Japan. White Paper on an Aging Society. 2019. Available online: https://www8.cao.go.jp/kourei/whitepaper/index-w.html (accessed on 3 October 2020).
4. Fowler, J.H.; Christakis, N.A. Dynamic spread of happiness in a large social network: Longitudinal analysis over 20 years in the Framingham Heart Study. BMJ 2008, 337, a2338.
5. Earthy, J.; Jones, B.S.; Bevan, N. ISO standards for user-centered design and the specification of usability. In Usability in Government Systems; Morgan Kaufmann: Burlington, MA, USA, 2012; pp. 267–283.
6. ISO 9241-210: Ergonomics of Human-System Interaction–Part 210: Human-Centred Design for Interactive Systems; International Organization for Standardization (ISO): Geneva, Switzerland, 2019.
7. Tomasello, M. Origins of Human Communication; MIT Press: Cambridge, MA, USA, 2010.
8. The 23rd Ota Industry Fair. Available online: https://www.pio-ota.jp/k-fair/23/index.html (accessed on 3 October 2020).
9. International Federation of Robotics. Executive Summary World Robotics 2019 Service Robots; International Federation of Robotics: Frankfurt, Germany, 2019.
10. Osawa, H.; Ema, A.; Hattori, H.; Akiya, N.; Kanzaki, N.; Kubo, A.; Koyama, T.; Ichise, R. What is real risk and benefit on work with robots? From the analysis of a robot hotel. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, Vienna, Austria, 6 March 2017; pp. 241–242.
11. Iwamoto, T.; Nishi, K.; Unokuchi, T. Simultaneous dialog robot system. In International Conference on Human-Computer Interaction; Springer: Berlin/Heidelberg, Germany, 2019; pp. 107–111.
12. Savery, R.; Rose, R.; Weinberg, G. Finding Shimi's voice: Fostering human-robot communication with music and a NVIDIA Jetson TX2. In Proceedings of the 17th Linux Audio Conference, Stanford, CA, USA, 23–26 March 2019; p. 5.
13. Milliez, G. Buddy: A companion robot for the whole family. In Proceedings of the Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 1 March 2018; p. 40.
14. Potnuru, A.; Jafarzadeh, M.; Tadesse, Y. 3D printed dancing humanoid robot "Buddy" for homecare. In Proceedings of the 2016 IEEE International Conference on Automation Science and Engineering (CASE), Fort Worth, TX, USA, 21–25 August 2016; pp. 733–738.
15. Ono, S.; Woo, J.; Matsuo, Y.; Kusaka, J.; Wada, K.; Kubota, N. A Health Promotion Support System for Increasing Motivation Using a Robot Partner. Trans. Inst. Syst. Control Inf. Eng. 2015, 284, 161–171.
16. Pandey, A.K.; Gelin, R. A mass-produced sociable humanoid robot: Pepper: The first machine of its kind. IEEE Robot. Autom. Mag. 2018, 25, 40–48.
17. Kobayashi, T.; Kuriyama, K.; Arai, K. SNS Agency Robot for Elderly People Realizing Rich Media Communication. In Proceedings of the 2018 6th IEEE International Conference on Mobile Cloud Computing, Services, and Engineering (MobileCloud), Bamberg, Germany, 26–29 March 2018; pp. 109–112.
18. Kubota, N.; Obo, T.; Liu, H. Robot partner system for elderly people care by using sensor network. In Proceedings of the 2012 4th IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), Rome, Italy, 24–27 June 2012; pp. 1329–1334.
19. Woo, J.; Kubota, N. A Modular Structured Architecture Using Smart Devices for Socially-Embedded Robot Partners. In Handbook of Research on Advanced Mechatronic Systems and Intelligent Robotics; IGI Global: Hershey, PA, USA, 2020; pp. 288–309.
20. Woo, J.; Botzheim, J.; Kubota, N. System integration for cognitive model of a robot partner. Intell. Autom. Soft Comput. 2017, 1–14.
21. Woo, J.; Botzheim, J.; Kubota, N. A modular cognitive model of socially embedded robot partners for information support. ROBOMECH J. 2017, 4, 10.
22. Torres, J.; Cole, M.; Owji, A.; DeMastry, Z.; Gordon, A.P. An approach for mechanical property optimization of fused deposition modeling with polylactic acid via design of experiments. Rapid Prototyp. J. 2016, 22, 387–404.
23. Woo, J.; Botzheim, J.; Kubota, N. Emotional empathy model for robot partners using recurrent spiking neural network model with Hebbian-LMS learning. Malays. J. Comput. Sci. 2017, 30, 258–285.
24. Morita, J.; Nagai, Y.; Moritsu, T. Relations between body motion and emotion: Analysis based on Laban Movement Analysis. Proc. Annu. Meet. Cogn. Sci. Soc. 2013, 35, 1026–1031.
25. Woo, J.; Botzheim, J.; Kubota, N. Facial and gestural expression generation for robot partners. In Proceedings of the 2014 International Symposium on Micro-Nanomechatronics and Human Science (MHS), Nagoya, Japan, 10–12 November 2014; pp. 1–6.
26. Woo, J.; Botzheim, J.; Kubota, N. Verbal conversation system for a socially embedded robot partner using emotional model. In Proceedings of the 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Kobe, Japan, 31 August–4 September 2015; pp. 37–42.
27. Woo, J.; Botzheim, J.; Kubota, N. A socially interactive robot partner using content-based conversation system for information support. J. Adv. Comput. Intell. Intell. Inform. 2018, 22, 989–997.
28. Jacquemont, M.; Woo, J.; Botzheim, J.; Kubota, N.; Sartori, N.; Benoit, E. Human-centric point of view for a robot partner: A cooperative project between France and Japan. In Proceedings of the 2016 11th France-Japan & 9th Europe-Asia Congress on Mechatronics (MECATRONICS)/17th International Conference on Research and Education in Mechatronics (REM), Compiegne, France, 15–17 June 2016; pp. 164–169.
29. Yamamoto, S.; Woo, J.; Chin, W.H.; Matsumura, K.; Kubota, N. Interactive Information Support by Robot Partners Based on Informationally Structured Space. J. Robot. Mechatron. 2020, 32, 236–243.
30. Kahn, P.H., Jr.; Ishiguro, H.; Friedman, B.; Kanda, T.; Freier, N.G.; Severson, R.L.; Miller, J. What is a human? Toward psychological benchmarks in the field of human-robot interaction. Interact. Stud. 2007, 8, 363–390.
31. Yamasaki, N.; Anzai, Y. Active interface for human-robot interaction. In Proceedings of the 1995 IEEE International Conference on Robotics and Automation, Nagoya, Japan, 21–27 May 1995; Volume 3, pp. 3103–3109.
32. Nielsen, L. Personas. In The Encyclopedia of Human-Computer Interaction, 2nd ed.; Soegaard, M., Dam, R.F., Eds.; The Interaction Design Foundation: Aarhus, Denmark, 2013. Available online: http://www.interaction-design.org/encyclopedia/personas.html (accessed on 3 October 2020).
33. Okamoto, S.; Smith, J.S.S. Japanese Language, Gender, and Ideology: Cultural Models and Real People; Oxford University Press: Oxford, UK, 2004.
34. Preston, D.R. Handbook of Perceptual Dialectology; John Benjamins Publishing: Amsterdam, The Netherlands, 1999; Volume 1.
35. Stern, D.N. The Interpersonal World of the Infant: A View from Psychoanalysis and Developmental Psychology; Routledge: Abingdon, UK, 2018.
Figure 1. Robot partner development layers.
Figure 2. Development in various concepts using a 3D printer.
Figure 3. Plugin system for robot partners.
Figure 4. Various 3D drawings for printing robot parts.
Figure 5. Hardware confirmation and selection system based on a smart device.
Figure 6. Modularization of the robot partner system.
Figure 7. The information flow of the robot's emotional model.
Figure 8. Facial expressions of the robot partner.
Figure 9. Segments of robot movements for gesture generation.
Figure 10. Structure of the conversation system.
Figure 11. Gaussian membership functions for utterance selection.
Figure 12. Conversation modes for robot partners.
Figure 13. Overview of the robot system.
Figure 14. Example of a classification result by a multi-layer perceptron.
Figure 15. Robot system usage example: open campus guide.
Figure 16. Application example of the robot system.
Figure 17. The conversation mode transition results of the interaction.
Figure 18. The interview environment.
Figure 19. Usability evaluation of the system.
Figure 20. Infant development process [35].
Table 1. Development at various levels for customization.

Development Level | Contents
Researchers | At the researcher stage, the cognitive architecture for the platform is considered (e.g., EPAM, ACT-R, and Soar).
Developers | At the developer stage, a modular system is developed considering the components of robots (e.g., RTM, ROS).
Vendors | At the vendor stage, implementation is carried out according to the needs, in consideration of integration.
Customers | At the customer stage, content design is performed in accordance with the service to be realized, in consideration of the service content (e.g., nursing homes, restaurants).
Users | At the user stage, content design is performed to customize the verbal communication to personal preferences.
Table 2. Three general types of evolved communicative motives [7].

Motive | Description
Requesting | "I want you to do something to help me (requesting help or information)"
Informing | "I want you to know something because I think it will help or interest you (offering help including information)"
Sharing | "I want you to feel something so that we can share attitudes/feelings together (sharing emotions or attitudes)"
Table 3. Interaction design template for the robot partner system.

Robot_Id | Scenario_Id | Context_Id | Face | Gesture | Time | Language | Sentences
robot_A | 0 | 1 | Neutral(0) | 1 | 420 | ja-JP | Good morning! (In Japanese)
robot_A | 0 | 1 | Happy(1) | 1 | - | ja-JP | I'm glad to meet you! (In Japanese)
robot_A | 0 | 1 | Neutral(0) | 1 | 840 | ja-JP | Oh, I feel so tired today. (In Japanese)
robot_A | 0 | 2 | Happy(1) | 1 | 300 | ja-JP | I feel a refreshing day. (In Japanese)
robot_A | 0 | 2 | Neutral(0) | 1 | 420 | ja-JP | It is warm in the daytime. (In Japanese)
robot_A | 0 | 2 | Neutral(0) | 1 | 960 | ja-JP | Today is so hot! (In Japanese)
robot_A | 1 | 3 | Happy(1) | 2 | - | ja-JP | The next one was made by bachelor students together with a graduate student in preparation for graduation research. (In Japanese)
robot_A | 1 | 3 | Happy(1) | 2 | - | ja-JP | The current research aims to assist walking by controlling motors. (In Japanese)
Table 4. Pronunciation of greetings in Japanese for persona.

Region Name | "Good Morning" in Japanese | "Goodbye" in Japanese
Iwate prefecture | Ohayogansu | Abara
Yamagata prefecture | Hayaenassu | Ndaramazu
Tokyo | Ohayo | Sayonara
Niigata prefecture | Ohayo | Abayo
Kyoto prefecture | Ohayosan | Sainara
Ehime prefecture | Ohayo | Sawainara
Okinawa prefecture | Ukimiso-chi- | Mataya-
Table 5. Persona classification.

 | Persona A | Persona B | Persona C | Persona D
Area | Kansai region (West) | Kanto region (East) | Kansai region (West) | Kanto region (East)
Age | Young | Elderly | Young | Elderly
Table 6. Interaction design template for the robot partner system.

Robot_Id | Scenario_Id | Context_Id | Face | Gesture | Time | Language | Sentences
robot_A | 0 | 1 | Neutral(0) | 1 | 420 | ja-JP | Good morning! (In Japanese)
robot_A | 0 | 1 | Happy(1) | 1 | - | ja-JP | I'm glad to meet you! (In Japanese)
robot_A | 0 | 1 | Neutral(0) | 1 | 840 | ja-JP | Oh, I feel so tired today. (In Japanese)
robot_A | 0 | 2 | Happy(1) | 1 | 300 | ja-JP | I feel a refreshing day. (In Japanese)
robot_A | 0 | 2 | Neutral(0) | 1 | 420 | ja-JP | It is warm in the daytime. (In Japanese)
robot_A | 0 | 2 | Neutral(0) | 1 | 960 | ja-JP | Today is so hot! (In Japanese)
robot_A | 1 | 3 | Happy(1) | 2 | - | ja-JP | The next one was made by bachelor students together with a graduate student in preparation for graduation research. (In Japanese)
robot_A | 1 | 3 | Happy(1) | 2 | - | ja-JP | The current research aims to assist walking by controlling motors. (In Japanese)
Table 7. The interview comments.

Student A: "When making a robot starting from the design, I felt that it was effective for people who do not know the details of robots. I also thought it is especially good for people who will develop software."
Student B: "I thought that the ability to check the parts of the robot in advance with AR is helpful. However, regarding the order of the steps in the system, I thought it would be better to check the functions needed first and then select the parts."
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

