Review

Learning and Comfort in Human–Robot Interaction: A Review

Department of Automotive Engineering, Clemson University, Greenville, SC 29607, USA
* Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(23), 5152; https://doi.org/10.3390/app9235152
Submission received: 3 July 2019 / Revised: 30 September 2019 / Accepted: 24 October 2019 / Published: 28 November 2019
(This article belongs to the Collection Advances in Automation and Robotics)

Abstract

Collaborative robots offer promising solutions for human–robot cooperative tasks. In this paper, we present a comprehensive review of two significant topics in human–robot interaction: robots learning from demonstrations and human comfort. The collaboration quality between the human and the robot has been improved largely by taking advantage of robots learning from demonstrations. Human teaching and robot learning approaches, together with their corresponding applications, are investigated in this review. We also discuss several important issues that need to be considered and addressed in the human–robot teaching–learning process. After that, the factors that may affect human comfort in human–robot interaction are described and discussed. Moreover, the measures utilized to improve human acceptance of robots and human comfort in human–robot interaction are also presented and discussed.

1. Introduction

Collaborative robots offer promising solutions for complex hybrid assembly tasks, especially in smart manufacturing contexts [1,2]. Through human–robot interaction, tasks can be split between humans and robots based upon their capabilities to leverage their unique advantages. For example, in human–robot collaborative assembly tasks, robots can execute tedious and strength-based sub-tasks, while humans can conduct brainwork-based sub-tasks [3,4]. In general, collaborative robots are mainly programmed and controlled by well-trained experts through offline coding devices, such as teach pendants [5], which usually costs significant human time and effort. However, modern smart manufacturing is experiencing quick product upgrades, with more customization and shorter life cycles, to meet ever-changing market needs. In order to bridge this gap between research and application, one of the most active topics—robot learning from human demonstrations—has been proposed and studied by both academia and industry in recent years [6,7]. Via this approach, humans can transfer knowledge to robots through demonstrated actions, without needing considerable coding skills, to have robots understand how to accomplish tasks [8,9].
Apart from the understanding of human actions and intentions in human–robot interaction [10,11], human comfort, which has a direct and immediate influence on the collaboration quality between the robot and its human partner, is also a significant factor for the robot to be aware of [12]. For instance, in human–robot collaborative tasks, technical safety (keeping a required physical distance or developing a safety interlock system between the human and the robot) does not necessarily imply perceived safety and comfort, since perceived feelings are mostly subjective [13]. In addition, the same robot performance in the same task may lead to diverse comfort levels in different humans. For example, a slow robot speed usually makes some people feel safe in human–robot interaction, whereas others may consider it less effective and feel uncomfortable.
Therefore, it is in the human–robot interaction context that we introduce and discuss two significant topics—learning and comfort—in this review. The rest of the review is organized as follows. Section 2 covers teaching and learning in human–robot interaction, including robot learning from demonstration, human teaching approaches, and robot learning approaches. The conceptual details and corresponding applications of each sub-topic are presented. Several important issues in the human–robot teaching–learning process, including extraction, real-time, correspondence, execution, and safety, are discussed in Section 3. Section 4 describes what affects human comfort in human–robot interaction; different kinds of factors that may influence human comfort are investigated and discussed with related studies. Section 5 describes how to improve human comfort in human–robot interaction, where the three sub-topics of human comfort measurement, measures to improve human acceptance of robots, and measures to improve human comfort are included and discussed. Finally, conclusions are presented in Section 6.

2. Teaching and Learning in Human–Robot Interaction

2.1. Robot Learning from Demonstration

The robot programming approach has gone through three distinct reforms in the past 60 years. As shown in Figure 1, these robot programming approaches include: teach pendant-based programming, computer-aided design (CAD)-based programming, and robot learning from human demonstrations.
In the first approach, teach pendants are handheld devices that humans can use to program robots directly with predefined tasks via different programming languages, such as RAPID, KRL, and VAL3. Afterwards, the robots are controlled to plan motions step-by-step to accomplish the fixed workflows [14]. However, the teach pendant-based programming approach usually requires the human to master specialized technical expertise. Therefore, it is time-consuming and not cost-effective, especially for large-scale manufacturing tasks.
In order to assist users with programming robots in an intuitive way, the CAD-based robot programming approach was developed, which allows humans to program robots in 3D manufacturing environments with basic CAD skills. Using the CAD-based programming approach, the robot programs are generated offline and then converted into robot commands for corresponding tasks [15]. This approach improves the efficiency of robot programming to some extent. However, the users are still required to master a certain level of programming skills and expertise.
Robot learning from demonstration (LfD), also known as robot programming by demonstration (PbD), imitation learning, or apprenticeship learning, is a paradigm for enabling robots to autonomously perform new tasks [16]. The study of robot learning from demonstration started about 30 years ago. This approach has grown significantly and has become a central topic of robotics, especially in the area of human–robot interaction [8]. Via robot learning from demonstration, human partners can program robots easily and greatly extend the robots’ capabilities for different tasks without programming expertise [7]. The process of robot learning from human demonstrations is basically divided into two steps: human teaching and robot learning. Numerous methods and technologies, such as force-sensor-based teaching [17], vision-system-based teaching [18], and natural-language-based teaching [19], have been developed and implemented in the human teaching process. Additionally, multiple learning algorithms have been designed and developed for the robots to extract, learn, and build task strategies from human demonstrations [9]. In the human–robot teaching–learning process, the robot usually learns from the human in a direct or indirect manner: the former is intuitive human teaching of the robot (e.g., kinesthetic teaching), and the latter is human teaching using external devices (e.g., via a vision system). Approaches to human teaching and robot learning are detailed as follows.

2.2. Human Teaching Approaches

2.2.1. Kinesthetic-Based Teaching

In this approach, the robot is physically guided through the task by the human, where its passive joints are moved through the desired motions. Typically, the human teacher operates the robot learner, whose sensors record the execution [20]. The robot obtains reference signals by pairing the kinesthetic teaching with the recorded executions of the human teacher. The human teacher demonstrates several actions for the same task and varies the object locations between demonstrations so that the robot can generalize correctly. After that, the robot can infer the relative positions of the objects by observing the demonstrations [16].
This approach provides a natural teaching interface for the robot to learn the required motions correctly. However, one of the drawbacks of the kinesthetic teaching approach is that the human usually uses more of their own degrees of freedom to guide the robot than the number of degrees of freedom they are trying to control. For instance, the human has to use two arms to move one robot manipulator, or use both hands to move a few robot fingers in a juice-making task [16].
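As a concrete illustration of the recording step, the following minimal Python sketch logs timestamped joint states while the human physically guides the passive arm. The read_joint_positions callable is a hypothetical stand-in for whatever sensing interface a given robot exposes; it is not an API from the cited works.
```python
import time

def record_kinesthetic_demo(read_joint_positions, duration_s=10.0, rate_hz=50.0):
    """Record a kinesthetic demonstration as a list of (timestamp, joint_vector) pairs.

    `read_joint_positions` is a hypothetical callable returning the robot's current
    joint angles while the human teacher physically guides the passive arm.
    """
    demo, period = [], 1.0 / rate_hz
    t_start = time.time()
    while time.time() - t_start < duration_s:
        t = time.time() - t_start
        q = read_joint_positions()        # e.g., a list of joint angles in radians
        demo.append((t, list(q)))
        time.sleep(period)
    return demo

# Usage with a stand-in sensor that always returns a fixed 6-DoF posture:
if __name__ == "__main__":
    demo = record_kinesthetic_demo(lambda: [0.0] * 6, duration_s=0.1, rate_hz=20.0)
    print(f"Recorded {len(demo)} samples")
```
The recorded pairs can then be replayed or passed to any of the learning approaches in Section 2.3.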

2.2.2. Joystick-Based Teaching

In this approach, the human teacher controls the demonstration and transmits the information regarding the actions to the robot’s controller using a wireless or wired joystick. It is a low-level form of demonstration teaching. This approach has been developed for a variety of applications, such as robot polishing tasks [21], robot soccer tasks [22], and robot welding tasks [23]. In Katagami and Yamada [22], a mobile robot is taught by a human teacher in a soccer game. The operator observes the robot from a viewpoint overlooking the environment, considers the next operation for teaching, and demonstrates the expected movements using a joystick. However, not all systems are suitable for this technique, since the operated robot must be manageable, i.e., it cannot have too many degrees of freedom or too complicated a structure.

2.2.3. Immersive Teleoperation Scenarios Teaching

In this approach, the human teacher is limited to using the robot’s own sensors and effectors to perform the task [24]. Compared to joystick teaching, this approach emphasizes that the human teacher uses the robot’s own body rather than external devices. In the immersive teleoperation scenarios teaching approach, teleoperation itself can be done using haptic devices, which allow the teacher to teach tasks that require precise control of forces [24].
This approach not only solves the correspondence problem, which is one of the significant issues in human–robot teams (see Section 3), but also allows human teachers to train robots from a distance, and it is well suited to robot locomotion tasks. In Peternel and Babič [25], a system for teaching humanoid robots balancing skills in real time is proposed. This system employs the human sensorimotor learning ability, in which a human demonstrator learns how to operate a robot in the same manner that the human adapts to various everyday tasks. The robot learns the task while the human is operating it.

2.2.4. Wearable-Sensor-Based Teaching

Human teachers use their own bodies to perform example executions by wearing sensors, which are able to record the teachers’ states and actions [10,26]. For example, the force-sensing glove can be used for acquiring data from pressure sensors. The collected data can be mapped to manipulation actions, which will in turn be used to interpret physical human–robot interaction [27]. In addition, manipulation tasks can also be demonstrated in a virtual reality environment using a data glove and a motion tracker, by which the specific parts of the objects where grasping occurs are learned and encoded in the task description for the robot.
In other programming-by-demonstration systems, for example, the virtual environment is built upon the Virtual Hand Toolkit library provided by the Immersion Corporation, where the human hand is drawn in the virtual scene and driven by the data captured with the virtual reality (VR) glove [28]. Human teachers have also used wearable motion sensors attached to the arm to incrementally teach human gestures to a humanoid robot [29].

2.2.5. Natural-Language-Based Teaching

In this approach, the human teacher presents the demonstrations to the robot through natural language, by which the teacher explicitly tells the robot what actions to execute [9]. For instance, the human teacher can use natural language to teach a vision-based robot how to navigate in a miniature town, where the robot is provided with a set of primitive procedures derived from a corpus of route instructions in order to enable unconstrained speech [30]. In recent work [31], the human teacher can use a representation of high-level actions that consists only of the desired goal states, rather than step-by-step operations, within a traditional planning framework. In another case, human teachers collect a dataset of task descriptions in free-form natural language and the corresponding grounded task-logs of the tasks performed in an online robot simulator. After that, they build a library of verb–environment instructions that represent the possible instructions for each verb in the working context [32].

2.2.6. Vision-Based Teaching

This approach is based on external observation, where the execution information is recorded using vision devices [9], such as stereo-vision cameras, which can be located externally to the executing platform. In this approach, human teachers integrate visual tracking and shape reconstruction with physical modeling of the materials and their deformations, as well as action learning techniques, and all these sub-modules are integrated into a demonstration platform [33]. Another way is based on a luminous marker built with high-intensity light-emitting diodes (LEDs) that can be captured by a set of industrial cameras, where the marker supplies six-degree-of-freedom (6-DoF) tracking of the human wrist with both position and orientation data using stereoscopy. Then, the robot is automatically programmed from the demonstrated task [18]. This approach is mainly based on image processing technology and differs from the VR-glove method, in which wearable sensing technology is mainly employed.

2.3. Robot Learning Approaches

2.3.1. Kinesthetic-Based Learning

In this approach, the robot is handled and controlled by a human teacher; it directly records the states and actions experienced by the sensors distributed on its body during the executions and then completes the explicit task. For example, in Kormushev et al. [34], a robotic manipulator learns to perform tasks that require exerting forces on external objects by interacting with a human operator in an unstructured environment. The robot learns from the human teacher’s demonstrations based on positional and force profiles, by which the action skills are reproduced through an active control strategy based on task-space control with variable stiffness. Additionally, this approach can be combined with multiclass classification methods to realize robot learning from the human teacher. The goal parameters of linear attractor movement primitives can be learned from manually segmented and labeled demonstrations. Moreover, the observed movement primitive order can help to improve the movement reproduction for the robot in the learning process [35].

2.3.2. One-Shot Learning

Rather than teaching the robot with all the demonstrations at once, as in the kinesthetic-based learning method, in the one-shot learning approach the human teacher provides one or more examples of each sub-motion separately, and the robot learns from the observation of a single instance of the motion [16]. Using this algorithm, the robot can select a previously observed path demonstrated by a human and generate a path in a novel situation based on a pairwise mapping of invariant feature locations between the demonstrated and the new scenes, using a combination of minimum-distortion and minimum-energy strategies [36].
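To make the idea of mapping a demonstrated path into a new scene via paired feature locations concrete, the sketch below fits a least-squares affine transform between the feature locations observed in the demonstrated scene and those in the new scene, and then applies it to the demonstrated path. This is our own simplified illustration of the mapping step, not the minimum-distortion/minimum-energy method of [36].
```python
import numpy as np

def fit_affine_2d(src, dst):
    """Least-squares 2D affine transform mapping src feature points onto dst."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coordinates [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # 3x2 matrix stacking [R | t]
    return M

def adapt_path(demo_path, demo_features, new_features):
    """Warp a demonstrated 2D path into the new scene using the fitted transform."""
    M = fit_affine_2d(demo_features, new_features)
    P = np.hstack([np.asarray(demo_path, float), np.ones((len(demo_path), 1))])
    return P @ M

# Example: the scene features are translated by (1, 2), so the path shifts accordingly.
demo_feats = [[0, 0], [1, 0], [0, 1]]
new_feats = [[1, 2], [2, 2], [1, 3]]
path = [[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]]
print(adapt_path(path, demo_feats, new_feats))
```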

2.3.3. Multi-Shot Learning

Multi-shot learning can be performed in batches after several demonstrations are recorded. The robot learning process is usually inferred from statistical analysis of the data across demonstrations [16]. For instance, in Lee and Ott [37], a humanoid robot starts with observational learning and applies iterative kinesthetic motion refinement using a forgetting factor on the basis of a multi-shot learning approach. The kinesthetic teaching is handled using a customized impedance controller, which combines tracking performance with compliant physical interaction at the real-time control level.
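One simple way to realize the statistical analysis across a batch of demonstrations is to time-normalize every recorded trajectory onto a common phase variable and compute per-phase statistics. The sketch below is a generic illustration under that assumption, not the motion-refinement method of [37].
```python
import numpy as np

def aggregate_demonstrations(demos, n_steps=100):
    """Resample each demonstration to n_steps and return the per-step mean and std.

    `demos` is a list of (T_i x D) arrays of joint positions recorded in separate
    multi-shot teaching sessions; T_i may differ between sessions.
    """
    phase = np.linspace(0.0, 1.0, n_steps)
    resampled = []
    for demo in demos:
        demo = np.asarray(demo, float)
        t = np.linspace(0.0, 1.0, len(demo))
        # Interpolate each joint dimension onto the common phase variable.
        resampled.append(np.column_stack(
            [np.interp(phase, t, demo[:, d]) for d in range(demo.shape[1])]))
    stacked = np.stack(resampled)                  # shape: (n_demos, n_steps, D)
    return stacked.mean(axis=0), stacked.std(axis=0)

# Example: two noisy 1-DoF demonstrations of the same reaching motion.
d1 = np.linspace(0, 1, 80)[:, None]
d2 = np.linspace(0, 1, 120)[:, None] + 0.01
mean_traj, std_traj = aggregate_demonstrations([d1, d2])
print(mean_traj[:3].ravel(), std_traj.max())
```
The per-step variance can then indicate which parts of the task the teacher performed consistently and which parts the robot may reproduce more freely.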

2.3.4. Vision-Based Learning

This approach corresponds to vision-based teaching, where the robot encodes the information recorded by the vision devices and maps it to its actions. The vision system acts as the robot’s eyes for tracking the human teacher’s actions. In recent work, researchers employed a 7-DoF robot manipulator and a Kinect sensor to train the robot to learn from humans [38]. The skill learning approach is based on symbolic encoding rather than trajectory encoding, so it offers a more concise representation of a skill, which is easily transferable to different embodiments. This learning approach has also been applied to object affordances, where researchers extract a descriptive labeling of the sequence of sub-activities performed by a human, describing the interactions with the objects in terms of their associated affordances. From red-green-blue-depth (RGB-D) video, researchers can model the human activities and object affordances as a Markov random field, where the nodes represent objects and sub-activities. The robot then learns from a human teacher by means of a structural support vector machine (SSVM) [39].
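As a toy illustration of symbolic rather than trajectory encoding, the sketch below converts a tracked object trajectory (here, just its height over time) into a short sequence of discrete sub-activity symbols using simple thresholds. The thresholds and symbol names are our own assumptions for illustration, not those used in [38,39].
```python
import numpy as np

def encode_symbols(obj_heights, lift_thresh=0.05, move_thresh=1e-3):
    """Turn a 1D sequence of tracked object heights into symbolic sub-activities."""
    symbols = []
    for h_prev, h in zip(obj_heights[:-1], obj_heights[1:]):
        if h - h_prev > move_thresh and h > lift_thresh:
            sym = "lift"
        elif h_prev - h > move_thresh:
            sym = "lower"
        else:
            sym = "hold"
        if not symbols or symbols[-1] != sym:   # collapse consecutive repeats
            symbols.append(sym)
    return symbols

# Synthetic trace: the object is raised, held, then put back down.
heights = np.concatenate([np.linspace(0, 0.2, 20), np.full(10, 0.2), np.linspace(0.2, 0, 20)])
print(encode_symbols(heights))   # e.g., ['hold', 'lift', 'hold', 'lower']
```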

2.3.5. Reinforcement-Learning-Based Approach

In this approach, the robot learns through trial and error to maximize a reward, which allows it to discover new control policies through free exploration of the state–action space. This approach assumes that there is a known set of necessary primitive actions for the robot to imitate to improve its behavior. Reinforcement learning is widely used in many applications, such as aerial vehicles, autonomous vehicles, robotic arms, and humanoid robots. In Bagnell and Schneider [40], an autonomous helicopter leverages a model-based policy search approach to learn a robust flight controller. In Ghadirzadeh et al. [41], a data-efficient reinforcement learning framework is proposed to enable a robot to learn how to collaborate with a human partner; the robot learns the task from its own sensorimotor experiences in an unsupervised manner. On the basis of reinforcement learning, distributed and asynchronous policy learning has been presented as a means to achieve generalization and improve training times in real-world manipulation tasks [42].
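For readers unfamiliar with the trial-and-error formulation, the following minimal tabular Q-learning sketch shows how a control policy can emerge purely from reward feedback on a toy chain task. It is a generic textbook-style example, not the method used in [40,41,42].
```python
import random

def q_learning(n_states=5, n_actions=2, episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a toy chain task: moving right (action 1) reaches the goal."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy exploration of the state-action space.
            a = random.randrange(n_actions) if random.random() < eps else max(
                range(n_actions), key=lambda a_: Q[s][a_])
            s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s_next == n_states - 1 else -0.01   # the reward shapes the policy
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
    return Q

Q = q_learning()
print([max(range(2), key=lambda a: Q[s][a]) for s in range(4)])   # learned greedy policy
```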

2.3.6. Inverse-Reinforcement-Learning-Based Approach

Inverse reinforcement learning differs from reinforcement learning. This approach offers a framework to automatically determine the reward and infer the optimal control strategy when the robot learns from human demonstrations [16]. For instance, by means of inverse reinforcement learning, the robot can learn acceptability-dependent behavioral models from human demonstrations and build its own task policies to assist its human partner in collaborative assembly tasks [7]. Multi-robot inverse reinforcement learning has also been proposed, where the behaviors of multiple robots that execute fixed trajectories and interact with each other are learned from passive observations [43]. Additionally, inverse reinforcement learning can be combined with a neural network to deal with large-scale and high-dimensional state spaces. In this case, the expert’s behaviors can be generalized to unvisited regions of the state space, and the expert’s explicit or stochastic task policy representation can also be easily expressed [44].
This approach can also utilize the human teacher’s failed demonstrations as training examples. In this case, the robot deliberately avoids repeating the teacher’s mistakes rather than maximizing similarity to the demonstrations, and purposely generating failed demonstrations is easier than generating successful ones in some cases. For example, Shiarlis et al. [45] propose inverse reinforcement learning from failure (IRLF), which exploits both successful and failed demonstrations and converges faster and generalizes better than other methods.
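At its core, many inverse reinforcement learning methods estimate the reward as a weighted combination of state features and adjust the weights so that the demonstrator’s feature expectations are matched. The sketch below shows only that weight-update principle; the learner’s feature expectations are a placeholder that would normally come from solving the forward RL problem under the current reward, and none of this reproduces the specific algorithms of [7,43,44,45].
```python
import numpy as np

def irl_weight_update(w, expert_features, learner_features, lr=0.1):
    """One feature-expectation-matching step: push the reward weights toward
    features the expert visits more often than the current learner policy."""
    grad = np.asarray(expert_features, float) - np.asarray(learner_features, float)
    w = np.asarray(w, float) + lr * grad
    return w / (np.linalg.norm(w) + 1e-8)      # keep the weight vector bounded

# Toy example: the expert spends most of its time near feature 0.
w = np.zeros(3)
expert_mu = np.array([0.8, 0.1, 0.1])
for _ in range(20):
    learner_mu = np.array([0.4, 0.3, 0.3])            # placeholder: would be re-estimated
    w = irl_weight_update(w, expert_mu, learner_mu)   # by running the learner under w
print(w)   # the largest reward weight ends up on the feature the expert prefers
```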

2.3.7. Skill-Tree-Construction-Based Approach

Skill tree construction is an online algorithm that builds skill trees from demonstration trajectories. In this approach, the demonstration trajectory is segmented into a chain of component skills, in which each skill has a goal and is assigned a suitable abstraction from an abstraction library. This approach is able to segment multiple demonstrations and merge them into one skill tree [46,47].

2.3.8. Syntactics-Based Approach

In this approach, some significant task structures conducted by humans can be captured in the form of probabilistic activity grammars from a reasonably small number of samples even under noisy conditions. After that, the learned grammars are employed to facilitate the recognition of more complicated and unforeseen tasks that share underlying structures [48].

2.3.9. Semantic-Networks-Based Learning

This approach adopts semantic networks to give the robot the ability to model the world with concepts and relate them to low-level sensory-motor states [49]. By means of this approach, the robot can learn from its human teacher on the basis of hierarchical types of knowledge using the robot’s senses, actions, and spatial environment. The learning algorithm derives from a computational model of the human cognitive map that exploits the distinction between procedural, topological, and metrical knowledge of large-scale space. Moreover, the semantic hierarchy approach has been extended to continuous sensorimotor interaction with a continuous environment [50]. In Li et al. [51], a semantic-network-based learning approach is combined with a wearable-sensor-based method to obtain semantic information efficiently and link it to a metric map. In this application, an intelligent mobile robot platform creates a 2D metric map while human activity is recognized from the motion data of wearable sensors mounted on the human teacher.

2.3.10. Neural-Models-Based Learning

This approach models a mirror neuron system for the robot to learn from its human teacher, where the neural model is the basis of recognizing and producing basic grasping motions; meanwhile, it supports the existence of forward models for guiding these motions [52,53]. For instance, a robot control model is presented in Sauser and Billard [54] to integrate multimodal information, make decisions, and replicate a stimulus–response compatibility task. The model contains a neural network based on a dynamic field approach, whose natural ability for stimulus enhancement and cooperative–competitive interactions within and across sensorimotor representations is well known.

2.3.11. Procedural-Memory-Based Learning

In this approach, the robot’s procedural memory is developed based on an adaptive resonance system. In Yoo and Kim [55], the robot learns the knowledge from human demonstrations just like a child learns through interactions with parents and teachers to build its knowledge system. In this case, human demonstrations are captured using an RGB-D camera, from which the robot segments each execution with the acquired continuous streams. The robot’s procedural memory is developed based upon an adaptive resonance system. using this procedural-memory-based learning approach, the robot is able to perform full sequences of tasks with only partial information of the tasks to be carried out.

2.4. Comparison and Discussion of Different Approaches in Human–Robot Teaching–Learning Processes

In this section, we compare the differences, strengths, and weaknesses of the approaches reviewed above. As shown in Table 1, during human teaching processes, according to the employed teaching equipment and demonstration manners, human teaching approaches can be categorized into physical-touch approaches and non-physical-touch approaches. We also summarize the human–robot interfaces used when humans teach robots in diverse kinds of tasks. The costs of these teaching techniques are roughly categorized as low or high. In robot learning processes, as presented in Table 2, the learning methods can be classified into low-level and high-level learning approaches. For the low-level approaches, the collected demonstration information of the human teacher is usually directly used for pairing robot goal states and current states, and then the robot is driven by the robot controller. In the high-level approaches, more flexible and complex methods (e.g., machine learning algorithms) are employed for the robot to infer action policies from human demonstrations, and even to predict unknown or unlearned actions in human–robot collaborative tasks. Therefore, these approaches make robots more intelligent than low-level learning approaches. From Table 1 and Table 2, it can be concluded that different human teaching and robot learning approaches have their own features in diverse kinds of human–robot interactive contexts. Therefore, for tasks of different levels of difficulty, the selection of the human teaching approach or robot learning approach should match these features in order to properly address the issues (see Section 3) in human–robot teaching–learning processes.

3. Several Issues in Human–Robot Teaching–Learning Processes

In the human–robot teaching–learning process, there are several issues that should be considered and addressed. These issues include extraction, real-time, correspondence, execution, and safety. Each issue is discussed in the following.

3.1. Extraction

Extraction refers to whether the human teacher’s behavior states or actions are completely and correctly extracted into the dataset that will be used by the learning approaches to teach robots. For instance, in vision-based teaching–learning systems, if a teacher displays his hand in the camera’s view, the dataset should record his palm and five fingers, rather than three fingers or no palm.

3.2. Real-Time

Real-time issues exist in most electrical device signal processing. In the robot system, we should ensure that demonstration acquisition, information recording, and execution embodiment are synchronized at the per-state and per-action level, or at least that the time delay is within an acceptable margin of error. For example, by the time the teacher completes the entire screw pick–transport–place movement in a human–robot co-assembly task, the robot should not be only just starting the picking action.
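A minimal way to monitor this constraint is to timestamp each demonstrated event and each corresponding robot execution event and check the lag against an allowed margin. The sketch below assumes such timestamps are available; the margin value is an arbitrary example.
```python
def check_latency(demo_timestamps, exec_timestamps, margin_s=0.2):
    """Return the per-event lag between demonstration and robot execution,
    flagging any event whose lag exceeds the allowed real-time margin."""
    lags = [te - td for td, te in zip(demo_timestamps, exec_timestamps)]
    violations = [i for i, lag in enumerate(lags) if lag > margin_s]
    return lags, violations

# Example: three pick-transport-place events; the last one is executed too late.
demo_t = [0.00, 1.20, 2.50]
exec_t = [0.05, 1.30, 3.10]
lags, late = check_latency(demo_t, exec_t)
print(lags, "late events:", late)   # the 0.6 s lag exceeds the 0.2 s margin
```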

3.3. Correspondence

The issue of correspondence is very important in human–robot interaction, where it refers to the identification of a mapping between the human teacher and the robot learner that allows for the transfer of information in the human–robot team [9]. In short, it concerns whether the robot can identify the information transferred by its human teacher. Correspondence contains two sub-issues: perceptual equivalence and physical equivalence. Regarding perceptual equivalence, the same scene in human–robot interaction may appear differently to the human and the robot because of the differences between their sensory capabilities. For example, the robot can utilize depth cameras to observe human hands, while the human recognizes them from visible light. Regarding physical equivalence, the human and the robot may take different actions to achieve the same physical effect because of the differences between human and robot embodiments [16].
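One simple way to handle the physical-equivalence side of correspondence is to retarget the teacher’s joint angles onto the robot by rescaling each human joint range onto the corresponding robot joint range and clamping to the robot’s limits. The sketch below is a generic illustration of that mapping under these assumptions, not a method from the cited works.
```python
def retarget_joint(human_angle, human_range, robot_range):
    """Map a human joint angle onto the corresponding robot joint.

    Each range is (min, max) in radians; the result is linearly rescaled and
    clamped so the robot never exceeds its own physical limits."""
    h_min, h_max = human_range
    r_min, r_max = robot_range
    ratio = (human_angle - h_min) / (h_max - h_min)
    return min(max(r_min + ratio * (r_max - r_min), r_min), r_max)

# Example: a human elbow sweep of [0, 2.5] rad mapped onto a robot joint limited to [0, 1.8] rad.
for angle in (0.0, 1.25, 2.5, 3.0):
    print(round(retarget_joint(angle, (0.0, 2.5), (0.0, 1.8)), 3))
```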

3.4. Execution

This issue refers to whether the robot can physically execute the human teacher’s behaviors completely and accurately. After a movement from a human teacher is acquired, recorded, and embodied, the robot should be able to transfer it to its effectors and actuators to reproduce the same action.

3.5. Safety

Safety should always be kept in mind. The issue of safety includes two aspects: first, the robot must be friendly to its teacher and must not inflict injury on or bring potential danger to the human demonstrator; second, the robot must carry out its tasks securely and must not damage other machines or products around its workspace. Thereby, safety measures, including interlock chains and emergency-stop strategies, must be considered when designing the robot and its teaching–learning mechanism.

4. What Affects Human Comfort in Human–Robot Interaction?

The general factors that affect human comfort in human–robot interaction have been primarily studied from different perspectives. These factors mainly include the robot response speed, the robot movement trajectory, the human–robot proximity, the robot object-manipulating fluency, human coding efforts, the robot sociability, and factors outside human–robot teams. Each corresponding factor is discussed below.

4.1. Robot Response Speed

The speed of the robot response usually has a direct influence on human feelings in human–robot interaction. In Dautenhahn et al. [56], the researchers conducted a study to investigate how a robot should best approach and place itself relative to a seated human subject by trying different approach directions and speeds. They discuss the results of the user studies in the context of developing a path-planning system for a mobile robot. Considering the robot speed as one of the factors that may have an impact on human comfort in human–robot collaboration, Mitsunaga et al. developed an adaptation mechanism based on reinforcement learning to read subconscious body signals from humans and utilize this information to adjust robot actions [57]. In order to provide safe and socially acceptable robot paths that make humans comfortable in human–robot collaboration, Sisbot et al. designed a human-aware motion planner that adapts the robot’s speed by inferring human accessibility and preferences [58].

4.2. Robot Movement Trajectory

As shown in Figure 2, in a human–robot co-assembly task, the robot starts from point A, picks up the part at point B, and delivers the part to its human partner at point C. However, different robot movement trajectories (one close to the human and the other far away from the human) may induce different psychological feelings in the human. For instance, through comparisons of functional motion, legible motion, and predictable motion, Dragan et al. investigate the positive and negative impacts of different planning motions on human comfort and the success of physical human–robot collaboration [59].

4.3. Human–Robot Proximity

Human–robot proximity is the distance between the robot and its human partner in their collaboration. Mumm and Mutlu explore whether human–human proxemics models can also explain how people physically and psychologically distance themselves from robots. By conducting a controlled laboratory experiment in human–robot interaction, they conclude that humans who like or dislike the robot show different behaviors in physical and psychological distancing [60]. Walters et al. investigate human–robot and robot–human approach distances by testing two hypotheses: one is that approach distances preferred by humans in human–robot interaction will be comparable to those preferred in human–human interaction, and the other is that common personality factors can be employed to predict humans’ likely approach distance preferences. They confirm these two hypotheses via human–robot interactive experiments in a conference room [61]. Takayama et al. explore issues regarding human personal space around robots by testing several research hypotheses in a controlled experiment. They discuss the factors that influence human–robot proximity and human comfort in human-approaching-robot and robot-approaching-human contexts [62]. In human–robot collaborative tasks, Stark et al. conducted a study to evaluate how comfort changes when the robot reaches into the human’s personal space at different distances and urgency levels [63].
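As a small illustration of how such proximity findings can be operationalized, the sketch below classifies the instantaneous human–robot distance into proxemic zones, with boundary values borrowed from common human–human proxemics conventions and used here only as assumed thresholds, so that a planner could, for example, slow the robot when it intrudes into the personal zone.
```python
def proxemic_zone(distance_m):
    """Classify the human-robot distance into proxemic zones (assumed thresholds)."""
    if distance_m < 0.45:
        return "intimate"
    if distance_m < 1.2:
        return "personal"
    if distance_m < 3.6:
        return "social"
    return "public"

def speed_scale(distance_m, nominal_speed=1.0):
    """Example policy: reduce the robot's speed as it enters closer zones."""
    return nominal_speed * {"intimate": 0.1, "personal": 0.4,
                            "social": 0.8, "public": 1.0}[proxemic_zone(distance_m)]

for d in (0.3, 0.8, 2.0, 5.0):
    print(d, proxemic_zone(d), round(speed_scale(d), 2))
```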

4.4. Robot Object-Manipulating Fluency

Human–robot collaborative fluency can be regarded as a high level of coordination that gives rise to a well-synchronized meshing of the actions of humans and robots. Fluency in human–robot collaboration is also considered to be a quality factor that can be positively assessed and recognized when compared to a non-fluent scenario, and it has an impact on task efficiency and human comfort [64]. Lasota et al. conducted an experiment in which the participants worked with an adaptive robot incorporating human-aware motion planning to perform a collaborative task. They evaluated the human–robot team fluency via a set of quantitative metrics, and further analyzed human satisfaction and comfort in human–robot collaboration [65]. Through spatial contrast (formed by distinct hand-over poses) and temporal contrast (formed by unambiguous transitions to the hand-over pose), Cakmak and colleagues improved human–robot collaborative fluency, which makes humans feel more comfortable in human–robot teams [66].

4.5. Human Coding Efforts

Most robots are traditionally programmed using offline devices, such as a teach pendant, which is tedious and time-consuming and makes the human expert feel uncomfortable in some situations. Therefore, novel and more effective approaches need to be developed for humans to interact with and program robots more easily and intuitively. Neto et al. developed a CAD-based approach allowing users with basic CAD skills to generate robot programs without requiring as much human coding effort as the robot teach pendant [15]. In order to facilitate human–robot collaborative tasks, Wang et al. proposed a teaching–learning collaboration model in which the robot learns from human assembly demonstrations and actively collaborates with the human through natural language, which can largely improve human–robot efficiency and human comfort [7].

4.6. Robot Sociability

Robot sociability has also been identified as a factor influencing human–robot interaction [61], where robots are regarded as “social entities” that interact with humans socially while they work together. From the social science perspective, the amount of robot smiling, eye contact, and the appearance of robots usually elicit different feelings in human partners toward the robots, including happiness, sadness, fatigue, nervousness, and worry [67].

4.7. Factors Outside Human–Robot Teams

The factors outside the teamwork, including task types, working surroundings, and mission contexts [68], have also been investigated in human–robot collaboration [69]. These external factors generally affect the development and implementation of a human–robot team to accomplish the collaborative task successfully and ergonomically.

5. How to Improve Human Comfort in Human–Robot Interaction?

5.1. Human Comfort Measurement

Before discussing how to improve human acceptance of robots and human comfort, it is necessary to know how to measure human comfort. For human comfort measurement, there are two main widely used approaches: the self-evaluation approach and the physiological approach.

5.1.1. Self-Evaluation Approach

For the self-evaluation approach, many studies have been conducted to measure human comfort in various situations [70,71] using different kinds of questionnaires [72,73,74]. Several typical designs, such as human self-reports [75], frequency of human interventions [76], self-assessment manikin [77], and compliance with robot suggestions [68], have been utilized for assessing comfort in human–machine interactions. Some other approaches enable humans to rate their comfort levels using a Likert scale [78] in real time during user studies. In these approaches, online devices, such as smartphones with designed applications (APPs) [79,80,81], are adopted where the participants can change their comfort levels by either intuitively sliding comfort bars or speaking.
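A real-time self-evaluation interface can be as simple as logging timestamped Likert-scale ratings during a trial. The sketch below shows one hypothetical way a study application might store and summarize such ratings; the 1–7 scale and the summary statistics are illustrative choices, not those of any specific cited study.
```python
import time

class ComfortLog:
    """Store timestamped comfort ratings on a 1-7 Likert scale during a session."""

    def __init__(self):
        self.samples = []           # list of (elapsed_seconds, rating)
        self.t0 = time.time()

    def rate(self, rating):
        if not 1 <= rating <= 7:
            raise ValueError("rating must be on the 1-7 Likert scale")
        self.samples.append((time.time() - self.t0, rating))

    def summary(self):
        ratings = [r for _, r in self.samples]
        return {"n": len(ratings),
                "mean": sum(ratings) / len(ratings) if ratings else None,
                "min": min(ratings, default=None)}

log = ComfortLog()
for r in (5, 6, 3, 4):              # e.g., ratings slid on a smartphone comfort bar
    log.rate(r)
print(log.summary())
```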

5.1.2. Physiological Approach

For the physiological approaches, many studies have shown that various physiological signals can be used to recognize the internal states of human subjects, such as emotions and feelings. In human–robot interaction contexts, it has been found that some affective states of humans, such as excitement, fatigue, engagement, and distraction (and likely comfort as well) [82,83], are correlated to their physiological signals, including those measured using electrodermal activity (EDA), photoplethysmography (PPG), skin temperature (ST), eye tracking (ET), and electroencephalography (EEG). EDA, also known as galvanic skin response, is the measurement of conductance or resistance across the surface of one’s skin, which continuously varies as one responds to various stimuli [84]. PPG is a measurement of the changes in light absorption of the skin, measured using a pulse oximeter; it is a periodic signal that measures cardiac performance and can be used to evaluate a participant’s level of arousal [85]. ST is measured with a temperature sensor and has been used to measure stress [86]. ET measures eye motion relative to the participant’s head, in addition to pupil diameter, via eye-tracking glasses. Studies have shown that pupil diameter is strongly related to emotional arousal and autonomic activation [87]. EEG is a measure of the brain’s electrical activity and can be used to evaluate stress, excitement, focus, interest, relaxation, and engagement of the human in human–robot interaction [88].
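As a simple illustration of turning such signals into features, the sketch below separates an EDA trace into a slowly varying tonic level and a phasic residual, and counts phasic peaks as a crude arousal indicator. This is a generic signal-processing sketch assuming NumPy and SciPy are available, not the analysis pipeline of the cited studies.
```python
import numpy as np
from scipy.signal import find_peaks

def eda_features(eda, fs=4.0, window_s=10.0, peak_height=0.02):
    """Compute a crude tonic level and phasic peak count from an EDA trace (microsiemens)."""
    eda = np.asarray(eda, float)
    win = max(1, int(window_s * fs))
    kernel = np.ones(win) / win
    tonic = np.convolve(eda, kernel, mode="same")   # slow-moving baseline (tonic component)
    phasic = eda - tonic                            # fast skin-conductance responses
    peaks, _ = find_peaks(phasic, height=peak_height)
    return {"tonic_mean": float(tonic.mean()), "n_phasic_peaks": int(len(peaks))}

# Synthetic example: a drifting baseline plus two sharp skin-conductance responses.
t = np.arange(0, 60, 1 / 4.0)
signal = 2.0 + 0.01 * t
signal[80:90] += 0.10
signal[160:170] += 0.15
print(eda_features(signal))
```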

5.2. Measures to Improve Human Acceptance of Robots

User acceptance of a robot has a direct impact on the quality of shared tasks when a robot works with its human partner. Many studies have been performed on the improvement of user acceptance in human–robot interaction [89,90,91]. Generally, improving robot performance can have a positive impact on user acceptance, where the proposed solutions in previous studies include developing robots with friendly and intuitive human–robot interfaces [92,93,94], designing different kinds of robots for diverse age groups [95,96], improving the robot response to human needs [97,98], etc. Additionally, user acceptance will be improved if a high collaboration efficiency exists in the human–robot team. Several measures have been taken, such as reducing human idle time [99,100] and facilitating human–robot collaboration fluency [101,102,103]. Making the robot easy to use can also improve user acceptance; for example, programming robots using human demonstrations to assist humans in accomplishing collaborative tasks [7,19,39]. Moreover, humans’ general acceptance of new things, especially new robots, should also be fostered. Several methods can be employed, such as professional user training [104,105,106] and popularizing robots with the public [107,108,109].

5.3. Measures to Improve Human Comfort

Several studies have been conducted on improving human comfort in human–robot interaction. In general, accommodating the robot’s actions to different individuals by considering their working preferences can improve human comfort. The proposed approaches include adjusting human–robot proximity in human personal space [65,110], designing multiple robot motion trajectories [59,111], controlling the robot with diverse velocities [58,112,113], and planning the robot with different manipulation orientations [114]. In addition, a fluent interaction [64] between the human and the robot can also improve human comfort. Some typical solutions have been developed, such as human intention anticipation for robot action selection [10,103,115], a human-inspired plan execution system [99], and perceptual symbol practice [116]. Moreover, from the perspective of robot sociability, developing friendly appearances for robots in human–robot interaction, especially for assistive robots in home settings [117,118,119,120] and entertainment robots in public places [92,121,122,123], plays a significant role in improving human comfort.

6. Conclusions

In this review paper, we have presented and discussed two significant topics in human–robot interaction: learning and comfort. The collaboration quality between the human and the robot has been improved largely by taking advantage of robots learning from demonstrations. Human teaching and robot learning approaches, together with their corresponding applications, have been investigated in this work. We have presented and discussed several important issues that need to be considered and addressed in the human–robot teaching–learning process. After that, the factors that may affect human comfort in human–robot interaction have been described and discussed. We have also presented and discussed the measures to improve human acceptance of robots and human comfort in human–robot interaction.

Author Contributions

Organizing, W.W. and Y.J.; writing, W.W., R.L., Y.C., and Y.J.; review and editing, W.W. and Y.J.; supervision, Y.J.

Funding

This work was supported by the National Science Foundation under grant IIS-1845779.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Thoben, K.-D.; Wiesner, S.; Wuest, T. “Industrie 4.0” and smart manufacturing—A review of research issues and application examples. Int. J. Autom. Technol. 2017, 11, 4–16. [Google Scholar] [CrossRef]
  2. Wannasuphoprasit, W.; Akella, P.; Peshkin, M.; Colgate, J.E. Cobots: A novel material handling technology. In Proceedings of the 1998 ASME International Mechanical Engineering Congress and Exposition, Anaheim, CA, USA, 15–20 November 1998. [Google Scholar]
  3. Krüger, J.; Lien, T.K.; Verl, A. Cooperation of human and machines in assembly lines. CIRP Ann. Manuf. Technol. 2009, 58, 628–646. [Google Scholar]
  4. Wang, W.; Chen, Y.; Diekel, Z.M.; Jia, Y. Cost functions based dynamic optimization for robot action planning. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; pp. 277–278. [Google Scholar]
  5. Léger, J.; Angeles, J. Off-line programming of six-axis robots for optimum five-dimensional tasks. Mech. Mach. Theory 2016, 100, 155–169. [Google Scholar] [CrossRef]
  6. Billard, A.; Calinon, S.; Dillmann, R.; Schaal, S. Robot programming by demonstration. In Springer Handbook of Robotics; Springer: Berlin, Germany, 2008; pp. 1371–1394. [Google Scholar]
  7. Wang, W.; Li, R.; Chen, Y.; Diekel, Z.; Jia, Y. Facilitating human-robot collaborative tasks by teaching-learning-collaboration from human demonstrations. IEEE Trans. Autom. Sci. Eng. 2018, 16, 640–653. [Google Scholar] [CrossRef]
  8. Argall, B.D.; Chernova, S.; Veloso, M.; Browning, B. A survey of robot learning from demonstration. Robot. Auton. Syst. 2009, 57, 469–483. [Google Scholar] [CrossRef]
  9. Wang, W.; Li, R.; Diekel, Z.; Jia, Y. Controlling object hand-over in human-robot collaboration via natural wearable sensing. IEEE Trans. Hum. Mach. Syst. 2019, 49, 59–71. [Google Scholar] [CrossRef]
  10. Park, C.; Ondřej, J.; Gilbert, M.; Freeman, K.; Sullivan, C.O. Hi robot: Human intention-aware robot planning for safe and efficient navigation in crowds. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 3320–3326. [Google Scholar]
  11. Wang, W.; Li, R.; Chen, Y.; Jia, Y. Human intention prediction in human-robot collaborative tasks. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; pp. 279–280. [Google Scholar]
  12. Wang, W.; Liu, N.; Li, R.; Chen, Y.; Jia, Y. Hucom: A model for human comfort estimation in human-robot collaboration. In Proceedings of the 2018 Dynamic Systems and Control (DSC) Conference, Atlanta, GA, USA, 30 September–3 October 2018. [Google Scholar]
  13. Mead, R.; Matarić, M.J. Proxemics and performance: Subjective human evaluations of autonomous sociable robot distance and social signal understanding. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–3 October 2015; pp. 5984–5991. [Google Scholar]
  14. Gruver, W.A.; Soroka, B.I.; Craig, J.J.; Turner, T.L. Industrial robot programming languages: A comparative evaluation. IEEE Trans. Syst. Man Cybern. 1984, SMC-14, 565–570. [Google Scholar] [CrossRef]
  15. Neto, P.; Pires, J.N.; Moreira, A.P. Cad-based off-line robot programming. In Proceedings of the 2010 IEEE Conference on Robotics Automation and Mechatronics (RAM), Singapore, 28–30 June 2010; pp. 516–521. [Google Scholar]
  16. Billard, A.; Grollman, D. Robot learning by demonstration. Scholarpedia 2013, 8, 3824–3837. [Google Scholar] [CrossRef]
  17. Skubic, M.; Volz, R.A. Acquiring robust, force-based assembly skills from human demonstration. IEEE Trans. Robot. Autom. 2000, 16, 772–781. [Google Scholar] [CrossRef]
  18. Ferreira, M.; Costa, P.; Rocha, L.; Moreira, A.P. Stereo-based real-time 6-dof work tool tracking for robot programing by demonstration. Int. J. Adv. Manuf. Technol. 2014, 85, 1–13. [Google Scholar] [CrossRef]
  19. Jia, Y.; She, L.; Cheng, Y.; Bao, J.; Chai, J.Y.; Xi, N. Program robots manufacturing tasks by natural language instructions. In Proceedings of the 2016 IEEE International Conference on Automation Science and Engineering (CASE), Fort Worth, TX, USA, 21–25 August 2016; pp. 633–638. [Google Scholar]
  20. Billard, A.G.; Calinon, S.; Guenter, F. Discriminative and adaptive imitation in uni-manual and bi-manual tasks. Robot. Auton. Syst. 2006, 54, 370–384. [Google Scholar] [CrossRef]
  21. Nagata, F.; Watanabe, K.; Kiguchi, K.; Tsuda, K.; Kawaguchi, S.; Noda, Y.; Komino, M. Joystick teaching system for polishing robots using fuzzy compliance control. In Proceedings of the 2001 IEEE International Symposium on Computational Intelligence in Robotics and Automation, Banff, AB, Canada, 29 July–1 August 2001; pp. 362–367. [Google Scholar]
  22. Katagami, D.; Yamada, S. Real robot learning with human teaching. In Proceedings of the 4th Japan-Australia Joint Workshop on Intelligent and Evolutionary Systems, Hayama, Japan, 31 October–2 November 2000; pp. 263–270. [Google Scholar]
  23. Nose, H.; Kawabata, K.; Suzuki, Y. Method of Teaching a Robot. Google Patents 1991. [Google Scholar]
  24. Calinon, S.; Evrard, P.; Gribovskaya, E.; Billard, A.; Kheddar, A. Learning collaborative manipulation tasks by demonstration using a haptic interface. In Proceedings of the 2009 International Conference on Advanced Robotics (ICAR 2009), Munich, Germany, 22–26 June 2009; pp. 1–6. [Google Scholar]
  25. Peternel, L.; Babič, J. Humanoid robot posture-control learning in real-time based on human sensorimotor learning ability. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation (ICRA), Karlsruhe, Germany, 6–10 May 2013; pp. 5329–5334. [Google Scholar]
  26. Wang, W.; Li, R.; Diekel, Z.M.; Jia, Y. Hands-free maneuvers of robotic vehicles via human intentions understanding using wearable sensing. J. Robot. 2018, 2018, 4546094. [Google Scholar] [CrossRef]
  27. Javaid, M.; Žefran, M.; Yavolovsky, A. Using pressure sensors to identify manipulation actions during human physical interaction. In Proceedings of the 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Kobe, Japan, 31 August–4 September 2015; pp. 670–675. [Google Scholar]
  28. Aleotti, J.; Caselli, S. Grasp programming by demonstration in virtual reality with automatic environment reconstruction. Virtual Real. 2012, 16, 87–104. [Google Scholar] [CrossRef]
  29. Calinon, S.; Billard, A. Incremental learning of gestures by imitation in a humanoid robot. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Washington, DC, USA, 9–11 March 2007; pp. 255–262. [Google Scholar]
  30. Lauria, S.; Bugmann, G.; Kyriacou, T.; Klein, E. Mobile robot programming using natural language. Robot. Auton. Syst. 2002, 38, 171–181. [Google Scholar] [CrossRef]
  31. She, L.; Cheng, Y.; Chai, J.Y.; Jia, Y.; Yang, S.; Xi, N. Teaching robots new actions through natural language instructions. In Proceedings of the 23rd IEEE International Symposium on Robot and Human Interactive Communication, Edinburgh, UK, 25–29 August 2014; pp. 868–873. [Google Scholar]
  32. Misra, D.K.; Sung, J.; Lee, K.; Saxena, A. Tell me dave: Context-sensitive grounding of natural language to manipulation instructions. Int. J. Robot. Res. 2015, 0278364915602060. [Google Scholar] [CrossRef]
  33. Bodenhagen, L.; Fugl, A.R.; Jordt, A.; Willatzen, M.; Andersen, K.A.; Olsen, M.M.; Koch, R.; Petersen, H.G.; Krüger, N. An adaptable robot vision system performing manipulation actions with flexible objects. IEEE Trans. Autom. Sci. Eng. 2014, 11, 749–765. [Google Scholar] [CrossRef]
  34. Kormushev, P.; Calinon, S.; Caldwell, D.G. Imitation learning of positional and force skills demonstrated via kinesthetic teaching and haptic input. Adv. Robot. 2011, 25, 581–603. [Google Scholar] [CrossRef]
  35. Manschitz, S.; Kober, J.; Gienger, M.; Peters, J. Learning movement primitive attractor goals and sequential skills from kinesthetic demonstrations. Robot. Auton. Syst. 2015, 74, 97–107. [Google Scholar] [CrossRef]
  36. Wu, Y.; Demiris, Y. Towards one shot learning by imitation for humanoid robots. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA), Anchorage, AK, USA, 3–8 May 2010; pp. 2889–2894. [Google Scholar]
  37. Lee, D.; Ott, C. Incremental kinesthetic teaching of motion primitives using the motion refinement tube. Auton. Robot. 2011, 31, 115–131. [Google Scholar] [CrossRef]
  38. Das, N.; Prakash, R.; Behera, L. Learning object manipulation from demonstration through vision for the 7-dof barrett wam. In Proceedings of the 2016 IEEE First International Conference on Control, Measurement and Instrumentation (CMI), Kolkata, India, 8–10 January 2016; pp. 391–396. [Google Scholar]
  39. Koppula, H.S.; Gupta, R.; Saxena, A. Learning human activities and object affordances from rgb-d videos. Int. J. Robot. Res. 2013, 32, 951–970. [Google Scholar] [CrossRef]
  40. Bagnell, J.A.; Schneider, J.G. Autonomous helicopter control using reinforcement learning policy search methods. In Proceedings of the 2001 IEEE International Conference on Robotics and Automation (ICRA 2001), Seoul, Korea, 21–26 May 2001; pp. 1615–1620. [Google Scholar]
  41. Ghadirzadeh, A.; Bütepage, J.; Maki, A.; Kragic, D.; Björkman, M. A sensorimotor reinforcement learning framework for physical human-robot interaction. arXiv 2016, arXiv:1607.07939. [Google Scholar]
  42. Yahya, A.; Li, A.; Kalakrishnan, M.; Chebotar, Y.; Levine, S. Collective robot reinforcement learning with distributed asynchronous guided policy search. arXiv 2016, arXiv:1610.00673. [Google Scholar]
  43. Bogert, K.; Doshi, P. Multi-robot inverse reinforcement learning under occlusion with state transition estimation. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, Istanbul, Turkey, 4–8 May 2015; pp. 1837–1838. [Google Scholar]
  44. Xia, C.; El Kamel, A. Neural inverse reinforcement learning in autonomous navigation. Robot. Auton. Syst. 2016, 84, 1–14. [Google Scholar] [CrossRef]
  45. Shiarlis, K.; Messias, J.; Whiteson, S. Inverse reinforcement learning from failure. In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, International Foundation for Autonomous Agents and Multiagent Systems, Singapore, 9–13 May 2016; pp. 1060–1068. [Google Scholar]
  46. Konidaris, G.; Kuindersma, S.; Grupen, R.; Barto, A. Robot learning from demonstration by constructing skill trees. Int. J. Robot. Res. 2010, 31, 360–375. [Google Scholar] [CrossRef]
  47. Konidaris, G.; Kuindersma, S.; Grupen, R.; Barreto, A.S. Constructing skill trees for reinforcement learning agents from demonstration trajectories. In Proceedings of the 23rd International Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 6–9 December 2010; pp. 1162–1170. [Google Scholar]
  48. Lee, K.; Su, Y.; Kim, T.-K.; Demiris, Y. A syntactic approach to robot imitation learning using probabilistic activity grammars. Robot. Auton. Syst. 2013, 61, 1323–1334. [Google Scholar] [CrossRef]
  49. Fonooni, B.; Hellström, T.; Janlert, L.-E. Learning high-level behaviors from demonstration through semantic networks. In Proceedings of the 4th International Conference on Agents and Artificial Intelligence (ICAART), Vilamoura, Portugal, 6–8 February 2012; pp. 419–426. [Google Scholar]
  50. Kuipers, B.; Froom, R.; Lee, W.-Y.; Pierce, D. The semantic hierarchy in robot learning. In Robot Learning; Springer: New York, NY, USA, 1993; pp. 141–170. [Google Scholar]
  51. Li, G.; Zhu, C.; Du, J.; Cheng, Q.; Sheng, W.; Chen, H. Robot semantic mapping through wearable sensor-based human activity recognition. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation (ICRA), Saint Paul, MN, USA, 14–18 May 2012; pp. 5228–5233. [Google Scholar]
  52. Oztop, E.; Arbib, M.A. Schema design and implementation of the grasp-related mirror neuron system. Biol. Cybern. 2002, 87, 116–140. [Google Scholar] [CrossRef]
  53. Rizzolatti, G.; Fogassi, L.; Gallese, V. Neurophysiological mechanisms underlying the understanding and imitation of action. Nat. Rev. Neurosci. 2001, 2, 661–670. [Google Scholar] [CrossRef]
  54. Sauser, E.L.; Billard, A.G. Biologically inspired multimodal integration: Interferences in a human-robot interaction game. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; pp. 5619–5624. [Google Scholar]
  55. Yoo, Y.-H.; Kim, J.-H. Procedural memory learning from demonstration for task performance. In Proceedings of the 2015 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Hong Kong, China, 9–12 October 2015; pp. 2435–2440. [Google Scholar]
  56. Dautenhahn, K.; Walters, M.; Woods, S.; Koay, K.L.; Nehaniv, C.L.; Sisbot, A.; Alami, R.; Siméon, T. How may I serve you? A robot companion approaching a seated person in a helping context. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, Salt Lake City, UT, USA, 2–3 March 2006; pp. 172–179. [Google Scholar]
  57. Mitsunaga, N.; Smith, C.; Kanda, T.; Ishiguro, H.; Hagita, N. Adapting robot behavior for human—Robot interaction. IEEE Trans. Robot. 2008, 24, 911–916. [Google Scholar] [CrossRef]
  58. Sisbot, E.A.; Marin-Urias, L.F.; Alami, R.; Simeon, T. A human aware mobile robot motion planner. IEEE Trans. Robot. 2007, 23, 874–883. [Google Scholar] [CrossRef]
  59. Dragan, A.D.; Bauman, S.; Forlizzi, J.; Srinivasa, S.S. Effects of robot motion on human-robot collaboration. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, Portland, OR, USA, 2–5 March 2015; pp. 51–58. [Google Scholar]
  60. Mumm, J.; Mutlu, B. Human-robot proxemics: Physical and psychological distancing in human-robot interaction. In Proceedings of the 6th international conference on Human-robot interaction, Lausanne, Switzerland, 6–9 March 2011; pp. 331–338. [Google Scholar]
  61. Walters, M.L.; Dautenhahn, K.; Te Boekhorst, R.; Koay, K.L.; Kaouri, C.; Woods, S.; Nehaniv, C.; Lee, D.; Werry, I. The influence of subjects’ personality traits on personal spatial zones in a human-robot interaction experiment. In Proceedings of the ROMAN 2005. IEEE International Workshop on Robot and Human Interactive Communication, Nashville, TN, USA, 13–15 August 2005; pp. 347–352. [Google Scholar]
  62. Takayama, L.; Pantofaru, C. Influences on proxemic behaviors in human-robot interaction. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), St. Louis, MO, USA, 11–15 October 2009; pp. 5495–5502. [Google Scholar]
  63. Stark, J.; Mota, R.R.; Sharlin, E. Personal space intrusion in human-robot collaboration. In Proceedings of the Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 05–08 March 2018; pp. 245–246. [Google Scholar]
  64. Hoffman, G. Evaluating fluency in human-robot collaboration. In Proceedings of the International Conference on Human-Robot Interaction (HRI), Tokyo, Japan, 3–6 March 2013; pp. 1–8. [Google Scholar]
  65. Lasota, P.A.; Shah, J.A. Analyzing the effects of human-aware motion planning on close-proximity human–robot collaboration. Hum. Factors 2015, 57, 21–33. [Google Scholar] [CrossRef]
  66. Cakmak, M.; Srinivasa, S.S.; Lee, M.K.; Kiesler, S.; Forlizzi, J. Using spatial and temporal contrast for fluent robot-human hand-overs. In Proceedings of the 6th International Conference on Human-Robot Interaction, Lausanne, Switzerland, 6–9 March 2011; pp. 489–496. [Google Scholar]
  67. Stark, J.; Mota, R.R.; Sharlin, E. Personal Space Intrusion in Human-Robot Collaboration; Science Research & Publications: New York, NY, USA, 2017. [Google Scholar]
  68. Salem, M.; Lakatos, G.; Amirabdollahian, F.; Dautenhahn, K. Would you trust a (faulty) robot? Effects of error, task type and personality on human-robot cooperation and trust. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, Portland, OR, USA, 2–5 March 2015; pp. 141–148. [Google Scholar]
  69. Ma, L.M.; Fong, T.; Micire, M.J.; Kim, Y.K.; Feigh, K. Human-robot teaming: Concepts and components for design. In Field and Service Robotics; Springer: Basel, Switzerland, 2018; pp. 649–663. [Google Scholar]
  70. Kuijt-Evers, L.F.; Groenesteijn, L.; de Looze, M.P.; Vink, P. Identifying factors of comfort in using hand tools. Appl. Ergon. 2004, 35, 453–458. [Google Scholar] [CrossRef] [PubMed]
  71. Stathopoulos, T.; Wu, H.; Zacharias, J. Outdoor human comfort in an urban climate. Build. Environ. 2004, 39, 297–305. [Google Scholar] [CrossRef]
  72. Xue, P.; Mak, C.; Cheung, H. The effects of daylighting and human behavior on luminous comfort in residential buildings: A questionnaire survey. Build. Environ. 2014, 81, 51–59. [Google Scholar] [CrossRef]
  73. Mahmoud, A.H.A. Analysis of the microclimatic and human comfort conditions in an urban park in hot and arid regions. Build. Environ. 2011, 46, 2641–2656. [Google Scholar] [CrossRef]
  74. Zhang, L.; Helander, M.G.; Drury, C.G. Identifying factors of comfort and discomfort in sitting. Hum. Factors 1996, 38, 377–389. [Google Scholar] [CrossRef]
  75. Robinette, P.; Howard, A.M.; Wagner, A.R. Effect of robot performance on human–robot trust in time-critical situations. IEEE Trans. Hum. Mach. Syst. 2017, 47, 425–436. [Google Scholar] [CrossRef]
  76. Gao, F.; Clare, A.S.; Macbeth, J.C.; Cummings, M. Modeling the Impact of Operator Trust on Performance in Multiple Robot Control; AAAI: Menlo Park, CA, USA, 2013. [Google Scholar]
  77. Bradley, M.M.; Lang, P.J. Measuring emotion: The self-assessment manikin and the semantic differential. J. Behav. Ther. Exp. Psychiatry 1994, 25, 49–59. [Google Scholar] [CrossRef]
  78. Allen, I.E.; Seaman, C.A. Likert scales and data analyses. Qual. Prog. 2007, 40, 64. [Google Scholar]
  79. Li, D.; Menassa, C.C.; Kamat, V.R. Personalized human comfort in indoor building environments under diverse conditioning modes. Build. Environ. 2017, 126, 304–317. [Google Scholar] [CrossRef]
  80. Lin, C.-Y.; Chen, L.-J.; Chen, Y.-Y.; Lee, W.-C. A Comfort Measuring System for Public Transportation Systems Using Participatory Phone Sensing. ACM PhoneSense 2010. Available online: https://www.iis.sinica.edu.tw/papers/cclljj/11583-F.pdf (accessed on 2 July 2019).
  81. Andrews, S.; Ellis, D.A.; Shaw, H.; Piwek, L. Beyond self-report: Tools to compare estimated and real-world smartphone use. PLoS ONE 2015, 10, e0139004. [Google Scholar] [CrossRef] [Green Version]
  82. Ji, Q.; Lan, P.; Looney, C. A probabilistic framework for modeling and real-time monitoring human fatigue. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2006, 36, 862–875. [Google Scholar]
  83. Bořil, H.; Sangwan, A.; Hasan, T.; Hansen, J.H. Automatic excitement-level detection for sports highlights generation. In Proceedings of the Eleventh Annual Conference of the International Speech Communication Association, Chiba, Japan, 26–30 September 2010. [Google Scholar]
  84. Ali, M.; Al Machot, F.; Mosa, A.H.; Kyamakya, K. CNN-based subject-independent driver emotion recognition system involving physiological signals for ADAS. In Advanced Microsystems for Automotive Applications 2016; Springer: Berlin, Germany, 2016; pp. 125–138. [Google Scholar]
  85. Lee, H.-M.; Kim, D.-J.; Yang, H.-K.; Kim, K.-S.; Lee, J.-W.; Cha, E.-J.; Kim, K.-A. Human sensibility evaluation using photoplethysmogram (PPG). In Proceedings of the 2009 International Conference on Complex, Intelligent and Software Intensive Systems (CISIS’09), Fukuoka, Japan, 16–19 March 2009; pp. 149–153. [Google Scholar]
  86. Salomon, R.; Lim, M.; Pfeiffer, C.; Gassert, R.; Blanke, O. Full body illusion is associated with widespread skin temperature reduction. Front. Behav. Neurosci. 2013, 7, 65. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  87. Li, L. A Multi-Sensor Intelligent Assistance System for Driver Status Monitoring and Intention Prediction. Ph.D. Thesis, Technische Universität Kaiserslautern, Kaiserslautern, Germany, 16 January 2017. [Google Scholar]
  88. Ackermann, P.; Kohlschein, C.; Bitsch, J.A.; Wehrle, K.; Jeschke, S. EEG-based automatic emotion recognition: Feature extraction, selection and classification methods. In Proceedings of the 2016 IEEE 18th International Conference on e-Health Networking, Applications and Services (Healthcom), Munich, Germany, 14–17 September 2016; pp. 1–6. [Google Scholar]
  89. Heerink, M.; Krose, B.; Evers, V.; Wielinga, B. Measuring acceptance of an assistive social robot: A suggested toolkit. In Proceedings of the 18th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2009), Toyama, Japan, 27 September–2 October 2009; pp. 528–533. [Google Scholar]
  90. Heerink, M.; Kröse, B.; Evers, V.; Wielinga, B. Assessing acceptance of assistive social agent technology by older adults: The almere model. Int. J. Soc. Robot. 2010, 2, 361–375. [Google Scholar] [CrossRef] [Green Version]
  91. Park, E.; Joon Kim, K. User acceptance of long-term evolution (LTE) services: An application of extended technology acceptance model. Program 2013, 47, 188–205. [Google Scholar] [CrossRef]
  92. Goetz, J.; Kiesler, S.; Powers, A. Matching robot appearance and behavior to tasks to improve human-robot cooperation. In Proceedings of the 12th IEEE International Workshop on Robot and Human Interactive Communication, Millbrae, CA, USA, 31 October–2 November 2003; pp. 55–60. [Google Scholar]
  93. Goodrich, M.A.; Schultz, A.C. Human-robot interaction: A survey. Found. Trends Hum. Comput. Interact. 2007, 1, 203–275. [Google Scholar] [CrossRef]
  94. Dautenhahn, K. Socially intelligent robots: Dimensions of human–robot interaction. Philos. Trans. R. Soc. Lond. B Biol. Sci. 2007, 362, 679–704. [Google Scholar] [CrossRef] [Green Version]
  95. Kuo, I.H.; Rabindran, J.M.; Broadbent, E.; Lee, Y.I.; Kerse, N.; Stafford, R.; MacDonald, B.A. Age and gender factors in user acceptance of healthcare robots. In Proceedings of the 18th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2009), Toyama, Japan, 27 September–2 October 2009; pp. 214–219. [Google Scholar]
  96. Flandorfer, P. Population ageing and socially assistive robots for elderly persons: The importance of sociodemographic factors for user acceptance. Int. J. Popul. Res. 2012, 2012, 829835. [Google Scholar] [CrossRef] [Green Version]
  97. Broadbent, E.; Stafford, R.; MacDonald, B. Acceptance of healthcare robots for the older population: Review and future directions. Int. J. Soc. Robot. 2009, 1, 319. [Google Scholar] [CrossRef]
  98. Casper, J.; Murphy, R.R. Human-robot interactions during the robot-assisted urban search and rescue response at the World Trade Center. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2003, 33, 367–385. [Google Scholar] [CrossRef] [Green Version]
  99. Shah, J.; Wiken, J.; Williams, B.; Breazeal, C. Improved human-robot team performance using Chaski, a human-inspired plan execution system. In Proceedings of the 6th International Conference on Human-Robot Interaction, Lausanne, Switzerland, 6–9 March 2011; pp. 29–36. [Google Scholar]
  100. Nikolaidis, S.; Shah, J. Human-robot cross-training: Computational formulation, modeling and evaluation of a human team training strategy. In Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction, Tokyo, Japan, 3–6 March 2013; pp. 33–40. [Google Scholar]
  101. Nikolaidis, S.; Ramakrishnan, R.; Gu, K.; Shah, J. Efficient model learning from joint-action demonstrations for human-robot collaborative tasks. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, Portland, OR, USA, 2–5 March 2015; pp. 189–196. [Google Scholar]
  102. Hoffman, G.; Breazeal, C. Effects of anticipatory action on human-robot teamwork efficiency, fluency, and perception of team. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Arlington, VA, USA, 10–12 March 2007; pp. 1–8. [Google Scholar]
  103. Hoffman, G.; Breazeal, C. Cost-based anticipatory action selection for human–robot fluency. IEEE Trans. Robot. 2007, 23, 952–961. [Google Scholar] [CrossRef]
  104. Huttenrauch, H.; Eklundh, K.S. Fetch-and-carry with CERO: Observations from a long-term user study with a service robot. In Proceedings of the 2002 11th IEEE International Workshop on Robot and Human Interactive Communication, Berlin, Germany, 25–27 September 2002; pp. 158–163. [Google Scholar]
  105. Chuang, C.-P.; Huang, Y.-J.; Guo-Hao, L.; Huang, Y.-C. POPBL-based education and training system on robotics training effectiveness. In Proceedings of the 2010 International Conference on System Science and Engineering (ICSSE), Taipei, Taiwan, 1–3 July 2010; pp. 111–114. [Google Scholar]
  106. Davis, F.D. A Technology Acceptance Model for Empirically Testing New End-User Information Systems: Theory and Results; Massachusetts Institute of Technology: Cambridge, MA, USA, 1985. [Google Scholar]
  107. Glas, D.F.; Satake, S.; Ferreri, F.; Kanda, T.; Hagita, N.; Ishiguro, H. The network robot system: Enabling social human-robot interaction in public spaces. J. Hum. Robot Interact. 2013, 1, 5–32. [Google Scholar] [CrossRef] [Green Version]
  108. Weiss, A.; Bernhaupt, R.; Tscheligi, M.; Wollherr, D.; Kuhnlenz, K.; Buss, M. A methodological variation for acceptance evaluation of human-robot interaction in public places. In Proceedings of the 2008 17th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2008), Munich, Germany, 1–3 August 2008; pp. 713–718. [Google Scholar]
  109. Jensen, B.; Tomatis, N.; Mayor, L.; Drygajlo, A.; Siegwart, R. Robots meet humans: Interaction in public spaces. IEEE Trans. Ind. Electron. 2005, 52, 1530–1546. [Google Scholar] [CrossRef]
  110. Lasota, P.A.; Rossano, G.F.; Shah, J.A. Toward safe close-proximity human-robot interaction with standard industrial robots. In Proceedings of the 2014 IEEE International Conference on Automation Science and Engineering (CASE), Taipei, Taiwan, 18–22 August 2014. [Google Scholar]
  111. Dragan, A.D.; Lee, K.C.; Srinivasa, S.S. Legibility and predictability of robot motion. In Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction, Tokyo, Japan, 3–6 March 2013; pp. 301–308. [Google Scholar]
  112. Haddadin, S.; Albu-Schäffer, A.; Hirzinger, G. Safety evaluation of physical human-robot interaction via crash-testing. In Robotics: Science and Systems; MIT Press: Cambridge, MA, USA, 2007; pp. 217–224. [Google Scholar]
  113. Haddadin, S.; Albu-Schaffer, A.; Hirzinger, G. The role of the robot mass and velocity in physical human-robot interaction-Part I: Non-constrained blunt impacts. In Proceedings of the 2008 IEEE International Conference on Robotics and Automation, Pasadena, CA, USA, 19–23 May 2008; pp. 1331–1338. [Google Scholar]
  114. Edsinger, A.; Kemp, C.C. Human-robot interaction for cooperative manipulation: Handing objects to one another. In Proceedings of the 2007 16th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2007), Jeju Island, Korea, 26–29 August 2007; pp. 1167–1172. [Google Scholar]
  115. Mainprice, J.; Berenson, D. Human-robot collaborative manipulation planning using early prediction of human motion. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Tokyo, Japan, 3–7 November 2013; pp. 299–306. [Google Scholar]
  116. Hoffman, G.; Breazeal, C. Achieving fluency through perceptual-symbol practice in human-robot collaboration. In Proceedings of the 3rd ACM/IEEE International Conference on Human Robot Interaction, Amsterdam, The Netherlands, 12–15 March 2008; pp. 1–8. [Google Scholar]
  117. Forlizzi, J.; DiSalvo, C.; Gemperle, F. Assistive robotics and an ecology of elders living independently in their homes. Hum. Comput. Interact. 2004, 19, 25–59. [Google Scholar]
  118. Dautenhahn, K.; Woods, S.; Kaouri, C.; Walters, M.L.; Koay, K.L.; Werry, I. What is a robot companion-friend, assistant or butler? In Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005), Edmonton, AB, Canada, 2–6 August 2005; pp. 1192–1197. [Google Scholar]
  119. Nejat, G.; Sun, Y.; Nies, M. Assistive robots in health care settings. Home Health Care Manag. Pract. 2009, 21, 177–187. [Google Scholar] [CrossRef]
  120. Wu, Y.-H.; Fassert, C.; Rigaud, A.-S. Designing robots for the elderly: Appearance issue and beyond. Arch. Gerontol. Geriatr. 2012, 54, 121–126. [Google Scholar] [CrossRef]
  121. Nagasaka, K.; Kuroki, Y.; Suzuki, S.Y.; Itoh, Y.; Yamaguchi, J.I. Integrated motion control for walking, jumping and running on a small bipedal entertainment robot. In Proceedings of the 2004 IEEE International Conference on Robotics and Automation (ICRA’04), New Orleans, LA, USA, 26 April–1 May 2004; pp. 3189–3194. [Google Scholar]
  122. Li, D.; Rau, P.P.; Li, Y. A cross-cultural study: Effect of robot appearance and task. Int. J. Soc. Robot. 2010, 2, 175–186. [Google Scholar] [CrossRef]
  123. Salichs, M.A.; Barber, R.; Khamis, A.M.; Malfaz, M.; Gorostiza, J.F.; Pacheco, R.; Rivas, R.; Corrales, A.; Delgado, E.; Garcia, D. Maggie: A robotic platform for human-robot social interaction. In Proceedings of the 2006 IEEE Conference on Robotics, Automation and Mechatronics, Luoyang, China, 25–28 June 2006; pp. 1–7. [Google Scholar]
Figure 1. The robot programming approach has gone through three distinct reforms.
Figure 2. The robot performs hand-over tasks through different movement trajectories.
Table 1. Comparisons of different human teaching approaches.

| Approach | Category | Human–Robot Interface | Cost | Features |
|---|---|---|---|---|
| Kinesthetic-Based Teaching | Physical touch | Robot links and force sensors | Low | Cost-competitive; intuitive operation; significant human effort required |
| Joystick-Based Teaching | Physical touch | Joystick | Low | Cost-competitive; intuitive operation; not suitable for high-degree-of-freedom robot systems |
| Immersive Teleoperation Scenarios Teaching | Physical touch | Robot force sensors and end effector | Low | No external devices required; precise robot control; professional expertise required |
| Wearable Sensor-Based Teaching | Physical touch | Force-sensing glove and VR glove | High | Intuitive operation; no professional expertise required; high cost |
| Natural Language-Based Teaching | Non-physical touch | Natural language | Low | Cost-competitive; constrained by speech recognition technology |
| Vision-Based Teaching | Non-physical touch | Vision sensors/systems | High | No professional expertise required; intuitive operation; easily influenced by the environment |
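To make the lowest-cost entry of Table 1 more concrete, the sketch below records joint positions while a human back-drives a gravity-compensated arm and then replays the captured waypoints. It is a minimal illustration only: the `SimulatedArm` class, its method names, and the 50 Hz sampling rate are hypothetical placeholders standing in for a vendor-specific robot SDK, not the interface of any system covered in this review.

```python
"""Minimal kinesthetic record-and-replay sketch (hypothetical robot API)."""
import time
from typing import List, Sequence


class SimulatedArm:
    """Stand-in for a vendor SDK; a real arm would expose similar calls."""

    def __init__(self, num_joints: int = 6) -> None:
        self.q = [0.0] * num_joints          # current joint positions (rad)
        self.gravity_compensated = False

    def enable_gravity_compensation(self) -> None:
        self.gravity_compensated = True      # human can now back-drive the arm

    def disable_gravity_compensation(self) -> None:
        self.gravity_compensated = False

    def read_joint_positions(self) -> Sequence[float]:
        return tuple(self.q)

    def command_joint_positions(self, q: Sequence[float]) -> None:
        self.q = list(q)


def record_demonstration(arm: SimulatedArm, samples: int,
                         rate_hz: float = 50.0) -> List[Sequence[float]]:
    """Sample joint positions while the human physically guides the arm.
    (With the simulated stand-in the joints stay fixed; on a real arm the
    human's guidance would change the readings at each sample.)"""
    arm.enable_gravity_compensation()
    trajectory = []
    for _ in range(samples):
        trajectory.append(arm.read_joint_positions())
        time.sleep(1.0 / rate_hz)
    arm.disable_gravity_compensation()
    return trajectory


def replay(arm: SimulatedArm, trajectory: List[Sequence[float]],
           rate_hz: float = 50.0) -> None:
    """Stream the recorded waypoints back to the position controller."""
    for q in trajectory:
        arm.command_joint_positions(q)
        time.sleep(1.0 / rate_hz)


if __name__ == "__main__":
    arm = SimulatedArm()
    demo = record_demonstration(arm, samples=100)   # roughly 2 s of guidance
    replay(arm, demo)
```

In practice, the recorded trajectory would also be smoothed or encoded (e.g., as motion primitives) before replay, which is where the learning approaches of Table 2 take over.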
Table 2. Comparisons of different robot learning approaches.

| Approach | Category | Algorithm | Features |
|---|---|---|---|
| Kinesthetic-Based Learning | Low-level learning | Task space control | Non-complex computation; not suitable for complex tasks |
| One-Shot Learning | Low-level learning | Pairwise mapping | Intuitive logic and non-complex computation; performs tasks in sub-motions with a single instance per learning |
| Multi-Shot Learning | Low-level learning | Iterative kinesthetic motion refinement | Performs tasks in a batch; not suitable for large-scale tasks |
| Vision-Based Learning | High-level learning | Symbolic encoding, structural support vector machine | Intuitive task representation; not suitable for tasks with a complex background |
| Reinforcement Learning-Based Approach | High-level learning | Reinforcement learning algorithm | Reinforces desired robot behaviors and maximizes performance; risk of state overload may degrade results |
| Inverse Reinforcement Learning-Based Approach | High-level learning | Inverse reinforcement learning algorithm | Learns rewards instead of a policy to adapt to dynamic environments; requires repeatedly solving the Markov decision process during learning |
| Skill Trees Construction-Based Approach | High-level learning | Skill trees algorithm | Can segment multiple demonstrations and merge them into one; high computational complexity for complex tasks |
| Syntactics-Based Approach | High-level learning | Probabilistic activity grammars | Learns complicated tasks from a small number of samples; tasks are constrained by specific structures |
| Semantic Networks-Based Learning | High-level learning | Semantic hierarchy algorithm | Intuitive knowledge representation for tasks; not suitable for large-scale tasks |
| Neural Models-Based Learning | High-level learning | Mirror neuron model | Not only produces basic robot motions, but also guides these motions; high computational complexity for complex tasks |
| Procedural Memory-Based Learning | High-level learning | Adaptive resonance model | Performs full task sequences with only partial task information; high computational complexity for complex tasks |
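As an illustration of the reinforcement learning row in Table 2, the following self-contained sketch runs tabular Q-learning on a toy one-dimensional reaching task. The state discretization, reward shaping, and hyperparameters are illustrative assumptions chosen for brevity; they are not drawn from the reviewed papers, which typically operate on far richer state and action spaces.

```python
"""Toy tabular Q-learning sketch for a 1-D reaching task (illustrative only)."""
import random

N_STATES = 10          # discrete positions along a line; the goal is the last cell
ACTIONS = (-1, +1)     # move left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.2
EPISODES = 500

# Q-table: Q[state][action_index]
Q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]


def step(state: int, action: int):
    """Environment dynamics: move, clip to bounds, reward only at the goal."""
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == N_STATES - 1 else -0.01   # small step cost
    done = next_state == N_STATES - 1
    return next_state, reward, done


for _ in range(EPISODES):
    state, done = 0, False
    while not done:
        # Epsilon-greedy action selection over the discrete action set.
        if random.random() < EPSILON:
            a_idx = random.randrange(len(ACTIONS))
        else:
            a_idx = max(range(len(ACTIONS)), key=lambda i: Q[state][i])
        next_state, reward, done = step(state, ACTIONS[a_idx])
        # Standard Q-learning update toward the bootstrapped target.
        target = reward + GAMMA * max(Q[next_state])
        Q[state][a_idx] += ALPHA * (target - Q[state][a_idx])
        state = next_state

greedy_policy = [ACTIONS[max(range(len(ACTIONS)), key=lambda i: Q[s][i])]
                 for s in range(N_STATES)]
print("Greedy action per state:", greedy_policy)
```

The same update rule scales poorly as the state space grows, which is exactly the "risk of state overload" limitation noted in the table; inverse reinforcement learning instead infers the reward from demonstrations, at the cost of repeatedly solving such a Markov decision process inside its learning loop.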
