Article

A Taxonomy in Robot-Assisted Training: Current Trends, Needs and Challenges †

1 Department of Psychiatry, Yale University, New Haven, CT 06520, USA
2 Computer Science and Engineering Department, University of Texas at Arlington, Arlington, TX 76019, USA
3 Institute of Automation, University of Bremen, 28359 Bremen, Germany
4 Institute of Informatics and Telecommunications, NCSR Demokritos, 15310 Agia Paraskevi, Greece
5 Affective and Cognitive Institute, Offenburg University, 77652 Offenburg, Germany
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference, Corfu, Greece, 26–29 June 2018; pp. 208–213.
Technologies 2018, 6(4), 119; https://doi.org/10.3390/technologies6040119
Submission received: 15 November 2018 / Revised: 9 December 2018 / Accepted: 10 December 2018 / Published: 13 December 2018
(This article belongs to the Special Issue The PErvasive Technologies Related to Assistive Environments (PETRA))

Abstract

In this article, we present a taxonomy in Robot-Assisted Training, a growing body of research in Human–Robot Interaction which focuses on how robotic agents and devices can be used to enhance a user’s performance during a cognitive or physical training task. Robot-Assisted Training systems have been successfully deployed to enhance the effects of a training session in various contexts, e.g., rehabilitation systems, educational environments and vocational settings. The proposed taxonomy suggests a set of categories and parameters that can be used to characterize such systems, considering the current research trends and needs for the design, development and evaluation of Robot-Assisted Training systems. To this end, we review recent works and applications in Robot-Assisted Training systems, as well as related taxonomies in Human–Robot Interaction. The goal is to identify and discuss open challenges, highlighting the different aspects of a Robot-Assisted Training system, considering both robot perception and behavior control.

1. Introduction

Robot-Assisted Training (RAT) is a growing body of research in Human–Robot Interaction (HRI) which studies how robots can assist and enhance human skills during a task-centered interaction. RAT systems have a wide range of applications, ranging from physical assistance in post-stroke rehabilitation and robotic prosthetics [1,2] and cognitive training for patients suffering from dementia and Alzheimer’s disease [3,4], to intervention and therapy for children with Autism Spectrum Disorders (ASD) [5,6,7] and Socially Assistive Robotics (SAR) for language learning and children’s education [8,9,10]. As a multidisciplinary research field, it requires expertise in several research areas, including robotics, human–machine interaction, machine learning, data mining and computer vision, as well as expertise in psychology, educational sciences, kinesiology, occupational therapy and others. A main difference from other assistive robotic systems is that Robot-Assisted Training aims to train and enhance user (physical or cognitive) skills through the interaction, rather than assist users in completing a task, e.g., Activities of Daily Living. Despite this large variety of applications, target populations and system requirements, a common goal of Robot-Assisted Training systems is to enhance user performance by providing personalized and targeted assistance towards maximizing training and learning effects. Personalization has the potential to create a tailored and compelling experience that encourages and assists users to perform a given task and meet the training goals.
The motivation and purpose of this research is to identify a common set of parameters (i.e., taxonomy categories) that characterize a Robot-Assisted Training system in order to highlight the current research trends and challenges in this growing research area. Towards this, we present a review of recent works in Robot-Assisted Training systems, focusing on the different application areas, as well as different approaches for robot perception and behavior control (Section 2). While there are existing taxonomies for HRI systems, to our knowledge, no taxonomy has been proposed specifically for Robot-Assisted Training systems. Taking into consideration existing taxonomies in HRI (Section 3), we present our proposed taxonomy for Robot-Assisted Training systems (Section 4), providing a list of examples based on recent works. Finally, we present our concluding remarks, along with a set of open challenges in designing, developing and evaluating a Robot-Assisted Training system (Section 5).

2. A Review of Robot-Assisted Training Systems

Modern advances in robotics and sensor technologies have made possible the use of robots as assistive systems, with numerous applications in healthcare, education, cognitive and physical rehabilitation and personalized skill training for a variety of target populations, e.g., elderly users, children, language learners and students. Depending on the application and the target population, such systems require different underlying architectures in order to efficiently capture the required information from the end-user and their environment and provide a personalized and efficient interaction. In this section, we review recent Robot-Assisted Training systems for both physical and cognitive skill training, focusing on the different application areas, and we discuss the different approaches for modeling such systems, both in terms of perception and behavior control.

2.1. Application Areas

Robot-Assisted Training systems have been successfully deployed as training assistants with applications in healthcare, physical and cognitive rehabilitation, education and personalized skill training. In the field of psychological and cognitive assessment, robots have been proposed as screening, assessment and training tools for cognitive functions and social skills [11,12,13]. A main advantage of such systems is the automation of well-established psychometric tools and tests, with the potential to provide users and experts with high-fidelity and standardized assessments. In a recent work [11], a social robot was deployed as an administration tool that assesses a set of cognitive functions, including working memory, attention, executive function and others, following the guidelines of the Montreal Cognitive Assessment (MoCA) test for elderly patients suffering from dementia. The proposed system was designed as a multimodal interface which accommodates all subtasks of the proposed assessment task, utilizing the different sensors and interfaces of the Pepper robot. The Pepper robot was programmed to administer the specific subtasks, as well as to provide a score at the end of the assessment. The authors conducted a preliminary study to compare the automated (robotic) scores to the paper-and-pencil (standard) scores of the standardized tests. Their results showed a high correlation between the robotic and the standard scores, indicating a promising validity of the proposed approach. In a similar application, a social robot has been proposed as a psychometric tool to assess cognitive functioning via social interaction with a humanoid robot [12]. The robot was deployed to administer a psychometric tool for detecting Mild Cognitive Impairment (MCI) in elderly users.
Robotic systems have been successfully deployed as intervention and therapeutic tools for children with ASD. In a recent work [14], a social robot was used in an in-home long-term study to investigate the effects of robot-assisted training for social communication skills. The autonomous robot was able to participate in a multiparty interaction with the child and the caregiver, assisting the child to complete a set of activities, including emotional storytelling, perspective-taking and sequencing. The robot was able to provide the child with personalized interventions during the interaction, maintaining their engagement. The results from the one-month pilot study indicate that children showed improved social interaction skills during the interaction, as evaluated by the caregivers. Assistive robots have also been deployed to train and enhance physical and cognitive skills for Cerebral Palsy patients [15]. Physical skills include lower or upper limb motor function, such as standing and balancing, locomotion, manipulation, and gross and fine motor function. In order to train children with Cerebral Palsy who have navigation and mobility dysfunctions, a pre-industrial robotic vehicle was designed to enable children to explore their environment and learn how to navigate in the presence of obstacles. The system includes a set of different interfaces (buttons, switches, an inertial head-mounted interface) which can be used by the child to navigate the vehicle under different training scenarios. Results showed that the system adapts efficiently to the particular user’s skills through the different driving modes (cognitive skills) and different interfaces (physical skills).
Social robots have been successfully used as educational tools in classrooms and other educational environments. Towards designing such systems, it is essential to consider how different robot features, e.g., robot appearance, verbal and non-verbal behavior, and tutoring and communication style, affect both cognitive (learning) and affective outcomes [16,17,18]. A recent work presents a SAR system for language learning with children [19], where the system uses a camera to capture and analyze facial expressions and affective features (gaze, smile, engagement, valence, etc.) in order to provide a personalized affective interaction through social verbal behavior (valence and engagement of spoken instructions). The authors evaluated their system with 34 children in preschool classrooms (ages 3–5) for a duration of two months. The evaluation considered both learning outcomes and affective outcomes. In order to assess learning outcomes, the authors conducted a pre- and post-assessment vocabulary test. The results showed that the interaction with the system improved children’s vocabulary. Considering affective outcomes, they estimated user engagement and valence using real-time face detection and analysis algorithms. The results support the authors’ hypothesis that affective personalization increases long-term valence, while maintaining engagement. In another educational setting [20], the authors presented a robot tutor for giving lectures in classrooms, evaluating different teaching styles. More specifically, the robot was designed to display different non-verbal behaviours (pitch, volume, body postures, hand gestures), resulting in different models of warmth and competence, two dimensions related to teaching styles. While the most common evaluation approach is to analyze subjective and objective measures based on the user’s behavior, there are works that investigate the psychological effects (e.g., stress levels) of social robots based on bio-markers, such as urinary and salivary samples [21,22].
Robot-assisted rehabilitation systems have been proposed to assist patients after neurological injury in movement training of the upper and lower limbs [23]. There are commercially available RAT systems for rehabilitation, such as the Lokomat [24] and the WalkBot [25], which are already used by clinics for lower extremity motor rehabilitation. These rehabilitation systems motivate and challenge the patient to reach the task goals in an interactive manner. A study was conducted in [26] to compare conventional physiotherapy (CP) with robotic training on the Lokomat combined with CP in stroke patients. The study separated 107 patients with new cerebral stroke into two groups. The group which followed the robotic training combined with CP showed improvement in some parameters (e.g., Berg Balance Scale, Mini-Mental State Examination, Functional Independence Measure and others) in comparison to the CP group. Non-commercial RAT systems are also available. An interactive RAT system for stroke rehabilitation has been developed by [27], which assists with wrist, elbow, knee and ankle training. The proposed RAT system motivates stroke patients to actively interact with it through a touch screen during task-related training sessions. Muscle activation, measured by EMG (electromyography) sensors, is used as a control signal for the system. In a study, 15 chronic stroke patients trained with the system for 20 one-hour sessions of upper limb training. The evaluation results show improvement in wrist and elbow movement, while muscle spasticity was reduced after the training sessions. RAT systems have also been proposed for training the kinesthetic sense of stroke patients [28]. Kinesthetic sense refers to the sense of position and movement of the limbs and body. The robotic arm supports the forearm against gravity and provides haptic feedback. Visual feedback is also provided to keep the subject engaged. A preliminary study was conducted with seven chronic hemiparetic subjects over three weeks, and the results show effectiveness in enhancing patients’ kinesthetic sense. It was also observed that the level of improvement over time could be affected by the level of impairment. Several overviews of robot-assisted rehabilitation systems exist, focusing on gait training and upper limb rehabilitation [29,30].
Chand and Kim [31] provide a review of the clinical use of robot-assisted therapy in stroke rehabilitation. The improvement of motor function in stroke patients is compared between robot-assisted therapy and conventional physiotherapy in order to evaluate rehabilitation RAT systems. Clinical studies of robot-assisted gait training devices (both end-effector and exoskeleton devices) have shown them to be effective compared to conventional physiotherapy in subacute stroke patients, but it has not been proven that robot-assisted training provides improvements in chronic stroke patients, either in comparison to conventional training or when delivered alone. Moreover, robot-assisted upper limb training with end-effector devices was superior to conventional therapy in patients with subacute stroke. However, there is not yet enough evidence for the effectiveness of exoskeleton devices for upper limb motor function in stroke patients. To summarize, robot-assisted therapy for improving motor function in stroke patients is an addition to conventional physiotherapy and not a replacement. Similar evaluation results were also highlighted by reviews of robot-assisted gait training in neurological patients [32] and of robot-assisted upper limb therapy in stroke patients [33].

2.2. Robot Perception and Behavior Control

Despite the wide range of applications, target populations and system functionality, there are two main components of any Robot-Assisted Training system: (a) the perception module and (b) the behavior control module. The perception module is responsible for collecting and analyzing the information provided by the environment, through the available sensors. Such information is used to model human behavior, understand user intentions and detect task-related events. The behavior control module (or action selection, decision making, planning and acting module) uses this analyzed information in order to select and execute a desirable behavior, by steering the robot’s actuators, aiming to assist the user during the task. While in other applications of robotics, e.g., manipulation arms, industrial robotics, etc., planning and acting are separate and distinct modules, in this paper, we consider planning and acting to be combined into the behavior control module. In this section, we present different approaches for robot perception and behavior control in Robot-Assisted Training systems.
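As a minimal illustration of this two-module decomposition, the sketch below (in Python, with hypothetical interface names such as sensors.read() and robot.execute() that are not taken from any of the reviewed systems) shows a single perceive–decide–act cycle: the perception module turns raw sensor data into a user/task state, and the behavior control module maps that state to a robot behavior.

```python
# Minimal sketch of the perception / behavior-control split described above.
from dataclasses import dataclass
from typing import Any, Dict

@dataclass
class UserState:
    engagement: float            # e.g., estimated from gaze/affect, in [0, 1]
    performance: float           # e.g., task success rate, in [0, 1]
    task_events: Dict[str, Any]  # task-related events (errors, completions, ...)

class PerceptionModule:
    def update(self, sensor_data: Dict[str, Any]) -> UserState:
        # A real system would run speech, vision and signal processing here;
        # this only illustrates the interface.
        return UserState(engagement=0.5, performance=0.5, task_events={})

class BehaviorController:
    def select_action(self, state: UserState) -> str:
        # Placeholder policy: encourage when engagement drops, otherwise
        # raise the challenge when the user performs well.
        if state.engagement < 0.3:
            return "verbal_encouragement"
        if state.performance > 0.8:
            return "increase_task_difficulty"
        return "continue_task"

def training_step(sensors, robot, perception: PerceptionModule, control: BehaviorController):
    """One iteration of the perceive-decide-act cycle of a RAT system."""
    state = perception.update(sensors.read())
    action = control.select_action(state)
    robot.execute(action)
```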
Considering robot perception in the context of a Robot-Assisted Training system, it is essential that such a system can successfully process and analyze task-related modalities and signals, e.g., verbal and non-verbal cues, speech, gestures, motion, physiological and behavioral/social signals, and others. For example, designing a social skill training tool for children with ASD requires an efficient perception module able to measure and analyze children’s behavior during social interactions. To this end, several computational methods have been proposed to measure, analyze and assess social behavior, language functioning and emotion regulation, through speech and natural language processing, affect recognition and engagement estimation [34,35].
Maintaining engagement during a Robot-Assisted Training session is essential for an effective training tool. To this end, there are many approaches to measure and estimate the level of user engagement during a training task, including gaze and head pose estimation, gesture recognition, physiological signals (EEG, heart rate, skin conductance, etc.), and others. In a recent work, a multi-user engagement modeling approach has been proposed which utilizes multisensing data (affective state, gaze position, speech and gestures) in order to estimate different users’ engagement in a multi-user robot-assisted training scenario for cognitive activities [36]. In another work, a multimodal robot perception framework was proposed for non-structured social environments [37]. The authors present their proposed computational approaches for non-invasive and unobtrusive audiovisual scene analysis and human tracking, as well as for physiological user monitoring. They provide a detailed description of the sensors used, the data acquisition and analysis, arguing that their proposed framework can be used for a variety of contexts, including educational and learning environments.
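As a rough illustration of engagement estimation from multiple cues, the sketch below fuses a few normalized signals into a single score; the feature set and weights are placeholder assumptions for illustration, not the models used in [36] or [37].

```python
import numpy as np

def estimate_engagement(gaze_on_task: float, smile: float,
                        speech_activity: float, gesture_activity: float) -> float:
    """Fuse normalized multimodal cues (each in [0, 1]) into one engagement
    score; in practice the weights would be learned from annotated data."""
    features = np.array([gaze_on_task, smile, speech_activity, gesture_activity])
    weights = np.array([0.4, 0.2, 0.2, 0.2])  # arbitrary placeholder weights
    return float(np.clip(weights @ features, 0.0, 1.0))
```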
In the domain of robot-assisted rehabilitation, a survey on multimodal adaptive interfaces for upper limb rehabilitation [38] provides an overview of 3D multimodal adaptive interfaces for robotic rehabilitation, covering multimodal data collection and analysis for human behavior modeling. The multimodal interfaces collect, analyze and monitor bio-mechanical data (e.g., position, velocity and force, obtained from sensors integrated into the robot, wearable sensors on the subject, or sensors in the environment), psychophysiological measurements (e.g., EMG, EEG, heart rate, skin conductance and others), as well as contextual and environmental factors (by analyzing robot and human behaviors through vision sensors). The system analyzes the data in order to determine the patient’s bio-mechanical and psychophysiological state and intention. Visual and haptic augmented sensory feedback is also provided to motivate and keep the patient in the loop. The robotic system features online adaptation of the training exercises based on the patient’s performance.
The goal of a Robot-Assisted Training system is to utilize the perceived multisensing information and act in such a way as to provide the user with an effective training session. Robot behavior, in the context of a Robot-Assisted Training system, can be expressed and realized in several ways, e.g., by adjusting task-related parameters (e.g., task difficulty, duration), verbal or non-verbal behavior, gestures, proxemics and others, depending on the application and the system requirements and functionality. Several approaches have been proposed to model and optimize robot behavior in a Robot-Assisted Training system. The challenge is to simulate authentic or at least appropriate human behavior while avoiding both the uncanny valley and cartoon-like over-simplification [39]. In the domain of ASD intervention, a recent work proposes a behavior control architecture for a robot-assisted intervention system, which enables the robot to intervene in autism therapy with high autonomy, minimizing the workload of the supervising therapist [40]. The proposed architecture is inspired by the ACT-R cognitive architecture, which is a general model of cognition and provides a framework for information processing [41]. The proposed control architecture consists of different modules (intention, memory, task planning, action, social) which are responsible for different aspects of the robot behavior, aiming to facilitate social skills for children with ASD. In a similar application, another work presents a behavior control system for social robots in therapies with a focus on personalization and platform-independence [42]. The authors present the different components of their proposed architecture (user modeling, robot mood, affect and personality, behavior generation), as well as the set of design principles considered during the architecture design process, including multi-layer behavior, personalization and modularity.
Existing and well-established cognitive systems and software, e.g., IBM Watson, have been used as design tools for robot and virtual tutors. For example, IBM Watson, a cognitive question-answering system, has been used to design a virtual tutor which answers common questions of students during an introductory Java programming course [43]. The prototype was evaluated in a field test, and the results indicated that existing cognitive architectures and software can be used to design robotic tutors in educational settings. A recent work presents ProCRob, a software architecture for cognitive robot programming, which enables non-technical experts (teachers, therapists) to design and develop personalized social robot applications, using a visual programming interface, for different human–robot interaction contexts, including therapy for children with ASD and rehabilitation activities for post-stroke patients [44]. Taking into consideration that robots and humans must closely interact and collaborate in the context of robotic rehabilitation systems, a robotic architecture has been proposed to allow non-expert users to be involved in the robot’s operation when needed [45]. The proposed architecture shows how human users can communicate with robotic systems at different levels, considering sensing, planning and acting. Different users can interact with the system through different communication channels and modalities, resulting in a contextually rich environment.

3. Related Taxonomies in HRI

Taking into consideration the large variety of applications, target population, as well as the computational approaches to design and model an effective Robot-Assisted Training system, the focus of this work is to identify a set of common parameters that can be used to define and evaluate a Robot-Assisted Training system. To this end, we review existing taxonomies for Human–Robot Interaction systems, which are used to classify and categorize different design methods and approaches. One of the most generalized and broad classifications for HRI systems, which we mainly considered for our taxonomy, provides a classification framework based on eleven taxonomy categories [46,47]: task type, task criticality, robot morphology, ratio of people to robots, composition of robot teams, level of shared interaction among teams, interaction roles, physical proximity, decision support for operators, time–space taxonomy and autonomy level/amount of interventions from operators. These different variables can be used to define and classify an HRI system, as we show in Table 1.
System requirements can be defined by task type, task criticality and robot morphology. The task type variable defines the task in a high-level representation (e.g., physical rehabilitation task). It is important because it sets the system requirements and the basic design guidelines. Some possible values of this variable are: tutoring session, assembly manufacturing task, rehabilitation exercises, etc. Task criticality is a subjective measure which considers safety issues (e.g., human safety risk) and has three values indicating the level of human life risk: high, medium, low. For example, a heavy industrial robot which physically interacts with humans would be classified as criticality = ‘high’, whereas, for a social robot tutor, criticality would be low. Since robot appearance affects how people interact with it, the robot morphology variable describes the robot appearance type, i.e., anthropomorphic, zoomorphic, and functional.
Depending on the application, there are different interaction types between human and robot members. One parameter under this category is the ratio of people to robots, which simply defines the number of humans and robots participating in the interaction. Another parameter is the type of interaction between human and robot participants, defining the level of shared interaction among (robot and human) teams. The most straightforward example is a single robotic agent that interacts with a single human user. A more complex example is a human operator that sends commands to a team of robots, which has to autonomously coordinate its members to execute the command. Another example is a team of human users that coordinates and sends specific commands to independent robots.
Since human participation is essential for any HRI system, human roles must be well-defined. Scholtz [48] has defined five different roles for a human participant in an interaction with a robot: supervisor, operator, teammate, mechanic/programmer and bystander. Goodrich [49] adds two more: mentor and information consumer. In many applications, where the human acts as an operator or supervisor, an HRI system should provide the user with decision support. The human user needs to monitor, intervene in, and modify robotic behavior when needed. Providing appropriate information to the operator can enhance their decision-making; for example, the robot can visualize information from all available sensors and data streams. Interactive methods can be used to make the system’s decision process transparent to the user, as humans and machines require shared awareness and shared intent during human–robot interactions [50,51]. Another defining factor for HRI is the level of autonomy (or the amount of human intervention). Human operators or supervisors often have the ability to control the robot and modify its behavior. The level of autonomy is defined as the proportion of time that the robot acts in an autonomous manner. In many cases, this value can be adjusted during the interaction, resulting in a progressively autonomous system. Human workload and cognitive capacity are two important factors to take into consideration in order to define the level of autonomy.
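Expressed as a simple ratio (following the convention of the taxonomy cited above, in which autonomy and the amount of human intervention are complementary), the level of autonomy can be written as:

```latex
\[
\text{AUTONOMY} = \frac{t_{\text{autonomous}}}{t_{\text{total}}},
\qquad \text{AUTONOMY} + \text{INTERVENTION} = 1,
\]
```

where t_autonomous is the time the robot operates without human intervention and t_total is the total task time.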
Other parameters defined by this taxonomy are spatiotemporal and describe human–robot interaction in terms of space and time. More specifically, these parameters categorize an HRI system based on whether human and robot share the same space (collocated, non-collocated) and whether they act at the same time or not (synchronous, asynchronous). Moreover, in a collocated HRI system, the robot can be characterized by different proximity behaviors, e.g., avoiding, passing, following, approaching, touching, and/or none. Focusing on specific applications and domains requires a more detailed description. For example, SAR systems have been used for physical rehabilitation [52], where proxemics are defined based on social interaction zones (e.g., social, personal, intimate) used to define the robot’s personality (e.g., introvert, extrovert).
Depending on the application and the system requirements, several taxonomies have been introduced for human–robot interaction systems, such as human–robot collaboration, child–robot interaction, assistive robotics and others. More specifically, Salter [53] presented a taxonomy for child–robot interaction (CRI), based on the control factors for both robots and participants. They used three categories for both robots and human participants: Autonomy, Group and Environment. For example, the robotic autonomy (RA) can be classified as one of the following: autonomous, fixed, combination, Wizard of Oz, and remote-controlled. The participant autonomy (PA) can be: free, natural, comfortable, directed, and controlled, based on how the users are allowed to interact with the robot. The authors have provided a taxonomy rating in relation to participant and robot influences, for all three categories. They used a rating scale from 1 (None) to 9 (High) to describe the level of control of robots and participants.
Other taxonomies focus and elaborate on specific parameters, such as the robot autonomy level. In [54], the authors present a framework for Levels Of Robot Autonomy (LORA) in HRI, identifying parameters that influence and are influenced by the level of robot autonomy. They provide a guideline flow chart to determine robot autonomy and its effects on HRI. Their taxonomy for robot autonomy takes into consideration the level of autonomy during sensing, planning and acting. The guidelines can be used to identify task and environmental influences on the robot autonomy level, measure and categorize the autonomy level, and identify HRI parameters that have an impact on robot autonomy. Focusing on human–robot collaboration systems, another recent taxonomy describes the level of automation specifically for collaborative robots [55]. The Interaction Readiness Model (IRM) classifies a system into one of four levels, based on the level of automation. This model correlates the level of automation with task complexity in a manufacturing environment. The automation level varies from a gated robot mode, where the robot is idle while a human is present, to a fully interactive mode, where humans and robots learn how to solve a synergistic task. This model has been defined based on real industrial needs, towards Industry 4.0 and “robofacturing” [56].

4. A Taxonomy for Robot-Assisted Training Systems

Based on the existing taxonomies and classification frameworks, we propose a set of taxonomy categories which may be considered as guidelines for the design, development and evaluation of a Robot-Assisted Training system, as we show in Figure 1. The categories are: Task Type and Requirements, Interaction Types and Roles, Level of Autonomy and Learning, and Personalization Dimensions. In this section, we present and describe the proposed taxonomy categories, using examples of recent RAT systems and highlighting the relationships between these categories, e.g., the requirements of a rehabilitation system (high task criticality) may require a supervisor to monitor the interaction (interaction roles). In Table 2, we illustrate the proposed taxonomy categories, using recent works in Robot-Assisted Training. A concrete sketch of how a single system could be described along these four categories is given below.
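The following sketch is an illustrative Python record, not part of the original classification material; the field names and the example values (for a hypothetical social robot tutor) are assumptions chosen only to show how the four categories fit together.

```python
# Illustrative record of the four proposed taxonomy categories for one RAT system.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RATTaxonomyEntry:
    # Task Type and Requirements
    task_type: str                 # e.g., "upper-limb rehabilitation", "language tutoring"
    task_criticality: str          # "low" | "medium" | "high"
    robot_morphology: str          # "anthropomorphic" | "zoomorphic" | "functional"
    assistance_type: str           # "physical" | "social" | "mixed"
    # Interaction Types and Roles
    roles: List[str] = field(default_factory=list)       # e.g., ["primary user", "supervisor"]
    # Level of Autonomy and Learning
    autonomy_perception: str = "tele-operated"            # or "decision support" | "autonomous"
    autonomy_control: str = "supervised"                   # or "fully autonomous"
    learning: str = "non-learning"                          # e.g., "interactive RL", "supervised"
    # Personalization Dimensions
    observations: List[str] = field(default_factory=list)          # perceived signals
    control_parameters: List[str] = field(default_factory=list)    # adjusted parameters

# Hypothetical example entry for a social robot tutor
tutor = RATTaxonomyEntry(
    task_type="vocabulary tutoring",
    task_criticality="low",
    robot_morphology="anthropomorphic",
    assistance_type="social",
    roles=["primary user (child)", "robot trainer"],
    autonomy_perception="autonomous",
    autonomy_control="fully autonomous",
    learning="reinforcement learning",
    observations=["engagement", "valence", "task performance"],
    control_parameters=["motivational strategy", "task difficulty"],
)
```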

4.1. Task Type and Requirements

When designing a Robot-Assisted Training system, the task type and requirements are the first parameters to be defined, since they set the tone for the overall design, implementation and evaluation process. The task type and requirements define important parameters such as task criticality and safety issues, target populations, robot morphology, the set of appropriate sensors and the type of assistance (physical, social, mixed). Several parameters, including robot design, task criticality and target population, may be defined taking into account both researchers’ views and clinicians’ and other stakeholders’ recommendations. Task type provides a high-level description of the task and the system requirements. Based on a recent taxonomy [57], types of assistive robots include physically assistive robotics (PAR), socially assistive robotics (SAR), as well as sensory and feedback systems and user interface and control systems.
Physically Assistive Robotics (or Assistive Robotics) is an area that studies how robots can be used to provide assistance to users (e.g., stroke patients) through physical interaction (e.g., robotic rehabilitation). In [58], the authors present an automated system for a rehabilitation robotic (physically assistive and functional) device that guides stroke patients through an upper-limb reaching task. The system uses task-related observations (e.g., task completion time and assistance needed) to estimate user-related metrics (e.g., user fatigue, progress, etc.) and adapts the reaching task parameters to enhance training effects. As part of the system’s requirements, the authors argue that the use of sensors (camera, EMG sensors, etc.) could lead to noisy and untrustworthy observations and system decisions. Due to the high task criticality, a supervisor monitors the system’s decisions and intervenes when needed.
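As a sketch of this observation-to-adaptation idea, the snippet below maps task-related observations to a coarse fatigue estimate and adjusts a reaching-task parameter accordingly; the thresholds, weights and function names are illustrative assumptions, not the model used in [58].

```python
def estimate_fatigue(completion_time_s: float, assistance_level: float,
                     baseline_time_s: float) -> float:
    """Rough fatigue proxy in [0, 1]: slower completion and more robot
    assistance are read as higher fatigue (illustrative heuristic only)."""
    slowdown = max(0.0, (completion_time_s - baseline_time_s) / baseline_time_s)
    return min(1.0, 0.5 * slowdown + 0.5 * assistance_level)

def adapt_reaching_task(target_distance_cm: float, fatigue: float) -> float:
    """Shorten the reaching distance when fatigue is high, lengthen it when
    the user seems to cope well, bounded to a plausible range."""
    if fatigue > 0.7:
        target_distance_cm *= 0.9
    elif fatigue < 0.3:
        target_distance_cm *= 1.1
    return min(40.0, max(10.0, target_distance_cm))
```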
Social robots can provide supportive behavior, feedback and recommendations, as well as attention acquisition, to assist users in several applications, e.g., through gestures to enhance memory training using a memory game [60]. For the purposes of the experiments, the authors deployed Furhat, an anthropomorphic robotic head [63], which has been developed for several HRI applications. Another example demonstrates how socially assistive robots can be deployed for physical rehabilitation with elderly users [52], investigating different robot behavior parameters (human–robot personality matching, robot proxemics, etc.), as well as their relationship to user performance and engagement. Social assistance can also improve compliance and performance for physical exercising in child–robot interaction [61]. In sensory and feedback systems, the input channels may include different modalities from several sensors in order to capture information on the state of the robot, as well as user-related information, e.g., the user’s performance and affective state. Based on the system requirements, the robot’s behavior is expressed through the output channels, which are responsible for robot movement, emotion generation, task parameter adjustment, etc. User interfaces and control systems are used as input/output communication channels, e.g., to visualize sensor information for a human supervisor who can control the system, if needed.

4.2. Interaction Types and Roles

Similar to previous taxonomies, we define the human–robot interaction types and roles. These parameters define how the human–robot team is formed and communicates, as well as the interaction role of each participant. Depending on the task type and requirements, there are different interaction types between human and robot members, considering the levels of interaction. For example, as mentioned before, the most straightforward interaction includes a single interaction channel between one robot and one human user. A more complicated interaction could involve a robot which communicates with multiple human users, with the same or different interaction roles (e.g., students, teacher–student). In the case of a collaborative training scenario, a social robot can be used as a moderator for a collaborative training game between two human users to improve their collaboration skills [64]. In Figure 2, we show the different interaction types of humans and robots in a Robot-Assisted Training system.
Previous taxonomies have focused on the interaction roles that human users can have in the interaction [46,47]. Depending on the application area and context, e.g., education, healthcare, industry, there are different interaction roles between human and robot members. For example, focusing on educational robots, the different interaction roles of the robot can be: presenter, teaching assistant, teacher, peer, or tutor [16]. In this work, we focus on the different categories of interaction roles that both human and robot members can have in a Robot-Assisted Training session. Considering the existing taxonomies, as well as recent works and applications in this area, the categories of interaction roles in a Robot-Assisted Training system are: primary user, trainer, supervisor and teammate.
A primary user is the end user who participates actively in the interaction (e.g., patient, student, trainee). While the most frequent case is that this is a human user, there are works that focus on training a secondary user (therapist) [65], or on simulating the primary user with a robot in order to evaluate the system from the supervisor’s perspective [66]. Moreover, as a primary user, a robot can act as a peer-learner. Peer-learning (or peer-training) refers to students (or employees, trainees) learning with and from each other (classmates, colleagues, trainees). The role of a (human or robot) trainer is to instruct, assist and guide the primary user(s) during the training session (e.g., educational robotic tutors). For example, therapeutic robots can guide patients during rehabilitation sessions by demonstrating the rehabilitation exercises that need to be performed [26,42,61]. A supervisor monitors the training session (i.e., through sensors or interfaces) to capture essential information about the training session (e.g., task parameters, user performance and condition, etc.) and intervenes, if needed, to ensure an efficient and safe interaction. Team coordination and collaboration can be used as training tasks; thus, a (human or robot) teammate who interacts with the user can be an important participant in a training session. For example, robotic teammates can be used to simulate in real time the cooperation between industrial robotic manipulators and humans executing simple manufacturing tasks [67].

4.3. Level of Autonomy and Learning

An essential aspect of a Robot-Assisted Training system is the level of robot autonomy, which defines whether the robot acts autonomously or under the guidance or control of a human user. Specific system requirements and parameters may require the presence of a human expert who acts as a supervisor to ensure safety and efficiency during the training session. Influenced by LORA [54], the level of autonomy in a RAT system varies from tele-operated to fully autonomous systems, including supervised autonomy and decision support systems. Autonomy can be defined both in terms of perception and behavior control. Autonomy in perception defines to what extent the robot perceives its environment with or without the supervision or intervention of a human user. For example, a decision support system can visualize the input modalities (speech, facial expressions, etc.) through a Graphical User Interface (GUI), allowing a human operator to provide the system with the required perception information (spoken utterances, engagement level, user intention, etc.), especially during system prototype evaluations, where automated processing modules (speech recognition, emotion classification) have not yet been developed. Autonomy in behavior control relates to the amount of human intervention during the decision making and execution process of the system, based on the information provided by the perception module. For example, in the upper-limb reaching task example [58], the system suggests an action to the supervisor through a GUI, and the supervisor agrees or disagrees with the system’s decision, resulting in a supervisory control system. The Wizard-of-Oz (WoZ) paradigm, where the robot executes the behaviors decided by a human supervisor, has been extensively used for RAT applications. Despite its effectiveness, a main limitation is the amount of expert workload and attention required to ensure safe robot behavior. To this end, recent approaches enable the robot to learn through human (expert) input and progressively act in an autonomous manner.
Considering the above, we compare two systems in terms of robot autonomy in both perception and behavior control, illustrating their differences (Figure 3). In a user study for attention acquisition through gestures [60], a social robot was deployed to grab and guide the user’s attention during a memory card game, through a combination of verbal and non-verbal behavior (e.g., speech, gaze, gestures). The system needs to perceive the user’s behavior and attentive and affective state, and make decisions based on these. In order to facilitate robot perception, a human supervisor provides the robot with the user state, based on gaze and speech behavior, by observing the interaction (teleoperation). The robot selects a combination of gestures to deploy based on the human-provided user state and the game state (number of remaining cards). While robot perception is fully dependent on the human supervisor, the robot uses a Reinforcement Learning policy to decide whether the participant needs support and to determine the combination of gestures to grab the user’s attention, in a fully autonomous manner. In another study, the authors illustrate the SPARC framework (Supervised Progressively Autonomous Robot Competencies) with applications in Robot-Assisted Therapy [68]. The robot is designed to assist children with ASD during a set of turn-taking and imitation training tasks. The proposed system includes a sensing and interpretation module which analyzes multimodal information for child behaviour classification. At the perception level, the system acts as a decision support system; it visualizes captured data, as well as extracted information and features related to the child’s level of engagement, motivation and performance, based on the perceived modalities (gaze, motion, speech). A human expert uses this information to annotate child behaviors, with the possibility of training classifiers on this annotated data using Machine Learning approaches (e.g., Neural Networks and Support Vector Machines) for automated child behavior classification. Based on the perceived information, the robot uses a cognitive controller which maps therapist-specified child behaviours to appropriate therapist-specified robot actions. The system proposes actions to the supervisor, who can passively accept an action or actively correct it. The system acts in a semi-autonomous way (supervised control) in order to reduce the supervisor’s workload.
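This “propose, then accept or correct” pattern can be sketched as follows; the interface names (controller.propose_action, supervisor.review, robot.execute) are hypothetical and do not reproduce the SPARC implementation.

```python
def supervised_step(controller, supervisor, robot, perceived_state):
    """The controller proposes an action; the supervisor may passively accept
    it (no input within the timeout) or actively override it before execution."""
    proposed = controller.propose_action(perceived_state)
    decision = supervisor.review(proposed, timeout_s=2.0)   # None = passive accept
    action = decision if decision is not None else proposed
    supervisor.log(perceived_state, action)  # corrections can later train the controller
    robot.execute(action)
    return action
```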
Robotic agents can be either learning or non-learning agents, or they can switch between different levels of learning, depending on different parameters (e.g., uncertainty, performance). There are several types of learning agents, e.g., offline or online, model-based or model-free, supervised or unsupervised, the selection of which depends on the application and the system requirements. There is a variety of learning approaches which can be applied to interactive systems and agents, including Machine Learning, Active Learning and Reinforcement Learning [69]. For example, Active Learning is a research area which studies when an agent should ask for human input (e.g., a correct label or action) in order to improve system performance. Interactive Machine Learning and Interactive Reinforcement Learning are two promising approaches to integrate such human expertise and feedback in the learning mechanism of an interactive system (Human-in-the-Loop). Following such interactive learning approaches, intelligent WoZ interfaces can enable an assistive robot to integrate expert knowledge and guidance and switch from tele-operation to a progressively autonomous mode, decreasing expert workload and effort. Similar to autonomy levels, learning can occur both in perception and behavior control. Neural Networks and Support Vector Machines have been used to learn robot behavior from human expert input in a RAT session [66]. The presented system simulates a RAT session, where a human supervisor monitors a robot acting as the child and a robot acting as the instructor during a card classification task, using a WoZ interface. The neural network is trained using human input as training labels. Their user study results indicate that learning agents can decrease expert workload, as they learn how to provide human-like decisions. The robot shifts from a tele-operated agent (WoZ) to a supervised autonomous robot, demonstrating that progressive robot autonomy can reduce supervisor workload while maintaining the quality of the interaction. In another work, a robotic device has been used as a haptic interface for upper-limb rehabilitation [59]. The robotic device acts as a joystick for the user, who plays a rehabilitation game. The system follows a dynamic player modeling approach using Reinforcement Learning in order to learn a user model and adjust the game difficulty in real time.
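A minimal sketch of this progressive-autonomy idea is shown below: the agent logs supervisor (WoZ) decisions as training labels, trains a classifier on them, and switches to autonomous operation once its cross-validated agreement with the supervisor is high enough. The class, thresholds and classifier choice are illustrative assumptions, not the implementation of [66].

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

class ProgressiveAutonomyAgent:
    def __init__(self, agreement_threshold: float = 0.9):
        self.states, self.labels = [], []
        self.model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)
        self.threshold = agreement_threshold
        self.autonomous = False

    def observe(self, state_features: np.ndarray, supervisor_action: int):
        """Record a Wizard-of-Oz decision as a training example."""
        self.states.append(state_features)
        self.labels.append(supervisor_action)

    def maybe_promote(self):
        """Estimate agreement with the supervisor via cross-validation and
        switch to autonomous mode if it exceeds the threshold."""
        if len(self.labels) < 30:
            return
        X, y = np.vstack(self.states), np.array(self.labels)
        agreement = cross_val_score(self.model, X, y, cv=3).mean()
        if agreement >= self.threshold:
            self.model.fit(X, y)
            self.autonomous = True

    def act(self, state_features: np.ndarray, supervisor=None) -> int:
        if self.autonomous:
            return int(self.model.predict(state_features.reshape(1, -1))[0])
        return supervisor.choose_action(state_features)  # tele-operated fallback
```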

4.4. Personalization Dimensions

Personalization plays an integral role in designing an efficient Robot-Assisted Training system. According to Bloom’s famous 2 sigma problem [70], one-to-one tutoring produces better learning effects than group (conventional) tutoring. Parameters that affect efficiency include the training material (e.g., exercise regimen) and teacher behavior (e.g., supportive, challenging, etc.). Such parameters can be adjusted in order to maximize tutoring/training effects for each individual. Considering system parameters defined by other taxonomies and recent work in RAT systems, we define the personalization dimensions. Personalization dimensions refer to (a) the set of observations that the system perceives and considers in order to adjust its behavior and (b) the set of control parameters that are adjusted to achieve personalization. Considering the two basic modules of a Robot-Assisted Training system, the observations would be the output of the perception module and the control parameters would be defined based on the behavior control module. Another crucial parameter is the evaluation metric (or objective function) based on which the effects of personalization will be maximized. This is highly dependent on the system requirements and can relate to affective, learning/cognitive and physical gains. Since such parameters are defined based on the system’s requirements, as well as the design approach, and can be defined either at a high level (e.g., robot supportive behavior) or at a low level (e.g., robot movement), one of the research objectives regarding personalization is how to use the observed parameters in order to learn the control parameters. Interactive Reinforcement Learning (IRL) techniques have been used to facilitate robot learning from human-generated feedback, varying from button clicks and vocal commands to haptic feedback during human–robot object handover [71]. A robot that learns behavior by analyzing and utilizing emotional and social user signals could facilitate real-time personalization in human–robot interaction in the wild. For example, the affective language tutor [19] uses facial expression analysis and feature extraction software in order to estimate the child’s affective state (engagement and valence). The system combines these estimated values into a reward signal and learns to adjust its behavior by selecting appropriate motivational strategies (using verbal and non-verbal actions), based on the child’s current state (affect and performance).
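The sketch below illustrates this observation-to-control mapping: affect estimates are folded into a scalar reward and a simple bandit-style learner picks among motivational strategies. The strategies, weights and learning rule are illustrative assumptions, not the method of [19].

```python
import random
from collections import defaultdict

STRATEGIES = ["praise", "encourage_retry", "hint", "celebrate_success"]

def affective_reward(engagement: float, valence: float,
                     w_e: float = 0.5, w_v: float = 0.5) -> float:
    """Scalar reward in [0, 1] from two affect estimates in [0, 1]."""
    return w_e * engagement + w_v * valence

class StrategySelector:
    """Epsilon-greedy selection over motivational strategies, updated with
    the affective reward observed after each robot action."""
    def __init__(self, epsilon: float = 0.1, lr: float = 0.2):
        self.values = defaultdict(float)   # running value estimate per strategy
        self.epsilon, self.lr = epsilon, lr

    def select(self) -> str:
        if random.random() < self.epsilon:
            return random.choice(STRATEGIES)       # explore
        return max(STRATEGIES, key=lambda s: self.values[s])  # exploit

    def update(self, strategy: str, reward: float):
        self.values[strategy] += self.lr * (reward - self.values[strategy])
```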

5. Conclusions and Open Challenges

In this paper, we presented a taxonomy in Robot-Assisted Training, considering related taxonomies in Human–Robot Interaction, as well as current trends and needs in this growing body of research. The motivation of this work is to highlight research objectives related to the design and implementation of a Robot-Assisted Training system. We presented a review of recent works, aiming to delineate the different aspects and trends to be taken into consideration when designing such a system, focusing on personalization. In this section, we discuss the open challenges and research objectives for the design, development and evaluation of a Robot-Assisted Training system. Considering a set of open challenges regarding physically robot-assisted training systems [72], as well as social aspects of human–robot interaction and the evaluation of social robots [73], we identify the following research objectives and needs:
  • Perceiving and understanding user needs, focusing on techniques and approaches to enable an intuitive and non-intrusive interaction between the user and the system, maximizing the user’s compliance, based on the different user types and roles and their participation in the personalization procedure,
  • Improvement of system self-awareness, in terms of perception, interpretation, reasoning, decision making, and learning. The system must be able to self-assess its functionality on different levels in order to prevent inappropriate interactions, e.g., notifying when the involvement of a human supervisor is required,
  • Improvement of system adaptation and personalization based on the perceived behavioral, cognitive and emotional states of the user(s), the task needs and the context of the interaction. The system must be able to know when and how to personalize its behavior with respect to appropriate evaluation metrics.
Robot-Assisted Training systems usually operate in contextually rich environments that can provide the system with valuable information to achieve personalization. A research question that arises is how to identify the optimal (e.g., minimum) set of modalities and sensors, as well as accurate perception and behavior control components, to ensure an efficient, intuitive and non-intrusive interaction. Different interaction types and member roles result in different types of human feedback that can be captured by different sensors/interfaces, including cameras, microphones, EEG sensors, GUIs, joysticks and many others. Such systems should be able to utilize the different communication channels from different types of human users, who can provide the system with anticipatory guidance and performance feedback towards personalization [74]. Research works investigate how informative user interfaces and interactive learning methods can increase user engagement while interacting with a learning agent [75]. Interactive Reinforcement Learning can utilize human-generated feedback (e.g., facial expressions, emotion, GUI input, etc.) in order to facilitate personalization in the wild [19].
The selection of personalization parameters must be in line with the system design and requirements, in order to achieve positive effects on both learning and affective gains. Based on the time span or the frequency of the interaction, there is a distinction between long-term and short-term adaptation and personalization systems. Short-term adaptation systems are able to personalize their behavior within a few interaction steps in order to provide an effective interaction, given a limited amount of data [76]. Long-term adaptation systems require more or longer interactions in order to personalize and improve future interactions. Long-term adaptation may be more efficient for frequent interactions (e.g., tutoring, rehabilitation), considering the learning benefits of long-term interactions in tutoring [77]. Moreover, the selection of the personalization parameters (observations and control) should serve the purpose and goal of the system. Affective robots are deployed to personalize the interaction by selecting and generating appropriate emotions, aiming at a more natural and effective interaction [73], considering both learning and affective gains, e.g., task performance and engagement.
A recent systematic review discusses adaptivity and personalization in human–robot interaction [78]. The paper presents usability studies with adaptive social robots interacting with users in the health care and therapy domains, in education, in work and home environments, and in public spaces. While most of the studies proposed adaptation of the HRI based on user performance and user profile, few studies investigate adaptation based on user characteristics (e.g., gender, age, level of experience). Another important issue is to understand the influence of the adaptation on the user’s engagement and performance [78,79]. A comparison between emotion-based, memory-based and game-based adaptations for sustaining long-term social engagement between a robot and children at a school is presented in [79]. The initial results showed that emotion-based adaptation was the most effective, followed by memory-based adaptation, while game adaptation did not achieve long-term social engagement.
An important challenge for Robot-Assisted Training concerns human safety during physical (and/or social) interaction [80]. While well-established (ISO) standards for direct HRI have been proposed for assistive and collaborative robots in industry [81], there are, to the best of our knowledge, no established safety standards for robot-assisted training systems. Moreover, the user’s psychological safety should also be considered, as discussed in the survey of methods for safe HRI by Lasota and co-authors [82]. Negative psychological effects of HRI on the user, such as discomfort, stress or fatigue, should be recognized so that the robot can take corrective actions. For example, if the user feels discomfort, the robot could slow down, keep a greater distance, or pause/stop until the user’s psychological state improves. Such affective personalization can be considered as a sequence of human emotion recognition, appropriate robotic behavior selection and expression of robotic emotions. This loop of perception, regulation and expression is called the affective loop. Research works focus on developing cognitive models to provide robots with social aspects and capacities, in order to personalize affective artificial behaviors in cooperative human–robot scenarios through emotion detection, regulation and expression [83,84,85].
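The slow-down/keep-distance/pause behavior described above can be sketched as one step of such an affective loop; the robot methods and thresholds below are hypothetical placeholders, not an established safety mechanism.

```python
def affective_loop_step(robot, detected_emotion: str, discomfort: float):
    """Map a detected user state to a safety-oriented behavior adjustment
    and a matching robot emotional expression (illustrative rules only)."""
    if discomfort > 0.8:
        robot.pause()                        # stop until the user's state improves
        robot.express("concerned")
    elif discomfort > 0.5:
        robot.slow_down(factor=0.5)          # reduce speed
        robot.increase_distance(meters=0.5)  # keep a greater distance
        robot.express("calm")
    elif detected_emotion == "frustrated":
        robot.express("encouraging")         # supportive verbal/non-verbal behavior
    else:
        robot.express("neutral")
```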
There is also a need to investigate the negative effects of personalization, considering both learning (training) and affective gains. Personalization may result in a more enjoyable short-term interaction, considering also the novelty effect [86]. However, it has been shown that, over the long term, human learners may not prefer personalized and consistent robot behavior, but rather a more varied one [62]. Considering these findings, there is a need to investigate different personalization mechanisms, both in terms of perception and control, from the aspects of user acceptance and preferences, learning and affective gains, and other evaluation metrics for Human–Robot Interaction systems [87]. Personalization is a complex computational problem that requires the robot to dynamically assess, adapt and leverage a model of the user’s abilities and needs [88], and it can benefit from literature reviews in several areas, including but not limited to Intelligent Tutoring Systems [89], Student Modeling [90], Affective Computing [91], Cyber-Physical Systems [92] and Machine Learning for Interactive Systems and Robots [69].

Author Contributions

K.T. conceived of and designed the study and wrote the paper. M.K. was responsible for the materials related to physically assistive robotics and rehabilitation. V.K., F.M. and O.K. supervised the study design and contributed to the manuscript preparation. O.K. provided his expertise and materials related to social robots.

Funding

This research was funded by the National Science Foundation (NSF) under award numbers CHS 1565328 and PFI 1719031.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gandolfi, M.; Geroin, C.; Waldner, A.; Maddalena, I.; Dimitrova, E.; Picelli, A.; Smania, N.; Tomelleri, C. Feasibility and safety of early lower limb robot-assisted training in sub-acute stroke patients: A pilot study. Eur. J. Phys. Rehabil. Med. 2017, 53, 870–882. [Google Scholar] [PubMed]
  2. Stroppa, F.; Loconsole, C.; Marcheschi, S.; Frisoli, A. A Robot-Assisted Neuro-Rehabilitation System for Post-Stroke Patients’ Motor Skill Evaluation with ALEx Exoskeleton. In Converging Clinical and Engineering Research on Neurorehabilitation II; Springer: Berlin/Heidelberg, Germany, 2017; pp. 501–505. [Google Scholar]
  3. Wada, K.; Shibata, T.; Musha, T.; Kimura, S. Robot therapy for elders affected by dementia. IEEE Eng. Med. Biol. Mag. 2008, 27, 53–60. [Google Scholar] [CrossRef]
  4. Jøranson, N.; Pedersen, I.; Rokstad, A.M.M.; Ihlebæk, C. Effects on symptoms of agitation and depression in persons with dementia participating in robot-assisted activity: A cluster-randomized controlled trial. J. Am. Med. Direct. Assoc. 2015, 16, 867–873. [Google Scholar] [CrossRef] [PubMed]
  5. Scassellati, B.; Admoni, H.; Matarić, M. Robots for use in autism research. Ann. Rev. Biomed. Eng. 2012, 14, 275–294. [Google Scholar] [CrossRef] [PubMed]
  6. Bharatharaj, J.; Huang, L.; Mohan, R.E.; Al-Jumaily, A.; Krägeloh, C. Robot-assisted therapy for learning and social interaction of children with autism spectrum disorder. Robotics 2017, 6, 4. [Google Scholar] [CrossRef]
  7. Lee, J.; Takehashi, H.; Nagai, C.; Obinata, G.; Stefanov, D. Which robot features can stimulate better responses from children with autism in robot-assisted therapy? Int. J. Adv. Robot. Syst. 2012, 9, 72. [Google Scholar] [CrossRef]
  8. Lee, S.; Noh, H.; Lee, J.; Lee, K.; Lee, G.G.; Sagong, S.; Kim, M. On the effectiveness of robot-assisted language learning. ReCALL 2011, 23, 25–58. [Google Scholar] [CrossRef]
  9. Han, J. Emerging technologies: Robot assisted language learning. Lang. Learn. Technol. 2012, 16, 1–9. [Google Scholar]
  10. Clabaugh, C.; Ragusa, G.; Sha, F.; Matarić, M. Designing a socially assistive robot for personalized number concepts learning in preschool children. In Proceedings of the 2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob), Providence, RI, USA, 13–16 August 2015; pp. 314–319. [Google Scholar]
  11. Varrasi, S.; Di Nuovo, S.; Conti, D.; Di Nuovo, A. A social robot for cognitive assessment. In Proceedings of the Companion of the 2018 ACM/IEEE International Conference on Human–Robot Interaction, Chicago, IL, USA, 5–8 March 2018; pp. 269–270. [Google Scholar]
  12. Varrasi, S.; Di Nuovo, S.; Conti, D.; Di Nuovo, A. Social robots as psychometric tools for cognitive assessment: A pilot test. In Human Friendly Robotics; Springer: Berlin/Heidelberg, Germany, 2019; pp. 99–112. [Google Scholar]
  13. Korn, O.; Tso, L.; Papagrigoriou, C.; Sowoidnich, Y.; Konrad, R.; Schmidt, A. Computerized assessment of the skills of impaired and elderly workers: A tool survey and comparative study. In Proceedings of the 9th ACM International Conference on PErvasive Technologies Related to Assistive Environments, Corfu, Greece, 29 June–1 July 2016; p. 50. [Google Scholar]
  14. Scassellati, B.; Boccanfuso, L.; Huang, C.M.; Mademtzi, M.; Qin, M.; Salomons, N.; Ventola, P.; Shic, F. Improving social skills in children with ASD using a long-term, in-home social robot. Sci. Robot. 2018, 3, eaat7544. [Google Scholar] [CrossRef]
  15. Raya, R.; Rocon, E.; Urendes, E.; Velasco, M.A.; Clemotte, A.; Ceres, R. Assistive robots for physical and cognitive rehabilitation in cerebral palsy. In Intelligent Assistive Robots; Springer: Berlin/Heidelberg, Germany, 2015; pp. 133–156. [Google Scholar]
  16. Belpaeme, T.; Kennedy, J.; Ramachandran, A.; Scassellati, B.; Tanaka, F. Social robots for education: A review. Sci. Robot. 2018, 3, eaat5954. [Google Scholar] [CrossRef]
  17. Konijn, E.; Hoorn, J. Humanoid Robot Tutors Times Tables: Does Robot’s Social Behavior Match Pupils’ Educational Ability? IEEE: Piscataway, NJ, USA, 2017. [Google Scholar]
  18. Saerbeck, M.; Schut, T.; Bartneck, C.; Janse, M.D. Expressive robots in education: Varying the degree of social supportive behavior of a robotic tutor. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Atlanta, GA, USA, 10–15 April 2010; pp. 1613–1622. [Google Scholar]
  19. Gordon, G.; Spaulding, S.; Westlund, J.K.; Lee, J.J.; Plummer, L.; Martinez, M.; Das, M.; Breazeal, C. Affective Personalization of a Social Robot Tutor for Children’s Second Language Skills. In Proceedings of the AAAI, Phoenix, AZ, USA, 12–17 February 2016; pp. 3951–3957. [Google Scholar]
  20. Peters, R.; Broekens, J.; Neerincx, M.A. Robots educate in style: The effect of context and non-verbal behaviour on children’s perceptions of warmth and competence. In Proceedings of the 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Lisbon, Portugal, 28 August–1 September 2017; pp. 449–455. [Google Scholar]
  21. Wada, K.; Shibata, T. Robot therapy in a care house-results of case studies. In Proceedings of the 15th IEEE International Symposium on Robot and Human Interactive Communication, Hatfield, UK, 6–8 September 2006; pp. 581–586. [Google Scholar]
  22. Bharatharaj, J.; Huang, L.; Al-Jumaily, A.; Elara, M.R.; Krägeloh, C. Investigating the Effects of Robot-Assisted Therapy among Children with Autism Spectrum Disorder using Bio-markers. In IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2017; Volume 234, p. 012017. [Google Scholar]
  23. Marchal-Crespo, L.; Reinkensmeyer, D.J. Review of control strategies for robotic movement training after neurologic injury. J. Neuroeng. Rehabil. 2009, 6, 20. [Google Scholar] [CrossRef] [PubMed]
  24. Riener, R.; Lünenburger, L.; Maier, I.C.; Colombo, G.; Dietz, V. Locomotor training in subjects with sensori-motor deficits: An overview of the robotic gait orthosis lokomat. J. Healthc. Eng. 2010, 1, 197–216. [Google Scholar] [CrossRef]
  25. Jung, J.H.; Lee, N.G.; You, J.H.; Lee, D.C. Validity and feasibility of intelligent Walkbot system. Electron. Lett. 2009, 45, 1016–1017. [Google Scholar] [CrossRef]
  26. Dundar, U.; Toktas, H.; Solak, O.; Ulasli, A.; Eroglu, S. A comparative study of conventional physiotherapy versus robotic training combined with physiotherapy in patients with stroke. Top. Stroke Rehabil. 2014, 21, 453–461. [Google Scholar] [CrossRef] [PubMed]
  27. Tong, R.K.; Leung, W.W.; Hu, X.; Song, R. Interactive robot-assisted training system using continuous EMG signals for stroke rehabilitation. In Proceedings of the 3rd International Convention on Rehabilitation Engineering & Assistive Technology, Singapore, 22–26 April 2009; p. 20. [Google Scholar]
  28. De Santis, D.; Zenzeri, J.; Casadio, M.; Masia, L.; Riva, A.; Morasso, P.; Squeri, V. Robot-assisted training of the kinesthetic sense: Enhancing proprioception after stroke. Front. Hum. Neurosci. 2015, 8, 1037. [Google Scholar] [CrossRef]
  29. Morone, G.; Paolucci, S.; Cherubini, A.; De Angelis, D.; Venturiero, V.; Coiro, P.; Iosa, M. Robot-assisted gait training for stroke patients: Current state of the art and perspectives of robotics. Neuropsychiatr. Dis. Treat. 2017, 13, 1303. [Google Scholar] [CrossRef] [PubMed]
  30. Maciejasz, P.; Eschweiler, J.; Gerlach-Hahn, K.; Jansen-Troy, A.; Leonhardt, S. A survey on robotic devices for upper limb rehabilitation. J. Neuroeng. Rehabil. 2014, 11, 3. [Google Scholar] [CrossRef] [PubMed]
  31. Chang, W.H.; Kim, Y.H. Robot-assisted therapy in stroke rehabilitation. J. Stroke 2013, 15, 174. [Google Scholar] [CrossRef] [PubMed]
  32. Schwartz, I.; Meiner, Z. Robotic-assisted gait training in neurological patients: Who may benefit? Ann. Biomed. Eng. 2015, 43, 1260–1269. [Google Scholar] [CrossRef] [PubMed]
  33. Veerbeek, J.M.; Langbroek-Amersfoort, A.C.; Van Wegen, E.E.; Meskers, C.G.; Kwakkel, G. Effects of robot-assisted therapy for the upper limb after stroke: A systematic review and meta-analysis. Neurorehabil. Neural Repair 2017, 31, 107–121. [Google Scholar] [CrossRef]
  34. Chetouani, M.; Boucenna, S.; Chaby, L.; Plaza, M.; Cohen, D.; Chaby, L.; Luherne-du Boullay, V.; Chetouani, M.; Plaza, M.; Templier, L.; et al. Social Signal Processing and Socially Assistive Robotics in Developmental Disorders. In Social Signal Processing; Cambridge University Press: Cambridge, UK, 2017; p. 389. [Google Scholar]
  35. Spaulding, S.; Chen, H.; Ali, S.; Kulinski, M.; Breazeal, C. A Social Robot System for Modeling Children’s Word Pronunciation: Socially Interactive Agents Track. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, Stockholm, Sweden, 10–15 July 2018; pp. 1658–1666. [Google Scholar]
  36. Fan, J.; Bian, D.; Zheng, Z.; Beuscher, L.; Newhouse, P.A.; Mion, L.C.; Sarkar, N. A Robotic Coach Architecture for Elder Care (ROCARE) based on multi-user engagement models. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 1153–1163. [Google Scholar] [CrossRef] [PubMed]
  37. Cominelli, L.; Carbonaro, N.; Mazzei, D.; Garofalo, R.; Tognetti, A.; De Rossi, D. A multimodal perception framework for users emotional state assessment in social robotics. Futur. Internet 2017, 9, 42. [Google Scholar] [CrossRef]
  38. Simonetti, D.; Zollo, L.; Papaleo, E.; Carpino, G.; Guglielmelli, E. Multimodal adaptive interfaces for 3D robot-mediated upper limb neuro-rehabilitation: An overview of bio-cooperative systems. Robot. Auton. Syst. 2016, 85, 62–72. [Google Scholar] [CrossRef]
  39. Korn, O.; Stamm, L.; Moeckl, G. Designing Authentic Emotions for Non-Human Characters: A Study Evaluating Virtual Affective Behavior. In Proceedings of the 2017 Conference on Designing Interactive Systems, Edinburgh, UK, 10–14 June 2017; pp. 477–487. [Google Scholar]
  40. Feng, Y.; Jia, Q.; Wei, W. A Control Architecture of Robot-Assisted Intervention for Children with Autism Spectrum Disorders. J. Robot. 2018, 2018. [Google Scholar] [CrossRef]
  41. Trafton, J.G.; Hiatt, L.M.; Harrison, A.M.; Tamborello, F.P., II; Khemlani, S.S.; Schultz, A.C. ACT-R/E: An embodied cognitive architecture for human–robot interaction. J. Hum.-Robot Interact. 2013, 2, 30–55. [Google Scholar] [CrossRef]
  42. Cao, H.L.; Van de Perre, G.; Kennedy, J.; Senft, E.; Esteban, P.G.; De Beir, A.; Simut, R.; Belpaeme, T.; Lefeber, D.; Vanderborght, B. A personalized and platform-independent behavior control system for social robots in therapy: Development and applications. IEEE Trans. Cognit. Dev. Syst. 2018. [Google Scholar] [CrossRef]
  43. Müller, S.; Bergande, B.; Brune, P. Robot Tutoring: On the Feasibility of Using Cognitive Systems as Tutors in Introductory Programming Education: A Teaching Experiment. In Proceedings of the 3rd European Conference of Software Engineering Education, Bavaria, Germany, 14–15 June 2018; pp. 45–49. [Google Scholar]
  44. Ziafati, P.; Lera, F.; Costa, A.; Nazarikhorram, A.; Van Der Torre, L.; Nazarikhor, A. ProCRob Architecture for Personalized Social Robotics. Presented at the Robots for Learning Workshop @ HRI 2017, Vienna, Austria, 6–9 March 2017; Available online: https://r4l.epfl.ch/wp-content/uploads/2018/09/R4L_HRI_2017_paper_9.pdf (accessed on 9 December 2018).
  45. Galindo, C.; Gonzalez, J.; Fernández-Madrigal, J. An architecture for cognitive human–robot integration. Application to rehabilitation robotics. In Proceedings of the 2005 IEEE International Conference on Mechatronics and Automation, Niagara Falls, ON, Canada, 29 July–1 August 2005; Volume 1, pp. 329–334. [Google Scholar]
  46. Yanco, H.A.; Drury, J.L. A taxonomy for human–robot interaction. In Proceedings of the AAAI Fall Symposium on Human–Robot Interaction, North Falmouth, MA, USA, 15–17 November 2002; pp. 111–119. [Google Scholar]
  47. Yanco, H.A.; Drury, J. Classifying human–robot interaction: An updated taxonomy. In Proceedings of the 2004 IEEE International Conference on Systems, Man and Cybernetics, The Hague, The Netherlands, 10–13 October 2004; Volume 3, pp. 2841–2846. [Google Scholar]
  48. Scholtz, J. Theory and evaluation of human–robot interactions. In Proceedings of the 36th Annual Hawaii International Conference on System Sciences, Big Island, HI, USA, 6–9 January 2003. [Google Scholar]
  49. Goodrich, M.A.; Schultz, A.C. Human-robot interaction: A survey. Found. Trends Hum.-Comput. Interact. 2007, 1, 203–275. [Google Scholar] [CrossRef]
  50. Lyons, J.B.; Havig, P.R. Transparency in a human–machine context: Approaches for fostering shared awareness/intent. In International Conference on Virtual, Augmented and Mixed Reality; Springer: Berlin/Heidelberg, Germany, 2014; pp. 181–190. [Google Scholar]
  51. Drury, J.L.; Scholtz, J.; Yanco, H.A. Awareness in human–robot interactions. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Washington, DC, USA, 8 October 2003; Volume 1, pp. 912–918. [Google Scholar]
  52. Tapus, A.; Ţăpuş, C.; Matarić, M.J. User-robot personality matching and assistive robot behavior adaptation for post-stroke rehabilitation therapy. Intell. Serv. Robot. 2008, 1, 169–183. [Google Scholar] [CrossRef]
  53. Salter, T.; Michaud, F.; Larouche, H. How wild is wild? A taxonomy to characterize the ‘wildness’ of child–robot interaction. Int. J. Soc. Robot. 2010, 2, 405–415. [Google Scholar] [CrossRef]
  54. Beer, J.; Fisk, A.D.; Rogers, W.A. Toward a framework for levels of robot autonomy in human–robot interaction. J. Hum.-Robot Interact. 2014, 3, 74. [Google Scholar] [CrossRef] [PubMed]
  55. Christiernin, L.G. How to Describe Interaction with a Collaborative Robot. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human–Robot Interaction, Vienna, Austria, 6–9 March 2017; pp. 93–94. [Google Scholar]
  56. Hermann, M.; Pentek, T.; Otto, B. Design principles for industrie 4.0 scenarios. In Proceedings of the 2016 49th Hawaii International Conference on System Sciences (HICSS), Koloa, HI, USA, 5–8 January 2016; pp. 3928–3937. [Google Scholar]
  57. Zollo, L.; Wada, K.; Van der Loos, H.M. Special issue on assistive robotics [from the guest editors]. IEEE Robot. Autom. Mag. 2013, 20, 16–19. [Google Scholar] [CrossRef]
  58. Kan, P.; Huq, R.; Hoey, J.; Goetschalckx, R.; Mihailidis, A. The development of an adaptive upper-limb stroke rehabilitation robotic system. J. Neuroeng. Rehabil. 2011, 8, 33. [Google Scholar] [CrossRef] [PubMed]
  59. Andrade, K.; Fernandes, G.; Caurin, G.; Siqueira, A.; Romero, R.; Pereira, R. Dynamic player modelling in serious games applied to rehabilitation robotics. In Proceedings of the SBR-LARS Robotics Symposium and Robocontrol, Sao Carlos, Brazil, 18–23 October 2014; pp. 211–216. [Google Scholar]
  60. Hemminghaus, J.; Kopp, S. Towards adaptive social behavior generation for assistive robots using reinforcement learning. In Proceedings of the 2017 ACM/IEEE International Conference on Human–Robot Interaction, Vienna, Austria, 6–9 March 2017; pp. 332–340. [Google Scholar]
  61. Magyar, G.; Vircikova, M. Socially-Assistive Emotional Robot that Learns from the Wizard During the Interaction for Preventing Low Back Pain in Children. In International Conference on Social Robotics; Springer: Berlin/Heidelberg, Germany, 2015; pp. 411–420. [Google Scholar]
  62. Gao, Y.; Barendregt, W.; Obaid, M.; Castellano, G. When robot personalisation does not help: Insights from a robot-supported learning study. In Proceedings of the Robot and Human Interactive Communication, Tai’an, China, 27 August–1 September 2018. [Google Scholar]
  63. Al Moubayed, S.; Beskow, J.; Skantze, G.; Granström, B. Furhat: A back-projected human-like robot head for multiparty human–machine interaction. In Cognitive Behavioural Systems; Springer: Berlin/Heidelberg, Germany, 2012; pp. 114–130. [Google Scholar]
  64. Short, E.; Mataric, M.J. Robot moderation of a collaborative game: Towards socially assistive robotics in group interactions. In Proceedings of the 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Lisbon, Portugal, 28 August–1 September 2017; pp. 385–390. [Google Scholar]
  65. Alsos, O.A.; Svanæs, D. Designing for the secondary user experience. In IFIP Conference on Human-Computer Interaction; Springer: Berlin/Heidelberg, Germany, 2011; pp. 84–91. [Google Scholar]
  66. Senft, E.; Baxter, P.; Kennedy, J.; Belpaeme, T. Sparc: Supervised progressively autonomous robot competencies. In International Conference on Social Robotics; Springer: Berlin/Heidelberg, Germany, 2015; pp. 603–612. [Google Scholar]
  67. Matsas, E.; Vosniakos, G.C. Design of a virtual reality training system for human–robot collaboration in manufacturing tasks. Int. J. Interact. Des. Manuf. 2017, 11, 139–153. [Google Scholar] [CrossRef]
  68. Esteban, P.G.; Baxter, P.; Belpaeme, T.; Billing, E.; Cai, H.; Cao, H.L.; Coeckelbergh, M.; Costescu, C.; David, D.; De Beir, A.; et al. How to build a supervised autonomous system for robot-enhanced therapy for children with autism spectrum disorder. Paladyn J. Behav. Robot. 2017, 8, 18–38. [Google Scholar] [CrossRef]
  69. Cuayáhuitl, H.; van Otterlo, M.; Dethlefs, N.; Frommberger, L. Machine learning for interactive systems and robots: a brief introduction. In Proceedings of the 2nd Workshop on Machine Learning for Interactive Systems: Bridging the Gap Between Perception, Action and Communication, Beijing, China, 3–4 August 2013; pp. 19–28. [Google Scholar]
  70. Bloom, B.S. The 2 sigma problem: The search for methods of group instruction as effective as one-to-one tutoring. Educ. Res. 1984, 13, 4–16. [Google Scholar] [CrossRef]
  71. Kupcsik, A.; Hsu, D.; Lee, W.S. Learning dynamic robot-to-human object handover from human feedback. In Robotics Research; Springer: Berlin/Heidelberg, Germany, 2018; pp. 161–176. [Google Scholar]
  72. Yakub, F.; Khudzari, A.Z.M.; Mori, Y. Recent trends for practical rehabilitation robotics, current challenges and the future. Int. J. Rehabil. Res. 2014, 37, 9–21. [Google Scholar] [CrossRef] [PubMed]
  73. Korn, O.; Bieber, G.; Fron, C. Perspectives on Social Robots: From the Historic Background to an Experts’ View on Future Developments. In Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference, Corfu, Greece, 26–29 June 2018; pp. 186–193. [Google Scholar]
  74. Odette, K.; Rivera, J.; Phillips, E.K.; Jentsch, F. Robot Self-Assessment and Expression: A Learning Framework. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting; SAGE Publications Sage CA: Los Angeles, CA, USA, 2017; Volume 61, pp. 1188–1192. [Google Scholar]
  75. Li, G.; Hung, H.; Whiteson, S.; Knox, W.B. Using informative behavior to increase engagement in the tamer framework. In Proceedings of the 2013 International Conference on Autonomous Agents and Multi-Agent Systems, St. Paul, MN, USA, 6–10 May 2013; pp. 909–916. [Google Scholar]
  76. Zehfroosh, A.; Kokkoni, E.; Tanner, H.G.; Heinz, J. Learning models of Human–Robot Interaction from small data. In Proceedings of the 2017 25th Mediterranean Conference on Control and Automation (MED), Valletta, Malta, 3–6 July 2017; Volume 2017, p. 223. [Google Scholar]
  77. Spaulding, S. Personalized Robot Tutors that Learn from Multimodal Data. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, Stockholm, Sweden, 10–15 July 2018; pp. 1781–1783. [Google Scholar]
  78. Ahmad, M.; Mubin, O.; Orlando, J. A systematic review of adaptivity in human–robot interaction. Multimodal Technol. Interact. 2017, 1, 14. [Google Scholar] [CrossRef]
  79. Ahmad, M.I.; Mubin, O.; Orlando, J. Adaptive social robot for sustaining social engagement during long-term children–robot interaction. Int. J. Hum.–Comput. Interact. 2017, 33, 943–962. [Google Scholar] [CrossRef]
  80. Alami, R.; Albu-Schäffer, A.; Bicchi, A.; Bischoff, R.; Chatila, R.; De Luca, A.; De Santis, A.; Giralt, G.; Guiochet, J.; Hirzinger, G.; et al. Safe and dependable physical human–robot interaction in anthropic domains: State of the art and challenges. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; pp. 1–16. [Google Scholar]
  81. Bicchi, A.; Peshkin, M.A.; Colgate, J.E. Safety for physical human–robot interaction. In Springer Handbook of Robotics; Springer: Berlin/Heidelberg, Germany, 2008; pp. 1335–1348. [Google Scholar]
  82. Lasota, P.A.; Fong, T.; Shah, J.A. A survey of methods for safe human–robot interaction. Found. Trends Robot. 2017, 5, 261–349. [Google Scholar] [CrossRef]
  83. Vircikova, M.; Magyar, G.; Sincak, P. The Affective Loop: A Tool for Autonomous and Adaptive Emotional Human–Robot Interaction. In Robot Intelligence Technology and Applications 3; Springer: Berlin/Heidelberg, Germany, 2015; pp. 247–254. [Google Scholar]
  84. Castillo, J.C.; Castro-González, Á.; Alonso-Martín, F.; Fernández-Caballero, A.; Salichs, M.Á. Emotion detection and regulation from personal assistant robot in smart environment. In Personal Assistants: Emerging Computational Technologies; Springer: Berlin/Heidelberg, Germany, 2018; pp. 179–195. [Google Scholar]
  85. Liu, X.; Xie, L.; Liu, A.; Li, D. Cognitive emotional regulation model in human–robot interaction. Discret. Dyn. Nat. Soc. 2015, 2015, 829387. [Google Scholar] [CrossRef]
  86. Kennedy, J.; Baxter, P.; Belpaeme, T. Can less be more? The impact of robot social behaviour on human learning. In Proceedings of the 4th International Symposium on New Frontiers in HRI at AISB, Canterbury, UK, 21–22 April 2015. [Google Scholar]
  87. Steinfeld, A.; Fong, T.; Kaber, D.; Lewis, M.; Scholtz, J.; Schultz, A.; Goodrich, M. Common metrics for human–robot interaction. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human–Robot Interaction, Salt Lake City, UT, USA, 2–3 March 2006; pp. 33–40. [Google Scholar]
  88. Canny, J. Interactive Machine Learning; University of California: Berkeley, CA, USA, 2014. [Google Scholar]
  89. Kulik, J.A.; Fletcher, J. Effectiveness of intelligent tutoring systems: A meta-analytic review. Rev. Educ. Res. 2016, 86, 42–78. [Google Scholar] [CrossRef]
  90. Chrysafiadi, K.; Virvou, M. Student modeling approaches: A literature review for the last decade. Expert Syst. Appl. 2013, 40, 4715–4729. [Google Scholar] [CrossRef]
  91. Su, S.H.; Lin, H.C.K.; Wang, C.H.; Huang, Z.C. Multi-Modal Affective Computing Technology Design the Interaction between Computers and Human of Intelligent Tutoring Systems. Int. J. Online Pedagog. Course Des. 2016, 6, 13–28. [Google Scholar] [CrossRef]
  92. Ray, L. Cyber-Physical Systems. In Handbook of Research on Applied Cybernetics and Systems Science; IGI Global: Hershey, PA, USA, 2017; p. 335. [Google Scholar]
Figure 1. Taxonomy categories for robot-assisted training.
Figure 2. Examples of interaction types in robot-assisted training (A–E) (inspired by [47]).
Figure 3. Levels of Robot Autonomy during perception and behavior control.
Table 1. An updated taxonomy in Human–Robot Interaction [47].

| System Requirements | Interaction Type | Human Roles | Spatio-Temporal |
|---|---|---|---|
| Task Type | Ratio of People to Robots | Human Interaction Roles | Time–Space Taxonomy |
| Task Criticality | Level of Shared Interaction Among Teams | Decision Support for Operators | Human–Robot Physical Proximity |
| Robot Morphology | Composition of Robot Teams | Level of Autonomy–Amount of Intervention | |
Table 2. Considering our proposed taxonomy, we classify recent works in Robot-Assisted Training based on (a) Task Type and Requirements, (b) Interaction Types and Roles, (c) Level of Autonomy and Learning and (d) Personalization Dimensions.

| Task Type and Requirements | Interaction Types and Roles | Level of Autonomy and Learning | Personalization Dimensions |
|---|---|---|---|
| Socially Assistive Robotics (SAR) for Language Learning with Children [19] | A social robot acts as an affective tutor during a language learning game | The robot acts fully autonomously and learns using Reinforcement Learning | The robot adjusts its engagement and valence during verbal instructions |
| SAR-based system for Post-Stroke Rehabilitation for Elderly Patients [52] | The robot therapist monitors, assists and encourages users during rehabilitation | The robot acts fully autonomously and personalizes its policy using Policy Gradient RL | The robot adjusts its therapy style, speed and proxemics based on user progress |
| Robot-Based Rehabilitation using Serious Games and a Haptic Device [59] | The user performs a reaching task using a robotic haptic device | The robot acts autonomously and learns through RL | The system adjusts the game parameters to challenge the user |
| Adaptive Upper-Limb Rehabilitation using a Haptic Device [58] | The robotic arm trains the user in a reaching task; a supervisor monitors the system's decisions | The robot acts autonomously based on a given policy (no learning); an expert can alter the action | The system decides the reaching target, the level of resistance, or when the task should stop |
| Social Robot for Attention Acquisition during a Memory Game [60] | The robot acts as a tutor who guides the user's attention during a memory game, in a WoZ setup | The system acts semi-autonomously; a supervisor provides RL with the user state to select gestures | The robot learns the appropriate gesture combination to increase user attention |
| Physical Exercising for Children using a Social Robot and Wizard-of-Oz Interfaces [61] | The robot shows the exercises to be performed; a supervisor can control the robot | The system acts in a semi-autonomous manner; the robot learns from human input | The robot personalizes the exercise regimen according to exercise performance and compliance |
| EMG-Controlled Interactive Robot for Upper Limb Training [27] | The robot guides the user during the training tasks through assistive torques and a Graphical User Interface | The system records and analyzes EMG signals and generates a control signal to provide assistive forces | The system adjusts the assistive forces based on real-time continuous EMG to improve task performance |
| Social Robotic Tutor for Grid-based Puzzle Solving [62] | A social robot provides supportive behavior to help the user solve the puzzle | The robot acts fully autonomously and uses an RL framework to learn personalized policies | The robot observes user progress and selects a supportive behavior to maximize performance and engagement |

