Review

Survey on Physiological Computing in Human–Robot Collaboration

1 Intel Labs, Hillsboro, OR 97124, USA
2 Electrical and Microelectronic Engineering, Rochester Institute of Technology, Rochester, NY 14623, USA
* Author to whom correspondence should be addressed.
Machines 2023, 11(5), 536; https://doi.org/10.3390/machines11050536
Submission received: 4 April 2023 / Revised: 28 April 2023 / Accepted: 7 May 2023 / Published: 9 May 2023
(This article belongs to the Special Issue Control and Mechanical System Engineering)

Abstract

Human–robot collaboration has emerged as a prominent research topic in recent years. To enhance collaboration and ensure safety between humans and robots, researchers employ a variety of methods. One such method is physiological computing, which aims to estimate a human’s psycho-physiological state by measuring physiological signals such as galvanic skin response (GSR), electrocardiogram (ECG), heart rate variability (HRV), and electroencephalogram (EEG). This information is then used to provide feedback to the robot. In this paper, we present the latest state-of-the-art methods in physiological computing for human–robot collaboration. Our goal is to provide a comprehensive guide for new researchers to the commonly used physiological signals, data collection methods, and data labeling techniques. Additionally, we have categorized and tabulated relevant research to further aid in understanding this area of study.

1. Introduction

The proliferation of robots is rapidly transforming the way we live, work, and interact with technology. The International Federation of Robotics (IFR) reports that the number of robots worldwide increased by 11% between 2019 and 2020, with collaborative robots being a significant contributor to this growth [1]. This expansion can be attributed to two primary factors: cost and capabilities. The price of robots per unit has decreased by 50% over the last five years, while their abilities have been significantly enhanced through advances in machine learning, enabling them to perform more sophisticated tasks [2]. Consequently, robots have become more intelligent and capable, and companies are increasingly utilizing them in production environments.
As robots have become more integrated into the workforce, safety measures have become a top priority. The International Organization for Standardization (ISO) recognizes the need for safety guidelines, as outlined in ISO/TS 15066, which specifies that human–robot collisions should not cause pain or injury [3]. Consequently, safety protocols have become a central focus in industrial applications, with physical and electronic safeguards being implemented to ensure worker safety. Despite these precautions, new strategies and approaches are necessary for human–robot collaboration, where fewer standards exist to implement complex protection schemes. To address this, a new category of robots known as “collaborative robots” or “cobots” has emerged in the market. These robots, such as those from Universal Robots, the KUKA LBR iiwa, and Rethink Robotics’ Sawyer, are intentionally designed to work in direct cooperation with humans in a defined workspace, reducing the severity and risks of injury due to collisions. In-depth research by Kumar et al. [4] provides insight into human–robot collaboration in terms of the awareness, intelligence, and compliance of the systems, highlighting the need for a more comprehensive understanding of the safety protocols required for human–robot interactions. As such, it is vital to continue developing safety standards and guidelines that enable humans and robots to work together seamlessly and safely.
This paper presents a comprehensive review of state-of-the-art methods in physiological computing for human–robot collaboration. The primary objective of this review is to provide new researchers with a deep understanding of physiological computing, its various categories, widely used physiological signals, data collection methods, and data labeling methods. To achieve this, we have conducted an extensive literature survey and classified the relevant research based on the questionnaire approach and physiological signal used.
Our contributions in this paper are as follows:
  • Presenting a comprehensive overview of the latest research in physiological computing.
  • Classifying the research based on the questionnaire approach and physiological signals used.
  • Providing an in-depth analysis of widely used physiological signals and their characteristics.
  • Discussing common data collection and data labeling techniques.
Moreover, the paper highlights the challenges associated with physiological computing and provides insight into recent advancements in this field.
The paper is structured as follows: Section 2 describes the survey methodology. Section 3 provides an introduction to physiological computing and its categories. Section 4 presents a detailed analysis of commonly used physiological signals and their applications. Section 5 discusses various data collection methods used in physiological computing. Section 6 provides an overview of common data labeling techniques. Section 7 summarizes related works. Finally, Section 8 provides the discussion.

2. Methodology

This survey paper aims to provide a comprehensive overview of the physiological signals used in human–robot collaboration, along with the data collection methods and data labeling techniques used in related studies. To achieve the objective of the study, the following research methodology was followed:
  • Literature: A systematic review of the literature was conducted to identify all of the relevant research studies in the field of human–robot collaboration. The search was conducted using various academic databases. The search terms used included “human–robot collaboration”, “physiological signals”, “data collection methods”, and “labeling techniques”.
  • Categorization: All of the identified articles were screened based on their relevance to the study objective. The articles that met the inclusion criteria were further analyzed, and data were extracted related to physiological signals, stimuli types, data collection methods, labeling techniques, algorithms, and their applications. The extracted data were then categorized based on the identified criteria.
  • Limitations: The study has some limitations; only articles to which we had access were included, which may limit the comprehensiveness of the study. Additionally, the study focused only on physiological signals and did not consider other modalities used in human–robot collaboration.

3. Physiological Computing

Physiological computing is a multi-disciplinary field that aims to use human physiological signals to simulate and understand the psycho-physiological state of individuals. This involves recognizing, interpreting, and processing physiological signals to dynamically adjust and adapt to the user’s psycho-physiological state. Areas of study within physiological computing include human–computer interaction, brain–computer interaction, and affective computing, as noted by Fairclough [5]. The ultimate goal of physiological computing is to enable programs and robots to modify their behavior in response to a user’s psycho-physiological state, which will allow them to interact in a socially intelligent and acceptable manner. A visual representation of physiological computing is shown in Figure 1.
Physiological computing has affected many fields, such as human–computer interaction, E-learning, automotive, healthcare, neuroscience, marketing, and robotics [5]. In E-learning, for example, physiological computing can help a tutor modify the presentation style based on students’ affective states such as interest, boredom, and frustration. In the automotive field, it can be used to alert surrounding vehicles when a driver is not paying attention to the road. In social robotics, physiological computing can help robotic pets enhance realism.
The NSF research statement for Cyber-Human Systems (2018–2019) calls for work to “improve the intelligence of increasingly autonomous systems that require varying levels of supervisory control by the human; this includes a more symbiotic relationship between human and machine through the development of systems that can sense and learn the human’s cognitive and physical states while possessing the ability to sense, learn, and adapt in their environments” [6]. Thus, to create a safe environment, a robot should sense the human’s cognitive and physical state, which will help build trust between humans and robots.
In a human–robot interaction setup, a change in a robot’s motion can affect human behavior. Experiments such as [7,8] revealed similar results. The literature review in [9] highlights the use of the ‘psycho-physiological’ method to evaluate human response and behavior during human–robot interactions. In our opinion, the continuous monitoring of physiological signals during human–robot tasks is the first step in quantifying human trust in automation. The inferences from these signals and incorporating them in real-time to adapt robot motion can enhance human–robot interactions. Such a system capable of ‘physiological computing’ will result in a closed human-in-the-loop (also known as a ‘biocybernetics loop’ [10]) system where both humans and robots in an HRC setup are monitored and information is shared. This approach could result in better communication, which would improve trust in automation and increase productivity.
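To make the biocybernetics loop concrete, the sketch below shows one possible shape of such a closed-loop cycle in Python. The `Robot` class and `estimate_stress` function are hypothetical placeholders rather than APIs from any cited work; a real system would wrap actual sensor drivers, a state estimator, and a robot controller.

```python
import random
import time

class Robot:
    """Hypothetical stand-in for a real robot interface."""
    def set_speed_scale(self, scale: float) -> None:
        print(f"robot speed scale -> {scale:.2f}")

def estimate_stress() -> float:
    """Placeholder estimator; a real system would derive this score
    from features of the current GSR/ECG/EEG window."""
    return random.random()

robot = Robot()
for _ in range(5):                                  # short demo of the loop
    stress = estimate_stress()                      # sense + infer human state
    robot.set_speed_scale(max(0.2, 1.0 - stress))   # adapt robot behavior
    time.sleep(0.1)                                 # ~10 Hz loop rate
```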
According to Fairclough, physiological computing can be divided into two categories. The first category is a system of sensory-motor function, which is related to extending body schema [10]. In this category, the subject is aware that he/she is in control. For example, an electromyography (EMG) sensor placed on the forearm can be used as an alternative method for typing [11], or it can control a prosthetic arm. Similarly, brain–computer interaction (BCI) provides an alternative way to type via an electroencephalogram (EEG) headset.
The second category concerns creating a representation of the user’s physiological state by monitoring and responding to data originating from psycho-physiological activity in the central nervous system [10]. This category is also known as biocybernetics adaptation. Biocybernetics adaptation needs to detect spontaneous changes in the user’s physiological state so that the system can respond to them. It has many applications, such as emotion detection, anxiety detection, and mental workload estimation. For example, in flight simulation, the amount of information displayed can be filtered based on mental workload to reduce the load, and a computer game can change its difficulty level based on the player’s anxiety level.

4. Physiological Signals

The representation of a human’s psycho-physiological state requires a complex analysis of physiological signals. Hence, to estimate the psycho-physiological state, a variety of physiological signals are used, such as the electrocardiogram (ECG), photoplethysmography (PPG), galvanic skin response (GSR) (also known as electrodermal activity (EDA)), electroencephalography (EEG), electromyography (EMG), respiration rate (RSP), and pupil dilation.
In addition to the signals commonly used in human–robot collaboration (HRC), several other signals have potential usage in HRC research. These include arterial blood pressure (ABP); blood volume pulse (BVP); phonocardiography (PCG) signals; the electrooculogram (EOG); functional near-infrared spectroscopy (fNIRS); and biomechanical/biokinetic signals such as acceleration, angular velocity, angle near joints, and force, which are generated by human movements. However, these signals are either not very common or difficult to collect. Therefore, in this survey paper, we have chosen to focus solely on the commonly used signals in HRC.

4.1. Electroencephalogram (EEG)

The EEG is a method to measure the electrical activity of neurons in the brain. The EEG signal is complex; thus, extensive research on it is presently conducted in neuroscience and psychology. The EEG signal can be collected using invasive or non-invasive methods. The non-invasive method is widely used to record the brain’s activity, while the invasive method is becoming more available and shows promise [12].
Researchers categorize EEG signals based on frequency band: the delta band (1–4 Hz), theta band (4–8 Hz), alpha band (8–12 Hz), beta band (13–25 Hz), and gamma band (>25 Hz). The delta band has been used in several studies, such as those on sleep [13]. The theta band is related to brain processes, mostly mental workload [14,15]. It has been shown that alpha waves are associated with relaxed wakefulness [16], and beta waves are associated with focused attention or anxious thinking [17]. Finally, it is not yet clear what gamma-band oscillations reflect.
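As a concrete illustration of these band definitions, the following minimal sketch estimates the average power in each band from a single EEG channel using Welch’s method; the sampling rate, the random placeholder signal, and the 45 Hz gamma cap are assumptions made for the example.

```python
import numpy as np
from scipy.signal import welch

fs = 256                                  # assumed sampling rate (Hz)
eeg = np.random.randn(30 * fs)            # placeholder for 30 s of one channel

# Band edges follow the definitions above; gamma is capped at 45 Hz here
# purely for illustration.
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (13, 25), "gamma": (25, 45)}

freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)      # power spectral density
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    band_power = np.trapz(psd[mask], freqs[mask])   # integrate PSD over band
    print(f"{name}: {band_power:.4f}")
```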
It can be argued that wearing an EEG cap while working can be uncomfortable. However, it must be noted that in industry, workers are already required to wear a helmet or a hat. With the advent of IoT systems and wireless communication, the size of EEG sensors is shrinking; hence, they can be embedded into headphones [18].

4.2. Electrocardiogram (ECG)

The ECG is a widely used non-invasive method for recording the electrical activity of the heart, first developed by Dr. Willem Einthoven in 1902 [19]. By measuring the electrical signals generated by the heart, the ECG can provide valuable information about the heart’s function and detect diseases such as atrial fibrillation, ischemia, and arrhythmia.
The ECG signal is characterized by a repeating pattern of heartbeats, with the QRS complex being the most prominent and recognizable feature. Typically lasting between 0.06 and 0.10 s in adults [20], the QRS complex is used to determine heart rate (HR), which is the number of R peaks observed in one minute. While other methods exist to measure HR, the ECG is the most accurate and reliable as it directly reflects the heart’s electrical activity.
Another valuable metric extracted from the ECG is heart rate variability (HRV), which measures the time elapsed between consecutive R peaks. HRV has been shown to be useful in detecting heart disease, such as atrial fibrillation (AF), and can also be affected by an individual’s state, such as exercise or rest. Sudden changes in HRV may indicate a change in emotional state or heart disease. Recent research has shown a positive correlation between HRV and emotion, indicating that it may have potential applications in emotional detection [21].
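As a brief illustration of the metrics discussed above, the sketch below computes heart rate and two widely used HRV statistics (SDNN and RMSSD) from R-peak times; the toy peak times stand in for the output of a real QRS detector.

```python
import numpy as np

# Toy R-peak times in seconds; in practice these come from a QRS detector.
r_times = np.array([0.00, 0.82, 1.66, 2.46, 3.31, 4.10])
rr = np.diff(r_times)                     # RR (inter-beat) intervals in s

hr = 60.0 / rr.mean()                     # heart rate in beats per minute
sdnn = rr.std(ddof=1) * 1000.0            # SDNN: std of RR intervals (ms)
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2)) * 1000.0  # RMSSD (ms)

print(f"HR = {hr:.1f} bpm, SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms")
```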

4.3. Photoplethysmography (PPG)

The photoplethysmogram (PPG) is a low-cost and convenient method that provides an alternative to the traditional ECG approach for measuring heart rate and heart rate variability. Using a light source and a photodetector placed on the skin, PPG technology measures the amount of reflected light, which corresponds to the volumetric variations of blood circulation. Unlike the ECG signal, which uses the QRS complex, the PPG signal relies on the inter-beat interval (IBI) for heart rate and HRV calculations.
The PPG method offers several advantages over the ECG approach. ECG electrode placement is complicated and prone to motion noise, whereas PPG can be measured easily and non-invasively on the skin. Lu et al. demonstrated the feasibility of using PPG for heart rate and heart rate variability measurements, indicating its potential as an alternative to the ECG method [22].
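The sketch below illustrates the IBI-based approach on a clean synthetic pulse wave; real PPG traces would need filtering and artifact rejection before peak detection.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 100                                  # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t)         # synthetic ~72 bpm pulse wave

# A 0.4 s refractory period ensures one detected peak per heartbeat.
peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))
ibi = np.diff(peaks) / fs                 # inter-beat intervals in seconds
print(f"mean IBI = {ibi.mean():.3f} s -> HR = {60 / ibi.mean():.1f} bpm")
```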

4.4. Galvanic Skin Response/Electrodermal Activity

Galvanic skin response (GSR), or electrodermal activity (EDA), is a physiological signal obtained by measuring skin conductivity. The conductivity of the skin changes whenever the sweat glands are triggered. This phenomenon is an unconscious process controlled by the sympathetic division of the autonomic nervous system. The sympathetic division is activated by emotional moments (fear, happiness, or joy) or undesirable situations. It then triggers the sweat glands, heart, lungs, and other organs; as a result, the hands become sweaty, the heart rate increases, and the breathing rate rises.
The GSR signal is used in various fields such as physiological research, consumer neuroscience, marketing, media, and usability testing. GSR measurement is a non-invasive method that uses two electrodes placed on the palms of the hands, the fingers, or the soles of the feet, which are the locations commonly used for measuring emotional arousal. The GSR signal has two components: the tonic level, known as the skin conductance level (SCL), and the phasic response, known as the skin conductance response (SCR). The tonic level changes and varies slowly, and it may also differ between individuals depending on skin dryness and hydration; thus, it does not provide valuable information about the sympathetic division. Unlike the tonic level, the phasic response changes and alternates faster, and these changes are directly related to reactions of the sympathetic division of the autonomic nervous system. The phasic response is sensitive to emotional arousal and mental load; thus, it provides essential information about the physiological state.
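One simple, admittedly crude, way to separate the two components is low-pass filtering, as sketched below on a synthetic trace; dedicated toolboxes such as NeuroKit2 implement more principled EDA decompositions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 20                                            # assumed EDA rate (Hz)
t = np.arange(0, 60, 1 / fs)
# Synthetic trace: slow drift (tonic) plus periodic fast responses (phasic).
eda = 2.0 + 0.01 * t + 0.3 * np.exp(-((t - 5) % 20) / 2.0)

b, a = butter(2, 0.05 / (fs / 2), btype="low")     # ~0.05 Hz low-pass
tonic = filtfilt(b, a, eda)                        # slowly varying SCL
phasic = eda - tonic                               # fast SCR component
print(f"SCL mean = {tonic.mean():.2f}, peak SCR = {phasic.max():.2f}")
```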
The GSR signal provides valuable information about the strength of arousal, whether it is decreasing or increasing. However, positive and negative events (moments) may have similar GSR signal outputs. Therefore, the GSR signal should be used with another sensor such as EEG, ECG, EMG, or pupil dilation.

4.5. Pupil Dilation/Gaze Tracking

Human visual attention can be detected by eye movement, and this information is critical for neuromarketing and psychological study [23]. Gaze tracking provides information about where the subject is looking. This information also can be used in other fields such as robotics. For example, if the robot knows a co-worker is not paying attention in a critical operation, the robot can take an action to notify the co-worker.
The eye not only provides information about where we are looking but also about pupil dilation, i.e., the change in pupil diameter. Although pupil dilation can be caused by ambient or other light-intensity changes in the environment, it has been shown that the pupil can also dilate due to emotional change [24].

4.6. Electromyography (EMG)

The EMG is a non-invasive method that measures electrical activity generated by a muscle. The EMG has been used in biocybernetics loop applications as a control input for a system or robot [25]. Another example of EMG is using facial muscles to provide information about sudden emotional changes or reactions [26,27].

4.7. Physiological Signal Features

Deep learning algorithms can learn from raw data, but they require large datasets, which can be difficult to obtain. Compared to deep learning models, classical machine learning (ML) algorithms usually require hand-crafted features for training. Hence, features need to be extracted from the signals, using methods in, for example, the time or frequency domain. There are open-source libraries that simplify feature extraction tasks, such as the time series feature extraction library (TSFEL), tsfresh, and NeuroKit2. These libraries offer a range of automated feature extraction capabilities, with TSFEL extracting up to 60 different features [28,29,30]. In Table 1, commonly used features based on signal types are listed. Additionally, the discrete wavelet transform can also be used as a feature extractor; Al-Qerem et al. have extensively studied the use of the wavelet transform for EEG signals [31].
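As a minimal example of library-driven feature extraction, the sketch below uses TSFEL to extract temporal-domain features from a placeholder signal window; the configuration options shown should be checked against the library’s current documentation.

```python
import numpy as np
import tsfel

fs = 100
window = np.random.randn(10 * fs)          # placeholder 10 s signal window

# Select one feature domain ("statistical", "temporal", or "spectral").
cfg = tsfel.get_features_by_domain("temporal")
features = tsfel.time_series_features_extractor(cfg, window, fs=fs)
print(features.shape)                      # one row of named features
```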
In addition to deep learning and classical ML, there are other methods that rely on subsequence search and similarity measurement and that are more suitable for real-time applications. For example, time series subsequence search (TSSEARCH) is a Python library that focuses on query search and time series segmentation [32]. Similarly, Rodrigues et al. proposed a practical and manageable way to automatically segment and label single-channel or multimodal biosignal data using a self-similarity matrix (SSM) computed from a feature-based representation of the signals [33].
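The core idea of an SSM can be sketched in a few lines: compute a feature vector per window, then the pairwise similarity between all windows. This toy example is only meant to convey the concept, not to reproduce the method of [33].

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
# Toy feature matrix: 100 windows x 5 features, with a regime change at
# window 50 so the SSM shows two similarity blocks along the diagonal.
features = np.vstack([rng.normal(0, 1, (50, 5)),
                      rng.normal(3, 1, (50, 5))])

distances = cdist(features, features)      # pairwise Euclidean distances
ssm = 1.0 / (1.0 + distances)              # convert distances to similarities
print(ssm.shape)                           # (100, 100)
```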

5. Data Collection Methods

Collecting high-quality data is crucial in physiological computing systems since it enables the extraction of desired information from signals. There are several methods available for collecting physiological signals, and we will cover the most commonly used ones in this section, as shown in Figure 2.

5.1. Baseline

The baseline method is a way of defining what is considered normal or typical, which is then used as a reference point during an experiment or study. This approach is often used in biocybernetic adaptation applications, such as estimating anxiety levels during computer games [34,35]. To apply the baseline method in this context, researchers typically record a subject’s physiological signals before the game, marking this as the baseline. They then use this information to create thresholds and make decisions during the game. For instance, the game’s difficulty level may be adjusted automatically based on the player’s anxiety levels, and the difficulty may be lowered to improve the player’s experience. Overall, the baseline method provides a useful framework for measuring and responding to physiological signals in real-time, which can enhance the effectiveness of interventions in various domains.
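A minimal sketch of the baseline method follows: a resting-state metric is recorded first, a personal threshold is derived from it, and live values are then compared against that threshold. The use of RMSSD and a two-standard-deviation margin are illustrative assumptions, not a prescription from the cited studies.

```python
import numpy as np

# Resting-state RMSSD values (ms) recorded before the game (toy numbers).
baseline_hrv = np.array([48.0, 52.0, 50.0, 47.0, 51.0])
threshold = baseline_hrv.mean() - 2 * baseline_hrv.std(ddof=1)

def is_stressed(current_hrv: float) -> bool:
    """HRV falling well below the personal baseline flags likely stress."""
    return current_hrv < threshold

print(is_stressed(38.0))   # True: far below this subject's baseline
```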

5.2. Pre-Trial

Compared to the baseline data collection method, the pre-trial data collection method involves collecting physiological data before each trial. These data describe the participant’s physiological state before the trial. For instance, in a study conducted by Dobbins et al. [36], participants were asked to complete a questionnaire before and after their commute for five working days. The questionnaire was used to measure the participants’ stress levels while driving. This approach enables researchers to identify changes in participants’ physiological state before and after the trial, providing valuable information about their daily commute.
However, this approach has its limitations. It requires participants to answer the same questions multiple times, which can be overwhelming and may affect the quality of the data collected. Therefore, researchers need to find ways to minimize the burden on participants while collecting accurate and reliable data.

5.3. Post/After Trial

Post-trial data collection is a commonly used technique in which a visual stimulus is presented to the subject, and the subject evaluates the stimulus by answering a questionnaire after the trial. For instance, in a study by Kumar et al. [37], participants worked with a UR-10 robot to perform an assembling task. The participants then completed a post-questionnaire to provide feedback on their experience.
Although this approach is widely used and provides valuable insight into participants’ perceptions, it has some limitations. The subjective nature of post-questionnaires may lead to biased responses, and participants may have difficulty recalling their experience accurately. Therefore, researchers need to design their post-questionnaires carefully and ensure that they are appropriate for the study’s objectives in order to obtain reliable and valid data. Additionally, researchers may consider using complementary data collection techniques, such as physiological measurements, to validate the results obtained through post-questionnaires.

5.4. During Trial

The during-trial data collection method involves asking the same question to participants during an ongoing trial, as depicted in Figure 2. This approach is valuable for monitoring trial progress, as evidenced by Sahin et al. [38], who collected perceived safety data during the trial and demonstrated that during-trial data collection provides more information than the after-trial method.
To ensure the integrity of the experiment, two critical aspects of during-trial data collection must be considered. Firstly, it is essential to limit the number of questions asked since the trial has not yet concluded. Secondly, data entry should be effortless. Instead of using pen and paper to collect participant data, it would be advantageous to provide an app that enables participants to enter their responses using taps on a tablet’s screen. Alternatively, recording participant audio feedback during the trial may improve during-trial data collection.
In conclusion, during-trial data collection methods provide additional information, but the questionnaire should have a limited number of questions to maintain the experiment’s integrity.

6. Data Labeling

After data collection, physiological signals need to be labeled. In some cases, the labeling can be cumbersome, especially in biocybernetics adaptation. This section will discuss commonly used data labeling techniques as shown in Figure 3.

6.1. Action/Content-Related Labeling

Action/content-related labeling is commonly used in visual-stimuli-type experiments [39,40,41]. In a visual experiment, the exact time at which each image or video is shown is known. Thus, the physiological signal can easily be labeled with the corresponding label. Similarly, in an action-related experiment, the amount of time for which the subject repeats the gesture/action is known; thus, a window that captures the gesture can be labeled accordingly [11]. Savur et al. discuss the critical aspects of data collection and labeling in HRC settings and provide case studies for a human–robot collaboration experiment featuring signal synchronization and automatic event generation [11].
Action/content labeling is the simplest way of labeling, and it can be carried out during the data collection process. Thus, this method is widely adopted in fields such as physiological research, marketing, and emotion detection.
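Because the stimulus onset and offset times are known, action/content-related labeling reduces to simple indexing, as in the sketch below (the sampling rate and event schedule are assumed for illustration).

```python
import numpy as np

fs = 100                                        # assumed sampling rate (Hz)
n_samples = 30 * fs                             # 30 s recording
labels = np.full(n_samples, "rest", dtype=object)

# Known stimulus schedule: (start_s, end_s, label); values are illustrative.
events = [(5.0, 10.0, "image_A"), (15.0, 20.0, "image_B")]
for start, end, label in events:
    labels[int(start * fs):int(end * fs)] = label

print(labels[7 * fs], labels[25 * fs])          # image_A rest
```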

6.2. Subjective Labeling

The questionnaire is a widely used tool in quantitative research, including in HRC studies. In human–robot collaboration research, questionnaires are essential for evaluating the effectiveness of various methodologies. For instance, Kumar et al. [37] used subjective responses obtained through questionnaires to compare their speed and separation monitoring methods with state-of-the-art techniques. Similarly, in emotion detection research, questionnaires are used to evaluate subjective responses to different scenes that may elicit different emotions [42]. Dobbins et al. [36] employed pre- and post-surveys to evaluate the impact of their experiment on the subjects. The survey results were quantitatively analyzed to determine if the experiment had a positive, negative, or neutral effect.
Questionnaires are useful in quantifying the subject’s preferences and evaluating the proposed methodology. Although it is common to use questionnaires, there is no standardized set of questions that researchers follow [43]. Generally, researchers create their own set of questions or modify an existing questionnaire to suit their research hypothesis. Below are some commonly used questionnaires in HRC research.
  • Godspeed was designed by Bartneck et al. to standardize measurement tools for HRI [44]. Godspeed focuses on five measurements: anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety. It is commonly used and has been translated into different languages.
  • NASA TLX was designed to measure subjective workload assessment. It is widely used in cognitive experiments. The NASA TLX measures six metrics: mental demand, physical demand, temporal demand, performance, effort, and frustration [45].
  • BEHAVE-II was developed for the assessment of robot behavior [46]. It measures the following metrics: anthropomorphism, attitude towards technology, attractiveness, likability, and trust.
  • Multidimensional Robot Attitude Scale (MRAS) is a 12-dimensional questionnaire developed by Ninomiya et al. [47]. The MRAS measures a variety of metrics such as familiarity, ease of use, interest, appearance, and social support.
  • Self-Assessment Manikin Instrument (SAM) consists of 18 questions that measure three metrics: pleasure, arousal, and dominance [48]. Unlike most surveys, the SAM asks respondents to rate between two opposite states: calm vs. excited, unhappy vs. happy, etc.
  • Negative Attitude toward Robots Scale (NARS) was developed to measure negative attitudes toward robots in terms of negative interactions with robots, social influence, and emotions in interaction with robots. Moreover, the NARS measures discomfort, anxiety, trust, etc. [49].
  • Robot Social Attributes Scale (RoSAS) is a survey that seeks to extract metrics of social perception of a robot such as warmth, competence, and discomfort [50].
  • STAXI-2 consists of 44 questions that measure state anger, trait anger, and anger expression [51].

7. Relevant Works

Table 2 shows some of the related works that are categorized in terms of stimuli type, sample size, data collection method, data labeling technique, and the machine learning algorithm used in the field of physiological computing.
There are many studies that focus on estimating a person’s emotional state, human stress level, and cognitive state through physiological signals. In addition, there are other researchers investigating the psychological aspects of robot behavior [59].
Kulic et al. [7] present a method to calculate a danger index using the distance and relative velocity between a human and a robot and the inertia of the point closest to the human, as suggested in [72]. The danger index, calculated in real time, is then used to control the robot’s trajectory on a real robot. Similarly, the same authors [52] tried to detect anxiety triggered by two trajectory planners. Biological signals and subjective responses were collected from subjects during the experiment. The subjective responses showed that the subjects felt less anxiety with the safe planner than with the classical planner. Moreover, the researchers found that the corrugator EMG signal did not help to estimate arousal and valence. However, they found strong positive correlations between anxiety and speed and between surprise and speed, and a negative correlation between calm and speed. Kulic and Croft [26,73] present an extension of the previous study [52], showing that the hidden Markov model (HMM) outperforms fuzzy inference at estimating arousal and valence from physiological signals.
Nomura et al. [49] investigated negative attitudes toward robots and developed a measurement scale called the “Negative Attitude towards Robot Scale” (NARS). One of the interesting results from this study shows that male students have fewer negative attitudes toward interactions with robots than female students in Japan. However, the authors propose a physiological experiment to be conducted to understand the human mental state during human–robot interactions. Additionally, the authors mention that this result may differ for other cultures since proxemics preferences [74] are different from culture to culture.
Villani et al. [35] introduced a framework that takes the human’s mental state into account to simplify a task in an industrial setup and improve the interaction between humans and robots. The authors used a smartwatch to measure heart rate variability (HRV) and estimated stress by applying a threshold. To find the thresholds for stress and resting, a subject’s HRV signal was collected just before the experiment started. In the experiment, the subject’s task was to navigate a mobile robot using hand gestures captured by the smartwatch IMU (roll, pitch, and yaw). While the subject was controlling the robot, his/her mental state was measured, and the speed of the robot was halved when the subject was stressed. Reducing the robot speed extended the task-completion time; as a result, efficiency was reduced.
The authors of [53] analyzed the mental workload of an operator in an industrial setup where the operator teleoperates the task. The authors collected an HRV signal for 2.5 min of resting state and 2.5 min of stress state (“creating stress by listening to loud music and counting numbers”); the resulting model was used in a teleoperated task where virtual fixtures appeared on the operator’s screen based on the subject’s stress level. Their system predicts stress only every 2.5 min, which is a drawback of the system.
Rani et al. [54] tried to develop a system that estimates the affective state of a human through wearable sensors. The authors used fuzzy inference on features extracted from ECG, EDA, and EMG signals. They also designed an experiment that allowed a user to communicate with a mobile robot implicitly. They found cardiac activity to be a strong indicator of anxiety.
Liu et al. [55] designed a system that used effective cues to improve human–robot interactions. The authors used various biometrics such as ECG, EDA, and EMG, and multiple features were extracted. The study consisted of two phases. The first phase was developing a regression tree model for the classification of anxiety. In the second phase, the model was used to set the game’s difficulty based on the player’s anxiety. As a result, the participant reported a 71% increase in satisfaction while the anxiety-based task was active.
Rani et al. [75] compared four common learning algorithms, K-nearest neighbor (KNN), regression tree (RT), Bayesian network, and support vector machine (SVM), on biological signals for affect recognition. The results of the experiment showed that the SVM outperformed the other algorithms, with 85.81% accuracy in classifying three classes. However, the RT was the fastest algorithm and is more suitable for real-time applications. Tan et al. [76] defined the important aspects of human–robot collaboration in factory settings. In addition, two experiments were proposed as a case study. The first case study investigated the effect of robot motion speed, and the second was conducted to see the effect of the human–robot distance. The experimental results showed that mental workload is directly proportional to the robot’s speed and inversely proportional to the distance.
Arai et al. [77] investigated the effects of the distance from a subject and the speed of the robot on the subject’s mental state using only skin conductivity sensors. Their research suggests that the distance between an operator and a robot should be more than two meters and that the speed of the end effector should not exceed 500 mm/s. In addition, notifying the operator about the robot speed reduces the operator’s mental strain.
Schirner et al. [78] discussed the future of human-in-the-loop cyber-physical systems, gave possible applications, and explained the framework they are working on. The proposed framework receives biometrics and estimates human intention from the signals; this kind of system is helpful for locked-in individuals. Hu et al. [56] built a model that estimates a human trust index from EEG and GSR sensors in real time. In their experiment, they asked users to evaluate a virtual sensor reading in simulation. Based on the sensor accuracy and the subjects’ responses, their results show that using physiological signals to estimate human trust level is promising.
The authors of [60] introduce a multi-modal emotional state detector using multiple devices. Their experiment focuses on short-term GSR and heart rate, short-term GSR and EEG, and long-term GSR and heart rate characterization. Rani et al. [57,79] tried to analyze anxiety-based affective implicit communication between a human and a robot. The researchers used ECG, EDA, EMG, and temperature signals in a regression tree and a fuzzy inference engine. Their results show that the detection of anxiety using physiological signals is promising and may show better results in the future.
As computer games become more popular, Rani et al. [34] sought to keep them engaging by using physiological signals. The authors estimated a gamer’s affective state in real time and altered the game difficulty accordingly. The results show that performance improved and anxiety was lower during gameplay. Erebak et al. [58] conducted an experiment among caregivers. Their results show that human-like robots and typical robots are not perceived differently, that there is a moderate correlation between trust in automation and the intention to work with robots, and that there is a weak positive correlation between trust in automation and preference for automation.
Dobbins et al. [36] used a wristband that recorded the GSR signal during the day. Their study aimed to estimate negative emotions such as stress. The GSR signals of six subjects were collected during the day, with two surveys per day used for labeling.
Ferrez et al. used EEG signals to detect error-related potentials (ErrPs) when the subject makes a mistake. Their results show that the estimation of ErrPs is promising and can be used in HRI [62]. Ehrlich et al. investigated the usage of EEG signals to detect the intention to initiate eye contact when a robot needs to engage with humans [63]. Val-Calvo et al. proposed a framework that uses multiple signals (EEG, HR, and GSR) to estimate emotions while the subject is watching TV, which can be used in HRI applications [64]. Mower et al. used physiological signals to estimate engagement implicitly. GSR and skin temperature signals with a KNN classifier were used to estimate user engagement with an accuracy of 84.73%. The authors suggest that implicit cues can be used in HRI applications [65].
Novak et al. used ECG, GSR, RSP, skin temperature, EEG, and eye-tracking signals to estimate a human’s workload and effort. During the trials, subjects were asked to fill out a NASA-TLX questionnaire [66]. Iturrate et al. used physiological signals to detect ErrPs from EEG signals; they used both simulation and an actual robot in their experiments. Their results show that the brain–computer interface can be used for continuous adaptation when there is no explicit information about the goal [67]. Ehrlich et al. tried to validate robot actions implicitly by using EEG signals. In their proposed approach, they focused on ErrPs, and their results showed a classification accuracy of 69.0% for the detection of incorrect robot actions [68]. Similarly, Salazar-Gomez et al. used the EEG signal with error-related potentials to fix the robot’s mistakes during the task. In the experiment, they used a Baxter robot that made decisions based on the EEG signal in real time [69]. Dehais et al. focused on handover tasks from robots to humans and collected physiological signals during the task. Their results showed that the physiological response varies according to the robot’s motions [70].
Savur et al. [71] conducted a study where they collected physiological signals, including ECG, GSR, and pupillometry, from subjects during trials. The subjective responses were used to map emotions to arousal and valence domains. The authors also calculated a comfort index to determine the axis of discomfort on the arousal and valence domain. Based on the calculated comfort index, the velocity of robot behavior was adjusted to reduce discomfort as the subject became more uncomfortable. This study highlights the importance of considering subjective comfort when designing robot behavior and demonstrates the potential for physiological signals to be used as a feedback mechanism for controlling robot behavior.
In summary, the studies reviewed in this survey aim to estimate the emotional and cognitive state of humans during human–robot interactions. Different algorithms have been used for this purpose, such as fuzzy inference, the hidden Markov model, K-nearest neighbor, regression tree, Bayesian network, and support vector machine. Each algorithm has its own advantages and disadvantages in terms of accuracy, computational cost, and real-time applicability. The generalizability and universality of the algorithms depend on the type and number of physiological signals used, as well as the cultural and personal factors that affect the emotional and cognitive states of humans [74]. Therefore, it is important to conduct further experiments in different cultures and populations to validate the applicability of these algorithms in real-world scenarios.

8. Discussion

This survey paper provides an overview of state-of-the-art methods in physiological computing, including commonly used physiological signals, data collection techniques, data labeling techniques, and questionnaire methods. While this paper is not exhaustive, it presents a comprehensive summary of current techniques and highlights the need for continued development in this field.
In the field of HRI applications, safety and fluency are critical concerns in industrial settings. While current safety standards address a range of potential hazards, they may not be comprehensive enough to ensure maximum safety. Therefore, the implementation of new standards and personally adaptive methods is crucial to further enhance safety. One promising approach is physiological computing, which has the potential to improve the safety and fluency of HRI applications by personalizing the interactions between humans and robots. As such, it is anticipated that multiple variations of physiological computing will be developed in the near future, further enhancing the safety and fluency of human–robot applications.
As the wearable technology industry continues to progress, it is becoming easier for researchers and companies to incorporate these technologies into their work. This includes the integration of new and advanced sensors, which are becoming more affordable as their production becomes more mainstream. By utilizing these new technologies, it is possible to enhance the collaboration between humans and robots, allowing for more seamless and intuitive interactions. As the field continues to evolve, we are likely to see even more innovative uses for multi-modal sensor systems, which will have a significant impact on the future of human–robot collaboration. In addition to HRC, another area that benefits from these sensors is activity recognition, which can also serve as an input to HRC applications. Liu et al. recently published a compilation of contributions from researchers in the field of activity recognition [80].
Although data collection is a challenging step, it is an essential input for physiological computing, and the quality of the data is a key factor for physiological computing systems. Therefore, widely used data collection techniques should be preferred and new, innovative methods implemented. One drawback in HRI applications is the lack of open-source datasets, which forces future research works to implement their own experiments and collect their own data. Public datasets would allow researchers to compare and improve existing methods while saving time and effort.
Similar to data collection methods, data labeling lacks protocols; presently, there is no standard protocol for data labeling. Researchers label their data based on their particular experiment, which makes it difficult to reproduce and compare results.
In conclusion, this survey article discussed the commonly used physiological signals, data collection methods, data labeling methods, and questionnaires. We also categorized physiological computing research in terms of the stimuli and data collection types used during experiments, the data labeling methods, and the machine learning algorithms. We hope that this paper will provide a framework for future researchers to select a suitable approach for future physiological computing systems.

Author Contributions

Conceptualization, C.S. and F.S.; methodology, C.S.; formal analysis, C.S.; resources, F.S.; writing—original draft preparation, C.S.; writing—review and editing, C.S.; visualization, C.S.; supervision, F.S.; project administration, F.S. All authors have read and agreed to the published version of the manuscript.

Funding

This material is based upon work partially supported by the National Science Foundation under Award No. DGE-2125362. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Data Availability Statement

The authors confirm that the data supporting the findings of this survey paper are available within the related research papers cited in this article.

Acknowledgments

The authors are grateful to the staff of Multi-Agent Bio-Robotics Laboratory (MABL), the CM Collaborative Robotics (CMCR) Lab, and the Electrical Engineering Department at RIT for their valuable inputs.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. IFR. World Robotics Report 2020; Technical Report. Available online: https://ifr.org/ifr-press-releases/news/record-2.7-million-robots-work-in-factories-around-the-globe (accessed on 27 April 2023).
  2. Korus, S. Industrial Robot Cost Declines Should Trigger Tipping Points in Demand; Technical Report. Available online: https://ark-invest.com/articles/analyst-research/industrial-robot-cost-declines/ (accessed on 27 April 2023).
  3. ISO/TS 15066:2016; Robots And Robotic Devices-Collaborative Robots. ISO: Geneva, Switzerland, 2016.
  4. Kumar, S.; Savur, C.; Sahin, F. Survey of Human–Robot Collaboration in Industrial Settings: Awareness, Intelligence, and Compliance. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 280–297. [Google Scholar] [CrossRef]
  5. Fairclough, S.H. Fundamentals of physiological computing. Interact. Comput. 2009, 21, 133–145. [Google Scholar] [CrossRef]
  6. NSF. Information and Intelligent Systems (IIS): Core Programs. Available online: https://www.nsf.gov/pubs/2018/nsf18570/nsf18570.htm (accessed on 27 April 2023).
  7. Kulic, D.; Croft, E.A. Anxiety detection during human–robot interaction. In Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, AB, Canada, 2–6 August 2005; pp. 616–621. [Google Scholar] [CrossRef]
  8. Kulić, D.; Croft, E. Physiological and subjective responses to articulated robot motion. Robotica 2007, 25, 13–27. [Google Scholar] [CrossRef]
  9. Tiberio, L.; Cesta, A.; Belardinelli, M. Psychophysiological Methods to Evaluate User’s Response in Human Robot Interaction: A Review and Feasibility Study. Robotics 2013, 2, 92–121. [Google Scholar] [CrossRef]
  10. Fairclough, S.H. Physiological Computing and Intelligent Adaptation. In Emotions and Affect in Human Factors and Human–Computer Interaction; Number 2017; Elsevier: Amsterdam, The Netherlands, 2017; pp. 539–556. [Google Scholar] [CrossRef]
  11. Savur, C.; Sahin, F. Real-Time American Sign Language Recognition System Using Surface EMG Signal. In Proceedings of the 2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA), Miami, FL, USA, 9–11 December 2015; pp. 497–502. [Google Scholar] [CrossRef]
  12. Musk, E. An integrated brain-machine interface platform with thousands of channels. J. Med. Internet Res. 2019, 21, 1–14. [Google Scholar] [CrossRef]
  13. Hughes, J.R. Electroencephalography. Basic principles, clinical applications and related fields. Electroencephalogr. Clin. Neurophysiol. 1982, 54, 473–474. [Google Scholar] [CrossRef]
  14. Klimesch, W. Memory processes, brain oscillations and EEG synchronization. Int. J. Psychophysiol. 1996, 24, 61–100. [Google Scholar] [CrossRef]
  15. O’Keefe, J.; Burgess, N. Theta activity, virtual navigation and the human hippocampus. Trends Cogn. Sci. 1999, 3, 403–406. [Google Scholar] [CrossRef]
  16. Yılmaz, B.; Korkmaz, S.; Arslan, D.B.; Güngör, E.; Asyalı, M.H. Like/dislike analysis using EEG: Determination of most discriminative channels and frequencies. Comput. Methods Programs Biomed. 2014, 113, 705–713. [Google Scholar] [CrossRef]
  17. Zhang, Y.; Chen, Y.; Bressler, S.; Ding, M. Response preparation and inhibition: The role of the cortical sensorimotor beta rhythm. Neuroscience 2008, 156, 238–246. [Google Scholar] [CrossRef]
  18. Alcaide, R.; Agarwal, N.; Candassamy, J.; Cavanagh, S.; Lim, M.; Meschede-Krasa, B.; McIntyre, J.; Ruiz-Blondet, M.V.; Siebert, B.; Stanley, D.; et al. EEG-Based Focus Estimation Using Neurable’s Enten Headphones and Analytics Platform. bioRxiv 2021. [Google Scholar]
  19. Hurst, J.W. Naming of the Waves in the ECG, With a Brief Account of Their Genesis. Circulation 1998, 98, 1937–1942. [Google Scholar] [CrossRef]
  20. Ali, M.; Machot, F.; Mosa, A.; Jdeed, M.; Machot, E.; Kyamakya, K. A Globally Generalized Emotion Recognition System Involving Different Physiological Signals. Sensors 2018, 18, 1905. [Google Scholar] [CrossRef]
  21. Choi, K.H.; Kim, J.; Kwon, O.S.; Kim, M.J.; Ryu, Y.H.; Park, J.E. Is heart rate variability (HRV) an adequate tool for evaluating human emotions?—A focus on the use of the International Affective Picture System (IAPS). Psychiatry Res. 2017, 251, 192–196. [Google Scholar] [CrossRef]
  22. Lu, G.; Yang, F.; Taylor, J.A.; Stein, J.F. A comparison of photoplethysmography and ECG recording to analyse heart rate variability in healthy subjects. J. Med. Eng. Technol. 2009, 33, 634–641. [Google Scholar] [CrossRef]
  23. Tobii. Available online: https://www.tobii.com/ (accessed on 27 April 2023).
  24. Bonifacci, P.; Desideri, L.; Ottaviani, C. Familiarity of Faces: Sense or Feeling? J. Psychophysiol. 2015, 29, 20–25. [Google Scholar] [CrossRef]
  25. Savur, C.; Sahin, F. American Sign Language Recognition system by using surface EMG signal. In Proceedings of the 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Budapest, Hungary, 9–12 October 2016; pp. 002872–002877. [Google Scholar] [CrossRef]
  26. Kulic, D.; Croft, E.A. Affective State Estimation for Human–Robot Interaction. IEEE Trans. Robot. 2007, 23, 991–1000. [Google Scholar] [CrossRef]
  27. Gouizi, K.; Bereksi Reguig, F.; Maaoui, C. Emotion recognition from physiological signals. J. Med. Eng. Technol. 2011, 35, 300–307. [Google Scholar] [CrossRef]
  28. Barandas, M.; Folgado, D.; Fernandes, L.; Santos, S.; Abreu, M.; Bota, P.; Liu, H.; Schultz, T.; Gamboa, H. TSFEL: Time Series Feature Extraction Library. SoftwareX 2020, 11, 100456. [Google Scholar] [CrossRef]
  29. Christ, M.; Braun, N.; Neuffer, J.; Kempa-Liehr, A.W. Time Series FeatuRe Extraction on basis of Scalable Hypothesis tests (tsfresh–A Python package). Neurocomputing 2018, 307, 72–77. [Google Scholar] [CrossRef]
  30. Makowski, D.; Pham, T.; Lau, Z.J.; Brammer, J.C.; Lespinasse, F.; Pham, H.; Schölzel, C.; Chen, S.H.A. NeuroKit2: A Python toolbox for neurophysiological signal processing. Behav. Res. Methods 2021, 53, 1689–1696. [Google Scholar] [CrossRef] [PubMed]
  31. Al-Qerem, A.; Kharbat, F.; Nashwan, S.; Ashraf, S.; Blaou, K. General model for best feature extraction of EEG using discrete wavelet transform wavelet family and differential evolution. Int. J. Distrib. Sens. Netw. 2020, 16, 1550147720911009. [Google Scholar] [CrossRef]
  32. Folgado, D.; Barandas, M.; Antunes, M.; Nunes, M.L.; Liu, H.; Hartmann, Y.; Schultz, T.; Gamboa, H. TSSEARCH: Time Series Subsequence Search Library. SoftwareX 2022, 18, 101049. [Google Scholar] [CrossRef]
  33. Rodrigues, J.; Liu, H.; Folgado, D.; Belo, D.; Schultz, T.; Gamboa, H. Feature-Based Information Retrieval of Multimodal Biosignals with a Self-Similarity Matrix: Focus on Automatic Segmentation. Biosensors 2022, 12, 1182. [Google Scholar] [CrossRef] [PubMed]
  34. Rani, P.; Sarkar, N.; Liu, C. Maintaining Optimal Challenge in Computer Games through Real-Time Physiological Feedback Mechanical Engineering. In Task-Specific Information Processing in Operational and Virtual Environments: Foundations of Augmented Cognition; Taylor & Francis: Boca Raton, FL, USA, 2006; pp. 184–192. [Google Scholar]
  35. Villani, V.; Sabattini, L.; Secchi, C.; Fantuzzi, C. A Framework for Affect-Based Natural Human–robot Interaction. In Proceedings of the 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Nanjing, China, 27–31 August 2018; pp. 1038–1044. [Google Scholar] [CrossRef]
  36. Dobbins, C.; Fairclough, S.; Lisboa, P.; Navarro, F.F.G. A Lifelogging Platform Towards Detecting Negative Emotions in Everyday Life using Wearable Devices. In Proceedings of the 2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), Athens, Greece, 19–23 March 2018; pp. 306–311. [Google Scholar] [CrossRef]
  37. Kumar, S.; Savur, C.; Sahin, F. Dynamic Awareness of an Industrial Robotic Arm Using Time-of-Flight Laser-Ranging Sensors. In Proceedings of the 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Miyazaki, Japan, 7–10 October 2018; pp. 2850–2857. [Google Scholar] [CrossRef]
  38. Sahin, M.; Savur, C. Evaluation of Human Perceived Safety during HRC Task using Multiple Data Collection Methods. In Proceedings of the 2022 17th Annual Conference System of Systems Engineering, SoSE 2022, Rochester, NY, USA, 7–11 June 2022; pp. 465–470. [Google Scholar]
  39. Kumar, S.; Sahin, F. A framework for a real time intelligent and interactive Brain Computer Interface. Comput. Electr. Eng. 2015, 43, 193–214. [Google Scholar] [CrossRef]
  40. Artal-Sevil, J.S.; Acon, A.; Montanes, J.L.; Dominguez, J.A. Design of a Low-Cost Robotic Arm controlled by Surface EMG Sensors. In Proceedings of the 2018 XIII Technologies Applied to Electronics Teaching Conference (TAEE), Canary Island, Spain, 20–22 June 2018; pp. 1–8. [Google Scholar] [CrossRef]
  41. Mangukiya, Y.; Purohit, B.; George, K. Electromyography(EMG) sensor controlled assistive orthotic robotic arm for forearm movement. In Proceedings of the 2017 IEEE Sensors Applications Symposium (SAS), Glassboro, NJ, USA, 13–15 March 2017; pp. 1–4. [Google Scholar] [CrossRef]
  42. Shu, L.; Xie, J.; Yang, M.; Li, Z.; Li, Z.; Liao, D.; Xu, X.; Yang, X. A review of emotion recognition using physiological signals. Sensors 2018, 18, 2074. [Google Scholar] [CrossRef]
  43. Zoghbi, S.; Kulić, D.; Croft, E.; Van Der Loos, M. Evaluation of affective state estimations using an on-line reporting device during human–robot interactions. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2009, St. Louis, MO, USA, 11–15 October 2009; pp. 3742–3749. [Google Scholar] [CrossRef]
  44. Bartneck, C.; Kulić, D.; Croft, E.; Zoghbi, S. Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. Int. J. Soc. Robot. 2009, 1, 71–81. [Google Scholar] [CrossRef]
  45. Hart, S.G.; Staveland, L.E. Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research. In Human Mental Workload; North-Holland: Amsterdam, The Netherlands, 1988; pp. 139–183. [Google Scholar] [CrossRef]
  46. Joosse, M.; Sardar, A.; Lohse, M.; Evers, V. BEHAVE-II: The Revised Set of Measures to Assess Users’ Attitudinal and Behavioral Responses to a Social Robot. Int. J. Soc. Robot. 2013, 5, 379–388. [Google Scholar] [CrossRef]
  47. Ninomiya, T.; Fujita, A.; Suzuki, D.; Umemuro, H. Development of the Multi-dimensional Robot Attitude Scale: Constructs of People’s Attitudes Towards Domestic Robots. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Cham, Switzerland, 2015; Volume 9388 LNCS, pp. 482–491. [Google Scholar] [CrossRef]
  48. Bradley, M.M.; Lang, P.J. Measuring emotion: The self-assessment manikin and the semantic differential. J. Behav. Ther. Exp. Psychiatry 1994, 25, 49–59. [Google Scholar] [CrossRef]
  49. Nomura, T.; Suzuki, T.; Kanda, T.; Kato, K. Measurement of negative attitudes toward robots. Interact. Stud. Soc. Behav. Commun. Biol. Artif. Syst. 2006, 7, 437–454. [Google Scholar] [CrossRef]
  50. Carpinella, C.M.; Wyman, A.B.; Perez, M.A.; Stroessner, S.J. The Robotic Social Attributes Scale (RoSAS). In Proceedings of the 2017 ACM/IEEE International Conference on Human–robot Interaction, Vienna, Austria, 6–9 March 2017; ACM: New York, NY, USA, 2017; Volume Part F1271, pp. 254–262. [Google Scholar] [CrossRef]
  51. Spielberger, C.D. State-Trait Anger Expression Inventory–2. Available online: https://www.parinc.com/Products/Pkey/429 (accessed on 27 April 2023).
  52. Kulic, D.; Croft, E.A. Real-time safety for human–robot interaction. In Proceedings of the 12th International Conference on Advanced Robotics (ICAR ’05), Seattle, WA, USA, 18–20 July 2005; pp. 719–724. [Google Scholar] [CrossRef]
  53. Landi, C.T.; Villani, V.; Ferraguti, F.; Sabattini, L.; Secchi, C.; Fantuzzi, C. Relieving operators’ workload: Towards affective robotics in industrial scenarios. Mechatronics 2018, 54, 144–154. [Google Scholar] [CrossRef]
  54. Rani, P.; Sarkar, N.; Smith, C. Affect-sensitive human–robot cooperation–theory and experiments. In Proceedings of the 2003 IEEE International Conference on Robotics and Automation (Cat. No.03CH37422), Taipei, Taiwan, 14–19 September 2003; pp. 2382–2387. [Google Scholar] [CrossRef]
  55. Liu, C.; Rani, P.; Sarkar, N. Human–robot interaction using affective cues. In Proceedings of the IEEE International Workshop on Robot and Human Interactive Communication, Hatfield, UK, 6–8 September 2006; pp. 285–290. [Google Scholar] [CrossRef]
  56. Hu, W.L.; Akash, K.; Jain, N.; Reid, T. Real-Time Sensing of Trust in Human-Machine Interactions. IFAC-PapersOnLine 2016, 49, 48–53. [Google Scholar] [CrossRef]
  57. Rani, P.; Sarkar, N.; Adams, J. Anxiety-based affective communication for implicit human–machine interaction. Adv. Eng. Inform. 2007, 21, 323–334. [Google Scholar] [CrossRef]
  58. Erebak, S.; Turgut, T. Caregivers’ attitudes toward potential robot coworkers in elder care. Cogn. Technol. Work 2019, 21, 327–336. [Google Scholar] [CrossRef]
  59. Butler, J.T.; Agah, A. Psychological effects of behavior patterns of a mobile personal robot. Auton. Robot. 2001, 10, 185–202. [Google Scholar] [CrossRef]
  60. Abdur-Rahim, J.; Morales, Y.; Gupta, P.; Umata, I.; Watanabe, A.; Even, J.; Suyama, T.; Ishii, S. Multi-sensor based state prediction for personal mobility vehicles. PLoS ONE 2016, 11, e0162593. [Google Scholar] [CrossRef]
  61. Dobbins, C.; Fairclough, S. Detecting and Visualizing Context and Stress via a Fuzzy Rule-Based System during Commuter Driving. In Proceedings of the 2019 IEEE International Conference on Pervasive Computing and Communications Workshops, PerCom Workshops 2019, Kyoto, Japan, 11–15 March 2019; pp. 499–504. [Google Scholar] [CrossRef]
  62. Ferrez, P.W.; Millán, J.D.R. You are wrong!–Automatic detection of interaction errors from brain waves. In Proceedings of the IJCAI International Joint Conference on Artificial Intelligence, Edinburgh, UK, 30 July–5 August 2005; pp. 1413–1418. [Google Scholar]
  63. Ehrlich, S.K.; Cheng, G. Human-agent co-adaptation using error-related potentials. J. Neural Eng. 2018, 15, 066014. [Google Scholar] [CrossRef]
  64. Val-Calvo, M.; Álvarez-Sánchez, J.R.; Ferrández-Vicente, J.M.; Díaz-Morcillo, A.; Fernández-Jover, E. Real-Time Multi-Modal Estimation of Dynamically Evoked Emotions Using EEG, Heart Rate and Galvanic Skin Response. Int. J. Neural Syst. 2020, 30, 2050013. [Google Scholar] [CrossRef]
  65. Mower, E.; Feil-Seifer, D.J.; Matarić, M.J.; Narayanan, S. Investigating implicit cues for user state estimation in human–robot interaction using physiological measurements. In Proceedings of the IEEE International Workshop on Robot and Human Interactive Communication, Jeju, Republic of Korea, 26–29 August 2007; pp. 1125–1130. [Google Scholar] [CrossRef]
  66. Novak, D.; Beyeler, B.; Omlin, X.; Riener, R. Workload estimation in physical human–robot interaction using physiological measurements. Interact. Comput. 2015, 27, 616–629. [Google Scholar] [CrossRef]
  67. Iturrate, I.; Chavarriaga, R.; Montesano, L.; Minguez, J.; Millán, J.D.R. Teaching brain-machine interfaces as an alternative paradigm to neuroprosthetics control. Sci. Rep. 2015, 5, 1–11. [Google Scholar] [CrossRef]
  68. Ehrlich, S.K.; Cheng, G. A Feasibility Study for Validating Robot Actions Using EEG-Based Error-Related Potentials. Int. J. Soc. Robot. 2019, 11, 271–283. [Google Scholar] [CrossRef]
  69. Salazar-Gomez, A.F.; Delpreto, J.; Gil, S.; Guenther, F.H.; Rus, D. Correcting robot mistakes in real time using EEG signals. In Proceedings of the IEEE International Conference on Robotics and Automation, Singapore, 29 May–3 June 2017; pp. 6570–6577. [Google Scholar] [CrossRef]
  70. Dehais, F.; Sisbot, E.A.; Alami, R.; Causse, M. Physiological and subjective evaluation of a human–robot object hand-over task. Appl. Ergon. 2011, 42, 785–791. [Google Scholar] [CrossRef] [PubMed]
  71. Savur, C. A Physiological Computing System to Improve Human-Robot Collaboration by Using Human Comfort Index. Ph.D. Thesis, Rochester Institute of Technology, Rochester, NY, USA, 2022. [Google Scholar]
  72. Nokata, M.; Ikuta, K.; Ishii, H. Safety-optimizing method of human-care robot design and control. In Proceedings of the 2002 IEEE International Conference on Robotics and Automation (Cat. No.02CH37292), Washington, DC, USA, 11–15 May 2002; Volume 2, pp. 1991–1996. [Google Scholar] [CrossRef]
  73. Kulic, D.; Croft, E. Estimating Robot Induced Affective State using Hidden Markov Models. In Proceedings of the ROMAN 2006–The 15th IEEE International Symposium on Robot and Human Interactive Communication, Hatfield, UK, 6–8 September 2006; pp. 257–262. [Google Scholar] [CrossRef]
  74. Hall, E.T. Proxemics and Design. Des. Environ. 1971, 2, 24–25. [Google Scholar]
  75. Rani, P.; Liu, C.; Sarkar, N.; Vanman, E. An empirical study of machine learning techniques for affect recognition in human–robot interaction. Pattern Anal. Appl. 2006, 9, 58–69. [Google Scholar] [CrossRef]
  76. Tan, J.T.C.; Duan, F.; Zhang, Y.; Watanabe, K.; Kato, R.; Arai, T. Human–Robot Collaboration in Cellular Manufacturing: Design and Development. In Proceedings of the International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 10–15 October 2009; pp. 29–34. [Google Scholar] [CrossRef]
  77. Arai, T.; Kato, R.; Fujita, M. Assessment of operator stress induced by robot collaboration in assembly. CIRP Ann.–Manuf. Technol. 2010, 59, 5–8. [Google Scholar] [CrossRef]
  78. Schirner, G.; Erdogmus, D.; Chowdhury, K.; Padir, T. The future of human-in-the-loop cyber-physical systems. Computer 2013, 46, 36–45. [Google Scholar] [CrossRef]
  79. Rani, P.; Sarkar, N. Emotion-sensitive robots—A new paradigm for human–robot interaction. In Proceedings of the 4th IEEE/RAS International Conference on Humanoid Robots, Santa Monica, CA, USA, 10–12 November 2004; Volume 1, pp. 149–167. [Google Scholar] [CrossRef]
  80. Liu, H.; Gamboa, H.; Schultz, T. Sensor-Based Human Activity and Behavior Research: Where Advanced Sensing and Recognition Technologies Meet. Sensors 2023, 23, 125. [Google Scholar] [CrossRef]
Figure 1. Physiological computing can play a role in human–robot collaboration (HRC), where it is assumed that the robot’s behavior, such as a change in velocity or acceleration, affects the human’s physiological response. Monitoring these responses during collaboration yields valuable information about the human’s psycho-physiological state, which can then be used to make the collaboration safer and more seamless: the estimated state guides the robot’s behavior accordingly.
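To make the loop in Figure 1 concrete, the following minimal Python sketch closes the path from physiological measurement to robot behavior. Every name here is a hypothetical stand-in rather than an interface from the surveyed systems: Robot.set_speed_scale represents a vendor speed-override command, read_features represents a live GSR/ECG acquisition pipeline, and estimate_arousal uses an illustrative weighting of SCR onset rate and RMSSD.

```python
import time
import random

class Robot:
    """Stand-in for a cobot controller; a real system would call the
    vendor's speed-override or trajectory-scaling interface instead."""
    def set_speed_scale(self, scale: float) -> None:
        print(f"robot speed scale -> {scale:.2f}")

def read_features() -> dict:
    """Stand-in for a live GSR/ECG acquisition pipeline."""
    return {"scr_onsets_per_min": random.uniform(0.0, 10.0),
            "rmssd_ms": random.uniform(10.0, 60.0)}

def estimate_arousal(f: dict) -> float:
    """Toy estimator: more SCR onsets and lower RMSSD -> higher arousal.
    The weighting is illustrative only, not taken from the surveyed papers."""
    eda = min(f["scr_onsets_per_min"] / 10.0, 1.0)
    hrv = 1.0 - min(f["rmssd_ms"] / 60.0, 1.0)
    return 0.5 * eda + 0.5 * hrv

robot = Robot()
for _ in range(3):                               # a few control cycles for demonstration
    arousal = estimate_arousal(read_features())
    robot.set_speed_scale(1.0 - 0.5 * arousal)   # slow the robot as estimated arousal rises
    time.sleep(0.1)
```

In a deployed system the toy estimator would be replaced by one of the trained models tabulated in Table 2 (e.g., fuzzy inference or an SVM), and the adaptation rule would be constrained by applicable safety limits.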
Figure 2. Four data collection methods: baseline, pre-trial, during-trial, and post-trial.
Figure 3. Data collection methods.
Table 1. Commonly used physiological metrics extracted from the ECG, GSR, pupillometry, and EEG signals.

| Signal Type | Feature | Description |
|---|---|---|
| ECG | MeanNN | Mean of the RR intervals. |
| ECG | SDNN | Standard deviation of the RR intervals. |
| ECG | RMSSD | Square root of the mean of the squared successive differences between adjacent RR intervals. |
| ECG | SDSD | Standard deviation of the successive differences between RR intervals. |
| ECG | pNN50 | Proportion of successive RR-interval differences greater than 50 ms, out of the total number of RR intervals. |
| ECG | pNN20 | Proportion of successive RR-interval differences greater than 20 ms, out of the total number of RR intervals. |
| ECG | LF | Spectral power of the low-frequency band (0.04–0.15 Hz). |
| ECG | HF | Spectral power of the high-frequency band (0.15–0.4 Hz). |
| GSR | Amp. Mean | Mean value of peak amplitude. |
| GSR | Amp. Std | Standard deviation of peak amplitude. |
| GSR | Phasic Mean | Mean value of the phasic signal. |
| GSR | Phasic Std | Standard deviation of the phasic signal. |
| GSR | Tonic Mean | Mean value of the tonic signal. |
| GSR | Tonic Std | Standard deviation of the tonic signal. |
| GSR | Onset Rate | Number of onsets per minute. |
| Pupillometry | Pupil Mean | Mean value of the pupil signal. |
| Pupillometry | Pupil Std | Standard deviation of the pupil signal. |
| EEG | MAV | Mean absolute value. |
| EEG | ZC | Zero crossings. |
| EEG | SSC | Slope sign changes. |
| EEG | SKE | Skewness of the EEG signal. |
| EEG | Kurtosis | Kurtosis of the EEG signal. |
| EEG | Entropy | Entropy of the EEG signal. |
| EEG | SEntropy | Spectral entropy of the EEG signal. |
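Most of the time-domain entries in Table 1 reduce to a few lines of array arithmetic. The sketch below, assuming a clean RR-interval series in milliseconds and single pre-filtered EEG/GSR channels, shows one plausible implementation; the function names are ours, the tonic/phasic split is a crude moving-average approximation (dedicated EDA toolkits use deconvolution-based decomposition instead), and the LF/HF powers are omitted because they require resampling the RR series and integrating its power spectral density over the 0.04–0.15 Hz and 0.15–0.4 Hz bands.

```python
import numpy as np

def hrv_time_domain(rr_ms: np.ndarray) -> dict:
    """Time-domain HRV features from Table 1, given RR intervals in ms."""
    d = np.diff(rr_ms)  # successive differences between adjacent RR intervals
    return {
        "MeanNN": rr_ms.mean(),
        "SDNN": rr_ms.std(ddof=1),
        "RMSSD": np.sqrt(np.mean(d ** 2)),
        "SDSD": d.std(ddof=1),
        "pNN50": np.mean(np.abs(d) > 50.0),  # fraction of successive differences > 50 ms
        "pNN20": np.mean(np.abs(d) > 20.0),
    }

def eeg_time_domain(x: np.ndarray) -> dict:
    """Simple time-domain EEG features (MAV, ZC, SSC) for one channel."""
    dx = np.diff(x)
    return {
        "MAV": np.mean(np.abs(x)),
        "ZC": int(np.sum(x[:-1] * x[1:] < 0)),     # sign changes between consecutive samples
        "SSC": int(np.sum(dx[:-1] * dx[1:] < 0)),  # slope sign changes
    }

def gsr_tonic_phasic(gsr: np.ndarray, fs: float, win_s: float = 4.0):
    """Crude tonic/phasic split: a moving average approximates the slow
    tonic level, and the residual is treated as the phasic component."""
    w = max(int(win_s * fs), 1)
    tonic = np.convolve(gsr, np.ones(w) / w, mode="same")
    return tonic, gsr - tonic

# Toy demonstration with artificial signals.
rr = np.array([812.0, 790.0, 805.0, 840.0, 798.0, 776.0, 820.0, 835.0])
print(hrv_time_domain(rr))
print(eeg_time_domain(np.random.randn(512)))
tonic, phasic = gsr_tonic_phasic(np.random.rand(256), fs=16.0)
```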
Table 2. Related research tabulated by bio-signals, sample size, stimuli, data collection type, label type, algorithm, and target.

| Reference | Bio-Signals | Sample Size | Stimuli | Data Col. Type | Label Type | Algorithm | Target |
|---|---|---|---|---|---|---|---|
| Kulic et al. [7] | SC, HR, EMG | 36 | Robot trajectory | After trial | Subjective (custom) | Fuzzy inference | Arousal, valence |
| Kulic et al. [52] | SC, HR, EMG | 36 | Robot trajectory | After trial | Subjective (custom) | HMM | Arousal, valence |
| Nomura et al. [49] | None | 240 | Interaction with robot | After trial | NARS | Statistical analysis | Negative attitude |
| Villani et al. [35] | Control robot arm | 21 | Interaction with robot | Baseline | Subjective (custom) | Thresholding | Stress |
| Landi et al. [53] | HRV (smartwatch) | 21 | Teleoperation | Baseline | Subjective (custom) | Thresholding | Stress |
| Rani et al. [54] | ECG, EDA, EMG | NA | Control mobile robot | Baseline | Subjective (custom) | Fuzzy inference | Affective state |
| Liu et al. [55] | ECG, EDA, EMG | 14 | Control robot arm | Baseline | Subjective (custom) | Regression tree model | Affective cues |
| Rani et al. [34] | ECG, EDA, EMG | 15 | Game | Baseline | Subjective (custom) | KNN, Bayesian | Compare learning algorithms |
| Hu et al. [56] | EEG, GSR | 31 | Car simulation | Baseline | Subjective (custom) | LDA, linear SVM, LR, QDA, KNN | Measuring trust |
| Rani et al. [57] | ECG, ICG, PPG, heart sound, GSR, EMG | 15 | Game | Baseline | NASA TLX | Regression tree | Affective state |
| Erebak et al. [58] | None | 102 | Robot’s appearance | After trial | Subjective (custom) | Statistical analysis | Anthropomorphism of robot |
| Butler et al. [59] | None | 40 | Mobile robot behavior | After trial | Subjective (custom) | Statistical analysis | Psychological aspects |
| Rahim et al. [60] | EEG, IBI, GSR | 15 | Wheelchair | Baseline | STAI | ANOVA, LDA, SVM, SLR | Stress estimation |
| Dobbins et al. [61] | ECG, PPG | 21 | Commute (car) | Before/after trial | STAXI-2, UMACL | LDA, DT, kNN | Negative emotion |
| Ferrez et al. [62] | EEG | 3 | HRI | After trial | Subjective (custom) | Gaussian classifiers | Error-related potential |
| Ehrlich et al. [63] | EEG | 6 | HRI | After trial | Subjective (custom) | SVM | Error-related potential |
| Val-Calvo et al. [64] | EEG, GSR, PPG | 18 | Visual | After trial | Subjective (custom) | AdaBoost, Bayesian, QDA | Arousal, valence |
| Mower et al. [65] | GSR | 26 | HRI | – | – | KNN | User state estimation |
| Novak et al. [66] | ECG, GSR, RSP, skin temp., EEG, eye tracking | 10 | HRI | After trial | NASA TLX | RF | Workload |
| Iturrate et al. [67] | EEG | 12 | HRI | After trial | NASA TLX | Reinforcement learning | Error signal |
| Ehrlich et al. [68] | EEG | 13 | HRI | – | Action (key press) | LDA | Error signal |
| Salazar-Gomez et al. [69] | EEG | 12 | HRI | After trial | Subjective (custom) | LDA | Error signal |
| Dehais et al. [70] | GSR, pupil, gaze | 12 | HRI (hand-over task) | After trial | Subjective (custom) | Statistical analysis | Metrics |
| Sahin et al. [38] | GSR, pupil, ECG | 20 | HRI | During and after trial | Subjective (custom) | Statistical analysis | Perceived safety |
| Savur et al. [71] | GSR, pupil, ECG | 36 | HRI | During and after trial | Subjective (custom) | Circumplex model | Comfort index |