Systematic Review

A Systematic Review of Sensor Fusion Methods Using Peripheral Bio-Signals for Human Intention Decoding

1 Chair of Autonomous Systems and Mechatronics, Department of Electrical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91052 Erlangen, Germany
2 Department of Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91052 Erlangen, Germany
* Author to whom correspondence should be addressed.
Sensors 2022, 22(17), 6319; https://doi.org/10.3390/s22176319
Submission received: 12 July 2022 / Revised: 2 August 2022 / Accepted: 18 August 2022 / Published: 23 August 2022

Abstract

Humans learn about the environment by interacting with it. With the increasing use of computer and virtual applications as well as robotic and prosthetic devices, there is a need for intuitive interfaces that allow the user to have an embodied interaction with the devices they are controlling. Muscle–machine interfaces (MuMIs) can provide an intuitive solution by decoding human intentions utilizing myoelectric activations. There are several different methods that can be utilized to develop MuMIs, such as electromyography, ultrasonography, mechanomyography, and near-infrared spectroscopy. In this paper, we analyze the advantages and disadvantages of different myography methods by reviewing myography fusion methods. In a systematic review following the PRISMA guidelines, we identify and analyze studies that employ the fusion of different sensors and myography techniques, while also considering interface wearability. We also explore the properties of different fusion techniques in decoding user intentions. The fusion of electromyography, ultrasonography, mechanomyography, and near-infrared spectroscopy, as well as of other sensing methods such as inertial measurement units and optical sensors, has been of continuous interest over the last decade, with the main focus on decoding the user intention for the upper limb. From the systematic review, it can be concluded that the fusion of two or more myography methods leads to a better performance in decoding a user’s intention. Furthermore, promising sensor fusion techniques for different applications were identified based on the existing literature.

1. Introduction

In the last decade, robotic and prosthetic devices have been gaining importance in everyday tasks (e.g., robotic teleoperation and telemanipulation, user assistance, rehabilitation, and augmentation). For an intuitive user experience while interacting with and controlling these devices, new human–machine interfaces need to be developed [1]. Current interfaces that facilitate such interactions often include joysticks and mechanical buttons. A drawback of these interfaces is that they impose a steep learning curve on the user, who must map complex motions to simple joystick motions or button presses. This results in an inefficient control of the robotic or prosthetic devices due to the limited functionality offered by the interface. An alternative approach to interface with such devices is to employ muscle–machine interfaces (MuMIs) [2]. Such interfaces can be developed employing different sensing modalities to measure muscle activity or movement. Some of the most common modalities include electromyography (EMG) [3,4,5,6,7,8,9,10], ultrasonography (US) [11,12,13,14,15], mechanomyography (MMG) [16,17,18,19], and near-infrared spectroscopy (NIRS) [20,21].
EMG measures the electrical activations in the muscles that are generated as a result of biological processes during muscle contraction [22]. These activations can be measured both invasively, using needle electrodes, and non-invasively, by placing electrodes on the surface of the skin. The non-invasive method, known as surface EMG (sEMG), is more common and sees a wide range of applications [23,24,25,26,27,28]. In the following, EMG and sEMG are both used to denote the non-invasive surface EMG method. Being non-invasive, these systems are easy to use. EMG also has a high temporal resolution [29]. However, EMG signals are non-stationary, prone to crosstalk between different muscles, and sensitive to electrode shifts during use, sweating, fatigue, and electromagnetic noise [30]. Moreover, surface EMG can only measure superficial muscles; to measure the activations of deep-seated muscles, the invasive variant is needed [31].
An alternative to EMG is MMG, which measures the mechanical response of the muscles during contraction [32]. For MMG, the information is usually in the frequency band of 2–200 Hz. In the literature, it is referred to by several different terms, such as muscular sound, phonomyogram, acoustic myogram, soundmyogram, or vibromyogram [33]. Some of its advantages over EMG are that it is not affected by sweat, has a higher signal-to-noise ratio, and is less sensitive to variations in the placement of the sensor on the muscle of interest [34,35]. However, like EMG, it is prone to crosstalk between different muscle groups [36]. Furthermore, interference due to ambient acoustic/vibrational noise as well as the lack of established sensors inhibits its mainstream use [32].
Other alternative myography methods are US and NIRS. US utilizes ultrasound images to capture muscle movement, which are then used to decipher the user’s intention [37]. In the literature, US is also referred to as sonomyography. An advantage of US over other myography methods is that it can record the activity of deep muscles non-invasively without crosstalk from adjacent muscles [11]. It is also robust against sweat and electrical interference. However, US-based interfaces are generally bulky and expensive, and the US probe needs to be frequently gelled for proper functioning. NIRS decodes user intention by quantifying the relative changes in the concentration of oxygenated and deoxygenated hemoglobin during muscle contraction [20]. It offers a high spatial resolution while tracking user motion, but it is sensitive to muscle fatigue and optical noise [38]. Moreover, NIRS has a delayed response to the motion of the user, leading to a low temporal resolution [39].
From the available literature, it is evident that the user intention can be decoded to execute various tasks using different sensing modalities. Researchers have employed MuMIs for decoding hand gestures to control robotic hands or to interact with computer applications [40,41,42,43], for decoding continuous arm–hand motions [44,45,46], for rehabilitation after strokes [47,48], and for games and entertainment [49]. Such MuMIs have also been employed in decoding walking patterns for an effective control of lower limb prostheses [50,51], quantifying user fatigue during various tasks [52], and decoding user intentions during collaborative tasks with robots [53,54]. Several researchers have also focused on employing electroencephalography signals to decode user intentions, hand and finger motions, walking intentions, etc. [55,56,57,58,59]. Since the main focus of this work is on peripheral bio-signals, electroencephalography-based studies are not included in the systematic review process. While all available methods have their advantages and drawbacks, it appears that the performance of the individual methods can be improved by fusing different methods together or by performing fusion with other sensors (e.g., inertial measurement units (IMUs)). This work presents different methods of acquiring bio-signals from the peripheral nervous system of the user in a non-invasive way to decode the user’s intentions. The goal of this work was to analyze the advantages and disadvantages of different myography methods and to explore the properties of different fusion techniques in decoding the user’s intentions by reviewing and discussing different fusion methods. With the advantages and disadvantages of the different sensing modalities identified, informed decisions can be made when selecting sensing methods while developing MuMIs. To do this, we performed a systematic review of the works that employ the fusion of different sensors and myography techniques.
The rest of the paper is organized as follows: In Section 2, we present the details of the process followed for the systematic review. In Section 3, we present the results along with a description of the key papers included in the review. The papers that emphasize the potential of fusing different myography methods and external sensors for decoding human intention are considered in Section 4 to give a broader overview. Finally, Section 5 presents the concluding remarks and potential future directions for the research.

2. Methods

The literature for the systematic review was searched during September 2021. The databases and the search terms are listed in Table 1. The term in the right column of Table 1 is exemplary of the syntax used; it was adjusted as per the requirements of the search engine of each database while making sure that the queries remained semantically identical. For a comprehensive study, a full-text search was conducted across all databases. Full-text searches included the title, keywords, and research highlights along with the full-text papers, depending upon the database. The whole systematic review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [60,61], which provide instructions for performing a review using systematic methods to collate and synthesize the findings of studies that address a clearly formulated question.
Three exclusion (E) criteria and one inclusion (I) criterion were chosen by all authors to determine the relevance of the papers. The screening and assessment process is shown in Figure 1. The search through the databases yielded 1402 papers which, after removing duplicates, were reduced to 1298 for the first screening. In the first round of screening, only titles and abstracts were considered, and papers were excluded based on exclusion criteria E0 and E1. During the second screening round, the full texts were read, with the additional consideration of inclusion criterion I0 and exclusion criterion E2. Studies were included only if they satisfied criterion I0, and excluded as soon as any of the exclusion criteria were met. The exclusion and inclusion criteria considered in the study are as follows:
  • E0: Studies focusing on animals, the analysis of vital signs, humans under 18, or electrical stimulation;
  • E1: A fixed structure (an interface should have the potential to be portable to be included), stationary instrumentation;
  • E2: No inclusion criteria met;
  • I0: Should use two or more sensing methods focusing on skeletal muscles and perform fusion to achieve better performance.
Inclusion criterion I0 covers both theoretical and experimental contributions, based on which the subsequent survey and the results are structured. E0 excludes all studies conducted with animals, as well as all studies that employ different myographies and data fusion techniques for studying vital signs to monitor health conditions. Furthermore, since many works focus on the development of muscle strength in children, which is not the main focus of the current review, studies with humans under 18 were excluded. We also excluded all studies with electrical stimulation, as they focus on actuating the muscles to help users regain control of their muscles (as part of therapy and rehabilitation) after a stroke, as opposed to decoding the intention of the user. E1 focuses on the wearability of the interface and excludes works that employ fixed structures for the experimental rig, such as magnetic resonance imaging, computed tomography, positron emission tomography, etc. Finally, E2 excludes the papers that do not meet the inclusion criterion. Strict inclusion/exclusion criteria were set up for the selection of the papers focusing on the fusion of different sensing technologies targeting user intention decoding; however, unselected papers that emphasize the potential of such applications are considered in the discussion to give a broader overview.
All papers were separately screened by the first and second author. To assess the inter-rater agreement, Cohen’s Kappa value was calculated [62]. Cohen’s Kappa is an established measure in the research community and was suggested as a statistical measure of inter-rater reliability [63,64,65]. There are several different ways to interpret Cohen’s metric. One of the proposed interpretations is as follows (others are discussed with the results in Section 3): values < 0 indicate no agreement, 0.00–0.20 “none”, 0.21–0.39 “minimal”, 0.40–0.59 “weak”, 0.60–0.79 “moderate”, 0.80–0.90 “strong”, and >0.90 “almost perfect” agreement [66]. If the two raters ($R_1$, $R_2$) mark their inclusion and exclusion responses as follows: $R_{1,\mathrm{yes}} \cap R_{2,\mathrm{yes}} = a$, $R_{1,\mathrm{yes}} \cap R_{2,\mathrm{no}} = b$, $R_{1,\mathrm{no}} \cap R_{2,\mathrm{yes}} = c$, and $R_{1,\mathrm{no}} \cap R_{2,\mathrm{no}} = d$, then Cohen’s Kappa can be calculated as:

$$\kappa = \frac{p_o - p_e}{1 - p_e}$$

where $p_o$ is the relative observed agreement among raters, and $p_e$ is the hypothetical probability of chance agreement. $p_o$ and $p_e$ can be calculated as follows:

$$p_o = \frac{a + d}{a + b + c + d}$$

$$p_{\mathrm{yes}} = \frac{a + b}{a + b + c + d} \times \frac{a + c}{a + b + c + d}$$

$$p_{\mathrm{no}} = \frac{c + d}{a + b + c + d} \times \frac{b + d}{a + b + c + d}$$

$$p_e = p_{\mathrm{yes}} + p_{\mathrm{no}}$$
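To make the computation concrete, the following Python snippet (a minimal sketch written for this review’s binary include/exclude setting, not code from any of the reviewed studies) implements the above formulas:

```python
def cohens_kappa(a: int, b: int, c: int, d: int) -> float:
    """Cohen's Kappa for two raters with binary (include/exclude) ratings.

    a: both raters include, b: only rater 1 includes,
    c: only rater 2 includes, d: both raters exclude.
    """
    n = a + b + c + d
    p_o = (a + d) / n                      # relative observed agreement
    p_yes = ((a + b) / n) * ((a + c) / n)  # chance agreement on "include"
    p_no = ((c + d) / n) * ((b + d) / n)   # chance agreement on "exclude"
    p_e = p_yes + p_no                     # hypothetical chance agreement
    return (p_o - p_e) / (1 - p_e)
```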
Papers with conflicting ratings were discussed, the decisions were revised, and a paper was included only if both authors agreed on its inclusion. By following this approach, premature exclusion or inclusion can be avoided. The screening process was designed and supervised in collaboration with the third author.

3. Results

An overview of the screening process is shown in Figure 1 and discussed in detail in Section 2. Of the 1298 papers that were reviewed during the first round of screening, 1199 papers were excluded by both authors and 49 papers were excluded by only one author based on reading the title and abstract, before a discussion between them. The remaining 50 papers were included by both authors. After resolving the conflicting ratings in a discussion, 1239 papers were excluded by both authors and 3 papers were excluded by only one author. Finally, 56 papers were included for the second screening round. In the first round of screening, before the discussion between the authors, a Cohen’s Kappa value of $\kappa = 0.65$ was achieved, which suggests a “good” [67], “substantial” [68], or “moderate” [66] inter-rater agreement. After the discussion between the authors, the Cohen’s Kappa value increased to $\kappa = 0.97$, suggesting an “almost perfect” [68] inter-rater agreement.
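As a check, the reported first-round value can be reproduced with the cohens_kappa sketch from Section 2, using the counts given above (50 papers included by both raters, 1199 excluded by both, and 49 rated differently). The exact split of the 49 disagreements between the two raters is not reported, so an even split is assumed here:

```python
# First-round screening counts; the 49 single-rater exclusions are split
# between b and c, as the exact split is not reported in the text.
kappa = cohens_kappa(a=50, b=25, c=24, d=1199)
print(f"kappa = {kappa:.2f}")  # prints kappa = 0.65, matching the reported value
```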
During the second round of screening, the papers were included or excluded based on a full-text analysis. In this round, two papers were excluded from the study because no English manuscript (or official translation) could be found and the authors lacked the expertise to translate them. Of the 56 papers that were reviewed during this round, 17 papers were excluded by both authors before a discussion between them, while 7 papers were excluded by only one author. The remaining 30 papers were included by both authors. After a discussion, a complete agreement was reached: 33 papers were included and 21 papers were excluded by both authors. In the second round of screening, before the discussion between the authors, a Cohen’s Kappa value of $\kappa = 0.73$ was achieved, again suggesting a “substantial” [68] or a “moderate” [66] inter-rater agreement. After the discussion between the authors, a total agreement was achieved, making the Cohen’s Kappa $\kappa = 1$.
All the individual myography methods have respective advantages and disadvantages. EMG-based systems, being non-invasive, are easy to use and also have a high temporal resolution; however, such methods are sensitive to electrode shifts during use, sweating, fatigue, and electromagnetic noise. MMG-based systems have a higher signal-to-noise ratio and are less sensitive to variations in the placement of the sensor on the muscle of interest; however, interference due to ambient acoustic/vibrational noise as well as the lack of established sensors inhibits their mainstream use. US can record the activity of deep muscles non-invasively without crosstalk from adjacent muscles, but US-based methods are usually bulky and expensive. NIRS-based systems offer a high spatial resolution while tracking user motion, but are sensitive to muscle fatigue and optical noise and have a delayed response to the motion of the user, leading to a low temporal resolution. Table 2 lists the papers included in the systematic review. In this table, we also show the different myography methods and external sensors used in each paper to study performance enhancement using data fusion, along with the properties of the different fusion methods. In the following subsections, we present the different fusion methodologies explored in the studies.

3.1. Fusion of EMG and MMG

Tkach and Hargrove [69] employed EMG and MMG data to allow ambulation over various types of terrain. Furthermore, they compared the performance of EMG, MMG, and fused EMG and MMG, and concluded that sensor fusion performs better than the individual methods. In [70], the authors presented a custom sensor to acquire EMG and MMG activations simultaneously. In this study, the MMG data acquisition was implemented using optical sensors, which makes the proposed method less susceptible to motion artefacts than implementations using accelerometers. An advantage of data fusion with MMG is that meaningful data can be acquired even during passive motions, during which EMG is non-observable. This is because passive motion involves involuntary changes in the physiological muscle cross-sectional area resulting in passive extension and flexion of the limb, and such passive motions are performed without any myoelectric activations. In [71], Tsuji et al. proposed a fusion of EMG and MMG to distinguish between patients susceptible to bilateral patellar tendon hyperreflexia and patients with no prior history of neurological disorders. They observed significantly higher root mean square amplitudes and lower mean power frequency values for the rectus femoris, vastus medialis, and vastus lateralis muscles. This observation holds for both EMG and MMG with both maximal and constant force. They concluded that employing both EMG and MMG for objectively quantifying the patellar tendon reflex is simple and desirable for future clinical applications.

3.2. Fusion of EMG and US

In [73], Botter et al. focused on analyzing the effects of interference between different sensors on signal quality during multi-modal data acquisition. The analysis was based on the leg muscles of the participants. It was concluded that the US data with and without EMG electrodes were comparable, but the electrode–skin impedance of the employed EMG-US electrodes was higher than that of conventional EMG electrodes. However, despite the higher impedance values, the electrode–skin noise levels of the EMG-US electrodes were comparable to those of conventional EMG electrodes. In [74], Yang et al. compared EMG and US separately as well as the fusion of EMG and US while decoding eight different hand gestures and performing discrete and continuous estimation of the exerted forces. They concluded that a fusion-based sensor is better than the individual sensing methodologies: US performed better than EMG while decoding discrete gestures and discrete force estimations, whereas EMG-based decoding models performed better during continuous force estimation. These studies suggest that the addition of a US sensor to the EMG electrodes increases the skin impedance values. However, the fusion of US and EMG is desirable, especially in cases where a combination of discrete and continuous intention is decoded (e.g., the classification of different grasp types and decoding the respective grasping forces for picking up objects using a prosthetic hand), as sketched below.
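As a conceptual sketch of such a combined decoder (with synthetic stand-in features and scikit-learn models chosen purely for illustration; this is not the pipeline of [74]), a classifier for the discrete grasp type can be trained alongside a regressor for the continuous grasping force, both operating on fused EMG and US features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)

# Stand-ins for per-window features extracted from synchronized EMG
# recordings and US image sequences (200 windows in this toy example).
emg_features = rng.normal(size=(200, 16))
us_features = rng.normal(size=(200, 32))
grasp_type_labels = rng.integers(0, 4, size=200)     # four grasp types
exerted_force_values = rng.uniform(0, 10, size=200)  # force, arbitrary units

X = np.hstack([emg_features, us_features])  # feature-level fusion

grasp_clf = RandomForestClassifier().fit(X, grasp_type_labels)    # discrete intent
force_reg = RandomForestRegressor().fit(X, exerted_force_values)  # continuous intent

def decode_window(features: np.ndarray):
    """Return (grasp type, estimated force) for one fused feature window."""
    w = features.reshape(1, -1)
    return grasp_clf.predict(w)[0], force_reg.predict(w)[0]

print(decode_window(X[0]))
```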

3.3. Fusion of EMG and NIRS

Guo et al. [38,39,75] employed the fusion of EMG and NIRS for hand gesture classification in both human–computer interaction applications and the control of upper limb prostheses. In their work, they concluded that the fusion of EMG and NIRS leads to a better decoding performance as opposed to developing interfaces independently with each myography method. In [76], Paleari et al. employed the same combination of sensors for the classification of hand motions instead of hand gestures and reached the same conclusion: the fused data perform better.

3.4. Fusion of EMG and Accelerometers

Bio-signals are also fused with other sensing methods, such as accelerometers and IMUs. The fusion of EMG with accelerometers is one of the most explored fusion techniques. A reason for this could be the commercial availability of this combination in the Delsys EMG bioamplifiers [102] and the Myo armband by Thalmic Labs [103]. Several studies have employed it in decoding human arm–hand gestures and motions. Fougner et al. [78] utilized EMG and accelerometer fusion for the classification of limb position in various orientations. They showed that decoding models developed using EMG data from multiple arm orientations performed better than models developed using data from only one arm orientation but tested on gesture executions in all orientations. A further improvement in performance can be achieved by performing a two-stage classification, where, in the first stage, the accelerometer data are employed, followed by an arm orientation-aware EMG-based gesture classification (a sketch is given after this paragraph). In [80,81], Gijsberts et al. utilized the NinaPro database [104] to demonstrate the need for accelerometer data in decoding human hand gestures. They concluded that the highest accuracy is obtained when both modalities are integrated in a multi-modal classifier. Furthermore, they proposed the movement error rate as a new evaluation metric, since a drawback of the window-based accuracy is that it does not distinguish between different types of mistakes made by the classifier (e.g., errors due to misclassifications and prediction delay). In [85], Wang et al. employed the NinaPro database to compare the decoding performance of support vector machines, convolutional neural networks, and recurrent convolutional neural networks. They concluded that recurrent convolutional neural networks along with the fusion of EMG and accelerometer data perform better than the other decoding methods. Wang et al. further confirmed this finding on a custom dataset collected during the study [85]. The fusion of EMG and accelerometers has also been used for the classification of different types of tremors in Parkinson’s disease [79]. Here, the authors concluded that the combination of the two sensing methods leads to an increased tremor classification performance.
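A minimal sketch of this two-stage scheme follows, with synthetic stand-in data and linear discriminant analysis as an illustrative classifier (the exact models and features used in [78] may differ):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n = 300
acc_feats = rng.normal(size=(n, 3))   # stand-in accelerometer features
emg_feats = rng.normal(size=(n, 8))   # stand-in EMG features
orientations = rng.integers(0, 3, n)  # three arm orientations
gestures = rng.integers(0, 5, n)      # five gestures

# Stage 1: classify the arm orientation from accelerometer features.
orientation_clf = LinearDiscriminantAnalysis().fit(acc_feats, orientations)

# Stage 2: one orientation-specific gesture classifier, each trained only
# on the EMG windows recorded in that orientation.
gesture_clfs = {
    o: LinearDiscriminantAnalysis().fit(emg_feats[orientations == o],
                                        gestures[orientations == o])
    for o in np.unique(orientations)
}

def predict_gesture(acc_feat, emg_feat):
    """Orientation first, then orientation-aware EMG gesture decoding."""
    o = orientation_clf.predict(acc_feat.reshape(1, -1))[0]
    return gesture_clfs[o].predict(emg_feat.reshape(1, -1))[0]

print(predict_gesture(acc_feats[0], emg_feats[0]))
```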
This sensor fusion is also employed in improving the performance of lower limb prostheses. In [83], Joshi and Hahn proposed a framework to perform seamless transitions between different terrains. They detected the direction (ascent or descent) and terrain (ramp or stairs) patterns when a person transitions from overground to stair or ramp locomotion. Furthermore, they showed that EMG and accelerometer data sources are complementary across the transitional gait cycle and concluded that sensor fusion leads to a robust classification. In [84], Gupta et al. classified nine activities of daily living related to walking (sit, stand, lie down, walk on level ground at normal and high speed, walk up/down stairs, walk up/down a ramp). They also concluded that the fusion of EMG and accelerometer data leads to an improved performance of the system.

3.5. Fusion of EMG and IMU

Wu et al. presented a framework based on the fusion of EMG and IMU data for decoding Chinese sign language and tested its performance and capabilities over two studies. In the first study [82], they decoded 40 different gestures and later extended this to decoding 80 gestures [87]. In this study, four different classification methods were compared (naive Bayes, nearest neighbor, decision tree, and support vector machines), with nearest neighbor and support vector machines performing better than the others. Furthermore, the performance of each classifier was compared for IMU data alone and for the fusion of EMG and IMU data; it was observed that the performance improves with the fused data. In [88], Yang et al. focused on one-handed and two-handed gesture recognition in Chinese sign language and showed that the fusion of EMG, accelerometer, and gyroscope data performs significantly better than each modality used independently, as well as better than the pairwise combinations (EMG with accelerometer, EMG with gyroscope, and accelerometer with gyroscope).
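Feature-level fusion of this kind typically amounts to concatenating the per-window features of each modality before classification. The following sketch shows the mechanics with synthetic stand-in features (an SVM is used for illustration; with real, informative features, the fused score would be expected to be the highest):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n = 400
gesture_labels = rng.integers(0, 10, n)  # ten sign-language gestures
modalities = {                           # stand-in per-window features
    "emg": rng.normal(size=(n, 8)),
    "acc": rng.normal(size=(n, 3)),
    "gyro": rng.normal(size=(n, 3)),
}

# Single-modality baselines.
for name, feats in modalities.items():
    score = cross_val_score(SVC(), feats, gesture_labels, cv=5).mean()
    print(f"{name} alone: {score:.2f}")

# Fusion: concatenate all three feature streams per window.
fused = np.hstack(list(modalities.values()))
print(f"fusion: {cross_val_score(SVC(), fused, gesture_labels, cv=5).mean():.2f}")
```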
In [89], Fang et al. decoded five gestures for the real-time control of a TurtleBot. Different feature extraction methods were compared, and it was concluded that the use of both time-domain and time-frequency-domain features leads to a better performance. In [91], by contrast, the authors utilized a fusion of EMG and IMU data for the control of a robot arm. In this work, EMG was used to tune the joint stiffness of the robot, while IMUs were employed to teleoperate the robot arm.

3.6. Fusion of EMG and Accelerometer with Optical Sensing

In [92], Yoshikawa et al. studied a fusion of EMG data with an optical distance sensor. The optical sensor measures the distance between the skin and the sensor, and it was hypothesized that the distance changes caused by muscle elevation can compensate for the limited information derived from myoelectric activities (e.g., low myoelectric activations for arm pronation and supination). The results show that the hybrid approach leads to a better gesture classification performance, especially for pronation and supination tasks. In [93], Luan et al. implemented a fusion of optical sensor and accelerometer data for the classification of hand gestures. For the fusion of the two data streams, a dynamic time warping method was employed (a sketch of the standard algorithm is given below). It was noted that the performance of the hybrid system is better than using the accelerometer alone for classification.
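Dynamic time warping aligns two sequences that may vary in speed by finding a monotonic warping path that minimizes the cumulative point-wise cost. A minimal sketch of the textbook algorithm follows (the specific DTW variant and cost function used in [93] are not detailed here):

```python
import numpy as np

def dtw_distance(x: np.ndarray, y: np.ndarray) -> float:
    """Textbook O(len(x) * len(y)) dynamic time warping distance between
    two 1-D sequences, with absolute difference as the local cost."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Extend the cheapest of the three admissible predecessor cells.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# Two signals with the same shape but different speeds align closely.
t = np.linspace(0, 1, 100)
print(dtw_distance(np.sin(2 * np.pi * t), np.sin(2 * np.pi * t**2)))
```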

3.7. Fusion of MMG and IMU

In [94,95], Woodward et al. developed a fusion of MMG and IMU data, with the MMG data acquisition implemented using microphones. In [94], experiments were conducted in three sets. In the first set, MMG was compared with EMG for signal onset, held contraction, and offset of muscle activity. In the second set of experiments, MMG and IMU data were combined to determine gait and muscle contraction changes following a restriction in knee and ankle flexion/extension. Finally, the third set investigated uncontrolled and prolonged data collection to demonstrate the pervasive nature of the technology, while showing that it can be employed to determine active periods. In [95], they observed a significant (p < 0.01) progressive change in the muscular activity of subjects with atypical gait over time. Furthermore, in their experiments on the walking patterns of the user, the inclusion of MMG data with IMUs significantly improved the activity recognition accuracy when running was a more abundant activity.
In [96], an armband fusing MMG and IMU data was developed to recognize different hand gestures for the control of a quadrotor. Five gestures were decoded with an average accuracy of 94.38%, with the lowest gesture decoding accuracy being 85.64%. In [97], different symptoms of Parkinson’s disease were classified using MMG and IMU data. Furthermore, decoding models were developed to distinguish between healthy subjects and Parkinson’s disease patients. It was found that the inclusion of MMG-based features significantly improves the decoding accuracy of rigidity-based PD symptoms. However, while the removal of MMG-based features does not affect the classification accuracy for kinetic tremor and rest tremor, the accuracy decreases for bradykinesia and postural tremor.

3.8. Fusion of EMG, US and MMG

In [98], Chen et al. utilized the fusion of EMG, US, and MMG to study the behavior of the rectus femoris muscle during isometric contractions. They employed local polynomial regression to reveal nonlinear and transient relationships between multimodal muscle features and torque. The authors concluded that the proposed multimodal method can provide complete and novel information on muscle contraction. In [99], Han et al. introduced a new outlier detection method to facilitate the detection of transient patterns of muscle behavior during isometric contraction.

3.9. Fusion of EMG, MMG and NIRS

In [100], Ding et al. presented a method utilizing the fusion of EMG, MMG, and NIRS to study the relationship between the different sensing technologies during incremental force execution. The authors noticed that the signal intensity increases with increasing force for both EMG and MMG; however, for NIRS, the trend is not obvious between 50% and 80%. They attribute this observation to the increased pressure on muscles and blood vessels as a result of the increasing exerted forces, which limits the flow of blood in the blood vessels [105]. In [101], the authors developed a fused sensor for simultaneous data acquisition with EMG, MMG, and NIRS as sensing modalities. They observed that the classification accuracies of the decoding models developed using all three sensing modalities are better than those developed using just EMG or a combination of EMG and NIRS. Furthermore, the models developed using EMG and NIRS outperformed those developed using only EMG.

4. Discussion

With the increasing demand for intuitive interfacing methods for various computer applications, robotic systems, and prosthetic and rehabilitation devices, new ways need to be explored to acquire information from the user to facilitate intention decoding. The present survey aimed at exploring the different sensor fusion methods employed with peripheral bio-signals to develop such intention-decoding frameworks. The included studies evaluated different fusion methods and reached the conclusion that the fusion of two or more sensing methods leads to an increased performance in terms of decoding human intention. However, based on the current survey, no conclusion can be drawn as to which combination is best for this purpose.
Figure 2 shows the distribution of the studies selected for this review over the last decade. It can be noticed that the interest in fusion studies has been continuous over this period of time. From Table 3, it can be noticed that EMG-based sensing is the most common method. This can be attributed to the widely available commercial data acquisition systems in the field, which include both equipment for research and analysis [102,106,107] and hobby kits [108]. The second most explored myography method for fusion studies is MMG-based sensing, followed by NIRS and US, respectively. The popularity of MMG over NIRS and US can be attributed to the hardware needs, as MMG data can be acquired using accelerometers or microphones. Many studies also consider the fusion of myography methods with accelerometers or IMUs for intention decoding.
The majority of the studies have focused on upper limb intention decoding. More precisely, the main topic of interest has been decoding the gestures executed by users with their hands, either to control a robotic device [91,96], to decode sign language [82,87], or for prosthetic applications [80,81]. For this, studies have focused on both the classification of hand gestures and the classification of hand motions [78,109]. A few studies have focused on the diagnosis of symptoms of Parkinson’s disease as well as on distinguishing healthy participants from Parkinson’s disease patients [79,97]. Studies have also focused on the lower limbs. Such studies have generally aimed to classify the walking patterns (e.g., walking on a level surface, climbing up/down stairs, etc.) of the user as well as the terrain the user is walking on, for a better support from the leg prosthesis to facilitate a better walking experience [70,71,84,94,95]. For decoding continuous motions, EMG or MMG with accelerometers or IMUs appears to be more promising, while in tasks that require both the classification and regression of user intention, a fusion of EMG and US can be employed. In cases where the timing of the decoding has a high priority, NIRS may not be the ideal choice due to its low temporal resolution. NIRS-based interfaces also have difficulties in decoding the forces exerted by users while performing various tasks [100]; however, a combination of EMG, MMG, and NIRS may be used in recognizing the onset of muscular fatigue [101].
For user intention decoding, the most common muscle sites were the extensor digitorum muscle group, the flexor digitorum muscle group, and the muscles around the radio-humeral joint. A few studies have also employed the biceps and triceps muscles. However, the studies that acquired the myoelectric activation from the biceps and triceps muscles were interested in the classification of the user’s gestures that were either executed with various arm orientations or required arm motions [78,80,81,97]. In the studies focusing on the lower limbs, the main muscle groups utilized for decoding user intention are the rectus femoris, vastus medialis, and vastus lateralis.
Different sensing methods require different signal conditioning methods. For EMG signals, most studies employed a bandpass filter of 20–500 Hz. The MMG data were generally bandpass filtered between 10 and 100 Hz. The sampling rate for both myography methods was usually 1000 Hz. For NIRS data, most studies employed a lowpass filter of 300 Hz and a data sampling rate of 1000 Hz. US data do not require a specific data filtering step, and they were captured as images with a frame rate of 25 Hz. The accelerometer and IMU data were also used in raw form without the application of any data filtering techniques. Regarding feature extraction methods, time-domain features were extracted in the majority of the EMG- and MMG-based studies, with the root mean square value, mean absolute value, zero crossings, and waveform length being some of the common ones. Time-domain features are usually the most common ones as they are computationally less complex to calculate and thus more suitable for real-time applications [110]. The second most common feature types were frequency-domain features, followed by time-frequency features. The window size for the extraction of features was usually between 250 and 300 ms, while the window slide was between 50 and 100 ms. The window size should be carefully selected: it must not be too large due to real-time constraints, and at the same time, it should be adequately large to avoid high biases and variance [111]. In studies employing US-based signals, basic statistical features (e.g., means and standard deviations) were extracted. Studies have also defined regions of interest in the US image to extract different features [13]. When using NIRS-based sensors, the information recorded from the user consists of oxyhemoglobin and deoxyhemoglobin concentration changes. However, to extract meaningful information from these signals, studies extract a group of features comprising algebraic sums or differences of the raw data. Studies have also calculated frequency-domain features, such as the power spectral distribution and spectrograms of oxyhemoglobin and deoxyhemoglobin [76].
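To make the typical conditioning and feature pipeline concrete, the sketch below bandpass-filters a raw signal and extracts the four common time-domain features over sliding windows; the cutoffs, sampling rate, and window settings are the representative values reported above, not a prescription:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # typical EMG/MMG sampling rate in Hz

def bandpass(signal, low=20.0, high=500.0, fs=FS, order=4):
    """Zero-phase Butterworth bandpass (20-500 Hz for EMG; use 10-100 Hz
    for MMG). The upper cutoff is clipped just below the Nyquist frequency."""
    nyq = fs / 2
    b, a = butter(order, [low / nyq, min(high / nyq, 0.99)], btype="band")
    return filtfilt(b, a, signal)

def time_domain_features(window):
    """RMS, mean absolute value, zero crossings, and waveform length."""
    rms = np.sqrt(np.mean(window ** 2))
    mav = np.mean(np.abs(window))
    zc = np.sum(window[:-1] * window[1:] < 0)  # sign-change count
    wl = np.sum(np.abs(np.diff(window)))       # cumulative waveform length
    return np.array([rms, mav, zc, wl])

def sliding_features(signal, win_ms=250, slide_ms=50, fs=FS):
    """Per-window feature matrix with a 250 ms window and a 50 ms slide."""
    win, step = int(win_ms * fs / 1000), int(slide_ms * fs / 1000)
    starts = range(0, len(signal) - win + 1, step)
    return np.array([time_domain_features(signal[s:s + win]) for s in starts])

emg = np.random.default_rng(3).normal(size=5 * FS)  # 5 s of stand-in raw EMG
features = sliding_features(bandpass(emg))
print(features.shape)  # (96, 4): 96 windows, 4 features each
```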
The properties of the myography methods vary widely. EMG-based methods have a high temporal resolution; however, they are prone to sweat, fatigue, and electromagnetic noise in the environment. MMG-based signals are not affected by sweat, have a higher signal-to-noise ratio, and are less sensitive to variations in placement on the target muscles, but they are affected by ambient acoustic and vibrational noise. US-based sensing provides a non-invasive method for tracking the movement of deep-seated muscles, but current US devices are generally bulky and expensive, prohibiting their mainstream use in everyday life. NIRS-based sensing offers a high spatial resolution while tracking user motion, but it is sensitive to muscle fatigue and optical noise and has a delayed response, leading to a low temporal resolution. The fusion of myography methods brings different advantages to the interfacing framework. The EMG- and MMG-based sensing methods acquire complementary information, as the signals acquired by the two sensing methods are generated at different times in the gesture execution cycle (EMG is generated as the cause of muscle motion, and MMG is the effect of the muscle motion) [72]. Similarly, EMG and US signals are also generated at different times in the gesture execution cycle, with EMG being the cause of the motion and the US-based data recording its effect. The fusion of US information with EMG also allows recording the motion information of deep-seated muscles, which is not possible with EMG alone. Fusion with NIRS gives insights regarding the assessment of neurovascular coupling during motion or grasp execution [77]. Fusion with accelerometers or IMUs provides a kinematic aspect of the user’s intention along with the dynamic aspects obtained by acquiring bio-signals from the user.
The right fusion method for the application at hand may be selected based on the aforementioned advantages and disadvantages. The most common fusion methods are EMG with accelerometers and EMG with IMUs, which can be attributed to commercially available bioamplifiers, such as the Delsys Trigno (Delsys Inc., Natick, MA, USA) and the Myo armband (Thalmic Labs, Kitchener, ON, Canada). Therefore, the authors believe that the utilization of other fusion methods in research may increase with an increasing number of commercially available sensors.

5. Conclusions

Following the PRISMA guidelines, we reviewed and analyzed the advantages and disadvantages of different myography methods and the potential of fusing them. The properties of the myography methods vary widely: electromyography (EMG) data have a high temporal resolution, mechanomyography (MMG) is robust to the skin–sensor interface, ultrasonography (US) allows the tracking of deep muscles in a non-invasive way without any crosstalk from adjacent muscles, while near-infrared spectroscopy (NIRS) offers a high spatial resolution. In contrast, EMG signals are non-stationary, prone to crosstalk with adjacent muscles, and sensitive to sweating and fatigue. MMG signals, like EMG, are prone to crosstalk between different muscle groups, and also to interference due to ambient acoustic/vibrational noise. US-based interfaces are generally bulky and expensive, and the probe requires frequent re-gelling for proper functioning. NIRS is sensitive to muscle fatigue and optical noise and also has a delayed response to the motion of the user, leading to a low temporal resolution.
All the studies included in this systematic review concluded that the fusion of two or more myography methods leads to a better performance in terms of decoding user intention. The study of myography fusion has been of continuous interest over the last decade. It was noticed that one of the most adopted fusion approaches combines EMG-based signals with either accelerometers or inertial measurement units. The main focus has been on decoding the user intention for the upper limb. Furthermore, the majority of studies have focused on discrete intention classification for both the upper and lower limbs, as opposed to continuous intention decoding. From this review, it can also be concluded that, currently, the fusion of myography methods has mainly been explored for decoding discrete user intention (e.g., the classification of hand gestures and motions, walking terrains and patterns), while the continuous decoding of user intention remains relatively unexplored (e.g., the continuous motion of the human arm, force exertions). Therefore, future work should focus on the fusion of different myography methods to improve the user intention decoding during continuous tasks, such as decoding limb motion or decoding continuous user effort. For example, to decode continuous motions, EMG or MMG with accelerometers or IMUs appears to be more promising, while in tasks that require both the classification and regression of user intention, a fusion of EMG and US can be employed. In future works, a fusion of MMG with US may be explored for such tasks as well. NIRS-based interfaces have difficulties in decoding the forces exerted by users while performing various tasks [100]; however, a combination of EMG, MMG, and NIRS may be used in recognizing the onset of muscular fatigue [101]. One of the biggest challenges in comparing different fusion methods is currently the lack of standardized experimental protocols and assessment methods. Future studies may also focus on outlining such experimental protocols to allow the direct comparison of interfaces across different studies. We expect this to lead to a more intuitive and dexterous control of computer applications as well as robotic and prosthetic devices.

Author Contributions

Conceptualization, A.D., H.G. and P.B.; methodology, A.D., H.G. and P.B.; literature survey, A.D. and H.G.; text screening, A.D. and H.G.; critical analysis, A.D., H.G. and P.B.; writing—original draft preparation, A.D.; writing—review and editing, H.G. and P.B.; visualization, A.D.; supervision, P.B.; project administration, P.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
EMG: Electromyography
MMG: Mechanomyography
US: Ultrasonography
NIRS: Near-infrared spectroscopy
IMU: Inertial measurement unit
MuMI: Muscle–machine interface
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

References

  1. Beckerle, P.; Salvietti, G.; Unal, R.; Prattichizzo, D.; Rossi, S.; Castellini, C.; Hirche, S.; Endo, S.; Amor, H.B.; Ciocarlie, M.; et al. A human–robot interaction perspective on assistive and rehabilitation robotics. Front. Neurorobot. 2017, 11, 24. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Dwivedi, A. Analysis, Development, and Evaluation of Muscle Machine Interfaces for the Intuitive Control of Robotic Devices. Ph.D. Thesis, The University of Auckland, Auckland, New Zealand, 2021. [Google Scholar]
  3. Vogel, J.; Castellini, C.; van der Smagt, P. EMG-based teleoperation and manipulation with the DLR LWR-III. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), San Francisco, CA, USA, 25–30 September 2011; pp. 672–678. [Google Scholar]
  4. Dwivedi, A.; Gorjup, G.; Kwon, Y.; Liarokapis, M. Combining electromyography and fiducial marker based tracking for intuitive telemanipulation with a robot arm hand system. In Proceedings of the 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), New Delhi, India, 14–18 October 2019; pp. 1–6. [Google Scholar]
  5. Shieff, D.; Turner, A.; Dwivedi, A.; Gorjup, G.; Liarokapis, M. An Electromyography Based Shared Control Framework for Intuitive Robotic Telemanipulation. In Proceedings of the 2021 20th International Conference on Advanced Robotics (ICAR), Ljubljana, Slovenia, 6–10 December 2021; pp. 806–811. [Google Scholar]
  6. Atzori, M.; Gijsberts, A.; Castellini, C.; Caputo, B.; Hager, A.G.M.; Elsig, S.; Giatsidis, G.; Bassetto, F.; Müller, H. Electromyography data for non-invasive naturally-controlled robotic hand prostheses. Sci. Data 2014, 1, 1–13. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  7. Dwivedi, A.; Shieff, D.; Turner, A.; Gorjup, G.; Kwon, Y.; Liarokapis, M. A Shared Control Framework for Robotic Telemanipulation Combining Electromyography Based Motion Estimation and Compliance Control. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 9467–9473. [Google Scholar]
  8. Kwon, Y.; Dwivedi, A.; McDaid, A.J.; Liarokapis, M. Electromyography-Based Decoding of Dexterous, In-Hand Manipulation of Objects: Comparing Task Execution in Real World and Virtual Reality. IEEE Access 2021, 9, 37297–37310. [Google Scholar] [CrossRef]
  9. Liarokapis, M.V.; Artemiadis, P.K.; Katsiaris, P.T.; Kyriakopoulos, K.J.; Manolakos, E.S. Learning human reach-to-grasp strategies: Towards EMG-based control of robotic arm-hand systems. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, St. Paul, MN, USA, 14–19 May 2012; pp. 2287–2292. [Google Scholar]
  10. Castellini, C.; Van Der Smagt, P. Surface EMG in advanced hand prosthetics. Biol. Cybern. 2009, 100, 35–47. [Google Scholar] [CrossRef] [Green Version]
  11. Hodges, P.; Pengel, L.; Herbert, R.; Gandevia, S. Measurement of muscle contraction with ultrasound imaging. Muscle Nerve 2003, 27, 682–692. [Google Scholar] [CrossRef] [PubMed]
  12. Wang, Z.; Fang, Y.; Zhou, D.; Li, K.; Cointet, C.; Liu, H. Ultrasonography and electromyography based hand motion intention recognition for a trans-radial amputee: A case study. Med. Eng. Phys. 2020, 75, 45–48. [Google Scholar] [CrossRef]
  13. Castellini, C.; Passig, G.; Zarka, E. Using ultrasound images of the forearm to predict finger positions. IEEE Trans. Neural Syst. Rehabil. Eng. 2012, 20, 788–797. [Google Scholar] [CrossRef] [Green Version]
  14. Sierra González, D.; Castellini, C. A realistic implementation of ultrasound imaging as a human–machine interface for upper-limb amputees. Front. Neurorobot. 2013, 7, 17. [Google Scholar] [CrossRef] [Green Version]
  15. Castellini, C.; Passig, G. Ultrasound image features of the wrist are linearly related to finger positions. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011; pp. 2108–2114. [Google Scholar]
  16. Ibitoye, M.O.; Hamzaid, N.A.; Zuniga, J.M.; Wahab, A.K.A. Mechanomyography and muscle function assessment: A review of current state and prospects. Clin. Biomech. 2014, 29, 691–704. [Google Scholar] [CrossRef]
  17. Silva, J.; Heim, W.; Chau, T. A self-contained, mechanomyography-driven externally powered prosthesis. Arch. Phys. Med. Rehabil. 2005, 86, 2066–2070. [Google Scholar] [CrossRef]
  18. Wu, H.; Huang, Q.; Wang, D.; Gao, L. A CNN-SVM combined model for pattern recognition of knee motion using mechanomyography signals. J. Electromyogr. Kinesiol. 2018, 42, 136–142. [Google Scholar] [CrossRef] [PubMed]
  19. Wilson, S.; Vaidyanathan, R. Upper-limb prosthetic control using wearable multichannel mechanomyography. In Proceedings of the 2017 International Conference on Rehabilitation Robotics (ICORR), London, UK, 17–20 July 2017; pp. 1293–1298. [Google Scholar]
  20. Praagman, M.; Veeger, H.; Chadwick, E.; Colier, W.; Van Der Helm, F. Muscle oxygen consumption, determined by NIRS, in relation to external force and EMG. J. Biomech. 2003, 36, 905–912. [Google Scholar] [CrossRef]
  21. Nsugbe, E.; Phillips, C.; Fraser, M.; McIntosh, J. Gesture recognition for transhumeral prosthesis control using EMG and NIR. Iet-Cyber-Syst. Robot. 2020, 2, 122–131. [Google Scholar] [CrossRef]
  22. De Luca, C. Electromyography. In Encyclopedia of Medical Devices and Instrumentation; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2006. [Google Scholar]
  23. Dwivedi, A.; Kwon, Y.; Liarokapis, M. Emg-based decoding of manipulation motions in virtual reality: Towards immersive interfaces. In Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Toronto, ON, Canada, 11–14 October 2020; pp. 3296–3303. [Google Scholar]
  24. Dwivedi, A.; Kwon, Y.; McDaid, A.J.; Liarokapis, M. EMG based decoding of object motion in dexterous, in-hand manipulation tasks. In Proceedings of the 2018 7th IEEE International Conference on Biomedical Robotics and Biomechatronics (Biorob), Enschede, The Netherlands, 26–29 August 2018; pp. 1025–1031. [Google Scholar]
  25. Al-Mulla, M.R.; Sepulveda, F.; Colley, M. A review of non-invasive techniques to detect and predict localised muscle fatigue. Sensors 2011, 11, 3545–3594. [Google Scholar] [CrossRef] [Green Version]
  26. Saponas, T.S.; Tan, D.S.; Morris, D.; Balakrishnan, R.; Turner, J.; Landay, J.A. Enabling always-available input with muscle-computer interfaces. In Proceedings of the 22nd Annual ACM Symposium on User Interface Software and Technology, Victoria, BC, Canada, 4–7 October 2009; pp. 167–176. [Google Scholar]
  27. Kiguchi, K.; Hayashi, Y. An EMG-based control for an upper-limb power-assist exoskeleton robot. IEEE Trans. Syst. Man Cybern. Part (Cybernet.) 2012, 42, 1064–1071. [Google Scholar] [CrossRef]
  28. Artemiadis, P.K.; Kyriakopoulos, K.J. EMG-based control of a robot arm using low-dimensional embeddings. IEEE Trans. Robot. 2010, 26, 393–398. [Google Scholar] [CrossRef]
  29. Perusquía-Hernández, M.; Hirokawa, M.; Suzuki, K. Spontaneous and posed smile recognition based on spatial and temporal patterns of facial EMG. In Proceedings of the 2017 Seventh International Conference on Affective Computing and Intelligent Interaction (ACII), San Antonio, TX, USA, 23–26 October 2017; pp. 537–541. [Google Scholar]
  30. Jiang, S.; Gao, Q.; Liu, H.; Shull, P.B. A novel, co-located EMG-FMG-sensing wearable armband for hand gesture recognition. Sens. Actuators A Phys. 2020, 301, 111738. [Google Scholar] [CrossRef]
  31. Péter, A.; Andersson, E.; Hegyi, A.; Finni, T.; Tarassova, O.; Cronin, N.; Grundström, H.; Arndt, A. Comparing surface and fine-wire electromyography activity of lower leg muscles at different walking speeds. Front. Physiol. 2019, 10, 1283. [Google Scholar] [CrossRef] [Green Version]
  32. Woodward, R.B.; Stokes, M.J.; Shefelbine, S.J.; Vaidyanathan, R. Segmenting mechanomyography measures of muscle activity phases using inertial data. Sci. Rep. 2019, 9, 5569. [Google Scholar] [CrossRef] [Green Version]
  33. Ouamer, M.; Boiteux, M.; Petitjean, M.; Travens, L.; Salès, A. Acoustic myography during voluntary isometric contraction reveals non-propagative lateral vibration. J. Biomech. 1999, 32, 1279–1285. [Google Scholar] [CrossRef]
  34. Xie, H.B.; Zheng, Y.P.; Guo, J.Y. Classification of the mechanomyogram signal using a wavelet packet transform and singular value decomposition for multifunction prosthesis control. Physiol. Meas. 2009, 30, 441. [Google Scholar] [CrossRef] [PubMed]
  35. Yu, H.-L.; Zhao, S.-N.; Hu, J.-H. MMG signal and its applications in prosthesis control. In Proceedings of the 4th International Convention on Rehabilitation Engineering & Assistive Technology, Shanghai, China, 21–23 July 2010; pp. 1–4. [Google Scholar]
  36. Talib, I.; Sundaraj, K.; Lam, C.K.; Hussain, J.; Ali, M. A review on crosstalk in myographic signals. Eur. J. Appl. Physiol. 2019, 119, 9–28. [Google Scholar] [CrossRef] [PubMed]
  37. Ortenzi, V.; Tarantino, S.; Castellini, C.; Cipriani, C. Ultrasound imaging for hand prosthesis control: A comparative study of features and classification methods. In Proceedings of the 2015 IEEE International Conference on Rehabilitation Robotics (ICORR), Singapore, 11–14 August 2015; pp. 1–6. [Google Scholar]
  38. Guo, W.; Sheng, X.; Liu, H.; Zhu, X. Toward an enhanced human–machine interface for upper-limb prosthesis control with combined EMG and NIRS signals. IEEE Trans.-Hum.-Mach. Syst. 2017, 47, 564–575. [Google Scholar] [CrossRef]
  39. Guo, W.; Sheng, X.; Liu, H.; Zhu, X. Development of a multi-channel compact-size wireless hybrid sEMG/NIRS sensor system for prosthetic manipulation. IEEE Sens. J. 2015, 16, 447–456. [Google Scholar] [CrossRef]
  40. Chapman, J.; Dwivedi, A.; Liarokapis, M. A Wearable, Open-Source, Lightweight Forcemyography Armband: On Intuitive, Robust Muscle–Machine Interfaces. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 4138–4143. [Google Scholar]
  41. Shahmohammadi, M.; Dwivedi, A.; Nielsen, P.; Taberner, A.; Liarokapis, M. On Lightmyography: A New Muscle Machine Interfacing Method for Decoding Human Intention and Motion. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Guadalajara, Mexico, 1–5 October 2021; pp. 4744–4748. [Google Scholar]
  42. Saponas, T.S.; Tan, D.S.; Morris, D.; Balakrishnan, R. Demonstrating the feasibility of using forearm electromyography for muscle-computer interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Florence, Italy, 5–10 April 2008; pp. 515–524. [Google Scholar]
  43. Das, A.; Tashev, I.; Mohammed, S. Ultrasound based gesture recognition. In Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA, 5–9 March 2017; pp. 406–410. [Google Scholar]
  44. Artemiadis, P.K.; Kyriakopoulos, K.J. EMG-based teleoperation of a robot arm in planar catching movements using ARMAX model and trajectory monitoring techniques. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation (ICRA 2006), Orlando, FL, USA, 15–19 May 2006; pp. 3244–3249. [Google Scholar]
  45. Godoy, R.V.; Lahr, G.J.G.; Dwivedi, A.; Reis, T.J.S.; Polegato, P.H.; Becker, M.; Caurin, G.A.P.; Liarokapis, M. Electromyography-Based, Robust Hand Motion Classification Employing Temporal Multi-Channel Vision Transformers. IEEE Robot. Autom. Lett. 2022, 7, 10200–10207. [Google Scholar] [CrossRef]
  46. Yang, C.; Chang, S.; Liang, P.; Li, Z.; Su, C.Y. Teleoperated robot writing using EMG signals. In Proceedings of the 2015 IEEE International Conference on Information and Automation, Lijiang, China, 8–10 August 2015; pp. 2264–2269. [Google Scholar]
  47. Han, J.S.; Song, W.K.; Kim, J.S.; Bang, W.C.; Lee, H.; Bien, Z. New EMG pattern recognition based on soft computing techniques and its application to control of a rehabilitation robotic arm. In Proceedings of the 6th International Conference on Soft Computing (IIZUKA2000), Iizuka, Japan, 1–4 October 2000; pp. 890–897. [Google Scholar]
  48. Fang, C.; He, B.; Wang, Y.; Cao, J.; Gao, S. EMG-centered multisensory based technologies for pattern recognition in rehabilitation: State of the art and challenges. Biosensors 2020, 10, 85. [Google Scholar] [CrossRef]
  49. Smith, P.A.; Dombrowski, M.; Buyssens, R.; Barclay, P. The impact of a custom electromyograph (EMG) controller on player enjoyment of games designed to teach the use of prosthetic arms. Comput. Games J. 2018, 7, 131–147. [Google Scholar] [CrossRef]
  50. Kyeong, S.; Kim, W.D.; Feng, J.; Kim, J. Implementation issues of EMG-based motion intention detection for exoskeletal robots. In Proceedings of the 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Nanjing, China, 27–31 August 2018; pp. 915–920. [Google Scholar]
51. Pan, C.T.; Chang, C.C.; Yang, Y.S.; Yen, C.K.; Kao, Y.H.; Shiue, Y.L. Development of MMG sensors using PVDF piezoelectric electrospinning for lower limb rehabilitation exoskeleton. Sens. Actuators A Phys. 2020, 301, 111708.
52. Tarata, M.; Spaepen, A.; Puers, R. The accelerometer MMG measurement approach, in monitoring the muscular fatigue. Meas. Sci. Rev. 2001, 1, 47–50.
53. DelPreto, J.; Rus, D. Sharing the load: Human–robot team lifting using muscle activity. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 7906–7912.
54. DelPreto, J.; Salazar-Gomez, A.F.; Gil, S.; Hasani, R.; Guenther, F.H.; Rus, D. Plug-and-play supervisory control using muscle and brain signals for real-time gesture and error detection. Auton. Robot. 2020, 44, 1303–1322.
55. Huang, D.; Lin, P.; Fei, D.Y.; Chen, X.; Bai, O. Decoding human motor activity from EEG single trials for a discrete two-dimensional cursor control. J. Neural Eng. 2009, 6, 046005.
56. Kilicarslan, A.; Prasad, S.; Grossman, R.G.; Contreras-Vidal, J.L. High accuracy decoding of user intentions using EEG to control a lower-body exoskeleton. In Proceedings of the 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; pp. 5606–5609.
57. Jerbi, K.; Vidal, J.; Mattout, J.; Maby, E.; Lecaignard, F.; Ossandon, T.; Hamamé, C.; Dalal, S.; Bouet, R.; Lachaux, J.P.; et al. Inferring hand movement kinematics from MEG, EEG and intracranial EEG: From brain-machine interfaces to motor rehabilitation. IRBM 2011, 32, 8–18.
58. Cui, C.; Bian, G.B.; Hou, Z.G.; Zhao, J.; Zhou, H. A multimodal framework based on integration of cortical and muscular activities for decoding human intentions about lower limb motions. IEEE Trans. Biomed. Circuits Syst. 2017, 11, 889–899.
59. Salazar-Gomez, A.F.; DelPreto, J.; Gil, S.; Guenther, F.H.; Rus, D. Correcting robot mistakes in real time using EEG signals. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 6570–6577.
60. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Website. Available online: https://prisma-statement.org/ (accessed on 30 July 2022).
61. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Syst. Rev. 2021, 10, 89.
62. Cohen, J. A coefficient of agreement for nominal scales. Educ. Psychol. Meas. 1960, 20, 37–46.
63. Nostadt, N.; Abbink, D.A.; Christ, O.; Beckerle, P. Embodiment, presence, and their intersections: Teleoperation and beyond. ACM Trans. Hum.-Robot Interact. 2020, 9, 1–19.
64. Hsu, L.M.; Field, R. Interrater agreement measures: Comments on Kappan, Cohen's Kappa, Scott's π, and Aickin's α. Underst. Stat. 2003, 2, 205–219.
65. Rau, G.; Shih, Y.S. Evaluation of Cohen's kappa and other measures of inter-rater agreement for genre analysis and other nominal data. J. Engl. Acad. Purp. 2021, 53, 101026.
66. McHugh, M.L. Interrater reliability: The kappa statistic. Biochem. Med. 2012, 22, 276–282.
67. Altman, D.G. Practical Statistics for Medical Research; CRC Press: Boca Raton, FL, USA, 1990.
68. Landis, J.R.; Koch, G.G. The measurement of observer agreement for categorical data. Biometrics 1977, 33, 159–174.
69. Tkach, D.; Hargrove, L.J. Neuromechanical sensor fusion yields highest accuracies in predicting ambulation mode transitions for trans-tibial amputees. In Proceedings of the 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; pp. 3074–3077.
70. Fukuhara, S.; Watanabe, S.; Oka, H. Novel mechanomyogram/electromyogram hybrid transducer measurements reflect muscle strength during dynamic exercise—Pedaling of recumbent bicycle. Adv. Biomed. Eng. 2018, 7, 47–54.
71. Tsuji, H.; Misawa, H.; Takigawa, T.; Tetsunaga, T.; Yamane, K.; Oda, Y.; Ozaki, T. Quantification of patellar tendon reflex using portable mechanomyography and electromyography devices. Sci. Rep. 2021, 11, 2284.
72. Donnarumma, M.; Caramiaux, B.; Tanaka, A. Muscular Interactions Combining EMG and MMG sensing for musical practice. In Proceedings of the International Conference on New Interfaces for Musical Expression, Daejeon, Korea, 27–30 May 2013.
73. Botter, A.; Beltrandi, M.; Cerone, G.; Gazzoni, M.; Vieira, T. Development and testing of acoustically-matched hydrogel-based electrodes for simultaneous EMG-ultrasound detection. Med. Eng. Phys. 2019, 64, 74–79.
74. Yang, X.; Yan, J.; Liu, H. Comparative analysis of wearable A-mode ultrasound and sEMG for muscle-computer interface. IEEE Trans. Biomed. Eng. 2019, 67, 2434–2442.
75. Guo, W.; Yao, P.; Sheng, X.; Zhang, D.; Zhu, X. An enhanced human–computer interface based on simultaneous sEMG and NIRS for prostheses control. In Proceedings of the 2014 IEEE International Conference on Information and Automation (ICIA), Hailar, China, 28–30 July 2014; pp. 204–207.
76. Paleari, M.; Luciani, R.; Ariano, P. Towards NIRS-based hand movement recognition. In Proceedings of the 2017 International Conference on Rehabilitation Robotics (ICORR), London, UK, 17–20 July 2017; pp. 1506–1511.
77. Scano, A.; Zanoletti, M.; Pirovano, I.; Spinelli, L.; Contini, D.; Torricelli, A.; Re, R. NIRS-EMG for clinical applications: A systematic review. Appl. Sci. 2019, 9, 2952.
78. Fougner, A.; Scheme, E.; Chan, A.D.; Englehart, K.; Stavdahl, Ø. Resolving the limb position effect in myoelectric pattern recognition. IEEE Trans. Neural Syst. Rehabil. Eng. 2011, 19, 644–651.
79. Roy, S.H.; Cole, B.T.; Gilmore, L.D.; De Luca, C.J.; Thomas, C.A.; Saint-Hilaire, M.M.; Nawab, S.H. High-resolution tracking of motor disorders in Parkinson's disease during unconstrained activity. Mov. Disord. 2013, 28, 1080–1087.
80. Gijsberts, A.; Caputo, B. Exploiting accelerometers to improve movement classification for prosthetics. In Proceedings of the 2013 IEEE 13th International Conference on Rehabilitation Robotics (ICORR), Seattle, WA, USA, 24–26 June 2013; pp. 1–5.
81. Gijsberts, A.; Atzori, M.; Castellini, C.; Müller, H.; Caputo, B. Movement error rate for evaluation of machine learning methods for sEMG-based hand movement classification. IEEE Trans. Neural Syst. Rehabil. Eng. 2014, 22, 735–744.
82. Wu, J.; Tian, Z.; Sun, L.; Estevez, L.; Jafari, R. Real-time American sign language recognition using wrist-worn motion and surface EMG sensors. In Proceedings of the 2015 IEEE 12th International Conference on Wearable and Implantable Body Sensor Networks (BSN), Cambridge, MA, USA, 9–12 June 2015; pp. 1–6.
83. Joshi, D.; Hahn, M.E. Terrain and direction classification of locomotion transitions using neuromuscular and mechanical input. Ann. Biomed. Eng. 2016, 44, 1275–1284.
84. Gupta, H.; Anil, A.; Gupta, R. On the combined use of Electromyogram and Accelerometer in Lower Limb Motion Recognition. In Proceedings of the 2018 IEEE 8th International Advance Computing Conference (IACC), Greater Noida, India, 14–15 December 2018; pp. 240–245.
85. Wang, W.; Chen, B.; Xia, P.; Hu, J.; Peng, Y. Sensor fusion for myoelectric control based on deep learning with recurrent convolutional neural networks. Artif. Organs 2018, 42, E272–E282.
86. Cannan, J.; Hu, H. A Multi-sensor armband based on muscle and motion measurements. In Proceedings of the 2012 IEEE International Conference on Robotics and Biomimetics (ROBIO), Guangzhou, China, 11–14 December 2012; pp. 1098–1103.
87. Wu, J.; Sun, L.; Jafari, R. A wearable system for recognizing American sign language in real-time using IMU and surface EMG sensors. IEEE J. Biomed. Health Inform. 2016, 20, 1281–1290.
88. Yang, X.; Chen, X.; Cao, X.; Wei, S.; Zhang, X. Chinese sign language recognition based on an optimized tree-structure framework. IEEE J. Biomed. Health Inform. 2016, 21, 994–1004.
89. Fang, J.; Xu, B.; Zhou, X.; Qi, H. Research on gesture recognition based on sEMG and inertial sensor fusion. In Proceedings of the 2018 IEEE 3rd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chongqing, China, 12–14 October 2018; pp. 1562–1567.
90. Yu, Y.; Chen, X.; Cao, S.; Zhang, X.; Chen, X. Exploration of Chinese sign language recognition using wearable sensors based on deep belief net. IEEE J. Biomed. Health Inform. 2019, 24, 1310–1320.
91. Zhou, X.; He, J.; Qi, W.; Hu, Y.; Dai, J.; Xu, Y. Hybrid IMU/muscle signals powered teleoperation control of serial manipulator incorporating passivity adaptation. In Proceedings of the 2020 5th International Conference on Advanced Robotics and Mechatronics (ICARM), Shenzhen, China, 18–21 December 2020; pp. 228–233.
92. Yoshikawa, M.; Taguchi, Y.; Kawashima, N.; Matsumoto, Y.; Ogasawara, T. Hand motion recognition using hybrid sensors consisting of EMG sensors and optical distance sensors. In Proceedings of the 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, 9–13 September 2012; pp. 144–149.
93. Luan, J.; Chien, T.C.; Lee, S.; Chou, P.H. HANDIO: A Wireless Hand Gesture Recognizer Based on Muscle-Tension and Inertial Sensing. In Proceedings of the 2015 IEEE Global Communications Conference (GLOBECOM), San Diego, CA, USA, 6–10 December 2015; pp. 1–7.
94. Woodward, R.; Shefelbine, S.; Vaidyanathan, R. Pervasive motion tracking and muscle activity monitor. In Proceedings of the 2014 IEEE 27th International Symposium on Computer-Based Medical Systems, New York, NY, USA, 27–29 May 2014; pp. 421–426.
95. Woodward, R.B.; Shefelbine, S.J.; Vaidyanathan, R. Pervasive monitoring of motion and muscle activation: Inertial and mechanomyography fusion. IEEE/ASME Trans. Mechatron. 2017, 22, 2022–2033.
96. Ma, Y.; Liu, Y.; Jin, R.; Yuan, X.; Sekha, R.; Wilson, S.; Vaidyanathan, R. Hand gesture recognition with convolutional neural networks for the multimodal UAV control. In Proceedings of the 2017 Workshop on Research, Education and Development of Unmanned Aerial Systems (RED-UAS), Linkoping, Sweden, 3–5 October 2017; pp. 198–203.
97. Huo, W.; Angeles, P.; Tai, Y.F.; Pavese, N.; Wilson, S.; Hu, M.T.; Vaidyanathan, R. A heterogeneous sensing suite for multisymptom quantification of Parkinson's disease. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 1397–1406.
98. Chen, X.; Zhong, S.; Niu, Y.; Chen, S.; Wang, T.; Chan, S.C.; Zhang, Z. A multimodal investigation of in vivo muscle behavior: System design and data analysis. In Proceedings of the 2014 IEEE International Symposium on Circuits and Systems (ISCAS), Melbourne, Australia, 1–5 June 2014; pp. 2053–2056.
99. Han, S.; Chen, X.; Zhong, S.; Zhou, Y.; Zhang, Z. A novel outlier detection method for identifying torque-related transient patterns of in vivo muscle behavior. In Proceedings of the 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 26–30 August 2014; pp. 4216–4219.
100. Ding, X.; Wang, M.; Guo, W.; Sheng, X.; Zhu, X. Hybrid sEMG, NIRS and MMG Sensor System. In Proceedings of the 2018 25th International Conference on Mechatronics and Machine Vision in Practice (M2VIP), Stuttgart, Germany, 20–22 November 2018; pp. 1–6.
101. Sheng, X.; Ding, X.; Guo, W.; Hua, L.; Wang, M.; Zhu, X. Toward an integrated multi-modal sEMG/MMG/NIRS sensing system for human–machine interface robust to muscular fatigue. IEEE Sens. J. 2020, 21, 3702–3712.
102. Delsys—Wearable Systems for Movement Science. Available online: https://delsys.com/ (accessed on 17 April 2022).
103. Thalmic Labs. Available online: https://developerblog.myo.com/author/thalmic-labs/ (accessed on 17 April 2022).
104. Atzori, M.; Müller, H. The Ninapro database: A resource for sEMG naturally controlled robotic hand prosthetics. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 7151–7154.
105. Hamaoka, T.; McCully, K.K.; Quaresima, V.; Yamamoto, K.; Chance, B. Near-infrared spectroscopy/imaging for monitoring muscle oxygenation and oxidative metabolism in healthy and diseased humans. J. Biomed. Opt. 2007, 12, 062105.
106. g.tec Medical Engineering GmbH. Available online: https://www.gtec.at/ (accessed on 17 April 2022).
107. About BioSemi. Available online: https://www.biosemi.com/company.htm (accessed on 17 April 2022).
108. MyoWare 2.0 Muscle Sensor. Available online: https://myoware.com/ (accessed on 17 April 2022).
109. Wenhui, W.; Xiang, C.; Kongqiao, W.; Xu, Z.; Jihai, Y. Dynamic gesture recognition based on multiple sensors fusion technology. In Proceedings of the 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Minneapolis, MN, USA, 3–6 September 2009; pp. 7014–7017.
110. Dwivedi, A.; Kwon, Y.; McDaid, A.J.; Liarokapis, M. A learning scheme for EMG based decoding of dexterous, in-hand manipulation motions. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 2205–2215.
111. Oskoei, M.A.; Hu, H. Myoelectric control systems—A survey. Biomed. Signal Process. Control 2007, 2, 275–294.
Figure 1. Overview of the screening process for selecting the studies for the systematic review. The grey boxes indicate the Cohen's kappa values before and, in brackets, after discussion.
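Figure 1 reports inter-rater agreement during screening as Cohen's kappa [62], which corrects the raw agreement between the two reviewers for the agreement expected by chance. A minimal illustration follows; the screening decisions in it are hypothetical and are not data from this review:

def cohens_kappa(rater_a, rater_b):
    # Cohen's kappa for two raters assigning nominal labels to the same items.
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over labels of the product of the raters'
    # marginal label frequencies.
    p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical include/exclude decisions for ten abstracts (1 = include).
reviewer_1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
reviewer_2 = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print(cohens_kappa(reviewer_1, reviewer_2))  # 0.8, "substantial" agreement [68]

Here the raters agree on 9 of 10 items (p_o = 0.9) while chance agreement is p_e = 0.5, giving kappa = 0.8, which Landis and Koch [68] would class as substantial agreement.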
Figure 2. Distribution of the selected studies based on the year of publication.
Table 1. Databases used for the literature survey and the search term employed.
Databases: Scopus, PubMed, IEEE, Web of Science
Search term: (Skeletal Muscle OR Human Muscle) AND (Muscle Activity OR Electromyography OR Mechanomyography OR Sonomyography OR Myography) AND (Hybrid OR Multimodal OR Sensor Fusion OR Data Fusion) NOT (EKG OR EEG OR Electrical Stimulation)
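For reproducibility, such a query can also be issued programmatically where a database exposes a public interface. The sketch below targets PubMed's E-utilities esearch endpoint; the quoting of multi-word phrases is our assumption, as each database in Table 1 (Scopus, IEEE, Web of Science) parses compound terms with its own syntax:

import urllib.parse
import urllib.request

# Boolean query mirroring Table 1, with multi-word phrases quoted for PubMed.
term = (
    '("Skeletal Muscle" OR "Human Muscle") AND '
    '("Muscle Activity" OR Electromyography OR Mechanomyography '
    'OR Sonomyography OR Myography) AND '
    '(Hybrid OR Multimodal OR "Sensor Fusion" OR "Data Fusion") '
    'NOT (EKG OR EEG OR "Electrical Stimulation")'
)

# NCBI E-utilities esearch: returns an XML list of matching PubMed IDs.
url = (
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
    + urllib.parse.urlencode({"db": "pubmed", "term": term, "retmax": 100})
)
with urllib.request.urlopen(url) as response:
    print(response.read()[:500])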
Table 2. Myography fusion methods explored in the studies included in the systematic review.
Fusion of EMG and MMG (Tkach and Hargrove [69]; Fukuhara et al. [70]; Tsuji et al. [71]): provides complementary information regarding the intention [72].
Fusion of EMG and US (Botter et al. [73]; Yang et al. [74]): acquires information from both superficial and deep-seated muscles.
Fusion of EMG and NIRS (Guo et al. [75]; Guo et al. [39]; Paleari et al. [76]; Guo et al. [38]): assesses the same domain from different perspectives [77].
Fusion of EMG and accelerometers (Fougner et al. [78]; Roy et al. [79]; Gijsberts and Caputo [80]; Gijsberts et al. [81]; Wu et al. [82]; Joshi and Hahn [83]; Gupta et al. [84]; Wang et al. [85]): provides dynamic and kinematic information on the user's intentions.
Fusion of EMG and IMU (Cannan and Hu [86]; Wu et al. [87]; Yang et al. [88]; Fang et al. [89]; Yu et al. [90]; Zhou et al. [91]): provides dynamic and kinematic (six or more degrees of freedom) information on the user's intentions.
Fusion of EMG and accelerometer with optical sensing (Yoshikawa et al. [92]; Luan et al. [93]): provides dynamic and kinematic information that complements the EMG data.
Fusion of MMG and IMU (Woodward et al. [94]; Woodward et al. [95]; Ma et al. [96]; Huo et al. [97]): provides dynamic and kinematic (six or more degrees of freedom) information on the user's intentions; the combination is cheaper than using EMG [32].
Fusion of EMG, US, and MMG (Chen et al. [98]; Han et al. [99]): provides complementary information regarding the intention and also acquires information from both superficial and deep-seated muscles.
Fusion of EMG, MMG, and NIRS (Ding et al. [100]; Sheng et al. [101]): provides complementary information regarding the intention and assesses the same domain from different perspectives [77].
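Many of the fusion schemes in Table 2 combine modalities at the feature level: features are extracted from time windows of each signal, concatenated into a single vector, and passed to one classifier. The sketch below illustrates this for EMG and IMU data; the window length, the two features, the classifier, and all array shapes are illustrative assumptions rather than a setup taken from any of the reviewed studies:

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def window_features(signal, win=200):
    # Per-channel mean absolute value and waveform length over
    # non-overlapping windows of `win` samples.
    n_win = signal.shape[0] // win
    x = signal[: n_win * win].reshape(n_win, win, signal.shape[1])
    mav = np.abs(x).mean(axis=1)                 # mean absolute value
    wl = np.abs(np.diff(x, axis=1)).sum(axis=1)  # waveform length
    return np.hstack([mav, wl])

# Hypothetical synchronized recordings: 8-channel EMG and 6-axis IMU,
# with one gesture label per 200-sample window.
emg = np.random.randn(10_000, 8)
imu = np.random.randn(10_000, 6)
labels = np.random.randint(0, 4, size=10_000 // 200)

# Feature-level fusion: concatenate per-window features from both modalities.
features = np.hstack([window_features(emg), window_features(imu)])
clf = LinearDiscriminantAnalysis().fit(features, labels)
print(clf.score(features, labels))

Decision-level fusion, by contrast, would train one classifier per modality and merge their outputs (e.g., by majority vote or by averaging class probabilities), which several of the listed studies explore as an alternative.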
Table 3. Distribution of the selected studies based on the myography methods (EMG, MMG, US, NIRS) and external sensors (ACC, IMU, optical) included in each study.
Fougner et al. [78]: EMG, ACC
Cannan and Hu [86]: EMG, IMU
Yoshikawa et al. [92]: EMG, optical
Roy et al. [79]: EMG, ACC
Tkach and Hargrove [69]: EMG, MMG
Gijsberts and Caputo [80]: EMG, ACC
Woodward et al. [94]: MMG, IMU
Guo et al. [75]: EMG, NIRS
Chen et al. [98]: EMG, MMG, US
Han et al. [99]: EMG, MMG, US
Gijsberts et al. [81]: EMG, ACC
Luan et al. [93]: MMG, IMU
Wu et al. [82]: EMG, ACC
Guo et al. [39]: EMG, NIRS
Joshi and Hahn [83]: EMG, ACC
Wu et al. [87]: EMG, IMU
Woodward et al. [95]: MMG, IMU
Ma et al. [96]: MMG, IMU
Paleari et al. [76]: EMG, NIRS
Yang et al. [88]: EMG, IMU
Guo et al. [38]: EMG, NIRS
Fukuhara et al. [70]: EMG, MMG
Fang et al. [89]: EMG, IMU
Gupta et al. [84]: EMG, ACC
Wang et al. [85]: EMG, ACC
Botter et al. [73]: EMG, US
Ding et al. [100]: EMG, MMG, NIRS
Huo et al. [97]: MMG, IMU
Yang et al. [74]: EMG, US
Yu et al. [90]: EMG, IMU
Zhou et al. [91]: EMG, IMU
Sheng et al. [101]: EMG, MMG, NIRS
Tsuji et al. [71]: EMG, MMG