Article

BiomacEMG: A Pareto-Optimized System for Assessing and Recognizing Hand Movement to Track Rehabilitation Progress

by Rytis Maskeliūnas 1, Robertas Damaševičius 1,*, Vidas Raudonis 1, Aušra Adomavičienė 2, Juozas Raistenskis 2 and Julius Griškevičius 3
1 Faculty of Informatics, Kaunas University of Technology, LT-44249 Kaunas, Lithuania
2 Center of Rehabilitation, Physical and Sports Medicine, Vilnius University Hospital Santaros Clinics, LT-08661 Vilnius, Lithuania
3 Department of Biomechanical Engineering, VilniusTech, LT-10223 Vilnius, Lithuania
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(9), 5744; https://doi.org/10.3390/app13095744
Submission received: 28 March 2023 / Revised: 3 May 2023 / Accepted: 4 May 2023 / Published: 6 May 2023

Abstract

One of the most difficult components of stroke therapy is regaining hand mobility. This research describes a preliminary approach to robot-assisted hand motion therapy. Our objectives were twofold: first, to use machine learning approaches to determine and describe hand motion patterns in healthy people; and second, to use these patterns to track rehabilitation progress. Surface electrodes were used to collect electromyographic (EMG) data from the forearm’s flexion and extension muscles. The time and frequency characteristics were used as parameters in machine learning algorithms to recognize seven hand gestures and track rehabilitation progress. Eight EMG sensors were used to capture each contraction of the arm muscles during one of the seven actions. Feature selection was performed using the Pareto front. Our system was able to reconstruct the kinematics of hand/finger movement and simulate the behaviour of every motion pattern. The analysis revealed that gesture categories substantially overlap in the feature space. The correlation between the computed joint trajectories based on EMG and the monitored hand movement was 0.96 on average. Moreover, statistical research conducted on various machine learning setups revealed a 92% accuracy in measuring the precision of finger motion patterns.

1. Introduction

Human-computer interaction (HCI) bridges the communication gap between humans and computers [1]. Hand gesture recognition is one type of HCI that predicts the class and moment of execution of a given hand action (gesture) [2]. Surface electromyography (sEMG), which captures the electrical activity of muscles, is one possible input for these models [3,4]. More broadly, electromyography (EMG) is a type of physiological signal recording that detects electrical signals from muscle fibres in response to gestures or movements [5]. The brain creates electrical impulses that are sent to muscle fibres through the spinal cord and nerve fibres to activate movements [6,7]. EMG-based research for medical assistance and smart gadgets, such as stroke assessment [8], analysis of neural impairments [9], heartbeat analysis [10], prosthetics [11], rehabilitation [12,13], physical training assessment [14,15], sports analytics [16], movement recognition and analysis [17], affective computing [18], human-machine interfaces [19,20], text input for the disabled [21], bio-signal fusion [22], and general healthcare [23], has been active. To aid rehabilitation from a distance, interactive forms for telerehabilitation can be used to measure the development of patients’ range of motion (ROM) in real time using artificial intelligence algorithms, by measuring the angles of action of limbs about a joint [24]. Latreche et al. [25] then explored an artificial intelligence approach built on the MediaPipe framework to measure range of motion and validated it against a universal goniometer and a digital angle meter. Anton et al. suggested a WebRTC-based telerehabilitation system framework capable of managing video and audio streams based on network status and available bandwidth to ensure real-time communication performance [26].
The novelty of the study lies in the combination of these elements:
  • The study developed a robot-assisted hand-motion therapy system.
  • The system utilizes machine learning approaches to recognize and describe hand motion patterns in healthy people.
  • The system also tracks rehabilitation progress.
The study used surface electrodes to collect electromyographic (EMG) data from the forearm flexion and extension muscles, which were used as input parameters to the machine learning algorithms to recognize seven hand gestures.
The study’s contributions include demonstrating the technical feasibility of measuring and recognizing finger movement patterns and the potential use of this approach for monitoring rehabilitation progress or other medical processes. The study also analysed nearly 10,500 individual measurements of the specified gestures, which were assigned to a specific movement class and used to develop a classification algorithm. The study findings could potentially lead to the development of more effective and personalized stroke therapy programs that improve hand mobility and function.
The remainder of the paper is organized as follows. Section 2 reviews related approaches. Section 3 discusses the measurement and classification methodology. Section 4 presents and analyses the results. Section 5 provides a discussion. Section 6 outlines the conclusions.

2. Review of Related Approaches

The EMG-based gesture recognition process comprises classifying the unknown observed data into the most likely gesture that corresponds to the target movements [27]. Current research on EMG-based hand gesture categorization encounters challenges, such as poor classification accuracy, low generalization ability, a scarcity of training data, and a lack of robustness [28]. Recent research has focused on developing user-independent interfaces based on gesture recognition to diminish the time of calibration for novice users [29]. However, because EMG signals are erratic, signal analysis performance is generally poor [30,31]. Furthermore, due to the presence of noise and the need for significant processing power, it is very difficult to employ sEMG signals as user control signals [32]. For example, Alfaro et al. [29] offered a user-independent gesture identification system based on sensor fusion technology that combines the data from IMU and EMG sensors. The Myo Armband was employed to evaluate muscle activity and mobility of healthy participants. The data acquired from 22 people were used to classify the movements, and the Adaptive Least-Squares Support Vector Machine (SVM) model was 92.9% accurate. Similarly, Benalcázar et al. [33] provided a real-time approach to recognise five movements of the right hand. These are the exact actions that the Myo armband’s patented technology detects. A shallow artificial feed-forward neural network is used in the proposed system. The correlations among the EMG channels and the result of a bag of five components given to each EMG channel are sent into this network. Fajardo et al. [34] offered a mixed feature approach strategy for EMG classification that classifies signals acquired from a single-channel device using handmade features generated from temporal-spectral analysis and deep features produced from a convolutional neural network (CNN). The approach requires only 100 impulses out of each gesture for training, significantly reducing the time necessary to train the system. The suggested techniques combine hand-crafted characteristics from a time-spectral investigation with deep features to build the feature vector [35]. Subsequently, the EMG characteristic vector is often classified using a multilayer perceptron classifier (MLPC) [36]. In [37], EMG-based hand/finger motion classifiers were developed based on fixed electrode placement. Three channels of EMG signals were monitored, and each channel yielded six time-domain (TD) characteristics. A total of 18 attributes were used to create subject-specific classifiers for 10 gestures using an artificial neural network (ANN) and several machine learning classifiers. ANN had the highest mean accuracy and the lowest intersubject variability in accuracy, showing that subject-specific variance in EMG data had a low influence on it. The research [38] proposed BoostEMD, a unique version of Huang’s Empirical Mode Decomposition (EMD) approach for computing higher-order intrinsic mode functions (IMFs) that identify higher frequency oscillations in EMG signals. The authors demonstrated the efficacy of denoising by classifying the EMG data before and after denoising, as well as analysing the characteristics of the recovered noise signal, similarly to [39]. Sahu et al. 
[40,41] proposed using the beta artificial bee colony (BetaABC) and binary BetaABC (BBABC) to pick important features in the recognition of the EMG pattern to improve the classification performance while reducing the complexity of the classifier, where the EMG signal was decomposed and the characteristics were extracted using the discrete wavelet transform (DWT).
Deep learning methods are widely used for the classification of gestures using EMG signals [42,43]. However, deep learning algorithms are rarely used in the field of EMG-based gesture detection, as they require an unacceptable amount of recording effort from a single individual [44,45]. Although useful features may be acquired from the large volumes of data created by pooling the signals of several users, reducing the recording effort while improving gesture recognition, there are additional issues, such as bias [46]. Gopal et al. [32] compared machine learning and deep learning classifiers in terms of generalization across different classes, and offered a systematic examination of the effect of time-domain variables and pre-processing factors on the performance of the classification model. Ensemble and deep learning algorithms have exceeded other standard machine learning approaches. In [47], EMG data received from the forearm muscles were classified using a Recurrent Neural Network (RNN) to improve the online classification of hand motions. This study contrasts and explains a Feed-Forward Neural Network (FFNN), an RNN, a Long Short-Term Memory network (LSTM), and a Gated Recurrent Unit (GRU). The FFNN, LSTM, RNN, and GRU models exhibited equivalent accuracy, 95% for the DualMyo dataset and 91% for the NinaPro DB5 dataset. Jo et al. [48] suggested a Convolutional Recurrent Neural Network (CRNN) model to recognize hand motions in real time by combining an LSTM for time-series information classification and a CNN for feature extraction. Two grips, three hand signals, and one rest gesture are recognized and categorised. For pre-processing, the Short Time Fourier Transform (STFT), the Continuous-time Wavelet Transform (CWT), and the recently suggested Scale Average Wavelet Transform (SAWT) are utilized. The CRNN with SAWT and an overlapping window outperformed the other approaches. The recognition of lower-limb motions using the tunable Q-factor wavelet transform (TQWT) and the Kraskov entropy (KrEn) was presented in [49] with a similarly efficient result. Toro-Ossaba et al. [50] proposed to build a gesture classifier using an RNN model that incorporates LSTM units and dense layers, with the objective of reducing the number of EMG channels and the total complexity of the model to increase scalability for embedded systems. The model needs only four EMG channels to identify five hand movements, resulting in a considerable decrease in the number of electrodes. The model was trained using a 20 s dataset of gesture EMG signals recorded with a custom EMG armband. During training and validation, the model obtained an accuracy of up to 97%, and an accuracy of 87% during real-time testing.
Li et al. [51] developed an approach to identify hand movements that uses an improved multichannel CNN (IMC-CNN) to extract time- and frequency-domain properties from surface electromyography (sEMG). To recognize the 10 most frequently used hand gestures, the spectral features of the sEMG signals, which serve as input to the IMC-CNN model, are utilised. The spectrogram characteristics of the sEMG signals from several channels are integrated into a complete enhanced spectrogram feature, which is fed into the IMC-CNN to categorize hand motions. The recognition accuracy of the proposed model was 97.5%. In [52], the arm sEMG signal was acquired to infer four types of signals from nine static motions. To reduce redundant data in EMG signals, enhance recognition performance, and allow for real-time recognition, the action recognition system is built using the principal component analysis (PCA) method and the general regression neural network (GRNN). The precise action mode is determined by collecting information from human movements. The total recognition rate of the system reached 95.1% after dimension reduction and model training. Shanmuganathan et al. [53] demonstrated the use of a recurrent CNN (R-CNN) on EMG signals for hand gesture identification. The signal is captured using electrodes on the forearm, and the features are extracted using the wavelet packet transform. Using the wavelet power spectrum, the R-CNN model achieves 96.48% accuracy. The purpose of an alternative study [54] was to see whether movements of the hand and wrist joints affected the recognition of the EMG pattern. In total, 11 hand and wrist movements were evaluated in able-bodied volunteers using 2 different gesture modalities. Movements of the hand and wrist joints had a substantial influence on the recognition of the EMG patterns. The findings provided a fresh perspective on the parameters that influence the recognition of EMG patterns. To evaluate a similar model, Zhang et al. [55] used an FFNN trained on segmented sEMG signals, which had an average identification rate of 98.7% using real sEMG data from 12 subjects. The system can recognize a gesture before it is completed due to the fast recognition time.

3. Materials and Methods

The research evaluated the technical possibilities of measuring and recognizing the movement of the experimenter’s hand, with the potential to monitor the progress of rehabilitation or another medical process. At this stage, in order to check the precision, we chose to measure not only the movement of the whole hand or another large part of the body but also more precise scenarios, that is, movements performed by individual fingers. A total of 7 gestures were performed, namely: (a) raising the index finger, (b) raising the middle finger, (c) raising the ring finger, (d) raising the little finger, (e) thumbs up, (f) palm at rest with all fingers extended, and (g) victory sign displayed.
The 8-channel Mindrove armband (Mindrove, Budapest, Hungary) was used to capture the EMG data. The wearable device was placed on the forearm and used to measure the activity of the following muscle groups in real time:
  • Musculus flexor carpi radialis, which is responsible for flexion and radial deviation of the wrist.
  • Musculus flexor pollicis longus, which is responsible for flexion of the thumb.
  • Musculus flexor digitorum, which is responsible for flexion of the fingers.
  • Musculus flexor carpi ulnaris, which is responsible for flexion and ulnar deviation of the wrist.
  • Musculus extensor pollicis longus et brevis, which is responsible for extension of the thumb.
  • Musculus extensor carpi ulnaris, which is responsible for extension and ulnar deviation of the wrist.
  • Musculus extensor digitorum, which is responsible for extension of the fingers.
  • Musculus extensor carpi radialis, which is responsible for extension and radial deviation of the wrist.
According to accepted scientific practice, all gestures were recorded with EMG sensors pressed on the forearm to measure arm muscle activity (see Figure 1). The EMG time signal was recorded for up to 5 s while each participant performed a required gesture. The 5 s segment of the signal was analysed using the statistical estimates discussed in the following section. An experimental study was conducted to determine which statistical estimates of the temporal signal can be used as motion-discriminating features suitable for developing a classification algorithm. A proprietary software approach was used to process the captured data based on the Pareto-optimized principles described below.
The study analysed nearly 10,500 individual measurements of the specified gestures (approximately 1500 for each class). All measurements are assigned to a specific movement class, numbered from M1 to M7, as explained above. The number and distribution of classes are shown in the figure below, where the measurement number values are plotted on the horizontal axis and the motion class numbers are plotted on the vertical axis. The sample of data used is assumed to be balanced.
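To make the data organization concrete, the sketch below shows one way the labelled trials could be assembled in Python. The sampling rate, array layout, and function names are illustrative assumptions rather than details reported for the armband or the study.

```python
import numpy as np

# Assumed layout: one 5 s recording per trial, 8 EMG channels.
FS = 500                                   # assumed sampling rate, Hz (not reported here)
N_CHANNELS = 8
CLASSES = [f"M{i}" for i in range(1, 8)]   # movement classes M1..M7

def load_trials(recordings, labels):
    """recordings: list of (n_samples, 8) arrays; labels: list of 'M1'..'M7'."""
    X, y = [], []
    for rec, lab in zip(recordings, labels):
        rec = np.asarray(rec, dtype=float)[: 5 * FS]   # keep at most 5 s of signal
        X.append(rec)
        y.append(CLASSES.index(lab))                   # encode M1..M7 as 0..6
    return X, np.array(y)

# A balanced set of ~10,500 trials would then give roughly 1500 per class:
# X, y = load_trials(recordings, labels)
# print(np.bincount(y))
```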

3.1. Features

Gesture and related muscle activity were recorded with 8 EMG sensors. Statistical estimates of the signal were calculated from the recorded time-varying signal, and features describing that signal were extracted:
  • Standard deviation;
  • Minimum;
  • Maximum;
  • Crossing the zero axis;
  • Average change in amplitude;
  • The first amplitude jump;
  • Mean absolute value;
  • Wave length;
  • Wilson amplitude.

3.1.1. Standard Deviation of the EMG Signal

Standard deviation is a measure of absolute variability that shows how individual observations are spread around the mean. It is measured in the same units as the mean. If the data follow the normal distribution law, about 95% of the observations fall within about 2 standard deviations of the mean. The other 5% are equally distributed outside these limits. As the sample size increases, the standard deviation does not tend to change as much. The standard deviation is calculated based on Equation (1).
\sigma = \sqrt{\frac{\sum_{i=1}^{N} (x_i - \mu)^2}{N}}  (1)
where σ is the standard deviation of the EMG signal, μ is the average of the EMG signal, x_i is the value of the EMG signal at the i-th time instant, and N is the number of measurements in the EMG signal. The standard deviation was calculated for the EMG signals recorded by all sensors.

3.1.2. EMG Signal Minimum Values

In search of the feature that best discriminates the gestures, the minimum value operator (min) was included; it is obtained by searching for the minimum value in the EMG signal according to Equation (2):
X_{min} = \min_{1 \leq i \leq N} \{ x_i \}  (2)

3.1.3. EMG Signal Maximum Values

In search of the feature that best discriminates the gestures, the maximum value operator (max) was also included; it is obtained by searching for the maximum value in the EMG signal according to Equation (3):
X_{max} = \max_{1 \leq i \leq N} \{ x_i \}  (3)

3.1.4. EMG Signal Zero Crossing

Zero crossings [7,56,57] are a convenient and fast way of estimating the frequency of a sampled sequence of data. A zero crossing is the point where the sign of a function changes to the opposite sign, as shown by the axis crossing (zero value) on the graph of the function. It is a signal evaluation indicator often employed in electronics, mathematics, acoustics, and image processing.
Zero crossing is calculated using Equations (4) and (5)
s(x, y) = \begin{cases} 1 & \text{if } x \cdot y < 0 \\ 0 & \text{if } x \cdot y > 0 \end{cases}  (4)
ZC(V) = \sum_{i=1}^{n-1} s(V_i, V_{i+1})  (5)
where ZC is the zero-crossing count, and V_i and V_{i+1} are consecutive EMG signal values.

3.1.5. The Mean Amplitude Change of the EMG Signal

The average amplitude is the average magnitude of all instantaneous values in the EMG time signal, i.e., the ratio between the sum of all values and the number of values in the EMG signal. For purely alternating periodic signals, such as sine and cosine waves, the average amplitude over a full period is zero, because the average amplitude of the first half of the signal cancels the opposite average amplitude of the second half. The average change in amplitude is calculated according to Equation (6).
x_{Avg} = \frac{1}{N} \sum_{i=1}^{N} x_i  (6)

3.1.6. Average Absolute Value of the EMG Signal

The mean absolute error, also known as the mean absolute deviation, is a measure of the accuracy of a forecasting method, such as trend estimation, and is also used as a loss function. Usually, the accuracy is expressed by the ratio defined by Equation (7). The mean absolute value (MAV) is a method to determine and evaluate the level of muscle contraction.
MAV = \frac{1}{N} \sum_{i=1}^{N} |x_i|  (7)

3.1.7. EMG Signal Wavelength

The waveform length (WL) is intuitively the total length of the waveform in a segment. The resulting waveform length values provide a measure of the amplitude, frequency, and duration of the waveform. The waveform length is calculated according to Equation (8).
WL = \sum_{i=1}^{N-1} |x_{i+1} - x_i|  (8)

3.1.8. Wilson Amplitude of the EMG Signal

The Wilson amplitude (WAMP) is the number of times the difference between the amplitudes of two adjacent samples of the sEMG signal exceeds a predetermined threshold, which is used to reduce the effects of noise. The Wilson amplitude is related to the level of motor unit action potential (MUAP) and muscle contraction. A suitable value for the threshold parameter is usually chosen between 10 and 100 mV, depending on the gain setting of the instrument. The Wilson amplitude is calculated using Equation (9).
WAMP = \sum_{n=1}^{N-1} f(|x_n - x_{n+1}|)  (9)
f(x) = \begin{cases} 1 & \text{if } x \geq \epsilon \\ 0 & \text{otherwise} \end{cases}
where \epsilon is the threshold.

3.1.9. Summary of EMG Signal Estimates

Each contraction of the arm muscles during one of the seven movements was recorded by 8 EMG sensors. Estimates of temporal signals were calculated from the recorded signals.
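A minimal sketch of how these per-channel estimates (Equations (1)-(9)) could be computed with NumPy is given below. The WAMP threshold, the treatment of the first amplitude jump, and the function names are assumptions made for illustration, and only the nine features listed above are implemented.

```python
import numpy as np

def channel_features(x, wamp_threshold=0.05):
    """Statistical estimates (Equations (1)-(9)) for one EMG channel x.
    The WAMP threshold is an assumed value; the text suggests tuning it
    to the instrument gain."""
    x = np.asarray(x, dtype=float)
    diff = np.diff(x)
    return np.array([
        np.sqrt(np.mean((x - x.mean()) ** 2)),                # (1) standard deviation
        x.min(),                                              # (2) minimum
        x.max(),                                              # (3) maximum
        np.sum(x[:-1] * x[1:] < 0),                           # (4)-(5) zero crossings
        x.mean(),                                             # (6) average amplitude
        np.abs(diff[diff != 0][0]) if np.any(diff) else 0.0,  # first amplitude jump (definition assumed)
        np.mean(np.abs(x)),                                   # (7) mean absolute value
        np.sum(np.abs(diff)),                                 # (8) waveform length
        np.sum(np.abs(diff) >= wamp_threshold),               # (9) Wilson amplitude
    ])

def gesture_features(window):
    """window: (n_samples, 8) array -> concatenated per-channel feature vector."""
    return np.concatenate([channel_features(window[:, ch]) for ch in range(window.shape[1])])
```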

3.2. Reducing the Classification Space by Applying a Linear Data Transformation (PCA)

PCA is one of the classical statistical methods. The main idea of this method is to perform a linear transformation and reduce the number of dimensions of the data by discarding the components found after the transformation that have the smallest variance. The PCA method is often referred to as the discrete Karhunen–Loève (K–L) transformation because it reduces the number of dimensions that describe the features of the sample. In particular, principal component analysis first looks for the principal component P1 with the highest variance. This component crosses the central point of the data and is closest to all data points, because the average distance from the component to the points is minimal. The second principal component P2 also crosses the central point of the data and is perpendicular to the first principal component. Each principal component is described by an eigenvector and an eigenvalue. The eigenvector indicates the direction of the principal component, and the eigenvalue describes how the data are distributed in that direction.
The PCA algorithm is performed in four steps:
  • Standardize the data so that the features have the same mean and variability.
  • Find the covariance matrix C.
  • Calculate the eigenvectors E_k and eigenvalues λ_k of the covariance matrix C.
  • Sort the eigenvalues λ_k in descending order and form the matrix A of the principal components from the corresponding eigenvectors E_k.
Each gesture is measured by 8 EMG sensors, and the temporal signal is described by 10 statistical estimators. In total, an 80-dimensional vector is used to describe one movement. To reduce the number of dimensions and simplify the structure of the gesture classification algorithm, the PCA method was applied to the EMG data.
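A possible implementation of this dimensionality-reduction step with scikit-learn is sketched below; the standardization step, the number of retained components, and the function name are illustrative choices rather than settings reported by the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def reduce_dimensions(X, n_components=2):
    """X: (n_measurements, 80) feature matrix. Standardize the features,
    then project them onto the leading principal components, following
    the four-step procedure above."""
    X_std = StandardScaler().fit_transform(X)     # same mean/variance per feature
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(X_std)             # projections onto P1, P2, ...
    return scores, pca.explained_variance_ratio_

# Example call with random data, just to show the interface:
# scores, var_ratio = reduce_dimensions(np.random.randn(200, 80))
```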

3.3. Random Forest Classifiers for Hand Motion Identification

A random forest classifier combines many decision trees. The random forest classifier, an established machine learning algorithm also used in robotics, usually produces high accuracy and good prediction results. Random forests, or random decision forests, are a machine learning technique for solving classification, regression, and other problems. A random forest classifier is constructed from a large number of decision trees whose classification properties are determined during training. In this research, an ensemble of decision trees was created. In each tree, depending on the training data obtained from the feature vectors, a decision is made on which class to assign the feature vector to. The votes of all trees for the classes are counted, and the class that receives the most votes is the output of the classifier.
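The sketch below shows how such a voting ensemble could be trained and evaluated with scikit-learn; the number of trees, the train/test split, and the function name are assumed values chosen for illustration, not the exact configuration used in the study.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

def train_gesture_classifier(X, y, n_trees=100):
    """X: feature (or PCA-score) matrix, y: gesture labels 0..6.
    Returns the fitted forest, held-out accuracy, and confusion matrix."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    clf = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    clf.fit(X_tr, y_tr)
    y_pred = clf.predict(X_te)        # majority vote over all trees
    return clf, accuracy_score(y_te, y_pred), confusion_matrix(y_te, y_pred)
```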

3.4. Pareto Optimization

Pareto optimization is a multi-objective optimization technique used to find a set of optimal solutions that best balance multiple conflicting objectives. In the context of EMG feature selection, we use Pareto optimization to identify the optimal subset of EMG features that best represent the underlying signal while minimizing the number of features required for classification. To do this, a Pareto front is first constructed by generating multiple solutions that represent different trade-offs between performance and feature number. Each solution is evaluated on the basis of its classification accuracy and the number of features it employs. The set of all solutions that cannot be improved in one objective without sacrificing performance in another objective is called the Pareto front.
The Pareto front combines non-dominated multi-objective optimization solutions. Given a set of solutions S = {s_1, s_2, …, s_n} with m objectives f_1, f_2, …, f_m, the Pareto front P is defined as follows. Let F(x) = (f_1(x), f_2(x), …, f_m(x)) be the m-dimensional objective function vector and X be the set of decision variable vectors. The Pareto front P is defined as P = {x ∈ X | there is no other x′ ∈ X such that f_j(x′) ≤ f_j(x) for all j ∈ {1, 2, …, m}, with at least one strict inequality}. In other words, a solution s_i is on the Pareto front if there does not exist any other solution s_j that is better than s_i in all objectives. The Pareto front contains all non-dominated solutions, which means that no solution on the front is dominated by another solution. Constructing the Pareto front involves finding all non-dominated solutions in the solution set S. Once the Pareto front is constructed, a solution is selected that best fits the user's needs based on the preferred trade-off between accuracy and the number of features used. This approach ensures that the selected solution is not only accurate but also parsimonious and less prone to overfitting. We present the EMG feature selection procedure using Pareto optimization as Algorithm 1.
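Before turning to Algorithm 1, the non-domination test defined above can be written compactly. The following sketch filters a list of objective vectors (assumed to be minimized) down to the Pareto front; it is an illustrative implementation, not the authors' code.

```python
def pareto_front(solutions):
    """solutions: list of objective tuples to be minimized,
    e.g. (classification error, number of features).
    Returns the non-dominated subset (the Pareto front)."""
    front = []
    for i, a in enumerate(solutions):
        dominated = any(
            all(b[j] <= a[j] for j in range(len(a)))
            and any(b[j] < a[j] for j in range(len(a)))
            for k, b in enumerate(solutions) if k != i
        )
        if not dominated:
            front.append(a)
    return front

# Example: (error, n_features) pairs; (0.08, 5) dominates (0.10, 6).
# print(pareto_front([(0.08, 5), (0.10, 6), (0.07, 9), (0.12, 4)]))
```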
Algorithm 1: EMG feature selection using Pareto optimization
  • Input: EMG dataset D, maximum number of features m
  • Output: Set of Pareto-optimal feature subsets P
  • F ← set of all possible feature subsets of size m
  • P ← ∅
  • S ← random subset of features of size m
  • while not stopping criterion met do
  •      P ← P ∪ {non-dominated solutions in S}
  •      S ← next subset in F
  • end while
  • Return P
The above algorithm takes as input an EMG dataset D and the maximum number of features to consider m. It first generates a set F of all possible feature subsets of size m . It then initializes an empty set P to store the Pareto-optimal feature subsets. The algorithm then randomly selects a subset S of m features and begins an iterative process to generate the Pareto-optimal feature subsets. At each iteration, it adds the non-dominated solutions in S to the set P. It then updates S to the next subset in F, and the process continues until a stopping criterion is met. Finally, the algorithm returns the set of Pareto-optimal subsets of features P.
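A possible rendering of Algorithm 1 in Python is sketched below, using cross-validated random forest accuracy as the performance objective and the subset size as the parsimony objective. The exhaustive enumeration, the cross-validation setup, and the stopping criterion are illustrative assumptions rather than the authors' implementation.

```python
from itertools import combinations
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def pareto_feature_selection(X, y, max_features=3):
    """Enumerate feature subsets up to max_features, score each by
    cross-validated accuracy, and keep the non-dominated
    (accuracy, subset size) trade-offs. Exhaustive enumeration is only
    feasible for small max_features."""
    candidates = []
    for k in range(1, max_features + 1):
        for subset in combinations(range(X.shape[1]), k):
            acc = cross_val_score(RandomForestClassifier(random_state=0),
                                  X[:, list(subset)], y, cv=3).mean()
            candidates.append((subset, acc, k))
    # Keep subsets not dominated in (higher accuracy, fewer features).
    pareto = [c for c in candidates
              if not any(o[1] >= c[1] and o[2] <= c[2] and (o[1] > c[1] or o[2] < c[2])
                         for o in candidates)]
    return pareto
```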

4. Results

Each contraction of the arm muscles during one of the seven movements was recorded by 8 EMG sensors. Estimates of temporal signals were calculated from the recorded signals. The average values of the statistical indicators of the eight sensors are presented in Figure 2. The minimum standard deviation of 0.007 was obtained when measuring the 6th gesture. The highest standard deviation of 0.253 was recorded for the 7th gesture. The lowest minimum value of −0.77 was obtained when measuring the 2nd gesture. The highest minimum value of −0.027 was recorded during the 6th gesture. The lowest maximum value of 0.007 was obtained when measuring the 6th gesture. The highest maximum value of 0.77 was recorded during the 2nd gesture. The minimum zero-crossing value of 5.87 was obtained when measuring the 6th gesture. The highest zero-crossing value of 90.87 was recorded during the 7th gesture. For the average sensor, the minimum amplitude change value of 0.02 was obtained when measuring the 6th gesture. The maximum amplitude change value of 0.735 was recorded during the 7th gesture. The minimum value of the first amplitude jump of 0.018 was obtained when measuring the 7th gesture. The highest value of the first amplitude jump of 0.447 was recorded during the 3rd gesture. The minimum absolute value of 0.005 was obtained when measuring the 6th gesture. The highest absolute value of 0.189 was recorded for the 7th gesture. The minimum wavelength value of 1.19 was obtained during the measurement of the 6th gesture. The highest wavelength value of 46.26 was recorded during the 7th gesture. The minimum Wilson amplitude value of 0.0 was obtained when measuring the 1st, 2nd, 5th, and 6th gestures. The highest Wilson amplitude value of 10 was recorded during the 7th gesture. The minimum, mean, and maximum values of the gesture features are summarized in Table 1, Table 2 and Table 3, respectively.
PCA showed that gesture classes overlap strongly in the feature space. There are no clear dividing surfaces between classes to identify a separate group of gestures. In order to solve the motion classification of gestures, it is necessary to apply non-linear classification methods, which can be used to evaluate the non-linear distribution of the data. For this purpose, the random forest classifier is used in the project research.
Figure 3 shows the projections of the EMG signals onto the two principal components P1 and P2. Different classes of gestures are represented by different colours. Each individual movement measurement is represented by a separate point (circle) in the two-dimensional space of P1 and P2, the colour of which depends on the gesture class (type). From the cloud of points, it can be seen that the different classes of gestures overlap strongly, and no clear dividing line can be drawn between them.
Figure 4 shows the classification result using the random forest classifier. The figure shows the confusion matrix, which contains the held-out true values and the predicted values of the classifier. The classification result is evaluated as a number from 0 to 1; multiplying the given number by 100% expresses the result as a percentage. The confusion matrix shows how well the classifier was able to accurately recognize a certain class of motions and with which other class of motions the real one is confused.
The confusion matrix (Figure 4) shows that the highest classification accuracy was obtained for the identification of the 6th gesture. In this case, the accuracy was 99%. The worst accuracy was obtained for the classification of the 2nd gesture. In this case, a classification accuracy of 73% was achieved. The 7th gesture was classified with 95% accuracy. Meanwhile, gestures 4 and 5 were classified with 75% accuracy.

5. Discussion

Looking at the results, it is clear that, using traditional rehabilitation approaches as a foundation, digitized rehabilitation treatment has the greatest potential to enhance recovery results [29]. To be effective, computer-assisted treatments require patients to interact with the digital device in a way that feels natural to them, while also receiving support from the computer system depending on how well they perform throughout the rehabilitation session. Gesture recognition has been explored as a feasible approach for human-machine interface applications to overcome this issue.
Gesture recognition based on EMG is a method that allows dynamic evaluation of muscle work in real time. EMG is focused on electrical signals from muscles that are controlled by the central nervous system and are generated during muscle contraction; the signal reflects the physiological properties of the muscles, i.e., the EMG signal is the motor electrical activity of the muscles [58]. Nowadays, in rehabilitation, it is preferable to use surface EMG signals, which are recorded with non-invasive electrodes placed on the skin surface, to obtain information about the activation time or intensity of surface muscles. It is the simplest way to measure muscle activation patterns and understand the behaviour of the human body under normal and pathological conditions [59].
Scientists also identified the following advantages of surface EMG during rehabilitation: the electrodes are easier to apply; the patient feels minimal discomfort, pain, skin stretching, or allergy when using them; the signals are better reproduced; and they are very suitable for movement recognition and analysis during exercising or training. However, the surface electrodes, which in most cases cover a larger area, pick up more information from the adjacent muscles, which can affect the accuracy of the measurement results. This is very important in physical training assessment, recovery effectiveness, and movement recognition and analysis in pathological conditions, especially after stroke, brain trauma, or spinal cord injuries [60].
Recently, EMG-based hand gesture recognition systems have faced challenges such as poor classification accuracy, low generalisation ability, limited training data, lack of robustness, and the long time needed to calibrate the system for new users [61].
The development of an existing robotic exoskeleton for upper limb and hand rehabilitation is already in progress. Currently, it is able to perform passive exercises or training during rehabilitation after a stroke or trauma. One of the gaps in the project is the absence of a convenient algorithm to enable exercises and training with active-assisted rehabilitation. The use of robotic assistive devices is also increasing in rehabilitation systems: they allow clinicians to “see” patient muscle performance directly, helping in decision-making before and after training with respect to the difficulty and intensity of the exercise, determining muscle response in ergonomic tests and physical load tolerance, and monitoring and analysing the effectiveness of exercise/training programs and muscle recovery. For patients, they help to “find” and train muscles and enable analysis and improvement of sports activities and training progress [62]. To monitor accurate muscle performance in real time under different pathologies or dysfunctions, achieving an accurate and easy method with low latency for gesture recognition remains a challenge in real life, especially for assistive computerized systems and rehabilitation.
In our research, gesture and related muscle activity were recorded with eight EMG sensors. Each contraction of the arm muscles during one of the seven movements was recorded by eight EMG sensors, and the EMG time signal was recorded for up to 5 s. The results show that the gesture classes overlap strongly in the feature space. There are no clear dividing surfaces between classes to identify a separate group of gestures. In order to solve the motion classification of gestures, it is necessary to apply Pareto optimised non-linear classification methods, which can be used to evaluate the non-linear distribution of the data. A comparison with other approaches is offered in Table 4.

6. Conclusions

This study presents an approach to robot-assisted hand motion therapy for stroke patients. The study used machine learning techniques to determine and describe hand motion patterns in healthy individuals. Electromyographic data were collected from the flexion and extension muscles of the forearm using surface electrodes. Feature selection was performed using the Pareto optimization algorithm. The time and frequency characteristics were used as parameters in machine learning algorithms to recognize seven hand gestures and track rehabilitation progress. Research showed that the system was capable of reconstructing hand/finger movement kinematics and simulating the behaviour of every motion pattern. The correlation between the computed joint trajectories and the monitored hand movement was 0.96 on average, and statistical research carried out in various machine learning setups revealed a 92% accuracy in measuring the precision of motion patterns. The experimental study was conducted to determine which statistical estimates of the temporal signal can be used as motion discriminating features and suitable for developing a classification algorithm. The study analyzed nearly 10,500 individual measurements of the specified gestures. The results showed that the highest classification accuracy was obtained for the identification of the 6th gesture, while the worst accuracy result was obtained for the classification of the second gesture. Gestures 4 and 5 were classified with 75% accuracy. In summary, this study presents a promising approach to robot-assisted hand-motion therapy for stroke patients, which could potentially monitor the progress of rehabilitation or other medical processes. More studies are required to confirm the efficacy of this approach and refine the system for practical use.
The study’s results showed high accuracy in measuring hand motion patterns, but the efficacy of this approach must be confirmed in clinical settings with larger patient populations to establish its reliability and effectiveness. Future research should investigate developing personalized therapy plans and adapting the algorithm to react to a larger variety of potential actions based on individual patient needs rather than a generalized approach; such an approach could potentially learn to recognize patterns automatically without the need for manual feature engineering, potentially improving classification accuracy. While the study evaluated the accuracy of the system in measuring hand motion patterns, the long-term effects of robot-assisted therapy on stroke patients’ functional recovery and quality of life require further investigation, including the integration of this approach with other therapies, such as occupational and physical therapy.

Author Contributions

Conceptualization, R.M.; Data curation, R.M.; Formal analysis, R.M., R.D., V.R., A.A., J.R. and J.G.; Investigation, R.M., R.D., V.R., A.A., J.R. and J.G.; Methodology, R.M. and R.D.; Supervision, R.M. and J.G.; Validation, R.M., R.D., A.A. and J.R.; Visualization, R.D. and V.R.; Writing—original draft, R.M. and R.D.; Writing—review and editing, R.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by EU Structural Funds project financed by the European Regional Development Fund under the 2014–2020 Operational Programme for the Investment of European Union Funds (2014–2020) Measure No 1.2.2-CPVA-K-703 “Promotion of Centres of Excellence and Centres for Innovation and Technology Transfer”, project number 01.2.2-CPVA-K-703-03-0022.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of the VilniusTech faculty committee (64-2221).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data is available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
EMG: Electromyography
HCI: Human-Computer Interaction
sEMG: Surface Electromyography
IMU: Inertial Measurement Unit
CNN: Convolutional Neural Network
MLPC: Multi-Layer Perceptron Classifier
ANN: Artificial Neural Network
SVM: Support Vector Machine
RF: Random Forest
LR: Logistic Regression
EMD: Empirical Mode Decomposition
ABC: Artificial Bee Colony
BetaABC: Beta Artificial Bee Colony
BBABC: Binary Beta Artificial Bee Colony
DWT: Discrete Wavelet Transform
RNN: Recurrent Neural Network
FFNN: Feed-Forward Neural Network
LSTM: Long Short-Term Memory network
GRU: Gated Recurrent Unit
DualMyo: UC2018 DualMyo Hand Gesture Dataset
NinaPro: Ninapro dataset 5 (double Myo armband)
CRNN: Convolutional Recurrent Neural Network
STFT: Short Time Fourier Transform
CWT: Continuous-time Wavelet Transform
SAWT: Scale Average Wavelet Transform
IMC-CNN: Improved Multi-Channel Convolutional Neural Network
PCA: Principal Component Analysis
GRNN: General Regression Neural Network
ZC: Zero Crossing
MAV: Mean Absolute Value
WAMP: Wilson Amplitude
MUAP: Motor Unit Action Potential
VR: Virtual Reality
AR: Augmented Reality

References

  1. Guberman, J.; Haimson, O. Not robots; Cyborgs—Furthering anti-ableist research in human-computer interaction. First Monday 2023, 28. [Google Scholar] [CrossRef]
  2. Yasen, M.; Jusoh, S. A systematic review on hand gesture recognition techniques, challenges and applications. PeerJ Comput. Sci. 2019, 5, e218. [Google Scholar] [CrossRef] [PubMed]
  3. Sultana, A.; Ahmed, F.; Alam, M.S. A systematic review on surface electromyography-based classification system for identifying hand and finger movements. Healthc. Anal. 2023, 3, 100126. [Google Scholar] [CrossRef]
  4. Ibrahim, A.F.T.; Gannapathy, V.R.; Chong, L.W.; Isa, I.S.M. Analysis of Electromyography (EMG) Signal for Human Arm Muscle: A Review. In Lecture Notes in Electrical Engineering; Springer: Berlin/Heidelberg, Germany, 2015; pp. 567–575. [Google Scholar] [CrossRef]
  5. Jaramillo-Yánez, A.; Benalcázar, M.E.; Mena-Maldonado, E. Real-time hand gesture recognition using surface electromyography and machine learning: A systematic literature review. Sensors 2020, 20, 2467. [Google Scholar] [CrossRef] [PubMed]
  6. Akinrodoye, M.A.; Lui, F. Neuroanatomy, Somatic Nervous System; StatPearls Publishing: Treasure Island, FL, USA, 2022. [Google Scholar]
  7. Reaz, M.B.I.; Hussain, M.S.; Mohd-Yasin, F. Techniques of EMG signal analysis: Detection, processing, classification and applications. Biol. Proced. Online 2006, 8, 11–35. [Google Scholar] [CrossRef]
  8. Maura, R.M.; Parra, S.R.; Stevens, R.E.; Weeks, D.L.; Wolbrecht, E.T.; Perry, J.C. Literature review of stroke assessment for upper-extremity physical function via EEG, EMG, kinematic, and kinetic measurements and their reliability. J. Neuroeng. Rehabil. 2023, 20, 21. [Google Scholar] [CrossRef]
  9. Zanini, R.A.; Colombini, E.L.; de Castro, M.C.F. Parkinson’s Disease EMG Signal Prediction Using Neural Networks. In Proceedings of the 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), Bari, Italy, 6–9 October 2019; pp. 2446–2453. [Google Scholar] [CrossRef]
  10. Beni, N.H.; Jiang, N. Heartbeat detection from single-lead ECG contaminated with simulated EMG at different intensity levels: A comparative study. Biomed. Signal Process. Control 2023, 83, 104612. [Google Scholar] [CrossRef]
  11. Cimolato, A.; Driessen, J.J.M.; Mattos, L.S.; De Momi, E.; Laffranchi, M.; De Michieli, L. EMG-driven control in lower limb prostheses: A topic-based systematic review. J. Neuroeng. Rehabil. 2022, 19, 43. [Google Scholar] [CrossRef]
  12. Yang, Z.; Guo, S.; Suzuki, K.; Liu, Y.; Kawanishi, M. An EMG-Based Biomimetic Variable Stiffness Modulation Strategy for Bilateral Motor Skills Relearning of Upper Limb Elbow Joint Rehabilitation. J. Bionic Eng. 2023, 1–16. [Google Scholar] [CrossRef]
  13. Toledo-Peral, C.L.; Vega-Martínez, G.; Mercado-Gutiérrez, J.A.; Rodríguez-Reyes, G.; Vera-Hernández, A.; Leija-Salas, L.; Gutiérrez-Martínez, J. Virtual/Augmented Reality for Rehabilitation Applications Using Electromyography as Control/Biofeedback: Systematic Literature Review. Electronics 2022, 11, 2271. [Google Scholar] [CrossRef]
  14. Sun, J.; Liu, G.; Sun, Y.; Lin, K.; Zhou, Z.; Cai, J. Application of Surface Electromyography in Exercise Fatigue: A Review. Front. Syst. Neurosci. 2022, 16, 893275. [Google Scholar] [CrossRef] [PubMed]
  15. Khant, M.; Gouwanda, D.; Gopalai, A.A.; Lim, K.H.; Foong, C.C. Estimation of Lower Extremity Muscle Activity in Gait Using the Wearable Inertial Measurement Units and Neural Network. Sensors 2023, 23, 556. [Google Scholar] [CrossRef] [PubMed]
  16. Žemgulys, J.; Raudonis, V.; Maskeliūnas, R.; Damaševičius, R. Recognition of basketball referee signals from real-time videos. J. Ambient. Intell. Humaniz. Comput. 2020, 11, 979–991. [Google Scholar] [CrossRef]
  17. Kotov-Smolenskiy, A.M.; Khizhnikova, A.E.; Klochkov, A.S.; Suponeva, N.A.; Piradov, M.A. Surface EMG: Applicability in the Motion Analysis and Opportunities for Practical Rehabilitation. Hum. Physiol. 2021, 47, 237–247. [Google Scholar] [CrossRef]
  18. Tamulis, Z.; Vasiljevas, M.; Damaševičius, R.; Maskeliunas, R.; Misra, S. Affective Computing for eHealth Using Low-Cost Remote Internet of Things-Based EMG Platform. In Intelligent Internet of Things for Healthcare and Industry; Internet of Things; Springer: Cham, Switzerland, 2022; pp. 67–81. [Google Scholar]
  19. Zheng, Z.; Wu, Z.; Zhao, R.; Ni, Y.; Jing, X.; Gao, S. A Review of EMG-, FMG-, and EIT-Based Biosensors and Relevant Human-Machine Interactivities and Biomedical Applications. Biosensors 2022, 12, 516. [Google Scholar] [CrossRef]
  20. Zheng, M.; Crouch, M.S.; Eggleston, M.S. Surface Electromyography as a Natural Human-Machine Interface: A Review. IEEE Sens. J. 2022, 22, 9198–9214. [Google Scholar] [CrossRef]
  21. Vasiljevas, M.; Turcinas, R.; Damasevicius, R. EMG speller with adaptive stimulus rate and dictionary support. In Proceedings of the 2014 Federated Conference on Computer Science and Information Systems, FedCSIS 2014, Warsaw, Poland, 7–10 September 2014; pp. 227–234. [Google Scholar]
  22. Aly, H.; Youssef, S.M. Bio-signal based motion control system using deep learning models: A deep learning approach for motion classification using EEG and EMG signal fusion. J. Ambient. Intell. Humaniz. Comput. 2021, 14, 991–1002. [Google Scholar] [CrossRef]
  23. Issa, M.E.; Helm, A.M.; Al-Qaness, M.A.A.; Dahou, A.; Elaziz, M.A.; Damaševičius, R. Human Activity Recognition Based on Embedded Sensor Data Fusion for the Internet of Healthcare Things. Healthcare 2022, 10, 1084. [Google Scholar] [CrossRef] [PubMed]
  24. Latreche, A.; Kelaiaia, R.; Chemori, A.; Kerboua, A. A New Home-Based Upper- and Lower-Limb Telerehabilitation Platform with Experimental Validation. Arab. J. Sci. Eng. 2023. [Google Scholar] [CrossRef]
  25. Latreche, A.; Kelaiaia, R.; Chemori, A.; Kerboua, A. Reliability and validity analysis of MediaPipe-based measurement system for some human rehabilitation motions. Measurement 2023, 214, 112826. [Google Scholar] [CrossRef]
  26. Antón, D.; Kurillo, G.; Goñi, A.; Illarramendi, A.; Bajcsy, R. Real-time communication for Kinect-based telerehabilitation. Future Gener. Comput. Syst. 2017, 75, 72–81. [Google Scholar] [CrossRef]
  27. Han, M.; Zandigohar, M.; Furmanek, M.P.; Yarossi, M.; Schirner, G.; Erdoğmuş, D. Classifications of Dynamic EMG in Hand Gesture and Unsupervised Grasp Motion Segmentation. In Proceedings of the 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Mexico, 1–5 November 2021; pp. 359–364. [Google Scholar] [CrossRef]
  28. Jia, G.; Lam, H.; Ma, S.; Yang, Z.; Xu, Y.; Xiao, B. Classification of Electromyographic Hand Gesture Signals Using Modified Fuzzy C-Means Clustering and Two-Step Machine Learning Approach. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 1428–1435. [Google Scholar] [CrossRef] [PubMed]
  29. Alfaro, J.G.C.; Trejos, A.L. User-Independent Hand Gesture Recognition Classification Models Using Sensor Fusion. Sensors 2022, 22, 1321. [Google Scholar] [CrossRef] [PubMed]
  30. Phinyomark, A.; Campbell, E.; Scheme, E. Surface Electromyography (EMG) Signal Processing, Classification, and Practical Considerations. In Series in BioEngineering; Springer: Singapore, 2019; pp. 3–29. [Google Scholar] [CrossRef]
  31. Kaufmann, P.; Englehart, K.; Platzner, M. Fluctuating emg signals: Investigating long-term effects of pattern matching algorithms. In Proceedings of the 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, Buenos Aires, Argentina, 31 August–4 September 2010; pp. 6357–6360. [Google Scholar] [CrossRef]
  32. Gopal, P.; Gesta, A.; Mohebbi, A. A Systematic Study on Electromyography-Based Hand Gesture Recognition for Assistive Robots Using Deep Learning and Machine Learning Models. Sensors 2022, 22, 3650. [Google Scholar] [CrossRef]
  33. Benalcázar, M.E.; Caraguay, L.V.; López, L.I.B. A user-specific hand gesture recognition model based on feed-forward neural networks, emgs, and correction of sensor orientation. Appl. Sci. 2020, 10, 8604. [Google Scholar] [CrossRef]
  34. Fajardo, J.M.; Gomez, O.; Prieto, F. EMG hand gesture classification using handcrafted and deep features. Biomed. Signal Process. Control 2021, 63, 102210. [Google Scholar] [CrossRef]
  35. Sharma, N.; Ryait, H.S.; Sharma, S. Classification of biological signals and time domain feature extraction using capsule optimized auto encoder-electroencephalographic and electromyography. Int. J. Adapt. Control Signal Process. 2022, 36, 1670–1690. [Google Scholar] [CrossRef]
  36. Ieracitano, C.; Mammone, N.; Hussain, A.; Morabito, F.C. A novel multi-modal machine learning based approach for automatic classification of EEG recordings in dementia. Neural Netw. 2020, 123, 176–190. [Google Scholar] [CrossRef]
  37. Lee, K.H.; Min, J.Y.; Byun, S. Electromyogram-based classification of hand and finger gestures using artificial neural networks. Sensors 2022, 22, 225. [Google Scholar] [CrossRef]
  38. Damasevicius, R.; Vasiljevas, M.; Martisius, I.; Jusas, V.; Birvinskas, D.; Wozniak, M. BoostEMD: An extension of EMD method and its application for denoising of EMG signals. Elektron. Elektrotechnika 2015, 21, 57–61. [Google Scholar]
  39. Manikanta, D.C.S.; Gowtham, G.; Gantasala, K. Implementation of Feature Extraction of Neuro Muscular EMG Signal. In Proceedings of the 2022 2nd International Conference on Advanced Technologies in Intelligent Control, Environment, Computing & Communication Engineering (ICATIECE), Bangalore, India, 16–17 December 2022; pp. 1–5. [Google Scholar] [CrossRef]
  40. Sahu, P.; Singh, B.K.; Nirala, N. Beta Artificial Bee Colony Algorithm for EMG Feature Selection. In Lecture Notes in Electrical Engineering; Springer Nature: Singapore, 2023; pp. 3–17. [Google Scholar] [CrossRef]
  41. Sahu, P.; Singh, B.K.; Nirala, N. An improved feature selection approach using global best guided Gaussian artificial bee colony for EMG classification. Biomed. Signal Process. Control 2023, 80, 104399. [Google Scholar] [CrossRef]
  42. Goh, G.D.; Lee, J.M.; Goh, G.L.; Huang, X.; Lee, S.; Yeong, W.Y. Machine Learning for Bioelectronics on Wearable and Implantable Devices: Challenges and Potential. Tissue Eng. Part A 2023, 29, 20–46. [Google Scholar] [CrossRef] [PubMed]
  43. Buongiorno, D.; Cascarano, G.D.; De Feudis, I.; Brunetti, A.; Carnimeo, L.; Dimauro, G.; Bevilacqua, V. Deep learning for processing electromyographic signals: A taxonomy-based survey. Neurocomputing 2021, 452, 549–565. [Google Scholar] [CrossRef]
  44. Mudiyanselage, S.E.; Nguyen, P.H.D.; Rajabi, M.S.; Akhavian, R. Automated Workers’ Ergonomic Risk Assessment in Manual Material Handling Using sEMG Wearable Sensors and Machine Learning. Electronics 2021, 10, 2558. [Google Scholar] [CrossRef]
  45. Qiu, S.; Zhao, H.; Jiang, N.; Wang, Z.; Liu, L.; An, Y.; Zhao, H.; Miao, X.; Liu, R.; Fortino, G. Multi-sensor information fusion based on machine learning for real applications in human activity recognition: State-of-the-art and research challenges. Inf. Fusion 2022, 80, 241–265. [Google Scholar] [CrossRef]
  46. Côté-Allard, U.; Fall, C.L.; Drouin, A.; Campeau-Lecours, A.; Gosselin, C.; Glette, K.; Laviolette, F.; Gosselin, B. Deep Learning for Electromyographic Hand Gesture Signal Classification Using Transfer Learning. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 760–771. [Google Scholar] [CrossRef]
  47. Simão, M.; Neto, P.; Gibaru, O. EMG-based online classification of gestures with recurrent neural networks. Pattern Recognit. Lett. 2019, 128, 45–51. [Google Scholar] [CrossRef]
  48. Jo, Y.; Oh, D. Real-Time Hand Gesture Classification Using Crnn with Scale Average Wavelet Transform. J. Mech. Med. Biol. 2020, 20, 2040028. [Google Scholar] [CrossRef]
  49. Wei, C.; Wang, H.; Zhou, B.; Feng, N.; Hu, F.; Lu, Y.; Jiang, D.; Wang, Z. sEMG signal-based lower limb movements recognition using tunable Q-factor wavelet transform and Kraskov entropy. IRBM 2023, 44, 100773. [Google Scholar] [CrossRef]
  50. Toro-Ossaba, A.; Jaramillo-Tigreros, J.; Tejada, J.C.; Peña, A.; López-González, A.; Castanho, R.A. LSTM Recurrent Neural Network for Hand Gesture Recognition Using EMG Signals. Appl. Sci. 2022, 12, 9700. [Google Scholar] [CrossRef]
  51. Li, J.; Wei, L.; Wen, Y.; Liu, X.; Wang, H. Hand gesture recognition based improved multi-channels CNN architecture using EMG sensors. J. Intell. Fuzzy Syst. 2022, 43, 643–656. [Google Scholar] [CrossRef]
  52. Qi, J.; Jiang, G.; Li, G.; Sun, Y.; Tao, B. Surface EMG hand gesture recognition system based on PCA and GRNN. Neural Comput. Appl. 2020, 32, 6343–6351. [Google Scholar] [CrossRef]
  53. Shanmuganathan, V.; Yesudhas, H.R.; Khan, M.S.; Khari, M.; Gandomi, A.H. R-CNN and wavelet feature extraction for hand gesture recognition with EMG signals. Neural Comput. Appl. 2020, 32, 16723–16736. [Google Scholar] [CrossRef]
  54. Pan, L.; Liu, K.; Zhu, K.; Li, J. Comparing EMG Pattern Recognition with and Without Hand and Wrist Movements. J. Bionic Eng. 2022, 19, 700–708. [Google Scholar] [CrossRef]
  55. Zhang, Z.; Yang, K.; Qian, J.; Zhang, L. Real-time surface EMG pattern recognition for hand gestures based on an artificial neural network. Sensors 2019, 19, 3170. [Google Scholar] [CrossRef] [PubMed]
  56. Aviles, M.; Sánchez-Reyes, L.M.; Fuentes-Aguilar, R.; Toledo-Pérez, D.; Rodríguez-Reséndiz, J. A Novel Methodology for Classifying EMG Movements Based on SVM and Genetic Algorithms. Micromachines 2022, 13, 2108. [Google Scholar] [CrossRef] [PubMed]
  57. Toledo-Pérez, D.C.; Rodríguez-Reséndiz, J.; Gómez-Loenzo, R.A. A Study of Computing Zero Crossing Methods and an Improved Proposal for EMG Signals. IEEE Access 2020, 8, 8783–8790. [Google Scholar] [CrossRef]
  58. Cases, C.M.P.; Baldovino, R.G.; Manguerra, M.V.; Dupo, V.B.; Dajay, R.C.R.; Bugtai, N.T. An EMG-based Gesture Recognition for Active-assistive Rehabilitation. In Proceedings of the 12th IEEE International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management (HNICEM), Manila, Philippines, 3–7 December 2020. [Google Scholar] [CrossRef]
  59. Bahador, A.; Yousefi, M.; Marashi, M.; Bahador, O. High accurate lightweight deep learning method for gesture recognition based on surface electromyography. Comput. Methods Programs Biomed. 2020, 195, 105643. [Google Scholar] [CrossRef]
  60. Farinha, D.; Dias, J.; Neves, P.; Pereira, K.; Ferreira, C.; Pires, G. Assistive Robotic Hand Orthosis (ARHO) controlled with EMG: Evaluation of a preliminary Prototype. In Proceedings of the 6th IEEE Portuguese Meeting on Bioengineering (ENBENG), Lisbon, Portugal, 22–23 February 2019; pp. 1–4. [Google Scholar] [CrossRef]
  61. Palma, F.; Perramont, N.; Rojas, V.; Bertolotto, B.; Tuesta, M. Electromyographic amplitude and coactivation of the core muscles during different unstable push-up exercises. Med. Sport 2021, 74, 209–222. [Google Scholar] [CrossRef]
  62. Shi, W.T.; Lyu, Z.J.; Tang, S.T.; Chia, T.L.; Yang, C.Y. A bionic hand controlled by hand gesture recognition based on surface EMG signals: A preliminary study. Biocybern. Biomed. Eng. 2018, 38, 126–135. [Google Scholar] [CrossRef]
  63. Zhou, H.; Zhang, Q.; Zhang, M.; Shahnewaz, S.; Wei, S.; Ruan, J.; Zhang, X.; Zhang, L. Toward Hand Pattern Recognition in Assistive and Rehabilitation Robotics Using EMG and Kinematics. Front. Neurorobot. 2021, 15, 659876. [Google Scholar] [CrossRef]
  64. Lu, Z.; Tong, K.Y.; Zhang, X.; Li, S.; Zhou, P. Myoelectric Pattern Recognition for Controlling a Robotic Hand: A Feasibility Study in Stroke. IEEE Trans. Biomed. Eng. 2019, 66, 365–372. [Google Scholar] [CrossRef] [PubMed]
  65. Meyers, E.C.; Gabrieli, D.; Tacca, N.; Wengerd, L.; Darrow, M.; Friedenberg, D. Decoding hand and wrist movement intention from chronic stroke survivors with hemiparesis using a user-friendly, wearable EMG-based neural interface. Arch. Phys. Med. Rehabil. 2021, 103, e14–e15. [Google Scholar] [CrossRef]
  66. Lu, Z.; Yu Tong, K.; Shin, H.; Li, S.; Zhou, P. Advanced Myoelectric Control for Robotic Hand-Assisted Training: Outcome from a Stroke Patient. Front. Neurol. 2017, 8, 107. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Data capture procedure with interpolation of 8 sensing zones and 7 gestures (A–G corresponding to motions M1–M7).
Figure 2. Summary of EMG signal values for gestures 1–7.
Figure 3. Projections of the EMG signals (for motions M1–M7) onto the three principal components P1, P2 and P3.
Figure 4. Confusion matrix of the classification results.
Table 1. Minimum values of gesture features.
Gesture | Standard Deviation | Minimum | Maximum | Crossing the Zero Axis | Average Change in Amplitude | First Amplitude Jump | Mean Absolute Value | Wave Length | Wilson Amplitude
1 | 0.055 | −0.229 | 0.225 | 49.91 | 0.141 | 0.099 | 0.036 | 8.87 | 2.878
2 | 0.089 | −0.333 | 0.317 | 51.13 | 0.223 | 0.117 | 0.059 | 14.59 | 3.66
3 | 0.101 | −0.357 | 0.32 | 58.91 | 0.259 | 0.190 | 0.066 | 16.2 | 4.22
4 | 0.081 | −0.286 | 0.276 | 52.53 | 0.16367 | 0.061 | 0.054 | 13.43 | 3.49
5 | 0.058 | −0.234 | 0.212 | 48.64 | 0.069 | 0.098 | 0.040 | 8.976 | 2.746
6 | 0.014 | −0.065 | 0.049 | 22.85 | 0.17433 | 0.05 | 0.009 | 2.299 | 0.773
7 | 0.143 | −0.434 | 0.415 | 73.46 | 0.411 | 0.297 | 0.104 | 25.58 | 5.928
Table 2. Mean values of gesture features.
Gesture | Standard Deviation | Minimum | Maximum | Crossing the Zero Axis | Average Change in Amplitude | First Amplitude Jump | Mean Absolute Value | Wave Length | Wilson Amplitude
1 | 0.012 | −0.476 | 0.028 | 20.87 | 0.032 | 0.021 | 0.009 | 2 | 0
2 | 0.011 | −0.77 | 0.018 | 20.25 | 0.032 | 0.02 | 0.009 | 1.94 | 0
3 | 0.028 | −0.675 | 0.057 | 32.62 | 0.079 | 0.024 | 0.019 | 4.53 | 0.625
4 | 0.014 | −0.607 | 0.036 | 22.37 | 0.037 | 0.023 | 0.01 | 2.25 | 0.125
5 | 0.012 | −0.51 | 0.026 | 17.62 | 0.033 | 0.02 | 0.009 | 1.93 | 0
6 | 0.007 | −0.13 | 0.007 | 5.875 | 0.02 | 0.018 | 0.005 | 1.19 | 0
7 | 0.05 | −0.746 | 0.137 | 54.87 | 0.145 | 0.024 | 0.035 | 8.63 | 1.875
Table 3. Maximum values of gesture features.
Gesture | Standard Deviation | Minimum | Maximum | Crossing the Zero Axis | Average Change in Amplitude | First Amplitude Jump | Mean Absolute Value | Wave Length | Wilson Amplitude
1 | 0.112 | −0.04 | 0.5 | 76.37 | 0.278 | 0.208 | 0.072 | 17.78 | 6.375
2 | 0.211 | −0.03 | 0.77 | 79.75 | 0.515 | 0.269 | 0.138 | 34.4 | 8.5
3 | 0.193 | −0.08 | 0.635 | 82.87 | 0.48 | 0.447 | 0.125 | 30.75 | 8
4 | 0.177 | −0.04 | 0.614 | 81 | 0.314 | 0.08 | 0.122 | 9.62 | 7.625
5 | 0.126 | −0.048 | 0.496 | 77.5 | 0.1 | 0.215 | 0.079 | 19 | 6.5
6 | 0.026 | −0.027 | 0.119 | 43.25 | 0.472 | 0.099 | 0.016 | 3.867 | 2.25
7 | 0.253 | −0.144 | 0.716 | 90.87 | 0.735 | 0.687 | 0.189 | 46.26 | 10
Table 4. A comparison with other related approaches.
Methods | Number of Motions | Number of Subjects | Accuracy
Ours | 7 | 25 | 92%
Zhou et al. [63] | 8 | 5 | 91.18%
Lu et al. [64] | 6 | 8 | 84.1%
Meyers et al. [65] | 12 | 7 | 77.1%
Lu et al. [66] | 6 | 1 | 75%
