Article

Open Database for Accurate Upper-Limb Intent Detection Using Electromyography and Reliable Extreme Learning Machines

Programa de Pós-Graduação em Engenharia Elétrica da Universidade Federal do Rio Grande do Sul, Avenue Osvaldo Aranha 103, Porto Alegre 90035-190, Brazil
* Author to whom correspondence should be addressed.
Sensors 2019, 19(8), 1864; https://doi.org/10.3390/s19081864
Submission received: 15 March 2019 / Revised: 12 April 2019 / Accepted: 14 April 2019 / Published: 18 April 2019

Abstract: Surface Electromyography (sEMG) signal processing has the potential of a disruptive technology for enabling a natural human interface with artificial limbs and assistive devices. However, real-time control interfaces based on this biosignal still present several restrictions, such as control limitations due to the lack of reliable signal prediction and the lack of signal-processing standards among research groups. This paper presents and validates our sEMG database through signal classification performed by the reliable forms of our Extreme Learning Machine (ELM) classifiers, which are used to maintain a more consistent signal classification. For the signal processing, we explore a stochastic filter based on the Antonyan Vardan Transform (AVT) in combination with two variations of our reliable classifiers (denoted R-ELM and R-RELM, respectively, where RELM is the Regularized ELM) to derive a reliability metric for the system, which autonomously selects the most reliable samples for signal classification. To validate and compare our database and classifiers with related papers, we also classified the whole of Databases 1, 2, and 6 (DB1, DB2, and DB6) of the NINAPro database. Our database presented consistent results, while the reliable forms of the ELM classifiers matched or outperformed related papers, reaching average accuracies higher than 99% for the IEE database and average accuracies of 75.1%, 79.77%, and 69.83% for NINAPro DB1, DB2, and DB6, respectively.

1. Introduction

Interfaces based on biosignals, supported by recent developments in areas such as medicine, engineering, computer science, and microelectronics, are becoming increasingly popular. Recently, surface Electromyography (sEMG) and Electroencephalography (EEG) signals have been used to control assistive devices for people with some level of impairment, amputation, or specific movement restriction [1,2,3]. Despite recent advances in sEMG signal classification for the activation of auxiliary devices [3,4,5], optimal signal-processing strategies and the development of portable devices still face several restrictions. Although the sEMG signal occupies a well-defined range in frequency and amplitude, factors such as subject dependency and lack of signal repeatability have precluded efficient and reliable myoelectric pattern recognition and control since the first studies in the area [6,7], making optimal sEMG signal classification an arduous task from a machine learning perspective [8,9,10,11,12]. Thus, the natural control of assistive devices based on sEMG activation is a field of constant expansion in biomedical engineering.
Usually, open-access sEMG databases have restrictions concerning the small number of movements performed, the small number of subjects, the lack of assay repetitions, and variations in the assays themselves. Furthermore, these databases often use different experimental methodologies, which makes direct comparison among different works unfeasible. The NINAPro database [12,13,14,15] was created to compensate for some of these limitations by offering data acquired from a large variety of subjects performing different upper-limb movements and, more recently, including repetitions [15]. However, the NINAPro and other sEMG databases frequently offer only the processed signals, but not the routines/algorithms and other methodological tools that would help different research groups replicate them and create their own databases. Thus, we developed a full experimental protocol to provide not only our database, but also the algorithms, code, and experimental material that may help researchers develop and test their methods in the use of sEMG for prosthetic control.
In our previous paper [16], a variety of signal representations was explored to establish a more reliable classification procedure for sEMG signals. The current paper aims to present and validate our sEMG database using the reliable forms of our ELM classifiers, which are used to maintain a more consistent signal classification. With this work, the first version of our database, which is constantly expanding to include more trials and subjects (including amputees), has been made available for download on our website (https://www.ufrgs.br/ieelab/IEE_sEMG_db.php). Besides the database, we are also making a considerable amount of complementary material available, such as the videos used for the subjects’ stimulation and the LabVIEW routines and MATLAB scripts that enable different groups to recreate the experimental conditions and formulate their own databases. Moreover, we hope to help other groups maintain a relatively close methodology that may make result comparison more reliable and consistent among the different papers in the area. Currently, our database (denoted the IEE database) gathers 48 assays composed of three repetitions of four different trials acquired from four subjects. The different trials explore a different number and order of repetitions of 17 distinct upper-limb movements to test the effect of these factors on the signal classification accuracy rate. Additionally, we explore the use of a stochastic filter based on the Antonyan Vardan Transform (AVT) and our concept of reliable sEMG signal classification using two ELM classifiers. The validation of the classifiers is performed using our database and three different NINAPro databases (DB1, DB2, and DB6) for comparison. “Exercise 2” (E2) of the NINAPro DB1 is comparable with our Trial B in movement types, sequential order, and number of repetitions, while the E2 section of NINAPro DB2 is comparable with our Assay A for the same reasons. For all of our assays, each subject performed three repetitions, while DB6 is the only NINAPro database dealing with assay repetitions. For this newer NINAPro database (DB6), a case study involving day and trial repetitions was presented by Palermo et al. [15], focusing on cross-session (different trials) signal classification, which resulted in considerable accuracy loss and demonstrated the need for a regenerative or more reliable classification method to boost the results. Section 2 of this paper details the methodology, describing the IEE database acquisition protocol and the processing techniques. Section 3 presents the results of the validation of the IEE database and of all the NINAPro databases used, obtained with the different signal classification techniques. Section 4 presents the discussion of all the results achieved, and Section 5 consists of final considerations and usage notes for the database.

2. Materials and Methods

2.1. Experimental Protocol

The E2 NINAPro database exercise [13] inspired the 17 hand and wrist movements that form the IEE database. The 17 different movements, performed interspersed with a rest class, are presented in Figure 1 and are composed of three main groups. Group 1 consisted of finger movements (thumb up; extension of middle and index finger with flexion of other fingers; flexion of little and ring fingers and extension of the others; thumb flexion; abduction of all fingers; hand closure; pointing index and abduction of extended fingers). Group 2 gathered the torsion movements (wrist supination, axis: middle finger; wrist pronation, axis: middle finger; wrist supination, axis: little finger; and wrist pronation, axis: little finger). Group 3 was composed of wrist movements (wrist flexion; wrist extension; wrist radial deviation; wrist ulnar deviation; and wrist extension with the hand closed).
The characterization of subjects is pertinent to appraise the results since skinnier and younger individuals, for example, tend to reach better accuracy results, as demonstrated by Atzori et al. [14]. During the data acquisition, each one of the four untrained subjects was requested to sit comfortably on a chair positioned in front of an LCD monitor and to reproduce the movements displayed as naturally as possible according to the video stimulation.
For each one of the four different assays, the four subjects were requested to repeat the trials three times, forming a subset of the database with 12 trials for each assay. The assays differed from each other in the number of repetitions and the order of the movements. Assays A and B had, respectively, six and ten repetitions of the movements in sequential order (as in the NINAPro databases DB2 and DB1, respectively). Assays C and D had the same numbers of repetitions, performed in random order. The electrode positioning was the same as proposed in the NINAPro database, as presented in Figure 2.
All the sEMG signal acquisition was performed with 12 channels formed by 24 disposable surface electrodes and a reference electrode placed on the individual’s forehead. The channels were connected to a battery-powered commercial sEMG device (EMG 830 C, from EMG System do Brasil). The signal was digitized at a 2-kHz sampling frequency with 18 bits of quantization by an NI USB-6289 platform from National Instruments. The acquisition was performed using a notebook running LabVIEW 9 on a Windows 10 environment.
All procedures performed followed the ethical standards of the 1964 Helsinki declaration and its later amendments or comparable ethical standards and were approved by the institutional research committee under the Certificate of Presentation for Ethical Appreciation (Number 11253312.8.0000.5347). Before the experiment, each subject was requested to give informed consent and answer questions regarding clinical data including age, gender, height, weight, and laterality and also had their Forearm Circumference (F.C) and Forearm Length (F.L) measured. Those characteristics are detailed in Table 1.

2.2. The LabVIEW Interface

The LabVIEW routines were designed to interact with the hardware and also provided a cognitive walkthrough for the user, presenting the next movement to be performed in a small auxiliary interface window and providing preparation time. The videos were built using the MakeHuman software for the creation of the anthropomorphic arm forms and the Blender software to assemble and time the animations. The LabVIEW routines and the videos used are available for download on our website, along with the MATLAB functions and syntax used to transform the LabVIEW “.lvm” files into “.m” MATLAB files. By making this material available, we hope to enable different groups to recreate the experimental conditions and formulate their own databases as well.

2.3. Data Relabeling

The samples were labeled based on the timestamps generated by each movement repetition displayed as the subject stimulation. However, delays caused by the volunteers’ reaction time tended to create mismatches and, frequently, incorrect labels for some samples at the beginning and end of each movement. To improve the signal labeling, a relabeling algorithm based on [13] was used, combining signal filtering and a Generalized Likelihood Ratio (GLR) model.
To define the most appropriate label in the rest-movement-rest transitions, an exhaustive search over all possible values of the movement beginning and ending times—denoted $t_0$ and $t_1$, respectively—was performed. First, $t_0$ was fixed at a value based on the initial timestamps, while $t_1$ was incremented until the end of the window. After this first sweep, $t_0$ was incremented, and $t_1$ went through a new sweep. In each iteration, $t_0$ was incremented by 30 ms, and the GLR between the rest-movement-rest sections was calculated. Subsequently, the combination of $t_0$ and $t_1$ with the highest GLR value was selected, defining the movement beginning and end. The relabeling method, implemented in MATLAB, is also available for download on our website.
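To make the search concrete, the sketch below implements a minimal version of the exhaustive $t_0$/$t_1$ sweep in Python/NumPy. It assumes a simple GLR in which each section (rest, movement, rest) is modeled as a zero-mean Gaussian with its own maximum-likelihood variance; the function names and the single-channel input are illustrative and do not reproduce the authors’ MATLAB routine.

```python
import numpy as np

def gaussian_loglik(x):
    """Log-likelihood of the samples under a zero-mean Gaussian with ML variance."""
    var = np.var(x) + 1e-12                              # guard against log(0) on flat segments
    return -0.5 * len(x) * (np.log(2 * np.pi * var) + 1.0)

def glr_rest_movement_rest(x, t0, t1):
    """GLR of a rest/movement/rest split at samples t0 and t1 vs. a single model."""
    segmented = (gaussian_loglik(x[:t0])
                 + gaussian_loglik(x[t0:t1])
                 + gaussian_loglik(x[t1:]))
    pooled = gaussian_loglik(x)
    return 2.0 * (segmented - pooled)

def relabel_window(x, fs=2000, step_ms=30):
    """Exhaustive search for the (t0, t1) pair that maximizes the GLR.

    x is one rest-movement-rest window (e.g., the rectified sEMG of one channel,
    or the channels averaged) centred on the stimulation timestamps.
    """
    step = int(step_ms * 1e-3 * fs)                       # 30 ms increments
    best_glr, best_t0, best_t1 = -np.inf, None, None
    for t0 in range(step, len(x) - 2 * step, step):       # sweep the movement onset
        for t1 in range(t0 + step, len(x) - step, step):  # sweep the movement offset
            glr = glr_rest_movement_rest(x, t0, t1)
            if glr > best_glr:
                best_glr, best_t0, best_t1 = glr, t0, t1
    labels = np.zeros(len(x), dtype=int)
    if best_t0 is not None:
        labels[best_t0:best_t1] = 1                       # 1 = movement, 0 = rest
    return labels, best_t0, best_t1
```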

2.4. Signal Filtering

Since noise outside the band of interest of the sEMG signal is usually filtered out by the analog conditioning performed in the signal acquisition stage, digital filtering often aims to remove artifacts within the bandwidth of the sEMG signal through the application of adaptive filters [18,19,20,21,22,23]. To provide a straightforward and time-efficient alternative for sEMG signal filtering, we developed the AVT filter. The AVT filter used in our database was designed to remove noise within the band of interest and to provide a smoother and more regular source for feature extraction, which enhances the pattern recognition capacity of the machine learning method. The original AVT algorithm was designed with sample discarding; however, we modified it to avoid discarding samples before the classifier, as presented in Figure 3.
The AVT filter, as designed, processed each segment (sg) of 200 ms extracted from the sEMG signal. Once a 200-ms segment was processed, the segmentation window slid 10 ms forward in the signal, characterizing a sliding-window approach. Considering the Mean Amplitude Value (MAV), the overall process is similar to a moving-average filter; the main difference resides in the selective capacity provided by the Mean Signal Deviation (MSD), composed of the standard deviation of the signal ($\sigma$), rather than just a hard threshold based on the Mean Signal Amplitude (MSA) alone. After the definition of the range of excursion (MSA $\pm$ MSD), the only samples (s) to be altered were those outside its boundaries, which were replaced by the MSA value, while the remaining samples stayed intact. The threshold was also influenced by two weighting filter factors, $ff_1$ and $ff_2$. The filter factors were used to provide a low-pass behavior for the 190-ms portion (95% of the segment) and to highlight the dynamics of the incoming 10-ms portion (5% of the segment) of each sliding window. For that, the values of $ff_1$ and $ff_2$ were defined as 0.8 and 0.2, respectively. The effect of the designed AVT filter on the sEMG signal is presented in Figure 4a,b, as well as its effect on the RMS feature in Figure 4c,d. The Gaussian-like response derived from the signal is a direct consequence of the incremental activation profile of the Motor Units’ Action Potentials (MUAPs), which leads to proportional EMG responses. More detailed information about the AVT filter and its comparison with other filtering and non-filtering scenarios is given in [16].
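A minimal single-channel sketch of the modified AVT filter is given below. It assumes, as one reading of the description above, that the filter factors $ff_1$ and $ff_2$ weight the contributions of the 190-ms history and the 10-ms incoming portion to both the MSA and the MSD; the exact weighting and implementation details of the authors’ filter may differ.

```python
import numpy as np

def avt_filter(x, fs=2000, seg_ms=200, inc_ms=10, ff1=0.8, ff2=0.2):
    """Modified AVT filter: samples outside MSA +/- MSD are replaced by the MSA.

    No samples are discarded; the 200 ms window is re-evaluated every 10 ms,
    so the output has the same length as the input signal x (one channel).
    """
    seg = int(seg_ms * 1e-3 * fs)
    inc = int(inc_ms * 1e-3 * fs)
    y = np.asarray(x, dtype=float).copy()
    for start in range(0, len(y) - seg + 1, inc):
        window = y[start:start + seg]                 # view: edits write back into y
        history, incoming = window[:-inc], window[-inc:]
        # ff1 smooths the 190 ms history (low-pass behaviour), ff2 highlights
        # the dynamics of the 10 ms incoming portion (assumed weighting).
        msa = ff1 * history.mean() + ff2 * incoming.mean()
        msd = ff1 * history.std() + ff2 * incoming.std()
        outliers = np.abs(window - msa) > msd         # outside the MSA +/- MSD range
        window[outliers] = msa                        # replace outliers, keep the rest
    return y
```

In practice, the filter would be applied independently to each of the 12 channels before feature extraction.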

2.5. Signal Segmentation and Feature Extraction

Generally, the definition of segment length and the selection of features are not well standardized in sEMG signal processing, with approaches varying considerably in both factors. Although the representativity of the sEMG signal is generally proportional to the segment length used [24], previous tests with the NINAPro database [5,12] observed that variations of 100 ms, 200 ms, and 400 ms in segment length did not produce statistically significant differences in the results. Moreover, the use of more sophisticated frequency or time-frequency features also did not present a clear accuracy improvement compared to simpler signal representations for offline classification [12,25]. Furthermore, according to [26], there is no guarantee that adding individually efficient features to a model offers a more efficient combination for signal classification, considering that such systems tend to overfit. Previous studies [13,27] also concluded that, due to the nature of the sEMG signal, a non-linear kernel is even more important for the signal classification than a specific set of features.
Although higher accuracy rates are obtained using longer sEMG portions (windows), the signal buffering introduces delays in the classification response that usually preclude the real-time control of assistive devices [24]. In our paper, the overlapping-window approach was chosen for the signal segmentation to maintain a balance between reasonable signal representation and system responsiveness. Additionally, similar papers [12,13,14,15,28] were also considered to enable a fair comparison of results. Thus, overlapping windows of 200 ms in length with 10-ms increments were used to segment the signal and extract, for each of the 12 channels, the classical time-domain features commonly used in sEMG signal classification [7,10,29]: RMS, Variance (VAR), Mean Absolute Value (MAV), and Standard Deviation (SD). This simple set of features for the signal representation was chosen based on previous related NINAPro works and also to highlight the consistency of our database. This approach also highlights the potential of our reliable classifiers, which can match or even outperform related results in the literature without using longer sEMG segments or more complicated signal representations. A minimal sketch of this segmentation and feature-extraction step is given below.
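The sketch below illustrates the overlapping-window segmentation and the four time-domain features per channel; it assumes a (samples × channels) signal array, and the function name and feature ordering are illustrative rather than the authors’ implementation.

```python
import numpy as np

def extract_td_features(emg, fs=2000, seg_ms=200, inc_ms=10):
    """Overlapping-window time-domain features.

    emg : array of shape (n_samples, n_channels), here 12 channels.
    Returns an array of shape (n_windows, 4 * n_channels) holding, per channel,
    the RMS, VAR, MAV, and SD of each 200 ms window (10 ms increment).
    """
    seg = int(seg_ms * 1e-3 * fs)
    inc = int(inc_ms * 1e-3 * fs)
    feats = []
    for start in range(0, emg.shape[0] - seg + 1, inc):
        w = emg[start:start + seg]                    # (seg, n_channels)
        rms = np.sqrt(np.mean(w ** 2, axis=0))
        var = np.var(w, axis=0)
        mav = np.mean(np.abs(w), axis=0)
        sd = np.std(w, axis=0)
        feats.append(np.concatenate([rms, var, mav, sd]))
    return np.asarray(feats)
```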

2.6. Signal Classification

The signal classification was performed using state-of-the-art classifiers based on Extreme Learning Machines (ELM) in their standard (ELM) and Regularized (RELM) forms [30,31,32]. We derived reliable versions of the classifiers, denoted R-ELM and R-RELM, respectively.

2.6.1. ELM

The ELM is a particularly attractive machine learning solution for applications that demand quick model formation. The classifier is formed through a non-iterative method that does not require an iterative optimization process; instead, it uses the Moore–Penrose pseudoinverse, which yields an optimal model within an error tolerance. By its nature, the method natively avoids some classical problems of more traditional machine learning solutions, such as local minima and sub-optimal solutions [33]. Furthermore, the method has a natural multiclass capacity and a very reasonable computational cost when compared to reference classifiers in the field, such as the SVM [25]. The basic ELM structure is the linear system presented in Equation (1), where $H$ is the input matrix formed by the features projected by a kernel, $\beta$ is the model to be found, and $T$ is the label matrix. The derivation of $H$ is detailed in Equation (2), where $w$ and $b$ are the random weights and biases of the network neurons, attributed within the ranges [−1, 1] and [−1.5, 1.5], respectively, to maintain the low-pass response of the classifier. The $\phi$ represents the Radial Basis Function (RBF) kernel that projects each one of the $N$ input feature vectors onto the $L$ hidden neurons, a kernel acknowledged to be efficient for sEMG signal classification [13,27].
For the ELM, $L$ is the only hyperparameter to be defined. Thus, we performed preliminary tests relating $L$ to the training and testing accuracies of the models for both the ELM and RELM classifiers within a range of 50–1000 hidden neurons. The optimal number of hidden neurons was defined as the one yielding the maximum accuracy rate for each subject.
$$H \beta = T \qquad (1)$$
$$H = \begin{bmatrix} \phi(w_1 x_1 + b_{1,1}) & \cdots & \phi(w_L x_1 + b_{1,L}) \\ \vdots & \ddots & \vdots \\ \phi(w_1 x_N + b_{N,1}) & \cdots & \phi(w_L x_N + b_{N,L}) \end{bmatrix} \qquad (2)$$
For the ELM and RELM, $H^{\dagger}$ was calculated through the Moore–Penrose pseudoinverse and through Tikhonov regularization (with $C$ as the regularization factor), respectively, the latter as presented in Equation (3). With $H^{\dagger}$ defined, the system can be solved in a very straightforward manner, as presented in Equation (4) [33].
$$H^{\dagger} = H^{T}\left(H H^{T} + \frac{I}{C}\right)^{-1} \qquad (3)$$
$$\beta = H^{\dagger} T \qquad (4)$$
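A minimal sketch of an ELM/RELM of this kind is given below, using the random weight and bias ranges stated above and the solutions of Equations (3) and (4). The exp(−z²) activation is assumed here as one common RBF-style choice, and the class and parameter names are illustrative rather than the authors’ implementation.

```python
import numpy as np

class ELM:
    """Minimal ELM/RELM sketch: pass C=None for the standard ELM (Moore-Penrose
    pseudoinverse) or a positive C for the Tikhonov-regularized RELM."""

    def __init__(self, n_hidden=500, C=None, seed=0):
        self.L, self.C = n_hidden, C
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # RBF-style activation applied to the random projection; exp(-z^2) is
        # one common choice and is assumed here.
        z = X @ self.W + self.b
        return np.exp(-z ** 2)

    def fit(self, X, T):
        """X: (N, n_features) feature matrix; T: (N, n_classes) one-hot label matrix."""
        self.W = self.rng.uniform(-1.0, 1.0, size=(X.shape[1], self.L))   # weights in [-1, 1]
        self.b = self.rng.uniform(-1.5, 1.5, size=self.L)                 # biases in [-1.5, 1.5]
        H = self._hidden(X)
        if self.C is None:
            self.beta = np.linalg.pinv(H) @ T                             # beta = H_dagger T
        else:
            N = H.shape[0]
            self.beta = H.T @ np.linalg.solve(H @ H.T + np.eye(N) / self.C, T)
        return self

    def decision(self, X):
        return self._hidden(X) @ self.beta                                # raw per-class outputs

    def predict(self, X):
        return np.argmax(self.decision(X), axis=1)                        # argmax heuristic
```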
The final label is attributed using an argmax heuristic in which the class with the highest output value takes the label. Two metrics were used to appraise the system in its standard and reliable forms: the overall accuracy, presented in Equation (5), and the weighted accuracy, which averages the accuracy over each one of the 18 classes (c), presented in Equation (6). The overall system architecture for the sEMG signal classification is presented in Figure 5.
$$\text{overall accuracy } (\%) = \frac{\text{Correct classifications}}{\text{Total samples tested}} \times 100\% \qquad (5)$$
$$\text{weighted accuracy } (\%) = \overline{\left(\frac{\text{Correct class}_{c}\ \text{classifications}}{\text{Total class}_{c}\ \text{classifications}}\right)} \times 100\% \qquad (6)$$
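Under the reading of Equations (5) and (6) above (the weighted accuracy being the mean of the per-class accuracies over the 18 classes), the two metrics can be computed as in the sketch below; the function names are illustrative.

```python
import numpy as np

def overall_accuracy(y_true, y_pred):
    """Sample-to-sample accuracy of Equation (5), in percent."""
    return 100.0 * np.mean(np.asarray(y_true) == np.asarray(y_pred))

def weighted_accuracy(y_true, y_pred, n_classes=18):
    """Per-class accuracies averaged over the classes (Equation (6)), in percent."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    per_class = [np.mean(y_pred[y_true == c] == c)
                 for c in range(n_classes) if np.any(y_true == c)]
    return 100.0 * np.mean(per_class)
```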

2.6.2. Reliable Signal Classification

To define the adequate class with which to label a given input, the ELM method relies on an argmax heuristic. Once the model processes the input, the output vector holds, for each class, a value that reflects the likelihood of the sample belonging to that class. The output class label for each sample is then attributed based on the highest output (argmax) value, which in an ideal scenario is far higher than those of the remaining classes, characterizing a reliable classification. Using this inherent mechanism of the ELM classifier to attribute labels, and based on the interval calculated by Equation (7), a threshold (th) value was designed to identify the non-reliable classifications, which are immediately ignored by our classifier. Thus, the reliable version of the classifier can maintain a more coherent and robust classification and autonomously discard outliers and poorly-fitted data. An example of the variation of the maximum argmax value along the classifications performed is presented in Figure 6. The movement transitions are well known for their lack of signal representativity and for class overlap from a machine learning perspective. Those factors make the class distinction in these sections particularly challenging, precluding an ideal class separation [34], and result in lower argmax values, which lead to lower reliability of the classification during those periods. The same situation occurs in the classification ripples in the intermediate portion of the signal, which provoke an erroneous classification each time the classifier fails to reach an appropriate argmax value and adequate class separation. In Figure 6, the solid black line presents the ideal classification, while the solid red line represents the predicted class output. It is possible to note that the mismatches (errors) in the signal classification tended to occur at the drops in classification reliability, which is represented by the dashed blue line.
$$th_{\text{reliability}} = \mu_{\text{reliability}} - \sigma_{\text{reliability}} \qquad (7)$$
The threshold value ($th_{\text{reliability}}$) was derived from the average reliability ($\mu_{\text{reliability}}$), i.e., the mean of the maximum argmax values of the signal, considering its standard deviation ($\sigma_{\text{reliability}}$). The standard deviation provides a relaxation factor for the classifier, given that some classifications performed with values slightly lower than $\mu_{\text{reliability}}$ are often correct. Our heuristic takes as its premise that, if a representative dataset trained the classifier, a reliable test sample must provide patterns that fit the trained model well enough to yield consistently higher argmax values in comparison to the remaining classes. In an ideal scenario, the correct class is related to an argmax output value higher than the average, while the remaining classes achieve considerably lower values, characterizing an adequate class separation. Such a classification, when it occurs, is a reliable classification. For non-satisfactory argmax values, we ignore the classification performed, since it is better not to perform any action than to provide an erroneous action that may harm the users or the environment that surrounds them. At the same time, this method establishes criteria for data discarding and can be extended in the future to decide when the model needs to be retrained and which classes are in fact capable of being learned by the classifier. The reliable versions of the ELM and RELM classifiers are denoted in our paper as R-ELM and R-RELM, respectively. In Figure 6, the regular value of reliability is perceptible for the rest class, as is its sudden fall in the movement-transition sections of the classifications.
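A minimal sketch of this reliability rule is given below, assuming the $\mu - \sigma$ reading of Equation (7) and taking the raw ELM/RELM output matrix as input; marking discarded samples with a placeholder label of −1 is an illustrative convention rather than the authors’ implementation.

```python
import numpy as np

def reliable_classification(outputs):
    """Apply the reliability threshold of Equation (7) to raw classifier outputs.

    outputs : array (n_samples, n_classes) of ELM/RELM output values.
    Returns predicted labels with -1 marking the discarded (non-reliable) samples.
    """
    outputs = np.asarray(outputs)
    reliability = outputs.max(axis=1)                 # maximum output value per sample
    th = reliability.mean() - reliability.std()       # th = mu - sigma (Equation (7))
    labels = outputs.argmax(axis=1)
    labels[reliability < th] = -1                     # ignore non-reliable classifications
    return labels
```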

3. Results

3.1. IEE Database Validation

Ideally, the data should be as representative and distinct as possible to enable the classifier’s optimal accuracy. Since several experimental factors such as electrode positioning, subjects’ physiology, and fatigue may alter the sEMG signal, we decided to evaluate the consistency of the IEE database. Figure 7 presents the average distribution of sEMG signals’ amplitude concerning movement repetitions and the 12 trials performed in each assay.
The first analysis showed that the different assay types (A, B, C, and D) did not result in a significant difference in the average sEMG signal amplitude. However, the movement repetition itself was identified as an influencing factor that resulted in one outlier value. An ANOVA ($p = 0.05$) was used to evaluate the regularity of the rectified sEMG signal amplitude for each assay separately. The results confirmed that the movement repetitions were executed differently by the subjects, which made both the repetitions and, ultimately, the trials significant factors. The signal dispersion with respect to movement repetition was similar to that of E2 (Exercise 2) of the NINAPro database, as presented by Atzori et al. [13]. Regarding outliers, one was detected for each movement repetition of Assay A and for Movement Repetitions 1 and 6 of Assay D. This was expected to occur in some assays as a consequence of the physiological differences among the subjects, which are magnified by the movement execution, generating distinct EMG activation profiles. Even so, Assays B and C did not present any outliers considering the average signal amplitude.
Regarding the segmentation time, the related literature reports a trend of the accuracy rate increasing proportionally to the window length. We evaluated this aspect using the data from all repetitions of Assay A of Subject 1 to check the influence of the segmentation window length on different metrics of the system, as presented in Figure 8. The evaluated metrics were: (1) training accuracy of the model (Figure 8a); (2) overall accuracy (Figure 8b); (3) weighted accuracy (Figure 8c); (4) reliable overall accuracy (Figure 8d); (5) non-reliable data rate (Figure 8e); and (6) reliable weighted accuracy (Figure 8f). Our test evaluated intervals of 100 ms, 200 ms, 300 ms, 400 ms, and 500 ms for the same set of our original time-domain (TD) features. The statistical significance of all the tests performed was evaluated through an ANOVA (Tukey test, $p = 0.05$). In this same test, the influence of ELM vs. RELM was also evaluated as a control variable.
For the training accuracy rate, the segmentation length had statistical significance (most influenced by the 400-ms and 500-ms scenarios), while the classification method itself did not have a significant influence on the results. This suggests that longer windows for feature extraction are more likely to benefit the formation of a more accurate model. The same result was also verified for the non-reliable data detected by the classifier; longer segmentation tended to lead the system to narrower ranges at lower plateaus of discarded data, which seems to reflect the absence of overfitting, since overfitting would increase the discarding of samples considerably. The overall and weighted accuracy metrics presented a significant response regarding the classification method, with RELM showing higher accuracy rates, but not regarding the segment-size variation. These results suggest that, although longer segments influence the training accuracy, this does not necessarily translate into higher test accuracy rates. Thus, despite yielding lower results, the 100-ms segmentation can be applied without any significant implications, at least under the test conditions. Neither the overall nor the weighted reliable metrics presented statistically significant differences with the variation of segment length or classification method. This result suggests that our reliable forms of the classifiers, at least for the conditions tested, were able to mitigate the influence of the segmentation length on the accuracy results, proving to be robust alternatives for sEMG signal classification.

3.2. Signal Classification

Table 2 presents the classification results using the ELM and RELM models in their standard and reliable forms (denoted R-ELM and R-RELM, respectively). The results presented in Table 2 are organized by assay, subject, method, and two different metrics: the overall sample-to-sample accuracy (overall accuracy), formed by the comparison of the ideal and predicted labels, and the weighted accuracy, which considers the weighted average of the results among all classes. For the reliable versions of the classifiers, the average discard rate of non-reliable data is also presented. Each result considers the three repetitions performed by each subject in each type of assay (Trials 1–3 for Subject 1, 4–6 for Subject 2, 7–9 for Subject 3, and 10–12 for Subject 4).
A full-factorial design of experiments ($p = 0.05$ and $R^2 = 85.4\%$) was conducted to define the factors that significantly affected the accuracy rate. The assay type, both classifiers (in their standard and reliable forms), both accuracy metrics, and the subjects were considered controlled variables, while the accuracies were treated as the response of the experiment. All the variables were significant in the test, excluding the classifiers’ individual interactions with assays and subjects and the mutual interactions of metric with assays and of metric with subjects. These results were coherent with the results presented in Table 2, which demonstrated distinct results and, therefore, the influence of all these factors on the results achieved.
The overall accuracy reached rates above 90% in all cases and very close to 100% in several scenarios for the reliable versions of the classifiers. In contrast, the weighted accuracy, which considers all the movement classes in the final average, was lower in all tests. The use of both metrics is pertinent given that the overall accuracy is generally used in papers in the area, while the weighted accuracy expresses the accuracy without the bias caused by the rest class. There was a visible difference in the weighted accuracy among the subjects, but a reasonable coherence in the rates considering the sequential (A and B) and random (C and D) assays for both the baseline and the reliable forms of the classifiers. The number of repetitions used proved to be insignificant concerning the use of four (Assays A and C) or six (Assays B and D) movement instances to train the classifier.

3.3. NINAPro Databases’ Classification

To validate and compare our classifiers with related papers, we classified the whole of Databases 1, 2, and 6 (DB1, DB2, and DB6) of the NINAPro database, formed by three different “exercises” (E1, E2, and E3) comprising 50 different upper-limb movements (DB1, DB2) and 18 hand and force movements with assay repetitions (DB6). The average results achieved for each database are presented in Table 3. For the sake of comparison, we chose not to detail the results per exercise, but to use the average of the results over exercises E1, E2, and E3, as presented in the related papers. The results derived from DB6 were tested under two different conditions. The first Condition (CD1) consisted of the intra-session signal classification, while the second Condition (CD2) used data from different assays to train and test the classifier, as presented by Palermo et al. [15].
As reported by Palermo et al. [15], the intra-session results were far superior to those of the cross-session signal classification. However, our results presented higher accuracy than those of the related paper, which used only the mean amplitude value and the waveform length as input features, as presented in Table 3.
The DB1 database gathers the same 50 distinct movements as DB2, with differences in the data acquisition. The average accuracies presented in Table 3 were calculated based on the related papers’ best scenario, comprising the 27 subjects of the database, who performed ten repetitions of each movement. Our baseline classifiers were slightly less accurate on this database; however, the reliable form of the regularized ELM was able to match the state-of-the-art rates.
Regarding the NINAPro DB2, the results presented in Table 3 are the best-case scenarios of the baseline classifiers described in the related papers, composed of the average results derived from 40 subjects. Although the baseline forms of our classifiers were slightly less accurate than those of the related papers, the reliable versions of our classifiers were capable of outperforming these rates. Moreover, the length of the windows used in the segmentation process is a factor to be considered, since accuracy generally tends to increase proportionally to the length of the sEMG signal used in the classification.
All the comparative tests were conducted using the same movement samples (i.e., Movement Repetitions 1, 3, 4, and 6 to train and Movement Repetitions 2 and 5 to test in DB2) and the same data ratio for training and testing (i.e., 50% for training and 50% for testing in DB1 and DB6) as the related papers. However, differently from the related papers, the features used were those indicated in Section 2.5 of our paper.

4. Discussion

Regarding the IEE database, it was perceptible that the order of execution of the movements had a direct impact on the accuracy achieved. The sequential-order assays (A and B) had significantly higher accuracy than those formed by random movements (C and D). In our view, this is probably caused by the subjects’ learning capacity, implying a more precise and regular movement execution concerning timestamps and the emphasis of the movement itself. In random assays, the movements can sometimes confuse the subject, generating an error factor, which appears magnified in Assay D, which had double the random repetitions and the lowest plateau of accuracy rates among all assays. The sequential-order Assays A and B had consistent average results, with a difference of ≈2% in weighted accuracy and less than 1% in overall accuracy, which demonstrates the regularity of the classifiers in the identification of signals derived from different movement repetitions (two repetitions tested in Assay A and four repetitions tested in Assay B). These results may indicate that four representative movement repetitions are enough to train a reliable model to be tested with future n instances while still maintaining the classification consistency (not considering some experimental precluding factors such as noise, electrode displacement, etc.).
A significant difference between the accuracies achieved by different subjects was also perceptible. Even with the same experimental protocol and electrode positioning, different subjects tended to present distinct bioelectric characteristics that influenced the signal classification. According to Atzori et al. [14], younger and skinnier subjects tend to achieve higher accuracy rates. Assuming the assessment of Atzori et al. [14] is true, the higher accuracy observed for Subject 2 was potentially linked to his thinner fat tissue, which may have helped in acquiring more representative sEMG signals. Thicker fat tissue usually attenuates the signal amplitude, tending to preclude optimal signal pattern recognition. Subject 2 also presented a lower standard deviation in the weighted accuracy and generally had among the lowest discard rates of non-reliable data. The worst results were achieved by Subject 4, who had the highest Body Mass Index (BMI) among the population involved in the study. An exception occurred in Assay C, where Subject 1 reached a lower accuracy due to the outlier Repetition 2, which reached an 11% accuracy rate in the weighted accuracy metric. This value is highly unusual and should be treated as an outlier.
The number of hidden neurons is an essential factor to define from the machine learning perspective. Since this is not the main point of our paper, we decided to define it as the number that provided the highest accuracy rate. However, there are more sophisticated approaches in information theory, generally based on the Bayesian or Akaike information criteria and pruning algorithms, that aim to balance the amount of useful information used in the creation of the model and the computational cost accepted as a function of the proposed application. The regularized form of the baseline ELM classifier tended to achieve slightly higher results (usually ≈2%) due to its capacity to be more resilient to input pattern variations. For all assays and trials, the reliable forms of the classifiers were able to boost the classification accuracy in all scenarios by eliminating the non-reliable classifications. The outcome of this sample discard was observable in both accuracy metrics, but the practical effect was more visible in the weighted accuracy, where the best results reached improvements close to 20%. The most outstanding improvement occurred for Subject 4 in Assay D, with more than a 23% accuracy improvement. The data discard was consistent in every assay, varying between ≈7.8% and ≈14.4% (both in Assay C). The R-RELM discarded more samples in every scenario due to the more regular value of its argmax output, a consequence of the regularized method being less sensitive to outlier sample inputs.
Regarding the two metrics used, the weighted accuracy presents a more balanced alternative to evaluate the system, considering that sEMG databases are frequently unbalanced datasets that tend to have more samples from the rest class than from actual movements, causing a bias in the overall accuracy rate. Given its lower average amplitude, the rest class is the most reliable “movement” to classify [13], reaching values close to 99%. However, the overall accuracy (sample-to-sample classification) still provides valuable information over time, enabling a more precise evaluation of where errors occur in the classification. Thus, it is possible to devise solutions to prevent and correct these errors, which are frequently related to the signal representativity, as presented in Figure 9, which compares the desired label (solid black line) with the predicted label (dashed red line). Despite the excellent signal prediction for the two sequential movement repetitions (Classes 9 and 10), classification ripples occurred at the end of the second repetition of Movement 9 and in the middle portion of Movement 10. These ripples shifted the predicted label to Classes 10 and 11, respectively. The initial portion of the first repetition of both movements also contained a slight delay in the movement classification, and in the second repetition of Movement 10, the predicted label was advanced in time compared to the ideal label. Classification errors at the signal transitions are well known in the literature, although, upon offline analysis, these errors could be partly attributed to the non-ideal relabeling process. The sample-to-sample plot was especially useful to identify specific classification drawbacks such as misclassification ripples. The corresponding accuracies achieved for Movements 9 and 10 using the weighted accuracy metric were 85.82% and 90.78%, respectively. Thus, although the weighted accuracy is a fairer comparison, the overall accuracy used along with label comparison plots can still provide valuable information.
In comparison with related work, Atzori et al. [14] and Kuzborskij et al. [12] reported average accuracy rates of 76% and 75%, respectively, in their best-case scenarios using Database 1 (DB1) of NINAPro. DB1 is composed of the same 52 classes as DB2, with differences regarding the experimental protocol in the signal acquisition and the number of movement repetitions; among these classes are the same 17 movements performed in our database. To enable a fair comparison, we used the same movement repetitions to train and test our classifiers. However, two basic differences were the input features of the related works, which also included features in the frequency domain, and the segment length used for feature extraction. Despite dealing only with features in the time domain and using segments half as long as those of the related papers, our R-RELM method was still capable of matching their best results. The segment length is an important factor to consider, since longer signal portions tend to lead to higher accuracy rates at the cost of system responsiveness [24]. Instead of using a sliding window of 400 ms, we preferred to use segments of 200 ms and to leave a margin for improvement in this context. Features in the frequency domain are commonly used to detach the signal representation from exclusively amplitude-based metrics and tend to improve the accuracy rates when used in combination with time-domain features. Despite testing several features in both domains, Kuzborskij et al. [12] found their best results using two time-domain features. Thus, we decided to use more straightforward features and avoid others that could overload the signal processing, which was enough to match both related papers, with a difference of 1% from Atzori et al. [14], who used six different features in both domains.
Regarding the papers that used the NINAPro DB2 database, as presented in Table 3, our classifiers were slightly less accurate, but comparable, in their baseline forms and outperformed the referenced papers in their reliable forms. Once more, our classifiers reached these results even while using a signal segment half as long as in the other papers and avoiding the use of frequency-domain features, common to all of them. The paper of Zhai et al. [36] reported the accuracy closest to that of our method. Zhai et al. used a deep learning classifier based on a Convolutional Neural Network (CNN) and an adaptive method to classify the NINAPro DB2; considering the baseline methods, their best result reached an average of 78.7%. As is typical of CNN-based methods, their method relies on a considerable amount of processing to generate the models, which in their best-case scenario reached an average accuracy close to 80% using the adaptive method. The reliable versions of our classifiers were able to achieve an average accuracy close to 80.0% for the NINAPro DB2 using our feed-forward method, without any retraining, keeping a more straightforward approach that is nevertheless as accurate as the adaptive CNN.
The paper of Palermo et al. [15] explored the recently-created NINAPro DB6, which is composed of signals acquired on five different days with upper-limb movements focused on the gripping of different objects. Based on Kuzborskij et al. [12], the authors used the mean amplitude value and the waveform length of the signal as input features to test the accuracy of the method in classifying signals from the same trial and across sections of the database. In their best-case scenario, using the combination of both features, they reached an average accuracy of 52.43% for the data derived from the same trial of each user (CD1) and 25.40% when using cross-section data (CD2), mixing the data from different assays. This decrease in accuracy rate is expected, since even for the same person the characteristics of the sEMG signal tend to change over time, changing the signal morphology and, as a consequence, the features extracted. Our method was able to reach higher accuracies in both cases for all classifiers, using the same length for the segmentation window and our four time-domain features. The reliable versions were capable of enhancing the accuracy rate by ≈16% in both scenarios.
The IEE database presented consistent results for both metrics; in the best-case scenario, the R-RELM method achieved an average accuracy of 85.41% considering all the trials and assays performed and the weighted accuracy metric. For each independent assay, the weighted accuracy results were 90.00%, 86.95%, 80.21%, and 78.14% for Assays A, B, C, and D, respectively. For the overall accuracy metric, the same tests reached average accuracies of 99.05%, 99.21%, 98.68%, and 98.89% for the same scenarios. The higher accuracy rates in favor of the sequential and smaller assays tend to indicate the influence of the subjects on the classification process. For the sequential-order assays, the subjects tended to execute the movements more similarly, while the random assays led the subjects to execute the movements in a more improvised way. Those small differences in movement execution tend to affect the sEMG signal morphology and, consequently, its classification. Considering the possible variations of the movement execution (and the related muscular activation) and the number of repetitions, although the initial 66% of the data could be used successfully to train the classifiers for the adjacent samples, further studies must still address more detailed problems derived from prolonged usage. Overall, the IEE database presented rates comparable to or higher than those of the NINAPro database, despite only containing movements from Exercise 2 of the NINAPro database.

5. Conclusions

We evaluated the IEE sEMG database, which we are making available for download on our website (www.ufrgs.br/ieelab). Additionally, we presented the reliable versions of two ELM-based classifiers and the effects of data discarding on the accuracy rates reached by the regularized and standard versions of each classifier. The reliable classification appeared to mitigate the influence of even the earlier stages of processing, such as the segmentation length for feature extraction. We also presented a stochastic and practical filtering method to obtain smoother features from the signal, improving its representativity, in order to achieve reasonable accuracy rates that may, in the future, help bring the tested algorithms to real-time prosthetic applications. All the results were evaluated with two metrics: the overall accuracy and the weighted accuracy. We also evaluated our pre-processing strategies and classifiers using three different databases from NINAPro.
The accuracy results achieved provide the experimental validation of the gathered data and of the reliable forms of the ELM classifiers. We hope the IEE database can help the scientific community by providing a benchmark database, as well as all the related supplementary materials, such as codes/routines, videos, and procedures, which will hopefully support the development of natural prosthetic control methods and the general development of this research field. When using our database, we strongly encourage all users to use both evaluation metrics, since the rest class can bias the overall accuracy result; however, the overall accuracy used along with label comparison plots is also useful to check where classification errors occur and to propose specific solutions. The reliable forms of our ELM classifiers (especially the regularized form) were shown to match or even outperform some state-of-the-art methods using a very straightforward approach. Since we were able to identify the reliable samples for classification autonomously, further developments should focus on regenerative classifiers. In such a regenerative classification, the classifier must be capable of identifying a poorly fitted class and autonomously requesting more up-to-date samples to refresh the classification model. We believe such a classifier should be capable of maintaining stable accuracies for long-term classification and also of helping to mitigate the cross-session/multiuser problem of limited accuracy.

Author Contributions

V.H.C. performed the signal acquisition, all the signal pre-processing and classification after the relabeling, developed the design of experiments, developed the reliable forms of classifiers, and wrote and revised the paper. M.T. performed the signal acquisition of the IEE database, developed the LabVIEW interface, relabeled the signals, and wrote and revised the paper. J.M. performed the signal acquisition and processing, helped to evaluate the system, and wrote and revised the paper. A.B. coordinated the project, providing ideas for the signal acquisition, processing, and classification, and also wrote and revised the paper.

Funding

This research received no external funding.

Acknowledgments

The authors would like to acknowledge the Brazilian Coordination for Improvement of Higher Level Personnel (CAPES) for the provision of the scholarships that made this work possible. The authors also want to thank professor Leia Bagesteiro for the sEMG device used in this paper, Karina Moura for the videos used in data acquisition, and all the volunteers who participated.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Minati, L.; Yoshimura, N.; Koike, Y. Hybrid Control of a Vision-Guided Robot Arm by EOG, EMG, EEG Biosignals and Head Movement Acquired via a Consumer-Grade Wearable Device. IEEE Access 2016, 4, 9528–9541. [Google Scholar] [CrossRef]
  2. Tacchino, G.; Gandolla, M.; Coelli, S.; Barbieri, R.; Pedrocchi, A.; Bianchi, A.M. EEG Analysis During Active and Assisted Repetitive Movements: Evidence for Differences in Neural Engagement. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 761–771. [Google Scholar] [CrossRef] [PubMed]
  3. Cene, V.H.; Favieiro, G.; Nedel, L.; Balbinot, A. Reever control: A biosignal controlled interface. In Proceedings of the 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Seogwipo, Korea, 11–15 July 2017; IEEE: Jeju, Korea, 2017; pp. 706–709. [Google Scholar] [CrossRef]
  4. Polygerinos, P.; Wang, Z.; Galloway, K.C.; Wood, R.J.; Walsh, C.J. Soft robotic glove for combined assistance and at-home rehabilitation. Robot. Auton. Syst. 2015, 73, 135–143. [Google Scholar] [CrossRef]
  5. Atzori, M.; Müller, H. Control Capabilities of Myoelectric Robotic Prostheses by Hand Amputees: A Scientific Research and Market Overview. Front. Syst. Neurosci. 2015, 9, 162. [Google Scholar] [CrossRef]
  6. Farina, D.; Jiang, N.; Rehbaum, H.; Holobar, A.; Graimann, B.; Dietl, H.; Aszmann, O.C. The Extraction of Neural Information from the Surface EMG for the Control of Upper-Limb Prostheses: Emerging Avenues and Challenges. IEEE Trans. Neural Syst. Rehabil. Eng. 2014, 22, 797–809. [Google Scholar] [CrossRef] [PubMed]
  7. Phinyomark, A.; Quaine, F.; Charbonnier, S.; Serviere, C.; Tarpin-Bernard, F.; Laurillau, Y. EMG feature evaluation for improving myoelectric pattern recognition robustness. Expert Syst. Appl. 2013, 40, 4832–4840. [Google Scholar] [CrossRef]
  8. Young, A.J.; Smith, L.H.; Rouse, E.J.; Hargrove, L.J. Classification of Simultaneous Movements Using Surface EMG Pattern Recognition. IEEE Trans. Biomed. Eng. 2013, 60, 1250–1258. [Google Scholar] [CrossRef] [PubMed]
  9. Han, H.; Jo, S. Supervised Hierarchical Bayesian Model-Based Electomyographic Control and Analysis. IEEE J. Biomed. Health Inform. 2014, 18, 1214–1224. [Google Scholar] [CrossRef]
  10. Englehart, K.; Hudgins, B. A robust, real-time control scheme for multifunction myoelectric control. IEEE Trans. Biomed. Eng. 2003, 50, 848–854. [Google Scholar] [CrossRef]
  11. Micera, S.; Carpaneto, J.; Raspopovic, S. Control of Hand Prostheses Using Peripheral Information. IEEE Rev. Biomed. Eng. 2010, 3, 48–68. [Google Scholar] [CrossRef] [PubMed]
  12. Kuzborskij, I.; Gijsberts, A.; Caputo, B. On the challenge of classifying 52 hand movements from surface electromyography. In Proceedings of the 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Diego, CA, USA, 28 August–1 September 2012; pp. 4931–4937. [Google Scholar] [CrossRef]
  13. Atzori, M.; Gijsberts, A.; Castellini, C.; Caputo, B.; Hager, A.G.M.; Elsig, S.; Giatsidis, G.; Bassetto, F.; Müller, H. Electromyography data for non-invasive naturally-controlled robotic hand prostheses. Sci. Data 2014, 1. [Google Scholar] [CrossRef]
  14. Atzori, M.; Gijsberts, A.; Kuzborskij, I.; Elsig, S.; Mittaz Hager, A.G.; Deriaz, O.; Castellini, C.; Muller, H.; Caputo, B. Characterization of a Benchmark Database for Myoelectric Movement Classification. IEEE Trans. Neural Syst. Rehabil. Eng. 2015, 23, 73–83. [Google Scholar] [CrossRef]
  15. Palermo, F.; Cognolato, M.; Gijsberts, A.; Muller, H.; Caputo, B.; Atzori, M. Repeatability of grasp recognition for robotic hand prosthesis control based on sEMG data. In Proceedings of the 2017 International Conference on Rehabilitation Robotics (ICORR), London, UK, 17–20 July 2017; pp. 1154–1159. [Google Scholar] [CrossRef]
  16. Cene, V.H.; Balbinot, A. Using the sEMG signal representativity improvement towards upper-limb movement classification reliability. Biomed. Signal Process. Control 2018, 46, 182–191. [Google Scholar] [CrossRef]
  17. Cene, V.H.; Balbinot, A. Optimization of Features to Classify Upper—Limb Movements Through sEMG Signal Processing. Braz. J. Instrum. Control 2016, 4, 14–20. [Google Scholar] [CrossRef]
  18. Hofmann, D.; Jiang, N.; Vujaklija, I.; Farina, D. Bayesian Filtering of Surface EMG for Accurate Simultaneous and Proportional Prosthetic Control. IEEE Trans. Neural Syst. Rehabil. Eng. 2016, 24, 1333–1341. [Google Scholar] [CrossRef] [PubMed]
  19. Hashim, F.R.; Soraghan, J.J.; Petropoulakis, L.; Daud, N.G.N. EMG cancellation from ECG signals using modified NLMS adaptive filters. In Proceedings of the 2014 IEEE Conference on Biomedical Engineering and Sciences (IECBES), Kuala Lumpur, Malaysia, 8–10 December 2014; pp. 735–739. [Google Scholar] [CrossRef]
  20. Ortolan, R.; Mori, R.; Pereira, R.; Cabral, C.; Pereira, J.; Cliquet, A. Evaluation of adaptive/nonadaptive filtering and wavelet transform techniques for noise reduction in emg mobile acquisition equipment. IEEE Trans. Neural Syst. Rehabil. Eng. 2003, 11, 60–69. [Google Scholar] [CrossRef]
  21. Botter, A.; Vieira, T.M. Filtered Virtual Reference: A New Method for the Reduction of Power Line Interference with Minimal Distortion of Monopolar Surface EMG. IEEE Trans. Biomed. Eng. 2015, 62, 2638–2647. [Google Scholar] [CrossRef]
  22. Zhou, P.; Suresh, N.; Lowery, M.; Rymer, W. Nonlinear Spatial Filtering of Multichannel Surface Electromyogram Signals During Low Force Contractions. IEEE Trans. Biomed. Eng. 2009, 56, 1871–1879. [Google Scholar] [CrossRef]
  23. Zivanovic, M.; Gonzalez-Izal, M. Nonstationary Harmonic Modeling for ECG Removal in Surface EMG Signals. IEEE Trans. Biomed. Eng. 2012, 59, 1633–1640. [Google Scholar] [CrossRef] [PubMed]
  24. Farrell, T.R.; Weir, R.F. The Optimal Controller Delay for Myoelectric Prostheses. IEEE Trans. Neural Syst. Rehabil. Eng. 2007, 15, 111–118. [Google Scholar] [CrossRef]
  25. Akusok, A.; Miche, Y.; Lendasse, A. High-Performance Extreme Learning Machines: A Complete Toolbox for Big Data Applications. IEEE Access 2015, 3, 1011–1025. [Google Scholar] [CrossRef]
  26. Phinyomark, A.; Khushaba, R.N.; Scheme, E. Feature Extraction and Selection for Myoelectric Control Based on Wearable EMG Sensors. Sensors 2018, 18, 1615. [Google Scholar] [CrossRef]
  27. Gijsberts, A.; Atzori, M.; Castellini, C.; Muller, H.; Caputo, B. Movement Error Rate for Evaluation of Machine Learning Methods for sEMG-Based Hand Movement Classification. IEEE Trans. Neural Syst. Rehabil. Eng. 2014, 22, 735–744. [Google Scholar] [CrossRef] [PubMed]
  28. Castellini, C.; Ravindra, V. A wearable low-cost device based upon Force-Sensing Resistors to detect single-finger forces. In Proceedings of the 5th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics, Sao Paulo, Brazil, 12–15 August 2014; pp. 199–203. [Google Scholar] [CrossRef]
  29. Cene, V.H.; Balbinot, A. Upper-limb movement classification through logistic regression sEMG signal processing. In Proceedings of the 2015 Latin America Congress on Computational Intelligence (LA-CCI), Curitiba, Brazil, 13–16 October 2015; pp. 1–5. [Google Scholar] [CrossRef]
30. Cene, V.H.; Favieiro, G.; Balbinot, A. Using non-iterative methods and random weight networks to classify upper-limb movements through sEMG signals. In Proceedings of the 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Seogwipo, Korea, 11–15 July 2017; pp. 2047–2050. [Google Scholar] [CrossRef]
  31. Zhang, L.; Suganthan, P.N. A comprehensive evaluation of random vector functional link networks. Inf. Sci. 2016, 367–368, 1094–1105. [Google Scholar] [CrossRef]
  32. Zhang, L.; Suganthan, P. A survey of randomized algorithms for training neural networks. Inf. Sci. 2016, 364–365, 146–155. [Google Scholar] [CrossRef]
  33. Huang, G.B. An Insight into Extreme Learning Machines: Random Neurons, Random Features and Kernels. Cogn. Comput. 2014, 6, 376–390. [Google Scholar] [CrossRef]
  34. Riillo, F.; Quitadamo, L.R.; Cavrini, F.; Gruppioni, E.; Pinto, C.A.; Pasto, N.C.; Sbernini, L.; Albero, L.; Saggio, G. Optimization of EMG-based hand gesture recognition: Supervised vs. unsupervised data preprocessing on healthy subjects and transradial amputees. Biomed. Signal Process. Control 2014, 14, 117–125. [Google Scholar] [CrossRef]
  35. Zhai, X.; Jelfs, B.; Chan, R.H.M.; Tin, C. Short latency hand movement classification based on surface EMG spectrogram with PCA. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 327–330. [Google Scholar] [CrossRef]
  36. Zhai, X.; Jelfs, B.; Chan, R.H.M.; Tin, C. Self-Recalibrating Surface EMG Pattern Recognition for Neuroprosthesis Control Based on Convolutional Neural Network. Front. Neurosci. 2017, 11, 1–11. [Google Scholar] [CrossRef]
Figure 1. The three different groups of movements performed by each subject in the IEE database.
Figure 2. Electrode fixation (Cene et al. [17]).
Figure 3. The main block diagram for the Antonyan Vardan Transform (AVT) filter, in which samples s that fall outside the range Mean Signal Amplitude (MSA) ± Mean Signal Deviation (MSD) are replaced by the MSA value of the signal. sEMG, surface EMG.
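As a rough illustration of the filtering rule described in Figure 3, the sketch below replaces any sample that falls outside MSA ± MSD with the current MSA estimate. It is a minimal sketch only: the sliding-buffer length (`window`), the NumPy-based estimators of MSA and MSD, and the synthetic test signal are assumptions for illustration, not the exact formulation adopted in this work.

```python
import numpy as np

def avt_filter_sketch(semg, window=200):
    """Illustrative AVT-style outlier suppression (see Figure 3).

    Any sample outside MSA +/- MSD, both estimated over a sliding buffer
    of `window` past samples, is substituted by the MSA value.
    """
    semg = np.asarray(semg, dtype=float)
    filtered = semg.copy()
    for i in range(len(semg)):
        buf = semg[max(0, i - window + 1):i + 1]
        msa = buf.mean()                   # Mean Signal Amplitude estimate
        msd = np.abs(buf - msa).mean()     # Mean Signal Deviation estimate
        if abs(semg[i] - msa) > msd:       # sample outside MSA +/- MSD
            filtered[i] = msa              # substitute by the MSA value
    return filtered

# Example on a synthetic rectified sEMG-like signal with one artificial spike
rng = np.random.default_rng(0)
signal = np.abs(rng.normal(0.1, 0.05, 2000))
signal[500] = 2.0
print(avt_filter_sketch(signal)[500])      # the spike is pulled back towards the MSA
```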
Figure 4. Effect of the AVT filter on the channels of the raw rectified sEMG signal and on the RMS feature for a single movement repetition: (a) raw rectified sEMG signals; (b) the same portion of the sEMG signals after filtering; (c) RMS feature extracted from the signals presented in (a); (d) RMS feature extracted from the signals presented in (b).
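The RMS feature shown in panels (c) and (d) of Figure 4 can be computed per channel over short analysis windows. The sketch below assumes a NumPy array shaped (samples, channels) and illustrative window/step sizes; it is not the exact feature-extraction code used for the figure.

```python
import numpy as np

def rms_feature(semg, win_len=400, step=400):
    """Windowed RMS per channel; `semg` has shape (samples, channels)."""
    semg = np.asarray(semg, dtype=float)
    feats = []
    for start in range(0, semg.shape[0] - win_len + 1, step):
        win = semg[start:start + win_len]
        feats.append(np.sqrt(np.mean(win ** 2, axis=0)))  # RMS of each channel
    return np.vstack(feats)                               # shape: (windows, channels)
```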
Figure 5. sEMG signal classification using a generic ELM classifier and the AVT filter. In this example, N is the number of samples to classify in C classes using d features and L hidden neurons.
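Figure 5 summarizes the generic ELM pipeline: the d input features are projected onto L randomly initialized hidden neurons, and only the output weights are computed analytically. The sketch below illustrates that idea with a sigmoid activation, a Moore–Penrose pseudo-inverse solution, and one-hot targets for the C classes; the activation, the number of hidden neurons, and the random initialization are illustrative assumptions rather than the exact configuration used here.

```python
import numpy as np

class SimpleELM:
    """Minimal ELM sketch: random hidden layer + least-squares output weights."""

    def __init__(self, n_hidden=100, seed=0):
        self.L = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        self.classes_ = np.unique(y)
        T = (y[:, None] == self.classes_[None, :]).astype(float)  # one-hot targets (N x C)
        d = X.shape[1]
        self.W = self.rng.normal(size=(d, self.L))                 # random input weights
        self.b = self.rng.normal(size=self.L)                      # random biases
        H = 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))           # hidden activations (N x L)
        self.beta = np.linalg.pinv(H) @ T                          # output weights (L x C)
        return self

    def decision(self, X):
        X = np.asarray(X, dtype=float)
        H = 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))
        return H @ self.beta                                       # output scores (N x C)

    def predict(self, X):
        return self.classes_[np.argmax(self.decision(X), axis=1)]
```

A regularized variant in the spirit of RELM would replace the pseudo-inverse step with a ridge solution such as `np.linalg.solve(H.T @ H + lam * np.eye(self.L), H.T @ T)`, where `lam` is a user-chosen regularization constant.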
Figure 6. An example of the reliability metric in the ELM classifier: the solid black line provides the ideal label of the output class, the solid red line provides the predicted class, and the blue dashed curve represents the reliability of the system for each classification performed. Classification errors tend to occur when the reliability metric drops, indicating non-reliable classifications that can be identified autonomously by the classifier.
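The reliability metric illustrated in Figure 6 is defined in the methodology of this paper; the sketch below only conveys the general idea with an assumed margin-based score (the difference between the two largest output activations) and an arbitrary rejection threshold, so the function names and the threshold value are illustrative.

```python
import numpy as np

def reliability_margin(scores):
    """Assumed reliability score: margin between the two largest output activations."""
    ordered = np.sort(np.asarray(scores, dtype=float), axis=1)
    return ordered[:, -1] - ordered[:, -2]

def split_reliable(scores, classes, threshold=0.2):
    """Predicted labels plus a boolean mask of the samples deemed reliable."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(classes)[np.argmax(scores, axis=1)]
    reliable = reliability_margin(scores) >= threshold
    return labels, reliable        # non-reliable samples can be discarded or flagged
```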
Figure 7. Distribution of the sEMG amplitude after rectification considering the four different assays: the evaluation considers all movement repetitions and the 12 trials performed, with Trials 1–3, 4–6, 7–9, and 10–12 performed by Subjects 1, 2, 3, and 4, respectively.
Figure 8. Influence of the segmentation length on different metrics of the system: notably, the standard versions of the classifiers tend to benefit slightly from longer segments for feature extraction, whereas the reliable forms of the classifiers are not significantly affected.
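In practice, testing different segmentation lengths (Figure 8) amounts to extracting features over sliding windows of varying duration and increment, such as the 200 ms window with a 10 ms increment listed in Table 3. The helper below converts those durations into sample indices; the 2 kHz sampling rate in the example is an assumption for illustration.

```python
def sliding_windows(n_samples, fs=2000, win_ms=200, inc_ms=10):
    """Yield (start, stop) sample indices for windows of `win_ms` every `inc_ms`."""
    win = int(round(fs * win_ms / 1000.0))
    inc = int(round(fs * inc_ms / 1000.0))
    for start in range(0, n_samples - win + 1, inc):
        yield start, start + win

# Example: number of 200 ms / 10 ms windows in 10 s of signal sampled at 2 kHz
print(sum(1 for _ in sliding_windows(10 * 2000)))   # 981 windows
```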
Figure 9. A label comparison plot: by analyzing the ideal and predicted labels, it is possible to identify the classification errors and propose specific solutions. In the figure, two repetitions of Movements 9 and 10 performed by Subject 1 in Assay A are used to illustrate the capability of the method.
Table 1. IEE database subject description (F.C.: Forearm Circumference; F.L.: Forearm Length).
| SUBJECT | LATERALITY | GENDER | AGE | HEIGHT (m) | WEIGHT (kg) | F.C. (cm) | F.L. (cm) |
|---|---|---|---|---|---|---|---|
| 1 | Right-Handed | Male | 31 | 1.87 | 78.0 | 28.3 | 25.3 |
| 2 | Right-Handed | Male | 26 | 1.80 | 70.0 | 27.0 | 24.0 |
| 3 | Right-Handed | Female | 29 | 1.64 | 60.0 | 25.4 | 22.7 |
| 4 | Right-Handed | Male | 34 | 1.82 | 84.0 | 27.8 | 25.7 |
Table 2. Mean accuracy rates achieved considering the three repetitions for each assay. The results are divided by assay, subject, method, and accuracy. For the reliable versions of the classifiers, the amount of discarded (non-reliable) data is also presented. R-RELM, Reliable Regularized ELM.
| ASSAY | SUBJECT | METHOD | WEIGHTED ACCURACY (%) | OVERALL ACCURACY (%) | NON-RELIABLE DATA (%) |
|---|---|---|---|---|---|
| A | 1 | ELM | 72.17 ± 10.80 | 94.00 ± 1.13 | - |
| A | 1 | RELM | 74.60 ± 10.82 | 94.51 ± 1.10 | - |
| A | 1 | R-ELM | 85.50 ± 16.37 | 98.80 ± 0.33 | 13.22 ± 1.76 |
| A | 1 | R-RELM | 89.25 ± 15.38 | 99.16 ± 0.26 | 13.15 ± 1.61 |
| A | 2 | ELM | 80.16 ± 8.70 | 95.73 ± 2.20 | - |
| A | 2 | RELM | 80.98 ± 8.11 | 95.86 ± 2.10 | - |
| A | 2 | R-ELM | 91.06 ± 7.82 | 98.87 ± 0.93 | 11.03 ± 1.58 |
| A | 2 | R-RELM | 92.57 ± 6.70 | 99.17 ± 0.83 | 11.67 ± 1.21 |
| A | 3 | ELM | 72.20 ± 11.37 | 93.65 ± 1.15 | - |
| A | 3 | RELM | 74.17 ± 11.10 | 94.10 ± 1.02 | - |
| A | 3 | R-ELM | 86.56 ± 10.26 | 98.76 ± 0.61 | 13.53 ± 0.20 |
| A | 3 | R-RELM | 91.40 ± 7.29 | 99.17 ± 0.40 | 13.93 ± 0.47 |
| A | 4 | ELM | 64.60 ± 12.10 | 93.75 ± 0.57 | - |
| A | 4 | RELM | 67.34 ± 12.56 | 94.23 ± 0.47 | - |
| A | 4 | R-ELM | 83.33 ± 13.26 | 98.53 ± 0.46 | 13.01 ± 0.36 |
| A | 4 | R-RELM | 88.47 ± 11.57 | 99.02 ± 0.46 | 13.31 ± 0.35 |
| B | 1 | ELM | 71.24 ± 12.40 | 94.22 ± 1.48 | - |
| B | 1 | RELM | 72.56 ± 13.07 | 94.51 ± 1.39 | - |
| B | 1 | R-ELM | 82.63 ± 13.16 | 98.45 ± 0.60 | 11.34 ± 0.43 |
| B | 1 | R-RELM | 85.43 ± 14.30 | 99.06 ± 0.40 | 12.27 ± 0.78 |
| B | 2 | ELM | 82.27 ± 7.62 | 96.48 ± 0.56 | - |
| B | 2 | RELM | 83.09 ± 7.07 | 96.67 ± 0.46 | - |
| B | 2 | R-ELM | 92.61 ± 5.52 | 99.18 ± 0.35 | 11.10 ± 0.66 |
| B | 2 | R-RELM | 96.00 ± 3.22 | 99.60 ± 0.29 | 11.72 ± 0.51 |
| B | 3 | ELM | 72.54 ± 11.62 | 93.74 ± 0.47 | - |
| B | 3 | RELM | 74.55 ± 10.86 | 94.17 ± 0.41 | - |
| B | 3 | R-ELM | 84.82 ± 15.47 | 98.57 ± 0.26 | 12.98 ± 0.88 |
| B | 3 | R-RELM | 88.04 ± 14.45 | 99.00 ± 0.13 | 13.28 ± 0.81 |
| B | 4 | ELM | 62.65 ± 13.12 | 93.79 ± 0.26 | - |
| B | 4 | RELM | 63.67 ± 12.98 | 93.98 ± 0.40 | - |
| B | 4 | R-ELM | 74.89 ± 12.82 | 97.89 ± 1.43 | 10.57 ± 3.81 |
| B | 4 | R-RELM | 82.83 ± 10.21 | 99.16 ± 0.13 | 13.28 ± 0.43 |
| C | 1 | ELM | 42.23 ± 18.31 | 89.25 ± 5.07 | - |
| C | 1 | RELM | 43.24 ± 17.91 | 89.43 ± 5.62 | - |
| C | 1 | R-ELM | 47.33 ± 23.57 | 97.62 ± 1.10 | 13.30 ± 1.76 |
| C | 1 | R-RELM | 50.48 ± 23.40 | 97.64 ± 1.63 | 13.23 ± 1.50 |
| C | 2 | ELM | 73.81 ± 10.34 | 94.42 ± 20.90 | - |
| C | 2 | RELM | 74.89 ± 10.22 | 94.70 ± 0.98 | - |
| C | 2 | R-ELM | 84.21 ± 9.88 | 97.31 ± 2.47 | 7.76 ± 6.73 |
| C | 2 | R-RELM | 93.38 ± 6.27 | 99.28 ± 0.46 | 12.23 ± 0.28 |
| C | 3 | ELM | 63.59 ± 13.20 | 93.12 ± 0.43 | - |
| C | 3 | RELM | 65.49 ± 12.40 | 93.50 ± 0.68 | - |
| C | 3 | R-ELM | 78.73 ± 14.00 | 98.00 ± 0.19 | 11.30 ± 1.02 |
| C | 3 | R-RELM | 80.22 ± 16.68 | 98.59 ± 0.22 | 12.11 ± 0.74 |
| C | 4 | ELM | 49.17 ± 16.70 | 91.05 ± 0.95 | - |
| C | 4 | RELM | 51.97 ± 17.11 | 91.56 ± 1.27 | - |
| C | 4 | R-ELM | 66.65 ± 15.34 | 97.96 ± 0.26 | 13.90 ± 1.93 |
| C | 4 | R-RELM | 77.14 ± 17.34 | 98.54 ± 0.67 | 14.40 ± 1.10 |
| D | 1 | ELM | 60.73 ± 16.30 | 92.97 ± 0.39 | - |
| D | 1 | RELM | 62.12 ± 16.18 | 93.25 ± 0.60 | - |
| D | 1 | R-ELM | 73.26 ± 24.76 | 98.46 ± 0.38 | 14.02 ± 3.14 |
| D | 1 | R-RELM | 77.31 ± 21.86 | 98.80 ± 0.35 | 13.53 ± 2.43 |
| D | 2 | ELM | 70.05 ± 12.22 | 94.00 ± 2.13 | - |
| D | 2 | RELM | 71.28 ± 11.95 | 94.27 ± 2.04 | - |
| D | 2 | R-ELM | 83.84 ± 11.09 | 97.83 ± 2.05 | 10.28 ± 1.95 |
| D | 2 | R-RELM | 87.92 ± 11.25 | 98.90 ± 0.95 | 12.51 ± 0.85 |
| D | 3 | ELM | 58.98 ± 14.60 | 92.67 ± 1.12 | - |
| D | 3 | RELM | 60.85 ± 14.22 | 93.00 ± 1.17 | - |
| D | 3 | R-ELM | 74.50 ± 15.52 | 98.36 ± 0.68 | 11.38 ± 0.54 |
| D | 3 | R-RELM | 79.00 ± 17.25 | 98.97 ± 0.25 | 12.03 ± 0.68 |
| D | 4 | ELM | 50.06 ± 15.20 | 91.50 ± 0.56 | - |
| D | 4 | RELM | 50.70 ± 15.66 | 91.59 ± 0.67 | - |
| D | 4 | R-ELM | 71.34 ± 12.41 | 98.12 ± 0.98 | 12.57 ± 0.40 |
| D | 4 | R-RELM | 74.38 ± 14.98 | 98.65 ± 0.43 | 13.33 ± 0.69 |
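Table 2 distinguishes a per-class (weighted) accuracy from the overall accuracy, with the non-reliable samples removed before scoring the R-ELM and R-RELM rows. The snippet below shows one common way to compute the two figures, taking overall accuracy as the fraction of correct samples and class accuracy as the mean of the per-class recalls; this is an illustrative interpretation, not the exact metric definition used in the paper.

```python
import numpy as np

def overall_accuracy(y_true, y_pred):
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

def mean_class_accuracy(y_true, y_pred):
    """Mean of per-class recalls, one plausible reading of 'weighted accuracy'."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    per_class = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(per_class))

# Tiny example: three samples of class 0 and one sample of class 1
print(overall_accuracy([0, 0, 0, 1], [0, 0, 1, 1]))      # 0.75
print(mean_class_accuracy([0, 0, 0, 1], [0, 0, 1, 1]))   # (2/3 + 1) / 2 ~= 0.83
```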
Table 3. Results and comparison with related papers in the area considering the three different NINAPro databases. CD, Condition.
| PAPER | SEGMENT | DATABASE | AVERAGE ACCURACY (%) |
|---|---|---|---|
| Kuzborskij et al. [12] | 400 ms + 10 ms | DB1 | 75.00 |
| Zhai et al. [35] | 200 ms + 100 ms | DB2 | 77.41 |
| Gijsberts et al. [27] | 400 ms + 10 ms | DB2 | 77.48 |
| Atzori et al. [13] | 200 ms | DB2 | 75.27 |
| Atzori et al. [14] | 400 ms + 10 ms | DB1 | 76.00 |
| Zhai et al. [36] | 256/184 points (Hamming window) | DB2 | 78.71 |
| Palermo et al. [15] | 200 ms + 10 ms | DB6 CD1 | 52.43 |
| Palermo et al. [15] | 200 ms + 10 ms | DB6 CD2 | 25.40 |

Average accuracy (%) of this work, per classifier:

| PAPER | SEGMENT | DATABASE | ELM | RELM | R-ELM | R-RELM |
|---|---|---|---|---|---|---|
| This work | 200 ms + 10 ms | DB1 | 68.77 | 71.63 | 73.13 | 75.03 |
| This work | 200 ms + 10 ms | DB2 | 73.67 | 74.43 | 79.33 | 79.77 |
| This work | 200 ms + 10 ms | DB6 CD1 | 64.72 | 65.21 | 68.43 | 69.83 |
| This work | 200 ms + 10 ms | DB6 CD2 | 37.74 | 38.93 | 39.91 | 41.75 |

