Article

EEG-BCI Features Discrimination between Executed and Imagined Movements Based on FastICA, Hjorth Parameters, and SVM

by Tat’y Mwata-Velu 1,2,3, Armando Navarro Rodríguez 1, Yanick Mfuni-Tshimanga 4, Richard Mavuela-Maniansa 2, Jesús Alberto Martínez Castro 1, Jose Ruiz-Pinales 3 and Juan Gabriel Avina-Cervantes 3,*
1 Centro de Investigación en Computación (CIC), Instituto Politécnico Nacional (IPN), Avenida Juan de Dios Bátiz esquina Miguel Othón de Mendizábal, Colonia Nueva Industrial Vallejo, Alcaldía Gustavo A. Madero, Ciudad de México 07738, Mexico
2 Institut Supérieur Pédagogique Technique de Kinshasa (ISPT-KIN), Av. de la Science 5, Gombe, Kinshasa 3287, Democratic Republic of the Congo
3 Telematics and Digital Signal Processing Research Groups (CAs), Department of Electronics Engineering, University of Guanajuato, Salamanca 36885, Mexico
4 Institut Supérieur des Techniques Appliquées (ISTA-NDOLO), Avenue de l’aérodrome, Kinshasa 6593, Democratic Republic of the Congo
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(21), 4409; https://doi.org/10.3390/math11214409
Submission received: 26 September 2023 / Revised: 18 October 2023 / Accepted: 23 October 2023 / Published: 24 October 2023
(This article belongs to the Special Issue Advanced Research in Image Processing and Optimization Methods)

Abstract: Brain–Computer Interfaces (BCIs) enable communication between a user and their immediate environment through brain signals. In the case of device control, an accurate control-based BCI depends essentially on how well the user performs the corresponding mental tasks. According to the literature on BCI illiteracy, one subject may perform a given paradigm better than another. Therefore, this work aims to identify recorded Electroencephalogram (EEG) signal segments related to executed and imagined motor tasks for BCI system applications. The proposed approach implements band-pass filters and the Fast Independent Component Analysis (FastICA) algorithm to separate independent sources from raw EEG signals. Next, EEG features of selected channels are extracted using the Hjorth parameters. Finally, a Support Vector Machine (SVM)-based classifier identifies executed and imagined motor features. Concretely, the Physionet dataset of executed and imagined motor EEG signals provided the training, testing, and validation data. The numerical results show that executed and imagined motor tasks can be discriminated accurately. Therefore, the proposed method offers a reliable alternative for extracting EEG features for BCIs based on executed and imagined movements.

1. Introduction

Electroencephalographic (EEG) signals measure the electrical activity related to cerebral functioning [1]. Recently, research based on EEG signals has provided a better understanding of brain functioning [2], allowing accurate diagnoses and cerebral disease treatments [3,4]. Moreover, the processing and classification of EEG signals are of great importance for medical analyses and Brain–Computer Interface (BCI) applications; for example, to control devices [5] and to improve the quality of people’s lives [6]. Nowadays, various EEG-BCI systems are based on executed and imagined movements, offering an extensive range of solutions to assist people in daily life. Cho et al. [7] controlled a robot hand using EEG signals recorded by performing and imagining hand movements. They reported average classification accuracies of 56.83% for Motor Execution (ME), and 51.01% for Motor Imagery (MI), respectively, using the Common Spatial Patterns (CSP) and the Regularized Linear Discriminant Analysis (RLDA) algorithms. For their part, Demir Kanik et al. [8] decoded the upper limb movement intentions before and during the movement execution, adding MI to ME trials. Their framework aimed to improve ME classification for robot control. Traditionally, the effectiveness of an EEG-based BCI system relies on the accurate classification of brain signals, which depends on how features extracted from raw data are distinguishable. Therefore, the EEG feature extraction challenge becomes far more complex considering the EEG’s non-stationary nature and the artifacts caused by the body’s muscular activity.
In the recent literature, various techniques have been used to efficiently differentiate ME and MI features. For example, Thomas et al. [9] investigated parameters associated with ME and MI of left/right-hand movements using the CSP algorithm. They drew conclusions on the suitability of the frontal and parietal channels for discriminating ME and MI tasks, with an average accuracy of 81%. In another work, Usama et al. [10] evaluated Error-related Potential (ErrP) decoding by combining ME and MI features of right–left wrist extensions and foot dorsiflexion. For that purpose, accuracies of 89 ± 9% and 83 ± 9% were obtained using the random forest classifier. Such results showed significant inter-subject variability when detecting ErrPs from temporal features. Recently, a BCI Transfer learning-based Relation Network (BTRN) allowed the decoding of MI and ME tasks, applying a transfer learning architecture to discriminate high-complexity features [11]. Ultimately, ME and MI feature discrimination helps establish their correlation, neurological similarity, and divergence. This analysis is of the utmost importance, especially for BCI applications, where exploiting either paradigm can improve the expected results.
Recent studies have reported a parallel sensorimotor cortex activation caused by ME and MI tasks [12,13]. However, distinguishing between EEG features related to ME and MI tasks remains challenging for BCI-based paradigms. Moreover, the sensorimotor rhythm similarity between ME and MI depends essentially on the subject’s MI ability, as concluded by Toriyama et al. [13]. Furthermore, the inter-subject variability of BCI efficiency across paradigms and sessions, well known in the associated literature as BCI illiteracy, explains why a given subject can perform better in one specific task but not in another [14].
Recently, Hardwick et al. [15] proposed a quantitative synthesis of the neural correlates of MI, action observation, and ME. Based on neuroimaging experiments, different mental tasks provoke particular activations in specific brain areas. In the same sense, Matsuo et al. [16] investigated cerebral hemodynamics during the ME and MI of a self-feeding task employing chopsticks, emphasizing varied levels of brain activation in some areas during ME and MI tasks. In the recent past, techniques such as the successive decomposition index [17], transfer learning [11], frequency-band power analysis [13], and the Filter Bank Common Spatial Pattern (FBCSP) [18] have been frequently used to discriminate ME from MI features in BCI-based systems. From a general point of view, however, machine learning approaches have demonstrated significant advantages in analyzing the deep ME and MI features to be classified.
On the other hand, Hjorth [19] introduced three parameters based on variance properties to analyze features of EEG curves using a time-domain calculation. These parameters are activity, mobility, and complexity. Activity measures the square of the EEG amplitude standard deviation, in other words, the variance of the EEG signal in a considered window. Mobility is defined as the square root of the ratio between the variance of the signal’s first derivative and the signal variance, and is associated with the mean frequency. Complexity, in turn, represents the ratio between the mobility of the signal’s first derivative and the mobility of the signal itself. In sum, several approaches have been proposed in the recent literature to accurately discriminate ME and MI EEG tasks for BCI applications [12,20,21,22]. Therefore, it is hypothesized in the present framework that ME and MI EEG features are distinguishable by employing the aforementioned Hjorth parameters.
This work aims to discriminate EEG features between ME and MI movements using the fast fixed-point algorithm for Independent Component Analysis (FastICA) [23,24], the Hjorth parameters, and a Support Vector Machine (SVM) classifier. The processing pipeline comprises four steps, implemented as follows. First, frequency-band filters were applied to reduce undesired frequency components. Next, sources were estimated using the FastICA algorithm to determine the components with increased electrical activity in the brain cortex areas. Then, features of both filtered and unfiltered signals were extracted using the Hjorth parameters, based on the activity, mobility, and complexity of the EEG signals. In the last step, the SVM classifier discriminates and classifies the EEG segments related to ME and MI tasks.
Mainly, the paper’s contributions are summarized as follows.
  • Reliable discrimination between ME and MI features considering both filtered and unfiltered EEG signals.
  • A method based on Machine Learning combining FastICA, Hjorth parameters, and SVM algorithms is proposed.
  • Two classification approaches to discriminate ME and MI tasks were developed and evaluated.
This paper is organized as follows. Section 2 summarizes recent works related to the topic. The methods used in this framework are presented in Section 3, emphasizing the dataset description and signal processing steps. Section 4 presents and discusses the achieved numerical results. Finally, Section 5 formulates the conclusions of this work.

2. Related Works

In the recent literature on the classification of ME and MI features for BCI systems, numerous works have addressed their discrimination. Early on, Sleight et al. [20] analyzed features carried by EEG frequency bands to distinguish executed and imagined segments. They reported an average classification accuracy of 56.0% using ICA and SVM, classifying features from the C3, C4, and Cz channels. In another work, Alomari et al. [12] analyzed EEG signals corresponding to executed and imagined fist movements for BCI applications. They used the Wavelet Transform (WT) and a machine learning-based approach. Average classification accuracies of 81.5% and 88.93% were achieved with their framework when discriminating ME and MI of left–right fist tasks, respectively. Besides, ME and MI movements of the fists have been discriminated using the power spectrum analysis of Dynamic Mode Decomposition (DMD) components [21]. The highest reported accuracy was about 83%, utilizing an SVM classifier with a linear kernel function.
Furthermore, pattern recognition and categorization of ME and MI for one-hand finger movements were proposed by Sonkin et al. [25], using artificial neural network (ANN) and SVM models. The highest accuracies, 42% and 62%, were obtained when decoding ME and MI of the thumb and index fingers. For their part, Baravalle et al. [26] analyzed EEG signals of the right and left hands, using emergent dynamical properties, wavelet decomposition, and the entropy–complexity plane to distinguish ME from MI tasks. Their results allowed the quantification of intrinsic features corresponding to ME and MI tasks. Recently, Chen et al. [27] investigated how to increase the number of commands for BCI systems by using ME and MI of left–right gestures. They used a hierarchical SVM (hSVM) to distinguish ME and MI tasks, achieving an average accuracy of 74.08 ± 7.29% through five-fold Cross-Validation (CV).
On the other hand, numerous datasets based on ME and MI EEG signals have been published [28,29]. In this sense, Schalk et al. [30] proposed a development platform named BCI2000 for general-purpose BCI applications, in addition to releasing a large EEG ME-MI database. Since then, numerous researchers have employed EEG data from the Physionet database to implement BCI systems and test algorithms. Table 1 summarizes recent works on ME and MI task discrimination.

3. Materials and Methods

This work aims to classify ME and MI EEG signals provided by the Physionet dataset. In this sense, the preprocessing step contemplates three stages. First, the C3, Cz, C4, F3, Fz, F4, P3, Pz, and P4 channels are selected from the 64 provided channels according to the defined tasks, because of their spatial location over the brain cortices [31,32]. Next, sub-band filtering is applied to constitute theta (θ), alpha (α), and beta (β) sub-band data. The last preprocessing stage uses the FastICA algorithm to separate independent sources from the raw EEG signals and sub-band data. For the feature extraction step, the Hjorth criteria are employed to extract ME and MI features, reducing the complexity of the EEG data. Finally, the SVM classifies the patterns resulting from the Hjorth parameter combinations. Concretely, the classifier finds the hyperplane that maximizes the distance to the nearest pattern of each class and uses it to label each segment as ME or MI at the output. Figure 1 presents an overview of the proposed method, contemplating the aforementioned steps.

3.1. The Physionet Dataset

The proposed method utilizes EEG data from the Physionet dataset [30,33]. This dataset contains ME and MI signals from 64 channels in the EDF+ format, sampled at 160 samples per second per channel. In addition, an annotation is given for the baseline records. In total, 109 volunteers, labeled from S001 to S109, performed 1526 records of ME and MI activities. Fourteen experimental runs were executed per subject: two one-minute baseline runs (one with eyes open and one with eyes closed) and three two-minute runs for each of the four tasks. A detailed description of the experimental procedure is given in Table 2.
Therefore, the T1 and T2 codes are assigned to the class “executed” in tasks R03, R05, R07, R09, R11, and R13, whereas they correspond to the class “imagined” in tasks R04, R06, R08, R10, R12, and R14. Exceptionally, the EEG recordings of volunteers S088, S089, S092, S100, S104, and S106 were removed from the dataset due to irregular and unpredictable variations. In summary, 14 experimental runs were kept for each of the remaining 103 volunteers. Hence, the EEG data to be processed contain 103 × 14 = 1442 EEG records per channel.
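For illustration, the following sketch loads one record of the referred dataset and keeps the nine processed channels. The use of the MNE library, the local file path, and the channel-name cleanup are assumptions made here for demonstration; the paper does not state which loading tool was used.

```python
# Sketch (assumed tooling): loading one Physionet EEG Motor Movement/Imagery run.
import mne

# Hypothetical local path to run R04 (imagined fist movements) of subject S001.
edf_path = "S001/S001R04.edf"

raw = mne.io.read_raw_edf(edf_path, preload=True)      # 64 channels, 160 Hz, EDF+
events, event_id = mne.events_from_annotations(raw)    # T0 (rest), T1, T2 codes

# Channel names in this dataset carry trailing dots (e.g., "C3.."); strip them,
# then keep only the 9 channels processed in this work.
raw.rename_channels(lambda name: name.strip("."))
raw.pick_channels(["C3", "Cz", "C4", "F3", "Fz", "F4", "P3", "Pz", "P4"])

print(raw.info["sfreq"], raw.get_data().shape)          # 160.0, (9, n_samples)
```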

3.2. Channel Selection

Current channel selection approaches include correlation coefficient evaluation, Common Spatial Pattern (CSP), sequential-based algorithms, and binary particle swarm optimization-based methods [34]. However, in the specific case of ME and MI tasks, the related literature is unanimous that the Supplementary Motor Area (SMA), the Premotor Cortex (PMC), and the primary motor cortex (M1) are significantly activated during both ME and MI [35,36]. Therefore, two steps were implemented to select channels in this work. As a first approach, 19 channels corresponding to the 10–20 electrode placement system were considered from the 64 provided by the referred dataset, i.e., the C3, Cz, C4, Fp1, Fp2, F7, F3, Fz, F4, F8, T7, T8, P7, P3, Pz, P4, P8, O1, and O2 channels. Such a channel selection was made considering two significant aspects. On the one hand, selecting a few channels reduces the processing time and computational resources by considering only discriminant signals, as reported by Baig et al. [37]. On the other hand, the 10–20 electrode placement system offers a good channel spatial resolution to evaluate the contribution of the selected channel set according to the defined paradigms [38]. In addition, the channels above were selected to evaluate their contribution, considering the conclusions formulated in [39,40] about the parallel activation of various cortices caused by one or several stimuli. Figure 2 shows the 19 channels considered in the proposed framework.

3.3. Sub-Band Filtering

In total, 8652 EEG segments were decomposed into three sub-bands to analyze their intrinsic features. The proposed approach considers filtering EEG signals into the theta (θ), alpha (α), and beta (β) sub-bands, according to the literature related to ME and MI tasks, the defined paradigm, and the age of the participants [9,41]. EEG signals in the delta sub-band (δ ∈ [0.5, 4] Hz) appear in deep dreamless sleep, loss of body awareness, quietness, lethargy, or fatigue. For this reason, the delta sub-band is not considered in the case of ME and MI tasks [42]. Therefore, fifth-order Chebyshev type II band-pass filters were applied to obtain the 4–8, 8–13, and 13–30 Hz sub-bands for the θ, α, and β bands, respectively. The sub-band filters were designed by specifying a stop-band attenuation of 34 dB. The filter transition bands reached 80% of the pass-band gain, considering that the pass-band frequency ranges are different for the θ, α, and β waves.
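A minimal sketch of this sub-band filtering stage is given below, assuming SciPy's filter design routines; the exact transition-band placement described above is not reproduced, and the test segment is a placeholder.

```python
# Sketch: fifth-order Chebyshev type II band-pass filters with 34 dB stop-band
# attenuation, applied to a (channels x samples) EEG array sampled at 160 Hz.
import numpy as np
from scipy.signal import cheby2, sosfiltfilt

FS = 160.0                                               # Physionet sampling rate (Hz)
BANDS = {"theta": (4.0, 8.0), "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}

def subband_filter(eeg, band, fs=FS, order=5, rs=34):
    """Band-pass filter an EEG array into one rhythm band (theta, alpha, or beta)."""
    low, high = BANDS[band]
    sos = cheby2(order, rs, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, eeg, axis=-1)                # zero-phase filtering

# Example: decompose a placeholder 4 s, 9-channel segment into the three rhythms.
segment = np.random.randn(9, int(4 * FS))
theta, alpha, beta = (subband_filter(segment, b) for b in ("theta", "alpha", "beta"))
```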

3.4. FastICA

A noise reduction step was performed using the FastICA algorithm to deal with artifact problems and undesirable signals carried in the EEG data [22,43]. The FastICA algorithm and its variants are commonly used for denoising biomedical signals contaminated by muscular artifacts [44], especially for enhancing EEG signals [45]. In addition, FastICA offers a dimensionality reduction of classification problems for independent observations, making it more suitable for practical implementation [46]. Concretely, the implemented FastICA-based denoising approach decomposes the signals from the 19 channels into 19 independent components. It is assumed that the number of independent components equals the number of channels to meet at least one of the FastICA algorithm constraints. Finally, the independent components with the greatest energy concentration were selected.
Formally, let the input signal vector u be expressed as a linear combination given by the mixing matrix A and the source matrix s containing the noisy components to be removed. Here, A is constituted to maximize the utility metric by combining channels with relevant information [47]. That is to say,
$u = A s, \quad A \in \mathbb{R}^{19 \times 19}.$
To this end, the input signals must first be centered. Next, the whitening process converts the mixing matrix A into a new orthogonal whitened matrix Ã as follows:
$\tilde{u} = V \Lambda^{-1/2} V^{\top} s = \tilde{A} s,$
where V is an orthogonal matrix (the eigenvectors of A), Λ is the diagonal matrix of eigenvalues, and ũ is the whitened signal. FastICA then searches for the direction of the weight vector $W \in \mathbb{R}^{19}$ that maximizes the non-Gaussianity of the projection $W^{\top} \tilde{u}$, with the purpose of extracting the Independent Components (ICs).
FastICA uses the non-linear contrast function $f(x) = \log(\cosh(x))$ and its first (3) and second (4) derivatives to measure non-Gaussianity, such that
$f'(x) = \tanh(x),$
$f''(x) = 1 - \tanh^{2}(x).$
Finally, the ICs are found according to the steps presented in Algorithm 1. Selected_channel corresponds to a channel subset of the 19 preselected channels. In this work, the method essentially seeks to evaluate the ME and MI activities in the somatosensory cortex. Therefore, the Selected_channel set comprised the C3, Cz, C4, F3, Fz, F4, P3, Pz, and P4 channels because of their optimal location on the skull [35,48].
Furthermore, the criterion defined in step 7 of Algorithm 1 specifies the threshold for the ith independent component to be selected according to its energy concentration in the selected channels.
In practice, no independent component reached an energy concentration greater than 50%, and concentrations lower than 20% resulted in poor noise suppression. Therefore, a minimal concentration of 35% for the α, β, and γ rhythms in the motor brain cortex was established empirically to select half of the independent components ŝ with an acceptable noise suppression rate.
Algorithm 1 FastICA algorithm used to denoise EEG signals
1: Initialize the weight vector $W_p^{+}$ randomly, where p represents the number of projections.
2: Let $W_p \leftarrow E\{\tilde{u}\, f'(W_p^{+\top} \tilde{u})\} - E\{f''(W_p^{+\top} \tilde{u})\}\, W_p^{+}$.
3: Apply the Gram–Schmidt orthogonalization to the weight vectors $[W_1, W_2, \ldots, W_{19}]$ after the jth iteration for p projections:
$W_p \leftarrow W_p - \sum_{j=1}^{p-1} (W_p^{\top} W_j)\, W_j.$
4: Compute the vector normalization,
$W_p \leftarrow \frac{W_p}{\lVert W_p \rVert}.$
5: If $W_p$ converges, obtain the 19 Independent Components (ICs),
$\hat{s} = [W_1, W_2, \ldots, W_{19}]^{\top} u;$
otherwise, set $W_p^{+} \leftarrow W_p$ and repeat steps 2 to 4.
6: Evaluate each IC separately and by sub-band,
$(\theta, \alpha, \beta)_{channel} = \mathrm{var}\{u_{sb}(i)\} = \frac{1}{N-1} \sum_{i=1}^{N} \left( u_{sb} - \mu_s \right)^2,$
where i is the channel index and $u_{sb}$ denotes the sub-band EEG signals.
7: Criterion: select the ICs with higher average energy by sub-band,
$\chi(Selected\_channel) = \frac{1}{19} \sum_{channel=1}^{19} \chi(channel) \geq 0.35, \quad \chi \in \{\alpha, \beta, \gamma\}.$
8: Reconstruct the EEG signals according to $\hat{u} = A_p S_p$, where $A \in \mathbb{R}^{19 \times p}$ is the mixing matrix.
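A simplified sketch of this denoising procedure is shown below using scikit-learn's FastICA (the library reported for the implementation). The energy-based component selection is only an approximation of the 35% criterion of Algorithm 1, not the authors' exact routine, and all names are illustrative.

```python
# Sketch: FastICA-based denoising keeping only the highest-energy independent components.
import numpy as np
from sklearn.decomposition import FastICA

def fastica_denoise(eeg, keep_ratio=0.5, random_state=0):
    """eeg: (n_samples, 19) array of the pre-selected channels. Returns the signals
    reconstructed from the independent components carrying the most energy."""
    ica = FastICA(n_components=eeg.shape[1], fun="logcosh", random_state=random_state)
    sources = ica.fit_transform(eeg)                       # (n_samples, 19) ICs

    # Energy each component contributes to the channels: var(s_k) * ||A[:, k]||^2.
    energy = np.var(sources, axis=0) * np.sum(ica.mixing_ ** 2, axis=0)
    n_keep = int(np.ceil(keep_ratio * sources.shape[1]))
    discard = np.argsort(energy)[:-n_keep]                 # lowest-energy components
    sources[:, discard] = 0.0                              # suppress them

    return ica.inverse_transform(sources)                  # u_hat = A_p S_p

# Example with a placeholder segment of 19 channels and 640 samples (4 s at 160 Hz):
denoised = fastica_denoise(np.random.randn(640, 19))
```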

3.5. Feature Extraction

After the denoising step, the proposed method extracts EEG features by evaluating the parameters introduced by Hjorth [19]. Namely, the activity, mobility, and complexity of the data are calculated per channel for the raw EEG vector u and for the reconstructed EEG vector û, with and without sub-band decomposition. The activity represents the signal power or mean energy, obtained by calculating the signal variance as follows:
$A_i = \mathrm{var}\{u(i)\} = \frac{1}{N-1} \sum_{i=1}^{N} \left( u(i) - \mu_s \right)^2,$
where N is the sample length and $\mu_s$ denotes the mean of the signal amplitude. For its part, the mobility parameter $M_u$ is associated with the mean frequency as follows:
$M_u = \sqrt{\frac{1}{A_i}\, \mathrm{var}\!\left\{ \frac{d}{di} u(i) \right\}},$
where the signal derivative $\frac{d}{di} u(i)$ is obtained by calculating the difference between two consecutive signal samples, $\frac{d}{di} u(i) \approx u(i) - u(i-1)$. Finally, the EEG complexity represents how the frequency changes in comparison with a pure sine wave, calculated as follows:
$C_u = \frac{\mathrm{Mobility}\left\{ \frac{d}{di} u(i) \right\}}{\mathrm{Mobility}\{ u(i) \}}.$
Therefore, activity, mobility, and complexity of EEG data were calculated for each sub-band and without sub-band decomposition, obtaining 12 features per channel, as shown in Table 3.
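As an illustration, the following NumPy sketch computes the three Hjorth descriptors of one EEG segment according to the definitions above; the function name and the random test segment are placeholders.

```python
# Sketch: activity, mobility, and complexity of a single-channel EEG segment.
import numpy as np

def hjorth_parameters(u):
    """u: 1-D EEG segment (one channel, raw or one sub-band)."""
    du = np.diff(u)                       # first derivative via consecutive differences
    ddu = np.diff(du)                     # derivative of the derivative

    activity = np.var(u, ddof=1)                              # signal variance
    mobility = np.sqrt(np.var(du, ddof=1) / activity)         # mobility of u
    complexity = np.sqrt(np.var(ddu, ddof=1)
                         / np.var(du, ddof=1)) / mobility     # mobility(du) / mobility(u)
    return activity, mobility, complexity

# Example for one raw 4 s segment; the sub-band versions are computed analogously,
# giving the 12 features per channel listed in Table 3.
A_raw, M_raw, C_raw = hjorth_parameters(np.random.randn(4 * 160))
```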
Since the ranges of the feature mixtures varied considerably, especially those of activity versus mobility or complexity, the data were normalized based on the minimum–maximum criterion to account for the limits of each range. Therefore, the data features were normalized to the interval [0, 1] as follows:
$\tilde{u}_i^{k} = \frac{u_i^{k} - \min(u^{k})}{\max(u^{k}) - \min(u^{k})},$
where $\tilde{u}_i^{k}$ is the ith normalized value of the kth feature, and $\max(u^{k})$ and $\min(u^{k})$ are the maximum and minimum values of the kth feature, respectively. In sum, the Hjorth parameters helped compute the statistical properties of the EEG data to constitute feature subsets that are more distinguishable by the classifier [49].

3.6. Feature Classification

An SVM-based model carries out the feature classification step. Most works related to the classification of ME and MI features in the recent literature use the SVM model. Thus, in this case, the SVM model was selected to emphasize the contribution of the preprocessing and feature extraction steps using the Hjorth parameters. The SVM model uses a Gaussian kernel given by
$\Psi(X_m, X_n) = \exp\left( -\gamma\, \lVert X_m - X_n \rVert^{2} \right),$
where $X_m$ and $X_n$ are vector pairs in $\mathbb{R}^{n_f}$, with $n_f$ the number of features. The parameter γ is inversely related to the radius of influence of a vector over its neighbors. Based on the studies developed by Platt [50], the proposed soft-margin SVM model uses the Sequential Minimal Optimization (SMO) algorithm to solve
$\max_{\alpha} \sum_{m=1}^{M} \alpha_m - \frac{1}{2} \sum_{m=1}^{M} \sum_{n=1}^{M} \alpha_m \alpha_n y_m y_n \Psi(X_m, X_n),$
subject to the constraints
$0 \leq \alpha_m \leq C, \quad m = 1, 2, \ldots, M, \quad \text{and} \quad \sum_{m=1}^{M} \alpha_m y_m = 0,$
where $(X_m, y_m)$ represents a training pattern pair formed by the input vector $X_m$ and the corresponding class label $y_m$, M is the number of training patterns, and $\alpha_m$ is the Lagrange multiplier of pattern m, bounded between 0 and C. Therefore, each feature vector and its output label are associated according to their membership in the ME or MI movement class.
In this work, 12 features for each of the 9 considered channels were available for classification, giving a total of 108 features. Moreover, the proposed method opted for feature and channel combinations to efficiently perform the classification task, as presented in Table 3. The proposed method was implemented in Python 3.6 on an Ubuntu 22.04 desktop with an NVIDIA GTX 1080 Ti GPU using the scikit-learn library [51].
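A minimal sketch of this classification stage is shown below, assuming scikit-learn's SVC with an RBF (Gaussian) kernel; the feature matrix, labels, and parameter grid are placeholders, while the final γ and C values follow the setting reported in Section 4.3.

```python
# Sketch: RBF-kernel SVM over Hjorth feature vectors, with a grid search over gamma and C.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# Placeholder feature matrix (segments x features) and labels (0 = MI, 1 = ME, assumed coding).
X = np.random.rand(200, 15)
y = np.random.randint(0, 2, 200)

param_grid = {"gamma": [2.0 ** k for k in range(-3, 4)],
              "C": [2.0 ** k for k in range(5, 16)]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, scoring="accuracy", cv=5)
search.fit(X, y)                                            # explores gamma-C combinations

clf = SVC(kernel="rbf", gamma=2.0 ** 1, C=2.0 ** 13).fit(X, y)  # final reported setting
```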

4. Numerical Results

This section presents the numerical results obtained using the denoising, feature extraction, and classification methods, highlighting valuable observations from the signal processing chain. The reported results correspond to the R03, R04, R07, R08, R11, and R12 tasks, defined previously in Section 3.1.

4.1. Noise Suppression Using FastICA

The effect of denoising EEG data is explored by analyzing the brain mapping before and after applying the FastICA algorithm, as shown in Figure 3. It can be observed that the Fp1 and Fp2 channels pick up more ocular artifacts caused by eye blinking and eye movements than the other channels. Figure 4 compares the artifact reduction in the C3, Cz, C4, Fp1, Fp2, F7, and F3 channels before and after the denoising step. Illustratively, R04 task signals from the Fp1, Fp2, and F7 channels were selected to highlight the improvement obtained with the FastICA algorithm. In addition, analyzing the mean energies of the EEG segments revealed the denoising benefits of the FastICA algorithm on the sensorimotor cortex.

4.2. Feature Extraction

The proposed method uses histogram analysis to evaluate feature distribution using activity, mobility, and complexity descriptors. Therefore, various histograms were analyzed, considering essentially channel signals, sub-band, and raw data for both ME and MI classes. Illustratively, Figure 5 presents the histograms corresponding to filtered and unfiltered C3 channel signals for the EEG complexity parameter.

4.3. Feature Classification Evaluation

The classification step used the accuracy metric given by
$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN},$
where TP (true positives) counts the features correctly assigned to class M, TN (true negatives) counts the features of classes other than M correctly not assigned to class M, FP (false positives) counts the features erroneously assigned to class M, and FN (false negatives) counts the features of class M erroneously assigned to another class. Additionally, the SVM classifier performance was evaluated using the confusion matrix [52].
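For illustration, the accuracy and the confusion matrix can be computed with scikit-learn's metrics as sketched below; the label vectors are hypothetical.

```python
# Sketch: accuracy and confusion matrix for a binary ME/MI decision (assumed coding: 0 = MI, 1 = ME).
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])     # hypothetical ground truth
y_pred = np.array([0, 1, 0, 0, 1, 1, 1, 1])     # hypothetical classifier outputs

acc = accuracy_score(y_true, y_pred)            # (TP + TN) / (TP + TN + FP + FN)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(acc, (tp + tn) / (tp + tn + fp + fn))     # both 0.75 in this toy example
```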
Therefore, two classification approaches were developed in this work. The first one (Method 1) is based on the R03, R04, R07, R08, R11, and R12 tasks and uses the signals of all subjects, that is, 8652 samples from 103 subjects, 14 runs, and 6 considered tasks; in other words, a subject-independent classification approach. The samples of the first 10 runs constituted the training set, while those of the 11th–12th and 13th–14th runs were used as the testing and validation sets, respectively. Table 4 summarizes the data splitting for the training, testing, and validation sets.
The second classification approach (Method 2) followed a cross-subject scheme, where the samples of a determined number of subjects were utilized for training the model, while the samples of the others were used to test and validate it. Thus, the samples of the first 83 subjects were used to train the model, those from the 84th to the 93rd subject served to test it, and the samples from the 94th to the 103rd subject were utilized to validate it, as reported in Table 4. Hence, the first classification approach used 71.4%, 14.3%, and 14.3% of all the data for training, testing, and validation, whereas the second approach utilized 80% of the data to train, 10% to test, and a further 10% to validate the model.
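The two splitting strategies can be sketched as Boolean masks over the segment metadata, assuming each of the 8652 segments is annotated with its run and subject index (the array names below are illustrative).

```python
# Sketch: run-wise split (Method 1) versus cross-subject split (Method 2).
import numpy as np

n = 8652
runs = np.random.randint(1, 15, n)        # placeholder run index per segment (1..14)
subjects = np.random.randint(1, 104, n)   # placeholder subject index per segment (1..103)

# Method 1: split by runs, mixing all subjects.
train_1 = runs <= 10
test_1 = (runs == 11) | (runs == 12)
val_1 = (runs == 13) | (runs == 14)

# Method 2: split by subjects (cross-subject evaluation).
train_2 = subjects <= 83
test_2 = (subjects >= 84) & (subjects <= 93)
val_2 = subjects >= 94
```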
The SVM classifier was configured with a Gaussian kernel to discriminate the feature combinations of the ME and MI tasks presented in Table 5. Different combinations of the γ and C parameters were explored, computed from Equations (9)–(11). Figure 6 shows the explored combinations of the γ and C parameters used to find the classifier's optimal efficiency for both classification approaches.
Therefore, the γ and C parameter values were set to 2^1 and 2^13, respectively. The reported results were averaged over five runs of the model for each approach. Table 6 presents the ME and MI classification results using the accuracy metric, followed by the standard deviation.
Moreover, the confusion matrix was used to evaluate the classifier performance in discriminating ME and MI tasks. Table 7 summarizes the confusion matrices for the five feature combination sets considered in Table 5. The Aθ, Aα, Aβ, Mraw, and Craw features of the C3, Cz, and C4 channels were correctly discriminated with Method 1, achieving a true negative rate of 89.33% for MI tasks against a true positive rate of 89.7% for ME tasks.
Discrimination rates of true positive (68.17%) and true negative (68.41%) were obtained with the same combination of features by using Method 2, as reported in Table 7.
In addition, based on the C3, Cz, and C4 channel selection, the EEG signals were also processed without either the sub-band filtering or the denoising steps to evaluate both the SVM classifier performance and the preprocessing benefits.
Furthermore, Table 8 presents the comparative results of feature classification using Method 1 and Method 2 with and without denoising.

4.4. Discussion

According to Table 6, the best classification accuracy was obtained with the Aθ, Aα, Aβ, Mraw, and Craw features of the C3, Cz, and C4 channels by implementing Method 1 (89.7 ± 0.78%). The same feature combination of the C3, Cz, and C4 channels gave reliable results using Method 2 (68.8 ± 0.71%). Considering the spatial location of the C3, Cz, and C4 electrodes on the skull, these results confirm the hypothesis that the brain sensorimotor cortex is activated during both ME and MI tasks.
On the other hand, the combination of the Aθ, Aα, Aβ, Mraw, and Craw features allowed the best discriminant accuracy to be achieved with Method 1 rather than with Method 2. The difference of 20.9% between both methods is essentially due to the specificity of each method. According to Table 7, MI features were generally better identified by the ICA–Hjorth–SVM method than ME features, independently of the channel selection or feature combination. It is important to highlight that the key contribution of this work is the combination of channels and Hjorth features for raw and filtered signals. Concretely, classifying ME and MI gave different results depending on the feature combinations. The feature set composed of Aθ, Aα, and Aβ allowed discriminating 79.45% of ME against 79.75% of MI segments with Method 1. These results were improved by combining the features of all Hjorth parameters for raw and filtered data (see Table 7).
The first classification approach (Method 1), where 10 runs were used to train and the remaining runs to test and validate the model, proved more efficient than the second, where all data were partitioned by subject to constitute the training, testing, and validation sets (Method 2). The fact that fractions of the features provided by all subjects were used in Method 1 to train, test, and validate the proposed model can justify this improvement. In other words, the model learns from the patterns of all subjects. Unlike Method 1, where the model learned from a data portion of every subject, Method 2 used features from 83 subjects to train the model, while data from the remaining subjects were utilized to test and validate it. This strategy would have decreased the classifier performance, considering that the testing and validation data were provided by subjects who may not have performed the defined tasks as well.
For its part, the denoising process yielded accuracy increases of 17.6% and 14% in the classification of ME and MI features using Method 1 and Method 2, respectively, compared with processing without denoising. Finally, Table 9 compares the results achieved in this work with those published in related works using the same dataset.
In the model proposed by Sleight et al. [20], ICA and non-ICA approaches were contemplated before the channel selection step. Successively, a rhythmic band selection and average power evaluation helped differentiate the ME and MI signals by employing an SVM classifier. They reported an average accuracy of 56.0 ± 0.27%, classifying EEG signals normalized in frequency bands and denoised by ICA.
For their part, Alomari et al. [12] achieved an average accuracy of 88.9% by utilizing a combination of WT and ANN to discriminate ME and MI tasks. Regarding the closeness between their results and those achieved in this work, Alomari et al. used a wavelet transform analysis to extract ME and MI features with the Daubechies orthogonal wavelet Db12. An artificial neural network designed with 15 hidden layers allowed their model to obtain the reported results, without minimizing the contribution of the filtering and wavelet-based feature extraction steps. More recently, a cascade of DMD and SVM was proposed to differentiate real and imaginary fist movements, achieving an accuracy of 83% [21].
In this work, an average accuracy of 89.7 ± 0.78% has been obtained in discriminating ME and MI features of C3, Cz, and C4 channels. The proposed method contemplated the EEG sub-band filtering, denoising, and classification steps. Therefore, this framework presents a reliable alternative for discriminating ME and MI EEG tasks for BCI-based applications.

5. Conclusions

This work aimed to classify ME and MI EEG features using sub-band filtering, noise suppression, and feature extraction algorithms. Concretely, the EEG signals provided by the selected channels were filtered into the θ, α, and β sub-bands before being denoised by the FastICA algorithm. Thereafter, the Hjorth parameters, namely activity, mobility, and complexity, were computed to extract features, which were then discriminated by an SVM classifier to give the output. In other words, the developed approach allowed the discrimination of ME and MI tasks by reducing the non-relevant information contained in the EEG data. According to the results, the activity, mobility, and complexity of the EEG segments carried by the C3, Cz, and C4 channels, with and without sub-band decomposition, gave reliable classification accuracies. The highest average classification accuracy, 89.7 ± 0.78%, was achieved with the Aθ, Aα, Aβ, Mraw, and Craw feature combination.
Moreover, data processing without filtering and denoising steps was contemplated to highlight the overall preprocessing step benefit. In this sense, the proposed framework offers an alternative to BCI implementation for multiple users, considering the large amount of data used to experiment with the model.
However, the results reported in this work depend essentially on the feature combinations, the Hjorth criteria, and especially the adopted splitting of data into training, testing, and validation sets. In future work, we plan to compare the performance of the Hjorth algorithm coupled with FastICA variants by implementing the proposed model on development boards, namely FPGAs and NVIDIA developer cards, for embedded BCI systems.

Author Contributions

Data curation, T.M.-V., J.A.M.C. and J.R.-P.; Formal analysis, A.N.R., R.M.-M. and J.G.A.-C.; Investigation, T.M.-V., A.N.R., Y.M.-T., J.A.M.C. and J.R.-P.; Methodology, T.M.-V. and J.A.M.C.; Resources, R.M.-M.; Software, A.N.R., Y.M.-T. and J.G.A.-C.; Supervision, J.A.M.C.; Validation, J.R.-P. and J.G.A.-C.; Visualization, R.M.-M.; Writing—original draft, T.M.-V.; Writing—review and editing, Y.M.-T., J.R.-P. and J.G.A.-C. All authors have read and agreed to the published version of the manuscript.

Funding

This project was funded by the Centro de Investigación en Computación–Instituto Politécnico Nacional through the Dirección de Investigación (Folio SIP/1988/DI/DAI/2022), concurrently with the University of Guanajuato through the Convocatoria Institucional de Investigación Científica (CIIC) grant 094/2023, and by the Mexican Council of Humanities, Science, and Technology (CONAHCyT) under the postdoctoral grant 2022–2024 CVU No. 763527.

Institutional Review Board Statement

Ethical review and approval are waived for this kind of study.

Informed Consent Statement

No formal written consent was required for this study.

Data Availability Statement

Data are available upon formal request.

Acknowledgments

This project was fully supported by the Department of Electronics Engineering of the University of Guanajuato and the Mexican National Council of Humanities, Science, and Technology (CONAHCyT) through the postdoctoral grant 2022–2024 CVU No. 763527.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the study’s design, in the collection, analyses, or interpretation of data, in the writing of the manuscript, or in the decision to publish the results.

References

  1. Siuly, S.; Li, Y.; Zhang, Y. EEG signal analysis and classification. IEEE Trans. Neural Syst. Rehabil. Eng. 2016, 11, 141–144. [Google Scholar]
  2. Velnath, R.; Prabhu, V.; Krishnakumar, S. Analysis of EEG signal for the estimation of concentration level of humans. In IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2021; Volume 1084, p. 012003. [Google Scholar]
  3. Oh, S.L.; Hagiwara, Y.; Raghavendra, U.; Yuvaraj, R.; Arunkumar, N.; Murugappan, M.; Acharya, U.R. A deep learning approach for Parkinson’s disease diagnosis from EEG signals. Neural Comput. Appl. 2020, 32, 10927–10933. [Google Scholar] [CrossRef]
  4. Siuly, S.; Li, Y.; Zhang, Y.; Siuly, S.; Li, Y.; Zhang, Y. Significance of eeg signals in medical and health research. In EEG Signal Analysis and Classification: Techniques and Applications; Springer: Cham, Switzerland, 2016; pp. 23–41. [Google Scholar]
  5. Jeong, J.H.; Shim, K.H.; Kim, D.J.; Lee, S.W. Brain-controlled robotic arm system based on multi-directional CNN-BiLSTM network using EEG signals. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 1226–1238. [Google Scholar] [CrossRef] [PubMed]
  6. Shao, L.; Zhang, L.; Belkacem, A.N.; Zhang, Y.; Chen, X.; Li, J.; Liu, H. EEG-Controlled Wall-Crawling Cleaning Robot Using SSVEP-Based Brain-Computer Interface. J. Healthc. Eng. 2020, 2020, 6968713. [Google Scholar] [CrossRef] [PubMed]
  7. Cho, J.H.; Jeong, J.H.; Shim, K.H.; Kim, D.J.; Lee, S.W. Classification of hand motions within EEG signals for non-invasive BCI-based robot hand control. In Proceedings of the 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Miyazaki, Japan, 7–10 October 2018; pp. 515–518. [Google Scholar]
  8. Demir Kanik, S.U.; Yin, W.; Guneysu Ozgur, A.; Ghadirzadeh, A.; Björkman, M.; Kragic, D. Improving EEG-based Motor Execution Classification for Robot Control. In Social Computing and Social Media: Design, User Experience, and Impact: Proceedings of the 14th International Conference, SCSM 2022, Held as Part of the 24th HCI International Conference, HCII 2022, Virtual Event, 26 June–1 July 2022; Part I; Springer: Berlin/Heidelberg, Germany, 2022; pp. 65–82. [Google Scholar]
  9. Thomas, K.P.; Robinson, N.; Smitha, K.G.; Vinod, A.P. EEG-based Discriminative Features During Hand Movement Execution and Imagination. In Proceedings of the 2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV), Singapore, 18–21 November 2018; pp. 883–888. [Google Scholar]
  10. Usama, N.; Kunz Leerskov, K.; Niazi, I.K.; Dremstrup, K.; Jochumsen, M. Classification of error-related potentials from single-trial EEG in association with executed and imagined movements: A feature and classifier investigation. Med. Biol. Eng. Comput. 2020, 58, 2699–2710. [Google Scholar] [CrossRef] [PubMed]
  11. Lee, D.Y.; Jeong, J.H.; Shim, K.H.; Lee, S.W. Decoding movement imagination and execution from eeg signals using bci-transfer learning method based on relation network. In Proceedings of the ICASSP 2020—2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; pp. 1354–1358. [Google Scholar]
  12. Alomari, M.; Awada, E.; Younis, O. Subject-Independent EEG-Based Discrimination Between Imagined and Executed, Right and Left Fists Movements. Eur. J. Sci. Res. 2014, 118, 364–373. [Google Scholar]
  13. Toriyama, H.; Ushiba, J.; Ushiyama, J. Subjective vividness of kinesthetic motor imagery is associated with the similarity in magnitude of sensorimotor event-related desynchronization between motor execution and motor imagery. Front. Hum. Neurosci. 2018, 12, 295. [Google Scholar] [CrossRef] [PubMed]
  14. Lee, M.H.; Kwon, O.Y.; Kim, Y.J.; Kim, H.K.; Lee, Y.E.; Williamson, J.; Fazli, S.; Lee, S.W. EEG dataset and OpenBMI toolbox for three BCI paradigms: An investigation into BCI illiteracy. GigaScience 2019, 8, giz002. [Google Scholar] [CrossRef]
  15. Hardwick, R.M.; Caspers, S.; Eickhoff, S.B.; Swinnen, S.P. Neural correlates of motor imagery, action observation, and movement execution: A comparison across quantitative meta-analyses. BioRxiv 2017. [Google Scholar] [CrossRef]
  16. Matsuo, M.; Iso, N.; Fujiwara, K.; Moriuchi, T.; Matsuda, D.; Mitsunaga, W.; Nakashima, A.; Higashi, T. Comparison of cerebral activation between motor execution and motor imagery of self-feeding activity. Neural Regen. Res. 2021, 16, 778. [Google Scholar]
  17. Sadiq, M.T.; Yu, X.; Yuan, Z.; Aziz, M.Z. Identification of motor and mental imagery EEG in two and multiclass subject-dependent tasks using successive decomposition index. Sensors 2020, 20, 5283. [Google Scholar] [CrossRef] [PubMed]
  18. Chaisaen, R.; Autthasan, P.; Mingchinda, N.; Leelaarporn, P.; Kunaseth, N.; Tammajarung, S.; Manoonpong, P.; Mukhopadhyay, S.C.; Wilaiprasitporn, T. Decoding EEG Rhythms During Action Observation, Motor Imagery, and Execution for Standing and Sitting. IEEE Sens. J. 2020, 20, 13776–13786. [Google Scholar] [CrossRef]
  19. Hjorth, B. EEG analysis based on time domain properties. Electroencephalogr. Clin. Neurophysiol. 1970, 29, 306–310. [Google Scholar] [CrossRef] [PubMed]
  20. Sleight, J.; Pillai, P.; Mohan, S. Classification of Executed and Imagined Motor Movement EEG Signals; University of Michigan: Ann Arbor, MI, USA, 2009; Volume 110. [Google Scholar]
  21. Keerthi, K.K.; Soman, K. Recognition of Real and Imaginary Fist Movements Based on Dynamical Mode Decomposition Spectrum of EEG. In Proceedings of the 2018 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 3–5 April 2018; pp. 0064–0067. [Google Scholar]
  22. Chen, Y.; Xue, S.; Li, D.; Geng, X. The application of independent component analysis in removing the noise of EEG signal. In Proceedings of the 2021 6th International Conference on Smart Grid and Electrical Automation (ICSGEA), Kunming, China, 29–30 May 2021; pp. 138–141. [Google Scholar]
  23. Hyvärinen, A.; Oja, E. A fast fixed-point algorithm for independent component analysis. Neural Comput. 1997, 9, 1483–1492. [Google Scholar] [CrossRef]
  24. Hyvärinen, A.; Oja, E. Independent component analysis: Algorithms and applications. Neural Netw. 2000, 13, 411–430. [Google Scholar] [CrossRef] [PubMed]
  25. Sonkin, K.M.; Stankevich, L.A.; Khomenko, J.G.; Nagornova, Z.V.; Shemyakina, N.V. Development of electroencephalographic pattern classifiers for real and imaginary thumb and index finger movements of one hand. Artif. Intell. Med. 2015, 63, 107–117. [Google Scholar] [CrossRef] [PubMed]
  26. Baravalle, R.; Rosso, O.A.; Montani, F. Discriminating imagined and non-imagined tasks in the motor cortex area: Entropy-complexity plane with a wavelet decomposition. Phys. A Stat. Mech. Its Appl. 2018, 511, 27–39. [Google Scholar] [CrossRef]
  27. Chen, C.; Chen, P.; Belkacem, A.N.; Lu, L.; Xu, R.; Tan, W.; Li, P.; Gao, Q.; Shin, D.; Wang, C.; et al. Neural activities classification of left and right finger gestures during motor execution and motor imagery. Brain-Comput. Interfaces 2021, 8, 117–127. [Google Scholar] [CrossRef]
  28. Schirrmeister, R.T.; Springenberg, J.T.; Fiederer, L.D.J.; Glasstetter, M.; Eggensperger, K.; Tangermann, M.; Hutter, F.; Burgard, W.; Ball, T. Deep learning with convolutional neural networks for EEG decoding and visualization. Hum. Brain Mapp. 2017, 38, 5391–5420. [Google Scholar] [CrossRef]
  29. Tangermann, M.; Müller, K.R.; Aertsen, A.; Birbaumer, N.; Braun, C.; Brunner, C.; Leeb, R.; Mehring, C.; Miller, K.J.; Mueller-Putz, G.; et al. Review of the BCI competition IV. Front. Neurosci. 2012, 6, 55. [Google Scholar]
  30. Schalk, G.; McFarland, D.J.; Hinterberger, T.; Birbaumer, N.; Wolpaw, J.R. BCI2000: A general-purpose brain-computer interface (BCI) system. IEEE Trans. Biomed. Eng. 2004, 51, 1034–1043. [Google Scholar] [CrossRef] [PubMed]
  31. Deecke, L.; Weinberg, H.; Brickett, P. Magnetic fields of the human brain accompanying voluntary movement: Bereitschaftsmagnetfeld. Exp. Brain Res. 1982, 48, 144–148. [Google Scholar] [CrossRef] [PubMed]
  32. Neuper, C.; Pfurtscheller, G. Evidence for distinct beta resonance frequencies in human EEG related to specific sensorimotor cortical areas. Clin. Neurophysiol. 2001, 112, 2084–2097. [Google Scholar] [CrossRef]
  33. Goldberger, A.L.; Amaral, L.A.; Glass, L.; Hausdorff, J.M.; Ivanov, P.C.; Mark, R.G.; Mietus, J.E.; Moody, G.B.; Peng, C.K.; Stanley, H.E. PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation 2000, 101, e215–e220. [Google Scholar] [CrossRef] [PubMed]
  34. Abdullah; Faye, I.; Islam, M.R. EEG channel selection techniques in motor imagery applications: A review and new perspectives. Bioengineering 2022, 9, 726. [Google Scholar] [CrossRef]
  35. Lotze, M.; Montoya, P.; Erb, M.; Hülsmann, E.; Flor, H.; Klose, U.; Birbaumer, N.; Grodd, W. Activation of cortical and cerebellar motor areas during executed and imagined hand movements: An fMRI study. J. Cogn. Neurosci. 1999, 11, 491–501. [Google Scholar] [CrossRef]
  36. Kato, K.; Takahashi, K.; Mizuguchi, N.; Ushiba, J. Online detection of amplitude modulation of motor-related EEG desynchronization using a lock-in amplifier: Comparison with a fast Fourier transform, a continuous wavelet transform, and an autoregressive algorithm. J. Neurosci. Methods 2018, 293, 289–298. [Google Scholar] [CrossRef]
  37. Baig, M.Z.; Aslam, N.; Shum, H. Filtering techniques for channel selection in motor imagery EEG applications: A survey. Artif. Intell. Rev. 2020, 53, 1207–1232. [Google Scholar] [CrossRef]
  38. Homan, R.; Herman, J.; Purdy, P. Cerebral location of International 10-20 system electrode placement. Electroencephalogr. Clin. Neurophysiol. 1987, 66, 376–382. [Google Scholar] [CrossRef]
  39. Ganis, G.; Thompson, W.L.; Kosslyn, S.M. Brain areas underlying visual mental imagery and visual perception: An fMRI study. Cogn. Brain Res. 2004, 20, 226–241. [Google Scholar] [CrossRef]
  40. Hermes, D.; Vansteensel, M.J.; Albers, A.M.; Bleichner, M.G.; Benedictus, M.R.; Orellana, C.M.; Aarnoutse, E.J.; Ramsey, N.F. Functional MRI-based identification of brain areas involved in motor imagery for implantable brain–computer interfaces. J. Neural Eng. 2011, 8, 025007. [Google Scholar] [CrossRef]
  41. Sharma, P.K.; Vaish, A. Individual identification based on neuro-signal using motor movement and imaginary cognitive process. Optik 2016, 127, 2143–2148. [Google Scholar] [CrossRef]
  42. Demir, F.; Sobahi, N.; Siuly, S.; Sengur, A. Exploring Deep Learning Features for Automatic Classification of Human Emotion Using EEG Rhythms. IEEE Sens. J. 2021, 21, 14923–14930. [Google Scholar] [CrossRef]
  43. Hwaidi, J.F.; Chen, T.M. A noise removal approach from eeg recordings based on variational autoencoders. In Proceedings of the 2021 13th International Conference on Computer and Automation Engineering (ICCAE), Melbourne, Australia, 20–22 March 2021; pp. 19–23. [Google Scholar]
  44. Albera, L.; Kachenoura, A.; Comon, P.; Karfoul, A.; Wendling, F.; Senhadji, L.; Merlet, I. ICA-based EEG denoising: A comparative analysis of fifteen methods. Bull. Pol. Acad. Sci. Tech. Sci. 2012, 60, 407–418. [Google Scholar] [CrossRef]
  45. Lekshmylal, P.; Shiny, G.; Radhakrishnan, A. Removal of EOG and EMG artifacts from EEG signals using blind source separation methods. In Proceedings of the 2023 International Conference on Control, Communication and Computing (ICCC), Thiruvananthapuram, India, 19–21 May 2023; pp. 1–6. [Google Scholar]
  46. Shahshahani, S.M.R.; Mahdiani, H.R. FiCA: A Fixed-Point Custom Architecture FastICA for Real-Time and Latency-Sensitive Applications. IEEE Trans. Neural Syst. Rehabil. Eng. 2022, 30, 2896–2905. [Google Scholar] [CrossRef] [PubMed]
  47. Bertrand, A. Utility metrics for assessment and subset selection of input variables for linear estimation [tips & tricks]. IEEE Signal Process. Mag. 2018, 35, 93–99. [Google Scholar]
  48. Syakiylla, S.; Syakiylla Sayed Daud, S.; Sudirman, R. Decomposition Level Comparison of Stationary Wavelet Transform Filter for Visual Task Electroencephalogram. J. Teknol. 2015, 74, 7–13. [Google Scholar] [CrossRef]
  49. Galvão, F.; Alarcão, S.M.; Fonseca, M.J. Predicting Exact Valence and Arousal Values from EEG. Sensors 2021, 21, 3414. [Google Scholar] [CrossRef]
  50. Platt, J. Sequential Minimal Optimization: A Fast Algorithm for Training Support Vector Machines; Technical Report MSR-TR-98-14; Microsoft: Redmond, WA, USA, 1998. [Google Scholar]
  51. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  52. Valero-Carreras, D.; Alcaraz, J.; Landete, M. Comparing two SVM models through different metrics based on the confusion matrix. Comput. Oper. Res. 2023, 152, 106131. [Google Scholar] [CrossRef]
Figure 1. High-level general diagram of the proposed method. The Physionet public dataset provides ME and MI EEG signals. In the preprocessing step, 9 channels are selected from 64 available, followed by sub-band filtering and noise suppression steps. Therefore, ME and MI features are extracted using the Hjorth criteria and classified by an SVM-based model.
Figure 2. The 10–20 electrode placement system. The 19 considered channels from the 64 provided are highlighted in green and those processed in blue.
Figure 3. Brain mapping of R03 task signals. (a) Brain mapping before denoising. (b) Brain mapping after denoising. The strong energy concentration in the nasion area is partly due to artifacts caused by eye flicker. After denoising, the brain map shows an almost uniform increase in energy concentration in different brain areas.
Figure 4. R04 task signals before and after the denoising process. (a) EEG before applying the FastICA algorithm. (b) Artifact suppression after denoising. The most notable noise and its suppression are observable in channels Fp1 and Fp2.
Figure 5. Distribution of complexity descriptor without denoising process for C3 channel signals. (a) Raw ME features. (b) Raw MI features. (c) θ sub-band ME features. (d) θ sub-band MI features. (e) α sub-band ME features. (f) α sub-band MI features. (g) β sub-band ME features. (h) β sub-band MI features. ME features are drawn in blue, while MI features are highlighted in green. The x and y coordinates refer to the range of features and their normalized energy density, respectively.
Figure 6. The map illustration of γ and C parameter settings. (a,b) represent γ and C parameter values for Method 1 and Method 2, respectively.
Table 1. The recent literature summary of related works.
Work | Dataset | Channels | Method | Accuracy [%]
Keerthi and Soman [21] | PhysioNet | Cz, C1, C2, C3, C4 | DMD-SVM | 83.00
Alomari et al. [12] | PhysioNet | C3, Cz, C4 | WT-ANN | 88.90
Sleight et al. [20] | Physionet | C3, Cz, C4 | ICA-SVM | 56.00 ± 0.27
Chen et al. [27] | Own | C3, C4 | hSVM | 74.08 ± 13.42
Sonkin et al. [25] | Own | C3, Cz | SVM-ANN | 42/62
Chaisaen et al. [18] | Own | FCz, C3, …, POz | FBCSP-SVM | 82.73 ± 2.54
Table 2. Task descriptions for the experiment, documented from [30]. Time intervals between tasks are coded as presented in the last three rows.
Tasks | Description
R03, R07, R11 | A target appears on the left or right side of the screen. Next, the volunteer executes the opening and closing of the corresponding fist until the target disappears. Then, the volunteer relaxes.
R04, R08, R12 | A target appears on the left or right side of the screen. Next, the volunteer imagines the opening and closing of the corresponding fist until the target disappears. Then, the volunteer relaxes.
R05, R09, R13 | A target appears on the top or bottom of the screen. Next, the volunteer executes the opening and closing of both fists (if the target is on the top) or both feet (if the target is on the bottom) until the target disappears. Then, the volunteer relaxes.
R06, R10, R14 | A target appears on the top or bottom of the screen. Next, the volunteer imagines the opening and closing of both fists (if the target is on the top) or both feet (if the target is on the bottom) until the target disappears. Then, the volunteer relaxes.
T0 | Relax.
T1 | Execution of a real or imagined movement of the left fist in runs R03, R04, R07, R08, R11, and R12; or of both fists in runs R05, R06, R09, R10, R13, and R14.
T2 | Execution of a real or imagined movement of the right fist in runs R03, R04, R07, R08, R11, and R12; or of both feet in runs R05, R06, R09, R10, R13, and R14.
Table 3. Hjorth parameters nomenclature with and without sub-band filtering.
Parameters | EEG Features
Aθ, Mθ, Cθ | θ-rhythm
Aα, Mα, Cα | α-rhythm
Aβ, Mβ, Cβ | β-rhythm
Araw, Mraw, Craw | without sub-band filtering
Table 4. Data splitting for training, testing, and validation sets.
Method | Data | Training | Testing | Validation
1 | Runs | 1–10 | 11–12 | 13–14
1 | Samples | 6180 | 1236 | 1236
1 | Percent | 71.4 | 14.3 | 14.3
2 | Subjects | 1–83 | 84–93 | 94–103
2 | Samples | 6972 | 840 | 840
2 | Percent | 80 | 10 | 10
Table 5. Feature combinations for the classification step. Nb means the number of feature segments.
Set | Features | Channels | Nb | Motivation
1 | Aθ, Aα, Aβ | C3, Cz, C4 | 9 | Proposed to compare the obtained results to those achieved in the related literature.
2 | Araw, Mraw, Craw | C3, Cz, C4 | 9 | Proposed to evaluate the three Hjorth descriptors' efficiency without sub-band filtering.
3 | Aθ, Aα, Aβ, Mraw, Craw | C3, Cz, C4 | 15 | Used to evaluate the results of the Set 1 and Set 2 combination.
4 | Aθ, Aα, Aβ, Mraw, Craw | F3, Fz, F4 | 15 | Similar to Set 3 but with other channels, to evaluate the results of other brain cortices.
5 | Aθ, Aα, Aβ, Mraw, Craw | P3, Pz, P4 | 15 | Similar to Set 4.
Table 6. Results achieved with feature combinations of selected channels using the accuracy metric.
Set | Features | Channels | Accuracy [%] Method 1 | Accuracy [%] Method 2
1 | Aθ, Aα, Aβ | C3, Cz, C4 | 81.4 ± 0.58 | 62.8 ± 1.28
2 | Araw, Mraw, Craw | C3, Cz, C4 | 86.9 ± 0.82 | 65.1 ± 0.96
3 | Aθ, Aα, Aβ, Mraw, Craw | C3, Cz, C4 | 89.7 ± 0.78 | 68.8 ± 0.71
4 | Aθ, Aα, Aβ, Mraw, Craw | F3, Fz, F4 | 85.7 ± 1.04 | 61.6 ± 1.53
5 | Aθ, Aα, Aβ, Mraw, Craw | P3, Pz, P4 | 86.0 ± 0.46 | 65.9 ± 0.84
Table 7. Summary of confusion matrices’ diagonal results by classifying ME and MI tasks with Method 1 and Method 2.
Channels | Feature Combination | Method 1 ME | Method 1 MI | Method 2 ME | Method 2 MI
C3, Cz, C4 | Aθ, Aα, Aβ | 79.45 | 79.75 | 62.19 | 62.67
C3, Cz, C4 | Araw, Mraw, Craw | 86.94 | 86.47 | 64.45 | 64.53
C3, Cz, C4 | Aθ, Aα, Aβ, Mraw, Craw | 89.7 | 89.33 | 68.17 | 68.41
F3, Fz, F4 | Aθ, Aα, Aβ, Mraw, Craw | 84.83 | 84.55 | 62.02 | 60.78
P3, Pz, P4 | Aθ, Aα, Aβ, Mraw, Craw | 86.39 | 86.71 | 65.12 | 67.39
(Values are decoding accuracies in %.)
Table 8. Results classifying ME and MI features of C3, Cz, and C4 with and without denoising process.
Data | Accuracy [%] Method 1 | Accuracy [%] Method 2
EEG with denoising | 82.3 ± 1.15 | 66.4 ± 0.62
EEG without denoising | 67.8 ± 0.94 | 57.1 ± 1.42
Table 9. Comparison of the achieved results with those of the related works.
Work | Method | Accuracy [%]
Sleight et al. [20] | ICA-SVM | 56.0 ± 0.27
Alomari et al. [12] | WT-ANN | 88.9
Keerthi et al. [21] | DMD-SVM | 83.0
Proposed method | ICA-Hjorth-SVM | 89.7 ± 0.78
