Article

Ensemble Wavelet Decomposition-Based Detection of Mental States Using Electroencephalography Signals

1 Department of Electrical and Computer Engineering, Aarhus University, 8000 Aarhus, Denmark
2 Indian Institute of Information Technology, Design and Manufacturing (IIITDM) Jabalpur, Jabalpur 482005, India
3 International Institute of Information Technology, Bangalore 560100, India
* Author to whom correspondence should be addressed.
Sensors 2023, 23(18), 7860; https://doi.org/10.3390/s23187860
Submission received: 7 July 2023 / Revised: 10 August 2023 / Accepted: 5 September 2023 / Published: 13 September 2023
(This article belongs to the Special Issue Advances in Biomedical Sensing, Instrumentation and Systems)

Abstract

Technological advancements in the healthcare, production, automobile, and aviation industries have shifted working styles from manual to automatic. This automation requires smart, intelligent, and safe machinery, which in turn calls for accurate and efficient brain–computer interface (BCI) systems. However, developing such BCI systems requires effective processing and analysis of human physiology. Electroencephalography (EEG) is one such technique that provides a low-cost, portable, non-invasive, and safe solution for BCI systems. However, the non-stationary and nonlinear nature of EEG signals makes accurate subjective analysis difficult for experts. Hence, there is an urgent need for automatic mental state detection. This paper presents the classification of three mental states using an ensemble of the tunable Q wavelet transform, the multilevel discrete wavelet transform, and the flexible analytic wavelet transform. Various features are extracted from the subbands of EEG signals recorded during focused, unfocused, and drowsy states. Separate and fused features from the ensemble decomposition are classified using an optimized ensemble classifier. Our analysis shows that the fusion of features results in a dimensionality reduction. The proposed model obtained the highest accuracies of 92.45% and 97.8% with ten-fold cross-validation and the iterative majority voting technique, respectively. The proposed method is suitable for real-time mental state detection to improve BCI systems.

1. Introduction

Recent technological developments have changed the roles of humans in safety-critical and complex areas, such as autonomous driving vehicles, aviation, healthcare systems, and industry, from manual to autonomous control systems [1]. However, because humans remain involved in several tasks, the growing sophistication of these processes makes human intervention and control difficult. Therefore, there is an urgent need for more accurate and automated systems. An analysis of an individual's cognitive, emotional, and psychological states can provide a solution using brain–computer interface (BCI) technologies [2]. Such information measures the mental states of users to make these environments safer for human–machine interfaces. The brain's physiological activities have been studied with electroencephalograms (EEGs) [3], functional magnetic resonance imaging (fMRI) [4], functional near-infrared spectroscopy (fNIRS) [5], magnetoencephalograms (MEGs) [6], and other forms of biosignals, such as electrooculograms (EOGs) [7], electrocardiograms (ECGs) [7,8], and galvanic skin responses (GSRs) [9], to detect various conditions [10,11]. From the perspective of day-to-day mental activity measurement, issues related to size, weight, expense, power consumption, and radioactivity restrict the usage of MEGs and fMRI [12]. EOG, ECG, and GSR signals provide some degree of correlation with mental states (mental fatigue, drowsiness, and stress) [10]. However, such techniques have demonstrated success only in combination with neuro-imaging methods linked to the central nervous system [10]. As a result, fNIRS and EEG signals have proved the most appropriate choices for BCI systems [10]. EEG signals are favored over fNIRS signals, as they offer higher sensitivity to variations in brain activity and higher temporal resolution [10].
Moreover, researchers have widely used EEG signals to study emotions, cognitive load, fear of missing out, drowsiness, and schizophrenia, due to their low-cost, portable, and non-invasive properties [13,14,15,16,17,18].
Recently, many studies have been presented for detecting mental states using EEG signals. The mental states of “workload”, “fatigue”, and “situational awareness” have been studied by examining the correlation between mental workload and EEG signals in different conditions, such as in airplane pilots and car drivers [11]. Myrden et al. presented an EEG-BCI model to predict the mental states of frustration, fatigue, and attention. Different features extracted using the fast Fourier transform (FFT) have been classified with linear discriminant analysis (LDA), support vector machines (SVMs), and naive Bayes classifiers [19]. Li et al. recognized reading silently, a comprehension task, a mental arithmetic task, and a question-answering task based on the self-assessment Manikin (SAM) model [20]. Nuamah et al. classified five tasks (baseline, visual counting, geometric figure rotation, letter composition, and multiplication) using the short-time Fourier transform (STFT) to extract different features, which were classified using an SVM classifier [21]. Liu et al. presented a frequency domain analysis of features using the FFT in combination with SVM to detect attentive and inattentive mental states of students [22]. Ket et al. classified attention, no attention, and rest states using sample entropy and linear features with an SVM classifier [23].
Wang et al. used the focus of attention ability during mathematical problem solving and lane-keeping driving tasks. The central, parietal, frontal, occipital, right-motor, and left-motor power spectra computed using filtering and independent component analysis (ICA) were classified with an SVM classifier [24]. Djamal et al. evaluated features from raw EEG signals and wavelet decomposition to recognize attention and inattention activities [25]. Arico et al. used stepwise linear discriminant analysis and the statistical test of analysis of variance (ANOVA) to detect easy, medium, and hard mental assessments [12]. Hamadicharef et al. developed an attention and non-attention classification state model using a combination of filter banks, common spatial patterns, and a Fisher linear discriminant classifier [26]. Mardi et al. used Log energy, Higuchi, and Petrosian’s fractal dimension to extract chaotic features for detecting alertness and drowsiness states [27]. Richer et al. evaluated the band power of frequency bands. They computed histograms of naive and entropy-based scores using the P2 algorithm and classified them with binary classifiers [28]. Aci et al. used STFT-based features to detect focused (F), unfocused (UF), and drowsiness (D) mental states [29]. Zhang et al. used six convolutional networks and one output layered deep neural network to predict F, UF, and D states [30]. Islam et al. explored multivariate empirical mode decomposition (MEMD) and the discrete wavelet transform (DWT) to detect working and relaxed states. The nonlinear features extracted from intrinsic mode functions and subbands (SBs) have been classified with an ensemble classifier [31]. Tiwari et al. used rhythm level analysis using filtering and the FFT. The SVM, k-nearest neighbor (KNN), and random forest classifiers have been used to detect high- and low-level attention [32]. 
Samima and Sarma used an analysis of rhythms using filtering and artificial neural network (ANN) classifiers for mental workload level assessments [33]. Mohdiwale et al. used a DWT-based rhythm analysis using teaching–learning-based optimization for detecting cognitive work assessments [34]. Easttom and Alsmadi presented a comparative analysis of EMD and variational mode decomposition to extract nonlinear entropy and Higuchi features for mental state detection [35]. Khare et al. used wavelet-based analysis using only the rational dilation wavelet transform (RDWT) to extract five statistical and nonlinear features and classified them using an ensemble classifier to detect various mental states [36]. Kumar et al. used analysis of EEG rhythms using the discrete Fourier transform and power spectral density (PSD) to detect mental states using the KNN classifier [37]. Rastogi and Bhateja explored artifacts of or noise elimination in mental state EEG signals using a stationary wavelet transform (SWT)-enhanced fixed-point fast ICA technique [38].
The methods in the literature used traditional feature extraction from raw EEG signals, statistical analysis, filtering techniques, frequency-based transforms such as the FFT or STFT, rhythm-based analysis, and wavelet-based decomposition. However, direct feature extraction exhibits a decreased performance [15], frequency-based transforms result in a time–frequency trade-off [15], filtering and rhythmic analyses require choosing filter coefficients [15], and wavelet-based methods require the selection of a mother wavelet [15]. The experimental and empirical selection of parameters can cause information loss and performance degradation due to misclassification [15]. Thus, to overcome these shortcomings, we propose an ensemble-based analysis using advanced decomposition techniques, including the tunable Q wavelet transform (TQWT), the multilevel DWT (MDWT), and the flexible analytic wavelet transform (FAWT). Individual and fused features are used for the automated detection of three mental states (F, UF, and D) with an optimizable ensemble technique. The major contributions of the proposed work are listed below:
  • Analysis of ensemble decomposition techniques using multi-wavelet decomposition.
  • Statistical analysis to reduce the feature dimensions of multi-wavelet feature analysis for mental state detection.
  • Analysis of feature fusion to detect the best combination of features.
  • Exploring an optimized ensemble classifier to determine the optimum hyper-parameter selection.
The remainder of the paper is organized as follows: Section 2 explains the methodology. The results are presented in Section 3. The discussion and conclusions are presented in Section 4 and Section 5, respectively.

2. Methodology

The proposed methodology comprises several steps, such as EEG dataset pre-processing, signal analysis using ensemble decomposition, feature extraction, and classification. The flowchart of the method is shown in Figure 1.

2.1. Dataset and Preprocessing

The EEG signals of mental states were obtained from Kaggle, a public dataset repository [29,39]. The EEG recordings from five subjects originally consisted of a total of 25 h of recording. The participants performed train control on the “Amtrak–Philadelphia” route using the Acela Express simulator. The subjects were instructed to maintain the locomotive speed at 40 mph in every experiment. Each subject controlled the train for 35 to 55 min. The subjects performed seven experiments each, performing at most one experiment per day. The focused state was captured while the participants paid attention to simulator control during the first 10 min of the experiment. The participants stopped paying attention during the second 10 min, exhibiting an unfocused state. Finally, the participants closed their eyes, relaxed freely, and dozed off during the next 10 min to capture the drowsy state. The EEG data were recorded in accordance with the international 10–20 standard using an EPOC EEG system. A voltage resolution of 0.51 μV, a sampling frequency of 128 Hz, and a bandwidth between 0.2 and 43 Hz were chosen for data acquisition and pre-processing. The 10 min segment of each class was stratified into 30 s non-overlapping EEG segments of 3840 samples each. Each class consists of a total of 680 EEG segments. The dataset details are available in [29,30,39].
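As a concrete illustration, the 30 s stratification at 128 Hz (3840 samples per segment) can be sketched as follows; the data here are synthetic and the function name is our own, not from the dataset's tooling:

```python
import numpy as np

FS = 128          # sampling frequency (Hz), as in the dataset
SEG_SECONDS = 30  # non-overlapping segment length
SEG_SAMPLES = FS * SEG_SECONDS  # 3840 samples per segment

def segment_eeg(signal: np.ndarray) -> np.ndarray:
    """Split a 1-D EEG channel into non-overlapping 30 s segments.

    Trailing samples that do not fill a whole segment are discarded.
    """
    n_segments = len(signal) // SEG_SAMPLES
    return signal[: n_segments * SEG_SAMPLES].reshape(n_segments, SEG_SAMPLES)

# 10 minutes of synthetic data -> 20 segments of 3840 samples each
ten_minutes = np.random.randn(10 * 60 * FS)
segments = segment_eeg(ten_minutes)
print(segments.shape)  # (20, 3840)
```

Applied per class and per subject, this yields the 680 segments per class reported above.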

2.2. Ensemble Decomposition Techniques

In this paper, we have explored an ensemble of three wavelet-based analyses. A brief description of MDWT, TQWT, and FAWT is given in the following subsections.

2.2.1. Multilevel Discrete Wavelet Transform (MDWT)

The MDWT decomposes the signal into two bands, low-pass (LP) and high-pass (HP), using LP and HP filter banks. The LP filter bank captures the low-frequency content of the signal, while the HP filter bank captures its high-frequency content. Decomposing an EEG signal into four levels results in four HP SBs and one LP SB. The mathematical formulation of the MDWT for the jth level of decomposition is defined as [40]
$$V_\phi^{(j)}(k)=\sum_{i=1}^{M_{j+1}}\phi(i-2k)\,V_\phi^{(j-1)}(i),\qquad V_\theta^{(j)}(k)=\sum_{i=1}^{M_{j+1}}\theta(i-2k)\,V_\phi^{(j-1)}(i),\qquad k=1,2,\ldots,2^{n-j},$$
where $M$ is the length of the signal ($M = 2^n$), $\phi$ and $\theta$ are the LP and HP filters, $V_\phi$ is the LP-filtered signal, and $V_\theta$ is the HP-filtered signal.
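A minimal sketch of the multilevel filter-bank recursion follows. For self-containment it uses Haar filters rather than the db2 wavelet the paper employs (for which a library routine such as PyWavelets' `wavedec` would normally be used); four levels of decomposition produce the four HP subbands and one LP subband described above:

```python
import numpy as np

def haar_step(x):
    """One DWT level with Haar filters: returns (approx, detail)."""
    x = x[: len(x) // 2 * 2]              # truncate to even length
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # high-pass (detail)
    return a, d

def mdwt(x, levels=4):
    """Multilevel DWT: iterate the filter bank on the LP output.

    Returns [detail_1, ..., detail_levels, approx_levels]:
    four HP subbands and one LP subband for levels=4.
    """
    subbands = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_step(a)
        subbands.append(d)
    subbands.append(a)
    return subbands

sbs = mdwt(np.random.randn(3840), levels=4)
print([len(s) for s in sbs])  # [1920, 960, 480, 240, 240]
```

Because the Haar filters are orthonormal, the subband energies sum to the input energy, which is a convenient sanity check for any DWT implementation.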

2.2.2. Tunable Q Wavelet Transform (TQWT)

The traditional forms of wavelet transforms decompose a signal into subsequent LP SBs and HP SBs with a chosen mother wavelet, and accurately choosing a wavelet that extracts meaningful information is a topic of discussion in itself. The TQWT does not require the selection of a mother wavelet. Decomposition into LPSBs and HPSBs using the TQWT instead requires tuning parameters, namely the quality factor ($q$), the oversampling rate ($R$), and the number of decomposition levels ($B$) [41]. The quality factor $q$ is chosen as 1 for non-oscillatory signals and >1 for oscillatory signals [41]. $R$ controls the localization of the time-domain response and is selected as ≥3 to better capture the time-domain response [41]. The EEG signal can be split into $B$ high-pass subbands (HPSBs) and one low-pass subband (LPSB) using $B$ decomposition levels. The HPSBs and LPSB are generated by filter-bank analysis with LP and HP frequency responses $U_0^B(\omega)$ and $U_1^B(\omega)$, denoted as [41]:
$$U_0^B(\omega)=\begin{cases}\displaystyle\prod_{b=0}^{B-1}U_0\!\left(\frac{\omega}{a^b}\right), & |\omega|\le a^B\pi,\\[4pt] 0, & a^B\pi<|\omega|\le\pi,\end{cases}$$
$$U_1^B(\omega)=\begin{cases}\displaystyle U_1\!\left(\frac{\omega}{a^{B-1}}\right)\prod_{b=0}^{B-2}U_0\!\left(\frac{\omega}{a^b}\right), & (1-\beta)a^{B-1}\pi\le|\omega|\le a^{B-1}\pi,\\[4pt] 0, & \text{otherwise for }\omega\in[-\pi,\pi].\end{cases}$$
The low-frequency and high-frequency components of any signal can be obtained through the LP scaling ($a$) and HP scaling ($\beta$), denoted as [41]
$$\beta=\frac{2}{q+1},$$
$$a=1-\frac{\beta}{R}.$$
The quality factor is represented as [41]
$$q=\frac{2-\beta}{\beta}.$$
The oversampling rate is denoted as [41]
$$R=\frac{\beta}{1-a}.$$
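The parameter relations above can be checked numerically. This small sketch (function name ours) computes the scalings from $(q, R)$ and recovers the tuning parameters back through the inverse relations:

```python
# TQWT scaling relations: beta = 2/(q+1), a = 1 - beta/R,
# with inverses q = (2-beta)/beta and R = beta/(1-a).
def tqwt_scales(q: float, R: float):
    beta = 2.0 / (q + 1.0)   # high-pass scaling
    a = 1.0 - beta / R       # low-pass scaling (a < 1 for valid R)
    return a, beta

a, beta = tqwt_scales(q=1.0, R=3.0)   # non-oscillatory setting
q_back = (2.0 - beta) / beta          # recover the quality factor
R_back = beta / (1.0 - a)             # recover the oversampling rate
print(a, beta, q_back, R_back)        # a ≈ 0.667, beta = 1.0, q and R recovered
```

Note that $a < 1$ and $\beta \le 1$ for every admissible $(q, R)$ pair, which is what makes the iterated filter bank converge.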

2.2.3. Flexible Analytic Wavelet Transform (FAWT)

The FAWT offers several benefits over the conventional dyadic wavelet transform, including provision for arbitrary sampling rates in the LP and HP channels, which allows a flexible time–frequency covering. The HP channel of the FAWT uses a complex pair of atoms, giving it more freedom in the choice of transform parameters. These advantages allow the FAWT to analyze complex oscillating signals, such as vibrations and EEG signals [42,43]. The iterative filter-bank structure of the FAWT decomposes the signal into two HP channels and one LP channel [42]. The frequency responses of the LP and HP filters, denoted as $V_\phi(\omega)$ and $V_\theta(\omega)$, are defined as [42]
$$V_\phi(\omega)=\begin{cases}\sqrt{\alpha_1\alpha_2}, & |\omega|<\omega_p,\\[2pt]\sqrt{\alpha_1\alpha_2}\;\theta\!\left(\dfrac{\omega-\omega_p}{\omega_s-\omega_p}\right), & \omega_p\le\omega\le\omega_s,\\[2pt]\sqrt{\alpha_1\alpha_2}\;\theta\!\left(\dfrac{\pi-\omega+\omega_p}{\omega_s-\omega_p}\right), & -\omega_s\le\omega\le-\omega_p,\\[2pt]0, & |\omega|\ge\omega_s,\end{cases}$$
$$V_\theta(\omega)=\begin{cases}\sqrt{2\alpha_3\alpha_4}\;\theta\!\left(\dfrac{\pi-\omega-\omega_0}{\omega_1-\omega_0}\right), & \omega_0\le\omega<\omega_1,\\[2pt]\sqrt{2\alpha_3\alpha_4}, & \omega_1\le\omega<\omega_2,\\[2pt]\sqrt{2\alpha_3\alpha_4}\;\theta\!\left(\dfrac{\omega-\omega_2}{\omega_3-\omega_2}\right), & \omega_2\le|\omega|\le\omega_3,\\[2pt]0, & \omega\in[0,\omega_0)\cup(\omega_3,2\pi),\end{cases}$$
with
$$\omega_p=\frac{(1-\beta)\pi+\epsilon}{\alpha_1},\quad \omega_s=\frac{\pi}{\alpha_2},\quad \omega_0=\frac{(1-\beta)\pi+\epsilon}{\alpha_3},\quad \omega_1=\frac{\alpha_1\pi}{\alpha_2\alpha_3},\quad \omega_2=\frac{\pi-\epsilon}{\alpha_3},\quad \omega_3=\frac{\pi+\epsilon}{\alpha_3},$$
where $\alpha_1$ and $\alpha_2$ are the up-sampling and down-sampling factors of the LP channel, $\alpha_3$ and $\alpha_4$ are the up-sampling and down-sampling factors of the HP channel, $\omega_p$ is the pass-band frequency, and $\omega_s$ is the stop-band frequency. $\beta$ and $\epsilon$ are factors related to perfect reconstruction.
A typical example of the SBs obtained after seven levels of decomposition is represented in Figure 2.

2.3. Feature Extraction

Features are crucial for drawing a decision boundary to improve system performance. Nonlinear, fractal dimension, and statistical features provide representative information for different physiological and neurological conditions [44,45]. Such features provide an effective representation of brain dynamics, which helps to improve the system performance [44,45]. The current work explores the application of 27 statistical and nonlinear features to detect three mental states. These features are the standard deviation, Hurst exponent, average energy, wavelength, V order, skewness, kurtosis, Hjorth mobility, Higuchi fractal dimension, Lyapunov exponent, differential absolute standard deviation value, absolute value of the summation of an exponential root, absolute value of the sum of square root, normalized first difference, normalized second difference, mean value of the square root, difference variance value, log energy, absolute energy, simple square integral, slope sign change, peak amplitude, minima, maxima, zero crossing rate, interquartile range, and trimean [46,47,48,49,50].
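A few of the listed features can be sketched as follows; these are standard textbook definitions implemented for illustration, not the paper's exact code, and the helper name is ours:

```python
import numpy as np

def features(x: np.ndarray) -> dict:
    """A handful of the statistical/nonlinear features named above."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)
    mu, sd = x.mean(), x.std()
    return {
        "std": sd,
        "skewness": np.mean(((x - mu) / sd) ** 3),
        "kurtosis": np.mean(((x - mu) / sd) ** 4),
        # Hjorth mobility: ratio of derivative variance to signal variance
        "hjorth_mobility": np.sqrt(dx.var() / x.var()),
        # fraction of consecutive samples whose sign changes
        "zero_crossing_rate": np.mean(np.abs(np.diff(np.sign(x))) > 0),
        "log_energy": np.sum(np.log(x ** 2 + 1e-12)),  # small offset avoids log(0)
        "interquartile_range": np.subtract(*np.percentile(x, [75, 25])),
    }

f = features(np.sin(np.linspace(0, 8 * np.pi, 3840)))
print(sorted(f))
```

In the proposed pipeline, such a feature vector would be computed per subband and per channel, then concatenated into the feature matrix.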

2.4. Ensemble Classifiers

Bootstrap aggregating is an ensemble method usually used to improve classification performance. This work compares five ensemble models to obtain the optimal combination of hyper-parameters for classification. The classification techniques are the ensemble bagged tree, ensemble boosted tree, random under-sampling boosted tree, ensemble subspace KNN, and ensemble discriminant classifiers with their hyper-parameters. In the ensemble operation, bootstrap resampling is applied to divide the training data into subsets. Each subset is then used to construct a decision tree, and the output is a function of the voting scheme over the different sets of decision trees. The best-performing classifier is selected as a meta-classifier. Figure 3 shows the operation of the ensemble classification techniques. In addition, classifier performance is highly hyper-parameter dependent [51]. Careful selection of the hyper-parameters prevents the model from over-fitting and performance degradation, but an accurate manual choice of hyper-parameters is time-consuming and prone to human error. To overcome this, we explored an optimizable ensemble classification design using the MATLAB Classification Learner app.
Consider data pairs $(U_i, V_i)$, $i = 1, 2, \ldots, M$, where $U_i$ is a predictor of dimension $j$ and $V_i$ is the response taking one of $K$ classes. The estimator function for classification is represented by [52]
$$\hat{f}(\cdot)=h_M\big((U_1,V_1),(U_2,V_2),\ldots,(U_M,V_M)\big),$$
where $h_M(\cdot)$ is the estimator as a function of the input data. The ensemble algorithm is as follows [52]:
1. Construct bootstrap samples $(U_1^*,V_1^*),(U_2^*,V_2^*),\ldots,(U_M^*,V_M^*)$ by drawing $M$ times at random, with replacement, from $(U_1,V_1),(U_2,V_2),\ldots,(U_M,V_M)$.
2. Evaluate the bootstrap estimator $\hat{f}^*(\cdot)=h_M\big((U_1^*,V_1^*),(U_2^*,V_2^*),\ldots,(U_M^*,V_M^*)\big)$.
3. Repeat steps 1 and 2 $L$ times, where $L = 50$ or $100$, obtaining $\hat{f}^*_l(\cdot)$, $l=1,2,\ldots,L$.
4. Finally, obtain the ensemble estimator as $\hat{f}_{\mathrm{ensemble}}(\cdot)=L^{-1}\sum_{l=1}^{L}\hat{f}^*_l(\cdot)$.
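The four steps above can be sketched as follows. A trivial nearest-class-mean learner stands in for the decision trees, and, since this is classification, the aggregation in the final step is realized as a majority vote over the L estimators; all names and data here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_base(U, V):
    """Trivial base learner: per-class mean (stands in for a decision tree)."""
    classes = np.unique(V)
    means = np.array([U[V == c].mean(axis=0) for c in classes])
    return classes, means

def predict_base(model, U):
    classes, means = model
    d = ((U[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return classes[d.argmin(axis=1)]

def bagging(U, V, L=50):
    """Steps 1-3: L bootstrap resamples, one base estimator per resample."""
    M = len(U)
    models = []
    for _ in range(L):
        idx = rng.integers(0, M, size=M)  # sample M pairs with replacement
        models.append(fit_base(U[idx], V[idx]))
    return models

def predict_ensemble(models, U):
    """Step 4: aggregate the L estimators (majority vote for classification)."""
    votes = np.stack([predict_base(m, U) for m in models])
    return np.array([np.bincount(col).argmax() for col in votes.T])

# two well-separated Gaussian classes
U = np.vstack([rng.normal(0, 1, (40, 3)), rng.normal(5, 1, (40, 3))])
V = np.repeat([0, 1], 40)
models = bagging(U, V, L=25)
print((predict_ensemble(models, U) == V).mean())  # ~1.0 on this easy data
```

Hyper-parameter optimization, as used in the paper, would additionally search over settings such as L and the base-learner configuration.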

2.5. Performance Measure

The evaluation of model performance is a crucial stage in measuring the effectiveness of the developed model [53]. We performed a comprehensive analysis of the developed ensemble model to test its effectiveness. The evaluation strategy uses three stages. In the first stage, the model is evaluated for its consistency using different validation techniques, namely the holdout cross-validation (HOCV), five-fold cross-validation (FFCV), and ten-fold cross-validation (TFCV) techniques. In HOCV, we used an 80:20 strategy, where training and testing were performed on 80% and 20% of the total data, respectively. In the five- and ten-fold validation techniques, the data were divided into five and ten equal parts, respectively, and the model was trained and tested five and ten times, with one part used for testing and the remaining parts for training. In the second stage, we performed feature fusion and selected the most prominent features. Finally, we evaluated different performance measures to obtain insights into the developed model. Five evaluation metrics, accuracy, recall, specificity (SPE), precision (PPV), and F1 score, were used to test the system performance. It is noteworthy that we used subject-independent training and testing to evaluate the model performance. The mathematical formulations of the performance parameters are expressed as follows.
$$\mathrm{Accuracy}=\frac{T_p+T_n}{T_p+F_p+T_n+F_n},\quad \mathrm{Recall}=\frac{T_p}{T_p+F_n},\quad \mathrm{SPE}=\frac{T_n}{T_n+F_p},\quad \mathrm{PPV}=\frac{T_p}{T_p+F_p},\quad \mathrm{F1\ score}=\frac{2\times \mathrm{Recall}\times \mathrm{PPV}}{\mathrm{Recall}+\mathrm{PPV}},$$
where $T_p$, $T_n$, $F_p$, and $F_n$ are the numbers of true positives, true negatives, false positives, and false negatives, respectively.
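Using the standard binary-classification definitions, the metrics can be computed from the confusion counts as follows; the counts in the example call are made up for illustration:

```python
def metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Performance measures from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)   # sensitivity: true positives found
    spe = tn / (tn + fp)      # specificity: true negatives found
    ppv = tp / (tp + fp)      # precision
    f1 = 2 * recall * ppv / (recall + ppv)
    return {"accuracy": accuracy, "recall": recall,
            "SPE": spe, "PPV": ppv, "F1": f1}

print(metrics(tp=90, tn=85, fp=15, fn=10))
```

For the three-class problem, these are computed per class in a one-vs-rest fashion, as in Table 5.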

3. Results

We aimed to classify mental states using ensemble decomposition and classification algorithms. First, the EEG signals were stratified into non-overlapping segments of 3840 samples for each class. The stratified signals were decomposed into SBs using three wavelet-based decomposition techniques (MDWT, TQWT, and FAWT). We used four-level decomposition with the Daubechies wavelet (db2), yielding five SBs corresponding to the five EEG rhythms. The tuning parameters of the TQWT were chosen as q = 2, R = 5, and B = 7. For the FAWT, the tuning parameters were selected as B = 6, p = 3, q = 5, r = 2, and s = 3. We extracted 27 features from the SBs of the MDWT, FAWT, and TQWT with an empirical setting of the tuning parameters. The current analysis includes a feature matrix of all the channels with 27 features. Therefore, a total of 378 features over a total of 2040 segments were introduced into the ensemble classification techniques. The model uses three validation strategies, i.e., HOCV, FFCV, and TFCV. Note that we maintained the same experimental setup throughout. Table 1 shows the accuracy obtained for each SB using the MDWT features. The accuracy of two-class and multiclass classification is highest for SB-1. The model yielded the highest accuracies of 95.07%, 94.93%, and 94.36% for D vs. F using HOCV, FFCV, and TFCV, respectively. For UF vs. F, the highest accuracies were 91.18%, 89.34%, and 88.60%, while for D vs. UF, the accuracies were 88.84%, 89.78%, and 88.53% using the optimizable ensemble classifier with the HOCV, FFCV, and TFCV techniques. Similarly, three-class classification yielded the highest accuracies of 87.45%, 87.45%, and 86.27% using HOCV, FFCV, and TFCV.
The accuracy obtained for the TQWT features using an optimized ensemble classifier is shown in Table 2. The accuracy of SB-1 was higher than that of the other SBs. For D vs. F, the optimizable model obtained the highest accuracies of 95.22%, 96.10%, and 94.85% with HOCV, FFCV, and TFCV. HOCV, FFCV, and TFCV for UF vs. F classification yielded the highest accuracies of 93.01%, 91.32%, and 90.74%. For D vs. UF, the optimizable ensemble classifier yielded accuracies of 90.74%, 90.74%, and 90.22% with HOCV, FFCV, and TFCV. Similarly, the HOCV, FFCV, and TFCV techniques yielded accuracies of 85.78%, 89.82%, and 89.02% for three-class classification.
Table 3 shows the accuracy obtained in each SB using FAWT-based features and the optimizable ensemble classifier. The analysis reveals that the last SB yielded the highest accuracy for different classification scenarios. Table 3 shows that the ensemble-based classifier yielded the highest accuracies of 97.79%, 96.91%, and 96.84% for D vs. F classification using HOCV, FFCV, and TFCV techniques. The model provided the highest accuracies of 93.75%, 92.28%, and 91.01% for UF vs. F, D vs. UF, and D vs. F vs. UF using the HOCV technique. The highest accuracies of 93.09%, 91.10%, and 90.90% for UF vs. F, D vs. UF, and D vs. F vs. UF were obtained with FFCV. The accuracies obtained with TFCV for UF vs. F, D vs. UF, and D vs. F vs. UF were 92.94%, 90.96%, and 90.10%, respectively.
Thus, it is clear from Table 1, Table 2 and Table 3 that the accuracy of our developed model is almost stable across the three validation techniques in the various SBs for different classification scenarios. SB-1 generated the highest accuracy for MDWT and TQWT feature classification, while the accuracy yielded by FAWT-based features was highest in SB-7. The analysis also reveals that FAWT-based features provide the most discernible characteristics and therefore achieved the highest accuracy over the TQWT- and MDWT-based features. Further, our developed model is consistent across the different classification scenarios (binary and multiclass analysis) with the three validation techniques. The features of the drowsy and focused classes are highly discernible; therefore, they yielded the highest classification rate over the other scenarios. On the other hand, the features of the focused and unfocused classes significantly overlap, resulting in a decreased model performance. An exemplary training curve obtained for the optimized ensemble classifier is shown in Figure 4.
As stated earlier, our training and testing feature set comprised all features from all channels. Analyzing the model with all features may increase the computation time without improving the classification performance [54]. Therefore, we used feature ranking analysis to test our model performance with optimal features using the minimum redundancy feature selection technique. Figure 5 shows the feature ranks obtained for the FAWT-, TQWT-, and MDWT-based features. As seen from Figure 5, out of twenty-seven features, only a few are statistically significant for classification. The feature importance values for FAWT, TQWT, and MDWT decrease significantly or remain the same after six features. This reveals that a similar performance can be obtained using fewer features with higher feature ranks.
To obtain insight into our developed model, we explored a fusion of the most important features of the three decomposition techniques. During fusion, we concatenated the features from all channels according to their ranks. As evident from Table 1, Table 2 and Table 3, SB-1 for TQWT and MDWT and SB-7 for FAWT yielded the highest accuracy; therefore, we fused the features from these SBs. Table 4 presents the accuracy obtained by feature fusion of the decomposition techniques with different feature combinations. As seen from Table 4, the accuracy yielded by the ensemble model increases with the feature count. The model provides the highest performance with four features; after that, the accuracy decreases slightly or remains constant. Furthermore, our results show that feature fusion helps to improve system performance. The fusion of all three decomposition techniques yielded the highest accuracy, followed by the fusion of TQWT- and FAWT-based features. The combination of TQWT and MDWT feature fusion resulted in the lowest performance. Further, to obtain the highest score, we evaluated the highest performance measures of TFCV using iterative majority voting (IMV). For IMV, we conducted multiple rounds of TFCV and selected the one with the best overall and fold-wise accuracy. The model exhibited the highest accuracy of 97.8%, obtained twice during the fold-wise analysis.
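The voting component of such an aggregation can be sketched as follows; this is one plausible reading for illustration only (per-segment majority vote over repeated runs), not the authors' exact IMV procedure, and the prediction labels below are hypothetical:

```python
import numpy as np

def majority_vote(pred_rounds: np.ndarray) -> np.ndarray:
    """Column-wise majority vote over predictions from repeated runs.

    pred_rounds: (n_rounds, n_segments) array of integer class labels.
    """
    return np.array([np.bincount(col).argmax() for col in pred_rounds.T])

# three hypothetical rounds of predictions for five segments
rounds = np.array([[0, 1, 2, 1, 0],
                   [0, 1, 1, 1, 0],
                   [0, 2, 2, 1, 1]])
print(majority_vote(rounds))  # [0 1 2 1 0]
```

Ties resolve to the smallest label under `argmax`; an odd number of rounds avoids two-way ties.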
Further, we tested the model performance using four performance metrics, as shown in Table 5. The performance measures show that the drowsy class generated the most discriminant features, with the highest recall, SPE, PPV, and F1 score. The focused class is the second best, while the worst performance is exhibited by the unfocused class. The analysis shows that feature fusion for the drowsy class yields the highest recall, SPE, PPV, and F1 score of 93.13%, 95.91%, 91.76%, and 92.44%. The recall, SPE, PPV, and F1 score yielded for the drowsy class were 97.12%, 99.63%, 99.26%, and 98.18% using the IMV technique.
To obtain more insight into the proposed system, the receiver operating characteristic (ROC) curves and areas under the curve (AUCs) were evaluated, as shown in Figure 6. The ROC and AUC of the D vs. F and UF, F vs. D and UF, and UF vs. D and F states for the fused features are shown in Figure 6a–c. The AUC for the drowsy state is 94%, while for the focused and unfocused states it is 95% and 92%, respectively, giving an average AUC of 93.67%.

4. Discussion

We tested the efficacy of our proposed model by comparing it with existing state-of-the-art techniques. Borghini et al. [11] computed the power of the alpha, theta, and delta frequency bands; analyzing these bands, they reported an accuracy of around 90%. Myrden et al. [19] used the FFT to evaluate frequency-domain features and classified them with SVM, LDA, and naive Bayes classifiers. Their model yielded the highest accuracies of 71.6%, 74.8%, and 84.8% for frustration, fatigue, and attention levels using the LDA classifier. In another method, by Liu et al. [22], an FFT- and SVM-based model yielded an accuracy of 76.82%. Li et al. [20] used an SAM model and obtained an average accuracy of 57.03% with the KNN classifier. Nuamah et al. [21] presented a combination of the STFT and SVM for feature extraction and classification; their method obtained an accuracy of 93.33% using the radial basis function kernel. Ket et al. [23] automatically identified three tasks, namely attention, no attention, and rest, using two experiments (ball playing or a walking cartoon). The sample entropy and linear features were classified using an SVM, and their method yielded accuracies of 76.19% and 85.24% in the two experiments with sample entropy features. Wang et al. [24] fed features extracted by filtering and ICA into an SVM classifier and achieved 86.2% and 84.6% accuracies in the classification of driving tasks and math-related activities. Djamal et al. [25] computed non-wavelet- and wavelet-based features and classified them with an SVM classifier; their method provided accuracies in the ranges of 44–58% and 69–83%, respectively. Hamadicharef et al. [26] developed a filter-bank, common spatial pattern, and Fisher linear discriminant-based attention and non-attention classification model with an accuracy of 89.4%.
Mardi et al. [27] extracted chaotic features based on log energy and the Higuchi and Petrosian fractal dimensions and, with an artificial neural network classifier, claimed an accuracy of 83.3%. Richer et al. [28] used the power of frequency bands, naive and entropy-based scores, and a binary classification model to obtain sensitivities of 82% and 80.4% and specificities of 82.8% and 80.8% for the focus and relax scores, respectively. The methods discussed above were tested on different datasets for mental state classification. The proposed method was compared with the work of Aci et al. [29] and Zhang et al. [30] on the same dataset, as shown in Table 6. Aci et al. used STFT-based feature extraction to compute different feature sets. ANFIS, SVM, and KNN classifiers were employed to classify 154 features, with accuracies of 81.55%, 77.76%, and 91.72%, respectively. A method by Zhang et al. [30] used a deep-learning-based convolutional neural network (CNN) and provided an accuracy of 96.4%. Kumar et al. explored the analysis of the PSD using FFT-based feature extraction and a KNN classifier; the channel-wise and grouped-channel analyses yielded accuracies of 80% and 97.5% [37]. Khare et al. used RDWT wavelet analysis with statistical feature extraction; the classification of the features resulted in an accuracy of 91.77% using the bagged tree classifier [36]. Rastogi and Bhateja [38] performed elimination of artifacts and noise using the SWT and ICA; however, they did not report a classification accuracy. In our method, we used ensemble-based decomposition and the extraction of nonlinear features. The individual analyses of the MDWT, TQWT, and FAWT features yielded accuracies of 88.27%, 89.02%, and 90.1%. Fused feature analysis yielded accuracies of 90.98%, 88.62%, and 89.61% for TQWT/FAWT, TQWT/MDWT, and MDWT/FAWT feature fusion using the TFCV technique. A combined fused feature analysis using TFCV and IMV resulted in accuracies of 92.45% and 97.8%.
This analysis shows that our developed model surpasses the performance of existing state-of-the-art techniques, demonstrating its efficacy.

5. Conclusions

The proposed method classifies focused, unfocused, and drowsy mental states. We have developed an ensemble decomposition and optimized classification technique to create an effective model for mental state detection. Our analysis shows that feature extraction using the FAWT yields the most discernible features for detecting mental states. We demonstrated that feature fusion is helpful for extracting representative SBs from EEG signals and is therefore useful for extracting meaningful information for mental state analysis. Our model also shows that feature fusion with statistical analysis helps to reduce the feature dimensions while increasing accuracy. The model yielded an accuracy of 97.8% with IMV, which is higher than that of existing state-of-the-art techniques. Our developed model can detect drowsy, focused, and unfocused states with accuracies of 99.26%, 98.52%, and 94.11%. The proposed work is suitable for real-time mental state classification applications, taking brain–computer and human–machine interfaces to the next level. The advantages of our developed model are listed as follows:
  • The model performs multi-level ensemble wavelet analysis.
  • The model is effective and robust due to comprehensive analysis.
  • The optimized ensemble classifier allows tuning of the hyper-parameters to achieve the best classification performance.
  • The model yielded the highest accuracy of 97.8%.
  • The model supports binary and multi-class analyses.
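The multi-level wavelet analysis in the first advantage can be illustrated with a hand-rolled Haar decomposition. This is a stand-in for the MDWT/TQWT/FAWT stages only; the actual wavelet families, filter parameters, and decomposition depths used in the paper are not reproduced here.

```python
import numpy as np

def haar_multilevel(x, levels=3):
    """Multilevel DWT with the Haar wavelet: repeatedly split the
    approximation into low-pass (approximation) and high-pass (detail)
    halves. Returns subbands [cA_L, cD_L, ..., cD_1], coarsest first."""
    details = []
    approx = np.asarray(x, dtype=float)
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        details.append((even - odd) / np.sqrt(2.0))  # high-pass (detail)
        approx = (even + odd) / np.sqrt(2.0)         # low-pass (approximation)
    return [approx] + details[::-1]

x = np.arange(16, dtype=float)     # stand-in for one EEG epoch
subbands = haar_multilevel(x, levels=3)   # [cA3, cD3, cD2, cD1]

# A simple per-subband feature, as one might feed to a classifier
energies = [float((s ** 2).sum()) for s in subbands]
```

Because the Haar transform is orthonormal, the subband energies sum to the signal energy, which is why energy- and entropy-type subband features are well behaved.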
The model has the following limitations:
  • The model has been tested on a single EEG dataset.
  • The dataset contains fewer subjects.
  • The model has not been tested with leave-one-subject-out classification.
In the future, we will aim to:
  • Perform adaptive parameter tuning and channel selection.
  • Evaluate leave-one-subject-out classification on a relatively larger dataset.

Author Contributions

Conceptualization: S.K.K.; Methodology: S.K.K. and V.B.; Software: S.K.K., V.B. and N.B.G.; Validation: S.K.K., V.B., N.B.G. and G.R.S.; Formal analysis: S.K.K. and V.B.; Investigation: S.K.K., V.B. and N.B.G.; Data curation: S.K.K., V.B., N.B.G. and G.R.S.; Writing—original draft preparation: S.K.K. and N.B.G.; Writing—review and editing: S.K.K., V.B., N.B.G. and G.R.S.; Visualization: S.K.K., V.B. and N.B.G.; Supervision: S.K.K., V.B. and G.R.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data have been created in this research. The dataset was taken from a public repository (https://www.kaggle.com/datasets/inancigdem/eeg-data-for-mental-attention-state-detection) (accessed on 22 May 2021).

Acknowledgments

The authors would like to thank Cigdem Inan Aci for making these data publicly available. The authors also extend their thanks to the reviewers and editors for their efforts in the review process.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nicolas-Alonso, L.F.; Gomez-Gil, J. Brain Computer Interfaces, a Review. Sensors 2012, 12, 1211–1279. [Google Scholar] [CrossRef] [PubMed]
  2. Ortiz-Echeverri, C.J.; Salazar-Colores, S.; Rodríguez-Reséndiz, J.; Gómez-Loenzo, R.A. A New Approach for Motor Imagery Classification Based on Sorted Blind Source Separation, Continuous Wavelet Transform, and Convolutional Neural Network. Sensors 2019, 19, 4541. [Google Scholar] [CrossRef] [PubMed]
  3. Sterman, M.B. Physiological origins and functional correlates of EEG rhythmic activities: Implications for self-regulation. Biofeedback Self-Regul. 1996, 21, 3–33. [Google Scholar] [CrossRef]
  4. Nisha, A.V.; Pallikonda Rajasekaran, M.; Priya, R.K.; Al Bimani, A. Artificial Intelligence based Neurodegenerative Disease Diagnosis and Research Analysis using Functional MRI (FMRI): A Review. In Proceedings of the 2021 3rd International Conference on Advances in Computing, Communication Control and Networking (ICAC3N), Greater Noida, India, 17–18 December 2021; pp. 446–450. [Google Scholar] [CrossRef]
  5. Eastmond, C.; Subedi, A.; De, S.; Intes, X. Deep learning in fNIRS: A review. Neurophotonics 2022, 9, 041411. [Google Scholar] [CrossRef] [PubMed]
  6. Fred, A.L.; Kumar, S.N.; Kumar Haridhas, A.; Ghosh, S.; Purushothaman Bhuvana, H.; Sim, W.K.J.; Vimalan, V.; Givo, F.A.S.; Jousmäki, V.; Padmanabhan, P.; et al. A Brief Introduction to Magnetoencephalography (MEG) and Its Clinical Applications. Brain Sci. 2022, 12, 788. [Google Scholar] [CrossRef]
  7. Rim, B.; Sung, N.J.; Min, S.; Hong, M. Deep Learning in Physiological Signal Data: A Survey. Sensors 2020, 20, 969. [Google Scholar] [CrossRef]
  8. Neri, L.; Oberdier, M.T.; van Abeelen, K.C.J.; Menghini, L.; Tumarkin, E.; Tripathi, H.; Jaipalli, S.; Orro, A.; Paolocci, N.; Gallelli, I.; et al. Electrocardiogram Monitoring Wearable Devices and Artificial-Intelligence-Enabled Diagnostic Capabilities: A Review. Sensors 2023, 23, 4805. [Google Scholar] [CrossRef]
  9. Markiewicz, R.; Markiewicz-Gospodarek, A.; Dobrowolska, B. Galvanic Skin Response Features in Psychiatry and Mental Disorders: A Narrative Review. Int. J. Environ. Res. Public Health 2022, 19, 13428. [Google Scholar] [CrossRef]
  10. Aricò, P.; Borghini, G.; Di Flumeri, G.; Sciaraffa, N.; Colosimo, A.; Babiloni, F. Passive BCI in Operational Environments: Insights, Recent Advances, and Future Trends. IEEE Trans. Biomed. Eng. 2017, 64, 1431–1436. [Google Scholar] [CrossRef]
  11. Borghini, G.; Astolfi, L.; Vecchiato, G.; Mattia, D.; Babiloni, F. Measuring neurophysiological signals in aircraft pilots and car drivers for the assessment of mental workload, fatigue and drowsiness. Neurosci. Biobehav. Rev. 2014, 44, 58–75. [Google Scholar] [CrossRef]
  12. Aricò, P.; Borghini, G.; Di Flumeri, G.; Colosimo, A.; Pozzi, S.; Babiloni, F. Chapter 10—A passive brain–computer interface application for the mental workload assessment on professional air traffic controllers during realistic air traffic control tasks. In Brain-Computer Interfaces: Lab Experiments to Real-World Applications; Progress in Brain Research; Coyle, D., Ed.; Elsevier: Amsterdam, The Netherlands, 2016; Volume 228, pp. 295–328. [Google Scholar] [CrossRef]
  13. Khare, S.; Nishad, A.; Upadhyay, A.; Bajaj, V. Classification of emotions from EEG signals using time-order representation based on the S-transform and convolutional neural network. Electron. Lett. 2020, 56, 1359–1361. [Google Scholar] [CrossRef]
  14. Khare, S.K.; Bajaj, V. Entropy based Drowsiness Detection using Adaptive Variational Mode Decomposition. IEEE Sens. J. 2020, 21, 6421–6428. [Google Scholar] [CrossRef]
  15. Khare, S.K.; Bajaj, V.; Acharya, U.R. SchizoNET: A robust and accurate Margenau–Hill time-frequency distribution based deep neural network model for schizophrenia detection using EEG signals. Physiol. Meas. 2023, 44, 035005. [Google Scholar] [CrossRef]
  16. Yin, Y.; Cai, X.; Ouyang, M.; Li, S.; Li, X.; Wang, P. FoMO and the brain: Loneliness and problematic social networking site use mediate the association between the topology of the resting-state EEG brain network and fear of missing out. Comput. Hum. Behav. 2023, 141, 107624. [Google Scholar] [CrossRef]
  17. Örün, Ö.; Akbulut, Y. Effect of multitasking, physical environment and electroencephalography use on cognitive load and retention. Comput. Hum. Behav. 2019, 92, 216–229. [Google Scholar] [CrossRef]
  18. Shahabi, H.; Moghimi, S. Toward automatic detection of brain responses to emotional music through analysis of EEG effective connectivity. Comput. Hum. Behav. 2016, 58, 231–239. [Google Scholar] [CrossRef]
  19. Myrden, A.; Chau, T. A Passive EEG-BCI for Single-Trial Detection of Changes in Mental State. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 345–356. [Google Scholar] [CrossRef]
  20. Li, Y.; Li, X.; Ratcliffe, M.; Liu, L.; Qi, Y.; Liu, Q. A Real-Time EEG-Based BCI System for Attention Recognition in Ubiquitous Environment. In Proceedings of the 2011 International Workshop on Ubiquitous Affective Awareness and Intelligent Interaction (UAAII ’11), New York, NY, USA, 18 September 2011; pp. 33–40. [Google Scholar] [CrossRef]
  21. Nuamah, J.; Seong, Y. Support vector machine (SVM) classification of cognitive tasks based on electroencephalography (EEG) engagement index. Brain-Comput. Interfaces 2017, 5, 1–12. [Google Scholar] [CrossRef]
  22. Liu, N.H.; Chiang, C.Y.; Chu, H.C. Recognizing the Degree of Human Attention Using EEG Signals from Mobile Sensors. Sensors 2013, 13, 10273–10286. [Google Scholar] [CrossRef]
  23. Ke, Y.; Long, C.; Fu, L.; Jia, Y.; Li, P.; Qi, H.; Zhou, P.; Zhang, L.; Wan, B.; Ming, D. Visual Attention Recognition Based on Nonlinear Dynamical Parameters of EEG. Bio-Med. Mater. Eng. 2013, 23, S349–S355. [Google Scholar] [CrossRef]
  24. Wang, Y.; Jung, T.; Lin, C. EEG-Based Attention Tracking During Distracted Driving. IEEE Trans. Neural Syst. Rehabil. Eng. 2015, 23, 1085–1094. [Google Scholar] [CrossRef] [PubMed]
  25. Djamal, E.C.; Pangestu, D.P.; Dewi, D.A. EEG-based recognition of attention state using wavelet and support vector machine. In Proceedings of the 2016 International Seminar on Intelligent Technology and Its Applications (ISITIA), Lombok, Indonesia, 28–30 July 2016; pp. 139–144. [Google Scholar] [CrossRef]
  26. Hamadicharef, B.; Zhang, H.; Guan, C.; Wang, C.; Phua, K.S.; Tee, K.P.; Ang, K.K. Learning EEG-based spectral-spatial patterns for attention level measurement. In Proceedings of the 2009 IEEE International Symposium on Circuits and Systems, Taipei, Taiwan, 24–27 May 2009; pp. 1465–1468. [Google Scholar] [CrossRef]
  27. Mardi, Z.; Ashtiani, S.; Mikaeili, M. EEG-based Drowsiness Detection for Safe Driving Using Chaotic Features and Statistical Tests. J. Med. Signals Sens. 2011, 1, 130–137. [Google Scholar] [CrossRef] [PubMed]
  28. Richer, R.; Zhao, N.; Amores, J.; Eskofier, B.M.; Paradiso, J.A. Real-time Mental State Recognition using a Wearable EEG. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; pp. 5495–5498. [Google Scholar] [CrossRef]
  29. Acı, Ç.İ.; Kaya, M.; Mishchenko, Y. Distinguishing mental attention states of humans via an EEG-based passive BCI using machine learning methods. Expert Syst. Appl. 2019, 134, 153–166. [Google Scholar] [CrossRef]
  30. Zhang, D.; Cao, D.; Chen, H. Deep Learning Decoding of Mental State in Non-Invasive Brain Computer Interface. In Proceedings of the International Conference on Artificial Intelligence, Information Processing and Cloud Computing, New York, NY, USA, Sanya, China, 19–21 December 2019. AIIPCC ’19. [Google Scholar] [CrossRef]
  31. Islam, M.; Lee, T. Multivariate Empirical Mode Decomposition of EEG for Mental State Detection at Localized Brain Lobes. In Proceedings of the 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Glasgow, Scotland, 11–15 July 2022; pp. 3694–3697. [Google Scholar] [CrossRef]
  32. Tiwari, A.; Arora, A.; Goel, V.; Khemchandani, V.; Chandra, S.; Pandey, V. A Deep Learning Approach to Detect Sustained Attention in Real-Time Using EEG Signals. In Proceedings of the 2021 International Conference on Computational Performance Evaluation (ComPE), Shillong, India, 1–3 December 2021; pp. 475–479. [Google Scholar] [CrossRef]
  33. Samima, S.; Sarma, M. Mental workload level assessment based on compounded hysteresis effect. Cogn. Neurodyn. 2022, 17, 357–372. [Google Scholar] [CrossRef] [PubMed]
  34. Mohdiwale, S.; Sahu, M.; Sinha, G.R.; Bajaj, V. Automated Cognitive Workload Assessment Using Logical Teaching Learning-Based Optimization and PROMETHEE Multi-Criteria Decision Making Approach. IEEE Sens. J. 2020, 20, 13629–13637. [Google Scholar] [CrossRef]
  35. Easttom, C.; Alsmadi, I. A Comparitive Study of Machine Learning Algorithms for Identifying Mental States from EEG Recordings. In Proceedings of the 2022 IEEE 12th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 26–29 January 2022; pp. 0644–0648. [Google Scholar] [CrossRef]
  36. Khare, S.K.; Bajaj, V.; Sengur, A.; Sinha, G. Classification of mental states from rational dilation wavelet transform and bagged tree classifier using EEG signals. In Artificial Intelligence-Based Brain-Computer Interface; Elsevier: Amsterdam, The Netherlands, 2022; pp. 217–235. [Google Scholar]
  37. Kumar, R.S.; Srinivas, K.K.; Peddi, A.; Vardhini, P.A.H. Artificial Intelligence based Human Attention Detection through Brain Computer Interface for Health Care Monitoring. In Proceedings of the 2021 IEEE International Conference on Biomedical Engineering, Computer and Information Technology for Health (BECITHCON), Dhaka, Bangladesh, 4–5 December 2021; pp. 42–45. [Google Scholar] [CrossRef]
  38. Rastogi, A.; Bhateja, V. Pre-processing of electroencephalography signals using stationary wavelet transform-enhanced fixed-point fast-ICA. In Proceedings of the Data Engineering and Intelligent Computing: Proceedings of ICICC 2020; Springer: Berlin/Heidelberg, Germany, 2021; pp. 387–396. [Google Scholar]
  39. Available online: https://www.kaggle.com/inancigdem/eeg-data-for-mental-attention-state-detection (accessed on 22 May 2021).
  40. Parkale, Y.V.; Nalbalwar, S.L. Application of 1-D discrete wavelet transform based compressed sensing matrices for speech compression. SpringerPlus 2016, 5, 2048. [Google Scholar] [CrossRef]
  41. Selesnick, I.W. Wavelet transform with tunable Q-factor. IEEE Trans. Signal Process. 2011, 59, 3560–3575. [Google Scholar] [CrossRef]
  42. Bayram, I. An analytic wavelet transform with a flexible time-frequency covering. IEEE Trans. Signal Process. 2012, 61, 1131–1142. [Google Scholar] [CrossRef]
  43. Sharma, M.; Pachori, R.B.; Rajendra Acharya, U. A new approach to characterize epileptic seizures using analytic time-frequency flexible wavelet transform and fractal dimension. Pattern Recognit. Lett. 2017, 94, 172–179. [Google Scholar] [CrossRef]
  44. Khare, S.K.; March, S.; Barua, P.D.; Gadre, V.M.; Acharya, U.R. Application of data fusion for automated detection of children with developmental and mental disorders: A systematic review of the last decade. Inf. Fusion 2023, 99, 101898. [Google Scholar] [CrossRef]
  45. Yaacob, H.; Hossain, F.; Shari, S.; Khare, S.K.; Ooi, C.P.; Acharya, U.R. Application of Artificial Intelligence Techniques for Brain–Computer Interface in Mental Fatigue Detection: A Systematic Review (2011–2022). IEEE Access 2023, 11, 74736–74758. [Google Scholar] [CrossRef]
  46. Flood, M.W.; Grimm, B. EntropyHub: An open-source toolkit for entropic time series analysis. PLoS ONE 2021, 16, e0259448. [Google Scholar] [CrossRef] [PubMed]
  47. Too, J.; Abdullah, A.R.; Mohd Saad, N.; Tee, W. EMG feature selection and classification using a Pbest-guide binary particle swarm optimization. Computation 2019, 7, 12. [Google Scholar] [CrossRef]
  48. Too, J.; Abdullah, A.R.; Saad, N.M. Classification of hand movements based on discrete wavelet transform and enhanced feature extraction. Int. J. Adv. Comput. Sci. Appl. 2019, 10, 1–7. [Google Scholar] [CrossRef]
  49. Sudarshan, V.K.; Acharya, U.R.; Oh, S.L.; Adam, M.; Tan, J.H.; Chua, C.K.; Chua, K.P.; San Tan, R. Automated diagnosis of congestive heart failure using dual tree complex wavelet transform and statistical features extracted from 2 s of ECG signals. Comput. Biol. Med. 2017, 83, 48–58. [Google Scholar] [CrossRef] [PubMed]
  50. Baygin, M.; Barua, P.D.; Dogan, S.; Tuncer, T.; Key, S.; Acharya, U.R.; Cheong, K.H. A hand-modeled feature extraction-based learning network to detect grasps using sEMG signal. Sensors 2022, 22, 2007. [Google Scholar] [CrossRef] [PubMed]
  51. Thornton, C.; Hutter, F.; Hoos, H.H.; Leyton-Brown, K. Auto-WEKA: Combined selection and hyperparameter optimization of classification algorithms. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Chicago, IL, USA, 11–14 August 2013; pp. 847–855. [Google Scholar]
  52. Bühlmann, P. Bagging, Boosting and Ensemble Methods. In Handbook of Computational Statistics: Concepts and Methods; Gentle, J.E., Härdle, W.K., Mori, Y., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 985–1022. [Google Scholar] [CrossRef]
  53. Luque, A.; Carrasco, A.; Martín, A.; de las Heras, A. The impact of class imbalance in classification performance metrics based on the binary confusion matrix. Pattern Recognit. 2019, 91, 216–231. [Google Scholar] [CrossRef]
  54. Baygin, M.; Barua, P.D.; Chakraborty, S.; Tuncer, I.; Dogan, S.; Palmer, E.E.; Tuncer, T.; Kamath, A.P.; Ciaccio, E.J.; Acharya, U.R. CCPNet136: Automated detection of schizophrenia using carbon chain pattern and iterative TQWT technique with EEG signals. Physiol. Meas. 2023, 44, 035008. [Google Scholar] [CrossRef]
Figure 1. Proposed ensemble model for mental state detection.
Figure 2. A typical example of SBs generated by the TQWT.
Figure 3. Typical working of ensemble classifier techniques.
Figure 4. Training curve obtained for our optimizable ensemble classifier.
Figure 5. The feature rank obtained using the minimum redundancy feature selection technique: (a) FAWT, (b) TQWT, and (c) MDWT decomposition.
Figure 6. ROC and AUC obtained using fused features for (a) drowsy, (b) focused, and (c) unfocused classes.
Table 1. The overall accuracy (%) obtained in MDWT SBs for HOCV, FFCV, and TFCV techniques using different classification scenarios.
Class | SB-1 | SB-2 | SB-3 | SB-4 | SB-5
HOCV
D vs. F | 95.07 | 94.12 | 93.53 | 95.59 | 88.97
UF vs. F | 91.18 | 88.97 | 90.07 | 88.60 | 81.62
D vs. UF | 88.84 | 86.40 | 87.13 | 88.60 | 80.51
D vs. UF vs. F | 87.45 | 84.61 | 84.12 | 81.47 | 75.98
FFCV
D vs. F | 94.93 | 93.53 | 92.21 | 93.09 | 89.19
UF vs. F | 89.34 | 88.97 | 88.16 | 85.15 | 82.65
D vs. UF | 89.78 | 88.60 | 87.06 | 86.76 | 81.10
D vs. UF vs. F | 87.45 | 84.61 | 84.12 | 81.47 | 75.98
TFCV
D vs. F | 94.26 | 93.82 | 92.79 | 92.94 | 88.97
UF vs. F | 88.60 | 88.24 | 88.24 | 86.69 | 81.40
D vs. UF | 88.53 | 87.65 | 88.01 | 85.51 | 79.12
D vs. UF vs. F | 86.27 | 83.33 | 81.57 | 79.22 | 74.17
Table 2. The overall accuracy (%) obtained in SBs of the TQWT for HOCV, FFCV, and TFCV techniques using different classification scenarios.
Class | SB-1 | SB-2 | SB-3 | SB-4 | SB-5 | SB-6 | SB-7 | SB-8
HOCV
D vs. F | 95.22 | 95.59 | 94.12 | 97.06 | 95.51 | 94.12 | 95.59 | 96.32
UF vs. F | 93.01 | 92.65 | 93.38 | 90.81 | 86.03 | 86.40 | 88.24 | 92.65
D vs. UF | 90.74 | 92.65 | 93.75 | 88.24 | 91.18 | 84.19 | 84.19 | 84.93
D vs. UF vs. F | 85.78 | 85.34 | 85.64 | 85.49 | 83.14 | 81.86 | 79.71 | 83.68
FFCV
D vs. F | 96.10 | 95.07 | 94.41 | 95.51 | 94.49 | 94.19 | 94.12 | 96.32
UF vs. F | 91.32 | 90.59 | 91.69 | 90.96 | 89.49 | 87.57 | 84.85 | 90.96
D vs. UF | 90.74 | 90.44 | 89.93 | 88.31 | 89.04 | 87.50 | 86.99 | 85.74
D vs. UF vs. F | 89.82 | 87.25 | 86.81 | 86.47 | 85.78 | 83.87 | 81.32 | 85.00
TFCV
D vs. F | 94.85 | 95.00 | 94.19 | 95.37 | 95.00 | 94.19 | 93.75 | 95.74
UF vs. F | 90.74 | 89.71 | 90.66 | 90.07 | 89.26 | 87.72 | 84.85 | 88.24
D vs. UF | 90.22 | 90.00 | 88.53 | 88.16 | 87.87 | 84.85 | 84.93 | 84.26
D vs. UF vs. F | 89.02 | 85.34 | 85.64 | 85.49 | 83.14 | 81.86 | 79.71 | 83.68
Table 3. The overall accuracy (%) obtained in SBs of the FAWT for HOCV, FFCV, and TFCV techniques using different classification scenarios.
Class | SB-1 | SB-2 | SB-3 | SB-4 | SB-5 | SB-6 | SB-7
HOCV
D vs. F | 90.44 | 88.82 | 89.29 | 86.40 | 93.01 | 89.13 | 97.79
UF vs. F | 86.76 | 83.82 | 79.41 | 81.25 | 80.15 | 83.46 | 93.75
D vs. UF | 83.60 | 81.99 | 81.25 | 83.82 | 84.19 | 80.88 | 92.28
D vs. UF vs. F | 76.47 | 76.42 | 76.52 | 76.23 | 77.01 | 75.34 | 91.01
FFCV
D vs. F | 88.38 | 87.87 | 88.46 | 88.16 | 90.15 | 87.57 | 96.91
UF vs. F | 83.38 | 82.72 | 83.60 | 81.84 | 82.43 | 81.25 | 93.09
D vs. UF | 81.03 | 82.06 | 80.00 | 81.84 | 81.91 | 80.51 | 91.10
D vs. UF vs. F | 73.73 | 76.12 | 75.86 | 76.11 | 76.97 | 75.34 | 90.90
TFCV
D vs. F | 87.28 | 87.35 | 88.24 | 86.47 | 88.16 | 86.62 | 96.84
UF vs. F | 81.76 | 82.13 | 82.13 | 82.06 | 82.13 | 81.91 | 92.94
D vs. UF | 77.79 | 78.90 | 78.31 | 79.19 | 79.63 | 79.26 | 90.96
D vs. UF vs. F | 73.73 | 74.22 | 75.83 | 75.15 | 73.73 | 73.28 | 90.10
Table 4. The overall accuracy (%) of fused features of best performing SBs of ensemble decomposition techniques.
No. of Features | TQWT/FAWT | TQWT/MDWT | MDWT/FAWT | Fused Model
One | 86.2 | 81.83 | 81.51 | 86.3
Two | 88.26 | 87.7 | 88.24 | 89
Three | 86.61 | 82.5 | 84.61 | 91.62
Four | 90.98 | 88.62 | 89.61 | 92.45
Five | 90.04 | 88.24 | 88.62 | 92.24
Six | 90.83 | 88.62 | 89.61 | 91.6
IMV | – | – | – | 97.8
Table 5. Performance parameters obtained for different scenarios of the proposed model.
Measures | Recall (%) | SPE (%) | PPV (%) | F1 Score (%)
Features: FAWT
Drowsy | 91.45 | 94.90 | 89.71 | 90.57
Focused | 92.30 | 97.54 | 95.15 | 93.70
Unfocused | 86.46 | 92.76 | 85.44 | 85.95
Features: MDWT + FAWT
Drowsy | 90.22 | 95.42 | 90.88 | 90.55
Focused | 91.63 | 95.88 | 91.76 | 91.70
Unfocused | 86.94 | 93.12 | 86.18 | 86.56
Features: MDWT + TQWT
Drowsy | 90.23 | 95.49 | 91.03 | 90.63
Focused | 90.06 | 95.28 | 90.59 | 90.32
Unfocused | 85.52 | 92.19 | 84.26 | 84.89
Features: FAWT + TQWT
Drowsy | 92.42 | 94.91 | 89.71 | 91.04
Focused | 94.02 | 97.70 | 95.41 | 94.71
Unfocused | 87.10 | 94.13 | 88.38 | 87.74
Features: Fused model
Drowsy | 93.13 | 95.91 | 91.76 | 92.44
Focused | 94.48 | 97.78 | 95.59 | 95.03
Unfocused | 89.74 | 94.99 | 90.00 | 89.87
Features: Iterative majority voting (IMV)
Drowsy | 97.12 | 99.63 | 99.26 | 98.18
Focused | 97.10 | 99.26 | 98.53 | 97.81
Unfocused | 97.71 | 97.11 | 94.12 | 95.88
Table 6. Comparison with existing state-of-the-art techniques using the same dataset.
Authors | Method | Classifier | Accuracy (%)
Aci et al. [29] | STFT | KNN | 77.76
 | | ANFIS | 81.55
 | | SVM | 91.72
Zhang et al. [30] | CNN | CNN | 96.4
Khare et al. [36] | RDWT | Bagged tree | 91.77
Kumar et al. [37] | PSD | KNN | 97.5
Rastogi and Bhateja [38] | SWT | Artifact removal | –
Proposed | Ensemble decomposition | Optimizable ensemble | 97.8
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

