Article

Mixed-Input Deep Learning Approach to Sleep/Wake State Classification by Using EEG Signals

Department of Electrical, Electronic and Computer Engineering, University of Ulsan, Ulsan 44610, Republic of Korea
*
Author to whom correspondence should be addressed.
Diagnostics 2023, 13(14), 2358; https://doi.org/10.3390/diagnostics13142358
Submission received: 25 May 2023 / Revised: 4 July 2023 / Accepted: 8 July 2023 / Published: 13 July 2023
(This article belongs to the Topic Artificial Intelligence in Healthcare - 2nd Volume)

Abstract

Sleep stage classification plays a pivotal role in predicting and diagnosing numerous health issues from human sleep data. Manual sleep staging requires human expertise and is occasionally prone to error and variation. In recent times, the availability of polysomnography data has aided progress in automatic sleep-stage classification. In this paper, a hybrid deep learning model is proposed for classifying sleep and wake states based on a single-channel electroencephalogram (EEG) signal. The model combines an artificial neural network (ANN) and a convolutional neural network (CNN) trained using mixed-input features. The ANN makes use of statistical features calculated from EEG epochs, and the CNN operates on Hilbert spectrum images generated for each epoch. The proposed method is assessed using single-channel Pz-Oz EEG signals from the Sleep-EDF Database Expanded. The classification performance on four randomly selected individuals shows that the proposed model can achieve an accuracy of around 96% in classifying sleep and wake states from EEG recordings.

1. Introduction

Sleep is essential for maintaining both mental and physical health. This biological process helps the body recover and rejuvenate at the end of the day. Poor-quality sleep has numerous adverse physiological effects that lead to many short-term and long-term consequences: stress, somatic problems, obesity, type 2 diabetes, and even death [1]. However, research shows it is possible to obtain valuable insight into a person's present and future health by analyzing sleep patterns. A study from the American Academy of Sleep Medicine (AASM) revealed that future health issues such as dementia, cardiovascular disease, psychological disorders, and mortality can be predicted by analyzing EEG signals recorded during sleep [2]. Research from the University of California, Berkeley, conducted on 32 healthy adults, showed that studying sleep patterns can predict the development of Alzheimer's disease [3]. Sleep-related research covers a wide spectrum. With the availability of physiological signal databases and of data processing tools and algorithms, more and more researchers with expertise in signal processing and artificial intelligence are extracting significant insights from sleep data. Among sleep research topics, sleep stage classification is important for understanding the structure of sleep, diagnosing and treating sleep disorders, monitoring sleep quality, and improving overall sleep health. Classification of sleep stages is generally carried out using physiological signals from the electroencephalogram (EEG), electrocardiogram (ECG), electrooculogram (EOG), and electromyogram (EMG), as well as pulse oximetry, respiratory signals, body temperature, and thoracic and abdominal movements. Trained experts analyze these signals and assign a specific sleep stage to each 30-second segment of a recording, known as an epoch.
Generally, there are two standards for labeling sleep stages: the Rechtschaffen and Kales (R&K) rule [4] and a form of R&K modified by the AASM [5].
According to the R&K rules, EEG patterns are divided into seven different stages: wake, stage 1, stage 2, stage 3, stage 4, stage REM (rapid eye movement), and movement time. These labels are assigned to the EEG recordings by expert technicians who simultaneously observe other physiological signals, such as eye movements and muscle activity. Since this method depends heavily on visual analysis and subjective interpretation of sleep recordings, it is prone to subjectivity and inter-expert variability. Additionally, the R&K rules were developed for young healthy adults; thus, they are not directly applicable to elderly subjects and patients [6]. On the other hand, the American Academy of Sleep Medicine (AASM) proposes a more comprehensive and standardized modification of the R&K standard. The AASM divides sleep into three non-rapid eye movement (NREM) stages (N1, N2, N3) and one rapid eye movement (REM) stage. This method provides detailed scoring criteria for each sleep stage based on the presence or absence of specific waveforms, frequencies, and amplitudes in the EEG, as well as features such as eye movements and muscle tone. The AASM guidelines specify criteria for determining the onset and offset of sleep. Different types of movements, such as periodic limb movements or excessive body movements, are identified and scored separately. The AASM scoring manual includes guidelines for separately scoring respiratory events, such as apneas and hypopneas. The AASM guidelines also address technical considerations related to sleep recording and scoring, including the use of appropriate montages, filters, and sampling rates, and provide recommendations for artifact identification and removal to ensure accurate and reliable sleep stage scoring.
Assigning labels to sleep epochs by visually scrutinizing polysomnography signals is a complex and challenging process that requires careful observation by an expert technician. Expert systems that rely on rule-based algorithms, classifying sleep stages according to specific criteria and decision thresholds drawn from experts' guidelines, are also used for sleep stage classification. Because such an approach involves humans, it is time-consuming, and, due to the chaotic nature and variability of the signals, different observers may interpret the same epoch differently. Therefore, research to develop an automatic sleep stage classification system with considerable accuracy is important to ease the process and assist the technicians involved. The emergence of artificial intelligence (AI) has greatly aided automatic sleep stage classification. Combined with signal processing, deep learning and machine learning algorithms are now able to classify sleep stages with satisfactory accuracy without expert involvement [7,8].

2. Related Works

In recent times, automated sleep stage classification has been largely based on machine learning and deep learning frameworks as data collection, processing, and sharing have become more feasible [9,10]. Sleep stage classification based on different polysomnography data has been explored in several studies [11,12,13]. Polysomnography signals collected from wearable sensors have also been used for sleep stage detection owing to their ease of use and low cost. Wrist-worn accelerometer data were used for sleep–wake state detection in [14] with a random forest as the classifier model. A decision-tree-based classifier was implemented in [15] using features computed from hand acceleration, ECG signals, and distal skin temperature collected by a wearable, wireless sensor; that model detected sleep and wake states more accurately than models using other wearable sensor data. It has been observed that, among the signals used for diagnosing sleep, the EEG is favored by a majority of researchers for its high temporal accuracy, non-invasiveness, the availability of wearable sensors, and ease of use [16].
In [17], the authors provided an extensive comparison of common machine learning algorithms and neural network performance for automatic sleep stage classification using EEG signals from the Sleep-EDF expanded database. At the pre-processing stage, DC artifacts and low-frequency deviations are removed, followed by a denoising step in which two independent components are denoised using a wavelet transform. Several features are then extracted from the reconstructed denoised signal in four broad categories: time-based, frequency-based, entropy-based, and non-linear features. Based on all these features, support vector machine (SVM), random forest (RF), and multi-layer perceptron (MLP) classifiers provided accuracy of 94%. The authors also discussed the effect on classification accuracy of selecting a single-channel versus a multi-channel EEG combination. A similar approach was followed in [18] using a single-lead ECG signal labeled according to three sleep stages: wake, non-rapid eye movement (NREM), and rapid eye movement (REM). In that work, a total of 39 features were computed utilizing Singular Value Decomposition (SVD), Variational Mode Decomposition (VMD), the Hilbert–Huang Transform (HHT), and a morphological method. Features selected with a wrapper approach attained good accuracy from the RF classifier. Sleep staging from EOG signals, based on various statistical features extracted from the detail and approximation coefficients of a discrete wavelet transform (DWT), was investigated in [19]. The statistical features were grouped as spectral entropy-based measures, moment-based measures, refined composite multiscale dispersion entropy, and auto-regressive model coefficients. The authors summarized the classification performance as the output varied from two-class to six-class, and their proposed scheme showed good accuracy when the SVM was used as the classifier.
The convolutional neural network (CNN), with its automatic feature extraction ability, has been used in conjunction with image-based features in several studies. Joe and Pyo [20] used time-domain and frequency-domain images of EEG and EOG signals to train a CNN. A fairly simple CNN architecture with only two convolutional layers attained 94% accuracy when trained with both types of image. A relatively deep CNN architecture was investigated by Yildirim et al. [21]. The model consists of 10 1-D convolutional layers that classify sleep stages from raw EEG and EOG signals. Similar to [19], the sleep classes were framed from two-class to six-class, and the authors found that for a single-channel EEG, EOG, or EEG-EOG combination, two-class (sleep and wake) classification accuracy was highest and remained higher than in the multi-class scenarios. In [22], time-frequency images obtained using Morse, Bump, and Amor continuous wavelets from a single-channel EEG signal and a pre-trained SqueezeNet model were used for sleep stage classification. When the Fpz-Cz channel was considered, scalogram images from all the wavelets resulted in accuracy slightly over 83% for 30 s epochs; for 150 s epochs, the Morse wavelet scalogram provided 85.07% average accuracy. In the case of the Pz-Oz channel with 30 s epochs, the Amor wavelet scalogram achieved 81.85% average accuracy, the highest among the three scalograms, and the mean accuracy increased slightly to 82.92% for 150 s epochs with the Morse wavelet scalogram. Zhou et al. proposed a modified CNN architecture for a single-channel EEG signal [23]. The architecture uses a multi-convolutional block to perform convolution in several stages with different filter sizes, and maximum-average pooling is used in every convolution block to maximize the feature-capturing ability. The model's performance is presented in terms of mean accuracy for a five-fold cross-validation over two datasets.
For the single-channel C4/A1 EEG from the Cleveland Children's Sleep and Health Study (CCSHS) dataset, 90.2% overall accuracy was obtained. The model provided a lower average accuracy of 86.1% when the Fpz-Cz channel EEG signal from the Sleep-EDF Database Expanded (Sleep-EDF) dataset was considered. A temporal convolutional neural network (TCNN) was proposed in [24]. In that model, the TCNN segment extracts temporal features from output features generated by two parallel CNN blocks. The CNNs extract high-level features from the EEG signal, and the TCNN finds non-linear dependency patterns. The final classification is performed by a conditional random field module that provides a conditional probability for the possible sleep stages. An orthogonal CNN (OCNN) architecture was modified and its performance investigated in [25], where the authors included an alternate version of the rectified linear unit (ReLU) and the Adam optimizer in the OCNN model. The authors claimed that this modification enhances feature extraction and accuracy, and also reduces the learning time. At the pre-processing stage, a time-frequency image is generated using empirical mode decomposition (EMD) and the Hilbert transform to train the model. This improved model exhibited consistent performance on EEG data taken from different datasets.
A few mixed neural network architectures have been proposed for sleep stage classification. A mixed or hybrid network combines two different networks. In [26], a cascaded model comprising a CNN followed by Bidirectional Long Short-Term Memory (Bi-LSTM) was proposed. Spectrograms obtained by Fourier transform are used to train the model. A multi-layer CNN is utilized to extract time and frequency features from an EEG spectrogram. Afterwards, two-layer Bi-LSTMs are employed to learn how features extracted from adjacent epochs are related to each other, and then to classify the different sleep stages. A very similar model was studied in [27], which achieved an average accuracy of 92.21% on single-channel EEG data. Another mixed architecture, combining a multilayer perceptron and an LSTM, was proposed in [28] to take advantage of both sparse patterns and sequential patterns in the temporal domain of an EEG signal. This mixed model attained better accuracy than baseline models such as the SVM, RF, and MLP. Pei et al. proposed a combined model in [29] that joins a CNN and gated recurrent units (GRU) to form a sleep stage classifier. In their study, the authors trained the model on EEG, ECG, EMG, and EOG signals simultaneously. Different stacked combinations of LSTM and Bi-LSTM structures were utilized in [30] for the same classification task. There, a total of 24 features were computed from six EEG channels, one EOG channel, and one EMG channel in the feature extraction stage, and the best accuracy (78.90%) was reported for the LSTM/Bi-LSTM combination. A hybrid framework consisting of three different deep learning architectures was discussed in [31]. This model has two subdivisions: one is a CNN-LSTM block, and the other is a transformer network. A hidden Markov model (HMM) and a 1-D CNN were combined in [32] to form a hybrid model.
The fundamental concept behind the 1D-CNN-HMM model is to leverage a one-dimensional convolutional neural network to automatically extract features from raw EEG data, and then combine it with a hidden Markov model that exploits prior information about sleep stage transitions between adjacent EEG epochs. A hybrid ANN–CNN model with an architecture similar to the one investigated in this paper was studied in [33] for identifying high-risk COVID-19 patients. There, the ANN was trained using clinical, demographic, and laboratory data, while the CNN was trained using computed tomography (CT) images; the authors reported an accuracy of 93.9% for the combined network. Although that work used a similar model architecture, it drew on two different sources (clinical records and CT images), whereas our work uses a single source (the EEG signal alone) to create both the ANN and CNN datasets.
In most of the mixed or hybrid models studied, we observed that the models are trained with only one type of feature, such as a spectrum image or the raw EEG signal. Only a few works used different features to train the two constituent architectures. To explore this approach of training a hybrid model with different types of features, we implement a mixed-input model for sleep–wake state classification. The mixed-input model combines an artificial neural network (ANN) and a CNN, and two types of features are used to train it. The ANN is trained using statistical features calculated from the EEG signals and arranged in a tabular structure, while the CNN is trained with Hilbert spectrum images. The main contributions of our research can be summarized as follows:
  • A hybrid deep learning model for sleep–wake stage classification utilizing mixed features is proposed. We also describe the classification performance of our proposed model.
  • We demonstrate how mixed input is used to train two different segments, and we combine their responses to finally determine the target classes.
The rest of this paper is organized as follows. In Section 3 the proposed hybrid model is briefly described. This section also explains the data preparation, and provides details of the feature extraction procedure. In Section 4, the results and discussion are presented, and concluding remarks are made in Section 5.

3. Methods

In this section, the architecture of our proposed model is presented. The description of the dataset and the preprocessing of the EEG signals for feature extraction is provided. Finally, the details of empirical mode decomposition (EMD) and preparing a mixed dataset from EMD coefficients are discussed.

3.1. Proposed Model

Our proposed mixed-input ANN–CNN model is depicted in Figure 1. The CNN segment, which processes the Hilbert spectrum images, is composed of three convolutional layers. The number of filters doubles in each layer: the first layer has 16 convolutional filters, and the second and third layers have 32 and 64 filters, respectively. The kernel size is set to 3, and a rectified linear unit (ReLU) is used as the activation function in each layer. Each convolutional layer is followed by a 2 × 2 max pooling layer. Two fully connected layers, one with 16 nodes and one with eight nodes, follow the last convolutional layer. Unlike conventional CNNs, there is no softmax layer in the CNN segment to provide class probabilities; instead, the output of the last fully connected layer is concatenated with the output from the ANN segment.
The ANN, on the other hand, has two dense layers, one with 16 nodes and one with 8 nodes. The same ReLU activation is used in these layers, and there is no softmax layer here either. Finally, the outputs from the CNN and the ANN are concatenated into a vector of size 16, which proceeds through another fully connected layer of 4 nodes to the final output. Since this is a binary classification problem, the final layer of the combined model is a sigmoid layer.
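As a rough illustration of the fusion step described above, the following NumPy sketch shows how the two 8-node branch outputs are concatenated into a 16-dimensional vector and passed through a 4-node dense layer to a sigmoid output. The weights here are random placeholders, not trained values, and the `dense` helper is our own illustrative function:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b, activation="linear"):
    """A single fully connected layer: act(x @ w + b)."""
    y = x @ w + b
    if activation == "relu":
        return np.maximum(y, 0.0)
    if activation == "sigmoid":
        return 1.0 / (1.0 + np.exp(-y))
    return y

# Hypothetical 8-node outputs of the two branches: the CNN branch
# (from a Hilbert spectrum image) and the ANN branch (from the
# tabular statistical features).
cnn_out = rng.standard_normal(8)
ann_out = rng.standard_normal(8)

# Concatenate the branch outputs into a 16-dimensional vector.
merged = np.concatenate([cnn_out, ann_out])

# Fully connected layer of 4 nodes, then a single sigmoid unit that
# outputs the probability of one class (e.g., wake).
w1, b1 = rng.standard_normal((16, 4)), np.zeros(4)
w2, b2 = rng.standard_normal((4, 1)), np.zeros(1)
hidden = dense(merged, w1, b1, "relu")
p_wake = dense(hidden, w2, b2, "sigmoid")

print(merged.shape, hidden.shape, float(p_wake[0]))
```

In a deep learning framework, the same structure corresponds to a concatenation layer joining the two branches, followed by the shared dense and sigmoid output layers.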

3.2. Data Preparation

In this work, EEG recordings from the Sleep-EDF expanded database (European Data Format) [34] are used for sleep and wake state classification. This dataset contains 197 whole-night EEG, EOG, and chin EMG recordings, along with event markers. The data were collected from a total of 77 individuals, of whom 40 were female and 37 were male [35]. EEG recordings were collected for two nights from each individual, and the mean age of the group was 54 years. The data were collected under the supervision of researchers from the Department of Neurology and Clinical Neurophysiology of Leiden University Hospital. The EEG data were recorded through electrodes placed on the subjects' heads at home while they maintained their habitual activities. A modified Oxford four-channel cassette recorder with a frequency response range of 0.5 to 100 Hz was used to record the EEG. The records were converted to digital form (12 bits/sample) on a personal computer. To check noise levels, zero-voltage calibration signals recorded at the same time as the EEG were used.
The complete dataset contains polysomnography recordings from two studies: Sleep Cassette Study and Data, and Sleep Telemetry Study and Data. Although the dataset contains multiple polysomnography signals, this paper only uses EEG Pz-Oz signals from the Sleep Cassette Study and Data. The Pz-Oz channel contains delta wave and posterior alpha wave information: the delta wave captures the slow-wave activity dominant in deep sleep stages, whereas alpha waves are generally observed in relaxed wakefulness and light sleep. Since our goal was to develop sleep/wake state classification from single-channel EEG data, we chose the Pz-Oz signal. The Sleep-Cassette data include 153 sleep recordings from healthy Caucasian individuals aged 25 to 101. EEG signals were sampled at 100 Hz. The dataset also contains annotation files, called hypnograms, that correspond to the physiological signals. A hypnogram contains the sleep stage for each 30-s segment of the recording. This 30-s segment is an epoch, a standardized unit for data analysis in sleep-related research. Such segmented recording allows identification and classification of different sleep stages based on the patterns of the physiological signals observed during each epoch, and is also beneficial for efficient data storage and retrieval. The sleep patterns marked in the hypnogram file consist of the following stages: W (wake state), REM, 1, 2, 3, 4, M (movement time), and ? (not scored). All hypnograms were manually evaluated by skilled medical technicians according to the Rechtschaffen and Kales manual. In our analysis, we discarded recordings corresponding to movement time (M) and not scored (?). A visual representation of the EEG signal and its concurrent hypnogram patterns is presented in Figure 2.
In Figure 2, EEG signals for several epochs are shown as the blue line plot, and the respective sleep stage in each epoch is presented by the multi-labeled red line. Clearly, the EEG signal corresponds to several stages, but our focus is to build a model that classifies two states: wake and sleep. To frame this multi-stage scenario as a binary one, we consider the five sleep stages (sleep stages 1 through 4, plus REM) as a single class (the sleep class), with the remainder representing the wake class. However, this approach leads to an imbalanced dataset with more samples in the sleep class than in the wake class. To address this issue, randomly selected samples were removed from the sleep instances until the two classes were balanced in the restructured dataset. Therefore, the data preparation has two steps:
  • Merge the different sleep stages into a single sleep class.
  • Perform random sampling to balance the classes in the dataset.
The abovementioned process created the dataset used in this work. The original dataset is quite large, so for this work, EEG recordings from four individuals were used to construct a manageable dataset. The final dataset consisted of 6880 epochs with equal numbers of instances of both classes.
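The two data-preparation steps (merging stages, then balancing by down-sampling) can be sketched as follows. The stage labels and counts below are illustrative only, not the actual hypnogram files, and `to_binary_balanced` is a hypothetical helper:

```python
import numpy as np

def to_binary_balanced(stage_labels, seed=0):
    """Map hypnogram stages to binary labels (sleep stages 1-4 and
    REM -> 1, wake -> 0) and return indices of a class-balanced subset
    obtained by randomly down-sampling the majority class."""
    rng = np.random.default_rng(seed)
    sleep_stages = {"1", "2", "3", "4", "R"}
    binary = np.array([1 if s in sleep_stages else 0 for s in stage_labels])
    wake_idx = np.flatnonzero(binary == 0)
    sleep_idx = np.flatnonzero(binary == 1)
    n = min(len(wake_idx), len(sleep_idx))
    keep = np.concatenate([
        rng.choice(wake_idx, n, replace=False),
        rng.choice(sleep_idx, n, replace=False),
    ])
    return np.sort(keep), binary

# Illustrative label sequence: 100 wake epochs, 340 sleep epochs.
labels = ["W"] * 100 + ["1"] * 60 + ["2"] * 200 + ["R"] * 80
keep, binary = to_binary_balanced(labels)
print(len(keep), int(binary[keep].sum()))   # 200 epochs kept, half of them sleep
```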

3.3. Feature Extraction

As described previously, our proposed model processes both tabular and image features to perform classification. In this section, we discuss the extraction of both types of feature.
Using time-frequency analysis tools to compute features from bio-signals (especially EEG signals) is widely adopted by researchers and has shown great performance in many applications. Accordingly, we utilized the EMD approach, a powerful technique for analyzing non-stationary signals, to calculate the features. EMD offers several advantages, such as more detailed time-frequency analysis than the short-time Fourier transform (STFT) and the wavelet transform (WT). The STFT uses a fixed window size, which limits its time-frequency resolution, whereas the WT uses a fixed set of basis functions, which limits its ability to capture the local features of a signal. EMD, on the other hand, adaptively decomposes a signal into its Intrinsic Mode Functions (IMFs), which provides a more detailed and accurate time-frequency representation of the signal. Additionally, EMD does not suffer from the Heisenberg uncertainty principle, because it is an adaptive technique that can adjust its time and frequency resolution based on the local features of the signal. Finally, EMD can be used to analyze a wide range of signals without specific assumptions about the signal's underlying properties. In contrast, the STFT and WT require prior selection of the window size or basis functions, respectively, which may not always be optimal for the signal being analyzed.
In the EMD process, a non-stationary signal $x(t)$ is decomposed into a set of intrinsic mode functions, $h_i(t)$, and a residual $r_n(t)$:

$$x(t) = \sum_{i=1}^{N} h_i(t) + r_n(t).$$
The IMFs are computed using an iterative sifting process as follows:
(i) Compute the upper and lower envelopes, $u(t)$ and $l(t)$, by interpolating the local maxima and minima of the signal using cubic-spline interpolation.
(ii) Compute the mean of the upper and lower envelopes: $m(t) = \frac{u(t) + l(t)}{2}$.
(iii) Compute the difference between the signal and the mean envelope: $d(t) = x(t) - m(t)$.
(iv) If $d(t)$ satisfies the IMF conditions below, set $h_i(t) = d(t)$, subtract it from the signal, and repeat the procedure on the residual to obtain the next IMF; otherwise, repeat steps (i)–(iii) on $d(t)$. The algorithm terminates when the residual $r_n(t)$ becomes monotonic.
An IMF satisfies two conditions:
(a)
The number of extrema (maxima and minima) and the number of zero-crossings must be equal or differ by at most 1.
(b)
The mean value of the function over any complete cycle must be zero.
The pseudocode for computing the IMFs is presented in Table 1.
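A minimal NumPy/SciPy sketch of a single sifting pass (steps i–iii above) is shown below; a full implementation, as in the Table 1 pseudocode, would iterate this until the IMF conditions are met and then continue on the residual. The `sift_once` helper is illustrative, not the paper's actual code:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(x, t):
    """One sifting pass (steps i-iii): subtract the mean of the
    cubic-spline envelopes through the local maxima and minima."""
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    if len(maxima) < 2 or len(minima) < 2:
        return None  # too few extrema: treat x as the residual
    upper = CubicSpline(t[maxima], x[maxima])(t)  # u(t)
    lower = CubicSpline(t[minima], x[minima])(t)  # l(t)
    mean_env = (upper + lower) / 2.0              # m(t)
    return x - mean_env                           # d(t)

# Two-tone test signal: one sifting pass pushes the candidate IMF
# toward the higher-frequency component.
t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 30 * t) + np.sin(2 * np.pi * 3 * t)
d = sift_once(x, t)
print(d.shape)
```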
The IMFs can be used to analyze the frequency content of the signal as a function of time. The first IMF typically represents the highest frequency oscillatory mode in the signal, whereas the last IMF represents the lowest frequency mode. The number of IMFs obtained from a signal depends on its complexity and the desired level of decomposition. Because IMFs represent various frequency contents from the complex signal, in this work, we compute statistical features from the IMFs obtained from the epochs of the EEG recordings to create the numerical features to be used in the mixed-input model in tabular format. Statistical features are listed in Table 2 with the equations to compute them.
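As an illustration of building the tabular features, the sketch below computes a handful of common per-IMF statistics and flattens them into one row; the feature set shown here (mean, standard deviation, skewness, kurtosis, mean power) is an assumption for the example, and the actual list is the one defined in Table 2:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def imf_features(imfs):
    """Compute per-IMF statistics and flatten them into one tabular row.
    The exact feature set here is illustrative."""
    feats = []
    for imf in imfs:
        feats += [
            imf.mean(),
            imf.std(),
            skew(imf),
            kurtosis(imf),
            np.sum(imf ** 2) / len(imf),   # mean power
        ]
    return np.array(feats)

# Three synthetic "IMFs" of one 30-s epoch (3000 samples at 100 Hz).
x = np.linspace(0, 1, 3000)
imfs = [np.sin(2 * np.pi * k * x) for k in (20, 5, 1)]
row = imf_features(imfs)
print(row.shape)   # (15,)
```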
There is a CNN segment in the proposed mixed-input model, so the other type of feature is an image. Many researchers use spectrogram images to analyze bio-signals by observing the spectral content of the signal over time. Common spectrograms are obtained through time-frequency analysis methods such as the STFT and WT. Since we already used EMD and the resulting IMFs to compute the numerical features, it is logical to obtain the spectrogram from the same decomposition and use that image as the image feature for our model. Therefore, Hilbert spectrum images are obtained from the IMFs resulting from the EMD by using the Hilbert–Huang transform (HHT), and each spectrum image is the feature processed by the CNN component of the model. Unlike a traditional spectrogram, the Hilbert spectrum can capture and analyze frequency modulations and transients that may be missed by other spectral analysis methods [36,37]. The process of obtaining the Hilbert spectrum is visually depicted in Figure 3.
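The core of the HHT step can be sketched with SciPy's `hilbert` function: the analytic signal of each IMF yields an instantaneous amplitude and frequency, and stacking these over all IMFs gives the Hilbert spectrum that is rendered as an image. The sampling rate below matches the 100 Hz Sleep-Cassette EEG; the helper function is illustrative:

```python
import numpy as np
from scipy.signal import hilbert

FS = 100  # sampling rate of the Sleep-Cassette EEG (Hz)

def instantaneous_amp_freq(imf, fs=FS):
    """Instantaneous amplitude and frequency of one IMF via the
    analytic signal (Hilbert transform)."""
    analytic = hilbert(imf)
    amplitude = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    freq = np.diff(phase) / (2.0 * np.pi) * fs   # Hz, length N-1
    return amplitude, freq

# A pure 10 Hz tone should yield an instantaneous frequency near 10 Hz.
t = np.arange(0, 3, 1 / FS)
imf = np.sin(2 * np.pi * 10 * t)
amp, freq = instantaneous_amp_freq(imf)
print(round(float(np.median(freq)), 1))
```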
At the end of the feature extraction stage, two types of feature are now available: one is a numerical dataset in tabular format, and the other is an image dataset that contains Hilbert spectrum images for two classes, namely, the wake and sleep stages. In both datasets, there is an equal number of instances for the two classes. Some sample Hilbert spectrum images for the two classes are provided in Figure 4a,b.

4. Results and Discussion

In this section, the classification performance of the proposed mixed-input model is discussed. To assess the classification performance of a model, a confusion matrix is used. It summarizes the number of correct and incorrect predictions made by the model by comparing predicted classes with actual classes. In a binary classification framework, the confusion matrix has four entries: true positive (TP), false positive (FP), true negative (TN), and false negative (FN). In the present classification problem, they can be interpreted as follows:
  • TP: the number of samples correctly classified as the wake class.
  • FP: the number of sleep class samples incorrectly predicted as the wake class.
  • TN: the number of samples correctly classified as the sleep class.
  • FN: the number of wake class samples incorrectly predicted as the sleep class.
These are used to compute the following performance metrics:
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$

$$\mathrm{Precision} = \frac{TP}{TP + FP}$$

$$\mathrm{Recall} = \frac{TP}{TP + FN}$$

$$F_1\text{-score} = \frac{TP}{TP + \frac{1}{2}(FP + FN)}$$
Accuracy is a general measure of the model's overall correctness, calculated by dividing the number of correct predictions (true positives and true negatives) by the total number of instances in the dataset. Precision indicates the reliability of the model when it predicts a positive outcome; a higher precision score means fewer false positives, so the model is more accurate in identifying positive instances. Recall, on the other hand, evaluates the model's ability to find all positive instances; a higher recall score indicates a lower rate of false negatives, meaning the model is better at capturing positive instances. Finally, the F1-score is the harmonic mean of precision and recall. It balances precision and recall, considering both the quality and the completeness of the predictions; its best possible value is 1, and it penalizes models with a significant difference between precision and recall.
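These four metrics follow directly from the confusion-matrix counts. The counts in the example below are hypothetical (they are not the paper's actual confusion-matrix cells), chosen only to illustrate the calculation:

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall, and F1 from confusion-matrix
    counts, with the wake class taken as positive (as in the text)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = tp / (tp + 0.5 * (fp + fn))
    return accuracy, precision, recall, f1

# Hypothetical counts for a 2064-sample test set.
acc, prec, rec, f1 = binary_metrics(tp=990, fp=30, tn=1011, fn=33)
print(round(acc, 4), round(prec, 4), round(rec, 4), round(f1, 4))
```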
Initially, the classification performance was observed using only a CNN model with the same architecture as the CNN segment of the proposed model described in Section 3.1. The CNN model was compiled and trained using the same optimizer, loss function, and batch size as the mixed-input ANN–CNN model. The training and validation performance of the CNN model after the same number of epochs is presented in Figure 5a,b.
From the training and validation loss patterns in Figure 5a, it can be observed that the validation loss increases while the training loss continues to decrease, indicating minor overfitting. The CNN model attained an average accuracy of 95.25%. The corresponding confusion matrix and other performance metrics are presented in Figure 6a,b.
Finally, we included the numerical features and Hilbert spectrum images to train the combined ANN–CNN model. The training and validation performance of the proposed model are illustrated by the loss and accuracy curves in Figure 7a,b.
The model was trained for 100 epochs, and the binary cross-entropy loss and accuracy follow regular patterns on both the training and validation data. The loss dropped sharply until epoch 10 and stabilized after epoch 20. Similarly, the accuracy converged to around 96% after 60 epochs. Training the model for more than 100 epochs did not increase the accuracy; therefore, 100 epochs can be considered optimal for this model.
The model's ability to classify EEG epochs in the test dataset can be understood better from the confusion matrix in Figure 8a, where 2001 of 2064 test samples were correctly classified; the model was slightly better at predicting the wake class than the sleep class. The other performance metrics calculated from this confusion matrix are shown in Figure 8b. Comparing the confusion matrices in Figure 6a and Figure 8a, the combined model classifies somewhat more samples correctly than the CNN model alone; in terms of accuracy, the combined ANN–CNN model is 1.69% higher. Although the improvement is not large, the ANN–CNN model can be considered more stable than the CNN model, since the latter shows mild overfitting in the training and validation loss curves of Figure 5a.
Figure 9 presents the effect of training both models for different numbers of epochs. The line plots show that both models achieve their best classification accuracy when trained for 100 epochs: 96.94% for the ANN–CNN model and 95.25% for the CNN model. For the combined ANN–CNN model, the accuracy stays around 96% even when trained for more than 100 epochs, whereas the accuracy of the CNN model decreases to around 94% with additional training. Given this pattern, training for 100 epochs appears appropriate.
The performance of the proposed hybrid model is also evaluated for classifying all of the sleep stages available in the dataset. The EEG epochs are labeled with six stages: wake, sleep stages 1 through 4, and REM sleep. Epochs marked as movement time (M) or not scored (?) are discarded in the data preparation stage. In total, 5440 samples (EEG epochs) from four subjects are considered, distributed as listed in Table 3.
Because far more epochs in the dataset belong to the wake state, only 2000 wake samples are retained so that wake instances do not outnumber the other classes. Even after down-sampling the wake class, sleep stages 1, 3, and 4 still have relatively few samples compared with the remaining three classes. With this sample distribution, the ANN–CNN model is trained for 100 epochs, since the results in Figure 9 show that the model achieves its best accuracy at that point. The loss and accuracy trends over the training period are presented in Figure 10a,b.
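The wake-class down-sampling described above can be sketched as follows (the array and function names are illustrative; the exact selection procedure used in the paper is not specified):

```python
import numpy as np

def downsample_class(X, y, target_label, n_keep, seed=42):
    """Randomly keep n_keep samples of one class, leaving the other classes untouched."""
    rng = np.random.default_rng(seed)
    idx_target = np.flatnonzero(y == target_label)   # indices of the over-represented class
    idx_other = np.flatnonzero(y != target_label)    # everything else is kept as-is
    kept = rng.choice(idx_target, size=n_keep, replace=False)
    keep = np.sort(np.concatenate([kept, idx_other]))
    return X[keep], y[keep]

# e.g. reduce an over-represented wake class (label 0) to 2000 samples:
# X_bal, y_bal = downsample_class(X, y, target_label=0, n_keep=2000)
```

Down-sampling discards data; image augmentation of the minority classes, mentioned as future work below, is the complementary strategy.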
The trend in the training and validation losses in Figure 10a again indicates overfitting. The maximum validation accuracy attained by the model is 83.64% for six-class sleep stage classification. The model's performance on each individual class can be better understood from the confusion matrix depicted in Figure 11.
The numbers in the confusion matrix indicate that the model is quite good at classifying the wake class and shows reasonably good performance in detecting sleep stages 2 and 4 and the REM stage. However, the model struggles to distinguish sleep stages 1 and 3. Table 3 shows that these two stages, along with sleep stage 4, have relatively few samples, which is one reason the model could not classify them accurately. Another possible reason lies in the Hilbert spectrum images: the image for the wake class has a quite distinctive pattern compared with the remaining classes, whereas the images for sleep stages 1, 3, and 4 and the REM stage may be highly similar to one another, resulting in less accurate classification for those classes. A class-wise list of precision, recall, and F1-scores is provided in Table 4.
Although the ANN–CNN model does not perform equally well for all classes, its performance is still promising given the similarity among the Hilbert spectrum images. The model might classify better if a different image encoding of the EEG signal were used, one that generates more distinctive images for the different sleep stages.
The classification accuracy of the mixed-input model is compared in Table 5 with some existing work. Most of the studies considered here used the Sleep EDF or Sleep EDF Expanded (EDFx) dataset and a combined deep learning architecture. Three studies based on other datasets, the University College Dublin Sleep Apnoea Database (UCDDB), the Sleep Heart Health Study (SHHS), and the Massachusetts Institute of Technology and Beth Israel Hospital (MIT-BIH) database, are included because they used hybrid classification models. Most of the studies classify multiple sleep stages, while some report two-class (sleep/wake) accuracy; when a study addresses 2–5-class classification, its two-class accuracy score is listed in Table 5.
As the table shows, most hybrid models are implemented for classifying five sleep stages, and the works reporting two-class results generally do not use hybrid models. Although the two-class results of other work appear better than those of our proposed method, they mostly relied on advanced feature extraction. In [40,43], the classification accuracy is higher than our model's, but those works used Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and Ensemble Empirical Mode Decomposition (EEMD) along with spectral features. Ronzhina et al. [42] also attained better two-class accuracy, but utilized more than one channel and numerous time, frequency, time–frequency, nonlinear, and hybrid features. The 1-D CNN model proposed in [21] achieved higher accuracy by using EEG, EOG, and combined EEG + EOG signals; moreover, that model is very deep, with 10 convolutional layers. Thus, classification accuracy can be improved with more complex feature extraction or much deeper models. In our work, we deliberately kept the feature extraction process simple and built mixed features for both segments of the hybrid model from the same EMD coefficients: inexpensive numerical features for the ANN and image features for the CNN. We believe this trade-off is why our accuracy is somewhat lower than that of other two-class classification studies. In future work, we will focus on extracting more effective numerical features and on implementing different image encoding approaches for creating image datasets from the EEG signal, to ensure better distinction among samples of different classes. Augmenting images to balance the dataset will also be considered.

5. Conclusions

In this paper, a hybrid model is proposed for sleep and wake state classification from single-channel EEG signals. The proposed model combines ANN and CNN architectures by concatenating the outputs of their fully connected layers. It is trained on mixed input: statistical features computed after performing EMD on the EEG signal, and the corresponding Hilbert spectrum images. The statistical features, organized in tabular format, are processed by the ANN, while the images are processed by the CNN. Single-channel EEG signals collected from the Sleep-EDF Expanded database and the EEG epoch labels are reorganized to create a two-class dataset: sleep and wake. As a primary approach, the model was trained with EEG signals from four random individuals, and after 100 training epochs, the highest accuracy achieved was 96.94%, which is promising given the simple statistical features used. In future, we plan to address the classification of multiple sleep stages with more advanced features and to include all subjects' EEG signals from the Sleep-EDF Expanded database.

Author Contributions

Conceptualization, M.N.H. and I.K.; methodology, M.N.H. and I.K.; software, M.N.H.; writing—original draft preparation, M.N.H.; writing—review and editing, M.N.H. and I.K.; visualization, M.N.H.; supervision, I.K.; funding acquisition, I.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the 2022 research fund of the industrial cluster program of the Korea Industrial Complex Corporation, and in part by the Regional Innovation Strategy (RIS) through the NRF, funded by the Ministry of Education (MOE) under Grant 2021RIS-003.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors extend their appreciation to the industrial cluster program of the Korea Industrial Complex Corporation, and to the Regional Innovation Strategy (RIS), for funding this work through the NRF, funded by the Ministry of Education (MOE) under Grant 2021RIS-003.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Medic, G.; Wille, M.; Hemels, M.E. Short-and long-term health consequences of sleep disruption. Nat. Sci. Sleep 2017, 9, 151–161. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Preston, S. Brain Wave Sleep Data Can Predict Future Health Outcomes. 2022. Available online: https://neurosciencenews.com/sleep-brain-health-20757/ (accessed on 18 April 2022).
  3. Winer, J.R.; Mander, B.A.; Kumar, S.; Reed, M.; Baker, S.L.; Jagust, W.J.; Walker, M.P. Sleep disturbance forecasts β-amyloid accumulation across subsequent years. Curr. Biol. 2020, 30, 4291–4298. [Google Scholar] [CrossRef]
  4. Rechtschaffen, A. Techniques and scoring systems for sleep stages of human subjects. Man. Stand. Terminol. 1978. [Google Scholar]
  5. Berry, R.B.; Brooks, R.; Gamaldo, C.; Harding, S.M.; Lloyd, R.M.; Quan, S.F.; Troester, M.T.; Vaughn, B.V. AASM scoring manual updates for 2017 (version 2.4). J. Clin. Sleep Med. 2017, 13, 665–666. [Google Scholar] [CrossRef] [PubMed]
  6. Moser, D.; Anderer, P.; Gruber, G.; Parapatics, S.; Loretz, E.; Boeck, M.; Kloesch, G.; Heller, E.; Schmidt, A.; Danker-Hopfe, H.; et al. Sleep classification according to AASM and Rechtschaffen & Kales: Effects on sleep scoring parameters. Sleep 2009, 32, 139–149. [Google Scholar] [PubMed]
  7. Boostani, R.; Karimzadeh, F.; Nami, M. A comparative review on sleep stage classification methods in patients and healthy individuals. Comput. Methods Programs Biomed. 2017, 140, 77–91. [Google Scholar] [CrossRef] [PubMed]
  8. Aboalayon, K.A.I.; Faezipour, M.; Almuhammadi, W.S.; Moslehpour, S. Sleep stage classification using EEG signal analysis: A comprehensive survey and new investigation. Entropy 2016, 18, 272. [Google Scholar] [CrossRef] [Green Version]
  9. Cay, G.; Ravichandran, V.; Sadhu, S.; Zisk, A.H.; Salisbury, A.; Solanki, D.; Mankodiya, K. Recent Advancement in Sleep Technologies: A Literature Review on Clinical Standards, Sensors, Apps, and AI Methods. IEEE Access 2022, 10, 104737–104756. [Google Scholar] [CrossRef]
  10. Sarkar, D.; Guha, D.; Tarafdar, P.; Sarkar, S.; Ghosh, A.; Dey, D. A comprehensive evaluation of contemporary methods used for automatic sleep staging. Biomed. Signal Process. Control 2022, 77, 103819. [Google Scholar] [CrossRef]
  11. Casal, R.; Di Persia, L.E.; Schlotthauer, G. Classifying sleep–wake stages through recurrent neural networks using pulse oximetry signals. Biomed. Signal Process. Control 2021, 63, 102195. [Google Scholar] [CrossRef]
  12. Chen, Z.; Wu, M.; Cui, W.; Liu, C.; Li, X. An attention based CNN-LSTM approach for sleep-wake detection with heterogeneous sensors. IEEE J. Biomed. Health Inform. 2020, 25, 3270–3277. [Google Scholar] [CrossRef] [PubMed]
  13. Herlan, A.; Ottenbacher, J.; Schneider, J.; Riemann, D.; Feige, B. Electrodermal activity patterns in sleep stages and their utility for sleep versus wake classification. J. Sleep Res. 2019, 28, e12694. [Google Scholar] [CrossRef] [PubMed]
  14. Sundararajan, K.; Georgievska, S.; Te Lindert, B.H.; Gehrman, P.R.; Ramautar, J.; Mazzotti, D.R.; Sabia, S.; Weedon, M.N.; van Someren, E.J.; Ridder, L.; et al. Sleep classification from wrist-worn accelerometer data using random forests. Sci. Rep. 2021, 11, 24. [Google Scholar] [CrossRef] [PubMed]
  15. Boe, A.J.; McGee Koch, L.L.; O’Brien, M.K.; Shawen, N.; Rogers, J.A.; Lieber, R.L.; Reid, K.J.; Zee, P.C.; Jayaraman, A. Automating sleep stage classification using wireless, wearable sensors. NPJ Digit. Med. 2019, 2, 131. [Google Scholar] [CrossRef] [Green Version]
  16. Khosla, A.; Khandnor, P.; Chand, T. A comparative analysis of signal processing and classification methods for different applications based on EEG signals. Biocybern. Biomed. Eng. 2020, 40, 649–690. [Google Scholar] [CrossRef]
  17. Sekkal, R.N.; Bereksi-Reguig, F.; Ruiz-Fernandez, D.; Dib, N.; Sekkal, S. Automatic sleep stage classification: From classical machine learning methods to deep learning. Biomed. Signal Process. Control 2022, 77, 103751. [Google Scholar] [CrossRef]
  18. Yücelbaş, Ş.; Yücelbaş, C.; Tezel, G.; Özşen, S.; Yosunkaya, Ş. Automatic sleep staging based on SVD, VMD, HHT and morphological features of single-lead ECG signal. Expert Syst. Appl. 2018, 102, 193–206. [Google Scholar] [CrossRef]
  19. Rahman, M.M.; Bhuiyan, M.I.H.; Hassan, A.R. Sleep stage classification using single-channel EOG. Comput. Biol. Med. 2018, 102, 211–220. [Google Scholar] [CrossRef]
  20. Joe, M.J.; Pyo, S.C. Classification of Sleep Stage with Biosignal Images Using Convolutional Neural Networks. Appl. Sci. 2022, 12, 3028. [Google Scholar] [CrossRef]
  21. Yildirim, O.; Baloglu, U.B.; Acharya, U.R. A deep learning model for automated sleep stages classification using PSG signals. Int. J. Environ. Res. Public Health 2019, 16, 599. [Google Scholar] [CrossRef] [Green Version]
  22. Jadhav, P.; Rajguru, G.; Datta, D.; Mukhopadhyay, S. Automatic sleep stage classification using time–frequency images of CWT and transfer learning using convolution neural network. Biocybern. Biomed. Eng. 2020, 40, 494–504. [Google Scholar] [CrossRef]
  23. Zhou, D.; Wang, J.; Hu, G.; Zhang, J.; Li, F.; Yan, R.; Kettunen, L.; Chang, Z.; Xu, Q.; Cong, F. SingleChannelNet: A model for automatic sleep stage classification with raw single-channel EEG. Biomed. Signal Process. Control 2022, 75, 103592. [Google Scholar] [CrossRef]
  24. Khalili, E.; Asl, B.M. Automatic sleep stage classification using temporal convolutional neural network and new data augmentation technique from raw single-channel EEG. Comput. Methods Programs Biomed. 2021, 204, 106063. [Google Scholar] [CrossRef] [PubMed]
  25. Bhusal, A.; Alsadoon, A.; Prasad, P.; Alsalami, N.; Rashid, T.A. Deep learning for sleep stages classification: Modified rectified linear unit activation function and modified orthogonal weight initialisation. Multimed. Tools Appl. 2022, 81, 9855–9874. [Google Scholar] [CrossRef]
  26. Li, C.; Qi, Y.; Ding, X.; Zhao, J.; Sang, T.; Lee, M. A deep learning method approach for sleep stage classification with eeg spectrogram. Int. J. Environ. Res. Public Health 2022, 19, 6322. [Google Scholar] [CrossRef]
  27. Hao, J.; Luo, S.; Pan, L. A novel sleep staging algorithm based on hybrid neural network. In Proceedings of the 2019 IEEE 9th International Conference on Electronics Information and Emergency Communication (ICEIEC), Beijing, China, 12–14 July 2019; pp. 158–161. [Google Scholar]
  28. Dong, H.; Supratak, A.; Pan, W.; Wu, C.; Matthews, P.M.; Guo, Y. Mixed neural network approach for temporal sleep stage classification. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 26, 324–333. [Google Scholar] [CrossRef] [Green Version]
  29. Pei, W.; Li, Y.; Siuly, S.; Wen, P. A hybrid deep learning scheme for multi-channel sleep stage classification. Comput. Mater. Contin. 2022, 71, 889–905. [Google Scholar]
  30. Kuo, C.E.; Chen, G.T. Automatic sleep staging based on a hybrid stacked LSTM neural network: Verification using large-scale dataset. IEEE Access 2020, 8, 111837–111849. [Google Scholar] [CrossRef]
  31. Chen, Z.; Yang, Z.; Wang, D.; Huang, M.; Ono, N.; Altaf-Ul-Amin, M.; Kanaya, S. An end-to-end sleep staging simulator based on mixed deep neural networks. In Proceedings of the 2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Houston, TX, USA, 9–12 December 2021; pp. 848–853. [Google Scholar]
  32. Yang, B.; Zhu, X.; Liu, Y.; Liu, H. A single-channel EEG based automatic sleep stage classification method leveraging deep one-dimensional convolutional neural network and hidden Markov model. Biomed. Signal Process. Control 2021, 68, 102581. [Google Scholar] [CrossRef]
  33. Ho, T.T.; Park, J.; Kim, T.; Park, B.; Lee, J.; Kim, J.Y.; Kim, K.B.; Choi, S.; Kim, Y.H.; Lim, J.K.; et al. Deep learning models for predicting severe progression in COVID-19-infected patients: Retrospective study. JMIR Med. Inform. 2021, 9, e24973. [Google Scholar] [CrossRef]
  34. Kemp, B.; Zwinderman, A.; Tuk, B.; Kamphuisen, H.; Oberye, J. Analysis of a sleep-dependent neuronal feedback loop: The slow-wave microcontinuity of the EEG. IEEE Trans. Biomed. Eng. 2000, 47, 1185–1194. [Google Scholar] [CrossRef]
  35. Mourtazaev, M.; Kemp, B.; Zwinderman, A.; Kamphuisen, H. Age and gender affect different characteristics of slow waves in the sleep EEG. Sleep 1995, 18, 557–564. [Google Scholar] [CrossRef] [Green Version]
  36. Luque, J.; Anguita, D.; Pérez, F.; Denda, R. Spectral analysis of electricity demand using hilbert–huang transform. Sensors 2020, 20, 2912. [Google Scholar] [CrossRef]
  37. Adam, O. Advantages of the Hilbert Huang transform for marine mammals signals analysis. J. Acoust. Soc. Am. 2006, 120, 2965–2973. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  38. Jiang, D.; Lu, Y.N.; Yu, M.; Yuanyuan, W. Robust sleep stage classification with single-channel EEG signals using multimodal decomposition and HMM-based refinement. Expert Syst. Appl. 2019, 121, 188–203. [Google Scholar] [CrossRef]
  39. Hassan, A.R.; Bashar, S.K.; Bhuiyan, M.I.H. On the classification of sleep states by means of statistical and spectral features from single channel Electroencephalogram. In Proceedings of the 2015 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Kochi, India, 10–13 August 2015; pp. 2238–2243. [Google Scholar] [CrossRef]
  40. Hassan, A.R.; Bhuiyan, M.I.H. Computer-aided sleep staging using complete ensemble empirical mode decomposition with adaptive noise and bootstrap aggregating. Biomed. Signal Process. Control 2016, 24, 1–10. [Google Scholar] [CrossRef]
  41. Berthomier, C.; Drouot, X.; Herman-Stoïca, M.; Berthomier, P.; Prado, J.; Bokar-Thire, D.; Benoit, O.; Mattout, J.; d’Ortho, M.P. Automatic analysis of single-channel sleep EEG: Validation in healthy individuals. Sleep 2007, 30, 1587–1595. [Google Scholar] [CrossRef] [Green Version]
  42. Ronzhina, M.; Janoušek, O.; Kolářová, J.; Nováková, M.; Honzík, P.; Provazník, I. Sleep scoring using artificial neural networks. Sleep Med. Rev. 2012, 16, 251–263. [Google Scholar] [CrossRef]
  43. Hassan, A.R.; Bhuiyan, M.I.H. Automated identification of sleep states from EEG signals by means of ensemble empirical mode decomposition and random under sampling boosting. Comput. Methods Programs Biomed. 2017, 140, 201–210. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Mixed-input sleep–wake classification model.
Figure 2. EEG signal epochs and corresponding sleep stage markers. The blue curve is the EEG signal over several epochs; the red curve is the corresponding hypnogram indicating the sleep stages.
Figure 3. Hilbert spectrum image generation process. The EEG epochs (a sample epoch is presented in the left plot) are decomposed using EMD which results in corresponding IMFs (shown in the middle plot). Finally, using HHT a Hilbert spectrum is obtained from the IMFs (shown in the right plot).
Figure 4. Samples of Hilbert spectrum images created from sleep and wake EEG epochs following the process shown in Figure 3.
Figure 5. Training and validation performance of CNN model only. (a) shows the training and validation loss. (b) shows the training and validation accuracy over the training epochs for the CNN model.
Figure 6. Performance metrics of the CNN model. (a) shows the confusion matrix and (b) shows the accuracy, precision, recall, and F1-score for the CNN model.
Figure 7. Training and validation performance of the proposed model. (a) shows the training and validation loss. (b) shows the training and validation accuracy over the training epochs.
Figure 8. Performance metrics of the mixed-input model. (a) shows the confusion matrix and (b) shows the accuracy, precision, recall, and F1-score for the mixed-input model.
Figure 9. Effect of training the ANN–CNN and CNN models for different numbers of epochs.
Figure 10. Training and validation performance of the proposed model for six-class classification scenario. (a) shows the training and validation loss. (b) shows the training and validation accuracy.
Figure 11. Confusion matrix of the ANN–CNN model for six-class sleep stage classification.
Table 1. Pseudocode for computing IMFs.
Input: signal x(t)
Output: IMFs
Initialize residue: r(t) = x(t)
Initialize IMFs = [ ]
while r(t) has more than one extremum do:
            Find all local extrema in r(t).
            Construct the upper envelope u(t) by connecting the local maxima with a cubic spline.
            Construct the lower envelope l(t) by connecting the local minima with a cubic spline.
            Compute the mean of the upper and lower envelopes, m(t) = (u(t) + l(t)) / 2.
            Extract the current IMF: h(t) = r(t) − m(t).
            Append h(t) to IMFs.
            Update the residue: r(t) = r(t) − h(t).
Append the final residue r(t) to IMFs.
return IMFs.
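A minimal Python sketch of this pseudocode is given below. It performs a single sifting pass per IMF, as the pseudocode does (full EMD repeats the sift until each candidate satisfies the IMF conditions), and the endpoint handling for the splines is one common heuristic, not necessarily the implementation used in the paper:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def simple_emd(x, max_imfs=10):
    """Simplified EMD: one sifting pass per IMF, following the pseudocode above."""
    r = np.asarray(x, dtype=float).copy()
    t = np.arange(len(r))
    imfs = []
    for _ in range(max_imfs):
        maxima = argrelextrema(r, np.greater)[0]
        minima = argrelextrema(r, np.less)[0]
        if len(maxima) < 4 or len(minima) < 4:      # too few extrema for cubic envelopes
            break
        # Include the endpoints so the envelopes span the whole signal
        up = np.r_[0, maxima, len(r) - 1]
        lo = np.r_[0, minima, len(r) - 1]
        upper = CubicSpline(up, r[up])(t)           # upper envelope through maxima
        lower = CubicSpline(lo, r[lo])(t)           # lower envelope through minima
        m = (upper + lower) / 2.0                   # mean envelope
        h = r - m                                   # current IMF
        imfs.append(h)
        r = m                                       # residue for the next round (r - h)
    imfs.append(r)                                  # final residue
    return imfs
```

Because each pass splits r(t) into h(t) + m(t), the extracted IMFs plus the final residue always sum back to the original signal exactly.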
Table 2. Statistical features and their equations.
| Feature | Equation | Feature | Equation |
|---|---|---|---|
| Minimum | $\min(y_i)$ | Maximum | $\max(y_i)$ |
| Mean | $\mu = \frac{1}{N}\sum_{i=1}^{N} y_i$ | Standard deviation | $\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(y_i-\mu)^2}$ |
| Kurtosis | $K = \frac{1}{N}\sum_{i=1}^{N}\frac{(y_i-\mu)^4}{\sigma^4}$ | Skewness | $S = \frac{1}{N}\sum_{i=1}^{N}\frac{(y_i-\mu)^3}{\sigma^3}$ |
| Root mean square | $y_{RMS} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} y_i^2}$ | Crest factor | $CF = \frac{\max(y_i)}{y_{RMS}}$ |
| Shape factor | $SF = \frac{y_{RMS}}{\frac{1}{N}\sum_{i=1}^{N}\lvert y_i\rvert}$ | Impulse factor | $IF = \frac{y_{peak}}{\frac{1}{N}\sum_{i=1}^{N}\lvert y_i\rvert}$ |
| Clearance factor | $CLF = \frac{y_{peak}}{\left(\frac{1}{N}\sum_{i=1}^{N}\sqrt{\lvert y_i\rvert}\right)^2}$ | Variance | $S^2 = \frac{\sum_{i=1}^{N}(y_i-\mu)^2}{N-1}$ |
| Energy | $E = \sum_{n} y_n^2$ | Power | $P = \lim_{N\to\infty}\frac{1}{N}\sum_{n=0}^{N-1} y_n^2$ |
| Peak to RMS | $\frac{y_{peak}}{y_{RMS}}$ | Range | $y_{max} - y_{min}$ |
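The features in Table 2 can be computed from a single EEG epoch with NumPy and SciPy. This sketch uses illustrative function and key names and the biased (1/N) moment estimators shown in the table:

```python
import numpy as np
from scipy.stats import kurtosis, skew

def epoch_features(y):
    """Statistical features of Table 2 for a 1-D EEG epoch y."""
    y = np.asarray(y, dtype=float)
    mu, sigma = y.mean(), y.std()                 # biased (1/N) estimators
    rms = np.sqrt(np.mean(y ** 2))
    abs_mean = np.mean(np.abs(y))
    peak = np.max(np.abs(y))
    return {
        "min": y.min(), "max": y.max(),
        "mean": mu, "std": sigma,
        "kurtosis": kurtosis(y, fisher=False),    # (1/N) sum((y-mu)^4) / sigma^4
        "skewness": skew(y),                      # (1/N) sum((y-mu)^3) / sigma^3
        "rms": rms,
        "crest_factor": y.max() / rms,
        "shape_factor": rms / abs_mean,
        "impulse_factor": peak / abs_mean,
        "clearance_factor": peak / np.mean(np.sqrt(np.abs(y))) ** 2,
        "variance": y.var(ddof=1),                # unbiased, N-1 denominator
        "energy": np.sum(y ** 2),
        "power": np.mean(y ** 2),                 # finite-sample form of the limit in Table 2
        "peak_to_rms": peak / rms,
        "range": y.max() - y.min(),
    }
```

Applied per IMF or per raw epoch, this yields the tabular feature vector consumed by the ANN branch.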
Table 3. Number of samples in different classes.
| Class | Number of Samples |
|---|---|
| Wake stage | 2000 |
| Sleep stage 1 | 320 |
| Sleep stage 2 | 1730 |
| Sleep stage 3 | 364 |
| Sleep stage 4 | 353 |
| Sleep stage REM | 673 |
Table 4. Class-wise precision, recall, and F1-scores.
| Class | Precision | Recall | F1-Score |
|---|---|---|---|
| Wake stage | 94.93% | 98.09% | 96.49% |
| Sleep stage 1 | 51.67% | 31.63% | 39.24% |
| Sleep stage 2 | 83.02% | 87.49% | 85.27% |
| Sleep stage 3 | 54.21% | 43.27% | 48.13% |
| Sleep stage 4 | 71.43% | 86.73% | 78.34% |
| Sleep stage REM | 77.25% | 73.00% | 75.06% |
Table 5. Comparison with similar works on relevant datasets.
| Study | Dataset | Methods | Number of Classes | Accuracy |
|---|---|---|---|---|
| Li et al. [26] | Sleep EDF and EDFx | CNN-Bi-LSTM | 5 | 94.17% |
| Joe, M.J. and Pyo, S.C. [20] | Sleep EDFx | CNN | 4 | 94% |
| Jadhav, P. et al. [22] | Sleep EDFx | SqueezeNet | 5 | <84% |
| Yildirim, O. et al. [21] | Sleep EDF and EDFx | 1-D CNN | 2–6 | 98.33% |
| Jiang, D. et al. [38] | Sleep EDF and EDFx | Hidden Markov Model | 5 | 92.7% |
| Pei, W. et al. [29] | UCDDB and SHHS | CNN-GRU | 5 | 68.28% to 83.15% |
| Hao, J. et al. [27] | MIT-BIH | CNN-Bi-LSTM | 5 | 92.21% |
| Kuo, C.E. and Chen, G.T. [30] | PhysioNet2018 | Hybrid stacked LSTM | 5 | 83.07% |
| Yang, B. et al. [32] | Sleep EDFx | 1-D-CNN-HMM | 5 | 83.23% |
| Hassan, A.R. et al. [39] | Sleep EDF | Bagging (decision tree) | 2–5 | 95.05% |
| Hassan, A.R. and Bhuiyan, M.I.H. [40] | Sleep EDF | CEEMDAN-Bagging | 2–6 | 99.48% |
| Berthomier, C. et al. [41] | Proprietary dataset | Fuzzy classification | 2–5 | 96% |
| Ronzhina, M. et al. [42] | Sleep EDF | ANN | 2–6 | 96.9% |
| Hassan, A.R. and Bhuiyan, M.I.H. [43] | Sleep EDF | EEMD-Random under-sampling boosting | 2–6 | 98.15% |
| Proposed method | Sleep EDFx | ANN–CNN | 2, 6 | 96.94%, 83.64% |