Article

Integration of 24 Feature Types to Accurately Detect and Predict Seizures Using Scalp EEG Signals

College of Computer Science and Technology, and Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
* Author to whom correspondence should be addressed.
The first two authors contributed equally to this work.
Sensors 2018, 18(5), 1372; https://doi.org/10.3390/s18051372
Submission received: 4 April 2018 / Revised: 23 April 2018 / Accepted: 26 April 2018 / Published: 28 April 2018
(This article belongs to the Special Issue Sensors for Health Monitoring and Disease Diagnosis)

Abstract

The neurological disorder epilepsy causes substantial problems for patients, including uncontrolled seizures and even sudden death. Accurate detection and prediction of epileptic seizures would significantly improve the quality of life of epileptic patients. Various feature extraction algorithms have been proposed to describe EEG signals in the frequency or time domain. Both invasive intracranial and non-invasive scalp EEG signals have been screened for epileptic seizure patterns. This study extracted a comprehensive list of 24 feature types from the scalp EEG signals and selected 170 of the 2794 features for an accurate classification of epileptic seizures. An accuracy (Acc) of 99.40% was achieved for detecting epileptic seizures from the scalp EEG signals. A balanced accuracy (bAcc), calculated as the average of sensitivity and specificity, reached 99.61% for our seizure detection model. The same experimental procedure was applied to predict epileptic seizures in advance, and the model achieved Acc = 99.17% for predicting epileptic seizures 10 s before their onset.

1. Introduction

Epilepsy is one of the most common neurological disorders and affects 60 million people around the world [1,2]. The prevalence of active epilepsy is 4 to 8 per 1000 in the general population [3], and may reach as high as 57 per 1000 in some developing countries [4]. Unattended epilepsy patients may suffer sudden unexpected death due to various factors, including epilepsy-associated syndromes [5] and uncontrolled generalized tonic–clonic seizures [6]. Surgery may help cure epilepsy symptoms [7], but its success rate needs to be increased [8]. Detection or prediction of seizure occurrences may be established using electroencephalography (EEG) signals [9] or other non-invasive biomedical measurements [10], and would provide better monitoring for unattended epilepsy patients.
EEG is a technology used to record the electrical signals produced by brain neurons, and a clinical EEG recorder usually takes 128 channels of electric probes [11]. The voltage fluctuations in an EEG channel are associated with neural activities and are hypothesized to have close relationships with various brain activities [12]. This hypothesis serves as the basis of many EEG-based applications, e.g., the detection of epileptic seizures [13] or sleep stages [14]. EEG has already become one of the major technologies to monitor and detect epilepsy occurrences due to its non-invasive nature and low cost [15].
Epileptic seizures may be detected by experienced clinicians manually screening the EEG signals, but this is very labor-intensive and time-consuming [16]. A modern EEG recorder may generate as many as 1000 data points per second per channel (a 1000 Hz sampling rate), and a standard recording procedure may last for a few days [16]. One day of such a recording procedure will generate about 11 billion data points, which renders manual screening impractical. Many clinical studies demonstrated that epilepsy-specific waveforms, like spikes and sharp waves, appear during the epileptic seizure onset, or even in a short time before the onset [17]. This forms the basis of in silico detection and prediction algorithms for epileptic seizures.
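As a rough check of this figure, with the 128-channel recorder mentioned above sampling at 1000 Hz, one day of recording yields 128 channels × 1000 data points/s × 86,400 s ≈ 1.1 × 10^10, i.e., roughly 11 billion data points.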
Various algorithms have been proposed to extract features from EEG signals for training epileptic seizure diagnosis models. An EEG-based epileptic seizure diagnosis model evaluates whether an EEG signal is more similar to that of a seizure onset event than to an inter-ictal EEG signal. An EEG signal is then predicted as an epileptic seizure onset or an inter-ictal status, based on how similar it is to the EEG signals of the seizure or inter-ictal statuses. It is intuitive to extract characteristic features directly from the time domain data, e.g., the inter-channel Robust Generalized Synchrony features [18]. Another widely employed strategy is to extract frequency domain features from the given EEG signals. The Fourier Transform is one of the most common algorithms to extract frequency domain features from time-series data, and researchers usually use its approximate implementation, the Fast Fourier Transform (FFT) [19,20]. However, FFT and similar algorithms assume stationary signals and do not provide time information for the extracted features. The Short Time Fourier Transform (STFT) was proposed to solve this issue by calculating the Fourier-transformed spectrum within a series of sliding windows, so that both time and frequency domain information are reflected in the STFT features [21,22]. The integration of both time and frequency domain features was shown to outperform epileptic seizure detection models using a single source of features [17].
The quality of life of an epilepsy patient would be enormously improved if a seizure could be predicted before its occurrence. A seizure prediction algorithm evaluates whether a given EEG signal is more similar to the signals before a seizure onset than to those with no following seizure events. Such an accurate prediction model determines whether an epileptic seizure will happen in the near future, so scientists have kept exploring the possibility of accurately predicting seizure occurrences even a few seconds in advance. Viglione and Walsh carried out the first attempt at predicting epileptic seizures in 1975 [23]. Salant et al. extracted spectral features to train an EEG-based epileptic seizure prediction algorithm and successfully predicted seizures 1–6 s in advance for 5 patients [24]. A number of recent studies presented much-improved prediction models using intracranial EEG (iEEG), which requires patients to undergo craniotomy so that electrodes can be placed in a selected portion of the brain to measure the intracranial EEG [25,26,27]. These invasive models may predict epileptic seizures 10–50 min in advance, but are difficult to deploy during patients' everyday life.
Various other feature extraction technologies have been proposed to describe the EEG signals from different aspects. High Order Spectra (HOS), an extension of the concept of higher-order statistics [28], has been used together with cumulant features [29] or the Continuous Wavelet Transform to detect epileptic seizures. The Discrete Wavelet Transform (DWT) was also applied to extract frequency domain features of EEG signals for the epileptic seizure detection problem [30]. Intrinsic time-scale decomposition (ITD) is another feature engineering algorithm to represent EEG signals in the time-frequency domain, and achieved a detection accuracy of 95.67% for the normal, inter-ictal and ictal EEG signals [31]. The feature extraction algorithm recurrence quantification analysis (RQA) is a non-linear data analysis method from chaos theory and has been utilized to train a Support Vector Machine (SVM) model for classifying epileptic seizures in EEG signals with an accuracy of 96.67% [32]. Wang et al. proposed a multi-domain entropy-based feature extraction framework and achieved a high classification accuracy of 99.25% for epileptic seizure events [17].
This study integrated 24 different types of features extracted from scalp EEG signals, and carried out a comprehensive investigation of how these features may be refined to train an accurate epileptic seizure detection model. We further applied the parameters of the optimal model to predict epileptic seizures 5 and 10 s in advance. Our experimental data showed that both the epileptic seizure detection and prediction algorithms achieved accuracies of at least 99%.

2. Materials and Methods

2.1. Data Description and Sample Extractions

The TUH EEG Seizure Corpus, presented at the American Clinical Neurophysiology Society annual meeting, is a standard EEG database, and this study chose its dataset of 993 scalp EEG recording files [33,34]. These EEG signals were recorded using 21 electrodes on the scalp of the participants, each electrode generating 250 data points per second (a 250 Hz sampling rate), with the averaged reference (AR) channel configuration of the standard ACNS (American Clinical Neurophysiology Society) TCP (Temporal Central Parasagittal) montage [35], as shown in Table 1.
This dataset annotated 227 seizures for 27 out of 43 patients with diagnosed epilepsy, and a data sample was defined as a 10-s window of EEG signals extracted from the 993 EEG files, as illustrated in Figure 1. A seizure period is an annotated seizure ictal period in the TUH EEG database, while an inter-ictal period is defined as a region between seizures with at least 20 s in length. A seizure sample was a 10-s window randomly extracted from the seizure regions, while an inter-ictal sample was a 10-s window randomly extracted from the inter-ictal period (both illustrated in Figure 1). Both types of samples avoided the 5-s regions at the two boundaries. A pre-seizure sample was a 10-s window with a k-second offset before the seizure period, where the parameter k was set to 0, 5 or 10 s in this study.
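The sample extraction rules above can be summarized by the following minimal sketch. It assumes the recording is a NumPy array of shape (channels, samples) and that the annotated seizure boundaries are given in seconds; the function names and the random placement of windows are illustrative rather than the authors' released code.

import numpy as np

FS = 250              # sampling rate (Hz)
WIN = 10 * FS         # 10-s window length in samples
GUARD = 5 * FS        # 5-s guard region avoided at each boundary
RNG = np.random.default_rng()

def cut_window(signal, start):
    """Return one 10-s sample (channels x samples) starting at sample index 'start'."""
    return signal[:, start:start + WIN]

def random_seizure_sample(signal, seizure_start_s, seizure_end_s):
    """Random 10-s window inside an annotated seizure region, avoiding the 5-s boundaries
    (assumes the region is at least 20 s long)."""
    lo = int(seizure_start_s * FS) + GUARD
    hi = int(seizure_end_s * FS) - GUARD - WIN
    return cut_window(signal, int(RNG.integers(lo, hi + 1)))

def pre_seizure_sample(signal, seizure_start_s, k_s):
    """10-s window ending k seconds (k = 0, 5 or 10) before the annotated seizure onset."""
    end = int((seizure_start_s - k_s) * FS)
    return cut_window(signal, end - WIN)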

2.2. Feature Engineering

This study extracted 24 types of features from a given 10-s EEG signal, as described in Table 2. The number of features extracted from each channel is listed in the column “FpC” of Table 2. These 24 feature types may be grouped into 4 families, i.e., statistical, fractal, entropy and spectral features. Except for the feature type “PSI_RIR”, each feature type has 5 features. Take the feature type “Mean” as an example. An ordered list of voltage values was processed by the Discrete Wavelet Transform (DWT) with decomposition level 4, and four wavelet coefficient lists were decomposed from the raw EEG. So the feature type “Mean” consists of the averaged value of the raw EEG data in the time domain and the averaged values of the four wavelet coefficient lists in the time-frequency domain. The spectral feature type “PSI_RIR” has 12 features, as described in the following section. So 127 features were generated from each channel of EEG signals of a sample (23 × 5 + 12 = 127), and each sample had 127 × 22 = 2794 features in total.
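A minimal sketch of this per-channel feature layout using PyWavelets is given below; the wavelet basis (here 'db4') and the choice of the four detail sub-bands of the level-4 decomposition are assumptions, since the text does not specify them.

import numpy as np
import pywt  # PyWavelets

def mean_feature(channel, wavelet="db4", level=4):
    """The 'Mean' feature type for one channel: the average of the raw signal plus the
    averages of four DWT coefficient lists, i.e., 5 values per channel.
    In total, 23 feature types x 5 values + 12 PSI_RIR values = 127 features per channel."""
    coeffs = pywt.wavedec(channel, wavelet, level=level)   # [cA4, cD4, cD3, cD2, cD1]
    details = coeffs[1:]                                    # the four detail coefficient lists (assumed)
    return [float(np.mean(channel))] + [float(np.mean(c)) for c in details]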

2.2.1. Statistical Features

The 16 statistical feature types were defined as follows. Let an ordered list of EEG voltage values be V = <V1, V2, …, Vn>, and let the ordered list of the first k values be V(k) = <V1, V2, …, Vk>. V was summarized by its average (Mean), maximum (Crest), minimum (Trough) and variance (Var). The skewness (Skw) of this voltage list was defined as its asymmetry, $\sum_{i=1}^{n}(V_i-\bar{V})^3/[(n-1)V_{dev}^3]$, where $V_{dev}$ is the standard deviation of this voltage list.
The kurtosis (Kurt) was defined to measure the “peakedness” of the given list of EEG voltages, $\sum_{i=1}^{n}(V_i-\bar{V})^4/[(n-1)V_{dev}^4]$ [37]. The absolute peak value of V was defined as Peak. The Root Mean Square (RMS) was the square root of the arithmetic mean of the squares of Vi, i.e., $\sqrt{\sum_{i=1}^{n}V_i^2/n}$ [38]. The Peak-to-Average Power Ratio (PAPR) was defined as the Peak divided by the RMS of the waveform, and evaluated the difference between the peak and the effective value of the EEG signals [39]. The ratio between RMS and Mean was defined as the Form Factor (FFac), and the total variation (TotVar) was defined as $\sum_{i=1}^{n-1}|V_{i+1}-V_i|/[(V_{Max}-V_{Min})(n-1)]$.
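A sketch of several of these statistical features in NumPy, following the formulas above (Skw and Kurt are normalized by the standard deviation, as in the standard definitions):

import numpy as np

def statistical_features(v):
    """Skewness, kurtosis, peak, RMS, PAPR, form factor and total variation of one
    (possibly wavelet-decomposed) voltage list, as defined in Section 2.2.1."""
    v = np.asarray(v, dtype=float)
    n, mean, dev = len(v), v.mean(), v.std(ddof=1)
    skw = np.sum((v - mean) ** 3) / ((n - 1) * dev ** 3)
    kurt = np.sum((v - mean) ** 4) / ((n - 1) * dev ** 4)
    peak = np.max(np.abs(v))
    rms = np.sqrt(np.mean(v ** 2))
    papr = peak / rms
    ffac = rms / mean
    totvar = np.sum(np.abs(np.diff(v))) / ((v.max() - v.min()) * (n - 1))
    return dict(Skw=skw, Kurt=kurt, Peak=peak, RMS=rms, PAPR=papr, FFac=ffac, TotVar=totvar)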
The Hurst Exponent (also called the Rescaled Range statistic) was used to measure the correlations within V [40]. The range R(i) was defined as $\max(V(i))-\min(V(i))$, and S(i) was the standard deviation $\sqrt{\sum_{t=1}^{i}(V_t-\bar{V})^2/i}$. The Hurst Exponent (HuExp) was defined as the slope of the line regressing ln[R(i)/S(i)] against ln(i).
The statistical self-affinity of V was evaluated by Detrended Fluctuation Analysis (DFA), which has been widely used to analyze non-stationary time series data with long-memory properties [41]. Hjorth proposed a mathematical method to describe an EEG trace quantitatively [42], which has been widely applied to various EEG-based problems [43,44]. The Hjorth parameter Mobility (HMob) is the ratio between the standard deviation of the first derivative of the signal and that of the signal itself, and reflects the average frequency, while the parameter Complexity (HComp) describes the change in the signal frequencies.
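A short sketch of the two Hjorth parameters, following the standard definitions in [42], with first differences standing in for the derivatives:

import numpy as np

def hjorth_parameters(v):
    """Hjorth Mobility and Complexity of a voltage list."""
    v = np.asarray(v, dtype=float)
    d1 = np.diff(v)                                   # first derivative (finite differences)
    d2 = np.diff(d1)                                  # second derivative
    mobility = np.sqrt(np.var(d1) / np.var(v))
    complexity = np.sqrt(np.var(d2) / np.var(d1)) / mobility
    return mobility, complexity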
The Fisher Information (FI) measures the amount of information within a random variable and was widely used to analyze the EEG signals [45].

2.2.2. Fractal Dimension Features

A fractal dimension is a statistical index of how the complexity of a pattern changes with the scale, and of how similar the local and global patterns are [46]. Fractal dimensions perform well on turbulent and irregular time series data and have been widely used to extract quantitative features from biomedical signals, including imaging [47] and EEG [48]. Three types of fractal dimension features were used in this study, i.e., the Mandelbrot Fractal Dimension (MFD), the Petrosian Fractal Dimension (PFD) and the Higuchi Fractal Dimension (HFD).
The Mandelbrot Fractal Dimension (MFD) describes the fractal dimension of a planar curve as log(L)/log(d), where L is the sum of the Euclidean distances between all successive data points in the ordered list, and d is the largest distance between the first point and any other point. The Petrosian Fractal Dimension (PFD) simply describes the sign changes in the given ordered list [49]. The Higuchi Fractal Dimension (HFD) measures the signal length at different scales and has been applied to various biophysical signals, e.g., ECG [50], MEG [51] and EEG [52]. The parameters were set to the same values as in [53].
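A sketch of the first two fractal dimensions; treating each sample as a planar point (i, Vi) for the MFD and counting the sign changes of the first difference for the PFD follow the usual definitions and are assumptions here.

import numpy as np

def mfd(v):
    """Mandelbrot fractal dimension log(L)/log(d) of the signal viewed as a planar curve."""
    v = np.asarray(v, dtype=float)
    t = np.arange(len(v), dtype=float)
    L = np.sum(np.sqrt(np.diff(t) ** 2 + np.diff(v) ** 2))      # total curve length
    d = np.max(np.sqrt((t - t[0]) ** 2 + (v - v[0]) ** 2))      # largest distance from the first point
    return np.log(L) / np.log(d)

def pfd(v):
    """Petrosian fractal dimension from the sign changes of the first difference [49]."""
    v = np.asarray(v, dtype=float)
    diff = np.diff(v)
    n_delta = int(np.sum(diff[1:] * diff[:-1] < 0))             # number of sign changes
    n = len(v)
    return np.log10(n) / (np.log10(n) + np.log10(n / (n + 0.4 * n_delta)))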

2.2.3. Entropy Features

Entropy is another important aspect for the analysis of biophysical signals [54,55], and four types of entropy features were used in this study.
The classic approximate entropy was refined into the Sample Entropy (SampEn) to measure how complex a time series is [56]. A smaller value of SampEn suggests that the data are more self-similar. Permutation Entropy (PeEn) was introduced as a simple complexity parameter based on a fast calculation over the ordinal patterns of neighboring values [57]. PeEn behaves similarly to Lyapunov exponents and demonstrates strong robustness against various kinds of data noise. The Singular Value Decomposition Entropy (SVDEn) evaluates the complexity of non-stationary bio-signals through the singular value decomposition (SVD) [58]. The irregular signal patterns in epileptic seizure EEG data may be well quantified by the Spectral Entropy (SEn) [59], so SEn was also employed to describe the EEG signals for the epileptic seizure detection and prediction problems.
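Two of these entropies are sketched below; the embedding order and delay of the SVD entropy, and the normalization of the spectral entropy by the logarithm of the number of frequency bins, are assumptions.

import numpy as np

def svd_entropy(v, order=3, delay=1):
    """SVD entropy: Shannon entropy of the normalized singular values of a delay-embedding matrix [58]."""
    v = np.asarray(v, dtype=float)
    n = len(v) - (order - 1) * delay
    emb = np.array([v[i:i + n] for i in range(0, order * delay, delay)]).T
    s = np.linalg.svd(emb, compute_uv=False)
    s = s / s.sum()
    return float(-np.sum(s * np.log2(s)))

def spectral_entropy(v):
    """Spectral entropy: Shannon entropy of the normalized power spectrum [59]."""
    psd = np.abs(np.fft.rfft(np.asarray(v, dtype=float))) ** 2
    psd = psd / psd.sum()
    psd = psd[psd > 0]
    return float(-np.sum(psd * np.log2(psd)) / np.log2(len(psd)))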

2.2.4. Spectral Features

Power Spectral Intensity (PSI) and Relative Intensity Ratio (RIR) were employed to describe the EEG signals in the spectral space. PSI calculates the intensity of each frequency band for a given data list. The frequency range of the EEG signals was split into the following frequency bands: δ (0.5–4 Hz), θ (4–8 Hz), α (8–13 Hz), β1 (13–20 Hz), β2 (20–30 Hz) and γ (30–60 Hz). Let the result of the Fast Fourier Transform (FFT) be {F1, F2, …, Fn}, and let the list of band boundaries be {f1, f2, …, fm}. The PSI of the k-th band is calculated as $PSI_k=\sum_{i=\lfloor n\times f_k/R\rfloor}^{\lfloor n\times f_{k+1}/R\rfloor}|F_i|$, and the RIR is the relative density of PSI, defined as $RIR_k=PSI_k/\sum_{i=1}^{m-1}PSI_i$, where k ∈ {1, 2, …, m − 1} and R is the sampling rate [60]. For brevity, the two feature types were combined as one feature type, PSI_RIR, in Table 2.
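A sketch of the 12 PSI_RIR features per channel, following the formulas above and the 250 Hz sampling rate of this dataset:

import numpy as np

BANDS = [0.5, 4, 8, 13, 20, 30, 60]   # band boundaries in Hz (delta ... gamma)

def psi_rir(v, fs=250):
    """Six band-wise Power Spectral Intensities plus their six Relative Intensity Ratios."""
    v = np.asarray(v, dtype=float)
    n = len(v)
    F = np.abs(np.fft.fft(v))
    psi = [float(np.sum(F[int(n * BANDS[k] / fs): int(n * BANDS[k + 1] / fs)]))
           for k in range(len(BANDS) - 1)]
    total = sum(psi)
    rir = [p / total for p in psi]
    return psi + rir                   # 6 + 6 = 12 features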

2.3. Experimental Procedure

This study carried out a series of feature evaluation and model optimization procedures, as outlined in Figure 2. Compared with intracranial EEG signals, scalp EEG signals are subject to many sources of external disturbance, e.g., the influences of the scalp and skull.

2.4. Feature Selection

The step of feature engineering generated 2794 features for each sample, but not all of these features contribute to the discrimination between seizure and inter-ictal samples. These features were evaluated by a series of three consecutive selection algorithms, i.e., VarA, iRFE and BackFS, to ensure the maximal classification accuracy. Firstly, the features with small standard deviations were removed (step VarA). Secondly, the remaining features were ranked by their weights in the trained linear-kernel SVM-RFE model and important features with larger weights were kept for further analysis (Step iRFE). Thirdly, the feature subset was refined by a step BackFS to remove redundancy. The details of each step were described in the following paragraphs.
Features with small variations were removed because they are difficult to separate at a low data resolution; this step is denoted as VarA. That is to say, if a feature remains almost unchanged across most of the samples, its discrimination power is questionable. The features were ranked by their standard deviations from small to large, and the 30% of features with the smallest standard deviations were removed from further analysis.
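A one-function sketch of the VarA step (the 70% keep ratio corresponds to the 30% removal described above):

import numpy as np

def var_filter(X, keep_ratio=0.7):
    """VarA: keep the 70% of features with the largest standard deviations.
    X is a samples x features matrix; returns the indices of the kept columns."""
    std = X.std(axis=0)
    order = np.argsort(std)                            # from small to large
    n_keep = int(round(keep_ratio * X.shape[1]))
    return np.sort(order[-n_keep:])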
SVM-RFE was demonstrated to perform very well at selecting a subset of features with reasonable performance [61], and was employed iteratively on the remaining features; this step is denoted as iRFE. In each iteration, a linear-kernel SVM classification model was trained on the remaining features, the weight of each feature was calculated from the trained SVM model, and the feature with the smallest weight was eliminated. This iteration was repeated until the user-specified number of features was reached. This feature number parameter is denoted as pNumF, and is evaluated and optimized in the Results section.
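The iRFE step corresponds to standard recursive feature elimination; a sketch using the scikit-learn implementation (the choice of library is an assumption):

from sklearn.feature_selection import RFE
from sklearn.svm import SVC

def irfe(X, y, p_num_f=200):
    """iRFE: recursively drop the feature with the smallest linear-SVM weight
    until p_num_f features remain; returns a boolean mask of the selected features."""
    selector = RFE(estimator=SVC(kernel="linear"), n_features_to_select=p_num_f, step=1)
    selector.fit(X, y)
    return selector.support_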
Inter-feature redundancy was not investigated in the above two feature selection modules, and an iterative redundancy-removal module BackFS was utilized to further reduce the number of features, as described in the following pseudocode.
Function BackFS(DataMatrix, ClassLabel):
Begin
  PerfRecord = empty list;
  while FeatureNum(DataMatrix) > 1:
    for each Feature(i) in DataMatrix:
      Performance(i) = PerformanceMeasurement(DataMatrix \ Feature(i), ClassLabel)
    let j be the feature with the largest Performance(j);
    remove Feature(j) from DataMatrix;
    append Performance(j) and the remaining feature subset to PerfRecord;
  return the feature subset in PerfRecord with the largest recorded performance
End.
The variable DataMatrix holds the feature data for all the samples, with each row containing the values of one feature across all the samples and each column containing all the feature values of one sample. The variable ClassLabel gives the class assignments of all the samples, and has two different values for a binary classification problem. The function PerformanceMeasurement(DataMatrix, ClassLabel) calculates the balanced accuracy bAcc = (Sn + Sp)/2 of the linear-kernel SVM binary classifier under the 10-fold cross validation strategy. The notation DataMatrix \ Feature(i) denotes the data matrix after removing the row of the feature Feature(i).
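A runnable Python sketch of BackFS and its performance function is given below; the scikit-learn calls are standard, but the code illustrates the pseudocode above rather than reproducing the authors' implementation (here X is a samples × features matrix, so columns rather than rows are removed).

import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

def balanced_accuracy(X, y):
    """bAcc = (Sn + Sp)/2 of a linear-kernel SVM under 10-fold cross validation."""
    pred = cross_val_predict(SVC(kernel="linear"), X, y, cv=10)
    tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
    return 0.5 * (tp / (tp + fn) + tn / (tn + fp))

def back_fs(X, y):
    """BackFS: in each round drop the feature whose removal yields the best bAcc,
    and return the feature subset with the overall best bAcc."""
    remaining = list(range(X.shape[1]))
    best_subset, best_perf = list(remaining), balanced_accuracy(X, y)
    while len(remaining) > 1:
        scores = [(balanced_accuracy(X[:, [f for f in remaining if f != j]], y), j)
                  for j in remaining]
        perf, drop = max(scores)
        remaining.remove(drop)
        if perf >= best_perf:
            best_perf, best_subset = perf, list(remaining)
    return best_subset, best_perf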

2.5. Classification Performance Measurement

This study used the following widely used performance measurements for a binary classification algorithm, i.e., sensitivity (Sn), specificity (Sp) and accuracy (Acc) [61]. For the seizure detection problem, the seizure samples are defined as the positive samples, while the inter-ictal samples are the negative ones. For the seizure prediction problem between pre-seizure and inter-ictal samples, the pre-seizure samples are positives and the inter-ictal samples are negatives. Sn is the percentage of correctly predicted positive samples, while Sp is the percentage of correctly predicted negative samples. Acc is the percentage of all samples that are predicted correctly. Another measurement, bAcc = (Sn + Sp)/2, was introduced for datasets with imbalanced classes [62]. All these performance measurements were calculated with the 10-fold cross validation strategy.
The default classifier, the linear-kernel SVM, was compared with 5 other classification algorithms. Logistic Regression (LR) is a regression-based binary classifier [63], while Naïve Bayes (NBayes) is a probabilistic classifier built on the conditional independence assumption [64]. Both LR and NBayes estimate the probability that a sample belongs to a class. Random Forest (RF) is an ensemble classifier of randomized decision trees [65]. The Gradient Boosting (GBoost) classifier builds an ensemble of weak learners by greedily minimizing a loss function [66]. K Nearest Neighbor (KNN) is a simple non-parametric classifier [67].
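A sketch of such a comparison, assuming the scikit-learn implementations with default parameters (the classifiers were run in Python with default parameters, as noted in Figure 6):

from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

CLASSIFIERS = {
    "SVM": SVC(kernel="linear"),
    "LR": LogisticRegression(),
    "NBayes": GaussianNB(),
    "RF": RandomForestClassifier(),
    "GBoost": GradientBoostingClassifier(),
    "KNN": KNeighborsClassifier(),
}

def compare_classifiers(X, y):
    """10-fold cross-validated accuracy of each classifier with default parameters."""
    return {name: cross_val_score(clf, X, y, cv=10).mean()
            for name, clf in CLASSIFIERS.items()}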

3. Results

This study extracted a comprehensive list of 24 feature types from the EEG signals and investigated the EEG-based seizure detection problem. These feature types may be roughly grouped into four feature families, i.e., statistical, fractal, entropy and spectral features. It is therefore interesting to evaluate how each feature type and their combinations contribute to the seizure detection performances. We then select the best set of features to build the seizure detection model, and finally tackle the challenge of predicting the seizure onset before it actually happens.

3.1. How the 24 Feature Types Contribute Individually

This study firstly evaluated how each of the 24 feature types contributes to the classification performances, as shown in Figure 3. The maximal bAcc = 60.15% was achieved by the statistical feature type HMob, the Hjorth mobility parameter, while the minimal bAcc = 49.35% was achieved by the statistical feature type Var. The fractal features achieved the best averaged bAcc = 57.47%, which is slightly better (by 0.99%) than that of the entropy features. The average bAcc over all the 24 feature types is only 54.07%, which is only slightly better than a random guess. Figure 3 also suggests that a detection model using a single feature type performs much worse on the seizure samples than on the inter-ictal ones. So we hypothesized that the integration of multiple feature types may perform better.

3.2. Pairwise Orchestration of the 24 Feature Types

All the models using a single feature type were outperformed by an orchestration with at least one other feature type, as shown in Figure 4. The diagonal grids give the performance measurement bAcc of a pair of identical feature types, i.e., when only one feature type was utilized, while the other grids illustrate how two different feature types collaborated with each other. The bottom row “GT-Diagonal” gives, for each column, the number of pairwise orchestrations that achieved a better bAcc than the corresponding diagonal grid. For any of the 24 feature types, its orchestrations with at least half of the other feature types outperformed that single feature type. All the other 23 feature types orchestrated with the feature type Stat|PAPR outperformed the model using this feature type only, and the two feature types Stat|DFA and Entr|SVDEn showed the same behavior. The maximal improvement of 25.65% in bAcc over a single feature type was achieved by the orchestration of Stat|Var with Stat|HMob, compared with the single feature type Stat|HMob.

3.3. How the Best Model Was Achieved

The above experimental data showed the necessity of integrating multiple feature types to improve the seizure detection model. So we carried out our three-step feature selection procedure VarA/iRFE/BackFS on all the 24 feature types to get an optimal feature subset for the seizure detection model, as shown in Figure 5. After removing the features with little variations (step VarA), we evaluated whether the two steps iRFE and BackFS are necessary. The classification performances were calculated using the linear-kernel SVM classifier by the 10-fold cross validation strategy.
The best value for the parameter pNumF is 200. Firstly, the feature selection module iRFE selected the best feature subset when the parameter pNumF was 200, as shown in Figure 5a. Both of the two measurements Acc and bAcc reached their best values at pNumF = 200, but the sensitivity (Sn) was still not satisfactory and did not reach 97.00% for any of the choices of pNumF, i.e., 160, 180, 200, 220 and 240. We then applied the redundancy removal step BackFS to the feature subset, which significantly improved the seizure detection model, as shown in Figure 5b. The value 200 is still the best choice for the parameter pNumF. The three measurements Sn/Sp/bAcc reached their best values for pNumF = 200, while Acc was slightly smaller (by 0.10%) than that of pNumF = 240. Considering that the model with pNumF = 200 achieved Sn = 100.00% and Sp = 99.22%, while the model with pNumF = 240 achieved Sn = 99.13% and Sp = 99.61%, a slightly increased false alarm rate may be preferred in exchange for the ability to capture every seizure.
The best model used 170 features after the redundancy was removed from the 200 features generated from iRFE, as shown in Figure 5b. Montage 8 contributed 13 features to the final feature subset, while both of the montages 17 and 18 contributed 11 features. It’s interesting to see that the montage 8 on the left side of the brain contributed the largest number of features to the best model. But the next two best montages 17 and 18 were located in the left and right sides, respectively. So we summed the feature contributions of the two brain sides, and observed little difference (two-tailed unpaired t-test p value = 0.6324) between the contributed feature numbers of the montages from the left and right sides of the brain. Actually, the left and right sides of the brain contributed 88 and 82 features to this best model.
So the following sections will use 200 as the default value for the parameter pNumF.

3.4. The Performance of Models Using Different Classifiers

We further evaluated whether other classifiers may achieve better classification performances on the best feature subset, as shown in Figure 6. The classifier SVM used in this study demonstrated a significant improvement over four of the classifiers, i.e., RF, GBoost, NBayes and KNN. The best of these four classifiers achieved only bAcc = 67.98%, below 70%. The classifier LR performed very well with bAcc = 96.33%, but the best classifier SVM still outperformed LR by 3.28% in bAcc. SVM achieved Sn = 100% in detecting the positive samples and Sp = 99.22% in detecting the negative controls.
The deep supervised learning algorithm convolutional neural network (CNN) has already been evaluated for its classification performance on the EEG-based epileptic seizure detection problem. CNN was compared with existing classification algorithms, including the Linear Discriminant Classifier (LDC) and the Logistic Classifier (LOGLC), and the optimized model achieved a 91.92% classification accuracy [68]. Thodoroff et al. conducted a comprehensive evaluation of how to optimize a deep convolutional neural network but did not outperform the algorithm proposed in our study [69]. More recently, another CNN classifier was applied to detect epileptic seizures and achieved an accuracy of 88.7% [70]. So the deep learning model CNN did not achieve better results than the proposed method.
So our feature selection procedure works best with the SVM classifier for analyzing the EEG signal data.

3.5. Predicting an Epileptic Seizure before It Happens

The same modeling procedure was applied to the problem of predicting epileptic seizures before their onsets, as shown in Figure 7. The two datasets “Onset” and “0 s” differ in that the positive samples of the dataset “Onset” were extracted from within the seizure period, while the positive samples of “0 s” were extracted right before the seizure onset. The positive samples of the datasets “5 s” and “10 s” were extracted 5 and 10 s before the annotated onset time of the epileptic seizures. Our data showed a slight decrease in bAcc when predicting epileptic seizures in advance, but the overall performances remained acceptable, with bAcc > 98% for all three prediction settings. Because the dataset did not provide more than 10 s of pre-seizure EEG signals for most patients, this study did not evaluate our experimental procedure on predicting epileptic seizures with a longer lead time. A recent study presented a proof-of-concept experiment of predicting epileptic seizures 15 min before they happen using invasive intracranial EEG signals and achieved 69% in sensitivity, which leaves large room for improvement [71].

4. Discussion

A comparison was carried out with other EEG-based epileptic seizure detection studies, most of which used invasive EEG datasets. This study used the latest non-invasive scalp EEG signals, which were published in 2017 [33,34], and very few studies have been conducted on this dataset.
The EEG signal differences between epileptic seizures and normal controls are significant enough for building EEG-based seizure detection or prediction models. Salant et al. explored the possibility of predicting seizures 1–6 s in advance using two-channel EEG data, but did not calculate the prediction accuracy [24]. Kiymik et al. compared the wavelet transform (WT) and short-time Fourier transform (STFT) features for evaluating the difference between the EEG signals of a normal child and a pediatric epileptic seizure patient [72]. Li et al. investigated the inter-seizure non-linear similarity of the EEG signals in the phase space [73].
Most of the existing studies did not outperform this study on the epileptic seizure detection problem. Guler et al. extracted Lyapunov exponents from single-channel intracranial EEG signals and trained a recurrent neural network model with 96.79% accuracy [74]. Polat et al. utilized fast Fourier transform features to train a decision tree classifier and achieved a 10-fold cross validation accuracy of 98.72% for discriminating hippocampus-probed EEG signals from the scalp EEGs of healthy controls [19]. Ocak extracted approximate entropy and discrete wavelet transform features from the EEG signals, and achieved a best accuracy of 96% [30].
Three studies achieved slightly better accuracies than this study’s 99.4%. Polat et al. applied Principal Component Analysis (PCA) to the fast Fourier transform features and achieved almost 100% accuracy [75]. Another study employed the power spectral features of EEG signals and achieved 99.6% accuracy using the Fisher linear discriminant algorithm [76]. The Hilbert–Huang Transform (HHT) was proposed to describe the EEG signals using intrinsic mode functions (IMFs) and improved the SVM classification model to an accuracy of 99.85% [77]. However, all three studies were based on invasive seizure EEG signals, which have much better signal quality than the scalp EEG signals used in this study, and their prediction accuracies need to be verified on non-invasive seizure signals before they can be applied in the everyday life of epileptic patients.
Various other feature types were extracted from the EEG signals for the epileptic seizure detection problem. A fully discrete wavelet transform, the tunable-Q factor wavelet transform (TQWT), was applied to the EEG signals but the detection models were trained separately for each of the six patients [78]. Li et al. trained the neural network ensemble classifier on the discrete wavelet transform (DWT) with 98.78% in accuracy [79]. Wang et al. integrated multiple domains of feature extraction types and achieved a stable 99.25% in accuracy by the 10-fold cross validation [17]. The seizure onset EEG signals of these studies were recorded invasively from the epileptic patients.
A recent study extracted the wavelet-based features from the EEG signals and trained an SVM model with the radial basis function (RBF) kernel with 10-fold cross validation accuracy 96.87% [80]. Our work further improved this model to the accuracy 99.4% using the same validation strategy.
Most of the existing studies did not carry out a feature selection step, and this study suggested that the features extracted from the EEG signals may need to be further refined before being fed into the classification models. Figure 5 illustrates that at least 15% of the EEG-extracted features may be removed to further improve the epileptic seizure detection accuracy. The best model selected 170 of the 200 EEG-extracted features and achieved 99.40% in detection accuracy.

5. Conclusions

This study integrated 24 feature types extracted from the scalp EEG signals, and accurately detected epileptic seizures after reducing the 2794 features to 170. The same feature selection and classification procedure also performed very well at predicting epileptic seizures a few seconds before their onsets. Such models may facilitate epileptic seizure monitoring and early warning and significantly improve the patients’ quality of life. The technique of Eddy current pulsed thermography (ECPT) will also be tested in EEG analysis for its capability of automatically splitting the EEG signals into useful data sources [81]. However, because EEG signals may be recorded both intracranially and on the scalp, a detailed investigation may be necessary to determine how this technique should be tuned [81,82].
The main advantages of our work are that we used non-invasively collected scalp EEG data and optimized the model at the population level. Most of the existing studies utilized intracranial EEG (iEEG), which may not fit an everyday living environment [25,71]. A model analyzing non-invasive scalp EEG signals may help patients manage their everyday lives more easily. Our work also presented a proof-of-concept experiment of patient-independent seizure prediction. Many of the existing studies investigated the patient-specific seizure prediction problem and trained one model for each patient [83,84,85]. Such a modeling strategy has the merit of establishing patient-specific patterns, but it must wait for the data of multiple seizures before a model can be provided for a patient.
The major disadvantage of our work is its intensive computational requirement. We plan to address this challenge through both code optimization and GPU-based parallelization, so that our model may generate results within an acceptable time frame. Due to the limitations of the public datasets, this study only investigated the seizure prediction problem up to 10 s before the seizure events. We plan to work with our clinical collaborators to collect 24-h continuously-monitored scalp EEG data, so that we may explore the possibility of predicting a seizure much earlier than 10 s before it happens.
We will also collaborate with clinicians to test the prediction of seizures before they happen in the real world. Our work may significantly soothe the anxiety of a patient by warning of a coming seizure at least a few seconds earlier.

Author Contributions

Yinda Zhang and Fengfeng Zhou conceived the project and drafted the manuscript. Yinda Zhang wrote the main source codes. Shuhan Yang was involved in the project designing, pipeline maintenance and carrying out the experiments. Yang Liu proposed novel designing ideas and algorithm implementations. Yexian Zhang worked on the implementation improvements. Bingfeng Han was involved in part of the source codes.

Acknowledgments

This work was supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB13040400) and the start-up grant from the Jilin University. The constructive and insightful comments from the two anonymous reviewers are greatly appreciated.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mormann, F.; Andrzejak, R.G. Seizure prediction: Making mileage on the long and winding road. Brain 2016, 139 Pt 6, 1625–1627. [Google Scholar] [CrossRef] [PubMed]
  2. Zandi, A.S.; Javidan, M.; Dumont, G.A.; Tafreshi, R. Automated real-time epileptic seizure detection in scalp EEG recordings using an algorithm based on wavelet packet transform. IEEE Trans. Bio-Med. Eng. 2010, 57, 1639–1651. [Google Scholar] [CrossRef] [PubMed]
  3. Halatchev, V.N. Epidemiology of epilepsy—Recent achievements and future. Folia Med. (Plovdiv) 2000, 42, 17–22. [Google Scholar] [PubMed]
  4. Senanayake, N.; Roman, G.C. Epidemiology of epilepsy in developing countries. Bull. World Health Organ. 1993, 71, 247–258. [Google Scholar] [PubMed]
  5. Sakauchi, M.; Oguni, H.; Kato, I.; Osawa, M.; Hirose, S.; Kaneko, S.; Takahashi, Y.; Takayama, R.; Fujiwara, T. Retrospective multiinstitutional study of the prevalence of early death in Dravet syndrome. Epilepsia 2011, 52, 1144–1149. [Google Scholar] [CrossRef] [PubMed]
  6. Liebenthal, J.A.; Wu, S.; Rose, S.; Ebersole, J.S.; Tao, J.X. Association of prone position with sudden unexpected death in epilepsy. Neurology 2015, 84, 703–709. [Google Scholar] [CrossRef] [PubMed]
  7. Escalaya, A.L.; Burneo, J.G. Epilepsy surgery and neurocysticercosis: Assessing the role of the cysticercotic lesion in medically-refractory epilepsy. Epilepsy Behav. 2017, 76, 178–181. [Google Scholar] [CrossRef] [PubMed]
  8. El Tahry, R.; Wang, I.Z. Failed epilepsy surgery: Is this the end? Acta Neurol. Belg. 2017, 117, 433–440. [Google Scholar] [CrossRef] [PubMed]
  9. Gadhoumi, K.; Lina, J.M.; Mormann, F.; Gotman, J. Seizure prediction for therapeutic devices: A review. J. Neurosci. Methods 2016, 260, 270–282. [Google Scholar] [CrossRef] [PubMed]
  10. Tewolde, S.; Oommen, K.; Lie, D.Y.; Zhang, Y.; Chyu, M.C. Epileptic Seizure Detection and Prediction Based on Continuous Cerebral Blood Flow Monitoring—A Review. J. Healthc. Eng. 2015, 6, 159–178. [Google Scholar] [CrossRef] [PubMed]
  11. Todaro, C.; Marzetti, L.; Valdes Sosa, P.A.; Valdes-Hernandez, P.A.; Pizzella, V. Mapping Brain Activity with Electrocorticography: Resolution Properties and Robustness of Inverse Solutions. Brain Topogr. 2018, 1–16. [Google Scholar] [CrossRef] [PubMed]
  12. Kurt, M.B.; Sezgin, N.; Akin, M.; Kirbas, G.; Bayram, M. The ANN-based computing of drowsy level. Expert Syst. Appl. 2009, 36, 2534–2542. [Google Scholar] [CrossRef]
  13. Rizvi, S.A.; Tellez Zenteno, J.F.; Crawford, S.L.; Wu, A. Outpatient ambulatory EEG as an option for epilepsy surgery evaluation instead of inpatient EEG telemetry. Epilepsy Behav. Case Rep. 2013, 1, 39–41. [Google Scholar] [CrossRef] [PubMed]
  14. Acharya, U.R.; Bhat, S.; Faust, O.; Adeli, H.; Chua, E.C.; Lim, W.J.; Koh, J.E. Nonlinear Dynamics Measures for Automated EEG-Based Sleep Stage Detection. Eur. Neurol. 2015, 74, 268–287. [Google Scholar] [CrossRef] [PubMed]
  15. Wong-Kisiel, L.C.; Tovar Quiroga, D.F.; Kenney-Jung, D.L.; Witte, R.J.; Santana-Almansa, A.; Worrell, G.A.; Britton, J.; Brinkmann, B.H. Morphometric analysis on T1-weighted MRI complements visual MRI review in focal cortical dysplasia. Epilepsy Res. 2018, 140, 184–191. [Google Scholar] [CrossRef] [PubMed]
  16. Flink, R.; Pedersen, B.; Guekht, A.B.; Malmgren, K.; Michelucci, R.; Neville, B.; Pinto, F.; Stephani, U.; Ozkara, C. Guidelines for the use of EEG methodology in the diagnosis of epilepsy. Acta Neurol. Scand. 2002, 106, 1–7. [Google Scholar] [CrossRef] [PubMed]
  17. Wang, L.; Xue, W.; Li, Y.; Luo, M.; Huang, J.; Cui, W.; Huang, C. Automatic epileptic seizure detection in EEG signals using multi-domain feature extraction and nonlinear analysis. Entropy 2017, 19, 222. [Google Scholar] [CrossRef]
  18. Shunan, L.; Donghui, L.; Bin, D.; Xile, W.; Jiang, W.; Chan, W.-L. A novel feature extraction method for epilepsy EEG signals based on robust generalized synchrony analysis. In Proceedings of the 2013 25th Chinese Control and Decision Conference (CCDC), Guiyang, China, 25–27 May 2013; pp. 5144–5147. [Google Scholar]
  19. Polat, K.; Güneş, S. Classification of epileptiform EEG using a hybrid system based on decision tree classifier and fast Fourier transform. Appl. Math. Comput. 2007, 187, 1017–1026. [Google Scholar] [CrossRef]
  20. Murugappan, M.; Murugappan, S. Human emotion recognition through short time Electroencephalogram (EEG) signals using Fast Fourier Transform (FFT). In Proceedings of the 2013 IEEE 9th International Colloquium on Signal Processing and its Applications (CSPA), Kuala Lumpur, Malaysia, 8–10 March 2013; pp. 289–294. [Google Scholar]
  21. Hyvärinen, A.; Ramkumar, P.; Parkkonen, L.; Hari, R. Independent component analysis of short-time Fourier transforms for spontaneous EEG/MEG analysis. NeuroImage 2010, 49, 257–271. [Google Scholar] [CrossRef] [PubMed]
  22. Griffin, D.; Lim, J. Signal estimation from modified short-time Fourier transform. IEEE Trans. Acoust. Speech Signal Process. 1984, 32, 236–243. [Google Scholar] [CrossRef]
  23. Viglione, S.S.; Walsh, G.O. Proceedings: Epileptic seizure prediction. Electroencephalogr. Clin. Neurophysiol. 1975, 39, 435–436. [Google Scholar] [PubMed]
  24. Salant, Y.; Gath, I.; Henriksen, O. Prediction of epileptic seizures from two-channel EEG. Med. Biol. Eng. Comput. 1998, 36, 549–556. [Google Scholar] [CrossRef] [PubMed]
  25. Lachaux, J.P.; Axmacher, N.; Mormann, F.; Halgren, E.; Crone, N.E. High-frequency neural activity and human cognition: Past, present and possible future of intracranial EEG research. Prog. Neurobiol. 2012, 98, 279–301. [Google Scholar] [CrossRef] [PubMed]
  26. Moghim, N.; Corne, D.W. Predicting epileptic seizures in advance. PLoS ONE 2014, 9, e99334. [Google Scholar] [CrossRef] [PubMed]
  27. Litt, B.; Esteller, R.; Echauz, J.; D’Alessandro, M.; Shor, R.; Henry, T.; Pennell, P.; Epstein, C.; Bakay, R.; Dichter, M. Epileptic Seizures May Begin Hours in Advance of Clinical Onset: A Report of Five Patients. In Applications of Intelligent Control to Engineering Systems; Springer: New York, NY, USA, 2009; pp. 225–245. [Google Scholar]
  28. Kendall, M.G. The advanced Theory of Statistics; Charles Griffin & Company Limited: London, UK, 1943. [Google Scholar]
  29. Acharya, U.R.; Sree, S.V.; Suri, J.S. Automatic detection of epileptic EEG signals using higher order cumulant features. Int. J. Neural Syst. 2011, 21, 403–414. [Google Scholar] [CrossRef] [PubMed]
  30. Ocak, H. Automatic detection of epileptic seizures in EEG using discrete wavelet transform and approximate entropy. Expert Syst. Appl. 2009, 36, 2027–2036. [Google Scholar] [CrossRef]
  31. Martis, R.J.; Acharya, U.R.; Tan, J.H.; Petznick, A.; Tong, L.; Chua, C.K.; Ng, E.Y. Application of intrinsic time-scale decomposition (ITD) to EEG signals for automated seizure prediction. Int. J. Neural Syst. 2013, 23, 1350023. [Google Scholar] [CrossRef] [PubMed]
  32. Kutlu, F.; Kose, C. Detection of epileptic seizure from EEG signals by using recurrence quantification analysis. In Proceedings of the 2014 22nd Signal Processing and Communications Applications Conference (SIU), Trabzon, Turkey, 23–25 April 2014; pp. 1387–1390. [Google Scholar]
  33. Golmohammadi, M.; Shah, V.; Lopez, S.; Ziyabari, S.; Yang, S.; Camaratta, J.; Obeid, I.; Picone, J. The TUH EEG Seizure Corpus. In Proceedings of the American Clinical Neurophysiology Society Annual Meeting, Phoenix, AZ, USA, 8–12 February 2017; p. 1. [Google Scholar]
  34. Harati, A.; Lopez, S.; Obeid, I.; Picone, J.; Jacobson, M.; Tobochnik, S. The TUH EEG CORPUS: A big data resource for automated EEG interpretation. In Proceedings of the 2014 IEEE Signal Processing in Medicine and Biology Symposium (SPMB), Philadelphia, PA, USA, 13 December 2014; pp. 1–5. [Google Scholar]
  35. Acharya, J.N.; Hani, A.J.; Thirumala, P.D.; Tsuchida, T.N. American Clinical Neurophysiology Society Guideline 3: A Proposal for Standard Montages to Be Used in Clinical EEG. J. Clin. Neurophysiol. 2016, 33, 312–316. [Google Scholar] [CrossRef] [PubMed]
  36. Shah, V.; Golmohammadi, M.; Ziyabari, S.; Von Weltin, E.; Obeid, I.; Picone, J. Optimizing channel selection for seizure detection. In Proceedings of the 2017 IEEE Signal Processing in Medicine and Biology Symposium (SPMB), Philadelphia, PA, USA, 2 December 2017; pp. 1–5. [Google Scholar]
  37. Pearson, B.; Fox-Kemper, B. Log-Normal Turbulence Dissipation in Global Ocean Models. Phys. Rev. Lett. 2018, 120, 094501. [Google Scholar] [CrossRef] [PubMed]
  38. Gandhamal, A.; Talbar, S.; Gajre, S.; Razak, R.; Hani, A.F.M.; Kumar, D. Fully automated subchondral bone segmentation from knee MR images: Data from the Osteoarthritis Initiative. Comput. Biol. Med. 2017, 88, 110–125. [Google Scholar] [CrossRef] [PubMed]
  39. Bai, J.; Li, Y.; Yi, Y.; Cheng, W.; Du, H. PAPR reduction based on tone reservation scheme for DCO-OFDM indoor visible light communications. Opt. Express 2017, 25, 24630–24638. [Google Scholar] [CrossRef] [PubMed]
  40. Liang, Z.; Li, D.; Ouyang, G.; Wang, Y.; Voss, L.J.; Sleigh, J.W.; Li, X. Multiscale rescaled range analysis of EEG recordings in sevoflurane anesthesia. Clin. Neurophysiol. 2012, 123, 681–688. [Google Scholar] [CrossRef] [PubMed]
  41. Hou, D.; Wang, C.; Chen, Y.; Wang, W.; Du, J. Long-range temporal correlations of broadband EEG oscillations for depressed subjects following different hemispheric cerebral infarction. Cogn. Neurodyn. 2017, 11, 529–538. [Google Scholar] [CrossRef] [PubMed]
  42. Hjorth, B. EEG analysis based on time domain properties. Electroencephalogr. Clin. Neurophysiol. 1970, 29, 306–310. [Google Scholar] [CrossRef]
  43. Herrera, L.J.; Fernandes, C.M.; Mora, A.M.; Migotina, D.; Largo, R.; Guillen, A.; Rosa, A.C. Combination of heterogeneous EEG feature extraction methods and stacked sequential learning for sleep stage classification. Int. J. Neural Syst. 2013, 23, 1350012. [Google Scholar] [CrossRef] [PubMed]
  44. Cecchin, T.; Ranta, R.; Koessler, L.; Caspary, O.; Vespignani, H.; Maillard, L. Seizure lateralization in scalp EEG using Hjorth parameters. Clin. Neurophysiol. 2010, 121, 290–300. [Google Scholar] [CrossRef] [PubMed]
  45. Martin, M.; Pennini, F.; Plastino, A. Fisher’s information and the analysis of complex signals. Phys. Lett. A 1999, 256, 173–180. [Google Scholar] [CrossRef]
  46. Falconer, K. Fractal Geometry: Mathematical Foundations and Applications; John Wiley & Sons: Hoboken, NJ, USA, 2004. [Google Scholar]
  47. McKay, G.J.; Paterson, E.N.; Maxwell, A.P.; Cardwell, C.C.; Wang, R.; Hogg, S.; MacGillivray, T.J.; Trucco, E.; Doney, A.S. Retinal microvascular parameters are not associated with reduced renal function in a study of individuals with type 2 diabetes. Sci. Rep. 2018, 8, 3931. [Google Scholar] [CrossRef] [PubMed]
  48. Bachmann, M.; Paeske, L.; Kalev, K.; Aarma, K.; Lehtmets, A.; Oopik, P.; Lass, J.; Hinrikus, H. Methods for classifying depression in single channel EEG using linear and nonlinear signal analysis. Comput. Methods Programs Biomed. 2018, 155, 11–17. [Google Scholar] [CrossRef] [PubMed]
  49. Petrosian, A. Kolmogorov complexity of finite sequences and recognition of different preictal EEG patterns. In Proceedings of the Eighth IEEE Symposium on Computer-Based Medical Systems, Lubbock, TX, USA, 9–10 June 1995; pp. 212–217. [Google Scholar]
  50. Khoa, T.Q.D.; Ha, V.Q.; Toi, V.V. Higuchi fractal properties of onset epilepsy electroencephalogram. Comput. Math. Methods Med. 2012, 2012, 461426. [Google Scholar] [CrossRef] [PubMed]
  51. Gómez, C.; Mediavilla, Á.; Hornero, R.; Abásolo, D.; Fernández, A. Use of the Higuchi’s fractal dimension for the analysis of MEG recordings from Alzheimer’s disease patients. Med. Eng. Phys. 2009, 31, 306–313. [Google Scholar] [CrossRef] [PubMed]
  52. Anier, A.; Lipping, T.; Melto, S.; Hovilehto, S. Higuchi fractal dimension and spectral entropy as measures of depth of sedation in intensive care unit. In Proceedings of the 26th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Francisco, CA, USA, 1–5 September 2004; pp. 526–529. [Google Scholar]
  53. Upadhyay, R.; Padhy, P.; Kankar, P. A comparative study of feature ranking techniques for epileptic seizure detection using wavelet transform. Comput. Electr. Eng. 2016, 53, 163–176. [Google Scholar] [CrossRef]
  54. Chu, W.L.; Huang, M.W.; Jian, B.L.; Cheng, K.S. Analysis of EEG entropy during visual evocation of emotion in schizophrenia. Ann. Gen. Psychiatry 2017, 16, 34. [Google Scholar] [CrossRef] [PubMed]
  55. Wu, H.T.; Lee, C.Y.; Liu, C.C.; Liu, A.B. Multiscale cross-approximate entropy analysis as a measurement of complexity between ECG R-R interval and PPG pulse amplitude series among the normal and diabetic subjects. Comput. Math. Methods Med. 2013, 2013, 231762. [Google Scholar] [CrossRef] [PubMed]
  56. Richman, J.S.; Moorman, J.R. Physiological time-series analysis using approximate entropy and sample entropy. Am. J. Physiol.-Heart Circ. Physiol. 2000, 278, H2039–H2049. [Google Scholar] [CrossRef] [PubMed]
  57. Bandt, C.; Pompe, B. Permutation entropy: A natural complexity measure for time series. Phys. Rev. Lett. 2002, 88, 174102. [Google Scholar] [CrossRef] [PubMed]
  58. Roberts, S.J.; Penny, W.; Rezek, I. Temporal and spatial complexity measures for electroencephalogram based brain-computer interfacing. Med. Biol. Eng. Comput. 1999, 37, 93–98. [Google Scholar] [CrossRef] [PubMed]
  59. Inouye, T.; Shinosaki, K.; Sakamoto, H.; Toi, S.; Ukai, S.; Iyama, A.; Katsuda, Y.; Hirano, M. Quantification of EEG irregularity by use of the entropy of the power spectrum. Electroencephalogr. Clin. Neurophysiol. 1991, 79, 204–210. [Google Scholar] [CrossRef]
  60. Yang, H.; Cheng, H.; Feng, Y. Improvement of high-power laser performance for super-smooth optical surfaces using electrorheological finishing technology. Appl. Opt. 2017, 56, 9822–9829. [Google Scholar] [CrossRef] [PubMed]
  61. Xu, C.; Liu, J.; Yang, W.; Shu, Y.; Wei, Z.; Zheng, W.; Feng, X.; Zhou, F. An OMIC biomarker detection algorithm TriVote and its application in methylomic biomarker detection. Epigenomics 2018, 10. [Google Scholar] [CrossRef] [PubMed]
  62. Ge, R.; Zhou, M.; Luo, Y.; Meng, Q.; Mai, G.; Ma, D.; Wang, G.; Zhou, F. McTwo: A two-step feature selection algorithm based on maximal information coefficient. BMC Bioinform. 2016, 17, 142. [Google Scholar] [CrossRef] [PubMed]
  63. Filip, S.; Zoidakis, J.; Vlahou, A.; Mischak, H. Advances in urinary proteome analysis and applications in systems biology. Bioanalysis 2014, 6, 2549–2569. [Google Scholar] [CrossRef] [PubMed]
  64. Wu, M.-Y.; Dai, D.-Q.; Shi, Y.; Yan, H.; Zhang, X.-F. Biomarker identification and cancer classification based on microarray data using laplace naive bayes model with mean shrinkage. IEEE/ACM Trans. Comput. Biol. Bioinform. 2012, 9, 1649–1662. [Google Scholar] [CrossRef] [PubMed]
  65. Dimitriadis, S.I.; Liparas, D.; Tsolaki, M.N.; Alzheimer’s Disease Neuroimaging Initiative. Random forest feature selection, fusion and ensemble strategy: Combining multiple morphological MRI measures to discriminate among healthy elderly, MCI, cMCI and Alzheimer’s disease patients: From the Alzheimer’s disease neuroimaging initiative (ADNI) database. J. Neurosci. Methods 2017, 302, 14–23. [Google Scholar] [PubMed]
  66. Ternès, N.; Rotolo, F.; Michiels, S. biospear: An R package for biomarker selection in penalized Cox regression. Bioinformatics 2017, 34, 112–113. [Google Scholar] [CrossRef] [PubMed]
  67. Alarcón-Paredes, A.; Alonso, G.A.; Cabrera, E.; Cuevas-Valencia, R. Simultaneous Gene Selection and Weighting in Nearest Neighbor Classifier for Gene Expression Data. In Proceedings of the International Conference on Bioinformatics and Biomedical Engineering, Granada, Spain, 26–28 April 2017; Springer: Cham, Switzerland, 2017; pp. 372–381. [Google Scholar]
  68. Fergus, P.; Hussain, A.; Hignett, D.; Al-Jumeily, D.; Abdel-Aziz, K.; Hamdan, H. A machine learning system for automated whole-brain seizure detection. Appl. Comput. Inform. 2016, 12, 70–89. [Google Scholar] [CrossRef]
  69. Thodoroff, P.; Pineau, J.; Lim, A. Learning robust features using deep learning for automatic seizure detection. In Proceedings of the Machine Learning for Healthcare Conference, Los Angeles, CA, USA, 19–20 August 2016; pp. 178–190. [Google Scholar]
  70. Acharya, U.R.; Oh, S.L.; Hagiwara, Y.; Tan, J.H.; Adeli, H. Deep convolutional neural network for the automated detection and diagnosis of seizure using EEG signals. Comput. Biol. Med. 2017. [Google Scholar] [CrossRef] [PubMed]
  71. Kiral-Kornek, I.; Roy, S.; Nurse, E.; Mashford, B.; Karoly, P.; Carroll, T.; Payne, D.; Saha, S.; Baldassano, S.; O'Brien, T.; et al. Epileptic Seizure Prediction Using Big Data and Deep Learning: Toward a Mobile System. EBioMedicine 2018, 27, 103–111. [Google Scholar] [CrossRef] [PubMed]
  72. Kıymık, M.K.; Güler, I.; Dizibüyük, A.; Akın, M. Comparison of STFT and wavelet transform methods in determining epileptic seizure activity in EEG signals for real-time application. Comput. Biol. Med. 2005, 35, 603–616. [Google Scholar] [CrossRef] [PubMed]
  73. Li, X.; Ouyang, G. Nonlinear similarity analysis for epileptic seizures prediction. Nonlinear Anal. Theory Methods Appl. 2006, 64, 1666–1678. [Google Scholar] [CrossRef]
  74. Güler, N.F.; Übeyli, E.D.; Güler, I. Recurrent neural networks employing Lyapunov exponents for EEG signals classification. Expert Syst. Appl. 2005, 29, 506–514. [Google Scholar] [CrossRef]
  75. Polat, K.; Güneş, S. Artificial immune recognition system with fuzzy resource allocation mechanism classifier, principal component analysis and FFT method based new hybrid automated identification system for classification of EEG signals. Expert Syst. Appl. 2008, 34, 2039–2048. [Google Scholar] [CrossRef]
  76. Choe, S.-H.; Chung, Y.G.; Kim, S.-P. Statistical spectral feature extraction for classification of epileptic EEG signals. In Proceedings of the 2010 International Conference on Machine Learning and Cybernetics (ICMLC), Qingdao, China, 11–14 July 2010; pp. 3180–3185. [Google Scholar]
  77. Fu, K.; Qu, J.; Chai, Y.; Zou, T. Hilbert marginal spectrum analysis for automatic seizure detection in EEG signals. Biomed. Signal Process. Control 2015, 18, 179–185. [Google Scholar] [CrossRef]
  78. Hassan, A.R.; Siuly, S.; Zhang, Y. Epileptic seizure detection in EEG signals using tunable-Q factor wavelet transform and bootstrap aggregating. Comput. Methods Programs Biomed. 2016, 137, 247–259. [Google Scholar] [CrossRef] [PubMed]
  79. Li, M.; Chen, W.; Zhang, T. Classification of epilepsy EEG signals using DWT-based envelope analysis and neural network ensemble. Biomed. Signal Process. Control 2017, 31, 357–365. [Google Scholar] [CrossRef]
  80. Janjarasjitt, S. Epileptic seizure classifications of single-channel scalp EEG data using wavelet-based features and SVM. Med. Biol. Eng. Comput. 2017, 55, 1743–1761. [Google Scholar] [CrossRef] [PubMed]
  81. Gao, B.; Bai, L.; Woo, W.L.; Tian, G.Y.; Cheng, Y. Automatic defect identification of eddy current pulsed thermography using single channel blind source separation. IEEE Trans. Instrum. Meas. 2014, 63, 913–922. [Google Scholar] [CrossRef]
  82. Gao, B.; Yin, A.; Wang, Y.; Tian, G.; Woo, W.; Liu, H. Thermography spatial-transient-stage tensor model and material property characterization. In Proceedings of the 2014 IEEE Far East Forum on Nondestructive Evaluation/Testing (FENDT), Chengdu, China, 20–23 June 2014; pp. 199–203. [Google Scholar]
  83. Direito, B.; Teixeira, C.A.; Sales, F.; Castelo-Branco, M.; Dourado, A. A Realistic Seizure Prediction Study Based on Multiclass SVM. Int. J. Neural Syst. 2017, 27, 1750006. [Google Scholar] [CrossRef] [PubMed]
  84. So, R.Q.; Krishna, V.; King, N.K.K.; Yang, H.; Zhang, Z.; Sammartino, F.; Lozano, A.M.; Wennberg, R.A.; Guan, C. Prediction and detection of seizures from simultaneous thalamic and scalp electroencephalography recordings. J. Neurosurg. 2017, 126, 2036–2044. [Google Scholar] [CrossRef] [PubMed]
  85. Parvez, M.Z.; Paul, M. Seizure Prediction Using Undulated Global and Local Features. IEEE Trans. Bio-Med. Eng. 2017, 64, 208–217. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Definitions of seizure samples, inter-ictal samples and pre-seizure samples. This exemplified EEG signal has two seizure onset windows and the inter-ictal window in between.
Figure 2. Outline of the experimental procedure. The modules may be roughly grouped as three steps, i.e., feature engineering, feature selection and classification optimization.
Figure 3. Binary classification performances of each of the 24 feature types. Names of the feature types are defined in Table 2, and the prefixes “Stat|”, “Frac|”, “Entr|” and “Spec|” represent the feature families Statistical, Fractal, Entropy and Spectral, respectively.
Figure 4. Performance measurement bAcc of the pairwise feature types. The heatmap was colored from blue (minimal bAcc = 0.4713) to red (maximal bAcc = 0.6655). The diagonal grids with red font and black boxes are pairs of identical feature types, e.g., the top left box gives the bAcc = 0.4948 of the pair (Stat|Mean, Stat|Mean). The columns and rows follow the same order of all the 24 feature types. The row “GT-Diagonal” gives the number of pairwise orchestrations of feature types that achieved a better bAcc than the corresponding diagonal grid.
Figure 5. The classification performances of the linear-kernel SVM classifier with different feature numbers. (a) The horizontal axis is the number of features (parameter pNumF), while the vertical axis is the classification performance value of the four measurements Acc/Sn/Sp/bAcc. (b) The feature subset was further filtered by the module BackFS to remove inter-feature redundancies.
Figure 6. Binary classification performances of different classifiers on detecting epileptic seizures using the 22-channel EEG signals. All the classifiers were provided in Python with the default parameters.
Figure 7. Predicting epileptic seizures before their onsets. The binary classification performances were evaluated using Acc, Sn, Sp and bAcc.
Table 1. Standard definitions of the ACNS TCP montages. Columns “Ref1” and “Ref2” give the channel IDs, and the probing locations of these channels may be found in [36].

Montage   Name      Ref1   Ref2
0         FP1-F7    FP1    F7
1         F7-T3     F7     T3
2         T3-T5     T3     T5
3         T5-O1     T5     O1
4         FP2-F8    FP2    F8
5         F8-T4     F8     T4
6         T4-T6     T4     T6
7         T6-O2     T6     O2
8         A1-T3     A1     T3
9         T3-C3     T3     C3
10        C3-CZ     C3     CZ
11        CZ-C4     CZ     C4
12        C4-T4     C4     T4
13        T4-A2     T4     A2
14        FP1-F3    FP1    F3
15        F3-C3     F3     C3
16        C3-P3     C3     P3
17        P3-O1     P3     O1
18        FP2-F4    FP2    F4
19        F4-C4     F4     C4
20        C4-P4     C4     P4
21        P4-O2     P4     O2
Table 2. Summary of the 24 feature types extracted from each of the 22 channels of a 10-s sample. Column FpC gives the number of features extracted from the 10-s window per channel.

Family        Type      Description                                               FpC
Statistical   Mean      Average                                                   5
              Crest     Maximum value                                             5
              Trough    Minimum value                                             5
              Var       Variance                                                  5
              Skw       Skewness                                                  5
              Kurt      Kurtosis                                                  5
              Peak      Peak value                                                5
              RMS       Root Mean Square                                          5
              PAPR      Peak-to-Average Power Ratio                               5
              FFac      Form Factor                                               5
              TotVar    Total Variation                                           5
              HuExp     Hurst Exponent                                            5
              DFA       Detrended Fluctuation Analysis                            5
              HMob      Hjorth Parameters: Mobility                               5
              HComp     Hjorth Parameters: Complexity                             5
              FInfo     Fisher Information                                        5
Fractal       MFD       Mandelbrot Fractal Dimension                              5
              PFD       Petrosian Fractal Dimension                               5
              HFD       Higuchi Fractal Dimension                                 5
Entropy       SampEn    Sample Entropy                                            5
              PeEn      Permutation Entropy                                       5
              SVDEn     SVD Entropy                                               5
              SEn       Spectral Entropy                                          5
Spectral      PSI_RIR   Power Spectral Intensity and Relative Intensity Ratio     12
