Article

Temporal EEG Imaging for Drowsy Driving Prediction

1 Department of Electrical Engineering, National Chiao Tung University, Hsinchu 300, Taiwan
2 Centre for Artificial Intelligence, University of Technology Sydney, Sydney 2007, Australia
* Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(23), 5078; https://doi.org/10.3390/app9235078
Submission received: 21 October 2019 / Revised: 20 November 2019 / Accepted: 21 November 2019 / Published: 25 November 2019
(This article belongs to the Special Issue Human Health Engineering Volume II)

Abstract:
As a major cause of vehicle accidents, drowsy driving has received increasing public attention, and its prevention is an important goal. Precisely identifying the drowsy state of drivers is difficult since drowsiness is an ambiguous event that does not occur at a single point in time. In this paper, we use an electroencephalography (EEG) image-based method to estimate the drowsiness state of drivers. The driver’s EEG measurement is transformed into an RGB image that contains the spatial knowledge of the EEG. Moreover, to account for the temporal behavior of the data, we generate these images from the EEG data over a sequence of time points. The generated EEG images are passed into a convolutional neural network (CNN) to perform the prediction task. In the experiment, the proposed method is compared with an EEG image generated from a single data time point, and the results indicate that combining EEG images from multiple time points improves drowsiness prediction performance.

1. Introduction

The prevention of drowsy driving has become a major challenge in driving safety. Many drivers experience driving in drowsy conditions, especially during long-term driving. Continuous, monotonous driving reduces the vigilance of drivers and increases the risk of traffic accidents. To address this problem, the development of brain-computer interfaces (BCIs) to investigate the human cognitive state is an urgent necessity. Electroencephalography (EEG) is one of the most direct and effective physiological measures for the estimation of brain dynamics. Recent EEG studies have demonstrated that changes in alertness during driving are related to changes in global brain dynamics [1,2]. It has also been shown that EEG is a robust measurement for the estimation of a driver’s cognitive state [3,4,5,6]. In addition, EEG can be measured conveniently and in real time and is therefore widely used in real applications [3,7,8,9].
Although EEG has many advantages for the analysis of brain dynamics, the use of EEG-based BCIs in real applications remains challenging. The raw EEG signals acquired from the electrodes are often obscured by physiological artifacts such as eye movements and muscle activity, which are undesirable in a BCI system [10]. Therefore, removing these unwanted artifacts to capture brain activity has become a crucial issue in EEG-based BCI applications. Many studies have shown that independent component analysis (ICA) can effectively separate artifacts from raw EEG data [11,12,13,14]. ICA decomposes the mixture signal into many statistically independent components; a non-artifact signal is then obtained by excluding the components associated with artifacts. Although ICA is a powerful tool for extracting brain activity from raw EEG signals, it cannot support real-time applications because the separated artifact components must be removed manually. This drawback limits the utility of ICA for real-world BCI applications. An automatic BCI processing pipeline is strongly required for drowsy driving prediction since traffic accidents occur within a very short time. Therefore, this study does not apply any artifact removal process to the raw EEG data, ensuring that the proposed method requires no manual processing for the drowsy driving prediction task.
For EEG signals without artifact removal, correctly extracting informative features becomes a major challenge in BCI applications. A popular approach for feature extraction is transforming the EEG signals into the frequency domain [15,16]. The fast Fourier transform (FFT) is applied to compute the power spectra of the multi-channel time-series EEG signals; then, the average of the power spectrum values for each frequency band is collected to obtain a feature vector for classification [3,17]. The main disadvantage of such an approach is that it considers only the frequency information. EEG is measured over the scalp in a three-dimensional space, so its spatial information cannot be well described by a 1D feature vector. Instead, there is an increasing trend to use 2D feature maps for the analysis of EEG, which have achieved good performance in their application areas [18,19].
As the most popular machine learning technique in recent years, deep learning has achieved significant success in a variety of research fields, such as speech [20], images [21,22,23] and video [24]. The ability of deep learning techniques to learn unknown features from incoming data has gained considerable attention in EEG studies [25,26,27]. There is an increasing trend to use convolutional neural networks (CNNs) to analyze EEGs due to their state-of-the-art performance in the computer vision field. A popular approach is transforming the EEG measurement into a 2D feature map and then passing it into a CNN model for classification [28,29,30,31].
For drowsy driving prediction, it is difficult to identify the drowsy state using a single time point of EEG data because drowsiness is an ambiguous event. Driving performance may not immediately decrease with increasing drowsiness levels, which means that drivers maintain normal driving performance even though their vigilance level has started to decrease. To overcome these difficulties, this study proposes a new EEG image method that combines multiple frames of EEG images to examine the temporal activity of the EEG data. Such an approach not only focuses on the current EEG data but also considers the brain activity of the preceding time period. The evaluation results show that the proposed method can improve the performance of EEG image-based BCI systems in drowsiness prediction.

2. Experimental Setup

2.1. Virtual Reality (VR)-Based Driving Environment

In our previous studies, to observe the subject’s drowsy state during the driving task, a virtual reality (VR)-based realistic driving environment was developed to simulate a long-term driving situation [2,13,32,33,34,35]. As shown in Figure 1, the surrounding scenes were projected from six projectors to constitute a surrounding vision. A night-time driving scene at a fixed velocity of 100 km/h on a four-lane highway was set up in the VR experiment. Before the experiment started, participants were directed to enter the real car mounted on a motion platform and then steer the vehicle according to the instructions. All participants were required to take a 5-min pre-test session to ensure that they clearly understood the instructions and did not suffer from simulator sickness. The highway scene was connected to a physiological measuring system, where the EEG and the participants’ performance were continuously and simultaneously measured and recorded.

2.2. Driving Fatigue Paradigm

The event-related lane-keeping driving task was adopted in this study for the evaluation of the brain dynamics occurring during driving, as illustrated in Figure 2. The participants were instructed to perform a 90-min driving task without breaks or rest in the VR driving environment. The driving experiment began in the early afternoon (13:00–14:00) after lunch because people often feel sleepy during this time [36]. During the sustained-attention driving task, the VR paradigm randomly simulated a lane-departure event that caused the car to drift away from the center of the cruising lane. The participants were required to quickly steer the car back whenever it started to deviate from the original cruising lane. There was no feedback to wake the participants if they did not respond to the lane-departure event; the car continued to move along the curb until the participants steered it back to the center of the cruising lane. Figure 2 describes a complete trial in the driving paradigm, which includes the one-second baseline recording, deviation onset, response onset, and response offset. The time interval between random lane-departure events was set to 5–10 s.

2.3. Participants

Thirty-eight right-handed, healthy young adults aged 20–30 years participated in the driving experiment. All subjects were required to have a driving license and sufficient sleep in the two preceding weeks. According to self-reporting, no subject had a history of psychological disorders. Before the experiment, the subjects were asked to answer a questionnaire about their sleep patterns to ensure that they had a normal cognitive state during the driving task, and they needed to complete a consent form explaining the experimental protocol that was approved by the Institutional Review Board of the Taipei Veterans General Hospital, Taiwan. The EEG signals of the subjects were captured using a Quik-Cap (Compumedics NeuroScan) with 32 Ag/AgCl electrodes, including 30 EEG electrodes and two reference electrodes. The EEG electrodes were placed in accordance with a modified international 10–20 system. The impedance of all electrodes was kept under 5 kΩ during the experiments.

2.4. Drowsiness Measurement

The driving performance was defined based on the response time (RT), which represented the time between the deviation onset and the response onset. When a lane-departure event occurred, a participant in a drowsy state was expected to take a long time to steer the car back to the center of the cruising lane, so the RT in that trial would be very long. By contrast, when the participant was alert, he/she could respond to the lane-departure event in a short time. Previous studies have shown that baseline EEG activity is strongly correlated with changes in RT [34]. In this study, the 1-s baseline signal (red region shown in Figure 2) was used to perform drowsiness prediction based on the trial’s RT.

3. Approach

The general flowchart of our method is presented in Figure 3. Before data analysis, the acquired EEG records were processed using a 1-Hz high-pass and 50-Hz low-pass infinite impulse response filter to remove noise and then down-sampled to 250 Hz to reduce the dimensions of the data. The power spectral activities of the EEG signals were computed using the FFT. To transform the EEG signal into a 2D image, we needed to address the following issues: (1) transforming the power spectrum of the EEG signals into image values and (2) interpolating the scattered image data points into a color image. The detailed approach is explained in the following sections.
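The preprocessing step above can be sketched as follows with SciPy. This is a minimal illustration, not the authors' implementation: the original sampling rate is not stated in the paper, so 500 Hz is assumed here, and a 4th-order Butterworth band-pass stands in for the unspecified IIR filter design.

```python
import numpy as np
from scipy.signal import butter, filtfilt, decimate

def preprocess_eeg(raw, fs=500):
    """Band-pass filter (1-50 Hz, IIR) then down-sample to 250 Hz.

    raw : array of shape (n_channels, n_samples) at sampling rate `fs`.
    The 500 Hz original rate and the Butterworth design are assumptions;
    the paper only specifies the 1-50 Hz pass band and the 250 Hz target.
    """
    # 4th-order Butterworth band-pass, applied forward-backward (zero phase)
    b, a = butter(4, [1.0, 50.0], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, raw, axis=-1)
    # integer-factor down-sampling to 250 Hz (with anti-alias filtering)
    factor = fs // 250
    return decimate(filtered, factor, axis=-1) if factor > 1 else filtered

rng = np.random.default_rng(0)
raw = rng.standard_normal((30, 5000))   # 30 channels, 10 s at 500 Hz
clean = preprocess_eeg(raw)
print(clean.shape)                      # (30, 2500): 10 s at 250 Hz
```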

3.1. Feature Extraction

To extract the physiological features, the 30-channel time-series EEG signal was transformed into the frequency domain via a 256-point FFT. Based on the findings of previous studies [14,37,38], the theta (4–8 Hz), alpha (8–13 Hz), and beta (13–20 Hz) frequency bands are suitable for estimating the driver’s vigilance level. Our past studies also observed that increasing theta- and alpha-band power was positively correlated with RT, and that beta-band power was highly correlated with kinesthetic stimuli, which can affect the prediction performance [13,17]. The mean power of these frequency bands of interest was combined to form a feature vector. As depicted in Figure 4, this feature vector was treated as one pixel value of an RGB image, with each channel of the color image corresponding to one frequency band of interest.
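A sketch of this band-power feature extraction is shown below, using NumPy's FFT. The dB conversion and the exact averaging scheme are assumptions consistent with the text; Table 1-level details are not reproduced.

```python
import numpy as np

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 20)}  # Hz

def band_power_features(epoch, fs=250, n_fft=256):
    """Mean log-power in the theta/alpha/beta bands for each channel.

    epoch : (n_channels, n_samples) baseline EEG segment at 250 Hz.
    Returns an (n_channels, 3) array; each channel's triple is later
    treated as the R/G/B value of one image pixel.
    """
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(epoch, n=n_fft, axis=-1)) ** 2
    power_db = 10 * np.log10(spectrum + 1e-12)      # avoid log(0)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(power_db[:, mask].mean(axis=-1))  # mean band power
    return np.stack(feats, axis=-1)

epoch = np.random.default_rng(1).standard_normal((30, 250))  # 1-s baseline
feats = band_power_features(epoch)
print(feats.shape)   # (30, 3): theta/alpha/beta per electrode
```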

3.2. Interpolation of the EEG Measurement to Image Pixels

As described in the previous section, we obtained 30 data points corresponding to the location of the EEG electrodes. First, we converted the magnitude of the power spectrum of the EEG signals into an image pixel value. Equation (1) shows the sigmoid function utilized to normalize the value of the EEG power spectrum to [0,1]:
$$P_t = \frac{1}{1 + e^{-0.5\,t}},$$
where $P_t$ is the normalized image pixel value, and $t$ is the magnitude of the frequency response in dB. Next, we needed to interpolate the scattered image data points into a color image. Figure 5 illustrates the interpolation scheme of the EEG image. The finite element method, a numerical technique usually applied to the approximate solution of engineering problems that are difficult to solve analytically, was adopted to perform the interpolation task. The Clough–Tocher scheme was used to interpolate a 32 × 32 mesh from the 30 image data points [39]. In this study, the EEG electrodes were placed in accordance with a modified international 10–20 system, which means that the location corresponding to each image point is known. Three topographical maps corresponding to the three frequency bands of interest were acquired via the Clough–Tocher scheme. The three spatial maps were then merged to create a 32 × 32 color image. Figure 6 shows several samples of the EEG image.
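The sigmoid normalization and Clough–Tocher interpolation can be sketched with SciPy's built-in `CloughTocher2DInterpolator`. Random 2-D electrode coordinates are used below purely as placeholders; in the actual setup these would be the projected positions of the 10–20 montage electrodes, and how the 3-D positions are projected to 2-D is not specified in the paper.

```python
import numpy as np
from scipy.interpolate import CloughTocher2DInterpolator

def eeg_to_image(feats, coords, size=32):
    """Interpolate 30 electrode features into a 32x32 RGB EEG image.

    feats  : (30, 3) band powers in dB (theta/alpha/beta per electrode).
    coords : (30, 2) 2-D electrode positions (assumed known from the
             10-20 montage; the 3-D -> 2-D projection is not shown here).
    """
    pixels = 1.0 / (1.0 + np.exp(-0.5 * feats))       # Eq. (1): map to [0,1]
    xs = np.linspace(coords[:, 0].min(), coords[:, 0].max(), size)
    ys = np.linspace(coords[:, 1].min(), coords[:, 1].max(), size)
    gx, gy = np.meshgrid(xs, ys)
    image = np.zeros((size, size, 3))
    for band in range(3):                             # one map per band
        interp = CloughTocher2DInterpolator(coords, pixels[:, band])
        image[..., band] = interp(gx, gy)
    return np.nan_to_num(image)                       # zero outside hull

rng = np.random.default_rng(2)
coords = rng.uniform(-1, 1, (30, 2))   # placeholder electrode layout
feats = rng.normal(0, 10, (30, 3))
img = eeg_to_image(feats, coords)
print(img.shape)                       # (32, 32, 3)
```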

3.3. Temporal EEG Image

One of the challenges in drowsy driving prediction is that some drowsy trials may have patterns similar to those of alert trials. Driving performance might not degrade immediately, even if the alertness level of the driver has begun to fall, which means that drivers can still respond well to lane-departure events before they fall asleep (although the drowsy pattern of the EEG has already appeared). In that case, the generated EEG images of the drowsy trial and its preceding alert trials can be very similar. When drivers wake up by themselves, their vigilance level recovers dramatically, the RT returns to the alert level, and the EEG pattern becomes completely different from that of the drowsy trials. Based on these findings, the drowsy state should be estimated not only from the current trial but also by examining the previous trials. This study proposes a temporal EEG image that is generated by a linear combination of a sequence of EEG images, as shown in Equation (2):
$$\hat{I} = \sum_{t=1}^{N} c_t I_t, \qquad \sum_{t=1}^{N} c_t = 1,$$
where $\hat{I}$ is the generated temporal EEG image, $I_t$ is the EEG image at time point $t$, $c_t$ is a scalar coefficient, and $c_i < c_j$ when $i < j$. $N$ in this study is set to five. Figure 7 illustrates the schematic diagram of the temporal EEG image. Instead of the single-frame EEG image $I_N$, this approach estimates the drowsiness state using $\hat{I}$, which includes information on the brain activity from multiple time points.
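Equation (2) amounts to a weighted average of the last N frames. A minimal sketch follows; the linear ramp of coefficients is an assumed choice, since the paper only requires the weights to increase toward the most recent frame and sum to one.

```python
import numpy as np

def temporal_eeg_image(frames, coeffs=None):
    """Weighted sum of the last N EEG image frames (Eq. 2).

    frames : (N, H, W, 3) sequence of EEG images, oldest first.
    coeffs : increasing weights summing to 1. A linear ramp is used by
             default as an assumed choice; the paper only requires
             c_i < c_j for i < j and sum(c) = 1.
    """
    n = len(frames)
    if coeffs is None:
        coeffs = np.arange(1, n + 1, dtype=float)
        coeffs /= coeffs.sum()                 # normalize to sum to 1
    # contract the coefficient vector against the frame axis
    return np.tensordot(coeffs, frames, axes=1)

frames = np.random.default_rng(3).random((5, 32, 32, 3))   # N = 5
fused = temporal_eeg_image(frames)
print(fused.shape)   # (32, 32, 3)
```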

3.4. Classification Using the CNN Model

This study applies a CNN including six convolution layers, three max-pooling layers, and a fully-connected layer to the classification of the input EEG image, as shown in Figure 8. A popular open-source deep learning framework named Caffe is employed to implement the CNN model [40]. The parameter settings of the overall CNN architecture are presented in Table 1. A set of filters is used to convolve the input EEG images for feature extraction. The convolved images are then subsampled by the max-pooling layers to derive compact features. The convolution and pooling process is repeated several times through the CNN layers. The lower-level features of the input data are extracted by the early layers, and those features are collected in the later layers to hierarchically learn higher-level features. Finally, the acquired high-level features are concatenated and passed into the fully-connected layer for classification. The final prediction result is determined according to the output of the fully-connected layer. We use only the alert and drowsy classes in this study, so the output size of the fully-connected layer is 2 × 1.
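To illustrate the size bookkeeping of such a stack, the following sketch walks a 32 × 32 input through six hypothetical 3 × 3 convolutions (padding 1, stride 1) interleaved with three 2 × 2 max-pooling layers. The kernel sizes and padding here are assumptions for illustration only; the actual settings are in Table 1 of the paper.

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

# Hypothetical walk of a 32x32 EEG image through the conv/pool stack:
# two same-size 3x3 convolutions followed by a 2x2 max-pool, three times.
size = 32
for stage in range(3):
    size = conv_out(size, 3, pad=1)       # conv: 3x3, pad 1 -> size kept
    size = conv_out(size, 3, pad=1)       # conv: 3x3, pad 1 -> size kept
    size = conv_out(size, 2, stride=2)    # max-pool: 2x2 -> size halved
print(size)  # 4: the 4x4 maps are flattened for the 2-way FC layer
```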

4. Experiment

The EEG dataset used in this study includes 10,395 alert trials and 3080 drowsy trials collected from 38 subjects. According to previous research, if drivers are fully aware of the driving situation, the average time for them to respond to a lane-departure event is approximately 0.7 s [41]. According to previous studies [42,43,44], drivers exhibit poor performance when they do not respond to the lane-departure event within three times the mean alert RT. Therefore, this study adopts three times the average response time as the classification boundary of the drowsiness prediction task. EEG trials are considered alert trials as long as their RTs are less than 2.1 s. In addition, EEG trials with an RT larger than 2.5 s are labelled as drowsy trials, and EEG trials with an RT between 2.1 s and 2.5 s are not used in this experiment. The evaluation of our approach is performed using leave-one-subject-out cross-validation: we select the data from one subject for testing and the data from the remaining subjects for training, and this process is repeated for each of the 38 subjects. To evaluate the predictive performance of the proposed method, the temporal EEG image method is compared with the single-frame EEG image method, which directly uses the current trial’s EEG image for drowsiness prediction.
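The RT-based labeling rule above can be stated compactly in code; the thresholds (3 × 0.7 s = 2.1 s for alert, 2.5 s for drowsy, the gap discarded) come directly from the text.

```python
ALERT_MEAN_RT = 0.7   # mean alert response time in seconds [41]

def label_trial(rt):
    """Label a lane-departure trial by its response time (seconds).

    alert  : RT < 3 x mean alert RT (2.1 s)
    drowsy : RT > 2.5 s
    None   : 2.1 s <= RT <= 2.5 s, excluded from the experiment
    """
    if rt < 3 * ALERT_MEAN_RT:
        return "alert"
    if rt > 2.5:
        return "drowsy"
    return None   # ambiguous trial: not used

print([label_trial(rt) for rt in (0.8, 2.3, 3.0)])
# ['alert', None, 'drowsy']
```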
Table 2 shows the comparison between the temporal EEG image method and the single-frame EEG image method. The average accuracy is the mean of the per-class accuracies. It is apparent that the temporal EEG image method outperforms the single-frame EEG image method. Both methods yield similar accuracy for the alert class, but our approach achieves a significant improvement in the accuracy of the drowsy class. Given the aim of drowsiness prediction, the prediction rate of the drowsy class is more important than that of the alert class. Furthermore, the results also demonstrate that our approach has better prediction performance than the single-frame EEG image method for most subjects, suggesting that the improvement generalizes across users. Table 3 shows the evaluation results for EEGNet and the hierarchical convolutional neural network (HCNN), which are CNN-based approaches for EEG analysis that have achieved good performance in their applications [45]. The results demonstrate that the proposed method yields superior prediction performance to both EEGNet and HCNN.

5. Discussion and Conclusions

It is challenging to classify EEG data without an artifact removal process because drivers’ brain activity can change over time due to many factors, such as their mental state and body movement, which result in temporal fluctuations of the EEG signals. However, these signals still contain important information associated with the drowsiness level of drivers, and thus the temporal analysis of the EEG signals becomes a crucial issue. This study proposes a temporal EEG image algorithm that combines a sequence of EEG images to form a new EEG image containing brain dynamics from multiple time points. Our experimental results show that the proposed method achieves good performance in drowsiness prediction.
Support vector machines (SVMs) were also employed for comparison with our approach because they are popular classifiers for EEG analysis. In our experiment, the computational cost was high and poor prediction results were obtained when the EEG image was used as the input of the SVM. Thus, the power spectrum of the EEG was selected as the SVM input. As in the experiments described in the previous sections, no artifact removal process was applied to the input data. The experimental results indicate that the SVM produces prediction results biased towards the alert class: it always predicts alert regardless of the input, resulting in perfect detection of alert trials but no detection of drowsy trials. We found that the SVM provides meaningful prediction results only on a balanced training dataset with similar numbers of alert and drowsy trials. In real-world applications, BCI systems usually have to perform drowsiness prediction on imbalanced datasets, which means that the SVM cannot provide reasonable reliability in real-world BCI applications.
To find a suitable CNN model for the drowsiness prediction task, this study evaluated two additional CNN architectures: (1) AlexNet, a very popular CNN architecture that is larger than the one used in this study [18], and (2) a 3D CNN, which performs 3D convolutions and is capable of learning features from both the spatial and temporal dimensions. For AlexNet, the EEG measurement was transformed into a 227 × 227 image to fit the network’s input size. For the 3D CNN, instead of using a linear combination of a sequence of EEG images as in our approach, the EEG image sequence was fed directly into the model as 3D input data. Our results show that neither AlexNet nor the 3D CNN achieves better performance than the CNN architecture proposed in the present study; that is, a high-complexity CNN is not required for the drowsiness prediction task.
The detection of drowsy trials remains a challenge because different subjects have different drowsy EEG patterns. A further investigation of brain dynamics is the key to improving the prediction performance of the BCI system.

Author Contributions

E.J.C. conceived the main idea and wrote the paper; K.-Y.Y. provided the academic support and checked the manuscript; C.-T.L. provided the funding support and performed the experiment.

Funding

This work was supported in part by the Australian Research Council under Grant DP180100670 and Grant DP180100656, the Taiwan Ministry of Science and Technology under Grant MOST 106-2218-E-009-027-MY3 and Grant MOST 108-2221-E-009-120-MY2, and Grant W911NF-10-2-0022.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chuang, S.-W.; Ko, L.-W.; Lin, Y.-P.; Huang, R.-S.; Jung, T.-P.; Lin, C.-T. Co-modulatory spectral changes in independent brain processes are correlated with task performance. Neuroimage 2012, 62, 1469–1477. [Google Scholar] [CrossRef] [PubMed]
  2. Huang, C.S.; Pal, N.R.; Chuang, C.H.; Lin, C.T. Identifying changes in EEG information transfer during drowsy driving by transfer entropy. Front. Hum. Neurosci. 2015, 9, 570. [Google Scholar] [CrossRef] [PubMed]
  3. Liu, Y.T.; Lin, Y.Y.; Wu, S.L.; Chuang, C.H.; Lin, C.T. Brain Dynamics in Predicting Driving Fatigue Using a Recurrent Self-Evolving Fuzzy Neural Network. IEEE Trans. Neural Netw. Learn. Syst. 2015, 27, 347–360. [Google Scholar] [CrossRef] [PubMed]
  4. Baulk, S.D.; Reyner, L.A.; Horne, J.A. Driver sleepiness—Evaluation of reaction time measurement as a secondary task. Sleep 2001, 24, 695–698. [Google Scholar] [CrossRef]
  5. Banks, S.; Catcheside, P.; Lack, L.; Grunstein, R.R.; McEvoy, R.D. Low levels of alcohol impair driving simulator performance and reduce perception of crash risk in partially sleep deprived subjects. Sleep 2004, 27, 1063–1067. [Google Scholar] [CrossRef]
  6. Campagne, A.; Pebayle, T.; Muzet, A. Correlation between driving errors and vigilance level: Influence of the driver’s age. Physiol. Behav. 2004, 80, 515–524. [Google Scholar] [CrossRef]
  7. Arunkumar, N.; Ramkumar, K.; Venkatraman, V.; Abdulhay, E.; Fernandes, S.L.; Kadry, S.; Segal, S. Classification of focal and non-focal EEG using entropies. Pattern Recognit. Lett. 2017, 94, 112–117. [Google Scholar]
  8. Long, J.; Li, Y.; Wang, H.; Yu, T.; Pan, J.; Li, F. A hybrid brain computer interface to control the direction and speed of a simulated or real wheelchair. IEEE Trans. Neural Syst. Rehabil. Eng. 2012, 20, 720–729. [Google Scholar] [CrossRef] [PubMed]
  9. Xiong, Y.; Gao, J.; Yang, Y.; Yu, X.; Huang, W. Classifying driving fatigue based on combined entropy measure using EEG signals. Int. J. Control Autom. 2016, 9, 329–338. [Google Scholar] [CrossRef]
  10. Winkler, I.; Haufe, S.; Tangermann, M. Automatic classification of artifactual ICA-Components for artifact removal in EEG signals. Behav. Brain Funct. 2011, 7, 30. [Google Scholar] [CrossRef]
  11. Hyvärinen, A.; Karhunen, J.; Oja, E. Independent Component Analysis; Wiley-Interscience: New York, NY, USA, 2001. [Google Scholar]
  12. Tran, Y.; Craig, A.; Boord, P.; Craig, D. Using independent component analysis to remove artifact from electroencephalographic measured during stuttered speech. Med. Biol. Eng. Comput. 2004, 42, 627–633. [Google Scholar] [CrossRef] [PubMed]
  13. Lin, C.-T.; Wu, R.-C.; Liang, S.-F.; Chao, W.-H.; Chen, Y.-J.; Jung, T.-P. EEG-based drowsiness estimation for safety driving using independent component analysis. IEEE Trans. Circuits Syst. I Regul. Pap. 2005, 52, 2726–2738. [Google Scholar]
  14. Jung, T.P.; Humphries, C.; Lee, T.W.; Makeig, S.; McKeown, M.J.; Iragui, V.; Sejnowski, T.J. Extended ICA removes artifacts from electroencephalographic recordings. Adv. Neural Inf. Process. Syst. 1998, 10, 894–900. [Google Scholar]
  15. Garipelli, G.; Chavarriaga, R.; R Millán, J. Single trial analysis of slow cortical potentials: A study on anticipation related potentials. J. Neural Eng. 2013, 10, 036014. [Google Scholar] [CrossRef]
  16. Pfurtscheller, G.; Neuper, C. Motor imagery and direct brain- computer communication. Proc. IEEE 2001, 89, 1123–1134. [Google Scholar] [CrossRef]
  17. Chuang, C.H.; Lai, P.C.; Ko, L.W.; Kuo, B.C.; Lin, C.T. Driver’s cognitive state classification toward brain computer interface via using a generalized and supervised technology. In Proceedings of the 2010 International Joint Conference on Neural Networks (IJCNN), Barcelona, Spain, 18–23 July 2010; pp. 1–7. [Google Scholar]
  18. Tabar, Y.R.; Halici, U. A novel deep learning approach for classification of EEG motor imagery signals. J. Neural Eng. 2016, 14, 016003. [Google Scholar] [CrossRef]
  19. Carvalho, S.R.; Filho, I.C.; Resende, D.O.; Siravenha, A.C.; De Souza, C.R.B.H.; Debarba, B.D.; Gomes, R.; Boulic, A. Deep Learning Approach for Classification of Reaching Targets from EEG Images. In Proceedings of the 2017 30th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), Niteroi, Brazil, 17–20 October 2017. [Google Scholar]
  20. Hinton, G.; Deng, L.; Yu, D.; Dahl, G.E.; Mohamed, A.; Jaitly, N.; Senior, A.; Vanhoucke, V.; Nguyen, P.; Sainath, T.N.; Kingsbury, B. Deep neural networks for acoustic modeling in speech recognition. IEEE Signal. Process. Mag. 2012, 29, 82–97. [Google Scholar] [CrossRef]
  21. Krizhevsky, A.; Sutskever, I.; Hinton, G. ImageNet classification with deep convolutional neural networks. In Proceedings of the NIPS, Lake Tahoe, NV, USA, 3–8 December 2012. [Google Scholar]
  22. Taigman, Y.; Yang, M.; Ranzato, M.; Wolf, L. DeepFace: Closing the Gap to Human-Level Performance in Face Verification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 24–27 June 2014. [Google Scholar]
  23. Sun, Y.; Wang, X.; Tang, X. Deep Learning Face Representation from Predicting 10,000 Classes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 24–27 June 2014. [Google Scholar]
  24. Karpathy, A.; Toderici, G.; Shetty, S.; Leung, T.; Sukthankar, R.; Li, F.-F. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 24–27 June 2014. [Google Scholar]
  25. Jirayucharoensak, S.; Pan-Ngum, S.; Israsena, P. Eeg-based emotion recognition using deep learning network with principal component based covariate shift adaptation. Sci. World J. 2014, 2014, 627892. [Google Scholar] [CrossRef]
  26. Zheng, W.-L.; Zhu, J.-Y.; Peng, Y.; Lu, B.-L. EEG-based emotion classification using deep belief networks. In Proceedings of the 2014 IEEE International Conference on Multimedia and Expo (ICME), Chengdu, China, 14–18 July 2014. [Google Scholar]
  27. Hosseini, M.-P.; Soltanian-Zadeh, H.; Elisevich, K.; Pompili, D. Cloud-based deep learning of big eeg data for epileptic seizure prediction. In Proceedings of the 2016 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Washington, DC, USA, 7–9 December 2016. [Google Scholar]
  28. Li, J.; Zhang, Z.; He, H. Hierarchical convolutional neural networks for EEG-based emotion recognition. Cogn. Comput. 2017, 10, 368–380. [Google Scholar] [CrossRef]
  29. Bashivan, P.; Rish, I.; Yeasin, M.; Codella, N. Learning representations from EEG with deep recurrent-convolutional neural networks. Int. Conf. Learn. Represent. 2016, 1511, 06448. [Google Scholar]
  30. Emami, A.; Kunii, N.; Matsuo, T.; Shinozaki, T.; Kawai, K.; Takahashi, H. Seizure detection by convolutional neural network-based analysis of scalp electroencephalography plot images. NeuroImage Clin. 2019, 22, 101684. [Google Scholar] [CrossRef]
  31. Comaniciu, A.; Najafizadeh, L. Enabling Communication for Locked-in Syndrome Patients using Deep Learning and an Emoji-based Brain Computer Interface. In Proceedings of the 2018 IEEE Biomedical Circuits and Systems Conference (BioCAS), Cleveland, OH, USA, 24 December 2018. [Google Scholar]
  32. Lin, F.-C.; Ko, L.-W.; Chuang, C.-H.; Su, T.-P.; Lin, C.-T. Generalized EEG-based drowsiness prediction system by using a self-organizing neural fuzzy system. IEEE Trans. Circuits Syst. I Reg. Pap. 2012, 59, 2044–2055. [Google Scholar] [CrossRef]
  33. Huang, R.-S.; Jung, T.-P.; Delorme, A.; Makeig, S. Tonic and phasic electroencephalographic dynamics during continuous compensatory tracking. NeuroImage 2008, 39, 1896–1909. [Google Scholar] [CrossRef] [PubMed]
  34. Chuang, C.H.; Ko, L.W.; Jung, T.P.; Lin, C.T. Kinesthesia in a sustained-attention driving task. Neuroimage 2014, 91, 187–202. [Google Scholar] [CrossRef] [PubMed]
  35. Lin, C.T.; Wu, R.C.; Jung, T.P.; Liang, S.F.; Huang, T.Y. Estimating Driving Performance Based on EEG Spectrum Analysis. EURASIP J. Appl. Signal. Process. 2005, 19, 3165–3174. [Google Scholar] [CrossRef]
  36. Ueno, H.; Kaneda, M.; Tsukino, M. Development of drowsiness detection system. In Proceedings of the VNIS’94-1994 Vehicle Navigation and Information Systems Conference, Yokohama, Japan, 6 August 2002. [Google Scholar]
  37. Jap, B.T.; Lal, S.; Fischer, P.; Bekiaris, E. Using EEG spectral components to assess algorithms for detecting fatigue. Expert Syst. 2009, 36, 2352–2359. [Google Scholar] [CrossRef]
  38. Bashivan, P.; Bidelman, G.M.; Yeasin, M. Spectrotemporal dynamics of the EEG during working memory encoding and maintenance predicts individual behavioral capacity. Eur. J. Neurosci. 2014, 40, 3774–3784. [Google Scholar] [CrossRef]
  39. Alfeld, P. A trivariate clough-tocher scheme for tetrahedral data. Comput. Aided Geom. Des. 1984, 1, 169–181. [Google Scholar] [CrossRef]
  40. Jia, Y.; Shelhamer, E.; Donahue, J.; Karayev, S.; Long, J.; Girshick, R.; Guadarrama, S.; Darrell, T. Caffe: Convolutional architecture for fast feature embedding. arXiv 2014, arXiv:1408.5093. [Google Scholar]
  41. Marc, G. How Long Does It Take to Stop? Methodological Analysis of Driver Perception-Brake Times. Transp. Hum. Factors 2000, 2, 195–216. [Google Scholar]
  42. Lin, C.T.; Huang, K.C.; Chuang, C.H.; Ko, L.W.; Jung, T.P. Can arousing feedback rectify lapses in driving? Prediction from EEG power spectra. J. Neural Eng. 2013, 10, 056024. [Google Scholar] [CrossRef] [PubMed]
  43. Huang, K.C.; Jung, T.P.; Chuang, C.H.; Ko, L.W.; Lin, C.T. Preventing lapse in performance using a drowsiness monitoring and management system. In Proceedings of the 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Diego, CA, USA, 12 November 2012. [Google Scholar]
  44. Lin, C.-T.; Huang, K.-C.; Chao, C.-F.; Chen, J.-A.; Chiu, T.-W.; Ko, L.-W.; Jung, T.-P. Tonic and phasic EEG and behavioral changes induced by arousing feedback. NeuroImage 2010, 52, 633–642. [Google Scholar] [CrossRef] [PubMed]
45. Lawhern, V.J.; Solon, A.J.; Waytowich, N.R.; Gordon, S.M.; Hung, C.P.; Lance, B.J. EEGNet: A compact convolutional neural network for EEG-based brain computer interfaces. J. Neural Eng. 2018, 15, 056013. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The virtual reality (VR)-based driving environment. (a) Vehicle used for the driving task. (b) Projector platform. (c) The surrounding scene. (d) Front view of the VR scene.
Figure 2. The event-related lane departure paradigm [2]. A complete trial consists of three steps: (1) deviation onset—the lane-departure event occurs, (2) response onset—the driver starts to steer the wheel back, and (3) the response offset—the car returns to the original cruising lane. Response time (RT) is defined as the time period between the deviation onset and the response onset. This study uses the 1 s baseline signal before the lane-departure event for predicting drowsiness according to the value of RT.
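The RT-based labeling described in the Figure 2 caption can be illustrated with a toy rule: a trial counts as drowsy when the response to the lane-departure event is slow. The 0.9 s threshold below is hypothetical; the paper derives its own criterion from the observed RT distribution.

```python
# Toy illustration of labeling a lane-departure trial from its response
# time (RT). The 0.9 s threshold is a placeholder, not the paper's value.
def label_trial(rt_seconds, threshold=0.9):
    """Label a trial as 'drowsy' or 'alert' from its response time."""
    return "drowsy" if rt_seconds > threshold else "alert"

print(label_trial(0.5), label_trial(2.0))  # alert drowsy
```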
Figure 3. Flowchart of the proposed drowsiness prediction system.
Figure 4. Schematic illustration of transforming the time-domain electroencephalography (EEG) signals into the pixels of a color image.
Figure 5. Interpolation scheme of the EEG image. The transformed image data points described in Figure 4 are arranged according to the locations of the EEG electrodes. These image pixels are interpolated over a 32 × 32 color image via the Clough–Tocher scheme.
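The interpolation step in Figure 5 can be sketched with SciPy's implementation of the Clough–Tocher scheme. The electrode coordinates and per-electrode values below are random placeholders, not the study's actual montage or band powers.

```python
# Minimal sketch: scatter per-electrode values onto a 32 x 32 image
# channel via Clough–Tocher interpolation (placeholder data throughout).
import numpy as np
from scipy.interpolate import CloughTocher2DInterpolator

rng = np.random.default_rng(0)
electrode_xy = rng.uniform(-1.0, 1.0, size=(30, 2))  # projected 2-D electrode positions
values = rng.uniform(0.0, 1.0, size=30)              # one value per electrode (one color channel)

# Interpolate the scattered electrode values over a 32 x 32 pixel grid;
# pixels outside the electrodes' convex hull fall back to 0.
interp = CloughTocher2DInterpolator(electrode_xy, values, fill_value=0.0)
grid = np.linspace(-1.0, 1.0, 32)
xx, yy = np.meshgrid(grid, grid)
channel = interp(xx, yy)
print(channel.shape)  # (32, 32)
```

Repeating this for three per-electrode feature vectors would give the three channels of one RGB EEG image.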
Figure 6. Examples of the generated EEG images. (a) The alert trials. (b) The drowsy trials.
Figure 7. The schematic diagram of the temporal EEG image. I_N is the single-frame EEG image generated from the current trial; the temporal EEG image is acquired by linearly combining I_N with the images of its previous trials.
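The linear combination in Figure 7 can be sketched as a weighted sum over a window of single-frame images. Uniform weights are an assumption here; the paper's exact combination coefficients are not restated in the caption.

```python
# Minimal sketch of the temporal EEG image: linearly combine the current
# single-frame image with those of previous trials. Uniform weights are
# an assumption, not the paper's stated coefficients.
import numpy as np

def temporal_eeg_image(frames, weights=None):
    """Linearly combine a sequence of (32, 32, 3) single-frame EEG images."""
    stack = np.stack(frames, axis=0).astype(float)
    if weights is None:
        weights = np.full(len(frames), 1.0 / len(frames))  # uniform average
    # Contract the trial axis: result keeps the (32, 32, 3) image shape.
    return np.tensordot(weights, stack, axes=1)

frames = [np.full((32, 32, 3), v) for v in (1.0, 2.0, 3.0)]
combined = temporal_eeg_image(frames)
print(combined[0, 0, 0])  # 2.0
```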
Figure 8. The convolutional neural network (CNN) architecture used in the experiment.
Table 1. Parameter setting of the CNN model.
Layer Name | Kernel Size | Stride | Num. of Filters
Conv1a | 3 × 3 | 1 | 16
Conv1b | 3 × 3 | 1 | 16
Pool1 | 2 × 2 | 1 | -
Conv2a | 3 × 3 | 1 | 32
Conv2b | 3 × 3 | 1 | 32
Pool2 | 2 × 2 | 2 | -
Conv3a | 3 × 3 | 1 | 32
Conv3b | 3 × 3 | 1 | 32
Pool3 | 2 × 2 | 2 | -
FC1 | 500 | - | -
FC2 | 2 | - | -
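The spatial sizes implied by Table 1 can be checked with a little stride arithmetic. A minimal sketch, assuming 'same' padding for the 3 × 3 convolutions and valid (unpadded) pooling, since padding is not stated in the table:

```python
# Stride arithmetic for the Table 1 CNN on a 32 x 32 input image,
# assuming 'same' padding for the convolutions (an assumption).
def conv2d_out(size, kernel, stride=1, padding=1):
    return (size + 2 * padding - kernel) // stride + 1

def pool_out(size, kernel, stride):
    return (size - kernel) // stride + 1

size = 32                      # input EEG image is 32 x 32
size = conv2d_out(size, 3)     # Conv1a -> 32
size = conv2d_out(size, 3)     # Conv1b -> 32
size = pool_out(size, 2, 1)    # Pool1 (stride 1) -> 31
size = conv2d_out(size, 3)     # Conv2a -> 31
size = conv2d_out(size, 3)     # Conv2b -> 31
size = pool_out(size, 2, 2)    # Pool2 (stride 2) -> 15
size = conv2d_out(size, 3)     # Conv3a -> 15
size = conv2d_out(size, 3)     # Conv3b -> 15
size = pool_out(size, 2, 2)    # Pool3 (stride 2) -> 7
flattened = 32 * size * size   # 32 filters -> 1568 inputs to FC1
print(size, flattened)         # 7 1568
```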
Table 2. The prediction results of the single-frame EEG image and the temporal EEG image.
Subject | Single Frame EEG Image (Alert % / Drowsy % / Total %) | Temporal EEG Image (Alert % / Drowsy % / Total %)
S1 | 97.30 / 23.06 / 79.12 | 97.94 / 19.17 / 78.11
S2 | 88.21 / 20.13 / 52.25 | 82.72 / 29.78 / 53.95
S3 | 93.50 / 27.60 / 72.47 | 97.85 / 21.47 / 73.07
S4 | 96.76 / 9.30 / 71.85 | 97.79 / 14.19 / 73.33
S5 | 96.61 / 80.69 / 92.22 | 97.13 / 86.67 / 94.20
S6 | 96.72 / 84.03 / 93.22 | 98.21 / 96.25 / 97.66
S7 | 95.14 / 50.26 / 87.21 | 95.46 / 60.54 / 89.33
S8 | 98.30 / 27.69 / 87.31 | 96.42 / 66.15 / 91.60
S9 | 96.67 / 32.55 / 81.06 | 95.15 / 58.91 / 86.17
S10 | 96.50 / 12.10 / 80.25 | 96.21 / 8.39 / 79.09
S11 | 95.31 / 14.67 / 68.21 | 96.27 / 14.21 / 68.44
S12 | 95.60 / 15.80 / 66.30 | 94.50 / 25.71 / 68.90
S13 | 95.76 / 9.66 / 74.94 | 98.02 / 7.82 / 76.02
S14 | 92.04 / 17.44 / 80.25 | 96.24 / 16.46 / 83.53
S15 | 91.04 / 19.47 / 83.18 | 93.56 / 18.07 / 85.20
S16 | 94.10 / 18.25 / 80.62 | 95.06 / 20.00 / 81.62
S17 | 93.87 / 16.77 / 81.96 | 95.27 / 18.60 / 84.12
S18 | 92.30 / 16.13 / 85.02 | 92.79 / 16.45 / 85.46
S19 | 85.97 / 33.50 / 65.05 | 87.51 / 42.92 / 69.49
S20 | 92.40 / 21.44 / 76.24 | 95.15 / 20.17 / 77.94
S21 | 94.30 / 34.61 / 79.34 | 94.19 / 54.41 / 84.12
S22 | 92.98 / 21.86 / 76.20 | 94.49 / 21.86 / 77.07
S23 | 87.12 / 32.38 / 67.30 | 90.21 / 28.57 / 67.72
S24 | 95.68 / 20.28 / 76.46 | 94.83 / 30.35 / 78.27
S25 | 95.08 / 40.58 / 83.78 | 94.53 / 55.34 / 86.35
S26 | 92.02 / 37.11 / 85.08 | 91.68 / 48.67 / 86.22
S27 | 91.65 / 52.52 / 82.21 | 91.89 / 62.82 / 84.24
S28 | 97.60 / 26.50 / 85.55 | 97.29 / 30.25 / 85.73
S29 | 96.24 / 16.12 / 80.08 | 97.63 / 12.65 / 80.21
S30 | 94.79 / 12.10 / 74.83 | 95.24 / 13.45 / 75.34
S31 | 93.38 / 18.96 / 82.42 | 94.32 / 17.76 / 82.95
S32 | 87.57 / 23.57 / 73.37 | 87.92 / 29.76 / 74.78
S33 | 89.67 / 19.24 / 65.37 | 92.60 / 15.19 / 65.42
S34 | 89.21 / 25.00 / 84.49 | 90.36 / 22.00 / 85.26
S35 | 90.14 / 15.85 / 78.29 | 91.04 / 18.54 / 79.29
S36 | 88.33 / 23.57 / 52.43 | 79.65 / 35.63 / 54.75
S37 | 95.76 / 12.37 / 80.15 | 94.91 / 16.67 / 80.75
S38 | 69.29 / 40.23 / 53.27 | 52.73 / 53.60 / 53.22
Average | 92.76 / 26.93 / 76.83 | 92.76 / 32.35 / 78.39
Table 3. The prediction results of EEGNet and HCNN.
Subject | EEGNet [45] (Alert % / Drowsy % / Total %) | HCNN [28] (Alert % / Drowsy % / Total %)
S1 | 94.39 / 13.33 / 73.99 | 99.07 / 11.11 / 76.92
S2 | 84.92 / 24.53 / 52.10 | 90.48 / 17.33 / 50.72
S3 | 93.84 / 16.40 / 68.70 | 95.51 / 21.33 / 71.43
S4 | 92.79 / 14.19 / 69.80 | 94.23 / 13.95 / 70.75
S5 | 90.81 / 18.89 / 70.66 | 99.45 / 86.11 / 95.72
S6 | 88.65 / 20.28 / 69.49 | 98.37 / 93.55 / 96.89
S7 | 91.26 / 23.24 / 79.29 | 94.82 / 32.43 / 83.88
S8 | 93.21 / 17.69 / 81.17 | 97.08 / 15.38 / 84.05
S9 | 92.46 / 16.73 / 73.69 | 98.20 / 12.73 / 77.03
S10 | 89.77 / 20.16 / 76.19 | 92.97 / 16.13 / 77.98
S11 | 91.80 / 12.12 / 64.80 | 92.54 / 12.72 / 65.50
S12 | 91.75 / 14.91 / 63.16 | 93.12 / 12.50 / 63.12
S13 | 96.40 / 9.08 / 75.10 | 94.31 / 11.76 / 74.18
S14 | 90.62 / 15.12 / 78.60 | 96.77 / 14.63 / 83.69
S15 | 91.18 / 16.67 / 82.93 | 89.96 / 21.05 / 82.33
S16 | 93.46 / 13.81 / 79.21 | 89.44 / 27.84 / 78.41
S17 | 91.69 / 15.91 / 80.66 | 84.43 / 17.20 / 74.65
S18 | 92.65 / 17.26 / 81.56 | 88.52 / 11.29 / 81.14
S19 | 91.41 / 14.75 / 60.44 | 94.92 / 17.50 / 63.63
S20 | 91.52 / 15.25 / 74.01 | 92.42 / 16.95 / 75.10
S21 | 94.09 / 19.31 / 75.16 | 97.34 / 20.59 / 77.92
S22 | 86.52 / 19.15 / 70.37 | 89.30 / 18.64 / 72.36
S23 | 85.65 / 19.63 / 61.56 | 96.35 / 7.94 / 64.09
S24 | 90.76 / 15.25 / 71.37 | 92.16 / 16.31 / 72.68
S25 | 87.33 / 20.58 / 73.38 | 97.69 / 22.33 / 81.95
S26 | 87.91 / 17.23 / 78.93 | 90.18 / 36.14 / 83.31
S27 | 89.78 / 15.95 / 71.84 | 96.08 / 28.24 / 79.59
S28 | 92.19 / 16.50 / 79.14 | 95.83 / 20.00 / 82.76
S29 | 92.37 / 13.27 / 76.15 | 95.79 / 12.24 / 78.66
S30 | 91.32 / 14.03 / 72.52 | 92.70 / 15.13 / 73.82
S31 | 91.95 / 16.27 / 80.71 | 92.19 / 17.91 / 81.15
S32 | 91.32 / 21.90 / 75.65 | 90.28 / 16.67 / 73.66
S33 | 100.00 / 0.00 / 64.89 | 90.41 / 11.39 / 62.67
S34 | 58.14 / 35.00 / 56.42 | 86.29 / 20.00 / 81.34
S35 | 83.30 / 29.76 / 74.62 | 88.68 / 17.07 / 77.08
S36 | 100.00 / 0.00 / 43.43 | 88.37 / 22.32 / 51.00
S37 | 83.62 / 17.22 / 71.61 | 92.02 / 22.22 / 79.40
S38 | 80.00 / 20.12 / 46.12 | 87.88 / 13.95 / 46.05
Average | 90.02 / 16.88 / 71.04 | 93.06 / 21.65 / 74.91

Cheng, E.J.; Young, K.-Y.; Lin, C.-T. Temporal EEG Imaging for Drowsy Driving Prediction. Appl. Sci. 2019, 9, 5078. https://doi.org/10.3390/app9235078