Article

Detection of Drivers’ Anxiety Invoked by Driving Situations Using Multimodal Biosignals

1
Department of Human Factors Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, Korea
2
Intelligent Robotics Research Division, Electronics and Telecommunications Research Institute, Daejeon 34129, Korea
*
Author to whom correspondence should be addressed.
Processes 2020, 8(2), 155; https://doi.org/10.3390/pr8020155
Submission received: 10 November 2019 / Revised: 11 January 2020 / Accepted: 21 January 2020 / Published: 25 January 2020
(This article belongs to the Special Issue Big Data in Biology, Life Sciences and Healthcare)

Abstract

It has become increasingly important to monitor drivers’ negative emotions during driving to prevent accidents. Although drivers’ anxiety is critical for safe driving, there is a lack of systematic approaches to detecting anxiety in driving situations. This study employed multimodal biosignals, including electroencephalography (EEG), photoplethysmography (PPG), electrodermal activity (EDA), and pupil size, to estimate anxiety under various driving situations. Thirty-one drivers, each with at least one year of driving experience, watched a set of thirty black-box videos containing anxiety-invoking events and another set of thirty videos without them, while their biosignals were measured. They then self-reported the time points at which anxiety was invoked in each video, from which features of each biosignal were extracted. A logistic regression (LR) classifier was applied to single biosignals to detect anxiety. Furthermore, LR classified multimodal signals accumulated in the order of PPG, EDA, pupil size, and EEG (from easiest to hardest to access). Classification using EEG alone showed the highest accuracy of 77.01%, while the other biosignals yielded accuracies no higher than the chance level. This study demonstrates the feasibility of utilizing biosignals to detect anxiety invoked by driving situations and the benefits of EEG over the other biosignals.

1. Introduction

The emotional state during driving is related to driving safety and comfort [1,2]. Negative emotions in particular can have a serious impact on driving performance, increasing the risk of accidents. For example, anger is directly linked to vehicle accidents, and anxiety interferes with concentration on driving [3]. Some studies have shown that negative emotions can be regulated by feedback from in-vehicle agents [4,5], which suggests that identifying the emotional state of a driver is essential for giving appropriate feedback.
It has already been shown that changes in physiological features such as electroencephalography (EEG), photoplethysmography (PPG), electrodermal activity (EDA), and eye-related features are more suitable than subjective questionnaires for stress detection [6,7]. Similarly, many studies have attempted to recognize a driver’s emotional state from biosignals, without requiring the driver to express emotions explicitly [8,9,10,11,12,13]. Some studies measured the physiological outcomes of the autonomic nervous system, such as heart rate and skin conductance, and used them to infer the level of stress in driving situations [8,9,10], while others also examined traffic situations (e.g., a crash), since drivers’ internal emotional states can change significantly in response to external events [11,12,14,15]. For instance, one study revealed that the attention reaction level, represented by the skin conductance response, increased with the accident risk level (i.e., the external driving environment), regardless of individual trait anxiety levels (i.e., the internal state) [14]. Likewise, drivers are affected by environmental dynamics, which creates a demand for detecting a driver’s emotions invoked by external driving situations.
Although driving anxiety is one of the emotions most influential to driving safety [16], few studies have measured the physiological [14] and neural responses [15] of anxiety compared to other negative emotions [8,9,10,11,12]. In addition, the previous studies determined the onset of anxiety as being spread over an entire video clip [12], or as identical across subjects [14,15]. However, due to variability of driving experiences and personal traits, individual drivers may start to feel anxiety at different time points.
Therefore, in the present study, we aimed to detect driving anxiety using biosignals measured at individualized anxiety onset. In addition, we investigated how combining multiple biosignals could improve such detection. For this purpose, we extracted features from four different biosignals: Electroencephalography (EEG), photoplethysmography (PPG), electrodermal activity (EDA) and pupil size (PS). As a detection algorithm, we built and trained a classifier based on the data of individual subjects and used it to classify biosignals into either a normal or anxiety state.
We confirmed that classification of EEG outperformed that of the other signals in terms of average accuracy and weights in the classification model. Classifiers tended to utilize frontal theta, alpha and gamma powers of EEG to detect anxiety-invoked situations. Furthermore, adding other biosignals such as EDA or pupil size to EEG further enhanced the detection performance in some participants. Our findings contribute to identifying feasible biosignals for anxiety detection and to revealing the cognitive processes related to driving anxiety.

2. Materials and Methods

2.1. Participants and Stimuli

Thirty-one university students with normal vision who had held their driver’s licenses for at least one year were recruited (15 females, 16 males, mean age 23.26 ± 1.93 years, mean license possession period 19.62 ± 11.84 months). The participants in the present study were different from those in our previous study that used the same stimuli [15]. This study was carried out in accordance with the recommendations of the Institutional Review Board of the Ulsan National Institute of Science and Technology (UNISTIRB-18-45-C), with written informed consent from all participants. After the experiments, eight participants were excluded from data analysis because one or more of their biosignals were of poor quality in more than 80% of trials.
Three types of anxiety-invoking external events during driving were used in this study: A sudden jaywalker, a sudden entry of a vehicle (including a bicycle), and a speeding vehicle passing by. These events were chosen using the risk criteria of the Hazard Perception Test provided by the Driver and Vehicle Standards Agency in England [17]. We collected thirty 30 s driver-perspective video clips from YouTube, each containing one of the three anxiety-invoking events above (video of anxiety: VA). We also collected another set of thirty 30 s driver-perspective video clips from YouTube that did not include any anxiety-invoking events but presented driving at normal speed (video of normal condition: VN). The anxiety-invoking events started on average at 12.73 s (S.D. 5.77 s) and lasted for 2.87 s (S.D. 1.20 s) (Table A1). The start time was defined as the moment an anxiety-related object appeared in the video, and the duration was measured from the start time to the time point when the object disappeared.

2.2. Experimental Task

The experiment consisted of two sessions (Figure 1). In the first session, participants were asked to watch sixty videos. At the end of each video, they were asked to answer the question of whether or not they felt anxiety during the video by pressing a keypad (1: Yes/2: No). Presentation of videos was repeated over three successive runs with a short break between runs—there were twenty trials of video presentation followed by responses in each run. The number of VA and VN in each run were balanced and each video was presented in a random order. In the second session, participants were told to press the space bar at the points when they had felt anxiety while they watched the same sixty videos again. They were allowed to press multiple times, yet only the first one was used in the subsequent analysis.

2.3. Multimodal Biosignal Recordings

Four biosignals were collected simultaneously in the first session: EEG, PPG, EDA, and PS. EEG signals were measured (band-pass filtering 1–50 Hz, sampling rate 500 Hz) with a 31-channel wet-electrode recording system (actiCHamp, Brain Products GmbH, Gilching, Germany) at the following electrode locations, determined in accordance with the International 10/20 system: FP1, FPz, FP2, F7, F3, Fz, F4, F8, FC9, FC5, FC1, FC2, FC6, FC10, T7, C3, Cz, C4, T8, CP5, CP1, CP2, CP6, P7, P3, Pz, P4, P8, O1, Oz, and O2. Two additional electrodes were attached to the left mastoid (TP9) as a ground and to the right mastoid (TP10) as a reference. PPG and EDA were collected from a wristband-type wearable device (E4, Empatica Inc., Milan, Italy) at 64 Hz and 4 Hz sampling rates, respectively. PS was acquired by a wearable eye tracker (Tobii Pro Glasses 2, Tobii, Danderyd, Sweden). The signals from the three devices were synchronized by marking the beginning of the first video as follows. Before watching the first driving video, participants pressed the event-marker button on the wristband in time with a 0.5 s countdown from 10 to 1 displayed on the monitor screen; only the press at the final count was required for synchronization, so misses at earlier counts did not matter. When the last number, ‘1’, was shown on the monitor screen, a beep sound was presented simultaneously and recorded by a camera embedded in the eye tracker. The first video started 0.5 s after the display of ‘1’ (Figure 1). EEG signals were recorded along with triggers marking the beginning of every trial.
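As a rough illustration of this synchronization step, the sketch below shifts a device's time axis onto the EEG clock using the shared countdown marker; all timestamps, offsets, and sampling rates here are illustrative placeholders, not values from the study.

```python
import numpy as np

def align_to_eeg_clock(timestamps, marker_t_device, marker_t_eeg):
    """Shift a device's timestamps so its sync marker coincides with the EEG trigger."""
    return np.asarray(timestamps) + (marker_t_eeg - marker_t_device)

# Illustrative values: the E4 button press marks the display of '1'; the first video
# (EEG trigger) starts 0.5 s after that display, so the device-side marker is press + 0.5 s.
eeg_first_video_t = 120.0                      # s, first-trial trigger on the EEG clock (placeholder)
e4_button_t = 118.3                            # s, E4 event-marker timestamp (placeholder)
e4_time = np.arange(0, 300, 1 / 64)            # 64 Hz PPG time axis
e4_time_on_eeg_clock = align_to_eeg_clock(e4_time, e4_button_t + 0.5, eeg_first_video_t)
```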

2.4. Behavior Analysis

The behavioral data acquired from the experiment included the self-reports of anxiety for all videos and the time points of anxiety for each VA. The ratio of self-reports of anxiety was calculated as the number of videos with a ‘Yes’ response over the number of VA or VN (i.e., 30). To verify that VA clearly invoked anxiety, we compared this ratio between VA and VN using a paired t-test. We also estimated the number of anxiety time points for each video by fitting a Poisson distribution. The time points of self-reported anxiety from VA were used to determine the onset of each individual’s anxiety (anxiety onset). Because VN contained no event, there was no clear onset time; thus, the control onset for VN was defined as the average start time of the events in VA (i.e., 12.73 ± 5.77 s). These two onsets were used to extract the features of anxiety from the biosignals (Section 2.5).
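A minimal sketch of this behavioral analysis is given below, assuming the responses and space-bar presses are already arranged as subject-by-video arrays (the arrays here are random placeholders); the Poisson maximum-likelihood estimate of lambda is simply the mean press count per video.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subj, n_vid = 31, 30
yes_va = rng.integers(0, 2, (n_subj, n_vid))   # placeholder 'Yes'(1)/'No'(0) responses to VA
yes_vn = rng.integers(0, 2, (n_subj, n_vid))   # placeholder responses to VN
presses = rng.poisson(1.0, (n_subj, n_vid))    # placeholder space-bar press counts per VA video

ratio_va = yes_va.mean(axis=1)                 # per-subject ratio of 'Yes' over the 30 VA videos
ratio_vn = yes_vn.mean(axis=1)
t_stat, p_val = stats.ttest_rel(ratio_va, ratio_vn)   # paired t-test between VA and VN ratios

lam = presses.mean(axis=0)                     # Poisson MLE of the expected press count per video
control_onset = 12.73                          # s, average VA event start, used as the onset for VN
```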

2.5. Signal Processing and Feature Extraction

2.5.1. EEG

To remove eye movement artifacts from the EEG signals, artifact subspace reconstruction (ASR) was applied to the recorded EEG data [18]. Then, the EEG data were transformed to the spectral domain using the short-time Fourier transform (STFT) with a 1 s window and 50% overlap. The power spectral density (PSD) in four frequency bands was estimated using Welch’s method: Theta (4–8 Hz), alpha (8–12 Hz), beta (13–30 Hz) and gamma (30–40 Hz). Only frontal channels (F7, F3, Fz, F4, F8, FC5, FC1, FC2, FC6) were used in this analysis, as the frontal cortex is involved in the emotional processing of anxiety [15,19] (Figure 2). The data were extracted for t2 s after each of the two onset types (i.e., anxiety onset and control onset) and baseline-corrected with the t1 s before the onsets, where t1 ∈ {1, 2, 3} s and t2 ∈ {3, 4, 5} s, yielding nine periods. Additionally, stress-related EEG features [20], such as frontal alpha asymmetry (FAA), the brain load index (BLI) and the beta/alpha ratio (B/A), were extracted from the same nine periods. Thus, a total of 423 features (FAA, BLI, B/A for 9 channels, and 4 frequencies for 9 channels, for each period) were extracted from the EEG data. To prevent over-fitting due to the sizable number of features relative to the input data (i.e., the number of trials), we reduced the number of features to 20 using least absolute shrinkage and selection operator (LASSO) regression, as provided by the function ‘lasso’ in MATLAB (2019a, MathWorks, Natick, MA, USA, 2019). We also extracted the same features from the EEG data with a 2 s window and 0.5 s non-overlapping steps to check whether this yielded more reliable estimates.
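The following sketch mirrors the EEG feature pipeline described above (band-power estimation with a 1 s window and 50% overlap, then LASSO-based reduction to 20 features). The study used MATLAB's 'lasso'; scikit-learn's Lasso is substituted here, and the regularization strength and channel layout are placeholders rather than the study's settings.

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import Lasso

FS = 500
BANDS = {"theta": (4, 8), "alpha": (8, 12), "beta": (13, 30), "gamma": (30, 40)}

def band_powers(epoch, fs=FS):
    """epoch: (n_channels, n_samples) frontal EEG after an onset, baseline-corrected."""
    f, psd = welch(epoch, fs=fs, nperseg=fs, noverlap=fs // 2, axis=-1)  # 1 s segments, 50% overlap
    feats = [psd[:, (f >= lo) & (f < hi)].mean(axis=-1) for lo, hi in BANDS.values()]
    return np.concatenate(feats)               # 4 band powers per channel

def lasso_select(X, y, n_keep=20, alpha=0.05):
    """Keep the n_keep features with the largest |coefficient|; the labels are used as a
    regression target for selection only (alpha is a placeholder, not the study's value)."""
    coef = Lasso(alpha=alpha, max_iter=10000).fit(X, y).coef_
    return np.argsort(np.abs(coef))[::-1][:n_keep]
```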

2.5.2. PPG

PPG was standardized by subtracting the mean amplitude and dividing by the standard deviation of the amplitude, computed over 10 s before ([−10 0]) and 10 s after ([0 10]) the time points of interest. A total of 12 features were extracted from the preprocessed PPG signals as follows. Firstly, four arithmetic features were calculated from the 10 s after the onset (No. 1–4 in Table 1). Then, the rest of the features were extracted from the peak-to-peak intervals (PPIs) according to a previous feature extraction method [21] (No. 5–12 in Table 1). As shown in Figure 3a, a PPI is defined as the time interval, t(n + 1) − t(n), from the n-th peak, P(n), to the subsequent peak, P(n + 1), where ‘t’ indicates time. The length and irregularity of the PPI are defined in Equations (1) and (2), respectively. We also calculated the number of PPIs within a time window, denoted as ‘nPPI’, as well as the number of fast PPIs, defined as PPIs faster than the average PPI, denoted as ‘fast PPIpost count’. In addition, the ratio of low frequency (LF: 0.04~0.15 Hz) to high frequency (HF: 0.15~0.4 Hz) was obtained within the time period of interest.
PPI length = t(n) − t(n − 2)    (1)
PPI irregularity = [(t(n) − t(n − 1)) − (t(n − 1) − t(n − 2))]/[t(n) − t(n − 2)]    (2)
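A short sketch of the PPI quantities in Equations (1) and (2) follows; the use of scipy.signal.find_peaks and its minimum-distance constraint are assumptions, since the paper does not specify the peak detector.

```python
import numpy as np
from scipy.signal import find_peaks

def ppi_features(ppg, fs=64.0):
    peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))  # peaks at least 0.4 s apart (assumed)
    t = peaks / fs                                      # peak times t(n) in seconds
    ppi = np.diff(t)                                    # t(n + 1) - t(n)
    length = t[2:] - t[:-2]                             # Eq. (1): t(n) - t(n - 2)
    irregularity = (ppi[1:] - ppi[:-1]) / length        # Eq. (2)
    n_ppi = len(ppi)                                    # 'nPPI' within the window
    fast_count = int(np.sum(ppi < ppi.mean()))          # 'fast PPIpost count'
    return length, irregularity, n_ppi, fast_count
```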

2.5.3. EDA

EDA increases after a certain latency, typically 1 s, following the onset of an arousing event [22]. Thus, the EDA signal was baseline-corrected using the period from the onset to 1 s after it. Then, we epoched the EDA signals from 1 s to 6 s after both the anxiety onset and the control onset. Five arithmetic features were computed within this 5 s time window: The mean, standard deviation, maximum, and minimum of the EDA signal, as well as the EDA amplitude, defined as the difference between the maximum and minimum.
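A minimal sketch of this EDA feature computation, assuming the 4 Hz signal is indexed by the onset sample:

```python
import numpy as np

def eda_features(eda, onset_idx, fs=4):
    baseline = eda[onset_idx:onset_idx + 1 * fs].mean()            # 0-1 s after the onset
    epoch = eda[onset_idx + 1 * fs:onset_idx + 6 * fs] - baseline  # 1-6 s after the onset
    return np.array([epoch.mean(), epoch.std(), epoch.max(), epoch.min(),
                     epoch.max() - epoch.min()])                   # amplitude = max - min
```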

2.5.4. Pupil Size

To reduce blinking noise in PS, we removed pupil data samples whose velocity was 1.5% higher than the average velocity (Figure 4). This threshold was set heuristically. According to previous studies [23,24], the largest change in PS occurs within 2 to 5 s after an emotional change, compared with the size 1 s before the change. Thus, the PS data were baseline-corrected using the signal 1 s before the onset. The five arithmetic features selected in this study were: The mean, standard deviation, maximum, minimum, and pupil range, calculated as the maximum minus the minimum within a time window (i.e., 3 s).
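The pupil preprocessing and features could look like the sketch below; the sampling rate and the use of np.gradient for velocity are assumptions, and the velocity threshold is kept as reported (heuristic).

```python
import numpy as np

def ps_features(ps, t, onset_idx, fs=100, window_s=3):
    vel = np.abs(np.gradient(ps, t))                             # absolute pupil-size velocity
    ps_clean = np.where(vel <= 1.015 * vel.mean(), ps, np.nan)   # drop blink-like samples
    baseline = np.nanmean(ps_clean[onset_idx - fs:onset_idx])    # 1 s before the onset
    epoch = ps_clean[onset_idx:onset_idx + window_s * fs] - baseline
    return np.array([np.nanmean(epoch), np.nanstd(epoch), np.nanmax(epoch),
                     np.nanmin(epoch), np.nanmax(epoch) - np.nanmin(epoch)])
```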

2.6. Decoding Analysis

We built 15 feature sets with all possible combinations of the 4 signals in order to find which signal or combination of signals provided the best features for detecting anxiety. We extracted 20 features from EEG, 12 features from PPG, 5 from EDA and 5 from pupil size. To evaluate decoding accuracy, leave-one-trial-out (LOTO) validation was used for each participant (Figure 5). To predict whether a given trial contained a video with an anxiety event or not, we trained a classifier using the remaining trials. Before training the classifier, we normalized the features using standard scaling for each feature. Logistic regression (LR) was used as the classifier. Additionally, we used 10-fold cross validation (CV) as a more conservative method for evaluating decoding accuracy, and an artificial neural network (ANN) was used as another classifier to check whether it could improve accuracy. Thus, there were eight decoding methods for analysis (2 validation methods × 2 classifiers × 2 EEG feature windows, i.e., 1 s and 2 s).
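A minimal sketch of the per-participant decoding scheme (standard scaling plus logistic regression, evaluated with leave-one-trial-out cross-validation) using scikit-learn, which stands in for whatever toolbox the authors actually used:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def loto_accuracy(X, y):
    """X: (n_trials, n_features) for one participant; y: 1 = anxiety (VA), 0 = control (VN)."""
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    return cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()   # fraction of left-out trials classified correctly
```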
In addition, we developed a cumulative feature count (CFC) in order to evaluate which biosignal was more involved in building the classifier across participants. To do so, we calculated the average of the absolute values of the LR weights assigned to each of the 42 features for each participant (Figure 6). Then, we sorted the features by their average absolute weight values in descending order (Figure 6a). Finally, we collected this vector of sorted features from every participant and counted the number of times each feature appeared at each rank (Figure 6b). A feature with the largest proportion at the high ranks could be interpreted as the best feature and/or the best signal. The CFCs for the other possible classifiers were calculated in the same way, except for the number of features in the feature set. Since the CFC was used to rank weights rather than to select features, the number of features of each classifier was not changed, regardless of CFC application.
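The cumulative feature count can be expressed as in the sketch below, where the per-fold LR coefficients of one participant are averaged in absolute value, ranked, and tallied across participants (the array shapes are assumptions):

```python
import numpy as np

def cumulative_feature_count(weights):
    """weights: (n_participants, n_folds, n_features) LR coefficients from the LOTO folds."""
    n_feat = weights.shape[-1]
    mean_abs = np.abs(weights).mean(axis=1)       # average |weight| per participant and feature
    ranks = np.argsort(-mean_abs, axis=1)         # feature indices sorted by descending weight
    cfc = np.zeros((n_feat, n_feat), dtype=int)   # cfc[r, f] = how often feature f took rank r
    for subject_order in ranks:
        for rank, feat in enumerate(subject_order):
            cfc[rank, feat] += 1
    return cfc
```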

3. Results

3.1. Behavior Results

The ratios of self-reports of anxiety for VA and VN were 0.7505 and 0.1704, respectively, indicating that VA invoked anxiety significantly more than VN did (t(30) = 20.78, p < 0.0001). In addition, the average number of keyboard presses indicating anxiety timing for each VA was 0.99 ± 0.22. The expected number of anxiety expressions for each stimulus, fitted by a Poisson distribution, is summarized in Table 2. For example, one would expect to observe 1.103 keyboard presses for stimulus no. 1, as estimated from the button-press data of the 31 participants. These results confirmed that VA could sufficiently arouse anxiety in our experiment.

3.2. Decoding Results

Twenty-three participants’ data were used for the decoding analysis. The numbers of anxiety and control trials used in the analysis were 24.91 ± 7.36 and 24.09 ± 6.65, respectively, out of a maximum of 30 trials each. A paired t-test showed no difference in the number of trials between anxiety and control (t(22) = 1.67, p = 0.11), thus setting the chance level of decoding at 50%.
The LR classifier with LOTO validation using feature sets with 1 s EEG data showed the highest accuracies among the eight decoding methods (Table A2 and Table A3). Paired t-tests across decoding methods revealed that the other three classification methods using feature sets with 1 s EEG data showed lower average and maximum accuracies (ps < 0.05). We also found that the decoding accuracies of feature sets including EEG features with a 2 s window, using the LR classifier with both the LOTO and 10-fold CV methods, were not above the chance level (ps > 0.5) (Table A4). The ANN classifier trained with the 10-fold CV method did not perform above the chance level either (ps > 0.9). When using the ANN classifier with the LOTO method, however, feature set 2 (PPG only) showed results slightly above the chance level (average 0.53, t(22) = 1.90, p = 0.035) across subjects, while the other feature sets did not (ps > 0.05). In sum, our analysis indicated that the LR classifier with the LOTO validation method produced the most accurate estimation.
Decoding results showed that among 15 possible combinations of multimodal biosignals, decoding EEG alone showed the highest accuracy (Table 3, the third column). In addition, we obtained accuracy above the chance level in most participants (i.e., 22 or 23) whenever the feature sets included EEG features (Table 3, the fifth column). When decoding all the features from every biosignal, the cumulative feature count analysis revealed that the EEG features dominated the top ranks followed by PPG (Figure 7). The cumulative feature count results from other combinations of biosignals also indicated that the EEG features were mostly used for decoding (Figure A1 and Figure A2). Although using the EEG features exhibited the highest performance on average, a subset of participants showed higher decoding accuracy when using other feature sets compared to using the feature set 1—which contained EEG features only (Table 3, the rightmost column). Nine participants exhibited higher accuracy when using the feature set 7, consisting of PS plus EEG, compared to using EEG only. However, only two of them presented above-chance-level accuracy using the feature set 4 that contained PS only, indicating that PS could augment EEG to enhance classification accuracy but not yield high accuracy alone. This is the case when using other feature sets such as the feature set 6 (EEG + EDA) and 13 (EEG + EDA + PS), where adding other signals to EEG helped increase accuracy, but using those signals alone did not produce high accuracy.
Having observed that adding other signals to EEG could improve decoding, we counted how many participants benefited, in terms of decoding accuracy, from mixing other biosignals with EEG. In other words, we evaluated how many participants performed better using any of the feature sets that included EEG plus other signals (i.e., sets 5, 6, 7, 11, 12, 13 and 15) than using feature set 1 (i.e., EEG only). We found that 16 out of 23 participants exhibited higher accuracy when using multimodal features than when using EEG only. Figure 8 shows the best feature set for each participant and how much it improved decoding accuracy compared with the uni-modal EEG feature set. The seven participants (i.e., 2, 4, 5, 6, 10, 24, 27) for whom the uni-modal EEG feature set achieved the highest accuracy, or for whom the uni-modal and best multimodal feature sets achieved the same accuracy, were excluded from the visualization. In particular, feature set 7 (EEG + PS) and feature set 6 (EEG + EDA) were the most influential in producing accuracy improvements from multimodal signals.

3.3. Selected Features from EEG

We selected twenty-dimensional feature vectors from 324-dimensional feature vectors of EEG features using LASSO. The most commonly selected features across participants were alpha power at F3 and Fz channels, followed by theta and gamma at Fz channel (Figure 9a). In addition, the most commonly selected features among all training sets for building models of all participants (i.e., 17,775 sets) were also alpha power at F3 (4,488 sets) and Fz (4,405 sets) and theta power at Fz (4,387 sets) (Figure 9b). Notably, gamma feature selection occurred more frequently over front-central channels (e.g., FC1, FC2, FC6) than frontal channels, whereas theta and alpha features over frontal channels were preferred.

4. Discussion

This study aimed to investigate whether multimodal biosignals from wearable sensors could be used to detect anxiety invoked by driving situations, and which signal or combination of signals would show the highest detection accuracy. We simultaneously measured four biosignals—EEG, PPG, EDA, and pupil size—and built a classifier to discriminate anxiety-invoked driving situations and normal ones from these biosignals. The results revealed that classification of EEG outperformed that of other signals in terms of average accuracy and cumulative feature counts. Specifically, classifiers tended to harness frontal theta, alpha and gamma powers of EEG to detect anxiety-invoked situations. Adding other biosignals such as EDA or pupil size to EEG further enhanced the detection performance in some participants.
The selected EEG features for anxiety detection might indicate the neural processes involved in dealing with anxiety events. Frontal-midline theta oscillations may directly represent the emotional processing of anxiety. It is widely known that the anterior cingulate cortex (ACC) is involved in processing negative affect and generates theta oscillations at the frontal midline [25,26]. Another possible explanation is that theta oscillations at the frontal midline are engaged in attention-demanding tasks [25,26,27,28]. For example, encountering sudden increases of traffic on road lanes or crossroads increased frontal midline theta power in a driving simulator, where the external situation required attention for action derived from the new information [27]. The anxiety events used in our study delivered new information requiring follow-up action (e.g., hitting the brake) in driving environments, thus inducing theta oscillations at the frontal midline. In addition, frontal gamma oscillations often appear along with frontal theta oscillations when attention is required for a task [29]. However, it is difficult to find a clear explanation for the alpha oscillations at frontal channels.
Despite the dominance of EEG features in their contribution to brain-computer interface (BCI) performance, some participants (i.e., 16 out of 23) exhibited better performance when other biosignals (i.e., EDA, pupil size or both) were added to EEG. This leaves room for simpler biosignals than EEG to be used in future anxiety detection systems. Yet, it should also be highlighted that the best combination of biosignals varied across individuals, suggesting that a system to detect anxiety may need personal customization, particularly in a vehicle. We attempted to extract a common feature set from all the participants and examine decoding performance using it, but decoding performance was only close to the chance level. This might be because the informative features varied across individuals, as expected. In addition, further work is required to explore why not all individuals displayed improved accuracy with multimodal signals compared with EEG only. Nonetheless, our study highlights that EEG seems to be essential in the development of such a system.
Overall, the average accuracy achieved in this study is lower than in other studies that detected drivers’ states: 77% vs. {82%, 82.03%, 89.70%, 100%, 77.95%} [8,9,10,11,12]. However, those studies estimated drivers’ states other than anxiety, such as stress or specific emotions (happy and angry), discriminating these emotional states from a normal state. In contrast, our study estimated changes in anxiety induced by sudden events in driving situations.
The present study contributes to the extraction of feasible biosignals for anxiety detection while driving. Furthermore, the analysis of the neural data demonstrated that attention for action and the processing of negative affect were involved in driving with anxiety events. Our findings can be applied to systems for monitoring drivers’ emotional states in smart cars. This research suggests the following directions for future work: Broadening the target group to novice drivers, who may feel anxiety more frequently, or to elderly drivers, whose state changes are slower than those of typical drivers. In addition, future work should focus on enhancing the decoding accuracy of anxiety detection by applying feature selection methods suggested in other emotion detection studies, such as hybrid techniques (e.g., clustering, principal component analysis (PCA), etc.) [30,31].

Author Contributions

Conceptualization, S.L.; data curation, S.L. and T.L.; formal analysis, S.L., T.L. and T.Y.; funding acquisition, C.Y. and S.-P.K.; investigation, S.L.; methodology, S.L., T.L. and T.Y.; project administration, S.L. and C.Y.; resources, S.-P.K.; software, S.L., T.L. and T.Y.; supervision, S.-P.K.; validation, S.L. and T.L.; visualization, S.L.; writing—original draft, S.L.; writing—review & editing, S.-P.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Electronics and Telecommunications Research Institute (ETRI), grant number 18ZS1300 (the development of smart context-awareness foundation technique for major industry acceleration) and by the Korean Government (MSIT), grant number 2017-0-00432 (development of non-invasive integrated BCI SW platform to control home appliances and external devices by user’s thought via AR/VR interface).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Descriptions of the videos with anxiety events.
Video No. | Event Start (s) | Event End (s) | Description
1 | 19 | 21 | Lane change of the front car from left side
2 | 6 | 8 | Lane change of the front car from left side
3 | 16 | 19 | Jaywalking from left side
4 | 8 | 10 | Jaywalking from left side
5 | 13 | 15 | Jaywalking from left side
6 | 7 | 10 | Jaywalking from left side
7 | 8 | 10 | Jaywalking from right side at night
8 | 11 | 14 | Lane change of the front car from right side at night
9 | 18 | 21 | Jaywalking from left side at night
10 | 10 | 11 | Bicyclist from right side
11 | 23 | 25 | Jaywalking from left side
12 | 12 | 15 | Bicyclist from left side
13 | 13 | 15 | Jaywalking from left side
14 | 22 | 24 | Bicyclist from left side at a high speed
15 | 18 | 20 | Jaywalking from left side at night
16 | 19 | 21 | Pedestrian from left side
17 | 16 | 18 | Wheelchair jaywalking from right side at the corner
18 | 8 | 10 | Fast jaywalking from right side
19 | 22 | 27 | Bus at the front changing lane from right side
20 | 17 | 19 | Large vehicle passing by left side at night
21 | 18 | 21 | Large vehicle at the front trying to change lane from left side
22 | 18 | 21 | Large vehicle at the front trying to change lane from left side
23 | 7 | 13 | Large vehicle at the front trying to change lane from right side
24 | 1 | 6 | A sudden stop of a car at the front
25 | 8 | 13 | The entrance of a bottleneck
26 | 13 | 18 | Lane change of the front car from left side
27 | 6 | 9 | Facing a car driving in reverse lane
28 | 8 | 12 | Lane change of the front car from right side
29 | 13 | 15 | Facing a car driving in reverse lane
30 | 4 | 7 | Lane change of the front car from right side

Appendix B

Table A2. Individual decoding accuracy for each feature set using LR classifier with LOTO method (1 s window for EEG features).
Feature Set No. | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15
Subject 1 | 0.8500 | 0.6750 | 0.1000 | 0.1750 | 0.7750 | 0.8750 | 0.7250 | 0.5750 | 0.6500 | 0.1250 | 0.7500 | 0.7250 | 0.7000 | 0.6000 | 0.7250
Subject 2 | 0.7636 | 0.5818 | 0.4545 | 0.5636 | 0.6545 | 0.7636 | 0.6727 | 0.5273 | 0.5455 | 0.5091 | 0.6727 | 0.6545 | 0.6727 | 0.5273 | 0.6000
Subject 3 | 0.7193 | 0.3684 | 0.2807 | 0.4211 | 0.7544 | 0.7368 | 0.7719 | 0.3333 | 0.4561 | 0.3860 | 0.7193 | 0.7193 | 0.7368 | 0.4035 | 0.6842
Subject 4 | 0.8246 | 0.4912 | 0.5263 | 0.4561 | 0.7193 | 0.8246 | 0.8246 | 0.4737 | 0.5263 | 0.3684 | 0.7018 | 0.7719 | 0.7895 | 0.4912 | 0.7719
Subject 5 | 0.7797 | 0.3220 | 0.4576 | 0.1186 | 0.7288 | 0.7797 | 0.7458 | 0.2881 | 0.3729 | 0.1017 | 0.6949 | 0.6949 | 0.7288 | 0.3559 | 0.6780
Subject 6 | 0.8372 | 0.5814 | 0.5349 | 0.4884 | 0.7442 | 0.8372 | 0.7674 | 0.6047 | 0.4419 | 0.3953 | 0.7674 | 0.6744 | 0.7907 | 0.4419 | 0.7674
Subject 7 | 0.5000 | 0.5500 | 0.0000 | 0.3000 | 0.6000 | 0.5500 | 0.4500 | 0.5000 | 0.4000 | 0.4000 | 0.6000 | 0.5000 | 0.4500 | 0.3500 | 0.4500
Subject 9 | 0.8103 | 0.4655 | 0.5172 | 0.3793 | 0.7241 | 0.7414 | 0.8276 | 0.5000 | 0.4655 | 0.4828 | 0.6897 | 0.6724 | 0.7759 | 0.5172 | 0.6724
Subject 10 | 0.7544 | 0.3333 | 0.2807 | 0.2632 | 0.7018 | 0.6842 | 0.7193 | 0.3158 | 0.3333 | 0.3684 | 0.6667 | 0.6842 | 0.6842 | 0.3509 | 0.6842
Subject 11 | 0.7500 | 0.4464 | 0.1429 | 0.4643 | 0.6786 | 0.7500 | 0.8036 | 0.5179 | 0.4464 | 0.4821 | 0.6429 | 0.6964 | 0.7679 | 0.5000 | 0.6786
Subject 16 | 0.9375 | 0.5625 | 0.5625 | 0.5000 | 0.9375 | 0.9375 | 1.0000 | 0.5625 | 0.6250 | 0.7500 | 0.8125 | 1.0000 | 1.0000 | 0.6875 | 0.9375
Subject 17 | 0.5789 | 0.4474 | 0.5789 | 0.4737 | 0.6579 | 0.5789 | 0.5526 | 0.4474 | 0.4211 | 0.5263 | 0.6579 | 0.5526 | 0.5789 | 0.3947 | 0.5526
Subject 18 | 0.6842 | 0.4737 | 0.3684 | 0.4386 | 0.6842 | 0.7368 | 0.6140 | 0.5439 | 0.5263 | 0.4211 | 0.7018 | 0.6667 | 0.6842 | 0.5789 | 0.6842
Subject 19 | 0.7544 | 0.4561 | 0.5263 | 0.4912 | 0.7018 | 0.7018 | 0.7719 | 0.4035 | 0.4386 | 0.5088 | 0.6667 | 0.6842 | 0.7544 | 0.4386 | 0.6667
Subject 20 | 0.7719 | 0.5088 | 0.5439 | 0.4035 | 0.7719 | 0.8070 | 0.8070 | 0.4912 | 0.4561 | 0.4561 | 0.8070 | 0.7368 | 0.8070 | 0.4561 | 0.7368
Subject 21 | 0.6429 | 0.4821 | 0.5714 | 0.5714 | 0.6429 | 0.5893 | 0.5714 | 0.5536 | 0.5536 | 0.6429 | 0.6071 | 0.6786 | 0.6607 | 0.5893 | 0.6786
Subject 23 | 0.8864 | 0.6136 | 0.5000 | 0.4318 | 0.8636 | 0.8864 | 0.9091 | 0.5000 | 0.6364 | 0.3636 | 0.8864 | 0.9318 | 0.8636 | 0.5909 | 0.9318
Subject 24 | 0.7193 | 0.4386 | 0.5088 | 0.3509 | 0.5789 | 0.7018 | 0.6842 | 0.4561 | 0.3684 | 0.4561 | 0.6316 | 0.5789 | 0.6316 | 0.4035 | 0.5088
Subject 25 | 0.8596 | 0.5088 | 0.5088 | 0.5088 | 0.8070 | 0.8772 | 0.8421 | 0.5614 | 0.5614 | 0.4737 | 0.8772 | 0.7719 | 0.8596 | 0.5965 | 0.8421
Subject 26 | 0.7273 | 0.5818 | 0.4182 | 0.5273 | 0.6545 | 0.7273 | 0.7455 | 0.5273 | 0.5273 | 0.4909 | 0.6545 | 0.6364 | 0.7091 | 0.5091 | 0.6545
Subject 27 | 1.0000 | 0.5882 | 0.4706 | 0.5294 | 0.9412 | 1.0000 | 1.0000 | 0.6471 | 0.5294 | 0.4118 | 0.8824 | 0.8824 | 0.9412 | 0.5294 | 0.8824
Subject 28 | 0.7544 | 0.5088 | 0.5789 | 0.4211 | 0.7193 | 0.8070 | 0.7544 | 0.4912 | 0.5088 | 0.4035 | 0.7193 | 0.7544 | 0.7895 | 0.4737 | 0.7368
Subject 29 | 0.8070 | 0.4561 | 0.3509 | 0.5263 | 0.7719 | 0.7719 | 0.8421 | 0.5088 | 0.5263 | 0.4737 | 0.7544 | 0.8421 | 0.8421 | 0.5614 | 0.7895
Table A3. Average and maximum decoding accuracies for each feature set using two classifiers with two validation methods (1 s window for EEG features).
Feature Set No. | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15
Avg. LR_LOTO | 0.8335 | 0.6938 | 0.5620 | 0.5347 | 0.6887 | 0.8209 | 0.8321 | 0.6927 | 0.6117 | 0.5898 | 0.7015 | 0.7811 | 0.8100 | 0.6865 | 0.7162
Avg. LR_CV | 0.6613 | 0.3925 | 0.1817 | 0.2835 | 0.6110 | 0.6545 | 0.6459 | 0.4038 | 0.4181 | 0.2941 | 0.6033 | 0.6049 | 0.6345 | 0.4269 | 0.5991
Avg. ANN_LOTO | 0.7455 | 0.5404 | 0.4623 | 0.4928 | 0.6597 | 0.7141 | 0.7078 | 0.5099 | 0.5073 | 0.5020 | 0.6562 | 0.6617 | 0.6852 | 0.4932 | 0.6320
Avg. ANN_CV | 0.5908 | 0.4875 | 0.3580 | 0.4448 | 0.5232 | 0.5938 | 0.5525 | 0.4584 | 0.4344 | 0.4250 | 0.4962 | 0.4978 | 0.5554 | 0.4476 | 0.4786
Max LR_LOTO | 1.0000 | 1.0000 | 0.7000 | 0.7000 | 0.9500 | 1.0000 | 1.0000 | 1.0000 | 0.7500 | 0.7500 | 0.8864 | 1.0000 | 1.0000 | 1.0000 | 0.9500
Max LR_CV | 1.0000 | 0.5500 | 0.6000 | 0.5500 | 0.9500 | 0.9500 | 1.0000 | 0.7500 | 0.6100 | 0.7500 | 0.8500 | 1.0000 | 1.0000 | 0.8000 | 0.9500
Max ANN_LOTO | 1.0000 | 0.6842 | 0.7000 | 0.6471 | 0.9412 | 0.9375 | 0.9412 | 0.6842 | 0.7000 | 0.7500 | 0.8824 | 0.8824 | 0.8750 | 0.5965 | 0.8824
Max ANN_CV | 0.9000 | 0.6500 | 0.6500 | 0.6000 | 0.8500 | 1.0000 | 0.8500 | 0.7000 | 0.6500 | 0.7000 | 0.8500 | 0.9500 | 0.8500 | 0.7000 | 0.8000
Table A4. Average and maximum decoding accuracies for each feature set using two classifiers with two validation methods (2 s window for EEG features).
Feature Set No. | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15
Avg. LR_LOTO | 0.4861 | 0.4808 | 0.3917 | 0.4316 | 0.4974 | 0.4941 | 0.4934 | 0.4828 | 0.4889 | 0.4370 | 0.4980 | 0.4961 | 0.4924 | 0.4936 | 0.5010
Avg. LR_CV | 0.4316 | 0.3907 | 0.1619 | 0.2772 | 0.4470 | 0.4423 | 0.4488 | 0.4102 | 0.4051 | 0.3041 | 0.4560 | 0.4627 | 0.4577 | 0.4097 | 0.4646
Avg. ANN_LOTO | 0.5104 | 0.5350 | 0.4530 | 0.4685 | 0.4929 | 0.5267 | 0.5104 | 0.5197 | 0.4791 | 0.5014 | 0.5016 | 0.5026 | 0.5130 | 0.4904 | 0.5031
Avg. ANN_CV | 0.4512 | 0.4780 | 0.3456 | 0.4236 | 0.4441 | 0.4223 | 0.4395 | 0.4309 | 0.4535 | 0.4407 | 0.4459 | 0.4372 | 0.4233 | 0.4101 | 0.4370
Max LR_LOTO | 0.6842 | 0.8095 | 0.5652 | 0.6842 | 0.7619 | 0.7632 | 0.7368 | 0.6316 | 0.7143 | 0.6786 | 0.7143 | 0.7619 | 0.7105 | 0.7000 | 0.7000
Max LR_CV | 0.6083 | 0.7667 | 0.5000 | 0.5500 | 0.7000 | 0.6333 | 0.7083 | 0.7167 | 0.6333 | 0.5000 | 0.7000 | 0.7500 | 0.7167 | 0.6000 | 0.6500
Max ANN_LOTO | 0.7000 | 0.7193 | 0.7368 | 0.6140 | 0.7143 | 0.6316 | 0.6842 | 0.7895 | 0.6842 | 0.6842 | 0.6316 | 0.7619 | 0.6607 | 0.6140 | 0.6500
Max ANN_CV | 0.7200 | 0.6667 | 0.6000 | 0.6000 | 0.7500 | 0.6550 | 0.6167 | 0.6000 | 0.7000 | 0.6500 | 0.6500 | 0.6150 | 0.5900 | 0.6500 | 0.6000

Appendix C

Figure A1. The cumulative feature count results from each of four biosignals.
Figure A2. The cumulative feature count results from all possible combinations of three biosignals out of four.

References

1. Eyben, F.; Wöllmer, M.; Poitschke, T.; Schuller, B.; Blaschke, C.; Faerber, B.; Nguyen-Thien, N. Emotion on the Road—Necessity, Acceptance, and Feasibility of Affective Computing in the Car. Adv. Hum.-Comput. Interaction 2010, 2010.
2. Chan, M.; Singhal, A. Emotion matters: Implications for distracted driving. Saf. Sci. 2015, 72, 302–309.
3. de Groot-Mesken, J. Measuring Emotions in Traffic. In Proceedings of the ESF Congress Towards Safer Road Traffic in Southern Europe, Ankara, Turkey, 31 May–2 June 2001.
4. Jeon, M.; Walker, B.N.; Gable, T.M. The effects of social interactions with in-vehicle agents on a driver’s anger level, driving performance, situation awareness, and perceived workload. Appl. Ergon. 2015, 50, 185–199.
5. Nass, C.; Jonsson, I.-M.; Harris, H.; Reaves, B.; Endo, J.; Brave, S.; Takayama, L. Improving automotive safety by pairing driver emotion and car voice emotion. In CHI ‘05 Extended Abstracts on Human Factors in Computing Systems; ACM: Portland, OR, USA, 2005; pp. 1973–1976.
6. Alberdi, A.; Aztiria, A.; Basarab, A. Towards an automatic early stress recognition system for office environments based on multimodal measurements: A review. J. Biomed. Inform. 2016, 59, 49–75.
7. Giannakakis, G.; Grigoriadis, D.; Giannakaki, K.; Simantiraki, O.; Roniotis, A.; Tsiknakis, M. Review on psychological stress detection using biosignals. IEEE Trans. Affect. Comput. 2019, 1.
8. Rigas, G.; Goletsis, Y.; Fotiadis, D.I. Real-Time Driver’s Stress Event Detection. IEEE Trans. Intell. Transp. Syst. 2012, 13, 221–234.
9. Singh, R.R.; Conjeti, S.; Banerjee, R. Assessment of Driver Stress from Physiological Signals collected under Real-Time Semi-Urban Driving Scenarios. Int. J. Comput. Intell. Syst. 2014, 7, 909–923.
10. Chen, L.-l.; Zhao, Y.; Ye, P.-f.; Zhang, J.; Zou, J.-z. Detecting driving stress in physiological signals based on multimodal feature analysis and kernel classifiers. Expert Syst. Appl. 2017, 85, 279–291.
11. Ooi, J.; Ahmad, S.; Ishak, A.; Nisa, K.; Minhad, N.A.; Ali, S.; Yu Zheng, C. Grove: An auxiliary device for sympathetic assessment via EDA measurement of neutral, stress, and anger emotions during simulated driving conditions. Int. J. Med Eng. Inform. 2018, 10, 16.
12. Fan, X.; Bi, L.; Chen, Z. Using EEG to Detect Drivers’ Emotion with Bayesian Networks. In Proceedings of the 2010 International Conference on Machine Learning and Cybernetics, Qingdao, China, 11–14 July 2010; pp. 1177–1181.
13. Healey, J.A.; Picard, R.W. Detecting stress during real-world driving tasks using physiological sensors. IEEE Trans. Intell. Transp. Syst. 2005, 6, 156–166.
14. Barnard, M.P.; Chapman, P. Are anxiety and fear separable emotions in driving? A laboratory study of behavioural and physiological responses to different driving environments. Accid. Anal. Prev. 2016, 86, 99–107.
15. Lee, S.; Lee, T.; Yang, T.; Seomoon, E.; Yoon, C.; Kim, S.-P. Neural correlates of anxiety induced by environmental events during driving. In Proceedings of the TENCON 2018-2018 IEEE Region 10 Conference, Jeju Island, Korea, 28–31 October 2018.
16. Taylor, J.; Deane, F.; Podd, J. The Relationship Between Driving Anxiety and Driving Skill: A Review of Human Factors and Anxiety-Performance Theories to Clarify Future Research Needs. N. Z. J. Psychol. 2008, 37, 28–37.
17. HazardPerceptionTest.net Hazard Perception Tips. Available online: https://hazardperceptiontest.net/hazard-perception-tips/ (accessed on 9 November 2019).
18. Chang, C.; Hsu, S.; Pion-Tonachini, L.; Jung, T. Evaluation of Artifact Subspace Reconstruction for Automatic EEG Artifact Removal. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; pp. 1242–1245.
19. Aftanas, L.I.; Pavlov, S.V.; Reva, N.V.; Varlamov, A.A. Trait anxiety impact on the EEG theta band power changes during appraisal of threatening and pleasant visual stimuli. Int. J. Psychophysiol. 2003, 50, 205–212.
20. Giannakakis, G.; Grigoriadis, D.; Tsiknakis, M. Detection of stress/anxiety state from EEG features during video watching. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 6034–6037.
21. Yeo, H.-S.; Lee, J.-W.; Yoon, G.-W.; Hwang, H.-T. Method and Apparatus for Evaluating Human Stress Using Photoplethysmography. U.S. Patent 7613486, 3 November 2009.
22. Boucsein, W. Electrodermal Activity, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2011.
23. Partala, T.; Surakka, V. Pupil size variation as an indication of affective processing. Int. J. Hum.-Comput. Stud. 2003, 59, 185–198.
24. Klingner, J.; Kumar, R.; Hanrahan, P. Measuring the task-evoked pupillary response with a remote eye tracker. In Proceedings of the 2008 Symposium on Eye Tracking Research & Applications, Savannah, GA, USA, 26–28 March 2008; ACM: Savannah, GA, USA, 2008; pp. 69–72.
25. Cavanagh, J.F.; Shackman, A.J. Frontal midline theta reflects anxiety and cognitive control: Meta-analytic evidence. J. Physiol.-Paris 2015, 109, 3–15.
26. Bush, G.; Luu, P.; Posner, M.I. Cognitive and emotional influences in anterior cingulate cortex. Trends Cogn. Sci. 2000, 4, 215–222.
27. Laukka, S.J.; Järvilehto, T.; Alexandrov, Y.I.; Lindqvist, J. Frontal midline theta related to learning in a simulated driving task. Biol. Psychol. 1995, 40, 313–320.
28. Mizuki, Y.; Suetsugi, M.; Ushijima, I.; Yamada, M. Differential effects of dopaminergic drugs on anxiety and arousal in healthy volunteers with high and low anxiety. Prog. Neuro-Psychopharmacol. Biol. Psychiatry 1997, 21, 573–590.
29. Ishii, R.; Canuet, L.; Ishihara, T.; Aoki, Y.; Ikeda, S.; Hata, M.; Katsimichas, T.; Gunji, A.; Takahashi, H.; Nakahachi, T.; et al. Frontal midline theta rhythm and gamma power changes during focused attention on mental calculation: An MEG beamformer analysis. Front. Hum. Neurosci. 2014, 8, 406.
30. Nogueira, P.A.; Rodrigues, R.; Oliveira, E.; Nacke, L.E. A Hybrid Approach at Emotional State Detection: Merging Theoretical Models of Emotion with Data-Driven Statistical Classifiers. In Proceedings of the 2013 IEEE/WIC/ACM International Joint Conferences on Web Intelligence (WI) and Intelligent Agent Technologies (IAT), Atlanta, GA, USA, 17–20 November 2013; pp. 253–260.
31. Sharma, N.; Gedeon, T. Objective measures, sensors and computational techniques for stress recognition and classification: A survey. Comput. Methods Programs Biomed. 2012, 108, 1287–1301.
Figure 1. Experimental task and multimodal biosignal recording. In Session 1, participants watched video clips with or without anxiety events and answered whether they felt anxiety. In Session 2, participants indicated the points in each video at which they felt anxiety.
Figure 2. EEG montage for the present study. Colored channels were analyzed in the study.
Figure 3. Examples of photoplethysmography (PPG) and electrodermal activity (EDA) signals from subject 1. (a) The description of peak-to-peak interval (PPI) (red) and PPG amplitude (green). (b) The description of EDA amplitude.
Figure 4. An example of preprocessed pupil size data. The sudden shrink in velocity (red arrow) indicates eye blinking, which was removed by preprocessing.
Figure 5. The description of leave-one-trial-out (LOTO) within 1 subject.
Figure 6. Visualization of the method for extracting the best feature. (a) Individual level, (b) Participants level.
Figure 7. The cumulative feature count according to rank of weights from the classifier using the feature set composed of all 4 signals.
Figure 8. Improved decoding accuracy of best multimodal feature set compared to uni-modal feature set of EEG.
Figure 9. Frequencies of features selected by the least absolute shrinkage and selection operator (LASSO). (a) The number of participants (out of 23) for whom each feature was selected by LASSO in more than 50% of trials; darker colors indicate features used for decoding anxiety by more participants. (b) The number of training sets (out of 17,775) in which each feature was selected by LASSO.
Table 1. The descriptions of photoplethysmography (PPG) features.
No. | Feature | Description
1 | PPG amplitude mean | The average of the PPG amplitude [0 10]
2 | PPG amplitude std. | The standard deviation of the PPG amplitude [0 10]
3 | PPG amplitude max | The maximum amplitude of the PPG [0 10]
4 | PPG amplitude min | The minimum amplitude of the PPG [0 10]
5 | PPI mean difference | PPI mean [0 10] − PPI mean [−10 0]
6 | PPI std. difference | PPI std. [0 10] − PPI std. [−10 0]
7 | PPI length difference | Mean PPI length [0 10] − Mean PPI length [−10 0]
8 | PPI irregularity difference | Mean PPI irregularity [0 10] − Mean PPI irregularity [−10 0]
9 | nPPI difference | nPPI [0 10] − nPPI [−10 0]
10 | Fast PPIpost count difference | Fast PPIpost count [0 10] − Fast PPIpost count [−10 0]
11 | LF/HF ratio | The ratio of low frequency (LF: 0.04~0.15 Hz) to high frequency (HF: 0.15~0.4 Hz) [0 10]
12 | PPI coefficient of variation | PPI std. [0 10] / PPI mean [0 10]
Table 2. The estimated number of anxiety moments for each video with anxiety events.
Stimuli No. | Lambda from Poisson Fitting
1–10 | 1.103, 1.034, 1.000, 1.000, 1.138, 0.828, 1.000, 1.276, 1.103, 1.345
11–20 | 1.034, 1.000, 0.897, 1.034, 1.000, 1.138, 1.069, 1.138, 0.966, 0.655
21–30 | 1.276, 0.931, 0.552, 0.414, 0.483, 1.069, 1.034, 1.103, 1.172, 1.103
Table 3. The performance comparison between feature sets.
No. | Feature Set | Average Accuracy | Maximum Accuracy | # Participants Above Chance Level | # Participants Above Accuracy of EEG
1 | EEG | 0.7701 | 1.0000 | 22 | -
2 | PPG | 0.4975 | 0.6750 | 11 | 1
3 | EDA | 0.4253 | 0.5789 | 11 | 0
4 | PS | 0.4262 | 0.5714 | 6 | 0
5 | EEG + PPG | 0.7310 | 0.9412 | 23 | 3
6 | EEG + EDA | 0.7681 | 1.0000 | 23 | 7
7 | EEG + PS | 0.7567 | 1.0000 | 22 | 9
8 | PPG + EDA | 0.4926 | 0.6471 | 11 | 0
9 | PPG + PS | 0.4920 | 0.6500 | 12 | 0
10 | EDA + PS | 0.4347 | 0.7500 | 5 | 0
11 | EEG + PPG + EDA | 0.7202 | 0.8864 | 23 | 5
12 | EEG + PPG + PS | 0.7178 | 1.0000 | 22 | 4
13 | EEG + EDA + PS | 0.7486 | 0.6875 | 22 | 7
14 | PPG + EDA + PS | 0.4934 | 1.0000 | 11 | 0
15 | All | 0.7093 | 0.9376 | 22 | 2
