Article

A Novel GMM-Based Behavioral Modeling Approach for Smartwatch-Based Driver Authentication

1 Department of Computer Science and Information Engineering, National Central University, Taoyuan City 32001, Taiwan; [email protected]
2 Department of Computer Science and Engineering, National Taiwan Ocean University, Keelung City 20224, Taiwan; [email protected]
3 Software Research Center, National Central University, Taoyuan City 32001, Taiwan
* Author to whom correspondence should be addressed.
Sensors 2018, 18(4), 1007; https://doi.org/10.3390/s18041007
Submission received: 14 January 2018 / Revised: 27 February 2018 / Accepted: 1 March 2018 / Published: 28 March 2018
(This article belongs to the Special Issue Wearable Smart Devices)

Abstract

All drivers have their own distinct driving habits, and usually hold and operate the steering wheel differently in different driving scenarios. In this study, we proposed a novel Gaussian mixture model (GMM)-based method that can improve the traditional GMM in modeling driving behavior. This new method can be applied to build a better driver authentication system based on the accelerometer and orientation sensor of a smartwatch. To demonstrate the feasibility of the proposed method, we created an experimental system that analyzes driving behavior using the built-in sensors of a smartwatch. The experimental results for driver authentication—an equal error rate (EER) of 4.62% in the simulated environment and an EER of 7.86% in the real-traffic environment—confirm the feasibility of this approach.

1. Introduction

Driving behavior differs among drivers. Each driver has a habitual and distinctive driving style; some drive slowly and carefully, while others drive fast and aggressively. Drivers exhibit distinct methods for holding and operating the steering wheel, which differ depending on the driving scenarios (e.g., driving straight, turning, and parking). Recently, several techniques of modeling the behavior of a driver based on the pattern of operating the steering wheel and pedals [1,2,3,4], sitting posture [5], and handgrip patterns [6], have been proposed as methods for driver identification and authentication.
Smartwatches have gained popularity because of technological advances. According to the Gartner report [7], a 17% increase in smartwatch shipments is forecasted for 2018, compared to the 41.5 million units in 2017, and the quantity of yearly shipments is expected to reach nearly 81 million units in 2021. Smartwatches equipped with multiple sensors, such as accelerometers and orientation sensors, can be used not only for health monitoring but also for continuous motion analysis [8]. These devices have been utilized for many applications, such as gesture detection [9,10], health monitoring [11,12,13], information security [14,15], personal safety [16,17], and other applications [18].
Recently, smartwatches have been used to analyze the behavior of a driver for user authentication. Liang and Kotz [19] developed a smartwatch-based user-presence authentication system that continuously authenticated the user with a computer. This system required the hand- and mouse-motion data of the user, and matched the enrolled pattern through three steps: peak detection, weight calculation, and distance calculation. Lewis et al. [20] developed a gesture-based real-time authentication system for a smartwatch. Their system applied behavioral biometrics collected from the readings of the accelerometer and gyroscope of a smartwatch, and used the dynamic time warping algorithm for template generation and matching. These two studies on smartwatches show that the characteristics of a user’s hand movement can be analyzed through the built-in sensor of a smartwatch. On the other hand, Lee et al. [21] developed a real-time driver vigilance monitoring system that tracked a user’s steering wheel movement (SWM) through the motion sensor of a smartwatch and the heart rate from a photoplethysmogram (PPG) sensor. Pearson’s method was applied to select time, phase, and frequency domain features extracted from the SWM, PPG, and PPG-derived respiration. Driver hypervigilance was estimated according to the result of the weighted fuzzy c-mean algorithm. Lee et al. [22] detected a driver’s drowsiness by the time, spectral, and phase domain features of the driver’s hand movements. The accelerometer and gyroscope of a smartwatch captured the hand movements of a driver. The SVM, implemented on the smartwatch, was used to detect the drowsiness of a driver. Their studies show that smartwatches are capable of analyzing a driver’s driving behavior.
Several studies have been devoted to the problems of driver identification and/or driver authentication. Igarashi et al. [1] built a driving behavior model through the Gaussian mixture model (GMM) on the basis of pressure readings obtained from the accelerator and brake pedals. Similarly, Miyajima et al. [2] used GMMs to model the pedal operation patterns of a driver when following a car for driver identification. Later, Wahab et al. [3] compared GMMs and wavelet transforms in the effectiveness of representing the accelerator and brake pedal pressures for driver identification and authentication. They also compared multilayer perceptrons, fuzzy neural networks, and statistical GMMs in recognition performance, and showed that GMMs are the most effective method. Qian et al. [4] compared three methods for extracting features from the readings of the steering wheel angle, the accelerator and the brake pedals, and applied support vector machines (SVMs) to identify the drivers. Riener and Ferscha [5] installed pressure sensors in the driving seat to capture the driver’s pelvic bone signature. They used the driver’s pelvic bone distance as a biometric feature, and matched the driver’s enrolled patterns by the Euclidean distance. To the best of our knowledge, there has not been any feasibility study on the problem of driver authentication using smartwatches.
In this paper, a smartwatch-based driver authentication mechanism is proposed. A novel GMM-based approach is also developed for building the driving behavior model of a driver. Since a driver usually manipulates the steering wheel differently in different driving maneuvers (e.g., driving straight or turning), we have created a behavioral model for each driving maneuver for each driver.
The proposed GMM-based approach addresses two weaknesses of the traditional GMM by building a smartwatch-based behavioral model of drivers. First, in the traditional GMM-based approach, the likelihood value of the GMM of an input pattern is often used to determine whether or not the input pattern is drawn from the data distribution modelled by the GMM. However, when, for example, some Gaussian components of the GMM for driver A cover all of the Gaussian components of the GMM for driver B, it may be difficult to distinguish between drivers A and B with the likelihood value of the GMM for driver A, even though the two GMMs have distinctive Gaussian components. In this paper, to enhance the distinctive Gaussian components of the GMM for each driver, the posterior probabilities of the Gaussian components of GMMs were used instead of the likelihood value of the GMM. Second, in the traditional GMM-based approach [1,2,3], different kinds of features are equally weighted. We found that different kinds of features can be weighted differently to reflect their differing effectiveness for different drivers. To alleviate the two weaknesses of the traditional GMM, we designed two models and combined them through stacked generalization [23] to yield the final driving behavior model.
We established a driving simulation system to collect driving behaviors and recruited 52 participants for the experiment to evaluate the proposed approach. The behaviors of each participant when driving straight, turning left, and turning right were used to construct his/her own models. In the context of driver authentication, if a given participant is selected as the registrant, then all other participants are considered as imposters. The experimental results indicate that the proposed approach can be used to authenticate the driver, with an equal error rate (EER) of 4.62%. Additionally, the proposed approach was tested on 15 participants, who were licensed to drive automobiles, in a real-driving environment, with an EER of 7.86%.
Our main contributions are highlighted below:
(i)
A smartwatch-based driver authentication mechanism is presented. We demonstrate that the driver’s hand motion information captured by the built-in sensors of a smartwatch can be used to authenticate the driver.
(ii)
A novel GMM-based behavioral modeling approach is also proposed to improve the traditional GMM in modeling driving behavior.
(iii)
The experimental results on the data collected from both the simulated and real-traffic environments indicate that the proposed approach is feasible in both environments and more accurate than the traditional GMM.
The remainder of this paper is organized as follows: Section 2 introduces the driving simulation system, the real-traffic environment, and the apparatus used in this study, and Section 3 describes the proposed methodology for driving behavioral modeling and detection. Section 4 discusses the experimental results. Concluding remarks and suggestions for future studies are presented in Section 5.

2. Data Collection Environments and Apparatus

2.1. The Simulated System

A driving simulation system bearing close resemblance with a real driving system was established to analyze driving behaviors (Figure 1a). The simulation system included a desktop computer, a liquid-crystal display monitor, a simulator-grade wheel, and a pedal unit. The driving simulation software City Car Driving [24] was used to simulate realistic three-dimensional road scenes with dynamic traffic streams (Figure 1b).
Participants were asked to drive using this system as they would drive a real car (their safety was ensured regardless of their driving skill). Although this system did not provide a complete driving experience with fully realistic controls and variable road conditions, it could capture driver behavior in accordance with our criteria for comparing and identifying driving behaviors.

2.2. The Real Environment

We also collected the driving behavior data of some participants driving a real vehicle (Honda CR-V) in the campus of National Central University. As shown in Figure 2a, a smartphone was placed in this car beside the driver; the smartphone’s gyroscope readings were used to divide each driver’s driving session into separate segments for different driving maneuvers. The road scene of the campus is shown in Figure 2b.

2.3. Apparatus

The Sony SmartWatch 3 and the Sony Xperia Z5 Premium (Sony Corp., Tokyo, Japan), together with the Logitech G27 racing wheel (Logitech International S.A., Lausanne, Switzerland) [26], were used in the simulation system, and the angle of the steering wheel was acquired through the Logitech Steering Wheel SDK [27]. The LG Watch Urbane and the LG V20 smartphone (LG Electronics Inc., Seoul, South Korea) were used in the real environment. The sampling rates of the smartwatch's and the smartphone's built-in sensors were set to 50 Hz in both environments.

3. The Proposed Methodology

The proposed smartwatch-based driver authentication mechanism, as illustrated in Figure 3, has three major steps: (1) preprocessing; (2) feature extraction; and (3) decision.
In our driver authentication mechanism, a driver's behaviors are directly observed by capturing the data of the smartwatch's built-in motion sensors (3-axis accelerometer and 3-axis orientation sensor). In Preprocessing, dynamic data are extracted from the collected motion data, and the noise in the raw and dynamic data is then removed with a median filter. The entire sensor data sequence is partitioned into segments, so that each segment focuses on a specific operational behavior of an individual. In Feature Extraction, two GMM-based driver models are built to extract several features from the preprocessed data. Finally, the two types of features are used separately to train two SVM classifiers. These two SVMs are combined through stacked generalization to yield a driving behavior model for the driver.

3.1. Data Preprocessing

Smartwatch accelerometers and orientation sensors can provide multidimensional data during a single sensor event. Since the z-axis signal of the orientation sensor is the angle to the magnetic north [28], the information from the z-axis is more related to the condition and direction of the road than to the driving behavior, and thus it is not used in this study. Many studies have reported that, compared to static information, the use of dynamic information can improve the accuracy of behavioral biometrics [29,30]. Therefore, for each dimension of the sensor data, the delta (velocity) coefficient, a form of dynamic information, was adopted. The delta coefficient can be obtained from the following formula:
$$\Delta x(t) = \frac{\sum_{u=1}^{K} u \left( x(t+u) - x(t-u) \right)}{2 \sum_{u=1}^{K} u^{2}}$$
where the delta window size K was set to 25 on the basis of preliminary experiments [31]. As listed in Table 1, four types of signals were obtained from the accelerometer and orientation sensor. The sensor data collected at time t are represented as $x_t = [\omega_{1;t}^{T}, \omega_{2;t}^{T}, \omega_{3;t}^{T}, \omega_{4;t}^{T}]^{T} \in \mathbb{R}^{10}$, where $\omega_{1;t} \in \mathbb{R}^{3}$, $\omega_{2;t} \in \mathbb{R}^{2}$, $\omega_{3;t} \in \mathbb{R}^{3}$, and $\omega_{4;t} \in \mathbb{R}^{2}$ are the vectors formed by the three axes of the accelerometer, the x and y axes of the orientation sensor, the delta coefficients with respect to the three axes of the accelerometer, and the delta coefficients with respect to the x and y axes of the orientation sensor, respectively.
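As a concrete illustration, the delta computation above can be sketched in a few lines of NumPy (Python is our choice here; the paper's own implementation used MATLAB). The edge-padding at the signal boundaries is an assumption, since the paper does not state how the window is handled near segment edges:

```python
import numpy as np

def delta_coefficients(x, K=25):
    """Delta (velocity) coefficients of a 1-D signal x:
    Delta(x)(t) = sum_{u=1..K} u*(x[t+u] - x[t-u]) / (2 * sum_{u=1..K} u^2).
    The signal is edge-replicated so the output has the same length as x.
    """
    x = np.asarray(x, dtype=float)
    padded = np.pad(x, K, mode="edge")          # replicate boundary samples
    denom = 2.0 * np.sum(np.arange(1, K + 1) ** 2)
    delta = np.zeros_like(x)
    for u in range(1, K + 1):
        delta += u * (padded[K + u:K + u + len(x)] - padded[K - u:K - u + len(x)])
    return delta / denom
```

For a linear ramp the delta coefficient equals the slope away from the boundaries, and for a constant signal it is zero, which is a quick sanity check of the formula.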
After removing the noise from the sensor data using a median filter (as shown in Figure 4), the entire sensor data sequence was partitioned into segments by analyzing the steering wheel angle in the simulated environment, and by analyzing the z-axis angular velocity of the smartphone gyroscope in the real-driving environment. Each segment conveyed information regarding the behavior of a driver driving straight, turning left, or turning right; all three driving behaviors were modeled.
In the simulated environment, the threshold value of the steering wheel angle for partitioning the signal was set to ±30°. Figure 5a displays the correlation between the accelerometer and the steering wheel signals at different periods. By contrast, the threshold value of the angular velocity of the smartphone’s gyroscope for partitioning the signal was set to ±10°/s. Figure 5b displays the correlation between the accelerometer and the gyroscope of the smartphone at different periods.
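The threshold-based partitioning can be sketched as below (a simplified Python version; the sign convention for left/right turns and the minimum segment length are our assumptions, not stated in the paper). The same routine would apply to the gyroscope's z-axis angular velocity with a ±10°/s threshold:

```python
import numpy as np

def segment_by_angle(angle, threshold=30.0, min_len=25):
    """Label each sample as 'S' (straight), 'L' (left turn), or 'R' (right
    turn) from the steering-wheel angle, then group consecutive samples with
    the same label into segments.

    Returns a list of (label, start_index, end_index_exclusive) tuples.
    The +/-30 deg threshold follows the simulated-environment setting;
    the positive-angle-means-left convention is an assumption.
    """
    angle = np.asarray(angle, dtype=float)
    labels = np.where(angle > threshold, "L",
             np.where(angle < -threshold, "R", "S"))
    segments, start = [], 0
    for t in range(1, len(labels) + 1):
        if t == len(labels) or labels[t] != labels[start]:
            if t - start >= min_len:      # drop spurious short runs
                segments.append((labels[start], start, t))
            start = t
    return segments
```

Each returned segment then becomes one driving behavior sample for the corresponding maneuver model.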

3.2. Feature Extraction

The smartwatch sensor data varied according to the driver and the driving scenario. A GMM was used to capture the sensor data distribution for a driver exhibiting a specific driving behavior; this model is referred to as an individual driver model (IDM). The IDM log-likelihood of the sensor data has already been used for driver recognition [1,2,3,32], where the log-likelihood value of the model is the sum of the log-likelihood values of the GMMs built on each sensor. In our study, since each of the four features differed in its effectiveness for authenticating genuine drivers, the IDM log-likelihoods for the four features were combined by an SVM in a weighted manner. Furthermore, to exploit the distinctive Gaussian components of a driving behavior, a universal driver model (UDM), which represents the collective behavior of all drivers, was learned by the GMM, and an SVM on the posterior probabilities of the Gaussian components of the UDM was trained. These two base SVMs for each driver were combined through stacked generalization to form each driving behavior model.
The remainder of this section describes two base learners: (1) the base SVM, which was based on the IDM log-likelihoods; (2) the other base SVM, which was based on the posterior probabilities of the Gaussian components of the UDM.
(1) Base Learner 1: SVM Based on the IDM Log-likelihoods
The parameters of an IDM with M Gaussian components are denoted by $\theta = \{w_i, \mu_i, \Sigma_i\}_{i=1}^{M}$. The mixture density of the IDM $\theta$ is a weighted sum of M Gaussian component densities:
$$P(\omega \mid \theta) = \sum_{i=1}^{M} w_i \, G(\omega \mid \mu_i, \Sigma_i)$$
where $\omega$ is a D-dimensional random vector, $w_i$, $i = 1, \ldots, M$, are the mixture weights satisfying $\sum_{i=1}^{M} w_i = 1$, and $G(\omega \mid \mu_i, \Sigma_i)$, $i = 1, \ldots, M$, is the density function of the multivariate normal distribution,
$$G(\omega \mid \mu_i, \Sigma_i) = \frac{1}{(2\pi)^{D/2} \, |\Sigma_i|^{1/2}} \exp\left\{ -\frac{1}{2} (\omega - \mu_i)^{T} \Sigma_i^{-1} (\omega - \mu_i) \right\}$$
where $\mu_i$ and $\Sigma_i$ are the mean vector and covariance matrix of the ith component, respectively. The four IDMs, each of which was built on one of the four types of features, were estimated using the expectation maximization algorithm to construct the driving behavior model for each driver performing each maneuver.
The parameters of the ith IDM are denoted by $\theta_i = \{w_{j;i}, \mu_{j;i}, \Sigma_{j;i}\}_{j=1}^{M_i}$. Let $x_1, \ldots, x_T$ be a segment of the smartwatch sensor data of a driver. Similar to [1,2,3,32], whether $x_1, \ldots, x_T$ is generated by the IDMs for a driver performing a specific maneuver can be determined by the log-likelihood of $x_1, \ldots, x_T$, defined as follows:
$$\mathcal{L}(x_1, \ldots, x_T \mid \theta_1, \ldots, \theta_4) = \sum_{i=1}^{4} \frac{1}{T} \sum_{t=1}^{T} \log P(\omega_{i;t} \mid \theta_i)$$
where $\theta_1, \ldots, \theta_4$ are the parameters of the IDMs for the four features. In this study, the first base SVM, a linear SVM
$$S_{IDM}\left( \left[ \tfrac{1}{T} \textstyle\sum_{t=1}^{T} \log P(\omega_{1;t} \mid \theta_1), \ldots, \tfrac{1}{T} \textstyle\sum_{t=1}^{T} \log P(\omega_{4;t} \mid \theta_4) \right]^{T} \right)$$
was trained to weigh the four IDM log-likelihoods differently. The SVM $S_{IDM}$ also outputs an estimate of the posterior probability that the input feature vector is a positive sample.
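The first base learner can be sketched with scikit-learn (a stand-in for the paper's MATLAB/LIBSVM implementation); conveniently, `GaussianMixture.score` returns exactly the average per-frame log-likelihood $(1/T)\sum_t \log P(\omega_{i;t}\mid\theta_i)$:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def fit_idms(feature_streams, n_components=4, seed=0):
    """Fit one GMM (IDM) per feature type on a registrant's training frames.

    feature_streams: list of 4 arrays, each (n_frames, d_i) -- the raw
    accelerometer, raw orientation, and their two delta-coefficient streams.
    """
    return [GaussianMixture(n_components=n_components, covariance_type="full",
                            random_state=seed).fit(f) for f in feature_streams]

def idm_feature_vector(idms, segment_streams):
    """Average per-frame log-likelihood of one segment under each of the four
    IDMs, i.e., the 4-dim input of the first base SVM, S_IDM."""
    return np.array([gmm.score(seg) for gmm, seg in zip(idms, segment_streams)])

# S_IDM itself: a linear SVM with probability outputs, trained on these
# 4-dim vectors from positive (registrant) and negative (imposter) segments.
s_idm = SVC(kernel="linear", probability=True)
```

A segment drawn from the registrant's own distribution should score higher under the registrant's IDMs than an out-of-distribution segment, which is what the SVM then weighs.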
(2) Base Learner 2: SVM Based on Posterior Probabilities of Gaussian Components of the UDM
The limitation of the IDM can be explained by the following example. As shown in Figure 6, the Gaussian component set of participant B is a subset of that of participant A. Given a test sample (from either A or B), the likelihood of the test sample is calculated as the sum of the responses of all the Gaussian components of the IDM of participant A (or, equivalently, its log-likelihood). The test sample is classified as Class "A" if the likelihood value is higher than a preset threshold. The likelihood of B's samples under A's IDM is always high, since B's Gaussian components are also A's. Therefore, participant A's IDM always misclassifies B's samples as A's.
The proposed method takes two steps to alleviate the current limitation. In addition to the IDM models for each participant, we have created a UDM based on the data of all participants. This UDM represents the behavioral patterns of all participants; therefore, the Gaussian component sets of both A and B are subsets of this UDM. Furthermore, we have built a SVM-based classifier that uses the individual response of each component as an independent feature. This classifier is able to distinguish Participant B from A since B’s Gaussian components are only a subset of A’s, but are not identical to A’s.
To use the distinctive Gaussian components of driving behaviors, four UDMs, each constructed on the basis of one of the four features, were estimated to build a GMM for the collective behavior in a specific driving scenario. Subsequently, x t was mapped to vector f t in a new d-dimensional space by using the formula
$$f_t = [f_{1;1;t}, f_{2;1;t}, \ldots, f_{M_1;1;t}, \ldots, f_{1;4;t}, \ldots, f_{M_4;4;t}]^{T}$$
where $d = \sum_{i=1}^{4} M_i$ is the total number of Gaussian components of the four UDMs and $f_{j;i;t}$ is the posterior probability that $\omega_{i;t}$ is generated by the jth Gaussian component of the ith UDM:
$$f_{j;i;t} = \frac{w_{j;i} \, G(\omega_{i;t} \mid \mu_{j;i}, \Sigma_{j;i})}{\sum_{k=1}^{M_i} w_{k;i} \, G(\omega_{i;t} \mid \mu_{k;i}, \Sigma_{k;i})}$$
In this d-dimensional space, the second base SVM, a linear SVM $S_{UDM}\left( \frac{1}{T} \sum_{t=1}^{T} f_t \right)$, was trained. This SVM $S_{UDM}$ also outputs the posterior probability that the input feature vector is a positive sample.
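The second base learner maps each segment to time-averaged component posteriors; in scikit-learn terms (again a sketch standing in for the paper's MATLAB implementation), `GaussianMixture.predict_proba` yields exactly the per-frame posteriors $f_{j;i;t}$:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def udm_feature_vector(udms, segment_streams):
    """Map one segment to the time-averaged posterior probabilities of every
    Gaussian component of the four UDMs -- the sum(M_i)-dim input of the
    second base SVM, S_UDM.

    udms: list of 4 fitted GaussianMixture models (the UDMs).
    segment_streams: list of 4 arrays, each (n_frames, d_i).
    """
    parts = [gmm.predict_proba(seg).mean(axis=0)   # (M_i,) per UDM
             for gmm, seg in zip(udms, segment_streams)]
    return np.concatenate(parts)

# S_UDM: a linear SVM with probability outputs over these posterior vectors.
s_udm = SVC(kernel="linear", probability=True)
```

Because each frame's posteriors over a UDM's components sum to one, the averaged block for each UDM also sums to one; the SVM then learns which components are distinctive for the registrant.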

3.3. Proposed Driving Behavior Model

Two modalities based on linear SVMs, $S_{IDM}$ and $S_{UDM}$, were trained on the different feature vectors. As shown in Figure 7, another combiner SVM was used to combine $S_{IDM}$ and $S_{UDM}$. Stacked generalization describes this combination framework: the key idea is to train a meta-learner on the outputs of multiple base learners. Some researchers [33,34] have demonstrated that the base learners and the meta-learner can use the same learning algorithm to handle multiple modalities. In the present study, the combiner SVM also outputs the posterior probability that the input is a positive sample. Additionally, the driving behavior models for a driver can be applied across several driving scenarios to determine whether the driver drives as usual; to this end, the average output of the driver's driving behavior models over the different scenarios can be used. In the present work, we built three driving behavior models for each driver, covering three specific driving scenarios: driving straight, turning left, and turning right.
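The overall stacked model can be sketched as follows. This is a simplified illustration: for brevity it trains the base SVMs and the combiner on the same data, whereas a careful stacked-generalization setup would feed the combiner held-out (cross-validated) base outputs:

```python
import numpy as np
from sklearn.svm import SVC

class StackedDriverModel:
    """Combiner SVM over the probability outputs of the two base SVMs
    (stacked generalization).  X_idm holds the 4-dim IDM log-likelihood
    vectors, X_udm the sum(M_i)-dim UDM posterior vectors (Section 3.2);
    y is 1 for registrant segments and 0 for imposter segments."""

    def __init__(self):
        self.s_idm = SVC(kernel="linear", probability=True)
        self.s_udm = SVC(kernel="linear", probability=True)
        self.combiner = SVC(kernel="linear", probability=True)

    def fit(self, X_idm, X_udm, y):
        self.s_idm.fit(X_idm, y)
        self.s_udm.fit(X_udm, y)
        self.combiner.fit(self._meta_features(X_idm, X_udm), y)
        return self

    def _meta_features(self, X_idm, X_udm):
        # Each base SVM contributes its posterior probability of "registrant".
        return np.column_stack([self.s_idm.predict_proba(X_idm)[:, 1],
                                self.s_udm.predict_proba(X_udm)[:, 1]])

    def score_samples(self, X_idm, X_udm):
        """Posterior probability that each segment comes from the registrant."""
        return self.combiner.predict_proba(self._meta_features(X_idm, X_udm))[:, 1]
```

For multiple driving scenarios (e.g., S + L + R), the per-scenario `score_samples` outputs would simply be averaged, as described above.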

4. Experiments and Discussion

Three experiments were conducted to evaluate the proposed approach. The purposes of the experiments were as follows: (1) to analyze the number of Gaussian components of the GMM required for the proposed approach; (2) to evaluate the accuracy of the proposed approach for driver authentication in the simulated environment; and (3) to evaluate the accuracy of the proposed approach in the real-traffic environment.
The implementations of the traditional GMM approach and the proposed approach were as follows:
  • GMM: The traditional GMM technique (thresholding the segment log-likelihood defined in Section 3.2) is hereafter referred to as the GMM approach. The implementation of GMMs in the Statistics and Machine Learning Toolbox of MATLAB was employed for performance comparison.
  • Stacking: The proposed approach (the combiner SVM as shown in Figure 7) is hereafter referred to as the stacking approach. It was implemented in MATLAB 2017a with LIBSVM [35].
All analyses were conducted on a personal computer (Predator G3610, Acer Inc., New Taipei City, Taiwan) with an Intel Core i7-2600 CPU (Intel Corp., Santa Clara, CA, USA) and 16 gigabytes of RAM, and run in the Windows 7 operating system (Microsoft Corp., Washington, DC, USA).

4.1. Experimental Setups

(1) Data Acquisition
Fifty-two volunteers, including 27 licensed drivers and 25 unlicensed persons, were recruited from various departments of National Central University for the experiment; their mean age was 24 ± 2 years. All subjects gave their informed consent for inclusion before they participated in the study. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of National Taiwan University (201802ES007). To ensure that the participants were familiar with the simulated system, they were trained until they could maintain normal driving (driving in oncoming traffic lanes and weaving in and out of traffic were prohibited). We also ensured that every participant was in a good mental state and had not consumed alcohol before data collection. Participants were asked to wear a smartwatch on their left hand and to operate the steering wheel in their most comfortable manner. The participants in the real environment adhered to the same requirements.
Figure 8a shows the route of the simulated environment. Each participant was asked to drive the route under similar traffic conditions clockwise 40 times and counterclockwise 40 times, to collect the participant's driving behavior when turning right and left, respectively. In total, each participant undertook 80 driving sessions; each session lasted an average of 180 s (9000 data points on average). The 80 driving sessions were completed in eight rounds of experiments over the course of three weeks. In total, 14,832 segments of driving straight, 8434 segments of turning left, and 12,168 segments of turning right were collected. Each segment was regarded as a driving behavior sample.
Fifteen participants (of 52 volunteers) were also involved in the data collection in the real-traffic environment. They all had driver’s licenses and had at least two years of driving experience. As shown in Figure 8b, the route for this experiment has five turns and is approximately 1.77 km long. Each participant was asked to drive the route clockwise and counterclockwise so that their driving behavior when turning right and left, respectively, could be collected. Each participant was required to be familiar with the road of the campus prior to data collection. In consideration of the risk of real-traffic roads, data acquisition was conducted only in the daytime between 9 a.m. and 5 p.m., and if there were sunny weather conditions. Every participant undertook 20–25 driving sessions, with each session lasting for 345 s on average (17,250 data points on average). All the driving sessions of a participant were collected in four rounds of experiments over two weeks. In total, 3172 segments of driving straight, 2093 segments of turning left, and 1428 segments of turning right were collected.
(2) Evaluation and Performance Indices
To estimate the performance indices for each participant, the driving behavior of a given participant was regarded as the registrant’s behavior, while the driving behavior of the other participants was regarded as the imposter’s behavior. For each participant, 51 pairs of training and test sets were produced by the leave-one-person-out strategy. The training set comprised 55 segments of the registrant’s driving behavior (as positive samples) and 80 segments of the imposter’s driving behavior (as negative samples). The test set comprised another 20 segments of the registrant’s driving behavior and 20 segments of the imposter’s driving behavior. Participants who provided negative samples in the training set contributed no samples to the test set.
The false acceptance rate (FAR), false rejection rate (FRR), detection error trade-off (DET) curve, area under curve (AUC), and EER were used as performance indices for all the experiments. The FAR is defined as the percentage of the imposter’s behaviors that was wrongly recognized as the registrant’s behaviors, and the FRR is the percentage of the registrant’s behaviors that was wrongly recognized as the imposter’s behaviors. Both FAR and FRR depend on the threshold limit used in the decision-making process. In the detection process, the DET curve was used to illustrate the relationship between the FAR and FRR by varying the threshold limit [36]. The AUC of the DET curve is proportional to the product of the FAR and the FRR. Minimizing the AUC of the DET curve is equivalent to reducing either one of the error types or both [37]. The EER is the value where the FAR and FRR become equal by adjusting the threshold. The EER performance measure rarely corresponds to a realistic operating point. However, it is a relatively popular measure of the ability of a system to distinguish between the two categories [38].
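The EER computation described above can be sketched as follows (our own illustrative implementation; the threshold sweep and interpolation between the closest FAR/FRR pair are standard but not specified in the paper):

```python
import numpy as np

def compute_eer(scores_genuine, scores_imposter):
    """Equal error rate from registrant (genuine) and imposter scores.

    Sweeps the decision threshold over all observed scores; at each
    threshold, FAR is the fraction of imposters accepted and FRR the
    fraction of registrants rejected.  Returns the mean of FAR and FRR
    at the threshold where they are closest."""
    thresholds = np.unique(np.concatenate([scores_genuine, scores_imposter]))
    best_gap, eer = np.inf, 1.0
    for th in thresholds:
        far = np.mean(scores_imposter >= th)   # imposters above threshold
        frr = np.mean(scores_genuine < th)     # registrants below threshold
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return eer
```

For perfectly separated score distributions the EER is 0; an EER of 4.62% means that at the equal-error threshold, 4.62% of imposter segments are accepted and 4.62% of registrant segments are rejected.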
The models obtained for each driving maneuver were annotated with an “S” (for driving straight), “L” (for turning left), or “R” (for turning right), and the “stacking S + L + R approach” referred to the stacking approach that utilized the three segments, with each annotation representing one of the three maneuvers.

4.2. Experimental Results

(1) Experiment 1: Analysis of the Number of Gaussian Components
The number of Gaussian components required for the GMM was analyzed from 15 participants in the simulated environment as follows. Figure 9a presents the EER, while Table 2 presents the training time of the GMM and the stacking approaches in the S + L + R driving scenario with respect to 2, 4, 8, 16, and 24 Gaussian components. The training time was proportional to the number of Gaussian components, and the EER decreased as the number of Gaussian components increased from 8 to 24. Figure 9b provides the EER of the two base SVMs S I D M and S U D M . Notably, they did not require the same number of Gaussian components. Figure 9b and Table 2 show that when the number of GMM components of S I D M increased from 4 to 8, the accuracy of S I D M improved by 6.17% but the training time increased by 216%. The results also show that the accuracy of S U D M improved by 14.15% and the training time increased by 54.63% when the number of GMM components of S U D M increased from 8 to 16. After evaluating the trade-off between the EER and the training time, the number of GMM components was set to 4 for the IDM and 16 for the UDM. In this parameter setting, the average training time and the average testing time of the stacking S+L+R approach were 172 s and 0.04 s, respectively. Additionally, since the stacking approach is more computationally complicated, it was slower than the IDM approach. However, the computational cost of the stacking approach was acceptable.
(2) Experiment 2: Performance Evaluation of Driver Authentication in the Simulated Environment
Figure 10 displays the DET curves of the GMM and the stacking approaches. Figure 10a reveals that the stacking approach was more accurate than the GMM approach in single driving scenarios. In addition, as presented in Figure 10b, the accuracy of the stacking and the GMM approaches improved after multiple driving scenarios, with the stacking approach remaining more accurate than the GMM approach in multiple driving scenarios. Table 3 demonstrates that the stacking approach was at least 4% more accurate than the GMM approach when considering the EERs of each approach. Therefore, the experimental results indicated that the proposed approach outperformed the GMM approach, and thus supports the proposed approach as a feasible method for verifying drivers in the simulated environment.
(3) Experiment 3: Performance Evaluation of Driver Authentication in a Real-Traffic Environment
Fifteen participants were involved in experiments conducted in a real-traffic environment. As Table 4 indicates, the stacking approach was more accurate than the GMM approach, and the EER of the proposed approach for the S + L + R driving scenario was 7.86%. Table 4 also compares the experimental results of these participants in the real-traffic environment with those in the simulated environment. According to the two-sample t-test, the stacking approach attained similar EERs for the S and R driving scenarios in the real-traffic (p = 0.094) and simulated (p = 0.438) environments. However, the stacking approach was less accurate for the L driving scenario in the real-traffic environment (p = $1.064 \times 10^{-8}$). One possible reason may be that the participants made left turns more carefully in the real-traffic environment, and thus their left-turning behavior was not as distinguishable as in the simulated environment. Another possibility is that the real-traffic environment had only one lane of traffic, whereas the simulated environment had two. Therefore, driving maneuvers may have been easier for participants in the simulated environment than in the real-traffic environment. Nevertheless, the experimental results indicated that the proposed approach is feasible in a real-traffic environment.

4.3. Discussion

Drivers’ physical and mental states might affect their driving behaviors. Additionally, real-life driving involves scenarios, such as parking and reversing, that were not considered in the experiment. In this study, driving scenarios and drivers’ physical and mental states were controlled to reduce the complexity of the experiments. However, in Experiments 2 and 3, we found that the accuracy of driver authentication improved when more driving maneuvers were used; for example, the S + L + R driving scenario resulted in higher accuracy than the S + L and S driving scenarios, probably because more driving maneuvers provided more information about the driver. These issues are worthy of further investigation.
Table 5 summarizes the accuracy of several authentication mechanisms based on a driver’s behavioral characteristics and shows that our approach is a promising means of driver authentication. Table 5 does not include other studies on smartwatch-based driver authentication because, to the best of our knowledge, such studies are scarce.

5. Conclusions

To conclude, a driver’s driving behavior was analyzed from his/her use of the steering wheel, based on data from the built-in sensors of a smartwatch worn on the driver’s left hand. A novel GMM-based modeling approach was proposed to model this driving behavior. To demonstrate the feasibility of the proposed method, we created an experimental system that analyzed driving behavior using two built-in sensors of a smartwatch. The experimental results indicated that the proposed approach achieved EERs of 4.62% in a simulated environment and 7.86% in a real-traffic environment, confirming its feasibility.
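At the core of any GMM-based verifier of this kind is likelihood scoring: a claimant's feature frames are evaluated under the enrolled driver's mixture and accepted if the average log-likelihood clears a threshold. The following is a minimal pure-Python sketch of diagonal-covariance GMM scoring with made-up parameters, not the paper's trained models:

```python
import math

def gmm_log_likelihood(x, weights, means, variances):
    """Log-likelihood of one feature vector x under a diagonal-covariance GMM."""
    total = 0.0
    for w, mu, var in zip(weights, means, variances):
        log_comp = math.log(w)  # log of the mixture weight
        for xi, mi, vi in zip(x, mu, var):
            # add the log density of each diagonal Gaussian dimension
            log_comp += -0.5 * (math.log(2 * math.pi * vi) + (xi - mi) ** 2 / vi)
        total += math.exp(log_comp)
    return math.log(total)

def verify(frames, weights, means, variances, threshold):
    """Accept the claimed driver if the average frame log-likelihood clears a threshold."""
    avg = sum(gmm_log_likelihood(f, weights, means, variances) for f in frames) / len(frames)
    return avg >= threshold

# Hypothetical 2-component, 2-dimensional enrolled model (illustrative values only)
weights = [0.6, 0.4]
means = [[0.0, 1.0], [2.0, -1.0]]
variances = [[1.0, 0.5], [0.8, 1.2]]
print(verify([[0.1, 0.9], [1.8, -0.7]], weights, means, variances, threshold=-4.0))  # → True
```

In practice the mixture parameters would be estimated with the EM algorithm from a driver's enrollment data, and the decision threshold would be tuned on held-out scores (e.g., at the EER operating point).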
The proposed modeling approach has potential for other applications, such as detecting whether drivers maintain their normal/habitual behaviors to ensure driving safety. We also believe that the proposed approach can be applied to other kinds of sensing devices. In future work, we intend to investigate the possibility of implementing this authentication mechanism directly on a smartwatch and to apply the proposed modeling approach to more applications.

Acknowledgments

This work was partially supported by the Ministry of Science and Technology, Taiwan, R.O.C. under Grant no. MOST 105-2221-E-008-068-MY2.

Author Contributions

C. Yang and D. Liang made substantial contributions to the original ideas and designed the experiments; C. Yang performed the experiments; all authors analyzed the data; C. Yang and C. Chang wrote the manuscript. All authors read and approved the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Igarashi, K.; Miyajima, C.; Itou, K.; Takeda, K.; Itakura, F.; Abut, H. Biometric identification using driving behavioral signals. In Proceedings of the IEEE International Conference on Multimedia and Expo, Taipei, Taiwan, 27–30 June 2004; Volume 1, pp. 65–68. [Google Scholar]
  2. Miyajima, C.; Nishiwaki, Y.; Ozawa, K.; Wakita, T.; Itou, K.; Takeda, K.; Itakura, F. Driver modeling based on driving behavior and its evaluation on driver identification. Proc. IEEE 2007, 95, 427–437. [Google Scholar] [CrossRef]
  3. Wahab, A.; Quek, C.; Tan, C.-K.; Takeda, K. Driving profile modeling and recognition based on soft computing approach. IEEE Trans. Neural Netw. 2009, 20, 563–582. [Google Scholar] [CrossRef] [PubMed]
  4. Qian, H.; Ou, Y.; Wu, X.; Meng, X.; Xu, Y. Support vector machine for behavior-based driver identification system. J. Robot. 2010, 2010. [Google Scholar] [CrossRef]
  5. Riener, A.; Ferscha, A. Supporting implicit human-to-vehicle interaction: Driver identification from sitting postures. In Proceedings of the 1st Annual International Symposium on Vehicular Computing Systems (ISVCS 2008), Dublin, Ireland, 22–24 July 2008. [Google Scholar]
  6. Chen, R.; She, M.F.; Sun, X.; Kong, L.; Wu, Y. Driver recognition based on dynamic handgrip pattern on steering wheel. In Proceedings of the 12th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing, Sydney, Australia, 6–8 July 2011. [Google Scholar]
  7. Gartner Inc. Gartner Says Worldwide Wearable Device Sales to Grow 17 Percent in 2017. Available online: https://www.gartner.com/newsroom/id/3790965 (accessed on 5 March 2018).
  8. Rawassizadeh, R.; Price, B.; Petre, M. Wearables: Has the age of smartwatches finally arrived? ACM Commun. 2015, 58, 45–47. [Google Scholar] [CrossRef]
  9. Xu, C.; Pathak, P.H.; Mohapatra, P. Finger-writing with smartwatch: A case for finger and hand gesture recognition using smartwatch. In Proceedings of the 16th International Workshop on Mobile Computing Systems and Applications, Santa Fe, NM, USA, 12–13 February 2015; pp. 9–14. [Google Scholar]
  10. Gruenerbl, A.; Pirkl, G.; Monger, E.; Gobbi, M.; Lukowicz, P. Smart-watch life saver: Smart-watch interactive-feedback system for improving bystander CPR. In Proceedings of the 2015 ACM International Symposium on Wearable Computers, Osaka, Japan, 7–11 September 2015; pp. 19–26. [Google Scholar]
  11. Kalantarian, H.; Sarrafzadeh, M. Audio-based detection and evaluation of eating behavior using the smartwatch platform. Comput. Biol. Med. 2015, 65, 1–9. [Google Scholar] [CrossRef] [PubMed]
  12. Pebble Smartwatch. Available online: https://www.pebble.com (accessed on 5 March 2018).
  13. Fitbit. Available online: https://www.fitbit.com/au/home (accessed on 5 March 2018).
  14. Liu, X.; Zhou, Z.; Diao, W.; Li, Z.; Zhang, K. When good becomes evil: Keystroke inference with smartwatch. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, Denver, CO, USA, 12–16 October 2015; pp. 1273–1285. [Google Scholar]
  15. Wang, C.; Guo, X.; Wang, Y.; Chen, Y.; Liu, B. Friend or foe? Your wearable devices reveal your personal PIN. In Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security, Xi’an, China, 30 May–3 June 2016; pp. 189–200. [Google Scholar]
  16. Gutierrez, M.A.; Fast, M.L.; Ngu, A.H.; Gao, B.J. Real-time prediction of blood alcohol content using smartwatch sensor data. In Proceedings of the International Conference for Smart Health (ICSH 2015), Phoenix, AZ, USA, 17–18 November 2015; pp. 175–186. [Google Scholar]
  17. Ngu, A.; Wu, Y.; Zare, H.; Polican, A.; Yarbrough, B.; Yao, L. Fall detection using smartwatch sensor data with accessor architecture. In Proceedings of the International Conference for Smart Health (ICSH 2017), Hong Kong, China, 26–27 June 2017; pp. 81–93. [Google Scholar]
  18. Rawassizadeh, R.; Tomitsch, M.; Nourizadeh, M.; Momeni, E.; Peery, A.; Ulanova, L.; Pazzani, M. Energy-efficient integration of continuous context sensing and prediction into smartwatches. Sensors 2015, 15, 22616–22645. [Google Scholar] [CrossRef] [PubMed]
  19. Liang, X.; Kotz, D. AuthoRing: Wearable user-presence authentication. In Proceedings of the 2017 Workshop on Wearable Systems and Applications, Niagara Falls, NY, USA, 19 June 2017; pp. 5–10. [Google Scholar]
  20. Lewis, A.; Li, Y.; Xie, M. Real time motion-based authentication for smartwatch. In Proceedings of the 2016 IEEE Conference on Communications and Network Security, Philadelphia, PA, USA, 17–19 October 2016; pp. 380–381. [Google Scholar]
  21. Lee, B.-G.; Lee, B.-L.; Chung, W.-Y. Wristband-type driver vigilance monitoring system using smartwatch. IEEE Sens. J. 2015, 15, 5624–5633. [Google Scholar] [CrossRef]
  22. Lee, B.-L.; Lee, B.-G.; Chung, W.-Y. Standalone wearable driver drowsiness detection system in a smartwatch. IEEE Sens. J. 2016, 16, 5444–5451. [Google Scholar] [CrossRef]
  23. Wolpert, D. Stacked generalization. Neural Netw. 1992, 5, 241–259. [Google Scholar] [CrossRef]
  24. Forward Development Group LLC. City Car Driving-Car Driving Simulator, PC Game. Available online: http://citycardriving.com (accessed on 5 March 2018).
  25. Google Inc. Google Map Street View. Available online: https://www.google.com.tw/maps/@24.9674195,121.1884624,3a,75y,91.68h,87.48t/data=!3m7!1e1!3m5!1sGZVySxRXvBuUBVLw1Esipg!2e0!3e11!7i13312!8i6656?hl=en&authuser=0 (accessed on 5 March 2018).
  26. Logitech International S.A. Logitech G27 Racing Wheel. Available online: http://support.logitech.com/en_us/product/g27-racing-wheel/specs (accessed on 5 March 2018).
  27. Logitech International S.A. Logitech Steering Wheel SDK. Available online: https://www.logitechg.com/en-us/developers (accessed on 5 March 2018).
  28. Android Developers. SensorEvent. Available online: https://developer.android.com/reference/android/hardware/SensorEvent.html (accessed on 5 March 2018).
  29. Furui, S. Speaker independent isolated word recognition using dynamic features of the speech spectrum. IEEE Trans. Acoust. Speech Signal Process. 1986, 34, 52–59. [Google Scholar] [CrossRef]
  30. Wang, L.; Ning, H.; Tan, T.; Hu, W. Fusion of static and dynamic body biometrics for gait recognition. IEEE Trans. Circuits Syst. Video Technol. 2004, 14, 149–158. [Google Scholar] [CrossRef]
  31. Huang, C.-F. Driver Verification Based on Biometric Using GMM and SVM. Master’s Thesis, National Central University, Taiwan, 2016. Available online: http://hdl.handle.net/11296/38774q (accessed on 5 March 2018).
  32. Yang, C.; Liang, D.; Chang, C. A novel driver identification method using wearables. In Proceedings of the 13th IEEE Annual Consumer Communications & Networking Conference, Las Vegas, NV, USA, 9–12 January 2016; pp. 1–5. [Google Scholar]
  33. Ayache, S.; Quénot, G.; Gensel, J. Classifier fusion for SVM-based multimedia semantic indexing. In Proceedings of the 29th European conference on IR Research, Rome, Italy, 2–5 April 2007; pp. 494–504. [Google Scholar]
  34. Znaidia, A.; Shabou, A.; Popescu, A.; Le Borgne, H.; Hudelot, C. Multimodal feature generation framework for semantic image classification. In Proceedings of the 2nd ACM International Conference on Multimedia Retrieval, Hong Kong, China, 5–8 June 2012. [Google Scholar]
  35. Chang, C.-C.; Lin, C.-J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 27. [Google Scholar] [CrossRef]
  36. Martin, A.; Doddington, G.R.; Kamm, T.; Ordowski, M.; Przybocki, M. The DET curve in assessment of detection task performance. In Proceedings of the 5th European Conference on Speech Communication and Technology, Rhodes, Greece, 22–25 September 1997; pp. 1895–1898. [Google Scholar]
  37. Fui, L.H.; Isa, D. Feature selection based on minimizing the area under the detection error tradeoff curve. In Modeling Applications and Theoretical Innovations in Interdisciplinary Evolutionary Computation; Hong, W.-C., Ed.; IGI Global: Hershey, PA, USA, 2013; pp. 16–31. ISBN 978-1-46663-628-6. [Google Scholar]
  38. Bimbot, F.; Bonastre, J.-F.; Fredouille, C.; Gravier, G.; Magrin-Chagnolleau, I.; Meignier, S.; Merlin, T.; Ortega-Garcia, J.; Petrovska-Delacretaz, D.; Reynolds, D.-A. A tutorial on text-independent speaker verification. J. Appl. Signal Process. 2004, 2004, 430–451. [Google Scholar] [CrossRef]
  39. Gafurov, D.; Helkala, K.; Søndrol, T. Biometric gait authentication using accelerometer sensor. J. Comput. 2006, 1, 51–59. [Google Scholar] [CrossRef]
  40. Bergadano, F.; Gunetti, D.; Picardi, C. User authentication through keystroke dynamics. ACM Trans. Inf. Syst. Secur. 2002, 5, 367–397. [Google Scholar] [CrossRef]
  41. Ahmed, A.A.E.; Traore, I. A new biometric technology based on mouse dynamics. IEEE Trans. Dependable Secure Comput. 2007, 4, 165–179. [Google Scholar] [CrossRef]
  42. Lin, C.-C.; Chang, C.-C.; Liang, D. A novel non-intrusive user authentication method based on touchscreen of smartphones. In Proceedings of the 2013 International Symposium on Biometrics and Security Technologies, Chengdu, China, 2–5 July 2013; pp. 212–216. [Google Scholar]
Figure 1. (a) Driving simulation system. (b) Simulated road scene.
Figure 2. (a) Participant in a real vehicle. (b) Real-traffic road scene [25].
Figure 3. Overview of the proposed smartwatch-based driver authentication mechanism.
Figure 4. (a) X-axis accelerometer signal with noise. (b) Median filtered X-axis accelerometer signal.
Figure 5. X-axis accelerator and its dynamics for the segment of a single driver. (a) Simulated environment. (b) Real environment.
Figure 6. Orientation sensor signals of the driving behaviors from two different participants A and B: (a) data distribution signals; (b) components of two Gaussian mixture models (GMMs).
Figure 7. Proposed driving behavior model for a driver in a specific driving scenario.
Figure 8. (a) The route in the simulated environment. (b) The real-traffic route.
Figure 9. Equal error rate (EER) with respect to various numbers of Gaussian components: (a) GMM and Stacking; (b) two base SVMs SIDM and SUDM.
Figure 10. Detection error trade-off (DET) curves for different numbers of driving behavior models using the GMM and stacking approaches: (a) single driving scenario; (b) multiple driving scenario.
Table 1. Four signals derived from the built-in smartwatch sensors.
Signal Type | Description
Acc | The three-dimensional signal of the accelerometer
Ori | The two-dimensional signal of the orientation sensor
ΔAcc | The three delta coefficients with respect to the three-dimensional signal of the accelerometer
ΔOri | The two delta coefficients with respect to the two-dimensional signal of the orientation sensor
Table 2. Average training time with respect to the number of Gaussian components.
Time (seconds) | GMM | Stacking | SIDM | SUDM
2 Components | 1.82 | 13.29 | 3.07 | 10.21
4 Components | 7.58 | 38.29 | 8.98 | 29.33
8 Components | 26.86 | 134.68 | 28.41 | 106.04
16 Components | 43.65 | 209.89 | 45.46 | 163.97
24 Components | 78.66 | 300.58 | 80.74 | 219.21
Table 3. EERs for the GMM and stacking approaches for various driving scenarios.
Driving Scenario | GMM (Simulated Environment) | Stacking (Simulated Environment)
S | 19.39% | 14.65%
L | 18.02% | 11.07%
R | 20.53% | 12.88%
S+L | 12.14% | 7.07%
S+R | 14.48% | 8.35%
L+R | 12.47% | 6.33%
S+L+R | 9.86% | 4.62%
Table 4. EERs of the GMM and stacking approaches in real-traffic and simulated environments.
Driving Scenario | Real-Traffic: GMM | Real-Traffic: Stacking | Simulated: GMM | Simulated: Stacking
S | 20.93% | 16.40% | 25.07% | 18.38%
L | 29.10% | 18.33% | 17.46% | 10.91%
R | 24.74% | 15.33% | 24.15% | 15.14%
S+L | 18.34% | 11.46% | 14.44% | 8.20%
S+R | 17.41% | 10.52% | 20.30% | 11.41%
L+R | 20.52% | 10.82% | 13.70% | 6.90%
S+L+R | 15.67% | 7.86% | 12.86% | 6.07%
Table 5. Summary of the EERs of user authentication using various behavioral characteristics.
Behavioral Characteristic | Performance (%) | Participants
Car driving signals [3] | EER = 3.44 to 5.02 | 30
Gait/Stride [39] | EER = 5 to 6 | 21
Keystroke dynamics [40] | FAR = 0.01; FRR = 4 | 154
Mouse dynamics [41] | FAR = 2.465; FRR = 2.461 | 22
Touch Gestures [42] | EER = 2.35 to 2.99 | 51
Our proposed approach | EER = 4.62 | 52
