Article

Ultra-Wide Band Radar Empowered Driver Drowsiness Detection with Convolutional Spatial Feature Engineering and Artificial Intelligence

by Hafeez Ur Rehman Siddiqui 1,*,†, Ambreen Akmal 1,†, Muhammad Iqbal 2, Adil Ali Saleem 1, Muhammad Amjad Raza 1, Kainat Zafar 1, Aqsa Zaib 1, Sandra Dudley 3, Jon Arambarri 4,5,6, Ángel Kuc Castilla 4,7,8 and Furqan Rustam 9,*
1 Institute of Computer Science, Khwaja Fareed University of Engineering and Information Technology, Abu Dhabi Road, Rahim Yar Khan 64200, Punjab, Pakistan
2 Institute of Computer and Software Engineering, Khwaja Fareed University of Engineering and Information Technology, Rahim Yar Khan 64200, Punjab, Pakistan
3 Bioengineering Research Centre, School of Engineering, London South Bank University, 103 Borough Road, London SE1 0AA, UK
4 Universidade Internacional do Cuanza, Cuito EN250, Angola
5 Fundación Universitaria Internacional de Colombia, Bogotá 111321, Colombia
6 Universidad Internacional Iberoamericana, Campeche 24560, Mexico
7 Universidad de La Romana, La Romana 22000, Dominican Republic
8 Universidad Europea del Atlántico, Isabel Torres 21, 39011 Santander, Spain
9 School of Computing, National College of Ireland, Dublin D01 K6W2, Ireland
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Sensors 2024, 24(12), 3754; https://doi.org/10.3390/s24123754
Submission received: 7 May 2024 / Revised: 1 June 2024 / Accepted: 5 June 2024 / Published: 9 June 2024
(This article belongs to the Section Vehicular Sensing)

Abstract:
Driving while drowsy poses significant risks, including reduced cognitive function and the potential for accidents, which can lead to severe consequences such as trauma, economic losses, injuries, or death. Artificial intelligence can enable effective detection of driver drowsiness, helping to prevent accidents and enhance driver performance. This research addresses the crucial need for real-time, accurate drowsiness detection to mitigate the impact of fatigue-related accidents. Leveraging ultra-wideband radar data collected over five minutes, the dataset was segmented into one-minute chunks and transformed into grayscale images. Spatial features were retrieved from the images using a two-dimensional Convolutional Neural Network, and these features were then used to train and test multiple machine learning classifiers. The ensemble classifier RF-XGB-SVM, which combines Random Forest, XGBoost, and Support Vector Machine under a hard voting criterion, performed admirably, with an accuracy of 96.6%. The proposed approach was further validated with a robust k-fold score of 97% and a standard deviation of 0.018, demonstrating significant results. Finally, the dataset was augmented using Generative Adversarial Networks, improving the accuracies of all models; among them, the RF-XGB-SVM model outperformed the rest with an accuracy score of 99.58%.

1. Introduction

Drowsiness, manifested by drooping eyes, mind wandering, eye rubbing, inability to concentrate, and yawning, is a state of fatigue that presents a substantial danger, especially to road safety. Recent investigations highlight the seriousness of the problem, revealing that 30% of the 1 million deaths caused by road accidents can be attributed to driver weariness or drowsiness [1,2]. The likelihood of a collision increases threefold when the driver is experiencing weariness, emphasizing the importance of preventative steps. The American Automobile Association (AAA) has found that approximately 328,000 crashes are caused by drowsy driving each year [3]. These crashes have had a significant impact on society, costing almost 109 billion USD, not including property damage [3]. This staggering figure encompasses immediate and long-term medical expenses, productivity losses in both workplace and household contexts, legal and court costs, insurance administration expenses, and the economic impact of travel delays. Specific demographic groups are particularly susceptible to drowsiness while driving: night-shift male workers and individuals with sleep apnea syndrome emerge as high-risk categories [4]. Several research studies have been published suggesting strategies to mitigate drowsiness or notify drivers about its possible indications [5,6,7,8,9,10,11,12,13,14]. These measures are important steps in tackling the critical issue of drowsy driving and improving road safety.
Drowsiness detection systems can be classified into three main categories: vehicle dynamics, physiological signals, and recognition of driver facial characteristics [11,12,15,16]. The efficacy of vehicle dynamics-based systems, however, is hindered by unpredictable variables such as road geometry, sluggish processing speed, traffic conditions, and head movement [15,16,17]. The examination of yawning and blinking in facial images of the driver has shown potential in controlled or virtual environments [16,17], but the performance of these systems often decreases in real-world settings due to factors including changes in lighting, differences in skin color, and temperature fluctuations [16,17]. Conversely, systems relying on physiological signals have demonstrated a high level of accuracy, establishing them as a dependable approach for real-world applications. Physiological measures such as electroencephalography (EEG) [6,18,19,20,21,22,23,24], electrooculography (EOG) [25,26,27,28,29,30], respiration rate [12,31,32,33,34,35], electrocardiography (ECG) [34,36,37,38], and electromyography (EMG) [39,40,41,42,43] are commonly used in systems designed to identify driver drowsiness. Although the sensors used to capture these signals are effective, a significant obstacle arises from their invasive nature, making them difficult to integrate or use practically in real-world contexts.
Among these physiological signals, the respiration rate is especially noteworthy because it fluctuates significantly between wakefulness and sleep and varies across numerous physiological conditions. In addition, the respiratory system undergoes modifications during sleep, driven by decreased muscle tone and shifts in chemical and non-chemical responses [44]. It is worth mentioning that a decline in breathing rate is frequently observed before a driver reaches a state of sleep [45,46]. This study aims to address the challenge of accurately detecting driver drowsiness in real time using UWB radar signals and advanced machine learning (ML) techniques. The primary objectives are to develop robust feature extraction methods, design efficient ensemble models, and validate their effectiveness against existing methods. In this manuscript, the proposed system employs the non-invasive acquisition of chest movement through Ultra-Wideband (UWB) radar to distinguish between the drowsy and non-drowsy states of the driver. UWB radar offers notable benefits such as fast data rates and low power transmission levels [47]; this is achieved by transmitting very short-duration pulses, resulting in signals with wide bandwidth. The technology raises no privacy concerns because it is not influenced by ambient elements, does not rely on light or skin color, and emits very little power, guaranteeing human safety [48,49,50]. Furthermore, the system maintains its resilience even when exposed to Wi-Fi and mobile phone transmissions. The UWB radar’s ability to penetrate different materials or obstructions, combined with its non-intrusive nature [51,52], makes it an excellent option for this drowsiness detection system. The chest readings obtained are subsequently transformed into grayscale images, as illustrated in [53], and these images are utilized as input to deep learning (DL) models. The features extracted from these models are then employed to train and test ML algorithms. The contributions of this study are as follows:
  • The system utilizes the dataset from [12], transforming it into grayscale images for analysis.
  • The system employs Convolutional Neural Network (CNN) architecture to extract features from these images.
  • These features are input into various machine learning (ML) algorithms, and the performance of these algorithms is assessed on a test set.
  • The hybrid ensemble models RF-MLP and RF-XGB-SVM have been developed to combine the unique capabilities of multiple algorithms.
  • The models undergo evaluation using metrics such as accuracy, precision, recall, and F1 score. In the end, a comparative analysis is conducted to determine which deep learning-based feature yields superior results.
This paper is organized into several sections. Section 2 presents the literature review of the study, while Section 3 describes the methodology of the proposed approach. Section 4 presents the results, and finally, Section 5 contains the study’s conclusion.

2. Literature Review

The literature review examines prominent studies that investigate the identification and categorization of drowsy and alert conditions in drivers. In [12], the classification of drowsy and non-drowsy states is accomplished by using non-invasive IR-UWB radar to measure the breathing rate. The chest motions of 40 individuals were collected, and the Support Vector Machine algorithm achieved an accuracy rate of 87%, demonstrating the efficacy of UWB in detecting driver drowsiness by analyzing breathing rates. An EEG-based spatial-temporal CNN (ESTCNN) is introduced in [54] to detect driver fatigue; the network learns features directly from EEG inputs and achieves a classification accuracy of 97.37%, with EEG signals collected from eight participants in both alert and fatigued states. The research presented in [55] focuses on two distinct categories of videos: alert and drowsy. The study utilizes a thorough dataset consisting of 60 individuals classified into three groups: alert, low vigilant, and drowsy. Two separate models are created, utilizing computer vision and deep learning to analyze temporal and spatial features. Ref. [56] suggests a non-intrusive method of evaluating exhaustion by analyzing physiological signs such as heart rate variability (HRV) and ECG data. ECG data are collected during sleep periods, and the continuous wavelet transform is used to extract features; the average accuracy achieved via ensemble logistic regression is 92.5%, with a processing time of 21 s. Ref. [57] improves the detection of drowsiness by combining ECG and EEG features. The data collected from 22 participants in a driving simulator exhibit noteworthy characteristics that differentiate between alert and tired states. By combining modalities, Support Vector Machine (SVM) classification produces enhanced performance, while channel reduction preserves accuracy using only two electrodes.
The Intelligent Drowsiness Detection System (DDS) described in [58] uses Deep Convolutional Neural Networks (DCNNs), specifically VGG16, InceptionV3, and Xception, to address driver fatigue. The Xception model performs best, with an accuracy of 93.6%, surpassing both VGG16 and InceptionV3 on a dataset of facial recordings depicting drowsy and non-drowsy states. In [59], a two-phase approach tackles the challenges of intelligent transportation systems by presenting an improved fatigue detection system based on DenseNet. The system consists of a model representation module and a sophisticated channel attention method, while the second phase utilizes a guided policy search (GPS) algorithm to facilitate collaborative decision-making, adjusting to current levels of driver fatigue in real time. Empirical validation on datasets such as YaWDD, RLDD, and DROZY shows substantial enhancements, with an average accuracy of 89.62%. The fatigue detection method implemented in [60] utilizes powerful CNN models to specifically target yawning and demonstrates a remarkable accuracy of 96.69% on the YaWDD dataset. The analysis shows that data augmentation trades a modest decrease in accuracy for improved model resilience to complications. In [61], a novel deep learning approach for driver drowsiness identification utilizes a MobileNet CNN with the SSD technique. Trained on a diverse dataset of 6000 photos, the model achieves a substantial Mean Average Precision (mAP) of 0.84, prioritizing computational efficiency for real-time processing on mobile devices. The methodology incorporates a unique dataset from various sources, ensuring diverse representation, and experimental results demonstrate the model’s resilience, with high mAP values for closed eyes (0.776), open eyes (0.763), and outstanding face detection (0.971).
In a study conducted by researchers in [62], a Regularized Extreme Learning Machine (RELM) showed exceptional performance in identifying driver drowsiness, achieving an accuracy rate of 99% on a dataset of 4500 pictures. The combination of video surveillance, image processing, and ML in [63] results in a sleepiness detection system that achieves 93% accuracy, determined by analyzing eye blink patterns from the YawDD dataset. The system described in [64] utilizes the PERCLOS algorithm, Python modules, and ML techniques to evaluate eye movements, achieving a high accuracy rate of 93% in real-time detection of driver drowsiness. The utilization of mmWave FMCW radar enables [65] to reach an accuracy of 82.9% in detecting drowsiness by capturing chest motion and employing ML methods. Ref. [66] integrates MTCNN facial detection with GSR sensor-based physiological data, resulting in an accuracy of 91% in real-time detection of driver drowsiness. The study [67] combines behavioral metrics and physiological data, utilizing a Raspberry Pi and SVM classifiers, to achieve a commendable accuracy rate of 91% in detecting driver tiredness. The study [68] uses a histogram of oriented gradients (HOG) and a linear SVM to achieve outstanding precision. The DDS in [69] uses a CNN to extract features, resulting in an accuracy rate of 86.05% on a dataset of 48,000 photographs. The study [70], conducted in Zimbabwe, mainly addresses road safety and achieves a detection accuracy of over 95% in identifying drowsiness through principal component analysis (PCA) dimensionality reduction along with classifiers such as XGBoost and Linear Discriminant Analysis. The real-time drowsiness detection system implemented on an Nvidia Jetson Nano in [71] achieves an accuracy rate of 94.05% and particularly excels in detecting yawning. The paper [72] presents a DDS that uses webcam-based surveillance to detect drowsiness in real time, achieving over 97% in multiple metrics such as precision, sensitivity, and F1-score. Ref. [55] presents a real-time DDS that utilizes the Viola–Jones algorithm, a beeping sound mechanism, and the calculated distance between the lips; this combination provides scalability and cost-effectiveness, ultimately improving road safety.
Although these investigations contribute substantially to the field of driver drowsiness detection, it is important to highlight several limitations. Video-based methods are successful in controlled environments but can face difficulties in real-world situations due to inconsistent lighting conditions, which could affect the precision of drowsiness detection. Furthermore, the real-time adoption of physiological data-centric systems is hindered by practical problems arising from the invasive nature of on-body sensors, regardless of their effectiveness. Not only does this raise privacy concerns, but it also obstructs the smooth incorporation of such technologies into ordinary driving situations. Hence, the implementation of these techniques in actual driving scenarios requires careful deliberation of these limitations.

3. Methodology

The proposed methodology is depicted in Figure 1. Initially, a dataset was sourced from [12], which includes chest movement signals acquired through UWB radar in both drowsy and non-drowsy states. Subsequently, the dataset was transformed into grayscale images, which were then used as input to a CNN model. Features were extracted from the images using the DL model and saved in a CSV file along with their accompanying labels. Following that, the dataset was divided into two distinct sets: a training set and a testing set. The training set was utilized to train various ML classifiers, while the test set was reserved for evaluating the performance of these models. Predictions on the test set were evaluated using key metrics, including accuracy, precision, recall, and F1 score.

3.1. Dataset

The dataset used in this investigation was obtained from [12]; it comprises the chest movements of drivers in drowsy and non-drowsy states, captured using the X4M300 UWB radar (NOVELDA, Oslo, Norway). The experiment involved forty professional male drivers engaged in extended intercity driving sessions lasting approximately 10 h. For the non-drowsy state, chest movement data were recorded before the drivers commenced their driving shifts; for the drowsy state, chest data of the same participants were collected shortly after they finished a 10-h driving shift. The raw radar signal of the chest movement is shown in Figure 2.
To promptly evaluate the drivers upon return from their trips, a dedicated testing area was established in a vacant room at the Manthar Transport Company’s terminal in Sadiq Abad, Punjab, Pakistan. Throughout data collection, drivers were instructed to position themselves directly in front of the radar, with a minimum distance of 1 m maintained between the radar and the subject at chest level, as shown in Figure 3. The radar had a range of 9.4 m from the transmitter-receiver point, allowing it to detect any movements within this distance. The 1 m distance was selected on the supposition that the driver could be in any position within this range while operating the vehicle; in practice, the distance from the dashboard to the human body ranges from about 0.2 m to 0.5 m. The radar device was positioned at chest level to guarantee that the subject stayed inside the radar’s effective range. Chest movement was collected from each subject for five minutes and stored in a CSV file.

3.2. Conversion to Images

Each file consists of a five-minute recording of chest movement, which is divided into one-minute segments, each denoted as matrix “A”. These matrices are converted into grayscale image representations, denoted as “I”. The purpose of this conversion is to depict the matrix visually, improving the comprehensibility and analysis of the data. To accomplish this, the ‘mat2gray’ function in MATLAB R2020a is utilized. The function first identifies the minimum and maximum values in the input matrix. Using these values, the normalization formula shown in Equation (1) is applied to each element, subtracting the minimum value and then dividing by the difference between the maximum and minimum values. This procedure maps the minimum value of the matrix to 0 and the maximum value to 1, ensuring that all other values fall proportionally within this range. If the minimum and maximum values of a matrix are equal, which implies that all values in the matrix are the same, mat2gray assigns a value of 0.5 to all output values, preventing undefined operations and providing a reasonable default value.
I = (A − min(A)) / (max(A) − min(A))        (1)
Here, ‘min(A)’ represents the minimum value within matrix ‘A’, and ‘max(A)’ signifies the maximum value within the same matrix. The procedure entails subtracting the minimum value from each element of ‘A’ and then dividing the result by the range of values, i.e., the difference between the maximum and minimum. This normalization guarantees that the values in matrix ‘A’ are rescaled to lie within the standard grayscale image range of [0, 1]. The converted images for both drowsy and fresh classes are shown in Figure 4.
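For illustration, a minimal Python (NumPy) sketch of this normalization, mirroring the described mat2gray behavior, is given below; the chunk shape is a placeholder rather than the actual radar frame dimensions.

```python
import numpy as np

def mat2gray(a: np.ndarray) -> np.ndarray:
    """Rescale a matrix into the grayscale range [0, 1], following Equation (1)."""
    a = a.astype(np.float64)
    a_min, a_max = a.min(), a.max()
    if a_max == a_min:
        return np.full_like(a, 0.5)  # constant matrix: 0.5 fallback, as described above
    return (a - a_min) / (a_max - a_min)

# Hypothetical one-minute chunk of radar samples (shape is illustrative only).
chunk = np.random.randn(60, 512)
image = mat2gray(chunk)  # every value now lies in [0, 1]
```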

3.3. Feature Extraction

The study utilized the Convolutional Spatial Feature Engineering (CSFE) method, as presented in [53], to extract spatial features from grayscale images. The process is visualized in Figure 5. By incorporating 2D convolutional layers into CNN architectures, this technique enables the extraction of complex spatial features from the image data. The spatial features obtained provide a thorough representation, encompassing intricate patterns and movements essential for a range of applications. In this research, CSFE features were derived from the grayscale images, forming a new feature set along with corresponding labels. These features are then used to train and evaluate ML models, with the goal of accurately detecting driver drowsiness. By harnessing the spatial information extracted through CSFE, these models can discern nuanced patterns and movements often overlooked by conventional CNNs, thereby enhancing accuracy in drowsiness detection. The architecture of the 2D CNN used in this research is given in Table 1. The rescaling layer performs an initial normalization of the pixel values, guaranteeing a uniform input range of [0, 1]. The first convolutional layer has 64 filters with a 3 × 3 kernel to identify basic patterns and edges in the input image, utilizing the ReLU activation function to introduce non-linearity and improve feature representation. A subsequent max-pooling layer decreases the spatial dimensions, preserving important characteristics while reducing computational complexity.
The second convolutional layer utilizes 32 filters to refine feature extraction, with additional max pooling further reducing the spatial dimensions. The flatten layer prepares the data for the fully connected layers, facilitating global feature integration, and the 128-neuron dense layer refines the hierarchical features to capture intricate patterns and relationships. This architecture is the result of iterative run-and-test experimentation, demonstrating its adaptability to the specific characteristics of the dataset. The selection of these layers aims to strike a careful balance, allowing efficient feature extraction without introducing unnecessary complexity. The architecture’s strength stems from its capacity to systematically extract complex spatial characteristics from images. The resulting 128 features are stored in a CSV file along with labels for the classification of the drowsy and non-drowsy states of the drivers.
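A minimal Keras sketch of this layer sequence follows. The input image size, the 1/255 rescaling factor, the dense layer’s ReLU activation, and the temporary softmax head used for training are assumptions not stated in the text; after training, the 128 activations of the penultimate dense layer would be exported as the CSFE features.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SHAPE = (128, 128, 1)  # assumed input size; the image dimensions are not stated here

# Layer sequence from Table 1, plus a temporary softmax head for training.
full = models.Sequential([
    layers.Input(shape=IMG_SHAPE),
    layers.Rescaling(1.0 / 255),                    # pixel values -> [0, 1] (assumes 8-bit input)
    layers.Conv2D(64, (3, 3), activation="relu"),   # basic patterns and edges
    layers.MaxPooling2D(),                          # shrink spatial dimensions
    layers.Conv2D(32, (3, 3), activation="relu"),   # refined feature extraction
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu", name="csfe"),  # 128-dim CSFE feature vector
    layers.Dense(2, activation="softmax"),          # drowsy / fresh head (training only)
])
full.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# ... full.fit(train_images, train_labels) ...

# Export the penultimate activations as the CSFE features.
extractor = models.Model(full.inputs, full.get_layer("csfe").output)
features = extractor.predict(tf.random.uniform((4, *IMG_SHAPE)))  # shape: (4, 128)
```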

3.4. Data Augmentation

The study presented in this manuscript employs Generative Adversarial Networks (GANs) to address the problem of a small dataset size; specifically, the dataset contains only 200 instances per class, which is insufficient for training robust ML models. GANs, proposed by Ian Goodfellow [73], are a type of ML framework specifically created to produce artificial data that closely resemble a given dataset. A GAN comprises two neural networks: a Generator and a Discriminator. The Generator produces novel, artificial data instances, while the Discriminator assesses them to differentiate between genuine and artificial (counterfeit) data. The two networks are trained concurrently in a competitive setting: the Generator improves its proficiency at generating realistic data, while the Discriminator improves its ability to identify counterfeit data. The Generator model is specifically designed to accept a random noise vector as input and convert it into a synthetic data instance that closely mimics the actual data. The structure of the Generator commences with an input layer that receives a noise vector of 102 dimensions. Subsequently, a sequence of dense layers is employed to gradually enhance the data representation. The initial dense layer is composed of 256 neurons with the Rectified Linear Unit (ReLU) activation function, followed by batch normalization and a dropout layer with a 30% probability; these measures enhance stability and mitigate overfitting. The second dense layer consists of 512 neurons, likewise ReLU-activated and followed by batch normalization and dropout layers. The architecture then incorporates a third dense layer with 256 neurons and a fourth with 128 neurons, both adhering to the same sequence of activation, normalization, and dropout. The Generator’s final output layer generates a 100-dimensional vector via linear activation, representing the synthetic data. The architecture of the Generator is shown in Figure 6a.
The Discriminator model’s objective is to distinguish genuine data from the artificial data produced by the Generator. It starts with an input layer that receives a data vector of 102 dimensions. The first layer of the Discriminator consists of 512 neurons with ReLU activation, followed by a dropout layer with a 30% probability to mitigate overfitting. This is followed by further dense layers of 256, 128, and 64 neurons, each ReLU-activated and followed by dropout. The final output layer of the Discriminator is a single neuron with sigmoid activation, outputting a probability score that indicates whether the input data are real or synthetic. The architecture of the Discriminator model is shown in Figure 6b.
During the training phase, the Generator and Discriminator participate in a two-player minimax game. The Discriminator is trained by being presented with batches of real data and data produced by the Generator. It learns to improve its ability to distinguish real data from fake data. Simultaneously, the Generator is trained to generate artificial data that can deceive the Discriminator into categorizing it as authentic. The process of adversarial training persists until the Generator generates data that are indistinguishable from genuine data, therefore, substantially enhancing the original dataset with synthetic examples of high quality. Using GANs, the dataset is effectively augmented by adding 1000 instances to each class, resulting in a total of 1200 instances for each class. This enhancement facilitates the creation of more robust and accurate ML models. The sample of the augmented data is shown in Table 2.
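A minimal Keras sketch of the two networks described above is given below. The text reports 102-dimensional Generator and Discriminator inputs but a 100-dimensional Generator output; to keep the sketch self-consistent, the Discriminator input here is matched to the Generator output, and the adversarial training loop is omitted.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NOISE_DIM = 102  # noise-vector size reported in the text
DATA_DIM = 100   # Generator output size reported in the text; the Discriminator
                 # input is matched to it here so the sketch is self-consistent

def build_generator() -> tf.keras.Model:
    """Dense Generator: 256 -> 512 -> 256 -> 128 units, each with ReLU,
    batch normalization, and 30% dropout, then a linear output layer."""
    model = models.Sequential([layers.Input(shape=(NOISE_DIM,))])
    for units in (256, 512, 256, 128):
        model.add(layers.Dense(units, activation="relu"))
        model.add(layers.BatchNormalization())
        model.add(layers.Dropout(0.3))
    model.add(layers.Dense(DATA_DIM, activation="linear"))  # synthetic feature vector
    return model

def build_discriminator() -> tf.keras.Model:
    """Dense Discriminator: 512 -> 256 -> 128 -> 64 units with ReLU and
    30% dropout, ending in a sigmoid real/fake probability."""
    model = models.Sequential([layers.Input(shape=(DATA_DIM,))])
    for units in (512, 256, 128, 64):
        model.add(layers.Dense(units, activation="relu"))
        model.add(layers.Dropout(0.3))
    model.add(layers.Dense(1, activation="sigmoid"))  # P(input is real)
    return model

generator, discriminator = build_generator(), build_discriminator()
fake = generator(tf.random.normal((8, NOISE_DIM)))   # 8 synthetic instances
score = discriminator(fake)                          # real/fake probabilities
```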

3.5. Proposed Ensemble Models

In this manuscript, in addition to the individual ML models, two ensemble models, RF-MLP and RF-XGB-SVM, are proposed with hard voting for the classification task between the drowsy and fresh classes. The rationale behind the RF-MLP and RF-XGB-SVM models is to exploit the advantages of different methods in order to improve predictive accuracy. RF-MLP is a hybrid model that combines the robustness of RF with the deep learning capabilities of MLP, while RF-XGB-SVM merges the strong boosting power of Extreme Gradient Boosting with SVM’s effectiveness in handling high-dimensional data. The voting mechanism among these separate learners introduces diversity, robustness, and computational efficiency, balancing accuracy and model interpretability. The architecture of both ensemble models is presented in Figure 7, where P1, P2, and P3 denote the predictions of the respective classifiers; in the final classification, the class with the majority of votes among the predictions is selected as the final prediction.
Algorithm 1 outlines the procedural steps employed by the RF-MLP ensemble model under the hard voting criterion. The trained Random Forest (TRF) and trained Multilayer Perceptron (TMLP) models operate on the feature vector to predict whether a given sample belongs to the drowsy or fresh class. Each model contributes one vote, and the ultimate prediction, denoted as HVPred, is determined by the majority of votes from these models for the drowsy or fresh class.
Algorithm 1 RF-MLP Algorithm for Drowsiness Prediction
Require: CSFE Features, TrainedRF, TrainedMLP
Ensure: Drowsy, Fresh
 1: TRF ← TrainedRF
 2: TMLP ← TrainedMLP
 3: for i in Dataset do
 4:     RFPrediction ← TRF(i)
 5:     MLPPrediction ← TMLP(i)
 6:     HVPred[i] ← arg max{RFPrediction, MLPPrediction}
 7: end for
 8: Output: Drowsy | Fresh ← HVPred
Algorithm 2 outlines the steps used by the RF-XGB-SVM ensemble model under the hard voting criterion. The trained Random Forest (TRF), trained XGB (TXGB), and trained SVM (TSVM) models operate on the feature vector to predict whether a given sample belongs to the drowsy or fresh class. Each model contributes one vote, and the ultimate prediction, denoted as HVPred, is determined by the majority of votes from these models for the drowsy or fresh class.
Algorithm 2 RF-XGB-SVM Algorithm for Drowsiness Prediction
Require: CSFE Features, TrainedRF, TrainedXGB, TrainedSVM
Ensure: Predictions (Drowsy or Fresh)
 1: TRF ← TrainedRF
 2: TXGB ← TrainedXGB
 3: TSVM ← TrainedSVM
 4: for each i in Dataset do
 5:     RFPrediction ← TRF(i)
 6:     XGBPrediction ← TXGB(i)
 7:     SVMPrediction ← TSVM(i)
 8:     HVPred[i] ← arg max{RFPrediction, XGBPrediction, SVMPrediction}
 9: end for
10: Output: Drowsy | Fresh ← HVPred
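For concreteness, a minimal scikit-learn sketch of this hard voting scheme is given below. The stand-in data arrays and estimator hyperparameters are placeholders (the tuned values are those in Table 3); the RF-MLP ensemble of Algorithm 1 follows the same pattern with RandomForestClassifier and MLPClassifier as the two estimators.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC
from xgboost import XGBClassifier

# Stand-in CSFE features and labels (0 = fresh, 1 = drowsy); real values come from the CSV.
X_train = np.random.rand(280, 128); y_train = np.random.randint(0, 2, 280)
X_test = np.random.rand(120, 128)

rf_xgb_svm = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(random_state=42)),  # placeholder hyperparameters;
        ("xgb", XGBClassifier(eval_metric="logloss")),    # the tuned values are in Table 3
        ("svm", SVC(kernel="rbf")),
    ],
    voting="hard",  # each model casts one vote; the majority class is HVPred
)
rf_xgb_svm.fit(X_train, y_train)
y_pred = rf_xgb_svm.predict(X_test)  # drowsy/fresh decided by majority vote
```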

4. Results and Discussion

This section provides a comprehensive analysis and discussion of the results obtained from the experiments carried out during this research. The objective is to provide a thorough analysis of the results while clarifying their importance within the context of this study. Furthermore, it involves a substantial discussion exploring the impacts and importance of these findings, thereby enriching the understanding of the broader academic and practical implications stemming from the research endeavor.

4.1. Experiment Setup

The experimental analyses were conducted on an HP EliteBook x360 1040 G6 (HP Inc., Lahore, Pakistan), which served as the primary computing platform. The system is equipped with an Intel(R) Core(TM) i5-8365U processor with a base clock of 1.60 GHz and a maximum speed of 1.90 GHz, along with 16.0 GB of RAM, supporting efficient multitasking and data management. The system runs Windows 11 Pro on a 64-bit architecture, providing a stable and flexible computing environment throughout the experimentation phase. Data preprocessing was performed using MATLAB R2020a. The subsequent experiments, including feature extraction and model training, were implemented in Python using Jupyter Notebook 6.5.2, which allowed for the seamless integration of code, visualizations, and documentation, facilitating an interactive and iterative workflow. The software environment comprised Python 3.8, TensorFlow 2.4, and scikit-learn 0.24. Hyperparameter tuning was performed using grid search to identify optimal configurations for each model. Software debugging and iterative refinements were managed using Jupyter Notebook’s real-time monitoring and visualization tools, which allowed for dynamic adjustments during the training process.

4.2. Data Splitting

The dataset comprises recordings obtained from forty male participants, encompassing both drowsy and alert states. Segmenting each file into one-minute intervals brings the total number of files within each category to 200. The dataset is then divided into training and testing sets in the proportion of 70% for training and 30% for testing. Additionally, a GAN is employed to augment the dataset, giving each class 1200 instances; the augmented dataset is divided into training and testing sets in an 80–20 split, ensuring a robust and comprehensive evaluation of the model’s performance. The objective of this strategic division is to guarantee an equitable distribution of drowsy and non-drowsy instances throughout the training and testing stages, promoting the development and assessment of resilient models.
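A short sketch of the two splits follows; the arrays are stand-ins for the CSFE features, and the stratification option is an assumption consistent with the equitable class distribution described above.

```python
import numpy as np
from sklearn.model_selection import train_test_split

X, y = np.random.rand(400, 128), np.random.randint(0, 2, 400)            # original: 200 per class
X_aug, y_aug = np.random.rand(2400, 128), np.random.randint(0, 2, 2400)  # augmented: 1200 per class

# 70/30 split for the original set, 80/20 for the augmented set;
# stratify keeps the drowsy/fresh classes balanced in both partitions.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, stratify=y, random_state=42)
Xa_tr, Xa_te, ya_tr, ya_te = train_test_split(X_aug, y_aug, test_size=0.20, stratify=y_aug, random_state=42)
```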

4.3. Classification Results

In this study, a diverse array of machine learning classifiers, encompassing SVM, Random Forest (RF), XGBoost (XGB), and Multi-Layer Perceptron (MLP), was employed for the classification task. Furthermore, ensemble classifiers were implemented in two configurations: the RF-MLP ensemble and the RF-XGB-SVM ensemble. To improve the performance of the models, rigorous hyperparameter tuning was performed using the grid search technique. The selected hyperparameters are provided in Table 3.
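A minimal sketch of this tuning with scikit-learn’s GridSearchCV is shown below; the parameter grid is illustrative only, and the actual search spaces and selected values are those reported in Table 3.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X_train = np.random.rand(280, 128); y_train = np.random.randint(0, 2, 280)  # stand-in features

# Illustrative grid only; the paper's search spaces live in Table 3.
param_grid = {"n_estimators": [100, 200, 300], "max_depth": [None, 10, 20]}
search = GridSearchCV(RandomForestClassifier(random_state=42), param_grid,
                      cv=5, scoring="accuracy", n_jobs=-1)
search.fit(X_train, y_train)
print(search.best_params_, round(search.best_score_, 3))  # chosen configuration and CV accuracy
```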
The training phase included the use of the training dataset, followed by testing on an independent test set. Table 4 completely presents the classification performance of these models on the test set, providing insights into their efficacy and comparative evaluation.
The results in Table 4 show that the ensemble models, RF-MLP and RF-XGB-SVM, showcased exceptional performance, with accuracies of 95% and 96.6%, respectively. This strong result emphasizes the efficacy of combining several learning techniques and their capacity to greatly improve predictive accuracy. Among the individual classifiers, RF performed best across all measures, attaining an accuracy of 93.33% and an F1-score of 0.94. SVM and XGBoost performed similarly, with both models achieving roughly 91% accuracy and F1-scores of 0.91 and 0.92, respectively; although effective, they marginally trailed RF. The MLP performed significantly worse, with an accuracy of 74.6% and an F1-score of 0.75, highlighting the potential limits of the MLP architecture for this specific drowsiness detection task. For accurate drowsiness detection, the ensemble model, notably RF-XGB-SVM, emerges as a highly promising classifier. The confusion matrix is shown in Figure 8.
The augmented dataset was used to ensure fair and comparable evaluations across different models and datasets by maintaining consistency in model training. The same set of hyperparameters as those applied to the original dataset was used, guaranteeing uniform training conditions. Following successful training, the trained models were rigorously tested using the designated test set. The evaluation results, meticulously documented and presented in Table 5, provide insights into the classifiers’ performance metrics, including accuracy, precision, recall, and F1-score.
It is evident from Table 5 that the SVM achieved an accuracy of 98.76%, with Precision, Recall, and F1-Score all standing at 0.99, indicating a highly consistent and reliable performance across the evaluation metrics. The RF and XGB classifiers exhibited identical performance, each attaining an accuracy of 99.17% and scoring 0.99 in Precision, Recall, and F1-Score, suggesting both were equally effective in handling the augmented dataset. The MLP demonstrated the highest performance among the individual classifiers, with an accuracy of 99.5% and perfect scores of 1.00 in Precision, Recall, and F1-Score, indicating an exceptional ability to classify instances without false positives or negatives. For the ensemble classifiers, the RF-MLP Ensemble achieved an accuracy of 99.3%, with Precision, Recall, and F1-Score of 0.99; this is slightly lower than the MLP alone but still indicates strong predictive capability from leveraging the strengths of both RF and MLP. The RF-XGB-SVM Ensemble outperformed all other models, reaching an accuracy of 99.58% with perfect scores of 1.00 in Precision, Recall, and F1-Score. This superior performance highlights the effectiveness of combining multiple classifiers, capitalizing on their individual strengths to deliver highly accurate and reliable predictions. Overall, all classifiers performed exceptionally well on the augmented dataset, with ensemble methods, particularly the RF-XGB-SVM Ensemble, providing slightly higher accuracy. The confusion matrix of RF-XGB-SVM is shown in Figure 9.

4.4. K-fold Cross Validation

To assess the robustness and reliability of the models, a K-fold cross-validation approach was implemented in this study. The dataset underwent a process of partitioning into five distinct folds, and the models underwent iterative training and evaluation across each of these folds. Table 6 provides a comprehensive presentation of the results obtained from the cross-validation process. This enables readers to gain a nuanced comprehension of the performance of the models across various subsets of the data.
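A minimal scikit-learn sketch of this procedure follows; the feature array is a stand-in, and RF is used as a representative estimator.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = np.random.rand(400, 128), np.random.randint(0, 2, 400)  # stand-in CSFE feature set
scores = cross_val_score(RandomForestClassifier(random_state=42), X, y,
                         cv=5, scoring="accuracy")              # five folds, as in the study
print(f"accuracy = {scores.mean():.2f} (Std = {scores.std():.3f})")  # the values Table 6 tabulates
```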
The findings presented in Table 6 indicate that the ensemble models, specifically RF-XGB-SVM, demonstrated superior performance in terms of both accuracy and consistency. It is noteworthy that RF-XGB-SVM attained the highest accuracy at 97% with a remarkably low standard deviation of 0.018, indicating robust and dependable operation across the folds. RF-MLP also exhibited notable results, with an accuracy of 95% and a standard deviation of 0.03, indicating effective generalization. Among the individual classifiers, RF demonstrated its efficacy as a standalone model by attaining an accuracy of 94% and a moderate standard deviation of 0.02. SVM and XGBoost exhibited similar performance levels, attaining accuracies of approximately 91% each; with a standard deviation of 0.04 compared to SVM’s 0.05, XGBoost showed marginally less variability. The MLP recorded a standard deviation of 0.04 and the lowest accuracy of 73%.
The k-fold cross-validation results on the augmented dataset, presented in Table 7, provide a comprehensive evaluation of the classifiers’ performance in terms of accuracy and variability. The accuracy is reported along with the standard deviation (Std), which indicates the consistency of each model across the folds. The SVM and RF classifiers both achieved an average accuracy of 0.98 with a standard deviation of 0.01, reflecting robust and reliable performance with minimal variation across the folds. XGB exhibited a slightly lower average accuracy of 0.97 with a standard deviation of 0.01; while still strong, it trailed SVM and RF slightly in accuracy. The MLP classifier outperformed the other individual classifiers, achieving an impressive average accuracy of 0.99 with a standard deviation of 0.01; this high accuracy, coupled with low variability, underscores MLP’s effectiveness and stability on the augmented dataset. The RF-MLP Ensemble, which combines the strengths of Random Forest and Multi-Layer Perceptron, achieved an average accuracy of 0.98 with a standard deviation of 0.01, indicating reliability on par with the individual RF and SVM classifiers but not surpassing MLP alone. The RF-XGB-SVM Ensemble demonstrated the highest performance among all models, with an average accuracy of 0.99 and a standard deviation of 0.01, suggesting that combining Random Forest, XGBoost, and SVM yields a model that is both highly accurate and consistently reliable across different subsets of the dataset. Overall, the k-fold cross-validation results affirm the high performance and robustness of the classifiers, with ensemble methods, particularly the RF-XGB-SVM Ensemble, providing the best accuracy and consistency.

4.5. Computational Time Complexity

Table 8 summarizes the computational time complexity of the classifiers, measured in seconds. SVM demonstrates the lowest time complexity at 1.53 s, followed by RF (2.47 s) and XGB (2.81 s). MLP has a higher complexity at 3.63 s, while the RF-MLP ensemble increases to 3.72 s; the RF-XGB-SVM ensemble requires the most time at 4.15 s. This highlights a trade-off between computational efficiency and model complexity: simpler models offer faster predictions, while more complex ensembles deliver heightened accuracy at the expense of increased computational time.
The computational time complexity of the classifiers on the augmented dataset, as shown in Table 9, provides insight into the efficiency of each model in terms of training time measured in seconds. The SVM required 4.19 s for training, indicating a relatively fast processing time given its sophisticated algorithm. Similarly, the RF classifier took 4.22 s, which is comparable to SVM and reflects its efficiency in handling the dataset with multiple decision trees. XGB demonstrated the shortest computational time among all classifiers, completing its training in 3.93 s. This rapid processing time is indicative of XGB’s optimized implementation for gradient boosting, which is known for its speed and performance. The MLP, however, required the longest training time of 5.1 s. This increased time complexity can be attributed to the neural network’s iterative training process, involving numerous parameters and layers that need to be optimized. Among the ensemble classifiers, the RF-MLP Ensemble took 4.3 s to train. This slight increase compared to the individual RF model reflects the added complexity of integrating the MLP component, yet it remains efficient. The RF-XGB-SVM Ensemble had a computational time of 4.7 s. While this is higher than the individual classifiers, it remains reasonable given that it combines three different models. The increase in computational time is justified by the significant boost in accuracy and robustness provided by this ensemble approach. The computational time complexity results illustrate a trade-off between training time and model performance. While MLP and ensemble methods take longer to train, their superior accuracy and reliability often justify the additional computational cost. Conversely, XGB stands out for its quick processing time, making it an efficient choice when computational resources or time are limited.
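The paper does not specify its timing procedure; one plausible way to obtain such per-model timings is simple wall-clock measurement around the training call, sketched here with stand-in data and SVM as a representative model.

```python
import time
import numpy as np
from sklearn.svm import SVC

X, y = np.random.rand(400, 128), np.random.randint(0, 2, 400)  # stand-in features
t0 = time.perf_counter()
SVC(kernel="rbf").fit(X, y)                                    # train the model under test
print(f"training time: {time.perf_counter() - t0:.2f} s")      # the quantity Tables 8 and 9 report
```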

4.6. Comparison with Existing Studies

In comparison to a prior study conducted by Siddiqui et al. [12], which used the same dataset as employed in this manuscript, the proposed method presented in this manuscript has exhibited advancements in accuracy as shown in Table 10. The study [12] achieved an accuracy of 87.5%, while our proposed methodology achieved a significantly higher accuracy of 99.58%. This substantial improvement underscores the efficacy of the approach introduced in this manuscript. The enhanced accuracy suggests that the employed classifiers, such as RF-XGB-SVM, have effectively leveraged the features within the dataset, surpassing the performance achieved in the earlier study.

4.7. Discussion

The results demonstrate that the RF-XGB-SVM ensemble model outperforms all other classifiers, consistently exhibiting superior performance across multiple evaluation criteria, including accuracy, precision, recall, and F1-score. The remarkable efficacy of the RF-XGB-SVM ensemble can be attributed to the synergistic cooperation of RF, XGB, and SVM. RF, with its collection of decision trees, effectively captures complex data relationships; XGB, a robust gradient boosting technique, sequentially strengthens weak learners; and SVM prioritizes finding suitable separating hyperplanes for classification. The integration of these classifiers produces a model that not only draws on a range of learning techniques but also excels at detecting different patterns throughout the feature space. The ensemble approach offers a reliable solution for drowsiness detection by reducing overfitting and allowing error correction through the combined knowledge of the classifiers. Despite its excellent classification performance, it is noteworthy that the RF-XGB-SVM model incurs a higher computational time than the individual classifiers.
The accuracy comparison of all the classifiers on both datasets is shown in Figure 10. The analysis revealed that while individual models like SVM, RF, XGBoost, and MLP performed exceptionally well, achieving high accuracy rates (up to 99.5% for MLP), ensemble methods provided the best results. The RF-XGB-SVM ensemble achieved the highest accuracy of 99.58%, coupled with perfect precision, recall, and F1-score, demonstrating the advantage of combining diverse classifiers. K-fold cross-validation confirmed the robustness and consistency of all models, with low standard deviations indicating reliable performance across different folds.
The findings highlight the effectiveness of ensemble approaches in achieving high performance while balancing computational efficiency. The primary aim of this study is to achieve high accuracy in detecting driver drowsiness, which is crucial for enhancing road safety. This focus, however, leads to higher computational complexity. The benefits of improved detection accuracy justify the additional computational cost. To make the method more practical for deployment in various real-world scenarios, efforts are being made to explore optimizations that improve real-time performance.

5. Conclusions

Drowsiness while driving poses a significant risk, resulting in decreased cognitive performance and an increased likelihood of accidents. Drowsiness-related vehicle crashes have serious consequences, including trauma, economic costs, injuries, and even fatalities. This study demonstrates the effectiveness of using UWB radar and advanced ensemble models for real-time driver drowsiness detection, focusing on classifying drivers into drowsy and non-drowsy states using data from ultra-wideband radar. The five-minute dataset was divided into one-minute chunks and converted to grayscale images, and a two-dimensional Convolutional Neural Network was used to extract spatial features from these images. Using these features, various machine learning classifiers were trained and tested. Notably, the ensemble classifier RF-XGB-SVM, combining Random Forest, XGBoost, and Support Vector Machine, attained an accuracy of 96.6%. The k-fold cross-validation score was 97%, with a standard deviation of 0.018, indicating stable and consistent performance. Utilizing Generative Adversarial Networks for dataset augmentation enhanced the accuracies of all models, with the RF-XGB-SVM model surpassing the others at an accuracy of 99.58%. The proposed method significantly improves detection accuracy, highlighting its potential to enhance road safety by reducing fatigue-related accidents. Future research could investigate the integration of other sensor modalities for improved detection, as well as the deployment of the system in real-world driving scenarios for comprehensive validation.

Author Contributions

Conceptualization, H.U.R.S. and A.A.; Methodology, H.U.R.S., A.A.S., A.A., M.A.R. and K.Z.; Software, M.A.R., K.Z. and A.Z.; Validation, A.A., M.I., A.A.S., A.Z., J.A. and F.R.; Formal analysis, H.U.R.S., A.A., M.I., A.A.S., A.Z., Á.K.C. and F.R.; Investigation, H.U.R.S., A.A.S., M.A.R., K.Z., S.D., J.A. and Á.K.C.; Data curation, A.A.S.; Writing—original draft, A.A.S., K.Z. and A.A.; Writing—review & editing, M.I.; Visualization, M.A.R.; Supervision, H.U.R.S.; Project administration, S.D. and F.R.; Funding acquisition, S.D., J.A., Á.K.C. and F.R. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by the European University of the Atlantic.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

This research has no associated data.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Martiniuk, A.L.; Senserrick, T.; Lo, S.; Williamson, A.; Du, W.; Grunstein, R.R.; Woodward, M.; Glozier, N.; Stevenson, M.; Norton, R.; et al. Sleep-deprived young drivers and the risk for crash: The DRIVE prospective cohort study. JAMA Pediatr. 2013, 167, 647–655. [Google Scholar] [CrossRef]
  2. World Health Organization. Global Status Report on Road Safety 2015; World Health Organization: Geneva, Switzerland, 2015. [Google Scholar]
  3. National Safety Council. Drivers Are Falling Asleep behind the Wheel. Available online: https://www.nsc.org/road/safety-topics/fatigued-driver? (accessed on 25 December 2023).
  4. Drowsy Driving and Automobile Crashes. Available online: https://www.nhtsa.gov/sites/nhtsa.gov/files/808707.pdf (accessed on 25 December 2023).
  5. Chand, H.V.; Karthikeyan, J. CNN Based Driver Drowsiness Detection System Using Emotion Analysis. Intell. Autom. Soft Comput. 2022, 31, 717. [Google Scholar] [CrossRef]
  6. Fouad, I.A. A robust and efficient EEG-based drowsiness detection system using different machine learning algorithms. Ain Shams Eng. J. 2023, 14, 101895. [Google Scholar] [CrossRef]
  7. Jan, M.T.; Hashemi, A.; Jang, J.; Yang, K.; Zhai, J.; Newman, D.; Tappen, R.; Furht, B. Non-intrusive drowsiness detection techniques and their application in detecting early dementia in older drivers. In Future Technologies Conference; Springer: Berlin/Heidelberg, Germany, 2022; pp. 776–796. [Google Scholar]
  8. Magán, E.; Sesmero, M.P.; Alonso-Weber, J.M.; Sanchis, A. Driver drowsiness detection by applying deep learning techniques to sequences of images. Appl. Sci. 2022, 12, 1145. [Google Scholar] [CrossRef]
  9. Nasri, I.; Karrouchi, M.; Kassmi, K.; Messaoudi, A. A Review of Driver Drowsiness Detection Systems: Techniques, Advantages and Limitations. arXiv 2022, arXiv:2206.07489. [Google Scholar]
  10. Rajkar, A.; Kulkarni, N.; Raut, A. Driver drowsiness detection using deep learning. In Applied Information Processing Systems: Proceedings of ICCET 2021; Springer: Berlin/Heidelberg, Germany, 2022; pp. 73–82. [Google Scholar]
  11. Saleem, A.A.; Siddiqui, H.U.R.; Raza, M.A.; Rustam, F.; Dudley, S.; Ashraf, I. A systematic review of physiological signals based driver drowsiness detection systems. Cogn. Neurodyn. 2023, 17, 1229–1259. [Google Scholar] [CrossRef] [PubMed]
  12. Siddiqui, H.U.R.; Saleem, A.A.; Brown, R.; Bademci, B.; Lee, E.; Rustam, F.; Dudley, S. Non-invasive driver drowsiness detection system. Sensors 2021, 21, 4833. [Google Scholar] [CrossRef] [PubMed]
  13. Thota, J.R.; Jaidhan, B.; Jitendra, M.S.; Shanmuk Srinivas, A.; Venkata Praneel, A. Computer Vision-Based Alert System to Detect Fatigue in Vehicle Drivers. In Advances in Data Science and Management: Proceedings of ICDSM 2021; Springer: Berlin/Heidelberg, Germany, 2022; pp. 533–544. [Google Scholar]
  14. Zilberg, E.; Burton, D.; Xu, M.; Karrar, M.; Lal, S. Methodology and initial analysis results for development of non-invasive and hybrid driver drowsiness detection systems. In Advances in Broadband Communication and Networks; River Publishers: Aalborg, Denmark, 2022; pp. 309–328. [Google Scholar]
  15. Albadawi, Y.; Takruri, M.; Awad, M. A review of recent developments in driver drowsiness detection systems. Sensors 2022, 22, 2069. [Google Scholar] [CrossRef] [PubMed]
  16. Sahayadhas, A.; Sundaraj, K.; Murugappan, M. Detecting driver drowsiness based on sensors: A review. Sensors 2012, 12, 16937–16953. [Google Scholar] [CrossRef]
  17. Triyanti, V.; Iridiastadi, H. Challenges in detecting drowsiness based on driver’s behavior. In IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2017; Volume 277, p. 012042. [Google Scholar]
  18. Budak, U.; Bajaj, V.; Akbulut, Y.; Atila, O.; Sengur, A. An effective hybrid model for EEG-based drowsiness detection. IEEE Sens. J. 2019, 19, 7624–7631. [Google Scholar] [CrossRef]
  19. Cui, J.; Lan, Z.; Sourina, O.; Müller-Wittig, W. EEG-based cross-subject driver drowsiness recognition with an interpretable convolutional neural network. IEEE Trans. Neural Netw. Learn. Syst. 2022, 34, 7921–7933. [Google Scholar] [CrossRef] [PubMed]
  20. Jiang, Y.; Zhang, Y.; Lin, C.; Wu, D.; Lin, C.T. EEG-based driver drowsiness estimation using an online multi-view and transfer TSK fuzzy system. IEEE Trans. Intell. Transp. Syst. 2020, 22, 1752–1764. [Google Scholar] [CrossRef]
  21. Mardi, Z.; Ashtiani, S.N.M.; Mikaili, M. EEG-based drowsiness detection for safe driving using chaotic features and statistical tests. J. Med. Signals Sens. 2011, 1, 130. [Google Scholar] [PubMed]
  22. Noori, S.M.R.; Mikaeili, M. Driving drowsiness detection using fusion of electroencephalography, electrooculography, and driving quality signals. J. Med. Signals Sens. 2016, 6, 39. [Google Scholar] [PubMed]
  23. Ren, Z.; Li, R.; Chen, B.; Zhang, H.; Ma, Y.; Wang, C.; Lin, Y.; Zhang, Y. EEG-based driving fatigue detection using a two-level learning hierarchy radial basis function. Front. Neurorobot. 2021, 15, 618408. [Google Scholar] [CrossRef] [PubMed]
  24. Tuncer, T.; Dogan, S.; Subasi, A. EEG-based driving fatigue detection using multilevel feature extraction and iterative hybrid feature selection. Biomed. Signal Process. Control 2021, 68, 102591. [Google Scholar] [CrossRef]
  25. Barua, S.; Ahmed, M.U.; Ahlström, C.; Begum, S. Automatic driver sleepiness detection using EEG, EOG and contextual information. Expert Syst. Appl. 2019, 115, 121–135. [Google Scholar] [CrossRef]
  26. Chieh, T.C.; Mustafa, M.M.; Hussain, A.; Hendi, S.F.; Majlis, B.Y. Development of vehicle driver drowsiness detection system using electrooculogram (EOG). In Proceedings of the 2005 1st International Conference on Computers, Communications, & Signal Processing with Special Track on Biomedical Engineering, Honolulu, HI, USA, 15–17 August 2005; IEEE: New York, NY, USA, 2005; pp. 165–168. [Google Scholar]
  27. Hayawi, A.A.; Waleed, J. Driver’s drowsiness monitoring and alarming auto-system based on EOG signals. In Proceedings of the 2019 2nd International Conference on Engineering Technology and Its Applications (IICETA), Al-Najef, Iraq, 27–28 August 2019; IEEE: New York, NY, USA, 2019; pp. 214–218. [Google Scholar]
  28. Jiao, Y.; Deng, Y.; Luo, Y.; Lu, B.L. Driver sleepiness detection from EEG and EOG signals using GAN and LSTM networks. Neurocomputing 2020, 408, 100–111. [Google Scholar] [CrossRef]
  29. Wang, H.; Wu, C.; Li, T.; He, Y.; Chen, P.; Bezerianos, A. Driving fatigue classification based on fusion entropy analysis combining EOG and EEG. IEEE Access 2019, 7, 61975–61986. [Google Scholar] [CrossRef]
  30. Zhu, X.; Zheng, W.L.; Lu, B.L.; Chen, X.; Chen, S.; Wang, C. EOG-based drowsiness detection using convolutional neural networks. In Proceedings of the 2014 International Joint Conference on Neural Networks (IJCNN), Beijing, China, 6–11 July 2014; IEEE: New York, NY, USA, 2014; pp. 128–134. [Google Scholar]
  31. Ebrahimian, S.; Nahvi, A.; Tashakori, M.; Salmanzadeh, H.; Mohseni, O.; Leppänen, T. Multi-Level Classification of Driver Drowsiness by Simultaneous Analysis of ECG and Respiration Signals Using Deep Neural Networks. Int. J. Environ. Res. Public Health 2022, 19, 10736. [Google Scholar] [CrossRef]
  32. Kiashari, S.E.H.; Nahvi, A.; Bakhoda, H.; Homayounfard, A.; Tashakori, M. Evaluation of driver drowsiness using respiration analysis by thermal imaging on a driving simulator. Multimed. Tools Appl. 2020, 79, 17793–17815. [Google Scholar] [CrossRef]
  33. Lee, B.G.; Lee, B.L.; Chung, W.Y. Mobile healthcare for automatic driving sleep-onset detection using wavelet-based EEG and respiration signals. Sensors 2014, 14, 17915–17936. [Google Scholar] [CrossRef] [PubMed]
  34. Musicant, O.; Richmond-Hacham, B.; Botzer, A. Estimating Driver Fatigue Based on Heart Activity, Respiration Rate. In Proceedings of the Lindholmen Conference Centre, Online, 19–20 October 2022; p. 78. [Google Scholar]
  35. Solaz, J.; Laparra-Hernández, J.; Bande, D.; Rodríguez, N.; Veleff, S.; Gerpe, J.; Medina, E. Drowsiness detection based on the analysis of breathing rate obtained from real-time image recognition. Transp. Res. Procedia 2016, 14, 3867–3876. [Google Scholar] [CrossRef]
  36. Arefnezhad, S.; Eichberger, A.; Frühwirth, M.; Kaufmann, C.; Moser, M. Driver drowsiness classification using data fusion of vehicle-based measures and ECG signals. In Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Toronto, ON, Canada, 11–14 October 2020; IEEE: New York, NY, USA, 2020; pp. 451–456. [Google Scholar]
  37. Babaeian, M.; Mozumdar, M. Driver drowsiness detection algorithms using electrocardiogram data analysis. In Proceedings of the 2019 IEEE 9th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 7–9 January 2019; IEEE: New York, NY, USA, 2019; pp. 0001–0006. [Google Scholar]
  38. Yaacob, S.; Affandi, N.A.I.; Krishnan, P.; Rasyadan, A.; Yaakop, M.; Mohamed, F. Drowsiness detection using EEG and ECG signals. In Proceedings of the 2020 IEEE 2nd International Conference on Artificial Intelligence in Engineering and Technology (IICAIET), Kota Kinabalu, Malaysia, 26–27 September 2020; IEEE: New York, NY, USA, 2020; pp. 1–5. [Google Scholar]
  39. Fan, Y.; Gu, F.; Wang, J.; Wang, J.; Lu, K.; Niu, J. SafeDriving: An effective abnormal driving behavior detection system based on EMG signals. IEEE Internet Things J. 2021, 9, 12338–12350. [Google Scholar] [CrossRef]
  40. Naim, F.; Mustafa, M.; Sulaiman, N.; Rahman, N.A.A. The study of time domain features of EMG signals for detecting driver’s drowsiness. In Recent Trends in Mechatronics Towards Industry 4.0: Selected Articles from iM3F 2020, Malaysia; Springer: Berlin/Heidelberg, Germany, 2022; pp. 427–438. [Google Scholar]
  41. Rahman, N.A.; Mustafa, M.; Sulaiman, N.; Samad, R.; Abdullah, N. EMG signal segmentation to predict driver’s vigilance state. In Human-Centered Technology for a Better Tomorrow: Proceedings of HUMENS 2021; Springer: Berlin/Heidelberg, Germany, 2022; pp. 29–42. [Google Scholar]
  42. Satti, A.T.; Kim, J.; Yi, E.; Cho, H.Y.; Cho, S. Microneedle array electrode-based wearable EMG system for detection of driver drowsiness through steering wheel grip. Sensors 2021, 21, 5091. [Google Scholar] [CrossRef] [PubMed]
  43. Wali, M.K. Ffbpnn-based high drowsiness classification using EMG and WPT. Biomed. Eng. Appl. Basis Commun. 2020, 32, 2050023. [Google Scholar] [CrossRef]
  44. Xie, A. Effect of sleep on breathing-why recurrent apneas are only seen during sleep. J. Thorac. Dis. 2012, 4, 194. [Google Scholar] [PubMed]
  45. Warwick, B.; Symons, N.; Chen, X.; Xiong, K. Detecting driver drowsiness using wireless wearables. In Proceedings of the 2015 IEEE 12th International Conference on Mobile Ad Hoc and Sensor Systems, Dallas, TX, USA, 19–22 October 2015; IEEE: New York, NY, USA, 2015; pp. 585–588. [Google Scholar]
  46. Yang, C.; Wang, X.; Mao, S. Respiration monitoring with RFID in driving environments. IEEE J. Sel. Areas Commun. 2020, 39, 500–512. [Google Scholar] [CrossRef]
  47. Brown, R.; Ghavami, N.; Adjrad, M.; Ghavami, M.; Dudley, S. Occupancy based household energy disaggregation using ultra wideband radar and electrical signature profiles. Energy Build. 2017, 141, 134–141. [Google Scholar] [CrossRef]
  48. Chong, C.C.; Watanabe, F.; Inamura, H. Potential of UWB technology for the next generation wireless communications. In Proceedings of the 2006 IEEE Ninth International Symposium on Spread Spectrum Techniques and Applications, Manaus, Brazil, 28–31 August 2006; IEEE: New York, NY, USA, 2006; pp. 422–429. [Google Scholar]
  49. Tsang, T.K.; El-Gamal, M.N. Ultra-wideband (UWB) communications systems: An overview. In Proceedings of the 3rd International IEEE-NEWCAS Conference, Quebec City, QC, Canada, 19–22 June 2005; IEEE: New York, NY, USA, 2005; pp. 381–386. [Google Scholar]
  50. Wang, X.; Dinh, A.; Teng, D. Radar sensing using ultra wideband–design and implementation. In Ultra Wideband—Current Status and Future Trends; InTech: London, UK, 2012; pp. 41–64. [Google Scholar]
  51. Rana, S.P.; Dey, M.; Siddiqui, H.U.; Tiberi, G.; Ghavami, M.; Dudley, S. UWB localization employing supervised learning method. In Proceedings of the 2017 IEEE 17th International Conference on Ubiquitous Wireless Broadband (ICUWB), Salamanca, Spain, 12–15 September 2017; IEEE: New York, NY, USA, 2017; pp. 1–5. [Google Scholar]
  52. Rana, S.P.; Dey, M.; Brown, R.; Siddiqui, H.U.; Dudley, S. Remote vital sign recognition through machine learning augmented UWB. In Proceedings of the 12th European Conference on Antennas and Propagation (EuCAP 2018), London, UK, 9–13 April 2018. [Google Scholar]
  53. Zafar, K.; Siddiqui, H.U.R.; Majid, A.; Saleem, A.A.; Raza, A.; Rustam, F.; Dudley, S. Deep Learning Based Feature Engineering to Detect Anterior and Inferior Myocardial Infarction using UWB Radar Data. IEEE Access 2023, 11, 97745–97757. [Google Scholar] [CrossRef]
  54. Gao, Z.; Wang, X.; Yang, Y.; Mu, C.; Cai, Q.; Dang, W.; Zuo, S. EEG-based spatio–temporal convolutional neural network for driver fatigue evaluation. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 2755–2763. [Google Scholar] [CrossRef]
  55. Pandey, N.N.; Muppalaneni, N.B. Temporal and spatial feature based approaches in drowsiness detection using deep learning technique. J. Real-Time Image Process. 2021, 18, 2287–2299. [Google Scholar] [CrossRef]
  56. Babaeian, M.; Amal Francis, K.; Dajani, K.; Mozumdar, M. Real-time driver drowsiness detection using wavelet transform and ensemble logistic regression. Int. J. Intell. Transp. Syst. Res. 2019, 17, 212–222. [Google Scholar] [CrossRef]
  57. Awais, M.; Badruddin, N.; Drieberg, M. A hybrid approach to detect driver drowsiness utilizing physiological signals to improve system performance and wearability. Sensors 2017, 17, 1991. [Google Scholar] [CrossRef] [PubMed]
  58. Suresh, A.; Naik, A.S.; Pramod, A.; Kumar, N.A.; Mayadevi, N. Analysis and Implementation of Deep Convolutional Neural Network Models for Intelligent Driver Drowsiness Detection System. In Proceedings of the 2023 7th International Conference on Intelligent Computing and Control Systems (ICICCS), Madurai, India, 17–19 May 2023; IEEE: New York, NY, USA, 2023; pp. 553–559. [Google Scholar]
  59. Li, A.; Ma, X.; Guo, J.; Zhang, J.; Wang, J.; Zhao, K.; Li, Y. Driver fatigue detection and human-machine cooperative decision-making for road scenarios. Multimed. Tools Appl. 2023, 83, 12487–12518. [Google Scholar] [CrossRef]
  60. Majeed, F.; Shafique, U.; Safran, M.; Alfarhood, S.; Ashraf, I. Detection of drowsiness among drivers using novel deep convolutional neural network model. Sensors 2023, 23, 8741. [Google Scholar] [CrossRef] [PubMed]
  61. Shakeel, M.F.; Bajwa, N.A.; Anwaar, A.M.; Sohail, A.; Khan, A. Detecting driver drowsiness in real time through deep learning based object detection. In International Work-Conference on Artificial Neural Networks; Springer: Berlin/Heidelberg, Germany, 2019; pp. 283–296. [Google Scholar]
  62. Mohan, R.; Chalasani, S.; Mary, S.S.C.; Chauhan, A.; Parte, S.A.; Anusuya, S. Identification of Driver Drowsiness Detection using a Regularized Extreme Learning Machine. In Proceedings of the 2023 Second International Conference on Electronics and Renewable Systems (ICEARS), Tuticorin, India, 2–4 March 2023; IEEE: New York, NY, USA, 2023; pp. 1233–1238. [Google Scholar]
  63. Miah, A.A.; Ahmad, M.; Mim, K.Z. Drowsiness detection using eye-blink pattern and mean eye landmarks’ distance. In Proceedings of the International Joint Conference on Computational Intelligence: IJCCI 2018, Seville, Spain, 18–20 September 2018; Springer: Berlin/Heidelberg, Germany, 2020; pp. 111–121. [Google Scholar]
  64. Kawtikwar, V.N.; Tiwari, G.; Patil, C.; Pandey, N.; Tiwari, P. Eyes on the Road: A Machine Learning-based Fatigue Detection System for Safer Driving. In Proceedings of the 2023 International Conference on Inventive Computation Technologies (ICICT), Lalitpur, Nepal, 26–28 April 2023; IEEE: New York, NY, USA, 2023; pp. 1799–1805. [Google Scholar]
  65. Liu, S.; Zhao, L.; Yang, X.; Du, Y.; Li, M.; Zhu, X.; Dai, Z. Remote drowsiness detection based on the mmWave FMCW radar. IEEE Sens. J. 2022, 22, 15222–15234. [Google Scholar] [CrossRef]
  66. Ananthi, S.; Sathya, R.; Vaidehi, K.; Vijaya, G. Drivers Drowsiness Detection using Image Processing and I-Ear Techniques. In Proceedings of the 2023 7th International Conference on Intelligent Computing and Control Systems (ICICCS), Madurai, India, 17–19 May 2023; IEEE: New York, NY, USA, 2023; pp. 1326–1331. [Google Scholar]
  67. Bajaj, J.S.; Kumar, N.; Kaushal, R.K.; Gururaj, H.; Flammini, F.; Natarajan, R. System and method for driver drowsiness detection using behavioral and sensor-based physiological measures. Sensors 2023, 23, 1292. [Google Scholar] [CrossRef]
  68. Srivastava, A.; Bansal, S.; Sehgal, S.S. Real-Time Based Driver’s Drowsiness and Fatigue Detection System. In Proceedings of the 2022 International Conference on Cyber Resilience (ICCR), Dubai, United Arab Emirates, 6–7 October 2022; IEEE: New York, NY, USA, 2022; pp. 1–6. [Google Scholar]
  69. Suresh, Y.; Khandelwal, R.; Nikitha, M.; Fayaz, M.; Soudhri, V. Driver drowsiness detection using deep learning. In Proceedings of the 2021 2nd International Conference on Smart Electronics and Communication (ICOSEC), Trichy, India, 7–9 October 2021; IEEE: New York, NY, USA, 2021; pp. 1526–1531. [Google Scholar]
  70. Civik, E.; Yuzgec, U. Real-time driver fatigue detection system with deep learning on a low-cost embedded system. Microprocess. Microsyst. 2023, 99, 104851. [Google Scholar] [CrossRef]
  71. Kannan, R.; Jahnavi, P.; Megha, M. Driver Drowsiness Detection and Alert System. In Proceedings of the 2023 IEEE International Conference on Integrated Circuits and Communication Systems (ICICACS), Raichur, India, 24–25 February 2023; IEEE: New York, NY, USA, 2023; pp. 1–5. [Google Scholar]
  72. Kumar, D.; Nair, S.R.; Jayaraj, R. Driver Drowsiness Detection Using Open CV and DLIB. In Proceedings of the 2023 International Conference on Networking and Communications (ICNWC), Chennai, India, 5–6 April 2023; IEEE: New York, NY, USA, 2023; pp. 1–7. [Google Scholar]
  73. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. Commun. ACM 2020, 63, 139–144. [Google Scholar] [CrossRef]
Figure 1. Proposed methodology diagram of the system.
Figure 2. Raw radar signal of chest movement.
Figure 3. Subject positioned in front of the radar during data collection (image adapted from [12]).
Figure 4. Grayscale images converted from the radar signals: (a) drowsy class; (b) fresh class.
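For readers who want to reproduce the kind of conversion shown in Figure 4, the following is a hypothetical sketch that maps a one-minute radar chunk onto a grayscale image. The chunk length, the square 128 × 128 reshape, and the min–max scaling are assumptions for illustration only; the exact conversion parameters are given in the methodology section of the paper.

```python
# Hypothetical sketch: one-minute radar chunk -> grayscale image (cf. Figure 4).
# Chunk length, 128x128 layout, and min-max scaling are assumed values.
import numpy as np
from PIL import Image

rng = np.random.default_rng(0)
chunk = rng.random(128 * 128)            # stand-in for a one-minute signal chunk
frame = chunk.reshape(128, 128)          # assumed square layout
frame = (255 * (frame - frame.min()) / (np.ptp(frame) + 1e-9)).astype(np.uint8)
Image.fromarray(frame, mode="L").save("chunk_gray.png")
```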
Figure 5. Architecture diagram of the CSFE feature-extraction network.
Figure 6. The architecture of the GAN: (a) generator; (b) discriminator.
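As an illustrative companion to Figure 6, the sketch below defines a minimal generator–discriminator pair in Keras for augmenting the 100-dimensional feature vectors shown in Table 2. All layer widths and the noise dimension are assumptions; the paper’s actual GAN topology is the one depicted in Figure 6, not this sketch.

```python
# Minimal GAN sketch for 100-dimensional feature vectors (cf. Figure 6).
# Layer widths and NOISE_DIM are assumed, not taken from the paper.
from tensorflow.keras import layers, models

NOISE_DIM = 32  # assumed latent size

generator = models.Sequential([
    layers.Dense(64, activation="relu", input_shape=(NOISE_DIM,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(100, activation="sigmoid"),  # Table 2 features lie in [0, 1]
])

discriminator = models.Sequential([
    layers.Dense(128, activation="relu", input_shape=(100,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # real vs. generated sample
])
```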
Figure 7. The architecture of the ensemble models: (a) RF-XGB-SVM; (b) RF-MLP.
Figure 8. Confusion matrix of RF-XGB-SVM on the original dataset.
Figure 9. Confusion matrix of RF-XGB-SVM on the augmented dataset.
Figure 10. Comparison of accuracies on both datasets.
Table 1. Neural Network Model Configuration.

| Layer Type | Configuration |
| --- | --- |
| Input Rescaling | Scaling factor: 1.0/255 |
| Convolutional (Conv2D) | Filters: 64, Kernel: (3, 3), Activation: ReLU |
| Max Pooling (MaxPooling2D) | Pool Size: (2, 2) |
| Convolutional (Conv2D) | Filters: 32, Kernel: (3, 3), Activation: ReLU |
| Max Pooling (MaxPooling2D) | Pool Size: (2, 2) |
| Flatten | N/A |
| Dense | Neurons: 128, Activation: ReLU |
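The configuration in Table 1 maps directly onto a small Keras model. The sketch below is a minimal, hypothetical reconstruction: the input image size, the grayscale channel, and the absence of a classification head are assumptions, since the table lists only the layers shown.

```python
# Minimal Keras sketch of the network in Table 1.
# Input shape (128, 128, 1) is an assumed value, not taken from the table.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(128, 128, 1)),  # assumed input size
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),  # 128-dimensional spatial feature vector
])
model.summary()
```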
Table 2. Snippets of the dataset post augmentation.

| Sr. No. | 0 | 1 | 2 | … | 97 | 98 | 99 | Label |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 0.39622176 | 0.4066065 | 0.51660347 | … | 0.46284893 | 0.48666704 | 0.3540284 | 0 |
| 1 | 0.41745254 | 0.54292786 | 0.5974161 | … | 0.4108211 | 0.47298706 | 0.3946942 | 0 |
| 2 | 0.4305402 | 0.48049223 | 0.5213418 | … | 0.40519458 | 0.6011478 | 0.22075108 | 0 |
| 3 | 0.422287 | 0.6520922 | 0.63822633 | … | 0.34400716 | 0.442513 | 0.39881736 | 0 |
| 4 | 0.41309932 | 0.47723842 | 0.51149786 | … | 0.3870272 | 0.5330019 | 0.20417304 | 0 |
| 5 | 0.52232856 | 0.45020208 | 0.57795453 | … | 0.46976835 | 0.4652797 | 0.17163844 | 0 |
| 6 | 0.4189465 | 0.47918236 | 0.51208353 | … | 0.3875902 | 0.5244716 | 0.19999994 | 0 |
| 7 | 0.40711606 | 0.33762354 | 0.5082019 | … | 0.5300679 | 0.5099058 | 0.32978377 | 0 |
| 8 | 0.43408674 | 0.6784697 | 0.64829403 | … | 0.33602956 | 0.43760943 | 0.38681032 | 0 |
| … | … | … | … | … | … | … | … | … |
| 2391 | 0.5387267 | 0.540245 | 0.6575716 | … | 0.4398443 | 0.5939529 | 0.30980185 | 1 |
| 2392 | 0.5398885 | 0.62441266 | 0.6159498 | … | 0.39132738 | 0.79611707 | 0.20297673 | 1 |
| 2393 | 0.5654393 | 0.59164715 | 0.63794005 | … | 0.4651416 | 0.7560228 | 0.23912714 | 1 |
| 2394 | 0.5286886 | 0.5439412 | 0.6900763 | … | 0.46690488 | 0.5235571 | 0.33242399 | 1 |
| 2395 | 0.34617162 | 0.47954148 | 0.5320659 | … | 0.49993005 | 0.40980458 | 0.19199125 | 1 |
| 2396 | 0.53729063 | 0.5442955 | 0.67633796 | … | 0.4568716 | 0.55147946 | 0.32810143 | 1 |
| 2397 | 0.5563764 | 0.61615664 | 0.633111 | … | 0.4239145 | 0.7834681 | 0.22323802 | 1 |
| 2398 | 0.5673374 | 0.5729127 | 0.6460825 | … | 0.47191876 | 0.7099173 | 0.26245716 | 1 |
| 2399 | 0.49523562 | 0.5511889 | 0.705765 | … | 0.47301993 | 0.4585747 | 0.34522685 | 1 |
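Table 2’s layout, 2400 rows of 100 CNN features plus a binary label, fits a plain DataFrame. The sketch below uses synthetic values and an assumed 50/50 class split purely to illustrate the structure; the actual feature values and class proportions are those of the augmented dataset.

```python
# Sketch of the augmented feature-table layout in Table 2.
# Values and the 50/50 class split are stand-ins, not the paper's data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
features = rng.random((2400, 100))                     # synthetic feature values
df = pd.DataFrame(features, columns=[str(i) for i in range(100)])
df["Label"] = np.repeat([0, 1], 1200)                  # assumed class balance
print(df.head())
```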
Table 3. Classifiers and Their Hyperparameters.

| Classifiers | Hyperparameters |
| --- | --- |
| SVM | C = 10, kernel = ‘rbf’ |
| RF | max_depth = None, n_estimators = 100 |
| XGB | learning_rate = 0.2, n_estimators = 50 |
| MLP | alpha = 0.001, hidden_layer_sizes = (100,) |
| RF-MLP | RF (max_depth = None, n_estimators = 100), MLP (alpha = 0.001, hidden_layer_sizes = (100,)) |
| RF-XGB-SVM | RF (max_depth = None, n_estimators = 100), XGB (learning_rate = 0.2, n_estimators = 50), SVM (C = 10, kernel = ‘rbf’) |
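The hyperparameters in Table 3 translate directly into scikit-learn and XGBoost objects. Below is a minimal sketch of the RF-XGB-SVM hard-voting ensemble; the synthetic 100-feature data stands in for the CNN feature matrix, which is not reproduced here.

```python
# Sketch of the RF-XGB-SVM hard-voting ensemble with Table 3 hyperparameters.
# Synthetic data replaces the CNN feature matrix for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from xgboost import XGBClassifier

X, y = make_classification(n_samples=600, n_features=100, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, max_depth=None)),
        ("xgb", XGBClassifier(n_estimators=50, learning_rate=0.2)),
        ("svm", SVC(C=10, kernel="rbf")),
    ],
    voting="hard",  # majority vote across the three base classifiers
)
ensemble.fit(X_train, y_train)
print(ensemble.score(X_test, y_test))
```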
Table 4. Classification metrics of the classifiers on the test data.

| Classifiers | Accuracy (%) | Precision | Recall | F1-Score |
| --- | --- | --- | --- | --- |
| SVM | 91.6 | 0.91 | 0.91 | 0.91 |
| RF | 93.33 | 0.94 | 0.94 | 0.94 |
| XGB | 91.6 | 0.91 | 0.92 | 0.92 |
| MLP | 74.6 | 0.74 | 0.75 | 0.75 |
| RF-MLP | 95 | 0.95 | 0.95 | 0.95 |
| RF-XGB-SVM | 96.6 | 0.97 | 0.97 | 0.97 |
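The metrics reported in Tables 4 and 5 can be computed with standard scikit-learn calls. The sketch below uses placeholder labels and predictions; the macro averaging is an assumption, as the averaging scheme is not stated in the tables.

```python
# Sketch of computing the Table 4/5 metrics; labels and predictions are stand-ins.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [0, 0, 1, 1, 1, 0]   # placeholder test labels
y_pred = [0, 0, 1, 1, 0, 0]   # placeholder classifier predictions
acc = accuracy_score(y_true, y_pred)
prec, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="macro")
print(f"Accuracy {100 * acc:.1f}%, Precision {prec:.2f}, Recall {rec:.2f}, F1 {f1:.2f}")
```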
Table 5. Results of the classifiers on the augmented dataset.

| Classifiers | Accuracy (%) | Precision | Recall | F1-Score |
| --- | --- | --- | --- | --- |
| SVM | 98.76 | 0.99 | 0.99 | 0.99 |
| RF | 99.17 | 0.99 | 0.99 | 0.99 |
| XGB | 99.17 | 0.99 | 0.99 | 0.99 |
| MLP | 99.5 | 1.00 | 1.00 | 1.00 |
| RF-MLP | 99.3 | 0.99 | 0.99 | 0.99 |
| RF-XGB-SVM | 99.58 | 1.00 | 1.00 | 1.00 |
Table 6. K-fold cross-validation results on the original dataset.

| Classifiers | Accuracy ± Std |
| --- | --- |
| SVM | 0.91 ± 0.05 |
| RF | 0.94 ± 0.02 |
| XGB | 0.91 ± 0.04 |
| MLP | 0.73 ± 0.04 |
| RF-MLP | 0.95 ± 0.03 |
| RF-XGB-SVM | 0.97 ± 0.018 |
Table 7. K-fold cross-validation results on the augmented dataset.

| Classifiers | Accuracy ± Std |
| --- | --- |
| SVM | 0.98 ± 0.01 |
| RF | 0.98 ± 0.01 |
| XGB | 0.97 ± 0.01 |
| MLP | 0.99 ± 0.01 |
| RF-MLP | 0.98 ± 0.01 |
| RF-XGB-SVM | 0.99 ± 0.01 |
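The mean ± standard deviation figures in Tables 6 and 7 follow the usual k-fold protocol. A self-contained sketch is shown below; the fold count (here 10) and the synthetic stand-in data are assumptions, since this section does not state the value of k.

```python
# Sketch of the k-fold protocol behind Tables 6 and 7 (cv=10 is assumed).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=100, random_state=0)
clf = RandomForestClassifier(n_estimators=100, max_depth=None)
scores = cross_val_score(clf, X, y, cv=10)
print(f"Accuracy ± Std: {scores.mean():.2f} ± {scores.std():.3f}")
```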
Table 8. Computational time of the classifiers on the original dataset.

| Classifiers | Computational Time (s) |
| --- | --- |
| SVM | 1.53 |
| RF | 2.47 |
| XGB | 2.81 |
| MLP | 3.63 |
| RF-MLP | 3.72 |
| RF-XGB-SVM | 4.15 |
Table 9. Computational time of the classifiers on the augmented dataset.

| Classifiers | Computational Time (s) |
| --- | --- |
| SVM | 4.19 |
| RF | 4.22 |
| XGB | 3.93 |
| MLP | 5.1 |
| RF-MLP | 4.3 |
| RF-XGB-SVM | 4.7 |
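Timings such as those in Tables 8 and 9 can be captured with a wall-clock timer around model fitting. The sketch below illustrates one way to do this; what exactly was timed and on which hardware is not specified in this section, so the measurement scope here is an assumption.

```python
# Sketch of measuring training time as in Tables 8 and 9; data is synthetic.
import time

from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=100, random_state=0)
start = time.perf_counter()
SVC(C=10, kernel="rbf").fit(X, y)
print(f"Training time: {time.perf_counter() - start:.2f} s")
```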
Table 10. Comparison with other studies.

| Study | Accuracy |
| --- | --- |
| [12] | 87.5% |
| Proposed | 99.58% |