Article

A Novel Deep-Learning Model for Remote Driver Monitoring in SDN-Based Internet of Autonomous Vehicles Using 5G Technologies

Computer Engineering Department, College of Engineering and Technology, Arab Academy for Science and Technology (AAST), Alexandria 1029, Egypt
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(2), 875; https://doi.org/10.3390/app13020875
Submission received: 6 December 2022 / Revised: 26 December 2022 / Accepted: 5 January 2023 / Published: 8 January 2023
(This article belongs to the Special Issue Autonomous Vehicles for Public Transportation Services)

Abstract
The rapid advancement of the Internet of Things (IoT) and its integration with Artificial Intelligence (AI) techniques are expected to play a crucial role in future Intelligent Transportation Systems (ITS). Additionally, continuous progress in the autonomous-vehicle industry will accelerate their near-term adoption in smart cities, enabling safe, sustainable, and accessible trips for passengers in different public and private means of transportation. In this article, we investigate the adoption of different 5G technologies, mainly Software-Defined Networking (SDN), to support the ultra-low-delay and reliability requirements of delegating control of level-2 autonomous vehicles to a Remote-Control Center (RCC). This delegation occurs upon detection of a drowsy driver by our proposed deep-learning technique, deployed at the edge to reduce accidents and road congestion. The deep-learning model was evaluated and achieved higher accuracy, precision, and recall than comparable methods. The role of SDN is to implement network slicing to achieve the Quality of Service (QoS) level required in this emergency case. Deploying the QoS support available in an SDN-based network aims to decrease the end-to-end delay of the feedback control signals returned to the autonomous vehicle, which are sent to remotely activate the stopping system or to switch the vehicle to direct teleoperation mode. The Mininet-WiFi emulator, which is tailored to emulate radio access networks, is deployed to evaluate the performance of the proposed adaptive SDN framework. Our simulation experiments, conducted on realistic vehicular scenarios, revealed significant improvements in throughput and average Round-Trip Time (RTT).

1. Introduction

There is rapid progress in vehicle automation technologies, which has resulted in the availability of autonomous cars for consumer purchase. According to [1], the global autonomous car market is expected to reach a size of nearly 62 billion U.S. dollars in 2026. Six levels of driving automation were defined by the Society of Automotive Engineers, where level 0 is the case of no automation and level 5 is the case of full automation, as illustrated in Figure 1. The primary advantages of autonomous vehicles have been named by the National Highway Traffic Safety Administration (NHTSA) as follows: safety, economic and societal benefits, efficiency and convenience, and mobility. Although autonomous vehicles have higher accident rates than human-driven vehicles, the injuries are less severe. On average, there are 9.1 autonomous vehicle accidents per million miles driven [2], compared to 4.1 crashes per million miles for regular vehicles. There is therefore an urgent need to strengthen the concept of adding a remote control center to the automated driving model that takes over control in emergency cases. Article [3] introduces three service categories where the concept of a control center is deployed in automated driving: emergency service, fleet service, and teleoperation service (direct and indirect teleoperation).
Moreover, to address the challenges of autonomous vehicles (AVs) in terms of QoS, high mobility, and dynamic topologies, a programmable and scalable network paradigm is required to manage and control this communication scenario [4]. This can be achieved by deploying software-defined networking (SDN).
SDN is a network paradigm that separates the control plane and the data plane. As a result of this separation, the control plane is implemented in a centralized controller, and then, forwarding rules are installed by this controller in the routers (network switches), which simplifies policy enforcement, network reconfiguration, programmability and evolution [5,6]. Currently, the controller-data plane interface (C-DPI) standard that permits communication between controllers and data plane devices (network switches) is the OpenFlow protocol.
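The control/data plane separation described above can be illustrated with a toy model of a flow table: the controller installs match/action entries, and the switch only performs lookups per packet. The rule structure below is a deliberate simplification; real OpenFlow entries also carry counters, timeouts, and richer match fields.

```python
# Toy model of an OpenFlow-style flow table: the controller installs
# match -> action entries; the data plane only performs lookups.

def install_rule(flow_table, match, action, priority=0):
    """Controller-side: add a forwarding rule to a switch's flow table."""
    flow_table.append({"match": match, "action": action, "priority": priority})
    # Keep highest-priority rules first, as an OpenFlow switch would.
    flow_table.sort(key=lambda r: -r["priority"])

def forward(flow_table, packet):
    """Switch-side: return the action of the first matching rule."""
    for rule in flow_table:
        if all(packet.get(k) == v for k, v in rule["match"].items()):
            return rule["action"]
    return "send_to_controller"   # table miss -> ask the controller

table = []
install_rule(table, {"dst_port": 5001}, "queue_safety", priority=10)
install_rule(table, {"dst_port": 5002}, "queue_infotainment", priority=5)

print(forward(table, {"dst_port": 5001}))  # queue_safety
print(forward(table, {"dst_port": 9999}))  # send_to_controller
```

In a real deployment, the lookup runs in the switch's fast path, while table misses trigger an OpenFlow `packet-in` to the controller.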
On the other hand, artificial intelligence (AI) techniques have become crucial components of autonomous vehicles. AI techniques are applied in autonomous vehicles in different domains, comprising sensor data processing, path planning, path execution, monitoring of vehicle conditions, and insurance data collection.
In addition, edge computing (EC) overcomes the problems encountered with V2I (Vehicle to Infrastructure) approaches of SDN within VANET (Vehicular Ad hoc Network) RSU (Road Side Unit) communications. Therefore, a performance investigation of the usage of MEC (Mobile Edge Computing) as the edge-layer implementation is carried out to determine its impact on our proposed model. MEC integrates storage and processing at intermediate nodes co-located with the base stations of cellular networks, which allows the deployment of cloud computing services within the radio access network (RAN) [7].
Article [8] presented a 5G V2X ecosystem based on SDN to provide the Internet of Vehicles (IoV). Simulations using ns3 were conducted to evaluate vehicular Internet-based video service traffic and vehicle-to-vehicle (V2V) communications in urban and rural scenarios. Article [9] developed a framework using 5G network slicing for application-driven vehicular networks. The authors evaluated their model using simulations and compared their results to the state-of-the-art approaches.
There are three basic categories of sleepiness detection techniques: the measurement of vehicle characteristics, physiological characteristics, or behavioral characteristics. Vehicle characteristic measurement assesses driver drowsiness based on vehicle motions such as the location of the vehicle in the lane, steering wheel movement, and brake and accelerator pedal actions. Measurement of physiological characteristics includes detecting driver drowsiness using brain signals, heart rate, and nerve impulses, among other things. These solutions are not commercially viable since they are obtrusive and place additional stress on the driver's body. Behavioral characteristic measurement is based on the driver's expression and facial movement to determine their level of tiredness. This method does not cause any disruption and depends on the camera capturing several facial expressions and poses [10].
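As a concrete illustration of a behavioral cue (not the specific classifier proposed later in this article), a widely used measure computable from eye landmarks is the eye aspect ratio (EAR), which drops toward zero as the eye closes. The landmark coordinates below are invented for the sketch.

```python
import math

def eye_aspect_ratio(eye):
    """EAR for one eye given 6 (x, y) landmarks ordered around the eye:
    ratio of the two vertical openings to the horizontal width."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

# Hypothetical landmarks: an open eye vs. a nearly closed one.
open_eye   = [(0, 2), (2, 4), (4, 4), (6, 2), (4, 0), (2, 0)]
closed_eye = [(0, 2), (2, 2.3), (4, 2.3), (6, 2), (4, 1.7), (2, 1.7)]

print(eye_aspect_ratio(open_eye))    # ~0.67
print(eye_aspect_ratio(closed_eye))  # ~0.1
```

A behavioral system would track such a measure over frames and flag sustained low values, rather than thresholding a single frame.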
A recent survey [11] presented recent applications of drowsiness detection and discussed the value of both temporal and spatial feature-based techniques. Although the network can be trained relatively quickly using the temporal feature-based method, it is less accurate. The spatial feature-based technique, on the other hand, performs well in terms of accuracy but lags in terms of training time, meaning that it takes longer to train the network. Since precision and timing are both crucial components of sleepiness detection, the authors concluded that the best results across different studies were achieved by analyzing the spatial features.
Motivated by the important role that IoT and autonomous-vehicle technologies play in today's intelligent transportation systems, and by the low-latency and high-reliability requirements of safety and emergency services, our aim in this research is to develop a framework that ensures passenger safety. A deep-learning model deployed at the edge (in the vehicle) detects a drowsy driver, and this information message is propagated, with the QoS it requires, over the SDN to the remote control center (RCC), which switches the autonomous vehicle to teleoperation mode. The SDN core network satisfies the end-to-end delay constraint of this delay-sensitive application by exploiting the SDN global view of network conditions to reallocate the available bandwidth among traffic flows based on the priority of the different traffic classes. The SDN controller communicates with the forwarding devices using the OpenFlow protocol to build this global view. The main contributions of this research are as follows:
  • The deployment of the Software-Defined Network (SDN) paradigm to implement the 5G slicing feature to allow the dynamic allocation of resources to support the Key Performance Indicators (KPIs) (e.g., low latency, low packet loss requirements) of heterogeneous autonomous vehicle applications.
  • The application of the edge computing concept by deploying AI techniques at the edge, in an autonomous vehicle, to remotely monitor driver status and report only critical cases to the Remote Control Center (RCC). The integration of AI techniques with the edge-computing paradigm results in a significant decrease in the required bandwidth. In addition, the MEC concept is deployed to implement the safety servers and to provide further support for the delay requirement.
  • A complete pipeline that starts from the video stream captured by the mobile phone, follows the machine-learning steps to determine whether or not a driver is drowsy, and finally employs SDN as the implementation technique of 5G slicing to forward the critical messages with the required level of QoS to the control center.
  • A validation of the proposed SDN-VANET QoS framework using a realistic urban congestion scenario and performing a comparison between the adaptive and the QoS-free approach.
The rest of the article is organized as follows: Section 2 gives a thorough description of our intelligent and adaptive QoS proposed framework after giving a brief review of communication technologies deployed in ITS applications. Section 3 presents the performance evaluation results of our proposed framework. We summarize the findings of this research in the concluding section.

2. Materials and Methods

In this section, our proposed framework architecture is outlined. First, communication technologies deployed in the Internet of Vehicles (IoV) are reviewed. Then, the proposed framework architecture that includes the IoV layer, the proposed deep learning model, the data plane and the control plane constituting the SDN core network, and the QoS algorithm is explained.

2.1. Communication Technologies

ITS applications depend on advanced wireless communication schemes, as vehicle on-board units (OBUs) are allowed to interact with each other (V2V), with remote stations and entities (V2I) in the same communication range, and with road users (V2P) using the available radio interfaces. More specifically, communication of the Internet of Vehicles with infrastructure is carried out through multiple access technologies, such as IEEE 802.11 (Wi-Fi) customized for vehicular connectivity in the 5.9 GHz band (DSRC/WAVE) and mobile broadband technologies (e.g., 4G/LTE, 5G). 5G is considered a promising technology, as it aims to be a revolution in terms of data rates, ultra-low latency, massive connectivity, ultra-high network reliability, and energy efficiency to support V2X (Vehicle-to-Everything) safety and non-safety applications with their different requirements. Mobile Edge Computing (MEC), network slicing, and SDN are 5G technologies that add important enhancements to both the radio access network (RAN) and the core network of mobile communications [9]. Network slicing is described by [12] as an end-to-end logical network equipped with a collection of separated virtual resources on a common physical infrastructure. These logical networks are provided as various services to meet the various communication needs of users. SDN is regarded as an approach for implementing 5G network slicing.

2.2. Proposed Model Architecture

Article [13] proposed a high-level reference architecture for software-defined VANETs. The proposed architecture consists of three planes: the data plane, the control plane, and the application plane. The data plane is made up of switches, RSUs, cellular network nodes, and vehicular devices. The control plane includes different types of controllers: OpenFlow controllers and controllers tailored to enforce the policies required by the applications. The OpenFlow protocol is used by the data plane elements to communicate with the control plane. Furthermore, the control plane communicates with the application plane using a Northbound Interface (NBI) such as REST APIs.
Our proposed architecture is depicted in Figure 2. The components of this architecture cooperate using 5G slicing technology. In addition, MEC technology is implemented by connecting the safety servers directly to the RSUs. Our proposed framework is made up of four main modules: the IoV layer, the edge computing device layer, the data plane and control plane (the SDN network core), and the QoS application. The IoV layer consists of autonomous vehicles that generate three types of traffic: traffic belonging to the safety application, which in our research is the remote monitoring and detection of a drowsy driver using AI models implemented on the edge computing device; infotainment traffic; and best-effort traffic. Once a drowsy driver is detected, a message is sent to the safety server (RCC) to switch the vehicle to teleoperation mode. We evaluate the implementation of the safety servers using the MEC concept to support the ultra-low delay requirement of this type of traffic. The OpenFlow switches use the destination port number to classify the traffic flows. Each switch's meter table, installed by the RYU controller, is used to allocate bandwidth to the various traffic classes. When MEC technology is not active, the meter table settings prioritize safety traffic to ensure prompt transmission across the data plane to the RCC. The deployed QoS module is adaptive in that it gathers unused bandwidth from each class and redistributes it to the class most in need of it, giving precedence to safety traffic to ensure lower packet loss and lower average RTT. Because the drowsy driver is identified at the edge, the bandwidth that would otherwise be needed to transfer the entire video of the monitored driver to the RCC for a decision there is saved. Our proposed framework architecture components are explained in the following subsections, and our design choices are justified.
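The adaptive redistribution step described above can be sketched as follows. The class names, capacities, and demands are hypothetical toy numbers; the actual algorithm in [14] operates on per-switch meter statistics, not on these values.

```python
def redistribute(allocated, demand, priority_order):
    """Collect bandwidth left unused by each class and hand it to classes
    that need more, highest-priority class first (all values in Mbps)."""
    # Bandwidth allocated but not demanded is the pool to redistribute.
    spare = sum(max(allocated[c] - demand[c], 0.0) for c in allocated)
    # Start each class at what it actually uses.
    new_alloc = {c: min(allocated[c], demand[c]) for c in allocated}
    for c in priority_order:                 # e.g. safety gets first pick
        need = demand[c] - new_alloc[c]      # unmet demand of this class
        grant = min(max(need, 0.0), spare)
        new_alloc[c] += grant
        spare -= grant
    return new_alloc

allocated = {"safety": 2.0, "infotainment": 3.0, "best_effort": 1.0}
demand    = {"safety": 0.5, "infotainment": 4.5, "best_effort": 1.0}
result = redistribute(allocated, demand,
                      ["safety", "infotainment", "best_effort"])
print(result)
# infotainment receives the 1.5 Mbps that safety traffic left unused
```

In the framework, the resulting allocations would be written back as meter-table rate limits on each OpenFlow switch.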

2.2.1. Application Plane

In SDN, there are two types of applications. First, the end application supported by the infrastructure, which in our proposed model is switching the autonomous vehicle to teleoperation mode upon detection of a drowsy driver. Second, the control application that specifies the network behavior, which in our proposed model is giving bandwidth priority to the control commands traveling back and forth between the autonomous vehicle and the remote control center (RCC). In this work, we deploy our previously proposed adaptive quality of service algorithm [14], tailored to the investigated vehicular application.

2.2.2. Data Plane

In our proposed model, the data plane is the network core, which consists of a set of interconnected OpenFlow switches (OVS switches) and RSUs. These switches receive, from the RYU controller, forwarding rules that give priority to vehicular application data.

2.2.3. Control Plane

The RYU controller uses the OpenFlow protocol to communicate with the OpenFlow switches. The RYU controller was selected because it encompasses a separate module for QoS and is implemented in the Python programming language, which allows rapid prototyping of our proposed intelligent QoS framework [15]. In our proposed framework, the RSUs are SDN-enabled to extend SDN control towards the OBUs; thus, the RSUs are programmable by the controller. The RYU controller determines the network rules for each slice to fulfill the KPIs of the different vehicular applications and applies them to the data plane. Flow entries between vehicles and application servers are installed via the OpenFlow protocol to meet the ITS applications' KPIs. The network policies are defined by the control application, namely the meter table configured in our proposed adaptive QoS application.

2.2.4. IoV Layer

The IoV (Internet of Vehicles) layer consists of autonomous vehicles equipped with drowsiness monitoring cameras that continuously monitor the driver and produce a video stream of the monitored driver, which is the input of the edge computing device layer that implements the deep learning model.

2.2.5. Edge Computing Device Layer

The proposed architecture of driver drowsiness detection and alert is shown in Figure 3: video-stream capture followed by frame selection, face detection, landmark extraction, and classification. If a driver's face is discovered, it is detected and cropped from the image using the Viola–Jones [16,17] face detection algorithm before being provided, as facial landmarks, as input to the machine learning phase. The machine learning algorithm detects whether or not the driver is classified as drowsy at the earliest frame. The input to the machine learning model is the set of 68 facial landmarks provided by the OpenCV library, each presented as an x and y coordinate in the face. Since this is a real-time application, the model needs to be very simple and fast to ensure continuous processing of the frames in minimal time. Therefore, when choosing the model to employ, several famous architectures were excluded because of their large number of parameters: the VGG16 [18] architecture has 138 M parameters, the lightest version of EfficientNet-B0 [19] has 5.3 M parameters, and SqueezeNet [20] has 421 K. Our proposed dense model, presented in Figure 4, requires training only 5 K parameters, making it better suited to real-time applications.
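The claim that a dense model on landmark input stays at the 5 K-parameter scale is easy to verify with a back-of-the-envelope count. The hidden-layer sizes below are hypothetical (the article's exact layer widths are given in Figure 4, not reproduced here); they only illustrate how a fully connected network on 68 landmarks (136 coordinates) remains tiny next to VGG16's 138 M parameters.

```python
def dense_param_count(layer_sizes):
    """Trainable parameters of a fully connected network:
    one weight matrix plus one bias vector per layer transition."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# 68 landmarks -> 136 (x, y) inputs; hypothetical hidden sizes; 1 output.
sizes = [136, 32, 16, 1]
print(dense_param_count(sizes))  # 4929, i.e. roughly 5 K
```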
The parameters of the presented model were chosen through several experiments using a grid search to select the best model for the problem addressed: a dropout rate of 0.2 and a batch normalization momentum of 0.8. The machine learning model needs to be trained with several frames labeled as drowsy or non-drowsy. To best train the model and ensure its generalization ability, the dataset is split by actors, so that the model is tested on actors it has not previously seen. The dataset contains an unequal number of frames per actor, taken from the original video dataset [21]. The idea behind the proposed work is to detect drowsiness as early as possible using a single frame instead of multiple consecutive frames. The available dataset was then split into 10 folds for the different experiments; each fold assigns a few actors to training and the others to testing.
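A split-by-actor protocol like the one described can be sketched in a few lines; the actor names and frame counts are invented for the illustration, and a library helper such as scikit-learn's GroupKFold performs the same grouping.

```python
def split_by_actor(frames_by_actor, test_actors):
    """Hold out every frame of the given actors, so the model is
    evaluated only on people it never saw during training."""
    train, test = [], []
    for actor, frames in frames_by_actor.items():
        (test if actor in test_actors else train).extend(frames)
    return train, test

# Hypothetical dataset: unequal numbers of frames per actor.
frames_by_actor = {"a1": ["f1", "f2", "f3"], "a2": ["f4"], "a3": ["f5", "f6"]}
train, test = split_by_actor(frames_by_actor, test_actors={"a3"})
print(train)  # ['f1', 'f2', 'f3', 'f4']
print(test)   # ['f5', 'f6']
```

Rotating the held-out actor set across runs yields the 10 folds used in the experiments.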
Accuracy, precision, recall, and F-measure will be used to assess the proposed idea. Accuracy is the percentage of data points correctly predicted out of all data points. Precision and recall are two measures used in conjunction to evaluate how well systems perform. Precision is defined as the fraction of all retrieved instances that are relevant. Recall, also referred to as "sensitivity," is the proportion of retrieved instances among all relevant instances. A perfect classifier has precision and recall both equal to one. The F-measure is the harmonic average of precision and recall. The experiments and results for the split-by-actors setting will be further explained in the upcoming section.
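The four metrics above can be written directly from their definitions in terms of true/false positive and negative counts; the labels below are hypothetical.

```python
def precision_recall_f1_accuracy(y_true, y_pred, positive=1):
    """Compute the four metrics from TP/FP/FN and correct-prediction counts."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = correct / len(y_true)
    return precision, recall, f1, accuracy

# 1 = drowsy, 0 = alert (hypothetical labels and predictions).
y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0]
print(precision_recall_f1_accuracy(y_true, y_pred))
```

Equivalently, scikit-learn's `precision_score`, `recall_score`, `f1_score`, and `accuracy_score` compute the same quantities.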
Another experiment is also presented that aims to further enhance the classification results, thus achieving the highest value possible. This approach will require the driver to provide two videos, one marked as drowsy and the other as non-drowsy, as a calibration step before the initiation of the program. The provided videos will help the model be fine-tuned to the specific features of the driver, thus dramatically enhancing the classification results. The results will also be presented in the upcoming section in the trained with calibration results.
In this research, we employ the dataset generated by [22] from the University of Texas at Arlington Real-Life Drowsiness Dataset (UTA-RLDD) [21]. The original dataset was developed for the multi-stage drowsiness detection task, focusing on both extreme and readily apparent cases of drowsiness as well as subtle cases where minor micro-expressions serve as the discriminating criteria. It can be crucial to identify these modest episodes of drowsiness to engage drowsiness prevention systems at an early stage. Since the micro-expressions of tiredness have physiological and instinctive roots, it can be challenging for actors to convincingly mimic such expressions when acting sleepy.

3. Results

3.1. Performance Evaluation

Machine Learning Evaluation

Generalized model: The generalized model was tested using 10-fold cross-validation, and the average accuracy, precision, recall, and F-measure are presented in Table 1. The proposed deep learning dense-based model is compared to several benchmarks, namely AdaBoost, random forest, and support vector classification. The results show that the proposed model achieves at least an 8% improvement in accuracy over the other methods and an improvement in the F-measure of about 3.5%.
Calibrated model: The calibrated model was trained on the actors and provided further samples from the test actors to allow the model to further learn the features of the designated driver. Table 2 shows the results of the proposed scenario and it can be concluded that there is around a 10% improvement in accuracy results between the generalized and the calibrated models and a 9% improvement in F1 score results. Furthermore, the comparison between the dense network and the other benchmarks showed at least a 7% improvement in the accuracy results and an almost 8% improvement in the F1 measure. Since drowsy driver detection is a critical detection problem with a severe risk resulting in accidents, it is recommended to calibrate the model before use to ensure better performance.
Finally, a sample of the 200 epochs of training the model is presented in Figure 5. The chart shows that the model is not overfitting to the given dataset and the choice of 200 epochs was enough for training the presented model.

3.2. SDN-VANET QoS Framework Evaluation

The performance evaluation of the SDN-VANET QoS framework is carried out using the Mininet-WiFi wireless network emulator [23], an OpenFlow-enabled network emulator forked from Mininet to add wireless channel emulation and mobility support. The SDN controller deployed is the open-source RYU, for the reasons given previously. In addition, the SUMO (Simulation of Urban MObility) [24] tool is used: an open-source, highly portable, microscopic and continuous multi-modal traffic simulation package, including road vehicles, public transport, and pedestrians, designed to handle large networks.
The performance evaluation is conducted using an urban congestion scenario with vehicular applications that have heterogeneous bandwidth requirements. Our implementation considers three classes of application priority: 1, 2, and 3. Class 1 is devoted to safety applications with ultra-low delay requirements; for this class, we evaluate the performance impact of using MEC, one of the 5G features. Class 2 is devoted to infotainment applications, which are more delay-tolerant. Class 3 is reserved for best-effort applications that do not have any priority or specific requirements. In our implementation environment, the controller has to delay running the adaptive component of the control application before recalculating the bandwidth assigned to each meter, to account for the association timeout that occurs when a vehicle changes RSUs.

Performance Metrics

To assess the performance of the suggested SDN-VANET QoS framework, the following performance metrics are used: the average throughput of data between vehicles and application servers, and the average round-trip time (RTT). The RTT is defined as the time elapsed from when the source vehicle sends a message until it receives a response from the application server.
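Since the experiments obtain RTT from ICMP echo replies (Section 3.3), the average can be extracted from standard `ping` output with a short parser; the sample lines below are illustrative, not taken from the experiments.

```python
import re

def average_rtt(ping_output):
    """Average the time=... values (in ms) across ping's per-reply lines."""
    times = [float(m) for m in re.findall(r"time=([\d.]+)", ping_output)]
    return sum(times) / len(times)

# Hypothetical ping output from a vehicle to an application server.
sample = """64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=12.4 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=10.8 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=11.6 ms"""
print(round(average_rtt(sample), 2))  # 11.6
```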

3.3. Evaluation Scenario 1

In this scenario, the configuration deployed in [9] is used. This configuration was generated by the SUMO urban mobility simulator to simulate various levels of congestion over time. In this configuration, one hundred and fifty-eight vehicles move along 650 m of an urban road in Manhattan (NYC). The Mininet-WiFi emulator and the SUMO simulator were integrated to implement mobility. To evaluate our proposed framework, in this scenario, 42 of the 158 vehicles generate traffic belonging to different applications simultaneously, whereas 26 vehicles generate traffic for safety applications. This is a high-traffic scenario, used to study the impact of a high level of interference on our proposed framework. The parameters used in the performance evaluation are summarized in Table 3. Figure 6 depicts the network setup used in the simulation, both with and without the MEC concept applied to support the ultra-low-delay prerequisites of safety applications. The values of RTT are calculated using "PING" ICMP messages. Table 4 illustrates the average RTT results when applying our proposed Adaptive Quality of Service (AQoS) algorithm and for a QoS-free model in this high-interference scenario. Our algorithm achieves an improvement of up to 74.63% in average RTT for safety applications and up to 90.88% for infotainment applications. The improvement in average RTT is high for infotainment because infotainment applications have priority class 2 and are assigned more bandwidth than the best-effort class; moreover, when this class is hungry for bandwidth, it collects the free bandwidth from the other classes. On the other hand, the improvement for safety applications is smaller because, in both cases, the safety servers are connected directly to the RSUs, applying the 5G MEC concept to support ultra-low-delay applications.
Moreover, results revealed that the usage of MEC technology improves the average RTT by up to 98.09%. Finally, without applying the MEC technology and moving safety servers to the SDN network core, our proposed AQoS still improves the performance by up to 64% compared to the QoS-free model.

3.4. Evaluation Scenario 2

In this scenario, 45 of the 158 vehicles generate traffic: 15 vehicles generate safety application traffic, 15 vehicles generate infotainment application traffic, and 15 vehicles generate best-effort traffic. Table 5 summarizes the characteristics of the different applications. Figure 7, Figure 8, Figure 9 and Figure 10 illustrate the throughput results for all application servers. The results revealed an average aggregate throughput of 2.05 Mbps for all infotainment traffic when applying our proposed AQoS algorithm against 1.45 Mbps for the QoS-free model, an improvement of 29.27%, once again due to reassigning the free bandwidth to infotainment traffic. The average throughput of safety applications is not improved, since the safety servers are connected directly to the RSUs regardless of the implemented algorithm, so their traffic does not go through the network core.

3.5. Evaluation Scenario 3

Figure 11, Figure 12, Figure 13, Figure 14 and Figure 15 show the throughput when the safety servers are moved to the network core (connected to SW5). The results show an average aggregate throughput of 1.251 Mbps for all safety traffic when applying our proposed AQoS algorithm against 1.01 Mbps for the QoS-free model, an improvement of up to 19.26%. Moreover, the results revealed an average aggregate throughput of 2.18 Mbps for all infotainment traffic when applying our proposed AQoS algorithm versus 1.13 Mbps for the QoS-free model, an improvement of up to 48.17%.

4. Conclusions

In this work, we proposed a framework that integrates 5G technologies (network slicing, MEC, and SDN) with deep learning models to support safety applications in the IoV. Our deployed case study was remote driver monitoring to detect drowsy drivers using AI models and switching the vehicle to teleoperation mode. Evaluation of the proposed SDN-VANET QoS-based model showed significant improvements in average RTT and average throughput in all investigated scenarios. This is due to several factors. First, the application of 5G technologies, namely MEC and network slicing. Second, the integration of the deep-learning model with SDN reduces the required bandwidth, since only critical cases are reported to the RCC. Finally, deploying the SDN paradigm enabled the adaptation phase of the algorithm to succeed, thanks to the global view of network conditions that the SDN paradigm offers. Furthermore, the proposed work presented a machine learning architecture that extracts facial landmarks per video frame and feeds them to a dense deep learning model to detect whether or not the driver is drowsy and report accordingly to the control room. The proposed dense model was compared to benchmarks and provided an improvement in terms of accuracy, precision, recall, and F-measure. In the future, in the context of SDN, we will explore whether the usage of hierarchical controllers, each responsible for a part of the RAN, improves performance and gives more support to the requirements of safety applications.

Author Contributions

Conceptualization, S.N.S. and C.F.; methodology, S.N.S. and C.F.; software, S.N.S. and C.F.; validation, S.N.S. and C.F.; formal analysis, S.N.S. and C.F.; investigation, S.N.S. and C.F.; resources, S.N.S. and C.F.; writing—original draft preparation, S.N.S. and C.F.; writing—review and editing, S.N.S. and C.F.; visualization, S.N.S. and C.F.; project administration, C.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI     Artificial Intelligence
API    Application Programming Interface
AVs    Autonomous Vehicles
ICMP   Internet Control Message Protocol
IoT    Internet of Things
IoV    Internet of Vehicles
ITS    Intelligent Transportation Systems
KPI    Key Performance Indicator
MEC    Mobile Edge Computing
NBI    Northbound Interface
NHTSA  National Highway Traffic Safety Administration
QoS    Quality of Service
RAN    Radio Access Network
RCC    Remote Control Center
RSU    Road Side Unit
SDN    Software-Defined Networks
SUMO   Simulation of Urban MObility
V2I    Vehicle to Infrastructure
V2P    Vehicle to Person
V2V    Vehicle to Vehicle
V2X    Vehicle to Everything
VANET  Vehicular Ad hoc Networks

References

  1. Placek, M. Autonomous Car Market Size Worldwide 2021–2026. Report, March 2021. Available online: https://www.researchandmarkets.com/reports/5359435/global-autonomous-cars-market-2021-2026-by (accessed on 1 December 2022).
  2. Law, C. The Dangers of Driverless Cars. Natl. Law Rev. 2022, XII. Available online: https://www.natlawreview.com/article/dangers-driverless-cars (accessed on 30 November 2022).
  3. Feiler, J.; Hoffmann, S.; Diermeyer, F. Concept of a Control Center for an Automated Vehicle Fleet. In Proceedings of the 2020 IEEE 23rd International Conference on Intelligent Transportation Systems, ITSC 2020, Rhodes, Greece, 20–23 September 2020. [Google Scholar] [CrossRef]
  4. Kaur, K.; Garg, S.; Kaddoum, G.; Kumar, N.; Gagnon, F. SDN-Based Internet of Autonomous Vehicles: An Energy-Efficient Approach for Controller Placement. IEEE Wirel. Commun. 2019, 26, 72–79. [Google Scholar] [CrossRef]
  5. Kreutz, D.; Ramos, F.M.V.; Veríssimo, P.E.; Rothenberg, C.E.; Azodolmolky, S.; Uhlig, S. Software-Defined Networking: A Comprehensive Survey. Proc. IEEE 2015, 103, 14–76. [Google Scholar] [CrossRef] [Green Version]
  6. Karakus, M.; Durresi, A. Quality of Service (QoS) in Software Defined Networking (SDN): A survey. J. Netw. Comput. Appl. 2017, 80, 200–218. [Google Scholar] [CrossRef] [Green Version]
  7. Mahi, M.J.N.; Chaki, S.; Ahmed, S.; Biswas, M.; Kaiser, M.S.; Islam, M.S.; Sookhak, M.; Barros, A.; Whaiduzzaman, M. A Review on VANET Research: Perspective of Recent Emerging Technologies. IEEE Access 2022, 10, 65760–65783. [Google Scholar] [CrossRef]
  8. Storck, C.R.; Duarte-Figueiredo, F. A 5G V2X ecosystem providing internet of vehicles. Sensors (Switzerland) 2019, 19, 550. [Google Scholar] [CrossRef] [Green Version]
  9. Do Vale Saraiva, T.; Campos, C.A.V.; Fontes, R.D.R.; Rothenberg, C.E.; Sorour, S.; Valaee, S. An Application-Driven Framework for Intelligent Transportation Systems Using 5G Network Slicing. IEEE Trans. Intell. Transp. Syst. 2021, 22, 5247–5260. [Google Scholar] [CrossRef]
  10. Pandey, N.N.; Muppalaneni, N.B. Temporal and spatial feature based approaches in drowsiness detection using deep learning technique. J. Real-Time Image Process. 2021, 18, 2287–2299. [Google Scholar] [CrossRef]
  11. Pandey, N.N.; Muppalaneni, N.B. A survey on visual and non-visual features in Driver’s drowsiness detection. Multimed. Tools Appl. 2022, 2022, 1–41. [Google Scholar] [CrossRef]
  12. Li, X.; Samaka, M.; Chan, H.A.; Bhamare, D.; Gupta, L.; Guo, C.; Jain, R. Network Slicing for 5G: Challenges and Opportunities. IEEE Internet Comput. 2018, 21, 20–27. [Google Scholar] [CrossRef]
  13. Dos Reis Fontes, R.; Campolo, C.; Esteve Rothenberg, C.; Molinaro, A. From theory to experimental evaluation: Resource management in software-defined vehicular networks. IEEE Access 2017, 5, 3069–3076. [Google Scholar] [CrossRef]
  14. Fathy, C.; Saleh, S.N. Integrating Deep Learning-Based IoT and Fog Computing with Software-Defined Networking for Detecting Weapons in Video Surveillance Systems. Sensors 2022, 22, 5075. [Google Scholar] [CrossRef] [PubMed]
  15. Ryu. Ryu Documentation. 2016. p. 490. Available online: https://media.readthedocs.org/pdf/ryu/latest/ryu.pdf (accessed on 30 November 2022).
  16. Viola, P.; Jones, M. Rapid object detection using a boosted cascade of simple features. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2001, Kauai, HI, USA, 8–14 December 2001; Volume 1, p. 1. [Google Scholar]
  17. Viola, P.; Jones, M. Robust real-time object detection. Int. J. Comput. Vis. 2001, 4, 4. [Google Scholar]
  18. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  19. Tan, M.; Le, Q. Efficientnet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International Conference on Machine Learning, PMLR, Long Beach, CA, USA, 9–15 June 2019; pp. 6105–6114. [Google Scholar]
  20. Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv 2016, arXiv:1602.07360. [Google Scholar]
  21. Ghoddoosian, R.; Galib, M.; Athitsos, V. A realistic dataset and baseline temporal model for early drowsiness detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–17 June 2019. [Google Scholar]
  22. Nasri, I.; Karrouchi, M.; Snoussi, H.; Kassmi, K.; Messaoudi, A. Detection and Prediction of Driver Drowsiness for the Prevention of Road Accidents Using Deep Neural Networks Techniques. In WITS 2020; Springer: Singapore, 2022; pp. 57–64. [Google Scholar]
  23. Fontes, R.R.; Afzal, S.; Brito, S.H.B.; Santos, M.A.S.; Rothenberg, C.E. Mininet-WiFi: Emulating Software-Defined Wireless Networks. In Proceedings of the 2015 11th International Conference on Network and Service Management (CNSM), Barcelona, Spain, 9–13 November 2015; pp. 384–389. [Google Scholar]
  24. Lopez, P.A.; Behrisch, M.; Bieker-Walz, L.; Erdmann, J.; Flötteröd, Y.P.; Hilbrich, R.; Lücken, L.; Rummel, J.; Wagner, P.; Wießner, E. Microscopic Traffic Simulation using SUMO. In Proceedings of the 21st IEEE International Conference on Intelligent Transportation Systems, Maui, HI, USA, 4–7 November 2018. [Google Scholar]
Figure 1. Vehicle automation levels.
Figure 2. Proposed model architecture.
Figure 3. Proposed machine learning architecture.
Figure 4. Proposed Deep Learning Model.
Figure 5. Area under the curve and accuracy results of the proposed deep learning model.
Figure 6. Simulation network setups used in evaluation scenarios.
Figure 7. Average throughput results for both approaches: AQoS and QoS-free of Scenario 2 for Safety Server 1.
Figure 8. Average throughput results for both approaches: AQoS and QoS-free of Scenario 2 for Safety Server 2.
Figure 9. Average throughput results for both approaches: AQoS and QoS-free of Scenario 2 for Infotainment Server e.
Figure 10. Average throughput results for both approaches: AQoS and QoS-free of Scenario 2 for Infotainment Server e2.
Figure 11. Average throughput results for both approaches: AQoS and QoS-free of Scenario 3 for Safety Server 1.
Figure 12. Average throughput results for both approaches: AQoS and QoS-free of Scenario 3 for Safety Server 2.
Figure 13. Average throughput results for both approaches: AQoS and QoS-free of Scenario 3 for Safety Server 3.
Figure 14. Average throughput results for both approaches: AQoS and QoS-free of Scenario 3 for Infotainment Server e.
Figure 15. Average throughput results for both approaches: AQoS and QoS-free of Scenario 3 for Infotainment Server e2.
Table 1. Generalized model results.

|                | Training |           |        |           | Testing  |           |        |           |
| Model          | Accuracy | Precision | Recall | F-Measure | Accuracy | Precision | Recall | F-Measure |
| AdaBoost       | 63.74%   | 81.47%    | 41.65% | 55.12%    | 53.59%   | 67.19%    | 25.54% | 37.01%    |
| Random Forest  | 99.59%   | 99.58%    | 99.65% | 99.61%    | 51.82%   | 56.28%    | 44.37% | 49.62%    |
| SVC            | 81.97%   | 86.55%    | 78.50% | 82.33%    | 66.69%   | 61.21%    | 95.55% | 74.62%    |
| Proposed Dense | 97.72%   | 97.82%    | 97.93% | 97.87%    | 74.85%   | 72.92%    | 84.26% | 78.18%    |
Table 2. Calibrated model results.

|                | Training |           |        |           | Testing  |           |        |           |
| Model          | Accuracy | Precision | Recall | F-Measure | Accuracy | Precision | Recall | F-Measure |
| AdaBoost       | 64.12%   | 80.07%    | 43.84% | 56.66%    | 62.34%   | 68.77%    | 54.16% | 60.60%    |
| Random Forest  | 99.70%   | 99.71%    | 99.72% | 99.72%    | 78.01%   | 81.93%    | 75.46% | 78.56%    |
| SVC            | 78.53%   | 83.26%    | 74.94% | 78.88%    | 78.29%   | 80.26%    | 78.68% | 79.46%    |
| Proposed Dense | 98.13%   | 98.25%    | 98.25% | 98.25%    | 85.69%   | 82.23%    | 93.37% | 87.45%    |
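The F-measure columns above are the harmonic mean of precision and recall, so the reported values can be cross-checked directly. A quick sketch, using the testing precision and recall of the proposed dense model from Table 2:

```python
def f_measure(precision, recall):
    """Harmonic mean of precision and recall (both given in percent)."""
    return 2 * precision * recall / (precision + recall)

# Testing precision/recall of the proposed dense model in Table 2
f = f_measure(82.23, 93.37)
print(round(f, 2))  # 87.45, matching the table's F-measure column
```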
Table 3. Parameters used in the performance evaluation scenarios.

| Parameter                         | Value        |
| Number of Vehicles                | 158          |
| Number of RSUs                    | 3            |
| RSU Range                         | 250 m        |
| Number of Switches (Core Network) | 6            |
| Propagation Model                 | Log Distance |
| RAN MAC Layer                     | IEEE 802.11g |
| Number of Application Types       | 3            |
| Emulation Time                    | 300 s        |
Table 4. Average round-trip time results for all models in the high-traffic scenario.

| Model          | Application               | Minimum RTT  | Average RTT  |
| MEC (AQoS)     | Safety Applications       | 1.146 ms     | 73.46 ms     |
| MEC (AQoS)     | Infotainment Applications | 6.93 ms      | 3061.63 ms   |
| MEC (QoS-Free) | Safety Applications       | 6.945 ms     | 289.56 ms    |
| MEC (QoS-Free) | Infotainment Applications | 16,133.43 ms | 33,570.39 ms |
| No MEC (AQoS)  | Safety Applications       | 4.82 ms      | 3849.49 ms   |
| No MEC (QoS-Free) | Safety Applications    | 2475.84 ms   | 6878.73 ms   |
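The safety-application rows of Table 4 can be normalized against the best configuration to quantify the gain; combining MEC with the adaptive QoS approach cuts the average safety RTT by roughly 3.9× versus MEC without QoS, and by almost two orders of magnitude versus the no-MEC, QoS-free case. A small worked computation over the table's values:

```python
# Average safety-application RTTs from Table 4 (milliseconds)
rtts = {
    "MEC (AQoS)": 73.46,
    "MEC (QoS-Free)": 289.56,
    "No MEC (AQoS)": 3849.49,
    "No MEC (QoS-Free)": 6878.73,
}

best = rtts["MEC (AQoS)"]
for model, rtt in rtts.items():
    # Slowdown factor relative to the best (MEC + adaptive QoS) configuration
    print(f"{model}: {rtt / best:.1f}x the MEC (AQoS) average RTT")
```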
Table 5. Application characteristics.

| Application | Use         | Data Rate KPI | Protocol | Port | Priority Class |
| S           | Safety      | 0.5 Mbps      | UDP      | 5002 | 1              |
| IF          | Infotainment| 1.5 Mbps      | UDP      | 5003 | 2              |
| BE          | Best-Effort | 0.5 Mbps      | UDP      | 5004 | 3              |
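An SDN controller can use the port-to-class mapping in Table 5 to steer each flow into its slice. The sketch below is a hypothetical classification helper, not the paper's Ryu controller code; the fallback-to-best-effort behavior for unmatched flows is an assumption.

```python
# Mapping from Table 5: UDP destination port -> priority class (1 = highest)
PRIORITY_BY_PORT = {5002: 1, 5003: 2, 5004: 3}

def classify_flow(protocol, dst_port, default_class=3):
    """Return the slice priority class for a flow; unknown flows get best-effort."""
    if protocol.upper() != "UDP":
        return default_class
    return PRIORITY_BY_PORT.get(dst_port, default_class)

print(classify_flow("UDP", 5002))  # 1: safety traffic takes the highest-priority queue
print(classify_flow("TCP", 5003))  # 3: non-matching flows fall back to best-effort
```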
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Saleh, S.N.; Fathy, C. A Novel Deep-Learning Model for Remote Driver Monitoring in SDN-Based Internet of Autonomous Vehicles Using 5G Technologies. Appl. Sci. 2023, 13, 875. https://doi.org/10.3390/app13020875


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
