Article

Augmenting CCAM Infrastructure for Creating Smart Roads and Enabling Autonomous Driving

1 College of Information Technology, United Arab Emirates University, Abu Dhabi 15551, United Arab Emirates
2 Emirates Center for Mobility Research (ECMR), United Arab Emirates University, Abu Dhabi 15551, United Arab Emirates
3 College of Technological Innovation, Zayed University, Dubai 144534, United Arab Emirates
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(4), 922; https://doi.org/10.3390/rs15040922
Submission received: 22 October 2022 / Revised: 19 November 2022 / Accepted: 7 December 2022 / Published: 7 February 2023
(This article belongs to the Section Urban Remote Sensing)

Abstract: Autonomous vehicles and smart roads are not new concepts, and the ongoing development to empower vehicles with higher levels of automation has achieved initial milestones. However, the transportation industry and the relevant research communities must still make considerable efforts to create smart, intelligent roads for autonomous driving. To achieve the results of such efforts, CCAM infrastructure is a game changer and plays a key role in achieving higher levels of autonomous driving. In this paper, we present a smart infrastructure and autonomous driving capabilities enhanced by CCAM infrastructure. Specifically, we lay down the technical requirements of the CCAM infrastructure: we identify the right set of sensory infrastructure, its interfacing, the integration platform, and the necessary communication interfaces for interconnection with upstream and downstream solution components. Then, we parameterize the road and network infrastructures (and automated vehicles) to be advanced and evaluated during the research work under very distinct scenarios and conditions. For validation, we demonstrate machine learning algorithms in mobility applications such as traffic flow and mobile communication demands. Consequently, we train multiple linear regression models and achieve an accuracy of over 94% for predicting the aforementioned demands on a daily basis. This research therefore equips readers with the relevant technical information required for enhancing CCAM infrastructure. It also encourages and guides the relevant research communities to implement CCAM infrastructure towards creating smart, intelligent roads for autonomous driving.

1. Introduction

The advancement of sensor technologies, mobile network technologies, and artificial intelligence has pushed the boundaries of different verticals, e.g., eHealth and autonomous driving. Autonomous driving has the potential to cope with major challenges in the transportation industry, such as road safety and traffic efficiency. According to the Society of Automotive Engineers (SAE) [1], there are five levels of autonomous driving (AD), ranging from Level 1 (L-1) to Level 5 (L-5). L-1 indicates no automation, whereas L-5 indicates full automation; a fully automated vehicle will not even have a driver’s cockpit. The development of autonomous vehicles (AVs) has evolved through the early stages of automation, and L-4 AVs will soon be observed operating on roads. Vehicles equipped with L-3 automation, such as the Audi A8 sedan, have already been manufactured [2]. The five levels of AD are discussed in detail in Section 2.1. The race to L-5 AVs is still ongoing, and stakeholders, including automakers and technology (sensor and communication) experts, are constantly refining their approaches to reach the necessary level of automation for their vehicles. The construction of testbeds and demonstrations around the world is a new step toward making AD a reality. These testing grounds play a crucial role in assessing the capabilities of AVs under various environmental dynamics. The main goal of a testbed is the creation of a realistic environment that accurately represents the world in which AVs will function.
The complex, dynamic traffic environment on roads necessitates cooperation between AVs, classical vehicles, road infrastructure, and other road users. Classical, purely sensor-based collaborative driving is vulnerable to line-of-sight limitations, processing lag, and sensor errors [3]. The concept of AD is based on a vehicle’s perception (also known as situational awareness): its ability to comprehend its surroundings and respond to their changing dynamics. Vehicles are capable of creating a perception of their surroundings using knowledge and sensory data from on-vehicle sensors. Some degree of automation is made possible by the perception of vehicles and their ability to communicate with other vehicles, although it still relies on the visibility of the individual vehicle. How well an AV can adapt to various circumstances and dynamics is unknown. Autonomous driving promises to improve traffic safety. However, the promise of greater safety suffers because the quality and scope of a vehicle’s perception are constrained by its sensors.
To cope with the above challenges, a paradigm shift known as cooperative, connected, and automated mobility (CCAM) is required to enhance the environmental understanding of vehicles in complex urban settings. CCAM has the potential to advance current automated driving to L-4 and L-5 vehicle automation. The deployment of CCAM infrastructure necessitates the upgrading of essential assets such as vehicles, network infrastructure, roadside infrastructure, and cloud infrastructure to achieve greater AD performance in complicated urban environments. CCAM will shift the mobility paradigm towards mobility as a service (MaaS), resulting in more reliable, comfortable, and safe transportation systems. To realize this vision, vehicular communications, automated driving, and cooperative transportation systems are increasingly viewed as the building blocks of CCAM applications. To achieve this, standardization and regulatory bodies, automotive Original Equipment Manufacturers (OEMs), road operators, telecommunication operators, and other interested stakeholders are collaborating to build massive infrastructures and conduct field tests of cooperative intelligent transport systems, which will pave the way for connected, automated, and cooperative vehicles.
In this context, we provide an overview of recent state-of-the-art research on CCAM and highlight the need for this research. To address the issue of cross-sector collaboration for connected, cooperative, and automated mobility, Geißler et al. [4] proposed an integrated perspective. They analyzed taxonomies from three different perspectives, namely the user, the vehicle, and the road infrastructure. The proposed taxonomies comprise user communication for automated driving, the SAE cooperation classes, infrastructure support for automated driving, and levels of service for automated driving. They concluded that the proposed taxonomies should be applied as shared knowledge, which necessitates tight cooperation between the players aiming to plan, prototype, test, and implement CCAM services in the upcoming decade. In another study, Royuela et al. [5] presented a flexible and modular testbed focused on evaluating CCAM applications. They demonstrated a use case leveraging a 4G system, where an AV offloaded processing tasks to an edge server, which analyzed the images, computed routes, and sent guidance commands back to the vehicle, demonstrating the potential of edge computing and wireless technologies. Kousaridas et al. [6] emphasized that challenges including interoperability among interested stakeholders, seamless connectivity, and the uninterrupted supply of real-time services across borders need to be carefully examined to successfully realize cross-border CCAM services. To cope with these challenges, they presented a summary of the challenges, demands, technical difficulties, and gaps from the standardization, regulatory, commercial, and legal perspectives that need to be resolved for the quick and effective adoption of 5G-enabled CCAM services, particularly in cross-border settings. Hosseini et al. [7] also researched CCAM with an emphasis on cross-border environments and developed a novel architecture for CCAM service continuity in a cross-border corridor, integrating a federation of multi-access edge computing concepts to maintain service continuity. Moreover, they developed an idea for a simulative/emulative platform for C-V2X applications to make it easier to construct complex CCAM use cases, particularly in cross-border or cross-domain environments. Going over the literature, we found that CCAM infrastructure has been actively used for various mobility applications. However, the literature still lacks the right set of sensory infrastructure, communication infrastructure, and machine learning-based algorithms for traffic and road analysis. Therefore, there is a dire need for advanced CCAM infrastructure. In what follows, we provide the contributions of this research.
In this paper, we leverage the CCAM infrastructure for the development of smart infrastructure, which helps in enabling higher levels of autonomous driving capabilities. The aims and objectives of the paper are as follows:
  • To describe the concepts relevant to Level 5 autonomous driving,
  • To identify the common challenges in the roadside infrastructure for AD,
  • To lay down the technical requirements of the CCAM infrastructure,
  • To identify the right set of sensory infrastructure and communication interfaces,
  • To parameterize road and network infrastructures for advancements and evaluations,
  • To process data and execute intelligent machine learning algorithms,
  • To validate the CCAM infrastructure’s mobility applications for traffic flow and mobility predictions.
With the view of highlighting the design goals of Level 5 AD and its challenges, and of directing readers to potential solution candidates, we have carefully structured the paper. In Section 2, we provide background (tutorial-type) information to equip readers with the knowledge needed to comprehend the contents of this article. Section 3 is dedicated to the crucial challenges hampering the realization of Level 5 AD. In Section 4, we provide a literature review with evidence about achieving perception through information available at the roadside units (RSUs). In Section 5, we propose our CCAM solution, an advanced infrastructure to realize the use-cases of autonomous driving in complex urban environments. Section 6 presents the experiments conducted to validate the CCAM solution for autonomous driving. In Section 7, we conclude our research paper.

2. Level-5 Autonomous Driving—An Overview of the Relevant Concepts

In this section, we cover the basic definition of autonomous driving, the levels of automation, use-case groups, and the core layers of autonomous driving.

2.1. What Is Autonomous Driving?

Autonomous driving refers to the concept whereby a vehicle senses its environment and performs maneuvers without human intervention. According to SAE J3016 [8], there are five levels of autonomous driving, ranging from Level 1 (L-1) to Level 5 (L-5). L-1 indicates no automation, whereas L-5 indicates full automation; a fully automated vehicle will not even have a driver’s cockpit. The development of numerous new features in vehicles is necessary for the transition from L-1 to L-5.
  • L-1—Driver Assistance: In L-1 automation, the human operator must always be present and engaged in the Dynamic Driving Task (DDT). The vehicle cannot control speed and steering simultaneously. Furthermore, the driver can monitor any DDT performance decrease with the use of L-1 automatic driving assistance technology. Cruise control, lane departure warning, and emergency braking assistance are a few L-1 examples.
  • L-2—Partial Driving Automation: A human driver must be present at all times. Similar to L-1, if any autonomous driving component fails, the driver notices the failure and takes charge of the vehicle to meet the DDT performance standards. However, L-1 and L-2 differ in that the system is completely in charge of lateral and longitudinal vehicle motion within a constrained operational design domain (ODD). The ODD is a representation of the physical and digital environment in which AVs must operate. Examples of L-2 features include adaptive cruise control, active lane-keep assist, and autonomous emergency braking.
  • L-3—Conditional Automation: When specific operational parameters are satisfied, an L-3 vehicle is capable of assuming complete control and functioning for specific segments of a route. In the event of an automated driving system failure, the system may request that the human driver resume control. The DDT can be carried out by an L-3 AV in an area with heavy traffic; however, it cannot be performed at an accident or collision site. The traffic jam chauffeur is an example of an L-3 feature.
  • L-4—High Automation: The ability of L-4 vehicles to intervene in the event of a problem or system breakdown is the primary distinction between L-3 and L-4 automation. As such, these vehicles typically do not need human intervention, although it is still possible for a human to manually override. L-4 vehicles are capable of driving autonomously; however, they are only able to do so in a limited region until the regulations and infrastructure evolve.
  • L-5—Full Automation: Since L-5 vehicles do not need human intervention, the DDT no longer applies to a human driver. Even the steering wheel and the accelerator/brake pedals will be absent from L-5 vehicles. L-5 AVs will be capable of handling unexpected situations without human intervention and driving through extremely complex settings. In addition, they will be unrestricted by geofencing and capable of performing any task that a skilled human driver can perform.
Although there is no single standardized definition of Level-5 (L-5) autonomous driving (AD) other than that defined by SAE, our perception of L-5 AD may be summarized as a vehicle capable of:
  • Creating a perception of all the dynamics of the complex road traffic environment, including roundabouts, congestion, sharp turns, etc.,
  • Dynamically adapting the local planning and control for any unprecedented event on the roads,
  • Driving in congested and pedestrian-occupied regions of the roads,
  • Driving on unmarked roads,
  • Selecting optimal, autonomic, efficient, and adaptive manoeuvres (e.g., increasing and/or decreasing acceleration, overtaking, and lane changes) and vehicle controls (e.g., steering, speed, braking, and gear changes),
  • Driving through extraordinary traffic conditions on the roads, e.g., bad weather (rain and snow) and emergency vehicles (police vehicles and ambulances).
A clear distinction between L-5 autonomous vehicles and the lower levels of autonomous vehicles is the “broader canvas” that engages more stakeholders; e.g., OEMs will no longer be the sole solution providers. The obvious consequence of this is evident from the number of activities being carried out around the globe by various relevant stakeholders (including mobile network operators, OEMs, sensor providers, authorities, etc.) to achieve the objectives of higher automation levels. Yet another dilemma is the “transition period”, during which we may witness mixed traffic on the roads. Scenarios where autonomous vehicles (of Level 4/5) coexist with classical vehicles are expected to add to the challenges of fully autonomous driving, as they will intensify the impact of the complex dynamics of urban environments.

2.2. Autonomous Driving Use-Case Groups

Autonomous driving scenarios are categorized into the following use-case groups [9], provided by the 3rd Generation Partnership Project (3GPP) standardization.
  • Vehicle Platooning facilitates a dynamic group of vehicles driving with a shorter inter-vehicle gap (directly related to the communication latency).
  • Advanced Driving facilitates collaboration among vehicles to perform complex manoeuvres, e.g., speeding, braking, overtaking, etc., in a safer manner.
  • Remote Driving facilitates a remote driver operating vehicles in areas/situations where high reliability and low latency are achievable.
  • Extended Sensors facilitates a holistic understanding of the complex environment by exchanging data (directly related to high data rates) gathered from all relevant road users (vehicles, RSUs, IoT-enabled devices, etc.).
  • General Aspects and Vehicle QoS facilitates the aforementioned use-case groups in terms of communication coverage, service availability, service quality, and a smoother user experience.
These use-case groups are expected to increase safety, traffic efficiency, and collision avoidance, and to reduce fuel consumption. The recently commercialized 5G network, and its variant standardized under 3GPP Release 16, is expected to provide a latency of 1 ms, a reliability of 99.999%, an availability of 99.999%, and a very high throughput of 10 Gbps [10].

2.3. Autonomous Driving Core Layers

To achieve L-5 AD, we believe that a vehicle needs to drive fully autonomously in a mixed-traffic environment, e.g., a complex urban road traffic environment, and meet the requirements of all the aforementioned use-case groups. Therefore, an L-5-enabled AV needs more sophisticated versions of the basic layers, i.e., perception, planning, and control [10], as shown in Figure 1.
  • Perception layer captures the context of the vehicle’s environment, which may be carried out through different techniques such as: fusion of the on-board sensory data, information exchanges with other vehicles and infrastructure, localization, etc.
  • Planning layer processes involve actions regarding mission, behavioral, motion, space-time, and dynamic environment planning.
  • Control layer executes the planned actions through traditional and prediction-based control, i.e., steering, braking, and trajectory tracking controls.
However, the higher levels of vehicle automation are proportional to the effective implementation of functions and operations specific to the aforementioned three layers. The research community and industrial partners are striving to achieve solutions with the desired levels of sophistication.

3. Common Challenges in Autonomous Driving

In this section, we present a compact version of the common challenges of level 5 autonomous driving.

3.1. Narrow Perception

To create a perception of the environment, autonomous vehicles primarily rely on onboard sensors and vehicle-to-everything (V2X) communication. The context of the environment is created by processing the sensory data and the information obtained through V2X communication. However, the evident drawback of this approach is a limited comprehension of the environment, i.e., the perception of the world is constrained by the detection ranges and capabilities of the onboard sensors. The scientific community, standardization bodies, and stakeholders have developed different strategies to solve this and related problems, such as creating a perception of the environment through external sources (referred to as external or extended perception) and disseminating it as needed. Despite addressing the core problem of narrow environmental understanding, a number of research questions and challenges remain:
  • CH 1: How can a vehicle’s external perception be created accurately? Will the environment’s dynamics be recorded and sent (wherever necessary) in real time?
  • CH 2: Can the classical local dynamic map be improved? If so, what extra information layers ought to be used and how will they help with the use cases of higher automation levels?
  • CH 3: Do the 3GPP Release 15 and 16 standardized interfaces and architectural components support the protocols for the interchange and fusion of external perception with onboard perception? If not, what could be the potential alternatives?
  • CH 4: What fusion techniques could be employed to obtain a real-time and accurate picture of the environment if the perception is generated externally on the RSUs and then fused with the data from the onboard vehicles?
  • CH 5: What is the optimal fusion strategy for accurately fusing the sensory data from heterogeneous sensors?
  • CH 6: Data on autonomous vehicles come from a variety of sources, including the vehicle’s own sensors, sensors on other vehicles, RSUs, etc. How to synchronize them is a significant challenge when dealing with a range of data sources. Should synchronization be handled by the storage system beforehand, or should it be left to the application developer?

3.2. Limited Computation

Operations such as creating environmental understanding, sensory data fusion, finding patterns based on data analytics for behavioral planning, and other operations of this nature are extremely computationally intensive and require rich infrastructure in the vehicle. The edge and roadside units (RSUs) may have computing-rich infrastructure owing to the recent ideas of extended sensors and distributed decision-making for attaining the goals of higher levels of autonomous driving. This entails significant expenses, which leads to an increase in vehicle prices. Therefore, sharing the computation, i.e., creating the perception on the road and sharing it in real time with the vehicles, may be one approach to reducing such high costs. More processing would be directed toward the RSUs, with little kept inside the vehicle. Although this looks appealing, it may be challenging to implement solutions that are reliable, real-time, and highly available. Some of the obvious challenges of this category are discussed below:
  • CH 1: What perception information is to be provided, and to which vehicle? This is crucial since the RSU that creates the perception may cover a large geographic area with numerous vehicles. It goes without saying that not all vehicles need access to every piece of information.
  • CH 2: Can on-road deployed sensors generate an environmental understanding as excellent as that produced by on-board sensors?
  • CH 3: Can demand estimation for computation in the vehicle or at RSUs be conducted using artificial intelligence (AI) approaches? This is significant because new ideas could include virtualized network functions, machine learning as a service (MLaaS), data analytics as a service (DAaaS), information as a service (IaaS), and others that depend on the availability of computing infrastructure at regular times and locations.

3.3. Communication

Communication plays an important role in data sharing for both autonomous vehicles and roadside units. The data, i.e., information, knowledge, perception, etc., are generated on vehicles through onboard sensors and on roadside units through on-road deployed sensors. However, existing communication approaches fall short of properly communicating the complex dynamics of the environment. Ongoing research and development from both academia and industry is producing various strategies to address this and similar issues. Inspired by these, a number of research questions and challenges are as follows:
  • Ch 1: How can a vehicle accurately communicate with other vehicles and roadside infrastructure? Will vehicles be able to achieve real-timeliness in their communications with other vehicles and roadside infrastructure?
  • Ch 2: Can the information generated at a roadside unit be improved? If so, how will it help realize the higher automation levels and the use-cases therein?
  • Ch 3: Can the maps generated either on the vehicle or at the roadside unit be transmitted and/or received over long ranges?
  • Ch 4: What could be the potential alternatives if the existing communication interfaces lack support for the integration of external (roadside unit) and local (vehicle) information?
  • Ch 5: What strategy is to be followed for updating the roadside unit’s information with the onboard vehicle’s information, or vice versa?
  • Ch 6: What is the optimal communication strategy for accurately transmitting and receiving the sensory data from the heterogeneous sensors available at the vehicle and the roadside unit?
  • Ch 7: Data on the vehicle and the roadside unit come from a variety of sources. How should such complex and challenging data be communicated and synchronized?
  • Ch 8: The communication infrastructure entails significant expenses. What are the virtualized alternatives for communication infrastructure?
  • Ch 9: The engagement of communication infrastructure components in upstream and downstream operations is variable. How should we segregate or decide which operations are to be handled and when? Which connectivity needs to be highly available, the upstream or the downstream?
  • Ch 10: How reliable will the communication between vehicles and RSUs be?
  • Ch 11: It is not necessary to communicate every piece of information. What information needs to be communicated, and how should that information be prioritized?
One direction in which these challenges can be addressed is creating external infrastructure that enhances the perception and decision making of the vehicle. A similar research direction has been pursued by research communities around the globe. In what follows, we analyze the research literature in this direction.

4. Literature Review

Perception is one of the core layers of autonomous driving. The perception is generated through the edge dynamic map (EDM) by the roadside unit (RSU, or edge) available in the external infrastructure. The most generic challenges of the EDM include:
  • The challenging representation of various events (data) happening on the road,
  • The right communication bit-pipes that meet dynamically updating requirements,
  • Processing the data to reach conclusive outcomes, e.g., recommendations, warnings, etc.
Obviously, it is difficult to update information (maps) frequently given the constrained transmission and computing resources at the vehicle. Therefore, researchers are focusing on generating information (maps) at the RSU. It should also be highlighted that these challenges keep evolving as we progress toward the goals of higher automation levels. The research community and relevant stakeholders have been active in evolving the EDM by enhancing the data representation and processing, transforming the data into events and driving-relevant information, etc. Table 1 gives the reader a quick overview of achieving perception through information (maps) generated at the RSU. In what follows, we discuss the most relevant approaches that fall nearest to our domain.
EdgeMap [11] explores crowdsourcing data from connected vehicles into an HD map as a way to minimize the usage of network resources while maintaining latency. This is accomplished by designing the DATE (distributed adaptive offloading and resource reservation) algorithm and formulating the offloading decisions and resource usage. To minimize resource usage while maintaining latency, the authors proposed a vehicular offloading agent (operating at subsecond timescales) based on a multi-agent DRL technique and a resource reservation controller (operating at minute timescales) based on Gaussian process regression (GPR). However, the minimization problem can be difficult from three perspectives, namely the unknowable latency function, the Markovian nature of the problem, and the heterogeneity of the time scales associated with the coupled optimization variables. They used the ORB algorithm to extract features from sensor images for relocalization, resulting in a global map (the latest point clouds are broadcast to all vehicles) based on the SLAM technique (which divides the computation between edge and vehicle components). To implement EdgeMap, the following steps are taken:
  • A fully-connected neural network with Leaky Rectifier (Leaky ReLU) and sigmoid activation functions is used for the multiple DRL agents, implemented in PyTorch.
  • The GPR model is built using scikit-learn.
  • The Machine Hall 01–05 scenarios from the EuRoC dataset are used.
  • A time-driven network simulator is built for the experiments, which consists of vehicle, server, and wireless transmission (based on an open-source 5G simulator) modules.
  • A desktop (AMD Ryzen 3600, 3.8 GHz, 32 GB RAM) and a laptop (Intel i7-6500U, 2.5 GHz, 8 GB RAM) were used as the compute vehicles and edge servers in the experiments.
On the desktop and laptop, the mean computation latency is 286.54 ms and 609.27 ms, respectively. With EdgeMap, 0.31 MHz is used for the uplink, 0.02 MHz for the downlink, and 19.4% of computation resources (1.94 server capacity), with 9.2% average resource usage and an average latency of 502.8 ms. This translates to a 33% reduction in average resource usage compared to other approaches.
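To make the resource reservation controller concrete, the following is a minimal sketch of a GPR-based predictor in scikit-learn that maps a minute index to a reserved compute share. It is our own illustration in the spirit of EdgeMap, not the authors’ code: the synthetic demand trace, kernel choice, and mean-plus-two-sigma reservation rule are assumptions.

```python
# Minimal sketch of GPR-based resource reservation in the spirit of EdgeMap [11].
# Synthetic data, kernel, and safety margin are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical history: minute index -> observed compute demand (server share).
minutes = np.arange(120, dtype=float).reshape(-1, 1)
demand = 0.2 + 0.1 * np.sin(minutes[:, 0] / 15.0) + rng.normal(0, 0.02, 120)

kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=1e-3)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(minutes, demand)

# Predict next minute's demand with uncertainty; reserve mean + 2*std so the
# reservation covers most of the predictive distribution.
mean, std = gpr.predict(np.array([[120.0]]), return_std=True)
reservation = mean[0] + 2.0 * std[0]
print(f"predicted demand: {mean[0]:.3f}, reserved share: {reservation:.3f}")
```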
The CAMINO framework [12] supports hybrid cellular technologies, which suit both low-latency use cases (e.g., truck platooning) and throughput-intensive passenger infotainment applications. In this way, the framework supports a multitude of services and enables dynamic management of technologies and integration of devices, interfaces, and services (with the help of the DUST framework). The proposed framework manages C-ITS messages via a controller, controls the data flow between interfaces, and interconnects different V2X wireless technologies, for example, communicating custom messages and standardized messages (i.e., CAM, DENM, and IVIM) with C-V2X PC5 and ITS-G5 modules via UDP sockets, and with an MQTT broker via TCP over wired (or cellular) networks. By using a publisher/subscriber architecture, the DUST framework enables the interconnection of devices, services, and interfaces for CAMINO. Hence, testing of C-ITS services is possible over hybrid communication technologies. The proposed framework provides:
  • The logging service for post-processing information.
  • The forwarder service for external message exchange using MQTT.
  • The GNSS service for positioning and timing information (using the GPS daemon).
  • The GeoCasting service for message distribution (using a geo-tiling technique).
For implementation and validation in real-life environments, the Smart Highway testbed used the Vanetza library (C-ITS protocol suite) for services, a DUST publisher for real-time information access, public Belgian government datasets for fetching traffic sign values, the JSON message format for relevant information, a DUST subscriber for publishing relevant decoded information, the InterCor logging format for positioning/speed data, and entities, i.e., RSUs, MEC, and OBUs, for evaluating V2X technologies. In the packet-loss evaluation, the Packet Delivery Rate (PDR) performance is almost equal for both short-range technologies and is more stable (up to a 600 m distance). For latency, ITS-G5 outperforms C-V2X PC5: the former takes 4 ms and the latter 33 ms to transmit 95% of the packets. Interoperability is evaluated through the V2X communication equipment, considering highways such as the E313 highway in Belgium and the A16 highway in the Netherlands.
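As a concrete illustration of the forwarder-style service listed above, the following is a minimal sketch that bridges raw C-ITS payloads received from a short-range module over a UDP socket to an MQTT broker, in the spirit of CAMINO. It is not CAMINO code: the port, topic, and broker address are illustrative assumptions, and a real service would decode CAM/DENM/IVIM messages before forwarding.

```python
# Minimal sketch of a UDP-to-MQTT forwarder in the spirit of CAMINO [12].
# Port, topic, and broker address are illustrative assumptions.
import socket
import paho.mqtt.client as mqtt

UDP_PORT = 5005       # hypothetical port where the ITS-G5/PC5 module delivers messages
BROKER = "localhost"  # hypothetical MQTT broker
TOPIC = "cits/cam"    # hypothetical topic for CAM messages

# Connect to the MQTT broker over TCP.
client = mqtt.Client()  # paho-mqtt 1.x style; 2.x adds a callback_api_version argument
client.connect(BROKER, 1883)
client.loop_start()

# Listen for raw C-ITS payloads from the radio module via UDP.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", UDP_PORT))

try:
    while True:
        payload, addr = sock.recvfrom(4096)
        # Forward the raw payload; a real service would decode it first.
        client.publish(TOPIC, payload)
finally:
    client.loop_stop()
    sock.close()
```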
The 5G-MEC approach [13] enables cooperative perception between AVs in a specified target area. The AVs register with the MEC and share their information periodically, and this information is stored in a database. The vehicles therefore do not share their information directly with each other, which would introduce latency and communication overhead. When an AV requests a holistic understanding of the environment, e.g., the See-Through use case of Extended Sensors, the MEC provides the exact information of the other vehicle to achieve cooperative perception with low-latency communication. However, it is not clear whether the MEC is able to prioritize between vehicles’ information when responding to the query of the requesting vehicle. In other words, since the database connected to the MEC has a time-series structure, it could contain records of vehicles with the same timestamp. Furthermore, the existing solution approach only covers streaming-based CAM applications.
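The timestamp ambiguity noted above can be made concrete with a small sketch of how a MEC-side query might pick one record per vehicle from a time-series database. The record layout and the tie-breaking rule (prefer the vehicle nearer to the requester) are our own assumptions, not part of [13].

```python
# Minimal sketch of resolving a cooperative-perception query at the MEC,
# illustrating the timestamp ambiguity noted for [13]. Record layout and
# tie-breaking rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Record:
    vehicle_id: str
    timestamp: float   # seconds
    distance_m: float  # distance to the requesting vehicle
    payload: bytes     # encoded sensor/perception data

def latest_records(db: list[Record]) -> dict[str, Record]:
    """Pick one record per vehicle: newest timestamp, nearer vehicle on ties."""
    best: dict[str, Record] = {}
    for rec in db:
        cur = best.get(rec.vehicle_id)
        if (cur is None
                or rec.timestamp > cur.timestamp
                or (rec.timestamp == cur.timestamp and rec.distance_m < cur.distance_m)):
            best[rec.vehicle_id] = rec
    return best
```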
In [14], the authors present the delivery of live video streams (data) while ensuring low latency and study how it can be applied to AD use cases such as remote driving and platooning. To ensure smooth video playback, their solution minimizes video stream outages during handovers between two different network operators (cross-border). The use of live video feeds can assist drivers in making timely decisions. However, timely video delivery is only possible with a low glass-to-glass latency (<500 ms), which depends on the available bandwidth, hardware, and technologies. Remote driving makes use of a computer in the vehicle that is connected to the vehicle’s OBU (5G connectivity) and an HD webcam, whereas platooning makes use of a cloud-based media server for providing live video streams to follower vehicles. The RTSP protocol is used to deliver streams to receivers (streams are announced via REST API calls) in real time. There are three main streaming scenarios (i.e., cloud-based, fog-based, and V2V-based) supported by their architectural entities (i.e., sender, server, and receiver).
The experimental setup is two-fold: (1) the fog-based setup includes entities connected via a local wired network; (2) the cloud-based setup includes entities connected via a 4G LTE network and the OpenStack cloud platform. The experimental machines are:
  • Jetson TX2, GPU with 256 NVIDIA CUDA cores, Quad-Core, 8GB LPDDR4 RAM, and Ubuntu 18.04.
  • A virtual machine (in OpenStack cloud server), 4 cores Intel Processor (Haswell, no TSX, IBRS), 4GB RAM, and Ubuntu Server 18.04.
  • An Intel(R) Xeon(R) CPU, 8 GB RAM, and Ubuntu Desktop 18.04.
The authors leveraged the NVIDIA GPU hardware encoder (H.264/H.265) of the Jetson TX2 to achieve low end-to-end latency. A recovery module (written in C using the GStreamer framework) is implemented to cope with failure cases such as network disruption. As video streaming in the aforementioned AD use cases requires low latency (<500 ms) in order to achieve a safe experience, the authors conducted 11 different executions, which resulted in latencies of 80–100 ms in the fog-based scenario and 105–125 ms in the cloud-based scenario. To guarantee smooth playback during handover, 30 handover operations resulted in recovery times of less than 300 ms, satisfying the requirement of no disruption during a network outage.
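For readers who want to reproduce a comparable low-latency leg, the following is a minimal sketch of an H.264 sender pipeline launched through GStreamer. It is our own illustration, not the authors’ setup: it uses a test video source and a plain RTP-over-UDP leg rather than the paper’s RTSP announcement flow, and the receiver address and port are assumptions.

```python
# Minimal sketch of a low-latency H.264 sender pipeline with GStreamer,
# in the spirit of the streaming setup in [14]. Requires GStreamer installed;
# receiver address and port are illustrative assumptions.
import subprocess

RECEIVER = "192.0.2.10"  # hypothetical receiver address
PORT = 5600              # hypothetical UDP port

# tune=zerolatency keeps the encoder from buffering frames, which is the key
# knob for keeping glass-to-glass latency low.
pipeline = (
    "videotestsrc is-live=true "
    "! video/x-raw,width=1280,height=720,framerate=30/1 "
    "! videoconvert "
    "! x264enc tune=zerolatency bitrate=2000 speed-preset=ultrafast "
    f"! rtph264pay config-interval=1 ! udpsink host={RECEIVER} port={PORT}"
)

subprocess.run(f"gst-launch-1.0 -e {pipeline}", shell=True, check=True)
```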
CarMap [15] creates a lean representation of the 3D feature map using crowdsourcing and updates the map in near real time over a cellular network. The idea behind crowdsourcing is to leverage the vehicles’ sensors so that each vehicle is able to upload map updates to a cloud service, making the updated map available to other vehicles. Among the features of CarMap are the following: a lean representation of feature maps, the use of approximate position information (e.g., from GPS), localization based on dynamic object filtering via a resource-aware algorithm, a map diff (differences) algorithm, and a stitching algorithm. These features address challenges such as map size, environmental dynamics, effective feature matching, and fast map updates. The CarMap architecture collects data, generates segments via a map segment generator, filters objects, performs stitching for map reconstruction in the cloud, and then shares the map with other vehicles for localization. Most of the above-mentioned entities in the CarMap architecture rely on GPS position (a radius of 50 m) and keyframes. The implementation consists of:
  • An Alienware laptop (Intel i7 4.4 GHz CPU, 16 GB DDR4 RAM, NVIDIA GTX 1080 GPU with 2560 CUDA cores),
  • MobileNetV2 (DeepLabv3+) for semantic segmentation,
  • OpenCV for image transformations,
  • The Point Cloud Library (PCL) for point cloud operations,
  • QuickSketch for stitching,
  • The C++ Boost library for serializing and transferring the map files over the network,
  • Software modules for the aforementioned architectural components,
  • Modification to ORB-SLAM2 (feature extraction, index generation, and feature matching modules),
  • The open-source visual odometry algorithm for mono, stereo, and RGB-D cameras in the KITTI vision-based benchmarks.
In the CarMap simulations, three scenarios are considered, and map data are used to validate end-to-end latency and map updates. In addition, the accuracy of end-to-end localization with the reduced map size is assessed in localization scenarios. Latency is below one second (<1 s) in CarMap, since the integration of map updates (50 ms) and stitching/reconstruction (500 ms) totals 550 ms. Localization accuracy is 28× (50×) better for the static (dynamic) map compared to ORB-SLAM2 and QuickSketch. Map size is reduced by 75×. CarMap is also capable of localizing in situations where other approaches fail. Notably, CarMap does not consider denser (high-dimensional) representations of maps, which is a challenge that needs to be addressed for future autonomous vehicles. Additionally, CarMap does not rely on roadside units for storage and processing, which could further improve latency and localization.
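The map diff and stitching ideas can be illustrated with a small sketch in which a vehicle uploads only the features the cloud map does not yet contain. The feature representation (feature id mapped to a 3D position) and the last-write-wins merge are our own simplifying assumptions, not CarMap’s actual algorithms.

```python
# Minimal sketch of a map-diff exchange in the spirit of CarMap [15]:
# a vehicle uploads only features absent from the cloud's base map.
# The feature representation is an illustrative assumption.
import numpy as np

def map_diff(local: dict[int, np.ndarray], base: dict[int, np.ndarray]) -> dict[int, np.ndarray]:
    """Return features present in the local map but absent from the base map."""
    return {fid: pos for fid, pos in local.items() if fid not in base}

def stitch(base: dict[int, np.ndarray], diff: dict[int, np.ndarray]) -> dict[int, np.ndarray]:
    """Merge an uploaded diff into the cloud map (last write wins)."""
    merged = dict(base)
    merged.update(diff)
    return merged

# Usage: the vehicle ships only the diff, keeping uplink traffic small.
base = {1: np.array([0.0, 0.0, 0.0]), 2: np.array([1.0, 0.5, 0.0])}
local = {2: np.array([1.0, 0.5, 0.0]), 3: np.array([2.3, 0.1, 0.4])}
update = map_diff(local, base)   # only feature 3 is uploaded
cloud = stitch(base, update)
```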
In SURROGATES [16], a physical OBU is virtualized into a VNF (i.e., a vOBU) within the MEC layer to make it available for performing computationally intensive tasks and caching data. The contributions of the study include OBU virtualization; NFV and MEC architectures; and the real-world implementation, deployment, and validation of the system. The architecture consists of entities from the vehicle and the virtualized edge and cloud, such as OBUs, RSUs, an OBU manager module, REST protocols and APIs, a virtualized infrastructure manager (VIM), cellular base stations, 802.11-OCB and 4G technologies, WiFi access points, and an IPv6-based network middleware. The middleware is used to share navigation data, monitor statistics, etc. A MANO entity orchestrates the vOBUs as VNFs that utilize the network node’s computing capabilities and feed user applications such as traffic efficiency, safety, pollution monitoring, tracking, and healthy mobility. It is noteworthy that the physical OBUs proactively communicate with the vOBUs to feed a set of services (user applications) by providing monitoring parameters. Accessing and processing data take place at three locations: the data analytics module, the vOBU, and the physical OBU. Thus, the system addresses the issue of multiple services requiring data from the same vehicle, reduces request resolution delays, and saves wireless resources.
Using the 5GINFIRE project ecosystem, SURROGATES is deployed and tested at the University of Murcia (UMU), Spain. The authors deployed three network services (NSs) for the vOBUs, platform management, and final services. Using Open Source MANO and OpenStack, the system performs resource virtualization and data gathering from in-vehicle sensors. It allows the implementation of software services (VNFs) using x86_64 (CISC) equipment. The implementation involved the following steps:
  • An On-Board Diagnosis (OBD)-II interface (OBDLink SX) was used to connect a Laguna LGN-20 OBU (with a TP-LINK MA-180 modem) to the in-vehicle CAN bus for gathering vehicle (Renault Clio 4th generation) state data.
  • An OpenVPN tunnel (router) managed the IPv6 network mobility, since the telecom operator does not support IPv6 traffic.
  • A Dell PowerEdge R430 server (Intel(R) Xeon(R) CPU) running CentOS 7 with OSM v4, Debian 9 (GNU), and OpenStack Pike is used to run the infrastructure, which includes four VNF types, i.e., vOBU, OBU, data analytics, and the vehicle monitoring service.
  • A MySQL database is used to store data, the Grafana tool for monitoring services, and an Android application (on a Samsung Galaxy S7) to access vehicle data from a mobile device.
The vOBUs are evaluated by analyzing the request RTT in different network segments. The RTT is maintained at around 230 ms by the vOBU, while it ranges from 22 ms to 400 ms for the physical OBU. When the delay threshold of 300 ms is reached, the vOBU software provides cached data to serve the requests. That is, 90% of the requests are resolved by the vOBU within 300 ms. With the 4G connection, a new vehicle registers itself in just 400 ms.
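The cache-based request resolution described above can be sketched as follows: the vOBU first queries the physical OBU and falls back to cached data once the delay threshold is exceeded. The 300 ms threshold comes from the text; the class layout and function names are illustrative assumptions.

```python
# Minimal sketch of vOBU-style request resolution with a cache fallback,
# in the spirit of SURROGATES [16]. Class layout is an illustrative assumption.
import time

CACHE_THRESHOLD_S = 0.300  # serve cached data if the OBU does not answer in time

class VirtualOBU:
    def __init__(self, query_physical_obu):
        self._query = query_physical_obu  # callable: (parameter, timeout) -> value
        self._cache: dict[str, object] = {}

    def resolve(self, parameter: str):
        """Try the physical OBU first; fall back to cached data past the threshold."""
        start = time.monotonic()
        try:
            value = self._query(parameter, timeout=CACHE_THRESHOLD_S)
            self._cache[parameter] = value          # keep the cache fresh
            return value, "obu"
        except TimeoutError:
            pass
        if time.monotonic() - start >= CACHE_THRESHOLD_S and parameter in self._cache:
            return self._cache[parameter], "cache"  # stale but fast answer
        raise LookupError(f"{parameter} unavailable from OBU and cache")
```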
Based on the literature reviewed above, we identify the most relevant directions against which to design our differentiator. In this connection, the relevant research contributions are directed:
  • To techniques of data fusion,
  • To approaches for enabling the specific AD use-cases (e.g., remote driving),
  • To creating the high level perception/map through crowdsourcing techniques,
  • To focusing on the information collected from automated and connected vehicles,
  • To solutions for addressing the computation limitation,
  • To the limited services with no contribution towards CCAM infrastructure, etc.
In contrast to the above and similar approaches, our work provides the differentiator by: (i) creating an enriched perception through images, videos, and sensory data captured by the sensory devices mounted on the RSU; (ii) providing approaches to utilize the generated enriched perception for enabling new AD use-cases; (iii) suggesting computation- and communication-rich infrastructure in the classical roadside unit to enable the creation of an enhanced roadside unit; and (iv) contributing enabling concepts that help realize the advanced use-cases of autonomous driving.

5. Proposed CCAM Solution for Autonomous Driving

The motivation to contribute the proposed CCAM infrastructure is driven by the solutions needed to realize the use-cases of autonomous driving in complex urban environments. It should be highlighted that the challenges of urban, open environments are quite different from those of controlled, rural, and highway environments. Hence, the classical methods of creating local perception, finalizing the trajectory, local planning, etc., may not cope with the use-cases of the higher automation levels. Enriched perception and support for decision making are required, and these will be provided by the CCAM infrastructure.
This section focuses on detailing the proposed CCAM infrastructure and the relevant enabling technologies therein. It should be highlighted that the proposed infrastructure is based on the experience collected by the authors in practical implementations of the concepts (as PoCs) in Europe and the UAE. The readers are referred to [17] and Table 17 in [3] to engage and contribute in such activities. In what follows, we provide the details of the needed CCAM infrastructure. The core idea is to extend the classical roadside unit into a more sophisticated and intelligent edge infrastructure that does not only perform the task of relaying messages between and among vehicles and infrastructure. To understand the need for and gains of the CCAM infrastructure, let us discuss the big picture, as shown in Figure 2. The figure shows three layers, where the middle layer provides the details of the CCAM infrastructure. That is, the evolved roadside unit with the CCAM infrastructure is the intermediate level, and it hosts compute hardware and communication interfaces.
We propose that roadside infrastructure should go beyond serving as the roadside unit of an existing vehicular network to develop it in a manner that allows for the development of the following features:
  • The ability to create dynamic segments for roads.
  • The ability to adapt and understand dynamic and complex environments.
  • The ability to process raw data and build intelligent machine learning pipelines.
  • The ability to perform layer-to-layer communication using upstream and downstream channels.
An evolved roadside unit with CCAM infrastructure, with its features and applications, is shown in Figure 2. The results of using on-road deployed sensors to capture data, applying ML/AI methods, and making relevant decisions may be described by the following categories (a minimal code sketch of such a categorization pipeline is given after the list):
  • Categorization—AI/ML methods can be used to categorize vehicles based on the sensory data captured by the on-road sensors.
  • Activity Analysis—The data can be processed and analyzed using AI/ML methods to create heatmaps of various activities across the road segments.
  • Tracking—AI/ML methods can be used to track objects, once the objects are properly detected, to create an enriched perception of the complex road segment.
  • Traffic Analysis—AI/ML methods can process raw data to provide useful insights on traffic signs, traffic signals, speed, mobility patterns, and traffic density.
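As a concrete illustration of the Categorization and Traffic Analysis items above, the following is a minimal sketch of an RSU-side pipeline that counts road users per category in a camera frame. It is our own illustrative example, not the deployed system: the model choice (torchvision’s COCO-trained Faster R-CNN), the score threshold, and the file name are assumptions.

```python
# Minimal sketch of RSU-side vehicle categorization with a pre-trained detector.
# Model choice, threshold, and file name are illustrative assumptions.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor
from PIL import Image

# COCO label ids for the road-user classes of interest.
ROAD_USERS = {1: "person", 2: "bicycle", 3: "car", 4: "motorcycle", 6: "bus", 8: "truck"}

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def categorize(frame_path: str, score_threshold: float = 0.6) -> dict[str, int]:
    """Count detected road users per category in a single camera frame."""
    image = to_tensor(Image.open(frame_path).convert("RGB"))
    with torch.no_grad():
        detections = model([image])[0]
    counts: dict[str, int] = {}
    for label, score in zip(detections["labels"], detections["scores"]):
        name = ROAD_USERS.get(int(label))
        if name and float(score) >= score_threshold:
            counts[name] = counts.get(name, 0) + 1
    return counts

# Usage: counts = categorize("rsu_frame.jpg")  # e.g., {"car": 5, "bus": 1}
```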
In what follows next, we provide the details of the design, involved technologies, and interfaces. The proposed CCAM infrastructure may broadly be categorized into the following major technological and solution categories:
  • Sensors and devices—This category contains all the devices and sensors that are used for collecting the right set of information/data from the road segment, i.e., devices that help capture the dynamics of the road segment. More details about this for the proposed CCAM infrastructure may be found in Section 5.1.
  • Computation—This category corresponds to the right set of accelerators, e.g., GPUs, CPUs, TPUs, FPGA, etc. More details about this for the proposed CCAM infrastructure may be found in Section 5.2.
  • Communication—This category defines the set of communication technologies that help achieve the objectives of the right communication bit-pipes for executing different use-cases of the autonomous driving. More details about this for the proposed CCAM infrastructure may be found in Section 5.3.
  • AI/ML tools and platforms—This category encompasses all the relevant AI/ML toolboxes, platforms, and approaches that help in creating the perception and context awareness for the specified road segment covered by the set of sensors deployed on the roadside unit. More details about this for the proposed CCAM infrastructure may be found in Section 5.4.
In the rest of this section, we provide the details of each of the above categories.

5.1. Sensors and Devices

The right set of information is imperative for an autonomous vehicle to drive through a complex, dynamic environment. To capture this information, we can make use of various types of sensors and devices. For our proposed CCAM infrastructure, we choose cameras, LiDARs, RADARs, etc. The reasons for using these sensors and devices are as follows. The camera-based sensors capture images and/or videos. These visuals capture the road segments, vehicles, pedestrians, traffic signals and sign-boards, and real-time events happening in the vicinity of the RSU. The LiDAR sensors perform 3D mapping of the world around the RSU; a LiDAR’s measurements become more precise as its layer count increases. The RADAR sensors perform reliable object detection and can locate stationary or moving objects. Consider Figure 3, which illustrates the components of the proposed architecture at this layer.
Through the implementation of sensory data fusion and filtering approaches, we can create patterns for different use cases from the collected environmental data (via on-road sensors) as well as data from vehicles. This is accomplished by deploying an IoT middleware, an optimization and machine learning toolbox, and a decision toolbox in the eRSU (cf. Figure 2). By analyzing data from connected on-road deployed sensors and vehicles, the eRSU creates information (a map) at the edge. The information gained from external environments is termed Perception at Edge (PAE). Hence, more informed decision making is performed when the PAE is shared with vehicles.
It is imperative to carefully select the right set of sensors, computing infrastructure, and communication infrastructure to meet the aforementioned capabilities. A technical description is given in Table 2, along with the features and technologies of the selected sensors. The last column describes the measurements made by each sensor. Please note that the sensor selection is based on the practical implementation of similar projects.
As shown in the following figures, the suggested set of sensors at the edge provides the perception (in addition to detecting and tracking objects). The activity analysis outcomes are depicted in Figure 4a, the environmental parameters are shown in Figure 4b, and the generated perception can be visualized in Figure 4c.

5.2. Computation

It goes without saying that the sensory data collected through on-road deployed sensors need to be processed and analyzed in the roadside unit. Pushing them to the central clouds of webscalers contradicts the fundamental concept of the proposed CCAM. This calls for compute infrastructure and accelerators in the roadside unit. For this, we suggest having a greater set of GPUs than CPUs and other accelerators, owing to the nature of the analytics expected at the roadside unit of the proposed CCAM infrastructure. The on-road deployed sensors collect video streams, photos, radar data, LiDAR data, etc., which require compute infrastructure equipped with strong GPUs.

5.3. Communication

(i) Communication bit-pipes for autonomous driving: To achieve the objectives of AD, effective, efficient, and real-time communication between vehicles, infrastructure, and other road users is crucial. Hence, an autonomous vehicle can operate autonomously in a complex environment if and only if the communication bit-pipes are right. The cellular technologies of 3GPP support communications between vehicles, infrastructure, and road users. Cellular-V2X (C-V2X) technology performs well in high-density traffic areas and provides high reliability and real-timeliness in communication between entities. For instance, Figure 5 depicts the communications between entities, i.e., vehicles, pedestrians, networks, and infrastructure. C-V2X is expected to be instrumental in enabling more sophisticated ITS and infotainment services for the autonomous vehicle paradigm, such as vehicle diagnostics, connected infotainment, pay-as-you-drive insurance, autonomous driving, cooperative driving, platooning, remote driving, and other safety features, by leveraging mixed-range modules [18]. However, the true potential of C-V2X technology may not be realized unless proper coordination among the right stakeholders is achieved, as shown in Figure 6.
These entities and their interactions generate large amounts of data and distributed decision engines, which require reliable and fitting communication bit-pipes. C-V2X and B5G technologies are expected to pave the path towards achieving the objectives of L-5 AD. For instance, to achieve enhanced perception and informed decision making, 5G pledges the enhanced mobile broadband (eMBB) and ultra-reliable low-latency communication (URLLC) bit-pipes. The 5G- and B5G-based communications are among the core enablers for achieving L-5 AD. Therefore, the C-V2X communication services are expected to play an important role in placing the right enablers for implanting AD capabilities in AVs [10]. Since full 5G- and B5G-enabled C-V2X communication support is yet to be realized, the existing 3GPP 4G (LTE) and 5G Non-Standalone (5G-NSA) technologies support the use-case groups relevant to realizing AD capabilities.
(ii) Communication to realize inter-stakeholder relationships: Considering the need for communication to realize the envisioned inter-stakeholder relationships, we believe that fully autonomous driving will require a new ecosystem with new stakeholders; i.e., OEMs will no longer be the only stakeholder driving future transportation. Based on our earlier work and our work with operators and industry, we believe in these new stakeholder relationships. Such inter-stakeholder relationships require communication bit-pipes to exist between the stakeholders. It should be highlighted that not all communication bit-pipes need the same QoS. Hence, we need QoS-aware communication, for which we rely on 5G in both the upstream and the downstream. This section discusses the potential relationships between stakeholders that can be created dynamically by leveraging the envisioned CCAM infrastructure. The roles and responsibilities outlined in [19] may provide inspiration for how multiple stakeholders would work closely together to realize L5 AD. We take into account the roles of MNO, CIP, CA, RIO, and OEM/automobile maker, and we believe that the proper interaction of these roles will contribute to the development of the appropriate CCAM infrastructure and solution enablers for the development of smart roads and the realization of L5 autonomous driving. Consider Figure 6 and its description in Table 3 to comprehend how each stakeholder contributes to the envisioned solutions of higher autonomy.
It is evident from Table 3 and Figure 6 that new entries into the autonomous driving ecosystem are necessary. This is to say that the goals of L5 AD can only be met by using a CCAM infrastructure that is enhanced with external information. This calls for the availability of the right set of (processed) data sources at the proper time and location, as well as a reliable and almost real-time communication bit-pipe. Assume that the network operators have made communication bit-pipes available along with an evolved CCAM system incorporating a road infrastructure with intelligent edges. In this case, the regulations and policies are undoubtedly the first to appear. Still, the more significant task is finding technology solutions that permit the composition of dynamic inter-stakeholder relationships. Therefore, the solution component should ideally achieve the design objectives.
(iii) C-V2X communication from a standardization perspective: The 3GPP has been one of the standardization organizations contributing to the technological standardization for implementing various AD use cases, as described in Section 2.2. In this regard, 3GPP initially outlined the fundamental use cases and potential requirements for addressing safety and non-safety aspects through environmental awareness and warning messages [20]. These use cases impose different requirements, such as a latency of 100 ms, message sizes of 50–1200 bytes, mobility performance (an absolute speed of 160 km/h and relative velocities of 160–280 km/h), a response time of 4 s, a maximum frequency of 10 messages/s, high anonymity and integrity, high reliability, communication functions (message transfer, message reception, authorization via the MNO or RSU, etc.), as well as other requirements such as battery and power consumption and location. The readers are directed to the authors’ previous research work [21], which provides a summary of the basic safety and non-safety use cases organized according to V2X communication technologies and a comparison of the potential requirements for each use case.
Standardization of 4G and 5G technologies has mostly addressed the important V2X communication concerns, enabling the stakeholders to achieve the design objectives of L5 AD. C-V2X has drawn a lot of interest from industry, academia, and the research community in recent years. As a result, there are several documents, design methods, and reports on solution-related components. These documents are not mutually exclusive, and their information tends to overlap (though the degree of overlap may vary). There is no classification of technical reports and specifications in the literature. For instance, Ref. [22] offers broad perspectives rather than an exhaustive study covering all components of V2X applications. To cope with these challenges, we thoroughly examine the 3GPP standard documents pertinent to AD in general and C-V2X communication in particular. In this regard, we categorized the technical documents motivated by V2X services. Moreover, we propose a straightforward yet descriptive taxonomy of the standard documents, as illustrated in Figure 7, to align the standard documents with current developments and direct readers to the right documents. We believe that the proposed categorization will considerably help to direct the stakeholders in their research and industrial operations. It is important to highlight that the standard documents include comprehensive guidance applicable to future V2X use cases, although this guidance is dispersed across several different documents. The use cases and other crucial information identified from the 3GPP standards documents for 4G (before) and 5G (now) V2X communication services are analyzed in our most recent work [3,10].
To trace the document frequency from 3GPP Release 14 to Release 17, Figure 8 presents the number of technical documents supporting 4G, 5G, and combined 4G-and-5G communication technologies. The complete set of these technical documents is shown in Figure 7, and the technical specifications (TS) for each communication technology are as follows. For 4G: TS 22.185 [23], TS 22.186 [9], TS 23.285 [24], TS 24.385 [25], TS 24.386 [26], TS 24.486 [27], TS 29.387 [28], TS 29.388 [29], and TS 29.389 [30]. For 5G: TS 23.287 [31], TS 24.587 [32], and TS 24.588 [33]. For 4G and 5G together: TS 23.286 [34], TS 29.486 [35], TS 33.185 [36], and TS 33.536 [37]. Readers are encouraged to follow the given specification links for details such as the title, status, type, release, and versions of each technical specification. Across the releases, the number of 4G documents continued to decrease, the number of combined 4G-and-5G documents remained stable, and the number of 5G documents began to increase (more technical documents are expected in Release 17). Similarly, the breakdown of technological support, the number of technical documents, and the pertinent cellular communication technologies are presented in Figure 9. Note that this breakdown is influenced by the 3GPP Technical Specification Groups (TSG), such as the radio access network (RAN), service and system aspects (SA), and core network and terminals (CT). It provides an easy-to-navigate entry point that equips new readers with the exact support categories rather than forcing them through the puzzling structure of the standardization documentation.

5.4. AI/ML Tools and Platforms

This is a crucial component of the proposed CCAM infrastructure. The idea is to perform various data analytics operations, execute decision-making instances, and run inference for different autonomous driving use-cases at the roadside-unit level. Given that the proposed compute infrastructure ensures the resources required to analyze the data, train/retrain models, host pre-trained models, and conduct inferencing by deploying the right inferencing engine, we propose to deploy an automated MLOps flow, enabling the frequent training/selection of models for the different datasets generated in different road segments and for different use-cases of autonomous driving. For object detection, tracking, clustering of vehicles and objects on the road segments, congestion prediction, etc., we rely on TensorFlow, PyTorch, and other neural network and reinforcement learning frameworks. Readers are encouraged to refer to some of our proposed RL-based and self-learning approaches in this regard [38,39].
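To illustrate the kind of roadside-unit inferencing this component hosts, the following sketch loads a pre-trained object detector and runs it on a single camera frame. This is a minimal sketch, not the deployed system: the model choice (torchvision's Faster R-CNN) and the placeholder frame are our assumptions for illustration.

```python
# Hedged sketch: roadside-unit object detection with a pre-trained
# torchvision model; the camera frame below is a random placeholder.
import torch
import torchvision

# Load a detector pre-trained on COCO (an illustrative choice)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

camera_frame = torch.rand(3, 480, 640)  # placeholder RGB frame, values in [0, 1]
with torch.no_grad():
    detections = model([camera_frame])[0]  # dict with boxes, labels, scores

# Keep only confident detections for downstream perception messages
keep = detections["scores"] > 0.5
print(detections["boxes"][keep], detections["labels"][keep])
```

In a deployed MLOps flow, such a model would be periodically retrained or swapped per road segment, with the inferencing engine serving the latest validated version.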

6. Validation of the CCAM for Autonomous Driving

This section discusses the experimental design, implementation details, the machine learning algorithms for traffic analysis, and a use-case assessing the impact of external perception.

6.1. A Use-Case Experiment

In this experimental use-case, we discuss how a use-case can be implemented to investigate the impact of the external perception created by the on-road deployed infrastructure. This work is supported by the CCAM infrastructure, which is fully developed (physically, through the SCAD Lab at UAEU, UAE) and will be deployed in the near future. For different use-cases of autonomous driving, we conducted a set of experiments to study the impact of external information. These experiments were carried out on a dataset from Berlin capturing a well-known Berlin roundabout, which is relevant to the CCAM infrastructure. A noteworthy point is that the parts of the deployment that collect the data do not track vehicles, and there are no sensors for road conditions; moreover, the different cameras could not track vehicles along their entire path through the intersection. It should also be noted that the activity analysis sensors capture the highest degree of detail, and this tracking is extremely useful for creating valid simulation models. Vehicle counting is performed by traffic analysis sensors on the configured road segments; counts are aggregated over one-minute intervals, and a classification by vehicle type is available for some segments.
We started with a manual data analysis and carried out some sanity checks, inspecting the data to visually validate the expected patterns. Further, a consistency check was carried out and the system accuracy was assessed by:
  • Analyzing properties, data type, aggregation, and time resolution of the data,
  • Statistical aggregation and comparison against reference values,
  • Using spatial correlations to analyze time relationships and constraints in the data.
Next, we performed vehicle counting: the number of vehicles is counted by extracting the vehicles from the image streams. Similarly, we performed vehicle classification, detecting the types of vehicles in the image streams. It is noteworthy that the vehicle types change drastically at certain times in the data. As a result, we opted to focus on the (aggregated) vehicle count for now and defer vehicle-type classification to future work. After accumulating the recorded data in one-minute intervals for traffic analysis, we expected the following:
  • To perform matching between data sources, e.g., working days vs. holidays, normal hour of the day and peaks, daily averages, etc.,
  • To detect temporary closures without difficulty,
  • To have simple correlations between sensor stations,
  • To perform accurate detection of traffic lights.
With the help of interactive plots, we visualized the dataset by freely selecting the time range, sensor locations, and smoothing options. Next, we selected a simple neural network (NN) regressor for traffic volume predictions. The NN regressor was trained on the first 20 of the 38 available days of data. We aimed for a simple, small network that produces acceptable predictions; the experiments therefore considered various input formats, numbers of layers, and numbers of neurons. Figure 10 shows the resulting predictions. Naturally, the nature of an application decides the required accuracy. The neural network configuration and input features are as follows:
  • A total of 100 neurons;
  • Solver (L-BFGS);
  • Activation function (tanh);
  • Minute of the day (0–1439);
  • Day of week (Monday–Sunday (0–6));
  • Is a holiday (true, false);
  • Vehicle count of that minute (number)—the class label/output.
Though the above-mentioned features are important, other promising features exist as well. For instance, the prediction accuracy can be improved when the data contain weather information and/or school vacation periods. As the data were recorded for a limited number of days, we decided against these additional features.
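To make the configuration above concrete, the following sketch trains such a regressor with scikit-learn. This is a minimal sketch under stated assumptions: the synthetic placeholder data and the max_iter value are ours; the paper's Berlin dataset would supply the real per-minute counts.

```python
# Hedged sketch of the NN traffic-volume regressor: 100 neurons,
# L-BFGS solver, tanh activation, features as listed above.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 20 * 1440  # roughly 20 days of one-minute samples (training window)

# Features: minute of day (0-1439), day of week (0-6), holiday flag (0/1)
X = np.column_stack([
    rng.integers(0, 1440, n),
    rng.integers(0, 7, n),
    rng.integers(0, 2, n),
])
y = rng.poisson(30, n).astype(float)  # placeholder vehicle counts

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model = MLPRegressor(
    hidden_layer_sizes=(100,),  # a total of 100 neurons
    solver="lbfgs",             # L-BFGS solver
    activation="tanh",          # tanh activation function
    max_iter=1000,              # assumed; not stated in the text
)
model.fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))
```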

6.2. Dataset Cleaning and Visualization

The considered dataset is a collection of road dynamics measurements recorded at a set of road intersections using three distinct types of sensors deployed at various locations on and around the target areas. The three sensor types are the Kiwi Traffic Analyzer, Cisco Aironet Wi-Fi & CMX, and the Clever Citi Parking Sensor. The traffic measurements were conducted over a set of road segments leading up to an intersection at Ernst-Reuter Square (ERS), where the traffic sensors are deployed. These road segments are as follows:
  • AmpelHardenbergstrvon-BHN2-Outbound;
  • VorplatzHardenbergstrvon-BHN3-Inbound;
  • Otto-Suhr-Allee-TEL1-Inbound;
  • Bismarckstrasse-TEL3-Outbound;
  • Ernst-Reuter-PlatzGeb-A1-Outbound;
  • Bismarckstrasse-TEL3-Inbound;
  • VorplatzGeb-AF1-Rotary;
  • Strdes17Junivon-EB1-Outbound;
  • Marchstrasse-A2-Inbound;
  • Strdes17Junivon-EB1-Inbound;
  • Ernst-Reuter-Platz-TEL2-Rotary;
  • MarchvonGeb-A2-Outbound;
  • Ernst-Reuter-Platzvon-EB2-Rotary.
For mobile devices with Wi-Fi connectivity, the number of devices was measured in six different areas and recorded using the deployed sensors. The recorded data were stored in comma-separated value (CSV) files, one per deployed sensor. The data cover a total of forty-one days for traffic flow, five days for Wi-Fi devices, and twenty-four days for the parking areas. The traffic flow dataset was then cleaned by removing the unused features and records, and the time and date fields were converted into a consistent format for ease of use. Next, we created a single-day subset of the larger, all-encompassing dataset and visualized it in Figure 11.
This helped us see the traffic flow throughout the day graphically and supported informed data cleaning. The goal of this machine learning (ML) exercise is to predict the number of vehicles by learning the traffic flow patterns from all the recorded features in the dataset; understanding these features told us how best to use them in our ML approaches. We normalized the dataset for training purposes and discarded the outlier values present in the vehicle count. The ML regression algorithms were then trained on the running average of the total vehicles throughout the day, using a 60-time-step averaging window. This averages the traffic counts over an hour and damps the large differences between maximum and minimum values.
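The cleaning and smoothing steps above can be sketched with pandas as follows. This is a hedged sketch: the file name and column names ('timestamp', 'vehicle_count') are illustrative placeholders, not the dataset's actual schema.

```python
# Hedged sketch of the cleaning and running-average steps.
import pandas as pd

df = pd.read_csv("traffic_flow.csv")  # hypothetical file name
df["timestamp"] = pd.to_datetime(df["timestamp"], unit="s")
df = df.sort_values("timestamp")

# Discard implausibly large counts before normalization (one possible rule)
df = df[df["vehicle_count"] < df["vehicle_count"].quantile(0.99)]

# 60-time-step (one-hour) running average smooths min/max spread
df["count_hourly_avg"] = (
    df["vehicle_count"].rolling(window=60, min_periods=1).mean()
)
```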
Figure 12 visualizes the traffic flow over three days. We observe that the traffic flow increases steadily in the early hours of the day, then drops, and builds up again in the late afternoon, remaining consistently high during the evening hours before falling once more late at night as the day ends. In addition to measuring traffic flow counts at the intersections, we recorded parking space availability for a parking area next to the intersection and measured the number of available spaces in the lot. This provided an additional layer of mobility monitoring and helped us visualize the mobility management requirements in terms of parking spaces. To this end, we used parking sensors to measure the number of parked cars, recording values for every hour of the day and every minute of the hour, as shown in Figure 13. The parking lot measurements indicate that the parking spots are occupied during the day and become vacant as the day progresses. The density of WiFi-enabled devices for each hour of a day is shown in Figure 14.
The total Wi-Fi device count over a single day is shown in Figure 15. The number of Wi-Fi devices increases from 6 A.M. onward and remains elevated until 7 P.M., peaking between 12 P.M. and 5 P.M. This aligns with the foot traffic expected in a campus setting: there is much foot traffic during those hours, and the need for connectivity increases during these peak hours. The count is low and steady in the early hours of the day and declines steadily in the late hours, indicating that communication requirements are concentrated in the daytime.

6.3. Experimental Setup

We applied machine learning algorithms to our recorded dataset to predict the real values as closely as possible. To do so, we selected two types of datasets from our consolidated dataset bank. First, we prepared a model to predict, as accurately as possible, the vehicle counts at a particular intersection, drawing from a subset of the original dataset consisting of over twenty-four days of records. For this, we selected a window of three consecutive days of data and performed data cleaning on it to develop a sub-sample that our models could consume for training. In addition, we prepared an ML model that predicts the Wi-Fi device counts and helps us build a mobility-requirements prediction model. In this section, we describe in detail how we developed the machine learning-based models for Wi-Fi communication prediction and vehicle count prediction.
For the vehicle count prediction model, the selected data initially arrived in a format unsuited to ML models. We therefore began by tackling the task of predicting vehicle counts: we cleaned the dataset by removing unwanted features, for instance, speed and station identifier. The speed feature was dropped because data points were missing for many vehicles in the dataset. To use the full potential of the ML regression algorithms, we prepared a subset of the dataset for our considered use-case. Next, we prepared a set of indexes to correctly align any inconsistencies in the samples as recorded.
We therefore created an index column sorted by the Unix timestamp value, which makes it much easier to calculate preceding and succeeding values. It is also important to mention that Unix timestamp values were used for experimental purposes only; otherwise, the results are shown in 24-h format. Next, we created a list of features deemed important for predicting the vehicle count: hour, minute, second, type of vehicle, and a one-time-step forward-shifted lag feature. The decision to create a lag feature rests on the well-known ability of lag features to contribute much to predicting time-series values, which is the case herein. This technique increases model accuracy, on the assumption that prior events may contain useful patterns or hidden contextual information indicative of the patterns exhibited.
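The index and lag-feature steps can be sketched as follows; the DataFrame and column names are the illustrative ones from the earlier sketch, not the dataset's actual schema.

```python
# Hedged sketch of the index and lag-feature construction.
df["unix_timestamp"] = df["timestamp"].astype("int64") // 10**9
df = df.sort_values("unix_timestamp").reset_index(drop=True)

# Time-of-day features
df["hour"] = df["timestamp"].dt.hour
df["minute"] = df["timestamp"].dt.minute
df["second"] = df["timestamp"].dt.second

# One-time-step forward-shifted lag: the previous interval's count,
# which often carries useful patterns for time-series prediction
df["count_lag_1"] = df["vehicle_count"].shift(1)
df = df.dropna(subset=["count_lag_1"])
```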
The experimentation was conducted on a single intersection to highlight the potential of mobility management using machine learning. The ML models were trained using a two-day subset of the overall dataset, split into 80% for training and 20% for evaluation. We thus used two days of traffic count measurements and features to train our regressor models, and then evaluated them using the model score. To predict the time-series-dependent vehicle counts, we relied on regression techniques: linear regression, random forest regression, and gradient boosting regression, all available from the scikit-learn ML library. We used this library in conjunction with a few other libraries, i.e., the Python Pandas data science library, Seaborn, and Matplotlib, for data visualization and graph creation. The experiments were run on a machine with an Intel Core i7-6700HQ processor and 16 GB of primary memory; training took three minutes on average.
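A minimal sketch of this setup, assuming the feature columns built above, is given below; the categorical vehicle-type feature mentioned earlier is omitted for brevity (it would need encoding).

```python
# Hedged sketch: 80/20 split (no shuffling, to respect time order)
# and the linear regression baseline.
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

features = ["hour", "minute", "second", "count_lag_1"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["vehicle_count"], test_size=0.2, shuffle=False
)

model = LinearRegression()
model.fit(X_train, y_train)
print("model score (R^2):", model.score(X_test, y_test))
```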
For the WiFi device count prediction model, we developed an XGBoost-based regression model for the Wi-Fi device counts. We took the recorded values for a total of five days of data and selected the features best suited to the task: hour, minute, second, and the lag-shifted count. We divided the dataset into 80% for training and 20% for testing and used the XGBoost library (through its scikit-learn-compatible API) to perform the training. On the machine described above, the training task took less than five minutes on average.
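A sketch of the Wi-Fi regressor follows, assuming a DataFrame 'wifi' with the same style of engineered features; the column names are illustrative.

```python
# Hedged sketch: XGBoost regressor for Wi-Fi device counts via the
# library's scikit-learn-compatible estimator.
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split

features = ["hour", "minute", "second", "count_lag_1"]
X_train, X_test, y_train, y_test = train_test_split(
    wifi[features], wifi["device_count"], test_size=0.2, shuffle=False
)

model = XGBRegressor()  # "vanilla" hyperparameters, as in the text
model.fit(X_train, y_train)
print("model score (R^2):", model.score(X_test, y_test))
```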

6.4. Results and Discussions

For vehicle count, three different regression models were used; their results are given as follows. Using linear regression, we trained our model to learn the average traffic count pattern for each hour of the day. Figure 16 shows how the model performs compared to the real traffic counts; linear regression achieved an accuracy of 65% for vehicle count prediction. As is usual for ML models, the data were split into training and testing sub-samples, and the predictions for the remaining three days were then produced using the linear regression model. The actual vehicle count graph can be observed in Figure 10.
For the gradient boosting regressor, we trained the model using the following hyperparameters: the loss was set to mean squared error, the learning rate was 0.1, the number of boosting stages was 100, the sub-sample value was 1.0, and the split-quality criterion was set to the Friedman mean squared error (FMSE). The accuracy for this method was 94.3%. Similarly, for the random forest regressor, the hyperparameters were as follows: the number of trees was 100, the split-quality criterion was squared error, the maximum tree depth was 2, the minimum sample split was 1, the sample weight function for leaves was equal, and the maximum number of features for split selection equaled the number of features allotted. The accuracy for this method was 93.8%. The results of these training experiments are presented in Figure 17.
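The stated hyperparameters map onto scikit-learn as sketched below. One caveat: scikit-learn requires min_samples_split >= 2, so the reported value of 1 cannot be passed literally and the default of 2 is kept here.

```python
# Hedged sketch: the two tuned regressors with the hyperparameters
# reported in the text, mapped to scikit-learn parameter names.
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor

gbr = GradientBoostingRegressor(
    loss="squared_error",      # mean squared error loss
    learning_rate=0.1,
    n_estimators=100,          # number of boosting stages
    subsample=1.0,
    criterion="friedman_mse",  # Friedman MSE split-quality criterion
)

rfr = RandomForestRegressor(
    n_estimators=100,          # number of trees
    criterion="squared_error", # split-quality criterion
    max_depth=2,               # maximum tree depth
    max_features=None,         # consider all allotted features per split
)
```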
The experimental results indicate a repeating pattern in traffic intensity for the road sections, as predicted by our models and consistent with the general patterns shown in Figure 10. Our trained models predicted the running average of this traffic density accurately: traffic flow is low in the early morning, gradually increases, drops to a midday low, and climbs back around the evening hours. These results demonstrate that traffic flow density can be modeled and accurately predicted by artificial intelligence given enough information about the traffic features and conditions. Such models support informed decisions about our autonomous perception proposals, providing tangible, ahead-of-time estimates that allow the communication-infrastructure resources at our disposal to be provisioned in advance.
For the Wi-Fi device count, Figure 18 shows an accuracy of 41.82% when trained with vanilla hyperparameters and 91% when trained with the gradient boosting regressor. Visually, however, even the weaker predictions follow the actual trend, and these values are important from a mobility-planning perspective.
To better understand the mobility patterns of the people around us from a mobility point of view, SHapley Additive exPlanations (SHAP) values were computed for our regression model, with the features divided into two groups: red and blue. As shown in Figure 19, SHAP values in the red group are positive, meaning they had a positive impact on the prediction; SHAP values in the blue group are negative, meaning they affected the prediction negatively. The SHAP values thus explain how the features used in the regression model contributed to its predictions, helping the reader understand the model's predictions in a more specific and explainable manner. A glossary of frequently used terms is provided in Table 4.
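The SHAP computation can be sketched as follows, assuming the 'shap' package and the trained model and feature frames from the earlier sketches.

```python
# Hedged sketch: SHAP values explaining the regression model's
# predictions on the held-out features.
import shap

explainer = shap.Explainer(model, X_train)
shap_values = explainer(X_test)

# Beeswarm summary: positive SHAP values push a prediction up,
# negative values push it down, feature by feature (cf. Figure 19)
shap.plots.beeswarm(shap_values)
```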

7. Conclusions

It is undeniable that autonomous driving is a complex and challenging technology that requires an intelligent environment. In this paper, we focused on enabling such an environment by augmenting the CCAM infrastructure to create smart roads for autonomous driving; in other words, autonomous driving advances rely heavily on the CCAM infrastructure. We identified the sensory infrastructure, integration platforms, and communication interfaces for interconnecting the components of the infrastructure. In addition, we parameterized the road and network infrastructures for advancement and evaluation under various conditions. The CCAM solution approach was validated by a set of experiments using machine learning algorithms, i.e., a linear regression model, a gradient boosting regression model, and a random forest regression model, for vehicle counts (road traffic analysis), mobility monitoring (parking spaces), and WiFi device counts (foot traffic analysis) using the external information.
The accuracies obtained in these experiments are as follows: for vehicle count prediction, the linear regression model achieved 65% accuracy, the random forest regression model 93.8%, and the gradient boosting regression model 94.3%; for WiFi device count prediction, the linear regression model achieved 41.8% accuracy and the gradient boosting regression model 91%. Hence, traffic count patterns, traffic intensity, traffic flow density, foot traffic, and mobility monitoring and management requirements can be accurately predicted given the right external information about traffic features and conditions. Our goal is to make the roads of United Arab Emirates University smarter by deploying our solution approach for augmented CCAM infrastructure. We therefore plan to establish inter-stakeholder relationships and smarter collaborations between the CCAM (on-road) and vehicle (on-board) infrastructures. These relationships and collaborations will, in turn, introduce new challenges and call for new, effective solutions from the relevant stakeholders, research communities, and industry.

Author Contributions

Conceptualization, M.A.K.; Investigation, M.J.K., M.A.K. and O.U.; Methodology, M.J.K., M.A.K., O.U. and S.M.; Project administration, M.J.K., M.A.K., F.I., H.E.-S. and S.T.; Supervision, M.A.K., F.I. and S.T.; Writing—original draft, M.J.K., M.A.K., O.U. and S.M.; Writing—review and editing, M.J.K., M.A.K., F.I., H.E.-S. and S.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work is partially supported by the Emirates Center for Mobility Research (ECMR) at UAEU, Sandooq Al Watan, and the UAEU-ZU research project.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. SAE Levels of Driving Automation™ Refined for Clarity and International Audience. Available online: https://www.sae.org/blog/sae-j3016-update (accessed on 10 July 2022).
  2. Audi Abandons Self-Driving Plans for Current Flagship—SlashGear. Available online: https://www.slashgear.com/audi-a8-traffic-jam-pilot-level-3-cancelled-tech-self-driving-legislations-28618493 (accessed on 10 July 2022).
  3. Khan, M.A.; El Sayed, H.; Malik, S.; Zia, M.T.; Alkaabi, N.; Khan, J. A Journey towards Fully Autonomous Driving—Fueled by a Smart Communication System. Veh. Commun. 2022, 36, 100476.
  4. Geißler, T.; Shi, E. Taxonomies of Connected, Cooperative and Automated Mobility. In Proceedings of the 2022 IEEE Intelligent Vehicles Symposium (IV), Aachen, Germany, 4–9 June 2022; pp. 1517–1524.
  5. Royuela, I.; Aguado, J.C.; de Miguel, I.; Merayo, N.; Barroso, R.J.D.; Hortelano, D.; Ruiz, L.; Fernández, P.; Lorenzo, R.M.; Abril, E.J. A testbed for CCAM services supported by edge computing, and use case of computation offloading. In Proceedings of the NOMS 2022—2022 IEEE/IFIP Network Operations and Management Symposium, Budapest, Hungary, 25–29 April 2022; pp. 1–6.
  6. Kousaridas, A.; Fallgren, M.; Fischer, E.; Moscatelli, F.; Vilalta, R.; Mühleisen, M.; Barmpounakis, S.; Vilajosana, X.; Euler, S.; Tossou, B.; et al. 5G Vehicle-to-Everything Services in Cross-Border Environments: Standardization and Challenges. IEEE Commun. Stand. Mag. 2021, 5, 22–30.
  7. Hosseini, S.; Jooriah, M.; Rocha, D.; Almeida, J.; Bartolomeu, P.; Ferreira, J.; Rosales, C.; Miranda, M. Cooperative, Connected and Automated Mobility Service Continuity in a Cross-Border Multi-Access Edge Computing Federation Scenario. Front. Future Transp. 2022, 3, 911923.
  8. SAE J3016 Automated-Driving Graphic. Available online: https://www.sae.org/news/2019/01/sae-updates-j3016-automated-driving-graphic (accessed on 20 September 2022).
  9. Service Requirements for Enhanced V2X Scenarios—Technical Specification #22.186. Available online: https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=3180 (accessed on 15 November 2022).
  10. Khan, M.A.; Sayed, H.E.; Malik, S.; Zia, T.; Khan, J.; Alkaabi, N.; Ignatious, H. Level-5 Autonomous Driving—Are We There Yet? A Review of Research Literature. ACM Comput. Surv. 2022, 55, 1–38.
  11. Liu, Q.; Zhang, Y.; Wang, H. EdgeMap: CrowdSourcing High Definition Map in Automotive Edge Computing. arXiv 2022, arXiv:2201.07973.
  12. Naudts, D.; Maglogiannis, V.; Hadiwardoyo, S.; Van Den Akker, D.; Vanneste, S.; Mercelis, S.; Hellinckx, P.; Lannoo, B.; Marquez-Barja, J.; Moerman, I. Vehicular Communication Management Framework: A Flexible Hybrid Connectivity Platform for CCAM Services. Future Internet 2021, 13, 81.
  13. Velez, G.; Perez, J.; Martin, A. 5G MEC-enabled vehicle discovery service for streaming-based CAM applications. Multimed. Tools Appl. 2021, 81, 12349–12370.
  14. El Marai, O.; Taleb, T. Smooth and low latency video streaming for autonomous cars during handover. IEEE Netw. 2020, 34, 302–309.
  15. Ahmad, F.; Qiu, H.; Eells, R.; Bai, F.; Govindan, R. CarMap: Fast 3D Feature Map Updates for Automobiles. In Proceedings of the 17th USENIX Symposium on Networked Systems Design and Implementation (NSDI 20), Santa Clara, CA, USA, 25–27 February 2020; pp. 1063–1081.
  16. Santa, J.; Fernández, P.J.; Ortiz, J.; Sanchez-Iborra, R.; Skarmeta, A.F. SURROGATES: Virtual OBUs to foster 5G vehicular services. Electronics 2019, 8, 117.
  17. Khan, M.A. Intelligent Environment Enabling Autonomous Driving. IEEE Access 2021, 9, 32997–33017.
  18. C-2VX-Enabling-Intelligent-Transport_2.pdf. Available online: https://www.gsma.com/iot/wp-content/uploads/2017/12/C-2VX-Enabling-Intelligent-Transport_2.pdf (accessed on 16 November 2022).
  19. Gyawali, S.; Xu, S. A Study of 5G V2X Deployment by 5GPPP Automotive Working Group. IEEE Commun. Surv. Tutor. 2018.
  20. Technical Report #22.885—LTE Support for V2X Services. Available online: https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=2898 (accessed on 16 November 2022).
  21. Khan, M.J.; Khan, M.A.; Beg, A.; Malik, S.; El-Sayed, H. An overview of the 3GPP identified Use Cases for V2X Services. Procedia Comput. Sci. 2022, 198, 750–756.
  22. Harounabadi, M.; Soleymani, D.M.; Bhadauria, S.; Leyh, M.; Roth-Mandutz, E. V2X in 3GPP Standardization: NR Sidelink in Release-16 and Beyond. IEEE Commun. Stand. Mag. 2021, 5, 12–21.
  23. Service Requirements for V2X Services—Technical Specification #22.185. Available online: https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=2989 (accessed on 15 November 2022).
  24. Architecture Enhancements for V2X Services—Technical Specification #23.285. Available online: https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=3078 (accessed on 15 November 2022).
  25. V2X Services Management Object (MO)—Technical Specification #24.385. Available online: https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=3113 (accessed on 15 November 2022).
  26. User Equipment (UE) to V2X Control Function—Technical Specification #24.386. Available online: https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=3114 (accessed on 15 November 2022).
  27. Vehicle-to-Everything (V2X) Application Enabler (VAE) Layer—Technical Specification #24.486. Available online: https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=3638 (accessed on 15 November 2022).
  28. V2X Control Function to V2X Application Server Aspects (V2)—Technical Specification #29.387. Available online: https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=3118 (accessed on 15 November 2022).
  29. V2X Control Function to Home Subscriber Server (HSS) Aspects (V4)—Technical Specification #29.388. Available online: https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=3119 (accessed on 15 November 2022).
  30. Inter-V2X Control Function Signalling Aspects (V6)—Technical Specification #29.389. Available online: https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=3120 (accessed on 15 November 2022).
  31. Architecture Enhancements for 5G System (5GS) to Support Vehicle-to-Everything (V2X) Services—Technical Specification #23.287. Available online: https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=3578 (accessed on 15 November 2022).
  32. Vehicle-to-Everything (V2X) Services in 5G System (5GS)—Technical Specification #24.587. Available online: https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=3640 (accessed on 15 November 2022).
  33. Vehicle-to-Everything (V2X) Services in 5G System (5GS)—Technical Specification #24.588. Available online: https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=3641 (accessed on 15 November 2022).
  34. Application Layer Support for Vehicle-to-Everything (V2X) Services—Technical Specification #23.286. Available online: https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=3562 (accessed on 15 November 2022).
  35. V2X Application Enabler (VAE) Services—Technical Specification #29.486. Available online: https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=3639 (accessed on 15 November 2022).
  36. Security Aspect for LTE Support of Vehicle-to-Everything (V2X) Services—Technical Specification #33.185. Available online: https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=3141 (accessed on 15 November 2022).
  37. Security Aspects of 3GPP Support for Advanced Vehicle-to-Everything (V2X) Services—Technical Specification #33.536. Available online: https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=3724 (accessed on 15 November 2022).
  38. Khan, M.A.; Tembine, H.; Vasilakos, A.V. Game dynamics and cost of learning in heterogeneous 4G networks. IEEE J. Sel. Areas Commun. 2011, 30, 198–213.
  39. Khan, M.A.; Tembine, H. Meta-learning for realizing self-x management of future networks. IEEE Access 2017, 5, 19072–19083.
Figure 1. The cycle of autonomous vehicle core layers.
Figure 2. Details of the CCAM and evolved roadside unit.
Figure 3. An overview of the Edge architecture for CCAM infrastructure.
Figure 4. A look into some of the visualizations of activity and environmental sensors.
Figure 5. Communication technologies for CCAM infrastructure to enable L-5 autonomous driving.
Figure 6. Inter-stakeholder relationships for L-5 AD.
Figure 7. Taxonomy and 3GPP technical documents for autonomous driving.
Figure 8. Frequency of technical documents in 3GPP releases (14–17) supporting V2X communication.
Figure 9. Frequency of system and network support for AD use-case groups through cellular technologies.
Figure 10. Response of the neural network regressor trained on two weeks of data, predicting the traffic volume given the minute of day. The two curves are predictions for a workday (Monday–Tuesday, cyan) and the weekend (Saturday/Sunday, red), with correspondingly colored training instances in the background.
Figure 11. Scatter plot of five-minute average vehicle counts at two selected locations, highlighting a traffic interruption on 25 January 2020.
Figure 12. Average traffic volume (number of vehicles per hour) data as provided by the system over a period of three days.
Figure 13. Heat maps of parking spaces for spot #6: for every hour of the 25 days (left); for every minute of the 20 h (right).
Figure 14. Heat maps of the number of WiFi devices at two selected locations.
Figure 15. WiFi density for a single day at each location.
Figure 16. Linear regression prediction of vehicle count.
Figure 17. Random forest and gradient boost regressors trained on a partial-day dataset and tested on predicting two days of traffic.
Figure 18. WiFi device predictions: gradient boost model predictions for five days (upper); the training dataset with predictions based on the XGBoost model (lower).
Figure 19. SHAP values for the linear regression model showing feature contributions.
Table 1. Achieving perception through information available in roadside units.

EdgeMap (2022)
- Challenge: Frequent updates of HD maps under limited network resources (transmission and computation); maintaining an up-to-date map through specialized collection vehicles; the vehicular offloading and resource reservation problem.
- Contribution/Focus: A new HD map (EdgeMap) built by crowdsourcing data in MEC; design and implementation of the DATE algorithm; minimum resource utilization and offloading decisions presented via extensive network simulations.
- Solution Approach: A crowdsourced HD map (EdgeMap) for minimum network resources under a balanced latency requirement; a DATE algorithm based on multi-agent deep reinforcement learning for vehicular offloading and Gaussian process regression for resource usage.
- Evaluation: Architecture: yes; experimental setup: yes; simulation: yes; real experimentation: yes; performance metrics: yes.
- Use Case/Scenario: Urban micro (UMi—Street Canyon) channel model; Machine Hall 01–05 scenarios.
- Shortcomings: Not clear.

CAMINO (2021)
- Challenge: Truly flexible and modular platforms for hybrid communication; different communication technologies between different vehicles from different OEMs.
- Contribution/Focus: Enabling cross-technology vehicular communication to support CCAM services; monitoring and logging of valuable information; helping OEMs with management and integration.
- Solution Approach: Integration of different V2X technologies, devices, interfaces, and services (i.e., C-V2X PC5, ITS-G5, 4G/5G C-V2X Uu, vehicle/infrastructure sensors, actuators, HMIs, and external service providers).
- Evaluation: Architecture: yes; experimental setup: yes; simulation: no; real experimentation: yes; performance metrics: yes.
- Use Case/Scenario: Highways in Flanders, the E313 highway (Belgium), and the A16 highway (Amsterdam).
- Shortcomings: Not clear.

5G-MEC (2021)
- Challenge: V2V data sharing creates communication overhead and latency issues.
- Contribution/Focus: Holistic understanding of the surrounding environment; cooperative perception between AVs; enabling streaming-based CAM applications through MEC.
- Solution Approach: MEC provides exactly the information required by the AVs with low communication latency.
- Evaluation: Architecture: no; experimental setup: yes; simulation: no; real experimentation: no; performance metrics: no.
- Use Case/Scenario: See-through application of extended sensors.
- Shortcomings: Unclear whether MEC can prioritize between vehicles for information sharing when two vehicles issue requests at the same time.

SLL-VS (2020)
- Challenge: In-time live video (data) delivery; continuous live video streaming with no disruption (outage) during handover; high glass-to-glass latency.
- Contribution/Focus: Low-latency live video streaming for AD use cases; minimizing video stream outage during handover; smooth video playback.
- Solution Approach: A live video streaming solution that ensures low E2E latency, minimizes video stream outage during handover between two different network operators, and resumes the stream automatically after a network outage caused by a handover.
- Evaluation: Architecture: yes; experimental setup: yes; simulation: no; real experimentation: yes; performance metrics: yes.
- Use Case/Scenario: Remote driving; see-what-I-see in platooning.
- Shortcomings: No comparison with V2V-based streaming for latency and recovery time; no details of the practical implementations (location, etc.); lack of modularity for other CCAM applications.

CarMap (2020)
- Challenge: Difficult vehicle localization due to infrequent HD map updates; large map size and dynamic environments; effective feature matching in feature maps; no fast map updates.
- Contribution/Focus: Developed a feature map (CarMap); reduced map size through a lean representation; improved localization using position information obtained through feature search; developed algorithms for dynamic object filtering, map segment stitching, and map updates.
- Solution Approach: A crowdsourced 3D feature map (CarMap) with near-real-time updates over a cellular network.
- Evaluation: Architecture: yes; experimental setup: yes; simulation: yes; real experimentation: no; performance metrics: yes.
- Use Case/Scenario: Real-world traces and simulated traces in CARLA (suburban streets, freeways, downtown roads); static-scene, dynamic-scene, and multi-lane scenarios.
- Shortcomings: The edge is not considered for storage and compute resources; the stitching operation becomes expensive for a cloud service in real-time experiments.

SURROGATES (2019)
- Challenge: Continuous data gathering and processing from onboard sensors is challenging; mobility and processing needs are issues in the vehicular domain; intermediate cache and processing layers are required for data gathering.
- Contribution/Focus: Virtualization of end-devices in the ITS ecosystem; improved system efficiency and adaptability through virtualization in vehicular environments; an end-to-end architecture for NFV and MEC; real implementation, deployment, and validation tests of the architecture.
- Solution Approach: Virtualize vehicle OBUs as VNFs and create a MEC layer for offloading processing and data-access requests; virtualization of real OBUs in the MEC to create always-available OBUs.
- Evaluation: Architecture: yes; experimental setup: yes; simulation: no; real experimentation: yes; performance metrics: yes.
- Use Case/Scenario: Testbed at the Espinardo Campus, University of Murcia, Spain.
- Shortcomings: Introducing an edge layer is not novel in itself, as existing studies already place an edge layer between the device and cloud layers; however, virtualizing OBUs in the edge layer is a novel approach.
Table 2. Types and descriptions of the sensors.

Car Parking — In parking lots, these sensors serve various uses, including finding a preferred parking space for drivers and identifying wrong parking, parking violations, and expired tickets. Up to 300 spaces can be covered by one sensor, at ranges of up to 400 m. Raw video streams are processed only on-board to increase privacy. Parameters measured: total parking spots, free spots, and occupied spots.

Traffic Analysis — On highways, crowded traffic can be recorded via these sensors, which capture up to four lanes per camera. The recording frequency can be adapted to need, e.g., per month, week, day, hour, or minute. Users can view the evaluations through the camera image on a display, via regular e-mail reports from a reporting engine, or as CSV files. Parameters measured: classification of vehicle types and automatic counting of vehicles.

Activity Analysis — These sensors recognize and distinguish different automobile activities at given geographical locations. Motion and the most-lingered places can be traced via analysis of the sensory data. Activity patterns (pedestrian walking, cycling, etc.) gathered via these sensors may assist dynamic decision-making for AD, e.g., scheduling traffic light controls or LDM trajectory planning. Parameters measured: visualization of motion and dwelling time, objective measurement of hot spots, and compilation of statistical evaluations with adjustable duration and evaluation intervals.

Environmental — The effects of AD on the environment can be analyzed via these sensors. Moreover, different parameters and their relationships can be studied by interpreting and evaluating sensory data collected from environmental and traffic-analysis sensors. Parameters measured: air quality index, NO2, NO, O3, and particulate matter (PM1, PM2.5, PM10).

Road Condition — These sensors are installed on masts (e.g., streetlamps) or bridges and use optical technologies to obtain visual information of the site and measure temperature remotely. Parameters measured: road status (dry, damp, wet, ice, snow, and chemically wet), road surface temperature, snow height, and water film height.

Traffic Light — These sensors integrate with the existing traffic control system and use DSRC communication for short-range message delivery between traffic lights, vehicles, pedestrians, and other road users. Parameters measured: traffic light status, e.g., red, yellow, green.

Camera-based Sensors — These sensors visually scan their surroundings and capture images and videos containing important, relevant information, which is then used for perception creation, object detection, context or scene understanding, object tracking, etc. Parameters measured: object detection, road status, and classification.

LiDAR Sensors — These sensors are popular for 3D mapping of the surroundings. The 3D map is in point-cloud format, and operations over the point cloud depend on the number of layers in the sensor; that is, LiDARs come with various layers detailing the objects they scan. Parameters measured: field-of-view, point density, and depth level based on scanning layers.

RADAR Sensors — These sensors are typically used to detect moving or stationary objects. Through radar, an object can be tracked by determining its distance, angle, and velocity. Parameters measured: range and distance to the detectable object, and object tracking.
Table 3. Inter-stakeholder relationships for the envisioned L-5 AD.

Relationship 1: MNO to CA or RIO or OEM — (i) The MNO provides the communication infrastructure; (ii) the CA, RIO, and OEM are MNO tenants; (iii) the CA controls the data, which it exchanges with other stakeholders in accordance with predetermined rules.

Relationship 2: MNO to Users — The MNO provides services to the users. For infotainment, this relationship still holds true, but not for autonomous driving's communication services.

Relationship 3: OEM or CA or RIO to Users — The OEM/CA/RIO provide services to the users. The services provided by the CA, OEM, and RIO could include perception-as-a-service (PaaS), i.e., data gathered from outside sources such as on-road sensors, OEM backends, etc.

Relationship 4: CIP to CA or RIO or OEM — The communication infrastructure is installed by the CIP and run by the CA/RIO/OEM.
Table 4. Glossary list.

3GPP — The 3rd Generation Partnership Project (3GPP) is the collective name for several standards bodies that develop mobile telecommunications protocols and standards. 3GPP specifications cover cellular telecommunications technologies, such as radio access, core networks, and service capabilities, which offer a comprehensive system definition for mobile telecommunications.

Camera — These offer more thorough information and aid in comprehending depthless objects, which are typically overlooked by other types of sensors. Such depthless objects include speed limit boards, stop signs, slow signs, traffic lights, etc.

CCAM — Stands for Cooperative, Connected and Automated Mobility. It enables the capabilities of autonomous driving by providing sensory, communication, and computation infrastructure.

C-V2X — Stands for cellular vehicle-to-everything communication, which allows vehicles to communicate with each other, pedestrians, the cloud, and their environment.

EDM — Stands for edge dynamic map, which is built over the edge to furnish road users with external information.

LiDAR — The type of sensor that sends and receives signals using a laser beam to detect objects. It fires laser pulses at specific targets to produce a depth map, and these sensors are reliable and real-time. Since LiDAR perceives objects through depth, it is unable to distinguish depthless objects such as traffic lights, signs, etc.

OBU — The onboard unit mounted on vehicles, designed to exchange messages and communicate with other OBUs and RSUs leveraging dedicated short-range communication or PC-5 communication.

Perception — The component of an autonomous vehicle responsible for collecting information from different onboard sensors and external sources, extracting the relevant knowledge, and developing an understanding of the environment.

Radar — The RADAR sensor functions very similarly to the LiDAR, except that it uses radio waves rather than a laser. However, when radio waves come into contact with objects, they absorb less energy than light waves; thus, radar can operate over a relatively long distance.

RSU — The roadside unit can be mounted along a road or on a vehicle. It broadcasts data to OBUs or exchanges data with OBUs in its communication zone.

TS — The Technical Specification documents created by 3GPP. A TS covers the core network and radio components in addition to billing details and speech coding right down to the source-code level. These TS are then transferred into standards.