1. Introduction
The rapid evolution of computer, sensor, and wireless communication technology has ushered in a transformative era in intelligent transportation systems (ITS). Starting from the early phases of individual vehicle intelligence, there has been a gradual shift towards a cooperative vehicle–infrastructure model, resulting in Cooperative, Connected, and Automated Mobility (CCAM) [
1]. Internet of Vehicles (IoV) technology is at the heart of this evolution, integrating communication, control, and computation technologies to significantly enhance road utilization in urban transportation systems. This advancement contributes to heightened transportation efficiency, reduced traffic accidents, lower energy consumption, and the realization of sustainable development within the transportation industry [
2,
3,
4,
5,
6,
7,
8,
9].
The Intelligent Vehicle Infrastructure Cooperative System is central to intelligent transportation, epitomizing vehicular cooperation. This system encompasses vehicle-to-vehicle communication (V2V), vehicle-to-infrastructure communication (V2I), vehicle-to-pedestrian communication (V2P), and vehicle-to-Internet communication (V2N), collectively referred to as vehicle-to-everything (V2X) [
10,
11,
12,
13,
14]. This cooperative paradigm facilitates data exchange among vehicles and surrounding entities, providing precise knowledge of the environment and thereby enhancing traffic safety and efficiency.
A quintessential Intelligent Vehicle Infrastructure comprises two fundamental units: the Roadside Unit (RSU) and the On Board Unit (OBU). Data exchange between these units furnishes vehicles with accurate insights into their surroundings, enabling informed decisions to enhance traffic safety and efficiency. For instance, sensor devices on RSUs can detect approaching objects, such as pedestrians and vehicles, measure their distance and speed, and inform the RSU. Utilizing these data, the RSU dynamically adjusts the phase information of intelligent traffic lights, optimizing traffic flow and efficiency. Similarly, intelligent parking systems leverage surrounding information to model and analyze available parking spaces, enhancing convenience for users.
5G technology is pivotal in advancing several Sustainable Development Goals (SDGs) through its transformative capabilities. Specifically, SDG 9 (Industry, Innovation, and Infrastructure) benefits from the contribution of 5G to enhanced connectivity, low latency, and high throughput, fostering growth in industries and infrastructure development [
15]. To promote SDG 3 (Good Health and Wellbeing), 5G facilitates telemedicine and remote healthcare applications, enabling real-time communication between healthcare professionals and patients [
16]. Smart city initiatives, aligned with SDG 11 (Sustainable Cities and Communities), leverage 5G to optimize transportation, enable precision agriculture, and support responsible consumption and production practices [
17]. Addressing SDG 13 (Climate Action), 5G aids in environmental monitoring, utilizing sensors and devices for real-time data collection to support climate-related initiatives. The technology also aligns with SDG 4 (Quality Education) by providing high-speed and reliable connectivity for remote learning, thereby ensuring access to educational resources [
18]. Furthermore, 5G contributes to SDG 17 (Partnerships for the Goals) by fostering innovation ecosystems and economic growth to create opportunities for collaborative partnerships and initiatives that align with sustainable development objectives. Overall, 5G directly contributes to and advances multiple SDGs, fostering a more sustainable and inclusive global future [
19].
While significant advancements have been made in vehicular communication technologies, there remains a gap in understanding how communication distances influence the performance of edge nodes, particularly in the context of energy consumption and computational demands. Existing studies often overlook the nuanced effects of distance variations on edge computing in vehicular environments, necessitating a comprehensive exploration to bridge this gap. To address it, our research endeavors to answer the following questions:
How do varying communication distances between OBUs and RSUs impact energy consumption patterns in vehicular communication networks?
What is the correlation between communication distance and CPU load at the RSU, and how does it influence edge node performance?
This paper provides insights into the performance dynamics of edge computing in 5G-enabled vehicular communication systems while considering the influence of communication distances. By conducting both real-world experiments and simulations, we offer valuable findings on energy consumption patterns and CPU load variations, contributing to the development of efficient and resilient edge computing solutions tailored for connected transportation systems.
Our research is conducted within a meticulously crafted experimental framework deployed on a 5G testbed. The experimental setup involves a BMW car equipped with an OBU and an RSU supplemented by a simulation environment powered by the NS-3 network simulator [
20]. This dual-pronged approach meticulously captures the intricacies of vehicular communication in both real-world and simulated scenarios. The BMW car, serving as the platform for the OBU, becomes the focal point for in-vehicle communication, embodying the dynamic and diverse nature of vehicular data exchange. Simultaneously, the RSU, functioning as an edge node, plays a pivotal role in the infrastructure, facilitating seamless communication between the vehicle and the broader network. The NS-3 simulation complements the real-world experimentation, providing a controlled environment for detailed technical analysis. Together, these components form a comprehensive ensemble designed to authentically mirror the practical challenges and dynamics of connected transportation systems. This approach ensures the relevance of our study and establishes a solid foundation for technically nuanced insights into the performance of vehicular communication networks.
The remainder of this paper is structured as follows. In
Section 2, we review related works.
Section 3 details the experimental and simulation setups employed in our investigation.
Section 4 presents our results and analysis, highlighting energy consumption patterns and CPU load dynamics across different communication distances. In
Section 5, we discuss the implications of our findings. Finally,
Section 6 concludes the paper, summarizing key insights, contributions, and potential avenues for future research.
2. Related Works
Recently, energy efficiency has attracted considerable attention in the context of Mobile Edge Computing Systems (MECSs) [
21]. In [
22], the authors considered user association and power allocation in millimeter-wave (mmWave)-based ultra-dense networks while ensuring load balance and energy efficiency, harvesting energy from base stations, meeting user quality of service requirements, and managing cross-tier interference. The authors of [
23] explored the optimization challenge of power control and sensing time in a cognitive small-cell network, addressing concerns such as mitigating cross-tier interference, imperfect hybrid spectrum sensing, and energy efficiency. Efficient energy-saving schemes [
24,
25], which have been extensively researched and widely acclaimed, stand out as one of the most prominent avenues for achieving substantial energy savings in cellular networks. Additionally, proposals for base station (BS) sleeping control [
26,
27,
28,
29] have garnered significant attention in the pursuit of energy efficiency within this domain. However, integrating MEC with BSs significantly complicates the energy-saving issue, as BSs now provide both radio access and caching services.
Additionally, because caching resources on MECSs are limited, downloading some content from the core network (CN) is inevitable. Consequently, energy consumption is intricately linked to caching capacity and MECSs’ sleeping decisions over time. In this context, content popularity and caching capacity emerge as two main factors influencing the MECSs’ sleeping decisions. In [
30], the researchers discussed the caching deployment problem for a given wireless transmission rate while assuming fixed values for the backhaul transmission rate, MECS storage capacity, and system energy consumption. However, in practical mobile networks, base stations must account for varying wireless channel conditions, diverse backhaul links, and system power. Thus, it is necessary for caching deployment and MECS activation decisions to consider these three factors [
31,
32].
In [
33], service delay was minimized through virtual machine (VM) migration in small-cell networks (SCNs), and the impact of UE mobility and the dense deployment of small cells was discussed. In [
34], the authors analyzed the coverage performance for computation tasks within a two-tier small-cell network. Note that energy efficiency remains to be addressed in these works, and the service delay requirement has thus far been neglected.
Ning et al. [
35] addressed the evolving demands of vehicular services by proposing an intelligent Internet of Vehicles (IoV) framework to minimize overall energy consumption while meeting user delay constraints. Recognizing the limitations of vehicular fog nodes in delivering satisfactory user experiences, the authors constructed a three-layer offloading framework within the context of intelligent IoV. To tackle the high computational complexity of the problem, it was decomposed into flow redirection and offloading decision components. Subsequently, the researchers introduced a deep reinforcement learning (DRL)-based scheme to solve the optimization problem efficiently. Real-world traces of taxis in Shanghai, China were utilized for performance evaluations, demonstrating the effectiveness of the proposed methods. The results indicated a significant decrease in average energy consumption by approximately 60% compared to the baseline algorithm. However, despite its contributions this work lacks comprehensive simulation-based evaluations to validate the proposed framework under various scenarios and conditions. While real-world trace-based evaluations can offer valuable insights into the effectiveness of the proposed methods, simulation-based studies provide a more controlled environment for testing and validating the system’s performance across a wide range of parameters and scenarios.
Ke et al. [
36] addressed the challenges of minimizing energy consumption and data communication delay in vehicular networks, with a particular focus on the dynamic and time-varying nature of wireless channels and bandwidth. Leveraging MEC servers associated with BSs, vehicles, and RSUs can offload computing tasks to enhance processing efficiency and reduce delays. However, the unstable environment for offloading tasks to the MECS, characterized by varying wireless channel states and available bandwidths, poses significant challenges. To tackle this, the researchers proposed a task computation offloading model for heterogeneous vehicular networks considering multiple stochastic tasks and the variability of wireless channels and bandwidth. To address the complexity of the large action space and obtain a tradeoff between energy consumption and data transmission delay, they introduced an adaptive computation offloading method based on deep reinforcement learning (ACORL). ACORL utilizes the Deep Deterministic Policy Gradient (DDPG) as a deep reinforcement learning method to optimize the offloading policy, efficiently addressing continuous action space; additionally, the method incorporates improvements to the Ornstein–Uhlenbeck (OU) noise vector for effective stochastic exploration. Through extensive simulations under varying channel states and available bandwidth scenarios, the researchers demonstrated the effectiveness of ACORL in adaptively performing local execution and offloading tasks to MEC servers, outperforming two baseline schemes. However, while this work makes significant strides in addressing the challenges of computation offloading in vehicular networks, the evaluation was primarily conducted through numerical simulations; further validation through real-world experiments or field trials could provide more insights into the practical applicability and robustness of the proposed approach.
Feng et al. [
37] proposed a novel approach to address the challenges in end-to-end low-latency transmission and backhaul resources in 5G-enabled V2X networks by leveraging MEC. By considering the crucial factors of reliability and delay in vehicle communication, the authors introduced a joint computation and Ultra-Reliable Low-Latency Communication (URLLC) resource allocation strategy for collaborative MEC-assisted cellular-V2X networks. To tackle the NP-hard optimization problem, the authors decomposed it into two subproblems: URLLC resource allocation and computation resource decisions. They further introduced non-cooperative game theory and bipartite graph techniques to reduce inter-cell interference and optimize channel allocation for URLLC V2X communication. Additionally, an online Lyapunov optimization method was proposed to achieve a tradeoff between average weighted power consumption and delay, with the CPU frequency calculated using the Gauss–Seidel method. Simulation results demonstrated the superiority of the proposed strategy over centralized MEC-assisted V2X approaches, achieving better performance in terms of power consumption, overflow probability, and execution delay. While the paper presents a comprehensive strategy for addressing computation and URLLC resource allocation in C-V2X networks, it lacks real-world experimental validation. Conducting experiments in real-world settings would help to validate the effectiveness and feasibility of the proposed strategy under various scenarios and environmental conditions.
Sadatdiynov et al. [
38] offered a comprehensive overview of optimization methods employed in computation offloading within Edge Computing networks, addressing the challenges of handling massive data generated by Smart Mobile Devices (SMDs). By exploring six types of optimization methods, including Lyapunov optimization, convex optimization, heuristic techniques, game theory, machine learning, and others, the authors provide insights into their respective objective functions, application areas, offloading methods, evaluation methods, and time complexity. Moreover, the authors discuss open research problems in computation offloading, providing valuable guidance for new field researchers. While this review provides a thorough examination of existing optimization methods, it predominantly focuses on theoretical aspects and lacks empirical validation in real-world scenarios. Most of the methods discussed require validation through practical implementations and experimentation in order to assess their efficacy in diverse network environments.
Moghaddasi et al. [
39] offered a pioneering approach to addressing the challenges of data offloading in the context of 5G-enabled vehicular edge computing (VEC) environments. The authors introduced an innovative data offloading strategy designed to optimize overall system performance by leveraging the power of deep reinforcement learning (DRL), specifically employing a cutting-edge application known as double deep Q-networks (DDQN). This strategy demonstrates significant improvements in operational efficiency and effectively manages the complexities inherent in VEC environments, including dynamic mobility conditions and fluctuating network demands. This approach represents a substantial advancement in connected vehicular networks, drastically reducing communication overhead, lowering energy consumption, and minimizing latency. The empirical analysis showcases remarkable enhancements, including an 80% improvement in energy efficiency, a 72.6% reduction in communication overhead, and a 9.8% improvement in delay compared to existing methods. This work is a benchmark for future research endeavors aiming to enhance real-time vehicular network services and tackle the intricate challenges of data offloading within 5G-enabled VEC frameworks. While the presented work offers substantial advancements in optimizing data offloading strategies for 5G-enabled VEC systems, the efficacy of the proposed approach in real-world scenarios with varying network conditions, traffic patterns, and environmental factors remains to be fully validated.
While comprehensive in addressing various energy-related challenges, these existing works often fail to explicitly consider the impact of transmission range on energy consumption. The communication distance between vehicles and infrastructure elements is a crucial variable, especially in dynamic vehicular environments. Longer transmission ranges may introduce increased signal attenuation and interference, potentially affecting energy consumption and overall system performance.
As a consequence, designing an optimal solution to minimize energy costs while guaranteeing high user quality of experience (QoE) is a challenging issue, and standardization efforts play a pivotal role in this trajectory. Establishing common standards enhances interoperability and provides a foundation for seamless integration of diverse components within these systems. Standardized frameworks ensure consistency and compatibility, fostering a collaborative environment where researchers and practitioners can build upon established norms. In the context of the works mentioned above, adherence to standardized protocols becomes crucial for scalability, reliability, and the broader adoption of energy-efficient practices.
3. Experimental and Simulation Setup
In our research, both simulation and real-world experimentation serve complementary purposes to comprehensively explore vehicular communication and edge computing within a 5G network while considering varying communication distances between OBUs and RSUs. The purpose of the simulations is to replicate controlled virtual environments where various scenarios can be tested under specific conditions, providing insights into system behavior and performance in idealized settings. On the other hand, the purpose of real-world experiments is to validate the simulation results and capture the nuances and challenges of actual deployments while considering factors such as signal interference, environmental conditions, and hardware limitations.
3.1. Experimental Setup
The experiment was conducted at the University of Antwerp Campus Groenenborger in Antwerp, Belgium. It involved a real-world deployment scenario with an OBU mounted in a BMW and an RSU strategically placed on the campus, establishing a 5G connection for communication. The goal of the experiment was to generate Cooperative Awareness Messages (CAMs) in the OBU and send them to the RSU over the 5G connection from different distances while recording the energy consumption and central processing unit (CPU) load at the RSU (see
Figure 1).
3.1.1. Network Architecture
The network architecture encompassed advanced components in both the RSU and OBU (see
Figure 2). The RSU featured a General Purpose Compute Unit (GPCU) with an Intel Xeon E5-2620 processor, 32 GB RAM, and a 1 TB SSD. In addition, it included a Cohda MK5 RSU and Cohda MK6c EVK for IEEE 802.11p and Cellular V2X (C-V2X) connectivity, both equipped with built-in Global Navigation Satellite System (GNSS) receivers. A Universal Software Radio Peripheral (USRP) N310 Software-Defined Radio (SDR) facilitated high-speed data transfer with multiple antennas and a wide frequency range, while a Septentrio AsteRX-m2 ensured high-precision GNSS reception.
Similarly, the OBU incorporated an Intel NUC 7i7DNKE with an Intel Core i7-8650U processor, 8 GB RAM, and a 250 GB SSD. The Cohda MK5 OBU and Cohda MK6c EVK provided IEEE 802.11p and C-V2X connectivity, complemented by a USRP B210 SDR for flexible communication. An NVIDIA Jetson AGX Xavier contributed a robust GPU for processing tasks, enhancing the overall communication capabilities. Finally, the Septentrio AsteRX-m2a delivered high-precision GNSS data.
3.1.2. The Experiment
The experiment focused on measuring the energy consumption of the RSU, which serves as the edge node, under varying communication ranges of 30 m, 50 m, 70 m, and 80 m. Data packets were transmitted from the OBU in the BMW to the RSU at these different distances to assess the impact on energy efficiency (see
Figure 3). The recorded metrics included energy consumption and CPU load, providing insights into the performance of the edge node in a 5G communication scenario. To measure energy consumption, we used a Power Distribution Unit (PDU) housed in the RSU, which can remotely power-cycle stuck components and measure the energy consumption of the devices connected to it. We deployed a monitoring system [
40] to record and monitor the CPU load. This system was encapsulated within an LXD container to ensure smooth management and monitoring of the RSU. A Prometheus server with key agents such as Node Exporter and cAdvisor was hosted within this container. These components actively gathered real-time performance metrics and health statuses, relying on a Grafana Dashboard as a centralized interface to visualize dynamic performance data and receive status updates.
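For illustration, the snippet below sketches how such CPU load figures can be retrieved from a Prometheus server exposing Node Exporter metrics. The server address is a placeholder, and the PromQL expression is a standard utilization query rather than the exact dashboard query used in our deployment.

```python
# Minimal sketch: querying RSU CPU utilization from a Prometheus server such as
# the one described above. The server address is a hypothetical placeholder; the
# PromQL expression uses the standard Node Exporter "node_cpu_seconds_total" metric.
import requests

PROMETHEUS_URL = "http://rsu-monitor:9090"  # assumed address of the Prometheus container

# Average CPU utilization (%) across all cores over the last minute.
PROMQL = '100 * (1 - avg(rate(node_cpu_seconds_total{mode="idle"}[1m])))'

def query_cpu_load() -> float:
    """Return the current RSU CPU load in percent, as exposed by Node Exporter."""
    resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": PROMQL}, timeout=5)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0

if __name__ == "__main__":
    print(f"RSU CPU load: {query_cpu_load():.2f}%")
```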
3.2. Simulation Setup
The simulation experiment was run on NS-3 version 3.33. NS-3 is an open-source discrete-event network simulator that provides the capability to trace internal events, a flexible configuration system, and diverse modules for simulating technologies such as Ethernet, Long Term Evolution (LTE), and WiFi in multi-technology scenarios. This versatility allows different segments of the same network to be modeled with different technologies and interconnected when required. The widely adopted NS-3 simulator is well recognized in research and academia, diligently maintained by an active community, and regularly featured in the Google Summer of Code as a flagship open-source project [
41].
Our simulations aimed to replicate a 5G-enabled vehicular communication scenario featuring an OBU and an RSU within a virtual urban environment. The OBU was modeled as a mobile node that generates messages and establishes 5G connections with a strategically placed stationary RSU. The experiment focused on assessing the energy consumption induced by the impact of 5G communication, with variable transmission range settings on the edge node (RSU) to simulate distances of 30 m, 50 m, 70 m, and 80 m. The simulation was executed for multiple scenarios and the resulting data were analyzed to understand how the communication distance influenced the energy consumption of the edge node in the 5G vehicular communication system.
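As an illustration of this workflow, the following sketch sweeps the four distances with one NS-3 run each. The scenario name, command-line flags, and output file names are hypothetical placeholders standing in for the actual simulation script.

```python
# Illustrative sketch of how the four distance scenarios could be swept in NS-3 (3.33).
# The scenario name ("vehicular-energy") and its arguments are hypothetical; the actual
# script and flags used in our setup may differ.
import subprocess

DISTANCES_M = [30, 50, 70, 80]   # OBU-RSU separations evaluated in this study

for dist in DISTANCES_M:
    cmd = [
        "./waf", "--run",
        f"vehicular-energy --dist={dist} --simTime=30 --outFile=energy_{dist}m.csv",
    ]
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)  # one NS-3 run per communication distance
```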
3.2.1. Network Architecture and Implementation
Simulation was conducted to evaluate the energy consumption performance at the 5G Next-Generation Node Base (gNB) within a 5G non-standalone (NSA) deployment [
42]. The network architecture was based on a scenario featuring a single 5G NR gNB. The gNB was co-deployed with a 4G Evolved Packet Core (EPC) network and the User Equipment (UE) was configured with multi-connectivity between a co-deployed LTE evolved Node Base (eNB) and the 5G NR gNB. In the simulation, the distance parameter, denoted as “dist”, was varied within the range of 30 m, 50 m, 70 m, and 80 m in order to analyze its influence on gNB energy consumption under diverse communication ranges. The simulation employed the User Datagram Protocol (UDP) as the transport protocol to transmit the messages generated by the OBU. UDP was chosen for its lightweight and connectionless nature, which aligns with the requirements of edge computing scenarios. To model the energy consumption, we adapted a base station power consumption model based on Physical (PHY) states (see
Figure 4), considering the absence of the RRC_INACTIVE state in the Radio Resource Control (RRC) state machine implemented in the ns-3 mmWave module. The PHY states (IDLE, RX_CTRL, RX_DATA, and TX) were used to emulate the power consumption behavior over the RRC states of an eNB/gNB. The net energy consumption of the base station is then the sum of the per-state energy contributions, as shown in (1):

E = \sum_{S} P_S \times t_S, (1)

where:
E represents the total energy consumption;
S represents the PHY states: IDLE, RX_CTRL, RX_DATA, and TX;
P_S represents the power consumption of each PHY state S;
t_S represents the dwell time of each PHY state S.
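The following minimal example evaluates Equation (1) for a given set of dwell times; the per-state power values are illustrative placeholders rather than the calibrated figures of our power model.

```python
# Worked example of Equation (1): E = sum over PHY states S of P_S * t_S.
# The per-state power values below are illustrative placeholders, not the
# calibrated figures used in our base station power model.
PHY_POWER_W = {           # assumed power draw per PHY state (watts)
    "IDLE": 50.0,
    "RX_CTRL": 60.0,
    "RX_DATA": 65.0,
    "TX": 70.0,
}

def total_energy(dwell_times_s: dict) -> float:
    """Total energy (joules) given the dwell time spent in each PHY state."""
    return sum(PHY_POWER_W[state] * t for state, t in dwell_times_s.items())

# Example: one second of simulated time split across the four states.
print(total_energy({"IDLE": 0.4, "RX_CTRL": 0.1, "RX_DATA": 0.3, "TX": 0.2}))
```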
The implementation in NS-3 leveraged the ns3-mmWave [
42] and 5G-LENA [
43] modules, which integrate the MAC, PHY, and RLC layers per 3GPP specifications. The mmWave simulation utilized an Energy Source Model and a Device Energy Model. The mmWaveSpectrumPhy object within the mmWaveEnbNetDevice provided a trace source for PHY state changes, and the device energy model utilized a trace sink to update the total energy consumption based on the PHY state power values. The NR module originated as a divergence from the NS-3 mmWave simulation tool, initially developed by New York University (NYU) and the University of Padova [
42]. The mmWave simulation model incorporates essential LTE features from the NS-3 LTE module (LENA) [
43], developed entirely at the Centre Tecnològic de Telecomunicacions de Catalunya (CTTC). The mmWave module inherits from the NS-3 LTE module (LENA); consequently, both the mmWave and NR modules are heavily influenced by the LTE module’s design. Specifically, both modules reuse the higher protocol components (Radio Link Control (RLC), Packet Data Convergence Protocol (PDCP), Radio Resource Control (RRC), and Non-Access Stratum (NAS)) and the Evolved Packet Core (EPC) from LTE. This simulation framework allowed us to analyze the energy consumption patterns of the RSU at varying communication distances, providing insights into the efficiency of the 5G network under different scenarios.
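To make the trace-sink mechanism concrete, the sketch below mirrors its accounting logic in Python: each reported PHY state transition charges the energy spent in the previous state to a running total. The class and method names are our own illustration and do not correspond to the actual ns-3 C++ API.

```python
# Conceptual mirror of the trace source/sink mechanism described above: whenever the
# gNB PHY reports a state change, the sink accumulates the energy spent in the previous
# state. This is not the ns-3 C++ API, only an illustration of the accounting logic.
class GnbEnergyModel:
    def __init__(self, phy_power_w: dict):
        self.phy_power_w = phy_power_w       # assumed power draw per PHY state (W)
        self.total_energy_j = 0.0
        self.current_state = "IDLE"
        self.last_change_s = 0.0

    def on_state_change(self, now_s: float, new_state: str) -> None:
        """Trace sink: called whenever the PHY reports a state transition."""
        dwell = now_s - self.last_change_s
        self.total_energy_j += self.phy_power_w[self.current_state] * dwell
        self.current_state = new_state
        self.last_change_s = now_s

model = GnbEnergyModel({"IDLE": 50.0, "RX_CTRL": 60.0, "RX_DATA": 65.0, "TX": 70.0})
model.on_state_change(0.2, "RX_CTRL")
model.on_state_change(0.3, "TX")
model.on_state_change(0.5, "IDLE")
print(f"Energy so far: {model.total_energy_j:.1f} J")
```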
3.2.2. Simulation Parameters
The 5G network was deployed in non-standalone (NSA) mode, indicating its reliance on a 4G Evolved Packet Core (EPC) network, mirroring practical scenarios where both 4G and 5G technologies coexist.
For the user equipment (UE) configuration, a multi-connectivity approach was adopted, enabling simultaneous links with both the LTE eNB and the NR gNB. The LTE eNB served as a local traffic anchor within the core network, with data packets transmitted to and from the mmWave gNB using split bearers. This configuration, detailed in [
44], ensures efficient flow management.
The simulation scenario involved initiating an end-to-end data flow from the UE directed towards the gNB while sustaining a constant bitrate of 100 Mbit/s to analyze energy consumption comprehensively under varying distances and realistic operational conditions. Parameters defining the simulation setup included a bandwidth of 1 GHz for mmWave eNBs operating at a carrier frequency of 28 GHz with a transmission power of 30 dBm; for LTE eNBs, the bandwidth was set to 20 MHz, operating at a carrier frequency of 2.1 GHz with a downlink (DL) transmission power of 30 dBm and an uplink (UL) transmission power of 25 dBm. The system accounted for a noise figure of 5 dB and a minimum Signal-to-Interference-plus-Noise Ratio (SINR) threshold of −5 dB. Both the eNB and UE utilized Uniform Planar Array (UPA) Multiple-Input Multiple-Output (MIMO) arrays, with sizes of 8 × 8 and 4 × 4, respectively. The scanning directions for eNB and UE were each set to 1. The Sounding Reference Signal (SRS) duration was 10 µs, with an overhead of 5% and a period of 200 µs between SRS transmissions. The UE speed was maintained at 0 m/s, and the RLC buffer size was set to 10 MB. The one-way delay on X2 links was configured as 1 ms, while the one-way MME delay was set to 10 ms. The UDP payload size for data transmission was specified as 1024 bytes (see
Table 1).
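For convenience, the parameter set described above can be summarized as a single configuration object; the keys below are descriptive labels of our own rather than ns-3 attribute names.

```python
# Key simulation parameters from Section 3.2.2 collected in one place.
# The dictionary keys are descriptive labels, not ns-3 attribute names.
SIM_PARAMS = {
    "deployment": "5G NSA (NR gNB + LTE eNB, EPC core)",
    "distances_m": [30, 50, 70, 80],
    "app_bitrate_mbps": 100,
    "mmwave_bandwidth_hz": 1e9,
    "mmwave_carrier_hz": 28e9,
    "mmwave_tx_power_dbm": 30,
    "lte_bandwidth_hz": 20e6,
    "lte_carrier_hz": 2.1e9,
    "lte_dl_tx_power_dbm": 30,
    "lte_ul_tx_power_dbm": 25,
    "noise_figure_db": 5,
    "min_sinr_db": -5,
    "mimo_array": {"gnb": (8, 8), "ue": (4, 4)},
    "srs_duration_us": 10,
    "srs_period_us": 200,
    "ue_speed_mps": 0,
    "rlc_buffer_mb": 10,
    "x2_delay_ms": 1,
    "mme_delay_ms": 10,
    "udp_payload_bytes": 1024,
}
```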
The simulation was meticulously designed to capture the nuances of edge node performance in a 5G deployment while taking into account distance variations. The primary focus was on assessing energy consumption patterns and efficiency in the specified network configuration, which provide valuable insights for evaluating system performance in practical 5G deployment scenarios.
3.3. Performance Metrics
In exploring vehicular communication within the edge computing domain, our attention is directed towards two fundamental metrics: energy consumption and CPU load. These metrics carry profound implications for the sustainability and efficiency of communication processes in dynamic vehicular environments.
Energy consumption played a central role in our investigation, mirroring its significance in the day-to-day operation of vehicular communication systems. As vehicles exchange critical information with RSUs, the energy expended in these communication processes becomes a pivotal factor. Optimizing energy consumption is not merely an operational concern but an ecological imperative; it directly influences the longevity and environmental impact of the entire vehicular communication network [
45].
The CPU plays a pivotal role in handling the computational demands associated with the processing and transmitting of CAMs between OBUs and RSUs. As our investigation delves into the dynamics of vehicular communication over a 5G network at varying distances, the CPU emerges as a critical component in the edge node responsible for real-time data processing [
46].
In the context of our study, the CPU of the RSU operates as the core computational hub, handling the processing and transmission of CAMs exchanged between OBUs and RSUs. The significance of the CPU load metric becomes pronounced in scenarios where rapid and concurrent communication between vehicles is indispensable; it reflects the computational intensity and efficiency required to facilitate seamless data exchange in dynamic vehicular environments.
As vehicles exchange crucial information with RSUs, the CPU of the RSU becomes the focal point for handling the intricate computational demands associated with the processing and transmission of CAMs. The CPU is engaged in tasks such as data parsing, protocol processing, and message forwarding. Moreover, the CPU load metric becomes a critical parameter in the context of rapid and concurrent communication between vehicles [
47].
The CPU load metric quantifies the extent to which the CPU is utilized at any given time. In scenarios where vehicles communicate rapidly and concurrently, the CPU load provides a quantitative measure of the computational burden on the RSU’s CPU. High CPU load values indicate increased demand for processing power, potentially affecting the system’s responsiveness and efficiency.
Therefore, the relevance of monitoring the CPU load in real time lies in its direct correlation with the system’s ability to handle the dynamic and demanding nature of vehicular communication. It serves as a tangible indicator of the computational stress experienced by the RSU’s CPU during periods of intense data exchange, offering insights into the system’s performance and its capacity to sustain reliable and efficient communication in dynamic vehicular environments.
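As a simple illustration of how such a utilization figure can be sampled, the sketch below polls the overall CPU load once per second using the psutil library. This is a generic example and not the Prometheus/Node Exporter pipeline actually deployed on the RSU.

```python
# Periodic sampling of overall CPU utilization with psutil, as one way to quantify
# the CPU load metric described above. Generic sketch only; our RSU used the
# Prometheus/Node Exporter stack described in Section 3.1.2.
import psutil

def sample_cpu_load(duration_s: int = 30, interval_s: float = 1.0) -> list:
    """Record CPU utilization (%) once per interval for the given duration."""
    samples = []
    for _ in range(int(duration_s / interval_s)):
        # cpu_percent blocks for `interval_s` and returns the average load over that window
        samples.append(psutil.cpu_percent(interval=interval_s))
    return samples

if __name__ == "__main__":
    load = sample_cpu_load(duration_s=10)
    print(f"mean={sum(load)/len(load):.2f}%  peak={max(load):.2f}%")
```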
The intersection of energy consumption and CPU load in vehicular communication networks establishes a quantifiable link between real-time computational demands and system performance. As vehicles exchange critical information with RSUs, the CPU’s engagement in tasks such as data parsing and message forwarding is reflected in its load metric, providing a tangible measure of computational intensity. High CPU load values correspond to increased energy consumption, emphasizing the direct relationship between computational demands and energy expenditure [
48]. This interplay influences system responsiveness, with elevated CPU load potentially impacting prompt data processing. Understanding this dynamic intersection is pivotal for optimizing efficiency and reliability. Leveraging these metrics can inform the development of edge computing solutions, ensuring adaptive resource allocation and enhanced sustainability in vehicular communication networks.
4. Results and Analysis
As described in
Section 3, our experiment involved a setting in which a car generated a CAM and transmitted it to an RSU over a 5G connection. Distances between the OBU and RSU were varied between 30 m, 50 m, 70 m, and 80 m, reflecting scenarios encountered in urban environments, where shorter communication distances prevail due to the presence of intersections, traffic signals, and closely spaced infrastructure elements; communication distances may be longer in highway scenarios, reflecting their relatively open and linear nature. This experiment allowed us to explore the impact of communication range on the performance of the edge node in terms of energy consumption (see
Figure 5 and
Table 2) and CPU load (see
Figure 6 and
Table 3), which is crucial for the seamless operation of vehicular communication networks.
A parallel simulation was conducted to validate and augment our findings in tandem with our real-world experiment. The simulation aimed to replicate the conditions of the experiment in order to provide a comprehensive view of the system’s behavior in a controlled virtual environment.
4.1. Energy Consumption
For the 30 m scenario, the simulated energy consumption started at 67.92 J/s and remained stable at around 69 J/s throughout the first 10 s of the simulation, with a peak at 76.34 J/s around the 9 s mark. After that, it stabilized again at 69 J/s for the remaining time of the simulation. The minor fluctuations observed in the experimental data were present in the simulation as well, reflecting the arrival of the CAM at the RSU (see
Figure 5b).
Next, we changed the distance between the RSU and OBU to 50 m (see
Figure 5c,d). The data recorded in the real world indicate noticeable fluctuations in energy consumption, ranging from 60 to 65 J/s with occasional peaks and dips. The energy consumption remained stable within the first 9 s at around 61 J/s. After that, it peaked at 65 J/s around the 9 s mark, then started fluctuating with peaks at 67 J/s and dips at 61 J/s, before eventually decreasing toward the end of the experiment (see
Figure 5c).
The simulated energy consumption started at 67.92 J/s and remained stable at around 69 J/s throughout the first 10 s of the simulation, with a peak at 76.34 J/s around the 9 s mark. After that, it stabilized again at 69 J/s for the remaining time of the simulation. The minor fluctuations observed in the experimental data were present in the simulation as well, reflecting the arrival of the CAM at the RSU (see
Figure 5d).
We then raised the distance again to 70 m (see
Figure 5e,f). In the experimental setup, the energy consumption values gradually increased over time in the first 21 s, starting from 57 J/s and maintaining a value ranging from 64 J/s to 68 J/s, eventually reaching a peak at 71 J/s after 21 s. There was a noticeable fluctuation ranging between 64 J/s and 71 J/s in the remaining time of the experiment, which decreased towards the end (see
Figure 5e). On the other hand, the simulation data demonstrated a steady increase from 66.64 J/s to a peak of 77.53 J/s at the 3.1 s mark. After this, the consumption stabilized at 69 J/s for most of the simulation, with occasional fluctuations ranging from 75 J/s to 77 J/s, before eventually stabilizing again at 68 J/s towards the end (see
Figure 5f).
Finally, we raised the distance to 80 m (see
Figure 5g,h). In the experimental setup, similar to the previous setup with a distance of 70 m, we noticed a gradual increase in the energy consumption values, ranging from 57 J/s to 67 J/s and fluctuating over time. Halfway through the experiment, we noticed a peak at 71 J/s, after which the consumption stabilized at 66 J/s. Multiple peaks at 71 J/s were then observed, with the energy consumption eventually decreasing towards the end of the experiment (see
Figure 5g).
On the other hand, the simulation results show a more controlled energy consumption pattern, with values ranging from 66.64 J/s to 77.53 J/s, then stabilizing at 69 J/s and spiking to 77.47 J/s around the 9 s mark. After that, fluctuations ranging from 69 J/s to 76 J/s were observed until the consumption stabilized at 68 J/s towards the end (see
Figure 5h).
4.2. CPU Load
The CPU load at the edge node (RSU) was monitored only in the experimental setup across different communication distances: 30 m, 50 m, 70 m, and 80 m (see
Figure 6). The recorded percentages represent the CPU utilization over time during CAM transmission.
At 30 m, the CPU load demonstrated a relatively stable pattern. The initial utilization hovered around 7.14%, indicating a moderate load on the edge node. There was a noticeable increase as CAM transmission commenced, peaking at around 13.09%. The CPU load then gradually decreased, reaching 10.89% as transmission stopped.
When the distance was increased to 50 m, the CPU load exhibited a more varied trend. Utilization started at approximately 10.89% and rose significantly during CAM transmission, with peaks reaching 17.75%. A decline in CPU load was observed after transmission cessation, stabilizing at around 10.43%.
At 70 m, the CPU load demonstrated further variability. The initial CPU load was around 12.66%, escalating during CAM transmission and reaching peaks of approximately 18.53%. The post-transmission period saw a decline in CPU load, stabilizing at around 13.41%.
The 80 m distance scenario presented distinct CPU load patterns. The initial CPU load was around 13.11%, escalating during CAM transmission, with peaks reaching 20.02%. After transmission concluded, the CPU load gradually decreased, reaching 13.11% and eventually stabilizing at the initial values.
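Taking the peak utilization values reported above, the distance and load relationship can be quantified with a simple Pearson correlation; the short script below is purely illustrative and uses only the four data points listed in this section.

```python
# Quantifying the distance/CPU-load relationship using the peak utilization values
# reported above (30 m: 13.09%, 50 m: 17.75%, 70 m: 18.53%, 80 m: 20.02%).
# Simple Pearson correlation over these four points; illustrative only.
import statistics

distances = [30, 50, 70, 80]              # communication distance (m)
peak_load = [13.09, 17.75, 18.53, 20.02]  # peak CPU load at the RSU during CAM transmission (%)

def pearson(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"Pearson r (distance vs. peak CPU load): {pearson(distances, peak_load):.2f}")
```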
5. Discussion
Our exploration of vehicular communication and edge computing over a 5G network at varying distances between the OBU and RSU sheds light on critical insights for real-world deployments. Our investigation, combining real-world experiments and simulations, allows us to draw nuanced conclusions regarding the performance of the edge node in CAM processing.
In our real-world experiments, the observed fluctuations in energy consumption were a direct result of the dynamic and evolving nature of vehicular communication. This phenomenon can be attributed to several factors inherent in the vehicular communication environment. Specifically, changes in communication distances between the OBU and RSU introduce variations in the strength and quality of the wireless signals.
As vehicles move, signal strength may vary due to factors such as obstacles, interference, and the inherent mobility of the communicating entities. These fluctuations in signal strength directly influence energy consumption patterns as the communication system adapts to maintain a reliable connection. Moreover, environmental conditions such as reflections, multipath effects, and signal attenuation contribute to the observed variations in energy consumption.
At shorter distances (30 m and 50 m) we observed stable energy consumption patterns, emphasizing the reliability of the system in closer proximity. However, more pronounced peaks and troughs were evident at greater distances (70 m and 80 m), suggesting potential challenges in maintaining stability as the communication range increases. These challenges may stem from factors such as signal attenuation, increased propagation delay, interference, and the dynamic nature of the vehicular communication environment.
Our simulation results closely resembled the experimental results, validating the efficacy of our virtual model. The simulated energy consumption patterns demonstrated a controlled increase and occasional spikes, aligning with the corresponding real-world scenarios. This correspondence reaffirms the general upward trend in energy consumption as the distance between the vehicle and the RSU increases, emphasizing the importance of considering transmission range in the design of efficient vehicular communication networks.
Notably, our real-world experiments exhibited higher variability, especially in the scenarios with greater distances. Factors such as signal interference, environmental conditions, and the inherent unpredictability of real-world deployments contributed to fluctuations that were not entirely mirrored in the controlled simulation environment.
To optimize edge computing in vehicular communication networks, future efforts should focus on developing algorithms that are resilient against real-world variations. Adaptive strategies that dynamically adjust computational loads, energy consumption, and communication protocols based on real-time conditions will be instrumental in ensuring reliable performance across diverse scenarios.
Our examination of the CPU load at the edge node (RSU) in different communication scenarios revealed crucial insights into the computational dynamics of vehicular communication networks. A noticeable correlation emerged between the communication distance and CPU load, indicating the distance-dependent computational challenges faced by the edge node. As the distance between the OBU and RSU increases, there is a corresponding intensification in CPU load, highlighting the heightened computational demands associated with CAM processing over longer ranges.
Initiation of CAM transmission consistently triggers a surge in CPU load, underscoring the substantial impact of real-time data processing on the edge node. The observed peaks during transmission emphasize the necessity for robust computational capabilities to handle the instantaneous influx of vehicular information. Post-transmission, the CPU load exhibits varying degrees of stability, reflecting the adaptability of the edge node; this adaptability is crucial for the efficient operation of vehicular communication networks, as it can ensure swift recovery and preparation for subsequent communication events.
Furthermore, each distance scenario presents unique peaks in CPU usage during CAM transmission, indicating distance-specific computational demands. This highlights the need for tailored computational resources based on the communication range. Thus, adaptive algorithms capable of dynamically adjusting computational loads in response to distance-specific demands become imperative for optimizing edge node performance.
These findings underscore the importance of optimizing edge computing solutions for varying communication distances. Future system design should prioritize adaptive algorithms capable of dynamically allocating computational resources based on real-time requirements. This adaptability can ensure efficient resource utilization and minimize the risk of performance bottlenecks, especially in scenarios with extended communication ranges.
When integrated with previous findings on energy consumption patterns, the interplay between energy consumption and CPU load emphasizes the multidimensional nature of optimizing edge computing in vehicular communication networks. In conclusion, the analysis of CPU load provides a comprehensive view of the computational challenges and opportunities associated with varying communication distances, contributing to foundational knowledge for developing resilient and efficient edge computing systems tailored for real-world vehicular communication deployments.
6. Conclusions
In conclusion, our investigation into vehicular communication and edge computing within a 5G network while considering varying communication distances between the OBU and the RSU has illuminated essential insights. Both real-world experiments and simulations consistently portrayed energy consumption patterns showcasing stable operation at shorter distances and increased fluctuations at greater ranges. The analysis of the CPU load at the RSU underscored the correlation between communication distance and computational demands, emphasizing the need for adaptive algorithms to optimize edge node performance.
The results obtained from our investigation offer valuable insights for academia and industry. Our findings highlight the importance of developing adaptive algorithms capable of dynamically allocating computational resources based on real-time requirements, especially in scenarios with varying communication distances. These algorithms can help to optimize edge node performance, ensuring efficient resource utilization and minimizing the risk of performance bottlenecks. System designers can leverage these insights to develop more robust and adaptive systems capable of handling the dynamic nature of vehicular communication environments. Moreover, they can help to shape policies aimed at promoting sustainability, efficiency, and reliability in connected transportation systems.
Overall, the obtained results provide a solid foundation for advancing the state of the art in the fields of vehicular communication and edge computing. Researchers can build upon our work to explore additional aspects of communication scenarios, integrate emerging technologies, and develop novel algorithms and solutions to address the evolving challenges and opportunities in connected transportation systems.
However, it is crucial to acknowledge the limitations of our study. Our real-world experiments exhibited higher variability due to factors such as signal interference and environmental conditions; these uncontrollable elements introduced fluctuations that were not entirely mirrored in the controlled simulation environment. Additionally, while our focus on CAM transmission represents a specific aspect of vehicular communication, future studies may benefit from a more comprehensive exploration of diverse communication scenarios.
For future works, efforts should prioritize the development of adaptive algorithms resilient against real-world variations, with the aim of optimizing edge computing in diverse vehicular communication scenarios. To mitigate the limitations of our study, it would be possible to explore a broader spectrum of communication aspects in future investigations. This exploration would involve considering factors such as data types, network congestion, and the coexistence of various communication technologies. Additionally, investigating the integration of emerging technologies such as machine learning and blockchains in vehicular communication networks holds promise for enhancing efficiency and reliability in real-world deployments.