Vehicle Collaborative Partial Offloading Strategy in Vehicular Edge Computing
Abstract
1. Introduction
- We design a collaborative partial computational offloading strategy based on divisible tasks. Each computational task can be divided into at most three parts, which can be processed locally by the task-generating vehicle, offloaded to the MEC server, or offloaded to a nearby collaborative vehicle, allowing for better utilization of network resources.
- We propose a multi-factor comprehensive evaluation method for collaborative vehicles. It takes into account factors such as the mobility, remaining battery power, and computing power of vehicles in the vicinity of the task-generating vehicle to find more suitable candidate collaborative vehicles (see the sketch after this list).
- We propose a partial offloading approach based on deep reinforcement learning (DRL). Building on the collaborative vehicle evaluation method, the Double Deep Q-Network (DDQN) algorithm is used to dynamically adjust the proportion of each task part, achieving load balancing across service nodes and reducing delay.
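To make the evaluation idea concrete, the following Python sketch filters and scores candidate vehicles by contact time, remaining battery, and computing power. The thresholds, weights, and field names here are illustrative assumptions only; the actual discrimination model is defined in Section 3.2.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    contact_time: float  # predicted time (s) the vehicle stays within V2V range
    battery_pct: float   # remaining battery, 0-100
    cpu_hz: float        # available on-board computing power (Hz)

def score(c: Candidate, min_contact: float = 5.0, min_battery: float = 20.0) -> float:
    """Multi-factor score; vehicles below the hard thresholds are ruled out."""
    if c.contact_time < min_contact or c.battery_pct < min_battery:
        return float("-inf")  # ineligible as a collaborative vehicle
    # Illustrative weights only; the paper's model may combine factors differently.
    return 0.5 * c.contact_time + 0.2 * c.battery_pct + 0.3 * (c.cpu_hz / 1e9)

candidates = [Candidate(12.0, 80.0, 2e9), Candidate(3.0, 90.0, 4e9)]
best = max(candidates, key=score)  # the second candidate fails the contact-time check
```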
2. Related Works
3. System Model
3.1. Network Architecture of VEC
- The offloading vehicle generates a task and transmits a task request to the MEC server.
- Upon receiving the request, the MEC server collects relevant information from both the offloading vehicle and potential collaborative vehicles.
- Based on the system resource state, offloading targets, and available strategies, the MEC server calculates the optimal offloading strategy.
- The MEC server informs the offloading vehicle and selected collaborative vehicles about the offloading decision.
- Following the decision, the offloading vehicle divides the task and offloads portions to the MEC server and the designated collaborative vehicles.
- The offloading vehicle, MEC server, and collaborative vehicles process their assigned task splits concurrently.
- Once finished, the MEC server and collaborative vehicles return their processed results to the offloading vehicle.
- Finally, the offloading vehicle aggregates the results and provides feedback on the task execution (the sketch after this list mirrors these steps).
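Read as a whole, the steps above form a simple request/decide/split/aggregate protocol. The sketch below mirrors steps 5–8 under stated assumptions (stub CPU rates, a fixed three-way split, and compute delays only, with transmission delays omitted); it illustrates why the task delay is governed by the slowest of the three concurrent parts, and is not the paper's implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def compute(cycles: float, cpu_hz: float) -> float:
    """Stand-in for processing one task part; returns its compute delay (s)."""
    return cycles / cpu_hz

def offload(task_cycles: float, split: tuple[float, float, float]) -> float:
    """Steps 5-8: divide the task per the MEC's decision, process the parts
    concurrently, and take the slowest branch as the completion time."""
    speeds = (1e9, 10e9, 2e9)  # assumed CPU rates: local vehicle, MEC, collaborator
    with ThreadPoolExecutor() as pool:
        delays = list(pool.map(compute, (task_cycles * p for p in split), speeds))
    return max(delays)  # results return to the vehicle once every part finishes

print(offload(8e8, (0.2, 0.6, 0.2)))  # e.g., an 800-Megacycle task -> 0.16 s
```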
3.2. Collaborative Vehicle Discrimination Model
3.3. Computation Model
3.3.1. Local Calculation
3.3.2. Offloading to the MEC Server
3.3.3. Offloading to Collaborative Vehicle
4. Problem Formulation
- Constraint (15) defines the offloading decision variables for task i.
- Constraint (16) ensures that task i is processed in its entirety.
- Constraint (17) guarantees that the processing time of each task meets its specified latency requirement.
- Constraint (18) ensures that the RSU transmits results back to vehicle i before it moves out of communication range.
- Constraint (19) dictates that collaborative vehicle j returns its results to vehicle i before the two vehicles can no longer communicate (a sketch of these constraints follows this list).
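Because the numbered equations themselves are not reproduced above, the block below gives one plausible LaTeX reconstruction of constraints (15)–(19). The proportions, delay terms, and dwell times are assumed symbols inferred from the notation table; the paper's exact expressions may differ.

```latex
% Plausible reconstruction only; symbol names are assumptions from the notation table.
\begin{align}
\alpha_i,\ \beta_i,\ \gamma_{i,j} &\in [0,1] \tag{15}\\
\alpha_i + \beta_i + \gamma_{i,j} &= 1 \tag{16}\\
\max\bigl\{T_i^{\mathrm{loc}},\ T_i^{\mathrm{V2I}},\ T_i^{\mathrm{V2V}}\bigr\} &\le T_i^{\max} \tag{17}\\
T_i^{\mathrm{V2I}} &\le t_{i,\mathrm{RSU}} \tag{18}\\
T_i^{\mathrm{V2V}} &\le t_{i,j} \tag{19}
\end{align}
```

Here t_{i,RSU} and t_{i,j} denote the residual dwell times of vehicle i within the RSU's coverage and within V2V range of vehicle j, respectively.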
5. Solution
5.1. Markov Decision Process
- State space: The state of vehicle i is described by its location, speed, on-board computing power, and the computing power available from the nearby MEC server and candidate collaborative vehicles, where the location of vehicle i is given by its horizontal and vertical coordinates. The state space S of the whole system is then composed of the location, speed, and on-board computing power of all vehicles, together with the MEC server computing power and the determining matrix.
- Action space: Within the proposed VEC network, a deep reinforcement learning (DRL)-based controller resides on the RSU, acting as the agent that interacts with the environment and generates decisions. In any given state, each OV chooses a specific offloading decision from its set of available options as its action. Collectively, the offloading decisions of all OVs form the system's action space A.
- Reward function: Since minimizing the total time delay is our goal, the reward function is designed to be directly proportional to the negative of the delay. To prevent the learning process from becoming stuck at suboptimal solutions, this paper proposes a reward normalization scheme for OVs that scales all OV action rewards to the range between −1 and 0. Additionally, any invalid action selection incurs the minimum reward of −1 (see the sketch after this list).
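A minimal sketch of the state vector and the normalized reward follows, assuming a simple concatenated state layout and a linear scaling by the maximum tolerable delay; the paper's exact normalization is not reproduced here.

```python
import numpy as np

def build_state(pos, speed, f_local, f_mec, f_candidates):
    """State of one offloading vehicle: location, speed, on-board compute power,
    MEC compute power, and candidate-vehicle compute power (layout assumed)."""
    return np.concatenate([np.asarray(pos, float), [speed, f_local, f_mec], f_candidates])

def reward(total_delay: float, max_delay: float, valid: bool) -> float:
    """Negative-delay reward scaled into [-1, 0]; invalid actions get the floor -1."""
    if not valid:
        return -1.0
    return max(-1.0, -total_delay / max_delay)

s = build_state(pos=(120.0, 4.0), speed=16.0, f_local=1e9, f_mec=10e9,
                f_candidates=[2e9, 3e9])
print(reward(total_delay=0.3, max_delay=1.0, valid=True))  # -0.3
```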
5.2. Computation Offloading Based on Reinforcement Learning
5.3. Computation Offloading Based on Deep Reinforcement Learning
- Initialization (Line 1): The algorithm begins by initializing the parameters for both the main and target networks. Additionally, an experience memory is created to store past interactions with the environment.
- Action Selection (Lines 5–9): At each time step, the algorithm employs the Epsilon-Greedy strategy to select an action a.
- Environment Interaction (Lines 10–11): The chosen action a is then taken in the environment. The environment responds with the next state s′ and a reward signal r indicating the outcome of the action (Line 10). The experience gained from this interaction is stored in the experience memory D (Line 11).
- Network Update (Lines 12–17): At each time step, a mini-batch (size U) of experiences is randomly sampled from the experience memory and then used to update the parameters of the main network using the Bellman equation (Lines 12–14). The target network’s parameters are periodically (every K time steps) updated with a copy of the main network’s parameters (Lines 15–17).
- Offloading Decision (Line 19): At the end of each episode, the final optimal offloading strategy is obtained. A sketch of these steps follows the algorithm caption below.
Algorithm 1: Collaborative Partial Computation Offloading Algorithm Based on DDQN
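Since the pseudocode box itself is not reproduced here, the PyTorch sketch below maps the line references from the walkthrough onto a minimal DDQN skeleton. Network sizes, hyperparameters, and the transition format are placeholder assumptions rather than the paper's implementation.

```python
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 8, 10, 0.9  # placeholder sizes and hyperparameters

def make_net() -> nn.Sequential:
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))

main_net, target_net = make_net(), make_net()
target_net.load_state_dict(main_net.state_dict())  # Line 1: initialize both networks
optimizer = torch.optim.Adam(main_net.parameters(), lr=0.01)
memory: deque = deque(maxlen=10_000)               # experience memory D

def select_action(state: torch.Tensor, epsilon: float) -> int:
    """Lines 5-9: epsilon-greedy action selection."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(main_net(state).argmax())

def store(s, a, r, s2):
    """Line 11: store one transition (as tensors) in D."""
    memory.append((s, torch.tensor(a), torch.tensor(r, dtype=torch.float32), s2))

def update(batch_size: int = 32):
    """Lines 12-14: sample a mini-batch and update the main network."""
    if len(memory) < batch_size:
        return
    s, a, r, s2 = (torch.stack(x) for x in zip(*random.sample(memory, batch_size)))
    # Double DQN target: the main net chooses the action, the target net evaluates it.
    next_a = main_net(s2).argmax(dim=1, keepdim=True)
    target = r.unsqueeze(1) + GAMMA * target_net(s2).gather(1, next_a).detach()
    q = main_net(s).gather(1, a.unsqueeze(1))
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

def sync_target():
    """Lines 15-17: every K steps, copy main-network weights to the target network."""
    target_net.load_state_dict(main_net.state_dict())
```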
6. Experiment and Analysis
6.1. Environment
6.2. Analysis
- High discount factor (e.g., γ = 0.99): While it encourages the agent to consider future rewards, a very high discount factor can make convergence difficult, because the agent must account for the impact of its actions over many future steps, which complicates training.
- Low discount factor (e.g., γ = 0.4): A very low discount factor can lead to faster convergence but also smaller long-term rewards, because the agent prioritizes immediate rewards and neglects the potential benefits of actions with delayed payoffs. In simpler terms, the agent becomes less focused on the long-term consequences of its decisions, as the numerical example below illustrates.
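A quick numerical illustration of this trade-off: the effective horizon of the discounted return scales roughly as 1/(1 − γ), so γ = 0.99 still weighs rewards about a hundred steps ahead, while γ = 0.4 is dominated by the next step or two.

```python
def discounted_return(rewards, gamma):
    """Sum of gamma**t * r_t over one episode's reward sequence."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

rewards = [-0.1] * 50  # a 50-step episode with a constant per-step delay penalty
print(discounted_return(rewards, 0.99))  # ~ -3.95: distant steps still matter
print(discounted_return(rewards, 0.40))  # ~ -0.17: only the first few steps count
```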
7. Conclusions
- Currently, the model only considers the offloading vehicle’s local MEC server and nearby collaborative vehicles. This can lead to increased task failure rates when dealing with a high task volume due to latency constraints. In future research, we can explore incorporating surrounding MEC servers to enable task relay and load balancing, potentially reducing task failure rates.
- The current approach prioritizes minimizing latency. However, other crucial factors like the task failure rate also exist. We can investigate combining these metrics into a unified objective function to achieve a more balanced and optimized task offloading strategy.
- Incentive mechanisms for collaborative vehicles are not considered in this paper. Well-designed incentives could create a win–win situation for both collaborative vehicles and MEC servers: mechanisms that maximize collaborative vehicles' benefits would attract more vehicles to participate and improve the success rate of task offloading, while mechanisms that also account for the benefits of service providers (MEC operators) would help alleviate the burden on MEC servers.
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Qi, Q.; Ma, Z. Vehicular edge computing via deep reinforcement learning. arXiv 2019, arXiv:1901.04290.
- Zhang, K.; Mao, Y.; Leng, S.; Maharjan, S.; Zhang, Y. Optimal delay constrained offloading for vehicular edge computing networks. In Proceedings of the 2017 IEEE International Conference on Communications (ICC), Paris, France, 21–25 May 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–6.
- Guo, H.; Liu, J. Collaborative Computation Offloading for Multiaccess Edge Computing Over Fiber–Wireless Networks. IEEE Trans. Veh. Technol. 2018, 67, 4514–4526.
- Zhou, Z.; Feng, J.; Tan, L.; He, Y.; Gong, J. An Air-Ground Integration Approach for Mobile Edge Computing in IoT. IEEE Commun. Mag. 2018, 56, 40–47.
- Li, Y.; Yang, B.; Chen, Z.; Chen, C.; Guan, X. A Contract-Stackelberg Offloading Incentive Mechanism for Vehicular Parked-Edge Computing Networks. In Proceedings of the 2019 IEEE 89th Vehicular Technology Conference (VTC2019-Spring), Kuala Lumpur, Malaysia, 28 April–1 May 2019; pp. 1–5.
- Wang, H.; Peng, Z.; Pei, Y. Offloading Schemes in Mobile Edge Computing With an Assisted Mechanism. IEEE Access 2020, 8, 50721–50732.
- Yang, C.; Liu, Y.; Chen, X.; Zhong, W.; Xie, S. Efficient Mobility-Aware Task Offloading for Vehicular Edge Computing Networks. IEEE Access 2019, 7, 26652–26664.
- Xiao, Z.; Dai, X.; Jiang, H.; Wang, D.; Chen, H.; Yang, L.; Zeng, F. Vehicular Task Offloading via Heat-Aware MEC Cooperation Using Game-Theoretic Method. IEEE Internet Things J. 2020, 7, 2038–2052.
- Huang, X.; Yu, R.; Kang, J.; He, Y.; Zhang, Y. Exploring mobile edge computing for 5G-enabled software defined vehicular networks. IEEE Wirel. Commun. 2017, 24, 55–63.
- Qiao, G.; Leng, S.; Zhang, K.; He, Y. Collaborative Task Offloading in Vehicular Edge Multi-Access Networks. IEEE Commun. Mag. 2018, 56, 48–54.
- Han, D.; Chen, W.; Fang, Y. A Dynamic Pricing Strategy for Vehicle Assisted Mobile Edge Computing Systems. IEEE Wirel. Commun. Lett. 2018, 8, 420–423.
- Dai, F.; Liu, G.; Mo, Q.; Xu, W.; Huang, B. Task offloading for vehicular edge computing with edge-cloud cooperation. World Wide Web 2022, 25, 1999–2017.
- Liu, S.; Tian, J.; Zhai, C.; Li, T. Joint computation offloading and resource allocation in vehicular edge computing networks. Digit. Commun. Netw. 2023, 9, 1399–1410.
- Xu, X.; Liu, K.; Dai, P.; Jin, F.; Ren, H.; Zhan, C.; Guo, S. Joint task offloading and resource optimization in NOMA-based vehicular edge computing: A game-theoretic DRL approach. J. Syst. Archit. 2023, 134, 102780.
- Klar, M.; Glatt, M.; Aurich, J.C. Performance comparison of reinforcement learning and metaheuristics for factory layout planning. CIRP J. Manuf. Sci. Technol. 2023, 45, 10–25.
- Luoto, P.; Bennis, M.; Pirinen, P.; Samarakoon, S.; Horneman, K.; Latva-aho, M. Vehicle clustering for improving enhanced LTE-V2X network performance. In Proceedings of the 2017 European Conference on Networks and Communications (EuCNC), Oulu, Finland, 12–15 June 2017; pp. 1–5.
- Karedal, J.; Czink, N.; Paier, A.; Tufvesson, F.; Molisch, A.F. Path Loss Modeling for Vehicle-to-Vehicle Communications. IEEE Trans. Veh. Technol. 2011, 60, 323–328.
Source | Feature | Offloading Type | Method | Result |
---|---|---|---|---|
Li (2019) [5] | Parking lot-assisted VEC | Full Offloading | Contract–Stackelberg Offloading Incentive; Backward Induction | Maximizes the utility of vehicles, operators, and parking lot agents |
Yang (2019) [7] | MEC server-assisted mobility-aware task offloading | Full Offloading | Convex Optimization Algorithm | Reduces system cost while satisfying delay constraints |
Xiao (2020) [8] | Heat-aware MEC cooperation | Full Offloading | Deep Learning; Non-Cooperative Game-Theoretic Strategy | Reduces system delay and enhances energy efficiency |
Wang (2020) [6] | MEC network with secondary MEC servers | Full Offloading | Problem Decomposition; Heuristic Algorithm | Reduces system delay and improves system reliability |
Dai (2022) [12] | VEC with edge–cloud computing cooperation | Full Offloading | Deep Q-Network | Reduces average delay |
Liu (2023) [13] | Joint computation offloading and resource allocation | Full Offloading | Matching Theory-Based and Lagrangian-Based Algorithms | Improves system performance |
Xu (2023) [14] | Joint task offloading and resource allocation | Full Offloading | Multi-Agent Distributed Distributional Deep Deterministic Policy Gradient (MAD4PG) | Improves system performance |
This paper | Vehicle collaborative VEC | Partial Offloading | Double Deep Q-Network | Minimizes overall delay |
Notation | Definition | Notation | Definition |
---|---|---|---|
– | action space and optimal action set | – | offloading decision of task i; action set of vehicle i at time slot t |
– | percentage of power remaining in vehicle j; upper and lower thresholds of remaining power | – | bandwidths of V2I and V2V communication (Hz) |
– | computational resources required by task i (cycles) | – | set of vehicles (union of offloading vehicles and collaborative vehicles) |
– | size of task i (bit) | – | average distance between vehicle i and the RSU within the RSU's coverage (meter) |
– | instantaneous and average distances between vehicles i and j (meter) | – | maximum V2I and V2V communication distances (meter) |
– | computational resources of vehicle i and the MEC server (Hz) | G | transmission power of a vehicle (Watt) |
h | channel attenuation factor for uplinks | – | position of vehicle i (horizontal and vertical coordinates) |
– | channel gains from vehicle i to the RSU and from vehicle i to vehicle j | – | Gaussian white noise power (dB) |
– | proportions of task i executed locally, offloaded to the MEC server, and offloaded to collaborative vehicle j | – | average V2I and V2V data transfer rates (bit/s) |
– | upper and lower thresholds on the time two vehicles remain within communication range | – | maximum tolerable delay of task i (second) |
– | delays of local computation, V2I offloading, and V2V offloading (second) | – | task set of the offloading vehicle |
– | predictions of the main and target neural networks | – | discount factor, learning rate, and exploration rate of the ε-greedy policy |
Parameter | Value/Range | Parameter | Value/Range |
---|---|---|---|
– | 200 m | – | 15 m |
– | (1, 1000) Megacycles | – | MB |
– | 10 GHz | – | GHz |
– | 15 MHz | – | 10 MHz |
– | s | – | −100 dB |
– | 80% | – | 20% |
h | 1 | – | 0.9 |
– | 0.9 | – | 0.01 |
U | 3000 | K | 100 |