Article

Joint UAV Deployment and Task Offloading in Large-Scale UAV-Assisted MEC: A Multiobjective Evolutionary Algorithm

by Qijie Qiu 1, Lingjie Li 2,*, Zhijiao Xiao 1,*, Yuhong Feng 1, Qiuzhen Lin 1 and Zhong Ming 1,2,3,*
1 College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China
2 Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ), Shenzhen 518000, China
3 College of Big Data and Internet, Shenzhen Technology University, Shenzhen 518118, China
* Authors to whom correspondence should be addressed.
Mathematics 2024, 12(13), 1966; https://doi.org/10.3390/math12131966
Submission received: 22 May 2024 / Revised: 20 June 2024 / Accepted: 21 June 2024 / Published: 25 June 2024
(This article belongs to the Special Issue Advanced Computational Intelligence in Cloud/Edge Computing)

Abstract

With the development of digital economy technologies, mobile edge computing (MEC) has emerged as a promising computing paradigm that provides mobile devices with closer edge computing resources. Because of their high mobility, unmanned aerial vehicles (UAVs) have been extensively utilized to augment MEC and improve its scalability and adaptability. However, with more UAVs or mobile devices, the search space grows exponentially, leading to the curse of dimensionality. This paper focuses on the combined challenges of UAV deployment and the task offloading of mobile devices in large-scale UAV-assisted MEC. Specifically, the joint UAV deployment and task offloading problem is first modeled as a large-scale multiobjective optimization problem with the purpose of minimizing energy consumption while improving user satisfaction. Then, a large-scale UAV deployment and task offloading multiobjective optimization method based on the evolutionary algorithm, called LDOMO, is designed to address the above formulated problem. In LDOMO, a CSO-based evolutionary strategy and an MLP-based evolutionary strategy are proposed to explore solution spaces with different features for accelerating convergence and maintaining the diversity of the population, and two local search optimizers are designed to improve the quality of the solutions. Finally, simulation results show that our proposed LDOMO outperforms several representative multiobjective evolutionary algorithms.

1. Introduction

With advances in Internet of Things (IoT) technology, numerous digital economy applications with compute-sensitive and latency-sensitive characteristics, such as augmented reality (AR), virtual reality (VR), and autonomous driving, have emerged, significantly enhancing the user experience [1,2]. Traditional cloud computing, which relies on long-distance communication with centralized data centers, fails to meet the stringent requirements of these applications. Therefore, mobile edge computing (MEC) has emerged as an effective complementary paradigm [3]. By enabling mobile devices to offload tasks to edge base stations, MEC offers reduced latency and energy consumption, thereby improving task completion efficiency [4]. This capability positions MEC as a viable solution for executing compute-intensive and latency-critical tasks, thereby enhancing overall quality of service (QoS) for digital economy applications [5,6,7]. Nevertheless, the location of the edge base station in MEC is usually fixed and cannot be flexibly changed according to the dynamic needs of mobile devices, which limits the adaptability and responsiveness of MEC.
To address the above limitations, unmanned aerial vehicles (UAVs) have emerged as a promising means of augmenting MEC, in both military and civilian applications [8,9,10]. The dynamic integration of UAVs and MEC is particularly evident in scenarios such as natural disaster relief, field operations, and large-scale competition events, where UAVs act as communication relay stations or efficient edge computing platforms [11]. Unlike fixed base stations, UAV-assisted MEC leverages the high mobility and adaptability of UAVs to establish superior line-of-sight links, thereby reducing transmission distances, enhancing communication reliability, and conserving the battery life of mobile devices [12]. For example, in wildfire rescue operations, UAVs with MEC capabilities can collect and process real-time data, establish communication relays where terrestrial networks fail, and optimize resource allocation to enhance rescue coordination. Consequently, UAV-assisted MEC offers a transformative solution for improving quality of experience (QoE) across various dynamic and challenging environments.
In the realm of UAV-assisted MEC, optimizing the deployment of UAVs and the task offloading of mobile devices is essential for achieving optimal system performance. However, existing studies primarily focus on small-scale scenarios involving single- or multi-UAV-assisted MEC [13,14,15]. In order to obtain efficient optimization strategies, these studies predominantly rely on evolutionary algorithms to solve resource optimization problems in single- or multi-UAV-assisted MEC, such as the genetic algorithm (GA) [16,17], particle swarm optimization (PSO) [18,19], and ant colony optimization (ACO) [20,21]. However, when the number of UAVs or mobile devices increases, the search space expands exponentially as the dimensionality of the decision variables increases, which is known as the curse of dimensionality [22,23]. Therefore, for large-scale UAV-assisted MEC, existing methods encounter considerable difficulties in obtaining convergent optimization strategies.
To address the challenge of efficient UAV deployment and task offloading in a large-scale UAV-assisted MEC, this paper introduces a large-scale UAV deployment and task offloading multiobjective optimization method based on the evolutionary algorithm, namely LDOMO. Specifically, LDOMO first competitively divides the entire solution space into an elite solution space with well-performing individuals and a poor solution space with bad-performing individuals, which provides important labels and guidance for population evolution. Then, the competitive swarm optimizer (CSO)-based evolutionary strategy and the multilayer perceptron (MLP)-based evolutionary strategy are proposed in LDOMO to explore the two solution spaces separately. The former accelerates the convergence of the population by guiding the poor solutions closer to the elite solutions, while the latter guides the elite solutions into more promising areas through the evolutionary directions learned by MLP to maintain the diversity of the population. Finally, to address feasibility issues and elevate solution quality, LDOMO incorporates two local search optimizers: one for UAV deployment to prevent collisions by maintaining safe distances, and another for task offloading to ensure the accomplishment of all tasks.
The main contributions of this paper are clarified as follows.
  • This paper focuses on UAV deployment and task offloading optimization in a large-scale UAV-assisted MEC and formulates it as a large-scale multiobjective optimization problem, consisting of two optimization objectives (i.e., energy consumption and user satisfaction) with several constraints.
  • This paper proposes an efficient multiobjective optimization evolutionary algorithm, called LDOMO, to address the above-formulated problem. Specifically, LDOMO employs two evolutionary strategies to explore two solution spaces with different characteristics, thereby accelerating convergence and maintaining the diversity of the population, and designs two local search optimizers to improve the quality of the solutions.
  • Extensive simulation results show that LDOMO outperforms several representative multiobjective optimization algorithms in terms of energy consumption and user satisfaction.
The rest of the paper is organized as follows. Section 2 reviews the background of large-scale multiobjective optimization and related work. Section 3 introduces several mathematical models and the problem formulation. Section 4 presents the details of the proposed algorithm. Section 5 provides simulation results and discussions. Finally, Section 6 provides conclusions and future work.

2. Background and Related Work

2.1. Large-Scale Multiobjective Optimization

In the realm of large-scale multiobjective optimization (LMOP), the goal is to concurrently minimize or maximize multiple objective functions that often conflict with each other. Mathematically, an LMOP can be modeled as follows:
$\min_{x \in \Omega} F(x) = \left( f_1(x), f_2(x), \ldots, f_m(x) \right)^T, \quad \mathrm{s.t.} \;\; g_i(x) \le 0, \; i \in \Theta, \quad (1)$
where the objective vector $F(x)$ comprises m conflicting objective functions $f_i(x)$ and $x = (x_1, x_2, \ldots, x_n)$ symbolizes an n-dimensional decision vector within the search space $\Omega$ ($m \ge 2$ and $n \ge 1000$). In order to balance multiple conflicting optimization objectives, the goal of optimizing an LMOP is to obtain a set of Pareto-optimal solutions (PS) in its search space, and the mapping of the PS into the objective space is called the Pareto frontier (PF), which is described as follows.
If two solutions a and b exist in the search space $\Omega$, a dominates b ($a \prec b$) if $f_i(a) \le f_i(b)$ for all i and $f_j(a) < f_j(b)$ for at least one j. A solution $x^* \in \Omega$ is non-dominated when no other solution in $\Omega$ dominates it. The set of non-dominated solutions constitutes the PS, and the corresponding objective values form the PF:
$PS = \left\{ x \in \Omega \mid \nexists \, y \in \Omega, \; y \prec x \right\}, \qquad PF = \left\{ F(x^*) \mid x^* \in PS \right\}. \quad (2)$
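For concreteness, the dominance relation and the construction of the non-dominated set can be expressed in a few lines of Python. The sketch below is illustrative only (the function names are not from the paper) and assumes all objectives are to be minimized, as in (1).

```python
from typing import List, Sequence

def dominates(fa: Sequence[float], fb: Sequence[float]) -> bool:
    """Return True if objective vector fa dominates fb (all objectives minimized)."""
    no_worse = all(a <= b for a, b in zip(fa, fb))
    strictly_better = any(a < b for a, b in zip(fa, fb))
    return no_worse and strictly_better

def non_dominated(front: List[Sequence[float]]) -> List[Sequence[float]]:
    """Keep only the objective vectors that no other vector dominates (the PF)."""
    return [f for f in front
            if not any(dominates(g, f) for g in front if g is not f)]

# Example with two minimized objectives: (3.0, 3.0) is dominated by (2.0, 3.0).
objs = [(1.0, 4.0), (2.0, 3.0), (3.0, 3.0), (2.5, 2.0)]
print(non_dominated(objs))  # [(1.0, 4.0), (2.0, 3.0), (2.5, 2.0)]
```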

2.2. Related Work and Our Motivations

This section reviews the prior works most relevant to our work from two aspects: (1) task offloading in MEC and (2) UAV-assisted MEC networks, and clarifies the motivation of our work.
Task offloading in MEC: In recent years, task offloading in MEC has attracted great attention, as it can reduce the delay and energy consumption of computation-intensive tasks and thereby effectively improve system performance [24,25]. For example, Mei et al. [26] focused on the task offloading problem for MEC systems with multiple energy harvesting devices, aiming to increase throughput and extend the battery life of devices. They applied Lyapunov optimization theory to design a computational task offloading algorithm that accommodates system dynamics and ensures system stability. Ren et al. [27] studied the task offloading problem in the scenario of multiple edge service providers (ESPs), and proposed an ADMM-based internal load balancing algorithm and a dynamic two-stage non-cooperative game to reduce the delays of ESPs internally and externally, respectively. Deng et al. [28] investigated edge collaboration task offloading and splitting strategies in MEC networks, i.e., splitting tasks so that they are executed in parallel on multiple MEC servers, with the aim of minimizing the total cost of latency and energy consumption. They applied the alternating convex search algorithm to obtain optimized solutions with a low computational complexity for the task offloading and splitting problems. Liu et al. [29] developed a task offloading model consisting of local edge computing resources, edge server resources at both macro and attached base stations, as well as cloud computing server resources. They developed an approach based on the block coordinate descent technique, combining convex optimization and the gray wolf algorithm, in order to simultaneously reduce latency and energy consumption. Zeng et al. [30] discussed task offloading in a multi-layer vehicular edge computing framework and proposed a prediction-based vehicular task offloading scheme, in which a deep learning model is designed to predict the task offloading result and service delay, and then the prediction policy with successful task offloading and the minimum service delay is selected as the final offloading scheme.
UAV-assisted MEC networks: A number of research efforts have started to use UAVs as different types of wireless communication platforms, i.e., UAVs equipped with edge servers can be flexibly deployed in specific areas to provide reliable uplink and downlink communications to ground users [31,32]. Therefore, UAV deployment has become a key discussion issue in academia. In particular, a good UAV deployment can improve the quality and efficiency of task offloading for users and enhance the performance of the system [33,34]. Xu et al. [35] proposed a sky–air–ground integrated mobile edge computing system to provide high-quality computing services in areas with missing or damaged communication infrastructures and designed an algorithm based on particle swarm optimization and greedy strategies to obtain a near-optimal solution for UAV deployment and mission offloading in the system. Tian et al. [36] investigated the service satisfaction-oriented task offloading and UAV scheduling problem in UAV-assisted MEC networks, where the task priority was considered based on the latency requirement of tasks and the remaining energy status of users. To solve the proposed problem, they proposed a joint optimization algorithm for task offloading and UAV scheduling based on a genetic algorithm to maximize the total user satisfaction. To address the joint optimization of UAV deployment and offloading decisions in a three-layer UAV-assisted MEC network, Xia et al. [37] proposed a two-layer optimization algorithm to reduce the processing latency of the task, where a differential evolution learning algorithm was used in the upper layer to solve the UAV deployment problem, and a distributed deep neural network was used to generate the offloading decisions in the lower layer. Similarly, confronted with the joint optimization of UAV deployment and computational offloading problem in MEC, Chen et al. [38] designed a two-layer joint optimization approach to minimize the average task response time. First, the outer layer utilized a particle swarm optimization algorithm combined with a genetic algorithm operator to optimize UAV deployment. Second, the inner layer used a greedy algorithm to optimize task offloading.
However, none of the existing works seem to have addressed the specific challenges posed by large-scale UAV assistance in MEC, particularly in terms of the UAV deployment and task offloading problems. In reality, many scenarios involve large-scale deployments with more than 1000 UAV devices and users, such as city traffic management, large competition events, and natural disaster rescue. Nevertheless, existing algorithms encounter a significant challenge with the exponential growth of the search space as the number of devices and users increases, leading to the curse of dimensionality and inefficiency in the search. Motivated by this gap, our work proposes a large-scale UAV deployment and task offloading multiobjective optimization method based on the evolutionary algorithm, termed LDOMO. Firstly, LDOMO divides the solution space into a poor solution space and an elite solution space to reduce the search space and improve the search efficiency. Then, it designs a CSO-based evolutionary strategy and an MLP-based evolutionary strategy to optimize these two solution spaces, respectively. Finally, two local search optimizers are designed to prevent infeasible solutions and enhance the quality of solutions.

3. System Model and Problem Formulation

3.1. System Model and Assumption

As shown in Figure 1, there are I mobile devices, J UAVs, and an edge base station in the considered UAV-assisted MEC. Assume that the set of mobile devices is denoted as $MD = \{ md_1, md_2, \ldots, md_I \}$ and the set of UAVs is denoted as $UAV = \{ uav_1, uav_2, \ldots, uav_J \}$. Moreover, the ith mobile device in the set MD is expressed as $md_i = \langle D_i, C_i, \psi_i \rangle$, where $D_i$ denotes the task data size, $C_i$ denotes the CPU cycles per bit for task computation, and $\psi_i$ is the maximum tolerable delay for the task of $md_i$. The jth UAV in the set UAV is denoted as $uav_j = \langle F_j, O_j \rangle$, where $F_j$ and $O_j$ are the computational and storage resources of $uav_j$, respectively.
To accurately represent the positions of UAVs and mobile devices, a three-dimensional Cartesian coordinate system is employed to model the stereoscopic space of UAV-assisted MEC scenarios. In this system, the position of $md_i$ is denoted as $(x_i^{md}, y_i^{md}, 0)$ and the position of $uav_j$ is denoted as $(x_j^{uav}, y_j^{uav}, H)$. Note that all mobile devices are placed on the ground and all UAVs are deployed on a plane with height H. Assume that the locations of all UAVs are determined and deployed by the edge base station in the control center of the system. Please note that obstacles in UAV flight are not considered in this article. After that, mobile devices can select the nearest or a suitable UAV to offload and execute their tasks according to the task requirements. A binary matrix L of size $I \times J$ represents the task offloading decisions of all mobile devices, where the element $l_{ij}$ indicates whether the task of $md_i$ is offloaded to $uav_j$: if the task is offloaded to $uav_j$, $l_{ij} = 1$; otherwise, $l_{ij} = 0$. For ease of description, Table 1 summarizes the main notations in this article.
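As a concrete illustration of this notation, the following Python sketch (all class and field names are hypothetical) mirrors $md_i = \langle D_i, C_i, \psi_i \rangle$, $uav_j = \langle F_j, O_j \rangle$, and the binary offloading matrix L.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class MobileDevice:          # md_i = <D_i, C_i, psi_i>, located at (x, y, 0)
    x: float
    y: float
    data_size: float         # D_i, task data size (bits)
    cycles_per_bit: float    # C_i, CPU cycles required per bit
    max_delay: float         # psi_i, maximum tolerable delay (s)

@dataclass
class UAV:                   # uav_j = <F_j, O_j>, hovering at (x, y, H)
    x: float
    y: float
    compute_capacity: float  # F_j, CPU cycles per second
    storage_capacity: float  # O_j, storage (bits)

# Binary offloading matrix L of size I x J: L[i, j] = 1 iff md_i offloads to uav_j.
I, J = 4, 2
L = np.zeros((I, J), dtype=int)
L[0, 1] = L[1, 1] = L[2, 0] = L[3, 0] = 1
assert (L.sum(axis=1) == 1).all()    # each device's task goes to exactly one UAV
```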

3.2. Task Offloading Model

Task offloading involves two main processes, i.e., the communication process and the computation process. Specifically, in the communication process, the mobile device first communicates with the selected UAV and transmits its task to the UAV. Then, in the computation process, the task of the mobile device is executed on the UAV. Finally, the result of the execution is returned to the mobile device. In this paper, the result return process is ignored because the result data are small compared with the task data. The communication process and the computation process are discussed in detail subsequently.
Communication process: Assume that mobile devices communicate with UAVs through wireless links. In this case, the path-loss for the channel between the UAV and mobile device is given by,
$Loss(md_i, uav_j) = \left( \frac{4 \pi f}{c} \right)^2 \times d(md_i, uav_j)^{\alpha}, \quad (3)$
where f represents the central frequency, c is the speed of light, α represents the path-loss exponent, and d ( m d i , u a v j ) means the distance between m d i and u a v j , which is given as follows,
$d(md_i, uav_j) = \sqrt{ (x_i^{md} - x_j^{uav})^2 + (y_i^{md} - y_j^{uav})^2 + H^2 }. \quad (4)$
The signal-to-interference plus noise ratio (SINR) [39] of the communication link between m d i and u a v j is calculated as,
$SINR(md_i, uav_j) = \frac{P_t \times g^2}{N_0 \times Loss(md_i, uav_j)}, \quad (5)$
where P t represents the transmit power from the UAV to the mobile device, g is the channel fading coefficient, and N 0 is the noise power. The wireless transmission rate between m d i and u a v j is calculated based on the Shannon’s formula as follows,
$R(md_i, uav_j) = B_{ij} \times \log_2 \left( 1 + SINR(md_i, uav_j) \right), \quad (6)$
where B i j is the channel bandwidth between m d i and u a v j .
When m d i chooses to offload its task to u a v j , i.e., l i j = 1 , then the communication time between them depends on the task size and transmission rate, which is calculated as follows,
$T(md_i, uav_j) = \frac{l_{ij} D_i}{R(md_i, uav_j)}. \quad (7)$
Moreover, the corresponding energy consumption during communication can be given as,
$E(md_i, uav_j) = P_t \times \frac{l_{ij} D_i}{R(md_i, uav_j)}. \quad (8)$
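The communication model in (3)–(8) translates directly into code. The following Python sketch is illustrative; the parameter values and helper names are assumptions for demonstration and are not taken from the paper or from Table 2.

```python
import math
from collections import namedtuple

# Minimal stand-ins for md_i and uav_j; all numeric values are assumptions.
MD = namedtuple("MD", "x y data_size")       # D_i given in bits
UAVNode = namedtuple("UAVNode", "x y")

F_CARRIER, C_LIGHT, ALPHA = 2.4e9, 3e8, 2.0  # f (Hz), c (m/s), path-loss exponent
P_TX, GAIN, NOISE = 0.1, 1.0, 1e-13          # P_t (W), g, N_0 (W)
BANDWIDTH, HEIGHT = 1e6, 100.0               # B_ij (Hz), H (m)

def distance(md, uav):                                     # Eq. (4)
    return math.sqrt((md.x - uav.x) ** 2 + (md.y - uav.y) ** 2 + HEIGHT ** 2)

def path_loss(md, uav):                                    # Eq. (3)
    return (4 * math.pi * F_CARRIER / C_LIGHT) ** 2 * distance(md, uav) ** ALPHA

def sinr(md, uav):                                         # Eq. (5)
    return P_TX * GAIN ** 2 / (NOISE * path_loss(md, uav))

def rate(md, uav):                                         # Eq. (6)
    return BANDWIDTH * math.log2(1 + sinr(md, uav))

def comm_time(md, uav):                                    # Eq. (7), with l_ij = 1
    return md.data_size / rate(md, uav)

def comm_energy(md, uav):                                  # Eq. (8)
    return P_TX * comm_time(md, uav)

md, uav = MD(x=10.0, y=20.0, data_size=1e6), UAVNode(x=50.0, y=60.0)
print(f"{comm_time(md, uav):.4f} s to upload 1 Mbit")
```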
Computation process: After the mobile device has completed the task transmission, the task is executed on the selected UAV. In this paper, the computing power allocated to $md_i$ by $uav_j$ depends on the maximum allowable delay of the task, with the aim of equitably distributing the computational resources of the UAV. Therefore, the computing power is calculated as follows,
$f(md_i, uav_j) = \frac{\psi_i^{-1}}{\sum_{md_k \in MD(uav_j)} \psi_k^{-1}} \times F_j, \quad (9)$
where M D ( u a v j ) denotes the set of all mobile devices that offloaded tasks to u a v j . According to (9), the more urgent the task of the mobile device, the more computing power is allocated to it.
Moreover, the computation time of task for m d i on u a v j is given as,
$\hat{T}(md_i, uav_j) = \frac{l_{ij} C_i D_i}{f(md_i, uav_j)}. \quad (10)$
Following [40], the corresponding energy consumption during computation is modeled as a function related to the computing power, which is given below,
$\hat{E}(md_i, uav_j) = \gamma \, l_{ij} C_i D_i \, f(md_i, uav_j)^2, \quad (11)$
where γ is the computing energy efficiency coefficient.
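The delay-weighted resource allocation in (9) and the resulting computation time and energy in (10) and (11) can be sketched as follows; the class name, helper names, and the value of γ are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Task:                     # stand-in for md_i = <D_i, C_i, psi_i>
    data_size: float            # D_i (bits)
    cycles_per_bit: float       # C_i
    max_delay: float            # psi_i (s)

def allocated_power(task, co_tasks, F_j):
    """Eq. (9): F_j is shared among all tasks offloaded to uav_j, weighted by
    the inverse of each task's maximum tolerable delay (more urgent -> more power)."""
    return (1.0 / task.max_delay) / sum(1.0 / t.max_delay for t in co_tasks) * F_j

def comp_time(task, f_alloc):
    """Eq. (10): execution time of the task on the UAV (with l_ij = 1)."""
    return task.cycles_per_bit * task.data_size / f_alloc

def comp_energy(task, f_alloc, gamma=1e-27):
    """Eq. (11): computation energy; gamma is an assumed efficiency coefficient."""
    return gamma * task.cycles_per_bit * task.data_size * f_alloc ** 2

# Two tasks on one UAV with F_j = 10 GHz: the tighter deadline gets 2/3 of F_j.
tasks = [Task(1e6, 1000, 0.5), Task(2e6, 1000, 1.0)]
f0 = allocated_power(tasks[0], tasks, 1e10)
print(f0, comp_time(tasks[0], f0), comp_energy(tasks[0], f0))
```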

3.3. UAV Hover Model

When UAVs provide computing services for mobile devices, they need to hover at a fixed location to ensure uninterrupted communication and quality of service. In this process, the hovering time of UAV depends on the maximum time of receiving and executing tasks, which is calculated as follows,
$\bar{T}(uav_j) = \max_{md_k \in MD(uav_j)} \left\{ T(md_k, uav_j) + \hat{T}(md_k, uav_j) \right\}. \quad (12)$
Moreover, the hovering energy consumption of UAV is given as follows,
$\bar{E}(uav_j) = P_h \times \bar{T}(uav_j), \quad (13)$
where P h is the hovering power of the UAV.
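A minimal sketch of (12) and (13), assuming an illustrative hovering power P_h and precomputed per-task communication and computation times:

```python
def hover_energy(comm_times, comp_times, P_h=200.0):
    """Eqs. (12)-(13): the UAV hovers for the longest receive-plus-execute time
    among its served tasks; P_h (W) is an assumed hovering power."""
    t_hover = max(tc + tx for tc, tx in zip(comm_times, comp_times))
    return P_h * t_hover

print(hover_energy([0.08, 0.05], [0.15, 0.40]))  # 200 W * 0.45 s = 90 J
```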

3.4. Problem Formulation

Based on the above models, the task completion time for each mobile device is as follows,
$T(md_i) = \sum_{j=1}^{J} \left[ T(md_i, uav_j) + \hat{T}(md_i, uav_j) \right] = \sum_{j=1}^{J} \left[ \frac{l_{ij} D_i}{R(md_i, uav_j)} + \frac{l_{ij} C_i D_i}{f(md_i, uav_j)} \right]. \quad (14)$
Moreover, user satisfaction depends on whether the computational task is completed within the maximum allowable delay. Specifically, the ith user is satisfied if $T(md_i) \le \psi_i$, denoted as $\theta_i = 1$, and $\theta_i = 0$ otherwise. In this paper, the task completion rate over all users represents the user satisfaction of the whole system, which is calculated as,
$S = \frac{1}{I} \sum_{i=1}^{I} \theta_i. \quad (15)$
The total energy consumption of the system is denoted as,
$E = \sum_{i=1}^{I} \sum_{j=1}^{J} \left[ E(md_i, uav_j) + \hat{E}(md_i, uav_j) \right] + \sum_{j=1}^{J} \bar{E}(uav_j). \quad (16)$
In this paper, the joint optimization problem of UAV deployment and task offloading is modeled as a large-scale multiobjective optimization problem with the purpose of reducing energy consumption while improving user satisfaction. Mathematically, the formulated problem is defined as follows,
$\max_{x^{uav}, y^{uav}, L} S, \qquad \min_{x^{uav}, y^{uav}, L} E, \quad (17)$
$\mathrm{s.t.} \quad \sum_{j \in J} l_{ij} = 1, \quad \forall i \in I, \quad (18)$
$\sum_{i \in I} \left( l_{ij} f_{ij} \right) \le F_j, \quad \forall j \in J, \quad (19)$
$\sum_{i \in I} \left( l_{ij} d_{ij} \right) \le O_j, \quad \forall j \in J, \quad (20)$
$f_{ij} > 0, \quad \forall i \in I, \; \forall j \in J, \quad (21)$
$l_{ij}, \theta_i \in \{0, 1\}, \quad \forall i \in I, \; \forall j \in J. \quad (22)$
In the formulated model, the primary goal is to maximize the user satisfaction of mobile devices while minimizing the energy consumption of the system in (17), with three decision variables and several constraints. Specifically, the real decision variables $x^{uav}$ and $y^{uav}$ denote the horizontal positions of the UAVs. The binary decision variable L represents the task offloading scheme of the users. Constraint (18) guarantees that the task of each mobile device can be offloaded to only one UAV. Moreover, constraints (19) and (20) ensure that the computational and storage resources of each UAV cannot be exceeded, respectively. Constraints (21) and (22) define the ranges of values of the remaining variables.
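To illustrate how a candidate offloading matrix is scored against the two objectives in (17), the following Python sketch evaluates (14)–(16); it assumes that the per-pair time and energy matrices have already been computed with the models above and that the hovering energy vector is given, and all names and numbers are illustrative.

```python
import numpy as np

def evaluate(L, T_comm, T_comp, E_comm, E_comp, E_hover, max_delay):
    """Evaluate the objective pair (S, E) of (17) for a binary offloading matrix L.

    T_comm, T_comp, E_comm, E_comp are I x J matrices from Eqs. (7), (10), (8),
    and (11); E_hover is a length-J vector of Eq. (13) computed separately;
    max_delay holds the per-device deadlines psi_i.
    """
    T_total = ((T_comm + T_comp) * L).sum(axis=1)        # Eq. (14)
    satisfied = T_total <= max_delay                      # theta_i
    S = satisfied.mean()                                  # Eq. (15)
    E = ((E_comm + E_comp) * L).sum() + E_hover.sum()     # Eq. (16)
    return S, E

# Toy example with 3 devices and 2 UAVs, filled with random per-pair values.
rng = np.random.default_rng(0)
L = np.array([[1, 0], [0, 1], [0, 1]])
S, E = evaluate(L,
                T_comm=rng.uniform(0.1, 0.3, (3, 2)),
                T_comp=rng.uniform(0.2, 0.6, (3, 2)),
                E_comm=rng.uniform(0.1, 0.5, (3, 2)),
                E_comp=rng.uniform(1.0, 5.0, (3, 2)),
                E_hover=np.array([50.0, 50.0]),
                max_delay=np.array([0.5, 1.0, 0.6]))
print(S, E)
```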

4. Optimization Method

In this section, we propose a multiobjective optimization evolutionary algorithm to solve the above formulated large-scale multiobjective optimization problem in (17), called LDOMO. First, the framework of LDOMO is presented, and then the details of LDOMO are introduced subsequently.

4.1. The Overall Framework

This section presents the general framework of LDOMO proposed in this paper. The flowchart and pseudocode of LDOMO are shown in Figure 2 and Algorithm 1, respectively. As shown in Algorithm 1, LDOMO starts with two input data: a list of UAVs and a list of mobile devices. Then, the initialization process is executed in lines 1 and 2, where a population P o p and a MLP are initialized, as detailed in Section 4.2. Subsequently, LDOMO enters the main loop in lines 3 to 7, which consists of three core processes (i.e., competition phase, evolution phase, and optimization phase). In the competition phase (line 4), the entire population is split into two equal-sized sets, including an elite solution set X e with a better performance and a poor solution set X p with a worse performance. This competitive operation provides important labels and guidelines for the subsequent evolution phase, which is described in Section 4.3. In the evolution phase (line 5), two different evolutionary strategies are employed in LDOMO to evolve X p and X e , respectively, i.e., CSO-based evolutionary strategy and MLP-based evolutionary strategy. Specifically, in the CSO-based evolutionary strategy, the poor solutions in X p learn directly from the elite solutions in X e to accelerate population convergence. Moreover, in the MLP-based evolutionary strategy, MLP is trained to learn the gradient direction of the evolutionary process by adopting the poor and elite solutions. After that, the elite solutions in X e are evolved through the trained MLP to maintain the population diversity and enhance the population quality. Note that details of the two evolutionary strategies are presented in Section 4.4. To further improve the quality of the solution in P o p , the UAV deployment and task offloading for each solution are optimized in line 6 via two local search optimizers, i.e., a deployment optimizer and an offloading optimizer. In the deployment optimizer, redundant UAVs in each solution are eliminated based on the distance between UAVs to reduce energy consumption. Moreover, in the offloading optimizer, the failed tasks in each solution are assigned feasible UAVs as much as possible to improve user satisfaction. Note that the details of local search optimizers are presented in Section 4.5. Finally, the entire evolutionary loop is terminated when the maximum number of generations t m a x is reached, and the final population obtained by LDOMO is output.
Algorithm 1 LDOMO
Input: UAV list, MD list.
Output: The final population Pop.
1: initialize a population Pop;
2: initialize an MLP;
3: for t = 1 to t_max do
4:     X_e, X_p ← Competition Phase(Pop); \\ Algorithm 2
5:     Pop ← Evolution Phase(MLP, X_e, X_p); \\ Algorithm 3
6:     Optimization Phase(Pop); \\ Algorithm 4
7: end for

4.2. Population Initialization

Regarding the population representation and initialization, an equal-length integer encoding is employed to represent each solution in the population. Specifically, the locations of all UAVs and the decisions of all mobile devices are encoded in one solution. For example, in an MEC environment of 10 × 10 m² with two UAVs and three mobile devices, a solution is encoded as 2415221. The first four bits of the solution (i.e., 2415) indicate that the locations of the first and second UAVs are (2 m, 4 m) and (1 m, 5 m), respectively. Moreover, the last three bits (i.e., 221) indicate that the first two mobile devices offload their tasks to the second UAV, and the last one offloads to the first UAV. For the population initialization, N solutions are randomly generated to form an initial population Pop, which is evolved and optimized in the subsequent phases.
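A small Python sketch of this decoding step (the helper name is hypothetical), applied to the example solution 2415221 from the text:

```python
def decode(solution, num_uavs):
    """Split an integer-encoded solution into UAV positions and offloading decisions.

    The first 2 * num_uavs digits are the (x, y) coordinates of the UAVs; each
    remaining digit is the 1-based index of the UAV chosen by a mobile device.
    """
    positions = [(solution[2 * j], solution[2 * j + 1]) for j in range(num_uavs)]
    offloading = solution[2 * num_uavs:]
    return positions, offloading

# Example from the text: 10 x 10 m^2 area, two UAVs, three mobile devices.
print(decode([2, 4, 1, 5, 2, 2, 1], num_uavs=2))
# -> ([(2, 4), (1, 5)], [2, 2, 1]): UAVs at (2 m, 4 m) and (1 m, 5 m);
#    devices 1-2 offload to the second UAV, device 3 to the first.
```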

4.3. Competition Phase

In order to clearly distinguish the quality of solutions, a competitive strategy in LDOMO is used to divide the population into two sets by employing fast non-dominated sorting [41] and crowding distance [42], including a poor solution set with a worse performance and an elite solution set with a better performance. In the fast non-dominated sorting, if solution $x^*$ is not dominated by any other solution, it is called a non-dominated solution and its ranking level is 0 ($Rank(x^*) = 0$). After removing the solutions with a ranking level of 0, if solution x is not dominated by any of the remaining solutions, the ranking level of x is 1 ($Rank(x) = 1$). By analogy, the ranking level of every solution can be calculated. Please note that the lower the ranking level of a solution, the better its quality. However, when solutions share the same ranking level, their relative quality cannot be determined in this way because each solution has multiple optimization objectives. In this case, the crowding distance is used to further sort these solutions, which is calculated as follows,
$CD(x_i) = \infty, \;\; i = 1 \text{ or } n; \qquad CD(x_i) = \sum_{m \in M} \frac{\left| f_m(x_{i+1}) - f_m(x_{i-1}) \right|}{f_m^{max} - f_m^{min}}, \;\; i = 2, \ldots, n-1, \quad (23)$
where n represents the number of solutions and $f_m^{max}$ and $f_m^{min}$ are the maximum and minimum values of the mth objective function. In summary, all of the solutions in the population can be ranked from good to bad according to the following condition, i.e., $x_i$ is ranked before $x_j$ if
$Rank(x_i) < Rank(x_j), \quad \text{or} \quad Rank(x_i) = Rank(x_j) \; \text{and} \; CD(x_i) > CD(x_j). \quad (24)$
After the solution is sorted, the first N / 2 solutions are labeled as elite and added to the set X e , while the remaining solutions are added to the set X p . Through this competitive strategy, LDOMO can efficiently rank solutions with multiple optimization objectives and provide critical labels and guidance for subsequent population evolution.
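The following Python sketch outlines this competition phase under two simplifying assumptions: the non-domination ranks are taken as precomputed by a standard fast non-dominated sort, and the crowding distance of (23) is computed over the whole set rather than per ranking level. All names are illustrative.

```python
import numpy as np

def crowding_distance(F):
    """Eq. (23) over an n x m objective matrix F (simplified: one global front)."""
    n, m = F.shape
    cd = np.zeros(n)
    for k in range(m):
        order = np.argsort(F[:, k])
        cd[order[0]] = cd[order[-1]] = np.inf                  # boundary solutions
        span = F[order[-1], k] - F[order[0], k] or 1.0
        cd[order[1:-1]] += (F[order[2:], k] - F[order[:-2], k]) / span
    return cd

def competition_phase(F, ranks):
    """Sort by (rank ascending, crowding distance descending), as in (24),
    and split the population into elite and poor halves (returned as indices)."""
    cd = crowding_distance(F)
    order = sorted(range(len(F)), key=lambda i: (ranks[i], -cd[i]))
    half = len(F) // 2
    return order[:half], order[half:]

F = np.array([[1.0, 4.0], [2.0, 3.0], [3.0, 3.0], [2.5, 2.0]])
elite_idx, poor_idx = competition_phase(F, ranks=[0, 0, 1, 0])
print(elite_idx, poor_idx)   # e.g. [0, 3] [1, 2]
```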
Algorithm 2 presents the pseudocode of the competition phase with the current population P o p as the input. In particular, two empty sets, X e and X p , are initialized in lines 1 and 2 to store the elite and poor solutions, respectively. Subsequently, the fitness of all solutions in P o p is calculated based on Equations (15) and (16) in line 3, which serves as a metric for solution competition and is stored in the set F i t . Then, the solutions in P o p compete with each other and are sorted in lines 4–6. Specifically, in line 4, a fast non-dominated sort is performed according to F i t and the ranking levels of all solutions are stored in set R a n k . In line 5, the crowding distances of solutions with the same ranking level are calculated and added to set C D . In line 6, the solutions in P o p are sorted from good to bad according to Equation (24), with R a n k and F i t as inputs. After that, the sorted solutions are labeled in lines 7–13. The first N / 2 solutions are labeled as elite solutions and added to set X e in line 9. The remaining solutions are labeled as poor solutions and added to X p in line 11. Finally, the set of elite solutions X e and the set of poor solutions X p are returned in line 14.
Algorithm 2 Competition Phase
Input: Pop.
Output: X_e, X_p.
1: initialize an empty set X_e to store the elite solutions;
2: initialize an empty set X_p to store the poor solutions;
3: Fit ← calculate the fitness of solutions in Pop based on (15) and (16);
4: Rank ← non-dominated sorting of solutions in Pop according to Fit;
5: CD ← calculate the crowding distance of solutions based on Rank and Fit;
6: Pop_sort ← sort the solutions in Pop according to (24);
7: for x ∈ Pop_sort do
8:     if x ranks in the first N/2 then
9:         X_e ← add x as an elite solution;
10:    else
11:        X_p ← add x as a poor solution;
12:    end if
13: end for
14: return X_e, X_p.

4.4. Evolution Phase

In the population evolution phase, LDOMO needs to optimize the UAV deployment and task offloading decisions encoded in each solution. In fact, the real-life UAV deployment and task offloading problem is a large-scale optimization problem, since the dimension of its search space is larger than 1000. However, the efficiency of traditional evolutionary algorithms decreases rapidly on large-scale optimization problems due to the curse of dimensionality. Inspired by [43,44], two different evolutionary strategies are employed in LDOMO to explore the two solution spaces ($X_p$ and $X_e$), i.e., the CSO-based evolutionary strategy and the MLP-based evolutionary strategy. Specifically, the former strategy makes the poor solutions in $X_p$ quickly approach the elite solutions in $X_e$ so as to accelerate population convergence. Meanwhile, the latter strategy exploits the gradient direction of population evolution learned by the MLP to guide the elite individuals in $X_e$ to more promising regions, which maintains the diversity of the population and further enhances its quality. By employing different search strategies for solution spaces with different features, LDOMO can overcome the challenges posed by the curse of dimensionality in large-scale optimization. Note that the operational procedures and specific details of the population evolution phase are given subsequently.
In the CSO-based evolutionary strategy, the poor solutions in X p observe and learn the behavior of the elite solutions in X e . This learning process facilitates knowledge transfer from the elite to the poor solutions, which promotes the convergence of the population. Assuming that the velocity and position of the poor solution are v p and x p , respectively, their update equations are as follows,
$v_p(t+1) = r_1 \times v_p(t) + r_2 \times \left( x_e(t) - x_p(t) \right) + \varphi \times r_3 \times \left( \bar{x}(t) - x_p(t) \right), \quad (25)$
$x_p(t+1) = x_p(t) + v_p(t+1), \quad (26)$
where $r_1$, $r_2$, and $r_3$ are three random numbers ranging from 0 to 1, t represents the t-th iteration, $x_e(t)$ is the position of an elite solution, $\bar{x}(t)$ is the mean position of all solutions in the t-th iteration, and $\varphi$ is a control parameter of $\bar{x}(t)$. On the other hand, it is difficult for elite solutions to learn implicit knowledge from themselves for further evolution, because they are already in the vicinity of a locally optimal solution. Therefore, a multilayer perceptron (MLP) is used in LDOMO to explore more promising regions and guide the evolution of the elite solutions. In the MLP-based evolutionary strategy, a three-layer MLP is trained by taking $X_p$ as the input and labeling $X_e$ as the target output. Through this supervised learning, the MLP can learn a possible fast convergence direction for the input solutions. Subsequently, using the trained MLP, the velocities and positions of the elite solutions in $X_e$ are updated in the following way.
$v_e(t+1) = v_e(t) + r_1 \left( x_{MLP}(t) - x_e(t) \right) + r_2 \left( x_{best}(t) - x_e(t) \right) + r_3 \left( x_{rd1}(t) - x_{rd2}(t) \right), \quad (27)$
$x_e(t+1) = x_e(t) + v_e(t+1), \quad (28)$
where $x_{MLP}$ is the solution output by inputting $x_e$ to the trained MLP, $x_{best}$ is the historically best solution of the entire population, and $x_{rd1}$ and $x_{rd2}$ are two randomly selected solutions from the current population. Obviously, the evolution of the elite solution $x_e$ defined in (27) consists of three search components: the first is a self-acceleration component provided by the knowledge of the MLP (i.e., $x_{MLP}(t) - x_e(t)$), the second is a global acceleration component guided by the global best solution (i.e., $x_{best}(t) - x_e(t)$), and the last is a differential-regulation component guided by other solutions (i.e., $x_{rd1}(t) - x_{rd2}(t)$). In this way, the convergence of population evolution can be accelerated by following the fast convergence direction provided by the MLP, while the correctness of the convergence direction is guaranteed by the guidance of the global best solution. In addition, the differential regulation prevents the evolutionary process from falling into a local optimum to some degree.
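A minimal NumPy sketch of the two update rules (25)–(28); the MLP prediction $x_{MLP}$ is assumed to be supplied by a separately trained regressor, and since LDOMO encodes solutions as integers, the real-valued results would in practice be rounded and clipped to the feasible ranges (an assumption of this sketch, not a step stated in the paper).

```python
import numpy as np

rng = np.random.default_rng(1)

def cso_update(x_p, v_p, x_e, x_mean, phi=0.1):
    """Eqs. (25)-(26): a poor solution learns from a randomly paired elite one."""
    r1, r2, r3 = rng.random(3)
    v_new = r1 * v_p + r2 * (x_e - x_p) + phi * r3 * (x_mean - x_p)
    return x_p + v_new, v_new

def mlp_guided_update(x_e, v_e, x_mlp, x_best, x_rd1, x_rd2):
    """Eqs. (27)-(28): an elite solution follows the MLP-predicted direction,
    the global best solution, and a differential term from two random solutions."""
    r1, r2, r3 = rng.random(3)
    v_new = v_e + r1 * (x_mlp - x_e) + r2 * (x_best - x_e) + r3 * (x_rd1 - x_rd2)
    return x_e + v_new, v_new

# Toy usage on 5-dimensional real-valued vectors.
d = 5
x_p, v_p = rng.random(d), np.zeros(d)
x_e, x_mean = rng.random(d), rng.random(d)
x_p_new, v_p_new = cso_update(x_p, v_p, x_e, x_mean)
print(x_p_new)
```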
Algorithm 3 presents the pseudocode of population evolution. In particular, an empty set Q s p is firstly initialized in line 1 to store the offspring. Subsequently, two critical evolutionary strategies of LDOMO are implemented in lines 2–17. Specifically, the main loop of the CSO-based evolutionary strategy is accessed in line 2. Then, in line 3, an elite solution is randomly selected from X e for guiding the current poor solution evolution. The mean position of all solutions in P o p is calculated and noted as x ¯ in line 4. In lines 5 and 6, the velocity and position of x p are sequentially updated according to (25) and (26). The updated x p is added to Q s p in line 7. After all of the solutions in X p have been updated, the MLP-based evolutionary strategy is executed to update the elite solutions in X e in lines 9–17. The MLP is firstly trained by inputting X e and X p in line 9. Then, solutions x M L P , x b e s t , x r d 1 , and x r d 2 are generated sequentially for assisted evolution in lines 11–13. After that, the velocity and position of x e are sequentially updated according to (27) and (28) in lines 14 and 15. In line 16, the updated x e is added to Q s p . Finally, Q s p is returned as the next generation population in line 18.

4.5. Optimization Phase

In the process of population evolution, the UAV deployments in each solution may become aggregated and unevenly distributed due to the stochastic nature of evolutionary algorithms. However, overcrowded UAVs lead to wasted computational resources or even economic losses due to crashes. Therefore, a local search optimizer is designed to optimize the deployment of UAVs by eliminating redundant UAVs that are too close together. In this way, the safety of the UAVs is ensured and energy consumption can be reduced by reducing the number of UAVs. To further improve the quality of solutions, another local search optimizer is designed in LDOMO to optimize the task offloading of each solution and thus enhance user satisfaction. Specifically, for each failed task that cannot be completed before the maximum allowable delay $T_{max}$, the first feasible UAV in the UAV list is selected as the new offloading choice for the failed task based on the first-come-first-served principle. Note that this process ensures that the originally successful tasks are still completed normally, and it stops as soon as a suitable UAV is found. In this way, LDOMO is able to maximize user satisfaction, thus improving the reliability of the system and enhancing the user experience.
Algorithm 3 Evolution Phase
Input: MLP, X_e, X_p.
Output: Q_sp.
1: initialize the offspring set Q_sp;
/* Access the CSO-based evolutionary strategy. */
2: for each x_p ∈ X_p do
3:     x_e ← get an elite solution from X_e;
4:     x̄ ← get the mean position of all solutions;
5:     update the velocity of x_p according to (25);
6:     update the position of x_p according to (26);
7:     Q_sp ← add the updated x_p;
8: end for
/* Access the MLP-based evolutionary strategy. */
9: MLP ← train the MLP by inputting X_e and X_p;
10: for each x_e ∈ X_e do
11:     x_MLP ← get a new solution by inputting x_e to the MLP;
12:     x_best ← get the global best solution from Pop;
13:     x_rd1, x_rd2 ← select two different random solutions;
14:     update the velocity of x_e according to (27);
15:     update the position of x_e according to (28);
16:     Q_sp ← add the updated x_e;
17: end for
18: return Q_sp as the next generation population.
Algorithm 4 provides the pseudocode of the population optimization phase, which consists of two main loops for the key optimizers—the deployment optimizer and the offloading optimizer. First, the deployment optimizer is executed in lines 1–11 to optimize the UAV deployment scheme of each solution. In particular, the UAV deployment scheme of each solution x in Pop is decoded and stored in the set UAV in line 2. Then, in lines 3 to 9, each UAV uav in UAV is checked to determine whether it needs to be eliminated. Concretely, the Euclidean distances between uav and the other UAVs are calculated in line 4. In line 5, the minimum distance between uav and the other UAVs is compared with the minimum allowable distance d_min = 40 m. If it is smaller, uav is eliminated from UAV in line 6 and the tasks on uav are migrated to the nearest UAV in line 7. After all of the UAVs have been checked, the UAV deployments in x are updated in line 10. Subsequently, the offloading optimizer is executed in lines 12–25 to optimize the task offloading scheme of each solution. Specifically, the task offloading schemes of all of the mobile devices in each solution x are first decoded and stored in the set Task in line 13. Then, the completion time of each task in Task is calculated in line 15. Moreover, if the completion time of a task is greater than its maximum allowable delay T_max, LDOMO tries to find a feasible UAV for this failed task in line 17. If a feasible UAV is successfully found, the offloading decision of the task is updated in x; otherwise, the task is marked as a failed task. Finally, all task offloading decisions of each solution in the population Pop are optimized and the optimized Pop is returned.
Algorithm 4 Optimization Phase
Input: Pop.
Output: The optimized Pop.
/* Access the deployment optimizer. */
1: for each x ∈ Pop do
2:     UAV ← decode the locations of the UAVs from x;
3:     for each uav ∈ UAV do
4:         calculate the distances d_uav between uav and the other UAVs;
5:         if min(d_uav) < d_min then
6:             eliminate uav from UAV;
7:             migrate the tasks on uav to the nearest UAV;
8:         end if
9:     end for
10:    update the UAVs and tasks in x;
11: end for
/* Access the offloading optimizer. */
12: for each x ∈ Pop do
13:     Task ← decode the offloading decisions of the tasks from x;
14:     for each task ∈ Task do
15:         calculate the completion time t_task of task;
16:         if t_task > T_max then
17:             try to find a feasible decision for task;
18:             if successful then
19:                 update task in x;
20:             else
21:                 mark task as a failed task;
22:             end if
23:         end if
24:     end for
25: end for
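As a complement to Algorithm 4, the following simplified Python sketch illustrates the two local search optimizers. The deployment optimizer here is a greedy variant that keeps a UAV only if it is at least d_min away from every previously kept UAV, which approximates the elimination rule described above; `completion_time` is an assumed helper built from (7) and (10), and all names are illustrative.

```python
import numpy as np

def deployment_optimizer(uav_xy, assignment, d_min=40.0):
    """Greedily drop UAVs closer than d_min to an already kept UAV and migrate
    their tasks to the nearest remaining UAV.

    uav_xy: (J, 2) array of UAV positions; assignment: list of length I giving
    the UAV index serving each mobile device.
    """
    kept = []
    for j, pos in enumerate(uav_xy):
        if all(np.linalg.norm(pos - uav_xy[k]) >= d_min for k in kept):
            kept.append(j)
    for i, j in enumerate(assignment):
        if j not in kept:                              # task was on an eliminated UAV
            dists = [np.linalg.norm(uav_xy[j] - uav_xy[k]) for k in kept]
            assignment[i] = kept[int(np.argmin(dists))]
    return kept, assignment

def offloading_optimizer(assignment, completion_time, max_delay, num_uavs):
    """Reassign each failed task to the first UAV that meets its deadline.

    completion_time(i, j) is an assumed helper estimating the completion time of
    device i's task on UAV j; max_delay holds the per-task deadlines.
    """
    for i, j in enumerate(assignment):
        if completion_time(i, j) > max_delay[i]:       # failed task
            for cand in range(num_uavs):               # first-come-first-served scan
                if completion_time(i, cand) <= max_delay[i]:
                    assignment[i] = cand
                    break
    return assignment

uav_xy = np.array([[0.0, 0.0], [10.0, 0.0], [100.0, 0.0]])
print(deployment_optimizer(uav_xy, [0, 1, 2]))   # ([0, 2], [0, 0, 2])
```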

5. Simulation Results

5.1. Environmental Setup

In this section, simulation experiments are performed to evaluate the performance of LDOMO. They are implemented in Python 3.11 and run on a PC with an Intel(R) Core(TM) i7-12700 CPU and 32.0 GB of RAM. Table 2 summarizes the environmental settings for the simulation experiments.

5.2. Experimental Setup

In order to comprehensively evaluate the performance of LDOMO, several representative methods are adopted for comparison, including NSGA-II [45], MOEA/D [46], LMOCSO [47], and MOPSO [48]. In addition, a variant of the proposed method, LDOMO-I, which omits the two local search optimizers, is used for comparison to verify their effectiveness. Moreover, the parameters of all of the algorithms are summarized in Table 3. In particular, the number of neurons in the hidden layer of the MLP in LDOMO is set to 40. For training the MLP in each generation, the learning rate LR is set to 0.1 and the number of epochs is set to e = 20. For the sake of fairness, all algorithms are run independently 20 times and the average results are recorded to evaluate the overall performance of the algorithms.

5.3. Simulation Results

In this section, the experimental results are discussed by comparing the performance of our method with several representative multiobjective evolutionary algorithms at different experimental scales. First, comprehensive performance comparisons of all algorithms are conducted across four different scales, each featuring different numbers of mobile devices (UAVs): 200 (100), 400 (200), 600 (300), and 1000 (500). Subsequently, the impact of different numbers of mobile devices on algorithm performance is analyzed, with the number of UAVs fixed at 500. Finally, the effect of different numbers of UAVs on algorithm performance is explored, with the number of mobile devices fixed at 1000.
Performance comparison: Figure 3 shows the non-dominated solution sets obtained by all algorithms in terms of both energy consumption and user satisfaction. In the small-scale setting (i.e., 200 (100)), LDOMO-I does not dominate MOEA/D, but shows a slight performance improvement over NSGA-II, LMOCSO, and MOPSO. However, LDOMO emerges as the superior performer, dominating all other algorithms with the lowest energy consumption and the highest user satisfaction. This dominance can be attributed to the effectiveness of the two local search optimizers employed by LDOMO, which contribute to reducing energy consumption and enhancing user satisfaction, thereby improving solution quality. In the three larger-scale experiments (i.e., 400 (200), 600 (300), and 1000 (500)), LDOMO and LDOMO-I both outperform the other algorithms and exhibit better scalability. This illustrates that the CSO-based evolutionary strategy and the MLP-based evolutionary strategy used in both LDOMO and LDOMO-I are able to learn faster convergence directions and improve convergence efficiency.
Impacts of different numbers of mobile devices: Table 4 presents the experimental results with varying numbers of mobile devices, focusing on energy consumption and user satisfaction. As shown in Table 4, the energy consumption of the system rises with the increasing number of mobile devices, attributed to the heightened computing requirements. Consequently, user satisfaction gradually diminishes as the system contends with higher computational loads. Across all scales of mobile devices, LDOMO consistently demonstrates a superior performance, maintaining the lowest energy consumption and ensuring user satisfaction of over 86%. In contrast, other algorithms exhibit lower user satisfaction levels, hovering around 70%, particularly in large-scale environments. In summary, LDOMO outperforms existing representative evolutionary algorithms across different scales of mobile devices. Its ability to effectively balance energy consumption and user satisfaction underscores its suitability for addressing the complexities of UAV-assisted MEC systems, regardless of the scale of mobile device deployment.
Impacts of different numbers of UAVs: Table 5 shows the experimental results with different scales of UAVs in terms of energy consumption and user satisfaction. As observed, the energy consumption of the system exhibits a positive trend with the increasing number of UAVs, primarily due to the hovering energy consumption of UAVs. However, the user satisfaction of mobile devices shows improvement with the escalation of UAVs, attributed to the augmented edge computing resources facilitated by the increased UAV deployment. Notably, LDOMO consistently demonstrates a superior performance across varying numbers of UAVs, maintaining the lowest energy consumption and the highest levels of user satisfaction. By effectively balancing energy consumption and user satisfaction, LDOMO emerges as a promising solution for addressing the challenges posed by different scales of UAV deployment in UAV-assisted MEC environments.

6. Conclusions and Future Work

In this paper, we focused on the challenges of joint UAV deployment and task offloading in large-scale UAV-assisted MEC. First, we formulated the joint UAV deployment and task offloading problem as a large-scale multiobjective optimization problem with the goal of minimizing energy consumption while improving user satisfaction. Then, we designed a large-scale UAV deployment and task offloading multiobjective optimization method based on the evolutionary algorithm (called LDOMO) to effectively address the above formulated problem. In the proposed LDOMO, the entire population was first decomposed into a set of poor solutions and a set of elite solutions. Then, CSO-based and MLP-based evolutionary strategies were proposed to explore the two solution spaces with the aim of accelerating convergence while improving the quality of the population. Subsequently, two local search optimizers were designed to avoid infeasible solutions and to further improve the quality of the solutions. Finally, we simulated different experimental scales to compare the performance of our method with existing representative evolutionary algorithms and discussed the impact of different numbers of UAVs and mobile devices. Extensive simulation results have shown that our approach outperforms several representative evolutionary algorithms in terms of energy consumption and user satisfaction at different experimental scales.
In future work, we will further consider the problem of UAV trajectory planning with obstacle avoidance in large-scale UAV-assisted MEC. In addition, we are interested in exploring other scenarios to extend the research work of this paper, such as partial task offloading, wireless energy harvesting, and collaboration between UAV and base stations.

Author Contributions

Conceptualization, Q.Q. and L.L.; Methodology, Q.Q. and L.L.; Software, Q.Q.; Validation, Q.Q.; Resources, Q.Q.; Data curation, Q.Q.; Writing—original draft, Q.Q.; Writing—review & editing, L.L.; Supervision, L.L., Z.X., Y.F., Q.L. and Z.M.; Project administration, Z.X., Q.L. and Z.M.; Funding acquisition, Z.X., Y.F., Q.L. and Z.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Natural Science Foundation of Guangdong Province under Grants 2023A1515011296 and 2023A1515011238; in part by the Stable Support Project of Shenzhen (Project No. 20231120145719001); in part by the National Natural Science Foundation of China (NSFC) under Grants 62376163 and 62272315; in part by the Shenzhen Science and Technology Foundation under Grants JCYJ20210324093212034 and JCYJ20220531101411027; in part by 2022 Guangdong Province Undergraduate University Quality Engineering Project (Shenzhen University Academic Affairs [2022] No. 7); and in part by the Guangdong Regional Joint Foundation Key Project under Grant 2022B1515120076.

Data Availability Statement

The data will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yin, Y.; Zheng, P.; Li, C.; Wang, L. A State-of-The-Art Survey on Augmented Reality-Assisted Digital Twin for Futuristic Human-Centric Industry Transformation. Robot. Comput.-Integr. Manuf. 2023, 81, 102515. [Google Scholar] [CrossRef]
  2. Chen, Z.; Hu, J.; Min, G.; Luo, C.; El-Ghazawi, T. Adaptive and efficient resource allocation in cloud datacenters using actor-critic deep reinforcement learning. IEEE Trans. Parallel Distrib. Syst. 2021, 33, 1911–1923. [Google Scholar] [CrossRef]
  3. Chen, Z.; Xiong, B.; Chen, X.; Min, G.; Li, J. Joint Computation Offloading and Resource Allocation in Multi-edge Smart Communities with Personalized Federated Deep Reinforcement Learning. IEEE Trans. Mob. Comput. 2024. [Google Scholar] [CrossRef]
  4. Li, L.; Qiu, Q.; Xiao, Z.; Lin, Q.; Gu, J.; Ming, Z. A Two-Stage Hybrid Multi-Objective Optimization Evolutionary Algorithm for Computing Offloading in Sustainable Edge Computing. IEEE Trans. Consum. Electron. 2024, 70, 735–746. [Google Scholar] [CrossRef]
  5. Chen, Z.; Yu, Z. Intelligent offloading in blockchain-based mobile crowdsensing using deep reinforcement learning. IEEE Commun. Mag. 2023, 61, 118–123. [Google Scholar] [CrossRef]
  6. Xiao, Z.; Qiu, Q.; Li, L.; Feng, Y.; Lin, Q.; Ming, Z. An Efficient Service-Aware Virtual Machine Scheduling Approach Based on Multi-Objective Evolutionary Algorithm. IEEE Trans. Serv. Comput. 2023. [Google Scholar] [CrossRef]
  7. Chen, Z.; Zhang, J.; Zheng, X.; Min, G.; Li, J.; Rong, C. Profit-Aware Cooperative Offloading in UAV-Enabled MEC Systems Using Lightweight Deep Reinforcement Learning. IEEE Internet Things J. 2023, 11, 21325–21336. [Google Scholar] [CrossRef]
  8. Adnan, M.H.; Zukarnain, Z.A.; Amodu, O.A. Fundamental Design Aspects of UAV-Enabled MEC Systems: A Review on Models, Challenges, and Future Opportunities. Comput. Sci. Rev. 2024, 51, 100615. [Google Scholar] [CrossRef]
  9. Shi, B.; Chen, Z.; Xu, Z. A Deep Reinforcement Learning Based Approach for Optimizing Trajectory and Frequency in Energy Constrained Multi-UAV Assisted MEC System. IEEE Trans. Netw. Serv. Manag. 2024. [Google Scholar] [CrossRef]
  10. Wang, Y.; Ru, Z.Y.; Wang, K.; Huang, P.Q. Joint Deployment and Task Scheduling Optimization for Large-scale Mobile Users in Multi-UAV-Enabled Mobile Edge Computing. IEEE Trans. Cybern. 2019, 50, 3984–3997. [Google Scholar] [CrossRef]
  11. Goudarzi, S.; Soleymani, S.A.; Wang, W.; Xiao, P. UAV-Enabled Mobile Edge Computing for Resource Allocation Using Cooperative Evolutionary Computation. IEEE Trans. Aerosp. Electron. Syst. 2023, 59, 5134–5147. [Google Scholar] [CrossRef]
  12. Kumbhar, F.H.; Shin, S.Y. Innovating Multi-Objective Optimal Message Routing for Unified High Mobility Networks. IEEE Trans. Veh. Technol. 2023, 72, 6571–6583. [Google Scholar] [CrossRef]
  13. Guo, H.; Wang, Y.; Liu, J.; Liu, C. Multi-UAV Cooperative Task Offloading and Resource Allocation in 5G Advanced and Beyond. IEEE Trans. Wirel. Commun. 2024, 23, 347–359. [Google Scholar] [CrossRef]
  14. Ning, Z.; Hu, H.; Wang, X.; Guo, L.; Guo, S.; Wang, G.; Gao, X. Mobile Edge Computing and Machine Learning in The Internet of Unmanned Aerial Vehicles: A Survey. ACM Comput. Surv. 2023, 56, 13. [Google Scholar] [CrossRef]
  15. Chen, Z.; Zhang, J.; Huang, Z.; Wang, P.; Yu, Z.; Miao, W. Computation offloading in blockchain-enabled MCS systems: A scalable deep reinforcement learning approach. Future Gener. Comput. Syst. 2024, 153, 301–311.
  16. Asim, M.; Mashwani, W.K.; Shah, H.; Belhaouari, S.B. An Evolutionary Trajectory Planning Algorithm for Multi-UAV-Assisted MEC System. Soft Comput. 2022, 26, 7479–7492.
  17. Pehlivanoglu, Y.V.; Pehlivanoglu, P. An Enhanced Genetic Algorithm for Path Planning of Autonomous UAV in Target Coverage Problems. Appl. Soft Comput. 2021, 112, 107796.
  18. Abhishek, B.; Ranjit, S.; Shankar, T.; Eappen, G.; Sivasankar, P.; Rajesh, A. Hybrid PSO-HSA and PSO-GA Algorithm for 3D Path Planning in Autonomous UAVs. SN Appl. Sci. 2020, 2, 1805.
  19. Tang, Q.; Wen, S.; He, S.; Yang, K. Multi-UAV-Assisted Offloading for Joint Optimization of Energy Consumption and Latency in Mobile Edge Computing. IEEE Syst. J. 2024, 18, 1414–1425.
  20. Mousa, M.H.; Hussein, M.K. Efficient UAV-based Mobile Edge Computing Using Differential Evolution and Ant Colony Optimization. PeerJ Comput. Sci. 2022, 8, e870.
  21. Samriya, J.K.; Kumar, M.; Tiwari, R. Energy-aware ACO-DNN Optimization Model for Intrusion Detection of Unmanned Aerial Vehicle (UAVs). J. Ambient Intell. Humaniz. Comput. 2023, 14, 10947–10962.
  22. Shurrab, M.; Mizouni, R.; Singh, S.; Otrok, H. Reinforcement Learning Framework for UAV-based Target Localization Applications. Internet Things 2023, 23, 100867.
  23. Yu, Z.; Hu, J.; Min, G.; Zhao, Z.; Miao, W.; Hossain, M.S. Mobility-aware proactive edge caching for connected vehicles using federated learning. IEEE Trans. Intell. Transp. Syst. 2020, 22, 5341–5351.
  24. AlShathri, S.I.; Chelloug, S.A.; Hassan, D.S. Parallel Meta-Heuristics for Solving Dynamic Offloading in Fog Computing. Mathematics 2022, 10, 1258.
  25. Wei, D.; Wang, R.; Xia, C.; Xia, T.; Jin, X.; Xu, C. Edge Computing Offloading Method Based on Deep Reinforcement Learning for Gas Pipeline Leak Detection. Mathematics 2022, 10, 4812.
  26. Mei, J.; Dai, L.; Tong, Z.; Deng, X.; Li, K. Throughput-Aware Dynamic Task Offloading Under Resource Constant for MEC with Energy Harvesting Devices. IEEE Trans. Netw. Serv. Manag. 2023, 20, 3460–3473.
  27. Ren, J.; Liu, J.; Zhang, Y.; Li, Z.; Lyu, F.; Wang, Z.; Zhang, Y. An Efficient Two-Layer Task Offloading Scheme for MEC System with Multiple Services Providers. In Proceedings of the IEEE Conference on Computer Communications, Online, 2–5 May 2022; pp. 1519–1528.
  28. Deng, T.; Chen, Y.; Chen, G.; Yang, M.; Du, L. Task Offloading Based on Edge Collaboration in MEC-Enabled IoV Networks. J. Commun. Netw. 2023, 25, 197–207.
  29. Liu, T.; Guo, D.; Xu, Q.; Gao, H.; Zhu, Y.; Yang, Y. Joint Task Offloading and Dispatching for MEC with Rational Mobile Devices and Edge Nodes. IEEE Trans. Cloud Comput. 2023, 11, 3262–3273.
  30. Zeng, F.; Tang, J.; Liu, C.; Deng, X.; Li, W. Task-Offloading Strategy Based on Performance Prediction in Vehicular Edge Computing. Mathematics 2022, 10, 1010.
  31. Tung, T.V.; An, T.T.; Lee, B.M. Joint Resource and Trajectory Optimization for Energy Efficiency Maximization in UAV-Based Networks. Mathematics 2022, 10, 3840.
  32. Arif, M.; Kim, W. Analysis of Fluctuating Antenna Beamwidth in UAV-Assisted Cellular Networks. Mathematics 2023, 11, 4706.
  33. Elgendy, I.A.; Meshoul, S.; Hammad, M. Joint Task Offloading, Resource Allocation, and Load-Balancing Optimization in Multi-UAV-Aided MEC Systems. Appl. Sci. 2023, 13, 2625.
  34. Zhu, A.; Lu, H.; Ma, M.; Zhou, Z.; Zeng, Z. DELOFF: Decentralized Learning-Based Task Offloading for Multi-UAVs in U2X-Assisted Heterogeneous Networks. Drones 2023, 7, 656.
  35. Xu, Y.; Deng, F.; Zhang, J. UDCO-SAGiMEC: Joint UAV Deployment and Computation Offloading for Space–Air–Ground Integrated Mobile Edge Computing. Mathematics 2023, 11, 4014.
  36. Tian, J.; Wang, D.; Zhang, H.; Wu, D. Service Satisfaction-Oriented Task Offloading and UAV Scheduling in UAV-Enabled MEC Networks. IEEE Trans. Wirel. Commun. 2023, 22, 8949–8964.
  37. Xia, J.; Wang, P.; Li, B.; Fei, Z. Intelligent Task Offloading and Collaborative Computation in Multi-UAV-Enabled Mobile Edge Computing. China Commun. 2022, 19, 244–256.
  38. Chen, Z.; Zheng, H.; Zhang, J.; Zheng, X.; Rong, C. Joint Computation Offloading and Deployment Optimization in Multi-UAV-Enabled MEC Systems. Peer Netw. Appl. 2022, 15, 194–205.
  39. Liu, B.; Liu, C.; Peng, M. Resource Allocation for Energy-Efficient MEC in NOMA-Enabled Massive IoT Networks. IEEE J. Sel. Areas Commun. 2020, 39, 1015–1027.
  40. Liu, Y.; Xiong, K.; Ni, Q.; Fan, P.; Letaief, K.B. UAV-Assisted Wireless Powered Cooperative Mobile Edge Computing: Joint Offloading, CPU Control, and Trajectory Optimization. IEEE Internet Things J. 2019, 7, 2777–2790.
  41. Li, L.; Lin, Q.; Ming, Z.; Wong, K.C.; Gong, M.; Coello, C.A.C. An Immune-Inspired Resource Allocation Strategy for Many-Objective Optimization. IEEE Trans. Syst. Man Cybern. Syst. 2023, 53, 3284–3297.
  42. Raquel, C.R.; Naval, P.C., Jr. An Effective Use of Crowding Distance in Multiobjective Particle Swarm Optimization. In Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation, Washington, DC, USA, 25–29 June 2005; pp. 257–264.
  43. Liu, S.; Li, J.; Lin, Q.; Tian, Y.; Tan, K.C. Learning to Accelerate Evolutionary Search for Large-scale Multiobjective Optimization. IEEE Trans. Evol. Comput. 2022, 27, 67–81.
  44. Li, L.; Li, Y.; Lin, Q.; Liu, S.; Zhou, J.; Ming, Z.; Coello, C.A.C. Neural Net-Enhanced Competitive Swarm Optimizer for Large-Scale Multiobjective Optimization. IEEE Trans. Cybern. 2023, 54, 3502–3515.
  45. Zhu, J.; Wang, X.; Huang, H.; Cheng, S.; Wu, M. A NSGA-II Algorithm for Task Scheduling in UAV-Enabled MEC System. IEEE Trans. Intell. Transp. Syst. 2021, 23, 9414–9429.
  46. Yao, Z.; Wu, H.; Chen, Y. Multi-Objective Cooperative Computation Offloading for MEC in UAVs Hybrid Networks via Integrated Optimization Framework. Comput. Commun. 2023, 202, 124–134.
  47. Tian, Y.; Zheng, X.; Zhang, X.; Jin, Y. Efficient Large-Scale Multiobjective Optimization Based on A Competitive Swarm Optimizer. IEEE Trans. Cybern. 2019, 50, 3696–3708.
  48. Zhu, X.; Zhou, M. Multiobjective Optimized Cloudlet Deployment and Task Offloading for Mobile-Edge Computing. IEEE Internet Things J. 2021, 8, 15582–15595.
Figure 1. Schematic of UAV deployment and task offloading in a large-scale UAV-assisted MEC.
Figure 2. Framework of LDOMO.
Figure 3. Performance comparisons of all algorithms on four different scales: (a) I = 200, J = 100; (b) I = 400, J = 200; (c) I = 600, J = 300; (d) I = 1000, J = 500.
Table 1. Main notations.

Symbol | Definition
$\mathcal{MD}$ | the set of mobile devices
$I$ | the number of mobile devices
$\mathcal{UAV}$ | the set of UAVs
$J$ | the number of UAVs
$D_i$ | the task data size of $md_i$
$C_i$ | the CPU cycles of $md_i$
$\psi_i$ | the maximum admissible delay of $md_i$
$F_j$ | the computational resource of $uav_j$
$O_j$ | the storage resource of $uav_j$
$(x_i^{md}, y_i^{md}, 0)$ | the spatial location of $md_i$
$(x_j^{uav}, y_j^{uav}, H)$ | the spatial location of $uav_j$
$l_{ij}$ | whether the task of $md_i$ is offloaded to $uav_j$
$B_{ij}$ | the channel bandwidth between $md_i$ and $uav_j$
$P_t$ | the transmit power
$P_h$ | the hovering power of the UAV
$\theta_i$ | whether the task of $md_i$ can be completed
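The notation above maps directly onto a small data model. The following Python sketch is a minimal illustration of that mapping; the class and field names are placeholders chosen here for readability rather than a prescribed implementation.

# Minimal data-model sketch of the notation in Table 1 (illustrative names only).
from dataclasses import dataclass
from typing import Optional
import math

@dataclass
class MobileDevice:          # md_i
    x: float                 # x_i^md
    y: float                 # y_i^md
    data_size: float         # D_i, task data size (bits)
    cpu_cycles: float        # C_i, CPU cycles required by the task
    max_delay: float         # psi_i, maximum admissible delay

@dataclass
class UAV:                   # uav_j
    x: float                 # x_j^uav
    y: float                 # y_j^uav
    height: float            # H, fixed flight altitude
    compute: float           # F_j, computational resource
    storage: float           # O_j, storage resource

def distance(md: MobileDevice, uav: UAV) -> float:
    """Euclidean distance between md_i at (x, y, 0) and uav_j at (x, y, H)."""
    return math.sqrt((md.x - uav.x) ** 2 + (md.y - uav.y) ** 2 + uav.height ** 2)

# l_ij as an index map: offload[i] = j if md_i offloads its task to uav_j, None otherwise.
offload: list[Optional[int]] = []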
Table 2. Environment settings.

Parameter | Value
Distribution area of UAVs and mobile devices | 1000 × 1000 m²
Number of mobile devices ($I$) | 100, 200, …, 1000
Number of UAVs ($J$) | 50, 100, …, 500
Data size of one task | [0.5, 2] Mbits
CPU cycles to compute one bit | [100, 1000] cycles
Maximum delay of one task | [100, 200] ms
Transmission bandwidth | 100 Mbps
Computation capability of each UAV | [5, 10] GHz
Transmit power | 1 W
Hovering power of each UAV | 1 kW
Noise power | −117 dBm
Central frequency | 2 GHz
Minimum allowable distance between UAVs | 4 m
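These settings can be read as a simulation configuration. The snippet below is one possible encoding of Table 2, with a helper that samples a task from the stated ranges; the dictionary keys and the sample_task helper are illustrative assumptions rather than part of the reported setup, and the noise power is interpreted as dBm.

# One possible encoding of the Table 2 settings (key names are illustrative).
import random

ENV = {
    "area_m": (1000.0, 1000.0),            # distribution area of UAVs and devices
    "num_devices": range(100, 1001, 100),  # I
    "num_uavs": range(50, 501, 50),        # J
    "data_bits": (0.5e6, 2e6),             # task data size: 0.5-2 Mbits
    "cycles_per_bit": (100, 1000),         # CPU cycles to compute one bit
    "max_delay_s": (0.1, 0.2),             # maximum delay: 100-200 ms
    "bandwidth_bps": 100e6,                # transmission bandwidth: 100 Mbps
    "uav_compute_hz": (5e9, 10e9),         # UAV computation capability: 5-10 GHz
    "tx_power_w": 1.0,                     # transmit power
    "hover_power_w": 1000.0,               # UAV hovering power
    "noise_power_dbm": -117.0,             # noise power (dBm assumed)
    "carrier_hz": 2e9,                     # central frequency
    "min_uav_dist_m": 4.0,                 # minimum allowable distance between UAVs
}

def sample_task(rng: random.Random) -> dict:
    """Draw one task uniformly within the ranges of Table 2."""
    bits = rng.uniform(*ENV["data_bits"])
    return {
        "data_bits": bits,
        "cpu_cycles": bits * rng.uniform(*ENV["cycles_per_bit"]),
        "max_delay_s": rng.uniform(*ENV["max_delay_s"]),
    }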
Table 3. Algorithm settings (common to all algorithms: population size = 100, iterations = 200, independent runs = 20).

Algorithm | Parameters
LDOMO, LDOMO-I | $LR$ = 0.1, $e$ = 20, $K$ = 40
NSGA-II | $R_c$ = 0.9, $R_m$ = 0.08
MOEA/D | $T$ = 20, $\delta$ = 0.9, $R_c$ = 0.8, $R_m$ = 0.01
LMCSO | $\theta$ = 0.1
MOPSO | $w$ = 0.4, $c_1$ = 2.0, $c_2$ = 2.0
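The common settings define a fixed search budget for every compared algorithm: a population of 100 evolved for 200 iterations, repeated over 20 independent runs. The driver below sketches that protocol; run_algorithm is a hypothetical callback standing in for any of the five optimizers, and the parameter dictionaries simply restate the table.

# Sketch of the shared experimental protocol in Table 3 (run_algorithm is hypothetical).
POP_SIZE, ITERATIONS, RUNS = 100, 200, 20

ALGORITHM_PARAMS = {
    "LDOMO":   {"LR": 0.1, "e": 20, "K": 40},   # also used by LDOMO-I
    "NSGA-II": {"Rc": 0.9, "Rm": 0.08},
    "MOEA/D":  {"T": 20, "delta": 0.9, "Rc": 0.8, "Rm": 0.01},
    "LMCSO":   {"theta": 0.1},
    "MOPSO":   {"w": 0.4, "c1": 2.0, "c2": 2.0},
}

def run_all(run_algorithm):
    """Collect the final populations of all algorithms over the 20 independent runs."""
    results = {}
    for name, params in ALGORITHM_PARAMS.items():
        results[name] = [
            run_algorithm(name, params, pop_size=POP_SIZE,
                          iterations=ITERATIONS, seed=run)
            for run in range(RUNS)
        ]
    return results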
Table 4. Comparison results for different numbers of mobile devices (EC = energy consumption, US = user satisfaction).

I (J = 500) | EC, LDOMO | EC, NSGA-II | EC, MOEA/D | EC, LMCSO | EC, MOPSO | US, LDOMO | US, NSGA-II | US, MOEA/D | US, LMCSO | US, MOPSO
100 | 56,970.1 | 57,431.5 | 57,002.0 | 57,853.4 | 58,077.6 | 0.888 | 0.830 | 0.720 | 0.770 | 0.760
200 | 63,414.9 | 65,684.9 | 64,640.7 | 65,974.2 | 66,650.8 | 0.884 | 0.790 | 0.700 | 0.740 | 0.730
300 | 71,332.8 | 74,111.9 | 72,910.3 | 74,336.2 | 75,317.4 | 0.860 | 0.743 | 0.697 | 0.690 | 0.687
400 | 77,574.7 | 81,653.4 | 80,162.3 | 81,585.6 | 82,668.7 | 0.880 | 0.772 | 0.700 | 0.702 | 0.702
500 | 84,481.2 | 90,214.1 | 89,105.9 | 90,339.2 | 91,916.8 | 0.876 | 0.730 | 0.688 | 0.690 | 0.686
600 | 92,729.7 | 98,438.2 | 97,719.3 | 98,768.7 | 100,130.7 | 0.867 | 0.722 | 0.673 | 0.683 | 0.685
700 | 102,604.7 | 107,783.9 | 105,357.2 | 106,677.9 | 109,443.0 | 0.867 | 0.713 | 0.667 | 0.690 | 0.680
800 | 103,946.7 | 115,604.6 | 113,217.9 | 115,313.4 | 117,403.9 | 0.867 | 0.712 | 0.670 | 0.680 | 0.677
900 | 112,344.3 | 124,771.6 | 122,638.6 | 124,138.6 | 126,358.2 | 0.863 | 0.710 | 0.660 | 0.677 | 0.677
1000 | 123,255.4 | 133,216.1 | 130,804.1 | 132,296.6 | 135,684.6 | 0.862 | 0.711 | 0.669 | 0.675 | 0.673
Table 5. Comparison results for different numbers of UAVs (EC = energy consumption, US = user satisfaction).

J (I = 1000) | EC, LDOMO | EC, NSGA-II | EC, MOEA/D | EC, LMCSO | EC, MOPSO | US, LDOMO | US, NSGA-II | US, MOEA/D | US, LMCSO | US, MOPSO
50 | 74,540.7 | 87,534.9 | 83,418.2 | 86,323.6 | 89,026.3 | 0.844 | 0.787 | 0.657 | 0.682 | 0.666
100 | 77,093.4 | 92,497.9 | 89,635.2 | 90,925.3 | 95,479.4 | 0.857 | 0.716 | 0.675 | 0.674 | 0.680
150 | 85,530.7 | 97,687.4 | 94,877.5 | 99,300.1 | 100,112.3 | 0.857 | 0.705 | 0.664 | 0.698 | 0.667
200 | 86,474.1 | 102,975.9 | 99,698.2 | 104,327.6 | 104,663.8 | 0.859 | 0.722 | 0.677 | 0.703 | 0.668
250 | 92,963.8 | 108,354.3 | 105,686.8 | 106,151.1 | 110,291.5 | 0.861 | 0.715 | 0.665 | 0.664 | 0.679
300 | 99,043.7 | 112,999.2 | 110,784.6 | 112,751.4 | 115,239.3 | 0.862 | 0.721 | 0.657 | 0.676 | 0.670
350 | 106,903.4 | 117,161.8 | 114,714.7 | 118,218.3 | 119,621.3 | 0.861 | 0.701 | 0.665 | 0.678 | 0.668
400 | 112,741.3 | 122,807.7 | 120,043.5 | 122,736.8 | 124,896.7 | 0.860 | 0.719 | 0.669 | 0.680 | 0.664
450 | 113,291.5 | 128,049.4 | 125,420.1 | 127,362.4 | 129,851.3 | 0.862 | 0.711 | 0.666 | 0.671 | 0.661
500 | 123,255.4 | 133,216.1 | 130,804.1 | 132,296.6 | 135,684.6 | 0.862 | 0.711 | 0.669 | 0.675 | 0.673
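Tables 4 and 5 share the configuration with I = 1000 mobile devices and J = 500 UAVs, which makes that row a convenient summary point: LDOMO consumes roughly 7.5%, 5.8%, 6.8%, and 9.2% less energy than NSGA-II, MOEA/D, LMCSO, and MOPSO, respectively, while improving user satisfaction by about 0.15 to 0.19. The short script below reproduces these figures from the tabulated values (variable names are illustrative).

# Relative gains of LDOMO over each baseline at I = 1000, J = 500 (values from Tables 4 and 5).
energy = {"LDOMO": 123255.4, "NSGA-II": 133216.1, "MOEA/D": 130804.1,
          "LMCSO": 132296.6, "MOPSO": 135684.6}
satisfaction = {"LDOMO": 0.862, "NSGA-II": 0.711, "MOEA/D": 0.669,
                "LMCSO": 0.675, "MOPSO": 0.673}

for name in ("NSGA-II", "MOEA/D", "LMCSO", "MOPSO"):
    energy_saving = 1.0 - energy["LDOMO"] / energy[name]            # lower energy is better
    satisfaction_gain = satisfaction["LDOMO"] - satisfaction[name]  # higher satisfaction is better
    print(f"{name}: {energy_saving:.1%} less energy, +{satisfaction_gain:.3f} satisfaction")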
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
