Article

Digital-Twin-Assisted Edge-Computing Resource Allocation Based on the Whale Optimization Algorithm

Communication and Network Laboratory, Dalian University, Dalian 116622, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(23), 9546; https://doi.org/10.3390/s22239546
Submission received: 28 October 2022 / Revised: 29 November 2022 / Accepted: 2 December 2022 / Published: 6 December 2022

Abstract

With the rapid increase of smart Internet of Things (IoT) devices, edge networks generate a large number of computing tasks, which require edge-computing resource devices to complete the calculations. However, unreasonable edge-computing resource allocation leads to high power consumption and resource waste. Therefore, when user tasks are offloaded to the edge-computing system, reasonable resource allocation is an important issue. Thus, this paper proposes a digital-twin (DT)-assisted edge-computing resource-allocation model and establishes a joint-optimization function of power consumption, delay, and unbalanced resource-allocation rate. Then, we develop a solution based on an improved whale optimization scheme. Specifically, we propose an improved whale optimization algorithm and design a greedy initialization strategy to improve the convergence speed for the DT-assisted edge-computing resource-allocation problem. Additionally, we redesign the whale search strategy to improve the allocation results. Several simulation experiments demonstrate that the improved whale optimization algorithm reduces the resource-allocation objective function value, the power consumption, and the average resource-allocation imbalance rate by 12.6%, 15.2%, and 15.6%, respectively. Overall, the power consumption with the assistance of the DT is reduced to 89.6% of the power required without DT assistance, thus improving the efficiency of the edge-computing resource allocation.

1. Introduction

As the growth of cloud-computing power gradually becomes unable to meet exponentially growing real-time data-processing requirements, edge-computing technology has been proposed. Edge computing migrates computing-task scheduling to a location close to the data source for execution, effectively improving computing efficiency. However, the limited edge-computing resources create new challenges for resource-allocation optimization. At the same time, the edge-computing resource-allocation scheme also affects the user experience. This paper conducts research on edge-computing resource allocation. In this section, we introduce the motivation, related studies, and contributions.

1.1. Motivation

With the rapid increase of intelligent Internet of Things (IoT) devices, the edge network facing the physical world generates massive amounts of perception data. Nevertheless, cloud computing cannot support the low-latency computing requirements of these IoT devices. Therefore, edge computing has been proposed to reduce the transmission time between IoT devices [1,2,3]. The concept of edge computing is to move part of the computing load to the network edge, closer to the user end, and employ edge nodes, e.g., base stations and switches, to complete the computing [4].
Currently, IoT and edge computing support applications in industry, medical care, electric power, agriculture, and other fields [5,6,7]. However, due to the limited resources of edge devices, resource-efficient task-allocation schemes must be developed under certain communication-delay conditions. Current research on edge-computing resource allocation primarily considers the total energy consumption, battery power, computing load, and network load to optimize the resource allocation [8,9,10,11]. During edge-computing resource allocation for a single node, the number of applications to be served is considered, whereas, when allocating across multiple nodes, problems such as minimizing the energy consumption and load balancing are primarily considered [12].

1.2. Related Studies

Intelligent optimization algorithms have been used to solve optimization problems in edge computing. For instance, Alfakih et al. delivered offloaded tasks to virtual machines in edge servers in a planned way to minimize the computing time and service costs, and proposed a multi-objective accelerated particle swarm optimization (APSO) algorithm based on dynamic programming for dynamic task scheduling and load balancing [13].
Luo et al. established an offloading communication and computing framework for vehicle edge computing. By jointly considering the offloading decision-making and the communication and computing resource allocation to minimize delay and cost, the edge resources can be efficiently scheduled; to this end, a particle-swarm-optimization-based computation offloading (PSOCO) algorithm was proposed [14].
Moreover, Subbaraj et al. used a local search strategy based on crowd search for resource allocation scheduling, which improved the allocation success rate [15]. Liu et al. proposed a joint-optimization objective to evaluate the unavailability level, communication delay, and resource waste and used a biogeographic optimization algorithm to solve the optimization objective [16].
The whale optimization algorithm has also been applied to solve the problem of edge computing and has effectively improved the edge-computing system’s efficiency. For instance, Huang et al. proposed a multi-objective whale optimization algorithm (MOWOA) based on time and energy consumption to solve the optimal offloading mechanism of computational offloading in mobile edge computing [17]. Xu et al. analyzed a three-layer network architecture for an Industrial Internet of Things (IIoT) that collaboratively and wirelessly powered an edge-computing network, using an improved hybrid whale optimization algorithm to maximize the efficiency of wireless sensor devices by offloading the decision-making, power allocation, computing resources, energy harvesting time, and residual energy [18].
A DT is defined as a virtual representation of a physical system, updated through the information exchange between the physical and virtual systems [19]. Currently, DT technology has been applied to assist edge computing [20,21,22,23,24] and provide a global perspective for edge-computing task offloading and resource allocation by establishing a DT of physical entities [25]. DT technology reflects the operating status of the edge devices and observes, in real time, the errors between the DT and the physical entities, which are analyzed and calculated through the DT.
This strategy allows the offloading and resource-allocation process to avoid real-time interactions between the user equipment and the edge servers, improving the task efficiency and reducing the system energy consumption [26,27]. In [28], the authors developed a DT-enabled mobile edge computing (MEC) architecture. A delay-minimization problem was formulated in the MEC model with respect to the transmission power, user association, intelligent task offloading, and DT-estimated CPU processing rate. This scheme alleviated the delay-performance problems.
Reference [29] built an edge-computing network combining DT and edge computing to improve the edge-computing network performance and reduce the resource overhead through DT technology. In [30], the authors established a dynamic DT of the air-assisted Internet of Vehicles and deployed a DT system in the UAV for unified resource scheduling and improved energy efficiency. A DT edge network was also suggested in [31], where the authors considered a DT of edge servers to evaluate the status of edge servers, improve the average offload delay, and reduce system costs.

1.3. Contributions

Although DT technology has begun to be applied in edge computing, the above research mainly considered delay and power-consumption issues. Nevertheless, during DT-assisted edge-computing resource allocation, improving the resource-allocation balance can reduce the waste of resources. Thus, this paper uses a DT to optimize the resource allocation of edge computing. In traditional edge-computing resource allocation, the assignment task must be transmitted to the edge server, which allocates resources according to the task.
However, the emergence of DT technology enables the edge server to have a resource device model so that the edge server can directly allocate tasks according to the resource device to reduce the overhead of the transmission process. Therefore, we use DT technology to assist edge-computing servers in resource allocation; establish a joint-optimization function for transmission delay, resource allocation imbalance rate, and power consumption; and develop a method based on the whale optimization algorithm. The main contributions of this work are as follows:
(1)
We consider a DT-assisted edge-computing resource-allocation system and establish a DT-assisted resource-allocation model.
(2)
We build a joint-optimization model of time, energy consumption, and resource allocation imbalance rate through the DT and the physical model.
(3)
We improve the whale optimization algorithm to solve the DT-assisted edge-computing resource-allocation optimization problem.
The remainder of this paper is organized as follows. Section 2 describes the system model and elaborates on the resource-allocation problem of DT-assisted edge computing. Section 3 introduces the proposed improved algorithm, and Section 4 presents the simulation and comparison results. Finally, Section 5 concludes this paper.

2. System and Computation Model

The system comprises two parts: a physical entity and its DT. The physical entity comprises the users, edge servers, and resource devices. The DT is deployed inside the edge server; after the client generates the task parameter model, the model reaches the edge server. The DT in the edge server allocates resources to the task according to the energy consumption, storage, and computing requirements, generates the optimal result, and transmits the result to the user device. The user equipment then directly transmits the task to the resource equipment, which completes the calculations. The DT cannot entirely and accurately simulate the physical entity, as there will be specific errors, and the cumulative error will increase with time. Therefore, parameter calibration between the physical entity and the DT is performed at regular intervals in the system. For ease of understanding, Table 1 lists the key notations in this paper.

2.1. DT-Assisted Edge-Computing Model

As illustrated in Figure 1, a DT-assisted edge-computing setup consists of user devices, edge servers, resource devices, and DT. The user equipment initiates a task request to the edge server, the edge device allocates computing resources to the requested task, and the DT is deployed in the edge server.
Let $U = \{1, 2, 3, \ldots, u\}$ represent the user equipment that needs to request resources, and let $E = \{1, 2, 3, \ldots, e\}$ denote the edge servers. The resource devices for allocation managed by each edge server are given by $S = \{1, 2, 3, \ldots, s\}$. The DT of the resource devices in the DT layer provides the real-time resource usage and is represented by $DT_s = \{s_1, s_2, s_3, \ldots, s_n\}$, where $s_i = \{q_1, q_2, q_3, \ldots, q_m\}$ and $q_i$ is a particular resource of $s_i$. A task model in an edge server is represented as $J = \{j_1, j_2, j_3, \ldots, j_l\}$, where the i-th task is $j_i = \{z_1, z_2, z_3, \ldots, z_m\}$ and $z_i$ is the demand quantity of a task for a specific resource. According to the resource situation of the resource device's DT, the DT deployed in the edge server selects the optimal allocation strategy for each task request and assigns the task to the physical resource device to complete the calculation.

2.2. DT-Assisted Edge-Computing Latency Model

Edge computing brings computing closer to the user to reduce latency. Therefore, during edge-computing resource allocation, the processing delay is a vital evaluation index, mainly divided into the communication delay, the computation delay, and the correction delay of the DT model. The DT model is not calibrated continuously and is updated only when the difference between the DT model and the physical entity reaches a threshold. We use $T_0$ to denote its update time. The simulated transmission time $\tilde{T}_m^{\tau}$ of the m-th task is defined as:
$\tilde{T}_m^{\tau} = \frac{D_m}{B_m}$
where D m is the data volume of the m-th task simulated by the DT, and  B m is the bandwidth used by the m-th task of the DT. Since the DT model and the physical entity are not the same, the error Δ T m τ of the simulated transmission of the m-th task is defined as:
$\Delta T_m^{\tau} = \frac{D_m + \Delta D_m}{B_m + \Delta B_m} - \frac{D_m}{B_m} = \frac{B_m \Delta D_m - \Delta B_m D_m}{B_m (B_m + \Delta B_m)}$
where Δ D m is the DT’s data error providing analog transmission, and  Δ B m is the bandwidth error of the analog transmission. Hence, the total time T τ of the task data transfer is defined as:
$T^{\tau} = \sum_{e \in E} \sum_{j \in J} \left( \tilde{T}_{e,j}^{\tau} + \Delta T_{e,j}^{\tau} \right)$
The task calculation time of the DT simulation is represented by T ˜ m c , which can be calculated by Formula (4).
$\tilde{T}_m^{c} = \frac{C_m}{f_m}$
where C m is the calculation amount of the m-th task of the DT simulation and f m is the calculation frequency allocated by the resource device for the m-th task. Therefore, the calculation time error Δ T m c generated by the DT is defined as:
$\Delta T_m^{c} = \frac{C_m + \Delta C_m}{f_m + \Delta f_m} - \frac{C_m}{f_m} = \frac{f_m \Delta C_m - \Delta f_m C_m}{f_m (f_m + \Delta f_m)}$
where Δ C m is the calculated error of DT providing the analog transmission and Δ f m is the error between the actual computing frequency of the m-th task and the digital twin simulation computing frequency. Then, the total computation time T c of all tasks is defined as:
T c = e E j J T ˜ e , j c + Δ T e , j c
Therefore, the total time for data transfer, computation, and DT model correction during the allocation process is represented by T t o t , which can be calculated using Formula (7).
$T_{tot} = \sum_{e \in E} \sum_{j \in J} \left( \tilde{T}_{e,j}^{\tau} + \Delta T_{e,j}^{\tau} \right) + \sum_{e \in E} \sum_{j \in J} \left( \tilde{T}_{e,j}^{c} + \Delta T_{e,j}^{c} \right) + \sum_{e \in E} T_e^0$
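To make the latency model concrete, the following Python sketch evaluates the simulated transmission time, its DT error term, and the total transfer time over a list of tasks. The dictionary keys (`D`, `B`, `dD`, `dB`) are illustrative names for the task data volume, the bandwidth, and the DT's simulation errors, not notation taken from the paper.

```python
def transmission_time(d, b):
    """Simulated transmission time of one task: T~ = D / B."""
    return d / b

def transmission_error(d, b, dd, db):
    """DT error of the simulated transmission:
    (D + dD) / (B + dB) - D / B = (B*dD - dB*D) / (B * (B + dB))."""
    return (b * dd - db * d) / (b * (b + db))

def total_transfer_time(tasks):
    """Total transfer time: simulated time plus error, summed over all tasks."""
    return sum(transmission_time(t["D"], t["B"])
               + transmission_error(t["D"], t["B"], t["dD"], t["dB"])
               for t in tasks)
```

For example, a task with D = 10 and B = 2 has a simulated transmission time of 5, and adding the error term recovers the real value (D + ΔD)/(B + ΔB).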

2.3. DT-Assisted Edge-Computing Energy-Consumption Model

The battery power of edge-resource devices is limited; therefore, the task allocation must account for power usage in order to maximize the use of edge resources. Accordingly, this paper establishes the following power-consumption model, with the power consumption of the m-th task defined as:
$P_m = k f_m^3 t = k (f_m + \Delta f_m)^2 (C_m + \Delta C_m)$
where k is the coefficient of different computing resources related to the physical device chip, f m is the calculation frequency of the m-th task, and t is the calculation time required by the m-th task. The total computing power consumption for all computing tasks is:
$P_{tot} = \sum_{e \in E} \sum_{j \in J} k (f_{e,j} + \Delta f_{e,j})^2 (C_{e,j} + \Delta C_{e,j})$
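The power model above can be sketched as follows; this is a hedged illustration in which `k`, `f`, `df`, `c`, and `dc` stand for the chip coefficient, the computing frequency, the frequency error, the computation amount, and the computation-amount error. Because the calculation time is $t = (C_m + \Delta C_m)/(f_m + \Delta f_m)$, the cubic-in-frequency form $k f^3 t$ collapses to the quadratic form summed in the total.

```python
def task_power(k, f, df, c, dc):
    """Power consumption of one task:
    P = k * f^3 * t = k * (f + df)^2 * (c + dc), since t = (c + dc) / (f + df)."""
    return k * (f + df) ** 2 * (c + dc)

def total_power(k, tasks):
    """Total computing power consumption over all tasks."""
    return sum(task_power(k, t["f"], t["df"], t["C"], t["dC"]) for t in tasks)
```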

2.4. A Balanced Model of Edge-Computing Resource Allocation Assisted by DT

We consider the balance among different resource types in the resource devices during resource allocation. If a particular resource type is excessively occupied, the device can no longer accept allocations even though large amounts of its other resources remain, thereby reducing the device's actual resource usage rate. Therefore, during resource allocation, balancing the allocation is mandatory: when the difference between the remaining resources in different dimensions is large, many resources are wasted and the resource utilization rate is lower. Assume that there are N resource devices, that the m-th resource device is $S_m$, that the total number of resource types is M, and that the simulated usage rate of the i-th resource of the m-th device is $\tilde{U}_i^m$. Then, we use $U_i^m$ to represent the real usage rate, which is defined as:
$U_i^m = \tilde{U}_i^m + \Delta U_i^m$
where Δ U i m is the error between the usage rate of the i-th resource of the m-th device simulated by the digital twin and the real usage rate. Then, the average utilization rate of M resources of the m-th device is represented by U m a v g , which is defined as:
$U_m^{avg} = \frac{\sum_{i \in M} U_i^m}{M}$
Then, the resource allocation imbalance rate of the m-th device is represented by D m , which is defined as:
$D_m = \frac{1}{M} \sum_{i \in M} \left( U_i^m - U_m^{avg} \right)^2$
The imbalance rate of the resource allocation for all resource equipment is represented by D ¯ , which is defined as:
$\bar{D} = \frac{\sum_{e \in E} \sum_{s \in S} D_{e,s}}{N}$
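The imbalance measure above is essentially the variance of a device's per-resource usage rates, averaged over all devices. A minimal sketch, using hypothetical lists of usage rates, is:

```python
def device_imbalance(usages):
    """Resource-allocation imbalance of one device: the mean squared
    deviation of its per-resource usage rates from their average."""
    m = len(usages)
    avg = sum(usages) / m
    return sum((u - avg) ** 2 for u in usages) / m

def average_imbalance(devices):
    """Average imbalance rate over all N resource devices."""
    return sum(device_imbalance(u) for u in devices) / len(devices)
```

A device using every resource type at the same rate has imbalance 0; the more uneven the usage, the larger the imbalance.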
When a physical device decides during the calculation process that a task must be handed over to the edge resources, the task is transferred to the edge-computing device to complete the calculation. This paper obtains the status of the edge-resource devices through the DT model deployed on the edge server and assigns tasks through the computing-task model.
Finally, the assignment results are distributed to the user devices, which directly transmit the tasks to the resource devices to complete the calculations. During task assignment, assigning tasks nearby can reduce the transmission delay; however, such an assignment may cause resource waste due to unbalanced resource utilization. Therefore, we comprehensively consider the delay, power consumption, and resource utilization and establish a DT-assisted system for optimal resource allocation by jointly minimizing the delay, the power consumption, and the resource-allocation imbalance rate. The target problem optimized in this paper is expressed as follows:
$$\begin{aligned} \min W &= \theta_1 T_{tot} + \theta_2 P_{tot} + \theta_3 \bar{D} \\ \text{s.t. } C1 &: 0 < \theta_1, \theta_2, \theta_3 < 1 \\ C2 &: \theta_1 + \theta_2 + \theta_3 = 1 \\ C3 &: \sum_{j \in J} P_{i,j} < P_{\max}^i, \quad \forall i \in E \\ C4 &: \sum_{j \in J} f_{i,j} < f_{\max}^i, \quad \forall i \in E \\ C5 &: T_{i,j} < T_{\max}^i, \quad \forall j \in J, \ i \in E \end{aligned}$$
where θ is the weight coefficient, C 3 is the constraint on the power consumption of edge-resource devices, C 4 is the constraint on the frequency of resource devices, and  C 5 is the constraint on the task time.
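Putting the three terms together, the objective is a simple weighted sum. In the sketch below the default weights are illustrative placeholders, and the per-device power, frequency, and time constraints (C3–C5) would be enforced by the allocation procedure rather than by this function.

```python
def objective(t_tot, p_tot, d_bar, theta=(0.4, 0.3, 0.3)):
    """Joint optimization objective W = θ1*T_tot + θ2*P_tot + θ3*D_bar.
    The weights must be positive and sum to 1 (constraints C1 and C2);
    the default values here are illustrative, not from the paper."""
    t1, t2, t3 = theta
    assert all(0 < t < 1 for t in theta) and abs(t1 + t2 + t3 - 1) < 1e-9
    return t1 * t_tot + t2 * p_tot + t3 * d_bar
```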

3. Improved Whale Optimization Algorithm

3.1. Whale Optimization Algorithm

The whale optimization algorithm (WOA) [32] is a meta-heuristic intelligent optimization algorithm proposed by Mirjalili et al. that simulates three humpback-whale hunting behaviors: surrounding the prey, bubble-net attacking, and searching for prey.

3.1.1. Surrounding the Prey

Humpback whales identify the prey’s location and surround it. The whale individual closest to the prey is the current optimal solution, and the other whales approach the current optimal individual. The position update formula is as follows:
$D = \lvert C \cdot X^*(t) - X(t) \rvert$
$X(t+1) = X^*(t) - A \cdot D$
where t is the number of iterations, X * t is the current optimal whale individual position vector, and  X t is the current whale individual position vector. A and C are coefficients, which are calculated as follows:
$A = 2a \cdot r - a$
$C = 2r$
where a decreases linearly from 2 to 0 during the iterative process, and r is a random number within [ 0 , 1 ] .

3.1.2. Bubble Attack

When updating its position, the humpback whale spits out bubbles surrounding the prey and, with a certain probability, hunts in a spiral manner, defined as:
$X(t+1) = D \cdot e^{bl} \cdot \cos(2\pi l) + X^*(t)$
where b is a constant and l is a random number within [ 1 , 1 ] .

3.1.3. Searching for Prey

When $\lvert A \rvert > 1$, the whale moves away from the current optimal whale individual and performs a global search, unaffected by the current optimum. The position-update formula is as follows:
$D = \lvert C \cdot X_{rand} - X(t) \rvert$
$X(t+1) = X_{rand} - A \cdot D$
where X r a n d is the location of a random individual whale.
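The three canonical WOA mechanisms above can be combined into a single scalar position update, as in the following sketch. The spiral constant `b` and the 0.5 branching probability follow the standard algorithm; reducing the position to one dimension is a simplification for readability.

```python
import math
import random

def woa_step(x, x_best, x_rand, a, b=1.0):
    """One canonical WOA position update for a scalar position x;
    x_best is the current best whale, x_rand a random whale, and a
    the convergence factor that shrinks over the iterations."""
    r = random.random()
    A = 2 * a * r - a          # coefficient A
    C = 2 * random.random()    # coefficient C
    p = random.random()
    if p < 0.5:
        if abs(A) < 1:         # surround the prey (exploit near the best whale)
            d = abs(C * x_best - x)
            return x_best - A * d
        else:                  # |A| >= 1: global search around a random whale
            d = abs(C * x_rand - x)
            return x_rand - A * d
    else:                      # spiral bubble-net attack
        l = random.uniform(-1, 1)
        d = abs(x_best - x)
        return d * math.exp(b * l) * math.cos(2 * math.pi * l) + x_best
```

In the resource-allocation setting, such updates operate on the encoded allocation vector described in Section 3.2.1 rather than on a continuous position.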

3.2. DT-Assisted Edge-Computing Resource Allocation Based on the Improved Whale Optimization Algorithm

The whale optimization algorithm converges slowly in the process of resource allocation and easily falls into local optima. Therefore, to solve this problem, we improved the whale optimization algorithm and propose IPWOA. First, we encode the computing tasks so that the whale optimization algorithm can solve resource-allocation tasks. Then, to improve the early convergence speed of the whale optimization algorithm, we propose a greedy initialization strategy.
Through this effective initialization, the initial whale population starts in a better state. Finally, we redesign the whale's predation and search strategies: by incorporating the optimal value found during each individual whale's own search process into the position update, the local search ability of the whale optimization algorithm is improved. The specific improvement strategies are as follows.

3.2.1. Encoding

During the application, we encode the optimized tasks and map the resource devices to the whale population. Here, the distance within the whale population is the difference between resource devices, that is, the distance between resource devices. The coded index of the population represents the tasks to be assigned, and its length equals the number of tasks. The filled parameter is the number of the selected edge-computing resource, and the corresponding number indicates the resource code.
The coding format is illustrated in Figure 2, where the numbers 0–14 in the population index the optimization tasks and the resource numbers range over 0–6. Figure 2 depicts a task assignment in which task No. 0 is calculated by resource device No. 2.
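A minimal sketch of this encoding follows; the task and device counts (15 and 7) match the Figure 2 example, and the random fill is purely illustrative.

```python
import random

# Sizes matching the Figure 2 example: 15 tasks (0-14), 7 resource devices (0-6).
NUM_TASKS, NUM_DEVICES = 15, 7

def random_allocation():
    """A candidate solution: position i holds the device assigned to task i."""
    return [random.randrange(NUM_DEVICES) for _ in range(NUM_TASKS)]

plan = random_allocation()
# plan[0] == 2 would mean task No. 0 is computed by resource device No. 2.
```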

3.2.2. Greedy Initialization Method

The initialized population significantly influences the results during the algorithm's execution, and a better-initialized population speeds up the algorithm's convergence. Hence, we designed an initialization method based on a greedy strategy according to the optimization problem; the obtained initial population is a locally optimal solution. The initialization algorithm employs Formula (22) to allocate to each task the resource device with the minimum value of the optimization objective function among the resource-device set. For example, in Figure 2, the optimal device for task 0 is first calculated to be device 2, and, based on this, the optimal devices for the other tasks are solved. The corresponding allocation scheme is a locally optimal solution, which is taken as the initialized population.
At the same time, the assignment results may differ between runs because the index of the first task to be assigned is randomly generated at the beginning, which ensures population diversity.
$X_i^t = \min \{ w_1, w_2, w_3, \ldots, w_n \}$
where X i t is the computing resource allocated by the i-th task, and  w n is the cost value generated by the resource device allocated by the corresponding task.
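The greedy initialization can be sketched as follows; `cost` is a hypothetical callback standing in for the objective value $w_n$ of Formula (22), and the random starting index reproduces the diversity mechanism described above.

```python
import random

def greedy_initialize(num_tasks, num_devices, cost, start=None):
    """Greedy initialization: starting from a (random) task index, assign
    each task to the device with the minimum objective cost so far.
    `cost(task, device, partial_plan)` is a hypothetical callback returning
    the objective value of adding that assignment."""
    start = random.randrange(num_tasks) if start is None else start
    plan = [None] * num_tasks
    for k in range(num_tasks):
        task = (start + k) % num_tasks   # random start preserves diversity
        plan[task] = min(range(num_devices),
                         key=lambda dev: cost(task, dev, plan))
    return plan
```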

3.2.3. Improved Optimization Methods

A. Nonlinear convergence factor
The convergence factor of the original whale optimization algorithm, given by Formula (23), is linear; however, after greedy initialization, the optimization algorithm must quickly enter the local search stage.
$a = 2 \cdot \left( 1 - \frac{t}{t_{\max}} \right)$
B. Improving the predation mechanism
When the whale population is preying, the whales approach the prey; however, this can force the whale population into a local optimum. Therefore, to rebalance the global and local optimization, a self-learning item is designed and defined as:
$Y(t+1) = Y(t) + A r_1 \cdot \left( X_\alpha - X(t) \right) + v r_2 \cdot \left( X^* - X(t) \right), \quad X(t+1) = X(t) + Y(t+1)$
where X α is the optimal value of the current whale individual, r 1 and r 2 are random numbers, and v is an inertia weight.
C. Improving the siege mechanism
In the original algorithm, the whale spirals around the prey. In this paper, individual whales update their positions based on the latest spiral position to avoid falling into a locally optimal solution; combining the global and local optimization with the self-learning item, the update is defined as:
$X_\delta = D \cdot e^{bl} \cdot \cos(2\pi l) + X^*(t), \quad Y(t+1) = Y(t) + A r_1 \cdot \left( X_\alpha - X_\delta \right) + v r_2 \cdot \left( X^* - X(t) \right), \quad X(t+1) = X(t) + Y(t+1)$
where X δ is the individual whale after the current spiral update position.
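The two improved updates, Formulas (24) and (25), differ only in whether the self-learning term is taken relative to the current position or to the spiral-updated position $X_\delta$. A scalar sketch follows, with `v` as the inertia weight and `x_self_best` as the individual whale's best-found position (both treated as scalars here for illustration).

```python
import math
import random

def improved_update(x, y, x_best, x_self_best, A, v=0.5, spiral=False, b=1.0):
    """Self-learning position update (a sketch of Formulas (24)/(25)):
    the velocity-like term y accumulates pulls toward the individual's own
    best position and the global best. With spiral=True, the individual-best
    pull is measured from the spiral-updated position X_delta instead."""
    r1, r2 = random.random(), random.random()
    target = x
    if spiral:  # Formula (25): first take the spiral-updated position X_delta
        l = random.uniform(-1, 1)
        target = abs(x_best - x) * math.exp(b * l) * math.cos(2 * math.pi * l) + x_best
    y_new = y + A * r1 * (x_self_best - target) + v * r2 * (x_best - x)
    return x + y_new, y_new
```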
The pseudo-code for the improved algorithm (Algorithm 1) is provided below:
Algorithm 1: IPWOA
Input: tasks J, computing resources S, t_max
Output: BestAllocation
1: Set t = 0, set number of tasks tasksNum, set t_max, searchNo, allocation plan P;
2: Initialize each allocation plan P using Equation (22);
3: Calculate the fitness of each allocation plan P using Equation (14);
4: Best allocation bP = the best allocation plan;
5: While t < t_max:
6:    Update a, A, C, l, p;
7:    For i = 0 to tasksNum:
8:       if p < 0.5
9:          if |A| < 1
10:             Update resource allocation plan P using Equation (24);
11:          else
12:             Update resource allocation plan P using Equations (20) and (21);
13:       else
14:          if |A| < 1
15:             Update resource allocation plan P using Equation (25);
16:          else
17:             Update resource allocation plan P using Equation (19);
18:    Check whether any allocation plan exceeds the search space and amend it;
19:    Calculate the fitness of each allocation plan using Equation (14);
20:    Update the best allocation bP if there is a better solution;
21:    t = t + 1;
22: return best allocation bP
The IPWOA solution task assignment flow chart is illustrated in Figure 3, comprising the following major steps:
Step 1: Initialize the parameters, such as the memory, computing resources, and time constraints of edge devices and computing tasks. Initialize the parameters of the whale population. Set basic parameters, such as the number of whale populations and the maximum number of iterations.
Step 2: Use formula (22) to greedily initialize the whale swarm to generate a set of initial optimal solutions.
Step 3: If the termination iteration condition is reached, step 7 is performed. Otherwise, step 4 is performed.
Step 4: Calculate the objective function value of resource allocation according to formula (14), and record the optimal individual in the current whale group and the global optimal whale individual. Then, generate a random number p. If p < 0.5 , go to step 5. Otherwise, go to step 6.
Step 5: Compute the coefficient A. If $\lvert A \rvert < 1$, update the whale positions through Formula (24). Otherwise, update the whale positions through Formulas (20) and (21), and repeat step 3.
Step 6: Compute the coefficient A. If $\lvert A \rvert < 1$, update the whale positions using Formula (25). Otherwise, update the whale positions using Formula (19), and repeat step 3.
Step 7: Output the optimal whale individual and record the optimal computing resource allocation result.
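The overall flow of Algorithm 1 can be condensed into the following runnable skeleton. This is a simplified sketch: `fitness` is a hypothetical callback implementing Formula (14), the greedy initialization is replaced by a random one, and the continuous position updates of Formulas (19)–(25) are abstracted into discrete per-task moves, since the encoding of Section 3.2.1 is discrete.

```python
import random

def ipwoa(num_tasks, num_devices, fitness, t_max=200):
    """Skeleton of the IPWOA main loop; `fitness(plan)` returns the
    objective value of an allocation plan and is to be minimized."""
    pop_size = 30
    pop = [[random.randrange(num_devices) for _ in range(num_tasks)]
           for _ in range(pop_size)]                 # stand-in for greedy init
    best = list(min(pop, key=fitness))
    for t in range(t_max):
        a = 2 * (1 - t / t_max)                      # linear convergence factor
        for plan in pop:
            A = 2 * a * random.random() - a
            for i in range(num_tasks):
                if abs(A) < 1:
                    plan[i] = best[i]                # exploit: move toward the best plan
                elif random.random() < 0.5:
                    plan[i] = random.randrange(num_devices)   # explore
            # amend any assignment outside the search space (Algorithm 1, line 18)
            plan[:] = [d % num_devices for d in plan]
        cand = min(pop, key=fitness)
        if fitness(cand) < fitness(best):
            best = list(cand)
    return best
```

A real implementation would map the continuous whale positions back onto valid device indices after each update, as line 18 of Algorithm 1 does.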

4. Simulation and Results

In this section, we build simulation experiments to evaluate the proposed model and solution method. The simulation experiments were performed on a computer configured with an Intel Core i7-7700HQ 2.8 GHz CPU and 16 GB of RAM.

4.1. Simulation Setup

For the simulated experiments, we consider that the resources possessed by the resource devices may differ. The computing-frequency range of the simulated resource devices is [50, 80] kHz, the bandwidth resources are [180, 230] KB, and the memory resources are [190, 250] MB. The error is in the range of (0, 1), and the range of the coefficient k is (0, 1). The computing resources occupied by the different tasks are [8, 13] kHz, the bandwidth resources are [18, 25] KB, the memory resources are [15, 26] MB, and the time constraint is [0.7, 1.25] s. The simulation experiments are conducted with the above parameters.

4.2. Impact of the Improved Algorithm on Resource Allocation

As illustrated in Figure 4, we compare the iterative results of the different algorithms on the resource-allocation problem. Specifically, the simulation experiments involve the particle swarm optimization algorithm (PSO), the gray wolf optimization algorithm (GWO), the whale optimization algorithm (WOA), the greedy-initialized whale optimization algorithm (IWOA), the whale optimization algorithm with the improvements of Section 3.2.3 (PWOA), and the proposed improved whale optimization algorithm (IPWOA). All algorithms were run for 200 iterations on the optimization task. Since the original WOA is randomly initialized, its early convergence was slower than that of IPWOA. Furthermore, the improvements yielded better optimization results than WOA and the other algorithms, with the experimental results demonstrating that IPWOA attained an optimal result 15.2% smaller than the original WOA.
Additionally, we analyzed each algorithm's average optimal results over multiple runs to verify that the improvement is more generic. We conducted 60 rounds of experiments on each algorithm, each iterated 200 times, and recorded the average optimization target values at 20, 30, 40, 50, and 60 rounds. The corresponding results are reported in Table 2 and Figure 5. The average over 60 rounds shows that the optimal result of IPWOA was 12.6% better than that of WOA, and IPWOA also achieved better results than PSO and GWO.
We also analyzed the final optimal allocation scheme obtained by each optimization algorithm. Figure 6 and Figure 7 compare the power consumption and the resource-allocation imbalance rates produced by the optimal allocation scheme of each algorithm. Figure 6 shows that, in the allocation scheme of the optimal solution, the power consumption of IPWOA was reduced by 15.2% compared with WOA, and Figure 7 shows that the average resource-allocation imbalance rate was reduced by 15.6%.

4.3. Optimization of an Edge-Computing Resource-Allocation System with a DT System

DT technology can assist in allocating edge-computing resources, allowing the edge-computing system to reduce the number of data transmissions and giving the resource devices more time for computing. When a resource device has more computing time, the allocated frequency can be relatively lower, and the computing power consumption is reduced.
Figure 8 highlights that, under the same time constraints, the optimal resource allocation obtained by the improved whale optimization algorithm with DT assistance increases the proportion of computing time in the total time. Therefore, we further analyzed the impact of DT assistance on the resource-allocation power consumption in terms of the time saved. As illustrated in Figure 9, the power consumption with the assistance of the DT was 89.6% of that without DT assistance.

5. Conclusions

With the development of edge-computing technology, IoT applications are increasingly connected to edge-computing systems, making efficient edge-computing resource allocation essential for a good user experience.
In this paper, a DT-assisted edge-computing resource-allocation model was developed to address the resource waste and high power consumption arising in edge-computing resource allocation. Specifically: (1) Through DT-assisted edge computing, a problem model of the power consumption, delay, and resource-allocation imbalance rate was established. (2) A greedy initialization strategy was proposed, which effectively improved the early convergence speed of the whale optimization algorithm. (3) An improved search strategy was proposed, and the search accuracy of the algorithm was improved by introducing the optimal item of the individual whale's search process.
Simulation experiments showed that the IPWOA reduced the resource-allocation objective function value, the power consumption, and the average resource-allocation imbalance rate by 12.6%, 15.2%, and 15.6%, respectively. The power consumption with the assistance of the DT was reduced to 89.6% of the power required without DT assistance.
In future work, we will further consider the impact of dynamic user devices on the optimization problem and continue to study DT-assisted task offloading for dynamic users in edge computing.

Author Contributions

Conceptualization, S.Q. and J.Z.; methodology, S.Q.; software, J.Z.; validation, Y.L., J.D., and F.C.; formal analysis, Y.W.; investigation, F.C.; resources, S.Q.; data curation, J.D.; writing—original draft preparation, J.Z.; writing—review and editing, S.Q. and J.Z.; visualization, J.Z.; supervision, Y.L.; project administration, A.L.; funding acquisition, S.Q. and Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the fund project of the Equipment Development Department of the Central Military Commission (grant numbers 6140002010101 and 6140001030111); the APC was funded by Dalian University.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. DT-assisted edge-computing resource-allocation model.
Figure 2. Resource equipment and user task encoding method.
Figure 3. IPWOA flow chart.
Figure 4. Optimized objective values after multiple iterations of different algorithms. Each algorithm iterates 200 times.
Figure 5. Optimization target values after multiple rounds of iterations of different algorithms. Each algorithm was run for multiple rounds of 200 iterations each, and the average optimal objective value was recorded.
Figure 6. Comparison of the power consumption of resource devices generated by different algorithms. The resource device power consumption was generated by the optimal allocation scheme of each algorithm.
Figure 7. Comparison of the power consumption impact of different algorithms for edge computing. The resource device power consumption was generated by the optimal allocation scheme of each algorithm.
Figure 8. Comparison of the impacts of a DT on the task-computing time.
Figure 9. Comparison of the impacts of a DT on the edge-computing power consumption. The resource device power consumption was generated by the optimal allocation scheme of each algorithm.
Table 1. Key notations.
Notation      Description
U             set of users who need to request computing resources
E             set of edge servers
S             set of resource devices
DT_s          DT set of resource devices
s_i           set of resources of the i-th resource device, s_i ∈ DT_s
J             set of computing tasks
j_i           resources required for the i-th task, j_i ∈ J
z_k           demand of the i-th task on the k-th resource, z_k ∈ j_i
D_m           data volume of the m-th task simulated by the DT, D_m ∈ J
B_m           bandwidth used by the m-th task of the DT simulation
T̃_m^τ         simulated transmission time of the m-th task
ΔD_m          error of the data volume of the m-th task of the DT simulation
ΔB_m          error of the transmission bandwidth of the m-th task simulated by the DT
ΔT_m^τ        transmission-time error for the m-th task of the DT simulation
T̃_m^c         computation time for the m-th task of the DT simulation
C_m           computational amount of the m-th task of the DT simulation
f_m           computing frequency assigned by the resource device to the m-th task
ΔC_m          error of the computational amount provided by the DT simulation
Δf_m          error between the actual computing frequency of the m-th task and the DT-simulated computing frequency
ΔT_m^c        computation-time error for the m-th task of the DT simulation
P_m           computational energy consumption generated by the m-th task
U_im          actual usage rate of the i-th resource of the m-th resource device
Ũ_im          usage rate of the i-th resource of the m-th resource device simulated by the DT
ΔU_im         error of the usage rate of the i-th resource of the m-th resource device simulated by the DT
U_m^avg       average resource usage rate of the m-th resource device
D_m           unbalanced resource-allocation rate of the m-th resource device
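The unbalanced resource-allocation rate D_m is only named, not defined, in this excerpt. One common way to quantify such imbalance, used here purely as an assumed stand-in for the paper's exact formula, is the deviation of the per-resource usage rates U_im from the device average U_m^avg:

```python
import statistics

def imbalance_rate(usage):
    """Hypothetical imbalance measure: population standard deviation of the
    per-resource usage rates divided by their mean (coefficient of variation)."""
    avg = statistics.fmean(usage)
    return statistics.pstdev(usage) / avg if avg else 0.0

# A device using all its resources evenly scores 0; skewed usage scores higher.
assert imbalance_rate([0.5, 0.5, 0.5]) == 0.0
assert imbalance_rate([0.9, 0.1, 0.5]) > imbalance_rate([0.6, 0.4, 0.5])
```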
Table 2. Average optimization objective value after multiple iterations of different algorithms.
Algorithm    Round 20    Round 30    Round 40    Round 50    Round 60
PWOA         5917.58     5768.95     5554.72     5453.48     5284.10
PSO          5540.77     5365.33     5174.71     5050.52     4918.83
GWO          6778.41     6657.15     6435.48     6283.25     6111.39
IPWOA        5463.13     5266.74     5080.25     4982.79     4846.93
IWOA         5575.54     5384.47     5206.87     5109.37     4957.43
WOA          6102.32     5978.89     5757.57     5674.87     5542.64
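A quick check of Table 2 (values transcribed below) confirms that IPWOA attains the lowest average objective value and the lowest value in every listed round:

```python
# Per-round average objective values transcribed from Table 2.
table2 = {
    "PWOA":  [5917.58, 5768.95, 5554.72, 5453.48, 5284.10],
    "PSO":   [5540.77, 5365.33, 5174.71, 5050.52, 4918.83],
    "GWO":   [6778.41, 6657.15, 6435.48, 6283.25, 6111.39],
    "IPWOA": [5463.13, 5266.74, 5080.25, 4982.79, 4846.93],
    "IWOA":  [5575.54, 5384.47, 5206.87, 5109.37, 4957.43],
    "WOA":   [6102.32, 5978.89, 5757.57, 5674.87, 5542.64],
}
means = {alg: sum(v) / len(v) for alg, v in table2.items()}
best = min(means, key=means.get)            # lowest average objective wins
assert best == "IPWOA"
# IPWOA is also the minimum in every individual round:
assert all(min(col) == table2["IPWOA"][i]
           for i, col in enumerate(zip(*table2.values())))
```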