Article

Service Migration Strategy Based on Multi-Attribute MDP in Mobile Edge Computing

1 School of Information Science & Electrical Engineering, Shandong Jiaotong University, Jinan 250357, China
2 School of Control Science & Engineering, Shandong University, Jinan 250000, China
* Author to whom correspondence should be addressed.
Electronics 2022, 11(24), 4070; https://doi.org/10.3390/electronics11244070
Submission received: 2 November 2022 / Revised: 29 November 2022 / Accepted: 5 December 2022 / Published: 7 December 2022
(This article belongs to the Section Computer Science & Engineering)

Abstract

To address service interruptions caused by user movement and the limited coverage of edge nodes, a service migration algorithm based on a multi-attribute Markov decision process (MDP) is proposed for mobile edge computing. By migrating services, the distance between the user and the service is always kept within a small range. To prevent frequent migrations from degrading service quality, the return function of the model is defined by jointly considering the quality of service, the resource demand of the service, the migration cost, and the user's movement income at each node. This not only remedies the deficiency of migration schemes based solely on distance, but also reduces the number of migrations while accounting for migration cost and resource availability. Moreover, a single fixed migration target server is no longer assumed: a candidate server set is constructed from the user's motion trajectory, and the Q-learning algorithm is used to solve the resulting decision problem. Simulation results show that the proposed algorithm reduces the number of migrations and maintains a balance between the number of migrations and the migration cost.

1. Introduction

In recent years, various emerging technologies have made portable devices increasingly intelligent, such as augmented reality (AR) routing and navigation apps that run on mobile phones and autonomous driving apps that run on in-car devices. These applications cannot run locally because of the limited battery capacity and computing power of portable devices. At the same time, uploading tasks to a data center cloud is no longer a viable solution, as the communication latency can exceed the application's quality of service (QoS) requirements. Mobile edge computing (MEC) [1] is a promising way to solve this problem. In the MEC framework, servers are placed at base stations and other access network devices to provide users with storage and computing resources over a shorter transmission distance. Edge computing technology [2,3] enables users to offload computing tasks to nearby MEC servers and receive results with a smaller delay.
Despite the proliferation of MEC-based research in recent years, many problems remain to be solved [4], among which the most important is mobility. When a user moves across base stations, the distance between the user and the MEC server hosting the user's service may exceed a threshold, which is likely to lead to a sharp decline in QoS and even interruption of the ongoing service. How to ensure the continuity and quality of service, namely mobility management, is therefore an urgent problem [5]. In a traditional mobile cellular network, when a user moves within a certain area, the continuity and quality of service can be guaranteed by cell handover. In a mobile edge computing environment, service continuity can be ensured by migrating services. The service migration problem can be divided into two sub-problems: when to migrate and where to migrate. User mobility is a key factor in migration, yet most existing service migration work only takes the distance between the user and the service as the state, without considering the mobility of the user or the state of the nodes.
In this paper, we focus on the service migration problem in which users move among multiple edge nodes and propose a service migration strategy algorithm (SMSMA) based on a multi-attribute MDP to make migration decisions. The main contributions of this paper are as follows:
(1)
The restriction to a single target edge server is removed. A list of candidate target nodes is constructed from the motion parameters and location information of the user in each time slot, and migration decisions are made according to the profit value function.
(2)
It improves on service migration schemes based solely on hop distance. Building on the traditional one-dimensional Markov decision process (MDP) model, it introduces the service's resource demand, the server's remaining resources, the migration cost, and the movement income, and uses the Q-learning reinforcement learning algorithm to solve the resulting problem.

2. Related Theory and Methods

The key challenge of service migration is user mobility, especially unknown movement trajectories, which makes it very difficult to decide when and where to migrate services. At present, most migration decisions are based on Markov decision processes, but as users' requirements on service quality grow, traditional schemes cannot handle more complex and intelligent services. To ensure the reliability of service migration, Zhao et al. [6] proposed a fault-tolerant method for dynamic redundant path selection in service migration; by using a sliding-window-based model and identifying a set of service migration paths to evaluate the time-varying failure rate of edge servers, the reliability of service migration was effectively improved. To reduce the impact of service migration on edge intelligence, Wang et al. [7] proposed a user-centered dynamic DNN inference service migration scheme with a flexible multi-exit mechanism, aiming to maximize the overall user utility across various service downtimes; however, the scheme relies on neural-network-based migration judgments, which places a considerable burden on mobile devices. Xu et al. [8] applied three-way decision making to service migration: by dividing the area into regions, different operations are performed for users in different regions and different schemes are adopted to make migration decisions. This can increase the stability of migration to a certain extent, but the algorithm complexity is high and it does not fundamentally solve the problem of low QoS caused by service migration. Similarly, Chen et al. [9] and Rui et al. [10] proposed multi-user service migration strategies based on deep reinforcement learning, thereby enabling fast decision making. Yang et al. [11] took the location information of a mobile user in k consecutive time slots into account for the service migration decision and designed an online algorithm based on DQN to solve the service migration problem. Although an accurate migration location can be obtained without any future information, the above schemes do not consider two dominant factors of edge nodes in the migration decision: (1) the network state and (2) the server state. Researchers from Rutgers University [12] designed SEGUE, a proactive QoS-aware edge cloud service migration decision system that combines service response time with an MDP model. The system monitors and collects the response time of each server in the network; a QoS prediction module detects performance conflicts and makes migration decisions, the income function of the MDP measures the cumulative QoS improvement, and the node with the highest cumulative QoS return is selected as the migration target. QoS monitoring and service migration can improve the reliability of the service to a certain extent, but the accuracy and timeliness of the monitoring data lack an effective guarantee mechanism, and the assumption that response time fully reflects the network state still needs to be verified. In addition, to answer whether, when, and where to migrate, Zhao et al. [13] proposed an edge computing migration strategy based on multi-attribute decision making and verified its effectiveness by simulation.
Guo et al. [14] propose DSMMP, a dynamic service migration strategy based on a multi-parameter MDP model, which takes the impact of communication delay and network bandwidth into account and no longer uses a single migration target server, instead building a set of candidate edge nodes from the user's movement trajectory. The Bellman equation is then used to make migration decisions. However, this scheme ignores the impact of the migration cost, so it is easy to incur a large migration cost while blindly reducing the number of migrations.
Service deployment is another important issue in edge computing. In an edge computing environment, service providers can deploy their application instances on edge servers at the edge of the network to serve their users with lower latency. However, most current service deployments do not consider the reliability and robustness of the deployment process. Li et al. [15] therefore formulated the robustness-oriented edge application deployment problem as a constrained optimization problem and put forward an integer-programming-based method to solve it exactly. In addition, when an edge server has to serve too many application users at the same time, the perceived quality of service can be severely affected. Zhao et al. [16] therefore formulated the multi-edge application deployment problem in the MEC environment, aiming to maximize the overall service quality of application users with minimal deployment cost while taking the shareability of applications and communication interference into account; they proved that multi-edge application deployment is NP-hard and proposed a heuristic method to solve it. After deployment, edge servers are prone to faults caused by hardware failures, software exceptions, or network attacks. To ensure the normal operation of edge services, and considering user coverage and service reliability, Zhao et al. [17] proposed an optimization method based on integer programming to find the optimal solution of small-scale application deployment problems.
In addition, interruptions during service migration seriously affect users' quality of service, and most current research on seamless migration is based on containers. Meng et al. [18] proposed and designed a container federation file system based on differentiated hard disks and dynamic loading strategies to solve the problem of overly long migration time caused by the bundled transmission of the file system and the container image during container-based service migration; this effectively reduces container-based service migration time and shortens both the service downtime and the period of degraded quality of service. Yin et al. [19] proposed a mobility-aware service migration mechanism that takes the minimum service delay as the optimization goal and selects the destination node according to the migration cost and the movement direction of the device to which the service belongs, effectively reducing the waiting delay and migration delay during the service and optimizing the service quality of edge nodes.
The above survey shows that most current service migration schemes ignore the influence of the user's movement path and the server state. Therefore, this paper proposes an edge service migration strategy based on user mobility: when the destination node is unknown, factors such as the user's movement trajectory and the state of each node are comprehensively considered to make migration decisions.

3. Multi-Attribute Migration Strategy Design

3.1. Destination Node List Selection

Compared with schemes in which the migration target node is known in advance, this paper considers selecting the most suitable destination node from multiple possible candidates when the node to be migrated to is unknown. The design of the network model is shown in Figure 1: the system connects each base station and pile node to an edge server, and the signal coverage of each base station and pile node is the service coverage of the corresponding edge server. As shown in Figure 1, the network is divided into two levels and consists of three types of nodes: user devices, edge data centers, and cloud data centers.
In addition, the selection rules for candidate nodes are shown in Figure 2. In a two-dimensional scenario, if the user does not perform service migration, the current serving node is the only edge node. As shown in the figure, when a state detection time slot begins, $ES_4$ is running the user's service computation and detects that the user is leaving its service range at a certain speed. If service migration is decided at this time, the system constructs the candidate list as follows: it first obtains the speed and position coordinates of the current time slot and then the position information of the previous few time slots, so as to predict the list of target nodes the user device may migrate to, as sketched in the example below. The candidate node list shown in the figure is $ESList = \{ES_3, ES_4, ES_5, ES_6\}$.
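A minimal sketch (in Python, the language used for the paper's simulations) of this candidate-list construction: the user's current position and speed are linearly extrapolated over a few future time slots, and every edge server whose coverage circle the predicted path enters is added to the list. The function names, the linear-motion assumption, the prediction horizon, and the example layout are illustrative assumptions, not taken from the paper.

```python
import math

def predict_positions(x, y, vx, vy, slot_len, horizon):
    """Linear extrapolation of the user position over the next `horizon` time slots."""
    return [(x + vx * k * slot_len, y + vy * k * slot_len) for k in range(horizon + 1)]

def build_candidate_list(user_pos, user_vel, slot_len, servers, radius, horizon=3):
    """Return the edge servers whose coverage may contain the user within `horizon` slots."""
    candidates = []
    for px, py in predict_positions(*user_pos, *user_vel, slot_len, horizon):
        for name, (sx, sy) in servers.items():
            # keep a server if the predicted position falls inside its coverage circle
            if math.hypot(px - sx, py - sy) <= radius and name not in candidates:
                candidates.append(name)
    return candidates

# Toy example: four servers on a line, user at (30, 0) moving right at 15 m per slot.
servers = {"ES3": (0, 0), "ES4": (28, 0), "ES5": (56, 0), "ES6": (84, 0)}
print(build_candidate_list((30, 0), (15, 0), 2, servers, radius=15))
# -> ['ES4', 'ES5', 'ES6'] for this particular layout
```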

3.2. Multi-Attribute MDP Design

Based on the candidate node list constructed in the previous section and the modeling of the network environment and user movement, this section models the Markov decision process by defining its state function, action function, and return function. It should be noted that most existing return functions focus only on the quality of service and the migration cost of the edge server, whereas the return function constructed in this paper also incorporates the processing capacity of the edge server and the movement income of the user. The processing capacity of the edge node determines whether the remaining resources of the destination node are sufficient to maintain the service. By introducing the movement income, the user's residence time in each node can be calculated from the predicted movement trajectory, so that destination nodes with a short residence time contribute as little as possible to the total system return. As shown in Figure 3, this paper divides the time axis into decision time slots of equal length T. One time slot is taken as a sampling interval; ES state detection is carried out at the beginning of each time slot, and a service migration decision is made.
(1) State function: From the perspective of delay reduction, the MEC server closest to the user is the best choice for migration. In practice, however, distance is not the only measure of service quality; the resource occupancy and network delay of the MEC server are also crucial. If the MEC server closest to the user is not in a state to provide better service, the migration is undoubtedly a failure. Therefore, when making service migration decisions, besides the distance between the user and the nodes, the computing resource requirement of the service, the remaining available computing resources of the target server, and the network latency should also be considered. In this paper, the computing resource demand of the service is denoted $C_a$, the remaining computing resources of the target server are denoted $C_{b_i}$ (where $b_i$ is the i-th node in the target node list), and the network delay of the target node is denoted $T_{b_i}$.
In summary, this paper combines the distance between the user and the MEC server, the computing resource requirement of the service, and the remaining computing resources of the target server as the state at time t, i.e.,

$$S_t = \{d_t, C_a, C_{b_i}\}$$
(2) Action function: The system action is the migration decision made according to the current state, i.e., whether to perform service migration and to which node. The action set of the system is defined as $A = \{0, 1, 2, \ldots, i\}$, where action 0 means that no service migration is performed and the user keeps communicating with the original MEC server, while action $i \neq 0$ means that the service is migrated to the i-th node in the target list. According to the time flow model in Figure 3, the action taken after the state is observed at time t is denoted $a_t$, with $a_t \in A$.
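A minimal, assumed Python representation of the state $S_t = \{d_t, C_a, C_{b_i}\}$ and the action set $A$ defined above; the field and function names are illustrative, not from the paper.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class State:
    d_t: float                   # distance between the user and its serving node
    c_a: float                   # computing resource demand of the service, C_a
    c_b: Tuple[float, ...]       # remaining resources of each candidate node, C_bi

def action_space(candidate_list: List[str]) -> range:
    """Action 0 keeps the service where it is; action i migrates to the i-th candidate."""
    return range(len(candidate_list) + 1)

s_t = State(d_t=12.0, c_a=3.0, c_b=(6.0, 2.0, 9.0))
print(list(action_space(["ES3", "ES5", "ES6"])))   # [0, 1, 2, 3]
```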
(3) Probability function: According to the definition of the Markov decision process, the probability function expresses the probability of moving from one state to the next. Since transitions between states are driven by actions, the transition probability at time t is shown in Formula (2):

$$P_{s_t \to s_{t+1}}^{a_t} = P\left(S_{t+1} = s_{t+1} \mid S_t = s_t, a_t\right)$$

where $S_t$ and $S_{t+1}$ respectively represent the state at time t and t+1, and $a_t$ represents the action at time t. Therefore, $P_{s_t \to s_{t+1}}^{a_t}$ represents the probability of moving from state $s_t$ to $s_{t+1}$ at time t as a result of action $a_t$.
(4) Return function.
In this paper, the return function combines the quality of service, the resource demand, the migration cost, and the movement income to calculate the profit value of each action. The quality of service reflects the communication delay between the edge node and the user; the resource demand is introduced to judge whether the remaining resources of the destination node can keep the user's service running, and thus whether the migration succeeds; the migration overhead discourages continuous migration and effectively reduces service downtime; finally, the movement income maximizes the value of each migration, avoiding some unnecessary migrations and further reducing the migration frequency.
(1) Quality of service: The system takes the communication delay between the user and the service as the evaluation index of service quality and expresses this delay through the communication distance, i.e., the distance between the user and the edge node serving it. The greater the communication distance, the farther the user is from the service, the lower the service quality, and the worse the service experience. $q(s_t, a_t)$ denotes the quality of service provided by the corresponding node after an action is taken in state $s_t$. For ease of expression, it is assumed that the communication delay is proportional to the communication distance, and the quality of service at time t is expressed as a decreasing function of the communication distance, as shown in Formula (3):
$$q(s_t, a_t) = \begin{cases} q_{max} - \alpha d_t, & d_t \le R \\ q_{max} - \tau d_t, & d_t > R \end{cases}$$
where $q_{max}$ represents the highest service quality a node can provide to the user, $\alpha$ is the distance discount factor when the user is within the range of the current communication node, and $\tau$ is the distance discount factor when the user is outside the range of the current communication node.
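The piecewise form of Formula (3) can be written as a short Python helper. The default values mirror Table 2 (q_max = 48, α = 2, τ = 3, R = 15 m), but the function name and the example calls are assumptions of this sketch.

```python
def service_quality(d_t, q_max=48.0, alpha=2.0, tau=3.0, R=15.0):
    """Quality of service q(s_t, a_t) as a function of the communication distance d_t."""
    if d_t <= R:
        return q_max - alpha * d_t      # user inside the coverage of its serving node
    return q_max - tau * d_t            # user outside the coverage: steeper decay

print(service_quality(10))   # 28.0: still reasonably served inside the range
print(service_quality(20))   # -12.0: quality collapses once the user leaves the range
```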
(2) Processing capacity: Existing research usually assumes that the target server can support the service to be migrated, without considering resource shortages on the target server. When the load of the target MEC node is high, its remaining resources may not be enough to keep the service running; if the migration decision nevertheless concludes that migration should be performed, such a migration can be considered a failed one. Therefore, this paper takes both the resource requirement of the service and the load of the target server into account when making migration decisions. This is achieved by further modifying the return function: when $C_d - C_a < 0$, i.e., the remaining resources $C_d$ of the destination node are less than the resource demand $C_a$ of the service, and the service migration is still performed (a ≠ 0), a relatively large penalty value M (M is a large positive number) is applied to the return of this action. In addition, this paper defines the processing capacity of an edge node by its remaining CPU and memory resources, and the resource requirement of the migrated service by its CPU and memory usage; the processing capability is then determined by comparing the resource requirement of the service with the remaining resources of the target server.
(3) Migration overhead: When the decision to migrate is made, the migration generates a certain overhead, which generally consists of two parts. The first part is the cost of transmitting data between the original node and the target node; the second is the cost of starting the service on the destination node after the migration. The first part is related to the transmission distance, while the second can be treated as a constant. The migration overhead is therefore defined as shown in Formula (5):
$$c(s_t, a_t) = \begin{cases} \delta + \beta d_t, & a_t \neq 0 \\ 0, & a_t = 0 \end{cases}$$
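A corresponding sketch of Formula (5): a migration pays a constant start-up term δ plus a transmission term proportional to the distance, while staying put costs nothing. The values δ = 15 and β = 3 follow Table 2; the function name is an assumption.

```python
def migration_cost(d_t, a_t, delta=15.0, beta=3.0):
    """Migration overhead c(s_t, a_t): zero when no migration is performed."""
    if a_t == 0:
        return 0.0
    return delta + beta * d_t

print(migration_cost(d_t=10.0, a_t=2))   # 45.0: constant start-up cost plus transmission
print(migration_cost(d_t=10.0, a_t=0))   # 0.0: no migration, no overhead
```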
(4) Movement gain: As shown in Figure 4 and Figure 5, after the user's movement trajectory has been predicted, the distance the user travels within a node's service range can be obtained from the angle at which the user crosses that range, and the user's movement income can be defined from the corresponding residence time. The movement gain favors nodes in which the user stays for a long time, so that services tend to migrate to these nodes; this reduces the number of migrations to a certain extent and avoids some unnecessary migrations. The following cases are distinguished:
① When the user leaves the service range of the original node and no migration is performed, the movement income of the user on the original node is 0.
② When the user is within the service range of the destination node and chooses to migrate, or when the user is within the service range of its current communication node and chooses not to migrate, the user's movement trajectory is shown in Figure 4.
In this case, the distance the user moves within the node's service range is $\sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$, and the movement gain is
$$\frac{\sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}}{v}$$
③ When the user is not within the service range of the destination node and chooses to migrate, the movement trajectory of the user is shown in Figure 5.
In this case, the distance the user moves within the node's service range is $len = 2R\cos\theta$, where $\theta$ denotes the angle between the user's movement direction and the line P, and $R$ is the service coverage radius of the node. The movement gain is
$$\frac{2R\cos\theta}{v}$$
Combining the above cases, the movement gain is
$$h(s_t, a_t) = \begin{cases} 0, & \text{case ①} \\ \dfrac{2R\cos\theta}{v}, & \text{case ③} \\ \dfrac{\sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}}{v}, & \text{case ②} \end{cases}$$
Finally, combining the reward and penalty mechanisms of service quality, processing capacity, migration overhead, and movement gain, the modified payoff function is expressed as:
$$r(s_t, a_t) = \begin{cases} q(s_t, a_t) + h(s_t, a_t), & a_t = 0 \\ q(s_t, a_t) - \omega c(s_t, a_t) + \rho h(s_t, a_t), & a_t \neq 0 \text{ and } C_a \le C_d \\ q(s_t, a_t) - \omega c(s_t, a_t) + \rho h(s_t, a_t) - M, & a_t \neq 0 \text{ and } C_a > C_d \end{cases}$$
where $\omega$ determines the weight of the migration overhead (the larger $\omega$, the greater the share of the migration overhead in the return), and $\rho$ denotes the weight of the movement gain (the larger $\rho$, the greater the share of the movement gain).
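A sketch of this combined return function in Python. The penalty branch follows the interpretation above, namely that migrating to a node whose remaining resources c_d cannot cover the demand c_a counts as a failed migration; the helper names, the weights ω and ρ, and the penalty M are assumptions of this illustration.

```python
import math

def movement_gain(residence_distance, speed):
    """h(s_t, a_t): predicted time the user stays inside the chosen node's coverage."""
    return residence_distance / speed

def total_return(q, c, h, a_t, c_a, c_d, omega=1.0, rho=1.0, M=1000.0):
    """r(s_t, a_t) combining quality, overhead, movement gain, and the failure penalty."""
    if a_t == 0:
        return q + h                       # no migration: quality plus movement gain
    r = q - omega * c + rho * h            # migration: pay the weighted overhead
    if c_a > c_d:                          # destination cannot host the service
        r -= M
    return r

# Example: migrating (a_t = 1) when the user is 12 m from the source node, using
# quality and overhead values consistent with Formulas (3) and (5) and Table 2.
q = 48 - 2 * 12                            # q(s_t, a_t) at d_t = 12 (inside the range)
c = 15 + 3 * 12                            # c(s_t, a_t) at d_t = 12
h = movement_gain(2 * 15 * math.cos(0.3), speed=1.0)
print(round(total_return(q, c, h, a_t=1, c_a=3.0, c_d=6.0), 2))
```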

4. Multi-Attribute Service Migration Decisions

4.1. Solving for the Optimal Action Value

Based on the multi-attribute MDP model defined in Section 3.2, the service migration strategy designed in this section is solved with the Q-learning algorithm, a value-iteration algorithm for reinforcement learning problems. Its main idea is to build a Q-table over states and actions and to learn the Q values that yield the optimal policy. The Q value, $Q(s, a)$, is the expected long-term return obtained by taking action a in state s within a decision time slot; it is a function of both the state and the action. The update rule of the Q function is shown in Formula (10):
$$Q(s, a) = (1 - \vartheta)\, Q(s, a) + \vartheta \left[ r + \gamma \max_{a'} Q(s', a') \right]$$
In Formula (10), s represents the current system state, s′ the next state, and a the current action. r is the immediate reward obtained by taking action a in state s. $\gamma$ is the discount factor, taking a value between 0 and 1, which indicates the importance given to future returns: the larger $\gamma$, the more future returns are valued. $\vartheta$ is the learning rate, which determines how strongly each update is influenced by the previous training result.
The training results of the Q-learning algorithm are usually represented by a Q-table. As shown in Table 1, the rows of the Q-table represent the states, the columns represent the actions, and each element Q ( s , a ) in the table represents the return of the system taking action a under the state s.
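A minimal tabular Q-learning sketch of the update in Formula (10), with the Q-table stored as a dictionary keyed by (state, action) in the spirit of Table 1. The state encoding, the hyper-parameter values, and the example transition are assumptions of this illustration.

```python
from collections import defaultdict

Q = defaultdict(float)                    # Q(s, a); unseen pairs default to 0.0

def q_update(s, a, r, s_next, actions, lr=0.1, gamma=0.9):
    """Q(s,a) <- (1 - lr) * Q(s,a) + lr * (r + gamma * max_a' Q(s',a'))."""
    best_next = max(Q[(s_next, a_next)] for a_next in actions)
    Q[(s, a)] = (1 - lr) * Q[(s, a)] + lr * (r + gamma * best_next)

# One fictitious transition: in state s=3 the agent migrates (a=1), receives an
# immediate return of 1.7, and lands in state s'=0 with four available actions.
q_update(s=3, a=1, r=1.7, s_next=0, actions=range(4))
print(Q[(3, 1)])    # 0.17 after the first update
```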
In the service migration decision problem of this paper, according to the MDP model defined in Section 3.2, the size of the action set is List + 1, where List denotes the length of the candidate node list. The Q value in the column corresponding to action $a_n$ of the Q-table therefore denotes, for state $s_n$, the value of migrating to the n-th node of the candidate list. The result of applying the Q-learning algorithm to the service migration decision problem is a Q-table of the form shown in Table 1. At each decision moment, the state s is observed and, among the available actions, the action with the highest value in the row of the Q-table corresponding to state s is selected.

4.2. Migration Strategy Solving

According to the user model and MDP model defined in Section 3, and to ensure a reasonable service migration policy for edge nodes in a given state, this paper solves the multi-attribute service migration decision problem in mobile edge computing as shown in Figure 6. The procedure is as follows:
(1)
Initialization. The state detection interval is t, the user is within the service range of the source node $ES_0$, and the state value is s = 0. Threshold denotes the minimum state threshold for service migration and is initialized to Threshold = N, where N is the maximum state value between the user and the source node; the Q-table is also initialized.
(2)
At the beginning of each new state detection time slot, the edge nodes and the current state value are detected. If s = 0, no service migration is carried out.
(3)
If s > 0, the user's current position ($x_0$, $y_0$) and speed $v_0$ are obtained first, and the user's movement trajectory across the edge nodes is predicted.
(4)
Obtain the state value with respect to each edge node, calculate the revenue function, and record the total return for each node in the Q-table.
(5)
Select the action with the maximum value from the Q-table according to the ε-greedy strategy; the state then becomes s′. Observe the new state and calculate the new immediate return r according to the return function.
(6)
Update the Q-table and the state, and repeat from step (4) until the function converges and the optimal migration decision is obtained.
(7)
End.
When selecting the optimal action at each decision moment, one issue to be considered is how to balance exploiting known information against exploring other decisions to gather more information. A reasonable balance between exploitation and exploration has a very important impact on the learning ability of the system. On the one hand, using only available information to select actions may leave certain actions in certain states unexplored, so the selected actions may not be optimal. On the other hand, too much random exploration makes training take too long and hard to converge. In this algorithm, the ε-greedy strategy is used to solve this problem: at each step, a random action is selected with probability ε, while with the remaining 1 − ε probability the action with the maximum return is selected according to the Q-table. In the concrete implementation, ε decays as training proceeds, i.e., the randomness gradually decreases, the proportion of exploration shrinks, and the proportion of exploitation grows, as sketched below.
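A sketch of ε-greedy selection with decaying exploration. The Q-table contents and the multiplicative decay schedule are assumptions of this illustration, not values from the paper.

```python
import random

def epsilon_greedy(Q, s, actions, epsilon):
    """Pick a random action with probability epsilon, otherwise the greedy one."""
    if random.random() < epsilon:
        return random.choice(list(actions))
    return max(actions, key=lambda a: Q.get((s, a), 0.0))

def decay(epsilon, rate=0.995, floor=0.01):
    """Multiplicative decay: exploration shrinks over time but never vanishes entirely."""
    return max(floor, epsilon * rate)

epsilon = 1.0
Q = {(3, 0): 0.2, (3, 1): 0.5, (3, 2): -0.1}
for _ in range(5):
    a = epsilon_greedy(Q, s=3, actions=range(3), epsilon=epsilon)
    epsilon = decay(epsilon)            # later decisions rely more on the Q-table
print(a, round(epsilon, 3))
```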

5. Simulation and Results Evaluation

5.1. Simulation Parameter Setting

In order to evaluate the performance of the service migration decision algorithm proposed in this paper, it is necessary to conduct simulation tests on the algorithm. This paper uses Python to conduct simulation experiments.
According to the network model described in Figure 1 and the placement of base stations and their regional coverage, the service range of each edge node is 15 m, and the overlapping coverage of adjacent edge nodes is 2 m. In addition, according to the service range of each node and the quality-of-service function defined in Formula (3), the maximum quality of service is set to 48 in this paper; in this way, users obtain only a small quality of service at the edge of a node's service range, which is closer to the real situation. Table 2 shows the detailed settings of the other parameters in the simulation experiment, collected below for reference.
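For reference, the Table 2 settings can be collected into a single configuration dictionary; the key names are illustrative, the values are those reported in the table.

```python
SIMULATION_CONFIG = {
    "edge_node_grid": (8, 4),            # number of edge nodes
    "num_users": 1,
    "user_speed_mps": 1.0,
    "time_slot_s": 2.0,
    "coverage_radius_m": 15.0,           # maximum communication distance of a node
    "q_max": 48.0,                       # maximum quality of service of a node
    "alpha": 2.0,                        # quality factor inside the service range
    "tau": 3.0,                          # quality factor outside the service range
    "beta": 3.0,                         # distance factor in the migration overhead
    "delta": 15.0,                       # constant term in the migration overhead
    "rho": 1.0,                          # movement-gain weight in the return function
    "omega": 1.0,                        # migration-overhead weight in the return function
    "server_free_resources": (0, 10),    # remaining resources drawn randomly from [0, 10]
}
```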
The migration strategy algorithm proposed in this paper first considers the user’s mobility and predicts the user’s movement trajectory according to the user’s location information, movement direction and speed information. Considering the problem of too-large state space, the algorithm divides the movement trajectory prediction into a separate module and executes it separately in each time slot. Secondly, the computing resource requirements of the migration service and the influence of the remaining available resources of the target node on the migration decision algorithm are considered. Then, we also consider the effect of migration costs on migration decision algorithms. Finally, the user’s stay time in each node is taken into account in the return function, so that more accurate migration decisions can be made.
In addition to the migration decision algorithm proposed in this paper, three other migration decision algorithms are compared, as follows.
Always Migration Solution: once a user leaves the service range of the current communication node, a migration service is executed immediately to migrate the service to the edge server closest to the user in the current time slot.
MDP scheme: based on the Markovian stochastic process, only the movement of users is considered, and the computing resource situation and movement gain of the service and target server are not considered.
DSMMP scheme: the impact of computational resources and movement gains are considered, in addition to the selection of target nodes, but the impact of migration overhead on migration decisions is not considered.

5.2. Analysis of Results

The migration decision algorithm calculates the return size of each action that can be taken at each decision moment so as to select the action with the greatest return and take this action. Thus, the total return for each algorithm reflects the algorithm’s ability to maximize returns, namely, its ability to make better decisions. Therefore, this paper considers the impact of migration cost, service resource demand size, and movement income on the total return of the system. The total return represents the total revenue from the completion of all migration processes. The greater the total return, the higher the overall service quality in the whole migration process, that is, the better the service experience of users in the whole migration process.
In Figure 7 and Figure 8, we first consider the impact of migration costs and movement income on migration times.
(1)
Impact of migration overhead on migration strategy
By changing the migration overhead factor ω in the payoff function, and thus the size of the migration overhead, we can observe the communication distance d at which the user performs the first migration under the resulting migration policy. The horizontal coordinate indicates the weight ω of the migration overhead, which determines how important the migration overhead is when executing the migration strategy: the larger ω, the greater the importance of the migration overhead in the migration policy. The vertical coordinate indicates the distance between the user and the original communication node when the migration policy is first executed. The figure shows that when ω is small, little importance is given to the migration overhead; below a certain value, the migration overhead is negligible compared with the benefits of migration, and only the distance between the user and the communication node and the user's movement gain at each node are considered, so the user tends to perform the service migration as soon as it leaves the service range of the original communication node. When ω is larger, the migration overhead becomes more important and a comprehensive trade-off is needed when making the migration strategy: a larger ω implies that the migration overhead accounts for a larger share of the payoff function, so users tend to make migration decisions only when they are far from the original communication node, which tends to consume more communication cost.
(2)
Impact of migration overhead on total system return
Figure 8 shows the relationship between the migration overhead factor ω and the total system return, where the horizontal coordinate is the migration overhead coefficient in the return function, the left vertical coordinate is the total system return, and the right vertical coordinate is the corresponding total migration overhead. For the total return, when ω is small, the total system return of the MDP scheme is almost the same as that of the always migration scheme. This is because when ω is small the migration overhead is only a small part of the total return and only the quality of service is considered, so the MDP scheme degenerates into the always migration scheme. The SMSMA and DSMMP schemes are always higher than the other two schemes because they consider the processing capacity of the edge server and the movement gain; SMSMA and DSMMP rarely suffer migration failures, while the other two schemes inevitably do, which lowers their total system payoff. In addition, although the total system return of the DSMMP scheme is higher than those of the always migration and MDP schemes, DSMMP does not take the migration overhead into account when making migration decisions, so when ω is small its total return is comparable to that of the proposed algorithm, but as ω increases, the DSMMP scheme has to spend much more migration overhead than the SMSMA scheme when executing the migration strategy, and its total system payoff shows a clearly decreasing trend for large ω. Furthermore, as ω increases, the communication-distance threshold for performing migration also increases, the user stays outside the range of the communication node for a longer time, the quality of service drops significantly, the number of migrations decreases, and the returns of the SMSMA, MDP, and always migration schemes all decrease to some extent.
For the total migration overhead, the always migration scheme incurs the most, because it performs a migration whenever the user leaves the communication range of the original node without checking whether the benefit of migrating the service to the destination node is acceptable. The DSMMP scheme performs migration at a relatively long distance from the destination node in order to meet the requirements of low latency and high quality of service, which results in a higher migration overhead and is detrimental for large service migrations. When ω is small, the migration overhead is negligible and the MDP scheme degenerates into the always migration scheme; afterwards, as the share of the migration overhead grows, the number of migrations decreases, so the total migration overhead remains within a certain range. In contrast, the scheme in this paper explicitly considers the migration overhead, so compared with the DSMMP scheme its total migration overhead stays within a certain range and remains stable even when ω is large. Compared with the MDP scheme, the effect of the movement gain reduces the number of migrations to a certain extent, so the migration overhead spent by the proposed SMSMA scheme is always kept at a lower level.
(3)
Impact of service resource requirement size on total system return
The SMSMA scheme proposed in this paper considers not only the impact of the communication distance on migration decisions, but also the size of the service resource demand. Different service resource requirements affect the migration decision scheme differently, and hence the total return of the system.
As shown in Figure 9, the total return of all schemes is positive when the service resource demand is small. The MDP scheme, the DSMMP scheme, and the SMSMA scheme proposed in this paper all give reasonable migration decisions with higher return values, whereas the return of the always migration scheme is much lower. When the demand for service resources becomes large, the MDP scheme and the always migration scheme, which do not take the resource situation of the MEC server into account, are likely to migrate services to MEC servers whose remaining resources cannot support them, resulting in migration failures and large failure penalties, which causes their total return to drop sharply. In contrast, the DSMMP scheme and the SMSMA scheme proposed in this paper consider the resource situation, so this situation does not occur and their total return always remains at a high level. Comparing the MDP scheme with the always migration scheme, the MDP scheme performs better, because the always migration scheme migrates frequently and suffers a very large number of migration failures, which leads to a sharp drop in the total return and even a negative total return when the service resource demand is very high. In addition, the SMSMA scheme always maintains a higher total return than the DSMMP scheme, because it additionally considers the migration overhead and reduces the number of migrations to a certain extent, so the migration overhead is always kept at a low level. Overall, because it accounts for the resource situation, the SMSMA scheme proposed in this paper outperforms the other schemes in terms of total return both when the resource requirements are small and when they are large.
(4)
Effect of movement gain on the number of migrations
Figure 10 reflects the effect of the movement gain coefficient ρ on the number of migrations, where the horizontal coordinate represents ρ and the vertical coordinate represents the number of migrations of the edge service over the entire user movement. Because the migration overhead is taken into account, the number of migrations of the proposed SMSMA algorithm and of the MDP algorithm is significantly lower than that of the DSMMP and always migration algorithms. Since the MDP and always migration algorithms do not consider the influence of the movement gain on the migration strategy, their number of migrations remains the same no matter how the coefficient ρ changes. In contrast, the SMSMA and DSMMP algorithms consider the influence of the movement gain, and when ρ is large the number of migrations of both schemes decreases to a certain extent, which reduces the migration overhead and improves the total system payoff. In addition, compared with the DSMMP algorithm, SMSMA additionally considers the migration overhead, so its number of migrations is always kept in a relatively stable state: it neither incurs a large migration overhead from too many frequent migrations, nor incurs a large communication cost between the user and the node and a large service transmission overhead from too few migrations.
(5)
Impact of movement gain on total system return
Figure 11 shows the influence of the movement gain coefficient ρ on the total return of the system, where the abscissa represents the coefficient and the ordinate represents the total system return. As can be seen from the figure, the total return of the always migration and MDP algorithms is not affected by the coefficient, since the migration decisions of these two schemes do not consider the movement gain. Within a small range, as ρ increases, the total return of the proposed SMSMA algorithm and of the DSMMP algorithm, both of which consider the movement gain, increases to a certain extent, because the reduction in the number of migrations greatly reduces the impact of the migration cost on the total return. When ρ exceeds a certain threshold, the total return of both algorithms decreases as ρ increases, because the communication cost grows to some extent as the number of migrations decreases. The total return of the proposed algorithm and of the DSMMP algorithm thus first increases and then decreases with the coefficient: when the movement gain is too small, the number of migrations is high and some unnecessary migrations occur, so the migration cost has a strong impact on the total return; when the movement gain is high, the number of migrations drops, but the service transmission part of the migration cost increases. When ρ exceeds a certain threshold, the DSMMP algorithm does not use the migration cost to balance the effect of the number of migrations, resulting in a significant reduction in its total return, whereas the proposed algorithm comprehensively considers the migration cost and the movement gain, so that the total return of the system always remains relatively stable.
(6)
The influence of the number of users on the total system return
Figure 12 shows the impact of the number of users on the total return of the system, where the abscissa represents the number of users and the ordinate represents the total system return. As can be seen from the figure, as the number of users increases, the total return of all schemes decreases to varying degrees. This is because once the number of users exceeds a certain value, the service quality an edge node can provide to each user decreases accordingly, which significantly reduces the income that the decision algorithm computes for the current node and thus increases the number of migrations. However, the algorithm proposed in this paper can still keep the total return stable with a large number of users, because its constraints on migration cost and movement gain limit the growth of the number of migrations to a certain extent and reduce the migration cost compared with the other algorithms, so the number of users has little influence on its migration decisions.
The above experimental comparison shows that the proposed scheme is consistently ahead of the other schemes and can make decisions with higher returns under the combined influence of factors such as migration cost, communication distance, and quality of service. The other schemes often migrate too frequently or too rarely, resulting in excessive migration overhead or high communication cost, whereas the proposed scheme maintains a relatively stable balance between the number of migrations and the total revenue.

6. Conclusions

To address service interruptions caused by user movement in edge networks, this paper proposes SMSMA, a service migration algorithm based on a multi-attribute MDP model. The algorithm combines the quality of service, the processing capacity of edge nodes, the migration cost, and the user's movement path to construct the revenue function of the MDP model. Q-learning, a value-function iteration algorithm, is then used to obtain the cumulative benefit and to solve for the migration state threshold with maximum benefit. Compared with other schemes, the proposed algorithm shows higher adaptability and reliability in dynamic environments and is able to maintain a high quality of service while balancing the number of migrations.
In the future, the SMSMA algorithm will be further refined and improved. On the one hand, the quality of service will be analyzed in more detail in the migration decision, for example by adding response delay and energy consumption. On the other hand, the extensibility and applicability of the algorithm to dynamic environments will be further improved, for example by applying the algorithm to a more general MANET environment.

Author Contributions

Conceptualization, P.T. and G.S.; Formal analysis, P.T.; Funding acquisition, G.S.; Methodology, P.T.; Project administration, G.S. and F.Z.; Resources, G.S.; Software, P.T.; Validation, Z.A. and J.L.; Writing—original draft, P.T.; Writing—review & editing, P.T. and G.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China (Grant No. 61375084), Shandong Natural Science Foundation, China (ZR2019MF064).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. OpenFog Consortium. IEEE Standard for Adoption of OpenFog Reference Architecture for Fog Computing; IEEE Std 1934–2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–176. [Google Scholar]
  2. Aslanpour, M.S.; Toosi, A.N.; Cicconetti, C.; Javadi, B.; Sbarski, P.; Taibi, D.; Assuncao, M.; Gill, S.S.; Gaire, R.; Dustdar, S. Serverless Edge Computing: Vision and Challenges. In Proceedings of the Australasian Symposium on Parallel and Distributed Computing (AusPDC 2021), virtual, 1–5 February 2021. [Google Scholar]
  3. Wang, S.; Zhang, X.; Zhang, Y.; Wang, L.; Yang, J.; Wang, W. A Survey on Mobile Edge Networks: Convergence of Computing, Caching and Communications. IEEE Access 2017, 5, 6757–6779. [Google Scholar] [CrossRef]
  4. Ahmed, E.; Rehmani, M.H. Mobile Edge Computing: Opportunities, solutions, and challenges. Future Gener. Comput. Syst. 2016, 70, 59–63. [Google Scholar] [CrossRef]
  5. Wang, S.; Xu, J.; Zhang, N.; Liu, Y. A Survey on Service Migration in Mobile Edge Computing. IEEE Access 2018, 6, 23511–23528. [Google Scholar] [CrossRef]
  6. Zhao, J.; Ma, Y.; Xia, Y.; Dai, M.; Chen, P.; Long, T.; Shao, S.; Li, F.; Li, Y.; Zeng, F. A Novel Fault-Tolerant Approach for Dynamic Redundant Path Selection Service Migration in Vehicular Edge Computing. Appl. Sci. 2022, 12, 9987. [Google Scholar] [CrossRef]
  7. Wang, P.; Ouyang, T.; Liao, G.; Gong, J.; Yu, S.; Chen, X. Edge intelligence in motion: Mobility-aware dynamic DNN inference service migration with downtime in mobile edge computing. J. Syst. Archit. 2022, 130, 102664. [Google Scholar] [CrossRef]
  8. Xu, Y.; Zheng, Z.; Liu, X.; Yao, A.; Li, X. Three-way decisions based service migration strategy in mobile edge computing. Inf. Sci. 2022, 609, 533–547. [Google Scholar] [CrossRef]
  9. Chen, W.; Chen, Y.; Wu, J.; Tang, Z. A multi-user service migration scheme based on deep reinforcement learning and SDN in mobile edge computing. Phys. Commun. 2021, 47, 101397. [Google Scholar] [CrossRef]
  10. Rui, L.; Wang, S.; Wang, Z.; Xiong, A.; Liu, H. A dynamic service migration strategy based on mobility prediction in edge computing. Int. J. Distrib. Sens. Netw. 2021, 17. [Google Scholar] [CrossRef]
  11. Yang, B.; Han, P.; Feng, C.; Liu, Y.; Guo, L. Service Migration with High-Order MDP in Mobile Edge Computing. In Proceedings of the 2021 13th International Conference on Communication Software and Networks (ICCSN), Chongqing, China, 4–7 June 2021. [Google Scholar]
  12. Zhang, W.; Hu, Y.; Zhang, Y.; Raychaudhuri, D. SEGUE: Quality of Service Aware Edge Cloud Service Migration. In Proceedings of the 2016 IEEE International Conference on Cloud Computing Technology and Science (CloudCom), Luxembourg, 12–15 December 2016. [Google Scholar]
  13. Zhao, D.; Yang, T.; Jin, Y.; Xu, Y. A service migration strategy based on multiple attribute decision in mobile edge computing. In Proceedings of the 2017 IEEE 17th International Conference on Communication Technology (ICCT), Chengdu, China, 27–30 October 2017. [Google Scholar]
  14. Guo, H.; Rui, L.; Gao, Z. Dynamic Service migration strategy based on Multi-parameter MDP model in Vehicle edge Networks. J. Commun. 2020, 41, 14. [Google Scholar]
  15. Li, B.; He, Q.; Cui, G.; Xia, X.; Chen, F.; Jin, H.; Yang, Y. READ: Robustness-oriented Edge Application Deployment in Edge Computing Environment. IEEE Trans. Serv. Comput. 2020, 99, 1746–1759. [Google Scholar] [CrossRef]
  16. Zhao, L.; Tan, W.; Li, B.; He, Q.; Huang, L.; Sun, Y.; Xu, L.; Yang, Y. Joint Shareability and Interference for Multiple Edge Application Deployment in Mobile-Edge Computing Environment. IEEE Internet Things J. 2022, 9, 1762–1774. [Google Scholar] [CrossRef]
  17. Zhao, L.; Li, B.; Tan, W.; Cui, G.; He, Q.; Xu, X.; Xu, L.; Yang, Y. Joint Coverage-Reliability for Budgeted Edge Application Deployment in Mobile Edge Computing Environment. IEEE Trans. Parallel Distrib. Syst. 2022, 33, 3760–3771. [Google Scholar] [CrossRef]
  18. Meng, X.; Lu, W. Container-Based Fast Service Migration Method for Mobile Edge Computing. J. Circuits Syst. Comput. 2021, 30, 2250117. [Google Scholar] [CrossRef]
  19. Yin, L.; Li, P.; Luo, J. Smart contract service migration mechanism based on container in edge computing. J. Parallel Distrib. Comput. 2021, prepublish. [Google Scholar] [CrossRef]
Figure 1. Network model.
Figure 2. User movement model.
Figure 3. Time slot model.
Figure 4. Movement gain model (user within the service range).
Figure 5. Movement gain model (user outside the service range).
Figure 6. Workflow.
Figure 7. Effect of overhead factor on communication distance.
Figure 8. Effect of overhead factor on total return versus system overhead.
Figure 9. Effect of service size on system total return.
Figure 10. Effect of movement gain on the number of migrations.
Figure 11. Effect of motion coefficient on total system return.
Figure 12. The effect of the number of users on the total return of the system.
Table 1. Example of Q-table.

Q(s, a)    a_0            a_1            …    a_n
s_0        Q(s_0, a_0)    Q(s_0, a_1)    …    Q(s_0, a_n)
s_1        Q(s_1, a_0)    Q(s_1, a_1)    …    Q(s_1, a_n)
…          …              …              …    …
s_n        Q(s_n, a_0)    Q(s_n, a_1)    …    Q(s_n, a_n)
Table 2. Simulation experiment parameter settings.

Parameter                                                          Value
Number of edge nodes                                               8 × 4
Number of users                                                    1
User speed (m·s⁻¹)                                                 1
Time slot (s)                                                      2
Maximum communication distance of edge nodes (m)                   15
Maximum quality of service of edge nodes, q_max                    48
Service quality overhead factor (within the service range), α      2
Service quality overhead factor (outside the service range), τ     3
Distance factor in the migration overhead, β                       3
Constant term in the migration overhead, δ                         15
Coefficient of the movement gain in the return function, ρ         1
Coefficient of the migration overhead in the return function, ω    1
Remaining available resources on the server                        Random values in [0, 10]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
