Article

An LSTM-Method-Based Availability Prediction for Optimized Offloading in Mobile Edges

by Chaoxiong Cui, Ming Zhao and Kelvin Wong
1 School of Computer Science and Engineering, Central South University, Changsha 410000, China
2 School of Electrical and Electronic Engineering, University of Adelaide, Adelaide, SA 5005, Australia
* Author to whom correspondence should be addressed.
Sensors 2019, 19(20), 4467; https://doi.org/10.3390/s19204467
Submission received: 6 September 2019 / Revised: 12 October 2019 / Accepted: 12 October 2019 / Published: 15 October 2019

Abstract

Mobile edge computing (MEC) can augment the computation capabilities of a vehicle terminal (VT) by offloading computational tasks from the VT to the mobile edge computing-enabled base station (MEC-BS) covering it. However, due to the mobility of the vehicle and the limited capacity of the MEC-BS, the connection between the vehicle and the MEC-BS may be intermittent. If we can predict the availability of the MEC-BS through cognitive computing, we can significantly improve performance in a mobile environment. Based on this idea, we propose an offloading optimization algorithm based on availability prediction. We examine the admission control decision of the MEC-BS and the mobility problem, and improve the accuracy of availability prediction based on Empirical Mode Decomposition (EMD) and the LSTM method in deep learning. Firstly, we jointly consider the availability of the MEC-BS, the completion time, and the energy consumption to minimize the overall cost. Then, we use a game-theoretic method to obtain the optimal offloading decision. Finally, the experimental results show that the algorithm can save energy and shorten the completion time more effectively than other existing algorithms in the mobile environment.

1. Introduction

Mobile and Internet of Things (IoT) devices have become an integral part of our daily lives [1,2]. With the emergence of new services, computation-demanding applications touch every aspect of human activity. A typical mobile scenario is the Internet of Vehicles, which involves a series of highly computationally intensive and complex services such as augmented reality, parking positioning, and handling emergency situations on congested roads [3]. A large number of devices generate more and more data, and these services require ever higher computing power. However, the VT has limited computing capacity due to the limitation of its physical size, which poses a major challenge to these computationally resource-intensive services [4,5].
In order to handle a large number of complex and intensive tasks, cloud computing has been proposed for storage and data processing [6,7]. While cloud computing provides a centralized pool of configurable and powerful computing resources, its network latency is often unpredictable. The rapidly growing volume of distributed data further makes it impractical to transfer all data to a remote cloud [8].
MEC has recently emerged as a new computing paradigm that enables field data to be processed on network edges, mobile devices, and connected devices [9,10]. It integrates the IT service environment with cloud computing capabilities. MEC mainly targets computationally intensive and delay-critical applications, which greatly reduces delay and mobile energy consumption.
The cellular base station (BS), which is considered a key enabler of MEC, has a storage function similar to that of cloud computing and serves the end user's computing requests. By adding a MEC server to the base station, the traditional base station becomes a MEC-BS, which relieves the VT of heavy computing loads while reducing latency and avoiding congestion. In other words, the MEC-BS can help the VT handle services, speed up processing, and reduce service energy consumption.
Because of the exponential growth of computing services, the MEC-BS cannot provide unlimited computation offloading services for VTs. If too many tasks are offloaded from VTs, the MEC-BS will suffer from queuing, delay, or overload. Existing schemes attempt to reduce the load on the MEC-BS by rejecting, deferring, or queuing the offloading requests of VTs. In this case, the connection may become intermittent, resulting in service interruption. Therefore, it is especially important to design a new cognitive computing architecture to effectively manage MEC-BS resources.
In order to achieve effective management of the MEC-BS, it is important to consider the following questions: (1) How to make predictions? (2) Which tasks should be offloaded to the MEC-BS? (3) How to minimize the overall cost (completion time, energy consumption, and availability of MEC-BS)?
Aiming to solve the above problems, we propose an optimized offloading method by predicting availability based on deep learning. The main contributions are as follows:
  • By predicting the availability of the MEC-BS, we can reserve resources for tasks in advance, thereby reducing the heavy load on the MEC-BS and ensuring service connectivity. The availability of the MEC-BS is related to two aspects: the admission control of the MEC-BS and the user's dwell time. We decompose the availability prediction into several sub-problems based on EMD, predict each sub-series separately with the LSTM method, and finally take the sum of the predicted sub-series as the output of the entire model.
  • In our optimization algorithm, local computing and MEC-BS computing are considered separately. We evaluate the energy-efficiency cost (EEC), comprising the completion time, the energy consumption, and the availability of the MEC-BS, and transform our problem into an EEC minimization. Weights on these three aspects are used to adjust the trade-off between them, so the algorithm can flexibly adapt to different needs.
  • To solve the optimization problem, we use game theory. The problem is formulated as a distributed potential game by studying the finite improvement property and the Nash Equilibrium of potential games.
The rest of this paper is organized as follows. The second section reviews related work, and the third section introduces the system model and problem formulation. The fourth section solves the proposed problem through a game approach, and the fifth section describes how to predict the availability of the MEC-BS. The sixth section gives the experiments and evaluation of the algorithm, and the seventh section concludes the paper and outlines future work.

2. Related Works

Computational offloading has always been a core issue in mobile edge/fog/cloud computing research [9]. The main concerns are performance, energy consumption, intermittent connectivity, and so on. In recent years, a number of research results have been achieved in this field; we summarize these studies as follows:

2.1. Computational Offloading Performance and Energy Consumption Issues

The main problem addressed in this line of work is how to improve the efficiency of offloading and how to allocate resources reasonably. Kaddi et al. [11] presented a new energy-efficient clustering protocol that uses objective functions and random search with jumps to reduce sensor energy consumption. Al-Shuwaili et al. [12] addressed the energy consumption minimization problem with an objective function combining the uplink and downlink. Guo [13] studied the problem of joint offloading and resource allocation under energy consumption and completion time constraints, and then used a game model to obtain the optimal solution. Du [14] also studied joint transmission power, wireless bandwidth, and computing resource allocation while ensuring user fairness and a maximum tolerable delay. Chen and Zhang [15,16] studied the multi-user MEC computation offloading problem in a multi-channel wireless environment and designed distributed game-theoretic offloading schemes. Chen [17] employed a method of offloading to a mobile cloud network by predicting bandwidth and computing rate to minimize resource consumption and meet delay requirements. Deng [18] proposed a dynamic workload scheduling algorithm, which maximizes the average throughput utility while guaranteeing the worst-case task processing delay. The above papers only consider the performance aspects of computational offloading; their efficiency will be greatly reduced when connection interruptions occur.

2.2. Computational Offloading Problem in the Internet of Vehicles Scenario

In recent years, the Internet of Vehicles has received extensive attention as a specific application scenario. Hou [19] proposed the vehicular fog computing (VFC) architecture, using near-user equipment and vehicle cooperation for communication and computation. Yu [20] provided a game model for managing and contributing computing resources among different service providers. Satyanarayanan [21] proposed a flexible offload strategy that performs task migration by combining the in-vehicle cloud with an infrastructure-based cloud. Liu et al. [22] studied the computational offloading problem in multi-vehicle edge networks, expressing it as a multi-user computation offloading game. Zhang [23] considered a computational offloading strategy based on vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communication modes, which greatly reduces the offloading delay and transmission cost. Wang et al. [24] proposed a novel spider-web-like transmission mechanism for emergency data (TMED) in vehicular ad hoc networks, which improved the packet delivery ratio and average transmission delay of emergency data. Because a mobile environment must be considered, the Internet of Vehicles is selected as a suitable scenario in this paper. However, these works focus on the architecture of the Internet of Vehicles and do not consider it jointly with performance in the target problem.

2.3. Intermittent Connection Problems in Mobile Edge Network Architecture

Most of the above studies assume a relatively stable edge computing connection. However, as described by Klein [25], since the user is mobile, there may be intermittent connectivity issues between the smart device and the network environment. A key distinction between traditional cloud and mobile cloud systems is whether they can stay persistently connected: the traditional cloud is requested on demand over intermittent connections, whereas the control signaling in a mobile cloud system requires a more durable connection. In currently popular network systems, the presence of an intermittent connection may cause a user's offloading request to fail, which forces the user to request again. Almeida and Hoang [26,27] studied intermittent connections caused by cloudlets; for example, due to limited resources, the cloud can reject a user's offloading request. In particular, the authors of [26,27] compute the revenue and QoS requirements of cloud providers based on the admission control and resource allocation of mobile users. The authors of [26] optimized the problem through a queuing-based approach, and those of [27] reformulated it as a semi-Markov decision process by introducing admission control and resource allocation. However, they all considered only the ideal situation, i.e., they did not consider the persistent connection problem; the current and future connection modes are unknown while the user is moving.
In summary, the algorithms proposed in the above work on performance, the Internet of Vehicles scenario, and the mobile edge network architecture can effectively solve the computational offloading problem under different constraints and scenarios. However, when the environment is mobile and the MEC-BS is overloaded at the same time, none of them consider the MEC-BS load mitigation problem. We therefore propose a computational offloading method based on deep learning. During the user's movement, we obtain the acceptance probability and dwell time of a series of vehicle tasks so as to predict the availability of the MEC-BS at the next moment. Based on the resulting objective function, we formulate the optimal offloading strategy for the vehicle through game theory.

3. System Model and Formulation

3.1. Network Model

Assumption 1.
1. The mutual interference between vehicles, and between MEC-BSs, is negligible.
2. A vehicle can request one of the MEC-BS servers to process its task.
3. The vehicle trajectories follow a Poisson distribution [23], and all vehicles travel in the same direction at a constant speed.
The mobile edge vehicular network architecture is shown in Figure 1. We consider a scenario consisting of several MEC-BSs that can provide offloading services.
There is a set of vehicles, each with a number of computing tasks, and each task can be either executed locally or offloaded to a MEC-BS.
Each computing task $m \in \mathcal{M}$ is expressed as a tuple $\langle L_m^v, D_m^v, S_m^v, t_{max} \rangle$, characterized by: (1) $L_m^v$, the amount of computing resources required to complete task $m$, e.g., quantified as the number of central processing unit cycles required; (2) $D_m^v$, the size of the input data of the task, such as program code or recorded video; (3) $S_m^v$, the size of the returned result, which is ignored here because its return time is much smaller than the transmission time of the input; (4) $t_{max}$, the maximum tolerable time for the task.
For convenience, the main symbols used in this paper and their specific meanings are summarized in Table 1.

3.2. Communication Model

We assume that the offloading decision for task $m$ of vehicle $v$ (denoted MV) is $A = \{a_m^v \mid m \in \mathcal{M}, v \in \mathcal{V}\}$, where $a_m^v = 0$ means that MV is executed locally and $a_m^v = 1$ means that MV is offloaded to the MEC-BS. The uplink data rate for computation offloading of MV is:
$R_m^v = W \log_2\left(1 + \dfrac{P_m^{v,T} (H_m^v)^2}{N_m^v}\right)$
where $P_m^{v,T}$ is the transmission power of MV, $(H_m^v)^2$ denotes the channel fading coefficient, $N_m^v$ denotes the channel noise power related to the link $(V, B)$, and $W$ is the channel bandwidth.
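As an illustration, the following Python sketch evaluates the uplink rate in Equation (1); the function name and numeric values are placeholders for illustration, not parameters taken from the paper.

```python
import math

def uplink_rate(bandwidth_hz: float, tx_power_w: float,
                fading_coeff_sq: float, noise_power_w: float) -> float:
    """Shannon-type uplink rate R = W * log2(1 + P * |H|^2 / N) in bit/s."""
    snr = tx_power_w * fading_coeff_sq / noise_power_w
    return bandwidth_hz * math.log2(1.0 + snr)

# Example with placeholder values: 5 MHz bandwidth, 0.1 W transmit power,
# unit fading gain, and 1e-13 W noise power (assumed, for illustration only).
print(uplink_rate(5e6, 0.1, 1.0, 1e-13))
```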

3.3. Computation Model

Task $m$ can be executed in two ways: local computing or MEC-BS computing. When many tasks are to be executed at the same time, a task must wait in the queue behind all pending tasks ahead of it. Considering that the vehicles follow a Poisson distribution, we model the execution of task $m$ as an M/M/1 queueing system [23], where the probability of task generation at a vehicle is $\lambda_v$.

3.3.1. Local Computing

The main engine for local computing is the CPU on the VT, whose performance is controlled by the CPU cycle frequency $f_m^v$ (the CPU speed). Advanced DVFS technology adjusts the CPU speed by changing the cycle frequency of the CPU chip, so as to minimize the completion time and power consumption of the VT [13]. The computation execution time and energy consumption of MV in local computing are respectively given by:
$T_m^{v,l} = \lambda_v L_m^v (f_m^v)^{-1}$
$E_m^{v,l} = \lambda_v k L_m^v (f_m^v)^{-1}$
where $k$ is the effective switched capacitance, which depends on the chip architecture. We set $k = 10^{-11}$ [13].
Definition 1.
Energy-Efficiency Cost (EEC): The weighted sum of energy consumption, computation completion time and availability of MEC-BS.
When executed locally, there is no need to consider the availability of MEC-BS, so the EEC of MV in local computing is:
$Z_m^{v,l} = \gamma_m^{v,E} E_m^{v,l} + \gamma_m^{v,T} (T_m^{v,l} - t_{max})$
where $0 \le \gamma_m^{v,E}, \gamma_m^{v,T} \le 1$ denote the weights of energy consumption and computation completion time; they bring the two terms to the same order of magnitude and allow the VT to make different decisions. For example, a delay-sensitive application (such as online video) might prefer a large $\gamma_m^{v,T}$, while a device with a low battery will choose a larger $\gamma_m^{v,E}$.
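A minimal Python sketch of the local-computing cost model above, assuming the expressions for $T_m^{v,l}$, $E_m^{v,l}$, and $Z_m^{v,l}$ as reconstructed in this section; the helper name and the sample numbers are illustrative only.

```python
def local_cost(lam_v, L, f, k, gamma_E, gamma_T, t_max):
    """Local completion time, energy, and EEC as defined in Section 3.3.1."""
    t_local = lam_v * L / f          # T^l = lambda_v * L / f
    e_local = lam_v * k * L / f      # E^l = lambda_v * k * L / f, following the expression above
    z_local = gamma_E * e_local + gamma_T * (t_local - t_max)
    return t_local, e_local, z_local

# Illustrative call with assumed numbers (1e9 CPU cycles, 1 GHz local CPU).
print(local_cost(lam_v=0.5, L=1e9, f=1e9, k=1e-11,
                 gamma_E=0.3, gamma_T=0.3, t_max=2.0))
```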

3.3.2. MEC-BS Computing

Offloading process: The VT $v$ offloads task $m$ to the MEC-BS, which executes it and returns the result to the VT. It is worth noting that the availability of the MEC-BS must be accounted for when the VT offloads the task, and the process has three phases: (i) the transmission phase; (ii) the MEC-BS execution phase; (iii) the result return phase.
In our algorithm, the availability of the MEC-BS, $\eta_m^v$, depends on the mobility problem and the admission control decision of the MEC-BS [28]. The computation of the availability is given in Section 5.
The time and energy consumption of the transmission phase are:
$T_m^{v,c,trs} = \lambda_v D_m^v / R_m^v(A)$
$E_m^{v,c,trs}(A) = \lambda_v P_m^{v,T} T_m^{v,c,trs}(A)$
The time and energy consumption of the execution phase are:
$T_m^{v,c,exe} = \lambda_v L_m^v f_c^{-1}$
$E_m^{v,c,exe} = \lambda_v k L_m^v f_c^{2}$
The penalty cost related to the availability of the MEC-BS is:
$C_m^v = (1 - \eta_m^v)^B C_{pen}$
where $B$ is the number of MEC-BSs, $C_{pen}$ is a constant, and $(1 - \eta_m^v)^B$ is the probability of offloading failure. The total completion time is the transmission time plus the execution time,
$T_m^{v,c} = T_m^{v,c,trs} + T_m^{v,c,exe}$
The total energy consumption is the transmission energy plus the execution energy,
$E_m^{v,c} = E_m^{v,c,trs}(A) + E_m^{v,c,exe}$
According to Equations (6)–(12), the EEC of MV in MEC-BS computing is:
$Z_m^{v,c} = \gamma_m^{v,\alpha} C_m^v + \gamma_m^{v,E} E_m^{v,c} + \gamma_m^{v,T} (T_m^{v,c} - t_{max})$
where $0 \le \gamma_m^{v,\alpha} \le 1$ is the weight of the penalty cost.
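For illustration, the sketch below composes the transmission, execution, and penalty terms into the edge-side EEC $Z_m^{v,c}$; the helper name and all numeric arguments are assumed placeholders chosen only to make the example runnable, not parameters from the paper.

```python
def edge_cost(lam_v, L, D, rate, p_tx, f_c, k,
              eta, num_bs, c_pen, gamma_a, gamma_E, gamma_T, t_max):
    """Edge (MEC-BS) completion time, energy, penalty, and EEC (Section 3.3.2)."""
    t_trs = lam_v * D / rate                 # transmission time
    e_trs = lam_v * p_tx * t_trs             # transmission energy
    t_exe = lam_v * L / f_c                  # execution time on the MEC-BS
    e_exe = lam_v * k * L * f_c ** 2         # execution energy on the MEC-BS
    penalty = (1.0 - eta) ** num_bs * c_pen  # offloading-failure penalty
    t_total, e_total = t_trs + t_exe, e_trs + e_exe
    z_edge = gamma_a * penalty + gamma_E * e_total + gamma_T * (t_total - t_max)
    return t_total, e_total, z_edge

# Illustrative call; outputs are in model units, not calibrated physical values.
print(edge_cost(0.5, 1e9, 4e7, 2e7, 0.1, 1e10, 1e-11,
                0.9, 2, 10.0, 0.4, 0.3, 0.3, 2.0))
```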

3.4. Centralized Optimization Problem

The EEC of MV is:
$Z_v = \sum_{m=1}^{M} Z_m^v = \sum_{m=1}^{M} \left[ (1 - a_m^v) Z_m^{v,l} + a_m^v Z_m^{v,c} \right]$
To develop the optimal offloading decision $A$, we transform the complex problem into an EEC minimization problem under time and availability constraints. It is given by:
$\min_{A} \sum_{v=1}^{V} Z_v$
subject to:
C1: $\sum_{m=1}^{M} \left[ (1 - a_m^v) T_m^{v,l} + a_m^v T_m^{v,c} \right] \le t_{max}$
C2: $0 \le \eta_m^v \le 1$
C3: $f_c \ge 0$
C4: $f_m^v \ge 0$
C5: $a_m^v \in \{0, 1\}$
C6: $0 \le P_m^{v,T} \le P_m^{v,max}$
C1 is the completion time constraint: it requires that the total completion time over all tasks of vehicle $V$'s application does not exceed the completion time deadline $t_{max}$, which is set by the latency requirement of the application. C2 is the availability constraint: availability lies between 0 and 1. C3 and C4 require non-negative computing resources. C5 states that task $m$ of vehicle $V$ is either executed locally or offloaded to the MEC-BS. C6 bounds the uplink transmission power. The above problem is a mixed-integer programming problem; it is non-convex and NP-hard [16].

4. Computation Offloading Game

We adopt a game-theoretic approach to achieve efficient offloading decisions for vehicles. Each vehicle is owned by a different individual; they pursue different interests and finally reach a solution that satisfies all parties. By studying the existence of a Nash Equilibrium (NE) in potential games, we describe the above problem as a distributed potential game [12].

4.1. Game Formulation

The game model is denoted as $G = \{\mathcal{V}, (A_v)_{v \in \mathcal{V}}, (u_v)_{v \in \mathcal{V}}\}$, where $u_v$ is the cost function of VT $v$. The game model is described as follows:
Players: Each VT is a participant. Participants compete for communication and computing resources, minimizing their communication and computing costs.
Strategies: Each VT takes energy consumption and time cost into account, with the aim of minimizing the total cost. It is worth noting that a local computation timeout results in a penalty cost. The offloading decision of VT $v$ is $a_v \in A_v$.
Payoffs: The cost of each vehicle $v$ is the time cost and energy consumption of the communication and computation process. We denote the cost function by $u_v(a_v, a_{-v})$, where $a_{-v} = (a_1, \ldots, a_{v-1}, a_{v+1}, \ldots, a_V)$ is the offloading decision profile of all VTs except VT $v$.
Each VT minimizes its own competitive cost:
$\min_{a_v \in \{0,1\}} u_v(a_v, a_{-v}) = (1 - a_v) Z_v^l + a_v Z_v^c, \quad \forall v \in \mathcal{V}$
The proposed game model is solved by finding a Nash Equilibrium (NE).

4.2. The Existence of NE

Definition 2.
A strategy profile $a^* = (a_1^*, a_2^*, \ldots, a_k^*)$ is a NE of the game model if, at the equilibrium $a^*$, no player can further reduce its cost by unilaterally altering its strategy, i.e.,
$u_v(a_v^*, a_{-v}^*) \le u_v(a_v, a_{-v}^*), \quad \forall a_v \in \{0, 1\}, \ \forall v \in \mathcal{V}$
A NE has high self-stability: all vehicles reach a balance once they obtain a solution that is mutually satisfactory, because each vehicle is selfish and acts for its own benefit. We then study the existence of a NE.
Definition 3.
A game is called an exact potential game if it admits a potential function $\phi(a)$ such that for every $v \in \mathcal{V}$ and every $a_v, a_v' \in A_v$, $a_{-v} \in A_{-v}$,
$u_v(a_v, a_{-v}) - u_v(a_v', a_{-v}) = \phi(a_v, a_{-v}) - \phi(a_v', a_{-v})$
Theorem 1.
Every ordinal potential game with finite strategy sets has at least one pure-strategy NE and has the finite improvement property (FIP).
The class of general potential games includes exact potential games, and a general potential game always admits a NE. Next, we prove that our game model is an exact potential game.
Theorem 2.
The game model is an exact potential game with the potential function given in Equation (18); hence, it always has a NE and the FIP.
$\phi(a) = (1 - a_v) \left\{ \sum_{k \ne v}^{K} \left[ \gamma_k^E \frac{P_k D_k}{R_k} + \gamma_k^M (T_k^l - t_{max}) \right] + Z_v^l \right\} + a_v \sum_{k=1}^{K} Z_k^c$
Proof. 
Based on Equation (17), we have
$u_v(1, a_{-v}) - u_v(0, a_{-v}) = Z_v^c - Z_v^l$
Based on Equation (19), $\phi(1, a_{-v})$ and $\phi(0, a_{-v})$ can be written as follows, where 1 means the task is offloaded to the MEC-BS and 0 means the task is executed locally,
$\phi(1, a_{-v}) = \sum_{k=1}^{K} Z_k^c = Z_v^c + \sum_{k \ne v}^{K} Z_k^c = Z_v^c + \sum_{k \ne v}^{K} \left[ \gamma_k^E \frac{P_k D_k}{R_k} + \gamma_k^M (T_k^l - t_{max}) \right]$
$\phi(0, a_{-v}) = \sum_{k \ne v}^{K} \left[ \gamma_k^E \frac{P_k D_k}{R_k} + \gamma_k^M (T_k^l - t_{max}) \right] + Z_v^l$
From Equations (20) and (21) we obtain:
$\phi(1, a_{-v}) - \phi(0, a_{-v}) = Z_v^c - Z_v^l$
From Equations (19) and (22) we obtain that $\phi(1, a_{-v}) - \phi(0, a_{-v}) = u_v(1, a_{-v}) - u_v(0, a_{-v})$.
Therefore, the game model is an exact potential game, and it has at least one NE and the FIP. □

4.3. Algorithm Description

In this section, we describe the game algorithm. Since the access node has complete information from the VTs and the MEC-BS (offloading requests and responses, and the parameters of the VTs and the MEC-BS), it can compute the optimal offloading decision $A$ and send $A$ to all VTs. All VTs will follow the optimal offloading decision without deviation because of the property of the NE. The NE is derived by the game procedure given in Algorithm 1.
Algorithm 1 Computation Offloading Decision for vehicle V
Require: $\mathcal{M}$: a sequence of M tasks of vehicle V
  $t_{max}$: maximum tolerance time
  $Iter_{max}$: maximum number of iterations
Ensure: $\{A\}$: optimal offloading decision
 Initialization: $D_m^v, L_m^v, \gamma_m^{v,E}, \gamma_m^{v,T}, \gamma_m^{v,\alpha}, C_{pen}$ and iteration index $t \leftarrow 1$
for m = 1 to M do
  while $t \le Iter_{max}$ do
   compute $R_m^v, T_m^{v,l}, E_m^{v,l}$ by (1)–(3), respectively
   compute $Z_m^{v,l} = \gamma_m^{v,E} E_m^{v,l} + \gamma_m^{v,T} (T_m^{v,l} - t_{max})$
   compute $\eta_m^v$ from the availability prediction (Section 5)
   compute $T_m^{v,c,trs}, E_m^{v,c,trs}, T_m^{v,c,exe}, E_m^{v,c,exe}$ by (6)–(8) and (10), respectively
   compute $T_m^{v,c} = T_m^{v,c,trs} + T_m^{v,c,exe}$ and $E_m^{v,c} = E_m^{v,c,trs} + E_m^{v,c,exe}$, respectively
   compute $Z_m^{v,c} = \gamma_m^{v,\alpha} C_m^v + \gamma_m^{v,E} E_m^{v,c} + \gamma_m^{v,T} (T_m^{v,c} - t_{max})$
   if $Z_m^{v,c} \le Z_m^{v,l}$ then
     $a_m^v = 1$
   else
     $a_m^v = 0$
   end if
   update $\eta_m^v(t+1) = \eta_m^v(t)$
  end while
end for
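A compact Python sketch of the per-task decision rule in Algorithm 1, reusing the local_cost and edge_cost helpers sketched earlier; the predict_availability callable stands in for the Section 5 predictor and is a placeholder, not an interface defined in the paper.

```python
def offloading_decision(tasks, predict_availability, params):
    """For each task, offload (1) if the edge EEC is not larger than the local EEC."""
    decisions = []
    for L, D in tasks:                          # task = (CPU cycles, input size)
        eta = predict_availability()            # placeholder for the Section 5 predictor
        _, _, z_local = local_cost(params["lam_v"], L, params["f_local"], params["k"],
                                   params["gamma_E"], params["gamma_T"], params["t_max"])
        _, _, z_edge = edge_cost(params["lam_v"], L, D, params["rate"], params["p_tx"],
                                 params["f_c"], params["k"], eta, params["num_bs"],
                                 params["c_pen"], params["gamma_a"], params["gamma_E"],
                                 params["gamma_T"], params["t_max"])
        decisions.append(1 if z_edge <= z_local else 0)
    return decisions
```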

5. Vehicle Mobility Prediction: MEC-BS Availability Estimation

The computation model in Section 3 requires the MEC-BS availability $\eta_m^v$, which depends on the mobility problem and the admission control decision of the MEC-BS. Based on the collected data set, we decompose one time series prediction problem into several time series prediction problems using the EMD decomposition method, predict each sub-series separately with the LSTM method, and finally take the sum of the predicted sub-series as the output of the whole model.

5.1. MEC-BS Admission Control Based on Distance Priority

The MEC-BS may reject an offloading request, so a very important factor affecting availability is the admission control of the MEC-BS. We adopt a distance-based admission control strategy [27]: vehicles within the same coverage area have different priorities depending on their distance from the MEC-BS. The criteria of admission control policies may vary [29], and other admission control strategies can also be used in this model. Before calculating the acceptance probability that determines MEC-BS availability, we introduce a concept similar to the Signal to Interference plus Noise Ratio (SINR), which we call the External Service Ratio (ESR), for vehicle $v$ and the other vehicles $v_i$, $i \in \{1, 2, \ldots\}$, within the same coverage. The ESR is:
$\beta_{ESR}(r) = \dfrac{g r^{-c}}{E_s + \sum_{i} g_i r_i^{-c}}$
where $g$ is the energy consumption (e.g., the transmission power), and we can assume that all $g_i \equiv g$; $r$ is the distance from vehicle $v$ to the MEC-BS, which can be estimated from the signal strength of the MEC-BS; $c$ is a positive constant indicating the sensitivity of the function; and $E_s$ indicates the energy consumption of the MEC-BS in handling its own work. From Equation (23), we can conclude that vehicles located closer to the MEC-BS have a relatively larger ESR. We assume that the MEC-BS has a preset constant $\hat{\beta}$; an offloading task is accepted only when the ESR of the vehicle is not smaller than this preset constant (i.e., $\beta_{ESR} \ge \hat{\beta}$).
We define the virtual zone radius $R_g(r)$, which refers to the zone around each receiver in which interfering transmissions are suppressed. Once an active vehicle appears in the virtual zone, the vehicle's ESR falls below the preset threshold (i.e., $\beta_{ESR} < \hat{\beta}$), and any offloading request of vehicle $v$ will be rejected [30].
The virtual zone radius R g ( r ) is given by:
$R_g(r) = \left( \dfrac{r^{-c}}{\hat{\beta}} - \dfrac{E_s}{g} \right)^{-\frac{1}{c}}$
Then the admission probability is given by:
$\zeta(r) = e^{-2\pi \lambda_{au} [R_g(r)]^2}$
where $\lambda_{au}$ is the density of active vehicles; since active vehicles are only a fraction of all vehicles, we can write $\lambda_{au} = c_a \lambda_u$, where $c_a$ is a constant. From Equations (24) and (25), we can see that the admission probability depends on the size of the virtual radius and the density of active vehicles.
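The following sketch strings Equations (23)–(25) together: the ESR, the virtual zone radius, and the admission probability. It assumes an explicit list of interferer terms and placeholder constants, purely for illustration.

```python
import math

def esr(g, r, c, e_s, interferers):
    """External Service Ratio for a vehicle at distance r (Equation 23)."""
    interference = sum(gi * ri ** (-c) for gi, ri in interferers)
    return g * r ** (-c) / (e_s + interference)

def virtual_radius(r, c, beta_hat, e_s, g):
    """Virtual zone radius R_g(r) from Equation (24)."""
    return (r ** (-c) / beta_hat - e_s / g) ** (-1.0 / c)

def admission_probability(r, c, beta_hat, e_s, g, lambda_au):
    """Admission probability zeta(r) from Equation (25)."""
    r_g = virtual_radius(r, c, beta_hat, e_s, g)
    return math.exp(-2.0 * math.pi * lambda_au * r_g ** 2)

# Placeholder values (assumed): g = 0.1, c = 2, E_s = 0.001, beta_hat = 0.5.
print(esr(0.1, 5.0, 2.0, 0.001, [(0.1, 12.0)]))
print(admission_probability(r=5.0, c=2.0, beta_hat=0.5, e_s=0.001, g=0.1, lambda_au=0.05))
```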

5.2. Mobility Problem: Estimated Dwell Time

The VT's dwell trajectory within the MEC-BS coverage is shown in Figure 2, where $R$ is the coverage radius of the MEC-BS, $d$ is the crossing distance within the coverage of the MEC-BS ($0 < d < 2R$), the variable $\alpha \in [0, \pi]$ represents the angle between the vehicle's trajectory direction and the MEC-BS, $r$ represents the distance between the vehicle and the MEC-BS, and $h$ represents the geometric height. The model assumes that the moving speed and direction of the vehicle do not change within this period of time. We can conclude that:
$h = r \sin\alpha, \qquad d = \sqrt{R^2 - h^2} + r \cos\alpha, \qquad 0 \le \alpha \le \pi$
The dwell time T of the vehicle v in the coverage of the MEC-BS is given by:
$T(r) = \dfrac{d}{v_0} = \dfrac{\sqrt{R^2 - (r \sin\alpha)^2} + r \cos\alpha}{v_0}, \qquad 0 \le \alpha \le \pi$
so the relative time is:
$\tilde{T}_t = \dfrac{T - \tau}{T}$
where T represents the predicted dwell time within the communication range, τ denotes the time required for the task to be completed.
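A small sketch of the dwell-time geometry above (the expressions for $h$, $d$, $T(r)$, and the relative time); the coverage radius, distance, angle, and speed below are arbitrary example values.

```python
import math

def dwell_time(R, r, alpha, v0):
    """Dwell time T(r) of a vehicle crossing a MEC-BS coverage of radius R."""
    h = r * math.sin(alpha)                              # geometric height
    d = math.sqrt(R * R - h * h) + r * math.cos(alpha)   # crossing distance
    return d / v0

def relative_time(T, tau):
    """Relative time (T - tau) / T."""
    return (T - tau) / T

# Example: 50 m radius, vehicle 20 m from the MEC-BS at a 60-degree angle, 10 m/s.
T = dwell_time(R=50.0, r=20.0, alpha=math.radians(60), v0=10.0)
print(T, relative_time(T, tau=1.5))
```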

5.3. Availability Prediction Based EMD and Deep Learning

5.3.1. Data Preprocessing

The admission control and dwell time data are converted into a format that machine learning can process; for example, qualitative variables are quantized and features in different dimensions are normalized to a uniform scale. The two feature matrices are integrated through their time correlation to obtain a new feature matrix $I_t$ (Equation (28)):
$I_t = [\zeta_t, \tilde{T}_t]^H$

5.3.2. EMD Decomposition

Time series decomposition techniques are widely used in time series related topics. EMD is chosen as a method for decomposing time series [31]. The formula is as follows:
$y(t) = \sum_{i=1}^{n} IMF_i(t) + r_n(t)$
The IMFs (Intrinsic Mode Functions) are components of the original time series; each IMF component contains local characteristic signals of the original signal at a different time scale, and $r_n$ is the residual component. These components carry different amounts of information entropy, and we assume that each of them holds part of the information of the original time series, with total information entropy $H = H_1 + H_2 + \cdots + H_n$, where $H_i$ is the information entropy of $IMF_i$. Therefore, the prediction can be decomposed into several time series prediction problems.
Figure 3 shows that the original time series is decomposed into seven sub-components, making it easier to build models and predictions. The decomposed data in each IMF component are used to construct a corresponding LSTM prediction model.
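As an illustration of this decomposition step, the sketch below applies EMD to a synthetic availability-like series; it assumes the third-party PyEMD package (EMD-signal), which is not named in the paper.

```python
import numpy as np
from PyEMD import EMD  # provided by the EMD-signal package (assumed dependency)

# Toy availability-like series: a slow trend plus faster oscillations and noise.
t = np.linspace(0, 10, 500)
signal = 0.6 + 0.2 * np.sin(2 * np.pi * 0.2 * t) + 0.05 * np.random.randn(500)

emd = EMD()
imfs = emd.emd(signal)          # rows are IMF_1 ... IMF_n (last row acts as the residual)
print(imfs.shape)               # each IMF is predicted by its own LSTM model
print(np.allclose(imfs.sum(axis=0), signal))  # the IMFs sum back to the original signal
```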

5.3.3. LSTM Method for Prediction

A Long Short-Term Memory (LSTM) method is proposed to predict the short-term availability because of its ability to handle long-term and short-term dependencies [32,33]. The LSTM shares a similar architecture with the Recurrent Neural Network (RNN), consisting of an input layer, a hidden layer, and an output layer, as shown in Figure 4. Typically, at each iteration $t$, the LSTM cell has an input layer $X_t$, a hidden layer, and an output layer $h_t$. By adding a cell state, the LSTM cell is capable of handling the long-term dependency of sequence data: the previous output cell state $C_{t-1}$ and the current input cell state $\tilde{C}_t$ both influence the current output cell state $C_t$. Three gates control the information flowing into and out of the cell state: the forget gate, the input gate, and the output gate, denoted as $f_t$, $i_t$, and $o_t$, respectively. The forget gate controls how much information from the previous cell state should be forgotten by the current cell state. The input gate decides how much information from the current input layer flows into the current cell state. The output gate controls how much information from the current cell state is conveyed to the current output layer. They are calculated by the following equations,
$f_t = \sigma_g(W_f X_t + U_f h_{t-1} + b_f)$
$i_t = \sigma_g(W_i X_t + U_i h_{t-1} + b_i)$
$o_t = \sigma_g(W_o X_t + U_o h_{t-1} + b_o)$
$\tilde{C}_t = \tanh(W_c X_t + U_c h_{t-1} + b_c)$
where $W_f$, $W_i$, $W_o$, and $W_c$ are the weight matrices mapping the current input layer to the three gates and the current input cell state; $U_f$, $U_i$, $U_o$, and $U_c$ are the weight matrices mapping the previous output layer to the three gates and the current input cell state; and $b_f$, $b_i$, $b_o$, and $b_c$ are bias vectors for the gate and input cell state calculations. $\sigma_g$ is the gate activation function, normally a sigmoid function, and $\tanh$ is the hyperbolic tangent function, the activation function of the current input cell state. The current output cell state and output layer are then calculated by the following equations. Finally, the output of the LSTM prediction model is the availability at the next time iteration.
$C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t$
$h_t = o_t \odot \tanh(C_t)$
Finally, the input data $X_t$ of each model is the sub-component to be predicted. The output of the LSTM method is the predicted value of that sub-series at the next iteration, and the sum of the predicted sub-series is taken as the output of the entire model.
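To make the gate and state equations above concrete, here is a single LSTM-cell forward step written directly from them with NumPy; the weight shapes and random initialization are illustrative only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step; W, U, b are dicts keyed by 'f', 'i', 'o', 'c' (the gate equations above)."""
    f_t = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])      # forget gate
    i_t = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])      # input gate
    o_t = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])      # output gate
    c_tilde = np.tanh(W['c'] @ x_t + U['c'] @ h_prev + b['c'])  # candidate cell state
    c_t = f_t * c_prev + i_t * c_tilde                          # new cell state
    h_t = o_t * np.tanh(c_t)                                    # new output layer
    return h_t, c_t

# Toy dimensions: 2 input features (admission probability, relative dwell time), 4 hidden units.
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((4, 2)) for k in 'fioc'}
U = {k: rng.standard_normal((4, 4)) for k in 'fioc'}
b = {k: np.zeros(4) for k in 'fioc'}
h, c = lstm_step(np.array([0.8, 0.6]), np.zeros(4), np.zeros(4), W, U, b)
print(h)
```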
The EMD-based LSTM method proceeds as follows (a compact sketch of the overall pipeline is given after this list):
(1) Through time correlation, the admission probability and dwell time feature matrices are integrated to obtain a new feature matrix $I_t$.
(2) The matrix is transformed into seven sub-data sets plus a residual through EMD decomposition.
(3) Each sub-data set is trained with the LSTM method to obtain its predicted result (LSTM input: each sub-series produced by EMD; LSTM output: the predicted value of that sub-series).
(4) The sum of the sub-predictions is taken as the total predicted output.
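A high-level sketch of steps (1)–(4); predict_next_value stands for any per-IMF one-step predictor (for instance, the LSTM above) and is a placeholder, not an interface from the paper.

```python
import numpy as np
from PyEMD import EMD  # assumed third-party dependency, as in the earlier sketch

def predict_availability_series(series, predict_next_value):
    """Decompose the series with EMD, predict each sub-series, and sum the predictions."""
    imfs = EMD().emd(np.asarray(series))                          # step (2): IMFs plus residual
    sub_predictions = [predict_next_value(imf) for imf in imfs]   # step (3)
    return float(np.sum(sub_predictions))                         # step (4): total predicted output

# Step (1) would build the series from admission probability and relative dwell time;
# here a naive persistence predictor (last observed value) stands in for the LSTM.
history = 0.7 + 0.1 * np.sin(np.linspace(0, 6, 200))
print(predict_availability_series(history, predict_next_value=lambda imf: imf[-1]))
```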

6. Performance Analysis

6.1. Experiment Profile

We consider an experimental scenario in which $V = 20$ vehicles travel in one direction on a road of length $5L$. The density of vehicles on the road is set to $\lambda_v = 0.5$, and two MEC-BSs are placed at 1/3 and 2/3 of the road, respectively. For wireless access, we set the bandwidth $W = 5$ MHz and the channel gain $H_m^v = d_{vj}^{-\varsigma}$ from vehicle $v$ to the MEC-BS, where $d_{vj}$ is the distance from the vehicle to the MEC-BS and $\varsigma = 4$ is the path loss factor. We set the channel noise power $N_m^v = -100$ dBm and initialize the weighting factors $\gamma_m^{v,\alpha} = 0.4$ and $\gamma_m^{v,E} = \gamma_m^{v,T} = 0.3$.

6.2. Evaluation of the LSTM Model

We used the LSTM model of Section 5 to predict the behavior of a moving VT. In Figure 5a,b, we evaluate the LSTM model on IMF1 and IMF4, respectively. In both figures, we use 500 data points (the data used in this study were taken from the Google cluster data-2011-2 trace, which describes the offloading of tasks under a certain time constraint and is very similar to the situation we study), of which the first 495 form the training set and the last 5 are test data. We can see from the figures that both predicted curves are close to the actual data. To assess the accuracy of the simulation results, the mean absolute error (MAE) and the root mean square error (RMSE) are chosen as evaluation indicators, given in Equations (36) and (37).
$MAE = \dfrac{1}{n} \sum_{i=1}^{n} |r_i - \hat{r}_i|$
$RMSE = \left[ \dfrac{1}{n} \sum_{i=1}^{n} (r_i - \hat{r}_i)^2 \right]^{\frac{1}{2}}$
where $n$ is the sample size, and $r_i$ and $\hat{r}_i$ are the real and predicted values. The LSTM model is compared with traditional prediction algorithms, including ARIMA (Autoregressive Integrated Moving Average), RT (Random Forest), KNN (K-Nearest Neighbor), and GBDT (Gradient Boosting Decision Tree); the traditional prediction methods used for comparison are tested under the same environment and data. The results are shown in Table 2.
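The two metrics can be computed directly; the sketch below uses NumPy and arbitrary example arrays.

```python
import numpy as np

def mae(real, pred):
    """Mean absolute error (Equation 36)."""
    return np.mean(np.abs(np.asarray(real) - np.asarray(pred)))

def rmse(real, pred):
    """Root mean square error (Equation 37)."""
    return np.sqrt(np.mean((np.asarray(real) - np.asarray(pred)) ** 2))

# Example with arbitrary values.
r, r_hat = [0.81, 0.77, 0.90, 0.85, 0.88], [0.79, 0.80, 0.86, 0.84, 0.91]
print(mae(r, r_hat), rmse(r, r_hat))
```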
We can see that the MAE and RMSE of the LSTM model are smaller. These metrics evaluate the degree of deviation from the data: the smaller the value, the better the prediction model describes the experimental data, indicating that the EMD-based LSTM model is well suited to achieving the desired prediction accuracy. Because the LSTM has three gates that mitigate gradient vanishing, this further illustrates the feasibility of using the LSTM method for prediction in a mobile environment.

6.3. Evaluation of the Availability of MEC-BS

The computation model of Section 3 requires the availability of the MEC-BS, which is computed in Section 5. To verify the proposed mobility model, we simulated three movement modes for the vehicle. As shown in Figure 6a, the vehicle can move centripetally, centrifugally, or in a random direction. The vehicle can offload at any time while within MEC-BS coverage, and we record the probability of successful task offloading. For each movement mode, 4000 vehicle samples were tested independently, with the MEC-BS coverage radius ranging from 5 to 70 m. The results are given in Figure 6b. The probability of successful offloading when the vehicle moves centrifugally (the worst case) is much smaller than when it moves centripetally (the best case). This is very likely because the dwell time is relatively short during centrifugal movement, and a shorter dwell time means lower MEC-BS availability.

6.4. Impact of Weights

In this subsection, we examine the impact of the weights $\gamma_m^{v,\alpha}$, $\gamma_m^{v,E}$, and $\gamma_m^{v,T}$ on the penalty cost, the energy consumption, and the computation time of tasks with a different number of predecessors in Figure 7. To better observe the relationship between energy consumption and computation time, we set $\gamma_m^{v,\alpha} = 0.2$. For a given task, the energy consumption increases as $\gamma_m^{v,E}$ decreases, while the computation time changes in the opposite direction. This is reasonable, since a large $\gamma_m^{v,E}$ places more weight on energy and leads to a decrease of the transmission power $P_m^{v,T}$ in edge execution.

6.5. Evaluation and Comparison of the Proposed Algorithm

First, we evaluate the impact of task complexity on the EEC in Figure 8. We use the computing load-to-input data ratio (LDR) to characterize the complexity of a task, i.e., $LDR = L_m^v / D_m^v$. We can observe that (i) the proposed algorithm converges within 100 iterations for all tasks; (ii) a task with a larger LDR has a higher EEC; (iii) a task with a larger LDR converges more slowly, that is, task complexity increases the convergence time. Therefore, setting a reasonable LDR for tasks plays an important role in improving offloading performance.
The energy-efficiency cost EEC under a strict completion time deadline is shown in Figure 9. To evaluate our algorithm, we compare it with the following algorithms: (1) the completely local execution algorithm (LECA), in which all vehicles perform tasks locally; (2) the completely cloud execution algorithm (CECA), in which all vehicle tasks are offloaded to the MEC-BS.
We can draw some observations from Figure 9. First, our algorithm reduces the EEC compared with LECA; once our algorithm becomes stable, the cost is reduced, because it optimally chooses whether to perform tasks locally or on the MEC-BS according to the computational cost. Second, our algorithm achieves a lower EEC than CECA and LECA, because the proposed prediction model saves a lot of waiting time; the completion time deadline effectively becomes longer, offloading to the cloud becomes more flexible, and the EEC decreases accordingly.
Figure 10 shows the dynamics of the computational overhead of the VTs. As time increases, the computational cost of all algorithms increases, and beyond a certain time the computational overhead no longer changes. This is because, after adding the MEC-BS availability prediction step, the load of the MEC-BS can be balanced and waiting time is saved to some extent, so the overhead of our proposed algorithm is lower than that of LECA and CECA.
In this part, we compare our algorithm with the following algorithms: (i) the offloading game algorithm in [16], called Zhang's algorithm; (ii) the resource scheduling algorithm in [18], called Deng's algorithm. Figure 11 shows the completion time and energy consumption of the task for different input data sizes.
We can observe from Figure 11a that when the input data is small, Zhang's algorithm consumes the least energy. As the input data grows, its energy consumption increases faster because there is no corresponding energy control and adaptation mechanism in that algorithm, so our algorithm is more efficient when the input data is large. Deng's algorithm can adjust energy consumption through resource scheduling, so when the input data size is greater than 8 it is more energy efficient than Zhang's algorithm. Compared with both algorithms, however, the energy consumption of our algorithm remains low, which makes sense: our algorithm uses a predictive model that saves a large amount of waiting time. For tasks with low transmission cost and high computational complexity, better computing resources matter more.
In addition, we can observe from Figure 11b that the completion time of our algorithm increases more slowly as the input data gets larger. Our algorithm is thus more effective when the input data is large, which is reasonable: larger input data can be understood as more tasks, and because our algorithm incorporates LSTM prediction, it can learn to continuously optimize the offloading results.

7. Conclusions

In this paper, we propose an LSTM-based availability prediction for optimized offloading in mobile edges. Firstly, considering the mobility of the vehicle, we propose an availability prediction scheme based on EMD and deep learning. Secondly, we jointly consider the availability of the MEC-BS, the completion time, and the energy consumption to minimize the overall cost. Then, we use a game-theoretic method to obtain the optimal offloading decision. Finally, simulation experiments show that, compared with existing algorithms, the proposed algorithm can effectively save energy and reduce the completion time.
For future work, we can consider parked vehicles and moving vehicles as part of the MEC, or combine it with a variety of IoT devices for optimal offloading under different constraints, allowing full use of all resources, rational allocation of resources, and maximized resource and time utilization.

Author Contributions

Conceptualization, C.C. and M.Z.; methodology, M.Z.; software, C.C.; validation, C.C., M.Z. and K.W.; formal analysis, C.C.; investigation, K.W.; resources, M.Z.; data curation, K.W.; writing—original draft preparation, C.C.; writing—review and editing, C.C.; visualization, K.W.; supervision, K.W.; project administration, M.Z.; funding acquisition, M.Z.

Funding

This work was supported in part by the National Science Foundation of China under Grant 61572526.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Zikria, Y.B.; Kim, S.W.; Hahm, O.; Afzal, M.K.; Aalsalem, M.Y. Internet of Things (IoT) Operating Systems Management: Opportunities, Challenges, and Solution. Sensors 2019, 8, 1793.
2. Zikria, Y.B.; Yu, H.; Afzal, M.K.; Rehmani, M.H.; Hahm, O. Internet of Things (IoT): Operating System, Applications and Protocols Design, and Validation Techniques. Futur. Gener. Comput. Syst. 2018, 88, 699–706.
3. Rasool, I.U.; Zikria, Y.B.; Kim, S.W. A Review of WAVE Multichannel Operational MAC Protocols: QoS Analysis and Other Related Issues. Sage Hindawi Int. J. Distrib. Sens. Netw. 2017, 13, 1–22.
4. Zhang, K.; Mao, Y.; Leng, S.; Vinel, A.; Zhang, Y. Delay constrained offloading for mobile edge computing in cloud-enabled vehicular networks. In Proceedings of the 8th International Workshop on Resilient Networks Design and Modeling (RNDM), Halmstad, Sweden, 13–15 September 2016.
5. Qiu, T.; Wang, H.; Li, K.; Ning, H.; Sangaiah, A.K.; Chen, B. SIGMM: A Novel Machine Learning Algorithm for Spammer Identification in Industrial Mobile Cloud Computing. IEEE Trans. Ind. Inform. 2018, 15, 2349–2359.
6. Chiang, M.; Zhang, T. Fog and IoT: An Overview of Research Opportunities. IEEE Internet Things J. 2016, 3, 854–864.
7. Wang, B.; Gu, X.; Yan, S. STCS: A practical solar radiation based temperature correction scheme in meteorological WSN. Int. J. Sens. Netw. 2018, 28, 22–33.
8. Yu, Y. Mobile Edge Computing Towards 5G: Vision, Recent Progress, and Open Challenges. China Commun. English Vers. 2016, 13, 89–99.
9. Mao, Y.; You, C.; Zhang, J. A Survey on Mobile Edge Computing: The Communication Perspective. IEEE Commun. Surv. 2017, 19, 2322–2358.
10. Ansari, N.; Sun, X. Mobile Edge Computing Empowers Internet of Things. IEICE Trans. Commun. 2018, E101B, 604–619.
11. Kaddi, M.; Khelifa, B.; Omari, M. An Energy-Efficient Protocol Using an Objective Function & Random Search with Jumps for WSN. Comput. Mater. Continua 2019, 58, 603–624.
12. Al-Shuwaili, A.; Simeone, O.; Bagheri, A.; Scutari, G. Joint Uplink/Downlink Optimization for Backhaul-Limited Mobile Cloud Computing with User Scheduling. IEEE Trans. Signal Inf. Process. Netw. 2017, 3, 787–802.
13. Guo, S.; Liu, J.; Yang, Y.; Xiao, B.; Li, Z. Energy-Efficient Dynamic Computation Offloading and Cooperative Task Scheduling in Mobile Cloud Computing. IEEE Trans. Mob. Comput. 2019, 18, 319–333.
14. Du, J.; Zhao, L.; Feng, J.; Chu, X. Computation Offloading and Resource Allocation in Mixed Fog/Cloud Computing Systems with Min-Max Fairness Guarantee. IEEE Trans. Commun. 2018, 66, 1594–1608.
15. Chen, X.; Jiao, L.; Li, W.; Fu, X. Efficient Multi-User Computation Offloading for Mobile-Edge Cloud Computing. IEEE-ACM Trans. Netw. 2016, 24, 2827–2840.
16. Zhang, J.; Xia, W.; Yan, F.; Shen, L. Joint Computation Offloading and Resource Allocation Optimization in Heterogeneous Networks with Mobile Edge Computing. IEEE Access 2018, 6, 19324–19337.
17. Chen, X.; Wang, L.; Wang, X.; Jin, R. Predictive offloading in mobile-fog-cloud enabled cyber-manufacturing systems. In Proceedings of the IEEE Industrial Cyber-Physical Systems (ICPS), St. Petersburg, Russia, 15–18 May 2018.
18. Deng, Y.; Chen, Z.; Zhang, D.; Zhao, M. Workload scheduling toward worst-case delay and optimal utility for single-hop Fog-IoT architecture. IET Commun. 2018, 12, 2164–2173.
19. Hou, X.; Li, Y.; Chen, M.; Wu, D.; Jin, D.; Chen, S. Vehicular Fog Computing: A Viewpoint of Vehicles as the Infrastructures. IEEE Trans. Veh. Technol. 2016, 65, 3860–3873.
20. Yu, R.; Huang, X.; Kang, J.; Ding, J.; Maharjan, S.; Gjessing, S.; Zhang, Y. Cooperative Resource Management in Cloud-Enabled Vehicular Networks. IEEE Trans. Ind. Electron. 2015, 62, 7938–7951.
21. Satyanarayanan, M.; Bahl, P.; Cáceres, R.; Davies, N. The Case for VM-Based Cloudlets in Mobile Computing. IEEE Pervasive Comput. 2009, 8, 14–23.
22. Liu, Y.; Wang, S.; Huang, J.; Yang, F. A computation offloading algorithm based on game theory for vehicular edge networks. In Proceedings of the IEEE International Conference on Communications (ICC), Kansas City, MO, USA, 20–24 May 2018.
23. Zhang, K.; Mao, Y.; Leng, S.; He, Y.; Zhang, Y. Mobile-Edge Computing for Vehicular Networks: A Promising Network Paradigm with Predictive Off-Loading. IEEE Veh. Technol. Mag. 2017, 12, 36–44.
24. Qiu, T.; Wang, X.; Chen, C.; Atiquzzaman, M.; Liu, L. TMED: A spider web-like transmission mechanism for emergency data in vehicular ad hoc networks. IEEE Trans. Veh. Technol. 2018, 67, 8682–8694.
25. Klein, A.; Mannweiler, C.; Schneider, J.; Schotten, H.D. Access schemes for mobile cloud computing. In Proceedings of the Eleventh International Conference on Mobile Data Management, Kansas City, MO, USA, 23–26 May 2010.
26. Almeida, J.; Almeida, V.; Ardagna, D.; Cunha, Í.; Francalanci, C.; Trubian, M. Joint admission control and resource allocation in virtualized servers. J. Parallel Distrib. Comput. 2010, 70, 344–362.
27. Hoang, D.T.; Niyato, D.; Wang, P. Optimal admission control policy for mobile cloud computing hotspot with cloudlet. In Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC), Shanghai, China, 1–4 April 2012.
28. Chen, X. Decentralized Computation Offloading Game for Mobile Cloud Computing. IEEE Trans. Parallel Distrib. Syst. 2015, 26, 974–983.
29. Wang, G.; Liu, M. Dynamic Trust Model Based on Service Recommendation in Big Data. Comput. Mater. Continua 2019, 58, 845–857.
30. Zhang, Y.; Niyato, D.; Wang, P. Offloading in Mobile Cloudlet Systems with Intermittent Connectivity. IEEE Trans. Mob. Comput. 2015, 14, 2516–2529.
31. Mou, S.; Ji, Y.; Tian, C. Retail Time Series Prediction Based on EMD and Deep Learning. In Proceedings of the IEEE International Conference on Network Infrastructure and Digital Content (IC-NIDC), Guiyang, China, 22–24 August 2018.
32. Dai, S.; Li, L.; Li, Z. Modeling Vehicle Interactions via Modified LSTM Models for Trajectory Prediction. IEEE Access 2019, 7, 38287–38296.
33. Pan, L.; Qin, J.; Chen, H.; Xiang, X.; Li, C.; Chen, R. Image Augmentation-Based Food Recognition with Convolutional Neural Networks. Comput. Mater. Continua 2019, 59, 297–313.
Figure 1. Network Architecture.
Figure 2. VT's dwell trajectory in the MEC-BS coverage.
Figure 3. EMD Decomposition.
Figure 4. Architecture of LSTM (red circles are arithmetic operators and the rectangles in different colours are the gates in the LSTM).
Figure 5. LSTM model predicts (a) IMF1 and (b) IMF4.
Figure 6. Mobility for (a) centripetal, centrifugal and random directions, (b) simulation results for centripetal (best-case), centrifugal (worst-case) and random direction movement.
Figure 7. Comparison of energy consumption and computation time for different $\gamma_m^{v,E}$, $\gamma_m^{v,T}$. (a) Energy consumption. (b) Computation delay.
Figure 8. Dynamics of EEC of task with different LDRs.
Figure 9. Comparison of EEC.
Figure 10. Comparison of energy consumption.
Figure 11. Comparison of energy consumption and application completion time for different algorithms. (a) Energy consumption. (b) Completion time.
Table 1. Main Symbols and Their Meanings.
Symbol: Meaning
$B$: The number of MEC-BSs
$V$: The number of VTs
$M$: The number of tasks
$\mathcal{B}$: The set of MEC-BSs, $\mathcal{B} = \{0, 1, \ldots, B\}$
$\mathcal{V}$: The set of VTs, $\mathcal{V} = \{0, 1, \ldots, V\}$
$\mathcal{M}$: The set of tasks, $\mathcal{M} = \{0, 1, \ldots, M\}$
$L_m^v$: Task load of $m$
$D_m^v$: Input data size of $m$
$S_m^v$: Received result size of $m$
$t_{max}$: Maximum tolerance time of $m$
$A$: Computation offloading decision
$P_m^{v,T}$: Transmission power of $m$
$H_m^v$: Channel gain of $m$
$N_m^v$: The power of the channel noise
$W$: Channel bandwidth
$R_m^v$: Transmission rate of $m$
$\lambda_v$: The generation probability of $m$
$f_m^v$: Local computational capability
$f_c$: MEC-BS computational capability
$\eta_m^v$: MEC-BS availability
$T_m^{v,l}$ / $T_m^{v,c,trs}$ / $T_m^{v,c,exe}$: Local / transmission / MEC-BS execution time
$E_m^{v,l}$ / $E_m^{v,c,trs}$ / $E_m^{v,c,exe}$: Local / transmission / MEC-BS execution energy consumption
$Z_m^{v,l}$ / $Z_m^{v,c}$: Local / MEC-BS energy-efficiency cost
$\gamma_m^{v,E}$ / $\gamma_m^{v,T}$ / $\gamma_m^{v,\alpha}$: The weights of energy consumption / time / penalty in offloading for $m$
Table 2. The prediction results of each model.
Model: MAE, RMSE
ARIMA: 0.6624, 0.8800
RT: 0.5616, 0.8002
KNN: 0.5088, 0.7251
GBDT: 0.4363, 0.7033
LSTM: 0.4070, 0.6678
