Article

AoI-Aware Optimization of Service Caching-Assisted Offloading and Resource Allocation in Edge Cellular Networks

The Guangdong Key Laboratory of Information Security Technology, School of Computer Science and Engineering, Sun Yat-Sen University, Guangzhou 510006, China
*
Author to whom correspondence should be addressed.
Sensors 2023, 23(6), 3306; https://doi.org/10.3390/s23063306
Submission received: 8 February 2023 / Revised: 28 February 2023 / Accepted: 17 March 2023 / Published: 21 March 2023
(This article belongs to the Topic Wireless Communications and Edge Computing in 6G)

Abstract

The rapid development of the Internet of Things (IoT) has led to computational offloading at the edge; this is a promising paradigm for achieving intelligence everywhere. As offloading can lead to more traffic in cellular networks, cache technology is used to alleviate the channel burden. For example, a deep neural network (DNN)-based inference task requires a computation service that involves running libraries and parameters. Thus, caching the service package is necessary for repeatedly running DNN-based inference tasks. On the other hand, as the DNN parameters are usually trained in a distributed manner, IoT devices need to fetch up-to-date parameters for inference task execution. In this work, we consider the joint optimization of computation offloading, service caching, and the AoI metric. We formulate a problem to minimize the weighted sum of the average completion delay, energy consumption, and allocated bandwidth. Then, we propose the AoI-aware service caching-assisted offloading framework (ASCO) to solve it, which consists of the method of Lagrange multipliers with the KKT condition-based offloading module (LMKO), the Lyapunov optimization-based learning and update control module (LLUC), and the Kuhn–Munkres (KM) algorithm-based channel-division fetching module (KCDF). The simulation results demonstrate that our ASCO framework achieves superior performance in regard to time overhead, energy consumption, and allocated bandwidth. It is verified that our ASCO framework not only benefits the individual task but also the global bandwidth allocation.

1. Introduction

In recent decades, the Internet of things (IoT) has experienced rapid development and become ubiquitous in our daily lives. IoT devices have proliferated and evolved with advanced hardware architectures, and are being leveraged to create seamless networks that cover every corner of our globe [1]. Along with the development of IoT devices, a promising computing paradigm known as edge computing has arisen; this involves moving the location of computation from the central network to the network edge [2]. Moving the task execution from the cloud server to the multi-access edge computing (MEC) server (e.g., base station, access point) significantly alleviates the congestion of the core network and releases the burden of the cloud. Tasks with real-time requirements, computation-intensive characteristics, and high energy consumption (e.g., deep neural network (DNN)-based automatic license plate recognition) appear. Mobile devices where tasks are generated are constrained in terms of energy and computational capabilities (e.g., smartphones and unmanned aerial vehicles). Therefore, it is necessary to offload tasks to nearby MEC servers for remote execution [3], which is also known as computation offloading [4].
However, the exponential growth in the volume of offloaded data has led to increased traffic burdens on cellular networks, causing channel congestion. Under unstable network conditions, such as extremely high transmission latency, the performance of computation offloading can drastically decline. A caching policy [5] has been proposed to tackle this issue by proactively storing the service in IoT devices, including MEC servers and mobile devices, to reduce the traffic of the cellular network. If an IoT device caches the service libraries and parameters, the task can be processed directly. Hence, the task processing time can be dramatically reduced [6]. A DNN-based task is executed by a corresponding service package, consisting of reliable libraries and network parameters. Since the MEC server and mobile devices process distinct types of tasks, it is impractical to proactively cache all types of services due to storage limits. They only carry out caching whenever a task is required to be executed, and the caches are stored within a restricted time horizon.
Machine learning plays a significant role in the wireless network [7]. Considering a distributed machine learning scenario [8], the DNN is trained in a distributed manner. Then, the trained parameters of the DNN are assembled on an application server. The application server gathers all of the trained parameters and further trains a global DNN. Since the new data are generated from mobile devices, the trained parameters are updated ceaselessly and the global DNN is retrained based on the newly gathered parameters at the end of every global training round. Thus, the global DNN always reflects the up-to-date trained parameters. However, mobile devices may not fetch the latest parameters in every round. Hence, the cached DNN model may be outdated, which should be updated to keep the model fresh. To measure the freshness of the global service parameters at the MEC servers and the mobile devices, we introduce the concept of AoI [9], which is defined as the elapsed time since the generation of the latest received global service parameters response. The global service parameters are generated by training at the end of every global training round. When the MEC server or mobile device is required to execute inference tasks, it first checks whether fresh service parameters exist. If the service parameters are stale, the MEC server or mobile device needs to request the application server to fetch the up-to-date trained parameters for inference task execution.

1.1. Challenges

To realize distributed machine learning and service caching, the following challenges should be addressed:

1.1.1. Cost of the Task

On the one hand, the inference task completion time needs to be less than its corresponding maximal tolerance deadline. Thus, minimizing the inference task completion time is necessary for real-time requirements. On the other hand, the inference tasks are generated on energy-constrained mobile devices, which carefully make the offloading decisions to minimize energy consumption. Therefore, it is challenging to minimize the cost of the inference task consisting of time delay and energy consumed.

1.1.2. Bandwidth Consumption of the Application Server

If IoT devices fetch the latest service parameters from the application server, they consume the limited wireless bandwidth of cellular networks. Therefore, there is a trade-off between the fetching time and the total available bandwidth. If the application server preferentially guarantees the fetching time, the remaining bandwidth is not enough to serve other applications, and vice versa. Thus, the challenge of the time and bandwidth trade-off needs to be addressed.

1.1.3. Matching between Wireless Channels and IoT Devices

Under limited bandwidth, the matching between wireless channels and IoT devices is critical for minimizing the fetching time, since an IoT device may experience diverse channel fading and co-channel interference on different wireless channels. Hence, the third challenge is to match wireless channels to IoT devices so as to further minimize the service fetching time.

1.2. Related Work

1.2.1. Offloading with Cache

Some works make offloading decisions by considering the cache technology. In [10], an algorithm was devised by taking into account the multi-cast opportunity with cache in a multi-user scenario. A computing offloading and content caching model was proposed to reduce the time delay in the internet of vehicles in [11]. In [12], an optimal computing offloading and caching policy was designed to minimize the latency in a hybrid mobile system. In [13], an approximation collaborative computation offloading scheme and a game-theoretic collaborative computation offloading scheme were devised to achieve better offloading performance and scale well with the increasing computation task numbers. The above works do not consider the age of the cache, which may degrade the QoS.

1.2.2. Cache of Data

In terms of data caching, existing works focus on frequently reused data to improve performance. In [14], a deep supervised learning method was adopted to make real-time decisions in a dynamic vehicle network. An online caching placement and prediction-based data pre-fetch method were designed in [15] to address the uncertainty of future task parameters. In [16], a cache deployment strategy in a large-scale Wi-Fi system was adopted to maximize the caching benefit and achieve better caching performance. In [17], a joint power allocation–caching problem was formulated to maximize the downlink performance in the caching FiWi network. However, these works do not take into account the caching of the service, which is crucial in the DNN-based task.

1.2.3. Cache of Service

With respect to service caching, a few works consider caching services to enhance system efficiency. In [18], an online caching algorithm was proposed to minimize the overall computation delay. An extremely compelling (but much less studied) problem was studied in MEC-enabled dense cellular networks in [19]. In [20], an online service caching algorithm was devised to achieve the optimal worst-case competitive ratio under homogeneous task arrivals. In [21], a cache placement algorithm was adopted to minimize the data traffic forwarded to the remote cloud. The above-mentioned works only studied the cache and did not combine it with offloading.

1.2.4. Age of Information

In regard to the age of information, some works focused on minimizing the AoI of the optimized goal. In [9], the concept of AoI was first proposed, and general methods were derived to calculate the age metric, which can be applied to broad types of service systems. Dynamic cache content update scheduling algorithms were designed to minimize the average AoI of the dynamic content delivered to the users in [22]. In [23], a dueling deep R-network-based status updating algorithm was proposed by combining the dueling deep Q-network and R-learning to minimize the average cost. In [24], an algorithm aimed to obtain an optimal trade-off between age and latency was adopted for the freshness-aware buffer update in a mobile edge scenario. However, these works did not leverage the AoI metric to improve the offloading performance in an edge system.

1.3. Contribution

In this paper, we consider an AoI-aware service caching-assisted offloading scenario. Our objective is to minimize the weighted sum of the average completion delay, energy consumption, and allocated bandwidth. We decompose the original problem into three subproblems: minimizing the average time overhead cost and energy consumption of inference tasks, minimizing the required average bandwidth, and minimizing the fetching time of responding IoT devices. Furthermore, we propose the AoI-aware service caching-assisted offloading framework (ASCO) to solve these subproblems, which consists of three modules: the method of Lagrange multipliers with the KKT condition-based offloading module (LMKO), the Lyapunov optimization-based learning and update control module (LLUC), and the Kuhn–Munkres (KM) algorithm-based channel-division fetching module (KCDF). Simulation results show that our ASCO framework achieves superior performance compared to other baseline combinations in terms of time overhead, energy consumption, and allocated bandwidth. The main contributions of the paper are summarized as follows:
  • To minimize the average time overhead cost and energy consumption of inference tasks, we transform the problem into a Lagrangian dual problem. Then, we propose the LMKO module based on the method of Lagrange multipliers with Karush–Kuhn–Tucker (KKT) conditions to make an optimal offloading decision.
  • To minimize the required average bandwidth, we transform the problem into a Lyapunov plus penalty problem by minimizing the total required bandwidth while keeping the requesting data queue backlog stable. Further, we propose the LLUC module based on the Lyapunov optimization to derive an optimal dequeued rate.
  • To minimize the fetching time of IoT devices, we consider the problem of finding a perfect matching that maximizes the sum of the link weights in the equality subgraph. Moreover, we propose the KCDF module based on the KM algorithm to obtain the optimal matching decision.
The novelty of the paper consists of three aspects. First, we propose an AoI-aware service caching-assisted offloading scenario, which has not been considered in the literature. This scenario takes into account service caching in distributed machine learning, including the service libraries and parameters; it is a popular technology and worth investigating. Second, we consider the freshness of the cached service for computation offloading. Existing works omit the AoI of the cache, especially the service cache, which degrades the offloading performance; we aim to minimize the costs from both the mobile device side and the global perspective. Third, we propose the novel ASCO framework, consisting of three modules, and the proposed algorithms outperform the existing baselines.
The rest of the paper is organized as follows. We elaborate on the system model in Section 2. An analysis of the formulated problems is detailed in Section 3. Section 4 presents the proposed solution. The evaluation simulation is described in Section 5, followed by the conclusion in Section 6.

2. System Model

We consider an AoI-aware service caching asymmetric network consisting of heterogeneous mobile devices and MEC servers, as shown in Figure 1. A set of |\mathcal{N}| mobile devices indexed by n is denoted as \mathcal{N} = \{1, \dots, |\mathcal{N}|\}, e.g., smartphones and intelligent vehicles. A set of |\mathcal{M}| MEC servers indexed by m is denoted as \mathcal{M} = \{1, \dots, |\mathcal{M}|\}, e.g., access points and base stations. Since an AI-based inference task generated from the mobile device is computation-intensive and has real-time requirements, the mobile device with constrained computation capability needs to offload the inference task to the MEC server with sufficient computation resources. On the one hand, the inference task is processed by the corresponding service. For instance, an image recognition inference task is inferred by a DNN service running in service libraries, e.g., machine learning frameworks. On the other hand, caching data can alleviate the transmission traffic during the offloading; such caching includes content caching and service caching.
Considering a distributed machine learning scenario, an application server periodically trains an up-to-date DNN and then distributes it to the MEC servers and mobile devices, among which the inference task data are hardly reusable while the DNN service is frequently reusable. Thus, different from caching the inference task data, caching the DNN service significantly reduces the transmission time. Note that the DNN service consists of the service libraries and service parameters. Since a DNN with the latest parameters achieves better inference accuracy based on the periodical training, the service parameters should be updated whenever a new version is available. The service libraries are static and are only transmitted once for caching, while the service parameters are dynamic. We define the AoI as the elapsed time since the generation of the latest received service parameters at MEC servers or mobile devices, which measures the freshness of the service parameters. If the AoI of the service parameters is less than the periodical training round, the parameters are considered to be the latest version and fresh enough to be used for inference. Otherwise, since a new version has been generated at the application server, the parameters are stale and need to be updated to the latest version. Note that mobile devices do not hold the AoI information on the side of MEC servers due to privacy concerns and transmission overhead.

2.1. Task Model

Considering a time-slotted system, a set of |\mathcal{T}| timeslots indexed by t is denoted as \mathcal{T} = \{1, \dots, |\mathcal{T}|\}. The inference task generated from the mobile device n at timeslot t is denoted as k_n(t). A set of |\mathcal{J}| service types indexed by j is denoted as \mathcal{J} = \{1, \dots, |\mathcal{J}|\}. For instance, plate image recognition and face recognition are distinct types of services. Each inference task has a corresponding service type; the relationship is represented as follows: x^{typ}_{k_n(t),j} = 1 if k_n(t) is of type j; otherwise, x^{typ}_{k_n(t),j} = 0. Each inference task is executed by a service of exactly one type, i.e., \sum_{j \in \mathcal{J}} x^{typ}_{k_n(t),j} = 1.
The inference tasks are computation-intensive and have maximum time tolerance. We define the inference task profile of k n ( t ) as ( d k n ( t ) , c k n ( t ) , T k n ( t ) max ) , where d k n ( t ) is the inference task input size, c k n ( t ) is the task computation amount, and T k n ( t ) max is the task maximum completion tolerance deadline. Take the task image recognition as an example, d k n ( t ) is the image bit size, and c k n ( t ) represents the required CPU cycles of the DNN service. T k n ( t ) max is the image recognition deadline, meaning that the inference task processing delay cannot exceed the tolerance time.
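For concreteness, a task profile can be bundled into a small record. The following Python sketch uses illustrative names and values that are not from the paper; it merely mirrors the tuple (d_{k_n(t)}, c_{k_n(t)}, T^{max}_{k_n(t)}) together with the service type.

from dataclasses import dataclass

@dataclass
class InferenceTask:
    # Profile of inference task k_n(t); field names are illustrative.
    input_size_bits: float   # d_{k_n(t)}: task input size
    cpu_cycles: float        # c_{k_n(t)}: required CPU cycles
    deadline_s: float        # T^max_{k_n(t)}: maximum completion tolerance
    service_type: int        # j: type of the DNN service that executes the task

# Example: a plate-recognition task of hypothetical service type 0
task = InferenceTask(input_size_bits=2e6, cpu_cycles=5e8, deadline_s=0.5, service_type=0)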

2.2. Communication Model

The application server, mobile devices, and MEC servers mutually communicate under the cellular network. Based on the Shannon theory, the transmission rate between the mobile device n and the MEC server m can be expressed as
r_{n,m}(t) = b_{n,m}(t) \log_2 \Big( 1 + \frac{p_{n,m}(t) h_{n,m}(t)}{\sigma^2 + I_{n,m}(t)} \Big), (1)
where b_{n,m}(t) is the allocated bandwidth, p_{n,m}(t) is the transmission power from n to m, h_{n,m}(t) is the channel gain, \sigma^2 represents the additive white Gaussian noise power, and I_{n,m}(t) is the co-channel interference that mobile device n suffers on the cellular channel. The transmission power affects the achievable spectral efficiency, and a larger allocated bandwidth leads to a higher transmission rate. The channel gain between each pair varies due to mobility. Since mobile devices are energy-constrained, the transmission power has an upper bound: p_{n,m}(t) \le p^{max}, where p^{max} is the maximum transmission power.
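As a minimal illustration of (1) and (2), the Python snippet below computes the achievable uplink rate and the resulting upload delay; the numeric values are placeholders rather than parameters used in the paper.

import math

def uplink_rate(bandwidth_hz, tx_power_w, channel_gain, noise_w, interference_w):
    # Shannon rate r_{n,m}(t) of Equation (1).
    sinr = tx_power_w * channel_gain / (noise_w + interference_w)
    return bandwidth_hz * math.log2(1.0 + sinr)

def upload_delay(input_size_bits, rate_bps, offloaded=True):
    # Upload time T^upl of Equation (2); zero if the task is not offloaded.
    return input_size_bits / rate_bps if offloaded else 0.0

r = uplink_rate(bandwidth_hz=1e6, tx_power_w=0.2, channel_gain=1e-6,
                noise_w=1e-9, interference_w=5e-10)
print(upload_delay(2e6, r))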
Moreover, the uploading time of inference task k n ( t ) from the mobile device n to the MEC server m can be calculated as
T^{upl}_{n,m,k_n(t)} = \frac{x^{exe}_{m,k_n(t)}(t) \, d_{k_n(t)}}{r_{n,m}(t)}, (2)
where x^{exe}_{m,k_n(t)}(t) is the offloading decision, defined as x^{exe}_{m,k_n(t)}(t) = 1 if k_n(t) is offloaded to m and x^{exe}_{m,k_n(t)}(t) = 0 otherwise. It satisfies \sum_{m \in \mathcal{M}} x^{exe}_{m,k_n(t)}(t) \le 1, meaning that each inference task is offloaded to at most one MEC server.
The transmission rate from the application server to the MEC server r 0 , m ( t ) and the transmission rate from the application server to the mobile device r 0 , n ( t ) can be similarly calculated with (1).

2.3. Caching Model

For the purpose of alleviating the transmission traffic, the MEC servers and mobile devices have to cache the DNN service in their caching storage if they have no corresponding cache. Let d j lib be the library size and d j par be the parameter size of service type j, respectively.
Therefore, the fetching time for the DNN service of type j to the MEC server m can be calculated as
T^{cac}_{0,m,j} = T^{lib}_{0,m,j} + T^{par}_{0,m,j} = (1 - x^{cac}_{m,j}(t)) \Big( \frac{d^{lib}_j}{r_{0,m}(t)} + \frac{d^{par}_j}{r_{0,m}(t)} \Big), (3)
where T 0 , m , j lib and T 0 , m , j par are the fetching times of the service libraries and parameters at the MEC servers, respectively, and x m , j cac ( t ) is the service caching placement decision at the MEC server m, defined as x m , j cac ( t ) = 1 if j is cached in m; otherwise, x m , j cac ( t ) = 0 . Similarly, the fetching time for the DNN service of type j to mobile device n can be represented as T 0 , n , j cac .
Due to the limited caching capacity of the MEC server, there is a constraint on the storage cache:
\sum_{j \in \mathcal{J}} x^{cac}_{m,j}(t) (d^{lib}_j + d^{par}_j) \le d^{max}_m, (4)
where the total DNN service size of all types cannot exceed the storage upper bound d m max . The total DNN service size in mobile devices has a similar constraint.
Fresh parameters allow a DNN task to be inferred with satisfactory performance. To measure the freshness of the DNN service parameters, we introduce the concept of AoI to quantify the age at the MEC server m:
\Delta_{m,j}(t) = t - t_j, (5)
where t_j is the timeslot of the latest periodical training of service type j. The same calculation of \Delta_{n,j}(t) applies to mobile devices. The updating mechanism at the MEC server m can be defined as \Delta_{m,j}(t) = d^{par}_j / r_{0,m}(t) if fetching ends at timeslot t and \Delta_{m,j}(t) = \Delta_{m,j}(t-1) + 1 otherwise; at the mobile device n, \Delta_{n,j}(t) = d^{par}_j / r_{0,n}(t) if fetching ends at timeslot t and \Delta_{n,j}(t) = \Delta_{n,j}(t-1) + 1 otherwise. Since the DNN is trained periodically at the application server, the DNN training round of type j can be denoted as T^{int}_j. If the AoI of the parameters is less than the training round, the parameters can be regarded as fresh. Let x^{fre}_{m,j}(t) be the service parameter freshness status, defined as follows: x^{fre}_{m,j}(t) = 1 if \Delta_{m,j}(t) < T^{int}_j and x^{fre}_{m,j}(t) = 0 otherwise, and x^{fre}_{n,j}(t) = 1 if \Delta_{n,j}(t) < T^{int}_j and x^{fre}_{n,j}(t) = 0 otherwise. If x^{fre}_{m,j}(t) = 0 or x^{fre}_{n,j}(t) = 0, the MEC server or the mobile device is required to fetch an up-to-date version of the service parameters from the application server; the fetching times are T^{par}_{0,m,j} and T^{par}_{0,n,j}, respectively.
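The AoI bookkeeping of (5) and the freshness test can be summarized in a few lines. The sketch below is an illustration of the update rule described above, with hypothetical names; it is not the authors' implementation.

def update_aoi(aoi_prev, fetch_finished, param_size_bits, rate_bps):
    # If fetching ends in this timeslot, the age resets to the fetch duration
    # d_j^par / r_0(t); otherwise it grows by one timeslot.
    if fetch_finished:
        return param_size_bits / rate_bps
    return aoi_prev + 1

def is_fresh(aoi, training_round):
    # x^fre = 1 iff the cached parameters are younger than the training round T_j^int.
    return aoi < training_round

aoi = update_aoi(aoi_prev=7, fetch_finished=False, param_size_bits=8e6, rate_bps=4e6)
print(is_fresh(aoi, training_round=10))  # True: age 8 is below the round length 10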

2.4. Execution Model

In terms of execution, the inference task is executed under the existence of the corresponding service. If there is no DNN service caching at the MEC server or mobile device, they are required to fetch a DNN service cache and then further carry out the execution. After fetching the service caching, the execution delay of inference task k n ( t ) at the MEC server m is calculated as
T^{exe}_{m,k_n(t)} = \frac{x^{exe}_{m,k_n(t)}(t) \, c_{k_n(t)}}{f_m(t)}, (6)
where f m ( t ) is the computation capability of the MEC server m. Likewise, the execution delay at the mobile device n is calculated as
T^{exe}_{n,k_n(t)} = \Big( 1 - \sum_{m \in \mathcal{M}} x^{exe}_{m,k_n(t)}(t) \Big) \frac{c_{k_n(t)}}{f_n(t)}, (7)
where f_n(t) is the constant computation capability of the mobile device n. Here, the computation capability of a mobile device is less than that of an MEC server, and f_m(t) has an upper bound: f_n(t) < f_m(t) \le f^{max}, where f^{max} is the maximum computation capability.

2.5. Energy Model

From the perspective of energy consumption, we focus on the energy of mobile devices since they usually have batteries of limited capacity while the MEC server is connected to the power grid. Hence, the energy consumed for the local execution of the mobile device n can be calculated as
E^{exe}_{n,k_n(t)} = \Big( 1 - \sum_{m \in \mathcal{M}} x^{exe}_{m,k_n(t)}(t) \Big) \mu \, c_{k_n(t)} f_n^2(t), (8)
where μ refers to the effective switched capacitance.
In the case of offloading, the energy consumption of the mobile device only includes the uploading energy, calculated as
E^{upl}_{n,m,k_n(t)} = x^{exe}_{m,k_n(t)}(t) \, p_{n,m}(t) \frac{d_{k_n(t)}}{r_{n,m}(t)}. (9)
Energy consumption is another crucial metric of mobile devices. The cost of the mobile device consists of the time delay and energy consumption with distinct emphasis.

2.6. Cost Model

At timeslot t, the mobile device n with the generated DNN inference task k n ( t ) can make an offloading decision to process the task. According to the service caching placement decision and service parameter freshness status, the cost of the mobile device can be divided into the following cases, as seen in Figure 1.

2.6.1. Case 1: Offloading with Fresh Cache

First, in case 1, the mobile device offloads the inference task to the MEC server with caching service libraries and fresh parameters. The combination of the decision and status satisfies: x k n ( t ) , 1 ( t ) = x m , k n ( t ) exe ( t ) x k n ( t ) , j typ x m , j cac ( t ) x m , j fre ( t ) = 1 . The total time delay, in this case, can be calculated as T k n ( t ) , 1 = T n , m , k n ( t ) upl + T m , k n ( t ) exe . In addition, the total energy consumption of the mobile device is represented as E k n ( t ) , 1 = E n , m , k n ( t ) upl .

2.6.2. Case 2: Offloading with Stale Cache

In Case 2, the mobile device offloads the inference task to the MEC server with caching service libraries and stale parameters. The combination of the decision and status satisfies: x_{k_n(t),2}(t) = x^{exe}_{m,k_n(t)}(t) \, x^{typ}_{k_n(t),j} \, x^{cac}_{m,j}(t) \, (1 - x^{fre}_{m,j}(t)) = 1. The total time delay, in this case, can be calculated as T_{k_n(t),2} = T^{upl}_{n,m,k_n(t)} + T^{par}_{0,m,j} + T^{exe}_{m,k_n(t)}. Moreover, the total energy consumption of the mobile device is denoted as E_{k_n(t),2} = E^{upl}_{n,m,k_n(t)}.

2.6.3. Case 3: Offloading without Cache

Then, in case 3, the mobile device offloads the inference task to the MEC server without any DNN service cache. The combination of the decision and status satisfies x_{k_n(t),3}(t) = x^{exe}_{m,k_n(t)}(t) \, x^{typ}_{k_n(t),j} \, (1 - x^{cac}_{m,j}(t)) = 1. The total time delay, in this case, can be calculated as follows: T_{k_n(t),3} = T^{upl}_{n,m,k_n(t)} + T^{lib}_{0,m,j} + T^{par}_{0,m,j} + T^{exe}_{m,k_n(t)}. Likewise, the total energy consumption of the mobile device is also represented as E_{k_n(t),3} = E^{upl}_{n,m,k_n(t)}.

2.6.4. Case 4: Local Execution with Fresh Cache

For local execution, in case 4, the mobile device locally executes the inference task with caching service libraries and fresh parameters. The combination of the decision and status satisfies x_{k_n(t),4}(t) = (1 - \sum_{m \in \mathcal{M}} x^{exe}_{m,k_n(t)}(t)) \, x^{typ}_{k_n(t),j} \, x^{cac}_{n,j}(t) \, x^{fre}_{n,j}(t) = 1. The total time delay, in this case, can be calculated as follows: T_{k_n(t),4} = T^{exe}_{n,k_n(t)}. In addition, the total energy consumption of the mobile device is denoted as E_{k_n(t),4} = E^{exe}_{n,k_n(t)}.

2.6.5. Case 5: Local Execution with Stale Cache

In case 5, the mobile device locally executes the inference task with caching service libraries and stale parameters. The combination of the decision and status satisfies x_{k_n(t),5}(t) = (1 - \sum_{m \in \mathcal{M}} x^{exe}_{m,k_n(t)}(t)) \, x^{typ}_{k_n(t),j} \, x^{cac}_{n,j}(t) \, (1 - x^{fre}_{n,j}(t)) = 1. The total time delay, in this case, can be calculated as T_{k_n(t),5} = T^{par}_{0,n,j} + T^{exe}_{n,k_n(t)}. Then, the total energy consumption of the mobile device is calculated as E_{k_n(t),5} = E^{exe}_{n,k_n(t)}.

2.6.6. Case 6: Local Execution without Cache

Finally, in case 6, the mobile device locally executes the inference task without any DNN service cache. The combination of the decision and status satisfies x_{k_n(t),6}(t) = (1 - \sum_{m \in \mathcal{M}} x^{exe}_{m,k_n(t)}(t)) \, x^{typ}_{k_n(t),j} \, (1 - x^{cac}_{n,j}(t)) = 1. The total time delay, in this case, can be calculated as T_{k_n(t),6} = T^{lib}_{0,n,j} + T^{par}_{0,n,j} + T^{exe}_{n,k_n(t)}. Similarly, the total energy consumption of the mobile device is also denoted as E_{k_n(t),6} = E^{exe}_{n,k_n(t)}.
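To make the six cases concrete, the sketch below evaluates the per-case delay and mobile-device energy exactly as enumerated above, given the component delays and energies; it is an illustrative summary under assumed variable names, not the authors' code.

def case_delay_energy(case, t_upl, t_exe_mec, t_exe_loc,
                      t_lib_mec, t_par_mec, t_lib_loc, t_par_loc,
                      e_upl, e_exe_loc):
    # Total delay and mobile-device energy for cases 1-6 of Section 2.6.
    if case == 1:   # offloading with a fresh cache at the MEC server
        return t_upl + t_exe_mec, e_upl
    if case == 2:   # offloading with stale parameters: fetch parameters first
        return t_upl + t_par_mec + t_exe_mec, e_upl
    if case == 3:   # offloading without any cache: fetch libraries and parameters
        return t_upl + t_lib_mec + t_par_mec + t_exe_mec, e_upl
    if case == 4:   # local execution with a fresh cache
        return t_exe_loc, e_exe_loc
    if case == 5:   # local execution with stale parameters
        return t_par_loc + t_exe_loc, e_exe_loc
    if case == 6:   # local execution without any cache
        return t_lib_loc + t_par_loc + t_exe_loc, e_exe_loc
    raise ValueError("case must be in 1..6")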

3. Problem Formulation

In the AoI-aware caching-assisted asymmetric offloading scenario, both the average cost of the mobile device and the total bandwidth between the application server and the MEC servers or mobile devices should be considered due to their crucial impact. On the one hand, minimizing the average cost of the mobile device ensures that the real-time requirements of the generated inference tasks are met and that the battery energy is conserved. On the other hand, since the bandwidth of the application server is limited and it also bears other real-time inference tasks, the total bandwidth consumption between the application server and the MEC servers or mobile devices must be minimized. Accordingly, the average cost of the completion time delay is as follows:
T^{ave} = \frac{1}{|\mathcal{T}||\mathcal{N}|} \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \Big( \sum_{m \in \mathcal{M}} \sum_{i=1}^{3} x_{k_n(t),i}(t) T_{k_n(t),i} + \sum_{i=4}^{6} x_{k_n(t),i}(t) T_{k_n(t),i} \Big), (10)
where x_{k_n(t),i}(t) is defined as x_{k_n(t),i}(t) = 1 if k_n(t) is executed via case i and x_{k_n(t),i}(t) = 0 otherwise, and it satisfies that each inference task must be executed via one of the cases at one MEC server or the local mobile device: \sum_{m \in \mathcal{M}} \sum_{i=1}^{3} x_{k_n(t),i}(t) + \sum_{i=4}^{6} x_{k_n(t),i}(t) = 1. Then, the average cost of energy consumption can be denoted as
E^{ave} = \frac{1}{|\mathcal{T}||\mathcal{N}|} \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \Big( \sum_{m \in \mathcal{M}} \sum_{i=1}^{3} x_{k_n(t),i}(t) E_{k_n(t),i} + \sum_{i=4}^{6} x_{k_n(t),i}(t) E_{k_n(t),i} \Big), (11)
and the time average global allocated bandwidth between the application server and the MEC servers or mobile devices is denoted as b_0.
Therefore, we formally formulate the original problem to minimize the time average global allocation bandwidth and the average cost of the mobile device consisting of inference task completion delay and energy consumption:
\min_{x_{k_n(t),i}(t), \, f_m(t), \, p_{n,m}(t)} \; Z = \xi^{tim} T^{ave} + \xi^{ene} E^{ave} + \xi^{ban} b_0 (12)
s.t. \; T_{k_n(t),i} \le T^{max}_{k_n(t)}, \; \forall n \in \mathcal{N}, t \in \mathcal{T}, i \in \{1, \dots, 6\}, (13)
E_{k_n(t),i} \le E^{max}_{k_n(t)}, \; \forall n \in \mathcal{N}, t \in \mathcal{T}, i \in \{1, \dots, 6\}, (14)
\sum_{m \in \mathcal{M}} \sum_{i=1}^{3} x_{k_n(t),i}(t) + \sum_{i=4}^{6} x_{k_n(t),i}(t) = 1, (15)
x^{exe}_{m,k_n(t)}(t) \in \{0, 1\}, \; \forall m \in \mathcal{M}, n \in \mathcal{N}, t \in \mathcal{T}, (16)
x^{cac}_{m,j}(t) \in \{0, 1\}, \; \forall m \in \mathcal{M}, j \in \mathcal{J}, t \in \mathcal{T}, (17)
x^{fre}_{m,j}(t) \in \{0, 1\}, \; \forall m \in \mathcal{M}, j \in \mathcal{J}, t \in \mathcal{T}, (18)
\sum_{j \in \mathcal{J}} x^{cac}_{m,j}(t) (d^{lib}_j + d^{par}_j) \le d^{max}_m, \; \forall m \in \mathcal{M}, t \in \mathcal{T}, (19)
\sum_{j \in \mathcal{J}} x^{cac}_{n,j}(t) (d^{lib}_j + d^{par}_j) \le d^{max}_n, \; \forall n \in \mathcal{N}, t \in \mathcal{T}, (20)
f_n(t) < f_m(t) \le f^{max}, \; \forall m \in \mathcal{M}, n \in \mathcal{N}, t \in \mathcal{T}, (21)
p_{n,m}(t) \le p^{max}, \; \forall m \in \mathcal{M}, n \in \mathcal{N}, t \in \mathcal{T}, (22)
where x_{k_n(t),i}(t), f_m(t), and p_{n,m}(t) are the optimization variables, and \xi^{tim}, \xi^{ene}, and \xi^{ban} are the given weights of the average time cost, the average energy cost, and the average global allocated bandwidth, respectively. (13) and (14) indicate that the inference task completion time delay and consumed energy have upper bounds. According to (15), each inference task has to be executed via exactly one case at one MEC server or the local mobile device. (16)–(18) show that the decision variables are binary. (19) and (20) constrain the caching capacity limit of heterogeneous services at the MEC server and the mobile device, respectively. (21) shows that the computation capability of the MEC server is higher than that of the mobile device and has a maximum. (22) restricts the upper bound of the uplink transmission power of the mobile device.
x^{exe}_{m,k_n(t)}(t), x^{cac}_{m,j}(t), and x^{fre}_{m,j}(t) are discrete binary variables, while p_{n,m}(t) and f_m(t) are continuous variables. The objective function is not linear in the variables, which are mutually coupled. Therefore, problem (12) is an MINLP problem, which is known to be NP-hard, and it is difficult to solve in polynomial time. Combined with the practical asymmetric environment, it is even more challenging to analyze and to propose a solution.

3.1. Average Cost Minimization Problem

From the perspective of mobile devices, we first decompose problem (12) into a problem to minimize the average cost of mobile devices:
\min_{x_{k_n(t),i}(t), \, f_m(t), \, p_{n,m}(t)} \; \xi^{tim} T^{ave} + \xi^{ene} E^{ave} (23)
s.t. \; (C1)–(C10),
where mobile devices make their decisions based on the weighted sum of time delay and energy consumption.

3.2. Bandwidth Consumption Minimization Problem

From the perspective of the application server, the total bandwidth allocated to the requested MEC server or mobile device is constrained when it transmits the requested service data. We secondly decompose problem (12) into a problem minimizing the consumed bandwidth of the application server:
\min \; \xi^{ban} b_0 (24)
s.t. \; b_0 \le \frac{1}{|\mathcal{T}|} \sum_{t \in \mathcal{T}} b^{max}_0, (25)
where b 0 max is the total allocated bandwidth upper bound of the application server at one timeslot.

3.3. Service Fetching Time Minimization Problem

When the application server transmits the service data to the MEC servers or mobile devices, the total transmission time of the responding service data can be minimized based on the total allocated bandwidth. We further formulate problem (26):
\min \; T^{fet}(t) (26)
s.t. \; b_{0,m}(t) + \sum_{n \in \mathcal{N}(t)} b_{0,n}(t) \le b_0(t), (27)
x_{k_n(t),2}(t) + x_{k_n(t),3}(t) + x_{k_n(t),5}(t) + x_{k_n(t),6}(t) = 1, (28)
x_{k_n(t),i}(t) \in \{0, 1\}, \; i \in \{2, 3, 5, 6\}, (29)
where T^{fet}(t) is the total transmission time of the responding service data. (27) indicates that the total allocated bandwidth has an upper bound, while (28) and (29) limit the combination decisions.
Here, we clarify the connections among these three subproblems and how they can work together to reach the optimal solution for problem (12). Problem (12) jointly minimizes the cost of mobile devices and the global allocation bandwidth. Firstly, problem (23) minimizes the mobile device cost, including the time delay and energy consumption. Secondly, problem (24) minimizes the time average allocation bandwidth from a global perspective. Thirdly, problem (26) further minimizes the responding service transmission time after making the offloading decision based on the solution of the problem (23).

4. Solution

In this section, we propose three modules to, respectively, solve the subproblems in the last section. In particular, the LMKO module can minimize the average cost of mobile devices. To minimize the consumed bandwidth of the application server, we devise the LLUC module. Moreover, the KCDF module minimizes the total transmission time of the responding service data.

4.1. Method of Lagrange Multipliers with the KKT Condition-Based Offloading Module (LMKO)

To minimize the average cost of mobile devices, we transform problem (23) into a tractable form and further leverage convex optimization to solve it. According to constraint (13), the time delay of each case cannot exceed the inference task completion tolerance deadline. Hence, we set the time delay of the case with the most procedures to the maximum tolerance time to reduce the number of optimization variables: T_{k_n(t),3} = T^{max}_{k_n(t)}. For a succinct expression, we define the notations A and B as
A = T^{max}_{k_n(t)} - T^{lib}_{0,m,j} - T^{par}_{0,m,j}, (30)
B = \frac{\sigma^2 + I_{n,m}(t)}{h_{n,m}(t)}. (31)
Then, f_m(t) and p_{n,m}(t) can be transformed into the function values of g_1(T^{exe}_{m,k_n(t)}(t)) and g_2(T^{exe}_{m,k_n(t)}(t)), respectively:
f_m(t) = g_1(T^{exe}_{m,k_n(t)}(t)) = \frac{c_{k_n(t)}}{T^{exe}_{m,k_n(t)}(t)}, (32)
p_{n,m}(t) = g_2(T^{exe}_{m,k_n(t)}(t)) = \Big( 2^{\frac{d_{k_n(t)}}{b_{n,m}(t)(A - T^{exe}_{m,k_n(t)}(t))}} - 1 \Big) B. (33)
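Equations (32) and (33) give the resource allocation in closed form once the MEC execution time T^exe is fixed; the short Python sketch below transcribes them with illustrative arguments.

def cpu_frequency(cpu_cycles, t_exe):
    # g_1 in Equation (32): f_m(t) = c_{k_n(t)} / T^exe.
    return cpu_cycles / t_exe

def tx_power(input_size_bits, bandwidth_hz, big_a, t_exe, big_b):
    # g_2 in Equation (33): p = (2^{d / (b (A - T^exe))} - 1) * B,
    # with A = T^max - T^lib - T^par and B = (sigma^2 + I) / h.
    return (2.0 ** (input_size_bits / (bandwidth_hz * (big_a - t_exe))) - 1.0) * big_b

print(cpu_frequency(5e8, 0.1))                               # required CPU frequency (cycles/s)
print(tx_power(2e6, 1e6, big_a=0.4, t_exe=0.1, big_b=1e-3))  # required uplink power (W)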
Moreover, the cost of the objective function in problem (23) can be calculated as
Z_1 = \xi^{tim} \frac{1}{|\mathcal{T}||\mathcal{N}|} \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \Big( \sum_{m \in \mathcal{M}} \big( x_{k_n(t),1}(t) A + x_{k_n(t),2}(t) (T^{max}_{k_n(t)} - T^{lib}_{0,m,j}) + x_{k_n(t),3}(t) T^{max}_{k_n(t)} \big) + \sum_{i=4}^{6} x_{k_n(t),i}(t) T_{k_n(t),i} \Big) + \xi^{ene} \frac{1}{|\mathcal{T}||\mathcal{N}|} \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \Big( \sum_{m \in \mathcal{M}} \sum_{i=1}^{3} x_{k_n(t),i}(t) \, g_2(T^{exe}_{m,k_n(t)}(t)) (A - T^{exe}_{m,k_n(t)}(t)) + \sum_{i=4}^{6} x_{k_n(t),i}(t) E_{k_n(t),i} \Big). (34)
Therefore, problem (23) can be further transformed into:
\min_{T^{exe}_{m,k_n(t)}(t), \, x_{k_n(t),i}(t)} \; Z_1 (35)
s.t. \; g_1(T^{exe}_{m,k_n(t)}(t)) \le f^{max}, (36)
g_2(T^{exe}_{m,k_n(t)}(t)) \le p^{max}, (37)
x_{k_n(t),i}(t) \in [0, 1], (38)
where T^{exe}_{m,k_n(t)}(t) and x_{k_n(t),i}(t) are the optimization variables. Constraint (36) reflects the computation capability limit in terms of T^{exe}_{m,k_n(t)}(t), and (37) constrains the relationship between the maximum power and T^{exe}_{m,k_n(t)}(t). (38) indicates that the decision combination is relaxed to be continuous.
Subsequently, we leverage the method using Lagrange multipliers with KKT conditions [25] to solve problem (35). Before this, we prove that the problem (35) is convex.
Now, we define another function of T m , k n ( t ) exe ( t ) as follows:
g_3(T^{exe}_{m,k_n(t)}(t)) = g_2(T^{exe}_{m,k_n(t)}(t)) \big( A - T^{exe}_{m,k_n(t)}(t) \big), (39)
and take the second partial derivative of g 3 ( T m , k n ( t ) exe ( t ) ) with respect to T m , k n ( t ) exe ( t ) :
\frac{\partial^2 g_3}{\partial (T^{exe}_{m,k_n(t)})^2} = \frac{\ln^2 2 \; d^2_{k_n(t)} B \; 2^{\frac{d_{k_n(t)}}{b_{n,m}(t)(A - T^{exe}_{m,k_n(t)}(t))}}}{b^2_{n,m}(t) \big( A - T^{exe}_{m,k_n(t)}(t) \big)^3}. (40)
All of the terms in (40) are positive, so \partial^2 g_3 / \partial (T^{exe}_{m,k_n(t)})^2 > 0 and g_3(T^{exe}_{m,k_n(t)}(t)) is a convex function. Similarly, g_3(x_{k_n(t),i}(t) T^{exe}_{m,k_n(t)}(t)) is a convex function. Furthermore, we define a perspective function of g_3(x_{k_n(t),i}(t) T^{exe}_{m,k_n(t)}(t)) as
g_4\big( x_{k_n(t),i}(t) T^{exe}_{m,k_n(t)}(t), x_{k_n(t),i}(t) \big) = x_{k_n(t),i}(t) \, g_3\Big( \frac{x_{k_n(t),i}(t) T^{exe}_{m,k_n(t)}(t)}{x_{k_n(t),i}(t)} \Big) = x_{k_n(t),i}(t) \, g_2(T^{exe}_{m,k_n(t)}(t)) \big( A - T^{exe}_{m,k_n(t)}(t) \big). (41)
According to (41), g 4 ( x k n ( t ) , i ( t ) T m , k n ( t ) exe ( t ) , x k n ( t ) , i ( t ) ) is convex so that the objective function of the problem (35) is a convex function. Then, the second partial derivatives of g 1 ( T m , k n ( t ) exe ( t ) ) and g 2 ( T m , k n ( t ) exe ( t ) ) with respect to T m , k n ( t ) exe ( t ) are, respectively, calculated as (42) and (43):
\frac{\partial^2 g_1}{\partial (T^{exe}_{m,k_n(t)})^2} = \frac{2 c_{k_n(t)}}{\big( T^{exe}_{m,k_n(t)}(t) \big)^3}, (42)
\frac{\partial^2 g_2}{\partial (T^{exe}_{m,k_n(t)})^2} = \frac{\ln 2 \; d_{k_n(t)} B \; 2^{\frac{d_{k_n(t)}}{b_{n,m}(t)(A - T^{exe}_{m,k_n(t)}(t))}}}{b_{n,m}(t) \big( A - T^{exe}_{m,k_n(t)}(t) \big)^3} \Big( \frac{\ln 2 \; d_{k_n(t)}}{b_{n,m}(t) \big( A - T^{exe}_{m,k_n(t)}(t) \big)} + 2 \Big). (43)
All terms in (42) and (43) are positive, so \partial^2 g_1 / \partial (T^{exe}_{m,k_n(t)})^2 > 0 and \partial^2 g_2 / \partial (T^{exe}_{m,k_n(t)})^2 > 0. Thus, constraints (36) and (37) are convex in T^{exe}_{m,k_n(t)}(t), and the feasible region of problem (35) is a convex set. We can therefore conclude that problem (35) is convex. In addition, if p^{max} and f^{max} are high enough, we can find a feasible solution that makes all of the constraints slack, hence satisfying the Slater condition. For a convex problem, the Slater condition is sufficient for strong duality to hold between the problem and its dual; in other words, they have zero duality gap and their optimal solutions are equal.
Next, we define the Lagrangian relaxation function of the problem (35) as
L\big( T^{exe}_{m,k_n(t)}(t), x_{k_n(t),i}(t), \lambda_{m,k_n(t),1}, \lambda_{m,k_n(t),2} \big) = Z_1 + \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \sum_{m \in \mathcal{M}} \Big( \lambda_{m,k_n(t),1} \big( g_1(T^{exe}_{m,k_n(t)}(t)) - f^{max} \big) + \lambda_{m,k_n(t),2} \big( g_2(T^{exe}_{m,k_n(t)}(t)) - p^{max} \big) \Big), (44)
where λ m , k n ( t ) , 1 and λ m , k n ( t ) , 2 are the Lagrangian multipliers. The Lagrangian relaxation function relaxes the constraints of the Problem (35). Here, we formally transform problem (35) into its dual problem:
\max_{\lambda_{m,k_n(t),1}, \, \lambda_{m,k_n(t),2}} \; \min_{T^{exe}_{m,k_n(t)}(t), \, x_{k_n(t),i}(t)} \; L (45)
s.t. \; \lambda_{m,k_n(t),1} \ge 0, \; \lambda_{m,k_n(t),2} \ge 0, (46)
where we first fix \lambda_{m,k_n(t),1}, \lambda_{m,k_n(t),2} and minimize L to obtain the infimum, and then fix T^{exe}_{m,k_n(t)}(t), x_{k_n(t),i}(t) and maximize the infimum. (46) indicates that the Lagrangian multipliers are non-negative.
We further detail the KKT condition of the problem (45):
\frac{\partial L}{\partial T^{exe}_{m,k_n(t)}} = \frac{\xi^{ene}}{|\mathcal{T}||\mathcal{N}|} \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \sum_{m \in \mathcal{M}} x_{k_n(t),i}(t) \frac{\partial g_3}{\partial T^{exe}_{m,k_n(t)}} + \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \sum_{m \in \mathcal{M}} \Big( \lambda_{m,k_n(t),1} \frac{\partial g_1}{\partial T^{exe}_{m,k_n(t)}} + \lambda_{m,k_n(t),2} \frac{\partial g_2}{\partial T^{exe}_{m,k_n(t)}} \Big) = 0, (47)
\lambda_{m,k_n(t),1} \big( g_1(T^{exe}_{m,k_n(t)}(t)) - f^{max} \big) = 0, (48)
\lambda_{m,k_n(t),2} \big( g_2(T^{exe}_{m,k_n(t)}(t)) - p^{max} \big) = 0, (49)
g_1(T^{exe}_{m,k_n(t)}(t)) \le f^{max}, (50)
g_2(T^{exe}_{m,k_n(t)}(t)) \le p^{max}, (51)
\lambda_{m,k_n(t),1} \ge 0, \; \lambda_{m,k_n(t),2} \ge 0, (52)
where
\frac{\partial g_3}{\partial T^{exe}_{m,k_n(t)}} = B \Big( \frac{\ln 2 \; d_{k_n(t)} \; 2^{\frac{d_{k_n(t)}}{b_{n,m}(t)(A - T^{exe}_{m,k_n(t)})}}}{b_{n,m}(t) \big( A - T^{exe}_{m,k_n(t)} \big)} - 2^{\frac{d_{k_n(t)}}{b_{n,m}(t)(A - T^{exe}_{m,k_n(t)})}} + 1 \Big), (53)
\frac{\partial g_1}{\partial T^{exe}_{m,k_n(t)}} = -\frac{c_{k_n(t)}}{\big( T^{exe}_{m,k_n(t)} \big)^2}, (54)
and
\frac{\partial g_2}{\partial T^{exe}_{m,k_n(t)}} = \frac{B \ln 2 \; d_{k_n(t)} \; 2^{\frac{d_{k_n(t)}}{b_{n,m}(t)(A - T^{exe}_{m,k_n(t)})}}}{b_{n,m}(t) \big( A - T^{exe}_{m,k_n(t)} \big)^2}. (55)
In KKT conditions, (47) is the dual stationarity condition. (48) and (49) are the complementary slackness conditions. (50) and (51) are the primal feasibility conditions while (52) is the dual feasibility condition.
Since (47) is a transcendental equation, we derive the solution by the Newton iteration method:
T^{exe}_{m,k_n(t)} \leftarrow T^{exe}_{m,k_n(t)} - \frac{\partial L / \partial T^{exe}_{m,k_n(t)}}{\partial^2 L / \partial (T^{exe}_{m,k_n(t)})^2}, (56)
where
\frac{\partial^2 L}{\partial (T^{exe}_{m,k_n(t)})^2} = \frac{\xi^{ene}}{|\mathcal{T}||\mathcal{N}|} \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \sum_{m \in \mathcal{M}} x_{k_n(t),i}(t) \frac{\partial^2 g_3}{\partial (T^{exe}_{m,k_n(t)})^2} + \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \sum_{m \in \mathcal{M}} \Big( \lambda_{m,k_n(t),1} \frac{\partial^2 g_1}{\partial (T^{exe}_{m,k_n(t)})^2} + \lambda_{m,k_n(t),2} \frac{\partial^2 g_2}{\partial (T^{exe}_{m,k_n(t)})^2} \Big). (57)
After iterations of the Newton method, we obtain the optimal solution T m , k n ( t ) exe * . Then, the optimal resource allocation f m ( t ) * and p n , m ( t ) * can be calculated according to (32) and (33), respectively.
Moreover, we leverage the subgradient method to update the Lagrangian multipliers:
\lambda'_{m,k_n(t),1} = \max \big\{ \lambda_{m,k_n(t),1} + \alpha_{m,k_n(t),1} \big( g_1(T^{exe}_{m,k_n(t)}(t)) - f^{max} \big), 0 \big\}, (58)
\lambda'_{m,k_n(t),2} = \max \big\{ \lambda_{m,k_n(t),2} + \alpha_{m,k_n(t),2} \big( g_2(T^{exe}_{m,k_n(t)}(t)) - p^{max} \big), 0 \big\}, (59)
where \alpha_{m,k_n(t),1} and \alpha_{m,k_n(t),2} are the diminishing step sizes.
Since problem (45) is convex with respect to the optimization variables, the update iteration converges to the optimal solution provided that the step sizes satisfy \sum_{\tau^{sub}=1}^{\infty} \alpha_{m,k_n(t),1}(\tau^{sub}) = \infty, \sum_{\tau^{sub}=1}^{\infty} \alpha_{m,k_n(t),2}(\tau^{sub}) = \infty, \sum_{\tau^{sub}=1}^{\infty} \alpha^2_{m,k_n(t),1}(\tau^{sub}) < \infty, and \sum_{\tau^{sub}=1}^{\infty} \alpha^2_{m,k_n(t),2}(\tau^{sub}) < \infty, where \tau^{sub} is the iteration index. The proof is given in [26].
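The numerical core of LMKO alternates Newton steps on T^exe for the stationarity condition (47), using (56), with projected subgradient updates of the multipliers, (58) and (59). The sketch below shows this inner loop for a single task and MEC server; the derivative functions are passed in as callables, and all names are illustrative rather than taken from the authors' code.

def lmko_inner_solve(grad_L, hess_L, g1, g2, f_max, p_max, t_init,
                     newton_iters=20, step=0.05, tol=1e-6, max_rounds=200):
    # grad_L(t, lam1, lam2) and hess_L(t, lam1, lam2) evaluate (47) and (57);
    # g1 and g2 map T^exe to the required CPU frequency (32) and transmit power (33).
    lam1 = lam2 = 0.0
    t = t_init
    for _ in range(max_rounds):
        for _ in range(newton_iters):            # Newton iteration, cf. (56)
            t = t - grad_L(t, lam1, lam2) / hess_L(t, lam1, lam2)
        # Projected subgradient update of the multipliers, cf. (58)-(59)
        new1 = max(lam1 + step * (g1(t) - f_max), 0.0)
        new2 = max(lam2 + step * (g2(t) - p_max), 0.0)
        converged = abs(new1 - lam1) <= tol and abs(new2 - lam2) <= tol
        lam1, lam2 = new1, new2
        if converged:
            break
    return t, lam1, lam2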
Based on the given service caching placement decision and service parameter freshness status, we can calculate the cost of local execution as follows:
C^{loc}_{k_n(t)} = x_{k_n(t),i}(t) \big( \xi^{tim} T^*_{k_n(t)} + \xi^{ene} E^*_{k_n(t)} \big), \; i \in \{4, 5, 6\}, (60)
the offloading cost of the MEC server m is calculated as
C^{off}_{m,k_n(t)} = x_{k_n(t),i}(t) \big( \xi^{tim} T^*_{k_n(t)} + \xi^{ene} E^*_{k_n(t)} \big), \; i \in \{1, 2, 3\}, (61)
and the minimum offloading cost among all the MEC servers is calculated as
C^{off}_{m^*,k_n(t)} = \min_{m \in \mathcal{M}} C^{off}_{m,k_n(t)}, (62)
where T^*_{k_n(t)} and E^*_{k_n(t)} are calculated according to f_m(t)^* and p_{n,m}(t)^*. The offloading decision can then be derived. If C^{off}_{m^*,k_n(t)} < C^{loc}_{k_n(t)}, we set x^{exe}_{m^*,k_n(t)}(t) = 1 and x^{exe}_{m,k_n(t)}(t) = 0 for all m \in \mathcal{M} \setminus \{m^*\}, and the inference task is offloaded to the MEC server m^*; otherwise, x^{exe}_{m,k_n(t)}(t) = 0 for all m \in \mathcal{M}, and the inference task is executed locally.
The pseudo-code of the LMKO is shown in Algorithm 1. The complexity of LMKO is O(|\mathcal{M}|(\tau^{new} + \tau^{sub}) + |\mathcal{M}|), where \tau^{new} and \tau^{sub} are the iteration numbers of the Newton method and the subgradient method, respectively.

4.2. Lyapunov Optimization-Based Learning and Update Control Module (LLUC)

Since the application server also bears other applications, its bandwidth resources are limited and need to be economized. In this subsection, we minimize the bandwidth consumption of the application server from the perspective of a global view while minimizing the service fetching time to accelerate the inference task processing.
The inference tasks are generated randomly, and the execution request in the offloading style or local style is a random event for the MEC server and mobile device. If there is no caching service or fresh service parameters, they call for the application server to fetch the service. Therefore, the fetching request is also random in terms of the application server, which has no a priori distribution. We regard the total requested service data size waiting for transmission as a queue, and leverage the Lyapunov optimization to solve the problem of stabilizing a randomly arriving queue system.
The application server transmits the requested service data as soon as possible to decrease the fetching time. At timeslot t, the total requested service data size can be defined as the enqueued rate:
d^{enq}(t) = \sum_{n \in \mathcal{N}} \Big( \sum_{m \in \mathcal{M}} \big( x_{k_n(t),2}(t) d^{par}_j + x_{k_n(t),3}(t) (d^{lib}_j + d^{par}_j) \big) + x_{k_n(t),5}(t) d^{par}_j + x_{k_n(t),6}(t) (d^{lib}_j + d^{par}_j) \Big). (63)
Moreover, let d^{deq}(t) be the dequeued rate, which is the total size of the service data transmitted from the application server to the requesting MEC servers or mobile devices. The backlog of the queue can then be defined as Q(t+1) = \max\{Q(t) + d^{enq}(t) - d^{deq}(t), 0\}, where the enqueued rate and the dequeued rate affect the queue backlog of the next timeslot. Then, we define the quadratic Lyapunov function as Y(t) = Q(t)^2 / 2, and the Lyapunov drift can be denoted as \Delta Y(t) = Y(t+1) - Y(t). In addition, we define the penalty function of the Lyapunov optimization, which equals the total allocated bandwidth consumption for transmitting the requested service data at timeslot t: b_0(t) = \beta d^{deq}(t), where \beta is the simplified transformation coefficient.
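The queue dynamics and the drift-plus-penalty objective described above can be written compactly; the following sketch (with illustrative names) updates the backlog and evaluates Z_2 for a candidate dequeued rate.

def next_backlog(q, d_enq, d_deq):
    # Q(t+1) = max{Q(t) + d_enq(t) - d_deq(t), 0}.
    return max(q + d_enq - d_deq, 0.0)

def drift_plus_penalty(q, d_enq, d_deq, v, beta):
    # Z_2 = Delta Y(t) + V(t) b_0(t), with Y(t) = Q(t)^2 / 2 and b_0(t) = beta * d_deq(t).
    y_now = 0.5 * q ** 2
    y_next = 0.5 * next_backlog(q, d_enq, d_deq) ** 2
    return (y_next - y_now) + v * beta * d_deq

print(drift_plus_penalty(q=10.0, d_enq=4.0, d_deq=6.0, v=2.0, beta=0.5))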
We formally transform problem (24) into the Lyapunov optimization problem:
\min_{d^{deq}(t)} \; Z_2 = \Delta Y(t) + V(t) b_0(t) (64)
s.t. \; \lim_{t \to \infty} \frac{Q(t)}{t} = 0, (65)
b_0(t) \le b^{max}_0, (66)
where V ( t ) is the adaptive weight of the penalty, and (65) is the stable condition of the queue system, and b 0 max in (66) is the maximum of the total available bandwidth between the application server and all requesting MEC servers or mobile devices.
Algorithm 1 Method of Lagrange multipliers with the KKT condition-based offloading module (LMKO).
Require:
time cost T_{k_n(t),i}, energy cost E_{k_n(t),i}, time weight \xi^{tim}, energy weight \xi^{ene}, maximum tolerance time T^{max}_{k_n(t)}, maximum computation capability f^{max}, maximum power p^{max}, maximum Newton iteration number \tau^{new}, subgradient threshold \epsilon^{sub}
Ensure:
offloading decision x m , k n ( t ) exe ( t )
1:
for t = 1 to | T |  do
2:
   for n = 1 to | N | do
3:
      for m = 1 to | M | do
4:
         for τ = 1 to τ new do
5:
          Calculate T m , k n ( t ) exe based on the Newton iteration method according to (56).
6:
        end for
7:
        Obtain T m , k n ( t ) exe * .
8:
        repeat
9:
          Update λ m , k n ( t ) , 1 and λ m , k n ( t ) , 2 based on the subgradient method according to (58) and (59), respectively.
10:
         until |\lambda'_{m,k_n(t),1} - \lambda_{m,k_n(t),1}| \le \epsilon^{sub} and |\lambda'_{m,k_n(t),2} - \lambda_{m,k_n(t),2}| \le \epsilon^{sub}
11:
        Calculate f m ( t ) * , p n , m ( t ) * , and C m , k n ( t ) off according to (32), (33), and (61), respectively.
12:
     end for
13:
     Calculate C k n ( t ) loc and C m * , k n ( t ) off according to (60) and (62), respectively.
14:
      if C^{off}_{m^*,k_n(t)} < C^{loc}_{k_n(t)} then
15:
         x m * , k n ( t ) exe ( t ) = 1 .
16:
         x m , k n ( t ) exe ( t ) = 0 , m M \ m * .
17:
     else
18:
         x m , k n ( t ) exe ( t ) = 0 , m M .
19:
     end if
20:
   end for
21:
end for
22:
return  x m , k n ( t ) exe ( t )
Theorem 1 
([27]). Assume there exist constants D \ge 0, \epsilon^{que} > 0, V^{max} \ge 0, and b^{max}_0 > 0 such that, for all t and all possible variables Q(t), the Lyapunov drift-plus-penalty condition holds:
\mathbb{E}\big[ \Delta Y(t) + V(t) b_0(t) \mid Q(t) \big] \le D - \epsilon^{que} Q(t) + V^{max} b^{max}_0, (67)
where \mathbb{E}\big[ \frac{1}{2} (d^{enq}(t) - d^{deq}(t))^2 \mid Q(t) \big] \le D indicates that the difference between the enqueued rate and the dequeued rate has an upper bound, \mathbb{E}\big[ d^{enq}(t) - d^{deq}(t) \mid Q(t) \big] \le -\epsilon^{que} indicates that the queue is controlled, V^{max} is the maximum of V(t) over time, and b^{max}_0 is the maximum of b_0(t) mentioned above. For all t > 0, the time average queue backlog and the time average bandwidth satisfy the following:
\frac{1}{|\mathcal{T}|} \sum_{t=1}^{|\mathcal{T}|} \mathbb{E}[Q(t)] \le \frac{D + V^{max} (b^{max}_0 - b^{min}_0)}{\epsilon^{que}} + \frac{\mathbb{E}[L(1)]}{|\mathcal{T}| \epsilon^{que}}, (68)
\frac{1}{|\mathcal{T}|} \sum_{t=1}^{|\mathcal{T}|} \mathbb{E}[b_0(t)] \le \frac{D}{V^{max}} + b^{max}_0 + \frac{\mathbb{E}[L(1)]}{|\mathcal{T}| V^{max}}, (69)
where b 0 min is the minimum of b 0 ( t ) .
Theorem 1 explains that when the Lyapunov drift-plus-penalty condition is met, the average queue backlog is at most O(V^{max}), and the average bandwidth is at most O(1/V^{max}) above the maximum bandwidth. Hence, there is a trade-off between the queue backlog and the bandwidth penalty, which is tuned by V(t).
In addition, since \Delta Y(t) + V(t) b_0(t) \le \frac{(d^{enq}(t) - d^{deq}(t))^2}{2} + Q(t) (d^{enq}(t) - d^{deq}(t)) + V(t) \beta d^{deq}(t), we can derive the optimal controlled dequeued rate by taking the derivative with respect to d^{deq}(t) and setting it to 0:
d^{deq,*}(t) = Q(t) - V(t) \beta + d^{enq}(t). (70)
To improve the Lyapunov optimization, we first design an adaptive learning penalty weight method to adaptively adjust V(t):
V(t) = V(1) e^{-\zeta \phi(t)}, (71)
where \zeta is the learning rate of the penalty weight, \phi(t) = \frac{1}{|\mathcal{N}|} \sum_{n \in \mathcal{N}(t)} \mathbb{1}\{T_{k_n(t)} > T^{max}_{k_n(t)}\} represents the ratio of inference tasks that miss their tolerance time, and \mathbb{1} is the indicator function. The emphasis on the bandwidth penalty is lowered as the ratio of overtime inference tasks increases; when this ratio decreases, the weight of the bandwidth penalty is set higher.
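Putting (70) and (71) together, one control step of LLUC reduces to two lines: update the penalty weight from the deadline-miss ratio, then set the dequeued rate in closed form. The following Python sketch is an illustration under these assumptions, not the authors' implementation.

import math

def penalty_weight(v1, zeta, miss_ratio):
    # Adaptive weight of Equation (71): V(t) = V(1) * exp(-zeta * phi(t)).
    return v1 * math.exp(-zeta * miss_ratio)

def optimal_dequeue_rate(q, v, beta, d_enq):
    # Closed-form minimizer of the drift-plus-penalty bound, Equation (70).
    return q - v * beta + d_enq

v = penalty_weight(v1=50.0, zeta=5.0, miss_ratio=0.2)  # fewer misses -> larger V(t)
print(optimal_dequeue_rate(q=12.0, v=v, beta=0.5, d_enq=6.0))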
Secondly, since the transmission data sizes among all the requesting MEC servers or mobile devices are distinct, e.g., some request the service libraries and parameters while others only request the parameters, the application server can preferentially respond to requests for only the service parameters to decrease the bandwidth consumption when the weight of the bandwidth penalty is high. Therefore, we devise a dequeued rate update mechanism. When V(t) > \epsilon^{thr}, where \epsilon^{thr} is a given threshold of the penalty weight, the dequeued rate is updated to:
d^{deq,\prime}(t) = \sum_{m \in \mathcal{M}} \sum_{n \in \mathcal{N}} \sum_{j \in \mathcal{J}} \big( x_{k_n(t),2}(t) d^{par}_j + x_{k_n(t),5}(t) d^{par}_j \big), (72)
d^{deq,\prime}(t+1) = d^{deq,*}(t+1) + \sum_{m \in \mathcal{M}} \sum_{n \in \mathcal{N}} \sum_{j \in \mathcal{J}} \big( x_{k_n(t),3}(t) (d^{lib}_j + d^{par}_j) + x_{k_n(t),6}(t) (d^{lib}_j + d^{par}_j) \big), (73)
where the request transmission data sizes of cases 3 and 6 are assigned to timeslot t + 1 to alleviate the bandwidth penalty at the current timeslot t.
Thirdly, consider the service parameters with only a few timeslots left until the next training: if the application server directly sends this part of the data, it may be requested again soon because the service parameters become stale. We further propose a freshness-aware transmitting method to reduce the service request frequency; the service parameters that will be retrained soon are arranged to be transmitted at the end of their training round. If the time condition ((t - t_j) \bmod T^{int}_j) > \eta T^{int}_j is satisfied, where \eta is the given proportion of the training round and \bmod is the remainder operator, the dequeued rate is arranged as follows:
d^{deq,\prime}(t) = d^{deq,*}(t) - \sum_{m \in \mathcal{M}} \sum_{n \in \mathcal{N}} \sum_{\bar{j} \in \mathcal{J} \setminus j} \big( x_{k_n(t),2}(t) d^{par}_{\bar{j}} + x_{k_n(t),3}(t) (d^{lib}_{\bar{j}} + d^{par}_{\bar{j}}) + x_{k_n(t),5}(t) d^{par}_{\bar{j}} + x_{k_n(t),6}(t) (d^{lib}_{\bar{j}} + d^{par}_{\bar{j}}) \big), (74)
d^{deq,\prime}\big( t + T^{int}_j - ((t - t_j) \bmod T^{int}_j) \big) = d^{deq,*}\big( t + T^{int}_j - ((t - t_j) \bmod T^{int}_j) \big) + \sum_{m \in \mathcal{M}} \sum_{n \in \mathcal{N}} \sum_{j \in \mathcal{J}} \big( x_{k_n(t),2}(t) d^{par}_j + x_{k_n(t),3}(t) (d^{lib}_j + d^{par}_j) + x_{k_n(t),5}(t) d^{par}_j + x_{k_n(t),6}(t) (d^{lib}_j + d^{par}_j) \big), (75)
where the service that is about to be retrained is arranged to be transmitted at timeslot t + T^{int}_j - ((t - t_j) \bmod T^{int}_j) instead of timeslot t.
The pseudo-code of LLUC is shown in Algorithm 2. The complexities of the adaptive learning penalty weight method, the dequeued rate update mechanism, and the freshness-aware transmitting method are O(|\mathcal{N}|), O(|\mathcal{M}||\mathcal{N}||\mathcal{J}|), and O(|\mathcal{M}||\mathcal{N}||\mathcal{J}|), respectively.
Algorithm 2 Lyapunov optimization-based learning and update control module (LLUC).
Require:
enqueued rate d enq ( t ) , initial queue backlog Q ( 1 ) , transformation coefficient β , initial bandwidth penalty weight V ( 1 ) , learning rate of penalty weight ζ , given threshold of penalty weight ϵ thr , given training round proportion η
Ensure:
dequeued rate d^{deq,\prime}(t)
1:
for t = 1 to | T | do
2:
   Update V ( t ) based on the adaptive learning penalty weight method according to (71).
3:
   Calculate d deq , * ( t ) based on the Lyapunov optimization according to (70).
4:
   if  V ( t ) > ϵ thr  then
5:
     Update d^{deq,\prime}(t) and d^{deq,\prime}(t+1) based on the dequeued rate update mechanism according to (72) and (73), respectively.
6:
   end if
7:
   if  ( ( t t j ) mod T j int ) > η T j int  then
8:
     Update d^{deq,\prime}(t) and d^{deq,\prime}(t + T^{int}_j - ((t - t_j) \bmod T^{int}_j)) based on the freshness-aware transmitting method according to (74) and (75), respectively.
9:
   end if
10:
end for
11:
return d^{deq,\prime}(t).

4.3. KM Algorithm-Based Channel Division Fetching Module (KCDF)

Since the application server transmits to the MEC servers or mobile devices under the cellular network, the cellular channel matching is crucial to reduce the total transmission time of the requested service data d^{deq,\prime}(t).
The total transmission time of the dequeued requested service data can be denoted as
T^{fet}(t) = \sum_{n \in \mathcal{N}(t)} \Big( \sum_{m \in \mathcal{M}(t)} \big( x_{k_n(t),2}(t) T^{par}_{0,m,j} + x_{k_n(t),3}(t) (T^{lib}_{0,m,j} + T^{par}_{0,m,j}) \big) + x_{k_n(t),5}(t) T^{par}_{0,n,j} + x_{k_n(t),6}(t) (T^{lib}_{0,n,j} + T^{par}_{0,n,j}) \Big), (76)
where \mathcal{M}(t) and \mathcal{N}(t) are the responding sets of MEC servers and mobile devices determined by the dequeued service data, respectively.
First, we divide the total allocated bandwidth into two parts: one is allocated for transmitting the data of cases 2 and 5, and the other for cases 3 and 6, which involve more transmitted data. The allocated bandwidth division method is designed as
b^{par}_0(t) = b_0(t) \cdot \frac{\sum_{n \in \mathcal{N}(t)} \big( \sum_{m \in \mathcal{M}(t)} x_{k_n(t),2}(t) d^{par}_j + x_{k_n(t),5}(t) d^{par}_j \big)}{\sum_{n \in \mathcal{N}(t)} \big( \sum_{m \in \mathcal{M}(t)} \big( x_{k_n(t),2}(t) d^{par}_j + x_{k_n(t),3}(t) (d^{lib}_j + d^{par}_j) \big) + x_{k_n(t),5}(t) d^{par}_j + x_{k_n(t),6}(t) (d^{lib}_j + d^{par}_j) \big)}, (77)
and
b^{lib,par}_0(t) = b_0(t) - b^{par}_0(t), (78)
where b 0 par ( t ) and b 0 lib , par ( t ) are the total allocated bandwidths of cases 2 and 5 and cases 3 and 6, based on their total transmitted data sizes, respectively.
Taking cases 2 and 5 as an example, we define the responding set as \mathcal{S} = \mathcal{M}^{par}(t) \cup \mathcal{N}^{par}(t), where \mathcal{S} is indexed by s and has cardinality |\mathcal{S}|, and \mathcal{M}^{par}(t) and \mathcal{N}^{par}(t) are the responding sets of MEC servers and mobile devices with cases 2 and 5, respectively. Let \mathcal{A} = \{1, \dots, |\mathcal{A}|\} be the set of |\mathcal{A}| cellular channels indexed by a. The matching decision of s and a is defined as x^{mat}_{s,a}(t) = 1 if a is allocated to s and x^{mat}_{s,a}(t) = 0 otherwise, and it is constrained by \sum_{s \in \mathcal{S}} x^{mat}_{s,a}(t) = 1, \forall a \in \mathcal{A}, and \sum_{a \in \mathcal{A}} x^{mat}_{s,a}(t) = 1, \forall s \in \mathcal{S}, where each cellular channel is allocated to at most one MEC server or mobile device, and each MEC server or mobile device is assigned at most one cellular channel. Thus, the transmission latency of the service parameters from the application server to the MEC server or mobile device s over cellular channel a is:
T^{par}_{s,a} = \frac{d^{par}_{j(s)}}{\frac{b^{par}_0(t)}{|\mathcal{S}|} \log_2 \Big( 1 + \frac{p_{s,a}(t) h_{s,a}(t)}{\sigma^2 + I_{s,a}(t)} \Big)}, (79)
where j ( s ) is the service type transmitted for s, p s , a ( t ) , h s , a ( t ) , and I s , a ( t ) are the transmission power, channel gain, and co-channel interference under channel a to s, respectively.
Here, we formally formulate the problem of minimizing the total transmission time of the service parameters:
$$\min_{x_{s,a}^{\mathrm{mat}}(t)} \; Z_3=\frac{1}{|S|}\sum_{s\in S}\sum_{a\in A}x_{s,a}^{\mathrm{mat}}(t)\,T_{s,a}^{\mathrm{par}} \qquad (80)$$
$$\mathrm{s.t.}\quad \sum_{s\in S}x_{s,a}^{\mathrm{mat}}(t)=1,\;\forall a\in A, \qquad (81)$$
$$\sum_{a\in A}x_{s,a}^{\mathrm{mat}}(t)=1,\;\forall s\in S, \qquad (82)$$
$$x_{s,a}^{\mathrm{mat}}(t)\in\{0,1\},\;\forall s\in S,\,a\in A, \qquad (83)$$
where $x_{s,a}^{\mathrm{mat}}(t)$ is the optimization variable and (81)–(83) are the constraints on the matching decision.
We leverage the KM algorithm [28] to solve this problem. The complete weighted bipartite graph is defined as $G = (S, A, \langle S, A\rangle)$, where $S$ and $A$ are the vertex sets of the two sides, $\langle S, A\rangle$ is the link set, and the weight of each link is derived from our devised link-initialized method:
$$w_{s,a}=\begin{cases}T_{s,a}^{\mathrm{par}}, & \text{if } T_{s,a}^{\mathrm{par}}<\theta\min_{a\in A}T_{s,a}^{\mathrm{par}},\\ 0, & \text{otherwise},\end{cases} \qquad (84)$$
where $\theta$ is a coefficient on the minimum service data transmission time that removes links with unacceptable transmission times. The feasible vertex labels satisfy $w_s + w_a \le w_{s,a}$, where $w_s = \min_{a\in A} w_{s,a}$ and $w_a = 0$ are the vertex labels of $s$ and $a$ in the KM algorithm. Let $G^{\mathrm{mat}} = (S, A, \langle S^{\mathrm{mat}}, A^{\mathrm{mat}}\rangle)$ be the equalling matching subgraph, satisfying $w_s + w_a = w_{s,a}$, where the link set $\langle S^{\mathrm{mat}}, A^{\mathrm{mat}}\rangle$ is initialized to an empty set.
The perfect matching of the equalling matching subgraph $G^{\mathrm{mat}}$ is denoted as $M^*$, and we have the following theorem.
Theorem 2.
Assume $w_s$ and $w_a$ are feasible vertex labels. If the equalling matching subgraph $G^{\mathrm{mat}}$ has a perfect matching $M^*$, then $M^*$ is also a perfect matching of $G$ with the minimum total weight.
Proof. 
The proof is provided in Appendix A.    □
The steps of the KM algorithm are as follows:
  • Initialize $w_s$, $w_a$, and $w_{s,a}$.
  • Enumerate $s \in S$ and find $a \in A$ that satisfies $w_s + w_a = w_{s,a}$ based on the Hungarian algorithm.
  • If $a \in (A \setminus A^{\mathrm{mat}}) \cup A^{\mathrm{rea}}$, add $\langle s, a\rangle$ into $G^{\mathrm{mat}}$; otherwise, calculate the matching distance $z = \min_a \{\, w_{s,a} - w_s - w_a : s \in S^{\mathrm{rea}},\, a \in (A \setminus A^{\mathrm{mat}}) \,\}$, and set $w_s = w_s + z$ for $s \in S^{\mathrm{rea}}$ and $w_a = w_a - z$ for $a \in A^{\mathrm{rea}}$. Then, convert the reachable path into matched links, e.g., $s, \langle a_1\, s^*\rangle, a^*$ into $\langle s\, a_1\rangle, \langle s^*\, a^*\rangle$.
  • Repeat steps 2 and 3 until the perfect matching $M^*$ of $G^{\mathrm{mat}}$ is obtained.
Therein, $S^{\mathrm{mat}}$ and $A^{\mathrm{mat}}$ are the vertex sets of the two sides whose elements have links in $G^{\mathrm{mat}}$, and $S^{\mathrm{rea}}$ and $A^{\mathrm{rea}}$ are the sets of vertices on reachable paths found by the breadth-first search in the Hungarian algorithm [29], respectively. $s^*$ and $a^*$ are the variables corresponding to $z$. The matching decision can be derived from
$$x_{s,a}^{\mathrm{mat}}(t)=\begin{cases}1, & \text{if } \langle s,a\rangle\in M^{*},\\ 0, & \text{otherwise}.\end{cases} \qquad (85)$$
After obtaining the result of the KM algorithm, a few individual MEC servers or mobile devices may still be allocated an unsatisfactory cellular channel, which significantly prolongs the transmission time. We further design a worst-case arranging mechanism to deal with this:
$$x_{s,a}^{\mathrm{mat}}(t)=\begin{cases}0, & \text{if } w_{s,a}\big|_{\langle s,a\rangle\in M^{*}}=\max_{a\in A}w_{s,a},\\ 1, & \text{otherwise}.\end{cases} \qquad (86)$$
If $x_{s,a}^{\mathrm{mat}}(t) = 0$, $s$ is scheduled to be allocated a cellular channel in the next timeslot; being served by a satisfactory channel in the next timeslot consumes less time than using the worst-case channel in the current one.
The pseudo-code is shown in Algorithm 3. The complexity of the naive KM algorithm is $O(|S|^4)$, and that of the KM algorithm with the slack array is $O(|S|^3)$. The complexities of the allocated bandwidth division method, the link-initialized method, and the worst-case arranging mechanism are $O(|M(t)||N(t)|)$, $O(1)$, and $O(1)$, respectively. The parts for cases 3 and 6 can be solved similarly with Algorithm 3.
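For illustration, the following Python sketch reproduces the flow of Algorithm 3 on a small example, with SciPy's Hungarian solver (linear_sum_assignment) standing in for a hand-written KM implementation; the link threshold follows (84) and the worst-case deferral follows (86). Since the solver minimizes the total weight, rejected links are given a prohibitively large weight here rather than the weight 0 used in (84); all numbers are illustrative.

import numpy as np
from scipy.optimize import linear_sum_assignment

def match_channels(T, theta):
    """T[s, a]: latency of responder s over channel a; theta: link-initialized coefficient."""
    T = np.asarray(T, dtype=float)
    # Link-initialized weights, cf. (84): links not faster than theta times the per-row
    # minimum are excluded by assigning them a prohibitively large weight.
    limit = theta * T.min(axis=1, keepdims=True)
    weights = np.where(T < limit, T, 1e12)
    # Minimum-weight assignment (a stand-in for the KM matching of (80)-(83)).
    rows, cols = linear_sum_assignment(weights)
    matching = {int(s): int(a) for s, a in zip(rows, cols)}
    # Worst-case arranging mechanism, cf. (86): defer the responder whose matched link
    # carries the largest weight to the next timeslot.
    worst = max(matching, key=lambda s: weights[s, matching[s]])
    deferred = [worst]
    del matching[worst]
    return matching, deferred

# Example with 4 responders and 4 channels (latencies in seconds).
T = [[2.0, 5.0, 9.0, 7.0],
     [3.0, 2.5, 6.0, 8.0],
     [4.0, 7.0, 3.0, 5.0],
     [9.0, 6.0, 4.0, 3.5]]
matching, deferred = match_channels(T, theta=1.8)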
Algorithm 3 KM algorithm-based channel-division fetching module (KCDF).
Require:
total allocated bandwidth b_0^par(t), responding set S, cellular channel set A, link threshold T̄_{s,a}^par, transmission power p_{s,a}(t), channel gain h_{s,a}(t), co-channel interference I_{s,a}(t)
Ensure:
matching decision x_{s,a}^mat(t)
1: Calculate T_{s,a}^par based on the allocated bandwidth division method according to (79).
2: Initialize G based on the link-initialized method according to (84).
3: Obtain the perfect matching M* based on the KM algorithm.
4: Derive the matching decision x_{s,a}^mat(t) according to (85).
5: Rearrange the matching decision x_{s,a}^mat(t) based on the worst-case arranging mechanism according to (86).
6: return x_{s,a}^mat(t).

5. Evaluation

5.1. System Implementation

We implement the framework on a real-world collaborative edge system testbed that consists of a Raspberry Pi 4 Model B board (1.5 GHz CPU, 4 GB memory) and a desktop (Intel 8-core i7-10700F 2.90 GHz CPU, 16 GB memory). The Raspberry Pi serves as the application server, and the desktop emulates the MEC servers and mobile devices. All devices are connected through a local wireless router. We use transmission control protocol (TCP) socket programming to guarantee reliable communication among all devices in the environment.
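A minimal sketch of this TCP transfer is given below; the host address, port, and chunk size are placeholders rather than the testbed configuration.

import socket

HOST, PORT, CHUNK = "192.168.1.10", 50007, 4096   # illustrative placeholders

def send_service_data(path):
    """Application-server side: push a service parameter file over one TCP connection."""
    with socket.create_connection((HOST, PORT)) as sock, open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            sock.sendall(chunk)

def receive_service_data(path):
    """Device side: accept one connection and store the received parameters."""
    with socket.create_server(("", PORT)) as server:
        conn, _ = server.accept()
        with conn, open(path, "wb") as f:
            while data := conn.recv(CHUNK):
                f.write(data)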

5.2. Case Study

We present a simulation of the proposed framework on the edge system testbed through a real-world image analysis case study: automatic license plate recognition. In particular, we leverage the convolutional neural network (CNN) framework as a service developed in [30]: an ImageNet model VGG-16. The VGG-16 model is a deep CNN with 16 layers for image recognition tasks and is trained in a distributed machine learning style. We use the open-source automatic license plate recognition dataset (available online: https://platerecognizer.com (accessed on 6 May 2022)) to emulate the tasks generated by mobile devices.
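For reference, a hedged sketch of invoking a cached VGG-16 service on a single image with PyTorch/torchvision (assuming torchvision 0.13 or later) is shown below; it only illustrates the service package (library plus parameters) being loaded and executed, it is not the recognition pipeline of [30], and the image path is a placeholder.

import torch
from torchvision import models
from PIL import Image

weights = models.VGG16_Weights.IMAGENET1K_V1       # cached service parameters
model = models.vgg16(weights=weights).eval()        # cached service library + parameters
preprocess = weights.transforms()                   # matching input preprocessing

image = Image.open("plate.jpg").convert("RGB")      # placeholder sample image
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))
# Note: this yields an ImageNet class index, not plate characters; it only
# demonstrates the inference step of the cached service.
top_class = logits.argmax(dim=1).item()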

5.3. Experiment Setup

We use simulations to compare the performance of the framework. The hyperparameters of the simulation are as follows: the input size of a task is in [2, 10] MB, the computation amount is in [1000, 50,000] cycles, the numbers of MEC servers and mobile devices are in {10, 20, 30, 40}, the learning rate of the penalty weight is in {0.1, 0.2, 0.3, 0.4}, the proportion of the training round is in {0.8, 0.85, 0.9, 0.95}, and the link-initialized coefficient is in {1.5, 1.8, 2.1, 2.4}.
We first choose some representative baselines to compare with the LMKO module.
  • Fresh cache offloading priority (FCOP): An algorithm where the mobile device searches for a MEC server with a fresh parameter cache and immediately offloads the task.
  • Cache offloading priority (COP): An algorithm where the mobile device searches for a MEC server with a cache and immediately offloads the task.
  • Offloading priority (OP): An algorithm where the mobile device searches for a MEC server and immediately offloads the task.
  • Local execution with fresh cache priority (LEFC): An algorithm where the mobile device executes the task locally if it maintains a fresh parameter cache; otherwise, it offloads the task to a MEC server.
Moreover, we pick competitive baselines to compare with the LLUC module.
  • Queue backlog priority (QBP): An algorithm that constrains the penalty weight of the Lyapunov optimization to a relatively low range.
  • Total bandwidth priority (TBP): An algorithm that constrains the penalty weight to a relatively high range.
  • Queue backlog empty (QBE): An algorithm that fixes the penalty weight of the Lyapunov optimization to 0.
  • Fixed total bandwidth (FTB): An algorithm that fixes the penalty weight at an extremely high value.
We also select a few representative strategies to compare with the KCDF module.
  • Hungarian algorithm (HA) [29]: An algorithm leveraged to solve the maximum matching problem of an unweighted bipartite graph.
  • Channel bandwidth allocated-based size (CBAS): An algorithm where the total bandwidth is allocated based on the responding service data size.
  • Channel bandwidth allocated-based case (CBAC): An algorithm where the total bandwidth is allocated based on the requested offloading case.
  • Uniform allocation of channel bandwidth (UACB): An algorithm where the total bandwidth is allocated uniformly.

5.4. LLUC Evaluation

We first investigate the LLUC module by comparing the time-averaged total bandwidth under different learning rates of the penalty weight. Figure 2a shows that the proposed LLUC module with ζ = 0.1 achieves the best result across all requesting numbers. As ζ increases from 0.1 to 0.4, the performance degrades for every requesting number. A lower learning rate keeps the penalty weight relatively high, so the Lyapunov optimization puts more effort into minimizing the penalty, i.e., the allocated bandwidth. In the meantime, a lower learning rate may lead to a larger queue backlog and further delay the response fetching time. Therefore, a moderate selection is advisable to balance bandwidth consumption and time overhead. Since ζ = 0.2 only increases the bandwidth by 24.0% while decreasing the queue backlog by 53.3% compared to ζ = 0.1 under 10 requests, we take ζ = 0.2 as the learning rate, considering both bandwidth and time.
Then, we study the LLUC module by comparing the average AoI of each responded service data under distinct training round proportions for request rearrangement. In Figure 2b, the simulation results show that η = 0.8, i.e., leaving a 20% interval until the next training, consistently achieves the lowest AoI regardless of the number of requests. As η increases from 0.8 to 0.95, the average AoI becomes higher. More requests are rearranged to a later timeslot to obtain service data with lower AoI; however, the time latency deteriorates when the response timeslots are delayed. Therefore, considering both the responding transmission time and the AoI of the service data, which decreases the service fetching frequency, we select a medium proportion η = 0.9 to balance the trade-off: its average AoI only increases by 23.8% while the time latency decreases by 36.2% compared with η = 0.8 under 10 requests.

5.5. KCDF Evaluation

From the perspective of the KCDF module, we first evaluate the average fetching time under different link-initialized coefficients. Figure 3a shows that the KCDF with θ = 1.8 outperforms the other link-initialized coefficients as the responding number increases. A lower link-initialized coefficient removes more unsatisfactory links in the KM algorithm; as θ increases from 1.8 to 2.4, the average fetching time becomes higher. However, selecting a link-initialized coefficient that is too low, such as θ = 1.5, increases the probability that a MEC server or mobile device fails to find a link in the equalling matching subgraph, which can significantly degrade the performance. To strike a feasible trade-off, we select θ = 1.8 to guarantee the transmission latency, with at least a 2.3% performance improvement over the second-best result from θ = 1.5 under 10 responses.
Secondly, we compare the average fetching time under distinct rearrange conditions in Figure 3b. It is illustrated that rearranging only when the matched link weight is not less than the maximum, i.e., $w_{s,a}\big|_{\langle s,a\rangle\in M^{*}}=\max_{a\in A}w_{s,a}$, yields the minimum average fetching time as the responding number varies. The fetching time increases as the condition is relaxed from the maximum to the second maximum and the third maximum. When the application server rearranges whenever the link weight is not less than the third maximum, the result is even worse than no rearrangement. If the matched link is not the worst choice of the MEC server or mobile device, it is better to respond in the current timeslot; otherwise, it suffers a higher fetching time. We therefore feed the rearranging condition that the link weight is not less than the maximum back to the LLUC module, which results in a time reduction of at least 3.7% compared to the second-maximum condition under 10 responses.

5.6. Performance Comparison

5.6.1. Average Cost Comparison

From Figure 4a, we investigate the average cost of the distinct baselines of the LMKO module. Our LMKO module achieves the minimum $Z_1$ under different requesting numbers, since it offloads the task to the best MEC server or executes it locally. The second-best result belongs to the FCOP algorithm, since the mobile device chooses a MEC server with a fresh cache for offloading. The performances of COP and OP are poor owing to their extra service fetching times. The worst result comes from LEFC, since it does not take advantage of offloading in the edge system. The LMKO module is thus efficient in terms of the cost of time delay and energy consumption, with at least a 4.1% performance gain compared to the second-best result from FCOP under 10 requests.

5.6.2. Average Total Bandwidth Comparison

Figure 4b illustrates the time-averaged total allocated bandwidth of the LLUC baselines. The proposed LLUC module shows a superior result compared with the other baselines as the requesting number increases; it maintains a controlled queue backlog while minimizing the total allocated bandwidth. Meanwhile, the TBP algorithm obtains the second-lowest result, since it takes a higher penalty weight but causes a longer queue backlog. The QBP algorithm preferentially considers the queue backlog, leading to a medium result. The FTB algorithm also achieves a decent bandwidth result despite its fixed total bandwidth, at the cost of a large backlog. The worst result belongs to the QBE algorithm, which keeps the queue backlog at 0 even as the bandwidth significantly increases. The LLUC module outperforms the other baselines in regard to queue backlog stability and total allocated bandwidth, with at least a 19.7% performance gain compared to the second-best result from TBP under 10 requests.

5.6.3. Average Fetching Time Comparison

Figure 4c displays the comparison of the average fetching times for the baselines of the KCDF module. Our KCDF module exhibits superior performance across varying responding numbers; it finds a perfect matching in the equalling matching subgraph, where each MEC server or mobile device is matched with its best-allocated cellular channel. The second-lowest fetching time belongs to the CBAS algorithm, which allocates the bandwidth according to the service data size, so each MEC server or mobile device can obtain a satisfactory channel. The CBAC algorithm, which allocates the bandwidth according to the offloading case, faces a similar situation to CBAS and attains a medium result. The UACB algorithm performs poorly since it allocates the bandwidth uniformly, leaving responses with service libraries and parameters with unsatisfactory transmission latency. The HA algorithm suffers the worst result since it never updates the vertex labels when it cannot find a link in the equalling matching subgraph, so only a few vertices are matched, i.e., only a few channels are allocated. Thus, our KCDF module is efficient in terms of the average fetching time, with at least a 2.2% performance gain compared to the second-best result from CBAS under 10 responses.

5.6.4. Average Time Cost of Baselines Combination

In Figure 5a, we investigate the average time cost of the baseline combinations, including our proposed ASCO framework (LMKO, LLUC, and KCDF modules) and the competitive combinations built from the FCOP, TBP, and CBAS algorithms. Our ASCO framework always outperforms the other baseline combinations as the time weight parameter changes while the weights of energy and bandwidth remain fixed; it minimizes the average time cost by finding the most suitable offloading decision and allocating the best cellular channel. The other baseline combinations show degraded results compared with our framework. LMKO+TBP+KCDF, LMKO+LLUC+CBAS, and LMKO+TBP+CBAS achieve moderate performance, as their replaced modules bring no significant improvement. On the other hand, FCOP+LLUC+KCDF, FCOP+LLUC+CBAS, and FCOP+TBP+KCDF incur higher costs, as their modules place less emphasis on time concerns. Taking FCOP+TBP+CBAS as an example, it has the worst performance due to the absence of all the proposed modules. As ξ_tim increases, the performance gap between our framework and the other baseline combinations enlarges, which shows that the proposed framework achieves a significant gain in terms of time delay, with at least a 9.6% improvement compared to the second-best result from LMKO+TBP+KCDF under ξ_tim = 0.1.

5.6.5. Average Energy Cost of Baselines Combination

Figure 5b shows the average energy cost of the combinations. Our ASCO framework keeps the best result as the energy weight varies while the weights of time and bandwidth are fixed. The LMKO module makes an energy-economical offloading decision to save the energy consumption of mobile devices; accordingly, the baseline combinations with the LMKO module outperform those without it. LMKO+TBP+KCDF, LMKO+LLUC+CBAS, and LMKO+TBP+CBAS obtain middle performances, as their remaining modules place less emphasis on energy. FCOP+LLUC+KCDF, FCOP+LLUC+CBAS, and FCOP+TBP+KCDF have higher costs since their modules are less efficient in terms of cost. Similarly, FCOP+TBP+CBAS has the worst performance. Hence, the average energy results verify that our proposed framework achieves superior performance with respect to energy consumption, with at least a 2.8% improvement compared to the second-best result from LMKO+TBP+KCDF under ξ_ene = 0.05.

5.6.6. Average Bandwidth Consumption of Baselines Combination

Figure 5c illustrates the comparison of the average bandwidth consumption allocated by the application server under the baseline combinations. The proposed ASCO framework attains the minimum result, except in the extreme case of ξ_ban = 40, while the weights of time and energy are fixed. In that case, the emphasis on bandwidth allocation is so significant that our framework only obtains the second-best performance, while LMKO+TBP+KCDF achieves the best result; however, such a weight is impractical, since it leaves relatively little consideration for the time delay and energy consumption. LMKO+LLUC+CBAS and LMKO+TBP+CBAS obtain middle performances because they do not balance the total bandwidth and the mobile device cost well. FCOP+LLUC+KCDF, FCOP+LLUC+CBAS, FCOP+TBP+KCDF, and FCOP+TBP+CBAS always obtain the worst results due to their inefficiency. In most moderate-weight cases, our framework dominates the other baseline combinations in regard to the average bandwidth consumption, with at least a 6.2% improvement compared to the second-best result from LMKO+TBP+KCDF under ξ_ban = 10.

6. Conclusions

In this work, we consider a scenario of AoI-aware service caching-assisted offloading. The proposed ASCO framework consists of three modules: (1) the LMKO module, based on the method of Lagrange multipliers with KKT conditions; (2) the LLUC module, based on Lyapunov optimization; and (3) the KCDF module, based on the KM algorithm. The simulation results verify that the proposed ASCO framework outperforms the baseline combinations with respect to time overhead, energy consumption, and allocated bandwidth. The ASCO framework is efficient for both the individual inference task and the global bandwidth allocation, and is viable for practical deployment.
This work can be extended in several future directions. First, with proactive service caching, the MEC server can predict offloading requests and call the application server for advance fetching. Second, with task partitioning, tasks can be split before execution so that the subtasks are executed on distinct MEC servers or locally.

Author Contributions

Conceptualization, J.F.; Methodology, J.F.; Software, J.F.; Validation, J.F.; Formal analysis, J.F.; Investigation, J.F.; Resources, J.F.; Data curation, J.F.; Writing—original draft, J.F.; Writing—review & editing, J.G.; Visualization, J.F.; Supervision, J.G.; Project administration, J.F.; Funding acquisition, J.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China grant number 62171481, National Key Research and Development Program of China grant number 2019YFE0114000, Special Support Program of Guangdong grant number 2019TQ05X150, Natural Science Foundation of Guangdong Province grant number 2021A1515011124 and the Science and Technology Program of Guangzhou under Grant 202201011577.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Here is the proof of Theorem 2 from Section 4.3.
Proof. 
On the one hand, as $G^{\mathrm{mat}}$ is the generated equalling matching subgraph of $G$, the perfect matching $M^*$ of $G^{\mathrm{mat}}$ is also a perfect matching of $G$, and $\sum_{\langle s,a\rangle\in M^{*}} w_{s,a} = \sum_{s\in S} w_s + \sum_{a\in A} w_a$. On the other hand, any other perfect matching $M$ satisfies $\sum_{\langle s,a\rangle\in M} w_{s,a} \ge \sum_{s\in S} w_s + \sum_{a\in A} w_a$. Therefore, the perfect matching $M^*$ has the minimum total weight, equal to this lower bound. □
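As a quick numerical sanity check of the bound used above (an illustrative sketch, assuming the labels $w_s = \min_{a} w_{s,a}$ and $w_a = 0$ from Section 4.3), the snippet below verifies that the total weight of a minimum-weight perfect matching is never below the label sum; attaining this lower bound requires a perfect matching inside the equalling matching subgraph, which is exactly the statement of Theorem 2.

import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
W = rng.uniform(1.0, 10.0, size=(6, 6))      # random link weights w_{s,a}
labels = W.min(axis=1).sum()                  # sum_s w_s + sum_a w_a with w_a = 0
rows, cols = linear_sum_assignment(W)         # minimum-weight perfect matching
# The matched weight can never fall below the label sum (feasible-label lower bound).
assert W[rows, cols].sum() >= labels - 1e-9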

References

  1. Gubbi, J.; Buyya, R.; Marusic, S.; Palaniswami, M. Internet of Things (IoT): A vision, architectural elements, and future directions. Future Gener. Comput. Syst. 2013, 29, 1645–1660.
  2. Mao, Y.; You, C.; Zhang, J.; Huang, K.; Letaief, K.B. A survey on mobile edge computing: The communication perspective. IEEE Commun. Surv. Tutor. 2017, 19, 2322–2358.
  3. Feng, J.; Gong, J. Joint Detection and Computation Offloading With Age of Information in Mobile Edge Networks. IEEE Trans. Netw. Sci. Eng. 2022, 1–14.
  4. Mach, P.; Becvar, Z. Mobile edge computing: A survey on architecture and computation offloading. IEEE Commun. Surv. Tutor. 2017, 19, 1628–1656.
  5. Parvez, I.; Rahmati, A.; Guvenc, I.; Sarwat, A.I.; Dai, H. A survey on low latency towards 5G: RAN, core network and caching solutions. IEEE Commun. Surv. Tutor. 2018, 20, 3098–3130.
  6. Wang, S.; Zhang, X.; Zhang, Y.; Wang, L.; Yang, J.; Wang, W. A survey on mobile edge networks: Convergence of computing, caching and communications. IEEE Access 2017, 5, 6757–6779.
  7. Waqas, M.; Tu, S.; Halim, Z.; Rehman, S.U.; Abbas, G.; Abbas, Z.H. The role of artificial intelligence and machine learning in wireless networks security: Principle, practice and challenges. Artif. Intell. Rev. 2022, 55, 5215–5261.
  8. Verbraeken, J.; Wolting, M.; Katzy, J.; Kloppenburg, J.; Verbelen, T.; Rellermeyer, J.S. A survey on distributed machine learning. ACM Comput. Surv. 2020, 53, 1–33.
  9. Kaul, S.; Yates, R.; Gruteser, M. Real-time status: How often should one update? In Proceedings of the 2012 Proceedings IEEE INFOCOM, Orlando, FL, USA, 25–30 March 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 2731–2735.
  10. Sun, Y.; Chen, Z.; Tao, M.; Liu, H. Bandwidth gain from mobile edge computing and caching in wireless multicast systems. IEEE Trans. Wirel. Commun. 2020, 19, 3992–4007.
  11. Ning, Z.; Zhang, K.; Wang, X.; Guo, L.; Hu, X.; Huang, J.; Hu, B.; Kwok, R.Y. Intelligent edge computing in internet of vehicles: A joint computation offloading and caching solution. IEEE Trans. Intell. Transp. Syst. 2020, 22, 2212–2225.
  12. Yang, X.; Fei, Z.; Zheng, J.; Zhang, N.; Anpalagan, A. Joint multi-user computation offloading and data caching for hybrid mobile cloud/edge computing. IEEE Trans. Veh. Technol. 2019, 68, 11018–11030.
  13. Guo, H.; Liu, J. Collaborative computation offloading for multiaccess edge computing over fiber–wireless networks. IEEE Trans. Veh. Technol. 2018, 67, 4514–4526.
  14. Wu, H.; Lyu, F.; Zhou, C.; Chen, J.; Wang, L.; Shen, X. Optimal UAV caching and trajectory in aerial-assisted vehicular networks: A learning-based approach. IEEE J. Sel. Areas Commun. 2020, 38, 2783–2797.
  15. Ko, S.W.; Huang, K.; Kim, S.L.; Chae, H. Live prefetching for mobile computation offloading. IEEE Trans. Wirel. Commun. 2017, 16, 3057–3071.
  16. Lyu, F.; Ren, J.; Cheng, N.; Yang, P.; Li, M.; Zhang, Y.; Shen, X.S. LEAD: Large-scale edge cache deployment based on spatio-temporal WiFi traffic statistics. IEEE Trans. Mob. Comput. 2020, 20, 2607–2623.
  17. Gu, Z.; Lu, H.; Zhu, Z. On throughput optimization and bound analysis in cache-enabled fiber-wireless networks. IEEE Trans. Veh. Technol. 2020, 69, 9068–9082.
  18. Chen, L.; Xu, J.; Ren, S.; Zhou, P. Spatio–temporal edge service placement: A bandit learning approach. IEEE Trans. Wirel. Commun. 2018, 17, 8388–8401.
  19. Xu, J.; Chen, L.; Zhou, P. Joint service caching and task offloading for mobile edge computing in dense networks. In Proceedings of the IEEE INFOCOM 2018-IEEE Conference on Computer Communications, Honolulu, HI, USA, 16–19 April 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 207–215.
  20. Zhao, T.; Hou, I.H.; Wang, S.; Chan, K. Red/led: An asymptotically optimal and scalable online algorithm for service caching at the edge. IEEE J. Sel. Areas Commun. 2018, 36, 1857–1870.
  21. He, T.; Khamfroush, H.; Wang, S.; La Porta, T.; Stein, S. It’s hard to share: Joint service placement and request scheduling in edge clouds with sharable and non-sharable resources. In Proceedings of the 2018 IEEE 38th International Conference on Distributed Computing Systems (ICDCS), Vienna, Austria, 2–6 July 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 365–375.
  22. Ma, M.; Wong, V.W. Age of information driven cache content update scheduling for dynamic contents in heterogeneous networks. IEEE Trans. Wirel. Commun. 2020, 19, 8427–8441.
  23. Xu, C.; Xie, Y.; Wang, X.; Yang, H.H.; Niyato, D.; Quek, T.Q. Optimal status update for caching enabled IoT networks: A dueling deep R-network approach. IEEE Trans. Wirel. Commun. 2021, 20, 8438–8454.
  24. Zhang, S.; Li, J.; Luo, H.; Gao, J.; Zhao, L.; Shen, X.S. Low-latency and fresh content provision in information-centric vehicular networks. IEEE Trans. Mob. Comput. 2020, 21, 1723–1738.
  25. Boyd, S.; Boyd, S.P.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004.
  26. Davis, D.; Drusvyatskiy, D.; Kakade, S.; Lee, J.D. Stochastic subgradient method converges on tame functions. Found. Comput. Math. 2020, 20, 119–154.
  27. Neely, M.J. Stochastic network optimization with application to communication and queueing systems. Synth. Lect. Commun. Netw. 2010, 3, 1–211.
  28. Kuhn, H.W. Variants of the Hungarian method for assignment problems. Nav. Res. Logist. Q. 1956, 3, 253–258.
  29. Mills-Tettey, G.A.; Stentz, A.; Dias, M.B. The Dynamic Hungarian Algorithm for the Assignment Problem with Changing Costs; Tech. Rep. CMU-RI-TR-07-27; Robotics Institute: Pittsburgh, PA, USA, 2007.
  30. Li, H.; Wang, P.; Shen, C. Toward end-to-end car license plate detection and recognition with deep neural networks. IEEE Trans. Intell. Transp. Syst. 2018, 20, 1126–1136.
Figure 1. Schematics of the offloading cases.
Figure 2. LLUC evaluation under different learning rates and training round proportions. (a) Learning rate. (b) Proportion of training round.
Figure 3. KCDF evaluation under different link-initialized coefficients and rearrange conditions. (a) Link-initialized coefficient. (b) Rearrange condition.
Figure 4. Performances under different module baselines. (a) LMKO baselines. (b) LLUC baselines. (c) KCDF baselines.
Figure 5. Performances under different baseline combinations. (a) Average time cost. (b) Average energy cost. (c) Average bandwidth consumption.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
