Article

Maximizing Computation Rate for Sustainable Wireless-Powered MEC Network: An Efficient Dynamic Task Offloading Algorithm with User Assistance

1 School of Computer, Zhongshan Institute, University of Electronic Science and Technology of China, Zhongshan 528400, China
2 Computer Science and Engineering School, University of Electronic Science and Technology of China, Chengdu 611731, China
3 School of Engineering and Technology, Central Queensland University, Rockhampton 4701, Australia
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2024, 12(16), 2478; https://doi.org/10.3390/math12162478
Submission received: 17 July 2024 / Revised: 5 August 2024 / Accepted: 8 August 2024 / Published: 10 August 2024
(This article belongs to the Section Mathematics and Computer Science)

Abstract

In the Internet of Things (IoT) era, Mobile Edge Computing (MEC) significantly enhances the efficiency of smart devices but is limited by battery life issues. Wireless Power Transfer (WPT) addresses this issue by providing a stable energy supply. However, effectively managing overall energy consumption remains a critical and under-addressed aspect of ensuring the network's sustainable operation and growth. In this paper, we consider a WPT-MEC network with user cooperation to mitigate the double near–far effect for the mobile device (MD) far from the base station. We formulate the problem of maximizing the long-term computation rate under a power consumption constraint as a multi-stage stochastic optimization (MSSO) problem. This approach is tailored for a sustainable WPT-MEC network, accounting for the dynamic and varying MEC network environment, including randomness in task arrivals and fluctuating channels. We introduce a virtual queue to transform the time-average energy constraint into a queue stability problem. Using the Lyapunov optimization technique, we decouple the stochastic optimization problem into a deterministic problem for each time slot, which can be further transformed into a convex problem and solved efficiently. Our proposed algorithm works efficiently online without requiring further system information. Extensive simulation results demonstrate that our proposed algorithm outperforms baseline schemes, achieving an enhancement of approximately 4% while maintaining queue stability. Rigorous mathematical analysis and experimental results show that our algorithm achieves an $[O(1/V), O(V)]$ trade-off between computation rate and queue stability.

1. Introduction

In the era of the Internet of Things (IoT) [1,2,3,4], the surge in the number of mobile devices and network traffic has created a high demand for data processing capabilities and response speeds. Mobile Edge Computing (MEC) technology effectively enhances the operational efficiency of smart devices by providing powerful computing resources at the network’s edge, particularly in delay-sensitive applications such as Augmented Reality (AR), Virtual Reality (VR) and autonomous driving [5]. By offloading complex computational tasks to MEC servers [6,7,8,9,10,11], resource-constrained mobile devices (MDs) can significantly alleviate the pressure experienced when running high-demand applications and achieve a notable leap in performance. However, the limited battery capacity of mobile devices has become a bottleneck that restricts their further development [12]. This limitation is especially evident for devices that require continuous operation over long periods and are not easily recharged, such as in remote areas or emergency situations, where insufficient battery life can severely impact the functionality and reliability of the devices. Therefore, despite the significant advantages of MEC in enhancing network performance, the limitation of battery endurance remains a key issue that urgently needs to be addressed.
In addition to battery constraints, reducing the energy consumption of IoT devices during the data offloading process is equally important. The Internet of Things comprises trillions of tiny smart sensors that face significant limitations in computational capability and battery-supplied energy. Advancements in wireless energy harvesting technologies, including renewable energy harvesting and wireless power transfer (WPT) [13], can alleviate the challenges previously posed by battery capacity limitations. Renewable energy sources like solar, wind, and ocean energy can provide power to some extent, but they are significantly influenced by natural conditions such as weather and climate [14]. To address this issue, green wireless charging technology has emerged. This technology can offer stable energy to devices through radio frequency signals and store it in the batteries of IoT nodes for future use, extending battery life [15,16]. To ensure that nodes do not fail due to energy depletion, green wireless charging adheres to the principle of energy neutrality, as stated in [17], ensuring that the energy consumed in any operation never exceeds the energy collected. Green Wireless-Powered Mobile Edge Computing (WPMEC) combines the strengths of WPT and MEC, enhancing devices' computational capabilities and energy self-sufficiency. In the upcoming 6G networks, green WPMEC will provide IoT devices with quick responses and real-time experiences [18], while reducing operational costs and extending the lifespan of devices.
However, WPMEC networks face the challenge of the dual far–near effect [19] caused by positional differences, which has prompted the development of edge collaborative networks [20,21,22] to optimize application offloading performance. By introducing a user cooperation (UC) mechanism, where nearby users assist distant users in accelerating the processing of computational tasks while effectively offloading their own tasks, this collaborative approach leverages the superior channel conditions of nearby users to gain more energy during the WPT phase. This not only addresses the unfairness caused by geographical location but also enhances the efficiency of energy utilization. The dense deployment of smart IoT devices further facilitates the opportunity to utilize the unused computing resources of idle devices and wireless energy harvesting. These devices, by assisting in completing the computational tasks of distant users, contribute to improving the overall computational performance of the WPMEC network.
To demonstrate the effectiveness of UC, recent studies, such as References [23,24], have effectively addressed the dual far–near effect in WPMEC networks through UC. The D2D communication in Reference [25] and the incentive mechanism in Reference [26] are both designed to facilitate resource sharing and collaborative offloading. In References [20,24,27,28], authors have focused on studying the basic three-node WPMEC model, which involves a Far User (FU) being allowed to offload computational input data to a Near User (NU). In Reference [29], researchers designed a Non-Orthogonal Multiple Access (NOMA)-based computation offloading scheme aimed at enhancing the performance of multi-user MEC systems. Google has also developed federated learning technology, which enables multiple devices to collaborate on machine learning tasks. Despite this, these studies are often based on the assumption of determinism or predictability of future information, failing to fully integrate the dynamic changes of the network environment, which may affect the efficiency and success rate of task offloading and processing.
This paper investigates the basic three-node green WPMEC network shown in Figure 1, focusing on the use of collaborative communication technology to accomplish the computation-intensive and delay-sensitive tasks powered by the HAP. Our goal is to maximize the network’s data processing capability in a real-time dynamic offloading system, taking into account the randomness of data generation and the high dynamics of wireless channel conditions. The challenges we face in addressing this issue include the unpredictability of task arrivals and the dynamics of channel conditions, as well as the coupling of variables in resource allocation, which makes traditional convex optimization methods inapplicable. To tackle these challenges, we have designed an efficient dynamic task offloading algorithm, the User-Assisted Dynamic Resource Allocation Algorithm (UADRA). This algorithm employs Lyapunov optimization techniques to transform the problem into a simplified form that relies on current information, and performs dynamic resource allocation in each time slot to enhance the network’s data processing capability. Our primary contributions are summarized as follows:
  • We propose a long-term computation rate maximization model for green sustainable WPT-MEC networks. Our model extends previous works [28,30] to address the double near–far effect problem, while introducing an incentive mechanism grounded in data-weight assignment for near and far nodes to improve data transmission efficiency.
  • By applying the stochastic network optimization technique, the variable substitution method, and convex optimization theory, the multi-stage stochastic problem is transformed into a deterministic convex problem for each time slot, which can then be solved efficiently. Our proposed algorithm UADRA works efficiently without relying on prior system information.
  • We evaluate the proposed algorithm performance under various system parameter configurations through extensive simulations. Simulation results show that our algorithm outperforms benchmark methods, enhancing overall performance by up to 4% while ensuring system queue stability.
The remainder of this paper is organized as follows. In Section 2, we present the system model of the user-assisted green WPMEC network and formulate the MSSO problem. In Section 3, we utilize the Lyapunov optimization approach to tackle the problem, putting forward an effective dynamic offloading algorithm together with a theoretical analysis of its performance. Section 4 evaluates the efficacy of the proposed algorithm via simulation results. Finally, Section 5 concludes the paper.

Related Work

The integration of WPT technology with MEC networks provides an effective solution for IoT devices, enhancing their energy and computing capabilities with controllable power supply and low-latency services. Recent research has extensively explored the potential of these wirelessly powered MEC networks. For instance, in [31], researchers optimized charging time and data offloading rates for WPT-MEC IoT sensor networks to improve computational rates in various scenarios. Furthermore, the authors in [32] investigated a NOMA-assisted WPT-MEC network with a nonlinear EH model, successfully enhancing the system’s Computational Energy Efficiency (CEE) by fine-tuning key parameters within the network. Specifically, to meet the energy consumption requirements of devices, the authors in [33] proposed a Particle Swarm Optimization (PSO)-based algorithm. The goal was to reduce the latency of devices processing computational data streams by jointly optimizing charging and offloading strategies. Additionally, in [34], the authors focused on the computational latency issue in WPT-MEC networks. They found suitable offloading ratio strategies to achieve synchronized latency for all WDs, effectively reducing the duration of the overall computational task.
To tackle the dual far–near effect issue, researchers have begun to focus on user-assisted WPMEC networks and have confirmed their effectiveness in enhancing the computing performance of distant users. Specifically, in [35], the study analyzed a three-node system composed of distant users, nearby users, and the base station within a user-assisted MEC-NOMA network model, addressing the optimization problem of joint transmission time and power allocation for users. Furthermore, References [36,37] respectively explored joint computing and communication collaboration schemes and the application of Device-to-Device (D2D) communication in MEC. The method proposed in [36] aims to maximize the total computing rate of the network with the assistance of nearby users, while [37] focuses on minimizing the overall network response delay and energy consumption through joint multi-user collaborative partial offloading, transmission scheduling, and computing allocation. Ref. [16] extended this research, expanding from a single collaboration pair to multiple collaboration pairs, proposing a scheme to achieve the minimization of the total energy consumption of the AP.
In user assistance networks, the online collaborative offloading method, which is highly adaptable and can promptly respond to changes in task arrivals, has garnered significant attention from the research community. For instance, in [38], to address the randomness of energy and data arrivals, a Lyapunov optimization-based method was proposed to maximize the long-term system throughput. Furthermore, in [39], the authors studied and proposed a Lyapunov-based Profit Maximization (LBPM) task offloading algorithm in the context of the Internet of Vehicles (IoV), which aims at maximizing the time-averaged profit as the optimization goal. Additionally, in [40], within the application of MEC in the industrial IoT, a Lyapunov-based privacy-aware framework was introduced, which not only addressed privacy and security issues but also achieved optimization effects by reducing energy consumption. In [41], focusing on a multi-device single MEC system, the energy-saving task offloading problem was formulated as a time-averaged energy minimization problem considering queue length and resource constraints.
Unlike prior studies, this paper is dedicated to addressing the challenges of dynamic task offloading in green, sustainable WPMEC networks with user assistance. We take into account the total system’s energy consumption constraint, the dynamically arriving tasks in real-time scenarios, and the high dynamics of wireless channel conditions. Moreover, the temporal coupling between WPT and user collaborative communication, along with the coupling of data offloading timing and transmission power in collaborative communication, imposes significant challenges.

2. System Model

2.1. Communication Model

As depicted in Figure 1, we consider a WPMEC system that comprises two MDs and one HAP. The HAP is equipped with both an RF energy transmitter and an MEC server, offering wireless energy and computation offloading services to MDs within the base station’s coverage area. Both MDs work on the same frequency band and are tasked with completing computation-intensive, delay-sensitive operations. One MD, known as the Far User (FU), is situated at a considerable distance from the HAP. In contrast, the other MD, called the Near User (NU), is in closer proximity to the HAP and serves as a facilitator, aiding the FU by offloading a portion of its tasks to the HAP.
In a multi-node network, interference between signals can seriously affect the performance of the network. In our system model, we have strategically employed a Time Division (TD) approach to manage interference, which allocates distinct time slots for WPT and task offloading, ensuring that each MD operates without interference from others [30]. We implement a Time Division Multiple Access (TDMA) framework with each time slot lasting T seconds. At the start of each time slot, both MDs capture RF signals emitted by the HAP for energy harvesting. Given the suboptimal channel conditions between the FU and the HAP, along with the compounded near–far effect, the FU is capable of transferring some of its computational data to the NU. The NU then forwards these data to the HAP. Additionally, the NU is scheduled to offload its own computational tasks to the HAP. The primary symbols and definitions used are enumerated in Table 1.

2.2. Task Queue Model

Both the FU and NU maintain a queue to buffer the randomly arriving tasks awaiting processing. Let $Q_f(t)$ and $Q_n(t)$ denote the queue lengths at the FU and NU at slot t, respectively, which can be observed at the beginning of time slot t. The backlog of each task queue updates according to the following equations:

$$Q_f(t+1) = \max\left\{Q_f(t) - \left[d_f^{\mathrm{loc}}(t) + d_{f,n}^{\mathrm{off}}(t)\right],\, 0\right\} + A_f(t)$$

$$Q_n(t+1) = \max\left\{Q_n(t) - \left[d_n^{\mathrm{loc}}(t) + d_{n,a2}^{\mathrm{off}}(t)\right],\, 0\right\} + A_n(t)$$
where $A_f(t)$ and $A_n(t)$ represent the raw task data arriving at the FU and NU data queues during time slot t, respectively. Note that the computational task arrivals $A_f(t)$ and $A_n(t)$ at each time slot t are independently and identically distributed (i.i.d.) across the entire time period and satisfy $\mathbb{E}[A_f(t)] = \lambda_f$ and $\mathbb{E}[A_n(t)] = \lambda_n$. In reality, $A_f(t)$ and $A_n(t)$ can follow any probability distribution [42]. Here, we assume that $A_f(t)$ and $A_n(t)$ follow the commonly used exponential distribution with means $\lambda_f$ and $\lambda_n$, respectively.
In reality, the data queue lengths $Q_f(t)$ and $Q_n(t)$ are always non-negative. Additionally, since the amount of data processed by the MDs cannot exceed the current queue length, we have the following constraints:

$$d_f^{\mathrm{loc}}(t) + d_{f,n}^{\mathrm{off}}(t) \le Q_f(t)$$

$$d_n^{\mathrm{loc}}(t) + d_{n,a2}^{\mathrm{off}}(t) \le Q_n(t)$$
Additionally, if the local computation of the MD is capable of processing the entire current queue, then local computation is given priority, which is also in line with practical requirements. We assume that each MD necessitates a baseline energy expenditure to sustain its system’s fundamental operations. Our objective is to maximize the overall system’s data processing capacity while meeting the total energy emission constraints of the HAP. Therefore, we neglect the energy consumption needed for maintaining the task queue of each MD, as in [13,30,43].
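To make the queue dynamics concrete, the backlog recursion above can be simulated directly. The sketch below is a minimal illustration, not part of the paper's algorithm: `queue_step` applies the max-based update, and `simulate` drives it with exponentially distributed arrivals; all function names and parameter values are hypothetical.

```python
import random

def queue_step(q, d_loc, d_off, arrival):
    """One backlog update: Q(t+1) = max(Q(t) - (d_loc + d_off), 0) + A(t)."""
    return max(q - (d_loc + d_off), 0.0) + arrival

def simulate(lam=2.0, d_loc=1.5, d_off=1.0, slots=10000, seed=1):
    """Average backlog when exponential arrivals (mean lam bits/slot)
    meet a fixed per-slot service of d_loc + d_off bits."""
    rng = random.Random(seed)
    q, total = 0.0, 0.0
    for _ in range(slots):
        # expovariate takes the rate parameter, so mean lam -> rate 1/lam
        q = queue_step(q, d_loc, d_off, rng.expovariate(1.0 / lam))
        total += q
    return total / slots
```

With the service rate (2.5 bits/slot) above the mean arrival rate (2.0 bits/slot), the time-average backlog stays bounded, which is exactly the strong-stability behavior targeted later in Definition 1.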

2.3. Computation Model

(1) Local Computation Mode: MDs are capable of performing continuous local computations. Let $f_f$ and $f_n$ denote the local CPU frequencies of the FU and NU, respectively. Additionally, let $\phi_f$ and $\phi_n$ represent the CPU cycles required to process one bit of data at the FU and NU, respectively. Consequently, the amount of raw data (in bits) processed locally by the MDs within a time slot t, and the corresponding energy expenditure over that period, can be determined by the following threshold rules.
For the FU, the local data processing in bits is given by

$$d_f^{\mathrm{loc}}(t) = \begin{cases} \dfrac{f_f T}{\phi_f}, & \text{if } \dfrac{f_f T}{\phi_f} < Q_f(t) \\[4pt] Q_f(t), & \text{otherwise} \end{cases}$$
At the beginning of each time slot t, we obtain the queue length of the FU. If the queue length exceeds the FU's local processing capacity, the FU will utilize the user collaboration mechanism to offload part of its tasks; if the queue length is within its computing capacity, the FU performs complete local computation (that is, $\tau_1 = 0$, $\tau_2 = 0$). Combining this with $d(t) = f\,t/\phi$ (where $d(t)$ denotes the amount of data processed locally at slot t) [30], the corresponding energy consumed during this period is
$$e_f^{\mathrm{loc}}(t) = \begin{cases} \kappa f_f^3 T, & \text{if } \dfrac{f_f T}{\phi_f} < Q_f(t) \\[4pt] \kappa f_f^2\, Q_f(t)\, \phi_f, & \text{otherwise} \end{cases}$$
Similarly, for the NU, the local data processing is
$$d_n^{\mathrm{loc}}(t) = \begin{cases} \dfrac{f_n T}{\phi_n}, & \text{if } \dfrac{f_n T}{\phi_n} < Q_n(t) \\[4pt] Q_n(t), & \text{otherwise} \end{cases}$$

When the NU engages in complete local computation, $\tau_3 = 0$, with the energy consumed being

$$e_n^{\mathrm{loc}}(t) = \begin{cases} \kappa f_n^3 T, & \text{if } \dfrac{f_n T}{\phi_n} < Q_n(t) \\[4pt] \kappa f_n^2\, Q_n(t)\, \phi_n, & \text{otherwise} \end{cases}$$

Here, $\kappa > 0$ represents the computational energy efficiency coefficient of the processing chip [44].
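The threshold rules above translate into a small helper per MD. The sketch below is a direct transcription of the two cases; the numeric values used in the usage comment are hypothetical.

```python
def local_compute(f, phi, T, kappa, q):
    """Threshold rule for local computing at one MD in slot t.

    Returns (d_loc, e_loc): bits processed locally and the energy spent.
    The slot capacity is f*T/phi bits. If the backlog q exceeds it, the MD
    runs at full speed for the whole slot (energy kappa*f^3*T); otherwise it
    drains the queue, which takes q*phi cycles at kappa*f^2 energy per cycle.
    """
    capacity = f * T / phi
    if capacity < q:
        return capacity, kappa * f**3 * T
    return q, kappa * f**2 * q * phi

# Example (hypothetical units): f=100 cycles/s, phi=10 cycles/bit, T=1 s
# gives a 10-bit capacity, so a 25-bit backlog triggers the full-slot branch.
```

The same function serves both the FU and the NU, since Equations for $d_f^{\mathrm{loc}}$/$e_f^{\mathrm{loc}}$ and $d_n^{\mathrm{loc}}$/$e_n^{\mathrm{loc}}$ differ only in their parameters.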
(2) Offloading Mode: We adopt a partial task offloading strategy for both the FU and NU, as illustrated in Figure 2. Initially, the FU transfers a portion of its task data to the NU during time $\tau_1(t)$, using transmission power $P_1(t)$. Subsequently, the NU forwards the FU's data to the HAP during time $\tau_2(t)$ with transmission power $P_2(t)$. Furthermore, the NU offloads its own data to the HAP during time $\tau_3(t)$ using transmission power $P_3(t)$. Throughout this process, the NU consumes the energy it has harvested. It is essential that the amount of data offloaded by the FU for remote computation at the HAP corresponds to the data initially offloaded to the NU.
In practical applications, the HAP possesses significantly greater computational capacity and transmission power than the MDs. As a result, the time required for data computation and feedback at the HAP is negligible. Therefore, the total time allocated for energy harvesting and task offloading by the MDs must not exceed the time slot duration T, as represented by the following inequality:

$$\tau_0(t) + \tau_1(t) + \tau_2(t) + \tau_3(t) \le T$$
We focus solely on the energy consumption of the MDs during the offloading process. The transmission power of the FU is constrained by $P_1(t) \le P_f^{\max}$. According to Shannon's theorem, the amount of data offloaded from the FU to the NU is given by

$$d_{f,n}^{\mathrm{off}}(t) = \tau_1(t)\, W \log_2\!\left(1 + \frac{P_1(t)\, g_f(t)}{\sigma^2}\right)$$

The corresponding energy consumption for this process is

$$e_{f,n}^{\mathrm{off}}(t) = P_1(t)\,\tau_1(t)$$

where W is the channel bandwidth, $g_f(t)$ is the channel gain from the FU to the NU during time slot t, and $\sigma^2$ represents the power of the additive white Gaussian noise.
After the initial time $\tau_0(t) + \tau_1(t)$, the NU has received the offloaded tasks from the FU. The NU must then determine the transmission powers $P_2(t)$ for relaying the FU's data to the HAP and $P_3(t)$ for offloading its own data. The quantity of data relayed by the NU from the FU to the HAP is expressed as

$$d_{n,a1}^{\mathrm{off}}(t) = \tau_2(t)\, W \log_2\!\left(1 + \frac{P_2(t)\, g_n(t)}{\sigma^2}\right)$$

with the associated energy consumption being

$$e_{n,a1}^{\mathrm{off}}(t) = P_2(t)\,\tau_2(t)$$

Similarly, the amount of data offloaded by the NU to the HAP is

$$d_{n,a2}^{\mathrm{off}}(t) = \tau_3(t)\, W \log_2\!\left(1 + \frac{P_3(t)\, g_n(t)}{\sigma^2}\right)$$

and the corresponding energy consumption is

$$e_{n,a2}^{\mathrm{off}}(t) = P_3(t)\,\tau_3(t)$$

Here, $P_2(t), P_3(t) \le P_n^{\max}$ denote the transmission powers of the NU, and $g_n(t)$ represents the channel gain from the NU to the HAP during the respective time slot.
During each time slot t, the NU is required to relay all offloading tasks received from the FU, adhering to the constraint that the amount of data offloaded by the FU does not exceed the NU's capability to relay it:

$$d_{f,n}^{\mathrm{off}}(t) \le d_{n,a1}^{\mathrm{off}}(t)$$
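All three offloading phases follow the same Shannon-rate pattern, so they can share one helper. The sketch below is an illustrative transcription (not the paper's code); the example numbers in the test are chosen purely for easy mental arithmetic.

```python
import math

def offload(tau, p, gain, W, sigma2):
    """Bits moved and energy spent in one offloading phase.

    d = tau * W * log2(1 + p*gain/sigma2)   (Shannon rate over duration tau)
    e = p * tau                              (transmit energy)
    """
    if tau <= 0.0:
        return 0.0, 0.0
    d = tau * W * math.log2(1.0 + p * gain / sigma2)
    return d, p * tau
```

For instance, with $\tau=1$, $W=1$, and $P g/\sigma^2 = 3$ the phase moves $\log_2 4 = 2$ bits per unit bandwidth; calling it three times with $(\tau_1, P_1, g_f)$, $(\tau_2, P_2, g_n)$, and $(\tau_3, P_3, g_n)$ reproduces $d_{f,n}^{\mathrm{off}}$, $d_{n,a1}^{\mathrm{off}}$, and $d_{n,a2}^{\mathrm{off}}$.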

2.4. Energy Consumption Model

As illustrated in Figure 2, during the initial phase $\tau_0(t)$, the HAP disseminates radio frequency energy to the MDs. The energy harvested by each MD can be represented as

$$e_f^{\mathrm{eh}}(t) = \alpha_1\, \tau_0(t)\, P_0(t)$$

$$e_n^{\mathrm{eh}}(t) = \alpha_2\, \tau_0(t)\, P_0(t)$$

where $e_f^{\mathrm{eh}}(t)$ and $e_n^{\mathrm{eh}}(t)$ denote the energy harvested by the FU and NU from the HAP during time slot t, respectively. The coefficients are defined as $\alpha_1 = \mu h_f(t)$ and $\alpha_2 = \mu h_n(t)$, where $\mu$ ($0 < \mu < 1$) represents the energy conversion efficiency. $P_0(t)$ signifies the RF energy transmission power of the HAP, and the channel power gains from the HAP to the FU and NU are represented by $h_f(t)$ and $h_n(t)$, respectively. Assuming block fading, these gains remain constant within a time slot but fluctuate independently between frames. It is important to note that the WPT from the HAP is the sole energy source for performing computational tasks.
In each time slot t, the energy consumption of the FU and NU must adhere to the following constraints:

$$e_f^{\mathrm{loc}}(t) + e_{f,n}^{\mathrm{off}}(t) \le \alpha_1 P_0(t)\,\tau_0(t)$$

$$e_n^{\mathrm{loc}}(t) + e_{n,a1}^{\mathrm{off}}(t) + e_{n,a2}^{\mathrm{off}}(t) \le \alpha_2 P_0(t)\,\tau_0(t)$$
For a sustainable wireless charging IoT network, all energy for the wireless nodes originates from the RF signals emitted remotely by the HAP. Therefore, controlling the energy consumption of the HAP is of significant importance to the development of the entire IoT network. Consequently, we assume that the time-average energy consumption of the HAP is subject to the following constraint:

$$\overline{P_0(t)\tau_0(t)} \le \gamma$$

where $\bar{x} \triangleq \limsup_{K\to\infty} \frac{1}{K}\sum_{t=0}^{K-1} \mathbb{E}\{x\}$, $P_0(t)\tau_0(t)$ is the energy consumption of the HAP in the t-th time slot, and $\gamma$ is the energy threshold.
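The per-slot energy bookkeeping of this section, harvesting, the two energy-causality constraints, and the running time-average HAP cost, can be checked with a few one-liners. This is an illustrative sketch only; every function name and value is hypothetical.

```python
def harvested(mu, h, p0, tau0):
    """Energy harvested by an MD in slot t: e = mu * h * P0 * tau0."""
    return mu * h * p0 * tau0

def energy_feasible(e_consumed, mu, h, p0, tau0):
    """Energy causality: an MD's consumption cannot exceed what it harvested."""
    return e_consumed <= harvested(mu, h, p0, tau0)

def hap_average_ok(p0_tau0_history, gamma):
    """Empirical check of the time-average HAP emission constraint:
    the mean of P0(t)*tau0(t) over the observed slots must not exceed gamma."""
    return sum(p0_tau0_history) / len(p0_tau0_history) <= gamma
```

In a simulator, `energy_feasible` would be evaluated once per MD per slot, while `hap_average_ok` is a long-run check that the next section replaces with the virtual-queue mechanism.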

2.5. Problem Formulation

For a dynamically changing WPT-MEC network system, maintaining system stability is crucial due to the stochastic arrival of tasks and the dynamic changes in the channel environment [42]. Therefore, we first provide the definition of system queue stability as follows.
Definition 1.
(Queue Stability). The task data queue is strongly stable if it satisfies [42]

$$\lim_{K\to\infty} \frac{1}{K}\sum_{t=1}^{K} \mathbb{E}\left[Q_f(t)\right] < \infty, \qquad \lim_{K\to\infty} \frac{1}{K}\sum_{t=1}^{K} \mathbb{E}\left[Q_n(t)\right] < \infty$$
In this paper, we aim to design a dynamic offloading algorithm to maximize the long-term average weighted-sum computation rate of all MDs. This involves making decisions on the time allocation $\tau(t) = \left[\tau_0(t), \tau_1(t), \tau_2(t), \tau_3(t)\right]$ and the power allocation $P(t) = \left[P_0(t), P_1(t), P_2(t), P_3(t)\right]$ at each time slot t under the long-term power consumption constraint of the IoT network. Our proposed algorithm should ensure the stability of the data queues and work without prior knowledge of random channel conditions and data arrival patterns, optimizing within each time slot. By denoting $\boldsymbol{\tau} = \{\tau(t)\}_{t=1}^{K}$ and $\boldsymbol{P} = \{P(t)\}_{t=1}^{K}$, the weighted computation rate maximization problem of the WPT-MEC network can be formulated as the following MSSO problem (P1):
$$\begin{aligned} \text{(P1)}: \max_{\boldsymbol{\tau}, \boldsymbol{P}} \quad & \lim_{K\to\infty} \frac{1}{K}\sum_{t=1}^{K} D_{\mathrm{tot}}(t) & \text{(22a)}\\ \text{s.t.} \quad & \tau_0(t) + \tau_1(t) + \tau_2(t) + \tau_3(t) \le T, & \text{(22b)}\\ & e_f^{\mathrm{loc}}(t) + e_{f,n}^{\mathrm{off}}(t) \le \alpha_1 P_0(t)\tau_0(t), & \text{(22c)}\\ & e_n^{\mathrm{loc}}(t) + e_{n,a1}^{\mathrm{off}}(t) + e_{n,a2}^{\mathrm{off}}(t) \le \alpha_2 P_0(t)\tau_0(t), & \text{(22d)}\\ & \overline{P_0(t)\tau_0(t)} \le \gamma, & \text{(22e)}\\ & \lim_{K\to\infty} \frac{1}{K}\sum_{t=1}^{K}\mathbb{E}\left[Q_f(t)\right] < \infty, \quad \lim_{K\to\infty} \frac{1}{K}\sum_{t=1}^{K}\mathbb{E}\left[Q_n(t)\right] < \infty, & \text{(22f)}\\ & d_f^{\mathrm{loc}}(t) + d_{f,n}^{\mathrm{off}}(t) \le Q_f(t), & \text{(22g)}\\ & d_n^{\mathrm{loc}}(t) + d_{n,a2}^{\mathrm{off}}(t) \le Q_n(t), & \text{(22h)}\\ & d_{f,n}^{\mathrm{off}}(t) \le d_{n,a1}^{\mathrm{off}}(t), & \text{(22i)}\\ & \tau_0(t), \tau_1(t), \tau_2(t), \tau_3(t) \ge 0, & \text{(22j)}\\ & 0 \le P_0(t) \le P_a^{\max},\; 0 \le P_1(t) \le P_f^{\max},\; 0 \le P_2(t), P_3(t) \le P_n^{\max}, & \text{(22k)} \end{aligned}$$
where $D_{\mathrm{tot}}(t) = \omega_1\left[d_f^{\mathrm{loc}}(t) + d_{f,n}^{\mathrm{off}}(t)\right] + \omega_2\left[d_n^{\mathrm{loc}}(t) + d_{n,a2}^{\mathrm{off}}(t)\right]$, and $\omega_1$ and $\omega_2$ denote the fixed weights of the FU and NU, respectively. Constraint (22b) ensures that the total time allocation does not exceed the time slot. Constraints (22c) and (22d) represent the energy constraints for the FU and NU, respectively. Constraint (22e) captures the average power constraint of the system. Constraints (22f) guarantee the stability of the data queues. Constraints (22g) and (22h) denote the maximum data processing limits for the FU and NU within time slot t. Constraint (22i) ensures that the data offloaded from the FU can be relayed within the same time slot. Constraints (22j) and (22k) bound the time and power allocations. The problem presents significant challenges in two main aspects: (1) The volatility of task arrivals and the dynamically varying nature of wireless channel conditions for both data transmission and WPT introduce a stochastic element into the optimization. (2) The energy expenditure at the HAP exhibits temporal coupling, and the WPT time period $\tau_0$ also plays a crucial role in determining the allocation of offloading time. Therefore, Problem (P1) cannot be easily solved by conventional convex optimization methods or dynamic programming techniques, even with complete future system information, due to the 'curse of dimensionality' [42]. To address this, we introduce a Lyapunov-based optimization algorithm that transforms the stochastic problem into a deterministic one within each time slot.

3. Algorithm Design

3.1. Lyapunov Optimization

To address the average power constraint, we introduce a virtual energy queue with the update rule $Y(t+1) = \left[Y(t) - \gamma + P_0(t)\tau_0(t)\right]^+$, where $[x]^+ \triangleq \max\{0, x\}$ [42]. Here, $Y(t)$ can be seen as a queue with random "energy arrivals" $P_0(t)\tau_0(t)$ and a fixed "service rate" $\gamma$. Thus, we derive the following Lemma 1.
Lemma 1.
The long-term average power constraint will be met if the virtual queue $Y(t)$ satisfies average rate stability, i.e.,

$$\lim_{K\to\infty} \frac{1}{K}\,\mathbb{E}\left[Y(K)\right] = 0$$
Proof. 
According to the update rule $Y(t+1) = \left[Y(t) - \gamma + P_0(t)\tau_0(t)\right]^+$, we can conclude that

$$Y(t+1) \ge Y(t) - \gamma + P_0(t)\tau_0(t)$$

Summing this inequality over $t \in \{0, 1, 2, \ldots, K-1\}$ and dividing by K yields

$$\frac{Y(K) - Y(0)}{K} \ge \frac{1}{K}\sum_{t=0}^{K-1}\left[P_0(t)\tau_0(t) - \gamma\right]$$

Next, we take expectations on both sides and let $K \to \infty$, obtaining

$$\limsup_{K\to\infty} \frac{\mathbb{E}\left[Y(K)\right]}{K} \ge \limsup_{K\to\infty} \frac{1}{K}\sum_{t=0}^{K-1}\mathbb{E}\left[P_0(t)\tau_0(t) - \gamma\right]$$

By our assumption $\lim_{K\to\infty} \frac{1}{K}\mathbb{E}[Y(K)] = 0$, it follows that $\limsup_{K\to\infty} \frac{1}{K}\sum_{t=0}^{K-1}\mathbb{E}\left[P_0(t)\tau_0(t) - \gamma\right] \le 0$, i.e., $\overline{P_0(t)\tau_0(t)} \le \gamma$. The lemma is proven.    □
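The virtual-queue mechanism of Lemma 1 is easy to simulate: apply the update $Y(t+1) = [Y(t) - \gamma + P_0(t)\tau_0(t)]^+$ to a cost sequence and observe that Y stays bounded exactly when the average emission respects $\gamma$. The sketch below is illustrative only; the cost sequences in the test are hypothetical.

```python
def virtual_queue_trace(costs, gamma):
    """Run Y(t+1) = max(Y(t) - gamma + cost_t, 0) over a sequence of
    per-slot HAP energy costs P0(t)*tau0(t); return the final backlog Y."""
    y = 0.0
    for c in costs:
        y = max(y - gamma + c, 0.0)
    return y
```

When every slot spends 2 units against a budget of $\gamma = 1$, Y grows by one per slot (constraint violated); when every slot spends 0.5, Y stays pinned at zero, mirroring the average rate stability condition $\mathbb{E}[Y(K)]/K \to 0$.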
By defining a network queue vector $\Theta(t) = \left[Q_n(t), Q_f(t), Y(t)\right]$, which encompasses the queue lengths of the NU, the FU, and the virtual queue, respectively, we can obtain the associated Lyapunov function $L(\Theta(t))$ and the Lyapunov drift $\Delta(\Theta(t))$ as

$$L(\Theta(t)) \triangleq \frac{1}{2}\left[Q_n(t)^2 + Q_f(t)^2 + Y(t)^2\right]$$

$$\Delta(\Theta(t)) \triangleq \mathbb{E}\left[L(\Theta(t+1)) - L(\Theta(t)) \mid \Theta(t)\right]$$
Employing the Lyapunov optimization theory, we derive the drift-plus-penalty as
$$\Delta_V(\Theta(t)) = \Delta(\Theta(t)) - V\,\mathbb{E}\left[D_{\mathrm{tot}}(t) \mid \Theta(t)\right]$$
Here,  V > 0  signifies the penalty’s importance weight. The Lyapunov optimization method aims to minimize the upper bound of the drift plus penalty, thereby maximizing the objective function while ensuring queue stability. Optimizing the objective function across each time slot leads to long-term optimality. It is important to note that a higher value of V prioritizes objective maximization, whereas a lower value emphasizes queue stability. To obtain the upper bound of  Δ V Θ t , we derive the following Lemma 2.
Lemma 2.
At each time slot t, for any control strategy $(\tau(t), P(t))$, the one-slot Lyapunov drift-plus-penalty $\Delta_V(\Theta(t))$ is bounded as per the following inequality:

$$\begin{aligned} \Delta_V(\Theta(t)) \le\; & B - V\,\mathbb{E}\left[D_{\mathrm{tot}}(t) \mid \Theta(t)\right] + Q_f(t)\,\mathbb{E}\left[A_f(t) - d_f^{\mathrm{loc}}(t) - d_{f,n}^{\mathrm{off}}(t) \mid \Theta(t)\right] \\ & + Q_n(t)\,\mathbb{E}\left[A_n(t) - d_n^{\mathrm{loc}}(t) - d_{n,a2}^{\mathrm{off}}(t) \mid \Theta(t)\right] + Y(t)\,\mathbb{E}\left[P_0(t)\tau_0(t) - \gamma \mid \Theta(t)\right] \end{aligned}$$

where B is a constant that satisfies, for all t,

$$B \ge \frac{1}{2}\,\mathbb{E}\!\left[\left(d_f^{\mathrm{loc}}(t) + d_{f,n}^{\mathrm{off}}(t)\right)^2 + A_f(t)^2 \,\middle|\, \Theta(t)\right] + \frac{1}{2}\,\mathbb{E}\!\left[\left(d_n^{\mathrm{loc}}(t) + d_{n,a2}^{\mathrm{off}}(t)\right)^2 + A_n(t)^2 \,\middle|\, \Theta(t)\right] + \frac{1}{2}\,\mathbb{E}\!\left[\gamma^2 + \left(P_0(t)\tau_0(t)\right)^2 \,\middle|\, \Theta(t)\right]$$
Proof. 
For all $a, b, c \ge 0$, we have the inequality $\left(\max[a - b, 0] + c\right)^2 \le a^2 + b^2 + c^2 + 2a(c - b)$. Using this inequality, we have

$$\Delta Q_f(t) = \frac{1}{2}\left[Q_f(t+1)^2 - Q_f(t)^2\right] \le \frac{\left(d_f^{\mathrm{loc}}(t) + d_{f,n}^{\mathrm{off}}(t)\right)^2 + A_f(t)^2}{2} + Q_f(t)\left[A_f(t) - d_f^{\mathrm{loc}}(t) - d_{f,n}^{\mathrm{off}}(t)\right] \quad \text{(31)}$$

$$\Delta Q_n(t) = \frac{1}{2}\left[Q_n(t+1)^2 - Q_n(t)^2\right] \le \frac{\left(d_n^{\mathrm{loc}}(t) + d_{n,a2}^{\mathrm{off}}(t)\right)^2 + A_n(t)^2}{2} + Q_n(t)\left[A_n(t) - d_n^{\mathrm{loc}}(t) - d_{n,a2}^{\mathrm{off}}(t)\right] \quad \text{(32)}$$

$$\Delta Y(t) = \frac{1}{2}\left[Y(t+1)^2 - Y(t)^2\right] \le \frac{\left(P_0(t)\tau_0(t)\right)^2 + \gamma^2}{2} + Y(t)\left[P_0(t)\tau_0(t) - \gamma\right] \quad \text{(33)}$$
Upon combining inequalities (31)–(33), the resulting expression yields the upper bound of the Lyapunov drift-plus-penalty.    □
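The elementary inequality $\left(\max[a-b,0]+c\right)^2 \le a^2 + b^2 + c^2 + 2a(c-b)$ that opens the proof can be spot-checked numerically; this is a quick sanity sketch over a hypothetical grid of non-negative values, not part of the paper.

```python
import itertools

def drift_bound_holds(a, b, c):
    """Check (max(a - b, 0) + c)^2 <= a^2 + b^2 + c^2 + 2*a*(c - b)."""
    lhs = (max(a - b, 0.0) + c) ** 2
    rhs = a * a + b * b + c * c + 2.0 * a * (c - b)
    return lhs <= rhs + 1e-9  # tiny tolerance for floating point

# Exhaustive check over a small non-negative grid
grid = [0.0, 0.3, 1.0, 2.5, 7.0]
all_ok = all(drift_bound_holds(a, b, c)
             for a, b, c in itertools.product(grid, repeat=3))
```

Both branches of the max are covered: when $a \ge b$ the slack is $2bc \ge 0$, and when $a < b$ the slack is $(a-b)^2 + 2ac \ge 0$, which is why the grid check passes everywhere.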
Here, the parameter V serves as a trade-off between the task computation rate and queue stability. An increased value of V directs the algorithm to prioritize task computation rates, potentially at the expense of queue stability. A suitable selection of V will enable the system to achieve a balance between the task computation rate and the task queue stability. By applying the drift-plus-penalty minimization technique, and eliminating the constant term observable at the start of time slot t in (29), we can obtain the optimal solution to problem (P1) by solving the following problem in each individual time slot.
$$\begin{aligned} \text{(P2)}: \min_{\tau(t), P(t)} \quad & Y(t)P_0(t)\tau_0(t) - \left[Q_f(t) + \omega_1 V\right]\left[\tau_1(t)\, W \log_2\!\left(1 + \frac{P_1(t)\,g_f(t)}{\sigma^2}\right) + d_f^{\mathrm{loc}}(t)\right] \\ & - \left[Q_n(t) + \omega_2 V\right]\left[\tau_3(t)\, W \log_2\!\left(1 + \frac{P_3(t)\,g_n(t)}{\sigma^2}\right) + d_n^{\mathrm{loc}}(t)\right] & \text{(34a)}\\ \text{s.t.} \quad & \text{(22b)--(22d), (22g)--(22k)} & \text{(34b)} \end{aligned}$$
Due to the non-convex constraints (34b), (P2) remains a non-convex problem. We introduce the auxiliary variables $\varphi_0 = P_0(t)\tau_0(t)$, $\varphi_1 = P_1(t)\tau_1(t)$, $\varphi_2 = P_2(t)\tau_2(t)$, and $\varphi_3 = P_3(t)\tau_3(t)$. According to (22k), we have $0 \le \varphi_0 \le \tau_0 P_a^{\max}$, $0 \le \varphi_1 \le \tau_1 P_f^{\max}$, $0 \le \varphi_2 \le \tau_2 P_n^{\max}$, and $0 \le \varphi_3 \le \tau_3 P_n^{\max}$. We denote $\varphi = \left[\varphi_0, \varphi_1, \varphi_2, \varphi_3\right]$. To simplify the mathematical expressions, we omit the time index t hereafter. Thus, (P2) can be equivalently rewritten as
(35a) P 3 : max τ , φ Y φ 0 + C 1 τ 1 W log 2 1 + φ 1 A τ 1 + d f loc + C 2 τ 3 W log 2 1 + φ 3 B τ 3 + d n loc (35b) s . t . τ 0 + τ 1 + τ 2 + τ 3 τ , (35c)     φ 1 + k f f 3 T α 1 φ 0 , (35d)     φ 2 + φ 3 + k f n 3 T α 2 φ 0 , (35e)     τ 1 W log 2 1 + φ 1 A τ 1 τ 2 W log 2 1 + φ 2 B τ 2 , (35f)     f f T ϕ f + τ 1 W log 2 1 + φ 1 A τ 1 Q f , (35g)     f n T ϕ n + τ 3 W log 2 1 + φ 3 B τ 3 Q n ,       0 φ 0 τ 0 P a max , 0 φ 1 τ 1 P f max , (35h)     0 φ 2 τ 2 P n max , 0 φ 3 τ 3 P n max , (35i)     ( 22 j )
The coefficients are defined as $C_1 = Q_f + \omega_1 V$ and $C_2 = Q_n + \omega_2 V$, and we set $A = g_f/\sigma^2$ and $B = g_n/\sigma^2$. Owing to the non-convex constraint (35e), problem (P3) remains non-convex. To address this, we introduce auxiliary variables $\psi_1$ and $\psi_2$, defined such that $\psi_1 \le \tau_1 W \log_2\left(1+\varphi_1 A/\tau_1\right)$ and $\psi_2 \le \tau_3 W \log_2\left(1+\varphi_3 B/\tau_3\right)$. With these definitions, problem (P3) can be reformulated as follows:
$$
\begin{aligned}
(\mathrm{P4}):\ \max_{\boldsymbol{\tau},\,\boldsymbol{\varphi},\,\boldsymbol{\psi}}\quad & -Y\varphi_0 + C_1\left(\tau_1 W \log_2\!\left(1+\frac{\varphi_1 A}{\tau_1}\right) + d_f^{loc}\right) + C_2\left(\tau_3 W \log_2\!\left(1+\frac{\varphi_3 B}{\tau_3}\right) + d_n^{loc}\right) && \text{(36a)} \\
\text{s.t.}\quad & \tau_0+\tau_1+\tau_2+\tau_3 \le \tau, && \text{(36b)} \\
& \varphi_1 + k f_f^3 T \le \alpha_1 \varphi_0, && \text{(36c)} \\
& \varphi_2 + \varphi_3 + k f_n^3 T \le \alpha_2 \varphi_0, && \text{(36d)} \\
& \psi_1 \le \tau_2 W \log_2\!\left(1+\frac{\varphi_2 B}{\tau_2}\right), && \text{(36e)} \\
& \psi_1 \le \tau_1 W \log_2\!\left(1+\frac{\varphi_1 A}{\tau_1}\right), && \text{(36f)} \\
& \psi_1 \le Q_f - \frac{f_f T}{\phi_f}, && \text{(36g)} \\
& \psi_2 \le \tau_3 W \log_2\!\left(1+\frac{\varphi_3 B}{\tau_3}\right), && \text{(36h)} \\
& \psi_2 \le Q_n - \frac{f_n T}{\phi_n}, && \text{(36i)} \\
& 0 \le \varphi_0 \le \tau_0 P_a^{max},\quad 0 \le \varphi_1 \le \tau_1 P_f^{max}, && \text{(36j)} \\
& 0 \le \varphi_2 \le \tau_2 P_n^{max},\quad 0 \le \varphi_3 \le \tau_3 P_n^{max}, && \text{(36k)} \\
& (22j),
\end{aligned}
$$
where $\psi_1 = \tau_1 W \log_2\left(1+\varphi_1 A/\tau_1\right)$ and $\psi_2 = \tau_3 W \log_2\left(1+\varphi_3 B/\tau_3\right)$ hold when problem (P4) reaches its optimal solution, so the optimum of (P4) aligns with that of the original problem (P3). Specifically, for constraint (36e), $\tau_2 W \log_2\left(1+\varphi_2 B/\tau_2\right)$ is a concave function of $(\varphi_2, \tau_2)$, since the perspective operation applied to $W \log_2\left(1+\varphi_2 B\right)$ preserves concavity [45]. Thus, constraint (36e) is convex, making problem (P4) a convex optimization problem. To solve it, we employ standard convex optimization tools, such as CVX [46].
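The concavity claim behind this argument can be spot-checked numerically: $\tau\, W \log_2(1+\varphi A/\tau)$ is the perspective of the concave function $W\log_2(1+\varphi A)$ and is therefore jointly concave in $(\varphi, \tau)$. The sketch below (with arbitrary illustrative values for $W$ and $A$, not taken from the paper) verifies the midpoint-concavity inequality on random points:

```python
import math
import random

W, A = 1e6, 2.5  # illustrative bandwidth (Hz) and normalized gain, not Table 2 values

def f(phi, tau):
    """Perspective of the concave function W*log2(1 + phi*A):
    the offloaded throughput tau * W * log2(1 + phi*A/tau)."""
    return tau * W * math.log2(1.0 + phi * A / tau)

random.seed(0)
concave_everywhere = True
for _ in range(1000):
    p1, t1 = random.uniform(0.01, 5.0), random.uniform(0.01, 1.0)
    p2, t2 = random.uniform(0.01, 5.0), random.uniform(0.01, 1.0)
    mid = f(0.5 * (p1 + p2), 0.5 * (t1 + t2))
    avg = 0.5 * (f(p1, t1) + f(p2, t2))
    # Midpoint concavity: f(midpoint) >= average of endpoint values
    # (checked up to a relative floating-point tolerance).
    if mid < avg - 1e-6 * max(1.0, abs(avg)):
        concave_everywhere = False
print(concave_everywhere)  # prints True
```

Since the perspective operation preserves concavity exactly, the inequality holds on every sampled pair; the check is only a sanity test, not a proof.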
We introduce an efficient dynamic task offloading algorithm with user assistance to tackle problem (P4). Additionally, we apply the Lagrange method to gain insight into the structure of the optimal solution.
Theorem 1.
Given non-negative Lagrange multipliers $\lambda_i\ (i = 1, 2, \ldots, 12)$, the optimal power allocation $\boldsymbol{P}^{*} = \left(P_0^{*}, P_1^{*}, P_2^{*}, P_3^{*}\right)$ must satisfy the following conditions:
$$
\begin{aligned}
P_0^{*} &= \begin{cases} 0, & \text{if } \tau_0 = 0 \\ P_a^{max}, & \text{otherwise} \end{cases} && \text{(37a)} \\
P_1^{*} &= \begin{cases} 0, & \text{if } \tau_1 = 0 \\ \left[\dfrac{W\left(Q_f + V + \lambda_5\right)}{\ln 2\left(\lambda_2 + \lambda_{10}\right)} - \dfrac{\sigma^2}{g_f}\right]^{+}, & \text{otherwise} \end{cases} && \text{(37b)} \\
P_2^{*} &= \begin{cases} 0, & \text{if } \tau_2 = 0 \\ \left[\dfrac{\lambda_4 W}{\ln 2\left(\lambda_3 + \lambda_{11}\right)} - \dfrac{\sigma^2}{g_n}\right]^{+}, & \text{otherwise} \end{cases} && \text{(37c)} \\
P_3^{*} &= \begin{cases} 0, & \text{if } \tau_3 = 0 \\ \left[\dfrac{W\left(Q_n + V + \lambda_7\right)}{\ln 2\left(\lambda_3 + \lambda_{12}\right)} - \dfrac{\sigma^2}{g_n}\right]^{+}, & \text{otherwise} \end{cases} && \text{(37d)}
\end{aligned}
$$

where $[x]^{+} = \max(x, 0)$.
Proof. 
Let $\lambda_i \ge 0\ (i = 1, 2, \ldots, 12)$ denote the Lagrange multipliers corresponding to the constraints of problem (P4). The Lagrangian function constructed from these multipliers is as follows:
$$
\begin{aligned}
\mathcal{L}(\boldsymbol{\tau}, \boldsymbol{\varphi}, \boldsymbol{\psi}, \boldsymbol{\lambda}) = {} & -Y(t)\varphi_0 + C_1\left(\tau_1 W \log_2\!\left(1+\frac{\varphi_1 A}{\tau_1}\right) + \frac{f_f T}{\phi_f}\right) + C_2\left(\tau_3 W \log_2\!\left(1+\frac{\varphi_3 B}{\tau_3}\right) + \frac{f_n T}{\phi_n}\right) \\
& - \lambda_1\left(\tau_0 + \tau_1 + \tau_2 + \tau_3 - \tau\right) - \lambda_2\left(\varphi_1 + k f_f^3 T - \alpha_1\varphi_0\right) - \lambda_3\left(\varphi_2 + \varphi_3 + k f_n^3 T - \alpha_2\varphi_0\right) \\
& - \lambda_4\left(\psi_1 - \tau_2 W \log_2\!\left(1+\frac{\varphi_2 B}{\tau_2}\right)\right) - \lambda_5\left(\psi_1 - \tau_1 W \log_2\!\left(1+\frac{\varphi_1 A}{\tau_1}\right)\right) - \lambda_6\left(\psi_1 - Q_f + \frac{f_f T}{\phi_f}\right) \\
& - \lambda_7\left(\psi_2 - \tau_3 W \log_2\!\left(1+\frac{\varphi_3 B}{\tau_3}\right)\right) - \lambda_8\left(\psi_2 - Q_n + \frac{f_n T}{\phi_n}\right) \\
& - \lambda_9\left(\varphi_0 - \tau_0 P_a^{max}\right) - \lambda_{10}\left(\varphi_1 - \tau_1 P_f^{max}\right) - \lambda_{11}\left(\varphi_2 - \tau_2 P_n^{max}\right) - \lambda_{12}\left(\varphi_3 - \tau_3 P_n^{max}\right)
\end{aligned}
$$
We then apply the first-order optimality conditions. Taking the partial derivatives of the Lagrangian with respect to $\varphi_0$, $\varphi_1$, $\varphi_2$, and $\varphi_3$ and setting them to zero yields
$$
\begin{aligned}
\varphi_0 &= \left[\tau_0 P_a^{max}\right]^{+}, && \text{(39a)} \\
\varphi_1 &= \tau_1\left[\frac{W\left(Q_f + V + \lambda_5\right)}{\ln 2\left(\lambda_2 + \lambda_{10}\right)} - \frac{\sigma^2}{g_f}\right]^{+}, && \text{(39b)} \\
\varphi_2 &= \tau_2\left[\frac{\lambda_4 W}{\ln 2\left(\lambda_3 + \lambda_{11}\right)} - \frac{\sigma^2}{g_n}\right]^{+}, && \text{(39c)} \\
\varphi_3 &= \tau_3\left[\frac{W\left(Q_n + V + \lambda_7\right)}{\ln 2\left(\lambda_3 + \lambda_{12}\right)} - \frac{\sigma^2}{g_n}\right]^{+}. && \text{(39d)}
\end{aligned}
$$
By examining the first-order derivatives, we can establish the necessary conditions for optimality. The relationship between the auxiliary variables and the original variables is leveraged to derive the theorem.    □
According to this theorem, during radio-frequency energy transfer, the HAP should transmit at full power whenever it transmits. As the bandwidth W increases, both the FU and the NU are more inclined to offload data, which reduces the share of tasks computed locally. Furthermore, an increase in V prompts the FU and NU to offload a larger portion of their tasks: a larger V places more weight on the computation rate, which in turn drives the MDs to offload more data to meet this objective.
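The structure of (37) is a water-filling-style threshold rule: each offloading power is a queue- and multiplier-dependent level minus the noise-to-gain ratio, clipped at zero. A small helper can make this concrete. All numbers below are made up for illustration (they are not the paper's Table 2 values), `weight` plays the role of $Q_f + V + \lambda_5$, `lam_pair` the role of $\lambda_2 + \lambda_{10}$, and the extra clip at `P_max` is a safety cap added in this sketch; in Theorem 1 that bound is enforced implicitly through $\lambda_{10}$.

```python
import math

def offload_power(W, weight, lam_pair, sigma2, g, tau, P_max):
    """Threshold rule mirroring (37b)/(37d):
    power = [W * weight / (ln2 * lam_pair) - sigma^2 / g]^+,
    and zero whenever no transmission time is allotted (tau = 0)."""
    if tau <= 0:
        return 0.0
    level = W * weight / (math.log(2) * lam_pair)
    return min(max(level - sigma2 / g, 0.0), P_max)

# Illustrative call: 1 MHz bandwidth, hypothetical multipliers and channel gain.
P1 = offload_power(W=1e6, weight=2.1e-5, lam_pair=8.0, sigma2=1e-9, g=1e-6,
                   tau=0.2, P_max=0.5)
print(0.0 <= P1 <= 0.5)  # prints True
```

Note how the rule reproduces the qualitative insights above: a larger `W` or `weight` (i.e., a longer queue or larger V) raises the water level and hence the offloading power, while a weak channel (small `g`) pushes the bracket below zero and shuts transmission off.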
The process of solving the original MSSO problem, denoted as (P1), is encapsulated within Algorithm 1.
Algorithm 1: User-Assisted Dynamic Resource Allocation Algorithm

3.2. Algorithm Complexity Analysis

We employ Lyapunov optimization, which not only ensures the stability of the system but also decomposes the complex overall problem into per-time-slot sub-problems (P2). By running Algorithm 1, we solve (P2) within each time slot, so the per-slot solution complexity determines the overall responsiveness of the algorithm. To solve problem (P4), we use the interior-point method, which has a computational complexity of approximately $\mathcal{O}\left(n^{3.5}\log(1/\epsilon)\right)$.
Here, $n$ denotes the total number of decision variables, and $\epsilon$ is the required solution accuracy. In our algorithm, $n = 8$, so the number of decision variables is small. This ensures the efficiency of the algorithm and allows the resource allocation to be optimized within a reasonable time, meeting the performance requirements of practical applications.

3.3. Algorithm Performance Analysis

In this section, we demonstrate that the proposed scheme can achieve an optimal long-term time-average solution. First, we establish a key assumption, as follows
$$\lim_{K\to\infty} \frac{1}{K}\sum_{t=0}^{K-1} D_{tot}(t) = \bar{D}(t) \tag{40}$$
Subsequently, we deduce that the expected value also converges to the same limit:
$$\lim_{K\to\infty} \frac{1}{K}\sum_{t=0}^{K-1} \mathbb{E}\left[D_{tot}(t)\right] = \bar{D}(t) \tag{41}$$
Furthermore, we establish the existence of an optimal solution founded on the existing conditions of the queue, as follows.
Lemma 3.
If problem (P1) is feasible, there exists a set of decisions $\{\boldsymbol{\tau}(t), \boldsymbol{P}(t)\}^{*}$ that satisfies the following conditions for all $t$ and some $\varepsilon > 0$:
$$\mathbb{E}\left[A_f(t) - d_f^{loc,*}(t) - d_f^{off,*}(t)\right] \le -\varepsilon \tag{42}$$

$$\mathbb{E}\left[A_n(t) - d_n^{loc,*}(t) - d_{n,a2}^{off,*}(t)\right] \le -\varepsilon \tag{43}$$

$$\mathbb{E}\left[P_0^{*}(t)\tau_0^{*}(t) - \gamma\right] \le -\varepsilon \tag{44}$$
Here, the asterisk * denotes the value associated with the optimal solution.
Proof. 
Here, we omit the proof details for brevity. See parts 4 and 5 of [42]. □
Theorem 2.
The long-term average weighted computation rate achieved by the algorithm is lower-bounded relative to the optimum of (P1). Specifically, the algorithm guarantees the following:
(1)
$$\bar{D}(t) \ge \mathbb{E}\left[D_{tot}^{*}(t)\right] - B/V,$$
(2)
All queues $Q_f(t)$, $Q_n(t)$, and $Y(t)$ are mean-rate stable, thereby satisfying the long-term constraints.
Proof. 
For any $\varepsilon > 0$, we consider the policy and queue state defined in Equation (40). Given that the quantities $P_0^{*}(t)\tau_0^{*}(t)$, $d_f^{loc,*}(t)$, $d_n^{loc,*}(t)$, $d_{n,a2}^{off,*}(t)$, and $d_f^{off,*}(t)$ are independent of the queue state $\Theta(t)$, we can deduce that
$$
\begin{aligned}
\mathbb{E}\left[A_f(t) - d_f^{loc,*}(t) - d_f^{off,*}(t) \,\middle|\, \Theta(t)\right] &= \mathbb{E}\left[A_f(t)\right] - \mathbb{E}\left[d_f^{loc,*}(t) + d_f^{off,*}(t)\right] \le -\varepsilon && \text{(45a)} \\
\mathbb{E}\left[A_n(t) - d_n^{loc,*}(t) - d_{n,a2}^{off,*}(t) \,\middle|\, \Theta(t)\right] &= \mathbb{E}\left[A_n(t)\right] - \mathbb{E}\left[d_n^{loc,*}(t) + d_{n,a2}^{off,*}(t)\right] \le -\varepsilon && \text{(45b)} \\
\mathbb{E}\left[P_0^{*}(t)\tau_0^{*}(t) - \gamma \,\middle|\, \Theta(t)\right] &= \mathbb{E}\left[P_0^{*}(t)\tau_0^{*}(t)\right] - \gamma \le -\varepsilon && \text{(45c)}
\end{aligned}
$$
By integrating these results into Equation (29) and taking  ε 0 , we obtain
$$
\begin{aligned}
\Delta_V(\Theta(t)) \le {} & B - V\,\mathbb{E}\left[D_{tot}^{*}(t)\right] + Q_f(t)\,\mathbb{E}\left[A_f(t) - d_f^{loc,*}(t) - d_f^{off,*}(t)\right] \\
& + Q_n(t)\,\mathbb{E}\left[A_n(t) - d_n^{loc,*}(t) - d_{n,a2}^{off,*}(t)\right] + Y(t)\,\mathbb{E}\left[P_0^{*}(t)\tau_0^{*}(t) - \gamma\right] \\
\le {} & B - V\,\mathbb{E}\left[D_{tot}^{*}(t)\right] \tag{46}
\end{aligned}
$$
Using iterated expectations and summing the above inequality over the time horizon $t \in \{0, 1, \ldots, K-1\}$, we derive the following result:
$$\mathbb{E}\left[L(\Theta(K))\right] - \mathbb{E}\left[L(\Theta(0))\right] - V\sum_{t=0}^{K-1}\mathbb{E}\left[D_{tot}(t)\right] \le K\left(B - V\,\mathbb{E}\left[D_{tot}^{*}(t)\right]\right) \tag{47}$$
By dividing both sides of Equation (47) by  V K , applying Jensen’s inequality, and considering that  E L Θ t 0 , we obtain
$$\frac{1}{K}\sum_{t=0}^{K-1}\mathbb{E}\left[D_{tot}(t)\right] \ge \mathbb{E}\left[D_{tot}^{*}(t)\right] - \frac{B}{V} \tag{48}$$
Furthermore, letting  K , we have
$$\lim_{K\to\infty}\frac{1}{K}\sum_{t=0}^{K-1}\mathbb{E}\left[D_{tot}(t)\right] \ge \mathbb{E}\left[D_{tot}^{*}(t)\right] - \frac{B}{V} \tag{49}$$
From Equation (41), we have
$$\bar{D}(t) \ge \mathbb{E}\left[D_{tot}^{*}(t)\right] - \frac{B}{V} \tag{50}$$
Furthermore, we obtain
D t ¯ E D t o t * t B V
Theorem 3.
The time-averaged sum of the queue lengths is confined by a finite upper limit:
$$\bar{Q} \le \frac{B + V\bar{D}(t)}{\varepsilon} \tag{52}$$
Proof. 
By employing iterated expectations and applying telescoping sums over $t \in \{0, 1, \ldots, K-1\}$, we can derive
$$\mathbb{E}\left[L(\Theta(K))\right] - \mathbb{E}\left[L(\Theta(0))\right] - V\sum_{t=0}^{K-1}\mathbb{E}\left[D_{tot}(t)\right] \le KB - \varepsilon\sum_{t=0}^{K-1}\mathbb{E}\left[Q_n(t) + Q_f(t) + Y(t)\right] \tag{53}$$
Dividing both sides of (53) by $K\varepsilon$, letting $K \to \infty$, and rearranging the terms, we obtain the desired result:
$$\lim_{K\to\infty}\frac{1}{K}\sum_{t=0}^{K-1}\mathbb{E}\left[Q_n(t) + Q_f(t) + Y(t)\right] - \frac{B}{\varepsilon} - \frac{V\bar{D}(t)}{\varepsilon} \le 0 \tag{54}$$
Specifically,
$$\bar{Q} \le \frac{B + V\bar{D}(t)}{\varepsilon} \tag{55}$$

□
Theorems 2 and 3 underpin our proposed algorithm by establishing that, as V increases, the computation rate $\bar{D}(t)$ approaches the optimum with a gap of $\mathcal{O}(1/V)$, whereas the queue length grows at a rate of $\mathcal{O}(V)$. This suggests that choosing an appropriately large V yields a near-optimal $\bar{D}(t)$. Furthermore, the time-averaged queue length $\bar{Q}$ increases linearly with V. This linear relationship implies an $\left[\mathcal{O}(1/V), \mathcal{O}(V)\right]$ trade-off between the optimized computation rate and the queue length. This is in line with Little's Law [13], which states that delay is proportional to the time-averaged length of the data queue. In other words, our proposed algorithm enables a tunable trade-off between $\bar{D}(t)$ and the average network latency.
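The $[\mathcal{O}(1/V), \mathcal{O}(V)]$ behavior can be reproduced in a toy single-queue version of the drift-plus-penalty control. The scenario below is entirely hypothetical (constant arrivals, two actions, made-up rates); its only purpose is to show the average backlog scaling up with V:

```python
def run(V, slots=20000, arrival=2.0, service=3.0):
    """Drift-plus-penalty with two actions per slot: serve the queue (no reward)
    or earn unit reward (no service). Larger V weights reward more heavily,
    letting the queue climb higher before service kicks in."""
    Q, total_Q, total_reward = 0.0, 0.0, 0.0
    for _ in range(slots):
        # penalty(idle) = Q*arrival - V*1 ; penalty(serve) = Q*(arrival - service)
        if Q * arrival - V < Q * (arrival - service):   # idle (earn reward) wins
            reward, served = 1.0, 0.0
        else:
            reward, served = 0.0, service
        total_reward += reward
        Q = max(Q + arrival - served, 0.0)
        total_Q += Q
    return total_Q / slots, total_reward / slots

avg_Q_small, _ = run(V=30)
avg_Q_large, _ = run(V=300)
print(avg_Q_small < avg_Q_large)  # prints True: backlog grows with V
```

In this toy model the queue oscillates around V/3, so multiplying V by ten multiplies the time-averaged backlog by roughly ten, which is the linear growth that Theorem 3 predicts.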

4. Simulation Results

In this section, extensive numerical simulations are performed to assess the efficiency of our proposed algorithm. The experiments were conducted on a computational platform with an Intel Xeon Gold 6148 CPU (2.40 GHz, 20 cores) and four GeForce RTX 3070 GPUs. In our simulations, we employed a free-space path-loss model to characterize the wireless channel [47]. The average channel gain $\bar{h}$ is given by
$$\bar{h} = A_d\left(\frac{3\times 10^8}{4\pi f_c d_i}\right)^{d_e},$$
where $A_d$ denotes the antenna gain, $f_c$ the carrier frequency, $d_e$ the path-loss exponent, and $d_i$ the distance between two nodes in meters. The time-varying WPT and task-offloading channel gains are represented by the vector $\boldsymbol{h}(t) = \left(a_1(t)h_m(t), a_2(t)h_h(t), a_3(t)g_m(t), a_4(t)g_h(t)\right)$, adhering to the Rayleigh fading channel model. In this model, the random channel fading factors $a_i$ follow an exponential distribution with unit mean, capturing the variability inherent in wireless communication channels. For simplicity, we assume the fading vector $\left(a_1(t), a_2(t), a_3(t), a_4(t)\right)$ is constant and equal to $(1.0, 1.0, 1.0, 1.0)$ within any given time slot, implying that the channel gains are static within that slot. The task arrivals $\lambda_f$ and $\lambda_n$ follow exponential distributions with constant average rates of 1.75 and 2, respectively. All parameters are listed in Table 2.
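For reference, this channel model is straightforward to evaluate directly. The sketch below uses illustrative parameter values (antenna gain, carrier frequency, and path-loss exponent are not the paper's Table 2 settings) and adds a unit-mean exponential fading draw as described above:

```python
import math
import random

def avg_channel_gain(A_d, f_c, d_i, d_e):
    """Free-space path-loss model: h_bar = A_d * (3e8 / (4*pi*f_c*d_i))^d_e."""
    return A_d * (3e8 / (4 * math.pi * f_c * d_i)) ** d_e

def rayleigh_fading_gain(h_bar, rng):
    """Rayleigh fading: instantaneous power gain = h_bar * a, a ~ Exp(1)."""
    return h_bar * rng.expovariate(1.0)

# Illustrative parameters: antenna gain 4.11, 915 MHz carrier, exponent 2.8.
h_near = avg_channel_gain(A_d=4.11, f_c=915e6, d_i=10.0, d_e=2.8)
h_far = avg_channel_gain(A_d=4.11, f_c=915e6, d_i=100.0, d_e=2.8)
g = rayleigh_fading_gain(h_near, random.Random(1))
print(h_far < h_near)  # prints True: the gain decays with distance
```

Because the fading factor has unit mean, the expected instantaneous gain equals $\bar{h}$, matching the averaged model used in the formulation.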

4.1. Impact of System Parameters on Algorithm Performance

Figure 3 illustrates the trends of the average task computation rate $D_{tot}$ and the average task queue lengths of FU and NU over 5000 time slots. The task arrival rates for FU and NU are set to 1.75 Mbps and 2 Mbps, respectively. Initially, $D_{tot}$ is low, but it rapidly increases and eventually stabilizes as time progresses. This initial surge is due to the system's adjustment to the initial task queue fluctuations, which demands more intensive processing of FU tasks, resulting in increased energy consumption and a temporarily reduced overall computation rate. Moreover, the average queue length decreases and stabilizes, reflecting the system's ability to self-regulate and reach a steady state.
Figure 4 demonstrates the average task computation rate of our proposed algorithm under different values of the control parameter $V \in \{100, 300, 500, 700\}$. The results show that the average task computation rates converge similarly across different V. Notably, as V increases, there is a corresponding increase in the average task computation rates. This trend arises because a larger V compels the algorithm to prioritize computation rates over queue stability, which is consistent with the theoretical analysis of Theorem 2. Here, the parameter V serves as a balancing factor between the task computation rate and queue stability.
Figure 5 shows the trend of the average task queue lengths of FU and NU under different V. As V increases from 100 to 900, the task queue lengths of FU and NU decline from $3.1 \times 10^8$ bits to $2.9 \times 10^8$ bits and from $3.11 \times 10^8$ bits to $2.84 \times 10^8$ bits, respectively. In the user-assisted offloading paradigm, processing a task from the NU involves only a single offloading step, which is markedly more efficient than the two-step offloading required for FU data. As V increases, the algorithm pays more attention to optimizing the computation rate; conversely, a smaller V means the system focuses more on queue stability. In real-world scenarios, according to Little's Law, the queue length is directly proportional to the delay, so V can be tuned to match the system's latency requirements and strike a balance between queue stability and computation rate.
Figure 6 evaluates the impact of the average energy constraint $\gamma$ on system performance with V fixed at 500. As $\gamma$ increases from 1.75 joules to 2.25 joules, the average task computation rate rises from $1.79 \times 10^6$ bits/s to $1.86 \times 10^6$ bits/s, while the average task queue lengths of FU and NU decrease from $4 \times 10^8$ bits to $2.5 \times 10^8$ bits. This reduction in queue length and increase in computation rate are attributed to the higher energy available for WPT, which enables FU and NU to offload tasks more effectively. Notably, once $\gamma$ reaches 2.1 joules, the task computation rate and task queue lengths vary only marginally. This observation suggests an upper bound on the energy our algorithm can usefully consume: beyond this threshold, additional energy has minimal impact on performance.
Hence, the energy constraint $\gamma$ is a critical parameter that significantly influences both the data processing rate and the stability of the task queues. These findings underscore the importance of energy management in optimizing system performance.
Figure 7 presents the total system energy consumption of our proposed algorithm under different values of the parameter V, specifically  V = 300  and  V = 700 . Initially, the total energy consumption exhibits substantial fluctuations. However, with the progression of time, the system’s energy consumption stabilizes and hovers around the average energy constraint  γ  after approximately 2500 time slots. Notably, an elevated V value is correlated with higher average energy consumption, as the algorithm pays more attention to the system’s computation rate, consequently incurring greater energy costs. Figure 7 highlights the algorithm’s efficacy in managing average energy consumption, a critical feature for the sustainability of IoT networks.
In Figure 8, we evaluate the offloading power across bandwidths W ranging from $0.96 \times 10^6$ Hz to $1.05 \times 10^6$ Hz. The offloading power of all nodes increases as the bandwidth W grows. Consistent with the analysis of Theorem 1, a larger bandwidth makes the system more inclined to offload tasks, which is reflected in the increased offloading power.

4.2. Comparison with Baseline Algorithms

To evaluate the performance of our proposed algorithm, we choose the following three representative benchmarks as baseline algorithms.
(1)
All offloading scheme: Neither FU nor NU perform local computing and consume all the energy for task offloading.
(2)
No cooperation scheme: FU offloads tasks directly to HAP without soliciting assistance from NU, similar to the method in [48].
(3)
No Lyapunov scheme: Disregarding the dynamics of the task queues and the energy queue, this scheme focuses solely on maximizing the average task computation rate, similar to the myopic method in [30]. To ensure a fair comparison, we constrain the energy consumption of each time slot by solving the following problem:
$$
\begin{aligned}
\max\quad & D_{tot}(t) \\
\text{s.t.}\quad & (22b)\text{–}(22d),\ (22g)\text{–}(22k) \\
& P_0(t)\tau_0(t) \le \gamma
\end{aligned}
$$
Figure 9 shows the average task computation rates under the four schemes over a period of 5000 time slots, with the control parameter V set to 500. All schemes converge after 1000 time slots. Our proposed algorithm achieves the best task computation rate after convergence, outperforming the other three by 0.8%, 3.9%, and 4.1%, respectively. Its key strengths lie not only in achieving the highest data processing rate but also in ensuring the stability of the system queues, preventing excessively long queues that would lead to prolonged response times and a degraded user experience. The no-Lyapunov scheme, while achieving the second-highest computation rate, neglects queue stability in its pursuit of maximizing computation speed; this oversight can lead to system instability, prolonged user service times, and potential system failure. The all-offloading scheme, relying solely on edge computing, consumes more energy and thus underperforms in energy-limited scenarios. In the no-cooperation scheme, the system initially benefits from a high computation rate because the NU devotes no resources to assisting the FU. However, once the NU's tasks are completed and its resources are no longer exploited, the average computation rate falls sharply: the FU's communication with the HAP is impeded by the double near–far effect, causing a notable decline in the system's long-term computation performance.
Figure 10 shows the impact of varying the network bandwidth W from $0.96 \times 10^6$ Hz to $1.05 \times 10^6$ Hz on the performance of the different schemes. As W increases, the task computation rates of all schemes rise, reflecting improved transmission efficiency for both wireless power transfer and task offloading. This allows the HAP to handle more offloaded tasks, highlighting the critical role of bandwidth in system performance. Notably, our proposed scheme consistently outperforms the others across all bandwidth levels, showcasing its adaptability and robustness under varying network conditions.
Figure 11 evaluates how the distance between FU and NU affects the performance of all four schemes, with distances varying from 120 m to 160 m. We observe that as the distance increases, the computation rates for both our proposed scheme and the all-offloading scheme decrease. This suggests that proximity plays a crucial role in task offloading efficiency. In contrast, the no-cooperation scheme shows a stable computation rate, consistent with its design that excludes task offloading between FU and NU. Interestingly, the no-Lyapunov scheme performs best at a distance of about 140 m. However, its performance drops as the distance decreases, contrary to the expectation that a shorter distance would enhance task offloading from FU to NU. This unexpected trend is likely due to instances where the FU’s task queue depletes faster than new tasks arrive, leading to lower computation rates for the no-Lyapunov scheme. This highlights the importance of balancing task computation rates with queue stability in system design.
In Figure 12, we evaluate the performance of four schemes as the task arrival rate of NU varies from  1.85 × 10 6  bits/s to  2.35 × 10 6  bits/s. Our proposed scheme’s task computation rate demonstrates a modest increase and maintains the highest computation rate as tasks arrive more rapidly. This trend underscores the scheme’s robustness across diverse scenarios. Correspondingly, the no cooperation scheme exhibits a more pronounced increase in task computation rate. This is attributable to its vigorous task processing capacity at the NU, which allows it to capitalize on the higher task arrival rates effectively.

5. Conclusions and Future Work

The joint optimization of computation offloading and resource allocation in WPMEC systems poses a significant challenge due to time-varying network environments and the time-coupling nature of energy consumption constraints. In this study, we aimed to maximize the average task computation rate in an MEC system with WPT and user collaboration. A task computation rate maximization problem was formulated considering the uncertainty of load dynamics alongside the energy consumption constraint. We introduced an online control algorithm, named UAORA, that leverages Lyapunov optimization theory to transform the sequential decision-making problem into a series of per-slot deterministic optimization problems. Extensive simulation results substantiate the effectiveness of the proposed UAORA algorithm, demonstrating a significant enhancement in average task computation performance compared with benchmark methods. The simulations also underscore the advantages of jointly considering the task computation rate and the task queues in our algorithm.
Our algorithm currently relies only on the present system status for decision-making and does not utilize historical data. There is a significant opportunity to enhance performance by integrating historical data into the decision-making process. In the future, we plan to employ deep learning or machine learning techniques to harness historical data and build a predictive module for load and channel conditions, which could significantly improve the system's decision-making efficiency. Moreover, we plan to evaluate our algorithm in various real-world scenarios and to consider practical constraints, such as different task types and Service Level Agreement (SLA) time constraints.

Author Contributions

Methodology, H.H.; Validation, C.Z. and F.H.; Formal analysis, H.H.; Investigation, H.H. and F.H.; Resources, H.H.; Data curation, H.H. and Y.Y.; Writing—original draft, H.H. and F.H.; Writing—review and editing, H.H.; Supervision, Y.Y. and H.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Science and Technology Planning Project of Guangdong Province, China (No. 2021A0101180005).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

We thank all of the reviewers for their valuable comments.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wu, W.; Zhou, F.; Hu, R.Q.; Wang, B. Energy-efficient resource allocation for secure NOMA-enabled mobile edge computing networks. IEEE Trans. Commun. 2019, 68, 493–505. [Google Scholar] [CrossRef]
  2. Na, Z.; Liu, Y.; Shi, J.; Liu, C.; Gao, Z. UAV-supported clustered NOMA for 6G-enabled Internet of Things: Trajectory planning and resource allocation. IEEE Internet Things J. 2020, 8, 15041–15048. [Google Scholar] [CrossRef]
  3. Zhao, R.; Zhu, F.; Tang, M.; He, L. Profit maximization in cache-aided intelligent computing networks. Phys. Commun. 2023, 58, 102065. [Google Scholar] [CrossRef]
  4. Liu, X.; Sun, Q.; Lu, W.; Wu, C.; Ding, H. Big-data-based intelligent spectrum sensing for heterogeneous spectrum communications in 5G. IEEE Wirel. Commun. 2020, 27, 67–73. [Google Scholar] [CrossRef]
  5. Mao, Y.; You, C.; Zhang, J.; Huang, K.; Letaief, K.B. A Survey on Mobile Edge Computing: The Communication Perspective. IEEE Commun. Surv. Tutor. 2017, 19, 2322–2358. [Google Scholar] [CrossRef]
  6. Chen, M.; Hao, Y. Task Offloading for Mobile Edge Computing in Software Defined Ultra-Dense Network. IEEE J. Sel. Areas Commun. 2018, 36, 587–597. [Google Scholar] [CrossRef]
  7. Sun, C.; Zhou, J.; Liuliang, J.; Zhang, J.; Zhang, X.; Wang, W. Computation Offloading with Virtual Resources Management in Mobile Edge Networks. In Proceedings of the 2018 IEEE 87th Vehicular Technology Conference (VTC Spring), Porto, Portugal, 3–6 June 2018; pp. 1–5. [Google Scholar] [CrossRef]
  8. Guo, H.; Zhang, J.; Liu, J.; Zhang, H. Energy-Aware Computation Offloading and Transmit Power Allocation in Ultradense IoT Networks. IEEE Internet Things J. 2019, 6, 4317–4329. [Google Scholar] [CrossRef]
  9. Sun, H.; Zhou, F.; Hu, R.Q. Joint Offloading and Computation Energy Efficiency Maximization in a Mobile Edge Computing System. IEEE Trans. Veh. Technol. 2019, 68, 3052–3056. [Google Scholar] [CrossRef]
  10. Anajemba, J.H.; Yue, T.; Iwendi, C.; Alenezi, M.; Mittal, M. Optimal Cooperative Offloading Scheme for Energy Efficient Multi-Access Edge Computation. IEEE Access 2020, 8, 53931–53941. [Google Scholar] [CrossRef]
  11. Zhu, X.; Luo, Y.; Liu, A.; Bhuiyan, M.Z.A.; Zhang, S. Multiagent Deep Reinforcement Learning for Vehicular Computation Offloading in IoT. IEEE Internet Things J. 2021, 8, 9763–9773. [Google Scholar] [CrossRef]
  12. Zhou, Z.; Chen, X.; Li, E.; Zeng, L.; Luo, K.; Zhang, J. Edge Intelligence: Paving the Last Mile of Artificial Intelligence with Edge Computing. Proc. IEEE 2019, 107, 1738–1762. [Google Scholar] [CrossRef]
  13. Mao, S.; Leng, S.; Maharjan, S.; Zhang, Y. Energy Efficiency and Delay Tradeoff for Wireless-Powered Mobile-Edge Computing Systems with Multi-Access Schemes. IEEE Trans. Wirel. Commun. 2020, 19, 1855–1867. [Google Scholar] [CrossRef]
  14. Mao, Y.; Zhang, J.; Letaief, K.B. Dynamic Computation Offloading for Mobile-Edge Computing with Energy Harvesting Devices. IEEE J. Sel. Areas Commun. 2016, 34, 3590–3605. [Google Scholar] [CrossRef]
  15. Huang, K.; Lau, V.K.N. Enabling Wireless Power Transfer in Cellular Networks: Architecture, Modeling and Deployment. IEEE Trans. Wirel. Commun. 2014, 13, 902–912. [Google Scholar] [CrossRef]
  16. Mao, S.; Wu, J.; Liu, L.; Lan, D.; Taherkordi, A. Energy-Efficient Cooperative Communication and Computation for Wireless-Powered Mobile-Edge Computing. IEEE Syst. J. 2022, 16, 287–298. [Google Scholar] [CrossRef]
  17. Margolies, R.; Gorlatova, M.; Sarik, J.; Stanje, G.; Zhu, J.; Miller, P.; Szczodrak, M.; Vigraham, B.; Carloni, L.; Kinget, P.; et al. Energy-harvesting active networked tags (enhants) prototyping and experimentation. ACM Trans. Sens. Netw. (TOSN) 2015, 11, 1–27. [Google Scholar] [CrossRef]
  18. Tataria, H.; Shafi, M.; Molisch, A.F.; Dohler, M.; Sjöland, H.; Tufvesson, F. 6G Wireless Systems: Vision, Requirements, Challenges, Insights, and Opportunities. Proc. IEEE 2021, 109, 1166–1199. [Google Scholar] [CrossRef]
  19. Ju, H.; Zhang, R. Throughput Maximization in Wireless-Powered Communication Networks. IEEE Trans. Wirel. Commun. 2014, 13, 418–428. [Google Scholar] [CrossRef]
  20. Ji, L.; Guo, S. Energy-Efficient Cooperative Resource Allocation in Wireless-Powered Mobile Edge Computing. IEEE Internet Things J. 2019, 6, 4744–4754. [Google Scholar] [CrossRef]
  21. Li, M.; Zhou, X.; Qiu, T.; Zhao, Q.; Li, K. Multi-Relay Assisted Computation Offloading for Multi-Access Edge Computing Systems with Energy Harvesting. IEEE Trans. Veh. Technol. 2021, 70, 10941–10956. [Google Scholar] [CrossRef]
  22. Mach, P.; Becvar, Z. Device-to-Device Relaying: Optimization, Performance Perspectives, and Open Challenges Towards 6G Networks. IEEE Commun. Surv. Tutorials 2022, 24, 1336–1393. [Google Scholar] [CrossRef]
  23. Su, B.; Ni, Q.; Yu, W.; Pervaiz, H. Optimizing Computation Efficiency for NOMA-Assisted Mobile Edge Computing with User Cooperation. IEEE Trans. Green Commun. Netw. 2021, 5, 858–867. [Google Scholar] [CrossRef]
  24. Li, B.; Si, F.; Zhao, W.; Zhang, H. Wireless-Powered Mobile Edge Computing with NOMA and User Cooperation. IEEE Trans. Veh. Technol. 2021, 70, 1957–1961. [Google Scholar] [CrossRef]
  25. Sun, M.; Xu, X.; Huang, Y.; Wu, Q.; Tao, X.; Zhang, P. Resource Management for Computation Offloading in D2D-Aided Wireless Powered Mobile-Edge Computing Networks. IEEE Internet Things J. 2021, 8, 8005–8020. [Google Scholar] [CrossRef]
  26. Wang, X.; Chen, X.; Wu, W.; An, N.; Wang, L. Cooperative application execution in mobile cloud computing: A stackelberg game approach. IEEE Commun. Lett. 2015, 20, 946–949. [Google Scholar] [CrossRef]
  27. You, C.; Huang, K. Exploiting Non-Causal CPU-State Information for Energy-Efficient Mobile Cooperative Computing. IEEE Trans. Wirel. Commun. 2018, 17, 4104–4117. [Google Scholar] [CrossRef]
  28. Hu, X.; Wong, K.K.; Yang, K. Wireless-Powered Cooperation-Assisted Mobile Edge Computing. IEEE Trans. Wirel. Commun. 2018, 17, 2375–2388. [Google Scholar] [CrossRef]
  29. Wang, F.; Xu, J.; Ding, Z. Optimized Multiuser Computation Offloading with Multi-Antenna NOMA. In Proceedings of the 2017 IEEE Globecom Workshops (GC Wkshps), Singapore, 4–8 December 2017; pp. 1–7. [Google Scholar] [CrossRef]
  30. Bi, S.; Huang, L.; Wang, H.; Zhang, Y.J.A. Lyapunov-guided deep reinforcement learning for stable online computation offloading in mobile-edge computing networks. IEEE Trans. Wirel. Commun. 2021, 20, 7519–7537. [Google Scholar] [CrossRef]
  31. Zhang, S.; Bao, S.; Chi, K.; Yu, K.; Mumtaz, S. DRL-Based Computation Rate Maximization for Wireless-Powered Multi-AP Edge Computing. IEEE Trans. Commun. 2024, 72, 1105–1118. [Google Scholar] [CrossRef]
  32. Shi, L.; Ye, Y.; Chu, X.; Lu, G. Computation Energy Efficiency Maximization for a NOMA-Based WPT-MEC Network. IEEE Internet Things J. 2021, 8, 10731–10744. [Google Scholar] [CrossRef]
  33. Zheng, X.; Zhu, F.; Xia, J.; Gao, C.; Cui, T.; Lai, S. Intelligent computing for WPT–MEC-aided multi-source data stream. EURASIP J. Adv. Signal Process. 2023, 2023, 52. [Google Scholar] [CrossRef]
  34. Zhu, B.; Chi, K.; Liu, J.; Yu, K.; Mumtaz, S. Efficient Offloading for Minimizing Task Computation Delay of NOMA-Based Multiaccess Edge Computing. IEEE Trans. Commun. 2022, 70, 3186–3203. [Google Scholar] [CrossRef]
  35. Wen, Y.; Zhou, X.; Fang, F.; Zhang, H.; Yuan, D. Joint time and power allocation for cooperative NOMA based MEC system. In Proceedings of the 2020 IEEE 92nd Vehicular Technology Conference (VTC2020-Fall), Vancouver, BC, Canada, 18 November–16 December 2020; pp. 1–5. [Google Scholar]
Figure 1. System model of the WPMEC network with user assistance.
Figure 2. An illustrative time division structure.
Figure 3. Average task computation rate and average task queue length over time slots.
Figure 4. Average task computation rates with different values of the control parameter V.
Figure 5. Task queue lengths with different values of the control parameter V.
Figure 6. Average task computation rate and task queue length with different values of the energy constraint γ.
Figure 7. Convergence of energy consumption with different values of the parameter V.
Figure 8. Offloading power of the FU and the NU with different bandwidths W.
Figure 9. Average task computation rates in different schemes over time slots.
Figure 10. Average computation rates of different schemes with different bandwidths W.
Figure 11. Average computation rates in different schemes with different distances between FU and NU.
Figure 12. Average computation rates in different schemes with different task arrival rates of FU.
Table 1. Key notations and definitions.

Notation: Definition
T: Length of a time block
τ_0(t): Time for WPT in slot t
τ_1(t): Time for the FU to offload in slot t
τ_2(t): Time for the NU to offload the FU's data in slot t
τ_3(t): Time for the NU to offload its own data in slot t
e_m^eh(t), e_h^eh(t): Energy harvested by the MD and the helper in slot t
h_f(t), h_n(t): WPT channel gains between the FU and the HAP, and between the NU and the HAP
g_f(t), g_n(t): Offloading channel gains between the FU and the NU, and between the NU and the HAP
P_0(t), P_1(t), P_2(t), P_3(t): Transmit powers of the HAP, of the FU, of the NU offloading the FU's data, and of the NU offloading its own data in slot t
d_f^loc(t): Amount of tasks processed locally at the FU in slot t
d_{f,n}^off(t): Amount of tasks offloaded from the FU to the NU in slot t
d_n^loc(t): Amount of tasks processed locally at the NU in slot t
d_{n,a1}^off(t): Amount of the FU's tasks that the NU offloads to the HAP in slot t
d_{n,a2}^off(t): Amount of its own tasks that the NU offloads to the HAP in slot t
e_f^loc(t): Energy consumed by processing tasks at the FU in slot t
e_{f,n}^off(t): Energy consumed by offloading tasks at the FU in slot t
e_n^loc(t): Energy consumed by processing tasks at the helper in slot t
e_{n,a1}^off(t): Energy consumed by the NU to offload the FU's tasks in slot t
e_{n,a2}^off(t): Energy consumed by the NU to offload its own tasks in slot t
d_m(t): Amount of tasks processed in slot t
f_f, f_n: Local CPU frequencies at the FU and the NU
φ_f, φ_n: CPU cycles required to compute one bit of a task at the FU and the NU
μ: Energy conversion efficiency
κ: Computing energy efficiency
W: Channel bandwidth
σ²: Additive white Gaussian noise power
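To make the bookkeeping behind the notation concrete, the sketch below computes per-slot local and offloaded task volumes from the quantities in Table 1. It assumes the standard Shannon-capacity model for offloading and the usual cycles-per-bit model for local computing; the paper's exact rate expressions are not reproduced in this excerpt, so treat the formulas as illustrative, and the numeric values below as hypothetical.

```python
import math

def offload_bits(tau, W, P, g, sigma2):
    """Bits offloaded in time tau over a link with channel gain g,
    transmit power P, bandwidth W, and noise power sigma2, using the
    standard Shannon-capacity model (an assumption in this sketch)."""
    return tau * W * math.log2(1.0 + P * g / sigma2)

def local_bits(tau, f, phi):
    """Bits processed locally in time tau at CPU frequency f when
    phi CPU cycles are required per bit."""
    return tau * f / phi

# Illustrative values only (not taken from the paper):
d_f_n_off = offload_bits(tau=0.2, W=1e6, P=0.1, g=1e-6, sigma2=1e-4)
d_f_loc = local_bits(tau=1.0, f=160e6, phi=180)
```

With these models, the total tasks the FU completes in a slot would be the sum of its local and offloaded volumes, subject to the time-division constraint τ_0(t) + τ_1(t) + τ_2(t) + τ_3(t) ≤ T.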
Table 2. Simulation parameters.

Symbol: Value
Time slot length: 1 s
Noise power σ²: 10^−4 W
Distance between the HAP and the FU, d_hf: 230 m
Distance between the FU and the NU, d_fn: 140 m
Distance between the HAP and the NU, d_hn: 200 m
CPU frequency of the FU, f_f: 160 MHz
CPU frequency of the NU, f_n: 220 MHz
CPU cycles to compute one bit of a task at the FU, φ_f: 180 cycles/bit
CPU cycles to compute one bit of a task at the NU, φ_n: 200 cycles/bit
Computing energy efficiency parameter, κ: 10^−8
Weight of the computation rate of the FU, ω_1: 0.55
Weight of the computation rate of the NU, ω_2: 0.45
Antenna gain in the channel model, A_d: 3
Carrier frequency in the channel model, f_c: 915 MHz
Path-loss exponent in the channel model, d_e: 3
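The channel-model parameters in Table 2 (A_d, f_c, d_e) match the common free-space-style path-loss model h = A_d · (C / (4π f_c d))^{d_e}, where C is the speed of light and d the link distance. The sketch below evaluates this model at the three distances listed in the table; the paper's exact channel model may differ, so this is a minimal illustration under that assumption.

```python
import math

# Parameters from Table 2
A_D = 3.0       # antenna gain
F_C = 915e6     # carrier frequency (Hz)
D_E = 3.0       # path-loss exponent
C = 3e8         # speed of light (m/s)

def channel_gain(d):
    """Average channel gain at distance d (meters), assuming the
    common path-loss model h = A_d * (C / (4*pi*f_c*d))**d_e."""
    return A_D * (C / (4.0 * math.pi * F_C * d)) ** D_E

h_f = channel_gain(230.0)  # HAP-FU link
h_n = channel_gain(200.0)  # HAP-NU link
g_f = channel_gain(140.0)  # FU-NU link
```

As expected under this model, the shorter FU-NU link enjoys a larger gain than either HAP link, which is the "double near-far" asymmetry that user cooperation exploits.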
He, H.; Huang, F.; Zhou, C.; Shen, H.; Yang, Y. Maximizing Computation Rate for Sustainable Wireless-Powered MEC Network: An Efficient Dynamic Task Offloading Algorithm with User Assistance. Mathematics 2024, 12, 2478. https://doi.org/10.3390/math12162478