Article

Dynamic Delay-Sensitive Observation-Data-Processing Task Offloading for Satellite Edge Computing: A Fully-Decentralized Approach

1
School of Electronic and Information Engineering, Xi’an Jiao Tong University, Xi’an 710049, China
2
School of Electronic and Control Engineering, Chang’an University, Xi’an 710049, China
3
State Key Laboratory of Astronautic Dynamics, China Xi’an Satellite Control Center, Xi’an 710049, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(12), 2184; https://doi.org/10.3390/rs16122184
Submission received: 9 May 2024 / Revised: 11 June 2024 / Accepted: 13 June 2024 / Published: 16 June 2024
(This article belongs to the Special Issue Current Trends Using Cutting-Edge Geospatial Remote Sensing)

Abstract:
Satellite edge computing (SEC) plays an increasingly important role in Earth observation, owing to its global coverage and low-latency computing services. In SEC, it is pivotal to offload diverse observation-data-processing tasks to the appropriate satellites. Nevertheless, due to sparse intersatellite link (ISL) connections, it is hard to gather complete information from all satellites. Moreover, dynamically arriving tasks also affect the obtained offloading assignment. Therefore, one daunting challenge in SEC is achieving optimal offloading assignments while accounting for dynamic delay-sensitive tasks. In this paper, we formulate task offloading in SEC with delay-sensitive tasks as a mixed-integer linear programming problem, aiming to minimize the weighted sum of deadline violations and energy consumption. Due to the limited ISLs, we propose a fully decentralized method, called the PI-based task offloading (PITO) algorithm. PITO operates on each satellite in parallel and relies only on local communication via ISLs. Tasks can be offloaded directly on board without depending on any central server. To further handle dynamically arriving tasks, we propose a re-offloading mechanism based on the match-up strategy, which reduces the number of tasks involved and avoids unnecessary insertion attempts by pruning. Finally, extensive experiments demonstrate that PITO outperforms state-of-the-art algorithms when solving task offloading in SEC, and the proposed re-offloading mechanism is significantly more efficient than existing methods.

1. Introduction

Satellite edge computing (SEC) is regarded as a promising architecture for Earth observation [1], the Internet of Things [2], and other scientific applications. Driven by the rapid growth in observation satellites, massive observational data are generated and need to be further processed [3,4]. However, local processing is impractical due to the limited capacities of observation satellites [5]. Moreover, transferring data from satellites to a cloud center leads to both long latency and an unaffordable network burden [4]. Therefore, SEC, which deploys mobile edge computing (MEC) [6] servers on satellites to provide global coverage and low-latency computing services, has drawn increasing attention in recent years [7,8,9,10,11].
In SEC, a group of satellites linked by laser intersatellite links (LISLs) performs collaborative computing on data submitted by observation satellites. These data are typically represented as tasks [8]. Some tasks, such as military monitoring [12] and target tracking [13], are delay-sensitive and expected to be processed before their deadlines. Hence, task offloading, which determines where and when tasks are executed, plays a pivotal role in SEC.
Unfortunately, due to the distributed and dynamic nature of SEC [7], it is difficult to obtain offloading assignments for delay-sensitive tasks. On the one hand, ISL connections in SEC are sparse; thus, it is hard to gather all the necessary information from all satellites, and sending information from satellites to the control center introduces unnecessary delays. Additionally, satellite states change rapidly during task processing, and collecting such information imposes a heavy communication burden on the satellite network. On the other hand, owing to the dynamics in SEC, delay-sensitive tasks are continually generated by observation satellites, which poses a daunting challenge for task offloading: newly generated tasks may share the time horizon of already offloaded ones, affecting the original offloading assignment and even making it infeasible. Thus, the task offloading problem should be carefully addressed to resolve the above challenges.
The task offloading problem in SEC has received increasing attention recently. Specifically, recent works [14,15,16,17,18,19,20,21,22,23] obtain offloading assignments by various centralized approaches deployed on access satellites or gateway stations. Meanwhile, considering the non-centrality of SEC, other works [24,25,26,27,28] adopt distributed algorithms to reduce the latency caused by long-distance communication [25]. Unfortunately, some practical issues are still neglected, such as the sparsity of ISLs, the limited network resources, and the dynamic arrival of delay-sensitive tasks. To the best of our knowledge, the task offloading problem jointly considering the aforementioned practical issues has not been addressed for SEC.
In this paper, we investigate the task offloading problem in SEC with delay-sensitive tasks by jointly considering the distributed satellites and the dynamic environment. The main contributions of this article are summarized as follows:
  • This paper establishes a mixed-integer linear programming (MILP) model for delay-sensitive task offloading in SEC, which aims to reduce the deadline violation of delay-sensitive tasks while minimizing the system’s energy consumption.
  • Considering the limited ISL connections, we propose a fully decentralized PITO algorithm. PITO operates on each satellite in parallel and relies only on local communication via ISLs. It iterates between two stages: task inclusion, and consensus and task removal. The first stage includes appropriate tasks in each satellite. The second stage reaches consensus on the removal impact of tasks among all satellites and removes conflicting tasks that may increase the objective value. Using PITO, tasks can be offloaded directly on board without depending on any central server. Its effectiveness and polynomial complexity are demonstrated.
  • To handle the dynamic arrival of delay-sensitive tasks, a fast re-offloading mechanism is further introduced, which reduces the number of tasks involved and avoids unnecessary insertion attempts by pruning. It enables PITO to perform online re-offloading during the computing service process.
The remainder of this paper is organized as follows. Section 2 introduces the related works. Section 3 describes the MILP model. Section 4 details the proposed PITO algorithm. Then, a re-offloading mechanism is further developed for handling the dynamics in SEC. Section 5 shows the simulation results of the proposed PITO, and these are further discussed in Section 6. Finally, the paper is summarized in Section 7.

2. Related Works

There are two main techniques for task offloading in SEC: centralized and distributed approaches. In centralized approaches, the offloading schemes of satellites are typically generated by a central server [14]. Some works [16,17] employed exact methods to obtain optimal offloading assignments. For example, Song et al. [16] divided the computation offloading problem into ground and space segments and proposed an energy-efficient offloading method. Similar to [16], Ding et al. [17] decomposed the optimization problem into four subproblems and applied quadratic transform-based fractional programming and the interior point method. Since the task offloading problem is NP-hard [29], exact methods are effective only for small instances. Hence, heuristic and meta-heuristic approaches were adopted [14,15,18]. Specifically, Zhang et al. [14] established a greedy-based task allocation algorithm. Considering the efficiency of meta-heuristic algorithms in solving NP-hard problems, Hu et al. [15] and Wang et al. [18] introduced particle swarm optimization-based methods for task offloading in SEC. Additionally, reinforcement learning methods [19,20,22,23,30,31] have been employed to quickly obtain high-quality offloading assignments. Specifically, Qiu et al. [19] first developed a deep Q-learning approach for offloading problems. Mao et al. [20] embedded a long short-term memory model into a learning-based offloading strategy, addressing the dynamics of energy harvesting performance. Considering the data dependence among tasks, Yu et al. [21] proposed a deep imitation learning-driven offloading and caching algorithm. To achieve cooperative scheduling of both tasks and resources, Cui et al. [22] proposed a mixed algorithm combining deep reinforcement learning with convex optimization. Zhang et al. [23] proposed a deep deterministic policy gradient-based algorithm outputting both discrete and continuous variables, achieving simultaneous decisions on offloading locations and resource allocation. To address security concerns in SEC, Sthapit et al. [30] developed a deep deterministic policy gradient-based security-aware offloading algorithm. Subsequently, Liu et al. [31] introduced a federated reinforcement learning-based offloading method to enhance privacy protection. Centralized approaches are usually easier to implement and run faster, but they have the following drawbacks: (1) the central server requires consistent communication with each satellite, leading to a heavy communication burden; especially for ground-based central servers [17,22,32], this also occupies satellite–terrestrial network resources; (2) generating all offloading assignments and monitoring any state changes of satellites during task processing imposes a high computational demand; and (3) the central server constitutes a single point of failure.
Considering the non-centrality of SEC, distributed approaches [24,25,26,27,28] are adopted. Specifically, Chen et al. [24] adopted a contract net protocol (CNP)-based method to resolve the autonomous mission planning problem. Works [25,26] employed the diffusion algorithm-based approach. Wang et al. [25] introduced a transmission capacity and computing capacity-aware diffusion algorithm to obtain task offloading assignments. Then, based on [25], Ma et al. [26] proposed a directed diffusion-enhanced task scheduling algorithm (DETS), which embedded a computing gradient that considered both the computing and communication resources. Other works [27,28] used the distributed convex optimization-based approach. For example, Tang et al. [27] employed a binary-variable relaxation method to convert the original nonconvex problem into a linear programming problem, then proposed an alternating-direction method of multipliers (ADMM)-based distributed computation offloading scheme. Finally, a binary-variable recovery algorithm was adopted to obtain the required discrete offloading assignment. Similarly, Zhou et al. [28] adopted the ADMM-based distributed algorithm to resolve the mobility-aware computation offloading problem. The performance impact (PI) algorithm [33,34] is another distributed approach, in which the required assignments can be directly constructed without additional transformations, and the consensus process [35] helps with jumping out of the local optimum and resolving task conflicts. However, the implementation of PI for the task offloading problem in SEC has not been reported.
Few studies have investigated the dynamic arrival of delay-sensitive tasks in SEC [36,37]. Ng et al. [36] proposed two-stage stochastic integer programming to resolve the task offloading problem, minimizing the cost amid stochastic uncertainties. Based on [36], Ng et al. [37] further presented a two-stage stochastic offloading optimization scheme, in which uncompleted tasks can be re-offloaded in the next stage. In this paper, we introduce a fast re-offloading mechanism using the match-up strategy [38,39,40], which not only achieves good solutions but also reduces the re-offloading time.

3. Problem Description and Modeling

3.1. Scenario

The SEC scenario is shown in Figure 1, in which delay-sensitive tasks generated by observation satellites are uploaded to SEC satellites for processing. There are m SEC satellites S = {1, 2, …, m} and n observation satellites D = {1, 2, …, n} in total. Each SEC satellite s ∈ S is equipped with a server whose computing capacity and buffer size are Cs and Ms, respectively. Due to the limited energy supply in orbit, the maximum energy consumption of each SEC satellite is limited to Hs. Every SEC satellite also connects with four neighboring satellites via LISLs, so the network topology can be represented as an m × m matrix G, where G[k, l] = 1 if an LISL exists between satellites k and l, and G[k, l] = 0 otherwise. Useful notations are listed in Table 1.
Each observation satellite i ∈ D generates a delay-sensitive task ti, and all tasks T = {t1, t2, …, ti, …, tn} are indivisible. Every task ti ∈ T is characterized by a tuple <λi, μi, σi>, where λi represents the input data size of ti, μi denotes the workload of ti, and σi refers to its deadline.
According to [23], observation satellite i ∈ D can access a specific SEC satellite a ∈ S covering i for data uploading. Initially, the data of task ti ∈ T are uploaded from observation satellite i to access satellite a. Then, task ti can be executed directly on a or offloaded to another satellite in S via LISLs. Let s ∈ S be the offloaded satellite. Since tasks can be executed only after the required data are received, all tasks offloaded to s must not exceed its available buffer size (denoted MAs), and these tasks are processed sequentially. After ti is completed, its data on s are released. This process continues until all generated delay-sensitive tasks are finished. Because output data are typically small, the result feedback process is ignored in this paper.

3.2. Communication Model

The communication latency is composed of two parts: propagation delay and transmission delay.
In this paper, the Ka-band is used for the links between observation satellites and SEC satellites. Thus, for task ti ∈ T, let ri,aRISL be the rate of the radio intersatellite link (RISL) from observation satellite i to access satellite a, which can be obtained as in [22]. Then, the latency for uploading ti is

\tau_{i,a}^{\mathrm{RISL}} = \frac{l_{i,a}}{c} + \frac{\lambda_i}{r_{i,a}^{\mathrm{RISL}}} \tag{1}

where li,a is the distance of the RISL and c is the speed of light. Similarly, let rLISL be the rate of the LISLs among SEC satellites, which is a constant. The latency for transmitting ti from a to another satellite s is

\tau_{i,a,s}^{\mathrm{LISL}} = \frac{\ell_{a,s}}{c} + \frac{\lambda_i}{r^{\mathrm{LISL}}} \tag{2}

where ℓa,s represents the length of the shortest route from satellite a to s.
Thus, when task ti is offloaded to satellite s ∈ S, the communication latency of task ti is given by

\tau_{i,s}^{\mathrm{comm}} = \tau_{i,a}^{\mathrm{RISL}} + \tau_{i,a,s}^{\mathrm{LISL}} \tag{3}

In particular, if ti is offloaded to access satellite a itself, there is no LISL transmission, and we have τi,acomm = τi,aRISL.
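To make the communication model concrete, here is a minimal Python sketch of the latency computation above. The function names and all numeric values are illustrative assumptions, not quantities from the paper.

```python
# Minimal sketch of the communication model: propagation delay (distance / c)
# plus transmission delay (data size / link rate). All names are illustrative.
C_LIGHT = 3e8  # speed of light, m/s

def uplink_latency(l_ia: float, lam_i: float, r_risl: float) -> float:
    """RISL delay for uploading task t_i from observation satellite i to access satellite a."""
    return l_ia / C_LIGHT + lam_i / r_risl

def isl_latency(l_as: float, lam_i: float, r_lisl: float) -> float:
    """LISL delay for forwarding t_i along the shortest route from a to s."""
    return l_as / C_LIGHT + lam_i / r_lisl

def comm_latency(l_ia: float, l_as: float, lam_i: float,
                 r_risl: float, r_lisl: float) -> float:
    """Total communication latency; l_as = 0 covers the case s = a (no LISL hop)."""
    total = uplink_latency(l_ia, lam_i, r_risl)
    if l_as > 0:
        total += isl_latency(l_as, lam_i, r_lisl)
    return total
```

For example, with a 300 km RISL at 1 Mbit/s and an 8 Mbit task, the upload delay is dominated by transmission time rather than propagation.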

3.3. Computational Model

For task ti ∈ T, the number of CPU cycles required to process ti is λiμi. Thus, the computational latency of ti on offloaded satellite s is

\tau_{i,s}^{\mathrm{comp}} = \frac{\lambda_i \mu_i}{C_s} \tag{4}

In this paper, we consider the sequential computation model [14] shown in Figure 2, i.e., each satellite processes only one task at a time, and preemption is not allowed. Hence, when multiple tasks are offloaded to the same satellite, queue delays arise and must be taken into account in task computation.
Then we let θ = {θ1, θ2, …, θm} be a task offloading assignment, where θs, ∀s ∈ S, represents the task sequence on satellite s. Several time parameters of task ti are defined as follows:
  • τiA: the time at which offloaded satellite s becomes available for task ti;
  • τiD: the time at which all required data of ti have been received by s;
  • τiS: the time at which ti starts execution;
  • τiF: the finish time of task ti.
First, let tρ(i) ∈ T be the task preceding ti in task sequence θs. If ti is the first task of θs, we set tρ(i) = ∅; otherwise, ti can be executed by s only after task tρ(i) is completed. Thus, we have

\tau_i^{A} = \begin{cases} 0, & \text{if } t_{\rho(i)} = \varnothing \\ \tau_{\rho(i)}^{F}, & \text{otherwise} \end{cases} \tag{5}

Furthermore, task ti can be performed only after s receives the data of ti. We assume that all data begin transmitting simultaneously (i.e., at time 0), so we have

\tau_i^{D} = 0 + \tau_{i,s}^{\mathrm{comm}} \tag{6}

Then, the start time of task ti can be calculated by

\tau_i^{S} = \max\{\tau_i^{A}, \tau_i^{D}\} \tag{7}

where the queue delay of ti is defined as τiqueue = τiS − τiD.
Finally, the finish time of ti is given by

\tau_i^{F} = \tau_i^{S} + \tau_{i,s}^{\mathrm{comp}} \tag{8}

Based on the above analysis, given an offloading assignment θ, we can calculate τiA, τiD, τiS, and τiF for each ti ∈ T sequentially, using (5), (6), (7), and (8), respectively.
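The sequential computation of the four time parameters above can be sketched in Python as follows; the helper names and the toy numbers in the test are assumptions for illustration.

```python
def schedule_times(seq, comm, comp):
    """Walk a task sequence theta_s in order and compute, for each task,
    (available time, data-ready time, start time, finish time), following
    Eqs. (5)-(8). `comm` and `comp` map task ids to their communication and
    computation latencies; all data transmissions are assumed to start at time 0."""
    times = {}
    prev_finish = 0.0                   # Eq. (5): the first task sees an idle satellite
    for t in seq:
        avail = prev_finish             # Eq. (5): wait for the preceding task
        data_ready = comm[t]            # Eq. (6): data arrive after the comm latency
        start = max(avail, data_ready)  # Eq. (7): queue delay = start - data_ready
        finish = start + comp[t]        # Eq. (8)
        times[t] = (avail, data_ready, start, finish)
        prev_finish = finish
    return times
```

Running it on a two-task sequence shows how a task whose data arrive early still waits for the satellite to become available.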

3.4. Energy Model

The energy consumption of SEC consists of two parts: transmission and computation. For transmission, the powers of the RISL and LISL are ε1 and ε2, respectively. For computation, we use the widely adopted model of energy consumption per computing cycle, κCs² [19]. Then, the energy consumed by processing task ti on satellite s is

E_{i,s} = \varepsilon_1 \tau_{i,a}^{\mathrm{RISL}} + \varepsilon_2 \tau_{i,a,s}^{\mathrm{LISL}} + \kappa C_s^2 \lambda_i \mu_i \tag{9}

Similar to Section 3.2, when ti is offloaded to access satellite a, we have Ei,a = ε1τi,aRISL + κCa²λiμi.
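The per-task energy model above can be sketched directly; the default values of ε1, ε2, and κ below are placeholders, not parameters from the paper.

```python
def task_energy(tau_risl: float, tau_lisl: float, c_s: float,
                lam_i: float, mu_i: float,
                eps1: float = 1.0, eps2: float = 0.5, kappa: float = 1e-27) -> float:
    """Energy for processing task t_i on satellite s: transmission energy
    (power x link time for RISL and LISL) plus computation energy
    kappa * C_s^2 * lambda_i * mu_i. eps1, eps2, kappa are illustrative."""
    return eps1 * tau_risl + eps2 * tau_lisl + kappa * c_s ** 2 * lam_i * mu_i
```

When the task stays on the access satellite, the LISL term is simply passed as zero, mirroring the special case noted above.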

3.5. Problem Formulation

This paper aims to enhance the service experience of delay-sensitive tasks while minimizing the overall energy consumption. Let xsi be the decision variable such that xsi = 1 if task ti is offloaded to satellite s, and xsi = 0 otherwise. Another variable, ysij, equals 1 if satellite s must perform task ti before tj, and 0 otherwise. Decision variables xsi and ysij describe the offloading assignment θ. Then, the MILP model of the considered problem is presented as follows:

F(\theta) = \min_{x_{si},\, y_{sij}} \sum_{i \in D} \Big( \alpha \max\{0,\, \tau_i^{F} - \sigma_i\} + \beta \sum_{s \in S} E_{i,s}\, x_{si} \Big) \tag{10}

\sum_{s \in S} x_{si} = 1, \quad \forall i \in D \tag{11}

\sum_{i \in D} \lambda_i\, x_{si} \le MA_s, \quad \forall s \in S \tag{12}

\sum_{i \in D} E_{i,s}\, x_{si} \le H_s, \quad \forall s \in S \tag{13}

\sum_{j \in D} y_{sij} \le x_{si} \le 1, \quad \forall s \in S,\; i \in D \tag{14}

\sum_{i \in D} y_{sij} \le x_{sj} \le 1, \quad \forall s \in S,\; j \in D \tag{15}

y_{sij}\,(\tau_j^{S} - \tau_i^{F}) \ge 0, \quad \forall s \in S,\; i, j \in D \tag{16}

x_{si} \in \{0, 1\},\; y_{sij} \in \{0, 1\}, \quad \forall s \in S,\; i, j \in D \tag{17}

\alpha + \beta = 1,\; \alpha \in [0, 1],\; \beta \in [0, 1] \tag{18}

where xsi and ysij are the decision variables representing offloading assignment θ, and α and β are weight factors. Equation (10) is the objective function, which minimizes the weighted sum of the deadline violations and energy usage of all tasks. Equation (11) states that each task is offloaded to exactly one satellite. Equation (12) states that all tasks offloaded to s must not surpass the available buffer size MAs, to avoid task overflow; initially, MAs = Ms. Equation (13) requires that the energy consumed by processing all tasks offloaded to s does not exceed the energy consumption limit Hs. Equations (14) and (15) ensure that each task has at most one predecessor and one successor in its sequence. Equation (16) expresses the temporal relations among tasks offloaded to the same satellite. Equations (17) and (18) state the domains of the variables.
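Given a candidate assignment together with the per-task finish times and energies from the previous subsections, the objective (10) can be evaluated with a short function. The data layout (plain dictionaries) is an assumption for illustration, not the paper's implementation.

```python
def objective(theta, finish, sigma, energy, alpha=0.5, beta=0.5):
    """Evaluate the weighted objective of Eq. (10) for an offloading
    assignment. `theta` maps satellite -> ordered task list; `finish`,
    `sigma`, and `energy` map task ids to finish time tau_i^F, deadline
    sigma_i, and per-task energy E_{i,s} on its assigned satellite."""
    violation = sum(max(0.0, finish[t] - sigma[t])      # deadline violations
                    for seq in theta.values() for t in seq)
    used = sum(energy[t] for seq in theta.values() for t in seq)  # total energy
    return alpha * violation + beta * used
```

Such an evaluator is all that PITO-style local search needs: it never solves the MILP directly but compares objective values of candidate sequences.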

4. The PI-Based Task Offloading Algorithm

This article develops a PITO algorithm for resolving delay-sensitive task offloading problems in SEC. PITO runs on each satellite in parallel and relies only on local communication via LISLs. We first introduce the distributed framework of PITO, followed by its two stages: task inclusion, and consensus and task removal. Additionally, we establish a re-offloading mechanism to handle dynamically arriving delay-sensitive tasks.

4.1. General Framework

In PITO, satellites exchange information with their neighbors via LISLs. Each satellite s ∈ S iteratively adds tasks to or removes tasks from its task sequence θs to minimize the global objective F(θ). To this end, we first introduce two indicators for each task t ∈ T: the removal impact and the inclusion impact.
(1) Removal impact: for task t ∈ θs, the removal impact R(θs ⊖ t) indicates the variation of F(θs) after removing t from θs; that is,

R(\theta_s \ominus t) = F(\theta_s) - F(\theta_s \ominus t) \tag{19}

where F(θs) is the objective value of θs, with F(θ) = ∑s∈S F(θs), and θs ⊖ t represents the removal of task t from θs. For completeness, we set R(θs ⊖ t) = ∞ for any t ∉ θs.
(2) Inclusion impact: for task t ∉ θs, the inclusion impact I(θs ⊕ t) represents the minimum variation of F(θs) after inserting t into θs; that is,

I(\theta_s \oplus t) = \min_{p \in \{1, 2, \ldots, |\theta_s| + 1\}} \{ F(\theta_s \oplus_p t) - F(\theta_s) \} \tag{20}

where θs ⊕p t indicates the inclusion of task t at the p-th position of θs. Similarly, we set I(θs ⊕ t) = ∞ in two cases: when t ∈ θs, or when constraints (12) and (13) cannot be satisfied after incorporating task t.
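Both indicators can be computed by exhaustive evaluation of a per-satellite objective, as in this hedged Python sketch. Here `local_F` is a pluggable stand-in for F(θs) (the paper's objective also involves deadlines and energy), and `feasible` stands in for constraints (12) and (13).

```python
def removal_impact(theta_s, t, local_F):
    """Drop in F(theta_s) when task t is removed; infinity if t is absent."""
    if t not in theta_s:
        return float("inf")
    rest = [u for u in theta_s if u != t]
    return local_F(theta_s) - local_F(rest)

def inclusion_impact(theta_s, t, local_F, feasible=lambda seq: True):
    """Minimum rise in F(theta_s) over all insertion positions; infinity if
    t is already present or no feasible position exists."""
    if t in theta_s:
        return float("inf")
    best = float("inf")
    for p in range(len(theta_s) + 1):       # try every insertion position
        cand = theta_s[:p] + [t] + theta_s[p:]
        if feasible(cand):
            best = min(best, local_F(cand) - local_F(theta_s))
    return best
```

In the test, a toy `local_F` (total completion time for unit-rate tasks) substitutes for the real objective.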
To introduce the general framework in Figure 3, let us begin with a straightforward example. Initially, satellite s ∈ S adds tasks into its sequence θs, following the inclusion rules in Section 4.2. Then, satellite s sends the removal impact R(θs ⊖ t) of every task t ∈ θs to each neighboring satellite k (i.e., G[s, k] = 1) via LISLs. After that, satellite k compares the received R(θs ⊖ t) against the inclusion impact I(θk ⊕ t) derived from its own sequence θk. If criterion (21) is met, task t is reassigned from θs to the appropriate position within θk, which decreases F(θ). Simultaneously, satellite k updates the removal impact of t and relays it to its neighboring satellites via LISLs. This process continues until (21) is no longer met. We now introduce Lemma 1 to detail criterion (21).
Lemma 1. 
For an offloading assignment θ = {θ1, θ2, …, θm}, let two satellites s and k be connected by an LISL (i.e., G[s, k] = 1) and let task t ∈ θs. Then F(θ) decreases when t is moved from θs to an appropriate position within θk, as long as (21) is met:

R(\theta_s \ominus t) > I(\theta_k \oplus t) \tag{21}
Proof. 
Let p′ be the optimal position in θk, i.e., p′ = argminp∈{1,2,…,|θk|+1}{F(θk ⊕p t) − F(θk)}, so that I(θk ⊕ t) = F(θk ⊕p′ t) − F(θk). After moving task t from θs to the p′-th position of θk, a new assignment θ′ is obtained, where θs′ = θs ⊖ t, θk′ = θk ⊕p′ t, and θl′ = θl, ∀l ∈ S\{s, k}. According to (19) and (20), we have F(θ′) = ∑l∈S F(θl′) = F(θ) − R(θs ⊖ t) + I(θk ⊕ t). Thus, F(θ′) < F(θ) holds (i.e., F(θ) decreases) exactly when R(θs ⊖ t) > I(θk ⊕ t) is met. □
Unfortunately, the limited LISL connections in SEC make it challenging to spread removal impacts among all satellites, so the SEC system is easily trapped in a local optimum [33]. Consider the example in Figure 4: nine satellites s1–s9 are connected via LISLs, and task t ∈ T is located in θ5. We make two assumptions: (1) I(θs ⊕ t) > R(θ5 ⊖ t) for ∀s ∈ {4, 8}, and (2) I(θ7 ⊕ t) < R(θ5 ⊖ t). This means that moving task t from θ5 to θ7 would decrease F(θ). However, since I(θs ⊕ t) > R(θ5 ⊖ t) holds for ∀s ∈ {4, 8}, task t cannot be incorporated into either θ4 or θ8 by Lemma 1, and so t can never reach θ7, resulting in a local optimum. This problem also exists in other distributed approaches, such as CNP [24] and DETS [26].
To avoid local optima and propagate removal impacts among satellites, each satellite s ∈ S maintains three vectors, Rs, Ws, and Qs, when communicating with its neighbors in PITO.
(1) Rs = [Rs1, Rs2, …, Rsn]T is a vector recording the latest removal impacts of the tasks in T. Initially, we set Rst = R(θs ⊖ t) for ∀t ∈ θs, and Rst = ∞ otherwise.
(2) Ws = [Ws1, Ws2, …, Wsn]T is a vector recording the believed offloaded satellite of each task in T. The entry Wst = k represents the fact that satellite s believes t is offloaded to satellite k. Initially, we set Wst = s for ∀t ∈ θs, and Wst = 0 otherwise.
(3) Qs = [Qs1, Qs2, …, Qsm]T is a vector whose entry Qsk is the timestamp at which satellite s believes it received the latest information from satellite k. Initially, Qsk = 0 for ∀k ∈ S. During the communication process, Qsk is updated under the following rule:

Q_{sk} = \begin{cases} \tau_{sk}, & \text{if } G[s, k] = 1 \\ \max_{l \in S \wedge G[s, l] = 1} Q_{lk}, & \text{otherwise} \end{cases} \tag{22}

where τsk is the time at which satellite s receives vectors from k. If satellite k is a neighboring satellite of s (i.e., G[s, k] = 1), we have Qsk = τsk; otherwise, Qsk is set to the latest timestamp Qlk among its neighbors l (i.e., l ∈ S ∧ G[s, l] = 1).
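The timestamp update rule (22) can be sketched as follows; the dictionary-based data structures and function name are assumptions for illustration.

```python
def update_timestamp(Q_s, s, k, G, tau_sk, Q_neighbors):
    """Rule (22): refresh satellite s's belief about the freshest information
    it holds from satellite k. `Q_neighbors` maps each neighbor l of s to
    that neighbor's timestamp vector Q_l."""
    if G[s][k] == 1:
        Q_s[k] = tau_sk                                  # direct LISL: stamp the receive time
    else:                                                # relayed: adopt the freshest neighbor value
        Q_s[k] = max(Q_l[k] for Q_l in Q_neighbors.values())
    return Q_s
```

These timestamps let a satellite judge, during consensus, whose copy of a removal impact is the most recent.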
Subsequently, we detail the framework of PITO in Figure 3. Algorithm PITO runs on each satellite in parallel and comprises two stages: (1) task inclusion, and (2) consensus and task removal. Specifically, in the task inclusion stage, each satellite s ∈ S independently constructs its sequence θs. This process may yield different removal impacts for a specific task t ∈ T on different satellites. A second stage is therefore required, consisting of two further steps: first, a common removal impact vector Rs is reached among satellites during the consensus process; then, satellites remove conflicting tasks from their sequences in the task removal process. The two stages are performed alternately. When there are no further changes in the offloading assignment θ, Algorithm PITO terminates on all satellites. Notably, in PITO, satellites communicate only with their neighbors during the consensus process, which reduces the communication overhead. The procedure of PITO is summarized in Algorithm 1.
Algorithm 1. PITO
Input: satellites S, delay-sensitive tasks T, network topology G.
Output: offloading assignment θ.
  • Initialize Rs, Ws, and Qs for each satellite s ∈ S;
  • Initialize θ = {θs | s ∈ S} where θs = ∅ for ∀s ∈ S, and converged = False;
  • while not converged
  •      Let θold = θ;
  •      Perform task inclusion on each s ∈ S;
  •      Send vectors Rs, Ws, and Qs of each s ∈ S to its neighbors k with G[s, k] = 1;
  •      Update Qs of each s ∈ S with the received information;
  •      Perform the consensus process until a common Rs is reached;
  •      Perform task removal on each s ∈ S;
  •      if θ = θold
  •           Let converged = True;
  •      end
  • end
  • Output offloading assignment θ;

4.2. Task Inclusion

In the task inclusion stage, each satellite independently incorporates tasks into its task sequence, based on local information.
For satellite s ∈ S, the inclusion impact vector Is = [Is1, Is2, …, Isn]T is first constructed according to (20), where entry Ist = I(θs ⊕ t) for ∀t ∈ T. Then, Is is compared with vector Rs = [Rs1, Rs2, …, Rsn]T (after consensus). A task in T is inserted into θs only if

\max_{t \in T} \{ R_{st} - I_{st} \} > 0 \tag{23}

According to Lemma 1, when (23) is satisfied, there exists an inclusion that can reduce F(θ). Then, task t′ = argmaxt∈T{Rst − Ist} is inserted at the p′-th position of sequence θs, where p′ is the position attaining I(θs ⊕ t′) (i.e., p′ = argminp∈{1,2,…,|θs|+1}{F(θs ⊕p t′) − F(θs)}). Next, we update vectors Rs and Ws by setting Rst′ = Ist′ and Wst′ = s. After that, Is is recalculated and (23) is checked again. This process repeats until no tasks remain in T or (23) is no longer met. Finally, we obtain the new Rs′ based on the updated θs′. The task inclusion stage is presented in Algorithm 2.
Algorithm 2. Task Inclusion
Input: task set T, task sequence θs, and vectors Rs and Ws.
Output: new task sequence θs′, new vectors Rs′ and Ws′.
  • Calculate inclusion impact vector Is;
  • while T\θs ≠ ∅ and (23) is met
  •      Obtain task t′ = argmaxt∈T{Rst − Ist} and the corresponding position p′ in θs;
  •      Insert t′ at the p′-th position of θs;
  •      Update Rst′ = Ist′ and Wst′ = s;
  •      Update Is;
  • end
  • Let θs′ = θs and Ws′ = Ws;
  • Obtain Rs′ based on θs′;
  • Output θs′, Rs′, and Ws′;
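The inclusion loop of Algorithm 2 can be sketched in Python as below. This is a hedged, simplified sketch: `local_F` is a pluggable stand-in for the per-satellite objective F(θs), `feasible` stands in for constraints (12) and (13), and all names are assumptions rather than the paper's implementation.

```python
def task_inclusion(T, theta_s, R_s, s, local_F, feasible=lambda seq: True):
    """Repeatedly insert the task with the largest positive margin
    R_st - I_st, per criterion (23), at its best feasible position."""
    W_s = {}
    while True:
        best_t, best_seq, best_I, best_margin = None, None, None, 0.0
        for t in T:
            if t in theta_s:
                continue
            # inclusion impact of t: best feasible insertion position
            I_st, seq_t = float("inf"), None
            for p in range(len(theta_s) + 1):
                cand = theta_s[:p] + [t] + theta_s[p:]
                if feasible(cand):
                    delta = local_F(cand) - local_F(theta_s)
                    if delta < I_st:
                        I_st, seq_t = delta, cand
            if I_st == float("inf"):
                continue                       # no feasible position for t
            margin = R_s.get(t, float("inf")) - I_st  # unknown tasks have R_st = inf
            if margin > best_margin:
                best_t, best_seq, best_I, best_margin = t, seq_t, I_st, margin
        if best_t is None:                     # criterion (23) no longer met
            return theta_s, R_s, W_s
        theta_s = best_seq
        R_s[best_t] = best_I                   # R_st' = I_st'
        W_s[best_t] = s                        # W_st' = s
```

The test drives the loop with a toy total-completion-time objective for two tasks.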
Following the task inclusion stage, new parameters (θs′, Rs′, and Ws′) are generated on each satellite s ∈ S. However, since tasks are included independently, a specific task may be assigned to multiple satellites, leading to task conflicts. Moreover, the obtained Rs varies across different satellites s ∈ S. Both issues pose a challenge in achieving a unified offloading assignment θ. Thus, we introduce the consensus and task removal stage to resolve these problems.

4.3. Consensus and Task Removal

In this stage, satellites reach a common Rs and remove conflict tasks from their task sequences.

4.3.1. Consensus

During the consensus process, each satellite s ∈ S broadcasts Rs, Ws, and Qs to its neighboring satellites via LISLs. Subsequently, the values of Rs and Ws in each satellite are revised based on the information from its neighbors, following the action rules [35] designed to achieve consensus. The adopted consensus mechanism is detailed as follows.
Consider a sending satellite s and a receiving satellite k: vectors Rs = [Rs1, Rs2, …, Rsn]T, Ws = [Ws1, Ws2, …, Wsn]T, and Qs = [Qs1, Qs2, …, Qsm]T are transmitted from s to k via the LISL. After receiving these vectors, satellite k first updates its timestamp vector Qk based on (22). Then, for each task t ∈ T, the elements Rkt and Wkt in Rk and Wk are updated according to Table 2. In Table 2, three actions are considered, with Maintain being the default: (1) Update: Rkt = Rst and Wkt = Wst; (2) Maintain: Rkt = Rkt and Wkt = Wkt; and (3) Reset: Rkt = ∞ and Wkt = 0.
Notably, the consensus process is crucial for reaching a common Rs among satellites. Recall that the removal impact vector Rs, as defined in Section 4.1, depends heavily on the task sequence θs. Broadcasting removal impacts directly to all satellites may cause several elements of Rs to converge to non-existent small values that cannot be achieved by any satellite s ∈ S with its actual task sequence θs.

4.3.2. Task Removal

After reaching a common Rs through the consensus process, task conflicts may still exist. Thus, the task removal process is conducted on each satellite s ∈ S to eliminate task conflicts in θs.
For satellite s ∈ S, we first obtain the pending removal tasks Ψs = {t ∈ θs | Wst ≠ s}. Tasks in Ψs are those that exist in θs but are believed, according to Ws, not to be offloaded to s. Then, a local removal impact vector R̄s = [R̄s1, R̄s2, …, R̄sn]T is calculated from θs, where entry R̄st = R(θs ⊖ t) for ∀t ∈ θs, and R̄st = ∞ otherwise (the bar distinguishes this locally computed vector from the consensus vector Rs). Next, a task t ∈ Ψs is removed from θs when

\max_{t \in \Psi_s} \{ R_{st} - \bar{R}_{st} \} > 0 \tag{24}

If condition (24) is met, there exists a conflicting task whose consensus removal impact exceeds its local one, indicating a worse offloading assignment. Then, task t′ = argmaxt∈Ψs{Rst − R̄st} is removed from θs and Ψs. After that, R̄s is updated and (24) is evaluated again. This process continues until Ψs = ∅ or (24) is no longer met. Finally, the remaining tasks in Ψs are retained in θs, and we set Wst = s and Rst = R(θs ⊖ t) for each t ∈ Ψs. The task removal process is summarized in Algorithm 3.
Algorithm 3. Task Removal
Input: task sequence θs, and vectors Rs and Ws.
Output: new task sequence θs′, new vectors Rs′ and Ws′.
  • Obtain pending removal tasks Ψs = {t ∈ θs | Wst ≠ s};
  • Calculate local removal impact vector R̄s = [R̄s1, R̄s2, …, R̄sn]T;
  • while Ψs ≠ ∅ and (24) is met
  •      Obtain task t′ = argmaxt∈Ψs{Rst − R̄st};
  •      Remove t′ from θs and Ψs;
  •      Update R̄s;
  • end
  • for each remaining task t ∈ Ψs
  •      Update Wst = s;
  •      Update Rst = R(θs ⊖ t);
  • end
  • Let θs′ = θs, Rs′ = Rs and Ws′ = Ws;
  • Output θs′, Rs′, and Ws′.
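A hedged Python sketch of the removal loop in Algorithm 3 follows; as before, `local_F` is a pluggable toy objective standing in for F(θs), and the names are illustrative assumptions.

```python
def task_removal(theta_s, R_s, W_s, s, local_F):
    """Drop conflicting tasks whose consensus removal impact exceeds the
    locally computed one (criterion (24)); claim the tasks that remain."""
    def local_R(seq, t):                     # removal impact w.r.t. the current sequence
        return local_F(seq) - local_F([u for u in seq if u != t])
    pending = [t for t in theta_s if W_s.get(t) != s]   # Psi_s: believed to belong elsewhere
    while pending:
        margins = {t: R_s.get(t, float("inf")) - local_R(theta_s, t) for t in pending}
        t_best = max(margins, key=margins.get)
        if margins[t_best] <= 0:             # criterion (24) no longer met
            break
        theta_s = [u for u in theta_s if u != t_best]   # remove the worst conflict
        pending.remove(t_best)
    for t in pending:                        # retained conflicts are claimed by s
        W_s[t] = s
        R_s[t] = local_R(theta_s, t)
    return theta_s, R_s, W_s
```

In the test, a conflicting task whose consensus impact (10.0) exceeds its local impact (8.0) is removed from the sequence.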

4.4. Convergence and Complexity Analysis

The convergence of PITO is naturally guaranteed. In PITO, each satellite s ∈ S optimizes F(θ) by iteratively adding tasks to or removing tasks from its task sequence θs. According to Lemma 1, the offloading assignment θs of a satellite s changes only if F(θ) thereby decreases; since F(θ) is bounded below, it cannot decrease indefinitely, and the process must eventually stop. Thus, Algorithm PITO has converged when there are no changes to θ after one iteration.
Subsequently, we discuss the complexity of PITO. There are m satellites and n delay-sensitive tasks in SEC. In the task inclusion stage, each of the m satellites independently integrates at most n tasks into its task sequence, so the complexity is O(mn). In the consensus process, each of the m satellites receives information from up to four neighboring satellites and updates the entries of its vectors for n tasks, giving a complexity of O(4mn). In the task removal process, at most m satellites remove no more than n conflicting tasks from their sequences, so its complexity is O(mn). Therefore, the computational complexity of each iteration of PITO is O(mn) + O(4mn) + O(mn) = O(6mn). Let δ denote the number of iterations required for convergence; the complexity of PITO is then O(6δmn), i.e., it is of polynomial complexity.
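The per-iteration bound can be made concrete with a quick tally; the iteration count of 50 below is a hypothetical δ chosen purely for illustration:

```python
def pito_ops_per_iteration(m, n):
    """Upper-bound operation count for one PITO iteration (Section 4.4)."""
    inclusion = m * n       # each satellite inserts at most n tasks
    consensus = 4 * m * n   # at most four neighbors, n vector entries each
    removal = m * n         # each satellite removes at most n conflicting tasks
    return inclusion + consensus + removal  # = 6mn

# Largest instances in Section 5: constellation F (m = 66) with n = 100 tasks.
per_iter = pito_ops_per_iteration(66, 100)   # 6 * 66 * 100 = 39600
total = 50 * per_iter                        # with a hypothetical delta = 50
```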

4.5. Re-Offloading Mechanism

Using PITO, delay-sensitive tasks can be offloaded on board by satellites. However, due to the dynamics in SEC, the dynamic arrival of delay-sensitive tasks disrupts the original offloading assignment, possibly even rendering it infeasible. Thus, in this subsection, we propose a re-offloading mechanism based on a match-up strategy [38,39,40].
At time τ, q delay-sensitive tasks Tx = {tn+1, tn+2, …, tn+q} are newly generated by observation satellites Dx = {n + 1, n + 2, …, n + q}, and need to be processed by the SEC satellites in S. Each task ti ∈ Tx is also characterized by the tuple <λi, μi, σi> defined in Section 3.1. Given the original assignment θ, the proposed re-offloading mechanism proceeds as follows:
Step 1: Obtain the influence horizon. Let ε be the expected re-offloading time; then, the re-offloading assignment θ* = {θs* | s ∈ S} will be executed at time τ + ε. Additionally, the new tasks in Tx are expected to be completed before max_{ti∈Tx} σi. Thus, the influence horizon is defined as [μa, μb] = [τ + ε, max_{ti∈Tx} σi].
Step 2: Determine the re-offloading tasks. Within horizon [μa, μb], when the new delay-sensitive tasks in Tx are offloaded, the original tasks of θ in the same horizon are disrupted, represented as To = {ti ∈ T | τi^S ∈ (μa, μb)}. Next, by time μa, several tasks in θ may have been completed or may currently be in processing. Since the re-offloading assignment θ* must not conflict with implemented tasks, the tasks Ta = {ti ∈ T | τi^S ≤ μa} cannot be re-offloaded. Additionally, tasks in θ executed after μb are only slightly impacted, and these tasks, denoted as Tb = {ti ∈ T | τi^S ≥ μb}, are reserved to reduce the re-offloading time. Thus, T = Ta ∪ To ∪ Tb holds, and the re-offloading tasks are Tr = Tx ∪ To. Figure 5 illustrates an example of the proposed task sets in θ, where tasks in Ta, To, and Tb are marked in orange, cyan, and red, respectively.
The above task sets (Ta, To, and Tb) are obtained in a distributed way by Algorithm 4. First, each satellite s ∈ S calculates the time parameters for the tasks in θs, as described in Section 3.3. Then, a vector Us = [Us1, Us2, …, Us(n+q)]T is generated, where entry Ust = 1 for t ∈ {θs | τt^S ≤ μa}, Ust = 2 for t ∈ {θs | τt^S ≥ μb}, and otherwise, Ust = 0. Since Ust is determined only by the satellite s with t ∈ θs, a common Us can be easily obtained, where Ust = 0, 1, and 2 for t belonging to Tr, Ta, and Tb, respectively.
Algorithm 4. Task classification
Input: satellites S, original assignment θ.
Output: vectors Us, ∀s ∈ S.
  • Each satellite s ∈ S calculates time parameters for tasks in θs;
  • Obtain initial Us for each satellite s ∈ S;
  • while a common Us is not reached
  •     Send Us of each s ∈ S to neighboring satellites k with G[s, k] = 1;
  •     for each received Uk by satellite s      //Update Us on satellite s
  •         for each t ∈ T
  •             if Ust = 0 and Ukt ≠ 0
  •                 Let Ust = Ukt;
  •             end
  •         end
  •     end
  • end
  • Output Us, ∀s ∈ S.
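A compact single-process simulation of Algorithm 4 is sketched below; it assumes a connected ISL topology and a start_times dictionary standing in for the time parameters of Section 3.3, with labels 1, 2, and 0 marking tasks in Ta, Tb, and the re-offloading set, respectively:

```python
def classify_tasks(theta, start_times, mu_a, mu_b, neighbors):
    """Distributed task classification (sketch of Algorithm 4).

    theta       : dict satellite -> list of its assigned tasks
    start_times : dict task -> scheduled start time under theta
    neighbors   : dict satellite -> adjacent satellites (G[s, k] = 1)
    Returns one U vector per satellite; a connected topology is assumed,
    so all vectors converge to a common labeling.
    """
    all_tasks = [t for seq in theta.values() for t in seq]
    U = {}
    for s, seq in theta.items():
        U[s] = {t: 0 for t in all_tasks}
        for t in seq:  # only the executing satellite can label its own tasks
            if start_times[t] <= mu_a:
                U[s][t] = 1          # belongs to T_a: already implemented
            elif start_times[t] >= mu_b:
                U[s][t] = 2          # belongs to T_b: reserved after the horizon

    # Flood labels over ISLs until every satellite holds a common vector.
    changed = True
    while changed:
        changed = False
        for s in theta:
            for k in neighbors[s]:
                for t in all_tasks:
                    if U[k][t] == 0 and U[s][t] != 0:
                        U[k][t] = U[s][t]
                        changed = True
    return U
```

Labels only change from 0 to a nonzero value, so the flooding loop terminates after at most a number of sweeps equal to the network diameter.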
Step 3: Obtain the re-offloading assignment. First, we obtain the available buffer size MAs for each s ∈ S. Under the original assignment θ, several tasks Tc = {ti ∈ T | τi^F ≤ μa} have been completed by time μa, and the buffer resources occupied by them are released; then, we have
MAs = Ms − ∑_{ti∈θs} λi + ∑_{ti∈θs∩Tc} λi
Subsequently, two modifications are involved in F(θ*). For a task ti ∈ Tx, data uploading begins at time μa. Then, the data readiness time τi^D, as defined in (6), is revised as
τi^D = μa + τi,s^comm
Moreover, some tasks in θk may also be re-offloaded into θs*. At time μa, the involved data are transmitted from k to s, leading to additional delay and energy usage. Therefore, for tasks ti ∈ θk ∩ θs*, the data readiness time τi^D in (6) and the energy consumption Ei,s in (9) are revised as
τi^D = μa + τi,k,s^LISL
Ei,s = ε1·τi,a^RISL + ε2·(τi,a,k^LISL + τi,k,s^LISL) + κ·Cs^2·λi·μi
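The revised buffer and energy expressions can be sketched as follows. The function names are illustrative; the constants default to the Table 5 values (ε1 = 2 W, ε2 = 1 W, κ = 10^−28, Cs = 5 GHz), and the task parameters λi and μi are mid-range picks:

```python
def available_buffer(M_s, theta_s, T_c, data_size):
    """MA_s: buffer of satellite s at mu_a, crediting tasks finished by then."""
    occupied = sum(data_size[t] for t in theta_s)
    released = sum(data_size[t] for t in theta_s if t in T_c)
    return M_s - occupied + released

def reoffload_energy(tau_RISL, tau_LISL_ak, tau_LISL_ks,
                     eps1=2.0, eps2=1.0, kappa=1e-28, C_s=5e9,
                     lam=20e6, mu=1.2e3):
    """Revised E_{i,s}: uplink plus two LISL hops plus computation energy.

    tau_* are transmission delays in seconds; lam is the task data size in
    bits and mu its workload in cycles per bit.
    """
    comm = eps1 * tau_RISL + eps2 * (tau_LISL_ak + tau_LISL_ks)
    comp = kappa * C_s ** 2 * lam * mu   # dynamic CPU energy for lam*mu cycles
    return comm + comp
```

With the defaults and link delays of 0.2 s (RISL) and 0.1 s per LISL hop, the computation term dominates: 60 J of processing energy against 0.6 J for transmission, well within the Hs = 5000 J limit of Table 5.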
After that, the re-offloading is performed, which introduces two main modifications to the original PITO: (1) in the task inclusion stage, only tasks t ∈ Tr (i.e., Ust = 0) can be inserted into each θs*, and the insertion positions are further limited by the pruning approach; (2) the tasks in Ta ∪ Tb are reserved in θs* during the task removal process.
Specifically, in the task inclusion stage, the pruning approach in Algorithm 5 is proposed, which generates candidate insertion positions Φs for each satellite s ∈ S with θs*.
Algorithm 5. Pruning approach
Input: task sequence θs* and vectors Us.
Output: candidate positions Φs.
  • Initialize positions pa = 1 and pb = |θs*| + 1;
  • for u = 1 to |θs*|
  •     Let t = θs*[u];
  •     if Ust = 2
  •         Let pb = u;
  •         Break;
  •     end
  • end
  • for v = |θs*| to 1
  •     Let t = θs*[v];
  •     if Ust = 1
  •         Let pa = v + 1;
  •         Break;
  •     end
  • end
  • Output candidate positions Φs = {pa, pa + 1, …, pb};
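A minimal Python sketch of Algorithm 5, assuming Us is a dictionary mapping task ids to their classification labels (1 for Ta, 2 for Tb, 0 otherwise):

```python
def candidate_positions(theta_s, U_s):
    """Pruning approach (sketch of Algorithm 5): insertion slots for new tasks.

    Positions are 1-based, matching the paper: a new task may only be
    inserted after the last reserved T_a task and no later than the first
    reserved T_b task.
    """
    p_a, p_b = 1, len(theta_s) + 1
    for u, t in enumerate(theta_s, start=1):   # first task reserved in T_b
        if U_s[t] == 2:
            p_b = u
            break
    for v in range(len(theta_s), 0, -1):       # last task reserved in T_a
        if U_s[theta_s[v - 1]] == 1:
            p_a = v + 1
            break
    return list(range(p_a, p_b + 1))
```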
In Algorithm 5, positions pa and pb are initialized as 1 and |θs*| + 1, respectively. Then, in Lines 2–8, the index u of the first task t ∈ θs* ∩ Tb is obtained, and we set pb = u. Similarly, the index v of the last task t ∈ θs* ∩ Ta is obtained in Lines 9–15, and we set pa = v + 1. From pa and pb, we obtain the candidate insertion positions Φs = {pa, pa + 1, …, pb}. Based on Φs, the inclusion impact vector Is = [Is1, Is2, …, Is(n+q)]T is calculated, where entry Ist = min_{p∈Φs} {F(θs* ⊕p t) − F(θs*)} for ∀t ∈ Tr∖θs* satisfying constraint (12), and otherwise, Ist = ∞. After inserting a task into θs*, we update Φs and Is. Differently from Algorithm 2, the task inclusion process in re-offloading continues until Tr∖θs* = ∅ or (23) is no longer satisfied.
Moreover, in the task removal process, tasks t ∈ Ta ∪ Tb are retained, and only tasks in Tr can be removed from θs*. Hence, the pending removal tasks of satellite s are revised as Ψs = {t ∈ θs* ∩ Tr | Wst ≠ s} in re-offloading. Apart from the above modifications, PITO proceeds as in the original design, and its two stages are alternately performed until a re-offloading assignment θ* is obtained.
The proposed re-offloading mechanism has the following advantages. Firstly, it reduces the number of re-offloaded tasks and uses a pruning approach to minimize insertion attempts, thereby reducing the re-offloading time. Secondly, reserving the tasks t ∈ Tb in θ* prevents unnecessary data transmission, resulting in lower extra energy usage while maintaining assignment quality.

5. Computational Experiments

A series of computational experiments are conducted to evaluate the performance of the proposed PITO.

5.1. Experimental Setup

We establish an SEC simulation environment in MATLAB, as in [1,21,27], adopting the six Walker Delta constellations A–F listed in Table 3. The simulation starts at 20 Mar 2024 00:00:00.000 UTCG, with satellite coordinates obtained from the Satellite Tool Kit (STK). We conduct experiments on three types of instances (small, medium, and large), resulting in 48 combinations (3 × 16), detailed in Table 4. Each combination consists of 10 different testing instances. The altitudes of the observation satellites are set to 500 km, with average distances between satellites of 1000 km and 100 km for low-density and high-density instances, respectively. The task deadlines σi are randomly generated from [15,25] for emergency cases and [15,31] for normal cases. Additional parameters are detailed in Table 5.
The proposed PITO is compared with three state-of-the-art algorithms: CNP [24], DETS [26], and ADMM [27], briefly explained as follows.
CNP is a distributed coordination mechanism based on market-like agreements, where agents bid on tasks announced by a manager agent, with the most suitable agent winning the contract.
DETS is a centerless method based on the directed diffusion algorithm, using a novel computing gradient that jointly considers communication and computation resources.
ADMM is a distributed mechanism that uses binary variable relaxation to convert the original nonconvex problem into a linear programming problem. After solving it with distributed convex optimization, a variable recovery algorithm is applied to obtain a discrete solution.
Then, we use the following three performance indicators:
(1)
RV: the relative value of F(θ) for an algorithm compared with the others, defined as
RV = (F(θ) − Fall(θ)) / Fall(θ)
where F(θ) is the objective value obtained by an algorithm on a testing instance, and Fall(θ) is the best objective value obtained among all algorithms on the same instance. A lower RV value means better performance; the indicator also reduces variation among instances in the same combination.
(2)
CT: the communication times, indicating the communication burden between satellites during task offloading.
(3)
RT: the running time required for the task offloading process.
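The RV indicator above reduces to a relative gap against the best objective found on an instance. The sketch below reuses the objective values quoted for the {E, 100} instances in Section 5.2:

```python
def relative_value(F_alg, F_all_best):
    """RV: relative gap of an algorithm's objective to the best one found."""
    return (F_alg - F_all_best) / F_all_best

# Section 5.2 quotes normalized objectives on {E, 100}: best (PITO) = 1, CNP = 1.2925.
gap = relative_value(1.2925, 1.0)   # CNP sits 29.25% above the best objective
```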
To eliminate randomness, each algorithm is independently run 10 times on each instance. We then calculate the average RV (aRV), average CT (aCT), and average RT (aRT) to evaluate performance. All algorithms are coded in MATLAB and run on a PC with an Intel Core i7-14700K CPU @ 5.6 GHz and 64 GB of RAM under the 64-bit Windows 11 operating system.

5.2. Comparison with Existing Distributed Algorithms

In this subsection, Algorithm PITO is compared with the three competitors (i.e., CNP [24], DETS [26], and ADMM [27]) described in Section 5.1. The comparison results for small-scale, medium-scale, and large-scale instances are grouped in Table 6, Table 7 and Table 8, respectively. The performances of the four algorithms vary markedly in terms of aRV, aCT, and aRT, and the optimal values are marked in bold. These tables show that PITO obtains the optimal aRV and aCT in almost all instances; only in a few cases does DETS perform best. This is because DETS uses a directed diffusion algorithm that considers communication and computation resources, yielding good solutions for small-scale instances; however, it tends to become stuck in local optima for large-scale instances. Figure 6 further illustrates the line charts of aRV, aCT, and aRT for these algorithms across different instances.
Figure 6a illustrates that PITO achieves minimal aRV values compared to the three competitors, highlighting its superiority in offloading assignment quality. For small-scale instances, the differences between algorithms are relatively small due to the limited solution space; however, as the instance scale increases, the variations become more pronounced. For example, in instances with {E, 100}, the objective value F(θ) of PITO is 22.63% (≈(1.2925 − 1)/1.2925) and 11.49% (≈(1.1298 − 1)/1.1298) lower than those of CNP and DETS, respectively. This is because CNP and DETS are more easily trapped in local optima [33] due to their reliance on direct neighbor information (as discussed in Section 4.1), which consequently degrades offloading assignment quality. In contrast, PITO adopts a consensus process in which satellites share comprehensive information vectors with their neighbors, enabling PITO to escape from local optima. ADMM sidesteps these local optimum issues by centrally updating global variables [27]. However, it still faces challenges in discrete–continuous transformation and binary variable recovery [27], which affect assignment quality, especially for large-scale instances.
Figure 6b shows the aCT of all algorithms. PITO achieves the smallest values in all instances, indicating its low communication burden. This is because satellites in PITO communicate only with their neighbors during the consensus process, and the adopted vectors further reduce the required communication times. CNP and DETS rank second and third, respectively; the diffusion mechanism in DETS necessitates more communication than CNP. ADMM performs the worst because the global variables are updated in each iteration, requiring local variables from all satellites. This results in a significant communication burden, which also increases with the number of satellites and tasks.
Figure 6c illustrates the aRT of all algorithms. CNP has the shortest running time due to its simplicity. Notably, CNP, DETS, and PITO show only a slight increase in running time as the scale grows, while ADMM exhibits a significant ascending trend. This is attributed to ADMM's reliance on iterative convex optimization algorithms or CVX tools [27], which are time-consuming, especially for large-scale instances. PITO shows a certain increase in aRT for large-scale instances because the higher numbers of satellites and tasks require more iterations to reach consensus; this issue can be addressed by dividing tasks into groups or by clustering satellites.
Based on Figure 6, PITO consistently achieves the best assignments with the minimal communication burden, although it takes slightly more time than DETS and CNP. Generally, the slight increase in runtime in exchange for better assignments is worthwhile. Consequently, we conclude that PITO outperforms existing methods in solving the task offloading problem in SEC.

5.3. Validation of the Re-Offloading Mechanism

In this subsection, we validate the effectiveness of the proposed re-offloading mechanism. We compare it with the existing z-stage method [37] in two groups of experiments. The z-stage method re-offloads all uncompleted tasks. In contrast, our proposed mechanism re-offloads only the tasks in Tr = Tx ∪ To and uses a pruning approach to avoid unnecessary insertion attempts, further speeding up the re-offloading process. As in Section 5.2, ten different testing instances are generated for each combination, and each algorithm is independently run ten times on each instance. The comparison results are presented as follows.
Figure 7 shows the results for aRV and aRT under different numbers of newly generated tasks q, using original instances of size {B, 20}. The proposed re-offloading mechanism consistently generates assignments with aRV values similar to those of the z-stage method. When q = 5, although the proposed mechanism is slightly worse in aRV, its running time is 89.11% (≈(0.1038 − 0.0113)/0.1038) lower than that of the z-stage method. As q increases, the difference between the two methods diminishes. This is because more newly generated tasks tend to yield a larger μb value, involving more tasks in re-offloading. For the instances with q = 30, the differences in aRV between the two approaches are negligible, yet the proposed mechanism still reduces the running time by 57.36% (≈(0.7024 − 0.2995)/0.7024).
Figure 8 illustrates the comparison results for aRV and aRT under different instances with q = 10. The difference in aRV between the two approaches remains consistently small. However, as the instance scale increases, there is a notable contrast in their running times. Especially for instances of size {F, 100}, the proposed re-offloading mechanism reduces the running time by 91.23% (≈(34.0785 − 2.9881)/34.0785). This reduction is attributed to the proposed mechanism retaining several original subsequences based on the influence horizon, thereby minimizing the number of re-offloading tasks. Furthermore, the employed pruning approach significantly reduces insertion attempts. These designs allow our mechanism to reduce the running time while ensuring assignment quality. Conversely, the z-stage method includes all uncompleted tasks in re-offloading, leading to a significant increase in runtime as the instance scale grows.

6. Discussion

In this section, we further discuss the effect of the instance parameters (number of observation satellites n and number of SEC satellites m) on these algorithms. We organize the comparisons into two groups. As before, we generate ten different testing instances for each combination, and each algorithm is independently run ten times on each instance. The comparison results are as follows:
Figure 9 illustrates the comparison results for aRV, aCT, and aRT under different numbers of observation satellites. The performance ranking of the algorithms remains unchanged despite variations in the number of observation satellites. PITO consistently achieves the optimal aRV and aCT across all instances, demonstrating its consistent effectiveness. Moreover, as the number of observation satellites increases, all algorithms exhibit an ascending trend in both aCT and aRT. For the instances with n = 80, PITO not only achieves significantly superior aRV values but also requires 80.35% (≈(789 − 155)/789) and 74.29% (≈(603 − 155)/603) fewer communication times than ADMM and DETS, respectively.
Figure 10 shows the results for aRV, aCT, and aRT under different constellations. Once again, PITO achieves the optimal aRV. Additionally, as the number of satellites increases, the aCT and aRT values of CNP, DETS, and PITO show only marginal growth, while those of ADMM exhibit a pronounced upward trend. This is because, in the consensus process of PITO, satellites communicate only with their neighbors using the vectors defined in Section 4.1, so the required number of convergence iterations δ increases only slightly with the number of satellites. For the instances with constellation F (i.e., m = 66), PITO still achieves the best aRV value and requires 26.00% (≈(100 − 74)/100) and 43.79% (≈(131.67 − 74)/131.67) fewer communication times than CNP and DETS, respectively.

7. Conclusions

This paper focuses on the task offloading problem in SEC with dynamic delay-sensitive tasks. Considering the special demands of delay-sensitive tasks, an MILP model is first presented that minimizes the weighted sum of deadline violations and energy consumption. Based on the MILP model, we take into account the sparsity of ISLs and propose a fully-decentralized algorithm, called PITO, to solve the task offloading problem. To improve assignment quality, PITO utilizes a consensus process to avoid local optima, which also reduces the required communication times. To deal with dynamically arriving delay-sensitive tasks, we introduce a re-offloading mechanism based on a match-up strategy, which reduces the number of re-offloaded tasks and prevents unnecessary insertion attempts by pruning. The simulation results demonstrate that PITO outperforms the state-of-the-art algorithms in [24,26,27]: it obtains optimal assignments at only a small additional time cost while reducing the communication burden. The proposed re-offloading mechanism achieves significantly higher efficiency than the existing method [37] while maintaining the quality of re-offloading assignments. Extending PITO to more practical scenarios with data-dependent tasks is planned in our future work.

Author Contributions

R.Z. conceived the study and designed the algorithm. R.Z. and Y.F. implemented the algorithm, wrote the paper, and supported the writing review and editing. Y.Y., X.L. and H.L. provided theoretical guidance and suggestions for revising the paper. Y.Y. provided funding support and necessary assistance for the writing of the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Science and Technology Innovation 2030-Key Project of “New Generation Artificial Intelligence” under Grant 2020AAA0108203 and the National Natural Science Foundation of P.R. China under Grants 62003258 and 62103062.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Leyva-Mayorga, I.; Martinez-Gost, M.; Moretti, M.; Pérez-Neira, A.; Vázquez, M.Á.; Popovski, P.; Soret, B. Satellite edge computing for real-time and very-high resolution earth observation. IEEE Trans. Commun. 2023, 71, 6180–6194. [Google Scholar] [CrossRef]
  2. Kim, T.; Kwak, J.; Choi, J.P. Satellite Edge Computing Architecture and Network Slice Scheduling for IoT Support. IEEE Internet Things J. 2022, 9, 14938–14951. [Google Scholar] [CrossRef]
  3. Gomes, V.C.F.; Queiroz, G.R.; Ferreira, K.R. An Overview of Platforms for Big Earth Observation Data Management and Analysis. Remote Sens. 2020, 12, 1253. [Google Scholar] [CrossRef]
  4. Yao, X.; Li, G.; Xia, J.; Ben, J.; Cao, Q.; Zhao, L.; Ma, Y.; Zhang, L.; Zhu, D. Enabling the Big Earth Observation Data via Cloud Computing and DGGS: Opportunities and Challenges. Remote Sens. 2020, 12, 62. [Google Scholar] [CrossRef]
  5. Ma, Y.; Wu, H.; Wang, L.; Huang, B.; Ranjan, R.; Zomaya, A.; Jie, W. Remote sensing big data computing: Challenges and opportunities. Future Gener. Comput. Syst. 2015, 51, 47–60. [Google Scholar] [CrossRef]
  6. Abbas, N.; Zhang, Y.; Taherkordi, A.; Skeie, T. Mobile Edge Computing: A Survey. IEEE Internet Things J. 2018, 5, 450–465. [Google Scholar] [CrossRef]
  7. Xie, R.; Tang, Q.; Wang, Q.; Liu, X.; Yu, F.R.; Huang, T. Satellite-Terrestrial Integrated Edge Computing Networks: Architecture, Challenges, and Open Issues. IEEE Netw. 2020, 34, 224–231. [Google Scholar] [CrossRef]
  8. Zhang, Z.; Zhang, W.; Tseng, F.-H. Satellite Mobile Edge Computing: Improving QoS of High-Speed Satellite-Terrestrial Networks Using Edge Computing Techniques. IEEE Netw. 2019, 33, 70–76. [Google Scholar] [CrossRef]
  9. Wang, S.; Li, Q. Satellite Computing: Vision and Challenges. IEEE Internet Things J. 2023, 10, 22514–22529. [Google Scholar] [CrossRef]
  10. Lv, W.; Yang, P.; Ding, Y.; Wang, Z.; Lin, C.; Wang, Q. Energy-Efficient and QoS-Aware Computation Offloading in GEO/LEO Hybrid Satellite Networks. Remote Sens. 2023, 15, 3299. [Google Scholar] [CrossRef]
  11. Hu, Y.; Gong, W.; Zhou, F. A Lyapunov-Optimized Dynamic Task Offloading Strategy for Satellite Edge Computing. Appl. Sci. 2023, 13, 4281. [Google Scholar] [CrossRef]
  12. Bekmezci, I.; Alagöz, F. Energy efficient, delay sensitive, fault tolerant wireless sensor network for military monitoring. Int. J. Distrib. Sens. Netw. 2009, 5, 729–747. [Google Scholar] [CrossRef]
  13. Deng, X.; Li, J.; Guan, P.; Zhang, L. Energy-Efficient UAV-Aided Target Tracking Systems Based on Edge Computing. IEEE Internet Things J. 2022, 9, 2207–2214. [Google Scholar] [CrossRef]
  14. Zhang, Y.; Chen, C.; Liu, L.; Lan, D.; Jiang, H.; Wan, S. Aerial Edge Computing on Orbit: A Task Offloading and Allocation Scheme. IEEE Trans. Netw. Sci. Eng. 2023, 10, 275–285. [Google Scholar] [CrossRef]
  15. Hu, Y.; Gong, W. An On-Orbit Task-Offloading Strategy Based on Satellite Edge Computing. Sensors 2023, 23, 4271. [Google Scholar] [CrossRef] [PubMed]
  16. Song, Z.; Hao, Y.; Liu, Y.; Sun, X. Energy-Efficient Multiaccess Edge Computing for Terrestrial-Satellite Internet of Things. IEEE Internet Things J. 2021, 8, 14202–14218. [Google Scholar] [CrossRef]
  17. Ding, C.; Wang, J.-B.; Zhang, H.; Lin, M.; Li, G.Y. Joint Optimization of Transmission and Computation Resources for Satellite and High Altitude Platform Assisted Edge Computing. IEEE Trans. Wirel. Commun. 2022, 21, 1362–1377. [Google Scholar] [CrossRef]
  18. Wang, C.; Ren, Z.; Cheng, W.; Zhang, H. CDMR: Effective Computing-Dependent Multi-Path Routing Strategies in Satellite and Terrestrial Integrated Networks. IEEE Trans. Netw. Sci. Eng. 2022, 9, 3715–3730. [Google Scholar] [CrossRef]
  19. Qiu, C.; Yao, H.; Yu, F.R.; Xu, F.; Zhao, C. Deep Q-Learning Aided Networking, Caching, and Computing Resources Allocation in Software Defined Satellite-Terrestrial Networks. IEEE Trans. Veh. Technol. 2019, 68, 5871–5883. [Google Scholar] [CrossRef]
  20. Mao, B.; Tang, F.; Kawamoto, Y.; Kato, N. Optimizing Computation Offloading in Satellite-UAV-Served 6G IoT: A Deep Learning Approach. IEEE Netw. 2021, 35, 102–108. [Google Scholar] [CrossRef]
  21. Yu, S.; Gong, X.; Shi, Q.; Wang, X.; Chen, X. EC-SAGINs: Edge-computing-enhanced space–air–ground-integrated networks for internet of vehicles. IEEE Internet Things J. 2021, 9, 5742–5754. [Google Scholar] [CrossRef]
  22. Cui, G.; Duan, P.; Xu, L.; Wang, W. Latency Optimization for Hybrid GEO–LEO Satellite Assisted IoT Networks. IEEE Internet Things J. 2023, 10, 6286–6297. [Google Scholar] [CrossRef]
  23. Zhang, H.; Liu, R.; Kaushik, A.; Gao, X. Satellite Edge Computing with Collaborative Computation Offloading: An Intelligent Deep Deterministic Policy Gradient Approach. IEEE Internet Things J. 2023, 10, 9092–9107. [Google Scholar] [CrossRef]
  24. Chen, X.; Xie, S.; Yu, L.; Fan, C. Sun Iterated Bidding-based Autonomous Mission Planning of Multiple Agile Earth Observation Satellites. In Proceedings of the 2023 35th Chinese Control and Decision Conference (CCDC), Yichang, China, 20–23 May 2023. [Google Scholar]
  25. Wang, C.; Ren, Z.; Cheng, W.; Zheng, S.; Zhang, H. Time-Expanded Graph-Based Dispersed Computing Policy for LEO Space Satellite Computing. In Proceedings of the 2021 IEEE Wireless Communications and Networking Conference (WCNC), Nanjing, China, 29 March–1 April, 2021; pp. 1–6. [Google Scholar]
  26. Ma, B.; Ren, Z.; Guo, W.; Cheng, W.; Zhang, H. Computation-Dependent Routing Based Low-Latency Decentralized Collaborative Computing Strategy for Satellite-Terrestrial Integrated Network. In Proceedings of the 2022 14th International Conference on Wireless Communications and Signal Processing (WCSP), Nanjing, China, 1–3 November 2022; pp. 1–5. [Google Scholar]
  27. Tang, Q.; Fei, Z.; Li, B.; Han, Z. Computation Offloading in LEO Satellite Networks With Hybrid Cloud and Edge Computing. IEEE Internet Things J. 2021, 8, 9164–9176. [Google Scholar] [CrossRef]
  28. Zhou, J.; Yang, Q.; Zhao, L.; Dai, H.; Xiao, F. Mobility-Aware Computation Offloading in Satellite Edge Computing Networks. IEEE Trans. Mob. Comput. 2024, 99, 1–15. [Google Scholar] [CrossRef]
  29. Liu, Y.; Wang, S.; Zhao, Q.; Du, S.; Zhou, A.; Ma, X.; Yang, F. Dependency-Aware Task Scheduling in Vehicular Edge Computing. IEEE Internet Things J. 2020, 7, 4961–4971. [Google Scholar] [CrossRef]
  30. Sthapit, S.; Lakshminarayana, S.; He, L.; Epiphaniou, G.; Maple, C. Reinforcement Learning for Security-Aware Computation Offloading in Satellite Networks. IEEE Internet Things J. 2022, 9, 12351–12363. [Google Scholar] [CrossRef]
  31. Liu, Y.; Jiang, L.; Qi, Q.; Xie, S. Energy-Efficient Space–Air–Ground Integrated Edge Computing for Internet of Remote Things: A Federated DRL Approach. IEEE Internet Things J. 2023, 10, 4845–4856. [Google Scholar] [CrossRef]
  32. Ding, C.; Wang, J.-B.; Cheng, M.; Lin, M.; Cheng, J. Dynamic Transmission and Computation Resource Optimization for Dense LEO Satellite Assisted Mobile-Edge Computing. IEEE Trans. Commun. 2023, 71, 3087–3102. [Google Scholar] [CrossRef]
  33. Zhao, W.; Meng, Q.; Chung, P.W.H. A Heuristic Distributed Task Allocation Method for Multivehicle Multitask Problems and Its Application to Search and Rescue Scenario. IEEE Trans. Cybern. 2016, 46, 902–915. [Google Scholar] [CrossRef]
  34. Turner, J.; Meng, Q.; Schaefer, G.; Whitbrook, A.; Soltoggio, A. Distributed Task Rescheduling With Time Constraints for the Optimization of Total Task Allocations in a Multirobot System. IEEE Trans. Cybern. 2018, 48, 2583–2597. [Google Scholar] [CrossRef] [PubMed]
  35. Choi, H.-L.; Brunet, L.; How, J.P. Consensus-Based Decentralized Auctions for Robust Task Allocation. IEEE Trans. Robot. 2009, 25, 912–926. [Google Scholar] [CrossRef]
  36. Ng, W.C.; Lim, W.Y.B.; Xiong, Z.; Niyato, D.; Miao, C.; Han, Z.; Kim, D.I. Stochastic Coded Offloading Scheme for Unmanned-Aerial-Vehicle-Assisted Edge Computing. IEEE Internet Things J. 2023, 10, 5626–5643. [Google Scholar] [CrossRef]
  37. Ng, W.C.; Lim, W.Y.B.; Xiong, Z.; Niyato, D.; Poor, H.V.; Shen, X.S.; Miao, C. Stochastic Resource Optimization for Wireless Powered Hybrid Coded Edge Computing Networks. IEEE Trans. Mob. Comput. 2024, 23, 2022–2038. [Google Scholar] [CrossRef]
  38. Qiao, F.; Ma, Y.; Zhou, M.; Wu, Q. A novel rescheduling method for dynamic semiconductor manufacturing systems. IEEE Trans. Syst. Man Cybern.: Syst. 2018, 50, 1679–1689. [Google Scholar] [CrossRef]
  39. Zhang, R.; Feng, Y.; Yang, Y.; Li, X. A Deadlock-Free Hybrid Estimation of Distribution Algorithm for Cooperative Multi-UAV Task Assignment With Temporally Coupled Constraints. IEEE Trans. Aerosp. Electron. Syst. 2023, 59, 3329–3344. [Google Scholar] [CrossRef]
  40. Wang, Z.; Zhang, J.; Yang, S. An improved particle swarm optimization algorithm for dynamic job shop scheduling problems with random job arrivals. Swarm Evol. Comput. 2019, 51, 100594. [Google Scholar] [CrossRef]
Figure 1. An example of the SEC scenario with delay-sensitive observation-data-processing tasks.
Figure 2. Sequential computation model.
Figure 3. Framework of PITO.
Figure 4. Example of a local optimum where task t cannot be moved from θ5 to θ7, even if I(θ7 ⊕ t) < R(θ5 ⊖ t) holds.
Figure 5. Gantt chart of offloading assignment θ.
Figure 6. Plots of all algorithms under different instances.
Figure 7. Indicator trends under different newly generated tasks using instances with size {B, 20}.
Figure 8. Indicator trends under different combinations with q = 10.
Figure 9. Indicator trends under different observation satellite numbers with constellation B and m = 9.
Figure 10. Indicator trends under different constellations with n = 20.
Table 1. Notations.
NotationDescription
mTotal number of SEC satellites.
nTotal number of observation satellites.
SSet of SEC satellites.
DSet of observation satellites.
CsComputing capacity of SEC satellite s.
MsBuffer space of SEC satellite s.
HsEnergy consumption limitation on SEC satellite s.
GNetwork topology matrix.
T = {t1, t2, …, tn}Set of tasks t.
θ = {θ1, θ2, …, θm}Task offloading solution.
F(θ)Objective value of solution θ.
R st)Removal impact, indicating the variation of Fs) after removing t from θs.
I st)Inclusion impact, indicating the minimum variation of Fs) after inserting t into θs.
Rs = [Rs1, Rs2, …, Rsn]T,
Ws = [Ws1, Ws2, …, Wsn]T,
Qs = [Qs1, Qs2, …, Qsm]T
Consensus vectors of satellite s, where Rs implies the removal impacts of tasks, Ws represents the considered offloaded satellites of tasks, and Qs indicates the latest timestamp of satellites.
ΨsPending removal tasks in satellite s.
a, μb]Influence horizon.
Us = [Us1, Us2, …, Us(n+q)]TTask classification vector of satellite s.
ΦsCandidate insertion positions of sequence θs.
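For intuition, the three consensus vectors of Table 1 can be held as one small state record per satellite. The following is a minimal Python sketch (the class and field names are ours, not from the paper; it only illustrates the shapes of Rs, Ws, and Qs):

```python
from dataclasses import dataclass, field

@dataclass
class ConsensusState:
    """Per-satellite consensus state (R_s, W_s, Q_s from Table 1); illustrative only."""
    n: int                                  # number of tasks
    m: int                                  # number of SEC satellites
    R: list = field(default_factory=list)   # removal impact of each task
    W: list = field(default_factory=list)   # satellite considered for each task (0 = none)
    Q: list = field(default_factory=list)   # latest known timestamp of each satellite

    def __post_init__(self):
        # Initialize vectors to the lengths implied by Table 1 if not supplied.
        self.R = self.R or [0.0] * self.n
        self.W = self.W or [0] * self.n
        self.Q = self.Q or [0.0] * self.m
```

For example, `ConsensusState(n=20, m=9)` would correspond to the {B, 20} instances used in Figure 7 (constellation B has m = 9 satellites).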
Table 2. Action Rules for satellite k after receiving information from satellite s.

| Value of Wst from Sending Satellite s | Value of Wkt from Receiving Satellite k | Actions Taken by k |
| s | k | if Rst < Rkt → Update |
| s | s | Update |
| s | l ∉ {s, k} | if Qsl > Qkl or Rst < Rkt → Update |
| s | 0 | Update |
| k | k | Maintain |
| k | s | Reset |
| k | l ∉ {s, k} | if Qsl > Qkl → Reset |
| k | 0 | Maintain |
| l ∉ {s, k} | k | if Qsl > Qkl and Rst < Rkt → Update |
| l ∉ {s, k} | s | if Qsl > Qkl → Update; else → Reset |
| l ∉ {s, k} | l | if Qsl > Qkl → Update |
| l ∉ {s, k} | q ∉ {s, k, l} | if Qsl > Qkl and Qsq > Qkq → Update; if Qsl > Qkl and Rst < Rkt → Update; if Qsq > Qkq and Qsl < Qkl → Reset |
| l ∉ {s, k} | 0 | if Qsl > Qkl → Update |
| 0 | k | Maintain |
| 0 | s | Update |
| 0 | l ∉ {s, k} | if Qsl > Qkl → Update |
| 0 | 0 | Maintain |
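Read as pseudocode, Table 2 is a pure decision function evaluated by the receiving satellite k for each task t. The sketch below is our own Python rendering of the table (the function name, dict-based Q vectors, and string results are illustrative assumptions, not the paper's implementation):

```python
def consensus_action(s, k, Wst, Wkt, Rst, Rkt, Qs, Qk):
    """Decide how receiver k treats (Rst, Wst) sent by s for one task t.

    Qs/Qk map satellite id -> latest known timestamp (Q vectors); Wst/Wkt are
    the satellites each side currently considers for the task (0 = none).
    Returns "update", "reset", or "maintain", following Table 2.
    """
    newer = lambda sat: Qs[sat] > Qk[sat]  # sender has fresher info about sat

    if Wst == s:                                   # sender claims the task itself
        if Wkt == k:
            return "update" if Rst < Rkt else "maintain"
        if Wkt in (s, 0):
            return "update"
        return "update" if (newer(Wkt) or Rst < Rkt) else "maintain"

    if Wst == k:                                   # sender believes receiver holds it
        if Wkt in (k, 0):
            return "maintain"
        if Wkt == s:
            return "reset"
        return "reset" if newer(Wkt) else "maintain"

    if Wst == 0:                                   # sender believes task is unassigned
        if Wkt in (k, 0):
            return "maintain"
        if Wkt == s:
            return "update"
        return "update" if newer(Wkt) else "maintain"

    l = Wst                                        # sender points to a third party l
    if Wkt == k:
        return "update" if (newer(l) and Rst < Rkt) else "maintain"
    if Wkt == s:
        return "update" if newer(l) else "reset"
    if Wkt in (l, 0):
        return "update" if newer(l) else "maintain"
    q = Wkt                                        # receiver points to a fourth party q
    if newer(l) and (Qs[q] > Qk[q] or Rst < Rkt):
        return "update"
    if Qs[q] > Qk[q] and Qs[l] < Qk[l]:
        return "reset"
    return "maintain"
```

For instance, with s = 1, k = 2, `consensus_action(1, 2, 1, 2, 0.5, 1.0, {}, {})` returns `"update"`, matching the first row of the table (the sender claims the task and offers a smaller removal impact).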
Table 3. Satellite Constellations.

| Constellation | Altitude (km) | Inclination (deg) | Planes | Satellites (m) |
| A | 5000 | 97.4 | 2 | 6 |
| B | 5000 | 53.8 | 3 | 9 |
| C | 3000 | 60 | 4 | 16 |
| D | 480 | 97.4 | 3 | 24 |
| E | 550 | 60 | 6 | 36 |
| F | 780 | 86.4 | 6 | 66 |
Table 4. Parameter Size for Each Instance Type.

| Instance Type | Constellations | Task Number | Task Density | Deadline | Combination Number |
| Small | A, B | 3, 5 | low, high | emergency, normal | 2 × 2 × 2 × 2 = 16 |
| Medium | C, D | 10, 20 | low, high | emergency, normal | 2 × 2 × 2 × 2 = 16 |
| Large | E, F | 50, 100 | low, high | emergency, normal | 2 × 2 × 2 × 2 = 16 |
Table 5. Simulation Parameters.

| Parameter | Default Value |
| Data size of tasks λi | 10~30 Mbit |
| Workload of tasks μi | 1~1.5 Kcycle/bit |
| Computing capacity of satellites Cs | 5 GHz |
| Memory space of satellites Ms | 500 Mbit |
| Energy consumption limitation of satellites Hs | 5000 J |
| Rate of LISL rLISL | 100 Mbps |
| Transmission power of RISL ε1 | 2 W |
| Transmission power of LISL ε2 | 1 W |
| Effective capacitance coefficient κ | 10^−28 |
| Weight factors α and β | 0.5 |
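To make the defaults concrete, the usual SEC cost models (transmission delay λ/r, computing delay λμ/C, transmission energy ε · λ/r, and dynamic computing energy κ C² · λμ for CPU frequency C) give the per-task figures below. This is our illustrative back-of-envelope under those standard model assumptions, not the paper's exact formulation:

```python
# Back-of-envelope per-task cost using the default simulation parameters of Table 5.
lam = 20e6          # task data size: 20 Mbit (within the 10~30 Mbit range)
mu = 1.25e3         # workload: 1.25 Kcycle/bit (within the 1~1.5 range)
C = 5e9             # computing capacity Cs: 5 GHz
r_lisl = 100e6      # LISL rate: 100 Mbps
eps_lisl = 1.0      # LISL transmission power: 1 W
kappa = 1e-28       # effective capacitance coefficient

cycles = lam * mu                  # 2.5e10 CPU cycles for the task
t_tx = lam / r_lisl                # 0.2 s to forward the task over one LISL hop
t_cmp = cycles / C                 # 5.0 s of on-board computation
e_tx = eps_lisl * t_tx             # 0.2 J transmission energy
e_cmp = kappa * C**2 * cycles      # 62.5 J dynamic computing energy

print(t_tx, t_cmp, e_tx, e_cmp)
```

Against the Hs = 5000 J budget, a satellite can absorb on the order of 80 such tasks before its energy limit binds, which is why the energy term matters at the medium and large instance sizes.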
Table 6. Comparison results for small-scale instances.

| Instance Type | CNP (aRV / aCT / aRT) | DE (aRV / aCT / aRT) | TSADMM (aRV / aCT / aRT) | PITO (aRV / aCT / aRT) |
| {A, 3, low, emergency} | 0.0380 / 12 / 0.0172 | 0.0141 / 17.59 / 0.0172 | 0.0103 / 52 / 0.7014 | 0.0014 / 8 / 0.0177 |
| {A, 3, low, normal} | 0.0427 / 12 / 0.0172 | 0 / 17.85 / 0.0187 | 0.0079 / 52 / 0.6914 | 0.0016 / 8 / 0.0169 |
| {A, 3, high, emergency} | 0.0450 / 12 / 0.0021 | 0.0196 / 16.00 / 0.0029 | 0.0155 / 52 / 0.3789 | 0.0033 / 7.75 / 0.0031 |
| {A, 3, high, normal} | 0.0497 / 12 / 0.0228 | 0.0207 / 21.44 / 0.0185 | 0.0105 / 54 / 0.5845 | 0.0028 / 9.36 / 0.0187 |
| {A, 5, low, emergency} | 0.0449 / 20 / 0.0226 | 0.0274 / 36.00 / 0.0435 | 0.0110 / 50 / 2.5463 | 0.0002 / 15 / 0.0567 |
| {A, 5, low, normal} | 0.0413 / 20 / 0.0224 | 0 / 36.00 / 0.0431 | 0.0155 / 50 / 2.5087 | 0.0002 / 15 / 0.0591 |
| {A, 5, high, emergency} | 0.0346 / 20 / 0.0186 | 0.0954 / 38.77 / 0.0438 | 0.0248 / 52 / 2.5411 | 0.0001 / 16 / 0.0565 |
| {A, 5, high, normal} | 0.0337 / 20 / 0.0192 | 0.0932 / 38.42 / 0.0434 | 0.0280 / 52 / 2.5181 | 0.0006 / 16 / 0.0590 |
| {B, 3, low, emergency} | 0.0401 / 15 / 0.0030 | 0.0287 / 18.59 / 0.0038 | 0.0239 / 84 / 1.1158 | 0.0022 / 8 / 0.0074 |
| {B, 3, low, normal} | 0.0383 / 15 / 0.0029 | 0.0250 / 18.65 / 0.0037 | 0.0232 / 84 / 1.1061 | 0.0019 / 8 / 0.0045 |
| {B, 3, high, emergency} | 0.0436 / 15 / 0.0023 | 0.0459 / 18.74 / 0.0032 | 0.0244 / 87 / 1.0348 | 0.0038 / 9 / 0.0041 |
| {B, 3, high, normal} | 0.0421 / 15 / 0.0036 | 0.0372 / 23.39 / 0.0048 | 0.0194 / 81 / 1.0032 | 0.0035 / 7 / 0.0046 |
| {B, 5, low, emergency} | 0.0430 / 25 / 0.0097 | 0.0060 / 36.59 / 0.0309 | 0.0175 / 93 / 6.8399 | 0.0011 / 12 / 0.0409 |
| {B, 5, low, normal} | 0.0433 / 25 / 0.0180 | 0.0040 / 36.76 / 0.0302 | 0.0158 / 93 / 6.9443 | 0.0010 / 12 / 0.0412 |
| {B, 5, high, emergency} | 0.0698 / 25 / 0.0191 | 0.0656 / 38.74 / 0.0345 | 0.0481 / 93 / 7.0396 | 0.0004 / 13 / 0.0350 |
| {B, 5, high, normal} | 0.0691 / 25 / 0.0237 | 0.0552 / 38.34 / 0.1103 | 0.0428 / 93 / 6.0205 | 0.0010 / 13 / 0.0269 |
Table 7. Comparison results for medium-scale instances.

| Instance Type | CNP (aRV / aCT / aRT) | DE (aRV / aCT / aRT) | TSADMM (aRV / aCT / aRT) | PITO (aRV / aCT / aRT) |
| {C, 10, low, emergency} | 0.0725 / 50 / 0.0516 | 0.0752 / 76.80 / 0.1100 | 0.0270 / 490.24 / 79.1380 | 0.0004 / 47 / 0.3098 |
| {C, 10, low, normal} | 0.0778 / 50 / 0.0351 | 0.0661 / 76.35 / 0.1249 | 0.0267 / 490.72 / 78.9001 | 0.0005 / 47 / 0.3202 |
| {C, 10, high, emergency} | 0.0486 / 50 / 0.0185 | 0.0326 / 65.00 / 0.0729 | 0.0559 / 506.08 / 82.4659 | 0.0009 / 43.28 / 0.3141 |
| {C, 10, high, normal} | 0.0481 / 50 / 0.0180 | 0.0293 / 65.00 / 0.0713 | 0.0455 / 506.64 / 81.9928 | 0.0008 / 43.78 / 0.3106 |
| {C, 20, low, emergency} | 0.0961 / 100 / 0.1143 | 0.0703 / 193.82 / 0.5209 | 0.0471 / 1317.76 / 994.6563 | 0 / 92 / 1.0020 |
| {C, 20, low, normal} | 0.0964 / 100 / 0.1112 | 0.0739 / 193.97 / 0.6528 | 0.0477 / 1317.92 / 993.6440 | 0 / 92 / 1.0063 |
| {C, 20, high, emergency} | 0.0580 / 100 / 0.0611 | 0.0909 / 186.79 / 0.6226 | 0.0388 / 1349.44 / 1022.2295 | 0 / 100 / 0.9560 |
| {C, 20, high, normal} | 0.0583 / 100 / 0.0667 | 0.0869 / 186.99 / 0.5507 | 0.0395 / 1349.76 / 1024.8514 | 0 / 100 / 1.8343 |
| {D, 10, low, emergency} | 0.0350 / 50 / 0.0165 | 0.0401 / 78.48 / 0.0681 | 0.0272 / 1592 / 307.8734 | 0.0049 / 38 / 0.4731 |
| {D, 10, low, normal} | 0.0360 / 50 / 0.0165 | 0.0452 / 78.82 / 0.0689 | 0.0274 / 1592 / 306.6067 | 0.0048 / 38 / 0.4596 |
| {D, 10, high, emergency} | 0.0403 / 50 / 0.0158 | 0.0156 / 76.37 / 0.0776 | 0.0212 / 952 / 186.0931 | 0.0011 / 37 / 0.4362 |
| {D, 10, high, normal} | 0.0407 / 50 / 0.0161 | 0.0148 / 76.63 / 0.0797 | 0.0179 / 952 / 188.0270 | 0.0011 / 37 / 0.4368 |
| {D, 20, low, emergency} | 0.0437 / 100 / 0.0388 | 0.0409 / 153.49 / 0.3190 | 0.0303 / 2424 / 1377.0521 | 0.0090 / 87 / 3.2001 |
| {D, 20, low, normal} | 0.0408 / 100 / 0.0386 | 0.0393 / 153.83 / 0.3346 | 0.0337 / 2424 / 1484.6625 | 0.0079 / 87 / 3.1638 |
| {D, 20, high, emergency} | 0.1193 / 100 / 0.0740 | 0.0704 / 165.00 / 0.4541 | 0.0393 / 2424 / 1528.1982 | 0.0036 / 81 / 3.0737 |
| {D, 20, high, normal} | 0.1063 / 100 / 0.0960 | 0.0675 / 165.00 / 0.5563 | 0.0360 / 2424 / 1514.9961 | 0.0005 / 81 / 3.2331 |
Table 8. Comparison results for large-scale instances.

| Instance Type | CNP (aRV / aCT / aRT) | DE (aRV / aCT / aRT) | TSADMM (aRV / aCT / aRT) | PITO (aRV / aCT / aRT) |
| {E, 50, low, emergency} | 0.0745 / 250 / 0.1506 | 0.0767 / 456.57 / 0.5345 | 0.0339 / 2352 / 1335.7939 | 0 / 169.59 / 5.8748 |
| {E, 50, low, normal} | 0.0767 / 250 / 0.1541 | 0.0696 / 456.18 / 0.5302 | 0.0300 / 2352 / 1335.2840 | 0 / 169.91 / 5.9403 |
| {E, 50, high, emergency} | 0.0750 / 250 / 0.1440 | 0.0667 / 460.00 / 0.5649 | 0.0327 / 1964 / 1754.5021 | 0.0002 / 203.17 / 22.9450 |
| {E, 50, high, normal} | 0.1037 / 250 / 0.1814 | 0.0965 / 545.00 / 0.8223 | 0.0387 / 1964 / 1907.0651 | 0.0002 / 217 / 32.2933 |
| {E, 100, low, emergency} | 0.2919 / 500 / 0.8558 | 0.1197 / 1153.46 / 1.9715 | 0.0713 / 3744 / 2251.2133 | 0 / 337.34 / 90.3036 |
| {E, 100, low, normal} | 0.2711 / 500 / 0.9952 | 0.1438 / 1088.29 / 2.0296 | 0.0751 / 3744 / 2362.8377 | 0 / 373.81 / 158.2195 |
| {E, 100, high, emergency} | 0.3003 / 500 / 2.1974 | 0.1355 / 1278.63 / 3.0104 | 0.0905 / 3858 / 2634.2013 | 0 / 358.68 / 147.7549 |
| {E, 100, high, normal} | 0.3068 / 500 / 2.2224 | 0.1204 / 1278.98 / 2.7709 | 0.0871 / 3924 / 2756.0978 | 0 / 358.64 / 156.2090 |
| {F, 50, low, emergency} | 0.0439 / 250 / 0.0931 | 0.0537 / 445.00 / 0.4262 | 0.0330 / 2574 / 1393.1036 | 0.0082 / 164 / 34.4894 |
| {F, 50, low, normal} | 0.0449 / 250 / 0.1209 | 0.0494 / 445.00 / 0.6396 | 0.0312 / 2574 / 1445.8109 | 0.0070 / 164 / 35.5420 |
| {F, 50, high, emergency} | 0.0625 / 250 / 0.1138 | 0.0472 / 383.92 / 0.4567 | 0.0251 / 2552 / 2783.5166 | 0.0028 / 192.31 / 37.9846 |
| {F, 50, high, normal} | 0.0550 / 250 / 0.1523 | 0.0165 / 441.85 / 0.4779 | 0.0150 / 2816 / 2609.7868 | 0.0024 / 191.38 / 60.5440 |
| {F, 100, low, emergency} | 0.1388 / 500 / 0.4792 | 0.1054 / 936.68 / 1.2663 | 0.0473 / 4092 / 3436.2467 | 0.0006 / 316.96 / 163.3933 |
| {F, 100, low, normal} | 0.0859 / 500 / 0.3493 | 0.0483 / 1008.56 / 1.3893 | 0.0305 / 4224 / 3792.8678 | 0.0006 / 389.23 / 256.1243 |
| {F, 100, high, emergency} | 0.2422 / 500 / 0.5703 | 0.0874 / 1008.98 / 1.3540 | 0.0443 / 4224 / 3851.9423 | 0 / 332 / 236.6883 |
| {F, 100, high, normal} | 0.2377 / 500 / 0.6161 | 0.0909 / 1008.29 / 1.6362 | 0.0420 / 4686 / 4013.7642 | 0 / 332 / 221.7397 |

Share and Cite

MDPI and ACS Style

Zhang, R.; Feng, Y.; Yang, Y.; Li, X.; Li, H. Dynamic Delay-Sensitive Observation-Data-Processing Task Offloading for Satellite Edge Computing: A Fully-Decentralized Approach. Remote Sens. 2024, 16, 2184. https://doi.org/10.3390/rs16122184

