Article

LSMOF-AD: Three-Stage Optimization Approach with Adaptive Differential for Large-Scale Container Scheduling

1 School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
2 Key Lab of Information Network Security, Ministry of Public Security, Shanghai 200031, China
* Authors to whom correspondence should be addressed.
Processes 2024, 12(7), 1531; https://doi.org/10.3390/pr12071531
Submission received: 27 June 2024 / Revised: 17 July 2024 / Accepted: 18 July 2024 / Published: 20 July 2024
(This article belongs to the Section Advanced Digital and Other Processes)

Abstract

Container technology has gained widespread application in cloud computing environments due to its low resource overhead and high flexibility. However, as the number of containers grows, it becomes increasingly challenging to rapidly and jointly optimize multiple container scheduling objectives while maintaining system stability and security. This paper aims to overcome these challenges and provide an optimal allocation for a large number of containers. First, a large-scale multi-objective container scheduling optimization model is constructed, which involves the task completion time, resource cost, and load balancing. Second, a novel optimization algorithm called LSMOF-AD (large-scale multi-objective optimization framework with multi-stage and adaptive differential strategies) is proposed to effectively handle large-scale container scheduling problems. The experimental results show that the proposed algorithm performs better than other advanced algorithms on multiple benchmark problems and can effectively reduce the task processing delay while achieving high resource utilization and load balancing compared to other scheduling strategies.

1. Introduction

With the widespread adoption of IoT, mobile devices, and social networks, the volume of data of all types has exploded [1,2]. Traditional computing models are no longer adequate for storing, processing, and analyzing such vast amounts of data. In this context, cloud computing has emerged as an effective solution to this exponential data growth, thanks to its unique advantages. Through containerization technology, cloud computing removes the hardware isolation between different machines [3], integrating the computing, storage, and network resources of heterogeneous machines into a massive resource pool. This makes resource management and scheduling more flexible and efficient, thereby improving overall resource utilization. Enterprises can dynamically request and release resources based on actual needs, avoiding resource wastage and idleness. As a result, container technology has gained significant popularity in cloud computing [4], with an increasing number of companies adopting containers as their infrastructure.
The key to ensuring a globally optimal configuration of resources and rapid container scheduling in a cloud computing cluster lies in the design and implementation of the task scheduling strategy [5,6], which is NP-hard [7,8,9,10,11] and can be formulated as an optimization problem from a single- or multi-objective perspective. Metaheuristic algorithms are effective at exploring large solution spaces to find near-optimal solutions and, thanks to their random and iterative mechanisms, are well suited to the changing workloads of cloud computing environments [12,13]. Researchers have proposed many well-performing swarm-based metaheuristic algorithms for single-objective task scheduling problems, such as the Cuckoo Optimization Algorithm (COA) [14], Particle Swarm Optimization (PSO) [15], the Firefly Algorithm [16], the Jellyfish Search Optimizer (JSO) [17], the Salp Swarm Algorithm [18], and the Whale Optimization Algorithm [19]. As the number of objectives increases and conflicts often arise between them, multi-objective optimization algorithms based on decomposition [20,21,22,23] or Pareto dominance [24,25,26] have subsequently been proposed to handle scheduling problems with multiple objectives in cloud computing. However, as the scale of cloud computing clusters continues to expand, the scheduling problem has gradually evolved from a multi-objective optimization problem into a large-scale multi-objective optimization problem. With the increase in decision variables, the search space grows exponentially, which leads to the following issues for the aforementioned algorithms: (1) Poor reliability: due to the vast search space of large-scale multi-objective problems, algorithms struggle to explore the entire search space effectively, often falling into local optima and losing population diversity [27]. (2) Insufficient real-time performance: as the problem scale increases, algorithms require substantial computational resources and time to find the optimal solution; their computational complexity increases significantly and fails to meet the real-time requirements of task scheduling [28].
The large-scale multi-objective evolutionary framework (LSMOF) has demonstrated superiority in handling large-scale optimization problems and can easily incorporate other evolutionary methods [29]. Based on these facts, we explore the application of the LSMOF approach to the large-scale multi-objective container scheduling problem in cloud computing. Specifically, we first design a container scheduling model considering three optimization objectives, i.e., the task completion time, resource cost, and load balancing. Then, we improve the LSMOF and map the resulting algorithm onto the proposed model. A performance comparison and analysis are conducted to validate its effectiveness. The specific contributions of our work are as follows:
(1)
To handle a practical large-scale container scheduling problem, we construct a multi-objective optimization model that can be easily solved by multi-objective optimization algorithms.
(2)
Based on the LSMOF, we design a novel optimization algorithm called LSMOF-AD by incorporating advanced optimization strategies, i.e., three-stage and adaptive differential strategies, to improve the convergence and diversity of the population.
(3)
We conduct theoretical and application experiments, demonstrating that the proposed approach achieves better performance in handling large-scale problems than other advanced algorithms and strategies.
The rest of this paper is organized as follows. Section 2 introduces the proposed multi-objective container scheduling model. Section 3 presents the proposed LSMOF-AD algorithm for large-scale multi-objective optimization problems. Section 4 demonstrates the superiority of the proposed method through theoretical and application experiments. Section 5 concludes the work and explores future directions.

2. Container Scheduling Model

This study primarily focuses on how to efficiently schedule multiple containers across various nodes. Each container can be regarded as one task to be scheduled, and each physical host can be regarded as one node. The primary objective considered is the task completion time, as it directly impacts the user experience: prolonged completion times lead to poor user satisfaction and thereby degrade the overall quality of service. Secondly, cost is a significant concern for both users and cloud service providers: users seek high-quality services without incurring excessive costs, while providers need to keep operational costs manageable while meeting user demands [29]. Thirdly, to fully utilize the resources within a cluster and ensure smooth task execution, the scheduling strategy must balance the load across the system. Based on these considerations, the multi-objective container scheduling model studied in this section includes three objectives: task completion time, resource cost, and load balancing.

2.1. Completion Time Model

Assume that there are n tasks to be scheduled onto m nodes. Each optimization objective is defined in detail as follows:
(1)
Task Transfer Time
Before execution, each task must be transferred to its assigned node. The task transfer time is represented by the matrix TT:
$$TT = \begin{pmatrix} tt_{11} & tt_{12} & \cdots & tt_{1m} \\ tt_{21} & tt_{22} & \cdots & tt_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ tt_{n1} & tt_{n2} & \cdots & tt_{nm} \end{pmatrix}$$
$$tt_{ij} = \frac{t_{i\_length}}{c_{j\_bw}}$$
where tt_ij represents the time required for task i to be transferred to node j, t_i_length is the data length that task i needs to transmit, and c_j_bw is the bandwidth of node j.
(2)
Task Execution Time
A task requires a certain amount of time to be executed on the assigned node. The execution time of the task is represented by the matrix TE:
$$TE = \begin{pmatrix} te_{11} & te_{12} & \cdots & te_{1m} \\ te_{21} & te_{22} & \cdots & te_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ te_{n1} & te_{n2} & \cdots & te_{nm} \end{pmatrix}$$
$$te_{ij} = \frac{t_{i\_data}}{c_{j\_mip}}$$
where te_ij represents the time required for task i to execute on node j, t_i_data is the amount of data task i needs to process, and c_j_mip is the processing capability of node j.
(3)
Task Completion Time
Combining the definitions above, the completion time of a single task is the sum of its transfer time and execution time, which is expressed as
$$TaskCompletionTime_{ij} = tt_{ij} + te_{ij}$$
The completion time of a node is the time point at which all the tasks assigned to that node are completed, so we obtain
$$NodeCompletionTime_j = \max_{i} \left( TaskCompletionTime_{ij} \right)$$
where i ∈ [1, n] and j ∈ [1, m]. The total completion time of all tasks is the maximum completion time among all nodes, giving
$$Time\_Total = \max_{j} \left( NodeCompletionTime_j \right)$$
where j ∈ [1, m], and minimizing Time_Total is one of the optimization objectives of the scheduling model.
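To make the completion-time objective concrete, the following minimal Go sketch (hypothetical types and field names, not the authors' implementation) evaluates Time_Total for a candidate assignment, where assign[i] gives the node index of task i:
```go
package objectives

// Task and Node carry the quantities of Section 2.1; the field names are
// illustrative stand-ins for t_i_length, t_i_data, c_j_bw and c_j_mip.
type Task struct {
	Length float64 // data length to transmit (t_i_length)
	Data   float64 // data amount to process (t_i_data)
}

type Node struct {
	BW  float64 // bandwidth (c_j_bw)
	MIP float64 // processing capability (c_j_mip)
}

// TimeTotal computes the first objective: for each node, the completion time is
// the maximum of tt_ij + te_ij over its assigned tasks, and Time_Total is the
// maximum completion time over all nodes.
func TimeTotal(tasks []Task, nodes []Node, assign []int) float64 {
	nodeTime := make([]float64, len(nodes))
	for i, t := range tasks {
		j := assign[i]
		tt := t.Length / nodes[j].BW // transfer time tt_ij
		te := t.Data / nodes[j].MIP  // execution time te_ij
		if c := tt + te; c > nodeTime[j] {
			nodeTime[j] = c
		}
	}
	total := 0.0
	for _, v := range nodeTime {
		if v > total {
			total = v
		}
	}
	return total
}
```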

2.2. Resource Cost Model

When tasks are scheduled, they occupy various resources of the nodes. These resources mainly include the CPU, bandwidth, and memory, and their usage directly affects the cost and efficiency of the task processing. To simplify the calculations, this paper focuses on these three main resources required for task processing.
(1)
CPU Cost
The CPU cost of a task allocated to a node is related to the computational efficiency of the individual container. The expression is as follows:
$$CostCpu_{ij} = te_{ij} \cdot n_{j\_mip} \cdot n_{j\_mips\_pcost}$$
where CostCpu_ij is the CPU cost required for task i to be processed on node j, te_ij is the execution time of task i on node j, n_j_mip is the processing capability of node j, and n_j_mips_pcost is the unit performance cost of node j.
(2)
Bandwidth Cost
The bandwidth cost of a task allocated to a node is related to the bandwidth capacity of the node and can be expressed as
$$CostBw_{ij} = tt_{ij} \cdot n_{j\_bw} \cdot n_{j\_bw\_pcost}$$
where CostBw_ij is the bandwidth cost required for task i to be transferred to node j, tt_ij is the transfer time of task i to node j, n_j_bw is the bandwidth capacity of node j, and n_j_bw_pcost is the unit bandwidth cost of node j.
(3)
Memory Cost
The memory cost of a task allocated to a node is related to the task processing time, and we obtain
$$CostRam_{ij} = te_{ij} \cdot n_{j\_ram} \cdot n_{j\_ram\_pcost}$$
where CostRam_ij is the memory cost required for task i to be processed on node j, te_ij is the execution time of task i on node j, n_j_ram is the memory capacity of node j, and n_j_ram_pcost is the unit memory cost of node j.
We assume that the above three costs account for equal proportions of the total cost in the general case, so the total resource cost for task i allocated to node j is expressed as
$$Resource\_Cost_{i,j} = CostCpu_{ij} + CostBw_{ij} + CostRam_{ij}$$
and the total resource cost required for processing all the tasks is expressed as
$$Resource\_Cost\_Total = \sum_{i=1}^{n} Resource\_Cost_{i,j}$$
where j denotes the node to which task i is assigned, and minimizing Resource_Cost_Total is one of the optimization objectives of the scheduling model.
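As a sketch of the cost objective (hypothetical field names, not the authors' code), the function below sums CostCpu_ij, CostBw_ij, and CostRam_ij given the transfer and execution times computed as in Section 2.1; summing it over all tasks of a schedule yields Resource_Cost_Total:
```go
package objectives

// CostNode holds the capacities and unit prices used in Section 2.2; the field
// names stand in for n_j_mip, n_j_bw, n_j_ram and their *_pcost counterparts.
type CostNode struct {
	MIP, BW, RAM             float64 // processing capability, bandwidth, memory
	MIPCost, BWCost, RAMCost float64 // unit performance, bandwidth, memory costs
}

// ResourceCost returns CostCpu_ij + CostBw_ij + CostRam_ij for a task with
// transfer time tt and execution time te on node n.
func ResourceCost(tt, te float64, n CostNode) float64 {
	cpu := te * n.MIP * n.MIPCost // CPU cost
	bw := tt * n.BW * n.BWCost    // bandwidth cost
	ram := te * n.RAM * n.RAMCost // memory cost
	return cpu + bw + ram
}
```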

2.3. Load Balancing Model

In the container scheduling model, load balancing refers to the even distribution of workloads or tasks across various physical nodes to achieve a rational utilization of system resources and improve the overall system performance. The objective is to ensure that each node receives a reasonable workload, avoiding scenarios where some nodes are overloaded while others remain idle.
Due to differences in computational performance, memory size, storage rate, and bandwidth between physical nodes, it is essential to distribute the container load evenly across all the nodes so that container applications run efficiently and stably and make full use of the available resources. This paper uses the weighted sum of the CPU and memory utilization as the load balancing criterion. The load balancing degree of physical node i, denoted LB_i, reflects the balance of task allocation across the hosts and is calculated as
$$LB_i = \alpha \cdot CpuUsed_i + \beta \cdot RamUsed_i$$
where CpuUsed_i represents the CPU utilization of node i, RamUsed_i represents the memory utilization of node i, and α and β are the weight coefficients for the CPU and memory, respectively, with α + β = 1. The values of α and β can be adjusted to practical requirements; specifically, if α is larger than β, more emphasis is placed on reducing the CPU usage of the node when balancing the load. In our work, we simply set α = β = 0.5 for the general case.
The overall load balancing degree LB of all physical nodes is the average of the per-node load balancing degrees and is calculated as
$$LB = \frac{\sum_{i=1}^{m} LB_i}{m} = \frac{\sum_{i=1}^{m} \left( \alpha \cdot CpuUsed_i + \beta \cdot RamUsed_i \right)}{m}$$
where m is the total number of physical nodes, and minimizing LB is one of the optimization objectives of the scheduling model.
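A minimal sketch of the third objective follows (utilization values assumed to be ratios in [0, 1]); with α = β = 0.5, as set above, it reproduces the weighted average defining LB:
```go
package objectives

// LoadBalance computes the overall load balancing degree LB of Section 2.3:
// the average over all m nodes of alpha*CpuUsed_i + beta*RamUsed_i, with
// alpha + beta = 1.
func LoadBalance(cpuUsed, ramUsed []float64, alpha, beta float64) float64 {
	sum := 0.0
	for i := range cpuUsed {
		sum += alpha*cpuUsed[i] + beta*ramUsed[i]
	}
	return sum / float64(len(cpuUsed))
}
```
A scheduler evaluating a candidate assignment would treat TimeTotal, the summed ResourceCost, and LoadBalance as the three objective values to be minimized.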

3. Algorithm

3.1. Large-Scale Multi-Objective Framework (LSMOF)

For large-scale container clusters, scheduling efficiency is a key metric, and it is not advisable to spend a significant amount of time pursuing minor performance improvements in container scheduling. The LSMOF is a large-scale multi-objective evolutionary framework designed to accelerate algorithmic efficiency; its main idea is to track Pareto optimal solutions directly through problem reformulation. Algorithm 1 presents the complete algorithmic flow of the LSMOF. First, an initial population P is generated by random initialization in the decision space. The framework then proceeds in two stages: in the first stage, it reconstructs the decision space and redefines the original LSMOP Z as a low-dimensional single-objective problem (SOP), denoted Z′, which is optimized with a single-objective optimizer. After a predetermined number of evaluations, it enters the second stage and uses an embedded MOEA to optimize the original LSMOP, yielding the final population. The LSMOF simply splits the function evaluations (FEs) between the two stages at a threshold of 0.5, as the parameter experiments indicate that the first stage converges so fast that it does not need many FEs [29].
Algorithm 1 The main framework of the LSMOF
Input: Z (original LSMOP), FEmax (total FEs), Alg (embedded MOEA), N (population size for Alg), r (number of reference solutions), tr (threshold).
Output: P (final population).
1: P ← Initialization(N, Z)
2: while t ≤ tr × FEmax do
3:   Z′ ← Problem_Reformulation(P, r, Z)
4:   A, Δt ← Single_Objective_Optimization(Z′)
5:   P ← Environmental_Selection(A ∪ P, N)
6:   t ← t + Δt
7: end while
8: P ← Embedded_MOEA(P, N, Alg, Z)
Although the LSMOF is effective on large-scale optimization problems, there is still room for improvement. First, it employs differential evolution (DE) in its single-objective optimization phase. DE is a straightforward global optimization algorithm that relies primarily on global search and is therefore relatively weak in local search; on complex optimization problems, its convergence speed and precision can decrease significantly. Furthermore, the performance of DE is heavily influenced by its control parameters, such as the population size, crossover factor, and mutation factor, and inappropriate settings can severely restrict the algorithm's performance. Additionally, the LSMOF employs a two-stage strategy that optimizes convergence and diversity separately, which can leave many solutions trapped in local optima. To address these issues, an improved algorithm, LSMOF-AD, is proposed, which introduces a three-stage optimization strategy and incorporates an adaptive differential evolution (ADE) strategy to enhance its global search capability.

3.2. Proposed LSMOF-Based Improved Algorithm (LSMOF-AD)

3.2.1. The Framework of LSMOF-AD

Based on the LSMOF, LSMOF-AD introduces the three-stage and ADE strategies; its overall process is shown in Figure 1. The first stage focuses on optimizing the convergence of the population: through the problem reformulation strategy [29], the original large-scale multi-objective problem is redefined as a single-objective problem over a small number of weight variables, and ADE is applied, dynamically adjusting the values of CR and F to preserve the diversity of the population. In the second stage, the MOEA/D-AD algorithm [30] with ADE is used to optimize the original problem and enhance the diversity of the population through evenly distributed weight vectors. In the third stage, an iterative mechanism optimizes the convergence and diversity of the population simultaneously, ensuring a balance between the two. The overall pseudocode is shown in Algorithm 2.
Algorithm 2 LSMOF-AD algorithm
Input: Z (original LSMOP), FEmax (total FEs), N (population size), r (number of reference solutions), tr1, tr2 (thresholds).
Output: P (final population).
1: P ← Initialization(N)
2: flag ← 1; t ← 0
3: while termination criterion is not fulfilled do
4:   /*********** First Stage ***********/
5:   if flag == 1 then
6:     Z′ ← Problem_Reformulation(P, r, Z)
7:     A, Δt ← ADE(Z′)
8:     P ← Environmental_Selection(A ∪ P, N)
9:     t ← t + Δt
10:    if t > tr1 × FEmax then flag ← 2 end if
11:  /*********** Second Stage ***********/
12:  else if flag == 2 then
13:    P, Δt ← MOEA/D-AD(P)
14:    t ← t + Δt
15:    if t > tr2 × FEmax then flag ← 3 end if
16:  /*********** Third Stage ***********/
17:  else if flag == 3 then
18:    Z′ ← Problem_Reformulation(P, r, Z)
19:    A, Δt1 ← ADE(Z′)
20:    P ← Environmental_Selection(A ∪ P, N)
21:    P, Δt2 ← MOEA/D-AD(P)
22:    t ← t + Δt1 + Δt2
23:  end if
24: end while
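To show only the control flow of Algorithm 2, the Go skeleton below (a structural sketch with assumed placeholder procedures, not the authors' implementation) switches between the three stages with a flag and an evaluation counter:
```go
package lsmofad

type Individual []float64
type Population []Individual

// Placeholder procedures standing in for the steps named in Algorithm 2; each
// optimizer is assumed to report how many function evaluations it consumed
// (at least one per call, so the loop below terminates).
func Reformulate(p Population) Population    { return p }
func ADE(z Population) (Population, int)     { return z, len(z) }
func MOEADAD(p Population) (Population, int) { return p, len(p) }
func EnvironmentalSelection(p Population, n int) Population {
	if len(p) > n {
		return p[:n]
	}
	return p
}

// Run mirrors the three-stage loop of Algorithm 2: stage 1 pushes convergence
// on the reformulated problem with ADE, stage 2 pushes diversity with
// MOEA/D-AD, and stage 3 alternates both until the FE budget is exhausted.
func Run(pop Population, n, feMax int, tr1, tr2 float64) Population {
	flag, t := 1, 0
	for t < feMax {
		switch flag {
		case 1:
			arch, dt := ADE(Reformulate(pop))
			pop = EnvironmentalSelection(append(arch, pop...), n)
			t += dt
			if float64(t) > tr1*float64(feMax) {
				flag = 2
			}
		case 2:
			var dt int
			pop, dt = MOEADAD(pop)
			t += dt
			if float64(t) > tr2*float64(feMax) {
				flag = 3
			}
		default:
			arch, dt1 := ADE(Reformulate(pop))
			pop = EnvironmentalSelection(append(arch, pop...), n)
			var dt2 int
			pop, dt2 = MOEADAD(pop)
			t += dt1 + dt2
		}
	}
	return pop
}
```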

3.2.2. Adaptive Differential Strategy

The mutation factor F mainly regulates the search step size of the differential evolution algorithm, influencing the population's diversity and convergence, while the crossover probability factor CR determines the probability that randomly selected elements participate in the crossover process, thereby balancing local and global search [30]. In traditional differential evolution algorithms [31], CR and F are typically set to fixed values, which has several limitations. First, fixed values of CR and F may not suit different optimization problems. Second, fixed parameter values may fail to balance the algorithm's exploration and exploitation capabilities, potentially causing premature convergence to local optima [32,33,34]. Third, fixed parameter values cannot respond to dynamic changes during the algorithm's execution, leading to a decline in performance. Therefore, ADE is proposed in this paper to dynamically adjust the values of CR and F, so as to improve the search efficiency of the algorithm and the diversity of the population.
In ADE, the population evolution process includes three stages: mutation, crossover, and selection, incorporating the generation- and individual-related parameters F_{i,G} and CR_{i,G}. Let the population P_G be represented as a set of real-valued parameter vectors {X_{i,G} = (X_{1,i,G}, X_{2,i,G}, ..., X_{D,i,G}) | i = 1, 2, ..., NP}, where NP is the population size, D is the number of decision variables, and G is the generation to which the population belongs.
(1)
Mutation
This operation is based on the current parent population {X_{i,G} | i = 1, 2, ..., NP}. For each individual X_{i,G}, a mutant vector V_{i,G} is generated using the DE/rand/1 strategy:
$$V_{i,G} = X_{i_1,G} + F_{i,G} \cdot \left( X_{i_2,G} - X_{i_3,G} \right)$$
where i_1, i_2, i_3 are mutually distinct indices randomly selected from {1, 2, ..., NP}, G denotes the current generation, and the mutation factor F_{i,G} is used to generate the mutant vector of individual i in the G-th generation.
(2)
Crossover
After generating the mutant vector V_{i,G}, it is crossed with the parent vector X_{i,G} to produce the trial vector U_{i,G}. The crossover operator is calculated as
$$U_{i,j,G} = \begin{cases} V_{i,j,G} & \text{if } rand(0,1) \le CR_{i,G} \text{ or } j = j_{rand} \\ X_{i,j,G} & \text{otherwise} \end{cases}$$
where rand(a, b) is a uniformly distributed random number in the interval [a, b], j_rand = randint(1, D) is an integer randomly selected from 1 to D, and the crossover factor CR_{i,G} ∈ [0, 1] is used to generate the trial vector of individual i in the G-th generation.
(3)
Selection
The selection operation keeps the better of the parent vector X_{i,G} and the trial vector U_{i,G} according to their fitness values:
$$X_{i,G+1} = \begin{cases} U_{i,G} & \text{if } f(U_{i,G}) < f(X_{i,G}) \\ X_{i,G} & \text{otherwise} \end{cases}$$
Algorithm 3 presents the detailed process of ADE. As recommended in [35], the initial values of the mutation factor F and the crossover factor CR are set to 1 and 0.5, respectively, which matches the DE settings adopted in the LSMOF and isolates the effect of the adaptive adjustments. The algorithm then explores different individuals through mutation and crossover to search for the minimum. If the fitness of the trial vector U_{i,G} is lower than that of the parent vector X_{i,G}, the search direction is considered correct and the control parameters are left unchanged. Otherwise, the control parameters are adjusted according to Equations (18) and (19), which explore local minima within the solution space by reducing the step size. For the next generation, CR and F are reset from the maximum and minimum objective values of the parent population, as shown in Equations (20) and (21). This helps eliminate stagnation, i.e., the situation in which no trial vector can replace any individual in the current population and the evolution stalls.
Algorithm 3 Adaptive differential evolution algorithm
Input: P (population)
Output: Final solution set
1: Set generation G = 0
2: Initialize CR = 0.5, F = 1
3: while termination criterion is not fulfilled do
4:   for each individual X_{i,G} in P do
5:     Generate a mutant vector V_{i,G} according to Equation (15)
6:     Generate a trial vector U_{i,G} according to Equation (16)
7:     Select the better individual according to Equation (17)
8:     Generate F_{i+1,G}, CR_{i+1,G} for the next individual according to Equations (18) and (19)
9:   end for
10:  Generate F_{G+1}, CR_{G+1} for the next generation population according to Equations (20) and (21)
11: end while
The adjustment equations for CR and F are introduced below; they dynamically adjust CR and F based on the previous individual during one pass over the population:
$$F_{i+1,G} = F_{i,G} \cdot rand(0,1) \cdot \frac{f(U_{i,G}) - f(X_{i,G})}{f(U_{i,G})}$$
$$CR_{i+1,G} = CR_{i,G} \cdot rand(0,1) \cdot \frac{f(U_{i,G}) - f(X_{i,G})}{f(U_{i,G})}$$
where F_{i,G} and CR_{i,G} are the mutation and crossover factors of the i-th individual in the G-th generation, and F_{i+1,G} and CR_{i+1,G} are those of the next individual in the current population. After traversing the entire population, CR and F are further adjusted as
$$F_{G+1} = rand(0,1) \cdot \frac{max - min}{max}$$
$$CR_{G+1} = rand(0,1) \cdot \frac{max - min}{max}$$
where max and min denote the maximum and minimum fitness values of the individuals in the G-th generation, and F_{G+1} and CR_{G+1} are the corresponding factors for the first individual of the next generation.
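The sketch below is a minimal, hedged Go rendering of Algorithm 3 for a generic single-objective function f to be minimized; the updates of F and CR follow the multiplicative form reconstructed in Equations (18)-(21) above, and the helper is hypothetical, not the authors' code:
```go
package ade

import "math/rand"

// ADE runs a minimal adaptive differential evolution loop following Algorithm 3:
// DE/rand/1 mutation (Eq. (15)), binomial crossover (Eq. (16)), greedy selection
// (Eq. (17)), and the adaptation of F and CR assumed from Eqs. (18)-(21).
// The population is assumed to contain at least four individuals.
func ADE(pop [][]float64, f func([]float64) float64, generations int) [][]float64 {
	np, d := len(pop), len(pop[0])
	F, CR := 1.0, 0.5 // initial values, as in LSMOF-AD
	for g := 0; g < generations; g++ {
		fit := make([]float64, np)
		for i := range pop {
			fit[i] = f(pop[i])
		}
		for i := 0; i < np; i++ {
			// Pick three mutually distinct indices, all different from i.
			i1, i2, i3 := rand.Intn(np), rand.Intn(np), rand.Intn(np)
			for i1 == i || i2 == i || i3 == i || i1 == i2 || i1 == i3 || i2 == i3 {
				i1, i2, i3 = rand.Intn(np), rand.Intn(np), rand.Intn(np)
			}
			// Trial vector: mutant gene with probability CR (or at j_rand), parent gene otherwise.
			trial := make([]float64, d)
			jr := rand.Intn(d)
			for j := 0; j < d; j++ {
				if rand.Float64() <= CR || j == jr {
					trial[j] = pop[i1][j] + F*(pop[i2][j]-pop[i3][j])
				} else {
					trial[j] = pop[i][j]
				}
			}
			fu, fx := f(trial), fit[i]
			if fu < fx {
				pop[i], fit[i] = trial, fu // greedy selection keeps the better vector
			} else if fu != 0 {
				// Wrong search direction: shrink the step for the next individual
				// (assumed multiplicative form of Eqs. (18) and (19)).
				scale := rand.Float64() * (fu - fx) / fu
				F *= scale
				CR *= scale
			}
		}
		// Generation-level reset from the fitness spread (assumed form of Eqs. (20) and (21)).
		maxF, minF := fit[0], fit[0]
		for _, v := range fit {
			if v > maxF {
				maxF = v
			}
			if v < minF {
				minF = v
			}
		}
		if maxF != 0 {
			F = rand.Float64() * (maxF - minF) / maxF
			CR = rand.Float64() * (maxF - minF) / maxF
		}
	}
	return pop
}
```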

4. Experiment

4.1. Theoretical Experiment

4.1.1. Experimental Setup

The experiments are conducted on the open-source evolutionary multi-objective optimization platform PlatEMO [36], which provides many popular optimization algorithms and various visualization functions. In this section, three sets of experiments are conducted and analyzed.
The first set selects MOEA/DVA [37], WOF [38], RG-DRA-MMOPSO [39], and LMOCSO [40] as comparison algorithms for the proposed LSMOF-AD on the LSMOP test suites. The second set designs LSMOF-based variants, i.e., LSMOF and LSMOF-Three-Stage, for ablation comparisons with LSMOF-AD on the LSMOP test suites, to assess the contribution of the proposed three-stage and ADE strategies. The third set selects MOEA/DVA and LSMOF as comparison algorithms and plots the distribution of the final non-dominated solutions on the special test instance DTLZ7, which has a set of discontinuous Pareto fronts.
The population size for all test suites is set to 100, and the maximum number of iterations adopted as the termination criterion is set to 1500 × D in all experiment sets. In MOEA/DVA and LSMOF, the control parameters of the differential evolution operator are set to CR = 0.5 and F = 1, which are also the initial values of the designed ADE. The genetic operators used are random selection, simulated binary crossover, and polynomial mutation. For fair comparisons, the parameters of the comparison algorithms are set according to the best settings reported in their original papers. All the experiments are independently repeated 30 times, and the reported results are averages.
The IGD (Inverted Generational Distance) [41] metric is used as the evaluation indicator on the LSMOP test suites. Moreover, the Wilcoxon signed-rank test [42] is conducted to compare the results achieved by LSMOF-AD and the other compared algorithms at a significance level of α = 0.05. The symbols "+", "−", and "≈" indicate whether a comparison algorithm is significantly better than, significantly worse than, or not significantly different from LSMOF-AD, respectively, and the best result in each row of the tables is highlighted with a gray background.
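For reference, a minimal sketch of the IGD computation is given below (plain Go, not PlatEMO's implementation): IGD averages, over points sampled from the true Pareto front, the Euclidean distance to the nearest obtained solution, so lower values are better.
```go
package metrics

import "math"

// IGD computes the Inverted Generational Distance: the mean, over a set of
// reference points sampled from the true Pareto front, of the Euclidean
// distance to the closest obtained solution.
func IGD(reference, obtained [][]float64) float64 {
	sum := 0.0
	for _, r := range reference {
		best := math.Inf(1)
		for _, o := range obtained {
			d := 0.0
			for k := range r {
				diff := r[k] - o[k]
				d += diff * diff
			}
			d = math.Sqrt(d)
			if d < best {
				best = d
			}
		}
		sum += best
	}
	return sum / float64(len(reference))
}
```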

4.1.2. Experimental Analysis and Results

(1)
Comparison with other advanced algorithms
The comparisons of LSMOF-AD with the other four advanced algorithms are conducted on LSMOPs with 2 and 3 objectives, with the number of decision variables set to 200, 500, and 1000. Table 1 shows the comparison results. The proposed LSMOF-AD clearly exhibits a better overall performance than the four compared algorithms: LSMOF-AD obtains the best result in 29 of the 54 cases, MOEA/DVA in 2, WOF in 4, RG-DRA-MMOPSO in 11, and LMOCSO in 4, with 4 cases having approximately equal IGD values. Notably, MOEA/DVA achieves IGD values far from the Pareto set on LSMOP6 and the bi-objective LSMOP7, likely because a failure in its decision variable analysis leads to significant performance degradation. RG-DRA-MMOPSO handles the decomposition of decision variables better, especially for LSMOP6, but performs worse on LSMOP9, which has a complex disconnected Pareto front. LSMOF-AD adaptively changes the direction of its differential evolution through its iterative mechanism and exhibits superior convergence on most large-scale problems compared to the other advanced algorithms.
(2)
Ablation comparison experiments.
To validate the effectiveness of the three-stage and ADE strategies, we compare LSMOF-AD, LSMOF-Three-Stage, and LSMOF on the LSMOP test suites. LSMOF-Three-Stage is obtained by applying the three-stage strategy described in Section 3.2 to the LSMOF algorithm, employing NSGA-II as the embedded algorithm instead of MOEA/D-AD. The comparison is again conducted on LSMOPs with 2 and 3 objectives, with the number of decision variables set to 100, 300, and 600. As shown in Table 2, LSMOF-AD performs best on 32 of the 54 test instances, and LSMOF-Three-Stage performs better than LSMOF in most cases. Specifically, LSMOF-Three-Stage improves on LSMOF for LSMOP1, LSMOP2, LSMOP4, and LSMOP7, is approximately equal on LSMOP3, LSMOP5, LSMOP8, and LSMOP9, and is slightly worse on LSMOP6, which indicates the advantage of the iterative mechanism in the third stage. Furthermore, LSMOF-AD performs better than LSMOF-Three-Stage in most cases, especially on the multimodal instances LSMOP3 and LSMOP7. For the separable LSMOP1 and LSMOP6, the proposed algorithm also shows notable improvements, which indicates that ADE can effectively maintain population diversity and prevent individuals from getting trapped in local optima.
(3)
Comparison on the special test instance DTLZ7.
DTLZ7 features a set of discontinuous Pareto fronts, which tests an algorithm's ability to approximate multiple Pareto fronts and maintain the solution distribution. The distributions of the final non-dominated solutions obtained by MOEA/DVA, LSMOF, and LSMOF-AD on DTLZ7 are examined with the number of decision variables set to 1000.
As shown in Figure 2 and Figure 3, MOEA/DVA shows poor convergence in DTLZ7 with 2 and 3 objectives, due to its high computational resource requirements for decision variable analysis. The LSMOF algorithm displays poor diversity in this instance, which shows that LSMOF tends to get trapped in local optima due to its lack of emphasis on diversity. However, LSMOF-AD fits the Pareto front perfectly for the 2-objective case and shows a distribution across four regions in the 3-objective case. These results validate the effectiveness of the phased strategy and ADE of LSMOF-AD in jointly optimizing the population’s diversity and convergence.

4.2. Application Experiment

In the container task scheduling experiment, one scheduling solution represents one mapping relationship between tasks and nodes, which can be encoded as a 0/1 two-dimensional matrix TN. TN(i, j) = 1 represents that task T_i is assigned to node N_j for execution, while TN(i, j) = 0 represents the opposite. For example, for eight tasks assigned to five nodes as shown in Table 3, the corresponding two-dimensional matrix TN can be encoded as follows:
$$TN = \begin{pmatrix} 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}$$
where tasks 4 and 6 are assigned to node 1 for execution. Crossover and mutation operators of a genetic algorithm can be easily applied to such encodings to generate new offspring (a small encoding sketch is given after the list of objectives below). As discussed in Section 2, three pivotal objectives are considered in this paper:
  • Task Completion Time: the total completion time of tasks is the maximum completion time among all nodes.
  • Resource Cost: the total cost required for processing all tasks.
  • Load Balancing: the overall load balancing is calculated by the weighted sum of CPU and memory utilization.
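As an illustrative sketch of this encoding (hypothetical helpers, not the authors' code), the functions below convert between the 0/1 matrix TN and the flat assignment vector on which crossover and mutation operate:
```go
package encoding

// ToMatrix expands an assignment vector (assign[i] = node index of task i)
// into the 0/1 matrix TN of this section, where TN[i][j] = 1 iff task i is
// placed on node j; m is the number of nodes.
func ToMatrix(assign []int, m int) [][]int {
	tn := make([][]int, len(assign))
	for i, j := range assign {
		tn[i] = make([]int, m)
		tn[i][j] = 1
	}
	return tn
}

// ToAssignment recovers the assignment vector from TN; each row is expected
// to contain exactly one 1, since every task is placed on exactly one node.
func ToAssignment(tn [][]int) []int {
	assign := make([]int, len(tn))
	for i, row := range tn {
		for j, v := range row {
			if v == 1 {
				assign[i] = j
			}
		}
	}
	return assign
}
```
For the example matrix above, ToAssignment returns the 0-based vector [3 1 2 0 1 0 4 4], i.e., tasks 4 and 6 are placed on node 1.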

4.2.1. Experimental Setup

To validate the effectiveness of LSMOF-AD for container scheduling in terms of completion time, resource cost, and load balancing, a custom scheduler implementing the LSMOF-AD algorithm is developed in Go on a server with a 3.2 GHz CPU and 32 GB of memory. The custom scheduler is invoked by modifying the relevant configuration files of kube-scheduler [43]. The LSMOF-AD scheduling strategy is compared with the default Predicate-Priority strategy and the classic Random scheduling strategy.
The container cluster is built from eight Dell PowerEdge R830 servers; the specific configuration of each server is shown in Table 4. All servers run CentOS 7 as the operating system, with Docker version docker-ce-18.06.3.ce-3.e17 and Kubernetes v1.20.8 installed. One server serves as the control node, while the rest serve as worker nodes. Due to resource constraints, the taint on the control node is removed to allow task execution on it as well.

4.2.2. Experimental Analysis and Results

In this section, the LSMOF-AD strategy and comparative strategies are applied to the large-scale multi-objective container scheduling optimization model constructed in Section 2. Simulation experiments are conducted in the container cluster using completion time, resource cost, and load balancing as scheduling metrics. The experiments are divided into three parts: the first part compares different container scheduling strategies in terms of total completion time; the second part compares different container scheduling strategies in terms of resource costs; and the third part compares different scheduling strategies in terms of load balancing.
(1)
Total Task Completion Time
Task completion time consists of transmission time and execution time. By setting different numbers of tasks, the sum of the transmission and execution times of all tasks under each strategy is compared. Figure 4 shows the total transmission time of all tasks under the different scheduling strategies, and Figure 5 shows the total execution time. As the number of tasks increases, the transmission and execution times grow more slowly under the LSMOF-AD strategy than under the two comparative algorithms, validating that LSMOF-AD can better leverage its advantages when dealing with a large number of tasks.
(2)
Resource Cost
In this section, particular emphasis is placed on evaluating the resource cost of the proposed scheduling strategy across different task quantities and against the comparative strategies. As shown in Figure 6, the resource costs of the Random and Predicate-Priority strategies consistently exceed those of the LSMOF-AD strategy across all task quantities. When the number of tasks is small, the resource costs of the three strategies are relatively close; however, as the number of tasks increases, the resource cost of the LSMOF-AD strategy grows significantly more slowly than that of the other two strategies. For instance, at 1000 tasks, the resource cost of the Random strategy is nearly 1.5 times that of the LSMOF-AD strategy, and the advantage of LSMOF-AD becomes more pronounced as the task quantity continues to increase. This is primarily because the proposed strategy is tailored to scheduling large numbers of tasks, effectively mitigating the sharp increase in resource cost associated with rapid task growth.
(3)
Load Balancing
To further investigate the performance of the LSMOF-AD strategy in cluster load balancing, 5 worker nodes and 1000 tasks are used for a comparison with the Predicate-Priority scheduling strategy. As shown in Figure 7, LSMOF-AD exhibits significantly better load balancing than Predicate-Priority. Several factors contribute to this: LSMOF-AD performs better at solving the large-scale multi-objective container scheduling problem, whereas the Predicate-Priority strategy only considers the remaining CPU and memory of the nodes; in addition, Predicate-Priority also accounts for the affinity between tasks and nodes, which may further reduce its load balancing performance.
In summary, based on the proposed optimization model, the LSMOF-AD algorithm performs well as the number of tasks increases. It effectively addresses both the task completion time and resource costs, achieving optimal values for both objectives, while also ensuring cluster load balancing.

5. Conclusions

This paper proposes a multi-objective optimization model for the large-scale container scheduling problem in cloud computing clusters and designs a novel optimization algorithm, LSMOF-AD, with advanced three-stage and adaptive differential strategies to address it. The effectiveness of LSMOF-AD is validated through theoretical experiments on different test suites, as well as application experiments with a customized scheduler. Despite the improvements achieved by LSMOF-AD in the performance metrics, there are some limitations and areas for further optimization:
(1)
The algorithm proposed in this paper may still get stuck in local optima as the iteration count increases significantly, and it may be time-consuming due to the three-stage iteration strategy. Future research could explore combining our algorithm with other advanced intelligent optimization algorithms, such as the Multi-Verse Optimizer [44] or Simulated Annealing [45], to enhance performance, increase the chances of escaping local optima, and further reduce time complexity.
(2)
In the container scheduling algorithm LSMOF-AD proposed in this paper, the CPU and memory are considered as the two main resource dimensions. Hence, one future research direction is to incorporate more resource dimensions and design more reasonable and reliable scheduling strategies.

Author Contributions

Conceptualization, W.D.; software, M.C., M.Z. and W.S.; validation, M.C., M.Z. and W.S.; resources, M.Z. and G.J.; writing—original draft preparation, M.C. and M.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work is sponsored by the Shanghai Pilot Program for Basic Research, China, 22TQ1400100-16; the Natural Science Foundation of Shanghai, China, 23ZR1414900, 22ZR1416500; and the Key Lab of Information Network Security, Ministry of Public Security, China, C23600-03.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Fernandez-Gomez, A.M.; Gutierrez-Aviles, D.; Troncoso, A.; Martínez-Álvarez, F. A new Apache Spark-based framework for big data streaming forecasting in IoT networks. J. Supercomput. 2023, 79, 11078–11100. [Google Scholar] [CrossRef] [PubMed]
  2. Vaño, R.; Lacalle, I.; Sowiński, P.; S-Julián, R.; Palau, C.E. Cloud-Native Workload Orchestration at the Edge: A Deployment Review and Future Directions. Sensors 2023, 23, 2215. [Google Scholar] [CrossRef] [PubMed]
  3. Qian, J.R.; Wang, Y.; Wang, X.X. Load balancing scheduling mechanism for OpenStack and Docker integration. J. Cloud Comput. Adv. Syst. Appl. 2023, 12, 12. [Google Scholar] [CrossRef]
  4. Peinl, R.; Holzschuher, F.; Pfitzer, F. Docker Cluster Management for the Cloud—Survey Results and Own Solution. J. Grid Comput. 2016, 14, 265–282. [Google Scholar] [CrossRef]
  5. Bittencourt, L.F.; Goldman, A.; Madeira, E.R.M. Scheduling in distributed systems: A cloud computing perspective. Comput. Sci. Rev. 2018, 30, 31–54. [Google Scholar] [CrossRef]
  6. Malti, A.N.; Hakem, M.; Benmammar, B. A new hybrid multi-objective optimization algorithm for task scheduling in cloud systems. Clust. Comput. 2024, 27, 2525–2548. [Google Scholar] [CrossRef]
  7. Hosseini, S.M. A novel discrete grey wolf optimizer for scientific workflow scheduling in heterogeneous cloud computing platforms. Sci. Iran. 2022, 29, 2375–2393. [Google Scholar] [CrossRef]
  8. Asghari, A.Y.; Hosseini, S.M.; Rahmani, A.M. A hybrid bi-objective scheduling algorithm for execution of scientific workflows on cloud platforms with execution time and reliability approach. J. Supercomput. 2023, 79, 1451–1503. [Google Scholar] [CrossRef]
  9. Topcuoglu, H.; Hariri, S.; Wu, M.Y. Performance-effective and low-complexity task scheduling for heterogeneous computing. IEEE Trans. Parallel Distrib. Syst. 2002, 13, 260–274. [Google Scholar] [CrossRef]
  10. Hosseini, S.M. A hybrid meta-heuristic algorithm for scientific workflow scheduling in heterogeneous distributed computing systems. Eng. Appl. Artif. Intell. 2020, 90, 103501. [Google Scholar] [CrossRef]
  11. Noorian, T.R.; Hosseini, S.M.; Motameni, H. A hybrid meta-heuristic scheduler algorithm for optimization of workflow scheduling in cloud heterogeneous computing environment. J. Eng. Des. Technol. 2022, 20, 1581–1605. [Google Scholar] [CrossRef]
  12. Sandhu, R.; Faiz, M.; Kaur, H.; Srivastava, A.; Narayan, V. Enhancement in performance of cloud computing task scheduling using optimization strategies. Clust. Comput. 2024, 1–24. [Google Scholar] [CrossRef]
  13. Barut, C.; Yildirim, G.; Tatar, Y. An intelligent and interpretable rule-based metaheuristic approach to task scheduling in cloud systems. Knowl. Based Syst. 2024, 284, 111241. [Google Scholar] [CrossRef]
  14. Zavieh, H.; Javadpour, A.; Li, Y.; Ja’fari, F.; Nasseri, S.H.; Rostami, A.S. Task processing optimization using cuckoo particle swarm (CPS) algorithm in cloud computing infrastructure. Clust. Comput. 2023, 26, 745–769. [Google Scholar] [CrossRef]
  15. Nabi, S.; Ahmad, M.; Ibrahim, M.; Hamam, H. AdPSO: Adaptive PSO-based task scheduling approach for cloud computing. Sensors 2022, 22, 920. [Google Scholar] [CrossRef] [PubMed]
  16. Mangalampalli, S.; Karri, G.R.; Elngar, A.A. An efficient trust-aware task scheduling algorithm in cloud computing using firefly optimization. Sensors 2023, 23, 1384. [Google Scholar] [CrossRef]
  17. Bürkük, M.; Yıldırım, G. Cloneable jellyfish search optimizer based task scheduling in cloud environments. Türk Doğa Fen Dergisi 2022, 11, 35–43. [Google Scholar] [CrossRef]
  18. El-Ashmawi, W.H.; Ali, A.F. A modified salp swarm algorithm for task assignment problem. IEEE Syst. J. 2020, 94, 106445. [Google Scholar] [CrossRef]
  19. Chen, X.; Cheng, L.; Liu, C.; Liu, Q.; Liu, J.; Mao, Y.; Murphy, J. A WOA-based optimization approach for task scheduling in cloud computing systems. IEEE Syst. J. 2020, 14, 3117–3128. [Google Scholar] [CrossRef]
  20. Fu, J.Z.; Wang, J.F.; Ma, Y.; Liu, Y.P. Optimized decomposition of manufacturing tasks based on task-correlation analysis. J. Mach. Des. 2023, 40, 65–69. [Google Scholar]
  21. Feng, Z.Y.; Hu, X.B.; Huo, Y.L.; Hou, J.X. Research on task decomposition algorithm model in network collaborative design process. Hoist. Convey. Mach. 2021, 6, 18–24. [Google Scholar]
  22. Wei, L.K.; Zhou, F.Q.; Yu, P.; Li, W.J.; Zhao, M.y.; Yan, X.Q.; Wu, J.J. Task Decomposition of Network Services for Deploying in Heterogeneous Network Computing Nodes. In Proceedings of the 26th IEEE International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD), Porto, Portugal, 25–27 October 2021; IEEE: Piscataway, NJ, USA, 2017; pp. 1–6. [Google Scholar] [CrossRef]
  23. Wang, Z.L.; Zhou, F.Q.; Feng, L.; Li, W.J. Multi-granularity decomposition for componentized multimedia applications based on graph clustering. In Proceedings of the IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), Chengdu, China, 4–6 August 2021; IEEE: Piscataway, NJ, USA, 2017; pp. 1–6. [Google Scholar] [CrossRef]
  24. Zheng, X.; Wang, L. A Pareto based fruit fly optimization algorithm for task scheduling and resource allocation in cloud computing environment. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016. [Google Scholar] [CrossRef]
  25. Li, F.; Zhang, L.; Liao, T.W.; Liu, Y. Multi-objective optimisation of multi-task scheduling in cloud manufacturing. Int. J. Prod. Res. 2019, 57, 3847–3863. [Google Scholar] [CrossRef]
  26. Khalili, A.; Babamir, S.M. Optimal scheduling workflows in cloud computing environment using pareto-based grey wolf optimizer. Concurr. Comput. Pract. Exp. 2017, 29, 4044. [Google Scholar] [CrossRef]
  27. Liu, J.; Sarker, R.; Elsayed, S.; Essam, D.; Siswanto, N. Large-scale evolutionary optimization: A review and comparative study. Swarm Evol. Comput. 2024, 85, 101466. [Google Scholar] [CrossRef]
  28. Mao, X.; Wu, G.; Fan, M.; Cao, Z.; Pedrycz, W. DL-DRL: A double-level deep reinforcement learning approach for large-scale task scheduling of multi-UAV. IEEE Trans. Autom. Sci. Eng. 2024, 1–17. [Google Scholar] [CrossRef]
  29. He, C.; Li, L.H.; Tian, Y. Accelerating Large-Scale Multiobjective Optimization via Problem Reformulation. IEEE Trans. Evol. Comput. 2019, 23, 949–961. [Google Scholar] [CrossRef]
  30. Chen, X.; Bian, H.W.; He, H.Y. An Improved Differential Evolution Adaptive Fuzzy PID Control Method for Gravity Measurement Stable Platform. Sensors 2023, 23, 3172. [Google Scholar] [CrossRef] [PubMed]
  31. Lampinen, J.; Zelinka, I. On stagnation of the differential evolution algorithm. Proc. MENDEL 2000, 6, 76–83. [Google Scholar]
  32. Tanabe, R.; Ishibuchi, H. Review and analysis of three components of the differential evolution mutation operator in MOEA/D-DE. Soft Comput. 2019, 23, 12843–12857. [Google Scholar] [CrossRef]
  33. Wang, J.; Zhang, W.; Zhang, J. Cooperative Differential Evolution With Multiple Populations for Multiobjective Optimization. IEEE Trans. Cybern. 2016, 46, 2848–2861. [Google Scholar] [CrossRef]
  34. Verma, S.; Pant, M.; Snasel, V. A Comprehensive Review on NSGA-II for Multi-Objective Combinatorial Optimization Problems. IEEE Access 2021, 9, 57757–57791. [Google Scholar] [CrossRef]
  35. Li, H.; Zhang, Q. Multiobjective optimization problems with complicated Pareto sets, MOEA/D and NSGA-II. IEEE Trans. Evol. Comput. 2008, 13, 284–302. [Google Scholar] [CrossRef]
  36. Tian, Y.; Cheng, R.; Zhang, X. PlatEMO: A MATLAB Platform for Evolutionary Multi-Objective Optimization. IEEE Comput. Intell. Mag. 2017, 12, 73–87. [Google Scholar] [CrossRef]
  37. Ma, X.; Liu, F.; Qi, Y.; Wang, X.; Li, L. A multiobjective evolutionary algorithm based on decision variable analyses for multiobjective optimization problems with large-scale variables. IEEE Trans. Evol. Comput. 2015, 20, 275–298. [Google Scholar] [CrossRef]
  38. Zille, H.; Ishibuchi, H.; Mostaghim, S.; Nojima, Y. Weighted optimization framework for large-scale multi-objective optimization. In Proceedings of the 2016 on Genetic and Evolutionary Computation Conference Companion, Denver, CO, USA, 20–24 July 2016; Association for Computing Machinery: New York, NY, USA; pp. 83–84. [Google Scholar] [CrossRef]
  39. Liu, J.; Liu, R.; Zhang, X. Recursive grouping and dynamic resource allocation method for large-scale multi-objective optimization problem. Appl. Soft Comput. 2022, 130, 109651. [Google Scholar] [CrossRef]
  40. Tian, Y.; Zheng, X.; Zhang, X.; Jin, Y. Efficient large-scale multiobjective optimization based on a competitive swarm optimizer. IEEE Trans. Cybern. 2019, 50, 3696–3708. [Google Scholar] [CrossRef]
  41. Cai, X.J.; Zhang, M.Q.; Wang, H. Analyses of inverted generational distance for many-objective optimisation algorithms. Int. J. Bio-Inspired Comput. 2019, 14, 62–68. [Google Scholar] [CrossRef]
  42. Halim, A.H.; Ismail, I.; Das, S. Performance assessment of the metaheuristic optimization algorithms: An exhaustive review. Artif. Intell. Rev. 2021, 54, 2323–2409. [Google Scholar] [CrossRef]
  43. Tenner, E. Prometheus Tamed: Fire, Security, and Modernities, 1400 to 1900. Technol. Cult. 2022, 63, 925–926. [Google Scholar] [CrossRef]
  44. Shukri, S.E.; Al-Sayyed, R.; Hudaib, A.; Mirjalili, S. Enhanced multi-verse optimizer for task scheduling in cloud computing environments. Expert Syst. Appl. 2020, 168, 114230. [Google Scholar] [CrossRef]
  45. Aktan, M.N.; Bulut, H. Metaheuristic task scheduling algorithms for cloud computing environments. Concurr. Comput. Pract. Exp. 2022, 34, 13. [Google Scholar] [CrossRef]
Figure 1. The framework of LSMOF-AD.
Figure 2. The distribution of final non-dominated solutions on DTLZ7-2D with 1000 decision variables for (a) MOEA/DVA, (b) LSMOF, and (c) LSMOF-AD.
Figure 3. The distribution of final non-dominated solutions on DTLZ7-3D with 1000 decision variables for (a) MOEA/DVA, (b) LSMOF, and (c) LSMOF-AD.
Figure 4. Comparison of total transmission time under different strategies.
Figure 5. Comparison of total execution time under different strategies.
Figure 6. Comparison of resource costs under different strategies.
Figure 7. Comparison of load balancing under different strategies.
Table 1. IGD values of comparative algorithms on LSMOP1–LSMOP9.
Problem | M | D | MOEA/DVA | WOF | RG-DRA-MMOPSO | LMOCSO | LSMOF-AD
LSMOP122008.66 × 100 (8.04 × 10−1) −6.30 × 10−1 (9.36 × 10−2) −5.31 × 10−1 (4.44 × 10−2) −3.05 × 10−1 (5.01 × 102) −2.11 × 10−1 (1.44 × 10−2)
5001.91 × 101 (1.00 × 100) −6.58 × 10−1 (6.11 × 10−2) −5.34 × 10−1 (3.51 × 10−2) −6.18 × 10−1 (2.84 × 10−1) −2.23 × 10−1 (1.36 × 10−2)
10002.39 × 101 (7.84 × 10−1) −6.79 × 10−1 (4.22 × 10−2) −4.35 × 10−1 (2.05 × 10−2) −9.25 × 10−1 (3.89 × 101) −3.56 × 10−1 (2.43 × 10−2)
32006.26 × 100 (4.62 × 10−1) −6.95 × 10−1 (1.32 × 10−1) −3.19 × 10−1 (2.02 × 10−2) +4.88 × 10−1 (1.26 × 10−1) −5.81 × 10−1 (9.78 × 10−2)
5009.42 × 100 (2.89 × 10−1) −7.09 × 10−1 (8.36 × 10−2) −4.50 × 10−1 (5.57 × 10−3) −5.23 × 10−1 (8.67 × 10−4) −4.36 × 10−1 (4.72 × 10−2)
10001.08 × 101 (3.22 × 10−1) −8.01 × 10−1 (7.05 × 10−2) −4.10 × 10−1 (3.28 × 10−3) −5.02 × 10−1 (3.11 × 102) −4.04 × 10−1 (2.43 × 10−2)
LSMOP222001.51 × 10−1 (6.75 × 10−4) −7.46 × 10−2 (4.63 × 10−4) −3.68 × 10−2 (1.52 × 10−3) −6.04 × 10−2 (8.55 × 101) −2.81 × 10−2 (1.07 × 10−3)
5007.27 × 10−2 (2.30 × 10−4) −3.30 × 10−2 (3.91 × 10−4) −2.13 × 10−2 (1.30 × 10−3) −1.26 × 10−2 (4.11 × 10−1) +1.67 × 10−2 (4.75 × 10−4)
10004.04 × 10−2 (3.87 × 10−4) −1.92 × 10−2 (3.40 × 10−4) −1.45 × 10−2 (5.36 × 10−4) −1.09 × 10−2 (1.18 × 10−2) +1.15 × 10−2 (4.08 × 10−4)
32001.23 × 10−1 (2.61 × 10−3) +1.36 × 10−1 (3.84 × 10−3) −1.45 × 10−1 (3.79 × 10−3) −1.19 × 10−1 (1.23 × 10−1) +1.31 × 10−1 (2.69 × 10−3)
5007.89 × 10−2 (2.63 × 10−3) +8.54 × 10−2 (3.82 × 10−3) −8.04 × 10−2 (3.72 × 10−3) −5.67 × 10−2 (5.98 × 10−4) +7.97 × 10−2 (1.97 × 10−3)
10006.48 × 10−2 (2.46 × 10−3) +7.00 × 10−2 (4.28 × 10−3) −6.58 × 10−2 (5.11 × 10−3) −8.95 × 10−1 (5.60 × 10−1) −6.80 × 10−2 (1.37 × 10−3)
LSMOP322001.71 × 101 (1.30 × 100) −1.50 × 10−2 (6.88 × 10−2) +1.54 × 100 (1.83 × 10−3) −4.03 × 10−2 (2.06 × 102) +1.51 × 10−1 (1.36 × 10−2)
5002.87 × 101 (8.26 × 10−1) −1.57 × 100 (1.47 × 10−3) −1.57 × 100 (1.05 × 10−3) −8.96 × 100 (1.90 × 10−1) −1.55 × 100 (1.65 × 10−3)
10003.36 × 101 (6.07 × 10−1) −1.58 × 100 (1.61 × 10−3) −1.57 × 100 (5.60 × 10−4) ≈1.83 × 101 (2.16 × 101) −1.57 × 100 (8.03 × 10−4)
32002.30 × 101 (3.53 × 100) −8.61 × 10−1 (3.38 × 10−4) −4.31 × 10−1 (4.10 × 10−2) +1.44 × 10−1 (7.15 × 10−2) +8.59 × 10−1 (3.45 × 10−3)
5003.60 × 101 (2.95 × 100) −8.61 × 10−1 (1.30 × 10−4) −3.39 × 10−1 (1.35 × 10−2) +8.90 × 10−1 (3.93 × 10−4) −8.60 × 10−1 (3.83 × 10−4)
10004.02 × 101 (2.09 × 100) −8.61 × 10−1 (7.28 × 10−4)≈8.60 × 10−1 (1.28 × 10−3) ≈8.97 × 10−1 (3.09 × 10−4) −8.60 × 10−1 (3.64 × 10−4)
LSMOP422006.56 × 10−1 (9.76 × 10−3) −1.33 × 10−1 (1.51 × 10−2) −6.98 × 10−2 (2.33 × 10−3) −1.69 × 10−1 (1.71 × 10−2) −5.78 × 10−2 (6.78 × 10−3)
5005.44 × 10−1 (1.90 × 10−3) −8.74 × 10−2 (6.83 × 10−3) −5.52 × 10−2 (3.05 × 10−3) −1.23 × 101 (6.23 × 10−1) −3.85 × 10−2 (2.05 × 10−3)
10004.61 × 10−1 (6.97 × 10−4) −5.99 × 10−2 (5.57 × 10−3) −3.32 × 10−2 (1.18 × 10−3) −1.83 × 10−1 (4.97 × 100) −2.66 × 10−2 (6.49 × 10−4)
32003.26 × 10−1 (2.31 × 10−3) −3.15 × 10−1 (9.10 × 10−3) −2.95 × 10−1 (7.55 × 10−3) +6.23 × 10−1 (3.74 × 10−2) −2.97 × 10−1 (9.69 × 10−3)
5001.94 × 10−1 (5.71 × 10−4) −2.14 × 10−1 (6.87 × 10−3) −2.20 × 10−1 (6.57 × 10−3) −3.11 × 10−1 (3.79 × 10−2) −1.79 × 10−1 (5.17 × 10−3)
10001.20 × 10−1 (1.96 × 10−4) +1.39 × 10−1 (5.80 × 10−3) −1.45 × 10−1 (6.02 × 10−3) −3.36 × 10−1 (2.38 × 10−2) −1.24 × 10−1 (2.41 × 10−3)
LSMOP522001.42 × 101 (6.21 × 10−1) −7.42 × 10−1 (1.14 × 10−6) −7.31 × 10−1 (3.39 × 10−16) −2.01 × 100 (3.07 × 100) −7.35 × 10−1 (1.81 × 10−2)
5002.09 × 101 (5.02 × 10−1) −7.42 × 10−1 (1.14 × 10−6) −7.35 × 10−1 (3.39 × 10−16) −7.47 × 10−1 (1.90 × 10−1) −6.25 × 10−1 (3.63 × 10−2)
10002.41 × 101 (3.40 × 10−1) −7.42 × 10−1 (3.39 × 10−6)≈7.42 × 10−1 (3.39 × 10−6) ≈1.39 × 100 (9.28 × 10−1) −7.42 × 10−1 (3.39 × 10−6) ≈
32001.17 × 101 (9.27 × 10−1) −5.41 × 10−1 (1.02 × 10−3) −4.99 × 10−1 (4.07 × 10−2) −4.40 × 10−1 (5.11 × 10−3) ≈4.85 × 10−1 (2.55 × 10−2)
5001.70 × 101 (6.15 × 10−1) −5.41 × 10−1 (4.66 × 10−5) −5.35 × 10−1 (9.71 × 10−3) −5.76 × 10−2 (2.85 × 10−3) −5.23 × 10−1 (8.26 × 10−3)
10001.91 × 101 (5.97 × 10−1) −5.41 × 10−1 (7.27 × 10−5) −5.40 × 10−1 (1.22 × 10−3) −6.21 × 10−1 (5.57 × 10−4) −5.27 × 10−1 (5.05 × 10−3)
LSMOP622007.36 × 102 (6.12 × 102) −6.42 × 10−1 (7.36 × 10−2) −2.57 × 10−1 (1.12 × 10−3) +3.69 × 10−1 (1.71 × 102) +6.43 × 10−1 (6.24 × 10−2)
5002.24 × 103 (2.14 × 103) −7.33 × 10−1 (1.76 × 10−1) −2.20 × 10−1 (4.12 × 10−4) +1.23 × 101 (6.23 × 10−1) −6.00 × 10−1 (8.68 × 10−2)
10002.99 × 103 (2.33 × 103) −6.82 × 10−1 (9.03 × 10−4) −3.12 × 10−1 (4.30 × 10−4) +1.89 × 10−1 (4.97 × 100) +6.26 × 10−1 (8.23 × 10−2)
32001.77 × 104 (3.58 × 103) −1.22 × 100 (3.15 × 10−3) −3.81 × 10−1 (2.20 × 10−2) +6.23 × 10−1 (3.74 × 10−2) +1.22 × 100 (4.17 × 10−3)
5003.05 × 104 (6.34 × 103) −1.29 × 100 (2.01 × 10−3) −4.22 × 10−1 (1.11 × 10−2) +5.11 × 10−1 (3.79 × 10−2) +1.29 × 100 (2.83 × 10−3)
10003.68 × 104 (7.07 × 103) −1.31 × 100 (1.31 × 10−3) −3.70 × 10−1 (4.20 × 10−2) +3.81 × 10−1 (2.38 × 10−2) +1.31 × 100 (2.96 × 10−3)
LSMOP722005.58 × 104 (6.03 × 103) −1.48 × 100 (2.34 × 10−3) −1.32 × 100 (3.20 × 10−3) +7.86 × 100 (3.59 × 102) −1.48 × 100 (1.90 × 10−3)
5001.06 × 105 (5.12 × 103) −1.51 × 100 (1.19 × 10−3) −1.50 × 100 (1.20 × 10−3) −3.81 × 100 (3.55 × 10−5) −1.48 × 100 (4.05 × 10−2)
10001.33 × 105 (4.14 × 103) −1.51 × 100 (1.18 × 10−3) −1.51 × 100 (1.29 × 10−3) −2.50 × 100 (2.91 × 10−5) −1.51 × 100 (5.64 × 10−4)
32001.80 × 100 (3.92 × 10−2) −9.78 × 10−1 (4.70 × 10−2) −9.73 × 10−1 (2.55 × 10−2) −6.00 × 10−1 (2.25 × 10−1) −4.10 × 10−1 (3.81 × 10−2)
5001.27 × 100 (9.73 × 10−3) −9.48 × 10−1 (1.26 × 10−1) −9.49 × 10−1 (4.26 × 10−2) −7.76 × 10−1 (2.41 × 10−5) −6.09 × 10−1 (7.02 × 10−2)
10001.10 × 100 (2.56 × 10−3) −9.23 × 10−1 (1.38 × 10−1) −8.64 × 10−1 (3.21 × 10−3) −7.76 × 10−1 (3.27 × 10−5) −6.88 × 10−1 (7.67 × 10−2)
LSMOP822001.40 × 101 (8.86 × 10−1) −7.42 × 10−1 (1.14 × 10−6) −7.42 × 10−1 (3.39 × 10−16) −7.88 × 10−1 (3.59 × 102) −2.86 × 10−1 (7.57 × 10−2)
5002.11 × 101 (4.21 × 10−1) −7.42 × 10−1 (1.14 × 10−6) −7.42 × 10−1 (3.39 × 10−16) −3.81 × 10−1 (3.55 × 10−5) −2.23 × 10−1 (1.91 × 10−2)
10002.39 × 101 (4.73 × 10−1) −7.42 × 10−1 (1.14 × 10−6)≈7.42 × 10−1 (3.39 × 10−16) ≈8.50 × 10−1 (2.91 × 10−5) −7.42 × 10−1 (3.39 × 10−16) ≈
32006.69 × 10−1 (1.07 × 10−2) −3.65 × 10−1 (4.56 × 10−3) −3.33 × 10−1 (2.40 × 10−2) −6.00 × 10−1 (2.25 × 10−1) −3.35 × 10−1 (3.09 × 10−2)
5006.51 × 10−1 (6.13 × 10−3) −3.55 × 10−1 (1.59 × 10−2) −3.15 × 10−1 (1.74 × 10−2) −3.76 × 10−1 (2.41 × 10−5) −2.89 × 10−1 (5.22 × 10−2)
10006.49 × 10−1 (4.56 × 10−3) −3.56 × 10−1 (9.05 × 10−3) −3.45 × 10−1 (1.13 × 10−2) −3.76 × 10−1 (3.27 × 10−5) −2.47 × 10−1 (4.49 × 10−2)
LSMOP9 | 2 | 200 | 2.26 × 10^1 (1.92 × 10^0) − | 8.10 × 10^−1 (1.14 × 10^−6) − | 8.30 × 10^−1 (0.00 × 10^0) − | 5.00 × 10^−1 (3.11 × 10^1) − | 4.67 × 10^−1 (8.06 × 10^−2)
LSMOP9 | 2 | 500 | 4.32 × 10^1 (1.36 × 10^0) − | 8.10 × 10^−1 (3.21 × 10^−4) − | 5.19 × 10^−1 (6.09 × 10^−4) − | 8.42 × 10^−1 (2.89 × 10^−2) − | 5.33 × 10^−1 (4.85 × 10^−3)
LSMOP9 | 2 | 1000 | 5.24 × 10^1 (1.03 × 10^0) − | 8.09 × 10^−1 (4.10 × 10^−4) − | 6.02 × 10^−1 (1.34 × 10^−3) − | 3.61 × 10^0 (5.67 × 10^0) − | 6.20 × 10^−1 (2.53 × 10^−2)
LSMOP9 | 3 | 200 | 6.70 × 10^1 (5.47 × 10^0) − | 7.74 × 10^−1 (3.80 × 10^−1) + | 1.53 × 10^0 (4.52 × 10^−16) − | 8.33 × 10^−1 (2.22 × 10^−1) + | 1.51 × 10^0 (9.96 × 10^−2)
LSMOP9 | 3 | 500 | 1.15 × 10^2 (5.42 × 10^0) − | 8.21 × 10^−1 (4.13 × 10^−1) + | 1.49 × 10^0 (1.20 × 10^−1) − | 9.11 × 10^−1 (3.05 × 10^−2) + | 1.23 × 10^0 (1.69 × 10^−1)
LSMOP9 | 3 | 1000 | 1.37 × 10^2 (3.51 × 10^0) − | 1.08 × 10^0 (4.00 × 10^−1) + | 1.40 × 10^0 (1.89 × 10^−1) − | 7.03 × 10^0 (1.94 × 10^−1) − | 1.18 × 10^0 (1.20 × 10^−1)
+/−/≈ |  |  | 2/52/0 | 4/47/3 | 11/39/4 | 12/41/1 |
Table 2. IGD values of LSMOF-based variants on LSMOP1-LSMOP9.
Problem | M | D | LSMOF | LSMOF-Three-Stage | LSMOF-AD
LSMOP1 | 2 | 100 | 7.48 × 10^−2 (2.08 × 10^−3) − | 3.97 × 10^0 (2.41 × 10^0) − | 2.94 × 10^−2 (4.52 × 10^−4)
LSMOP1 | 2 | 300 | 5.58 × 10^−1 (1.38 × 10^−4) − | 4.55 × 10^−1 (2.59 × 10^−5) + | 5.56 × 10^−1 (5.62 × 10^−4)
LSMOP1 | 2 | 600 | 5.64 × 10^−1 (5.46 × 10^−2) − | 2.92 × 10^−1 (3.78 × 10^−2) − | 2.11 × 10^−1 (1.44 × 10^−2)
LSMOP1 | 3 | 100 | 1.04 × 10^1 (5.35 × 10^−1) − | 1.83 × 10^0 (1.46 × 10^−1) − | 6.49 × 10^−1 (1.35 × 10^−2)
LSMOP1 | 3 | 300 | 6.01 × 10^0 (6.33 × 10^−1) − | 1.03 × 10^0 (6.51 × 10^−1) − | 1.01 × 10^−2 (3.68 × 10^−3)
LSMOP1 | 3 | 600 | 2.27 × 10^−1 (3.05 × 10^−2) − | 4.41 × 10^−2 (8.25 × 10^−7) + | 5.22 × 10^−2 (7.70 × 10^−5)
LSMOP2 | 2 | 100 | 5.32 × 10^0 (1.27 × 10^−1) − | 7.50 × 10^1 (3.24 × 10^−1) − | 1.27 × 10^−2 (1.08 × 10^−2)
LSMOP2 | 2 | 300 | 5.76 × 10^−2 (4.56 × 10^−4) + | 6.58 × 10^−2 (7.86 × 10^−4) ≈ | 8.64 × 10^−2 (2.69 × 10^−3)
LSMOP2 | 2 | 600 | 3.95 × 10^−2 (1.03 × 10^−3) − | 2.81 × 10^−2 (9.96 × 10^−4) + | 2.81 × 10^−2 (1.07 × 10^−3)
LSMOP2 | 3 | 100 | 1.32 × 10^−1 (1.62 × 10^−3) ≈ | 1.04 × 10^−1 (2.78 × 10^−4) + | 1.38 × 10^−1 (5.04 × 10^−3)
LSMOP2 | 3 | 300 | 1.67 × 10^1 (3.07 × 10^−4) − | 2.17 × 10^0 (1.08 × 10^−2) − | 4.35 × 10^−2 (2.50 × 10^−2)
LSMOP2 | 3 | 600 | 1.57 × 10^−1 (1.04 × 10^−1) − | 1.53 × 10^−1 (3.29 × 10^−1) ≈ | 5.43 × 10^−3 (2.41 × 10^−4)
LSMOP3 | 2 | 100 | 1.16 × 10^0 (6.11 × 10^−1) − | 2.22 × 10^−1 (1.22 × 10^−2) − | 2.16 × 10^−2 (1.46 × 10^−2)
LSMOP3 | 2 | 300 | 2.28 × 10^0 (8.58 × 10^−2) − | 1.90 × 10^−2 (2.46 × 10^−3) ≈ | 1.99 × 10^−2 (6.81 × 10^−3)
LSMOP3 | 2 | 600 | 1.53 × 10^0 (2.22 × 10^−3) − | 1.53 × 10^0 (2.64 × 10^−3) ≈ | 1.51 × 10^0 (1.36 × 10^−2)
LSMOP3 | 3 | 100 | 7.13 × 10^0 (5.60 × 10^−2) − | 1.62 × 10^0 (1.60 × 10^0) − | 8.60 × 10^−1 (2.85 × 10^−5)
LSMOP3 | 3 | 300 | 3.06 × 10^−1 (3.23 × 10^−2) − | 4.29 × 10^−1 (2.00 × 10^−2) − | 4.97 × 10^−2 (3.24 × 10^−2)
LSMOP3 | 3 | 600 | 2.51 × 10^0 (1.58 × 10^−1) − | 4.51 × 10^−1 (3.97 × 10^−1) − | 5.61 × 10^−3 (4.94 × 10^−4)
LSMOP4 | 2 | 100 | 1.85 × 10^0 (1.02 × 10^−2) − | 3.92 × 10^−1 (1.07 × 10^−2) − | 2.30 × 10^−2 (1.13 × 10^−2)
LSMOP4 | 2 | 300 | 3.72 × 10^0 (2.12 × 10^−1) − | 5.83 × 10^−2 (1.06 × 10^−2) − | 2.02 × 10^−2 (7.39 × 10^−3)
LSMOP4 | 2 | 600 | 1.02 × 10^−1 (1.83 × 10^−3) − | 6.81 × 10^−2 (5.87 × 10^−3) + | 5.78 × 10^−2 (6.78 × 10^−3)
LSMOP4 | 3 | 100 | 4.00 × 10^−1 (6.54 × 10^−3) − | 3.08 × 10^−1 (3.75 × 10^−3) ≈ | 3.10 × 10^−1 (5.95 × 10^−3)
LSMOP4 | 3 | 300 | 5.03 × 10^−1 (2.19 × 10^−2) − | 9.09 × 10^−1 (3.97 × 10^−2) − | 7.29 × 10^−2 (3.37 × 10^−2)
LSMOP4 | 3 | 600 | 3.91 × 10^0 (9.56 × 10^−2) − | 3.58 × 10^−1 (3.53 × 10^−1) − | 6.63 × 10^−3 (1.38 × 10^−3)
LSMOP5 | 2 | 100 | 2.66 × 10^1 (5.94 × 10^−1) ≈ | 3.17 × 10^−2 (6.10 × 10^−1) + | 3.27 × 10^−2 (3.20 × 10^−3)
LSMOP5 | 2 | 300 | 5.35 × 10^0 (1.18 × 10^−1) − | 1.53 × 10^−1 (2.92 × 10^−2) + | 2.68 × 10^−1 (1.60 × 10^−2)
LSMOP5 | 2 | 600 | 7.42 × 10^−1 (3.39 × 10^−1) ≈ | 7.42 × 10^−1 (3.39 × 10^−1) ≈ | 7.42 × 10^−1 (1.81 × 10^−2)
LSMOP5 | 3 | 100 | 1.85 × 10^1 (5.71 × 10^−1) − | 4.01 × 10^0 (2.89 × 10^−1) − | 7.37 × 10^−1 (1.90 × 10^−1)
LSMOP5 | 3 | 300 | 6.82 × 10^−2 (4.03 × 10^−2) − | 1.08 × 10^−1 (3.24 × 10^−2) + | 6.17 × 10^−2 (2.22 × 10^−2)
LSMOP5 | 3 | 600 | 5.43 × 10^0 (3.28 × 10^−1) − | 3.16 × 10^−1 (2.58 × 10^−1) + | 5.82 × 10^−1 (6.19 × 10^−4)
LSMOP6 | 2 | 100 | 2.66 × 10^1 (5.94 × 10^−1) − | 3.67 × 10^−1 (6.10 × 10^−1) + | 3.27 × 10^−2 (3.20 × 10^−3)
LSMOP6 | 2 | 300 | 5.36 × 10^0 (1.18 × 10^−1) − | 1.53 × 10^−1 (2.92 × 10^−2) + | 2.68 × 10^−1 (1.60 × 10^−2)
LSMOP6 | 2 | 600 | 3.57 × 10^−1 (1.35 × 10^−3) + | 3.71 × 10^−1 (1.50 × 10^−2) − | 6.43 × 10^−1 (6.24 × 10^−2)
LSMOP6 | 3 | 100 | 3.21 × 10^4 (8.29 × 10^−3) − | 9.77 × 10^2 (9.31 × 10^−2) − | 7.77 × 10^−1 (2.33 × 10^−2)
LSMOP6 | 3 | 300 | 6.82 × 10^−1 (4.03 × 10^−2) − | 1.08 × 10^−1 (3.24 × 10^−2) − | 6.17 × 10^−2 (2.22 × 10^−2)
LSMOP6 | 3 | 600 | 5.43 × 10^0 (3.28 × 10^−1) − | 3.16 × 10^−1 (2.58 × 10^−1) − | 5.82 × 10^−2 (6.19 × 10^−4)
LSMOP7 | 2 | 100 | 3.33 × 10^−1 (1.35 × 10^−2) − | 6.21 × 10^−1 (1.66 × 10^−2) − | 3.79 × 10^−2 (1.65 × 10^−2)
LSMOP7 | 2 | 300 | 6.62 × 10^0 (1.56 × 10^−1) − | 4.18 × 10^−2 (7.31 × 10^−2) + | 4.81 × 10^−2 (1.09 × 10^−2)
LSMOP7 | 2 | 600 | 1.47 × 10^0 (2.54 × 10^−3) − | 1.43 × 10^0 (2.88 × 10^−3) + | 1.44 × 10^0 (1.90 × 10^−3)
LSMOP7 | 3 | 100 | 2.31 × 10^0 (1.47 × 10^−3) − | 1.49 × 10^0 (2.39 × 10^−1) − | 1.08 × 10^0 (4.06 × 10^−2)
LSMOP7 | 3 | 300 | 8.98 × 10^−1 (1.82 × 10^−2) − | 1.59 × 10^−1 (5.48 × 10^−2) − | 9.40 × 10^−2 (2.24 × 10^−2)
LSMOP7 | 3 | 600 | 6.88 × 10^0 (4.89 × 10^−1) − | 7.29 × 10^−1 (9.73 × 10^−2) − | 6.68 × 10^−2 (1.17 × 10^−3)
LSMOP8 | 2 | 100 | 4.03 × 10^−1 (1.36 × 10^−2) − | 7.08 × 10^−2 (1.77 × 10^−2) − | 4.24 × 10^−2 (1.09 × 10^−2)
LSMOP8 | 2 | 300 | 7.80 × 10^0 (2.66 × 10^−1) − | 5.90 × 10^−1 (1.94 × 10^−1) − | 2.13 × 10^−2 (8.73 × 10^−3)
LSMOP8 | 2 | 600 | 7.42 × 10^−1 (3.39 × 10^−16) − | 7.42 × 10^−1 (3.39 × 10^−16) ≈ | 2.86 × 10^−1 (7.57 × 10^−2)
LSMOP8 | 3 | 100 | 7.77 × 10^−1 (3.17 × 10^−2) − | 6.40 × 10^−1 (1.62 × 10^−2) ≈ | 6.33 × 10^−1 (2.99 × 10^−2)
LSMOP8 | 3 | 300 | 1.08 × 10^0 (3.36 × 10^−2) − | 1.55 × 10^−1 (8.14 × 10^−2) − | 1.34 × 10^−1 (3.45 × 10^−2)
LSMOP8 | 3 | 600 | 8.61 × 10^0 (2.50 × 10^−1) − | 1.47 × 10^0 (7.01 × 10^−1) − | 8.37 × 10^−3 (3.83 × 10^−3)
LSMOP9 | 2 | 100 | 5.10 × 10^−1 (9.83 × 10^−1) − | 9.67 × 10^−1 (3.32 × 10^−2) − | 3.08 × 10^−2 (7.02 × 10^−3)
LSMOP9 | 2 | 300 | 1.04 × 10^1 (2.84 × 10^−1) − | 1.21 × 10^−1 (4.17 × 10^−1) + | 1.55 × 10^−1 (1.12 × 10^−2)
LSMOP9 | 2 | 600 | 8.10 × 10^−1 (0.00 × 10^0) − | 8.10 × 10^−1 (0.00 × 10^0) ≈ | 7.67 × 10^−1 (8.06 × 10^−2)
LSMOP9 | 3 | 100 | 1.27 × 10^0 (7.69 × 10^0) − | 2.18 × 10^−1 (5.85 × 10^−1) + | 1.47 × 10^0 (1.41 × 10^−1)
LSMOP9 | 3 | 300 | 6.05 × 10^−1 (8.55 × 10^−1) − | 1.05 × 10^−1 (5.01 × 10^−2) − | 5.70 × 10^−2 (1.96 × 10^−2)
LSMOP9 | 3 | 600 | 1.26 × 10^1 (4.11 × 10^−1) − | 1.18 × 10^0 (2.84 × 10^−1) − | 1.94 × 10^−1 (1.29 × 10^−2)
+/−/≈ |  |  | 2/49/3 | 12/33/9 |
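The IGD (inverted generational distance) values reported in the tables above measure the average distance from reference points sampled on the true Pareto front to their nearest obtained solution, so lower values indicate better convergence and diversity; the +, −, and ≈ markers follow the usual convention of flagging instances on which a competitor is significantly better than, worse than, or statistically comparable to LSMOF-AD (such comparisons are commonly made with a rank-sum test). The snippet below is a minimal NumPy sketch of the metric; the reference set and obtained set are illustrative values, not data from these experiments.

```python
import numpy as np

def igd(obtained: np.ndarray, reference: np.ndarray) -> float:
    """Inverted generational distance: mean Euclidean distance from each
    reference point (sampled from the true Pareto front) to its nearest
    solution in the obtained approximation set. Lower is better."""
    diffs = reference[:, None, :] - obtained[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)   # |reference| x |obtained| matrix
    return float(dists.min(axis=1).mean())  # nearest-neighbour average

# Illustrative two-objective example (not taken from the paper's runs).
reference = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
obtained = np.array([[0.05, 0.95], [0.55, 0.50], [0.95, 0.10]])
print(f"IGD = {igd(obtained, reference):.4f}")
```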
Table 3. Relations between nodes and tasks.
Node Number | Task Number
1 | 4, 6
2 | 2, 5
3 | 3
4 | 1
5 | 7, 8
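Table 3 can be read as a task-to-node assignment, the kind of integer encoding typically used to represent a scheduling solution in this setting. The short sketch below is only a hypothetical illustration of that reading (the variable names and encoding are ours, not the paper's notation): it stores, for each task, the node it is placed on, and then recovers the node-to-task view shown in the table.

```python
# Hypothetical integer encoding of the placement in Table 3:
# assignment[i] is the node hosting task i + 1.
assignment = [4, 2, 3, 1, 2, 1, 5, 5]  # tasks 1..8

# Rebuild the node -> tasks view listed in Table 3.
node_tasks: dict[int, list[int]] = {}
for task, node in enumerate(assignment, start=1):
    node_tasks.setdefault(node, []).append(task)

for node in sorted(node_tasks):
    print(node, node_tasks[node])
# 1 [4, 6]
# 2 [2, 5]
# 3 [3]
# 4 [1]
# 5 [7, 8]
```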
Table 4. Server specifications.
Name | Frequency/GHz | Memory/GB | Bandwidth/Mbps | Disk/TB
Controller | 3.2 | 32 | 100 | 4
Node 1 | 2.4 | 48 | 100 | 2
Node 2 | 2.4 | 48 | 100 | 2
Node 3 | 3.2 | 48 | 100 | 2
Node 4 | 3.2 | 32 | 100 | 2
Node 5 | 3.2 | 64 | 100 | 8
Node 6 | 2.4 | 32 | 100 | 2
Node 7 | 3.2 | 48 | 100 | 2
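For readers who want to reproduce the testbed, the cluster in Table 4 can be captured as a small configuration structure that a scheduler or fitness function can query. The sketch below is an assumed representation (the NodeSpec class and field names are ours, chosen to mirror the table's columns), not part of the paper's implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NodeSpec:
    name: str
    frequency_ghz: float
    memory_gb: int
    bandwidth_mbps: int
    disk_tb: int

# Specifications transcribed from Table 4.
CLUSTER = [
    NodeSpec("Controller", 3.2, 32, 100, 4),
    NodeSpec("Node 1", 2.4, 48, 100, 2),
    NodeSpec("Node 2", 2.4, 48, 100, 2),
    NodeSpec("Node 3", 3.2, 48, 100, 2),
    NodeSpec("Node 4", 3.2, 32, 100, 2),
    NodeSpec("Node 5", 3.2, 64, 100, 8),
    NodeSpec("Node 6", 2.4, 32, 100, 2),
    NodeSpec("Node 7", 3.2, 48, 100, 2),
]

print(sum(n.memory_gb for n in CLUSTER))  # 352 GB of memory in total
```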