Article

Task Scheduling Approach in Cloud Computing Environment Using Hybrid Differential Evolution

1
Faculty of Computers and Informatics, Zagazig University, Zagazig 44519, Sharqiyah, Egypt
2
College of Engineering and Applied Sciences, American University of Kuwait, Salmiya 20002, Kuwait
3
School of IT and Systems, University of Canberra, Canberra, ACT 2601, Australia
*
Author to whom correspondence should be addressed.
Mathematics 2022, 10(21), 4049; https://doi.org/10.3390/math10214049
Submission received: 3 September 2022 / Revised: 18 October 2022 / Accepted: 27 October 2022 / Published: 31 October 2022
(This article belongs to the Special Issue Advances in Machine Learning, Optimization, and Control Applications)

Abstract:
Task scheduling is one of the most significant challenges in the cloud computing environment and has attracted the attention of various researchers over the last decades, in order to achieve cost-effective execution and improve resource utilization. The challenge of task scheduling is categorized as a nondeterministic polynomial time (NP)-hard problem, which cannot be tackled with the classical methods, due to their inability to find a near-optimal solution within a reasonable time. Therefore, metaheuristic algorithms have recently been employed to overcome this problem, but these algorithms still suffer from falling into local minima and from a low convergence speed. Accordingly, in this study, a new task scheduler, known as hybrid differential evolution (HDE), is presented as a solution to the challenge of task scheduling in the cloud computing environment. This scheduler is based on two proposed enhancements to the traditional differential evolution. The first improvement is based on improving the scaling factor, to include numerical values generated dynamically based on the current iteration, in order to improve both the exploration and exploitation operators; the second improvement is intended to improve the exploitation operator of the classical DE, in order to achieve better results in fewer iterations. Multiple tests utilizing randomly generated datasets and the CloudSim simulator were conducted, to demonstrate the efficacy of HDE. In addition, HDE was compared to a variety of heuristic and metaheuristic algorithms, including the slime mold algorithm (SMA), equilibrium optimizer (EO), sine cosine algorithm (SCA), whale optimization algorithm (WOA), grey wolf optimizer (GWO), classical DE, first come first served (FCFS), round robin (RR) algorithm, and shortest job first (SJF) scheduler. During trials, makespan and total execution time values were acquired for various task sizes, ranging from 100 to 3000. Compared to the other metaheuristic and heuristic algorithms considered, the experimental results indicated that HDE generated superior outcomes. Consequently, HDE was found to be the most efficient metaheuristic scheduling algorithm among the numerous methods researched.
MSC:
68W50; 90B36; 90C27

1. Introduction

Healthcare services (HCS) based on cloud computing and IoT are now seen as one of the most significant medical fields, due to the spread of epidemics and health crises, where the best use of HCS can save many lives [1]. Through the use of cloud computing, HCS users can gain access to online computing resources, such as software, hardware, and applications that are managed in accordance with their needs. As a result, cloud computing in the context of the Internet of Things has greatly benefited all users, especially those in the healthcare services industry who wish to deliver medical services via the Internet [2]. The rapid adoption and ease of use in all industries, the proliferation of the Internet of Things concept, and the continued advancement of infrastructure and technology will all increase user demand for cloud computing, doubling the volume of data and requests from users and making task scheduling an even more challenging issue. Delivering resources in accordance with user requests while upholding quality of service (QoS) requirements for the end user is a demanding task [3].
Broadly speaking, cloud computing is made up of a large number of datacenters that house numerous physical machines (hosts). Each host runs several virtual machines (VMs) that are in charge of executing user tasks with varying quality of service (QoS) requirements. Users are able to gain access to cloud resources on a pay-as-you-go basis, through the use of cloud service providers [4]. An IoT environment connected to a cloud computing paradigm makes efficient use of the physical resources already available, thanks to virtualization technology. Thus, multiple users of healthcare services (HCS) can store and access a variety of medical resources using a single physical infrastructure, which includes both hardware and software. One of the most significant issues with healthcare services is the task scheduling problem (TSP), which causes users of healthcare services in a cloud computing environment to experience delays in receiving medical requests. The waiting period, the turnaround time for medical requests, CPU waste, and resource waste are some of the factors that contribute to the delay in processing medical requests. TSP is an NP-hard problem responsible for allocating computing resources to application tasks in an effective manner [2].
When the size of the tasks and the number of VMs both increase, the amount of computing time required to select the best assignment of those tasks to the VMs increases exponentially. Some classic scheduling techniques, such as first come first served (FCFS), round-robin (RR), and shortest job first (SJF), are able to provide solutions to scheduling, but cannot fulfil the demands of cloud computing, because its scheduling problem is NP-hard [5,6]. As traditional scheduling algorithms are unable to solve NP-hard optimization problems, more recently, modern optimization algorithms, also known as metaheuristic algorithms, have been used instead. These algorithms can produce optimal or near-optimal solutions in a reasonable amount of time compared to traditional scheduling algorithms.
Several metaheuristic algorithms have been employed for tackling task scheduling in cloud computing environments; for example, in [6], a new variant of the classic particle swarm optimization (PSO), namely ranging function and tuning function based PSO (RTPSO), was proposed, to achieve better task scheduling. In RTPSO, the inertia weight factors were improved to generate small and large values, for better local search and global search. In addition, RTPSO was integrated with the bat algorithm for further improvements; this variant was named RTPSO-B. Both RTPSO and RTPSO-B have been compared with various well-established algorithms, such as the genetic algorithm (GA), ant-colony optimization algorithm (ACO), and classical PSO. This comparison showed the efficiency of RTPSO-B in terms of its makespan, cost, and the utilization of resources.
The flower pollination algorithm (FPA) was also applied to tackle task scheduling in cloud computing. Bezdan, T. et al. [7] improved the exploration operator of the classical FPA by replacing the worst individuals with new ones generated randomly within the search space, to avoid becoming stuck in local minima at the start of the optimization process. This improved FPA was called EEFPA and employed to find the best scheduling of tasks in cloud computing environments, minimizing the makespan as the major objective. EEFPA was the best scheduler, compared to the other similar approaches considered in the study. Choudhary et al. [8] developed a task scheduling algorithm for bi-objective workflow scheduling in cloud computing that is based on hybridizing the gravitational search algorithm (GSA) and heterogeneous earliest finish time (HEFT); this algorithm was named HGSA. This algorithm was developed in an effort to shorten the makespan and the computational cost. However, it is possible that GSA may not perform accurately for more complicated tasks.
Raghavan et al. [9] adapted the bat algorithm (BA) to tackle the task scheduling problem in cloud computing, with an objective function for reducing the total cost of the workflow. On the other hand, BA had a subpar performance in high-dimensional problems [6]. Tawfeek et al. [10] devised ant colony optimization to deal with task scheduling in cloud computing, with the goal of reducing the makespan. This algorithm was compared to two traditional algorithms, FCFS and RR, and it performed better than both of them. The problem with this algorithm is that it converges slowly, and hence will take several iterations to obtain feasible solutions. Hamad and Omara [11] proposed a task scheduling algorithm based on the genetic algorithm (GA), to find the optimal assignment of tasks in cloud computing, which will optimize the makespan and cost, as well as the resource utilization.
The grey wolf optimizer (GWO) was proposed, to schedule the tasks in cloud computing to utilize the resources more efficiently and minimize the total completion time [12]. This algorithm was compared with several scheduling methods, such as the FCFS, ACO, performance budget ACO (PBACO), and min-max algorithms. The experimental findings showed that GWO was the best-performing scheduler and PBACO was the second best. However, the performance of GWO at large scale was not evaluated and, hence, it is not preferable when the number of tasks is at a large scale. Chen, X. et al. [13] presented a task scheduler based on the improved whale optimization algorithm (IWOA). The standard WOA was improved in IWOA using two factors, namely the nonlinear convergence factor and adaptive population size. IWOA was better than the compared algorithms in terms of accuracy and convergence speed when scheduling small-scale tasks or large-scale tasks in cloud computing environments.
Alsaidy, S.A. et al. [14] proposed two variants of PSO: The first, named LJFP-PSO, is based on initializing a population using a heuristic algorithm known as longest job to fastest processor (LJFP); while the second variant, known as MCT-PSO, is based on employing the minimum completion time (MCT) algorithm to initialize the population and to achieve a better makespan, total execution time, degree of imbalance, and total energy consumption when tackling the task scheduling problem in cloud computing. The glowworm swarm optimization (GSO) was used to solve the problem of task scheduling in cloud computing. The goal of this solution was to minimize the overall execution cost of jobs, while keeping the total completion time within the deadline [15]. According to the findings of a simulation, the GSO based task scheduling (GSOTS) algorithm achieved better results than the shortest task first (STF), the largest task first (LTF), and the particle swarm optimization (PSO) algorithms in terms of lowering the total completion time and the cost of executing tasks. There are several other metaheuristics-based task scheduling algorithms in the cloud computing environment, including the cuckoo search algorithm (CSA) [16], electromagnetism optimization (EMO) algorithm [17], sea lion optimization (SLO) algorithm [18], adaptive symbiotic organisms search (ASOS) [19], hybrid whale optimization algorithm (HWOA) [20], artificial flora optimization algorithm (AFOA) [21], modified particle swarm optimization (MPSO) [22], and differential evolution (DE) [23,24,25,26,27,28,29,30].
As mentioned, the task scheduling problem is classified as a nondeterministic polynomial time (NP)-hard problem and cannot be solved using traditional methods, due to their inability to find a near-optimal solution in a reasonable time. Although metaheuristic algorithms can have a significant effect when tackling these problems, they still suffer from falling into local minima and from a low convergence speed. As a result, a new task scheduler, known as hybrid differential evolution (HDE), is presented in this study, as a solution to the task scheduling challenge in the cloud computing environment. This scheduler is based on two proposed improvements to differential evolution. The first improvement is based on expanding the scaling factor to include dynamically generated numerical values based on the current iteration, in order to improve both the exploration and exploitation operators; the second improvement is intended to improve the exploitation operator of the classical DE, in order to achieve better results in fewer iterations. To demonstrate the efficacy of HDE, multiple tests were performed using randomly generated datasets and the CloudSim simulator. HDE was compared to a number of heuristic and metaheuristic algorithms, to show that it is a strong alternative for overcoming the task scheduling problem in cloud computing. The main contributions of this study are as follows:
  • Improving the scaling factor and the exploitation operator of the classical DE, to propose a new task scheduler known as hybrid differential evolution for the challenge of task scheduling in the cloud computing environment.
  • Conducting several experiments using randomly generated datasets and the CloudSim simulator, to verify the efficiency of HDE.
  • The experimental findings show that HDE was the most effective metaheuristic scheduling algorithm among the compared approaches.
The remaining sections of this work are structured as follows: Section 2 describes task scheduling in the cloud computing environment, Section 3 describes the classical differential evolution, Section 4 presents the objective function formulation, Section 5 details the steps of the proposed algorithm, and Section 6 depicts the results of the experiments and the performance comparison. The final section presents the conclusions drawn from this study, as well as observations regarding future research.

2. Task Scheduling in the Cloud Computing Environment

Cloud computing is made up of a large number of datacenters that house numerous physical machines (hosts). Each host runs several virtual machines (VMs) that are in charge of executing user tasks with varying quality of service (QoS) requirements. Figure 1 depicts the task scheduling in a cloud computing environment [4]. Suppose that there are n cloudlets (tasks), T = {T_1, T_2, T_3, …, T_n}, which are executed using m virtual machines (VMs), VM = {VM_1, VM_2, VM_3, …, VM_m}. These tasks have various lengths, and the VMs are heterogeneous in terms of bandwidth, RAM, and CPU time. The cloud broker makes a request to the cloud information service, to obtain details about the services needed to carry out the tasks it has been given, and then schedules the tasks on the services that have been found. The numerous variables and QoS criteria of the broker influence the choice of the tasks to be provided. The cloud broker is the key element of the task scheduling process; it mediates negotiations between the user and the provider and decides when to schedule tasks for certain resources. However, there are several issues that need to be considered. First, the tasks that users submit are added to the top queue in the system and must wait while the resources are employed. As a result, the system's queue grows longer, which lengthens the waiting period; hence, a more effective approach than the first come first served (FCFS) principle should be used to manage this queue. Second, when the service provider manages the tasks, numerous parameters can be taken into account, as multiobjective optimization or single objective optimization, such as the makespan, which directly affects resource consumption [4].
Therefore, a strong task scheduling algorithm should be created and implemented in the cloud broker, to not only meet the QoS requirements imposed by cloud users, but also to achieve good load balancing between the virtual machines, in order to improve resource utilization [4].

3. Differential Evolution

Storn [31] presented a population-based optimization method dubbed differential evolution (DE). DE is comparable to genetic algorithms, in terms of its mutation, crossover, and selection operators. Before commencing the optimization process, the differential evolution generates a set of solutions, referred to as individuals, each of which has D dimensions and is randomly dispersed within the search space of the optimization problem. After that, as will become clear in the following paragraphs, the mutation and crossover operators are applied, in order to search through the available space in an effort to locate more effective solutions.

3.1. Mutation Operator

This operator is applied to generate a mutant vector v i t for each solution in the population. There are several updating schemes that can be applied to generate the mutant vector, one of the most common schemes generates the mutant vector according to the following formula:
v_i^t = X_a^t + F (X_k^t − X_j^t)        (1)
where a, k, and j stand for the indices of three individuals picked randomly from the population at the current iteration t, and F is a positive scaling factor, a constant value greater than 0.
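As a concrete illustration of Equation (1), the following minimal Java sketch (Java being the language the authors used for their implementation; the vector values and method names here are our own, not from the paper) builds a mutant vector from three population members:

```java
public class DEMutation {
    // Classical DE/rand/1 mutation: v_i = X_a + F * (X_k - X_j), where
    // X_a, X_k, and X_j are three distinct individuals chosen at random.
    static double[] mutate(double[] xa, double[] xk, double[] xj, double F) {
        double[] v = new double[xa.length];
        for (int d = 0; d < xa.length; d++) {
            v[d] = xa[d] + F * (xk[d] - xj[d]);
        }
        return v;
    }

    public static void main(String[] args) {
        double[] xa = {1.0, 2.0}, xk = {3.0, 1.0}, xj = {2.0, 4.0};
        double[] v = mutate(xa, xk, xj, 0.5);
        // 1.0 + 0.5*(3.0 - 2.0) = 1.5 ; 2.0 + 0.5*(1.0 - 4.0) = 0.5
        System.out.println(v[0] + " " + v[1]);
    }
}
```

Note that a larger F pushes the mutant further from X_a, which is exactly the step-size behavior the adaptive scaling factor in Section 5 exploits.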

3.2. Crossover Operator

After the mutant vector v_i^t has been generated, the crossover operator, governed by a crossover probability (CR), generates the trial vector u_i^t from both the mutant vector and the solution of the i-th individual, according to the following equation:
u_{i,j}^t = v_{i,j}^t,    if r_1 ≤ CR or j = j_r
u_{i,j}^t = X_{i,j}^t,    otherwise        (2)
where j_r is a random integer generated between 1 and D, j indicates the current dimension, r_1 is a number generated randomly between 0 and 1, and CR is a constant value between 0 and 1, which specifies the percentage of dimensions copied from the mutant vector to the trial vector.
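A minimal sketch of this binomial crossover (helper names are our own; the guaranteed index j_r ensures the trial vector differs from its parent in at least one dimension):

```java
import java.util.Arrays;
import java.util.Random;

public class DECrossover {
    // Binomial crossover as in Equation (2): dimension j of the trial
    // vector is taken from the mutant when a random draw is <= CR, or when
    // j equals the randomly chosen index jr.
    static double[] crossover(double[] x, double[] v, double cr, Random rng) {
        double[] u = new double[x.length];
        int jr = rng.nextInt(x.length);
        for (int j = 0; j < x.length; j++) {
            u[j] = (rng.nextDouble() <= cr || j == jr) ? v[j] : x[j];
        }
        return u;
    }

    public static void main(String[] args) {
        double[] x = {0, 0, 0, 0}, v = {1, 1, 1, 1};
        // With CR = 1.0 every dimension is copied from the mutant vector.
        System.out.println(Arrays.toString(crossover(x, v, 1.0, new Random())));
    }
}
```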

3.3. Selection Operator

Finally, this operator is presented to compare the vector u i t to vector X i t , with the fittest being used in the next iteration. Generally, the selection operator for a minimization problem is mathematically formulated as follows:
X_i^{t+1} = u_i^t,    if f(u_i^t) < f(X_i^t)
X_i^{t+1} = X_i^t,    otherwise        (3)
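The greedy selection rule above can be sketched in a few lines (again an illustrative helper, not the authors' code), keeping the trial vector only when it strictly improves the objective for a minimization problem:

```java
public class DESelection {
    // Greedy selection for minimization: the trial vector u replaces the
    // parent x only when its objective value fu is strictly lower than fx.
    static double[] select(double[] x, double fx, double[] u, double fu) {
        return (fu < fx) ? u : x;
    }

    public static void main(String[] args) {
        double[] parent = {2.0}, trial = {1.0};
        // Trial has the lower fitness (3.0 < 5.0), so it survives.
        System.out.println(select(parent, 5.0, trial, 3.0)[0]);
    }
}
```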

4. Objective Function Formulation

Suppose that there are n cloudlets (tasks), T = {T_1, T_2, T_3, …, T_n}, which are executed using m virtual machines (VMs), VM = {VM_1, VM_2, VM_3, …, VM_m}. The solutions obtained by a scheduler then represent the assignment of the tasks to the VMs in the order that will minimize the two objectives: makespan and total execution time. Problems that have two or three objectives are classified as multiobjective. There are two methods, namely a priori and a posteriori, suggested to deal with multiobjective problems [32,33]. In the a priori method, the multiobjective problem is treated as a single objective problem, by assigning a weight to each objective, based on the significance of each objective for the decision-makers. On the other hand, the a posteriori method proposes that all objectives have the same significance, so it generates a set of solutions, namely non-dominated solutions, that trade off between the different objectives. In this study, the a priori approach was employed to convert the multiobjective problem into a single objective one, using a weighting variable τ that includes a constant value generated between 0 and 1, to determine the importance of one objective, for example, the makespan, in the fitness function; the complement of this variable (1 − τ) indicates the weight of the other objective. The objective function under this weighting variable is formulated as follows:
f = (1 − τ) × η + τ × β        (4)
where β represents the makespan objective, and η stands for the total execution time. τ stands for the weighting variable, which is employed to convert this problem from multi-objective to single-objective, to become solvable using the metaheuristic algorithms designed for single-objective problems, such as differential evolution. In the future, the a posteriori approach will be applied to this problem, in an attempt to achieve better results for all objectives at the same time.
After describing the main components of the objective function, let us describe each one carefully. Starting with the makespan objective: at the beginning, each VM_j has a variable Et_j assigned a value of 0, and then the tasks are distributed by a scheduler to those VMs. Each VM executes its assigned tasks, and the execution time needed by each task executed on the j-th VM is added to the variable Et_j. After all VMs have finished their assigned tasks, the values stored in the variable Et for each VM are compared with each other, and the maximum value represents the makespan. Hence, the makespan, defined as the maximum of Et over all VMs, can be computed using the following expression:
β = max(Et)        (5)
The second objective η is the total execution time consumed to complete the tasks assigned to all the VMs and can be computed according to the following formula:
η = ∑_{j=1}^{m} Et_j        (6)
Finally, the proposed algorithm described in the next section is employed to minimize the objective function described in Equation (4), to find the near-optimal scheduling of tasks that minimizes both the makespan and total execution time, hence providing a better quality of service to the users.
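To make Equations (4)–(6) concrete, the sketch below (a hypothetical helper with made-up execution times; execTime[task][vm] is our own convention, not defined in the paper) accumulates the per-VM completion times Et and combines the makespan and total execution time with the weight τ:

```java
public class ScheduleFitness {
    // Accumulates Et_j from the execution time of each task on its assigned
    // VM, then returns f = (1 - tau) * eta + tau * beta as in Equation (4),
    // where beta = max(Et) and eta = sum(Et).
    static double fitness(int[] assignment, double[][] execTime, int m, double tau) {
        double[] et = new double[m];
        for (int task = 0; task < assignment.length; task++) {
            et[assignment[task]] += execTime[task][assignment[task]];
        }
        double makespan = 0, total = 0;
        for (double e : et) { makespan = Math.max(makespan, e); total += e; }
        return (1 - tau) * total + tau * makespan;
    }

    public static void main(String[] args) {
        // Two VMs, three tasks; execTime[task][vm] holds arbitrary example values.
        double[][] t = {{4, 2}, {3, 5}, {1, 1}};
        int[] assign = {1, 0, 0};                   // Et = [3 + 1, 2] = [4, 2]
        System.out.println(fitness(assign, t, 2, 0.5)); // 0.5 * 6 + 0.5 * 4 = 5.0
    }
}
```

A schedule that balances load across the VMs lowers β without changing η, which is why the weighted fitness rewards good load balancing.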

5. The Proposed Algorithm

This section presents the main steps employed to adapt the differential evolution for tackling the task scheduling problem in cloud computing; these steps are listed as follows: initialization, evaluation, adaptive mutation factor, and additional exploitation operator.

5.1. Initialization

Before starting the optimization process with any metaheuristic algorithm, a number N of solutions, each with n dimensions (each dimension represents a task in the scheduling problem), are defined and randomly initialized between 0 and m, to assign each task to a virtual machine. Figure 2 presents a simple example, to illustrate how to represent a solution for the task scheduling problem. In this figure, we create a solution with 10 tasks, where n is 10, and randomly assign a VM to each task in this solution, where the number of available VMs is up to 7. From this figure, it is obvious that the virtual machine with an index of 2 will execute tasks 2 and 9, and the VM with an index of 0 will be assigned to execute tasks 1 and 5, while the other tasks, in order, will be assigned to the VMs: 1, 4, 3, 5, 6, and 5, respectively. In brief, the mathematical formula of the initialization step is defined as follows:
X_i = r ⊗ m,   i = 0, 1, 2, …, N − 1        (7)
where r stands for a vector involving decimal numbers generated randomly between 0 and 1, and ⊗ indicates the element-wise multiplication operator. Following that step, the evaluation stage commences, to evaluate the quality of each solution and determine the best-so-far solution that can reach the lowest value for the objective function described in Equation (4).
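The initialization in Equation (7) might be sketched as follows; note that the rule for mapping a continuous position to a discrete VM index (flooring, in this sketch) is our assumption, as the text does not state it explicitly:

```java
import java.util.Random;

public class ScheduleInit {
    // Randomly initializes N candidate schedules of n tasks each; every
    // dimension holds a value in [0, m), as in Equation (7).
    static double[][] init(int N, int n, int m, Random rng) {
        double[][] pop = new double[N][n];
        for (int i = 0; i < N; i++)
            for (int j = 0; j < n; j++)
                pop[i][j] = rng.nextDouble() * m;
        return pop;
    }

    // Assumption: flooring a dimension's value yields the index of the VM
    // assigned to that task (clamped to m - 1 for safety).
    static int vmOf(double x, int m) {
        return Math.min((int) Math.floor(x), m - 1);
    }

    public static void main(String[] args) {
        double[][] pop = init(5, 10, 7, new Random(1));
        System.out.println(vmOf(pop[0][0], 7)); // some VM index in 0..6
    }
}
```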

5.2. Adaptive Scaling Factor (F)

The mutation stage in differential evolution has a scaling factor, namely F, responsible for determining the step sizes. In the classical DE algorithm, this factor takes a constant positive value that is unchanged during the whole optimization process. Hence, if the population diversity is high and the value of this scaling factor is also high, the step size will take the search a long distance from the current solution; this even occurs at the beginning of the optimization process, to encourage the algorithm to extensively explore the search space and find the promising regions that might contain the best-so-far solution. Afterwards, the population diversity might decrease as the current iteration increases, while the scaling factor remains constant. However, what will happen if the population diversity remains high during the entire optimization process? If the population diversity remains high and the scaling factor is constant, then the generated step sizes are also high; hence, the algorithm will always search for a better solution in regions far from the current solution, even though the near-optimal solution might lie near to it. A similar problem occurs when the scaling factor is small and the population diversity is high. Therefore, in this study, we reformulated this factor to be dynamically updated according to the current iteration: it maximizes the exploration operator of the classical DE algorithm at the beginning of the optimization process, to search for the most promising region within the search space; then, as the current iteration increases, this exploration is gradually converted into exploitation, to focus more on this promising region, in order to achieve a better solution, in addition to accelerating the convergence. Based on this, the shortcomings of the classical DE with a constant scaling factor are largely avoided.
Finally, the adaptive scaling factor is generated based on the current iteration, using the following formula:
F = α × (T − t) / T        (8)
where T indicates the maximum iteration, and t stands for the current iteration. α is a predefined constant value.
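Equation (8) amounts to a linear decay of F from α down to 0 over the run, which can be sketched and sanity-checked in a few lines (the value α = 0.8 below is an arbitrary example, not the paper's tuned setting):

```java
public class AdaptiveF {
    // Adaptive scaling factor of Equation (8): F decays linearly from alpha
    // at t = 0 (exploration) toward 0 at t = T (exploitation).
    static double scalingFactor(double alpha, int t, int T) {
        return alpha * (T - t) / (double) T;
    }

    public static void main(String[] args) {
        System.out.println(scalingFactor(0.8, 0, 100));   // 0.8 : full exploration
        System.out.println(scalingFactor(0.8, 50, 100));  // 0.4 : halfway
        System.out.println(scalingFactor(0.8, 100, 100)); // 0.0 : pure exploitation
    }
}
```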

5.3. Convergence Improvement Strategy

The mutation scheme described previously for generating the mutant vector in the classical DE is based on three solutions selected randomly from the current population. As a consequence, the exploration operator may be directed to explore the regions surrounding one of these solutions, despite the fact that the near-optimal solution may be in a region close to the best-so-far solution. Therefore, in this study, an additional strategy, namely a convergence improvement strategy (CIS), is proposed to exploit the regions around the best-so-far solution, to improve the convergence ability of the standard DE. This strategy initially generates a vector, namely v_i^t, including numerical values generated around the best-so-far solution, using the following formula:
v_i^t = X*^t + r ⊗ X_i^t − r_1 ⊗ X*^t        (9)
where X*^t indicates the best-so-far solution at the current iteration t, and r and r_1 are two vectors including numerical values generated randomly between 0 and 1. Following this, another vector, namely a trial vector T_i^t, will be generated by conducting a crossover operation between X*^t and v_i^t under a crossover probability (P) predetermined according to experiments. Briefly, the trial vector will be generated according to the following formula:
T_{i,j}^t = v_{i,j}^t,    if r_2 ≤ P
T_{i,j}^t = X*_j^t,    otherwise        (10)
where r_2 is a numerical value generated randomly between 0 and 1. Finally, the fitness value of the trial vector T_i^t will be compared with that of the current solution X_i^t; if the fitness of this trial vector is better, then it will be used in the next iteration; otherwise, the current solution is retained in the next generation. However, applying this strategy to all individuals in the population might deteriorate the exploration operator of the classical DE, because it might reduce the population diversity. Therefore, this strategy has to be applied to only a number N′ of the solutions, to avoid this shortcoming; this number is estimated using the following formula:
N′ = N × κ        (11)
where N is the number of individuals in the population, and κ is the percentage of the individuals that will be extracted to be updated using the convergence improvement strategy. Finally, the steps of the CIS strategy are listed in Algorithm 1.
Algorithm 1. Convergence improvement strategy (CIS)
1. for i = 0 to N′
2.   Generate the mutant vector v_i^t using Equation (9)
3.   for j = 0 to n
4.     r_2: a number generated randomly between 0 and 1
5.     if (r_2 < P)
6.       T_{i,j}^t = v_{i,j}^t
7.     else
8.       T_{i,j}^t = X*_j^t
9.     end if
10.  end for
11.  if (f(T_i^t) < f(X_i^t))
12.    X_i^t = T_i^t
13.  end if
14.  Replace the best-so-far solution X*^t with T_i^t if T_i^t is better
15. end for
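The per-individual CIS update can be sketched as follows. Note that the exact form of Equation (9) is a reconstruction from the garbled source text, so this sketch (with our own helper names) should be treated as illustrative rather than a faithful reimplementation:

```java
import java.util.Random;

public class CIS {
    // One CIS update: a vector is generated around the best-so-far solution
    // (our reading of Equation (9)), then crossed with the best solution
    // under probability P, as in Equation (10) / Algorithm 1.
    static double[] cisTrial(double[] best, double[] xi, double P, Random rng) {
        int n = best.length;
        double[] trial = new double[n];
        for (int j = 0; j < n; j++) {
            // v_j = X*_j + r * X_i,j - r1 * X*_j, with r, r1 random in [0, 1)
            double vj = best[j] + rng.nextDouble() * xi[j] - rng.nextDouble() * best[j];
            trial[j] = (rng.nextDouble() < P) ? vj : best[j];
        }
        return trial;
    }

    public static void main(String[] args) {
        double[] best = {1.0, 2.0, 3.0}, xi = {2.0, 2.0, 2.0};
        System.out.println(cisTrial(best, xi, 0.6, new Random(7)).length);
    }
}
```

Applying this update to only N′ of the N individuals, as Equation (11) prescribes, preserves the diversity of the remaining population.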

5.4. Hybrid Differential Evolution

The classical DE is hybridized with the convergence improvement strategy and adaptive scaling factor, to produce a new variant dubbed hybrid differential evolution (HDE), which has a better search ability when tackling task scheduling in cloud computing. Another advantage of HDE is that it has the ability to accelerate the convergence speed in the direction of the near-optimal solution. The steps of HDE are described in Algorithm 2. This algorithm starts by initializing N solutions within the search space of the scheduling problem in cloud computing; the search space of this problem ranges between 0 and the number of virtual machines (m − 1). Then, the initialized solutions are evaluated, and the best-so-far solution X*^t that has the lowest fitness value is identified, to be employed for guiding some of the solutions within the optimization process towards a better solution. Line 3 of Algorithm 2 initializes the current iteration t with a value of 0. After this, the optimization process is triggered to update the initialized solutions and reach better solutions; this process starts by checking the termination condition, as shown in Line 4. Then, Lines 5–6 update the adaptive scaling factor, which is responsible for improving the exploration and exploitation capability of the proposed algorithm. Following this, the solutions are updated, to generate the trial vector based on both the current solution and the mutant vector. In the next step, this trial vector is compared with the current solution, to determine the one that will be preserved in the population for the next generation. This process is repeated until the termination condition is satisfied. Finally, the flowchart of HDE is shown in Figure 3.
Algorithm 2. Hybrid differential evolution (HDE)
1. Initialize N solutions, X_i, i = 1, 2, …, N, using Equation (7)
2. Evaluate the initialized solutions and identify X*^t
3. t = 0
4. while (t < T)
5.   Update F using Equation (8)
6.   F = F
7.   for i = 0 to N
8.     Generate the mutant vector v_i^t using Equation (1)
9.     j_r = an integer generated randomly between 1 and n
10.    for j = 0 to n
11.      r_2: a number generated randomly between 0 and 1
12.      if (r_2 < CR or j = j_r)
13.        u_{i,j}^t = v_{i,j}^t
14.      else
15.        u_{i,j}^t = X_{i,j}^t
16.      end if
17.    end for
18.    if (f(u_i^t) < f(X_i^t))
19.      X_i^t = u_i^t
20.    end if
21.    Replace the best-so-far solution X*^t with u_i^t if u_i^t is better
22.  end for
23.  Update N′ solutions using the CIS algorithm (Algorithm 1)
24.  t = t + 1
25. end while
26. Return X*^t

5.5. Time Complexity of HDE

In this part of the article, the time complexity is expressed using big-O notation, so that the superior speed of the suggested approach can be demonstrated. To begin, the following are the primary factors that have the most significant impact on the running time of the proposed algorithm: the population size N, the number of tasks n, the maximum iteration T, and the time complexity of Algorithm 1. Generally, the time complexity of HDE can be expressed as follows:
T_HDE = T_DE + T_CIS        (12)
The time complexity of the standard DE depends solely on the first three factors, N, n, and T, as shown in Algorithm 2, and is formulated as follows:
T_DE = O(T × n × N)        (13)
The time complexity of the CIS also depends on the first three factors, with the exception of N, which is replaced by N . In general, the time complexity of CIS can be expressed as follows:
T CIS = O T n N
By substitution, Equation (12) can be reformulated as follows:
T HDE = O T n N + O T n N
From this equation, the term that has the highest growth rate is O T n N , because N is always greater than N . Therefore, the time complexity of HDE in big-O is O T n N .

6. Results and Simulation

Several experiments were conducted in this study to show the efficiency of the proposed algorithm. HDE was compared to several heuristic and metaheuristic approaches: the metaheuristic algorithms were the sine cosine algorithm (SCA) [34], whale optimization algorithm (WOA) [35], slime mold algorithm (SMA) [36], equilibrium optimizer (EO) [37], grey wolf optimizer (GWO) [38], and classical DE [39]; the heuristic algorithms were first come first served (FCFS) [40], the round robin (RR) algorithm [41], and the shortest job first (SJF) scheduler [40]. These algorithms were executed in 30 independent runs on randomly generated datasets, with the number of tasks ranging between 100 and 3000; these datasets are labeled T100 for a task size of 100, T200 for a task size of 200, and so on. Five VMs were selected for scheduling these task datasets, where the communication time and execution time of each task were randomly generated to determine the ability of each VM to implement each task, taking into consideration that the workload of each task differs from that of the other tasks. The performance evaluation metrics used to compare the algorithms were the best, average, worst, and standard deviation of the outcomes obtained over the 30 independent runs, in addition to a Wilcoxon rank-sum test, which was employed to identify whether there were differences between the outcomes of the proposed algorithm and those of the rival optimizers. All of the algorithms were implemented in the Java programming language on a personal computer with the following specifications: Windows 10 operating system, Intel® Core™ i7-4700MQ processor running at 2.40 GHz, and 16 GB of installed memory.
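The randomly generated execution-time data described above can be sketched as follows; the value range maxTime and the seed are illustrative assumptions, since the paper does not state the sampling ranges used.

```java
import java.util.Random;

public class DatasetGenerator {
    // Random per-task, per-VM times (e.g., execution or communication time).
    // times[i][j] = time of task i on VM j, drawn uniformly from [0, maxTime).
    // maxTime and seed are illustrative; the paper's actual ranges are not stated.
    public static double[][] randomTimes(int n, int m, double maxTime, long seed) {
        Random rnd = new Random(seed);
        double[][] times = new double[n][m];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++)
                times[i][j] = rnd.nextDouble() * maxTime;
        return times;
    }
}
```

A T100 instance on the paper's setup would correspond to `randomTimes(100, 5, …, seed)` with one matrix for execution times and one for communication times.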
The parameters of all the compared algorithms were set to the values reported in the cited papers, to ensure a fair comparison. However, two parameters had to be the same for all the algorithms: the population size (N) and the maximum number of iterations (T), which were set to 25 and 1500, respectively. Regarding the parameters of the classical DE and the proposed HDE, several experiments were conducted to estimate the optimal value of each parameter. The classical DE has two main parameters, F and Cr, whose values were tuned through these experiments; the best values were found to be 0.01 for Cr and 0.1 for the scaling factor (F). For HDE, the Cr and α parameters were set to the same values as Cr and F in the classical DE. However, HDE has two additional parameters, P and κ, which were likewise estimated experimentally; the best values for P and κ were 0.01 and 0.2, respectively. Table 1 presents the parameter values of both HDE and DE.
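Equation (8), defined earlier in the paper, makes the scaling factor depend on the current iteration. As an illustrative placeholder only, not the paper's exact formula, a linearly decaying, randomly perturbed schedule with α = 0.1 behaves as the text describes: larger values early (favoring exploration) and smaller values late (favoring exploitation).

```java
import java.util.Random;

public class AdaptiveF {
    // Illustrative iteration-dependent scaling factor (a placeholder for the
    // paper's Equation (8)): decays linearly from alpha toward 0 over T
    // iterations, perturbed by a uniform random number in [0, 1).
    public static double scalingFactor(double alpha, int t, int T, Random rnd) {
        return alpha * (1.0 - (double) t / T) * rnd.nextDouble();
    }
}
```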

6.1. Comparison with Metaheuristic Algorithms

Because metaheuristic algorithms are stochastic optimization techniques, they require at least 10 independent runs to produce statistically meaningful results. In this study, each algorithm was run 30 times independently, and the results over these runs are reported. Table A1 shows the metrics of the best fitness values reached by each algorithm in the last iteration over all independent runs. These metrics include the mean, best, and worst values, as well as the standard deviation (SD). The top results in this table are highlighted in bold. This table demonstrates that HDE is comparable to DE and superior to all the other algorithms for the 100-task size. With growing task sizes, HDE's superiority becomes more apparent, making it the best option for all task sizes greater than 100. Additionally, to further show the efficiency of HDE, the averages of the best, worst, and mean fitness values obtained over all task sizes were computed and are presented in Figure 4, which affirms that HDE was the best for all shown metrics. Figure 5 shows the average (Avg) standard deviation (SD) obtained by each algorithm over all task sizes; inspecting this figure shows that HDE was more stable than the other algorithms. After discussing the effectiveness of HDE in terms of the fitness value, its capacity to reduce the makespan is discussed in the next paragraph.
The metrics that represent the best makespan values reached by each method in the last iteration across all independent runs are presented in Table A2: the best, worst, and average values, as well as the standard deviation. The best outcomes are displayed in bold. This table illustrates that HDE was superior to all the other algorithms for the 100-task size and on par with DE; its superiority became more evident with increasing task sizes, making it the best choice for all task sizes greater than 100. In addition, to demonstrate the effectiveness of HDE more comprehensively, the mean, best, and worst makespan values obtained for the different task sizes were averaged and are shown in Figure 6, which demonstrates that HDE was the best method for each of the presented metrics. Figure 7 displays the average of the standard deviation (SD) values obtained by each algorithm over all task sizes; a closer look at this figure reveals that HDE was considerably more stable than the other algorithms. Having discussed the effectiveness of HDE in terms of the fitness value and makespan, the following paragraph discusses its ability to reduce the total execution time required by all VMs to finish the tasks assigned to them.
The average, worst, best, and SD values obtained by analyzing the total execution times achieved by the investigated metaheuristic algorithms over 30 independent runs are reported in Table A3. Based on these results, it is clear that HDE performed as well as DE and was superior to all the other algorithms for the 100-task size; its superiority became more apparent as the task size increased, making it the best option for all task sizes greater than 100. To showcase the efficacy of HDE more thoroughly, the averages of the mean, best, and worst total execution time values obtained for the various task sizes were computed and are given in Figure 8; this figure illustrates that HDE was the best strategy for each of the presented metrics. Figure 9 depicts the average of the standard deviation (SD) values obtained by each method over all task sizes; a closer look shows that HDE was significantly more stable than the other algorithms. Finally, from all the preceding experiments and discussions, it is concluded that HDE is a strong alternative scheduler for finding the near-optimal scheduling of tasks in cloud computing, minimizing both the makespan and the total execution time.
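Both objectives can be computed directly from a task-to-VM assignment and a matrix of per-task execution times. The sketch below uses the standard definitions (makespan as the load of the busiest VM, total execution time as the summed load of all VMs); the paper's full fitness formulation additionally accounts for communication time.

```java
public class ScheduleMetrics {
    // times[i][j] = execution time of task i on VM j; assign[i] = VM chosen for task i.
    public static double[] vmLoads(double[][] times, int[] assign, int m) {
        double[] load = new double[m];
        for (int i = 0; i < assign.length; i++) load[assign[i]] += times[i][assign[i]];
        return load;
    }
    // Makespan: finishing time of the busiest VM.
    public static double makespan(double[][] times, int[] assign, int m) {
        double max = 0;
        for (double l : vmLoads(times, assign, m)) max = Math.max(max, l);
        return max;
    }
    // Total execution time: summed load over all VMs.
    public static double totalExecutionTime(double[][] times, int[] assign, int m) {
        double sum = 0;
        for (double l : vmLoads(times, assign, m)) sum += l;
        return sum;
    }
}
```

For example, assigning tasks with execution times {2, 3, 4} so that tasks 0 and 2 share one VM yields a makespan of 6 and a total execution time of 7.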
Table A4 compares the proposed HDE algorithm to the other algorithms using a Wilcoxon rank-sum test with a 5% significance level [42]. The null hypothesis and the alternative hypothesis were both tested in this experiment. The null hypothesis states that there is no difference between the two compared methods; it is accepted when the p-value shown in Table A4 is greater than the significance level. In contrast, the alternative hypothesis asserts that there is a difference between the results obtained by a pair of algorithms; it is supported when the p-value is less than the significance level. According to Table A4, the alternative hypothesis holds in all instances, highlighting the difference between the outcomes of the proposed algorithm and those of the other algorithms.
The boxplot presented in Figure 10 shows a five-number summary, comprising the minimum, maximum, median, and first and third quartiles of the fitness values achieved by each algorithm over 30 independent runs. Examining this figure demonstrates that HDE was superior in terms of all five summary statistics for all of the investigated task sizes.
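The five-number summary underlying such a boxplot can be computed from the 30 per-run fitness values as follows; the quartile convention used here (linear interpolation between closest ranks) is one common choice and may differ from the plotting tool used by the authors.

```java
import java.util.Arrays;

public class FiveNumber {
    // Quantile by linear interpolation between closest ranks (the R-7 convention).
    static double quantile(double[] sorted, double q) {
        double pos = q * (sorted.length - 1);
        int lo = (int) Math.floor(pos), hi = (int) Math.ceil(pos);
        return sorted[lo] + (pos - lo) * (sorted[hi] - sorted[lo]);
    }
    // Returns {min, Q1, median, Q3, max} for the given sample.
    public static double[] summary(double[] sample) {
        double[] s = sample.clone();
        Arrays.sort(s);
        return new double[] { s[0], quantile(s, 0.25), quantile(s, 0.5),
                              quantile(s, 0.75), s[s.length - 1] };
    }
}
```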
Finally, HDE and the other algorithms were compared based on their convergence rates for the fitness function. Based on the convergence curves presented in Figure 11 for task sizes 100, 200, 300, 400, 500, 600, 700, 800, and 900, HDE achieved superior convergence towards the optimal solution for all depicted task sizes. Figure 12 shows a comparison among the algorithms in terms of CPU time, providing the total CPU time in seconds for running each algorithm with different task sizes. HDE had a slightly higher computational cost than the traditional DE, but it can provide a better QoS to users, because it achieves a smaller makespan value for the majority of task sizes.

6.2. Comparison with Some Heuristic Algorithms

In this section, the proposed algorithm is compared with some well-known heuristic schedulers, namely FCFS, SJF, and round robin, to further verify the superiority of HDE in tackling the task scheduling problem in cloud computing. Each method was run 30 times independently. Table A5 shows the average, worst, and best values, as well as the standard deviations, achieved by the proposed approach and by each heuristic algorithm for each task size. The results in this table show that HDE performed significantly better than all of the heuristic algorithms. In addition, to demonstrate the effectiveness of HDE more comprehensively, the averages of the best, worst, and mean makespan values obtained over all task sizes were computed and are shown in Figure 13, which confirms that HDE was the most effective method for all of the presented metrics. Figure 14 depicts the average (Avg) and standard deviation (SD) of the makespan values obtained by each algorithm across all task sizes; a closer look at this figure reveals that HDE was significantly more stable than the other algorithms. It is worth mentioning that the heuristic algorithms have a negligible computational cost compared to the metaheuristic algorithms, but their solutions are poor in comparison. As a result, if the goal is to provide a better quality of service to users, computational cost aside, the proposed algorithm is the most effective; on the other hand, if the computational cost matters more than the quality of service, the heuristic algorithms, specifically FCFS, are the best choice.
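For reference, FCFS and SJF can be sketched as greedy assignments of tasks to the earliest-available VM, with SJF processing tasks shortest-first; the task lengths and the single-queue model below are illustrative simplifications of the cited schedulers.

```java
import java.util.Arrays;

public class HeuristicSchedulers {
    // FCFS: assign tasks in arrival order to the VM that becomes free earliest.
    public static int[] fcfs(double[] taskLength, int m) {
        double[] free = new double[m];            // time at which each VM is next free
        int[] assign = new int[taskLength.length];
        for (int i = 0; i < taskLength.length; i++) {
            int best = 0;
            for (int j = 1; j < m; j++) if (free[j] < free[best]) best = j;
            assign[i] = best;
            free[best] += taskLength[i];
        }
        return assign;
    }
    // SJF: same greedy assignment, but tasks are processed shortest-first.
    public static int[] sjf(double[] taskLength, int m) {
        Integer[] order = new Integer[taskLength.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        Arrays.sort(order, (a, b) -> Double.compare(taskLength[a], taskLength[b]));
        double[] free = new double[m];
        int[] assign = new int[taskLength.length];
        for (int idx : order) {
            int best = 0;
            for (int j = 1; j < m; j++) if (free[j] < free[best]) best = j;
            assign[idx] = best;
            free[best] += taskLength[idx];
        }
        return assign;
    }
}
```

Both run in O(n · m) time, which is why their computational cost is negligible next to a population-based metaheuristic iterated 1500 times.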

6.3. Simulation Results

The CloudSim platform was utilized to carry out simulations of the task scheduling process. Researchers all over the world make use of CloudSim because it is a full-fledged simulation toolkit that enables modelling and simulation of real-world cloud infrastructure and application provisioning [12]. In this paper, CloudSim was configured with the parameter settings described in Table 2. In addition, the number of tasks ranged between 100 and 1000, in steps of 100, to check the scalability of the proposed algorithm in comparison to the other metaheuristic algorithms. After conducting the simulation, the makespan values obtained by the various metaheuristic algorithms implemented within the CloudSim platform are presented in Table 3 for each task size. Inspecting this table shows the effectiveness of HDE in comparison to the other algorithms, since it had the lowest makespan values for all task sizes. This is confirmed by the last row of the same table, which contains the average of each column; this row reveals that HDE achieved an average makespan of 151,479.2179, the smallest value in that row.

7. Conclusions and Future Work

This paper has presented a new scheduler, namely hybrid differential evolution (HDE), for the task scheduling problem in the cloud computing environment. This scheduler is based on two proposed improvements to the classical differential evolution. The first improvement enhances the scaling factor, so that it involves numerical values generated dynamically based on the current iteration, to improve both the exploration and exploitation operators; the second improvement is designed to further strengthen the exploitation operator of the classical DE, to achieve better outcomes in a smaller number of iterations. Several experiments were performed using randomly generated datasets and the CloudSim simulator, to verify the efficiency of HDE. In addition, HDE was compared with several heuristic and metaheuristic algorithms, namely the slime mold algorithm (SMA), equilibrium optimizer (EO), sine cosine algorithm (SCA), whale optimization algorithm (WOA), grey wolf optimizer (GWO), classical DE, first come first served (FCFS), the round robin (RR) algorithm, and the shortest job first (SJF) scheduler. Makespan and total execution time values were obtained during the experiments for various task sizes, ranging between 100 and 3000. The findings indicated that HDE produced effective results compared to the other investigated metaheuristic and heuristic algorithms; as a result, HDE was the most effective metaheuristic scheduling algorithm among the investigated approaches. In future work, HDE will be employed to tackle several other optimization problems, such as the DNA fragment assembly problem, the image segmentation problem, the 0-1 knapsack problem, and the task scheduling problem in fog computing.

Author Contributions

Conceptualization, M.A.-B., R.M. and W.A.E.; methodology, M.A.-B., R.M., W.A.E. and K.M.S.; software, M.A.-B., R.M. and W.A.E.; validation, M.A.-B., R.M., W.A.E., M.S. and K.M.S.; formal analysis, M.A.-B., R.M., W.A.E. and K.M.S.; investigation, M.A.-B., R.M., W.A.E. and K.M.S.; data curation, M.A.-B., R.M., W.A.E., M.S. and K.M.S.; writing—original draft preparation, M.A.-B., R.M. and W.A.E.; writing—review and editing, M.A.-B., R.M., W.A.E., M.S. and K.M.S.; visualization, M.A.-B., R.M., W.A.E. and K.M.S.; supervision, M.A.-B.; funding acquisition, M.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Comparison with Some Heuristic and Metaheuristic Algorithms

Table A1. Comparison with some metaheuristic algorithms for the fitness values.
TS      EO      SCA     GWO     WOA     SMA     DE      HDE
T100Best18,265.11624,611.80924,088.11519,972.83724,613.26415,150.55315,137.092
Avg19,571.00025,528.00025,375.00021,258.00025,704.00015,196.00015,274.000
Worst21,498.01026,402.73926,051.05122,205.81326,752.88815,257.97815,558.919
SD713.524463.258424.731558.666521.10430.57291.663
T200Best42,426.36248,950.80849,292.42642,546.31649,025.03529,373.84128,909.445
Avg44,394.00050,538.00050,422.00044,527.00050,967.00029,627.00029,231.000
Worst45,476.44651,690.91451,790.67246,474.19852,320.87630,126.85729,605.929
SD828.993778.633618.1341021.092811.779184.835184.273
T300Best69,067.19174,866.30074,062.85066,476.67775,546.10946,803.51643,810.658
Avg71,383.00076,534.00077,081.00069,943.00076,864.00047,615.00044,284.000
Worst73,579.77778,080.75478,483.05173,289.18278,743.00648,568.44944,801.076
SD1134.455867.2211041.3961443.511874.786430.618255.008
T400Best95,440.91198,432.59099,668.26292,093.90699,673.09064,938.82258,147.125
Avg98,176.000102,045.000102,211.00094,585.000102,252.00066,478.00058,708.000
Worst101,000.236104,376.315104,112.79798,252.609105,053.62267,814.98159,395.477
SD1348.6191285.9761075.1391514.4001256.732701.588323.410
T500Best120,119.573123,408.542123,758.958115,201.767125,660.98884,738.67572,282.977
Avg123,307.000127,282.000127,133.000119,111.000127,791.00086,195.00073,084.000
Worst125,568.639129,252.182129,228.583122,399.771129,623.39587,526.69674,219.055
SD1440.6331187.1521291.7051697.175919.858809.404463.869
T600Best145,919.544149,987.437150,121.722142,155.751148,997.028106,351.44388,355.874
Avg148,462.000152,813.000152,648.000144,351.000152,621.000108,316.00089,783.000
Worst151,270.871155,095.573154,638.766146,586.770155,053.210110,123.04791,415.701
SD1292.9041264.7451003.2651346.2391435.543980.036646.848
T700Best171,248.666174,068.432172,269.168162,866.453171,421.871128,092.342103,753.063
Avg173,404.000176,365.000176,581.000168,979.000176,401.000129,789.000105,350.000
Worst177,380.381178,450.265180,508.600174,375.747179,642.085131,573.149107,198.432
SD1388.0311166.3681703.2442689.9291563.520800.844750.188
T800Best197,457.353200,198.689199,590.942190,388.160200,882.988149,519.095117,842.491
Avg201,048.000203,729.000203,492.000195,286.000203,970.000152,379.000120,551.000
Worst204,902.594206,534.694207,776.148200,825.073206,878.190154,292.126123,731.662
SD1683.4701656.5651899.3812196.4111523.4781199.1941176.508
T900Best223,704.461226,558.832226,409.487217,996.729226,114.867172,082.041135,865.435
Avg226,980.000229,924.000230,362.000222,388.000229,935.000174,181.000137,631.000
Worst229,921.195233,724.590233,488.493225,639.051233,651.341176,793.002139,246.915
SD1673.0491834.1091787.1591794.5451706.1941229.686863.086
T1000Best247,875.320249,814.443250,647.285242,389.074250,741.549194,418.935152,868.454
Avg251,792.000253,414.000253,521.000246,631.000253,945.000196,598.000155,350.000
Worst254,529.450257,498.878256,169.629250,551.728256,575.530198,807.888157,666.486
SD1615.8101823.7111598.9831604.4131469.1671146.3931329.473
T1200Best272,161.798275,012.674273,177.920266,930.950275,918.637215,439.409168,684.208
Avg276,665.000278,762.000278,623.000272,150.000279,459.000218,984.000172,241.000
Worst279,816.799282,024.626282,025.504277,318.739281,727.579220,993.332175,396.503
SD1654.5051979.4622136.5472217.5061530.6591458.2541543.670
T1500Best297,700.056301,895.921300,516.283289,987.072301,107.227240,484.692188,831.791
Avg302,389.000304,816.000305,067.000297,244.000304,918.000243,074.000192,415.000
Worst306,262.646306,790.966307,331.538300,181.346308,731.491245,362.366195,299.922
SD2105.2561457.2851731.1712124.4101735.1421424.9561558.728
T2000Best324,305.532328,468.897328,886.931318,205.762325,580.330265,211.125209,699.639
Avg329,160.000331,652.000331,396.000323,903.000330,530.000268,229.000212,927.000
Worst332,916.550335,830.685334,367.352329,829.207334,457.438270,526.428216,136.640
SD2167.0101789.4371634.8252473.6371970.3161248.8391521.468
T2500Best349,691.604350,883.081352,358.092343,367.024351,080.666286,532.304224,756.706
Avg353,786.000355,235.000355,558.000348,528.000355,706.000289,731.000228,761.000
Worst357,011.837359,479.690358,061.266352,923.652360,249.932292,567.734233,007.159
SD1762.2021972.1541614.7742190.9292216.0891639.3961996.000
T3000Best374,687.613374,116.100378,044.013368,165.769374,348.289308,332.567247,888.247
Avg380,344.000382,277.000382,538.000375,197.000381,730.000314,351.000250,242.000
Worst384,988.196386,465.343386,075.515380,355.337386,174.587317,335.349253,382.767
SD2444.0722658.8102068.0512672.6522505.1082100.2871509.886
The bold values indicate the best values.
Table A2. Comparison among algorithms for the makespan values.
TS      EO      SCA     GWO     WOA     SMA     DE      HDE
T100Best9438.67211,947.63712,712.77710,884.46311,975.7816944.8826944.882
Avg10,516.00012,859.00013,645.00011,750.00012,807.0007010.0007013.000
Worst12,883.04014,826.33715,092.45712,654.81913,823.6567137.4367129.975
SD653.494683.114571.680447.619553.51040.35443.224
T200Best22,547.02823,002.45623,210.79021,671.23023,050.30813,431.82713,185.932
Avg24,472.00024,684.00025,148.00023,759.00024,660.00013,594.00013,348.000
Worst26,066.68926,378.08627,355.24825,403.57026,345.01114,041.47313,538.203
SD829.519960.6011163.940882.523888.877129.34591.631
T300Best36,947.69934,559.88034,731.94634,232.69035,303.61821,389.49719,972.977
Avg38,439.00036,801.00037,327.00036,425.00036,938.00021,878.00020,194.000
Worst40,431.35638,900.22841,404.62339,453.80739,017.01922,363.34620,472.578
SD976.881909.5591666.8571116.936854.587228.565127.459
T400Best48,955.36745,647.85246,915.15246,597.80946,455.14129,617.29226,444.417
Avg51,779.00048,799.00048,556.00048,947.00048,528.00030,500.00026,731.000
Worst55,070.04051,597.17952,287.56251,820.39150,765.24331,290.14527,121.008
SD1509.9221383.7631272.4351346.9701208.233400.760161.878
T500Best61,001.75057,658.76057,566.24158,607.79259,016.13338,674.03632,957.020
Avg64,916.00060,584.00060,244.00060,900.00060,608.00039,487.00033,269.000
Worst67,560.24364,112.11762,532.08563,017.76262,815.30140,093.78833,819.042
SD1560.2911421.0791234.8211193.972940.158417.670224.071
T600Best73,229.92569,331.15369,586.53369,647.49668,512.66848,727.43840,158.207
Avg76,445.00072,328.00072,067.00072,844.00072,158.00049,612.00040,845.000
Worst80,532.89577,530.87675,054.43777,227.79374,926.26950,879.79041,495.153
SD1701.6051594.4301333.4271659.7491515.736541.237292.474
T700Best84,822.01081,046.92880,384.64279,314.56079,554.61958,557.29747,217.700
Avg88,690.00083,167.00083,246.00084,249.00082,848.00059,386.00047,921.000
Worst93,188.46085,347.31586,615.12787,747.23786,999.87060,544.01948,675.067
SD2128.5151231.3051612.7751790.7311662.438460.926330.657
T800Best96,380.44292,521.92192,321.70491,262.03593,067.23268,677.52653,779.212
Avg100,603.00095,559.00095,873.00096,413.00096,213.00069,688.00054,867.000
Worst106,564.90598,404.95199,524.222100,831.242101,401.67970,661.79256,282.630
SD2414.5871423.8262017.7532166.0842066.140513.065545.623
T900Best108,229.730103,645.560104,628.113105,548.365104,277.13378,792.10461,758.499
Avg113,582.000107,669.000107,999.000109,495.000107,688.00079,775.00062,626.000
Worst120,148.440112,419.694114,072.174111,749.833111,998.49881,359.46263,399.167
SD2850.5892007.7372051.1891641.8011715.424624.858408.529
T1000Best118,736.975115,138.440115,625.172116,100.770116,206.11888,832.44969,515.446
Avg123,279.000118,956.000119,125.000120,234.000118,929.00090,012.00070,684.000
Worst129,183.176122,706.004122,156.907124,499.643124,431.72893,798.98471,658.817
SD2489.8811666.8491659.8432417.3311778.316874.974620.257
T1200Best129,795.983126,954.152126,249.252127,570.871127,972.63398,118.16076,652.611
Avg134,908.000130,688.000130,519.000132,035.000130,779.000100,158.00078,409.000
Worst141,397.459135,719.950134,650.907139,766.474136,984.431101,266.79679,725.686
SD2309.2791870.2312018.7532729.9332251.355774.466705.732
T1500Best138,804.579138,898.602139,292.203139,693.196138,606.731109,581.93685,953.213
Avg145,796.000142,250.000142,775.000143,012.000142,380.000111,340.00087,520.000
Worst153,587.370145,607.443150,992.953147,576.341146,704.239114,272.27489,009.316
SD3283.7581526.5832390.0771991.7931614.1241072.070721.108
T2000Best151,784.006151,264.096150,594.060149,722.432151,191.472121,190.06395,349.975
Avg157,611.000155,420.000154,648.000155,396.000154,322.000122,618.00096,813.000
Worst166,344.617160,214.293159,508.888161,005.512159,349.068123,936.50698,162.927
SD3677.9712504.0312137.7042628.6601852.858637.741698.603
T2500Best160,529.235160,637.179162,072.574161,144.576161,100.021130,477.400102,136.632
Avg169,506.000165,738.000166,038.000166,450.000165,581.000132,810.000104,022.000
Worst179,163.135171,771.518169,385.501173,371.558169,513.351134,690.256106,190.281
SD4406.0472343.3411588.8462503.8642282.155925.081933.183
T3000Best172,070.805172,887.505173,311.921174,193.094173,026.585142,122.392112,625.384
Avg181,066.000177,940.000178,119.000178,707.000177,882.000144,111.000113,811.000
Worst187,904.224184,878.509182,574.837183,514.827181,860.483147,212.405115,274.338
SD3690.5482693.0452364.7942069.6852493.5561188.774701.579
The bold values indicate the best values.
Table A3. Comparison among algorithms for total execution time.
Table A3. Comparison among algorithms for total execution time.
TS      EO      SCA     GWO     WOA     SMA     DE      HDE
T100Best38,115.00350,570.64349,339.02840,070.18252,314.20934,167.69934,184.939
Avg40,701.00055,087.00052,746.00043,443.00055,795.00034,298.00034,550.000
Worst43,980.02057,190.51456,010.40746,197.99157,836.61034,461.03535,226.454
SD1093.3901509.7961560.9001514.7011342.05774.832228.008
T200Best86,987.143105,072.258103,976.81587,848.691108,440.50666,302.97065,521.241
Avg90,879.000110,862.000109,394.00092,988.000112,350.00067,038.00066,290.000
Worst95,059.361114,933.861113,761.79698,376.422115,141.89467,843.83767,119.554
SD2114.1522408.7202660.7142577.6791907.367414.783429.283
T300Best140,739.328164,556.186161,414.947141,130.001165,788.713105,870.99599,269.263
Avg148,252.000169,244.000169,843.000148,151.000170,026.000107,669.000100,496.000
Worst151,986.556172,666.021175,454.835155,885.560174,514.307109,713.690101,829.995
SD2883.5411929.7473292.2973351.1972029.9731011.817610.498
T400Best197,268.901220,893.125220,314.876194,313.759223,848.304146,681.999132,120.112
Avg206,434.000226,286.000227,406.000201,072.000227,607.000150,428.000133,321.000
Worst212,775.565233,178.780233,772.309213,052.367234,076.000153,333.362134,949.635
SD3429.7822590.3822766.7794734.1052765.6881528.621718.928
T500Best250,376.325276,708.351275,486.339245,134.200280,346.649191,901.169164,043.543
Avg259,553.000282,911.000283,207.000254,935.000284,552.000195,182.000165,984.000
Worst268,501.754288,230.427288,712.693264,972.627289,984.941198,374.617168,523.978
SD4225.9052899.3683376.5124857.5502312.0761837.1591043.599
T600Best305,869.915333,578.543335,046.072298,840.145331,419.196240,743.608200,817.097
Avg316,503.000340,612.000340,669.000311,202.000340,368.000245,292.000203,972.000
Worst322,769.104345,645.788346,242.900320,955.422346,933.177249,268.571207,896.980
SD4545.2692832.2422863.2354729.0923649.3642146.2391492.645
T700Best357,976.309386,497.338386,666.396348,756.713384,774.422290,119.422235,668.911
Avg371,070.000393,829.000394,363.000366,683.000394,693.000294,062.000239,351.000
Worst378,442.442401,305.913400,091.078380,151.696402,525.765297,527.211243,753.678
SD4873.7323716.8853533.2078468.3293321.9361805.7721741.489
T800Best426,788.885443,078.326445,461.575413,981.425446,848.296337,406.050267,323.474
Avg435,421.000456,128.000454,603.000425,991.000455,402.000345,326.000273,815.000
Worst447,835.415462,814.404465,671.838440,590.901462,478.406350,468.826281,112.735
SD5155.6944975.6504760.6116537.1173175.6123129.0912672.983
T900Best477,747.825508,820.405507,406.435474,814.038505,128.246388,859.617308,781.618
Avg491,572.000515,186.000515,877.000485,805.000515,177.000394,463.000312,644.000
Worst500,984.743522,422.130524,740.549497,009.192523,713.793401,004.355316,484.309
SD5365.5673128.1234096.0345414.4814544.0283238.2491943.345
T1000Best537,828.097557,237.156556,031.484524,852.018558,305.219439,473.809347,358.806
Avg551,656.000567,151.000567,112.000541,558.000568,982.000445,298.000352,903.000
Worst563,670.866575,833.134574,415.860553,355.183575,042.293451,669.178358,351.049
SD6829.2264411.8204635.8916413.1673339.5052816.9043011.747
T1200Best588,406.658616,501.616609,146.791585,013.689612,946.921489,188.988383,424.603
Avg607,433.000624,268.000624,202.000599,085.000626,378.000496,245.000391,183.000
Worst621,660.221633,980.837632,138.961612,439.335634,727.265501,846.993398,628.411
SD6446.2354055.8835021.4597582.4365868.5403414.3733541.414
T1500Best645,910.065678,355.899671,733.428640,448.075677,639.783545,069.758428,881.806
Avg667,774.000684,136.000683,749.000657,119.000684,171.000550,454.000437,170.000
Worst690,938.157694,111.491693,117.184666,430.377695,449.033556,975.679443,311.339
SD8351.1553736.2415531.4506816.1464114.6853010.0623530.292
T2000Best712,207.300733,380.764737,130.722702,518.639731,200.908599,158.246476,515.523
Avg729,441.000742,861.000743,809.000717,086.000741,682.000607,988.000483,857.000
Worst741,210.983752,039.100753,733.583730,721.668749,436.738613,091.297491,408.639
SD6883.1184637.3384247.1216992.1194927.3303101.0143462.359
T2500Best765,728.675787,262.395785,012.916755,291.076791,036.794646,478.360510,870.211
Avg783,772.000797,394.000797,769.000773,376.000799,331.000655,880.000519,819.000
Worst796,845.810805,270.438809,131.565789,947.532808,619.056661,680.775528,913.206
SD7897.3034431.0705327.9388487.8144819.7423862.8204496.779
T3000Best830,030.908843,649.488851,616.785810,725.718843,766.365696,078.733563,250.918
Avg845,325.000859,065.000859,517.000833,672.000857,376.000711,579.000568,580.000
Worst860,345.325868,701.031866,711.674846,150.448867,698.102718,347.423575,635.766
SD7084.1375029.4393972.8818360.4925830.9415256.2363441.967
The bold values indicate the best values.
Table A4. Comparison using a Wilcoxon rank sum test between HDE and each compared algorithm, in terms of fitness values.
TS      EO      SCA     GWO     WOA     SMA     DE
T100  p  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11  6.35455 × 10−5
      h  1  1  1  1  1  1
T200  p  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11  7.77255 × 10−9
      h  1  1  1  1  1  1
T300  p  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11
      h  1  1  1  1  1  1
T400  p  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11
      h  1  1  1  1  1  1
T500  p  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11
      h  1  1  1  1  1  1
T600  p  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11
      h  1  1  1  1  1  1
T700  p  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11
      h  1  1  1  1  1  1
T800  p  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11
      h  1  1  1  1  1  1
T900  p  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11
      h  1  1  1  1  1  1
T1000 p  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11  3.01986 × 10−11
      h  1  1  1  1  1  1
Bold font indicates that there is a difference between the outcomes of HDE and those of the competitors (h = 1: the null hypothesis is rejected at the 5% significance level).
Table A5. Comparison with heuristic algorithms for the total execution time and makespan.
Table A5. Comparison with heuristic algorithms for the total execution time and makespan.
        FCFS                    SJF                     RR                      HDE
TS      MS      Total Execution MS      Total Execution MS      Total Execution MS      Total Execution
T100Best12,344.29253,662.70612,398.82055,392.64813,303.34155,738.9976944.88234,184.939
Avg14,910.00058,624.00014,562.00059,537.00015,390.00059,010.0007013.00034,550.000
Worst17,044.31562,160.31218,442.82164,955.07420,028.54262,707.3367129.97535,226.454
SD1285.6662349.5611369.8982201.3441582.7061824.60143.224228.008
T200Best22,844.658110,292.30624,328.549112,870.66924,911.324111,094.47813,185.93265,521.241
Avg27,145.000115,596.00028,231.000117,153.00027,956.000117,593.00013,348.00066,290.000
Worst31,734.097121,304.18633,372.631124,674.72530,746.313122,266.95013,538.20367,119.554
SD1896.6452616.4472440.7612513.9211461.0832916.00191.631429.283
T300Best36,401.175167,860.71035,551.337169,794.78037,405.544170,045.36119,972.97799,269.263
Avg40,855.000174,596.00040,437.000175,831.00040,835.000174,987.00020,194.000100,496.000
Worst48,259.582181,227.23646,488.415184,826.16145,607.496182,173.61620,472.578101,829.995
SD3164.1963007.5802540.0093417.1832145.9822568.089127.459610.498
T400Best48,561.026222,538.64947,999.873226,957.70647,704.192223,741.72326,444.417132,120.112
Avg53,428.000233,554.00053,169.000233,803.00052,562.000232,425.00026,731.000133,321.000
Worst61,278.109246,043.69661,410.113241,419.65656,838.183240,902.34027,121.008134949.635
SD3014.5124164.4302703.7583817.3472201.7834461.438161.878718.928
T500Best60,671.925284,463.88060,666.928282,672.61761,382.773283,534.05532,957.020164,043.543
Avg65,297.000291,399.00066,204.000291,909.00065,410.000291,712.00033,269.000165,984.000
Worst73,599.101301,398.62876,683.101300,649.38774,272.381297,917.52733,819.042168,523.978
SD3065.8083745.9094217.9874207.0523169.8402995.869224.0711043.599
T600Best71,384.337338,853.35471,876.123339,199.14172,118.680338,253.55140,158.207200,817.097
Avg77,058.000348,454.00078,169.000347,967.00077,937.000347189.00040,845.000203,972.000
Worst83,617.806361,268.96891,370.647360,625.46686,283.611353,966.29041,495.153207,896.980
SD3655.5345785.5724347.6945222.7633596.3403873.134292.4741492.645
T700Best82,871.222389,875.87483,096.043393,296.08881,664.036394,618.24647,217.700235,668.911
Avg88,977.000403,648.00088,430.000402,422.00089,694.000404,010.00047,921.000239,351.000
Worst101,824.324415,541.87497,387.359412,616.229100,040.623421,919.79348,675.067243,753.678
SD3883.5626357.8643321.4124165.1204360.2245737.322330.6571741.489
T800Best95,526.392451,999.10795,363.041449,490.32492,544.224449,997.94753,779.212267,323.474
Avg102,116.000464,544.000101,688.000464,338.000102,840.000463,833.00054,867.000273,815.000
Worst110,433.706473,190.025113,090.328473,467.814114,386.119475,437.10156,282.630281,112.735
SD3423.4864557.4324046.0835894.8214732.5586066.622545.6232672.983
T900Best108,219.570507,238.701106,632.715512,446.913107,182.309512,662.89361,758.499308,781.618
Avg113,943.000524,054.000114,563.000523,459.000114,571.000524,364.00062626.000312,644.000
Worst122,612.509542,323.705126,840.703539,681.496123,633.119538,450.02163,399.167316,484.309
SD4002.0286410.7364548.5616594.3194337.9065915.235408.5291943.345
T1000Best115,252.394562,357.362115,970.112563,944.718117,570.996564,348.54069,515.446347,358.806
Avg123,900.000575,167.000124,553.000577018.000124,977.000576,748.00070,684.000352,903.000
Worst132,163.539584,019.817139,114.933594,497.693134,407.616590,320.83871,658.817358,351.049
SD3776.3935114.5374821.3046486.3204500.3415778.815620.2573011.747
T1200Best126,473.499622,184.636128,882.364619,891.703129,100.117620,290.71576,652.611383,424.603
Avg137,267.000637,357.000137,832.000636,864.000136,112.000634,428.00078,409.000391,183.000
Worst148,752.105656,539.187150,051.311650,397.990143,371.287647,445.07579,725.686398,628.411
SD4887.7278273.4904775.5036534.2923190.3966837.603705.7323541.414
T1500Best138,646.670679,212.397142,046.487679,305.037145,002.492684,350.82485,953.213428,881.806
Avg149,618.000693,741.000150,234.000695,673.000151,079.000696,135.00087,520.000437,170.000
Worst165,002.833707,135.205164,234.551708,533.638160,514.410708,776.62889,009.316443,311.339
SD6078.2657948.4965151.1437429.0444560.2865537.040721.1083530.292
T2000Best153,066.964737,643.035153,299.956734,958.438152,479.492740,924.56395,349.975476,515.523
Avg161,641.000754,762.000162,568.000754,581.000159,989.000752,314.00096,813.000483,857.000
Worst175,933.006763,273.414184,308.530766,039.466170,141.689770,347.95198,162.927491,408.639
SD5660.8296387.3415766.3327228.5894479.9527333.449698.6033462.359
T2500Best161,397.176794,939.206165,571.473792,458.130166,885.056799,726.645102,136.632510,870.211
Avg173,614.000809,173.000172,776.000810,402.000174375.000812,225.000104,022.000519,819.000
Worst184,331.464824,648.829183,006.363827,132.424189,002.607824,028.335106,190.281528,913.206
SD5457.3437346.4034412.0467584.3005523.0366417.060933.1834496.779
T3000Best176,200.277852,165.434177,661.627853,232.663178,819.097859,423.967112,625.384563,250.918
Avg184,378.000866,586.000186,939.000871,015.000186,791.000872,513.000113,811.000568,580.000
Worst196,778.207885,253.124201,877.303892,941.694201,607.603888,088.876115,274.338575,635.766
SD5833.6667420.4045587.6249231.7386381.1387657.639701.5793441.967
Bold value indicates the best result.
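The three heuristic baselines of Table A5 can be sketched as follows. These are minimal illustrations over hypothetical task lengths (in million instructions) and VM speeds (in MIPS), not the paper's CloudSim implementations; in particular, FCFS here simply assigns each arriving task to the VM that becomes free earliest:

```python
def makespan(assignment, lengths, mips):
    """Makespan = finish time of the most loaded VM."""
    loads = [0.0] * len(mips)
    for task, vm in enumerate(assignment):
        loads[vm] += lengths[task] / mips[vm]
    return max(loads)

def fcfs(lengths, mips):
    """First come first served: tasks in arrival order, each to the
    earliest-free VM."""
    loads = [0.0] * len(mips)
    out = []
    for length in lengths:
        vm = min(range(len(mips)), key=lambda v: loads[v])
        loads[vm] += length / mips[vm]
        out.append(vm)
    return out

def sjf(lengths, mips):
    """Shortest job first: like FCFS, but tasks sorted by length."""
    order = sorted(range(len(lengths)), key=lambda t: lengths[t])
    loads = [0.0] * len(mips)
    out = [0] * len(lengths)
    for t in order:
        vm = min(range(len(mips)), key=lambda v: loads[v])
        loads[vm] += lengths[t] / mips[vm]
        out[t] = vm
    return out

def rr(lengths, mips):
    """Round robin: cycle through the VMs regardless of load."""
    return [t % len(mips) for t in range(len(lengths))]
```

Because round robin ignores VM load, its makespan is never better than the load-aware assignments on heterogeneous task lengths, which is consistent with the trend in Table A5.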

References

  1. El-Shafeiy, E.; Abohany, A. A new swarm intelligence framework for the Internet of Medical Things system in healthcare. In Swarm Intelligence for Resource Management in Internet of Things; Academic Press: Cambridge, MA, USA, 2020; pp. 87–107.
  2. Hassan, K.M.; Abdo, A.; Yakoub, A. Enhancement of Health Care Services Based on Cloud Computing in IoT Environment Using Hybrid Swarm Intelligence. IEEE Access 2022, 10, 105877–105886.
  3. Nayar, N.; Ahuja, S.; Jain, S. Swarm intelligence and data mining: A review of literature and applications in healthcare. In Proceedings of the Third International Conference on Advanced Informatics for Computing Research, Shimla, India, 15–16 June 2019.
  4. Ben Alla, H.; Ben Alla, S.; Touhafi, A.; Ezzati, A. A novel task scheduling approach based on dynamic queues and hybrid meta-heuristic algorithms for cloud computing environment. Clust. Comput. 2018, 21, 1797–1820.
  5. Singh, H.; Tyagi, S.; Kumar, P.; Gill, S.S.; Buyya, R. Metaheuristics for scheduling of heterogeneous tasks in cloud computing environments: Analysis, performance evaluation, and future directions. Simul. Model. Pract. Theory 2021, 111, 102353.
  6. Huang, X.; Li, C.; Chen, H.; An, D. Task scheduling in cloud computing using particle swarm optimization with time varying inertia weight strategies. Clust. Comput. 2020, 23, 1137–1147.
  7. Bezdan, T.; Zivkovic, M.; Antonijevic, M.; Zivkovic, T.; Bacanin, N. Enhanced Flower Pollination Algorithm for Task Scheduling in Cloud Computing Environment. In Machine Learning for Predictive Analysis; Springer: Berlin/Heidelberg, Germany, 2021; pp. 163–171.
  8. Choudhary, A.; Gupta, I.; Singh, V.; Jana, P.K. A GSA based hybrid algorithm for bi-objective workflow scheduling in cloud computing. Future Gener. Comput. Syst. 2018, 83, 14–26.
  9. Raghavan, S.; Sarwesh, P.; Marimuthu, C.; Chandrasekaran, K. Bat algorithm for scheduling workflow applications in cloud. In Proceedings of the 2015 International Conference on Electronic Design, Computer Networks & Automated Verification (EDCAV), Shillong, India, 29–30 January 2015; pp. 139–144.
  10. Tawfeek, M.A.; El-Sisi, A.; Keshk, A.E.; Torkey, F.A. Cloud task scheduling based on ant colony optimization. In Proceedings of the 8th International Conference on Computer Engineering & Systems (ICCES), Cairo, Egypt, 26–28 November 2013; IEEE: Piscataway, NJ, USA, 2013.
  11. Hamad, S.A.; Omara, F.A. Genetic-Based Task Scheduling Algorithm in Cloud Computing Environment. Int. J. Adv. Comput. Sci. Appl. 2016, 7, 550–556.
  12. Bacanin, N.; Bezdan, T.; Tuba, E.; Strumberger, I.; Tuba, M.; Zivkovic, M. Task Scheduling in Cloud Computing Environment by Grey Wolf Optimizer. In Proceedings of the 27th Telecommunications Forum (TELFOR), Belgrade, Serbia, 26–27 November 2019; pp. 1–4.
  13. Chen, X.; Cheng, L.; Liu, C.; Liu, Q.; Liu, J.; Mao, Y.; Murphy, J. A WOA-Based Optimization Approach for Task Scheduling in Cloud Computing Systems. IEEE Syst. J. 2020, 14, 3117–3128.
  14. Alsaidy, S.A.; Abbood, A.D.; Sahib, M.A. Heuristic initialization of PSO task scheduling algorithm in cloud computing. J. King Saud Univ. Comput. Inf. Sci. 2020, 34, 2370–2382.
  15. Alboaneen, D.A.; Tianfield, H.; Zhang, Y. Glowworm swarm optimisation based task scheduling for cloud computing. In Proceedings of the Second International Conference on Internet of Things, Data and Cloud Computing, Porto, Portugal, 24–26 April 2017.
  16. Durgadevi, P.; Srinivasan, D.S. Task scheduling using amalgamation of metaheuristics swarm optimization algorithm and cuckoo search in cloud computing environment. J. Res. 2015, 1, 10–17.
  17. Belgacem, A.; Beghdad-Bey, K.; Nacer, H. Task scheduling optimization in cloud based on electromagnetism metaheuristic algorithm. In Proceedings of the 3rd International Conference on Pattern Analysis and Intelligent Systems (PAIS), Tebessa, Algeria, 24–25 October 2018; pp. 1–7.
  18. Masadeh, R.; Alsharman, N.; Sharieh, A.; Mahafzah, B.; Abdulrahman, A. Task scheduling on cloud computing based on sea lion optimization algorithm. Int. J. Web Inf. Syst. 2021, 17, 99–116.
  19. Abdullahi, M.; Ngadi, A.; Dishing, S.I.; Abdulhamid, S.M. An adaptive symbiotic organisms search for constrained task scheduling in cloud computing. J. Ambient Intell. Humaniz. Comput. 2022, 1–12.
  20. Strumberger, I.; Bacanin, N.; Tuba, M.; Tuba, E. Resource Scheduling in Cloud Computing Based on a Hybridized Whale Optimization Algorithm. Appl. Sci. 2019, 9, 4893.
  21. Bacanin, N.; Tuba, E.; Bezdan, T.; Strumberger, I.; Tuba, M. Artificial Flora Optimization Algorithm for Task Scheduling in Cloud Computing Environment. In Proceedings of the International Conference on Intelligent Data Engineering and Automated Learning, Manchester, UK, 14–16 November 2019; pp. 437–445.
  22. Mansouri, N.; Zade, B.M.H.; Javidi, M.M. Hybrid task scheduling strategy for cloud computing by modified particle swarm optimization and fuzzy theory. Comput. Ind. Eng. 2019, 130, 597–633.
  23. Ge, J.; He, Q.; Fang, Y. Cloud computing task scheduling strategy based on improved differential evolution algorithm. In AIP Conference Proceedings; AIP Publishing LLC: Melville, NY, USA, 2017.
  24. Li, Y.; Wang, S.; Hong, X.; Li, Y. Multi-objective task scheduling optimization in cloud computing based on genetic algorithm and differential evolution algorithm. In Proceedings of the 37th Chinese Control Conference (CCC), Wuhan, China, 25–27 July 2018; IEEE: New York, NY, USA, 2018.
  25. Zhou, Z.; Li, F.; Yang, S. A Novel Resource Optimization Algorithm Based on Clustering and Improved Differential Evolution Strategy Under a Cloud Environment. ACM Trans. Asian Low-Resour. Lang. Inf. Process. 2021, 20, 1–15.
  26. Tsai, J.-T.; Fang, J.-C.; Chou, J.-H. Optimized task scheduling and resource allocation on cloud computing environment using improved differential evolution algorithm. Comput. Oper. Res. 2013, 40, 3045–3055.
  27. Chen, J.; Han, P.; Liu, Y.; Du, X. Scheduling independent tasks in cloud environment based on modified differential evolution. Concurr. Comput. Pract. Exp. 2021, e6256.
  28. Elaziz, M.A.; Xiong, S.; Jayasena, K.; Li, L. Task scheduling in cloud computing based on hybrid moth search algorithm and differential evolution. Knowl.-Based Syst. 2019, 169, 39–52.
  29. Shi, X.; Zhang, X.; Xu, M. A self-adaptive preferred learning differential evolution algorithm for task scheduling in cloud computing. In Proceedings of the 2020 IEEE International Conference on Advances in Electrical Engineering and Computer Applications (AEECA), Dalian, China, 25–27 August 2020; IEEE: Piscataway, NJ, USA, 2020.
  30. Rana, N.; Abd Latiff, M.S.; Abdulhamid, S.I.M.; Misra, S. A hybrid whale optimization algorithm with differential evolution optimization for multi-objective virtual machine scheduling in cloud computing. Eng. Optim. 2021, 54, 1–18.
  31. Storn, R. Differential Evolution: A Simple and Efficient Adaptive Scheme for Global Optimization over Continuous Spaces. Tech. Rep. Int. Comput. Sci. Inst. 1995, 11, 353–358.
  32. Branke, J.; Deb, K.; Dierolf, H.; Osswald, M. Finding knees in multi-objective optimization. In Proceedings of the International Conference on Parallel Problem Solving from Nature, Birmingham, UK, 18–22 September 2004.
  33. Marler, R.T.; Arora, J.S. Survey of multi-objective optimization methods for engineering. Struct. Multidiscip. Optim. 2004, 26, 369–395.
  34. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133.
  35. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
  36. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323.
  37. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl.-Based Syst. 2020, 191, 105190.
  38. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
  39. Price, K.V. Differential evolution. In Handbook of Optimization; Springer: Berlin/Heidelberg, Germany, 2013; pp. 187–214.
  40. Bibu, G.D.; Nwankwo, G.C. Comparative analysis between first-come-first-serve (FCFS) and shortest-job-first (SJF) scheduling algorithms. Int. J. Comput. Sci. Mob. Comput. 2019, 8, 176–181.
  41. Jang, S.H.; Kim, T.Y.; Kim, J.K.; Lee, J.S. The study of genetic algorithm-based task scheduling for cloud computing. Int. J. Control Autom. 2012, 5, 157–162.
  42. Haynes, W. Wilcoxon rank sum test. In Encyclopedia of Systems Biology; Springer: New York, NY, USA, 2013; pp. 2354–2355.
Figure 1. Task scheduling process in the cloud computing environment [4].
Figure 2. A solution representation for task scheduling in cloud computing. Cells are indexed from 0 to 9, where cell 0 represents the first task, cell 1 the second task, and so on.
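The encoding in Figure 2 can be sketched in a few lines: each solution is an integer vector whose i-th cell holds the VM that executes task i. The particular values below are illustrative, not taken from the paper:

```python
# One candidate solution for 10 tasks and 5 VMs: cell i holds the id of
# the VM executing task i (values here are illustrative).
solution = [2, 0, 4, 1, 3, 0, 2, 4, 1, 3]

def decode(solution, num_vms):
    """Group task indices by the VM they are mapped to, i.e. recover each
    VM's task queue from the flat encoding."""
    queues = {vm: [] for vm in range(num_vms)}
    for task, vm in enumerate(solution):
        queues[vm].append(task)
    return queues
```

Decoding the example maps tasks 1 and 5 to VM 0, tasks 0 and 6 to VM 2, and so on; fitness measures such as makespan are then computed per VM queue.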
Figure 3. The flowchart of the proposed algorithm: HDE (* is used to identify the best solution).
Figure 4. Comparison among algorithms by fitness value.
Figure 5. Comparison in terms of average SD of the fitness values.
Figure 6. Comparison among algorithms by makespan.
Figure 7. Comparison with metaheuristics in terms of the average SD of makespan.
Figure 8. Comparison among algorithms for the total execution time.
Figure 9. Comparison in terms of the average SD of total execution time.
Figure 10. Comparison among algorithms in terms of the Boxplot: (a) Comparison for the task size of 200; (b) Comparison for the task size of 300; (c) Comparison for the task size of 400; (d) Comparison for the task size of 500; (e) Comparison for the task size of 600; (f) Comparison for the task size of 700.
Figure 11. Comparison of algorithms in terms of convergence speed: (a) Comparison for the task size of 100; (b) Comparison for the task size of 200; (c) Comparison for the task size of 300; (d) Comparison for the task size of 400; (e) Comparison for the task size of 500; (f) Comparison for the task size of 600; (g) Comparison for the task size of 700; (h) Comparison for the task size of 800; (i) Comparison for the task size of 900.
Figure 12. Comparison of algorithms in terms of CPU time.
Figure 13. Comparison among algorithms for makespan.
Figure 14. Comparison with heuristic algorithms in terms of the average SD of makespan.
Table 1. Parameter settings of HDE and DE.

| Parameter | DE | HDE |
|---|---|---|
| T | 1500 | 1500 |
| N | 25 | 25 |
| F | 0.1 | 0.1 |
| Cr | 0.01 | 0.01 |
| P | – | 0.01 |
| κ | – | 0.2 |
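For context, the update step that F (scaling factor) and Cr (crossover rate) control can be sketched as one DE/rand/1/bin generation over the integer-encoded population. This is a sketch of classical DE only: the bound hi = 4 assumes the five VMs of Table 2, greedy selection against the fitness is omitted, and the paper's dynamic, iteration-dependent scaling factor is not reproduced here:

```python
import random

def de_step(pop, f=0.1, cr=0.01, lo=0, hi=4):
    """One DE/rand/1/bin generation over an integer-encoded population.
    f and cr follow Table 1; rounding and clipping map the mutant back
    into the valid VM-index range [lo, hi]."""
    dim = len(pop[0])
    new_pop = []
    for i, x in enumerate(pop):
        # Three distinct donors, none equal to the target vector.
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        j_rand = random.randrange(dim)  # guarantees at least one mutant gene
        trial = []
        for j in range(dim):
            if random.random() < cr or j == j_rand:
                v = a[j] + f * (b[j] - c[j])               # mutation
                trial.append(min(hi, max(lo, round(v))))   # repair to VM range
            else:
                trial.append(x[j])                         # inherit from target
        new_pop.append(trial)
    return new_pop
```

In a full run, each trial vector would replace its target only if it improves the fitness, and T = 1500 such generations would be executed with N = 25 individuals.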
Table 2. Parameters of the CloudSim platform.

| Group | Parameter | Value |
|---|---|---|
| Cloudlets | Length of task | 10,000–50,000 |
| Cloudlets | Number of tasks | 100–1000 |
| Cloudlets | Filesize | 300 |
| Cloudlets | Outputsize | 300 |
| Virtual Machines | Number of VMs | 5 |
| Virtual Machines | MIPS | 250 |
| Virtual Machines | RAM | 512 |
| Virtual Machines | Bandwidth | 1000 |
| Virtual Machines | Policy type | Time Shared |
| Virtual Machines | VMM | Xen |
| Hosts | No. of Hosts | 2 |
| Hosts | RAM | 2048 |
| Hosts | Bandwidth | 10,000 |
| Hosts | Policy type | Time Shared |
| Hosts | Storage | 1,000,000 |
| Data Center | No. of Datacenters | 5 |
| Data Center | Operating system | Linux |
| Data Center | No. of CPUs | 1 |
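A workload matching the cloudlet ranges in Table 2 can be sketched as follows; the execution time of a task on a VM is its length in million instructions divided by the VM's MIPS rating. The seed and helper names are illustrative, not part of the paper's setup:

```python
import random

def generate_tasks(n, lo=10_000, hi=50_000, seed=42):
    """Random task lengths in million instructions (MI), drawn uniformly
    from the range given in Table 2."""
    rng = random.Random(seed)
    return [rng.randint(lo, hi) for _ in range(n)]

def exec_time(length_mi, mips=250):
    """Execution time (seconds) of one task on a VM rated `mips` (Table 2)."""
    return length_mi / mips
```

For example, a 25,000-MI cloudlet takes 100 s on a 250-MIPS VM, so schedulers are compared on how they distribute such times across the five VMs.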
Table 3. Comparison among algorithms in terms of the makespan values for the CloudSim-based simulation.

| TS | EO | SCA | GWO | WOA | SMA | DE | HDE |
|---|---|---|---|---|---|---|---|
| T100 | 41,505.943 | 49,620.716 | 55,322.872 | 49,135.560 | 51,289.568 | 28,106.492 | 28,190.252 |
| T200 | 104,498.731 | 102,703.632 | 98,004.431 | 96,811.803 | 100,997.74 | 54,684.484 | 53,803.068 |
| T300 | 152,025.60 | 150,194.280 | 145,015.012 | 142,787.62 | 150,218.468 | 89,242.483 | 80,850.832 |
| T400 | 200,391.084 | 202,607.707 | 191,254.443 | 192,639.200 | 195,546.292 | 124,179.684 | 108,165.464 |
| T500 | 257,677.947 | 247,760.576 | 240,555.543 | 234,875.059 | 245,394.340 | 156,930.84 | 133,873.052 |
| T600 | 295,715.628 | 310,647.336 | 294,597.264 | 286,597.052 | 296,876.432 | 199,452.735 | 164,761.723 |
| T700 | 352,201.252 | 334,329.304 | 342,978.004 | 317,822.051 | 328,948.300 | 236,092.719 | 190,732.984 |
| T800 | 389,077.392 | 376,469.660 | 398,101.811 | 384,081.343 | 378,277.532 | 277,318.875 | 219,123.340 |
| T900 | 467,415.788 | 417,388.984 | 421,371.248 | 434,179.028 | 428,612.683 | 322,661.800 | 252,311.644 |
| T1000 | 487,985.68 | 473,096.840 | 477,129.911 | 491,821.971 | 474,688.392 | 358,680.556 | 282,979.820 |
| Avg | 274,849.5045 | 266,481.9035 | 266,433.0539 | 263,075.0687 | 265,084.9747 | 184,735.0668 | 151,479.2179 |
The bold values indicate the best values.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Abdel-Basset, M.; Mohamed, R.; Abd Elkhalik, W.; Sharawi, M.; Sallam, K.M. Task Scheduling Approach in Cloud Computing Environment Using Hybrid Differential Evolution. Mathematics 2022, 10, 4049. https://doi.org/10.3390/math10214049
