Article

Hybrid Nearest-Neighbor Ant Colony Optimization Algorithm for Enhancing Load Balancing Task Management

Institute of Applied Computer Science—Faculty of Electrical, Electronic, Computer and Control Engineering, Lodz University of Technology, 90-924 Lodz, Poland
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(22), 10807; https://doi.org/10.3390/app112210807
Submission received: 27 September 2021 / Revised: 3 November 2021 / Accepted: 9 November 2021 / Published: 16 November 2021
(This article belongs to the Special Issue Distributed Computing Systems and Applications)

Abstract:
Many computational problems that arise from real-world circumstances are NP-hard and, in the worst case, are generally assumed to be intractable. Distributed computing systems are commonly used for a range of large-scale complex problems, bringing advantages to many areas of research. Dynamic load balancing is essential in distributed computing systems, since it is a significant key to maintaining the stability of heterogeneous distributed computing systems (HDCS). The challenge of load balancing is an optimization problem whose solution space grows exponentially. The difficulty of dynamic load balancing rises with the scale of the HDCS, and the problem is hard to tackle effectively. A solution to this intractable issue is explored under a particular algorithmic paradigm. A new codification strategy, namely hybrid nearest-neighbor ant colony optimization (ACO-NN), based on the metaheuristic ant colony optimization (ACO) and an approximate nearest-neighbor (NN) approach, has been developed to establish a dynamic load balancing algorithm for distributed systems. Several experiments have been conducted to explore the efficiency of this stochastic iterative load balancing algorithm; it was tested with task and node accessibility and proved effective across diverse performance metrics.

1. Introduction

Distributed computing platforms are becoming increasingly important as cost-effective alternatives to conventional high-performance computing platforms. Distributed systems continue to expand in size, heterogeneity, and diversity of network resources [1,2]. In such complicated platforms, workload management and load balancing become critical factors in keeping business activities afloat. Workload management has reached the top of business and management engineering research priorities [3,4]. One of the most complex concerns in real-time optimization problems is the robust management of a wide range of workload patterns, which requires a resilient load balancing system [5]. Load balancing is an important element in distributed and parallel environments [6,7], as it is used to achieve maximum use of resources, to avoid node overload, to reduce response time, to avoid network bottlenecks, and to ensure system scalability. The challenge of dynamic load balancing persists as a difficult global optimization problem because of system structure heterogeneity, per-application Quality of Service (QoS) requirements, and the administration of computational resources [8,9].
There are several algorithmic methods for optimizing load balancing problems [10,11,12]. These optimization algorithms are based on a metaheuristic (or stochastic) nature-inspired paradigm. Most metaheuristic algorithms generate random candidate solutions at each iteration to enhance the chances of exploring the whole search space [13]. The first feasible solution can be improved using mechanisms such as movement, mutation, exchange, and cooperative perception. The improvement process is then repeated many times until the best solution is discovered, and the optimization is completed using termination conditions. The stochastic methods use a cost function (or fitness function) [14], evaluated by a set of agents moving through the solution space; the position of an agent in the solution space represents a set of parameters of the cost function. Thus, the aim of the metaheuristic algorithm is to find the global minimum or global maximum of the cost function and to supply high-quality solutions within a feasible lead time [15].
The swarm-intelligence-based (SI-based) algorithms belong to the category of stochastic population-based bioinspired algorithms [16]. SI-based algorithms cover those based on the behavior of different animal species, chemical processes, or even other natural processes [17]. Several nature-inspired metaheuristic algorithms have been presented to handle optimization problems in different applications. Ant colony optimization (ACO) [18], the genetic algorithm (GA) [19], and particle swarm optimization (PSO) [20] are some examples of algorithms inspired by nature. ACO imitates the behavior of a colony of real foraging ants to find the most cost-effective path. The shortest or optimal path is found via the stigmergy process, a social mechanism in which pheromones push agents toward promising solutions. ACO is designed to tackle the most challenging combinatorial optimization problems, as well as network applications such as routing and load balancing. ACO algorithms can be applied to different problems, such as the traveling salesman problem (TSP) [21], vehicle routing [22], sequential ordering [23], and scheduling [24]. GA is a search algorithm based on the concepts of natural selection and natural genetics (crossover and mutation). GA is employed in real-world optimization problems, for example, manufacturing [25], warehouses [26], robotics [27], and automatic control [28]. PSO mimics the behavior of a swarm of birds or fish in search of food. In PSO, particles do not exchange material with one another; each particle is guided by its current position, the best position found by the swarm, and its velocity. Lately, PSO has been used to solve real-world engineering problems such as the design of multilayered rectangular microstrip antennas using electromagnetic band-gap (EBG) structures [29] and bandwidth improvement of an inverted-F antenna (IFA) [30]. Furthermore, the PSO algorithm is also able to design new engineering components, for example, artificial magnetic conductors [31].
Apart from optimizing solutions based on bioinspired algorithms, multiobjective optimization (MOO) is designed to tackle the most challenging combinatorial optimization problems, as well as network applications. The MOO algorithm, also known as Pareto optimization, is classified as a multiple-criteria decision analysis (MCDA) method. It is used for optimization problems that include more than one conflicting objective function to be optimized simultaneously [32]. Multiobjective problems, such as routing in communication networks [33,34], compressor design [35,36], engineering [37,38], and logistics [39], require a number of objective functions to be optimized simultaneously [40].
It is essential to solve the system's task scheduling problem to increase resource utilization and the sharing rate in distributed systems. Task scheduling has become more difficult and complex in parallel and distributed environments due to resource distribution, heterogeneity, and autonomy. Different research projects have been carried out to improve ant colony optimization performance, for example, a novel pheromone update strategy [41,42], a modified ant colony optimization with improved tour construction and pheromone updating strategies [43], an AntNet with reward-penalty reinforcement learning [44], and an AntNet routing technique for real-world applications [45]. A further improvement on the original form of the ant system (AS) is called the elitist strategy (Elitist-AS) [46]. The concept is to provide strong reinforcement on the edges of the optimal path determined at the beginning of the algorithm. Using the elitist method with an appropriate number of elitist ants enables AS to locate better routes earlier in the run. Although existing research on ACO algorithms can improve the efficiency of task scheduling, it does not contribute significantly to improving the overall load imbalance of the system. In this paper, a novel ACO-NN approach is proposed to manage the task assignment of a load balancing system. The developed algorithm (ACO-NN) makes the following main contributions:
(i) Take into consideration the system heterogeneity and dynamic task scheduling;
(ii) Carry out different computing data sets and experiments to investigate load balancing performance;
(iii) Minimize the total cost of the computing system;
(iv) Validate the obtained results against the results of previous research using makespan and the optimal solution as performance measures.
The structure of this paper is organized as follows. Section 2 introduces the hybrid of the nearest neighbor and ACO algorithms. In Section 3, various experiments are presented, and their outcomes are analyzed. Section 4 discusses the comparison tests. Finally, the conclusion is provided in Section 5.

2. Methods

In this research, a new load balancing algorithm based on ant-inspired behavior, called hybrid nearest-neighbor ant colony optimization (ACO-NN), is proposed. It is described in Algorithm 1. ACO-NN contains three strategies: an approval procedure, a nearest-neighbor operator, and an ACO operator.
Besides the heuristic and fitness functions of load balancing, the approval procedure Preapprove_Tasks() controls the management of the workload. ACO-NN can minimize the amount of processing time used in scheduling.
Algorithm 1 Pseudocode of nearest-neighbor ant colony optimization.
1: Initialization;
2: Every task comes;
3: Preapprove_Tasks();
4: while any_task_needs_to_be_scheduled do
5:     Apply NN operator;
6:     Store the current value of the optimal path;
7:     Update routing table;
8:     Sort the ants for search using the routing table;
9:     Path construction;
10:    Update pheromone table;
11:    if all ants complete their tour then
12:        Evaluate updated pheromone table;
13:        Select the best path;
14:        if Best_solution_found then
15:            Choose the optimal node based on pheromone table;
16:            Assign task to optimal node;
17:        else
18:            Find next optimal solution;
19:        end if
20:    end if
21: end while

2.1. Nearest Neighbor (NN) with Ant Colony Optimization (ACO)

In order to minimize computation time and boost the scheduling result for a total system resource set N = {N_1, …, N_m} and a finite set of tasks T = {T_1, …, T_j}, it is important to choose the right job scheduling order through an approval procedure, Preapprove_Tasks(). The local search phase through the nearest-neighbor operator constructs the problem graph and determines the starting node of each ant. The proposed method assumes that the job scheduling mechanism is analogous to the ant foraging process. According to the system resource node's hardware performance parameters and the system average load gap, the pheromone is updated, and the estimated time to execute all tasks is kept to a minimum. The execution time is the amount of time required by a task to complete its execution. We define C_i to be the execution time of task i. The parameters t_in and t_out represent, respectively, the arrival time of task i for processing and the completion time of task i. The general formula for the execution time of a task i is defined by Equation (1):
C_i = t_out − t_in
The estimated total execution time of n tasks is represented by Equation (2):
C_n = Σ_{i=1}^{n} C_i
The heuristic function of the ant colony optimization algorithm is calculated using the estimated completion time. Moreover, to assign a task, a resource node with a strong pheromone concentration (high efficiency and low load), maximum remaining memory, and a short completion time is chosen.
The aim of the proposed task scheduling method, nearest-neighbor ant colony optimization (ACO-NN), is to assign each task of the set T to the resource set N while maintaining load balance and increasing the efficiency of the system's task scheduling. This hybrid algorithm combines ACO with NN to obtain an efficient and feasible optimization method. Figure 1 displays the flowchart of the specific steps of ACO-NN in distributed systems based on load balancing, while the generic pseudocode of the proposed method is presented in Algorithm 1.
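As an illustrative sketch (in Python rather than the authors' MATLAB, with made-up task timings), the execution-time bookkeeping of Equations (1) and (2) can be written as:

```python
# Sketch of the execution-time bookkeeping from Equations (1) and (2).
# Task names and timing values are illustrative, not from the paper.

def execution_time(t_in, t_out):
    """C_i = t_out - t_in: time a task spends in processing (Eq. (1))."""
    return t_out - t_in

def total_execution_time(tasks):
    """C_n = sum of C_i over all n tasks (Eq. (2))."""
    return sum(execution_time(t_in, t_out) for t_in, t_out in tasks)

# Each task is (arrival time, completion time).
tasks = [(0.0, 2.5), (1.0, 4.0), (2.0, 2.8)]
print(total_execution_time(tasks))  # 2.5 + 3.0 + 0.8 = 6.3
```

The heuristic function of the ACO operator (Section 2.4) then uses this estimated completion time when ranking resource nodes.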

2.2. Preapprove Procedure

Preapprove is the first stage of ACO-NN each time tasks arrive. In this stage, the algorithm verifies the available memory of each computing node and determines the maximum amount of memory left over. If a request requires more memory than this maximum, the request is denied prior to scheduling. The preapprove step reduces the size of the ACO solution space and, consequently, the computing time of ACO scheduling. Consider N_1 and N_2, two service nodes in the system, whose residual memory is 2 and 3 GB, respectively. Suppose two requests, T_1 and T_2, reach the system demanding 1 and 5 GB of memory, respectively. Whenever a new request comes in, preapprove evaluates the maximum available memory across the nodes, which is 3 GB, and then checks whether the new request can be approved. T_1 demands 1 GB, which is less than the remaining memory of a single service node, so T_1 can be served by either node. The demanding memory of T_2 is 5 GB, which is larger than the remaining memory of any single node; as a consequence, T_2 will not be approved by Preapprove_Tasks() because there is no available resource to serve it. The rejected request stays in a queue buffer until one of the available resources can handle it. A queued request has priority over new incoming requests (in case an available resource can serve the pending workload).
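The admission logic described above can be sketched as follows (a hedged Python sketch; the function name, data shapes, and values are illustrative, not the authors' implementation):

```python
# Hedged sketch of the preapprove step: a request is admitted only if some
# node has enough remaining memory; otherwise it waits in a queue that has
# priority over new arrivals.

from collections import deque

def preapprove(requests, node_free_memory, queue=None):
    """Split incoming requests into approved and queued lists."""
    queue = deque(queue or [])
    max_free = max(node_free_memory)      # largest single-node residue
    approved, pending = [], deque()
    # Queued requests are re-examined before new arrivals.
    for demand in list(queue) + list(requests):
        if demand <= max_free:
            approved.append(demand)
        else:
            pending.append(demand)        # wait until a node can serve it
    return approved, pending

# Example from the text: nodes with 2 and 3 GB free; tasks need 1 and 5 GB.
approved, pending = preapprove([1, 5], [2, 3])
print(approved, list(pending))  # [1] [5]
```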

2.3. Nearest Neighbor Operator

The nearest-neighbor operator is a local search strategy used for pattern recognition of the distributed system; it constructs the routing table for all ants. It is the simplest and most straightforward heuristic method for generating a short tour using the Euclidean distance.
The Euclidean distance D between two nodes N_1 and N_2 of m dimensions is obtained by Equation (3):
D(N_1, N_2) = sqrt( Σ_{i=1}^{m} (N_{1i} − N_{2i})² )
The nearest neighbor is a structured approach that follows these steps:
Step 1: Select a random node.
Step 2: Find the nearest unvisited node using a distance calculation.
Step 3: If unvisited nodes exist, repeat Step 2.
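The steps above amount to a greedy tour construction over Equation (3)'s Euclidean distance. A minimal sketch (node coordinates are illustrative):

```python
# Nearest-neighbor tour construction: from the current node, always move to
# the closest unvisited node (steps 1-3 above).

import math

def euclidean(a, b):
    """D(N1, N2) from Equation (3)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_neighbor_tour(nodes, start=0):
    """Greedily visit the closest unvisited node until all are visited."""
    unvisited = set(range(len(nodes))) - {start}
    tour = [start]
    while unvisited:
        last = nodes[tour[-1]]
        nxt = min(unvisited, key=lambda i: euclidean(last, nodes[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

nodes = [(0, 0), (5, 5), (1, 0), (1, 1)]
print(nearest_neighbor_tour(nodes))  # [0, 2, 3, 1]
```

The resulting tour seeds the ants' routing table; the ACO operator then refines it.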

2.4. ACO Operator

The ACO operator is used to build possible solutions for all ants. Each ant chooses the next resource node according to the probability matrix P_xy shown in Equation (4). This transition probability of an ant moving from resource node x to resource node y is used to select the next node to which a task will be allocated.
P_xy = (τ_xy^α · η_xy^β) / Σ_{z∈T} (τ_xz^α · η_xz^β),  y ∈ T
In Equation (4), τ_xy is the pheromone value for the transition from resource node x to resource node y; η_xy is a heuristic function that represents the a priori desirability of the move from resource node x to resource node y; α is a positive parameter that controls the influence of pheromone concentrations relative to heuristic information in the scheduling order; β is the heuristic exponent, which sets the relative weight of heuristic information in the scheduling sequence of an ant's selections. T denotes the set of unscheduled requests that remain; an ant can choose a task from the set T to execute in the next move.
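Equation (4) can be sketched directly (the pheromone/heuristic tables and parameter values below are illustrative, not from the paper):

```python
# Sketch of the transition probability in Equation (4): the chance that an
# ant at node x moves to candidate y, given pheromone tau and heuristic eta.

def transition_probabilities(x, candidates, tau, eta, alpha=1.0, beta=2.0):
    """P_xy for every unscheduled candidate y (Equation (4))."""
    weights = {y: (tau[(x, y)] ** alpha) * (eta[(x, y)] ** beta)
               for y in candidates}
    total = sum(weights.values())
    return {y: w / total for y, w in weights.items()}

tau = {(0, 1): 0.5, (0, 2): 0.5}   # pheromone on arcs (x, y)
eta = {(0, 1): 1.0, (0, 2): 2.0}   # heuristic desirability of arcs
probs = transition_probabilities(0, [1, 2], tau, eta)
print(probs)  # with equal pheromone, the higher-eta node dominates: {1: 0.2, 2: 0.8}
```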
The two main factors influencing the choice of resource nodes, according to Equation (4), are τ_xy and η_xy; the improvement of these two factors is the main focus of the proposed algorithm. The heuristic function η_xy is described by Equation (5).
η_xy = φ_1 · C(x) + φ_2 · M(x) + φ_3 · Du(x)
where φ_1, φ_2, and φ_3 are the effect weights of CPU utilization, memory, and disk utilization, respectively; C(x), M(x), and Du(x) indicate the efficiency of resource node x in the three following resources:
C(x) = (Max_cu − CU_x) / Max_cu    (6)
M(x) = RM_x / Max_m    (7)
Du(x) = (Max_du − DU_x) / Max_du    (8)
where CU_x is the CPU utilization of node x, and RM_x and DU_x represent the remaining memory amount and the disk utilization of node x, respectively.
In ACO, the heuristic function corresponds to a local viewpoint, which implies that each ant makes its own decision when it comes to path selection. As a result, each ant chooses its path based on the available resources on each server. In terms of CPU and disk usage, lower utilization means higher resource availability, as shown in Equations (6) and (8). From the standpoint of memory, more remaining memory on the server leads to better efficiency, as shown in Equation (7). The weighted sum of these three values represents the average remaining resource, as indicated in Equation (5).
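A minimal sketch of Equations (5)-(8) follows; the weights, capacity limits, and resource readings are illustrative assumptions, not values from the paper:

```python
# Node-efficiency heuristic, Equations (5)-(8): lower CPU and disk
# utilization and more free memory yield a larger eta.

def node_efficiency(cpu_util, remaining_mem, disk_util,
                    max_cu=100.0, max_m=16.0, max_du=100.0,
                    phi=(1 / 3, 1 / 3, 1 / 3)):
    c = (max_cu - cpu_util) / max_cu      # Eq. (6): CPU availability
    m = remaining_mem / max_m             # Eq. (7): memory availability
    du = (max_du - disk_util) / max_du    # Eq. (8): disk availability
    phi1, phi2, phi3 = phi
    return phi1 * c + phi2 * m + phi3 * du   # Eq. (5)

# A lightly loaded node scores higher than a heavily loaded one.
light = node_efficiency(cpu_util=20, remaining_mem=12, disk_util=10)
heavy = node_efficiency(cpu_util=90, remaining_mem=2, disk_util=80)
print(light > heavy)  # True
```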

2.5. Global Pheromone Update Operator

The proposed ACO-NN method updates the pheromone according to the load balancing value and the performance of the resource node hardware. Ants look up the shortest path in the neighborhood according to the current algorithm iteration. Pheromone rises exclusively on the routes that correspond to the current best solution in this iteration, whereas pheromone on the remaining routes decreases through the evaporation mechanism. The pheromone trail update is done in accordance with the ant-cycle model, where the ants update the pheromone after all of them have built their tours. The global pheromone update is performed using Equation (9):
τ_xy(t+1) = (1 − ρ) · τ_xy(t) + Σ_{k=1}^{n} Δτ_xy^k(t)
where (1 − ρ) is the global residual coefficient of the pheromone decay rate, 0 < ρ ≤ 1. The pheromone trail evaporation allows the algorithm to forget bad decisions made earlier. Δτ_xy^k(t) represents the amount of pheromone dropped by ant k on arc (x, y) and is generally defined by Equation (10):
Δτ_xy^k(t) = Q_k / L_k, if ant k passes the arc (x, y); 0, otherwise.
where Q_k is the quantity of pheromone to be distributed along the route and L_k is the length of the path traversed by ant k.
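The ant-cycle update of Equations (9)-(10) can be sketched as follows (arc labels, tour data, and parameter values are illustrative):

```python
# Global pheromone update, Equations (9)-(10): all trails first evaporate by
# rho, then each ant k deposits Q_k / L_k on the arcs of its tour.

def update_pheromones(tau, ant_tours, rho=0.5, q=1.0):
    """tau_xy(t+1) = (1 - rho) * tau_xy(t) + sum_k dtau_xy^k (Eq. (9))."""
    new_tau = {arc: (1 - rho) * value for arc, value in tau.items()}
    for arcs, length in ant_tours:        # each ant: (arcs used, L_k)
        deposit = q / length              # Eq. (10)
        for arc in arcs:
            new_tau[arc] = new_tau.get(arc, 0.0) + deposit
    return new_tau

tau = {(0, 1): 1.0, (1, 2): 1.0}
ants = [([(0, 1)], 4.0)]                  # one ant, tour length 4, used arc (0, 1)
print(update_pheromones(tau, ants))  # {(0, 1): 0.75, (1, 2): 0.5}
```

The unused arc (1, 2) only evaporates, while the traversed arc (0, 1) also receives a deposit proportional to the tour quality.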

3. Results

The proposed approach was developed and implemented in the MATLAB R2018a environment, which was used for numerical computing. Each of the developed load balancing algorithms was implemented with the aid of the MATLAB toolbox, which allows matrix manipulations and the plotting of functions. The algorithms were run with the following computer configuration: Intel Core i5 CPU at 2.20 GHz, 12 GB RAM, and Windows 10.
To further demonstrate the efficacy and practicability of the proposed algorithm, we used problems from TSPLIB ([47,48], source: http://elib.zib.de/pub/mp-testdata/tsp/tsplib/tsp/index.html, accessed on 1 June 2021) for the experimental studies. All the problems in TSPLIB are solved, and their optimal values are given. Table 1 displays the experimental data sets.
The optimal solution for each instance of data sets comes from http://elib.zib.de/pub/mp-testdata/tsp/tsplib/stsp-sol.html (accessed on 1 June 2021).
ACO-NN is a kind of heuristic algorithm; its performance is influenced by the various parameter values. Table 2 shows the parameters used by the proposed algorithm.
The layout of each experiment consists mainly of the total number of resource nodes where the scheduler assigns tasks to the available computing nodes according to system measurement and the optimal tour cost between nodes.

3.1. Experiment 1: Comparison ACO-NN to GA and SA Approaches

Bayg29, Eil76, Gr120, Pcb442, and Gr666 problems are used for testing the performance of ACO-NN. The obtained results are compared with GA and SA.
Figure 2 displays the convergence of a small-scale instance using Bayg29 as an example. ACO-NN performs better in terms of convergence rate while GA has the worst convergence rate.
As can be seen from Figure 3, ACO-NN obtains the optimal solution with the best convergence rate for a small-scale instance using Eil76. SA has the worst performance among the optimization algorithms.
According to Figure 4, ACO-NN has better convergence performance than GA and SA during 10 experimental runs. For the medium-scale instance Gr120, SA has the worst performance.
As illustrated in Figure 5 and Figure 6, ACO-NN has minimal tour cost for five data sets with an increase in the number of runs. The performance of ACO-NN and GA is significantly similar for large-scale instances of Pcb442 and Gr666.

3.2. Experiment 2: Comparison ACO-NN to GRASP Approach

To verify the effectiveness of ACO-NN for data sets Eil51, St70 and Kroc100, we compare the proposed algorithm with GRASP.

3.2.1. Results of Optimal Solutions vs. Number of Runs

The comparison of the optimal solution based on ACO-NN and GRASP for two data sets with an increase in the number of runs is shown in Figure 7 and Figure 8. The simulation results demonstrate that our proposed algorithm has the lowest value of optimal solutions during 10 runs. ACO-NN is superior to GRASP.

3.2.2. Results of Response Time vs. Number of Runs

The comparison of response time based on ACO-NN and GRASP for two data sets with an increase in the number of runs is shown in Figure 9 and Figure 10. From the perspective of response time, the simulation results show that our proposed algorithm has minimal response time during 10 runs and is superior to GRASP.

3.3. Experiment 3: Comparison ACO-NN to GA and GRASP Approaches

The comparison of the optimal solution based on ACO-NN, GA, and GRASP in terms of two factors, tour cost and response time, with an increase in the number of runs is shown in Figure 11 and Figure 12.
According to Figure 11, the simulation results show that our proposed algorithm has the lowest value of optimal solutions during 10 runs for the data set Kroc100 from the perspective of tour cost.
As seen in Figure 12, the results of response time are similar for both algorithms ACO-NN and GA, while the GRASP provides the worst result for the instance KroC100.

4. Discussion

Several tests were carried out to evaluate the performance of the proposed method. The results obtained by the nearest-neighbor ant colony optimization (ACO-NN) algorithm are compared to renowned metaheuristic algorithms such as artificial bee colony (ABC), genetic algorithm (GA), simulated annealing (SA), ant colony optimization (ACO), camel herd algorithm (CHA), black hole (BH), greedy randomized adaptive search procedure (GRASP), particle swarm optimization (PSO), traveling salesman problem based on simulated annealing and gene expression programming (TSP-SAGEP), simulated-annealing-based symbiotic organisms search (SOS-SA), multioffspring genetic algorithm (MO-GA), and discrete tree-seed algorithm (DTSA) with their variants (DTSA0, DTSAI, DTSAII).

4.1. Test1

In this experiment, three state-of-the-art algorithms based on ACO approaches developed in recent years from the literature were used to test the performance of ACO-NN. The modified ant system was proposed by Yan et al. in 2017 [49]. The adaptive tour construction and pheromone updating techniques are integrated into the standard ant system. The modified ACO (MACO) with improved tour construction and pheromone updating strategies is proposed by Gao in 2021 [43]. The hybrid elitist-ant system (Elitist-AS) with an external memory structure [46] gives strong reinforcement at the arcs of the optimal tour determined at the beginning of the algorithm. The computing results of instance Gr666 are summarized in Table 3.
As demonstrated in Table 3, ACO-NN proves its robustness through the low standard deviation (SD) value and the optimal solution. The comparison results of ACO-NN with Elitist-AS are shown in Table 4. It is noted that ACO-NN is faster than Elitist-AS for instances Eil76, Ch150, and D198. From the perspective of optimal solution value, Elitist-AS performed better than ACO-NN.

4.2. Test2

To demonstrate the benefits of the proposed algorithm, we compared ACO-NN to ACO [50], PSO [50], GA [50], BH [50], DTSA [50], and CHA [51] for the instance Eil76 in terms of the mean of the best solutions, the rate of difference (R-mean), and the standard deviation. The results are presented in Table 5.
According to Table 5, the ACO-NN algorithm is superior in computational results to the other algorithms. In Test 2, the rate of difference was used to assess the merits and drawbacks of the experimental outcomes of ACO-NN and the other methods. Equation (11) represents the rate of difference.
σ = (P_mean − P_optimal) / P_mean
where σ defines the rate of difference, P_optimal represents the well-known optimal solution, and P_mean describes the mean of the best solutions obtained by ACO-NN.
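Equation (11) is a simple relative-gap computation; a sketch with illustrative values (not the paper's measurements):

```python
# Rate-of-difference metric, Equation (11): relative gap between the mean
# best solution and the known optimum.

def rate_of_difference(p_mean, p_optimal):
    """sigma = (P_mean - P_optimal) / P_mean (Equation (11))."""
    return (p_mean - p_optimal) / p_mean

print(rate_of_difference(560.0, 538.0))  # ~0.0393: within about 4% of the optimum
```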

4.3. Test3

In this experiment, we compared ACO-NN to ACO [50], ABC [50], DTSA [50], and CHA [51] for the instance Eil101. As shown in Table 6, the ACO-NN method achieves good results, ranking third, and proves a better experimental result for standard deviation.

4.4. Test4

In this test, ACO-NN was compared with other optimization algorithms for the instance KroA100. As illustrated in Table 7, the results revealed that ACO-NN performed much better than the other approaches in terms of optimal solutions and the rate-of-difference metric. ACO [50] has a low standard deviation (SD) value, which indicates that it is a reliable algorithm. ACO-NN came in second place for the lowest SD results, proving its robustness.

4.5. Test5

ACO-NN was compared with SA [50], DTSA0, DTSAI, and DTSA [50] for the instance KroC100. According to Table 8, our approach achieved better results than the other approaches.
As seen in Figure 13, ACO-NN is a better scheduler in comparison with SA, DTSA0, DTSAI, and DTSA for the instance KroC100 from the perspective of rank values.

4.6. Test6

In this experiment, we compare ACO-NN to various approaches such as TSP-SAGEP [52], SOS-SA [53], and MO-GA [54]. For the instances Gr120 and Gr666, our method achieves the best results, while it provides an acceptable outcome with the best execution time for the instance Pcb442. Table 9 displays these outcomes for the different data sets.
As seen in Figure 14, ACO-NN is the leading scheduler in comparison with TSP-SAGEP, SOS-SA, and MO-GA for the instances Gr120 and Pcb442. For the instance Gr666, ACO-NN proves better results than SOS-SA and MO-GA.

5. Conclusions

This paper addresses load-balancing-related optimization problems impacting the performance of entire systems. The research proposed an ACO-NN approach for load balancing in distributed computing systems to enhance the load scheduling mechanism. ACO-NN can manage task scheduling based on node status information. To expedite the ACO process, unsatisfiable requests are rejected before scheduling. The preapprove procedure reduces the solution dimensions of the nearest-neighbor and ACO operators, thereby saving computing time under high load.
To validate the performance of the proposed algorithm, nine trials were carried out on ten benchmark instances taken from the TSPLIB. The experimental results of ACO-NN were compared to fourteen heuristic algorithms. Hybrid ACO-NN proves its efficiency over diverse performance metrics; the experimental results show that the proposed method outperforms existing approaches in maintaining load balance in a dynamic environment.

Author Contributions

F.M. and V.M. contributed to the design of research; F.M. implemented and performed the experiments; V.M. and F.M. analysed the study data; V.M. conducted the supervision and validation of the research; and F.M. wrote the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was particularly funded by the Intelligent Development Operational Program 2014–2020 cofinanced by the European Regional Development Fund, project POIR.04.01.04-00-0074/19.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The submitted article contains all of the data, and models developed or used during the study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ranjan, R.; Rana, O.; Nepal, S.; Yousif, M.; James, P.; Wen, Z.; Barr, S.; Watson, P.; Jayaraman, P.P.; Georgakopoulos, D.; et al. The next grand challenges: Integrating the Internet of Things and data science. IEEE Cloud Comput. 2018, 5, 12–26. [Google Scholar] [CrossRef]
  2. Rana, B.; Singh, Y.; Singh, P.K. A systematic survey on internet of things: Energy efficiency and interoperability perspective. Trans. Emerg. Telecommun. Technol. 2021, 32, e4166. [Google Scholar] [CrossRef]
  3. Poola, D.; Salehi, M.A.; Ramamohanarao, K.; Buyya, R.A.; Ramamohanarao, K.; Buyya, R. A taxonomy and survey of fault-tolerant workflow management systems in cloud and distributed computing environments. In Software Architecture for Big Data and the Cloud; Elsevier: Amsterdam, The Netherlands, 2017; pp. 285–320. [Google Scholar]
  4. Thoman, P.; Dichev, K.; Heller, T.; Iakymchuk, R.; Aguilar, X.; Hasanov, K.; Gschwandtner, P.; Lemarinier, P.; Markidis, S.; Jordan, H.; et al. A taxonomy of task-based parallel programming technologies for high-performance computing. J. Supercomput. 2018, 74, 1422–1434.
  5. Gamoura, S.C. A New Non-Stigmergic-Ant Algorithm to Make Load Balancing Resilient in Big Data Processing for Enterprises. Artif. Algorithms Natural Algorithms 2020, 1–30. Available online: https://www.researchgate.net/profile/Samia-Gamoura/publication/353806980_A_New_Non-Stigmergic-Ant_Algorithm_to_Make_Load_Balancing_Resilient_in_Big_Data_Processing_for_Enterprises/links/6112bec4169a1a0103f20303/A-New-Non-Stigmergic-Ant-Algorithm-to-Make-Load-Balancing-Resilient-in-Big-Data-Processing-for-Enterprises.pdf (accessed on 1 June 2021).
  6. Ghomi, E.J.; Rahmani, A.M.; Qader, N.N. Load-balancing algorithms in cloud computing: A survey. J. Netw. Comput. Appl. 2017, 88, 50–71.
  7. Semmoud, A.; Hakem, M.; Benmammar, B. A survey of load balancing in distributed systems. Int. J. High Perform. Comput. Netw. 2019, 15, 233–248.
  8. De Schepper, T.; Latré, S.; Famaey, J. Scalable load balancing and flow management in dynamic heterogeneous wireless networks. J. Netw. Syst. Manag. 2020, 28, 133–159.
  9. Zeebaree, S.R.; Jacksi, K.; Zebari, R.R. Impact analysis of SYN flood DDoS attack on HAProxy and NLB cluster-based web servers. Indones. J. Electr. Eng. Comput. Sci. 2020, 19, 510–517.
  10. Ebadifard, F.; Babamir, S.M. A PSO-based task scheduling algorithm improved using a load-balancing technique for the cloud computing environment. Concurr. Comput. Pract. Exp. 2018, 30, e4368.
  11. Abualigah, L.; Diabat, A. A novel hybrid antlion optimization algorithm for multi-objective task scheduling problems in cloud computing environments. Clust. Comput. 2021, 24, 205–223.
  12. Ding, S.; Chen, C.; Xin, B.; Pardalos, P.M. A bi-objective load balancing model in a distributed simulation system using NSGA-II and MOPSO approaches. Appl. Soft Comput. 2018, 63, 249–267.
  13. Tzanetos, A.; Dounias, G. Nature inspired optimization algorithms or simply variations of metaheuristics? Artif. Intell. Rev. 2021, 54, 1841–1862.
  14. Nedjah, N.; Mourelle, L.D.M.; Morais, R.G. Inspiration-wise swarm intelligence meta-heuristics for continuous optimisation: A survey-part I. Int. J. Bio-Inspired Comput. 2020, 15, 207–223.
  15. Agrawal, P.; Abutarboush, H.F.; Ganesh, T.; Mohamed, A.W. Metaheuristic Algorithms on Feature Selection: A Survey of One Decade of Research (2009–2019). IEEE Access 2021, 9, 26766–26791.
  16. Slowik, A.; Kwasnicka, H. Nature inspired methods and their industry applications—Swarm intelligence algorithms. IEEE Trans. Ind. Inform. 2017, 14, 1004–1015.
  17. Fister, I., Jr.; Fister, I. A brief overview of swarm intelligence-based algorithms for numerical association rule mining. arXiv 2020, arXiv:2010.15524.
  18. Chen, X.; Yu, L.; Wang, T.; Liu, A.; Wu, X.; Zhang, B.; Lv, Z.; Sun, Z. Artificial intelligence-empowered path selection: A survey of ant colony optimization for static and mobile sensor networks. IEEE Access 2020, 8, 71497–71511.
  19. Katoch, S.; Chauhan, S.S.; Kumar, V. A review on genetic algorithm: Past, present, and future. Multimed. Tools Appl. 2021, 80, 8091–8126.
  20. Sengupta, S.; Basak, S.; Peters, R.A. Particle Swarm Optimization: A survey of historical and recent developments with hybridization perspectives. Mach. Learn. Knowl. Extr. 2019, 1, 10.
  21. Yang, K.; You, X.; Liu, S.; Pan, H. A novel ant colony optimization based on game for traveling salesman problem. Appl. Intell. 2020, 50, 4529–4542.
  22. Huang, Y.H.; Blazquez, C.A.; Huang, S.H.; Paredes-Belmar, G.; Latorre-Nuñez, G. Solving the feeder vehicle routing problem using ant colony optimization. Comput. Ind. Eng. 2019, 127, 520–535.
  23. Skinderowicz, R. An improved ant colony system for the sequential ordering problem. Comput. Oper. Res. 2017, 86, 1–17.
  24. Kumar, S.; Solanki, V.K.; Choudhary, S.K.; Selamat, A.; Gonzalez Crespo, R. Comparative Study on Ant Colony Optimization (ACO) and K-Means Clustering Approaches for Jobs Scheduling and Energy Optimization Model in Internet of Things (IoT). Int. J. Interact. Multimed. Artif. Intell. 2020, 6, 107–116.
  25. Liu, Z.; Wang, L.; Li, X.; Pang, S. A multi-attribute personalized recommendation method for manufacturing service composition with combining collaborative filtering and genetic algorithm. J. Manuf. Syst. 2021, 58, 348–364.
  26. Grznár, P.; Krajčovič, M.; Gola, A.; Dulina, L.; Furmannová, B.; Mozol, Š.; Plinta, D.; Burganová, N.; Danilczuk, W.; Svitek, R. The Use of a Genetic Algorithm for Sorting Warehouse Optimisation. Processes 2021, 9, 1197.
  27. Lamini, C.; Benhlima, S.; Elbekri, A. Genetic algorithm based approach for autonomous mobile robot path planning. Procedia Comput. Sci. 2018, 127, 180–189.
  28. Zhang, H.; Zhao, X.; Yang, J.; Zhang, W. Optimizing automatic transmission double-transition shift process based on multi-objective genetic algorithm. Appl. Sci. 2020, 10, 7794.
  29. Gaharwar, M.; Dhubkarya, D.C. X-Band Multilayer Stacked Microstrip Antenna Using Novel Electromagnetic Band-Gap Structures. IETE J. Res. 2021, 1–10.
  30. Alnas, J.; Giddings, G.; Jeong, N. Bandwidth improvement of an inverted-F antenna using dynamic hybrid binary particle swarm optimization. Appl. Sci. 2021, 11, 2559.
  31. Kwon, O.H.; Park, W.B.; Yun, J.; Lim, H.J.; Hwang, K.C. A low-profile HF meandered dipole antenna with a ferrite-loaded artificial magnetic conductor. Appl. Sci. 2021, 11, 2237.
  32. Malar, A.; Kowsigan, M.; Krishnamoorthy, N.; Karthick, S.; Prabhu, E.; Venkatachalam, K. Multi constraints applied energy efficient routing technique based on ant colony optimization used for disaster resilient location detection in mobile ad-hoc network. J. Ambient. Intell. Humaniz. Comput. 2021, 12, 4007–4017.
  33. Wu, J.; Xu, M.; Liu, F.F.; Huang, M.; Ma, L.; Lu, Z.M. Solar Wireless Sensor Network Routing Algorithm Based on Multi-Objective Particle Swarm Optimization. J. Inf. Hiding Multim. Signal Process. 2021, 12, 1–11.
  34. Vijayalakshmi, K.; Anandan, P. A multi objective Tabu particle swarm optimization for effective cluster head selection in WSN. Clust. Comput. 2019, 22, 12275–12282.
  35. Sharma, M.; Baloni, B.D. Design optimization of S-shaped compressor transition duct using particle swarm optimization algorithm. SN Appl. Sci. 2020, 2, 1–17.
  36. Mofid, H.; Jazayeri-Rad, H.; Shahbazian, M.; Fetanat, A. Enhancing the performance of a parallel nitrogen expansion liquefaction process (NELP) using the multi-objective particle swarm optimization (MOPSO) algorithm. Energy 2019, 172, 286–303.
  37. Lalbakhsh, A.; Afzal, M.U.; Esselle, K.P.; Zeb, B.A. Multi-objective particle swarm optimization for the realization of a low profile bandpass frequency selective surface. In Proceedings of the 2015 International Symposium on Antennas and Propagation (ISAP), Hobart, TAS, Australia, 9–12 November 2015; pp. 1–4. Available online: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7447507 (accessed on 1 June 2021).
  38. Lalbakhsh, A.; Afzal, M.U.; Esselle, K.P. Multiobjective particle swarm optimization to design a time-delay equalizer metasurface for an electromagnetic band-gap resonator antenna. IEEE Antennas Wirel. Propag. Lett. 2016, 16, 912–915.
  39. Chan, F.T.; Wang, Z.X.; Goswami, A.; Singhania, A.; Tiwari, M.K. Multi-objective particle swarm optimisation based integrated production inventory routing planning for efficient perishable food logistics operations. Int. J. Prod. Res. 2020, 58, 5155–5174.
  40. Mbarek, F.; Mosorov, V. Load Balancing Based on Optimization Algorithms: An Overview. J. Telecommun. Inf. Technol. 2019, 4, 3–12.
  41. Zhao, J.; Li, H.; Yang, C.; Wang, W. A novel path planning method for wheel-legged unmanned vehicles based on improved ant colony algorithm. In Proceedings of the 2021 60th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE), Tokyo, Japan, 8–10 September 2021; pp. 696–701.
  42. Lalbakhsh, P.; Zaeri, B.; Lalbakhsh, A. An improved model of ant colony optimization using a novel pheromone update strategy. IEICE Trans. Inf. Syst. 2013, 96, 2309–2318.
  43. Gao, W. Modified ant colony optimization with improved tour construction and pheromone updating strategies for traveling salesman problem. Soft Comput. 2021, 25, 3263–3289.
  44. Lalbakhsh, P.; Zaeri, B.; Lalbakhsh, A.; Fesharaki, M.N. AntNet with reward-penalty reinforcement learning. In Proceedings of the 2010 2nd International Conference on Computational Intelligence, Communication Systems and Networks, Liverpool, UK, 28–30 July 2010; pp. 17–21.
  45. Wilfert, J.; Paprotta, N.; Kosak, O.; Stieber, S.; Schiendorfer, A.; Reif, W. A Real-Word Realization of the AntNet Routing Algorithm with ActivityBots; Institute for Software and Systems Engineering, University of Augsburg: Augsburg, Germany. Available online: https://www.researchgate.net/profile/Simon-Stieber/publication/354889888_A_Real-Word_Realization_of_the_AntNet_Routing_Algorithm_with_ActivityBots/links/6152ebed522ef665fb660ab8/A-Real-Word-Realization-of-the-AntNet-Routing-Algorithm-with-ActivityBots.pdf (accessed on 1 June 2021).
  46. Jaradat, G.M. Hybrid elitist-ant system for a symmetric traveling salesman problem: Case of Jordan. Neural Comput. Appl. 2018, 29, 565–578.
  47. Weise, T.; Wu, Z. Difficult features of combinatorial optimization problems and the tunable w-model benchmark problem for simulating them. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, Kyoto, Japan, 15–19 July 2018; pp. 1769–1776.
  48. Lendl, S.; Ćustić, A.; Punnen, A.P. Combinatorial optimization with interaction costs: Complexity and solvable cases. Discret. Optim. 2019, 33, 101–117.
  49. Yan, Y.; Sohn, H.S.; Reyes, G. A modified ant system to achieve better balance between intensification and diversification for the traveling salesman problem. Appl. Soft Comput. 2017, 60, 256–267.
  50. Cinar, A.C.; Korkmaz, S.; Kiran, M.S. A discrete tree-seed algorithm for solving symmetric traveling salesman problem. Eng. Sci. Technol. Int. J. 2020, 23, 879–890. Available online: https://www.sciencedirect.com/science/article/pii/S2215098619313527 (accessed on 1 June 2021).
  51. Ahmed, Z.O.; Sadiq, A.T.; Abdullah, H.S. Solving the Traveling Salesman’s Problem Using Camels Herd Algorithm. In Proceedings of the 2019 2nd Scientific Conference of Computer Sciences (SCCS), Baghdad, Iraq, 27–28 March 2019; pp. 1–5.
  52. Zhou, A.H.; Zhu, L.P.; Hu, B.; Deng, S.; Song, Y.; Qiu, H.; Pan, S. Traveling-salesman-problem algorithm based on simulated annealing and gene-expression programming. Information 2019, 10, 7.
  53. Ezugwu, A.E.S.; Adewumi, A.O.; Frîncu, M.E. Simulated annealing based symbiotic organisms search optimization algorithm for traveling salesman problem. Expert Syst. Appl. 2017, 77, 189–210.
  54. Wang, J.; Ersoy, O.K.; He, M.; Wang, F. Multi-offspring genetic algorithm and its application to the traveling salesman problem. Appl. Soft Comput. 2016, 43, 415–423.
Figure 1. The flow chart of the nearest-neighbor ant colony optimization task scheduling technique.
Figure 2. Comparison of ACO-NN to GA and SA for Bayg29.
Figure 3. Comparison of ACO-NN to GA and SA for Eil76.
Figure 4. Comparison of ACO-NN to GA and SA for Gr120.
Figure 5. Comparison of ACO-NN to GA and SA for Pcb442.
Figure 6. Comparison of ACO-NN to GA and SA for Gr666.
Figure 7. Comparison of ACO-NN with GRASP for Eil51 regarding optimal solution metric.
Figure 8. Comparison of ACO-NN with GRASP for St70 regarding optimal solution metric.
Figure 9. Comparison of ACO-NN with GRASP for Eil51 regarding response time metric.
Figure 10. Comparison of ACO-NN with GRASP for St70 regarding response time metric.
Figure 11. Results of optimal solutions vs. number of runs for KroC100.
Figure 12. Results of response time vs. number of runs for KroC100.
Figure 13. Comparison of rank values for ACO-NN, SA, DSTA0, DSTAI, and DTSA.
Figure 14. Comparison of rank values for ACO-NN, TSP-SAGEP, SOS-SA, and MO-GA.
Table 1. Data sets used for testing.

| Instance | Nb. Node | Optimal Solution |
|----------|----------|------------------|
| Bayg29   | 29       | 1610             |
| Eil51    | 51       | 426              |
| St70     | 70       | 675              |
| Eil76    | 76       | 538              |
| KroA100  | 100      | 21,282           |
| KroC100  | 100      | 20,749           |
| Eil101   | 101      | 629              |
| Ch150    | 150      | 6528             |
| D198     | 198      | 15,780           |
| Gr120    | 120      | 6942             |
| Pcb442   | 442      | 50,778           |
| Gr666    | 666      | 294,358          |
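All instances in Table 1 are symmetric TSPLIB-style benchmarks, so the nearest-neighbor component of ACO-NN can be illustrated on a plain list of coordinates. The following is a minimal sketch of greedy nearest-neighbor tour construction, not the authors' implementation; `coords` is a hypothetical list of (x, y) points:

```python
import math

def nearest_neighbor_tour(coords, start=0):
    """Greedy NN tour: from the current node, always move to the closest
    unvisited node. Illustrates the NN seeding idea only; ACO-NN's own
    construction step may differ."""
    unvisited = set(range(len(coords))) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(coords[last], coords[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(coords, tour):
    """Length of the closed tour (returns to the start node)."""
    return sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))
```

Such a greedy tour gives a cheap upper bound that metaheuristics like ACO can then refine.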
Table 2. Parameter settings of test data.

| Parameters      | Values   |
|-----------------|----------|
| α               | 0.8–0.9  |
| β               | 8–35     |
| ρ               | 0.4–0.5  |
| Population size | 500–800  |
| Number of nodes | 29–666   |
| Number of tasks | 100–500  |
| Memory demand   | 2–10 GB  |
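The α, β, and ρ ranges in Table 2 play their usual ACO roles: pheromone weight, heuristic weight, and evaporation rate. As a hedged illustration, the standard textbook rules they enter look like this (not necessarily the exact update used in ACO-NN):

```python
def transition_probabilities(tau, eta, allowed, alpha=0.9, beta=8.0):
    """Standard ACO transition rule: p(j) is proportional to
    tau[j]**alpha * eta[j]**beta, normalized over the allowed
    (still unvisited) nodes. Defaults follow the ranges in Table 2."""
    weights = {j: (tau[j] ** alpha) * (eta[j] ** beta) for j in allowed}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}

def evaporate(tau, rho=0.5):
    """Global pheromone evaporation: tau <- (1 - rho) * tau."""
    return [(1.0 - rho) * t for t in tau]
```

A large β (as in Table 2's 8–35 range) makes the search strongly greedy toward the heuristic desirability η, while ρ near 0.5 forgets old pheromone quickly.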
Table 3. Comparison of ACO-NN to MACO and modified ant system for Gr666.

| Alg.                     | Nb. Node | Optimal | Best    | Mean      | Std. Dev  | Rank |
|--------------------------|----------|---------|---------|-----------|-----------|------|
| ACO-NN                   | 666      | 294,358 | 3597    | 3632      | 83.7198   | 1    |
| MACO [43]                | 666      | 294,358 | 294,358 | 294,972.3 | 24,702.1  | 3    |
| Modified ant system [49] | 666      | 294,358 | 294,358 | 294,899.6 | 23,551.35 | 2    |
Table 4. Comparison of ACO-NN to Elitist-AS for Eil76, Ch150, and D198.

| Instances | Nb. Node | Optimal | Alg.            | Best      | Mean      | Std. Dev | Time (s) |
|-----------|----------|---------|-----------------|-----------|-----------|----------|----------|
| Eil76     | 76       | 538     | ACO-NN          | 562       | 567       | 2.8946   | 5.04     |
|           |          |         | Elitist-AS [46] | 538       | 551       | 0        | 42       |
| Ch150     | 150      | 6528    | ACO-NN          | 6846.45   | 6914.88   | 40.11    | 9.29     |
|           |          |         | Elitist-AS [46] | 6528      | 6550      | 11.22    | 53       |
| D198      | 198      | 15,780  | ACO-NN          | 17,054.09 | 17,123.04 | 60.83    | 12.48    |
|           |          |         | Elitist-AS [46] | 15,888    | 15,940    | 64.83    | 103      |
Table 5. Comparison of ACO-NN to various approaches for Eil76.

| Alg.     | Nb. Node | Optimal | Mean | R-Mean (σ) | Std. Dev | Rank |
|----------|----------|---------|------|------------|----------|------|
| ACO [50] | 76       | 538     | 594  | 0.0942     | 40.2152  | 3    |
| PSO [50] | 76       | 538     | 975  | 0.4482     | 152.4061 | 7    |
| GA [50]  | 76       | 538     | 652  | 0.1748     | 122.0972 | 4    |
| BH [50]  | 76       | 538     | 659  | 0.1836     | 152.1754 | 5    |
| DTSA [50]| 76       | 538     | 588  | 0.0850     | 5.7296   | 2    |
| CHA [51] | 76       | 538     | 687  | 0.2168     | N/R      | 6    |
| ACO-NN   | 76       | 538     | 567  | 0.0511     | 2.8946   | 1    |
Table 6. Comparison of ACO-NN to various approaches for Eil101.

| Alg.     | Nb. Node | Optimal | Mean | R-Mean (σ) | Std. Dev | Rank |
|----------|----------|---------|------|------------|----------|------|
| ACO [50] | 101      | 629     | 693  | 0.0923     | 6.80     | 2    |
| ABC [50] | 101      | 629     | 1315 | 0.5216     | 35.28    | 5    |
| DTSA [50]| 101      | 629     | 689  | 0.0870     | 4.47     | 1    |
| CHA [51] | 101      | 629     | 862  | 0.2703     | N/R      | 4    |
| ACO-NN   | 101      | 629     | 695  | 0.0949     | 3.70     | 3    |
Table 7. Comparison of ACO-NN to various approaches for KroA100.

| Alg.       | Nb. Node | Optimal | Mean   | R-Mean (σ) | Std. Dev | Rank |
|------------|----------|---------|--------|------------|----------|------|
| ACO [50]   | 100      | 21,282  | 22,880 | 0.0698     | 40.2152  | 3    |
| ABC [50]   | 100      | 21,282  | 53,840 | 0.6047     | 2198.36  | 6    |
| DSTA0 [50] | 100      | 21,282  | 23,213 | 0.0831     | 906.11   | 4    |
| DSTAI [50] | 100      | 21,282  | 22,835 | 0.0680     | 715.85   | 2    |
| CHA [51]   | 100      | 21,282  | 31,786 | 0.3304     | N/R      | 5    |
| ACO-NN     | 100      | 21,282  | 22,793 | 0.0662     | 162.09   | 1    |
Table 8. Comparison of ACO-NN to various approaches for KroC100.

| Alg.       | Nb. Node | Optimal | Mean   | R-Mean (σ) | Std. Dev | Rank |
|------------|----------|---------|--------|------------|----------|------|
| SA [50]    | 100      | 20,749  | 22,223 | 0.0663     | 522.20   | 4    |
| DSTA0 [50] | 100      | 20,749  | 22,877 | 0.0930     | 709.87   | 5    |
| DSTAI [50] | 100      | 20,749  | 21,891 | 0.0521     | 536.88   | 3    |
| DTSA [50]  | 100      | 20,749  | 21,817 | 0.0489     | 217.77   | 2    |
| ACO-NN     | 100      | 20,749  | 21,753 | 0.0461     | 106.26   | 1    |
Table 9. Comparison of ACO-NN to various approaches for Gr120, Pcb442, and Gr666.

| Instances | Nb. Node | Optimal | Alg.           | Best    | Worst   | Average | Time (s) |
|-----------|----------|---------|----------------|---------|---------|---------|----------|
| Gr120     | 120      | 6942    | ACO-NN         | 1770    | 1843    | 1811    | 6.6174   |
|           |          |         | TSP-SAGEP [52] | 6942    | 7406    | 6995    | 19.4612  |
|           |          |         | SOS-SA [53]    | 6942    | 11,237  | 7786    | 25.3581  |
|           |          |         | MO-GA [54]     | 6942    | 11,008  | 7655    | 236,709  |
| Pcb442    | 442      | 50,778  | ACO-NN         | 56,815  | 58,894  | 58,333  | 40.3890  |
|           |          |         | TSP-SAGEP [52] | 50,811  | 52,147  | 50,878  | 51.0964  |
|           |          |         | SOS-SA [53]    | 51,107  | 55,723  | 51,958  | 610,876  |
|           |          |         | MO-GA [54]     | 51,097  | 55,006  | 51,828  | 594,571  |
| Gr666     | 666      | 294,358 | ACO-NN         | 3597    | 3655    | 3632    | 83.7198  |
|           |          |         | TSP-SAGEP [52] | 294,419 | 305,036 | 295,542 | 79.7853  |
|           |          |         | SOS-SA [53]    | 311,855 | 417,702 | 331,024 | 90.0014  |
|           |          |         | MO-GA [54]     | 311,003 | 441,298 | 329,199 | 88.0163  |
Mbarek, F.; Mosorov, V. Hybrid Nearest-Neighbor Ant Colony Optimization Algorithm for Enhancing Load Balancing Task Management. Appl. Sci. 2021, 11, 10807. https://doi.org/10.3390/app112210807