Article

Dynamic Flying Ant Colony Optimization (DFACO) for Solving the Traveling Salesman Problem

1 Department of Information System, College of Computer Engineering and Science, Prince Sattam Bin Abdulaziz University, Al Kharj 11942, Saudi Arabia
2 Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
* Author to whom correspondence should be addressed.
Sensors 2019, 19(8), 1837; https://doi.org/10.3390/s19081837
Submission received: 21 March 2019 / Revised: 7 April 2019 / Accepted: 14 April 2019 / Published: 17 April 2019

Abstract

This paper presents an adaptation of the flying ant colony optimization (FACO) algorithm to solve the traveling salesman problem (TSP). This new modification is called dynamic flying ant colony optimization (DFACO). FACO was originally proposed to solve the quality of service (QoS)-aware web service selection problem. Many researchers have addressed the TSP, but most solutions could not avoid the stagnation problem. In FACO, a flying ant deposits a pheromone by injecting it from a distance; therefore, not only the nodes on the path but also the neighboring nodes receive the pheromone. The amount of pheromone a neighboring node receives is inversely proportional to the distance between it and the node on the path. In this work, we modified the FACO algorithm to make it suitable for the TSP in several ways. For example, the number of neighboring nodes that received pheromones varied depending on the quality of the solution compared to the rest of the solutions. This helped to balance the exploration and exploitation strategies. We also embedded the 3-Opt algorithm to improve the solution by mitigating the effect of the stagnation problem. Moreover, the colony contained a combination of regular and flying ants. These modifications aim to help the DFACO algorithm obtain better solutions in less processing time and avoid getting stuck in local minima. This work compared DFACO with (1) ant colony optimization (ACO) and five different methods using 24 TSP datasets and (2) parallel ACO with the 3-Opt algorithm (PACO-3Opt) using 21 TSP datasets. The empirical results showed that DFACO achieved the best results compared with ACO and the five different methods for most of the datasets (23 out of 24) in terms of the quality of the solutions. Further, it achieved better results compared with PACO-3Opt for most of the datasets (20 out of 21) in terms of solution quality and execution time.

1. Introduction

The traveling salesman problem (TSP) [1] involves finding the shortest tour distance for a salesperson who wants to visit each city in a group of fully connected cities exactly once. TSP is a discrete optimization problem. It is a classic example of a category of computing problems known as NP-hard problems [2,3]. Although there are simple algorithms for solving these problems, these algorithms require exponential time, which makes them impractical for solving large-size problems. Hence, metaheuristic optimization algorithms are usually applied to find good solutions, although these solutions may not be optimal.
The TSP can be used to model several wireless sensor network (WSN) problems [4]. In a WSN, the sensors are deployed in a sensing field to collect data and send these data wirelessly to the source node. There are two ways to increase the lifetime of the sensors: first, by reducing the size and amount of the data [5,6], and second, by reducing the cost of transferring the data [4]. For example, a good solution to the TSP can also be considered an efficient diffusion method for reducing the transfer cost.
Many methods, both heuristic and hybrid, have been proposed for solving the TSP, but most of them were unable to avoid the stagnation problem, or they obtained good solutions only at the cost of long execution times [7]. In this work, we enhanced the ant colony optimization (ACO) algorithm based on imaginary ants that can fly. These ants deposit their pheromones on neighboring nodes while flying by injecting them from a distance. This allows not only the nodes on a good path but also their neighboring nodes to receive some pheromone. The algorithm also makes use of the 3-Opt algorithm to help avoid getting trapped in a local minimum.
The main contributions of this work with respect to the flying ACO (FACO) algorithm are as follows: (1) proposing a dynamic neighbor selection mechanism to balance exploration and exploitation, (2) reducing the execution time of FACO by making only half of the ants flying ants, and (3) adapting the flying process to work with the TSP.
The main contributions of this work in general are as follows: (1) obtaining better-quality solutions, (2) significantly reducing the execution time, and (3) avoiding getting stuck at a local minimum.
The paper is organized as follows: in Section 2, we discuss related works; Section 3 presents the proposed enhancement of the ACO algorithm; Section 4 shows the experimental results; and the conclusion is shown in Section 5.

2. Related Work

This section reviews important and recent works in this area.

2.1. Metaheuristic Solutions

The TSP has been widely used as a benchmark problem to evaluate many metaheuristic and nature-inspired algorithms. Chen and Chien [8] presented a hybrid method using genetic simulated annealing, ACO, and particle swarm optimization. Each algorithm performs a specific task: ACO generates the initial solutions for the genetic simulated annealing algorithm, which searches for better solutions based on the initial solutions. Then, the better solutions are used to update the pheromone trails. Finally, the particle swarm optimization algorithm exchanges the pheromone information after a predefined number of cycles. Deng et al. [9] presented a hybrid method that combined a genetic algorithm and ACO. They used a multipopulation strategy to enhance the local search. In addition, they used chaotic optimization to avoid the slow convergence problem of ACO. They controlled the trade-off between exploration and exploitation by using an adaptive control strategy to distribute the pheromones uniformly. Eskandari et al. [10] proposed a local solution enhancement for ACO based on mutation operators. The local solution is mutated to generate a new solution, which is kept if it is better than the original solution. The mutation operators include swap, insertion, and reversion. A comparison between the ACO and cuckoo search (CS) algorithms for solving the TSP was conducted in [11]; in this comparison, only five city plans were used. Mavrovouniotis et al. [12] used local search operators to support the ACO algorithm. This method was used for the dynamic TSP. The best solution from ACO is passed to local search operators that remove and insert cities to generate a new solution. Alves et al. [13] introduced an adapted ACO algorithm based on social interaction called social interaction ant colony optimization (SIACO). The social interaction was introduced to enhance pheromone deposition. Han et al. [14] proposed a niching ant colony system (NACS) algorithm. This algorithm enhances the ACO algorithm in two ways: it applies a niching strategy and uses multiple pheromone deposition. Pintea et al. [15] introduced an enhancement of ACO based on clustering, where the cities are divided into clusters and ACO is used to find the minimum cost for each cluster. Xiao et al. [16] proposed a multistage ACO algorithm that reduces the initial pheromone concentration based on the nearest-neighbor method. Then, the mean cross-evolution strategy is used to enhance the solution space. Zhou et al. [17] proposed an ACO algorithm for large-scale problems that utilizes graphics processing units.
The 3-Opt algorithm is widely used to enhance the local search of metaheuristic algorithms. Mahi et al. [18] presented a hybrid algorithm using particle swarm optimization and ACO. Particle swarm optimization tunes the α and β parameters, which affect the performance of ACO; the 3-Opt algorithm is then used to avoid the stagnation problem. Gülcü et al. [7] introduced a parallel cooperative method that hybridizes parallel ACO (PACO) and the 3-Opt algorithm. The proposed algorithm, named PACO-3Opt, uses a set of parallel colonies organized in a master–slave paradigm. The 3-Opt algorithm is invoked by these colonies after a predefined number of iterations. Khan et al. [19] used 2-Opt and 3-Opt with an artificial bee colony (ABC) algorithm. They also created new paths by combining swap sequences with ABC.
Ouaarab et al. [20] proposed a discretized version of the CS algorithm and also added a new cuckoo category. This new category aims to manage the exploration and exploitation by using Lévy flights and multiple searching methods. Osaba et al. [21] presented a discrete version of the bat algorithm where each bat moves based on the best bat. If a bat is located far from the best bat, then the movement will be large, but if it is close to the best bat, the step will be small. Choong et al. [22] introduced a hybrid algorithm of the ABC algorithm and modified choice function. The modified choice function is used to regulate the neighborhood selection of employed and onlooker bees.
Many works in the literature present improvements on existing algorithms by suggesting methods to control the tradeoff between exploitation and exploration, which are the two main steps that form the basis of many metaheuristic approaches and nature-inspired algorithms. In exploitation, the accumulated knowledge about the search space is used to guide the search, while in exploration, risk is taken to explore the unfamiliar region of the search space, in the hope that this region may contain a solution better than the known solutions [23].
Researchers usually use hybrid methods to merge the capabilities of different algorithms; however, the resulting methods can become too complex and sometimes even incomprehensible. By contrast, in this work we aimed to enhance the ACO algorithm by adding extra procedures while keeping the method understandable and easy to use.

2.2. 3-Opt Algorithm

The 3-Opt algorithm was introduced to solve the TSP. It replaces three edges of the current tour with another three edges to produce a new tour [24] and keeps the new tour if it is shorter. The process is repeated until no further improvement is found. 3-Opt is a local search algorithm; therefore, it is used to help ACO avoid local minima [7] by optimizing the solutions locally [25].
There are $\binom{n}{3}$ possible ways of choosing three edges to remove from a tour with $n$ cities [26]. For each choice of three edges, the tour can be reconnected in eight possible ways, shown as (a) to (h) in Figure 1 [27].
The experiments reported in [26,28] show that combining the 3-Opt algorithm with other metaheuristic algorithms improves the solutions found by these algorithms. This is because the 3-Opt algorithm mitigates the effect of local minima [28].
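To make the local search concrete, the following Java sketch performs one simplified 3-Opt improvement pass: for every triple of cut points it tries the reconnections obtainable by reversing one or both of the enclosed segments and keeps the shortest tour found. It is an illustrative sketch only; it does not enumerate all eight reconnection cases of Figure 1, and the distance matrix and helper names are assumptions rather than the implementation used in this work.

```java
/** Simplified 3-Opt sketch (illustrative, not the implementation used here). */
public class ThreeOptSketch {

    // Total length of a closed tour given a symmetric distance matrix d.
    static double tourLength(int[] tour, double[][] d) {
        double len = 0.0;
        for (int i = 0; i < tour.length; i++) {
            len += d[tour[i]][tour[(i + 1) % tour.length]];
        }
        return len;
    }

    // Reverse tour[from..to] in place (inclusive indices).
    static void reverse(int[] tour, int from, int to) {
        while (from < to) {
            int tmp = tour[from]; tour[from] = tour[to]; tour[to] = tmp;
            from++; to--;
        }
    }

    // One improvement pass over all triples of cut points (i, j, k).
    static int[] threeOptPass(int[] tour, double[][] d) {
        int n = tour.length;
        int[] best = tour.clone();
        double bestLen = tourLength(best, d);
        for (int i = 0; i < n - 2; i++) {
            for (int j = i + 1; j < n - 1; j++) {
                for (int k = j + 1; k < n; k++) {
                    // Candidate reconnections: reverse the first segment,
                    // the second segment, or both.
                    int[][] candidates = { best.clone(), best.clone(), best.clone() };
                    reverse(candidates[0], i + 1, j);
                    reverse(candidates[1], j + 1, k);
                    reverse(candidates[2], i + 1, j);
                    reverse(candidates[2], j + 1, k);
                    for (int[] cand : candidates) {
                        double len = tourLength(cand, d);
                        if (len < bestLen) {      // keep the new tour only if it is shorter
                            bestLen = len;
                            best = cand;
                        }
                    }
                }
            }
        }
        return best;
    }
}
```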

2.3. Ant Colony Optimization

ACO was inspired by the way real ants forage for food. Initially, real ants forage for food randomly, depositing chemical substances called pheromones on their paths. The path between the colony and the nearest food source tends to receive more pheromone, which attracts more ants to follow the same path. Once the food source is exhausted, the ants abandon the path and the pheromones evaporate, forcing the ants to start searching for another food source randomly.
The ACO algorithm [28,29,30] simulates the foraging behavior of real ants. It initializes all ants randomly, and each ant searches for a potential solution. In addition, ACO assigns an amount of pheromone to each edge of the solution path that is proportional to the quality of the solution.
In each iteration, each ant moves to unvisited nodes in order to construct a potential solution. The next node to visit is selected according to a probability distribution that favors the nodes with large amounts of pheromone ($\tau_{ij}$). ACO also takes into account a local heuristic function ($\eta_{ij}$). Equation (1) shows the solution generation formula, which computes the probability that ant $k$ selects the edge from node $i$ to node $j$:

$$P_{ij}^{k}(t) = \frac{[\tau_{ij}]^{\alpha}\,[\eta_{ij}]^{\beta}}{\sum_{l \in N_i^k} [\tau_{il}]^{\alpha}\,[\eta_{il}]^{\beta}} \quad \text{if } j \in N_i^k \tag{1}$$

where $\alpha$ and $\beta$ are coefficient parameters that determine the importance of the pheromone value ($\tau_{ij}$) and the value of the local heuristic ($\eta_{ij}$). The local heuristic $\eta_{ij}$ is problem dependent; for the TSP, it is defined as 1 divided by the length of the edge between nodes $i$ and $j$. $N_i^k$ is the list of nodes not yet visited by ant $k$ when it is at node $i$.
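As an illustration of Equation (1), the following Java sketch selects the next node by roulette-wheel sampling over the pheromone and heuristic weights; the variable and parameter names are illustrative assumptions, not taken from any particular ACO implementation.

```java
import java.util.List;
import java.util.Random;

/** Roulette-wheel selection of the next node according to Equation (1). */
class TransitionRule {
    static int selectNext(int i, List<Integer> unvisited,
                          double[][] tau, double[][] eta,
                          double alpha, double beta, Random rng) {
        double[] weights = new double[unvisited.size()];
        double total = 0.0;
        for (int idx = 0; idx < unvisited.size(); idx++) {
            int j = unvisited.get(idx);
            // Numerator of Equation (1): pheromone^alpha * heuristic^beta.
            weights[idx] = Math.pow(tau[i][j], alpha) * Math.pow(eta[i][j], beta);
            total += weights[idx];
        }
        // Sample proportionally to the weights (the denominator of Equation (1) normalizes them).
        double r = rng.nextDouble() * total;
        double cumulative = 0.0;
        for (int idx = 0; idx < weights.length; idx++) {
            cumulative += weights[idx];
            if (r <= cumulative) {
                return unvisited.get(idx);
            }
        }
        return unvisited.get(unvisited.size() - 1); // guard against rounding error
    }
}
```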
The ants update the pheromones locally using Equation (2). This update is called the local pheromone update:
$$\tau_{ij}(t+1) = (1-\rho)\,\tau_{ij}(t) + \rho\,\tau_0 \tag{2}$$

where $\tau_0$ is the initial pheromone value and $\rho$ is the evaporation rate.
ACO selects the best solution greedily. The best ant updates the pheromone trails on its path using Equation (3). This process is called the global pheromone update:
$$\tau_{ij}(t+1) = (1-\rho)\,\tau_{ij}(t) + \rho\,\Delta\tau_{ij}(t) \tag{3}$$

where $\Delta\tau_{ij}(t)$ is defined as

$$\Delta\tau_{ij}(t) = \begin{cases} \dfrac{1}{L_{gb}(t)} & \text{if arc } (i,j) \text{ belongs to the best tour} \\ 0 & \text{otherwise} \end{cases} \tag{4}$$

where $L_{gb}(t)$ is the tour length of the best ant.
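The two updates can be summarized in a short sketch: Equation (2) is applied to an edge right after an ant traverses it, and Equations (3) and (4) are applied to the edges of the best tour at the end of the iteration. Mirroring the matrix for the symmetric TSP is an implementation assumption made for the example.

```java
/** Sketch of the local and global pheromone updates (Equations (2)-(4)). */
class PheromoneUpdate {

    // Local update (Equation (2)), applied to edge (i, j) after it is traversed.
    static void localUpdate(double[][] tau, int i, int j, double rho, double tau0) {
        tau[i][j] = (1.0 - rho) * tau[i][j] + rho * tau0;
        tau[j][i] = tau[i][j]; // symmetric TSP: keep the matrix mirrored
    }

    // Global update (Equations (3) and (4)), applied only to the best tour's edges.
    static void globalUpdate(double[][] tau, int[] bestTour, double bestLength, double rho) {
        double deposit = 1.0 / bestLength; // Delta tau of Equation (4)
        for (int k = 0; k < bestTour.length; k++) {
            int i = bestTour[k];
            int j = bestTour[(k + 1) % bestTour.length];
            tau[i][j] = (1.0 - rho) * tau[i][j] + rho * deposit;
            tau[j][i] = tau[i][j];
        }
    }
}
```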
These ACO algorithm processes are illustrated in Figure 2 [31]. The ACO algorithm suffers from a stagnation problem [32] because the amounts of pheromone are accumulated on the explored paths, and as a result, the chances of exploring other paths decrease. The FACO algorithm, discussed next, addresses this issue.

2.4. Flying Ant Colony Optimization

In [33], a flying ant colony algorithm was proposed to solve the quality of service (QoS)-aware web service composition problem. Web service composition involves selecting the best combination of web services, where each service is selected from a set of candidate services that fulfill a certain task. The solutions are evaluated according to a set of QoS properties, such as reliability, cost, response time, and availability.
The algorithm assumes that, in addition to walking normal ants, there are also flying ants. Flying ants inject their pheromones from a distance, so that not only the nodes on the path receive some pheromones but also their neighboring nodes. The amount of pheromone a neighboring node receives is inversely proportional to the distance between it and the node on the path. This makes the ants more likely to explore these nodes during future iterations, which encourages exploration.
Since determining the nearest nodes might be expensive in terms of execution time, only the ant that finds the best solution in each iteration is considered a flying ant. The rest of the ants are dealt with in the usual way. The flying ant then determines the nearest neighboring nodes (web services) by using Equation (5) to calculate the distance between two web services $x$ and $y$. Equation (5) considers two web services to be similar (close to each other) if they have similar QoS properties:
$$d(x,y) = \sqrt{(C_x - C_y)^2 + (RT_x - RT_y)^2 + (A_x - A_y)^2 + (R_x - R_y)^2} \tag{5}$$

where $C$, $RT$, $A$, and $R$ are the cost, response time, availability, and reliability, respectively; $x$ is the best web service obtained by the best ant in task $t_{i+1}$; $y$ is one of the neighboring web services of $x$ in task $t_{i+1}$; and $x \neq y$.
The nearest neighbors receive an amount of pheromone that is inversely proportional to their distance from the best node. Equation (6) describes how much pheromone a node receives:
$$\tau_{(i+1,x)(i+1,l)}(t+1) = \tau_{(i+1,x)(i,l)}(t) + \frac{\tau_{(i+1,x)(i+1,x)}(t+1)}{1 + d\big(\eta_{(i+1,x)(i+1,l)}\big)} \tag{6}$$

where $l$ is the neighbor index, $l \in [1, NS]$, and $NS$ is the number of neighbors; $\tau_{(i,x)(i+1,x)}(t+1)$ represents the pheromone trail resulting from the global pheromone update; and $d(\eta_{(i+1,x)(i+1,l)})$ is the normalized distance between the web service $x$ in task $t_{i+1}$ and its neighbor web service $l$ in the same task.
The distance (local heuristic) $\eta_{ij}$ is normalized according to Equation (7):

$$d\big(\eta_{(i+1,x)(i+1,l)}\big) = \frac{\eta_{(i+1,x)(i+1,l)}}{\sum_{q=1}^{NS} \eta_{(i+1,x)(i+1,q)}} \tag{7}$$
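For illustration, the following sketch combines Equations (5)–(7): the QoS distance of Equation (5) is computed between the best service and each of its neighbors, normalized as in Equation (7), and used to scale the pheromone injected into each neighbor as in Equation (6). The Service record and the flat arrays used to hold the trails are assumptions made for this example, not the data structures of the original FACO implementation.

```java
/** Sketch of the flying-ant pheromone injection of Equations (5)-(7). */
class FlyingInjection {

    // QoS properties of one candidate web service (illustrative record).
    record Service(double cost, double responseTime, double availability, double reliability) {}

    // Equation (5): Euclidean distance between two services in QoS space.
    static double distance(Service x, Service y) {
        return Math.sqrt(Math.pow(x.cost() - y.cost(), 2)
                + Math.pow(x.responseTime() - y.responseTime(), 2)
                + Math.pow(x.availability() - y.availability(), 2)
                + Math.pow(x.reliability() - y.reliability(), 2));
    }

    // Equations (6) and (7): tauNeighbors[l] is the trail toward neighbor l,
    // tauBest the globally updated trail toward the best service, and dist[l]
    // the raw distance of Equation (5) between the best service and neighbor l.
    static void inject(double[] tauNeighbors, double tauBest, double[] dist) {
        double sum = 0.0;
        for (double d : dist) sum += d;
        for (int l = 0; l < tauNeighbors.length; l++) {
            double normalized = sum > 0 ? dist[l] / sum : 0.0; // Equation (7)
            tauNeighbors[l] += tauBest / (1.0 + normalized);    // Equation (6)
        }
    }
}
```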
Figure 3 illustrates the process of the FACO algorithm [33], which was added to the process of ACO.

3. Dynamic Flying Ant Colony Optimization (DFACO) Algorithm

Many methods, both heuristic and hybrid, have been proposed for solving the TSP, but most of them cannot avoid the stagnation problem, or they obtain good solutions only after a long execution time [7]. In this work, we propose an enhanced ACO algorithm that finds better solutions in less computation time and includes a mechanism to avoid the stagnation problem.
In this section, we present a modification of the FACO algorithm to make it suitable for the TSP in the following ways.
First, the number of neighboring nodes in FACO was static and was set empirically. In DFACO, the number of neighbors that may be injected with pheromones was dynamic: it varied in each iteration depending on the quality of the best solution reached so far compared with the other solutions. The number of neighbors was determined based on two cases: (1) if the best solution was only slightly better than the other solutions, the number of neighbors should be large so that more nodes receive pheromones, which encourages exploration in future iterations; (2) if the best solution was considerably better than the other solutions, the number of neighbors should be small, which encourages exploitation in future iterations.
The number of nearest neighbors, NS, was determined according to the following formula:
$$NS = \left(\frac{L_{gb}(t)}{\sum_{k=1}^{S} L_{kl}(t)}\right) \times N \tag{8}$$

where $L_{gb}(t)$ is the tour length of the global best tour at time $t$, $L_{kl}(t)$ is the tour length of ant $k$ at time $t$, $S$ is the number of ants, and $N$ is the number of cities.
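A minimal sketch of this computation, under the reconstruction of Equation (8) given above, is shown below; lengths[k] holds the tour length of ant k in the current iteration, and the variable names are illustrative.

```java
/** Dynamic neighbor count NS (sketch of Equation (8)). */
class NeighborCount {
    static int neighborCount(double bestLength, double[] lengths, int numCities) {
        double sum = 0.0;
        for (double lk : lengths) sum += lk;   // sum of the tour lengths of the S ants
        int ns = (int) Math.round((bestLength / sum) * numCities);
        return Math.max(1, ns);                // keep at least one neighbor
    }
}
```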
The second modification aimed to reduce the execution time. If all ants were flying ants (such as in FACO), we would have to determine many neighbors for each node on the best path, which may substantially increase the execution time. Therefore, here, only 50% of the ants were flying ants and the rest were normal walking ants.
The third modification aimed to encourage exploration at early stages and exploitation at later stages. This modification is similar to the corresponding mechanism in FACO, but we adapted the process to the TSP. The intuition is that at early stages we have no idea about the location of the best solution in the search space, and therefore ants should be encouraged to explore it; at later iterations, the region that contains the best solution is more likely to have been located, and therefore exploitation should be encouraged. This was achieved by injecting pheromones at farther neighbors during early iterations, while at later iterations the pheromones were injected at only the nearest neighbors. Equation (6) was used to determine the amount of pheromone for each neighbor.
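One possible realization of this schedule is sketched below. The linear shift of the injection window from the farthest toward the nearest neighbors is an assumption made purely for illustration; the exact rule used in DFACO is the one embedded in the algorithm of Figure 4.

```java
import java.util.Arrays;

/** Sketch of a stage-dependent choice of which neighbors receive pheromone. */
class InjectionSchedule {
    // sortedNeighborIds must be ordered from nearest to farthest; ns is the number
    // of neighbors to inject (Equation (8)); iteration runs up to maxIterations.
    static int[] neighborsToInject(int[] sortedNeighborIds, int ns,
                                   int iteration, int maxIterations) {
        ns = Math.min(ns, sortedNeighborIds.length);
        double progress = (double) iteration / maxIterations;   // 0 at the start, 1 at the end
        int start = (int) Math.round((sortedNeighborIds.length - ns) * (1.0 - progress));
        return Arrays.copyOfRange(sortedNeighborIds, start, start + ns);
    }
}
```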
As a final modification, we embedded the 3-Opt algorithm in FACO to reduce the chances of getting stuck at a local minimum.
Figure 4 shows the complete algorithm in detail, and Figure 5 illustrates the process of the DFACO algorithm.

4. Experimental Results

We conducted empirical experiments using TSP datasets from TSPLIB [34] to test the performance of the proposed algorithm. We performed three comparisons. First, we compared DFACO combined with the 3-Opt algorithm with ACO combined with the 3-Opt algorithm using 24 datasets. Second, we compared DFACO performance with PACO-3Opt [7] in detail using 21 datasets. Third, we compared DFACO performance with five more recent methods [8,9,18,20,21] using 24 datasets.
We implemented DFACO in Java and used the ACOTSPJava (http://adibaba.github.io/ACOTSPJava/) implementation of ACO.
We compared the methods with respect to the average of the best solutions for all runs (Mean), the standard deviation (SD), and the best solution for each run (Best). We also compared DFACO, PACO-3Opt, and ACO with respect to execution time in seconds.
Table 1 lists the parameter values of both algorithms, which were empirically determined. These values were also used to compare DFACO to all of the other methods.
Both ACO and DFACO were run for 100 iterations (Z) and each dataset was used in 30 independent experiments. Table 2 lists the comparison results for DFACO and ACO. The first column shows the name of the TSP datasets. The second column shows the best-known solution (BKS) as reported on the TSPLIB website (http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/STSP.html). Bold font indicates the best results.
Table 2 indicates that DFACO was able to find the BKS in all runs for 16 datasets with zero SD, while ACO was able to find the BKS in all runs for 15 datasets with zero SD. However, the proposed method obtained the BKS in all runs faster than ACO for four datasets (bier127, ch130, kroB150, and kroA200), while ACO obtained the BKS faster than DFACO for two datasets (eil101 and ch150).
Figure 6 shows the average of the best solutions, while Figure 7 presents the execution time in seconds for all datasets in the table. As can be seen in Figure 6, DFACO was better than ACO in terms of the average of the best solutions, obtaining a shorter distance on average for all datasets. DFACO was also faster than ACO by 54.7 s per dataset on average.
For the remaining eight datasets, DFACO found the best Mean solution for seven datasets, while ACO found the best Mean solution for one dataset (fl1400). These results were found to be statistically significant according to the Wilcoxon signed-rank test, with N = 8 and p ≤ 0.05. A t-test was used to see if the results were statistically significant in the 30 independent runs for each one of the eight datasets. The results indicate that the proposed method’s results were statistically significant for four out of eight datasets, namely, for the datasets rat575, rat783, rl1323, and d1655. We did not perform a t-test for the remaining 16 datasets because both algorithms achieved the BKS.
For the second set of comparisons, we followed the comparison method of PACO-3Opt [7]. In the PACO-3Opt experiments, the TSP datasets were divided by problem size into small-scale and large-scale datasets (the sizes of the large-scale datasets range from 400 to 783 cities). The small-scale group contains 10 TSP datasets (shown in Table 3), and the large-scale group contains 11 TSP datasets (shown in Table 4). Table 3 presents the experimental results of comparing DFACO with PACO-3Opt for the small-scale TSP instances. The table shows the best, worst, and average values of both algorithms; boldface text indicates the better results. The results reveal that DFACO obtained the optimum distances for all datasets in terms of best, worst, and average values, whereas PACO-3Opt obtained the optimum distances for only six, one, and one dataset in terms of best, worst, and average values, respectively. With regard to execution time, Table 3 shows that DFACO significantly reduced the execution time and obtained better results faster than PACO-3Opt for all TSP instances.
Table 4 presents the results of comparing DFACO with PACO-3Opt for large-scale TSP instances. It also shows the best, worst, and average values of both algorithms for comparison. Boldface text indicates the better results for both algorithms. The results reveal that DFACO obtained better distances for all datasets in terms of best, worst, and average values, except for rat783. With regard to execution time, Table 4 shows that DFACO significantly reduced the execution time and obtained better results faster than PACO-3Opt for all TSP instances except for rat783, where DFACO was faster than PACO-3Opt but did not obtain better results.
Table 5 shows the third type of comparison, between DFACO and five recent methods [8,9,18,20,21]. In this set of experiments, we used 24 TSP datasets and 30 independent runs.
The table reveals that the proposed algorithm achieved the best results for all datasets except one (rat783), for which Deng’s method [9] achieved the best result. Also, DFACO found the BKS for 18 datasets, and for one dataset (rat575), it found an even better solution than the BKS.
Figures 8–31 show the average of the best solutions for each TSP instance obtained by the different algorithms. These figures visualize the results of the 24 datasets shown in Table 5. From these figures, it is clear that DFACO performed better than the other algorithms for all datasets except rat783.
Table 6 compares DFACO with five recent methods with respect to the percentage deviation of the average solution to the BKS value (PDav) and the percentage deviation of the best solution to the BKS value (PDbest) in the experimental results. PDav and PDbest were calculated using Equations (9) and (10), respectively:
$$PD_{av} = \frac{Mean - BKS}{BKS} \times 100 \tag{9}$$

$$PD_{best} = \frac{Best - BKS}{BKS} \times 100 \tag{10}$$
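For instance, the two measures can be computed directly from the Mean and Best values reported in Table 5 (a trivial sketch; variable names are illustrative):

```java
/** Percentage deviations of Equations (9) and (10). */
class DeviationMetrics {
    static double pdAv(double mean, double bks)   { return (mean - bks) / bks * 100.0; }
    static double pdBest(double best, double bks) { return (best - bks) / bks * 100.0; }
}
```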
The results revealed that the values of PDav and PDbest for DFACO were better than those for the other methods on all datasets except one (rat783), for which Deng’s method [9] was better.

5. Conclusions

This paper proposed a modified FACO algorithm for the TSP. We modified FACO in several ways to reduce the execution time and to better balance exploration and exploitation. For example, the number of neighboring nodes receiving pheromones varied depending on the quality of the solution compared to the other solutions. This helped to balance the exploration and exploitation strategies. We also embedded the 3-Opt algorithm to improve the solution by mitigating the effect of the stagnation problem. Moreover, the colony contained a combination of normal and flying ants. These modifications aimed to achieve better solutions with less processing time and to avoid getting stuck in local minima.
DFACO was compared with (1) ACO and five recent methods for the TSP [8,9,18,20,21] using 24 TSP datasets and (2) PACO-3Opt using 21 TSP datasets. Our empirical results showed that DFACO achieved the best results compared to ACO and the five different methods for most datasets (23 out of 24) in terms of solution quality. Also, DFACO achieved the best results compared with PACO-3Opt for most datasets (20 out of 21) in terms of solution quality and execution time. Furthermore, for one dataset, DFACO achieved a better solution than the best-known solution.

Author Contributions

F.D. and K.E.H. proposed the idea and wrote the manuscript. All authors took part in designing the solution. F.D. implemented the algorithms and compared them. H.M. and H.A. validated the implementation and analyzed the results. All authors reviewed and proofread the final manuscript.

Funding

The authors extend their appreciation to the Deanship of Scientific Research at King Saud University for funding this work through research group No. (RG-1439-35).

Acknowledgments

The authors thank the Deanship of Scientific Research and RSSU at King Saud University for their technical support.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; and in the decision to publish the results.

References

  1. Lenstra, J.K.; Kan, A.R.; Lawler, E.L.; Shmoys, D. The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization; John Wiley & Sons: Hoboken, NJ, USA, 1985. [Google Scholar]
  2. Papadimitriou, C.H. The Euclidean travelling salesman problem is NP-complete. Theor. Comput. Sci. 1977, 4, 237–244. [Google Scholar] [CrossRef]
  3. Garey, M.R.; Graham, R.L.; Johnson, D.S. Some NP-complete geometric problems. In Proceedings of the 8th Annual ACM Symposium on Theory of Computing, Hershey, PA, USA, 3–5 May 1976; pp. 10–22. [Google Scholar]
  4. Tito, J.E.; Yacelga, M.E.; Paredes, M.C.; Utreras, A.J.; Wójcik, W.; Ussatova, O. Solution of travelling salesman problem applied to Wireless Sensor Networks (WSN) through the MST and B&B methods. In Proceedings of the Photonics Applications in Astronomy, Communications, Industry, and High-Energy Physics Experiments, Wilga, Poland, 3–10 June 2018; p. 108082F. [Google Scholar]
  5. Kravchuk, S.; Minochkin, D.; Omiotek, Z.; Bainazarov, U.; Weryńska-Bieniasz, R.; Iskakova, A. Cloud-based mobility management in heterogeneous wireless networks. In Proceedings of the Photonics Applications in Astronomy, Communications, Industry, and High Energy Physics Experiments, Wilga, Poland, 28 May–6 June 2017; p. 104451W. [Google Scholar]
  6. Farrugia, L.I. Wireless Sensor Networks; Nova Science Publishers, Inc.: New York, NY, USA, 2011. [Google Scholar]
  7. Gülcü, Ş.; Mahi, M.; Baykan, Ö.K.; Kodaz, H. A parallel cooperative hybrid method based on ant colony optimization and 3-Opt algorithm for solving traveling salesman problem. Soft Comput. 2018, 22, 1669–1685. [Google Scholar] [CrossRef]
  8. Chen, S.-M.; Chien, C.-Y. Solving the traveling salesman problem based on the genetic simulated annealing ant colony system with particle swarm optimization techniques. Expert Syst. Appl. 2011, 38, 14439–14450. [Google Scholar] [CrossRef]
  9. Deng, W.; Zhao, H.; Zou, L.; Li, G.; Yang, X.; Wu, D. A novel collaborative optimization algorithm in solving complex optimization problems. Soft Comput. 2017, 21, 4387–4398. [Google Scholar] [CrossRef]
  10. Eskandari, L.; Jafarian, A.; Rahimloo, P.; Baleanu, D. A Modified and Enhanced Ant Colony Optimization Algorithm for Traveling Salesman Problem. In Mathematical Methods in Engineering; Springer: Berlin/Heidelberg, Germany, 2019; pp. 257–265. [Google Scholar]
  11. Hasan, L.S. Solving Traveling Salesman Problem Using Cuckoo Search and Ant Colony Algorithms. J. Al-Qadisiyah Comput. Sci. Math. 2018, 10, 59–64. [Google Scholar]
  12. Mavrovouniotis, M.; Müller, F.M.; Yang, S. Ant colony optimization with local search for dynamic traveling salesman problems. IEEE Trans. Cybern. 2017, 47, 1743–1756. [Google Scholar] [CrossRef]
  13. de S Alves, D.R.; Neto, M.T.R.S.; Ferreira, F.d.S.; Teixeira, O.N. SIACO: A novel algorithm based on ant colony optimization and game theory for travelling salesman problem. In Proceedings of the 2nd International Conference on Machine Learning and Soft Computing, Phu Quoc Island, Viet Nam, 2–4 February 2018; pp. 62–66. [Google Scholar]
  14. Han, X.-C.; Ke, H.-W.; Gong, Y.-J.; Lin, Y.; Liu, W.-L.; Zhang, J. Multimodal optimization of traveling salesman problem: A niching ant colony system. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, Kyoto, Japan, 15–19 July 2018; pp. 87–88. [Google Scholar]
  15. Pintea, C.-M.; Pop, P.C.; Chira, C. The generalized traveling salesman problem solved with ant algorithms. Complex Adapt. Syst. Model. 2017, 5, 8. [Google Scholar] [CrossRef]
  16. Xiao, Y.; Jiao, J.; Pei, J.; Zhou, K.; Yang, X. A Multi-strategy Improved Ant Colony Algorithm for Solving Traveling Salesman Problem. In Proceedings of the IOP Conference Series: Materials Science and Engineering, Shanxi, China, 18–20 May 2018; p. 042101. [Google Scholar]
  17. Zhou, Y.; He, F.; Qiu, Y. Dynamic strategy based parallel ant colony optimization on GPUs for TSPs. Sci. China Inf. Sci. 2017, 60, 068102. [Google Scholar] [CrossRef]
  18. Mahi, M.; Baykan, Ö.K.; Kodaz, H. A new hybrid method based on particle swarm optimization, ant colony optimization and 3-opt algorithms for traveling salesman problem. Appl. Soft Comput. 2015, 30, 484–490. [Google Scholar] [CrossRef]
  19. Khan, I.; Maiti, M.K. A swap sequence based Artificial Bee Colony algorithm for Traveling Salesman Problem. Swarm Evol. Comput. 2019, 44, 428–438. [Google Scholar] [CrossRef]
  20. Ouaarab, A.; Ahiod, B.; Yang, X.-S. Discrete cuckoo search algorithm for the travelling salesman problem. Neural Comput. Appl. 2014, 24, 1659–1669. [Google Scholar] [CrossRef]
  21. Osaba, E.; Yang, X.-S.; Diaz, F.; Lopez-Garcia, P.; Carballedo, R. An improved discrete bat algorithm for symmetric and asymmetric Traveling Salesman Problems. Eng. Appl. Artif. Intell. 2016, 48, 59–71. [Google Scholar] [CrossRef]
  22. Choong, S.S.; Wong, L.-P.; Lim, C.P. An artificial bee colony algorithm with a modified choice function for the Traveling Salesman Problem. Swarm Evol. Comput. 2019, 44, 622–635. [Google Scholar] [CrossRef]
  23. Civicioglu, P.; Besdok, E. A conceptual comparison of the Cuckoo-search, particle swarm optimization, differential evolution and artificial bee colony algorithms. Artif. Intell. Rev. 2013, 39, 315–346. [Google Scholar] [CrossRef]
  24. Lin, S. Computer solutions of the traveling salesman problem. Bell Syst. Tech. J. 1965, 44, 2245–2269. [Google Scholar] [CrossRef]
  25. Dorigo, M.; Stützle, T. Ant colony optimization: Overview and recent advances. In Handbook of Metaheuristics; Springer: Berlin/Heidelberg, Germany, 2019; pp. 311–351. [Google Scholar]
  26. Reinelt, G. The Traveling Salesman: Computational Solutions for TSP Applications; Springer: Berlin/Heidelberg, Germany, 1994. [Google Scholar]
  27. Blazinskas, A.; Misevicius, A. Combining 2-opt, 3-opt and 4-opt with k-swap-kick perturbations for the traveling salesman problem. In Proceedings of the 17th International Conference on Information and Software Technologies, Kaunas, Lithuania, 27–29 April 2011; pp. 50–401. [Google Scholar]
  28. Dorigo, M.; Gambardella, L.M. Ant colony system: A cooperative learning approach to the traveling salesman problem. IEEE Trans. Evol. Comput. 1997, 1, 53–66. [Google Scholar] [CrossRef]
  29. Dorigo, M.; Gambardella, L.M. Ant colonies for the travelling salesman problem. BioSystems 1997, 43, 73–81. [Google Scholar] [CrossRef]
  30. Gambardella, L.M.; Dorigo, M. Solving Symmetric and Asymmetric TSPs by Ant Colonies. In Proceedings of the International Conference on Evolutionary Computation, Nagoya, Japan, 20–22 May 1996; pp. 622–627. [Google Scholar]
  31. Deepa, O.; Senthilkumar, A. Swarm intelligence from natural to artificial systems: Ant colony optimization. Networks (Graph-Hoc) 2016, 8, 9–17. [Google Scholar]
  32. Aljanaby, A. An Experimental Study of the Search Stagnation in Ants Algorithms. Int. J. Comput. Appl. 2016, 148. [Google Scholar] [CrossRef]
  33. Dahan, F.; El Hindi, K.; Ghoneim, A. An Adapted Ant-Inspired Algorithm for Enhancing Web Service Composition. Int. J. Semant. Web Inf. Syst. 2017, 13, 181–197. [Google Scholar] [CrossRef]
  34. Reinelt, G. TSPLIB—A traveling salesman problem library. Orsa J. Comput. 1991, 3, 376–384. [Google Scholar] [CrossRef]
Figure 1. 3-Opt possible combinations.
Figure 2. Ant colony optimization (ACO) flowchart.
Figure 3. Flying ACO (FACO) flowchart.
Figure 4. The dynamic FACO (DFACO) algorithm.
Figure 5. DFACO flowchart.
Figure 6. DFACO and ACO comparison in terms of Mean as shown in Table 2.
Figure 7. DFACO and ACO comparison in terms of execution time in seconds as shown in Table 2.
Figure 8. Average of best solutions obtained for instance eil51 by all algorithms.
Figure 9. Average of best solutions obtained for instance eil76 by all algorithms.
Figure 10. Average of best solutions obtained for instance eil101 by all algorithms.
Figure 11. Average of best solutions obtained for instance berlin52 by all algorithms.
Figure 12. Average of best solutions obtained for instance bier127 by all algorithms.
Figure 13. Average of best solutions obtained for instance ch130 by all algorithms.
Figure 14. Average of best solutions obtained for instance ch150 by all algorithms.
Figure 15. Average of best solutions obtained for instance rd100 by all algorithms.
Figure 16. Average of best solutions obtained for instance lin105 by all algorithms.
Figure 17. Average of best solutions obtained for instance lin318 by all algorithms.
Figure 18. Average of best solutions obtained for instance kroA100 by all algorithms.
Figure 19. Average of best solutions obtained for instance kroA150 by all algorithms.
Figure 20. Average of best solutions obtained for instance kroA200 by all algorithms.
Figure 21. Average of best solutions obtained for instance kroB100 by all algorithms.
Figure 22. Average of best solutions obtained for instance kroB150 by all algorithms.
Figure 23. Average of best solutions obtained for instance kroB200 by all algorithms.
Figure 24. Average of best solutions obtained for instance kroC100 by all algorithms.
Figure 25. Average of best solutions obtained for instance kroD100 by all algorithms.
Figure 26. Average of best solutions obtained for instance kroE100 by all algorithms.
Figure 27. Average of best solutions obtained for instance rat575 by all algorithms.
Figure 28. Average of best solutions obtained for instance rat783 by all algorithms.
Figure 29. Average of best solutions obtained for instance rl1323 by all algorithms.
Figure 30. Average of best solutions obtained for instance fl1400 by all algorithms.
Figure 31. Average of best solutions obtained for instance d1655 by all algorithms.
Table 1. Control parameters for ACO and DFACO.

Parameter | Value
α | 1
β | 2
ρ | 0.1
τ0 | 0.1
S | 100
Z | 100
Th (for DFACO) | 80
Table 2. Experimental results of ACO and DFACO (ACO columns: ACO with 3-Opt; DFACO columns: the proposed method; Time in seconds).

TSP Instance | BKS | ACO Mean | ACO SD | ACO Best | ACO Time | DFACO Mean | DFACO SD | DFACO Best | DFACO Time
eil51 | 426 | 426.00 | 0.00 | 426.00 | 1 | 426.00 | 0.00 | 426.00 | 1
eil76 | 538 | 538.00 | 0.00 | 538.00 | 3 | 538.00 | 0.00 | 538.00 | 3
eil101 | 629 | 629.00 | 0.00 | 629.00 | 10 | 629.00 | 0.00 | 629.00 | 12
berlin52 | 7542 | 7542.00 | 0.00 | 7542.00 | 1 | 7542.00 | 0.00 | 7542.00 | 1
bier127 | 118,282 | 118,282.00 | 0.00 | 118,282.00 | 56 | 118,282.00 | 0.00 | 118,282.00 | 47
ch130 | 6110 | 6110.00 | 0.00 | 6110.00 | 16 | 6110.00 | 0.00 | 6110.00 | 13
ch150 | 6528 | 6528.00 | 0.00 | 6528.00 | 17 | 6528.00 | 0.00 | 6528.00 | 24
rd100 | 7910 | 7910.00 | 0.00 | 7910.00 | 2 | 7910.00 | 0.00 | 7910.00 | 2
lin105 | 14,379 | 14,379.00 | 0.00 | 14,379.00 | 2 | 14,379.00 | 0.00 | 14,379.00 | 2
lin318 | 42,029 | 42,243.70 | 52.27 | 42,135.00 | 349 | 42,228.03 | 48.30 | 42,123.00 | 381
kroA100 | 21,282 | 21,282.00 | 0.00 | 21,282.00 | 2 | 21,282.00 | 0.00 | 21,282.00 | 2
kroA150 | 26,524 | 26,524.07 | 0.25 | 26,524.00 | 83 | 26,524.03 | 0.18 | 26,524.00 | 57
kroA200 | 29,368 | 29,378.73 | 11.09 | 29,368.00 | 211 | 29,368.00 | 0.00 | 29,368.00 | 168
kroB100 | 22,141 | 22,141.00 | 0.00 | 22,141.00 | 2 | 22,141.00 | 0.00 | 22,141.00 | 2
kroB150 | 26,130 | 26,130.00 | 0.00 | 26,130.00 | 9 | 26,130.00 | 0.00 | 26,130.00 | 7
kroB200 | 29,437 | 29,443.20 | 7.07 | 29,437.00 | 137 | 29,441.60 | 5.16 | 29,437.00 | 185
kroC100 | 20,749 | 20,749.00 | 0.00 | 20,749.00 | 2 | 20,749.00 | 0.00 | 20,749.00 | 2
kroD100 | 21,294 | 21,294.00 | 0.00 | 21,294.00 | 3 | 21,294.00 | 0.00 | 21,294.00 | 3
kroE100 | 22,068 | 22,068.00 | 0.00 | 22,068.00 | 2 | 22,068.00 | 0.00 | 22,068.00 | 2
rat575 | 6773 | 6384.87 | 7.21 | 6368.00 | 443 | 6367.3 | 8.48 | 6348.00 | 126.2
rat783 | 8806 | 10,524.60 | 13.36 | 10,483.00 | 922 | 10,491.9 | 14.62 | 10,455.00 | 151.6
rl1323 | 270,199 | 273,969.57 | 515.57 | 272,639.00 | 2267 | 273,367.94 | 84.78 | 272,487.00 | 2285
fl1400 | 20,127 | 20,291.77 | 29.01 | 20,225.00 | 2471 | 20,300.77 | 35.27 | 20,239.00 | 2452
d1655 | 62,128 | 63,722.20 | 95.53 | 63,520.00 | 1754 | 63,707.87 | 114.84 | 63,428.00 | 1523
Table 3. DFACO and PACO-3Opt comparison on small-scale datasets.

TSP Instance | BKS | DFACO Best | PACO-3Opt Best | DFACO Worst | PACO-3Opt Worst | DFACO Average | PACO-3Opt Average | DFACO Time (s) | PACO-3Opt Time (s)
Eil51 | 426 | 426 | 426 | 426 | 427 | 426 | 426.35 | 1 | 2.39
Berlin52 | 7542 | 7542 | 7542 | 7542 | 7542 | 7542 | 7542 | 1 | 2.1
Rat99 | 1211 | 1211 | 1213 | 1211 | 1225 | 1211 | 1217.14 | 1 | 9.79
Eil76 | 538 | 538 | 538 | 538 | 542 | 538 | 539.85 | 3 | 8.18
St70 | 675 | 675 | 676 | 675 | 679 | 675 | 677.85 | 5.6 | 6.97
KroA100 | 21,282 | 21,282 | 21,282 | 21,282 | 21,382 | 21,282 | 21,326.8 | 2 | 21.1
Lin105 | 14,379 | 14,379 | 14,379 | 14,379 | 14,422 | 14,379 | 14,393 | 2 | 14.57
KroA200 | 29,368 | 29,368 | 29,533 | 29,368 | 29,721 | 29,368 | 29,644.5 | 168 | 213.12
Ch150 | 6528 | 6528 | 6570 | 6528 | 6627 | 6528 | 6601.4 | 24 | 79.35
Eil101 | 629 | 629 | 629 | 629 | 639 | 629 | 630.55 | 12 | 20.79
Table 4. DFACO and PACO-3Opt comparison for large-scale datasets.

TSP Instance | BKS | DFACO Best | PACO-3Opt Best | DFACO Worst | PACO-3Opt Worst | DFACO Average | PACO-3Opt Average | DFACO Time (s) | PACO-3Opt Time (s)
rd400 | 15,281.0 | 15,321.0 | 15,578.0 | 15,428.0 | 15,667.0 | 15,384.0 | 15,613.9 | 133.1 | 1496.0
fl417 | 11,861.0 | 11,867.0 | 11,972.0 | 11,900.0 | 12,000.0 | 11,880.3 | 11,987.4 | 95.2 | 2046.0
pr439 | 107,217.0 | 107,310.0 | 108,482.0 | 107,698.0 | 108,973.0 | 107,515.9 | 108,702.0 | 142.6 | 2132.0
pcb442 | 50,778.0 | 50,910.0 | 51,962.0 | 51,186.0 | 52,283.0 | 51,047.4 | 52,202.4 | 129.4 | 2088.0
d493 | 35,002.0 | 35,124 | 35,735.0 | 35,380 | 35,982.0 | 35,266.4 | 35,841.0 | 137.59 | 3175.0
u574 | 36,905.0 | 37,168.0 | 37,981.0 | 37,518.0 | 38,165.0 | 37,366.8 | 38,030.7 | 116.9 | 5460.0
rat575 | 6773.0 | 6348.0 | 7003.0 | 6380.0 | 7037.0 | 6367.3 | 7012.4 | 126.2 | 5004.0
p654 | 34,643.0 | 34,695.0 | 35,045.0 | 34,864.0 | 35,116.0 | 34,740.6 | 35,075.0 | 101.7 | 9105.0
d657 | 48,912.0 | 49,250.0 | 50,206.0 | 49,618.0 | 50,386.0 | 49,462.7 | 50,277.5 | 135.1 | 8715.0
u724 | 41,910.0 | 42,284.0 | 42,764.0 | 42,634.0 | 43,272.0 | 42,437.8 | 43,122.5 | 137.0 | 11,458.0
rat783 | 8806.0 | 10,455.0 | 9111.0 | 10,522.0 | 9152.0 | 10,491.9 | 9127.3 | 151.6 | 14,277.0
Table 5. Experimental results of proposed method compared with recent research (NA indicates that results were not reported in the original source). Each method's cell lists Mean / SD / Best.

TSP Instance | BKS | Chen and Chien (2011) [8] | Ouaarab et al. (2014) [20] | Mahi et al. (2015) [18] | Osaba et al. (2016) [21] | Deng et al. (2016) [9] | DFACO
eil51 | 426 | 427.27 / 0.45 / 427 | 426 / 0 / 426 | 426.45 / 0.61 / 426 | 428.1 / 1.6 / 426 | 427.21 / NA / 426 | 426 / 0 / 426
eil76 | 538 | 540.2 / 2.94 / 538 | 538.03 / 0.17 / 538 | 538.3 / 0.47 / 538 | 548.1 / 3.8 / 539 | 540.302 / NA / 539.153 | 538 / 0 / 538
eil101 | 629 | 635.23 / 3.59 / 630 | 630.43 / 1.14 / 629 | 632.7 / 2.12 / 631 | 646.4 / 4.9 / 634 | 634.68 / NA / 630.01 | 629 / 0 / 629
berlin52 | 7542 | 7542 / 0 / 7542 | 7542 / 0 / 7542 | 7543.2 / 2.37 / 7542 | 7542 / 0 / 7542 | NA / NA / NA | 7542 / 0 / 7542
bier127 | 118,282 | 119,421.8 / 580.83 / 118,282 | 118,360 / 12.73 / 118,282 | NA / NA / NA | NA / NA / NA | NA / NA / NA | 118,282 / 0 / 118,282
ch130 | 6110 | 6205.63 / 43.76 / 6141 | 6136 / 21.24 / 6110 | NA / NA / NA | NA / NA / NA | 6123.92 / NA / 6113.26 | 6110 / 0 / 6110
ch150 | 6528 | 6563.7 / 22.45 / 6528 | 6549.9 / 20.51 / 6528 | 6563.95 / 27.6 / 6538 | NA / NA / NA | 6539.86 / NA / 6528 | 6528 / 0 / 6528
rd100 | 7910 | 7987.57 / 62.06 / 7910 | NA / NA / NA | NA / NA / NA | NA / NA / NA | 7934.69 / NA / 7910 | 7910 / 0 / 7910
lin105 | 14,379 | 14,406.37 / 37.28 / 14,379 | 14,379 / 0 / 14,379 | 14,379.2 / 0.48 / 14,379 | NA / NA / NA | 14,394.1 / NA / 14,381.8 | 14,379 / 0 / 14,379
lin318 | 42,029 | 43,002.9 / 307.51 / 42,487 | 42,435 / 185.4 / 42,125 | NA / NA / NA | NA / NA / NA | 42,368.3 / NA / 42,284.9 | 42,228.03 / 48.3 / 42,123
kroA100 | 21,282 | 21,370.47 / 123.36 / 21,282 | 21,282 / 0 / 21,282 | 21,445.1 / 78.2 / 21,301 | 21,445.3 / 116.5 / 21,282 | NA / NA / NA | 21,282 / 0 / 21,282
kroA150 | 26,524 | 26,899.2 / 133.02 / 26,524 | 26,569 / 56.26 / 26,524 | NA / NA / NA | NA / NA / NA | NA / NA / NA | 26,524.03 / 0.18 / 26,524
kroA200 | 29,368 | 29,738.73 / 356.07 / 29,383 | 29,447 / 95.68 / 29,382 | 29,646.1 / 115 / 29,468 | NA / NA / NA | 29,434.7 / NA / 29,380.2 | 29,368 / 0 / 29,368
kroB100 | 22,141 | 22,282.87 / 183.99 / 22,141 | 22,142 / 2.87 / 22,141 | NA / NA / NA | 22,506.4 / 221.3 / 22,140 | NA / NA / NA | 22,141 / 0 / 22,141
kroB150 | 26,130 | 26,448.33 / 266.76 / 26,130 | 26,159 / 34.72 / 26,130 | NA / NA / NA | NA / NA / NA | 26,175.3 / NA / 26,139.4 | 26,130 / 0 / 26,130
kroB200 | 29,437 | 30,035.23 / 357.48 / 29,541 | 29,542 / 92.17 / 29,448 | NA / NA / NA | NA / NA / NA | 29,512.4 / NA / 29,502.6 | 29,441.6 / 5.161 / 29,437
kroC100 | 20,749 | 20,878.97 / 158.64 / 20,749 | 20,749 / 0 / 20,749 | NA / NA / NA | 21,050 / 164.7 / 20,749 | NA / NA / NA | 20,749 / 0 / 20,749
kroD100 | 21,294 | 21,620.47 / 226.6 / 21,309 | 21,304 / 21.79 / 21,294 | NA / NA / NA | 21,593.4 / 141.6 / 21,294 | NA / NA / NA | 21,294 / 0 / 21,294
kroE100 | 22,068 | 22,183.47 / 103.32 / 22,068 | 22,081 / 18.5 / 22,068 | NA / NA / NA | 22,349.6 / 169.6 / 22,068 | NA / NA / NA | 22,068 / 0 / 22,068
rat575 | 6773 | 6933.87 / 27.62 / 6891 | NA / NA / NA | NA / NA / NA | NA / NA / NA | 6901.25 / NA / 6859.85 | 6367.3 / 8.48 / 6348
rat783 | 8806 | 9079.23 / 52.69 / 8988 | NA / NA / NA | NA / NA / NA | NA / NA / NA | 8976.92 / NA / 8940.37 | 10,491.9 / 14.62 / 10,455
rl1323 | 270,199 | 280,181.5 / 1761.7 / 277,642 | NA / NA / NA | NA / NA / NA | NA / NA / NA | NA / NA / NA | 273,367.94 / 84.8 / 272,487
fl1400 | 20,127 | 21,349.63 / 527.88 / 20,593 | NA / NA / NA | NA / NA / NA | NA / NA / NA | 20,776.2 / NA / 20,683.8 | 20,300.77 / 35.27 / 20,239
d1655 | 62,128 | 65,621.1 / 1031.9 / 64,151 | NA / NA / NA | NA / NA / NA | NA / NA / NA | 64,133.7 / NA / 63,615.6 | 63,707.87 / 114.84 / 63,428.00
Table 6. Results of PDav and PDbest for DFACO compared with recent methods. Each method's cell lists PDav / PDbest.

TSP Instance | BKS | Chen and Chien (2011) [8] | Ouaarab et al. (2014) [20] | Mahi et al. (2015) [18] | Osaba et al. (2016) [21] | Deng et al. (2016) [9] | DFACO
eil51 | 426 | 0.30 / 0.23 | 0.00 / 0.00 | 0.11 / 0.00 | 0.49 / 0.00 | 0.28 / 0.00 | 0.00 / 0.00
eil76 | 538 | 0.41 / 0.00 | 0.01 / 0.00 | 0.06 / 0.00 | 1.88 / 0.19 | 0.43 / 0.21 | 0.00 / 0.00
eil101 | 629 | 0.99 / 0.16 | 0.23 / 0.00 | 0.59 / 0.32 | 2.77 / 0.79 | 0.90 / 0.16 | 0.00 / 0.00
berlin52 | 7542 | 0.00 / 0.00 | 0.00 / 0.00 | 0.02 / 0.00 | 0.00 / 0.00 | NA / NA | 0.00 / 0.00
bier127 | 118,282 | 0.96 / 0.00 | 0.07 / 0.00 | NA / NA | NA / NA | NA / NA | 0.00 / 0.00
ch130 | 6110 | 1.57 / 0.51 | 0.42 / 0.00 | NA / NA | NA / NA | 0.23 / 0.05 | 0.00 / 0.00
ch150 | 6528 | 0.55 / 0.00 | 0.34 / 0.00 | 0.55 / 0.00 | NA / NA | 0.18 / 0.00 | 0.00 / 0.00
rd100 | 7910 | 0.98 / 0.00 | NA / NA | NA / NA | NA / NA | 0.31 / 0.00 | 0.00 / 0.00
lin105 | 14,379 | 0.19 / 0.00 | 0.00 / 0.00 | 0.00 / 0.00 | NA / NA | 0.11 / 0.02 | 0.00 / 0.00
lin318 | 42,029 | 2.32 / 1.09 | 0.97 / 0.23 | NA / NA | NA / NA | 0.81 / 0.61 | 0.47 / 0.22
kroA100 | 21,282 | 0.42 / 0.00 | 0.00 / 0.00 | 0.77 / 0.09 | 0.77 / 0.00 | NA / NA | 0.00 / 0.00
kroA150 | 26,524 | 1.41 / 0.00 | 0.17 / 0.00 | NA / NA | NA / NA | NA / NA | 0.00 / 0.00
kroA200 | 29,368 | 1.26 / 0.05 | 0.27 / 0.05 | 0.95 / 0.34 | NA / NA | 0.23 / 0.04 | 0.00 / 0.00
kroB100 | 22,141 | 0.64 / 0.00 | 0.00 / 0.00 | NA / NA | 1.65 / 0.00 | NA / NA | 0.00 / 0.00
kroB150 | 26,130 | 1.22 / 0.00 | 0.11 / 0.00 | NA / NA | NA / NA | 0.17 / 0.04 | 0.00 / 0.00
kroB200 | 29,437 | 2.03 / 0.35 | 0.36 / 0.04 | NA / NA | NA / NA | 0.26 / 0.22 | 0.02 / 0.00
kroC100 | 20,749 | 0.63 / 0.00 | 0.00 / 0.00 | NA / NA | 1.45 / 0.00 | NA / NA | 0.00 / 0.00
kroD100 | 21,294 | 1.53 / 0.07 | 0.05 / 0.00 | NA / NA | 1.41 / 0.00 | NA / NA | 0.00 / 0.00
kroE100 | 22,068 | 0.52 / 0.00 | 0.06 / 0.00 | NA / NA | 1.28 / 0.00 | NA / NA | 0.00 / 0.00
rat575 | 6773 | 2.38 / 1.74 | NA / NA | NA / NA | NA / NA | 1.89 / 1.28 | −5.85 / −6.19
rat783 | 8806 | 3.10 / 2.07 | NA / NA | NA / NA | NA / NA | 1.94 / 1.53 | 19.30 / 19.01
rl1323 | 270,199 | 3.69 / 2.75 | NA / NA | NA / NA | NA / NA | NA / NA | 1.17 / 0.85
fl1400 | 20,127 | 6.07 / 2.32 | NA / NA | NA / NA | NA / NA | 3.23 / 2.77 | 0.86 / 0.56
d1655 | 62,128 | 5.62 / 3.26 | NA / NA | NA / NA | NA / NA | 3.23 / 2.39 | 2.38 / 2.02

Share and Cite

Dahan, F.; El Hindi, K.; Mathkour, H.; AlSalman, H. Dynamic Flying Ant Colony Optimization (DFACO) for Solving the Traveling Salesman Problem. Sensors 2019, 19, 1837. https://doi.org/10.3390/s19081837
