Article

An Optimized Discrete Dragonfly Algorithm Tackling the Low Exploitation Problem for Solving TSP

by
Bibi Aamirah Shafaa Emambocus
1,
Muhammed Basheer Jasser
1,*,
Angela Amphawan
1 and
Ali Wagdy Mohamed
2
1
Department of Computing and Information Systems, School of Engineering and Technology, Sunway University, Petaling Jaya 47500, Selangor, Malaysia
2
Operations Research Department, Faculty of Graduate Studies for Statistical Research, Cairo University, Giza 12613, Egypt
*
Author to whom correspondence should be addressed.
Mathematics 2022, 10(19), 3647; https://doi.org/10.3390/math10193647
Submission received: 21 July 2022 / Revised: 31 August 2022 / Accepted: 7 September 2022 / Published: 5 October 2022

Abstract:
Optimization problems are prevalent in almost all areas, and optimization algorithms are therefore crucial for a myriad of real-world applications. Deterministic optimization algorithms tend to be computationally costly and time-consuming; hence, heuristic and metaheuristic algorithms are favoured because they provide near-optimal solutions in an acceptable amount of time. Swarm intelligence algorithms are increasingly used for optimization problems owing to their simplicity and good performance. The Dragonfly Algorithm (DA) is one such algorithm, inspired by the swarming behaviours of dragonflies, and it has been shown to outperform other algorithms in multiple applications. It is therefore worth applying it to the traveling salesman problem (TSP), a predominant discrete optimization problem. The original DA is only suitable for continuous optimization problems and, although a binary version of the algorithm exists, it is not easily adapted to discrete optimization problems such as TSP. We previously proposed a discrete adapted DA suitable for TSP; however, it has low effectiveness and has not been applied to large TSP instances. In this paper, we propose an optimized discrete adapted DA that uses the steepest ascent hill climbing algorithm as a local search. The algorithm is applied to a TSP instance modelling a package delivery system in the Kuala Lumpur area and to benchmark TSP problems, and it is found to be more effective than the discrete adapted DA and some other swarm intelligence algorithms. It is also more efficient than the discrete adapted DA.

1. Introduction

Optimization is crucial in almost every domain where processes, designs, or systems are adjusted so as to be as effective as possible. Its aim is to find, among a set of available solutions, the one that maximizes or minimizes an objective function. Optimization algorithms are therefore usually iterative: the objective function is evaluated repeatedly and the best solution found is retained. Optimization algorithms can be classified into two types: deterministic algorithms and heuristic algorithms [1]. Deterministic algorithms are exact methods that find the best solution to a problem; however, they are computationally costly and time-consuming, especially for large-scale real-world applications. Conversely, heuristic algorithms try to find a near-optimal solution in a feasible amount of time. Heuristic algorithms can be further developed into metaheuristic algorithms, which use a combination of diversification and intensification techniques to perform better than simple heuristics. Metaheuristics can be classified as either trajectory-based or population-based algorithms [1].
Swarm intelligence algorithms are population-based metaheuristic algorithms which are inspired by various biological organisms. In nature, the simple and self-organizing interactions that individuals from a specific biological population have with each other and with their environment cause a functional global pattern to emerge [2]. Swarm intelligence algorithms are inspired by these simple interactions of different groups of animals or swarms of insects. They make use of a population of search agents which replicate the interactions of a biological population for solving complex optimization problems.
The Dragonfly Algorithm (DA) [3] is a swarm intelligence algorithm inspired by dragonfly swarms, in particular their hunting and migrating behaviours. Dragonflies swarm statically while hunting and dynamically while migrating, and these two types of swarming aptly represent the two requisite phases of optimization algorithms: exploration and exploitation. DA has been found to perform better than multiple other swarm intelligence algorithms in various applications [4]. In [3], three versions of the dragonfly algorithm are proposed: the original continuous DA for continuous optimization problems, the binary DA (BDA) for discrete or binary optimization problems, and the multi-objective DA (MODA) for multi-objective optimization problems.
The Traveling Salesman Problem (TSP) is a combinatorial optimization problem with a discrete search space. A myriad of real-world problems can be represented as TSP, and hence it has numerous real-world applications. Large TSP instances are difficult to solve using exact algorithms, and the solvable instances consume a significant amount of computational time and resources [5]. Hence, heuristic algorithms are often used to solve TSP by providing near-optimal solutions. A number of swarm intelligence algorithms, such as Particle Swarm Optimization (PSO) [6] and Ant Colony Optimization (ACO) [7], and their variants [8,9,10,11], have been successfully employed for solving TSP.
Considering the good performance of DA, and its superior performance over other swarm intelligence algorithms in multiple applications, it is worth applying it to TSP. The original DA was proposed for continuous optimization problems and, although a binary version is proposed in [3], it is difficult to adapt to discrete problems such as TSP. Hence, in [12], we proposed a discrete adapted dragonfly algorithm suitable for TSP, the aim being a discretized variant of the dragonfly algorithm that is suitable for discrete optimization problems such as TSP and that can be further improved. However, that algorithm has not been applied to large TSP instances, and it has low effectiveness.
In this paper, we propose to improve the low effectiveness of the adapted discrete dragonfly algorithm in [12] by employing the steepest ascent Hill Climbing (HC) algorithm as a local search to improve the exploitation phase. The algorithm is tested on a TSP instance consisting of 50 locations in the area of Kuala Lumpur. This instance models a package delivery system in which the shortest route that delivers parcels at specific locations and returns to the initial location must be found. Moreover, the performance of the algorithm is compared with that of other swarm intelligence algorithms on benchmark TSP problems from TSPLIB. From the experiments conducted, the proposed algorithm is found to have higher effectiveness: it produces solutions with a lower cost than the adapted discrete DA [12] and the enhanced Swap Sequence based PSO (SSPSO) [8] algorithms, and it also outperforms some other swarm intelligence algorithms, providing the optimal solution, or solutions close to the optimum, for benchmark TSP problems. Moreover, it is more efficient than the adapted discrete DA, as it converges to the optimal solution in a shorter amount of time.
The contributions of this paper include an optimized discrete dragonfly algorithm suitable for solving discrete optimization problems such as TSP, which provides optimal or near-optimal solutions for benchmark TSP problems, an application of the proposed algorithm to a TSP problem which models a package delivery system in the area of Kuala Lumpur, and a comparison of the performance of the proposed algorithm to that of other swarm intelligence algorithms in solving benchmark TSP problems.
The remainder of the paper is structured as follows: Section 2 provides background on swarm intelligence algorithms, the original dragonfly algorithm, and the hill climbing algorithm. Section 3 explains the traveling salesman problem. Section 4 presents previous works that use swarm intelligence algorithms for solving TSP, as well as the adapted discrete DA algorithm. Section 5 describes the proposed algorithm. Section 6 presents the results and discussion, and Section 7 concludes the paper and outlines future work.

2. Background on Swarm Intelligence, Dragonfly Algorithm, and Hill Climbing Algorithm

2.1. Swarm Intelligence Algorithms

Swarm intelligence algorithms are metaheuristic optimization algorithms that are inspired by the simple and self-organizing interactions of biological organisms that give rise to a functional global pattern [2]. They make use of a number of search agents which replicate the actions of individuals in a specific biological population as they interact among themselves and their environment. Each search agent, which represents a solution, moves in the state space by considering a fitness function. This allows the algorithm to solve complex optimization problems. Some well-known swarm intelligence algorithms include the particle swarm optimization that is inspired by the swarming of bird flocks or fish schools, and the ant colony optimization that is based on the food searching behaviour of ants.
Swarm intelligence algorithms have a plethora of applications in various domains as they can be used for solving different types of optimization problems including continuous optimization problems, discrete optimization problems, and multi-objective optimization problems. A continuous optimization problem is one in which the solution can be any real value within a certain range of values whereas a discrete optimization problem is one in which the solution can be a specific one from a set of possible solutions. A multi-objective optimization problem is one which has more than one objective function.
Some recent applications of swarm intelligence algorithms include in agricultural technology drones used for improving the productivity of farming areas [13], in fog computing systems for task scheduling [14], in gene selection profile for the classification of microarray data [15], in feature selection [16], and in artificial neural networks for optimizing the parameters of the network [17,18]. Furthermore, they have various applications in data science [19], in Internet of Things (IoT) systems [20], in surveillance systems [21], in water resources engineering [22], and in supply chain management [23].

2.2. Dragonfly Algorithm

The dragonfly algorithm [3] is a metaheuristic optimization algorithm classified under swarm intelligence algorithms. It is inspired by the swarming behaviours of dragonflies during hunting and migrating. During hunting, dragonflies swarm statically, that is, they form small groups and fly over a small area, abruptly changing their flying path. This type of swarming is congruent with the exploration phase of optimization algorithms, in which the algorithm tries to find a good region of the search space. Conversely, during migration, dragonflies fly together in a single group and along one direction over long distances. This type of swarming is congruent with the exploitation phase, in which the algorithm tries to converge to the optimal solution. Figure 1 shows a static and a dynamic swarm of dragonflies.
Five factors control the movement of the dragonflies in the search space during both the exploration and exploitation phases: separation, alignment, cohesion, attraction to food, and distraction from enemy. Each factor has a corresponding weight, which is used to tune the factor and enable the algorithm to transition between the exploration and exploitation phases. The factors, together with the weights, also ensure the survival of the swarm by moving it towards food sources and away from enemies. The best solution obtained by the population of search agents is taken as the food source and the worst solution is taken as the enemy.
The separation factor of a dragonfly is used to prevent its static collision with its neighbours, and is calculated using (1):
$S_i = -\sum_{j=1}^{N} (X_i - X_j)$ (1)
where $X_i$ and $X_j$ are the current dragonfly's position and the j-th neighbour's position, respectively, and $N$ is the number of dragonflies in the neighbourhood.
The alignment factor of a dragonfly matches its velocity to that of its neighbours, and is calculated using (2):
$A_i = \frac{\sum_{j=1}^{N} V_j}{N}$ (2)
where $V_j$ is the j-th neighbour's velocity and $N$ is the number of dragonflies in the neighbourhood.
The cohesion factor of a dragonfly is its tendency towards the centre of mass of the neighbourhood, and is calculated using (3):
$C_i = \frac{\sum_{j=1}^{N} X_j}{N} - X_i$ (3)
where $X_j$ is the j-th neighbour's position and $N$ is the number of dragonflies in the neighbourhood.
The attraction to food factor of a dragonfly is its attraction towards a food source, and is calculated using (4):
$F_i = X^{+} - X_i$ (4)
where $X^{+}$ is the food source's position.
The distraction from enemy factor of a dragonfly is its repulsion from an enemy, and is calculated using (5):
$E_i = X^{-} + X_i$ (5)
where $X^{-}$ is the enemy's position.
To allow the dragonflies to move in the search space according to these factors, two vectors are used: the step vector ($\Delta X$) and the position vector ($X$).
The step vector determines the direction of the movement, and it is calculated using (6):
$\Delta X_i^{t+1} = (s S_i + a A_i + c C_i + f F_i + e E_i) + w \Delta X_i^{t}$ (6)
where $S_i$, $A_i$, $C_i$, $F_i$, and $E_i$ are the separation, alignment, cohesion, food, and enemy factors of the i-th dragonfly, respectively; $s$, $a$, $c$, $f$, and $e$ are the corresponding weights; $w$ is the inertia weight; and $t$ is the iteration counter.
The position vector allows the movement in the search space, and it is calculated using (7):
$X_i^{t+1} = X_i^{t} + \Delta X_i^{t+1}$ (7)
In DA, in order to replicate the static and dynamic swarming behaviours of dragonflies, the neighbourhood of each search agent must be considered. This is achieved by considering a radius around each artificial dragonfly. The radius is increased in proportion to the iteration number so as to change the static swarms into dynamic ones, until the whole population forms one dynamic swarm and converges to the global optimum during the final iterations. This is another way in which the algorithm transitions from the exploration phase to the exploitation phase.
If a dragonfly has no neighbouring dragonfly, its position is updated using the Levy flight mechanism, which is a random walk employed to increase the stochasticity of the algorithm. The position vector of the dragonfly is calculated using (8):
$X_i^{t+1} = X_i^{t} + \mathrm{Levy}(d) \times X_i^{t}$ (8)
where t is the current iteration number and d is the dimension of the position vectors.
The pseudocode of DA is given in Algorithm 1.
Algorithm 1: Dragonfly Algorithm
[Pseudocode presented as a figure in the original article.]
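The continuous update rules in Equations (1)–(7) can be sketched in a few lines of NumPy. This is an illustrative simplification, not the authors' implementation: every agent is treated as a neighbour of every other agent, the radius-based neighbourhood and the Levy flight fallback are omitted, and the function name `da_step` and the default weight values are our own choices.

```python
import numpy as np

def da_step(X, dX, food, enemy, w=0.9, s=0.1, a=0.1, c=0.7, f=1.0, e=1.0):
    """One synchronous update of a dragonfly population (Eqs. (1)-(7)).

    X, dX : arrays of shape (num_agents, dims) holding positions and steps.
    food, enemy : best and worst positions found so far.
    """
    S = -np.sum(X[:, None, :] - X[None, :, :], axis=1)   # separation, Eq. (1)
    A = np.broadcast_to(dX.mean(axis=0), X.shape)        # alignment, Eq. (2)
    C = X.mean(axis=0) - X                               # cohesion, Eq. (3)
    F = food - X                                         # attraction to food, Eq. (4)
    E = enemy + X                                        # distraction from enemy, Eq. (5)
    dX_new = s * S + a * A + c * C + f * F + e * E + w * dX  # step vector, Eq. (6)
    return X + dX_new, dX_new                            # position update, Eq. (7)
```

In the full algorithm these updates are repeated each iteration while the five weights are tuned to shift the swarm from exploration to exploitation.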

2.3. Hill Climbing Algorithm

The hill climbing algorithm is a heuristic local search algorithm used for optimization, mainly in the field of artificial intelligence. Generally, starting from an initial position, it considers the neighbouring solutions in a search region and selects the one that most improves (maximizes or minimizes) the objective function.
There are three main types of hill climbing, namely, the simple hill climbing algorithm, the steepest ascent hill climbing algorithm, and the stochastic hill climbing algorithm.
The simple hill climbing algorithm checks only one neighbouring solution at a time; if it is better than the current solution, the neighbour becomes the new current solution. This type requires less computing power, but a good solution is not guaranteed.
The steepest ascent hill climbing algorithm checks all the neighbouring solutions and selects the best one. This type requires more computing power; however, it is more likely to find the best solution.
The stochastic hill climbing algorithm checks a random neighbouring solution; if it is better than the current solution, it becomes the new current solution, otherwise another random neighbouring solution is tried.
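The steepest ascent variant, which is the one used later in this paper, can be sketched as a generic local search routine. The function and parameter names are ours; it assumes a minimization problem with caller-supplied `neighbours` and `cost` functions.

```python
def steepest_ascent_hc(solution, neighbours, cost):
    """Steepest-ascent hill climbing for minimization: at each step
    evaluate *all* neighbours of the current solution and move to the
    cheapest one, stopping when no neighbour improves on it."""
    current, current_cost = solution, cost(solution)
    while True:
        candidates = [(cost(n), n) for n in neighbours(current)]
        if not candidates:
            return current
        best_cost, best = min(candidates, key=lambda t: t[0])
        if best_cost >= current_cost:
            return current          # local optimum reached
        current, current_cost = best, best_cost
```

For TSP, `neighbours` would typically generate all paths reachable by one swap of two cities, and `cost` would be the tour length.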

3. Problem Formulation

The traveling salesman problem, a combinatorial optimization problem, is classified as Non-deterministic Polynomial-time hard (NP-hard), as large instances are difficult to solve [5]. It has a discrete state space with a finite set of possible solutions, and the aim is to find the best solution in that set. TSP can be defined as follows: given a number of inputs called cities and the cost of travel between each possible pair, find the route with the least cost that visits each city exactly once and returns to the starting city.
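To make the difficulty concrete, an exact solver must in the worst case examine every tour. The minimal brute-force sketch below (the function name and distance-matrix format are our own) enumerates all (n−1)! tours starting at city 0, which is only feasible for very small n and illustrates why heuristic methods are preferred in practice.

```python
from itertools import permutations

def brute_force_tsp(dist):
    """Exact TSP by enumerating all (n-1)! tours that start and end at
    city 0. dist is a symmetric n-by-n matrix of pairwise travel costs."""
    n = len(dist)
    best_tour, best_cost = None, float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        c = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if c < best_cost:
            best_tour, best_cost = tour, c
    return best_tour, best_cost
```

Already at n = 20 this would require examining roughly 1.2 × 10^17 tours, which is why near-optimal heuristics are used instead.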
TSP has numerous real-world applications, since a large number of real-world problems can be represented as TSP. For example, it has been used in X-ray crystallography, computer wiring, vehicle routing, and order picking in warehouses [24]. More recent applications include route planning for Unmanned Aerial Vehicles (UAVs) [25], delivery services using UAVs [26], emergency air logistics [27], robotic automated storage and retrieval systems [28], and robot path planning [29].

4. Related Works

4.1. Existing Swarm Intelligence Algorithms Applied to TSP

In this section, some previous works in which swarm intelligence algorithms have been adapted and enhanced for solving TSP are presented.
In [30], the firefly algorithm is adapted and enhanced for solving TSP by using the method of swap sequences and Genetic Algorithm (GA). GA is first used to initialise the population of search agents. The serial number coding method is used for representing the discrete state space of TSP and the method of swap sequences is used to redefine the equations of the firefly algorithm in order to adapt the algorithm to TSP. Three neighbourhood structures are also used to redefine the disturbance mechanism of the firefly algorithm. The algorithm is tested using five TSP problems from TSPLIB and it is found to provide solutions which are close to the known optimal solutions.
In [31], a discrete sparrow search algorithm is proposed for solving TSP. For initialization of the population’s positions, the roulette wheel selection is used. The path representation method is used for representing a TSP path. The position of the search agents is updated using the same equation as the original sparrow search algorithm and a decoding method called the order-based decoding is used to decode the solution obtained. To increase the diversity of the population, the Gaussian mutation perturbation is used together with swap operators. Furthermore, the 2-opt algorithm is used as a local search to improve the quality of the solutions and to increase the convergence rate. The algorithm is tested using 34 TSP instances and is found to be robust and have good convergence characteristics.
In [32], the Grey Wolf Optimization (GWO) is adapted for solving TSP by the use of swap sequences and swap operators. It makes use of the general steps of the Grey Wolf optimization algorithm. In the initialization stage, the population is initialized with a random TSP path and the agents with the three best solutions are chosen as the Alpha, Beta, and Delta wolves. The position of each search agent is then updated by considering the Alpha, Beta, and Delta wolves by making use of the method of swap sequences. The algorithm is used for solving benchmark TSP problems and is found to provide better results than ACO and GA for several TSP problems.
In [33], the chicken swarm optimization algorithm is adapted for solving TSP using swap operators, order crossover, and reverse order mutation. The integer coding method is used for solution representation, and the method of swap operators is used for updating the position of the search agents. The order crossover and reverse order mutation are also used for updating the position of the search agents in the state space by increasing the population diversity. Five instances from TSPLIB are used for testing the algorithm, which is found to be effective in solving TSP.
In [34], a discrete spider monkey optimization algorithm is proposed for solving TSP. It makes use of the method of swap sequences and swap operators for updating the position of the search agents. The position of each search agent is updated based on the position of a subgroup leader, known as the local leader, and the position of a main group leader, known as the global leader, and also their self-experience. The algorithm is tested using TSP instances from TSPLIB and its performance is compared with other swarm intelligence algorithms. It is found to have a good performance in solving TSP.
In [8], an enhanced swap sequence based PSO algorithm is proposed for solving TSP. It makes use of path representation for representing solutions in the state space, and the method of swap operators for updating the positions. It incorporates the three strategies of the continuous XPSO algorithm and uses the method of swap operators in order to be suitable for solving TSP. The three strategies include a forgetting ability for each particle, the adjustment of the acceleration coefficients by making use of the population’s experience, and the use of both the global and local exemplars for learning. The algorithm is found to provide better results than the swap sequence-based PSO.

4.2. Discrete Adapted Dragonfly Algorithm

In this section, we describe the adapted discrete dragonfly algorithm that we proposed in [12]. The algorithm uses the path representation method to represent a potential solution, that is, a TSP path, and it uses the method of swap sequences [35], which was originally introduced for adapting PSO to TSP. The adapted discrete DA adapts the original DA to TSP as follows: firstly, path representation is used to represent a potential solution, that is, a TSP path, which is taken as the position of a search agent; secondly, the equations for calculating the five factors, the step vector, and the position vector are adapted to this representation; thirdly, the five factors, the step vector, and the position vector are calculated using the method of swap sequences. The pseudocode of the adapted discrete DA is given in Algorithm 2.
Algorithm 2: Adapted Discrete DA Algorithm for TSP
[Pseudocode presented as a figure in the original article.]
In the discrete adapted DA algorithm, the position and step vector of the search agents are first initialized with a random TSP path and a random swap sequence, respectively. Then, in each iteration, the objective value of each search agent's position, that is, the cost of the TSP path it represents, is calculated. The position with the lowest objective value is taken as the food source and the position with the highest objective value as the enemy. The separation, alignment, cohesion, attraction to food, and distraction from enemy factors are calculated, and their corresponding weights are updated. Finally, the position of the search agents is updated using the step and position vectors. Contrary to the original continuous DA, the neighbourhood radius of the dragonflies is not considered in the discrete adapted DA, because the Euclidean distance between dragonflies in a discrete state space cannot easily be calculated; hence, all search agents are considered to be in the same neighbourhood.

4.3. Initialization

In the initialization phase, the positions of the search agents are first initialized with a random solution, that is, a TSP path. The TSP path is represented using path representation, that is, a permutation of numbers representing cities. The numbers indicate the order in which the cities are visited. An example of a TSP path for a problem consisting of 4 cities is: 3 2 1 4 3. This indicates that, starting from the city denoted by 3, city 2 is visited next, followed by city 1 and city 4, before returning to the starting point, city 3.
The step vector of the search agents is then initialized with a velocity, which in this case is a random swap sequence. A swap sequence consists of a number of swap operators; each swap operator contains a pair of indices denoting positions in a TSP path, and indicates that the cities at those two positions are exchanged when the operator is applied to the path. For example, a swap operator $SO$ can be $SO(2,4)$, and a swap sequence $SS$ can be $SS = (SO(2,4), SO(1,2))$.
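The swap-operator machinery can be written directly in Python. This is an illustrative sketch (function names are ours), following the paper's 1-indexed $SO(i, j)$ convention.

```python
def apply_swap_operator(path, so):
    """Apply one swap operator SO(i, j): exchange the cities at the
    (1-indexed) positions i and j of a TSP path."""
    i, j = so
    path = list(path)
    path[i - 1], path[j - 1] = path[j - 1], path[i - 1]
    return path

def apply_swap_sequence(path, ss):
    """Apply a swap sequence: each of its swap operators in order."""
    for so in ss:
        path = apply_swap_operator(path, so)
    return path
```

For instance, applying $SS = (SO(2,4), SO(1,2))$ to the path 3 2 1 4 first exchanges positions 2 and 4, then positions 1 and 2.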

4.4. Calculation of Factors

The separation, alignment, cohesion, attraction to food, and distraction from enemy factors are calculated using Equations (9)–(13), produced by adapting Equations (1)–(5) of the original DA. This is necessary because the factors of a search agent are calculated from the positions and step vectors of its neighbours, and in the adapted discrete DA the positions and step vectors are TSP paths and swap sequences, respectively; hence, the equations used in the original DA are not directly applicable.
The separation factor is calculated as follows:
$S_i = \mathrm{Inv}\left( \bigoplus_{j=1}^{N} (X \ominus X_j) \right)$ (9)
where $X$, $X_j$, and $N$ are the current search agent's position, the j-th neighbour's position, and the total number of neighbours, respectively.
The ‘⊖’ operator denotes the subtraction of two positions, that is, two TSP paths, producing a swap sequence. For example, for the paths X = 1 3 4 2 and $X_j$ = 2 3 1 4, $X \ominus X_j = (SO(1,3), SO(3,4))$.
The ‘⨁’ operator denotes the merging of several swap sequences into a single swap sequence containing all their swap operators in sequential order. For example, for $SS_1 = (SO(2,3), SO(4,1))$ and $SS_2 = (SO(1,3))$, $SS_1 \oplus SS_2 = (SO(2,3), SO(4,1), SO(1,3))$.
‘$\mathrm{Inv}$’ denotes the inversion of a swap sequence. For example, for $SS = (SO(2,3), SO(4,1))$, $\mathrm{Inv}(SS) = (SO(1,4), SO(3,2))$.
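A possible implementation of these three operators, consistent with the examples above (the function names are ours; the subtraction is constructed so that applying $X \ominus X_j$ to $X_j$ reproduces $X$):

```python
def subtract(X, Xj):
    """X ⊖ Xj: a swap sequence (1-indexed SO(i, j) pairs) that
    transforms path Xj into path X when applied in order."""
    current, ss = list(Xj), []
    for i in range(len(X)):
        if current[i] != X[i]:
            j = current.index(X[i])           # where X's city currently sits
            current[i], current[j] = current[j], current[i]
            ss.append((i + 1, j + 1))
    return ss

def merge(*sequences):
    """⨁: concatenate swap sequences, keeping their operators in order."""
    return [so for ss in sequences for so in ss]

def inverse(ss):
    """Inv: reverse the operator order and each operator's index pair."""
    return [(j, i) for (i, j) in reversed(ss)]
```

These reproduce the worked examples in the text, e.g. subtracting 2 3 1 4 from 1 3 4 2 yields $(SO(1,3), SO(3,4))$.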
The alignment factor is calculated as follows:
$A_i = V_{avg}$ (10)
where $V_{avg}$ is the step vector of the neighbour whose fitness is closest to the average fitness in the neighbourhood.
The cohesion factor is calculated as follows:
$C_i = X_{avg} \ominus X$ (11)
where $X_{avg}$ is the position of the neighbour whose fitness is closest to the average fitness in the neighbourhood and $X$ is the current search agent's position.
The attraction to the food source factor is calculated as follows:
$F_i = X_f \ominus X$ (12)
where $X_f$ is the food source's position and $X$ is the current search agent's position.
The distraction from enemy factor is calculated using the procedure $\mathrm{CalculateE_i}()$, which produces a swap sequence:
$E_i = \mathrm{CalculateE_i}(X_e, X)$ (13)
where $X_e$ is the enemy's position and $X$ is the current search agent's position. The procedure compares each city in $X_e$ and $X$; if a city occupies the same position in both paths, it generates a swap operator pairing that position with another, randomly chosen, different position. This decreases the similarity between the two TSP paths.
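A sketch of this procedure, under the assumption that the comparison is made position by position (which matches how swap operators are defined in this paper); the function name and the random-generator argument are our own:

```python
import random

def calculate_Ei(Xe, X, rng=random):
    """Distraction-from-enemy factor (Eq. (13)): for every (1-indexed)
    position where the agent's path X agrees with the enemy path Xe,
    emit a swap operator pairing that position with a different random
    one, so the resulting swap sequence pushes X away from Xe."""
    n = len(X)
    ss = []
    for i in range(n):
        if X[i] == Xe[i]:
            j = rng.randrange(n)
            while j == i:                 # ensure a different position
                j = rng.randrange(n)
            ss.append((i + 1, j + 1))
    return ss
```

If the two paths share no city positions, the factor is an empty swap sequence and contributes nothing to the step vector.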

4.5. Update of Positions

For updating the position of the search agents, the step vector is first calculated as follows:
$\Delta X^{t+1} = (s S_i \oplus a A_i \oplus c C_i \oplus f F_i \oplus e E_i) \oplus w \Delta X^{t}$ (14)
where $S_i$, $A_i$, $C_i$, $F_i$, and $E_i$ are the separation, alignment, cohesion, food, and enemy factors of the i-th dragonfly, respectively; $s$, $a$, $c$, $f$, and $e$ are the corresponding weights; $w$ is the inertia weight; and $t$ is the iteration counter.
In this equation, ‘⊕’ indicates the merging of two swap sequences, resulting in one swap sequence with all the swap operators in the first swap sequence followed by all the swap operators in the second swap sequence.
The position vector of the search agent is then calculated as follows:
$X^{t+1} = X^{t} \otimes \Delta X^{t+1}$ (15)
In this equation, ‘⊗’ indicates that the swap sequence $\Delta X^{t+1}$ is applied to the path $X^{t}$. Applying a swap sequence to a path means applying each of its swap operators to the path sequentially to produce a new path. An example of applying a swap operator to a path is given below.
Consider a path X = 2 4 1 3 and a swap operator $SO(2,3)$: the cities in the second and third positions of X are swapped, giving the resultant path X = 2 1 4 3.

5. Proposed Enhanced Adapted Discrete DA

In this section, the proposed enhanced discrete adapted DA algorithm is described. The proposed algorithm improves the exploitation phase of the adapted discrete DA by using the steepest ascent hill climbing algorithm as a local search. After the position of a search agent is updated using Equation (15), the hill climbing algorithm is employed to further exploit the region obtained and to move the search agent to a better position. The steepest ascent hill climbing algorithm starts at the position obtained by Equation (15), evaluates every neighbouring position in that area of the search space, and selects the one with the lowest cost. Hence, it is able to locate the optimum solution in the area initially reached by the search agent. Moreover, to prevent the algorithm from getting stuck in a local optimum, the position of a search agent is changed to a random solution when it cannot be improved further. This is done by keeping track of the personal best solution found by each search agent: if the personal best does not change over a certain number of iterations, the search agent's position is reset to a random position, allowing it to escape the local optimum and search other regions of the state space. The pseudocode of the proposed algorithm is given in Algorithm 3.
Algorithm 3: Enhanced Adapted Discrete DA Algorithm for TSP
[Pseudocode presented as a figure in the original article.]
In the initialization phase, the position and step vectors are initialized with a random TSP path and a random swap sequence, respectively, as in the adapted discrete DA of Section 4.2. In addition, a personal best stagnancy variable is initialized to zero. This variable keeps track of the number of iterations during which the best solution found by a search agent has not improved.
In the main iteration loop, the objective cost of each search agent is first calculated, and the food source and enemy are updated with the best and worst positions, respectively. Then the separation, alignment, cohesion, attraction to food, and distraction from enemy factors are calculated. Contrary to the adapted discrete DA of Section 4.2, in the proposed algorithm each factor is calculated based on the path obtained after applying the previous factor: the separation factor is calculated first, and the swap operators obtained are immediately applied to the TSP path represented by the search agent's position. The alignment factor is then calculated on the updated path; the path is updated again, and the cohesion factor is calculated. The food factor is calculated on the path obtained after applying the cohesion factor, and the enemy factor on the path obtained after applying the food factor. The path with the lowest cost among these is then chosen as the next position of the search agent. This increases the efficiency of the algorithm, allowing it to provide good solutions in a short amount of time.
After the position of a search agent is updated using Equation (15), the hill climbing algorithm is employed to further update the position as follows: the position obtained by Equation (15) is taken as the current position for the hill climbing algorithm. A set of neighbouring solutions is then generated and the one with the lowest cost is selected as the new current position. These two steps, generating neighbours and selecting the lowest-cost one, are repeated until no solution with a lower cost can be found. The hill climbing algorithm thereby improves the exploitation of the dragonfly algorithm.
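A minimal sketch of this steepest ascent local search for a TSP tour is given below. The neighbourhood is assumed to consist of all tours reachable by a single pairwise city swap (the paper does not fix the neighbourhood definition here), and `dist` is a symmetric distance matrix.

```python
import itertools

def tour_cost(path, dist):
    """Total length of a closed tour, including the return to the start city."""
    return sum(dist[path[i]][path[(i + 1) % len(path)]] for i in range(len(path)))

def steepest_ascent_hill_climb(path, dist):
    """Move to the best of all single-swap neighbours, and repeat until no
    neighbour improves on the current tour."""
    current = list(path)
    current_cost = tour_cost(current, dist)
    while True:
        best, best_cost = current, current_cost
        # Generate every neighbour produced by one swap operator SO(i, j).
        for i, j in itertools.combinations(range(len(current)), 2):
            neighbour = list(current)
            neighbour[i], neighbour[j] = neighbour[j], neighbour[i]
            cost = tour_cost(neighbour, dist)
            if cost < best_cost:  # steepest ascent: keep only the best swap
                best, best_cost = neighbour, cost
        if best_cost >= current_cost:  # no improving neighbour: local optimum
            return current, current_cost
        current, current_cost = best, best_cost
```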
In order to prevent the algorithm from being stuck in local optima, the personal best solution of each search agent, that is, the best solution it has found so far, is recorded. In each iteration, after the position of a search agent has been updated, its personal best solution is checked and updated if a better solution has been found. If the personal best solution remains unchanged over a certain number of iterations, the position of the search agent is changed to a random solution, allowing it to escape the local optimum and explore other regions of the search space.
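A compact way to express this stagnancy check is sketched below; the variable names are illustrative and not taken from the paper.

```python
import random

def stagnancy_update(position, pbest_cost, new_cost, stagnancy, limit, n_cities,
                     rng=random):
    """Track how long a search agent's personal best has been stagnant and
    reset the agent to a random tour once the limit is reached."""
    if new_cost < pbest_cost:
        pbest_cost, stagnancy = new_cost, 0  # personal best improved
    else:
        stagnancy += 1
    if stagnancy >= limit:
        # Replace the position with a random permutation of the cities so the
        # agent leaves the local optimum and explores other regions.
        position = rng.sample(range(n_cities), n_cities)
        stagnancy = 0
    return position, pbest_cost, stagnancy
```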

5.1. Solution Representation

There are several ways to represent a TSP path as the position of a search agent in a discrete state space such as binary, path, adjacency, ordinal, and matrix representations [31]. In this paper, the path representation is used to encode the TSP solutions since this is the most natural representation of a path. This is the same representation used in [12] for the adapted discrete DA algorithm.

5.2. Objective Function

The objective function is the cost of a TSP path, which in this case is its total distance, obtained by summing the distance between each pair of adjacent cities in the path. The TSP instance is symmetric: the distance to travel from city i to city j is the same as the distance to travel from city j to city i.

5.3. Update of Positions

Similar to the adapted discrete DA algorithm in Section 4.2, the method of swap sequences is used for updating a position, that is, a TSP path. A swap sequence contains a number of swap operators, each indicating that the two cities at the positions it denotes are swapped to produce a new TSP path. The swap operators in a swap sequence are applied sequentially to the TSP path. For example, applying the swap sequence S O ( 1 , 2 ) , S O ( 3 , 2 ) to the TSP path 2 4 1 3 yields the path 4 1 2 3.
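The swap sequence mechanism can be expressed directly in code; the snippet below reproduces the example above, using the same 1-indexed positions.

```python
def apply_swap_sequence(path, swap_sequence):
    """Apply swap operators SO(i, j) in order; i and j are 1-indexed
    positions in the TSP path, as in the example above."""
    path = list(path)
    for i, j in swap_sequence:
        path[i - 1], path[j - 1] = path[j - 1], path[i - 1]
    return path

# Path 2 4 1 3 with swap sequence SO(1, 2), SO(3, 2):
# SO(1, 2) gives 4 2 1 3, then SO(3, 2) gives 4 1 2 3.
```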

5.4. Experimental Parameters

Table 1 shows the parameters used for the optimized DA algorithm, their description, and their values.

6. Experimental Results and Analysis

In this section, a description of the experimental datasets used, the experimental setup, and the results with some discussions are provided.

6.1. Experimental Dataset

The test data consist of a TSP problem with 50 nodes, where each node represents a location in the area of Kuala Lumpur (KL). The cost between each pair of nodes is the distance needed to travel from one node to the other. It is considered that the distance to travel from city i to city j is the same as the distance to travel from city j to city i. The aim is to find the shortest route to visit each node once and to return to the initial node. The distance between two locations is taken as the shortest distance in kilometers (km) that can be taken by a vehicle based on Google Maps.
To test the proposed algorithm on TSP problems of different sizes, the instance is reduced to 10, 20, and 40 nodes by taking the first 10, 20, and 40 locations, respectively.
Moreover, several benchmark datasets from TSPLIB are used to test the performance of the proposed algorithm. Specifically, the burma14, ulysses16, ulysses22, bays29, eil51, berlin52, st70, eil76, rat99, and kroA100 datasets, consisting of 14, 16, 22, 29, 51, 52, 70, 76, 99, and 100 nodes respectively, are used. Each dataset is given as node coordinates, and its distance matrix is constructed by calculating the Euclidean distance between the nodes.
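This preprocessing step can be sketched as follows. Plain (unrounded) Euclidean distances are used here, following the description above; note that TSPLIB's own EUC_2D convention rounds distances to the nearest integer, so costs computed this way may differ slightly from published TSPLIB optima.

```python
import math

def distance_matrix(coords):
    """Build the symmetric Euclidean distance matrix for a list of
    (x, y) node coordinates."""
    n = len(coords)
    dist = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(coords[i], coords[j])
            dist[i][j] = dist[j][i] = d  # symmetric TSP: d(i, j) = d(j, i)
    return dist
```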

6.2. Experimental Setup

The proposed algorithm is used to solve the TSP problem with 50, 40, 20, and 10 locations in KL, and the results are recorded in terms of the cost of the solution obtained, the time taken to converge to the optimal solution, and the total time taken by the algorithm.
To compare the performance of the proposed algorithm, the discrete adapted dragonfly algorithm in [12] and the enhanced SSPSO in [8] are used to solve the same TSP problems, and the same measures are recorded. The enhanced SSPSO is used for comparison since it is our own algorithm which had previously been applied to the same dataset and TSP problem, that is, the delivery system in the Kuala Lumpur area. The new optimized DA algorithm is proposed in this paper in order to obtain a better algorithm for this delivery system.
The experiments are repeated for different numbers of maximum iterations and search agents. Specifically, the maximum numbers of iterations used are 20, 50, 100, 200, and 500, and the numbers of search agents used are 5, 10, 20, and 40.
Furthermore, in order to compare the performance of the proposed algorithm to that of other swarm intelligence algorithms, the algorithm is applied to benchmark TSP problems from TSPLIB and the best solution obtained by the proposed algorithm is compared to the best solutions obtained by the Ant Colony Optimization (ACO), Velocity Tentative PSO (VTPSO), Artificial Bee Colony with Swap Sequence (ABCSS), Discrete Spider Monkey Optimization (DSMO), Genetic Algorithm (GA), Producer Scrounger Method (PSM), and Grey Wolf Optimizer (GWO) algorithms. The results of ACO, VTPSO, ABCSS, and DSMO are taken from [34]; the results of ACO, GA, PSM, and GWO are taken from [32]. The maximum number of iterations used for the proposed optimized DA is 500 and the number of search agents used is between 20 and 100.
Both the proposed optimized discrete adapted DA and the discrete adapted DA were implemented in MATLAB, and all experiments were conducted on macOS Monterey with an Apple M2 chip and 8 GB of RAM.

6.3. Results and Discussion

6.3.1. Greater Kuala Lumpur TSP Problem

In this section, we present a comparison and discussion on the performance of the proposed algorithm when applied to our own dataset consisting of locations in the area of Greater Kuala Lumpur which models a delivery system.
Table 2, Table 3, Table 4 and Table 5 show the results obtained when the proposed optimized discrete DA, the discrete adapted DA, and the enhanced swap sequence based PSO are used to solve the TSP problem with 50, 40, 20, and 10 locations, respectively. The experiments are conducted with different numbers of maximum iterations and search agents, and the results are recorded in terms of the cost of the solution (the TSP path) provided by the algorithms, the time taken to converge to the global optimal solution, and the total time taken by the algorithms.
Figure 2, Figure 3, Figure 4 and Figure 5 show the convergence curves of the proposed optimized discrete adapted DA and the discrete adapted DA in solving a TSP of 50, 40, 20, and 10 locations, respectively, for 5, 10, 20, and 40 search agents and a maximum of 200 iterations. The figures show both the convergence rate of the two algorithms and the cost of the TSP path provided.
An example of a TSP path for 20 locations in the Kuala Lumpur area provided by the optimized discrete adapted DA algorithm is given in Figure 6. The cost of the TSP path is 105.4 km and the path is as follows: 11 15 18 16 10 9 4 13 2 3 12 1 20 19 17 7 6 5 8 14 11. The numbers represent the cities and their order represents the order in which they are visited.
From Table 2, Table 3, Table 4 and Table 5, it can be deduced that the proposed optimized discrete adapted DA algorithm has a higher effectiveness than the discrete adapted DA algorithm, as the cost of the TSP path provided by the proposed algorithm is significantly lower in all of the experiments conducted. For example, for a maximum of 500 iterations and 40 search agents, the costs of the TSP paths obtained by the discrete adapted DA for 50, 40, 20, and 10 locations are 507.8, 409.3, 161.9, and 69.6 respectively, while those obtained by the proposed optimized adapted discrete DA are 200.0, 178.7, 105.4, and 65.9 respectively. The optimized adapted discrete DA therefore improves the solution obtained by the adapted discrete DA by 60.6%, 56.3%, 34.9%, and 5.3% for 50, 40, 20, and 10 locations respectively.
Even for smaller numbers of iterations and search agents, the proposed algorithm converges to solutions with lower costs than the discrete adapted DA. For example, for a maximum of 20 iterations and only five search agents, the costs of the TSP paths obtained by the discrete adapted DA for 50, 40, 20, and 10 locations are 556.2, 478.1, 191.9, and 82.9 respectively, while those obtained by the optimized discrete adapted DA are 229.4, 217.8, 117.5, and 65.9 respectively. This indicates that the optimized adapted discrete DA provides solutions which are 58.8%, 54.4%, 38.8%, and 20.5% better than those provided by the adapted discrete DA for 50, 40, 20, and 10 locations respectively.
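The improvement percentages quoted in this section follow from the relative cost reduction over the baseline:

```python
def improvement_pct(baseline_cost, proposed_cost):
    """Percentage by which the proposed algorithm's tour cost improves on a
    baseline algorithm's tour cost."""
    return (baseline_cost - proposed_cost) / baseline_cost * 100

# e.g. 50 locations, 20 iterations, 5 agents: 556.2 km -> 229.4 km
print(round(improvement_pct(556.2, 229.4), 1))  # 58.8
```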
Moreover, the costs of the best TSP paths that can be provided by the adapted discrete DA for 50, 40, 20, and 10 locations are 487.2, 409.3, 155.7, and 69.6 respectively while the costs of the best TSP paths that can be provided by the proposed optimized adapted discrete DA for 50, 40, 20, and 10 locations are 193.6, 179.4, 105.4, and 65.9 respectively.
In comparison to the enhanced SSPSO algorithm in [8], the proposed optimized discrete DA algorithm has a higher effectiveness as it provides solutions with a lower cost in all the experiments conducted.
In terms of efficiency, it can be seen from Table 2, Table 3, Table 4 and Table 5 that the proposed optimized adapted discrete DA algorithm requires less total execution time than the adapted discrete DA algorithm to complete the maximum number of iterations. It also takes less time to converge to the global optimal solution. Hence, it can be deduced that the proposed algorithm has a higher convergence rate than the adapted discrete DA algorithm.
From Figure 2, Figure 3, Figure 4 and Figure 5, it can be seen that the proposed optimized adapted discrete DA algorithm provides better solutions than the adapted discrete DA, as it converges to solutions with lower costs in all cases. Moreover, in multiple cases the proposed algorithm has a higher convergence rate than the adapted discrete DA, as it converges at earlier iterations.

6.3.2. Benchmark TSP Problems

In this section, we present a comparison and discussion on the performance of the proposed algorithm in solving benchmark TSP problems.
Table 6 shows a comparison of the performance of the proposed optimized DA algorithm and other swarm intelligence algorithms, namely ACO, GA, PSM, and GWO in solving benchmark TSP problems. The results of ACO, GA, PSM, and GWO are taken from [32]. The results are compared in terms of the cost of the best solution that can be obtained by the algorithm. The maximum number of iterations used for the proposed optimized DA, and the other swarm intelligence algorithms is 500, and the maximum number of search agents used is 100.
Table 7 shows a comparison of the performance of the proposed optimized DA algorithm and other swarm intelligence algorithms, namely ACO, VTPSO, ABCSS, and DSMO, in solving benchmark TSP problems. The results for ACO, VTPSO, ABCSS, and DSMO are taken from [34]. The results are compared in terms of the cost of the best solution obtained by each algorithm. The maximum number of iterations used for ACO, VTPSO, and DSMO is 500, while that for ABCSS is 1000, and the number of search agents used for these algorithms is between 100 and 300. For the proposed optimized DA, the maximum number of iterations used is 500 and the maximum number of search agents used is 100. Although the numbers of search agents and maximum iterations are not identical across the algorithms, the results are shown for comparison purposes.
From Table 6, it can be seen that the proposed algorithm achieves the same optimal solution for burma14 as GA, PSM, and GWO. For the ulysses16 dataset, the proposed algorithm provides the same optimal solution as PSM, and GWO. As for all the other datasets, that is, ulysses22, bays29, eil51, berlin52, st70, eil76, and kroA100, the proposed optimized DA provides better solutions as compared to ACO, GA, PSM, and GWO when the same number of search agents and maximum iterations is used.
From Table 7, it can be seen that the proposed algorithm achieves the optimal solution for five of the datasets, namely burma14, ulysses16, ulysses22, bays29, and berlin52, even though a smaller number of search agents and maximum iterations is used compared to ACO, VTPSO, ABCSS, and DSMO. Although in some cases VTPSO, ABCSS, and DSMO provide a solution with a lower cost than our proposed algorithm, it is expected that our proposed algorithm would provide similar or better solutions if the same number of search agents and maximum iterations were used.

7. Conclusions and Future Work

Swarm intelligence algorithms are popular metaheuristic algorithms for solving complex optimization problems owing to the simple interactions of the search agents which give rise to a global intelligent behaviour. The dragonfly algorithm is a swarm intelligence algorithm inspired by the swarming behaviours of dragonflies while hunting and migrating. It has been found to have a higher performance than multiple other swarm intelligence algorithms in various applications.
Since DA has been found to perform better than multiple swarm intelligence algorithms in various applications, it is worth considering its application to the traveling salesman problem, a popular discrete optimization problem with a plethora of real-world applications. In [3], a binary version of the dragonfly algorithm is proposed; however, this algorithm is difficult to adapt for discrete problems like TSP. Hence, in [12], we proposed a discrete adapted dragonfly algorithm suitable for TSP. However, that algorithm has low effectiveness and had not been applied to large TSP problems.
In this paper, we propose an optimized adapted discrete dragonfly algorithm which addresses the low effectiveness of the adapted discrete DA in [12]. The proposed algorithm improves the exploitation phase of the adapted discrete DA by using the steepest ascent hill climbing algorithm as a local search. The proposed optimized adapted discrete DA has been tested using TSP instances consisting of 50, 40, 20, and 10 cities. It has been found to provide better solutions, that is, TSP paths with a lower cost, compared to the adapted discrete DA [12] and the enhanced swap sequence based PSO [8] algorithms. It also has a higher convergence rate than the adapted discrete DA. Moreover, it has been tested on several benchmark TSP problems and has been found to provide optimal or near-optimal solutions.
For future work, the proposed algorithm can be applied to other real-world applications such as channel routing, planning, scheduling, and logistics.

Author Contributions

Conceptualization, B.A.S.E. and M.B.J.; writing—original draft preparation, B.A.S.E. and M.B.J.; writing—review and editing, M.B.J., A.A. and A.W.M.; supervision, M.B.J.; project administration, M.B.J.; funding acquisition, M.B.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Sunway University Internal Grant Scheme 2022 grant number GRTIN-IGS-DCIS[S]-11-2022.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DA: Dragonfly Algorithm
TSP: Traveling Salesman Problem
BDA: Binary Dragonfly Algorithm
MODA: Multi-Objective Dragonfly Algorithm
PSO: Particle Swarm Optimization
ACO: Ant Colony Optimization
HC: Hill Climbing
GA: Genetic Algorithm
GWO: Grey Wolf Optimizer
XPSO: Expanded PSO
SO: Swap Operator
SS: Swap Sequence
VTPSO: Velocity Tentative PSO
ABCSS: Artificial Bee Colony with Swap Sequence
DSMO: Discrete Spider Monkey Optimization
PSM: Producer Scrounger Method

References

1. Yang, X.S. Nature-Inspired Optimization Algorithms; Academic Press: Cambridge, MA, USA, 2020.
2. Slowik, A.; Kwasnicka, H. Nature inspired methods and their industry applications—Swarm intelligence algorithms. IEEE Trans. Ind. Inform. 2017, 14, 1004–1015.
3. Mirjalili, S. Dragonfly algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput. Appl. 2015, 27, 1053–1073.
4. Emambocus, B.A.S.; Jasser, M.B.; Mustapha, A.; Amphawan, A. Dragonfly algorithm and its hybrids: A survey on performance, objectives and applications. Sensors 2021, 21, 7542.
5. Gutin, G.; Punnen, A.P. The Traveling Salesman Problem and Its Variations; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2006; Volume 12.
6. Zhi, X.H.; Xing, X.; Wang, Q.; Zhang, L.; Yang, X.; Zhou, C.; Liang, Y. A discrete PSO method for generalized TSP problem. In Proceedings of the 2004 International Conference on Machine Learning and Cybernetics (IEEE Cat. No. 04EX826), Shanghai, China, 26–29 August 2004; IEEE: Piscataway, NJ, USA, 2004; Volume 4, pp. 2378–2383.
7. Yang, J.; Shi, X.; Marchese, M.; Liang, Y. An ant colony optimization method for generalized TSP problem. Prog. Nat. Sci. 2008, 18, 1417–1422.
8. Emambocus, B.A.S.; Jasser, M.B.; Hamzah, M.; Mustapha, A.; Amphawan, A. An enhanced swap sequence-based particle swarm optimization algorithm to solve TSP. IEEE Access 2021, 9, 164820–164836.
9. Yao, X.S.; Ou, Y.; Zhou, K.Q. TSP solving utilizing improved ant colony algorithm. J. Phys. Conf. Ser. 2021, 2129, 012026.
10. Wei, B.; Xing, Y.; Xia, X.; Gui, L. A novel particle swarm optimization with genetic operator and its application to TSP. Int. J. Cogn. Inform. Nat. Intell. 2021, 15, 1–17.
11. Rokbani, N.; Kumar, R.; Abraham, A.; Alimi, A.M.; Long, H.V.; Priyadarshini, I.; Son, L.H. Bi-heuristic ant colony optimization-based approaches for traveling salesman problem. Soft Comput. 2021, 25, 3775–3794.
12. Emambocus, B.A.S.; Jasser, M.B.; Amphawan, A. A discrete adapted dragonfly algorithm for solving the traveling salesman problem. In Proceedings of the 2021 Fifth International Conference on Intelligent Computing in Data Sciences (ICDS), Fez, Morocco, 20–22 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–6.
13. Spanaki, K.; Karafili, E.; Sivarajah, U.; Despoudi, S.; Irani, Z. Artificial intelligence and food security: Swarm intelligence of AgriTech drones for smart AgriFood operations. Prod. Plan. Control 2021, 32, 1–19.
14. Boveiri, H.R.; Khayami, R.; Elhoseny, M.; Gunasekaran, M. An efficient swarm-intelligence approach for task scheduling in cloud-based internet of things applications. J. Ambient Intell. Humaniz. Comput. 2019, 10, 3469–3479.
15. Jahwar, A.; Ahmed, N. Swarm intelligence algorithms in gene selection profile based on classification of microarray data: A review. J. Appl. Sci. Technol. Trends 2021, 2, 1–9.
16. Brezočnik, L.; Fister, I.; Podgorelec, V. Swarm intelligence algorithms for feature selection: A review. Appl. Sci. 2018, 8, 1521.
17. Bacanin, N.; Bezdan, T.; Tuba, E.; Strumberger, I.; Tuba, M. Optimizing convolutional neural network hyperparameters by enhanced swarm intelligence metaheuristics. Algorithms 2020, 13, 67.
18. Emambocus, B.A.S.; Jasser, M.B. Towards an optimized dragonfly algorithm using hill climbing local search to tackle the low exploitation problem. In Proceedings of the 2021 International Conference on Software Engineering & Computer Systems and 4th International Conference on Computational Science and Information Management (ICSECS-ICOCSIM), Pekan, Malaysia, 24–26 August 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 306–311.
19. Yang, J.; Qu, L.; Shen, Y.; Shi, Y.; Cheng, S.; Zhao, J.; Shen, X. Swarm intelligence in data science: Applications, opportunities and challenges. In International Conference on Swarm Intelligence; Springer: Berlin/Heidelberg, Germany, 2020; pp. 3–14.
20. Sun, W.; Tang, M.; Zhang, L.; Huo, Z.; Shu, L. A survey of using swarm intelligence algorithms in IoT. Sensors 2020, 20, 1420.
21. Paramanandham, N.; Rajendiran, K. Infrared and visible image fusion using discrete cosine transform and swarm intelligence for surveillance applications. Infrared Phys. Technol. 2018, 88, 13–22.
22. Janga Reddy, M.; Nagesh Kumar, D. Evolutionary algorithms, swarm intelligence methods, and their applications in water resources engineering: A state-of-the-art review. H2Open J. 2020, 3, 135–188.
23. Soni, G.; Jain, V.; Chan, F.T.; Niu, B.; Prakash, S. Swarm intelligence approaches in supply chain management: Potentials, challenges and future research directions. Supply Chain Manag. Int. J. 2018, 24, 107–123.
24. Davendra, D. Traveling Salesman Problem: Theory and Applications; BoD–Books on Demand: Rijeka, Croatia, 2010.
25. Xu, Y.; Che, C. A brief review of the intelligent algorithm for traveling salesman problem in UAV route planning. In Proceedings of the 2019 IEEE 9th International Conference on Electronics Information and Emergency Communication (ICEIEC), Beijing, China, 12–14 July 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–7.
26. Huang, S.H.; Huang, Y.H.; Blazquez, C.A.; Chen, C.Y. Solving the vehicle routing problem with drone for delivery services using an ant colony optimization algorithm. Adv. Eng. Inform. 2022, 51, 101536.
27. Muren; Wu, J.; Zhou, L.; Du, Z.; Lv, Y. Mixed steepest descent algorithm for the traveling salesman problem and application in air logistics. Transp. Res. Part E Logist. Transp. Rev. 2019, 126, 87–102.
28. Foumani, M.; Moeini, A.; Haythorpe, M.; Smith-Miles, K. A cross-entropy method for optimising robotic automated storage and retrieval systems. Int. J. Prod. Res. 2018, 56, 6450–6472.
29. Nedjati, A.; Vizvári, B. Robot path planning by traveling salesman problem with circle neighborhood: Modeling, algorithm, and applications. arXiv 2020, arXiv:2003.06712.
30. Teng, L.; Li, H. Modified discrete firefly algorithm combining genetic algorithm for traveling salesman problem. TELKOMNIKA (Telecommun. Comput. Electron. Control) 2018, 16, 424–431.
31. Zhang, Z.; Han, Y. Discrete sparrow search algorithm for symmetric traveling salesman problem. Appl. Soft Comput. 2022, 118, 108469.
32. Sopto, D.S.; Ayon, S.I.; Akhand, M.; Siddique, N. Modified grey wolf optimization to solve traveling salesman problem. In Proceedings of the 2018 International Conference on Innovation in Engineering and Technology (ICIET), Dhaka, Bangladesh, 27–28 December 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–4.
33. Liu, Y.; Liu, Q.; Tang, Z. A discrete chicken swarm optimization for traveling salesman problem. J. Phys. Conf. Ser. 2021, 1978, 012034.
34. Akhand, M.; Ayon, S.I.; Shahriyar, S.; Siddique, N.; Adeli, H. Discrete spider monkey optimization for travelling salesman problem. Appl. Soft Comput. 2020, 86, 105887.
35. Wang, K.P.; Huang, L.; Zhou, C.G.; Pang, W. Particle swarm optimization for traveling salesman problem. In Proceedings of the 2003 International Conference on Machine Learning and Cybernetics (IEEE Cat. No. 03EX693), Xi'an, China, 5 November 2003; Volume 3, pp. 1583–1585.
Figure 1. Static and dynamic dragonfly swarms.
Figure 2. Convergence curve of optimized discrete adapted DA and discrete adapted DA in solving TSP of 50 cities.
Figure 3. Convergence curve of optimized discrete adapted DA and discrete adapted DA in solving TSP of 40 cities.
Figure 4. Convergence curve of optimized discrete adapted DA and discrete adapted DA in solving TSP of 20 cities.
Figure 5. Convergence curve of optimized discrete adapted DA and discrete adapted DA in solving TSP of 10 cities.
Figure 6. TSP path provided by optimized discrete adapted DA for 20 locations.
Table 1. Experimental parameters.
Parameter | Description | Value
X_i | The position of search agent i | A TSP path, example: 1 3 4 2 1
V_i | The step vector of search agent i | A swap sequence, example: SO(1, 3) SO(2, 1)
X_f | The food position | A TSP path, example: 1 4 2 3 1
X_e | The enemy position | A TSP path, example: 1 2 4 3 1
S_i | The separation factor of the i-th search agent | A swap sequence, example: SO(2, 4) SO(1, 2)
A_i | The alignment factor of the i-th search agent | A swap sequence, example: SO(1, 4) SO(3, 2)
C_i | The cohesion factor of the i-th search agent | A swap sequence, example: SO(1, 3) SO(3, 4)
F_i | The food factor of the i-th search agent | A swap sequence, example: SO(3, 4) SO(1, 2)
E_i | The enemy factor of the i-th search agent | A swap sequence, example: SO(3, 2) SO(4, 1)
s | The separation weight | A real value, example: 1.5
c | The cohesion weight | A real value, example: 1.2
a | The alignment weight | A real value, example: 2.3
f | The attraction to food weight | A real value, example: 1.2
e | The distraction from enemy weight | A real value, example: 0.5
w | The step vector weight | A real value, example: 0.1
Table 2. Performance comparison of proposed optimized discrete DA and discrete adapted DA in solving TSP of 50 cities.
Maximum No. of Iterations | No. of Search Agents | Enhanced SSPSO | Discrete Adapted DA | Proposed Optimized Discrete Adapted DA (for each algorithm: Cost of Solution (km), Time Taken to Converge to Optimum (s), Total Time Taken (s))
205532.00.03590.0481556.23.5197.91229.43.53256.6253
2010550.00.040710.1139571.018.061727.6912223.36.93139.0644
2020542.00.22140.2959570.715.7653130.8685223.525.508426.5358
2040547.00.13650.5762546.02.1136770.9831232.357.873158.7077
505525.00.08920.3160561.66.246654.9875217.210.73412.7241
5010525.00.00210.6858539.03.3454190.3065225.419.01524.0916
5020541.00.58381.2465524.053.0974823.7265211.864.879667.1365
5040509.03.31336.02159542.3587.39944322.1249206.0147.4982150.3943
1005525.01.08831.3751537.56.8421200.1581223.57.180218.6268
10010510.00.18483.7681538.0178.4626788.3791213.746.689848.3536
10020518.03.19119.6636539.21429.933302.5555209.429.4048122.456
10040504.07.340417.9399534.8426.254715,899.7002210.6229.6718240.2979
2005530.04.89356.3373537.0180.8259429.2883217.948.149848.6066
20010501.00.794714.5220529.2798.11251676.7151206.659.3682109.5615
20020497.017.576038.0765527.74210.00237259.5666199.4126.6875195.9365
20040494.020.006482.9766519.51934.881634,189.6828206.7324.0921554.3716
5005478.026.497842.7225519.6504.94826311.1325213.196.5744100.8358
50010494.045.6093101.8348487.217,838.576126,586.0832196.8121.3713245.6557
50020488.021.9401309.6251517.316,875.0334113,766.0094193.6262.4892500.1441
50040494.030.9125588.0762507.8283,915.1445546,679.3105200.0534.51061198.7924
Table 3. Performance comparison of proposed optimized discrete adapted DA and discrete adapted DA in solving TSP of 40 cities.
Maximum No. of Iterations | No. of Search Agents | Enhanced SSPSO | Discrete Adapted DA | Proposed Optimized Discrete Adapted DA (for each algorithm: Cost of Solution (km), Time Taken to Converge to Optimum (s), Total Time Taken (s))
205431.00.021260.03446478.10.1340611.4666217.82.75674.3227
2010461.00.029270.0843463.91.929429.2706222.85.24885.4381
2020442.00.01400.1880452.3107.0152131.4918197.210.026412.3698
2040435.00.016730.4016430.620.2805686.2293188.929.32236.2265
505448.00.061110.1847428.932.392665.8366207.33.98976.7687
5010415.00.44490.8789436.895.0279256.7351195.512.768813.4064
5020427.00.19881.0586449.887.1151004.6713200.414.904828.4775
5040418.01.88803.1482444.32194.29674545.5709194.365.846671.1544
1005422.00.00271.2992422.6114.8649270.0771190.77.24711.6643
10010419.01.37893.1608458.0609.6101950.9694193.320.914735.4175
10020433.01.69424.0857444.62547.6283991.4558185.040.16549.4274
10040407.01.651615.0465441.41860.160225,521.9187194.0103.962119.8753
2005402.01.44855.3046422.81.31361.4874186.813.250420.6362
20010421.02.995811.0945444.0862.47171301.4488199.844.965757.7674
20020418.01.709922.8645418.62661.09115393.6283181.644.927110.284
20040420.045.777156.4051436.06848.259925,271.4722178.7212.0086257.4372
5005421.030.139130.2641435.52122.9687159.2775190.043.84463.6809
50010417.014.512393.8408429.84701.793123,959.7151185.085.5298137.7637
50020389.022.4112194.3102423.1100,073.4386124,577.1437184.8182.1964292.4497
50040404.07.1911498.5430409.372,913.9465357,630.9028179.4501.9859681.6409
Table 4. Performance comparison of proposed optimized discrete adapted DA and discrete adapted DA in solving TSP of 20 cities.
Maximum No. of Iterations | No. of Search Agents | Enhanced SSPSO | Discrete Adapted DA | Proposed Optimized Discrete Adapted DA (for each algorithm: Cost of Solution (km), Time Taken to Converge to Optimum (s), Total Time Taken (s))
205195.00.0044880.01227191.92.04483.8275117.50.326871.0266
2010173.00.024510.0298187.65.970814.7066109.11.71941.8762
2020182.00.00960.0752192.77.347964.1203107.12.23493.5711
2040168.00.091210.1348186.118.9694396.1467106.13.76126.3904
505173.00.00350.0603175.06.405624.9236105.40.633441.9217
5010176.00.16150.2060160.10.009219697.1998106.13.84543.9797
5020162.00.00330.4717168.9192.3332501.511105.43.46976.4025
5040151.01.05691.4717168.8341.00882616.6784105.41.602316.6091
1005171.00.00480.2566186.168.1113125.241108.81.02333.0716
10010173.00.23450.6208171.6269.7031475.5543106.83.37585.4659
10020167.00.35711.9250171.17.4192047.3412105.711.40112.6007
10040170.02.04944.0331174.95646.58388242.855105.41.643432.2757
2005163.00.97341.1322180.189.7254164.0154105.70.85726.74
20010166.00.10773.3940175.411.5837624.1664105.43.940110.1722
20020163.00.07676.4863162.7202.56372352.474105.49.367721.9359
20040155.00.047519.5566170.74165.031511,489.6363105.423.782260.105
5005164.00.031110.7217163.4875.63023632.2101105.49.490213.2498
50010164.03.288025.2614175.81215.429515,191.4602105.49.281630.2117
50020153.00.004456.4343155.737,104.521661,427.5274105.42.667964.409
50040155.01.7389111.9266161.983,639.0656247,067.0486105.479.663157.4536
Table 5. Performance comparison of proposed optimized discrete adapted DA and discrete adapted DA in solving TSP of 10 cities.

| Max. No. of Iterations | No. of Search Agents | Enhanced SSPSO: Cost of Solution (km) | Enhanced SSPSO: Time to Converge to Optimum (s) | Enhanced SSPSO: Total Time Taken (s) | Discrete Adapted DA: Cost of Solution (km) | Discrete Adapted DA: Time to Converge to Optimum (s) | Discrete Adapted DA: Total Time Taken (s) | Proposed Optimized DA: Cost of Solution (km) | Proposed Optimized DA: Time to Converge to Optimum (s) | Proposed Optimized DA: Total Time Taken (s) |
|---|---|---|---|---|---|---|---|---|---|---|
| 20 | 5 | 80.0 | 0.0022 | 0.0028 | 82.9 | 0.50424 | 3.4169 | 65.9 | 0.26697 | 0.56227 |
| 20 | 10 | 78.0 | 0.0072 | 0.0074 | 71.2 | 4.8409 | 9.6044 | 65.9 | 0.25308 | 0.91832 |
| 20 | 20 | 75.0 | 0.0055 | 0.0171 | 78.0 | 21.0104 | 35.849 | 65.9 | 0.22486 | 1.2627 |
| 20 | 40 | 69.0 | 0.0248 | 0.0369 | 79.0 | 0.72344 | 146.1099 | 65.9 | 0.41628 | 2.3977 |
| 50 | 5 | 75.0 | 0.0124 | 0.01961 | 81.7 | 13.2891 | 15.7673 | 65.9 | 0.12756 | 1.0816 |
| 50 | 10 | 71.0 | 0.0055 | 0.0440 | 80.2 | 25.3691 | 48.5602 | 65.9 | 0.61573 | 1.4417 |
| 50 | 20 | 72.0 | 0.0996 | 0.1174 | 73.4 | 13.4782 | 222.6126 | 65.9 | 0.22748 | 2.5893 |
| 50 | 40 | 67.0 | 0.1659 | 0.2397 | 76.9 | 661.0827 | 915.7478 | 65.9 | 0.37537 | 5.4787 |
| 100 | 5 | 73.0 | 0.0715 | 0.0733 | 81.3 | 42.137 | 71.0761 | 65.9 | 0.21733 | 1.6297 |
| 100 | 10 | 72.0 | 0.2668 | 0.2721 | 77.3 | 27.525 | 229.6083 | 65.9 | 0.35535 | 2.3059 |
| 100 | 20 | 66.0 | 0.4829 | 0.4832 | 76.7 | 102.0761 | 959.875 | 65.9 | 0.22507 | 4.4552 |
| 100 | 40 | 68.0 | 0.3644 | 0.8834 | 72.1 | 607.6719 | 5944.7225 | 65.9 | 1.1377 | 11.2092 |
| 200 | 5 | 75.0 | 0.3177 | 0.3179 | 75.8 | 1.117 | 70.7217 | 65.9 | 0.47184 | 3.1885 |
| 200 | 10 | 65.0 | 0.0234 | 1.080 | 77.5 | 6.5001 | 253.8453 | 65.9 | 0.17053 | 4.7651 |
| 200 | 20 | 71.0 | 1.7071 | 1.7748 | 72.0 | 372.1562 | 991.3132 | 65.9 | 0.24187 | 8.242 |
| 200 | 40 | 67.0 | 3.4541 | 4.0137 | 68.8 | 416.1922 | 4307.4956 | 65.9 | 0.56337 | 19.9458 |
| 500 | 5 | 69.0 | 0.0880 | 2.4140 | 74.7 | 0.59851 | 1637.606 | 65.9 | 0.15524 | 5.3221 |
| 500 | 10 | 71.0 | 5.7717 | 5.9135 | 69.8 | 502.382 | 6421.5464 | 65.9 | 0.90636 | 9.7569 |
| 500 | 20 | 68.0 | 12.0173 | 12.7403 | 71.4 | 8413.495 | 28,095.8444 | 65.9 | 0.25608 | 19.0239 |
| 500 | 40 | 67.0 | 25.6158 | 25.8277 | 69.6 | 8756.6288 | 117,232.4453 | 65.9 | 0.37032 | 47.7772 |
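The consistently lower tour costs of the proposed variant in the tables above come from the steepest ascent hill climbing local search added to the discrete adapted DA. The following is only an illustrative sketch of that local-search idea, not the authors' implementation: it takes a tour (a permutation of city indices), evaluates every pairwise-swap neighbour, and repeatedly moves to the single best neighbour until no swap improves the tour. The four-city distance matrix used in the example is hypothetical.

```python
import itertools
import math

def tour_cost(tour, dist):
    """Length of the closed tour: sum of edge weights, returning to the start city."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def steepest_ascent_hill_climb(tour, dist):
    """Steepest ascent hill climbing (steepest descent on tour cost):
    scan ALL pairwise-swap neighbours of the current tour, move to the best
    one, and repeat until no neighbour is strictly better (a local optimum)."""
    best, best_cost = list(tour), tour_cost(tour, dist)
    while True:
        best_neighbour, best_neighbour_cost = None, best_cost
        for i, j in itertools.combinations(range(len(best)), 2):
            cand = best[:]
            cand[i], cand[j] = cand[j], cand[i]  # swap two cities in the tour
            c = tour_cost(cand, dist)
            if c < best_neighbour_cost:
                best_neighbour, best_neighbour_cost = cand, c
        if best_neighbour is None:  # no swap improves the tour
            return best, best_cost
        best, best_cost = best_neighbour, best_neighbour_cost

# Toy example: four cities on a unit square; the optimal tour is the perimeter.
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[math.dist(a, b) for b in pts] for a in pts]
tour, cost = steepest_ascent_hill_climb([0, 2, 1, 3], dist)
print(tour, round(cost, 4))  # a perimeter tour of length 4.0
```

Because every neighbour is evaluated before each move, this is costlier per iteration than first-improvement hill climbing, which matches the tables: the optimized variant trades some runtime relative to the enhanced SSPSO for markedly better tour costs.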
Table 6. Performance comparison of proposed optimized discrete adapted DA and other swarm intelligence algorithms in solving benchmark TSP using a maximum of 100 search agents. All values are the cost of the best solution obtained.

| TSP Instance | Proposed Optimized DA | ACO | GA | PSM | GWO |
|---|---|---|---|---|---|
| burma14 | 30.8785 | 31.21 | 30.87 | 30.87 | 30.87 |
| ulysses16 | 73.9876 | 77.13 | 74.0 | 73.99 | 73.99 |
| ulysses22 | 75.3097 | 86.74 | 76.09 | 75.51 | 75.51 |
| bays29 | 9074.148 | 9964.78 | 9336.82 | 9076.98 | 9076.98 |
| eil51 | 430.244 | 499.92 | 524.18 | 438.7 | 455.24 |
| berlin52 | 7544.3659 | 8046.06 | 9184.19 | 8109.91 | 8048.91 |
| st70 | 687.0724 | 734.19 | 1015.0 | 767.65 | 752.84 |
| eil76 | 566.5564 | 595.58 | 805.78 | 591.89 | 604.32 |
| kroA100 | 24,205.4508 | 24,504.9 | 51,446.8 | 26,419.8 | 25,983.8 |
Table 7. Performance comparison of proposed optimized discrete adapted DA and other swarm intelligence algorithms in solving benchmark TSP using different numbers of search agents. All values are the cost of the best solution obtained.

| TSP Instance | Proposed Optimized DA | ACO | VTPSO | ABCSS | DSMO |
|---|---|---|---|---|---|
| burma14 | 30.8785 | 31.21 | 30.87 | 30.87 | 30.87 |
| ulysses16 | 73.9876 | 77.13 | 73.99 | 73.99 | 73.99 |
| ulysses22 | 75.3097 | 84.78 | 75.31 | 75.31 | 75.31 |
| bays29 | 9074.148 | 9964.78 | 9074.15 | 9074.15 | 9074.15 |
| eil51 | 430.244 | 499.92 | 429.51 | 428.98 | 428.86 |
| berlin52 | 7544.3659 | 7870.45 | 7544.37 | 7544.37 | 7544.37 |
| st70 | 687.0724 | 734.19 | 682.57 | 682.57 | 677.11 |
| eil76 | 566.5564 | 581.42 | 559.25 | 550.24 | 558.68 |
| rat99 | 1298.888 | 1366.3 | 1256.25 | 1242.32 | 1225.56 |
| kroA100 | 24,205.4508 | 24,504.9 | 21,307.44 | 21,299.0 | 21,298.21 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
