Article

CTOA: Toward a Chaotic-Based Tumbleweed Optimization Algorithm

1 College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
2 Department of Information Management, Chaoyang University of Technology, Taichung 41349, Taiwan
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(10), 2339; https://doi.org/10.3390/math11102339
Submission received: 26 March 2023 / Revised: 27 April 2023 / Accepted: 10 May 2023 / Published: 17 May 2023

Abstract:
Metaheuristic algorithms are an important area of research in artificial intelligence. The tumbleweed optimization algorithm (TOA) is a newly proposed metaheuristic optimization algorithm that mimics the growth and reproduction of tumbleweeds. In practice, chaotic maps have proven to be an effective way to improve optimization algorithms, helping them escape local optima, maintain population diversity, and strengthen global search ability. This paper presents a chaotic-based tumbleweed optimization algorithm (CTOA) that incorporates chaotic maps into the optimization process of the TOA. Using 12 common chaotic maps, the proposed CTOA aims to improve population diversity and global exploration and to prevent the algorithm from falling into local optima. The performance of CTOA is tested on 28 benchmark functions from CEC2013, and the results show that the circle map is the most effective in improving the accuracy and convergence speed of CTOA, especially in 50D.

1. Introduction

As science and technology continue to advance and production capacity improves, the complexity of optimization problems is increasing. Finding efficient solution algorithms for these complex problems has become an urgent issue to be addressed. The emergence of metaheuristic algorithms provides a promising approach to solving optimization problems [1]. They are inspired by natural evolutionary laws, which have been shaped over tens of thousands of years to improve survival and to ensure that populations evolve.
Researchers have investigated these natural phenomena and designed algorithms to solve complex problems. In such an algorithm, the optimization problem is mapped to the evolution of a population toward an optimal solution, the numerical search space is treated as the living environment, and each individual in the population represents a candidate solution. Starting from an initial state and continuously evolving the characteristics of the population, an optimal solution can be obtained. Representative algorithms include particle swarm optimization (PSO) [2], the genetic algorithm (GA) [3], the whale optimization algorithm (WOA) [4], the grey wolf optimizer (GWO) [5], the salp swarm algorithm (SSA) [6], ant colony optimization (ACO) [7], and the shuffled frog leaping algorithm (SFLA) [8]. These efficient and robust algorithms have been successfully applied to solve various problems such as complex engineering problems [9,10,11], neural networks [12,13,14,15], shortest path optimization [16,17], feature selection [18,19,20], and power scheduling [21].
Although metaheuristic algorithms can solve optimization problems in large-scale search spaces, Sheikholeslami et al. [22] showed that a sufficiently random sequence is required to ensure better performance in the algorithm’s global search phase, especially for metaheuristic algorithms that simulate complex natural phenomena and make decisions based on them. Population initialization is a critical component of metaheuristic algorithms. It has a direct impact on the algorithm’s search efficiency and the quality of the final solution. As a result, researchers have been investigating various methods to improve population initialization in order to solve practical problems more effectively. Randomized initialization is one of the most common initialization methods, in which a population is built by randomly generating solutions in the search space; it is used by the majority of algorithms, such as those in [23,24]. Opposition-based learning (OBL) initialization randomly generates a set of solutions as the initial population and then generates an opposite solution for each, as in [25,26]. Cluster-based initialization uses a clustering algorithm to divide the solutions, assigning solutions with similar patterns to several categories [27], as in [28,29]. The use of chaotic maps is one of the most effective ways to generate a sufficiently random and well-distributed initialization sequence for metaheuristic algorithms, as in [30,31,32,33]. By combining chaotic maps with metaheuristic algorithms, various optimization problems have been solved more effectively. For example, Gandomi et al. [34] proposed a chaos-accelerated particle swarm algorithm in 2013, and Arora et al. [35] proposed using chaos to improve the butterfly algorithm in 2017; both studies obtained good experimental results. Kohli et al. [36] proposed a chaotic grey wolf optimization algorithm for constrained problems, and experiments proved that this method is very effective. Jia et al. [37] applied chaos theory to differential evolution and proved the feasibility of a chaotic local search strategy.
The tumbleweed optimization algorithm (TOA) [38] is a newly proposed metaheuristic algorithm inspired by the growth and reproduction behavior of tumbleweeds. Here are several existing research gaps and our motivations in this study:
  • Currently, no studies focus on chaotic-based TOA algorithms. Previous studies have shown that combining chaotic maps with metaheuristic algorithms improves performance on various optimization problems. To the best of our knowledge, this paper is the first to propose a chaotic-based tumbleweed optimization algorithm (named CTOA).
  • In order to obtain the best performance of CTOA, we evaluate 12 chaotic maps. In our experiments, the CEC2013 benchmark suite, the Friedman ranking test, and the Wilcoxon test are adopted. Meanwhile, a real-world problem, power generation prediction, is also used for evaluation.
Therefore, the main contributions of this paper are as follows:
  • In this study, we combine chaotic maps with the TOA algorithm for the first time to propose a chaotic-based tumbleweed optimization algorithm (CTOA).
  • We select 12 different chaotic maps and 28 popular benchmark functions to evaluate the performance of the proposed CTOA algorithm. The experimental results demonstrate that the performance and convergence of CTOA are greatly enhanced. We conclude that the best CTOA algorithm is CTOA9 (circle map + TOA).
  • Finally, we compare CTOA9 with famous state-of-the-art optimization algorithms, including GA [3], PSO [2], ACO [7], and SFLA [8]. The results demonstrate that CTOA9 is not only the best in the Friedman ranking test and Wilcoxon test, but it also has the minimum error when applied to power generation prediction problems.
The remainder of this paper is structured as follows: In Section 2, a literature review is provided to summarize the existing research on chaotic-based metaheuristic optimization algorithms. We present the CTOA, which integrates twelve selected chaotic maps into TOA, in Section 3. The experimental evaluation of the CTOA on a range of benchmark functions is presented in Section 4. Section 5 provides a detailed application to a real-world problem: power generation prediction. The discussion is given in Section 6, and the conclusion in Section 7.

2. Related Work

One area of interest in recent years has been the use of chaotic map strategies to enhance metaheuristic algorithms. Several studies have explored the use of chaotic maps to improve the performance of metaheuristic algorithms. In 2018, Sayed et al. [39] proposed a new chaotic multi-verse optimization algorithm (CMVO) to overcome the low convergence speed and local-optimum problems of MVO. To help control the rate of exploration and exploitation, ten well-known chaotic maps were selected for their research. Experimental results show that the sinusoidal map can significantly improve the performance of the original MVO. Similarly, Du et al. [40], in 2018, proposed the use of linearly decreasing steps and a logistic chaotic map to enhance the fruit fly algorithm, named DSLC-FOA, which produced better results than the original FOA and other metaheuristic algorithms.
Tharwat et al. [41], in 2019, developed a chaotic particle swarm optimization (CPSO) to optimize path planning and demonstrated its high accuracy by adjusting multiple variables in a Bezier curve-based path planning model. Kaveh et al. [42] proposed a Gauss map-based chaotic firefly algorithm (CGFA). Experimental results show that chaotic maps can improve convergence and prevent the algorithm from getting stuck in locally optimal solutions. In 2020, Demidova et al. [43] applied two strategies of different chaotic maps with symmetric distribution and exponential step decay to the fish school search optimization algorithm (FSS) to solve the shortcomings of poor convergence speed of FSS and low precision in high-dimensional optimization problems. Ultimately, the results of the study showed that FSS using tent map produced the most accurate results. Similar to this study, Li et al. [44] also proved that the chaotic whale optimization algorithm generated by the tent map when improving the WOA has higher accuracy in numerical simulation.
In 2021, Agrawal et al. [45] used chaotic maps to improve the gaining sharing knowledge-based optimization algorithm (GSK) and applied this new algorithm to feature selection. The results indicate that the Chebyshev map shows the best result among all chaotic maps, improving the original algorithm’s accuracy and convergence speed. In 2022, Li et al. [46] proposed the chaotic arithmetic optimization algorithm (CAOA). The CAOA, based on chaotic disturbance factors, has the advantage of balancing exploration and exploitation in the optimization process. Onay et al. [47] applied ten chaotic maps to the classic hunger games search (HGS), and the resulting variants were evaluated on the classic benchmark problems of CEC2017.
Recently, Yang et al. [48] used the tent map to improve the population diversity of WOA [4]. They also optimized the parameters and network size of the radial basis function neural network (RBFNN). Luo et al. [49] proposed an improved bald eagle algorithm combined with the tent map and the Lévy flight method; this improved algorithm can expand the diversity of the population and the search space. Naik et al. [50] introduced chaos theory into the modification of the social group optimization (SGO) algorithm by replacing constant values with chaotic maps. Their chaotic social group optimization algorithm improves convergence speed and solution precision.
In summary, the use of chaotic map strategies has shown promise in improving the performance of various metaheuristic algorithms. The selection of a suitable chaotic map for a given optimization algorithm can lead to a significant improvement in its convergence speed and precision. The summaries of these studies are shown in Table 1. The potential of chaotic map strategies in improving optimization algorithms remains an active area of research, and further studies are necessary to explore their effectiveness in different optimization problems.

3. Proposed Chaotic-Based Tumbleweed Optimization Algorithm

In this section, we briefly review the TOA, and a chaotic-based TOA called CTOA is proposed. To make the algorithm easier to understand, some important notations are described in Table 2.

3.1. Tumbleweed Optimization Algorithm (TOA)

TOA not only sets up search individuals, as traditional algorithms do, but also uses a grouping structure. That is, a tumbleweed population contains subpopulations, each of which has multiple search individuals [38]. A multi-level grouping structure such as this can improve the TOA algorithm’s ability to find optimal values, and multiple subgroups can also prevent premature convergence to local optima.
The algorithm consists of two steps that correspond to the tumbleweed population’s individual growth and reproduction procedures. The two stages of individual growth and individual reproduction are equally important for population evolution and hence each accounts for half of the tumbleweed growth cycle.

3.1.1. Individual Growth-Local Search

In the local search process, the influence of the environment on the $i$th individual during the $k$th cycle, $x_i^k$, is represented by $P_i^k$ in Equation (1):
$P_i^k = \dfrac{fit(x_i^k)}{\mathrm{sum}(fit(X^k))} + \xi \quad (1)$
where $\xi$ is a random number between 0 and 1, and $X^k$ is a matrix whose elements are all individuals of this iteration. The greater the value of $P_i^k$, the greater the adaptability of the tumbleweed seed $x_i^k$ in this environment. The fitness values are sorted in the TOA to generate subpopulations $G_i$ ($i = 1, 2, 3, \ldots, n$). The top 50% of $G_i$ will be saved to compete against the other subpopulations using Equations (2) and (3).
$\mathrm{Factor} = \begin{cases} \dfrac{c_1(\mathrm{gbest} - x_i^k) + c_2(\mathrm{pbest}_k - x_i^k) + c_3(\mathrm{pbest}_{k+1} - x_i^k)}{3}, & k = 1 \\ \dfrac{c_1(\mathrm{pbest}_k - x_i^k) + c_2(\mathrm{pbest}_{k-1} - x_i^k)}{2}, & k = K \\ \dfrac{c_1(\mathrm{pbest}_{k-1} - x_i^k) + c_2(\mathrm{pbest}_k - x_i^k) + c_3(\mathrm{pbest}_{k+1} - x_i^k)}{3}, & \text{otherwise} \end{cases} \quad (2)$
Then, the mathematical expression for each individual in the subpopulation is shown as follows:
$x_{k+1}^i = x_k^i + r_1 \cdot \mathrm{Factor} \quad (3)$
where $c_1$, $c_2$, and $c_3$ are random numbers, all in the range from 0 to 2. The remaining 50% of subpopulations cannot compete with the other subpopulations. A subpopulation with poor environmental adaptability cannot compete, but it can still carry out intra-population evolution, and the evolution formula is expressed in Equation (4):
$x_{k+1}^i = x_k^i + r_1 \left( c_4 (\mathrm{pbest}_k - x_k^i) + c_5 (x_{\mathrm{select},k}^i - x_k^i) \right) \quad (4)$
where $c_4$ and $c_5$ are random numbers from 0 to 1, and $r_1$ represents the influence of the external environment on the individual, which decreases linearly as the iterations proceed.
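To make this step concrete, the following minimal Python sketch illustrates the intra-population evolution of Equation (4). The vectors pbest_k and x_select (assumed here to be another individual of the same subpopulation), the decay factor r1, and the NumPy random generator rng are supplied by the surrounding TOA loop; these names are used only for illustration and are not part of the original pseudocode.

import numpy as np

def intra_population_step(x_i, pbest_k, x_select, r1, rng):
    # Equation (4): move toward the subpopulation's local best (pbest_k) and
    # toward another individual of the same subpopulation (x_select).
    c4, c5 = rng.uniform(0.0, 1.0, size=2)   # random coefficients in [0, 1]
    step = c4 * (pbest_k - x_i) + c5 * (x_select - x_i)
    return x_i + r1 * step                   # r1 decreases linearly over iterations

Here, rng can be created with np.random.default_rng(), and all vectors are NumPy arrays of the same dimension as the search space.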

3.1.2. Individual Reproduction—Global Search

The global search phase corresponds to tumbleweed reproduction after reaching adulthood, during which the tumbleweeds scatter their seeds. Equation (5) gives the evolutionary formula for this process:
$x_{k+1}^i = \mathrm{gbest}_k + V \cdot \dfrac{Max\_iteration - gc}{Max\_iteration} \quad (5)$
where $V$ is a random velocity vector of the falling seed, and $gc$ denotes the current iteration count.

3.2. Chaotic-Based Tumbleweed Optimization Algorithm (CTOA)

In the TOA, the population $pop$ is randomly initialized according to Equation (6):
$pop = lb + (ub - lb) \cdot rand \quad (6)$
where $rand$ is a random matrix with elements in the range from 0 to 1. The two processes in TOA described above correspond to the tumbleweed population’s local search and global search. After the random initialization of TOA, the produced sequences may not be well distributed, which can affect the search results depending on the particular sequence; this phenomenon reduces the robustness of the optimization algorithm.
Here, we propose an approach to replace the randomly generated population. First, an initial vector $x_1$ needs to be given, and Equation (7) is used to generate a matrix $X$ containing a chaotic sequence:
$X = [x_1, x_2, \ldots, x_i], \quad x_i = f(x_{i-1}), \quad i = |pop| \quad (7)$
where $f(x)$ represents the selected chaotic map. Each element of the chaotic sequence is obtained by feeding the previous element into the chaotic map function.
Next, the population $pop\_chaotic$ with chaotic properties is generated using Equation (8):
$pop\_chaotic = lb + (ub - lb) \cdot X \quad (8)$
where $lb$ is the minimum value of the search solution space, and $ub$ is the maximum value of the search solution space. The dimension of $X$ must be the same as the dimension of $pop\_chaotic$.
After applying Equations (7) and (8), the chaotic population initialization of $pop\_chaotic$ is completed. When an algorithm requires a random sequence to initialize the population positions, the chaotic sequence generated by Equations (7) and (8) replaces the original random sequence. On this basis, the chaotic-based tumbleweed optimization algorithm (CTOA) is proposed. The distribution of the initial population is shaped by the chaotic sequence, so the positions in the target space are spread more evenly across the search range. The basic steps of the CTOA are as follows:
Step 1: Generate the initial input data randomly.
Step 2: Iterate the selected chaotic map and produce a chaotic sequence X.
Step 3: Generate a chaotic population p o p _ c h a o t i c , where the boundary of p o p _ c h a o t i c needs to be controlled by Equations (7) and (8).
Step 4: Complete the individual search part of CTOA using the chaotic population from Step 3.
Step 5: Complete the global exploration part of CTOA.
Step 6: Obtain a feasible solution.
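To make Steps 1–3 concrete, the following minimal Python sketch generates a chaotic initial population with the circle map via Equations (7) and (8). The circle-map parameters (a = 0.5, b = 0.2) are a commonly used setting assumed here for illustration; they are not prescribed by the original algorithm.

import numpy as np

def circle_map(x, a=0.5, b=0.2):
    # One iteration of the circle map (assumed common parameterization).
    return np.mod(x + b - (a / (2.0 * np.pi)) * np.sin(2.0 * np.pi * x), 1.0)

def chaotic_init(pop_size, dim, lb, ub, rng):
    x = rng.random(dim)                # Step 1: random initial input vector
    X = np.empty((pop_size, dim))
    for i in range(pop_size):          # Step 2 / Equation (7): iterate the map
        x = circle_map(x)
        X[i] = x
    return lb + (ub - lb) * X          # Step 3 / Equation (8): scale into [lb, ub]

rng = np.random.default_rng(0)
pop_chaotic = chaotic_init(pop_size=100, dim=30, lb=-100.0, ub=100.0, rng=rng)

Any of the other eleven chaotic maps in Table 4 can be substituted for circle_map without changing the rest of the initialization.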
Algorithm 1 shows the pseudocode for the CTOA’s entire optimization process. The complete process of CTOA is also given in Figure 1 in the form of a flowchart.
Algorithm 1: Pseudo-code of the CTOA

4. Experimental Result

In this section, we conduct experiments to determine which chaotic map is best suited for use in our proposed CTOA algorithm.

4.1. Experimental Environment and Benchmark Function

All the results presented in this section were obtained through MATLAB R2022b and Python 3 simulations on a machine equipped with an 11th Gen Intel(R) Core(TM) i9-11900 CPU and 64 GB of RAM. Before conducting the experimental tests, we initialized the running parameters as shown in Table 3. The initial population size pop_size was set to 100 so that the position distribution of individuals initialized with chaotic maps is sufficiently representative. The number of independent runs run_nums was set to 50 to prevent random fluctuations from distorting the final evaluation results. To facilitate comparisons between the CTOA algorithm and the unimproved TOA algorithm, the 12 different CTOA variants are labeled CTOA1 to CTOA12, as shown in Table 4.
The CEC2013 benchmark suite [60] was released at the IEEE Congress on Evolutionary Computation to enable comprehensive performance comparisons of metaheuristic algorithms and verification of newly proposed algorithms. To evaluate the performance of the proposed algorithm, we used the 28 benchmark functions of the CEC2013 suite, which are separated into three categories: unimodal functions, multimodal functions, and composition functions. Functions F1–F5 are unimodal functions, F6–F20 are multimodal functions, and F21–F28 are composition functions. The names and details of the benchmark functions are shown in Table 5, Table 6 and Table 7. Unimodal benchmark functions have a single minimum in the search interval and are used to test the convergence speed of CTOA. Multimodal benchmark functions place higher demands on CTOA than unimodal functions, since their multiple local minima test CTOA’s ability to jump out of local optima. The composition benchmark functions combine the two aforementioned types. Together, these benchmark functions enable us to evaluate CTOA’s performance across multiple dimensions.
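As a small illustration, the simplest of these benchmarks, the Sphere function (F1), can be written as follows; the shift vector and bias handling used in the official CEC2013 suite are omitted here for brevity.

import numpy as np

def sphere(x):
    # F1 (Sphere): sum of squared components; global minimum at the origin.
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))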

4.2. Experimental Result on Numerical Statistics

In the experiments, we used the following three criteria of algorithm performance:
$Best = \min(history)$
$Mean = \dfrac{1}{n} \sum_{i=1}^{n} x_i$
$Std = \sqrt{\dfrac{1}{n} \sum_{i=1}^{n} (x_i - Mean)^2}$
where $Best$ represents the algorithm’s best result in the test, $Mean$ is the average of 50 tests, and $Std$ stands for standard deviation. The historical results of the algorithm obtained after 50 runs on CEC2013 are represented by $history = [x_1, x_2, \ldots, x_{50}]$. Each algorithm’s $Best$, $Mean$, and $Std$ values are used to determine whether an experiment is good or bad. The $Best$ reflects the algorithm’s limit, the $Mean$ reflects the algorithm’s accuracy, and the $Std$ reflects the algorithm’s stability. To demonstrate the performance and robustness of CTOA in different dimensions, we run the CTOA variants presented in Table 4 at 30D, 50D, and 100D and record the experimental data in these dimensions.
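For reference, the three criteria above can be computed from the recorded run history as in the short Python sketch below; the history array is simply the best result of each of the 50 independent runs.

import numpy as np

def run_statistics(history):
    # history: best fitness recorded in each independent run (here, 50 runs)
    history = np.asarray(history, dtype=float)
    best = history.min()                              # Best
    mean = history.mean()                             # Mean
    std = np.sqrt(np.mean((history - mean) ** 2))     # Std (population form)
    return best, mean, std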
In addition, in order to visually demonstrate the improvement brought to the TOA by chaotic-map initialization, the experimental results of TOA in these three dimensions are also included in the tables. Table 8, Table 9, Table 10 and Table 11 record the results of the algorithms in Table 4 running in 30D, Table 12, Table 13, Table 14 and Table 15 record the results in 50D, and Table 16, Table 17, Table 18 and Table 19 record the results in 100D. Note that the best results in the experiments are highlighted in bold.
To evaluate the performance of the proposed algorithm, we count the best result on each function as a “win”; the better an algorithm performs, the more wins it obtains. The experimental results of the algorithms running in different dimensions, together with the number of wins obtained, are presented in Figure 2, Figure 3 and Figure 4. The results indicate that CTOA initialized with chaotic maps outperforms TOA in most cases. Specifically, in 30D, TOA obtains only one win, while the CTOA variants obtain more wins than TOA. Furthermore, CTOA9 performs best in 30D, 50D, and 100D, far exceeding the other CTOAs and TOA. Notably, in 30D and 50D, the number of wins of CTOA is about 20 times higher than that of TOA. In 50D, CTOA9 has 22 wins, which is also the highest among these algorithms. These results suggest that the circle map brings the most significant improvement to the performance of TOA.
The Friedman rank test is a nonparametric statistical test method used to compare the overall rankings of algorithms. The Wilcoxon test is used to statistically compare the performance of two algorithms chosen to solve a particular problem. Table 20, Table 21 and Table 22 show the results of the Friedman ranking test for the different algorithms in different dimensions. In the Friedman test, each algorithm obtains a Friedman ranking, and the smaller the ranking, the better the performance of the algorithm. From the tables, CTOA9 ranks first in both 30D and 50D, which means that the algorithm achieves the best performance in these dimensions. Although CTOA9 did not obtain first place in the Friedman ranking in 100D, its ranking is still very high, with only a 6% difference from the best algorithm. The results of the Wilcoxon test are shown in Table 23. We selected a significance level of 0.05 and converted the weighted average of the evaluation indicators in the three dimensions into rank data. A p value < 0.05 indicates that the difference between the two algorithms is significant, and the result is marked as 1. Except for CTOA3 and CTOA7, CTOA9 shows a significant difference compared with all other algorithms.
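As an illustration only, both nonparametric tests can be reproduced with SciPy as sketched below; the array layout (13 algorithms by 28 functions), the dummy data, and the chosen indices are assumptions made for the sketch, not the exact data used in the experiments.

import numpy as np
from scipy import stats

# results[a, f] = mean error of algorithm a on benchmark function f (assumed layout)
rng = np.random.default_rng(0)
results = rng.random((13, 28))        # 12 CTOA variants + TOA, 28 CEC2013 functions

# Friedman test across all algorithms (lower average rank = better algorithm)
friedman_stat, friedman_p = stats.friedmanchisquare(*results)

# Pairwise Wilcoxon signed-rank test, e.g., CTOA9 vs. TOA (illustrative indices)
w_stat, w_p = stats.wilcoxon(results[9], results[0])
significant = w_p < 0.05              # marked as 1 in Table 23 when p < 0.05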

4.3. Experimental Results on Convergence

In this section, we run the CTOA presented in Table 4 at 30D, 50D, and 100D and record their convergence curves to demonstrate whether the CTOA algorithm has an advantage over the TOA in terms of convergence. Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11 report the results of the algorithms in Table 4 running in 30D, Figure 12, Figure 13, Figure 14, Figure 15, Figure 16, Figure 17 and Figure 18 report the results in 50D, and Figure 19, Figure 20, Figure 21, Figure 22, Figure 23, Figure 24 and Figure 25 report the results in 100D. The best fitness value obtained by different algorithms on the benchmark function is used to evaluate the algorithm’s performance in terms of convergence speed during the iterative visualization process.
The experimental convergence curves also show that CTOA9 converges noticeably better than TOA and the other algorithms using chaotic maps. CTOA9 achieves a faster convergence rate on most of the test functions, and after 300 iterations, its final fitness values are also optimal. CTOA9 does not have the fastest convergence speed in Figure 10d, Figure 13c and Figure 24a, but it is still faster than most of the other algorithms. On a unimodal function there is only one global optimum, so it is easier for the algorithm to find the optimal position; therefore, on the unimodal functions F1–F5, all algorithms stagnate around iteration 150. On the multimodal and composition functions, CTOA9 reaches the stagnation state later than most algorithms, for example, on F8, F9, and F14.
In addition, an interesting phenomenon can be observed in the convergence curves of the CTOA9 algorithm in 100D: the convergence curves become similar, as in Figure 19a,c and Figure 20b, which means that the convergence behavior of CTOA and TOA tends to become consistent.

4.4. Comparison with State-of-the-Art Algorithms

The previous experiments show that CTOA9 is the best among all CTOA variants. To accurately evaluate the quality of the proposed algorithm in terms of stagnation, exploration, and diversity, we conducted comparison experiments with other algorithms of the same type. Therefore, we selected GA [3], PSO [2], ACO [7], and SFLA [8] and compared them with CTOA9. Following previous studies, the algorithm parameter settings are shown in Table 24.
We selected several benchmark functions, and the convergence curves of the algorithms on these functions are shown in Figure 26, Figure 27 and Figure 28. On the unimodal functions F1, F2, and F5, SFLA has the fastest convergence speed and the best fitness value; after about 50 iterations, CTOA9 converges and then stagnates. Among these algorithms, SFLA performs best on the unimodal functions, and CTOA9 is slightly weaker. On the multimodal functions F9–F14, CTOA9 stagnates after 700 iterations except on F9 and F10. GA and SFLA can converge in fewer than 100 iterations, but their fitness values are very poor. Although the convergence speed of CTOA9 is slow, its final fitness values are the best among all algorithms. Similarly, CTOA9 also obtains the best fitness values on the composition functions F21–F23. In summary, although the convergence speed of CTOA9 is slow, it has a strong global exploration ability, can jump out of local optima, and continuously updates the optimal value.
Nonparametric statistical methods can help us compare the performance of different algorithms. Therefore, we conducted the Friedman test and the Wilcoxon signed-rank test on CTOA9, GA, PSO, ACO, and SFLA, and statistical values were obtained on the benchmark functions. The results of the Friedman test are shown in Table 25, and the results of the Wilcoxon signed-rank test are shown in Table 26. From Table 25, CTOA9 achieves the best performance in the Friedman test; SFLA performs slightly worse than CTOA9, and GA is the worst performer, ranking fifth. In addition, Table 26 shows that CTOA9 is significantly different from all the other algorithms. In summary, the nonparametric statistical experiments indicate that CTOA9 is the best in terms of overall performance.

5. Real Problem: Power Generation Prediction

With the rapid development of the economy, electricity consumption has increased, and the resulting heavy load places greater demands on power generation. Power generation prediction is therefore critical for accurate decision making by electric utilities and power plants. In the feature data collected from a power plant, the parameters influence one another, and many relationships cannot be expressed by a simple function. The support vector machine (SVM) is a commonly used machine learning algorithm, and the SVM model has unique advantages in capturing nonlinear relationships between parameters. The performance of SVM largely depends on the values of the selected hyperparameters: the penalty parameter c and the width parameter g play a decisive role in the classification results and prediction accuracy of SVM.
Cross-validation is often used to select hyperparameters, but this approach tends to consume a lot of computational resources and time. Because metaheuristic algorithms perform well at randomized, iterative search in the hyperparameter space, they are widely used for SVM hyperparameter optimization. The proposed CTOA algorithm can help us find the best combination of hyperparameters. To examine the performance of the proposed algorithm, we applied it to an optimized power forecasting model and compared the results of the proposed optimal variant, CTOA9, with GA, PSO, ACO, and SFLA. Following previous studies, the algorithm parameter settings are shown in Table 24.
The framework of this method is shown in Figure 29.
Additionally, the prediction error used as the fitness function in this problem is shown in the following equation:
$Fitness = \sqrt{\dfrac{1}{m} \sum_{i=1}^{N_p} \left( y_i - \hat{y}_i \right)^2}$
where $m$ is the total number of training samples, and $y_i$ and $\hat{y}_i$ are the actual and model-predicted values, respectively. To evaluate the model more comprehensively, we also used $R^2$ as an evaluation criterion, defined as follows:
$R^2 = 1 - \dfrac{\sum_i \left( \hat{y}_i - y_i \right)^2}{\sum_i \left( \bar{y} - y_i \right)^2}$
where $y_i$ is the true value, $\hat{y}_i$ is the predicted value, and $\bar{y}$ is the mean of the true values.
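To make the fitness evaluation concrete, the sketch below scores a single candidate (c, g) pair with an RBF-kernel support vector regressor; the use of scikit-learn's SVR and 5-fold cross-validated predictions are assumptions made for illustration, not details specified by the original model.

import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict

def evaluate_cg(c, g, X_train, y_train):
    # Train an RBF-kernel SVR with penalty parameter c and width parameter g,
    # then score it with the RMSE-style fitness and the R^2 defined above.
    model = SVR(kernel="rbf", C=c, gamma=g)
    y_pred = cross_val_predict(model, X_train, y_train, cv=5)
    rmse = np.sqrt(np.mean((y_train - y_pred) ** 2))
    r2 = 1.0 - np.sum((y_pred - y_train) ** 2) / np.sum((np.mean(y_train) - y_train) ** 2)
    return rmse, r2

In this setup, the CTOA search loop would treat (c, g) as an individual's position and minimize the returned prediction error.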
All algorithms were used to optimize the SVM parameters for power forecasting, and the obtained results are shown in Table 27. For the two selected evaluation indicators, a smaller RMSE is better, and an $R^2$ closer to 1 is better. The final RMSE obtained by CTOA9 is 0.051033, the best among the six algorithms and about 7.4% better than that of the second-best algorithm, PSO. The final $R^2$ obtained by CTOA9 is 0.96719, which is very close to 1 and the best among all algorithms. In summary, the newly proposed CTOA-SVM model performs best.

6. Discussion

The experimental results demonstrate that the use of chaotic maps improves the performance of TOA in solving optimization problems. Specifically, the numerical statistics and convergence speed of CTOA are significantly better than those of TOA. The circle map has the most prominent effect on improving the algorithm’s performance, particularly in 50D. When the population positions are initialized, the chaotic sequence generated by the circle map replaces the random initial sequence, which makes the population evenly distributed and expands the global search range. However, as the dimension increases, the performance of TOA also improves, and it becomes comparable to that of CTOA in 100D. This implies that the advantage of using chaotic maps to improve TOA diminishes in higher dimensions, where the performance of TOA and CTOA tends to converge to a similar level. This suggests that CTOA is better suited for optimization problems in lower dimensions, specifically in the range of 30–50D. Moreover, the larger number of variables in higher dimensions makes it difficult for chaotic maps to handle such complexity, reducing the performance improvement. Although the differences between CTOA9 and CTOA3 or CTOA7 are not statistically significant, this does not imply that CTOA9 performs poorly in the Wilcoxon test; the Wilcoxon rank sum test is only a statistical method for comparing the differences between two samples, and its results are affected by many factors, which need to be fully considered in future research. In the convergence results, the fitness value of CTOA9 is still being updated when the iterations terminate ($Max\_iteration = 300$), indicating that the algorithm still has unused exploration capability. In future research, it is necessary to increase the number of iterations so that the algorithm fully converges, in order to evaluate the convergence speed more completely.

7. Conclusions

In this paper, we presented a novel optimization algorithm called the chaotic-based tumbleweed optimization algorithm (CTOA). The use of chaotic maps increases the diversity of the initial population, leading to improved algorithm performance. The performance of CTOA was tested on the 28 benchmark functions of CEC2013. The comparative experimental results showed that the improved CTOA using the circle map is better than the original TOA in both the accuracy and the convergence speed of finding the optimal value. We conclude that the CTOA9 algorithm using the circle map outperforms the other CTOAs and TOA in terms of optimal results and benchmark function convergence. We compared CTOA9 with famous state-of-the-art optimization algorithms, including GA, PSO, ACO, and SFLA. The results demonstrated that CTOA9 is not only the best in the Friedman ranking test and Wilcoxon test, but it also has the minimum error when applied to power generation prediction problems. In addition, the performance of CTOA based on the circle map is more outstanding in lower search space dimensions (30D and 50D), but as the dimension increases to 100D, the performance of CTOA and TOA becomes more similar. Therefore, the CTOA algorithm is more suitable for solving problems with lower search space dimensions.

Author Contributions

Conceptualization, T.-Y.W.; methodology, T.-Y.W. and A.S.; software, A.S.; formal analysis, J.-S.P.; investigation, T.-Y.W.; writing—original draft preparation, T.-Y.W., A.S. and J.-S.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

This study did not involve humans.

Data Availability Statement

The data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PSO    particle swarm optimization
GA    genetic algorithm
WOA    whale optimization algorithm
GWO    grey wolf optimizer
SSA    salp swarm algorithm
TOA    tumbleweed optimization algorithm
CTOA    chaotic-based tumbleweed optimization algorithm
CMVO    chaotic multi-verse optimization
CPSO    chaotic particle swarm optimization
CGFA    Gauss map-based chaotic firefly algorithm
FSS    fish school search
CWOA    chaotic whale optimization algorithm
GSK    gaining sharing knowledge
CAOA    chaotic arithmetic optimization algorithm
HGS    hunger games search
RBFNN    radial basis function neural network
SGO    social group optimization
ICMIC    iterative chaotic map with infinite collapse
Std    standard deviation

References

  1. Zhang, F.; Wu, T.Y.; Wang, Y.; Xiong, R.; Ding, G.; Mei, P.; Liu, L. Application of quantum genetic optimization of LVQ neural network in smart city traffic network prediction. IEEE Access 2020, 8, 104555–104564. [Google Scholar] [CrossRef]
  2. Marini, F.; Walczak, B. Particle swarm optimization (PSO). A tutorial. Chemom. Intell. Lab. Syst. 2015, 149, 153–165. [Google Scholar] [CrossRef]
  3. Mirjalili, S. Genetic Algorithm. In Evolutionary Algorithms and Neural Networks: Theory and Applications; Springer International Publishing: Cham, Switzerland, 2019; pp. 43–55. [Google Scholar] [CrossRef]
  4. Nasiri, J.; Khiyabani, F.M. A whale optimization algorithm (WOA) approach for clustering. Cogent Math. Stat. 2018, 5, 1483565. [Google Scholar] [CrossRef]
  5. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  6. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  7. Kumar, S.; Kumar-Solanki, V.; Kumar Choudhary, S.; Selamat, A.; González-Crespo, R. Comparative Study on Ant Colony Optimization (ACO) and K-Means Clustering Approaches for Jobs Scheduling and Energy Optimization Model in Internet of Things (IoT); ACM: New York, NY, USA, 2020. [Google Scholar]
  8. Eusuff, M.; Lansey, K.; Pasha, F. Shuffled frog-leaping algorithm: A memetic meta-heuristic for discrete optimization. Eng. Optim. 2006, 38, 129–154. [Google Scholar] [CrossRef]
  9. Chen, J.; Jiang, J.; Guo, X.; Tan, L. A self-Adaptive CNN with PSO for bearing fault diagnosis. Syst. Sci. Control Eng. 2021, 9, 11–22. [Google Scholar] [CrossRef]
  10. Malviya, S.; Kumar, P.; Namasudra, S.; Tiwary, U.S. Experience Replay-Based Deep Reinforcement Learning for Dialogue Management Optimisation; ACM: New York, NY, USA, 2022. [Google Scholar]
  11. Houssein, E.H.; Saad, M.R.; Hussain, K.; Zhu, W.; Shaban, H.; Hassaballah, M. Optimal Sink Node Placement in Large Scale Wireless Sensor Networks Based on Harris’ Hawk Optimization Algorithm. IEEE Access 2020, 8, 19381–19397. [Google Scholar] [CrossRef]
  12. Singh, P.; Chaudhury, S.; Panigrahi, B.K. Hybrid MPSO-CNN: Multi-level particle swarm optimized hyperparameters of convolutional neural network. Swarm Evol. Comput. 2021, 63, 100863. [Google Scholar] [CrossRef]
  13. Wu, T.Y.; Li, H.; Chu, S.C. CPPE: An Improved Phasmatodea Population Evolution Algorithm with Chaotic Maps. Mathematics 2023, 11, 1977. [Google Scholar] [CrossRef]
  14. Shaik, A.L.H.P.; Manoharan, M.K.; Pani, A.K.; Avala, R.R.; Chen, C.M. Gaussian Mutation–Spider Monkey Optimization (GM-SMO) Model for Remote Sensing Scene Classification. Remote Sens. 2022, 14, 6279. [Google Scholar] [CrossRef]
  15. Xue, X.; Guo, J.; Ye, M.; Lv, J. Similarity Feature Construction for Matching Ontologies through Adaptively Aggregating Artificial Neural Networks. Mathematics 2023, 11, 485. [Google Scholar] [CrossRef]
  16. Li, D.; Xiao, P.; Zhai, R.; Sun, Y.; Wenbin, H.; Ji, W. Path Planning of Welding Robots Based on Developed Whale Optimization Algorithm. In Proceedings of the 2021 6th International Conference on Control, Robotics and Cybernetics (CRC), Shanghai, China, 9–11 October 2021; pp. 101–105. [Google Scholar] [CrossRef]
  17. Chen, C.M.; Lv, S.; Ning, J.; Wu, J.M.T. A Genetic Algorithm for the Waitable Time-Varying Multi-Depot Green Vehicle Routing Problem. Symmetry 2023, 15, 124. [Google Scholar] [CrossRef]
  18. Chuang, L.Y.; Chang, H.W.; Tu, C.J.; Yang, C.H. Improved binary PSO for feature selection using gene expression data. Comput. Biol. Chem. 2008, 32, 29–38. [Google Scholar] [CrossRef] [PubMed]
  19. Xue, X. Complex ontology alignment for autonomous systems via the Compact Co-Evolutionary Brain Storm Optimization algorithm. ISA Trans. 2023, 132, 190–198. [Google Scholar] [CrossRef] [PubMed]
  20. Huang, C.L.; Wang, C.J. A GA-based feature selection and parameters optimizationfor support vector machines. Expert Syst. Appl. 2006, 31, 231–240. [Google Scholar] [CrossRef]
  21. Ziadeh, A.; Abualigah, L.; Elaziz, M.A.; Şahin, C.B.; Almazroi, A.A.; Omari, M. Augmented grasshopper optimization algorithm by differential evolution: A power scheduling application in smart homes. Multimed. Tools Appl. 2021, 80, 31569–31597. [Google Scholar] [CrossRef]
  22. Sheikholeslami, R.; Kaveh, A. A Survey of chaos embedded meta-heuristic algorithms. Int. J. Optim. Civ. Eng. 2013, 3, 617–633. [Google Scholar]
  23. Rajabioun, R. Cuckoo optimization algorithm. Appl. Soft Comput. 2011, 11, 5508–5518. [Google Scholar] [CrossRef]
  24. Bastos Filho, C.J.A.; de Lima Neto, F.B.; Lins, A.J.C.C.; Nascimento, A.I.S.; Lima, M.P. A novel search algorithm based on fish school behavior. In Proceedings of the 2008 IEEE International Conference on Systems, Man and Cybernetics, Singapore, 12–15 October 2008; pp. 2646–2651. [Google Scholar] [CrossRef]
  25. Wang, H.; Wu, Z.; Rahnamayan, S.; Liu, Y.; Ventresca, M. Enhancing particle swarm optimization using generalized opposition-based learning. Inf. Sci. 2011, 181, 4699–4714. [Google Scholar] [CrossRef]
  26. Li, W.; Wang, G.G. Improved elephant herding optimization using opposition-based learning and K-means clustering to solve numerical optimization problems. J. Ambient Intell. Humaniz. Comput. 2023, 14, 1753–1784. [Google Scholar] [CrossRef]
  27. Khamkar, R.; Das, P.; Namasudra, S. SCEOMOO: A novel Subspace Clustering approach using Evolutionary algorithm, Off-spring generation and Multi-Objective Optimization. Appl. Soft Comput. 2023, 139, 110185. [Google Scholar] [CrossRef]
  28. Bajer, D.; Martinović, G.; Brest, J. A population initialization method for evolutionary algorithms based on clustering and Cauchy deviates. Expert Syst. Appl. 2016, 60, 294–310. [Google Scholar] [CrossRef]
  29. Poikolainen, I.; Neri, F.; Caraffini, F. Cluster-Based Population Initialization for differential evolution frameworks. Inf. Sci. 2015, 297, 216–235. [Google Scholar] [CrossRef]
  30. Chuang, L.Y.; Hsiao, C.J.; Yang, C.H. Chaotic particle swarm optimization for data clustering. Expert Syst. Appl. 2011, 38, 14555–14563. [Google Scholar] [CrossRef]
  31. Wang, Y.; Liu, H.; Ding, G.; Tu, L. Adaptive chimp optimization algorithm with chaotic map for global numerical optimization problems. J. Supercomput. 2022, 79, 6507–6537. [Google Scholar] [CrossRef]
  32. Gao, W.f.; Liu, S.y.; Huang, L.l. Particle swarm optimization with chaotic opposition-based population initialization and stochastic search technique. Commun. Nonlinear Sci. Numer. Simul. 2012, 17, 4316–4327. [Google Scholar] [CrossRef]
  33. Yang, J.; Liu, Z.; Zhang, X.; Hu, G. Elite Chaotic Manta Ray Algorithm Integrated with Chaotic Initialization and Opposition-Based Learning. Mathematics 2022, 10, 2960. [Google Scholar] [CrossRef]
  34. Gandomi, A.H.; Yun, G.J.; Yang, X.S.; Talatahari, S. Chaos-enhanced accelerated particle swarm optimization. Commun. Nonlinear Sci. Numer. Simul. 2013, 18, 327–340. [Google Scholar] [CrossRef]
  35. Arora, S.; Singh, S. An improved butterfly optimization algorithm with chaos. J. Intell. Fuzzy Syst. 2017, 32, 1079–1088. [Google Scholar] [CrossRef]
  36. Kohli, M.; Arora, S. Chaotic grey wolf optimization algorithm for constrained optimization problems. J. Comput. Des. Eng. 2018, 5, 458–472. [Google Scholar] [CrossRef]
  37. Jia, D.; Zheng, G.; Khan, M.K. An effective memetic differential evolution algorithm based on chaotic local search. Inf. Sci. 2011, 181, 3175–3187. [Google Scholar] [CrossRef]
  38. Pan, J.S.; Yang, Q.; Shieh, C.S.; Chu, S.C. Tumbleweed Optimization Algorithm and Its Application in Vehicle Path Planning in Smart City. J. Internet Technol. 2022, 23, 927–945. [Google Scholar]
  39. Sayed, G.I.; Darwish, A.; Hassanien, A.E. A new chaotic multi-verse optimization algorithm for solving engineering optimization problems. J. Exp. Theor. Artif. Intell. 2018, 30, 293–317. [Google Scholar] [CrossRef]
  40. Du, T.S.; Ke, X.T.; Liao, J.G.; Shen, Y.J. DSLC-FOA: Improved fruit fly optimization algorithm for application to structural engineering design optimization problems. Appl. Math. Model. 2018, 55, 314–339. [Google Scholar] [CrossRef]
  41. Tharwat, A.; Elhoseny, M.; Hassanien, A.E.; Gabel, T.; Kumar, A. Intelligent Bézier curve-based path planning model using Chaotic Particle Swarm Optimization algorithm. Clust. Comput. 2019, 22, 4745–4766. [Google Scholar] [CrossRef]
  42. Kaveh, A.; Mahdipour Moghanni, R.; Javadi, S. Optimum design of large steel skeletal structures using chaotic firefly optimization algorithm based on the Gaussian map. Struct. Multidiscip. Optim. 2019, 60, 879–894. [Google Scholar] [CrossRef]
  43. Demidova, L.A.; Gorchakov, A.V. A study of chaotic maps producing symmetric distributions in the fish school search optimization algorithm with exponential step decay. Symmetry 2020, 12, 784. [Google Scholar] [CrossRef]
  44. Li, Y.; Han, M.; Guo, Q. Modified whale optimization algorithm based on tent chaotic mapping and its application in structural optimization. KSCE J. Civ. Eng. 2020, 24, 3703–3713. [Google Scholar] [CrossRef]
  45. Agrawal, P.; Ganesh, T.; Mohamed, A.W. Chaotic gaining sharing knowledge-based optimization algorithm: An improved metaheuristic algorithm for feature selection. Soft Comput. 2021, 25, 9505–9528. [Google Scholar] [CrossRef]
  46. Li, X.D.; Wang, J.S.; Hao, W.K.; Zhang, M.; Wang, M. Chaotic arithmetic optimization algorithm. Appl. Intell. 2022, 52, 16718–16757. [Google Scholar] [CrossRef]
  47. Onay, F.K.; Aydemir, S.B. Chaotic hunger games search optimization algorithm for global optimization and engineering problems. Math. Comput. Simul. 2022, 192, 514–536. [Google Scholar] [CrossRef]
  48. Yang, P.; Wang, T.; Yang, H.; Meng, C.; Zhang, H.; Cheng, L. The Performance of Electronic Current Transformer Fault Diagnosis Model: Using an Improved Whale Optimization Algorithm and RBF Neural Network. Electronics 2023, 12, 1066. [Google Scholar] [CrossRef]
  49. Chaoxi, L.; Lifang, H.; Songwei, H.; Bin, H.; Changzhou, Y.; Lingpan, D. An improved bald eagle algorithm based on Tent map and Levy flight for color satellite image segmentation. Signal Image Video Process. 2023, 1–9. [Google Scholar] [CrossRef]
  50. Naik, A. Chaotic Social Group Optimization for Structural Engineering Design Problems. J. Bionic Eng. 2023, 1–26. [Google Scholar] [CrossRef]
  51. Phatak, S.C.; Rao, S.S. Logistic map: A possible random-number generator. Phys. Rev. E 1995, 51, 3670–3678. [Google Scholar] [CrossRef]
  52. Devaney, R.L. A piecewise linear model for the zones of instability of an area-preserving map. Phys. D Nonlinear Phenom. 1984, 10, 387–393. [Google Scholar] [CrossRef]
  53. Ibrahim, R.A.; Oliva, D.; Ewees, A.A.; Lu, S. Feature selection based on improved runner-root algorithm using chaotic singer map and opposition-based learning. In Proceedings of the International Conference on Neural Information Processing, Long Beach, CA, USA, 4–9 December 2017; pp. 156–166. [Google Scholar]
  54. Hua, Z.; Zhou, Y. Image encryption using 2D Logistic-adjusted-Sine map. Inf. Sci. 2016, 339, 237–253. [Google Scholar] [CrossRef]
  55. Driebe, D.J. The bernoulli map. In Fully Chaotic Maps and Broken Time Symmetry; Springer: Berlin/Heidelberg, Germany, 1999; pp. 19–43. [Google Scholar]
  56. Jensen, M.H.; Bak, P.; Bohr, T. Complete devil’s staircase, fractal dimension, and universality of mode-locking structure in the circle map. Phys. Rev. Lett. 1983, 50, 1637. [Google Scholar] [CrossRef]
  57. Rogers, T.D.; Whitley, D.C. Chaos in the cubic mapping. Math. Model. 1983, 4, 9–25. [Google Scholar] [CrossRef]
  58. Sayed, G.I.; Darwish, A.; Hassanien, A.E. A new chaotic whale optimization algorithm for features selection. J. Classif. 2018, 35, 300–344. [Google Scholar] [CrossRef]
  59. Liu, W.; Sun, K.; He, Y.; Yu, M. Color image encryption using three-dimensional sine ICMIC modulation map and DNA sequence operations. Int. J. Bifurc. Chaos 2017, 27, 1750171. [Google Scholar] [CrossRef]
  60. Li, X.; Engelbrecht, A.; Epitropakis, M.G. Benchmark Functions for CEC’2013 Special Session and Competition on Niching Methods for Multimodal Function Optimization; RMIT University, Evolutionary Computation and Machine Learning Group: Melbourne, Australia, 2013. [Google Scholar]
Figure 1. Flow chart of the CTOA procedure.
Figure 2. The number of wins for different algorithms in 30D.
Figure 3. The number of wins for different algorithms in 50D.
Figure 4. The number of wins for different algorithms in 100D.
Figure 5. Algorithm convergence curves for benchmark functions F1–F4 run in 30D.
Figure 6. Algorithm convergence curves for benchmark functions F5–F8 run in 30D.
Figure 7. Algorithm convergence curves for benchmark functions F9–F12 run in 30D.
Figure 8. Algorithm convergence curves for benchmark functions F13–F16 run in 30D.
Figure 9. Algorithm convergence curves for benchmark functions F17–F20 run in 30D.
Figure 10. Algorithm convergence curves for benchmark functions F21–F24 run in 30D.
Figure 11. Algorithm convergence curves for benchmark functions F25–F28 run in 30D.
Figure 12. Algorithm convergence curves for benchmark functions F1–F4 run in 50D.
Figure 13. Algorithm convergence curves for benchmark functions F5–F8 run in 50D.
Figure 14. Algorithm convergence curves for benchmark functions F9–F12 run in 50D.
Figure 15. Algorithm convergence curves for benchmark functions F13–F16 run in 50D.
Figure 16. Algorithm convergence curves for benchmark functions F17–F20 run in 50D.
Figure 17. Algorithm convergence curves for benchmark functions F21–F24 run in 50D.
Figure 18. Algorithm convergence curves for benchmark functions F25–F28 run in 50D.
Figure 19. Algorithm convergence curves for benchmark functions F1–F4 run in 100D.
Figure 20. Algorithm convergence curves for benchmark functions F5–F8 run in 100D.
Figure 21. Algorithm convergence curves for benchmark functions F9–F12 run in 100D.
Figure 22. Algorithm convergence curves for benchmark functions F13–F16 run in 100D.
Figure 23. Algorithm convergence curves for benchmark functions F17–F20 run in 100D.
Figure 24. Algorithm convergence curves for benchmark functions F21–F24 run in 100D.
Figure 25. Algorithm convergence curves for benchmark functions F25–F28 run in 100D.
Figure 26. Algorithm convergence curves for benchmark functions.
Figure 27. Algorithm convergence curves for benchmark functions.
Figure 28. Algorithm convergence curves for benchmark functions.
Figure 29. The framework of the power prediction model.
Table 1. Literature summary of optimization algorithms improved by a chaotic map.

Literature | Chaotic Map + Metaheuristics | Optimization Problem | Conclusions
[39] | sinusoidal map + MVO | engineering and mechanical design | CMVO better than MVO
[40] | logistic map + FOA | structural engineering design optimization | CFOA better than FOA
[41] | sine map + PSO | Bezier curve-based path planning | higher accuracy than PSO
[42] | Gauss map + GFA | optimum design of large steel skeletal structures | CGFA can improve convergence speed
[43] | tent map + FSS | symmetric distributions | faster convergence speed and better precision in high-dimensional optimization
[44] | tent map + WOA | application in structural optimization | CWOA better than WOA
[45] | Chebyshev map + GSK | feature selection | improved the original algorithm's performance accuracy and convergence speed
[46] | piecewise map + AOA | engineering design issues | enhanced the convergence accuracy
[47] | sine map + HGS | global optimization and engineering problems | better results than HGS
[48] | tent map + WOA | optimized the parameters and network size of RBFNN | faster convergence speed and powerful exploration ability
[49] | tent map + BEA | color satellite image segmentation | CBEA can expand the diversity of the population and search space
[50] | logistic map + SGO | structural engineering design problems | CSGO obtained better convergence quality and accuracy
Table 2. Relevant notations.

Symbol | Meaning
pop | Tumbleweed population
G | Subpopulation
K | Maximum number of iterations
fit | Individual fitness
pbest | Local best individual
gbest | Global best individual
ub | Search space upper bound
lb | Search space lower bound
Table 3. Names of parameters and their default values.

Symbol | Preset Value
pop_size | 100
run_nums | 50
max_iter | 300
dim | 30, 50, 100
Table 4. The meanings of algorithm symbols.

Symbol | Meaning
CTOA1 | TOA + logistic map [51]
CTOA2 | TOA + piecewise map [52]
CTOA3 | TOA + singer map [53]
CTOA4 | TOA + sine map [54]
CTOA5 | TOA + Gauss map [42]
CTOA6 | TOA + tent map [43]
CTOA7 | TOA + Bernoulli map [55]
CTOA8 | TOA + Chebyshev map [55]
CTOA9 | TOA + circle map [56]
CTOA10 | TOA + cubic map [57]
CTOA11 | TOA + sinusoidal map [58]
CTOA12 | TOA + ICMIC map [59]
Table 5. Descriptive information about the unimodal benchmark functions.

Unimodal Functions
Symbol | Name
F1 | Sphere
F2 | Rotated High Conditioned Elliptic
F3 | Rotated Bent Cigar
F4 | Rotated Discus
F5 | Different Powers
Table 6. Descriptive information about the multimodal benchmark functions.

Multimodal Functions
Symbol | Name
F6 | Rotated Rosenbrock's Function
F7 | Rotated Schaffers F7 Function
F8 | Rotated Ackley's Function
F9 | Rotated Weierstrass Function
F10 | Rotated Griewank's Function
F11 | Rastrigin's Function
F12 | Rotated Rastrigin's Function
F13 | Non-Continuous Rotated Rastrigin's Function
F14 | Schwefel's Function
F15 | Rotated Schwefel's Function
F16 | Rotated Katsuura Function
F17 | Lunacek Bi_Rastrigin Function
F18 | Rotated Lunacek Bi_Rastrigin Function
F19 | Expanded Griewank's plus Rosenbrock's Function
F20 | Expanded Scaffer's F6 Function
Table 7. Descriptive information about the composition benchmark functions.

Composition Functions
Symbol | Name
F21 | Composition Function 1 (n = 5, Rotated)
F22 | Composition Function 2 (n = 3, Unrotated)
F23 | Composition Function 3 (n = 3, Rotated)
F24 | Composition Function 4 (n = 3, Rotated)
F25 | Composition Function 5 (n = 3, Rotated)
F26 | Composition Function 6 (n = 5, Rotated)
F27 | Composition Function 7 (n = 5, Rotated)
F28 | Composition Function 8 (n = 5, Rotated)
Table 8. Experimental results running on F1–F8 in 30D.
F1BestMeanStdF2BestMeanStd
TOA1.4305927554.6417312.480966TOA11,473,867.5434,462,604.6614,601,957
CTOA11.3565639754.8211752.266016CTOA113,145,120.4240,300,671.6814,502,168
CTOA22.1721049235.0537722.030593CTOA215,487,711.8237,557,028.9614,565,769
CTOA32.1721049234.6575462.175062CTOA315,487,711.8238,253,389.6116,509,389
CTOA41.5469609094.4286862.417921CTOA46,523,850.19535,940,710.6917,492,089
CTOA51.8373063164.9768432.274063CTOA59,265,487.40631,802,326.5413,703,389
CTOA61.2240582884.4857011.835378CTOA616,328,752.434,694,526.9213,762,606
CTOA71.1237540484.8058622.795162CTOA76,257,637.87236,435,707.0916,837,924
CTOA81.7257398285.2409682.542584CTOA812,313,22738,175,833.9815,679,630
CTOA90.8880301143.9737341.635657CTOA911,532,968.231,333,746.0212,639,456
CTOA101.8613304644.2743231.990519CTOA1014,008,982.4734,391,752.8715,362,061
CTOA111.6531794274.3587691.778436CTOA1110,443,066.738,167,456.8317,836,779
CTOA121.1733761144.5103182.069676CTOA125,277,183.48534,464,024.7111,813,327
F3BestMeanStdF4BestMeanStd
TOA 1 . 45 × 10 8 7.42 × 10 9 8.12 × 10 9 TOA35,475.6020773,183.9746318,858.371
CTOA1 1.19 × 10 9 9.64 × 10 9 8.31 × 10 9 CTOA138,800.8173777,107.9558419,576.841
CTOA2 8.47 × 10 8 8.50 × 10 9 9.10 × 10 9 CTOA221,634.2932670,535.3548822,998.472
CTOA3 8.47 × 10 8 1.08 × 10 10 1.00 × 10 10 CTOA321,634.2932671,190.5864318,545.374
CTOA4 3.18 × 10 8 8.24 × 10 9 8.33 × 10 9 CTOA433,931.4138477,808.5739418,384.651
CTOA5 2.52 × 10 9 8.78 × 10 9 7.68 × 10 9 CTOA537,761.5795170,942.0062619,069.629
CTOA6 1.78 × 10 8 8.97 × 10 9 9.86 × 10 9 CTOA628,886.662973,846.6941523,766.912
CTOA7 2.36 × 10 8 7 . 19 × 10 9 6.68 × 10 9 CTOA739,749.6418174,221.047119,434.414
CTOA8 7.81 × 10 8 7.47 × 10 9 5 . 77 × 10 9 CTOA829,124.2781672,806.7052220,911.395
CTOA9 7.06 × 10 8 7.28 × 10 9 6.09 × 10 9 CTOA927,021.2527256,943.7488816,380.353
CTOA10 6.57 × 10 8 1.04 × 10 10 8.99 × 10 9 CTOA1038,838.6918575,038.9913921,096.963
CTOA11 6.03 × 10 8 9.00 × 10 9 6.77 × 10 9 CTOA1131,035.21771,643.1225519,231.639
CTOA12 9.26 × 10 8 8.77 × 10 9 9.02 × 10 9 CTOA1233,858.7363672,004.9911220,686.456
F5BestMeanStdF6BestMeanStd
CTOA6.64129543.8690733.70866CTOA23.2600875.8102930.20365
CTOA17.24069950.9815843.35798CTOA129.3778882.7900930.15121
CTOA27.44861445.811133.21403CTOA223.1345486.0147937.35468
CTOA37.44861447.107639.57283CTOA323.1345484.5673827.91947
CTOA47.22953754.5436836.60177CTOA421.1038975.5561830.1907
CTOA58.67100255.6644142.5454CTOA524.5350285.3241235.90527
CTOA68.31379653.2555638.6086CTOA626.5966782.4931230.82995
CTOA75.53305742.1986532.58633CTOA722.5554980.0285132.46554
CTOA84.24954653.0573740.30658CTOA830.6164383.2124232.14927
CTOA97.3445742.4100435.593CTOA920.8470684.7951833.58295
CTOA104.8727249.0289837.55706CTOA1017.7854273.8102927.75425
CTOA116.82097751.1795235.98522CTOA1121.0371578.7805728.70238
CTOA127.15498947.7841138.08524CTOA1223.1729476.6252328.75425
F7BestMeanStdF8BestMeanStd
TOA69.42176172.095249.69714TOA20.8760121.058510.057536
CTOA174.3429164.01640.48377CTOA120.9839221.080690.039275
CTOA252.15573169.400750.64785CTOA220.9778221.080170.047848
CTOA352.15573169.028359.16052CTOA320.9778221.056090.056827
CTOA457.95494164.956948.58436CTOA420.8448821.055120.066816
CTOA565.19426146.180145.96157CTOA520.9291621.06420.053411
CTOA662.15799162.457653.75279CTOA620.8853121.066490.056811
CTOA769.76826157.511946.9178CTOA720.9368621.065870.048774
CTOA864.50866172.454757.10363CTOA820.9379121.059560.057127
CTOA962.14251143.903350.76479CTOA920.7870221.064980.061698
CTOA1065.2948167.285558.53906CTOA1020.8460621.053440.063907
CTOA1174.50113159.845456.96794CTOA1120.8500321.048710.058645
CTOA1249.67303150.414255.90276CTOA1220.9251921.061030.059276
Table 9. Experimental results running on F9–F16 in 30D.
F9BestMeanStdF10BestMeanStd
TOA19.4763230.369873.079732TOA26.2293182.3521226.96656
CTOA123.685930.084222.410925CTOA136.8568687.3238229.07486
CTOA218.9624830.190113.150221CTOA236.1949284.5754327.17525
CTOA318.9624830.131383.067458CTOA336.1949287.6773430.95377
CTOA422.8405630.66822.287681CTOA435.712787.1900132.80739
CTOA520.8117429.670862.776753CTOA527.4683881.254930.55649
CTOA621.7610128.900342.665711CTOA638.8763186.5883629.73541
CTOA725.2430930.580172.705951CTOA733.2002779.2757528.61971
CTOA818.2893330.584952.794156CTOA826.1443283.67834.17651
CTOA918.9431729.245222.631635CTOA923.3886773.8753325.42127
CTOA1023.1885529.952082.470231CTOA1021.0444381.6111629.87273
CTOA1119.0522329.732773.148697CTOA1117.8659892.6548739.57157
CTOA1220.6946829.599423.377856CTOA1226.1017884.0947323.51075
F11BestMeanStdF12BestMeanStd
TOA83.99478184.95168.01735TOA91.25174211.499555.09605
CTOA152.58211189.848985.23317CTOA188.95786198.649376.84806
CTOA278.39617184.212266.12921CTOA2124.0647219.884253.86497
CTOA378.39617160.908580.6409CTOA3124.0647221.257985.30896
CTOA468.34443172.444482.54131CTOA494.24765218.44577.4122
CTOA593.28622188.848770.54447CTOA582.40928222.460952.09472
CTOA661.4155181.645271.13292CTOA6122.7949202.909840.47806
CTOA797.55595191.759166.90959CTOA798.03852234.554884.43587
CTOA860.38598184.255883.72744CTOA869.61156215.198275.49402
CTOA984.5647149.254643.80927CTOA996.40827199.08650.27889
CTOA1071.89555181.625102.8933CTOA1064.21004213.770769.99723
CTOA1159.81794183.345391.25827CTOA1185.6472217.886771.39717
CTOA1253.47842164.309266.89424CTOA1261.59151204.154357.38563
F13BestMeanStdF14BestMeanStd
TOA158.7791230.748434.13018TOA370.1354981.4271395.315
CTOA175.41791237.946449.72082CTOA12590.5285234.0831,164.151
CTOA2152.7115236.577436.13152CTOA22229.5135326.3351309.164
CTOA3152.7115223.259731.3048CTOA32229.5134330.8621383.245
CTOA4127.9629227.125433.32703CTOA42113.4164909.0461347.483
CTOA5153.3039245.469838.51728CTOA52181.0835726.3571279.679
CTOA6131.3499239.423445.30911CTOA61049.5555145.8481378.694
CTOA7141.2002238.68540.63646CTOA72658.655166.6211272.206
CTOA8159.901238.10341.95559CTOA82192.5655007.6711432.373
CTOA9116.513230.570237.28941CTOA92256.7235596.8391221.914
CTOA10147.9437228.762339.40842CTOA101657.2454579.7211460.737
CTOA11140.8689230.205237.49626CTOA112925.4435502.2321255.026
CTOA12143.2574229.602135.7941CTOA122371.3845041.321423.715
F15BestMeanStdF16BestMeanStd
TOA5,641.2666,704.208423.7036TOA2.3646313.1972330.348217
CTOA15,492.4246,700.285536.451CTOA11.8045533.1798330.421851
CTOA25844.4246783.61352.2824CTOA22.4318843.2307020.35488
CTOA35844.4246756.636379.1114CTOA32.4318843.2437840.381423
CTOA45853.1986726.53402.9906CTOA42.0433033.2051990.449456
CTOA55675.8966713.259490.0271CTOA52.4038753.1818410.379426
CTOA65866.3646686.728340.3926CTOA62.1408883.2568580.364961
CTOA75,651.6586727.874415.1401CTOA72.0857663.2242590.427801
CTOA85047.7756879.885503.5179CTOA81.6894643.2259490.389588
CTOA95809.9456596.72407.5732CTOA92.1765993.2084460.396451
CTOA105411.9166665.79456.1787CTOA102.4582893.3040630.319186
CTOA113720.6676728.399612.8474CTOA112.5358563.2256950.371798
CTOA125348.3476662.602518.1656CTOA122.286813.2876510.369558
Table 10. Experimental results running on F17–F24 in 30D.
F17BestMeanStdF18BestMeanStd
TOA191.1994255.499725.7751TOA235.7625276.461321.39819
CTOA1187.7217253.724633.70481CTOA1221.4716270.580521.67858
CTOA2169.3405254.958639.27158CTOA2233.0861278.842821.98578
CTOA3169.3405244.770934.22477CTOA3233.0861270.284924.28905
CTOA4186.7281256.290423.41873CTOA4206.8143274.877329.38987
CTOA5192.9143259.26931.58694CTOA5203.0203278.034422.9301
CTOA6166.0398256.956527.88832CTOA6190.8338272.602428.12443
CTOA7132.5832251.897737.64356CTOA7210.1343270.052523.33517
CTOA8146.7767250.293631.07202CTOA8225.6414273.921226.46477
CTOA9206.8276258.643423.38194CTOA9232.0053278.004219.69428
CTOA10173.7475244.134527.17231CTOA10219.1796276.012922.75225
CTOA11182.298244.340924.40894CTOA11198.8819271.627527.12828
CTOA12169.2201248.699639.51309CTOA12191.3392268.946123.80379
F19BestMeanStdF20BestMeanStd
TOA8.82352716.36982.460191TOA12.1120513.776310.857837
CTOA19.96851315.656982.717404CTOA112.0213514.1210.944627
CTOA29.50778216.177512.687117CTOA212.131213.78740.966827
CTOA39.50778215.989892.328031CTOA312.131214.701370.623059
CTOA47.68353615.548142.957507CTOA412.1036913.909931.000377
CTOA59.42374316.865743.039682CTOA512.2671513.84740.922043
CTOA69.34450516.185532.596153CTOA612.4591713.952530.9019
CTOA710.2291316.585732.306279CTOA712.2427313.853830.870465
CTOA810.3381417.4762.959561CTOA812.1956414.275760.937514
CTOA97.66329916.292752.838714CTOA912.1191713.356930.829536
CTOA109.86631616.150572.232672CTOA1012.0309614.426130.862933
CTOA119.87978216.270212.127274CTOA1112.6420613.825690.896084
CTOA1210.811716.419162.60959CTOA1212.2921214.287510.881478
F21BestMeanStdF22BestMeanStd
TOA221.5507320.729277.89905TOA2492.2545457.6961573.757
CTOA1224.3926338.665567.01389CTOA12492.6875158.9061359.081
CTOA2228.3312345.022170.27384CTOA22447.0925314.3071589.62
CTOA3228.3312390.326575.39317CTOA32447.0924512.8921469.588
CTOA4223.1676317.385459.91028CTOA42290.6995077.6631546.049
CTOA5229.7998345.699463.40178CTOA52525.5325355.7181510.444
CTOA6221.131327.84370.1972CTOA62145.7535356.2831524.727
CTOA7226.7677330.316880.79807CTOA72389.5414995.4641485.753
CTOA8239.3515372.987663.48176CTOA82807.8145130.6341500.005
CTOA9151.2828340.427685.9303CTOA91767.6615502.5641575.352
CTOA10231.9426366.284266.6869CTOA102085.1684617.9661366.792
CTOA11144.8216326.72364.16248CTOA111726.5525435.6131503.185
CTOA12227.8882355.709371.40169CTOA122573.4235136.5661429.554
F23BestMeanStdF24BestMeanStd
TOA5745.1636987.26593.466TOA258.6309280.51599.064905
CTOA14752.8497019.605578.4228CTOA1255.5681281.44129.676793
CTOA25733.226849.891509.0746CTOA2261.4231278.02239.020633
CTOA35733.227083.667634.6992CTOA3261.4231280.754510.68703
CTOA43445.9027021.244746.902CTOA4257.7895280.77878.778902
CTOA54346.8946910.548693.303CTOA5261.2685281.48159.309686
CTOA65486.6447006.287534.5366CTOA6254.2371280.81828.424572
CTOA74405.1196904.682596.9329CTOA7262.5834282.379810.07625
CTOA85926.27073.831425.523CTOA8259.4419281.17438.740838
CTOA95569.9266909.94562.6803CTOA9258.8804278.18839.733462
CTOA105306.7147035.998605.8532CTOA10262.2337280.25278.71317
CTOA114864.5356960.24550.5438CTOA11253.8493281.73629.510098
CTOA125198.5077067.133589.5727CTOA12260.3661281.69528.436596
Table 11. Experimental results running on F25–F28 in 30D.
F25BestMeanStdF26BestMeanStd
TOA273.0118287.27447.030919TOA200.6034354.032659.18926
CTOA1268.2402289.48219.246821CTOA1201.2018363.479439.86875
CTOA2263.9079289.207910.62954CTOA2200.9928348.894861.49499
CTOA3263.9079290.16138.282695CTOA3200.9928361.356445.71191
CTOA4271.37290.95497.802634CTOA4200.7299336.114271.6734
CTOA5268.3196288.3099.57809CTOA5200.8854323.86178.67613
CTOA6272.7219289.64558.917811CTOA6201.0962342.584967.12981
CTOA7270.4478287.11938.792928CTOA7200.9794324.443779.27322
CTOA8261.0774286.4629.807126CTOA8200.9593345.906864.36845
CTOA9274.7382288.2577.665817CTOA9200.6327300.247684.35678
CTOA10261.1317290.51199.435829CTOA10201.4667351.656658.12046
CTOA11271.1873288.93378.883081CTOA11201.7781356.002854.79191
CTOA12271.8485288.517810.21368CTOA12200.4804347.431865.64149
F27BestMeanStdF28BestMeanStd
TOA871.07111067.81874.45373TOA365.7219540.8311331.938
CTOA1889.23581074.29994.31119CTOA1382.1719471.1867180.0546
CTOA2817.4161060.7796.08947CTOA2376.2239487.178229.1578
CTOA3817.4161050.32591.33763CTOA3376.2239798.954534.3881
CTOA4838.49561062.82987.493CTOA4199.8576539.2765344.8095
CTOA5848.52461060.98597.76764CTOA5365.4524465.2611176.5598
CTOA6913.99021089.69474.84623CTOA6355.3892467.7049228.928
CTOA7925.18721065.87966.1941CTOA7362.0028474.912186.4551
CTOA8787.35331051.82795.20461CTOA8364.9942716.584493.1192
CTOA9802.6381035.46894.99268CTOA9173.6509569.9431488.5328
CTOA10834.28851059.7471.8523CTOA10372.0484613.7077416.3061
CTOA11805.31481068.097106.2754CTOA11358.9237497.9177274.4063
CTOA12867.21021055.54379.59421CTOA12368.3432595.7947412.9015
Table 12. Experimental results running on F1–F8 in 50D.
F1BestMeanStdF2BestMeanStd
TOA47.9333695.1147928.91364TOA41,084,347.1999,934,056.8634,099,495.64
CTOA134.904594.4552832.11812CTOA142,291,631.58118,694,884.553,959,926.18
CTOA247.4362292.6979528.15622CTOA231,492,325.0598,563,069.6235,177,164.77
CTOA347.43622103.559639.20834CTOA331,492,325.0585,470,333.0932,099,117.51
CTOA440.5744493.3372232.53762CTOA452,376,457.15105,263,004.534,615,169.58
CTOA536.3179789.7017829.7691CTOA526,539,157.3396,278,699.5833,363,342.28
CTOA635.3290379.8357825.19787CTOA636,551,438.45102,373,582.734,779,463.74
CTOA732.2009893.5781925.68838CTOA734,362,723.34108,909,903.437,210,665.71
CTOA850.2974297.9422830.97021CTOA832,613,735.7399,479,035.0138,945,154.25
CTOA927.1939182.6171925.84005CTOA934,431,771.1191,858,862.9933,124,113.91
CTOA1037.8690996.343131.09465CTOA1049,408,842.6895,765,681.8932,726,320.96
CTOA1130.169493.6294835.38649CTOA1139,317,874.31114,310,927.236,819,144.93
CTOA1236.27405102.899136.37749CTOA1230,172,910.293,505,723.7733,490,234.19
F3BestMeanStdF4BestMeanStd
TOA 1.10 × 10 10 5.63 × 10 10 3.03 × 10 10 TOA84,777.91131,966.926,642.19
CTOA1 1.12 × 10 10 6.37 × 10 10 3.93 × 10 10 CTOA174,253.14145,774.328,144.54
CTOA2 1.57 × 10 10 5.73 × 10 10 2.62 × 10 10 CTOA280,255.7713,076226,760.39
CTOA3 1.57 × 10 10 6.16 × 10 10 3.27 × 10 10 CTOA380,255.77144,831.522,728.45
CTOA4 1.33 × 10 10 5.87 × 10 10 3.51 × 10 10 CTOA484,808.8138,823.925,367.29
CTOA5 9.64 × 10 9 5.71 × 10 10 3.01 × 10 10 CTOA587,829.8137,195.730,456.33
CTOA6 1.49 × 10 10 5.44 × 10 10 3.16 × 10 10 CTOA684,565.71137,910.829,164.56
CTOA7 8.16 × 10 09 5.63 × 10 10 2.83 × 10 10 CTOA772,567.08132,362.624,210.28
CTOA8 1.23 × 10 10 5.99 × 10 10 3.42 × 10 10 CTOA881,541.39144628.528,494.84
CTOA9 1.14 × 10 10 4 . 80 × 10 10 2 . 59 × 10 10 CTOA944,350.31114,72229,349.97
CTOA10 1.60 × 10 10 6.35 × 10 10 3.68 × 10 10 CTOA1077,112.54149,794.930,334.24
CTOA11 4 . 39 × 10 9 6.08 × 10 10 2.97 × 10 10 CTOA1191,002.13145,502.529,185.3
CTOA12 1.92 × 10 10 6.19 × 10 10 2.96 × 10 10 CTOA1294,817.95141,163.325,052.11
F5BestMeanStdF6BestMeanStd
TOA99.03806250.241656.99093TOA59.3082138.589262.55081
CTOA1164.8707258.260548.80763CTOA153.21279138.126172.29992
CTOA2101.2389234.135553.17557CTOA254.99738134.875866.50206
CTOA3101.2389240.041343.43521CTOA354.99738125.709152.0445
CTOA4153.2694244.665355.86586CTOA453.897149.33581.68212
CTOA5145.2499250.285949.50397CTOA560.98863148.436562.64033
CTOA6115.056237.477650.40263CTOA660.54749176.617199.92527
CTOA7141.4791244.152756.49436CTOA760.2818145.128471.4382
CTOA8146.7977247.340746.18892CTOA854.10226121.468468.49735
CTOA993.25496204.894149.13547CTOA955.34641184.495875.70716
CTOA10153.3158246.847151.27477CTOA1058.27578134.252962.65023
CTOA11140.5322245.563250.01144CTOA1154.10269151.227683.53149
CTOA12111.2616247.516453.58613CTOA1251.66511140.390780.9856
F7BestMeanStdF8BestMeanStd
TOA145.07220.982637.64836TOA21.1386121.232590.037738
CTOA1140.9811225.78535.92001CTOA121.1659921.240260.035016
CTOA2149.2201225.481641.60381CTOA221.1351221.23290.042212
CTOA3149.2201214.870540.45722CTOA321.1351221.232740.032241
CTOA4144.899229.97747.38471CTOA421.1233321.234750.037583
CTOA5127.4234217.540741.26043CTOA521.0897121.235320.039473
CTOA6141.953222.51944.4323CTOA621.0637821.230420.047104
CTOA7146.5261227.568642.63456CTOA721.1176621.237280.042119
CTOA8134.3188224.009944.08832CTOA821.1059221.221820.045104
CTOA9153.0399197.549932.38021CTOA921.1336421.226560.041457
CTOA10152.5624219.999538.21989CTOA1021.1113821.227430.039115
CTOA11156.7244225.015437.59788CTOA1121.1108921.238030.039791
CTOA12156.3481230.709135.6285CTOA1221.0998621.226160.031283
Table 13. Experimental results running on F9–F16 in 50D.
F9BestMeanStdF10BestMeanStd
TOA51.4515759.574493.825761TOA201.8683512.4316152.7787
CTOA141.6460559.78554.621395CTOA1223.5617526.1229177.7425
CTOA245.2314659.545564.797162CTOA2222.9015512.9366159.1014
CTOA345.2314659.783844.70706CTOA3222.9015551.9805159.475
CTOA449.6156660.204043.856908CTOA4222.8496545.2075172.0383
CTOA547.3420760.134323.732362CTOA5294.1571522.3137137.4475
CTOA649.4200459.630043.83985CTOA6285.4589524.9346127.5325
CTOA749.5914160.364084.014266CTOA7264.6706507.5844172.0069
CTOA851.0675259.930454.031302CTOA8254.771531.7375155.2429
CTOA944.5664859.307754.735694CTOA9180.2026392.0369133.6758
CTOA1046.6512359.132624.279494CTOA10279.9391553.1651151.1298
CTOA1145.928559.806174.059403CTOA11155.7072498.1528185.5661
CTOA1243.0721959.44264.961187CTOA12281.5639528.0257148.9916
F11BestMeanStdF12BestMeanStd
TOA202.4885425.8454147.228TOA333.6475564.8086123.9431
CTOA1193.1398371.4938158.5099CTOA1317.1222525.0204157.263
CTOA2207.4459428.3898136.4808CTOA2279.2988543.163397.22959
CTOA3207.4459314.6289115.309CTOA3279.2988465.432395.89378
CTOA4208.9741399.9909169.4385CTOA4310.2108535.3492126.2113
CTOA5242.8699443.0413154.8247CTOA5338.1129555.84137.8974
CTOA6187.5576409.1792143.4178CTOA6287.6132539.7874119.8936
CTOA7208.0959410.4099155.6475CTOA7272.1977543.798120.2503
CTOA8177.2906327.4294131.058CTOA8257.715508.1558106.0107
CTOA9181.8226317.452763.29728CTOA9237.0802495.517198.54498
CTOA10180.6447320.9195115.4608CTOA10274.0202528.3227129.1037
CTOA11162.8652406.3214200.7118CTOA11294.8029519.6554141.327
CTOA12169.1461356.275145.6271CTOA12270.8147493.44999.98
F13BestMeanStdF14BestMeanStd
TOA436.0461567.778464.67605TOA6308.85910,709.211976.852
CTOA1381.628545.460970.37456CTOA16592.32510,332.312182.163
CTOA2428.9403554.106157.51002CTOA25669.64210,351.632351.777
CTOA3428.9403546.277673.60519CTOA35669.6429754.1552129.36
CTOA4399.2063559.409171.64018CTOA44981.22710,460.022225.211
CTOA5441.7775557.396352.42537CTOA55656.66110,577.262429.533
CTOA6429.1183553.536757.40065CTOA65775.55410,267.942257.291
CTOA7409.0477554.642472.04844CTOA75269.5110,462.152136.355
CTOA8439.4703544.015855.05047CTOA85604.9889649.7482224.881
CTOA9408.3318535.095961.93543CTOA95947.82111344.191745.503
CTOA10421.2888541.287967.65827CTOA105614.6789714.6022152.191
CTOA11442.0623544.714759.228CTOA115914.1910,617.12068.622
CTOA12389.8412555.797857.31587CTOA124792.57610,080.582270.388
F15BestMeanStdF16BestMeanStd
TOA11,830.113,501.17581.1357TOA3.3133264.2285360.365848
CTOA112,583.113,601.45505.4042CTOA13.2947374.3067040.427229
CTOA211,278.5713,293.68681.5553CTOA23.5062794.2151370.374083
CTOA311,278.5713,496.54654.1637CTOA33.5062794.2309330.45455
CTOA412,209.9313,507.25588.4897CTOA43.0744844.2550580.371441
CTOA511,556.4113,506.37615.3785CTOA53.0552814.3457940.394799
CTOA612,088.2713,549.69589.0859CTOA63.074.2688210.380532
CTOA711,54513,371.58633.3576CTOA73.3794834.2777090.374405
CTOA811,994.7813,360.58539.282CTOA82.9478594.2137670.445558
CTOA911,274.8913,299723.3836CTOA93.4604574.2297470.401068
CTOA1011,426.0213,421.82751.7903CTOA103.1344724.2509790.37598
CTOA1110,332.0513,328.76728.8364CTOA113.2439434.1602060.399226
CTOA1211,44513,369.95659.8026CTOA123.2470164.2960710.41248
Table 14. Experimental results running on F17–F24 in 50D.
F17BestMeanStdF18BestMeanStd
TOA480.3804573.056341.62288TOA497.8693612.471644.17009
CTOA1418.0995564.089255.55246CTOA1476.1289594.520446.58823
CTOA2455.7556576.915949.14111CTOA2518.5532612.653945.57987
CTOA3455.7556579.359365.88425CTOA3518.5532614.30739.44276
CTOA4455.9066565.208450.36234CTOA4496.0631613.41453.27447
CTOA5392.9427574.726660.17637CTOA5514.5905603.915341.95238
CTOA6442.965580.51662.5196CTOA6498.523602.94548.8578
CTOA7441.4105564.368556.01363CTOA7515.0777608.99749.55836
CTOA8470.8075578.250853.33804CTOA8517.8969604.333842.79912
CTOA9417.0029574.859466.85133CTOA9490.2215604.095448.37033
CTOA10478.4479580.280652.12733CTOA10525.3301609.858646.59068
CTOA11505.5884589.093951.13322CTOA11490.6751611.860152.8092
CTOA12430.3107573.872555.43454CTOA12545.5884613.912441.83909
F19BestMeanStdF20BestMeanStd
TOA30.1897444.007788.1584TOA21.9670823.693330.665196
CTOA124.0318941.100347.445552CTOA122.7029123.729850.711473
CTOA228.4676244.444367.782507CTOA222.3876223.572210.725435
CTOA328.4676245.282286.061791CTOA322.3876224.708930.496085
CTOA426.5022443.009927.174469CTOA422.3391123.732720.680642
CTOA533.5846743.550865.655209CTOA522.3439523.642110.728987
CTOA630.416843.11126.12804CTOA622.3316523.591860.588184
CTOA733.4608243.957165.581733CTOA722.0176823.784820.824475
CTOA834.492745.457067.094966CTOA822.3579224.266760.777374
CTOA928.6790144.724936.887457CTOA922.1610823.337850.591785
CTOA1030.8692742.500627.678287CTOA1022.7171324.214570.70456
CTOA1130.8784642.265174.838191CTOA1122.5837424.045890.730946
CTOA1230.4777443.949897.037634CTOA1222.3825224.082830.690602
F21BestMeanStdF22BestMeanStd
TOA327.7914890.582353.0427TOA4,905.22910,591.852416.849
CTOA1345.2566971.2165346.4438CTOA16211.51410,631.962632.582
CTOA2352.0457966.1944345.9517CTOA26501.44411,174.22270.215
CTOA3352.0457830.6496385.3888CTOA36501.44410,033.792348.71
CTOA4363.94121032.962317.326CTOA46121.50110,465.862039.65
CTOA5369.2878996.6293330.4367CTOA56947.34611,076.812143.838
CTOA6350.811926.88352.874CTOA66170.211,002.72324.04
CTOA7319.4219864.8827380.6891CTOA76678.99610,567.012363.718
CTOA8348.5268868.3168397.7172CTOA86427.28910,406.052287.282
CTOA9328.2194984.5259299.4723CTOA96429.71811,047.882078.408
CTOA10322.234875.9508396.4327CTOA105588.18410,258.952431.827
CTOA11349.88161015.039330.551CTOA116830.0410,983.992195.861
CTOA12342.6955848.8454390.3914CTOA127121.2610,498.442159.198
Table 15. Experimental results running on F25–F28 in 50D.
F25BestMeanStdF26BestMeanStd
TOA341.3034378.128114.44046TOA205.5451442.768235.61185
CTOA1351.4246377.364113.37185CTOA1415.301447.786311.94769
CTOA2334.7624379.297117.34367CTOA2213.0823443.438134.24711
CTOA3334.7624378.045615.14113CTOA3213.0823448.065413.32695
CTOA4343.9612374.464315.07159CTOA4414.787447.585413.33255
CTOA5343.5602376.853114.64781CTOA5207.4477444.278334.84123
CTOA6340.2544369.499317.96263CTOA6208.472438.116246.3816
CTOA7340.5446373.372615.10695CTOA7206.8534440.665347.70027
CTOA8336.583378.709615.68455CTOA8423.6976449.394211.74123
CTOA9339.1641374.329317.11287CTOA9205.9741427.338570.87132
CTOA10333.7473377.954616.54656CTOA10415.857448.611911.29372
CTOA11341.0597375.858617.06495CTOA11415.0986447.598212.38993
CTOA12328.8846375.738717.23426CTOA12206.1975441.897336.31589
F27BestMeanStdF28BestMeanStd
TOA1400.5541814.028139.5524TOA484.59421529.7891583.094
CTOA11458.4591836.968124.295CTOA1476.93862453.0751720.385
CTOA21570.2721816.282125.6411CTOA2469.33352204.5461724.87
CTOA31570.2721812.687145.8124CTOA3469.33352030.1291737.381
CTOA41557.9651832.559138.9335CTOA4468.3391944.4151698.713
CTOA51387.4061,803.153179.4981CTOA5460.05531831.4241694.643
CTOA61390.5581823.212133.4097CTOA6458.69061876.841688.8
CTOA715021785.089151.5179CTOA7459.70522337.9171749.708
CTOA81379.8941815.98153.4012CTOA8464.1232367.781710.853
CTOA91529.5121751.109119.9348CTOA9468.1461211.8021404.408
CTOA101397.5951796.916136.5813CTOA10473.03292249.0551719.032
CTOA111489.4431810.935131.8909CTOA11476.95072878.621600.359
CTOA121509.6161802.984130.2309CTOA12469.78522023.2721728.143
Table 16. Experimental results running on F1–F8 in 100D.
F1BestMeanStdF2BestMeanStd
TOA2139.3996283252.464274740.92TOA172,602,289.9447,533,512.791,685,341.24
CTOA12398.0731663474.836814699.91CTOA1231,713,011447,264,632.7102,561,566.8
CTOA22100.34143360.326236801.93CTOA2261,140,623.7414,096,007104,531,429.2
CTOA32100.34143508.182549656.54CTOA3261,140,623.7419,300,467.4101,483,596.7
CTOA42069.1302453299.70168721.25CTOA4240,956,371.8424,309,651.384,232,520.5
CTOA52141.3466323331.353578590.78CTOA5239,045,761.9435,705,48294,182,876.9
CTOA62027.6059693269.456482620.89CTOA6237,292,733.6420,704,062.599,990,783.01
CTOA72405.2295253320.014774624.6CTOA7260,680,854.64,62,800,15.8104,976,598.2
CTOA82069.083083498.620606642.46CTOA8251,538,007.7438,910,629112,203,068.9
CTOA92118.1717263134.233206623.59CTOA9208,046,895.741,886,4883.6126,574,639
CTOA102333.0592983533.930869595.27CTOA10228,647,833412,418,20895,269,405.78
CTOA112222.3210663584.212646671.87CTOA11194,518,436.8449,389,132.3102,025,310.2
CTOA121880.0822153422.023288715.04CTOA12254,967,863.6431,936,549.3101,136,645.7
F3BestMeanStdF4BestMeanStd
TOA 1.59705 × 10 11 2.97826 × 10 11 8 . 7106 × 10 10 TOA242,942344,677.450,758.4
CTOA1 1.35792 × 10 11 3.54216 × 10 11 2.0326 × 10 11 CTOA1240,010.5349,754.549,483.5
CTOA2 1.70001 × 10 11 3.36049 × 10 11 1.6165 × 10 11 CTOA2262,230342,400.740,655.6
CTOA3 1.70001 × 10 11 5.16057 × 10 11 8.1197 × 10 11 CTOA3262,230358,713.741,154.6
CTOA4 1.72765 × 10 11 4.10067 × 10 11 4.5462 × 10 11 CTOA4271,398.6353,731.141,958.2
CTOA5 1.36576 × 10 11 3.2552 × 10 11 1.4502 × 10 11 CTOA5229,148.7338,988.747,971.6
CTOA6 1.47682 × 10 11 3.09028 × 10 11 1.4294 × 10 11 CTOA6206,415.8337,985.643,048.8
CTOA7 1.63911 × 10 11 3.00173 × 10 11 1.3173 × 10 11 CTOA7248,414.5344,284.446,233.6
CTOA8 1.56606 × 10 11 3.84847 × 10 11 3.3427 × 10 11 CTOA8287,827.3357,776.741,517.3
CTOA9 1 . 10743 × 10 11 3 . 16973 × 10 11 2.5012 × 10 11 CTOA9232,195.2313,641.644,483.8
CTOA10 1.42184 × 10 11 3.24281 × 10 11 1.3674 × 10 11 CTOA10264,029.4351,039.737,753.6
CTOA11 1.61727 × 10 11 4.55778 × 10 11 6.564 × 10 11 CTOA11257,374.1356,218.342,668.2
CTOA12 1.1814 × 10 11 3.14433 × 10 11 1.903 × 10 11 CTOA12235,024.8342,845.247,596.1
F5BestMeanStdF6BestMeanStd
TOA887.77141681.186412.4712TOA633.33948.34156.23
CTOA11048.1661696.234393.8469CTOA1661.45943.98197.45
CTOA2882.10981703.354349.1738CTOA2528.79940.09174.96
CTOA3882.10981725.302402.239CTOA3528.79917.19152.48
CTOA4844.88051676.628442.7563CTOA4634.26933.76179.8
CTOA5978.82091654.513376.7156CTOA5639.19965.19190.5
CTOA6886.79131644.643361.2454CTOA6598.06957.4202.32
CTOA7820.22821605.73402.9493CTOA7598.28961.24198.54
CTOA8816.07961712.45428.1783CTOA8654.96889.16130.46
CTOA9722.44681405.416382.5769CTOA9771.91089.1174
CTOA10917.63141714.638408.1695CTOA10621.29886.87152.72
CTOA11983.92591681.585399.0413CTOA11559.63921.75194.42
CTOA12971.73071768.384394.3836CTOA12600.3898.01134.14
F7BestMeanStdF8BestMeanStd
TOA239.61583.821,090.5TOA21.2684721.37630.03656054
CTOA1242.64499.34415.68CTOA121.2636221.372660.03567635
CTOA2212.37469.75595.15CTOA221.2466721.36930.03756239
CTOA3212.37623.51,127.1CTOA321.2466721.369010.03259256
CTOA4198.39483.64578.59CTOA421.2803521.369320.03458522
CTOA5217.9479.59370.19CTOA521.2498321.37070.03606908
CTOA6198.89449.21396.67CTOA621.2456221.373860.03700353
CTOA7199.74605.861,103.5CTOA721.2756321.361010.03966604
CTOA8232.86422.03335.35CTOA821.2390821.36030.04661032
CTOA9193.5685.75979.86CTOA921.2269721.368590.04142569
CTOA10219.16450.11316.9CTOA1021.2914821.371120.03301894
CTOA11219.01423.36235.7CTOA1121.2546621.369410.03838689
CTOA12213.15440.99429.49CTOA1221.2714521.36340.03841287
Table 17. Experimental results running on F9–F16 in 100D.
F9BestMeanStdF10BestMeanStd
TOA123.64141.76.8574TOA1311.8572810.311484.0023
CTOA1119.7143.855.3254CTOA11650.9912788.406423.312
CTOA2118.85142.296.3419CTOA21730.7562785.156503.7041
CTOA3118.85140.619.8626CTOA31730.7563018.903491.3541
CTOA4111.49144.36.5471CTOA41724.172854.501470.5136
CTOA5127.04143.385.7127CTOA51890.5522915.824504.7304
CTOA6118.13143.46.9413CTOA62064.7682904.924470.6114
CTOA7121.7143.946.1164CTOA71583.9842738.125468.6064
CTOA8118.72143.826.8683CTOA81882.2272967.859515.704
CTOA9124.19143.285.635CTOA91502.7492381.599437.7085
CTOA10136.38145.484.3317CTOA102038.6672851.567441.4562
CTOA11120.92142.17.6489CTOA111975.6442810.702537.6147
CTOA12117.58143.727.0724CTOA122000.1562952.191426.3003
F11BestMeanStdF12BestMeanStd
TOA782.0931,344.245347.54TOA1027.1721445.814215.9327
CTOA1823.21281,072.799166.715CTOA11056.3221384.86145.0878
CTOA2947.29981,370.846303.3031CTOA21056.9291442.546271.6762
CTOA3947.29981,072.55122.932CTOA31056.9291384.999145.0276
CTOA4764.36161,122.971232.9715CTOA41108.911413.493187.1444
CTOA5918.80091,320.505291.6236CTOA51093.8931482.733223.3889
CTOA6806.33961,235.863283.4218CTOA6990.17971530.761281.9684
CTOA7836.02261,213.874212.3033CTOA7998.69961445.851165.3558
CTOA8804.51071,047.444131.7387CTOA81015.0171,361.283193.8797
CTOA91,009.1431,316.869173.9817CTOA91,135.8931455.077172.8708
CTOA10801.51281,078.188128.2543CTOA10949.35871377.226177.4148
CTOA11750.75871,152.561226.1175CTOA11937.63931326.569177.2872
CTOA12905.17541,116.131130.5852CTOA121,023.1171435.691162.1762
F13BestMeanStdF14BestMeanStd
TOA1,201.8841,551.413178.851TOA16,381.0127,933.63,746.064
CTOA11269.8381,501.91105.0421CTOA119,002.4327,672.333382.778
CTOA21242.541529.476141.1283CTOA219,450.8727,995.122962.188
CTOA31242.541496.165132.7099CTOA319,450.8726,247.033621.942
CTOA41176.1551489.229136.2595CTOA419,446.9627,408.83225.933
CTOA51372.7281542.419105.7201CTOA519,164.5928,846.32949.902
CTOA61302.9281547.336122.5809CTOA620,093.0728,365.672982.52
CTOA71263.3131567.934162.486CTOA717,444.327,800.983620.782
CTOA81237.1661492.269136.3437CTOA817,636.9925,612.594098.689
CTOA91285.3521512.922126.2399CTOA922,316.5429,138.442218.411
CTOA101265.4281522.556116.2767CTOA1020,580.7428,074.752680.936
CTOA111215.81505.6136.66CTOA1119,816.9628,290.683238.623
CTOA1212911517.1139.09CTOA1217,287.127,601.033690.96
F15BestMeanStdF16BestMeanStd
TOA28,154.8329,950.66872.0593TOA3.7381454.8554290.327762
CTOA128,226.2930,093.41869.0388CTOA14.3237854.8765390.28861
CTOA226,822.4329,932.09937.1318CTOA23.7765734.7615120.330903
CTOA326,822.4329,966.01819.4143CTOA33.7765734.8507930.350139
CTOA427,966.9530,001.04799.4298CTOA43.9354644.9439710.2954
CTOA526,870.6730,128.57981.7578CTOA54.1295934.8611070.336834
CTOA628,107.1530,101.49894.0004CTOA63.6277794.8121670.366722
CTOA724,068.0429,895.641,067.113CTOA73.9686884.8663290.311919
CTOA826,720.7530,157.56879.5838CTOA83.7259014.884540.327247
CTOA927,247.8529,808.67857.5536CTOA94.187234.8868540.271421
CTOA1027,394.8230,009.37967.8268CTOA103.8065034.8141550.342405
CTOA1127,973.9130,121.8823.9142CTOA113.9459594.8182250.342649
CTOA1228,188.5330,261.43781.8707CTOA123.7007454.8322350.281363
Table 18. Experimental results running on F17–F24 in 100D.
F17BestMeanStdF18BestMeanStd
TOA1483.8061855.596178.1559TOA1598.4521810.092117.2306
CTOA11611.0671851.803140.7411CTOA11538.271853.757153.6239
CTOA21578.2841858.823163.5562CTOA21496.2781854.898158.7919
CTOA31578.2841914.236175.5354CTOA31496.2781865.568143.2234
CTOA41534.3631880.708142.6621CTOA41532.8311847.873137.3685
CTOA51450.3241905.645191.0423CTOA51571.5551856.824137.2144
CTOA61466.7231829.358152.6456CTOA61609.0361846.872137.6364
CTOA71508.8111895.095203.4111CTOA71524.3651889.037158.8678
CTOA81492.2271858.309150.965CTOA81548.0581854.012161.8966
CTOA91419.2441723.545162.061CTOA91458.7451781.356141.7308
CTOA101470.9531886.808178.805CTOA101549.0361869.344140.7951
CTOA111584.511871.112156.8525CTOA111473.2881854.892141.4038
CTOA121524.311849.215171.8648CTOA121513.0221843.106154.3383
F19BestMeanStdF20BestMeanStd
TOA295.46631,299.743940.6833TOA5050 7.37 × 10 9
CTOA1306.0248959.1571490.4215CTOA15050 7.37 × 10 9
CTOA2220.77321,210.53776.8853CTOA25050 7.37 × 10 9
CTOA3220.77511,456.5071,122.251CTOA35050 7.37 × 10 9
CTOA4263.99561,044.573790.9504CTOA45050 7.37 × 10 9
CTOA5278.25661,257.159826.3161CTOA55050 7.37 × 10 9
CTOA6353.1181,505.6361,163.484CTOA65050 7.37 × 10 9
CTOA7340.15241,282.734788.6813CTOA75050 7.37 × 10 9
CTOA8303.48241,395.077841.2369CTOA85050 7.37 × 10 9
CTOA9290.44691,517.1121,037.962CTOA95050 7.37 × 10 9
CTOA10275.58731,431.942917.4171CTOA105050 7.37 × 10 9
CTOA11287.2933774.3478437.001CTOA115050 7.37 × 10 9
CTOA12291.551,447.61053.9CTOA125050 7.37 × 10 9
F21BestMeanStdF22BestMeanStd
TOA866.142428.91016.9TOA20,193.228,305.43572.9
CTOA18292495.7944.19CTOA118,665.627,128.73883.5
CTOA2887.972447.2945.41CTOA220,142.827,866.34040.5
CTOA3887.972362.1947.87CTOA320,142.824,239.83884.5
CTOA4860.32708.4817.23CTOA419,331.526,564.52984.9
CTOA5995.52396.41025.5CTOA518,828.226,778.34172.5
CTOA61009.525921143.4CTOA619,798.728,077.53468.8
CTOA7852.062316.4736.24CTOA719,292.127,7713565.4
CTOA8898.292173.7776.43CTOA818,85526,0674050.5
CTOA9920.942032.51389.2CTOA919,674.528,475.13657.9
CTOA10799.1822551035.1CTOA1018,910.825,546.63911.2
CTOA11905.32426.5836.24CTOA1119,931.128,200.43468.5
CTOA12875.312320.31025.7CTOA1218,541.127,376.93896.5
F23BestMeanStdF24BestMeanStd
TOA29,76731,678.9967.48TOA515.63562.2623.613
CTOA129,746.431,711.7908.48CTOA1516.52563.4121.138
CTOA224,342.931,659.21,518.7CTOA2526.9567.6626.498
CTOA324,342.931,879.51,183.1CTOA3526.9569.3224.676
CTOA429,565.332,058.3921.15CTOA4523.82564.6323.689
CTOA528,95831,582.9875.01CTOA5521.75568.0325.61
CTOA629,072.831,584.3950.12CTOA6529.72570.4622.514
CTOA729,21131,761.71,002CTOA7515.08562.8123.823
CTOA829,895.231,945.31,072.6CTOA8531.98565.3719.642
CTOA928,957.131,770.41,246.4CTOA9496.74565.5325.123
CTOA1029,750.931,847.31,087.7CTOA10515.16562.2824.828
CTOA1128,734.831,592.81,189.2CTOA11520.65566.9424.266
CTOA1229,23531,907.31,038.3CTOA12527.5566.7824.079
Table 19. Experimental results running on F25–F28 in 100D.
F25BestMeanStdF26BestMeanStd
TOA570.97616.0325.28TOA605.64653.3719.974
CTOA1557.81607.7729.975CTOA1596.71649.2323.78
CTOA2544.22604.0427.977CTOA2585.99653.1124.213
CTOA3544.22622.0927.573CTOA3585.99931.14418.44
CTOA4551.65616.6131.779CTOA4598.6652.0923.355
CTOA5555.28615.8533.21CTOA5590.85658.1520.685
CTOA6539.27611.2232.751CTOA6592.76651.3522.915
CTOA7555.95616.0829.62CTOA7588.64648.2725.268
CTOA8558.65609.9729.814CTOA8229.31642.9262.589
CTOA9552.58610.4931.12CTOA9600.25653.6122.443
CTOA10557.08615.6332.068CTOA10231.12654.98112.07
CTOA11567.22610.727.697CTOA11585.69651.7423.476
CTOA12551.37609.5929.49CTOA12604.05653.9720.052
F27BestMeanStdF28BestMeanStd
TOA3392.7963919.681231.5979TOA4201.4357311.1291744.548
CTOA13259.6853917.855262.1164CTOA14193.5436951.3261508.54
CTOA23259.5483893.783276.7739CTOA23815.4026805.7251645.86
CTOA33259.5483861.78254.4354CTOA33815.4026716.8052079.802
CTOA43215.9073854.504303.3637CTOA44109.3366851.3351741.201
CTOA53275.9073864.261268.2263CTOA54198.8417186.7641556.756
CTOA63235.7283889.609283.8143CTOA64169.4326762.4181446.997
CTOA73246.9263921.525261.544CTOA74358.5816985.0351341.934
CTOA83378.8793882.204271.7141CTOA84098.1587107.1492005.808
CTOA93161.9533874.175241.2633CTOA94499.3897435.8092229.463
CTOA103285.2533901.605267.3331CTOA104229.4266891.6341983.091
CTOA113359.5213894.444250.7726CTOA114125.3036868.331407.036
CTOA123401.573855.544257.0391CTOA124050.6206802.4551873.949
Table 20. Friedman ranking of different algorithms in 30D.
Algorithm | Friedman Ranking | Final Ranking
CTOA9 | 4.3 | 1
TOA | 4.67 | 2
CTOA10 | 6.00 | 3
CTOA8 | 6.00 | 4
CTOA5 | 6.33 | 5
CTOA11 | 6.67 | 6
CTOA3 | 7.17 | 7
CTOA4 | 7.17 | 8
CTOA6 | 7.83 | 9
CTOA7 | 8.17 | 10
CTOA1 | 8.3 | 11
CTOA2 | 8.67 | 12
CTOA12 | 9.67 | 13
Table 21. Friedman ranking of different algorithms in 50D.
Algorithm | Friedman Ranking | Final Ranking
CTOA9 | 2.67 | 1
CTOA5 | 3.33 | 2
CTOA6 | 4.67 | 3
CTOA7 | 4.67 | 4
CTOA1 | 6.33 | 5
TOA | 6.67 | 6
CTOA2 | 6.83 | 7
CTOA10 | 7.33 | 8
CTOA11 | 8.50 | 9
CTOA12 | 8.50 | 10
CTOA8 | 9.67 | 11
CTOA3 | 10.17 | 12
CTOA4 | 11.67 | 13
Table 22. Friedman ranking of different algorithms in 100D.
Algorithm | Friedman Ranking | Final Ranking
CTOA6 | 5.00 | 1
CTOA1 | 5.00 | 2
CTOA9 | 5.33 | 3
TOA | 5.67 | 4
CTOA10 | 6.17 | 5
CTOA4 | 6.33 | 6
CTOA2 | 6.50 | 7
CTOA12 | 6.50 | 8
CTOA3 | 7 | 9
CTOA11 | 7.166666667 | 10
CTOA5 | 7.333333333 | 11
CTOA8 | 8.166666667 | 12
CTOA7 | 8.5 | 13
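The Friedman rankings in Tables 20–22 can be reproduced in principle as average ranks over the benchmark functions. The sketch below illustrates this computation under the assumption that each algorithm is ranked per function by its mean error (lower is better); the input matrix is a placeholder, not the paper's data.

```python
import numpy as np
from scipy.stats import rankdata

def friedman_ranking(mean_errors):
    """mean_errors: (n_functions, n_algorithms) array of mean errors (lower is better).
    Returns the average rank of each algorithm over all functions."""
    ranks = np.vstack([rankdata(row) for row in mean_errors])  # rank 1 = best on that function
    return ranks.mean(axis=0)

# Toy example with 3 functions and 3 algorithms (placeholder numbers, not the paper's data).
demo = np.array([[1.2, 0.9, 1.5],
                 [3.4, 3.1, 3.3],
                 [0.7, 0.8, 0.6]])
print(friedman_ranking(demo))  # approximately [2.33, 1.67, 2.00]
```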
Table 23. The results of the Wilcoxon test.
Algorithm | Compared Algorithm | p-Value | Result
CTOA9 | TOA | 0.0339 | 1
CTOA9 | CTOA1 | 0.0211 | 1
CTOA9 | CTOA2 | 0.0375 | 1
CTOA9 | CTOA3 | 0.0595 | 0
CTOA9 | CTOA4 | 0.0466 | 1
CTOA9 | CTOA5 | 0.0168 | 1
CTOA9 | CTOA6 | 0.0457 | 1
CTOA9 | CTOA7 | 0.0516 | 0
CTOA9 | CTOA8 | 0.4281 | 1
CTOA9 | CTOA10 | 0.0428 | 1
CTOA9 | CTOA11 | 0.0357 | 1
CTOA9 | CTOA12 | 0.2790 | 1
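A paired Wilcoxon signed-rank test of this kind can be carried out with scipy. The sketch below is a minimal illustration that pairs per-function mean errors and flags significance at the 0.05 level; both the pairing of mean errors and the 0.05 threshold are assumptions about the procedure, and the input values are placeholders rather than the paper's results.

```python
from scipy.stats import wilcoxon

def compare(means_a, means_b, alpha=0.05):
    """Paired Wilcoxon signed-rank test over per-function mean errors.
    Returns the p-value and 1 if the difference is significant at level alpha, else 0."""
    _, p = wilcoxon(means_a, means_b)
    return p, int(p < alpha)

# Toy example with five placeholder pairs; with so few pairs the test cannot reach p < 0.05
# (the exact two-sided p-value is 0.0625 when all differences share the same sign).
p, flag = compare([4.0, 31.3, 7.3, 56.9, 42.4], [4.6, 34.5, 7.4, 73.2, 53.9])
print(p, flag)
```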
Table 24. The parameter settings for the relevant algorithms.
Algorithm | Parameter | Description
CTOA9 | gc = 50 | seed growth cycle
GA [3] | pm = 0.005 | mutation rate
GA [3] | pc = 0.7 | crossover rate
PSO [2] | c1 = c2 = 1 | learning factor
PSO [2] | w = 0.9 | weight
PSO [2] | vmax = 6 | maximum velocity limit
ACO [7] | rho = 0.2 | pheromone volatilization speed
SFLA [8] | q = 2 | number of parents
SFLA [8] | α = 3 | number of offspring
SFLA [8] | β = 5 | maximum number of iterations
SFLA [8] | σ = 2 | step size
SFLA [8] | nMemeplex = 5 | number of memeplexes
Table 25. Friedman ranking of different algorithms.
Algorithm | Friedman Ranking | Final Ranking
CTOA9 | 2.46 | 1
SFLA [8] | 3.17 | 2
PSO [2] | 3.33 | 3
ACO [7] | 4.75 | 4
GA [3] | 5 | 5
Table 26. The results of the Wilcoxon test comparing CTOA with state-of-the-art algorithms.
Algorithm | Compared Algorithm | p-Value | Result
CTOA9 | SFLA [8] | 0.0133 | 1
CTOA9 | ACO [7] | 0.0355 | 1
CTOA9 | PSO [2] | 0.0407 | 1
CTOA9 | GA [3] | 0.0403 | 1
Table 27. The results of different algorithms.
Algorithm | RMSE | R²
CTOA9 | 0.051033 | 0.96719
SFLA [8] | 0.051071 | 0.95767
ACO [7] | 0.051573 | 0.92724
PSO [2] | 0.051572 | 0.92623
GA [3] | 0.051655 | 0.93110
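For completeness, the RMSE and R² values in Table 27 follow their standard definitions; the sketch below computes both metrics from predicted and observed values, with synthetic numbers used purely as placeholders.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def r_squared(y_true, y_pred):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Placeholder example (not the paper's data).
y_true = [0.10, 0.25, 0.40, 0.55]
y_pred = [0.12, 0.22, 0.41, 0.52]
print(rmse(y_true, y_pred), r_squared(y_true, y_pred))
```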