Article

Adaptive Exploration Artificial Bee Colony for Mathematical Optimization

by Shaymaa Alsamia 1,2,*, Edina Koch 1, Hazim Albedran 2 and Richard Ray 1
1 Department of Structural and Geotechnical Engineering, Széchenyi István University, 9026 Győr, Hungary
2 Faculty of Engineering, University of Kufa, Najaf P.O. Box 21, Iraq
* Author to whom correspondence should be addressed.
AI 2024, 5(4), 2218-2236; https://doi.org/10.3390/ai5040109
Submission received: 3 October 2024 / Revised: 25 October 2024 / Accepted: 1 November 2024 / Published: 5 November 2024
(This article belongs to the Section AI Systems: Theory and Applications)

Abstract

The artificial bee colony (ABC) algorithm is a well-known swarm intelligence method applied across various disciplines due to its robustness. However, it exhibits limitations in its exploration mechanisms, particularly in high-dimensional or complex landscapes. This article introduces the adaptive exploration artificial bee colony (AEABC), a novel variant that revisits the natural inspiration of the ABC algorithm. AEABC incorporates new distance-based parameters and mechanisms that correct the original design, enhancing its robustness. The performance of AEABC was evaluated against 33 state-of-the-art metaheuristics across twenty-five benchmark functions and an engineering application. AEABC consistently outperformed its counterparts, demonstrating superior efficiency and accuracy. In a large-scale problem (n = 1000 variables), the traditional ABC algorithm converged to 3.086 × 10^6, while AEABC achieved a convergence of 2.0596 × 10^−255, highlighting its robust performance. By addressing the shortcomings of the traditional ABC algorithm, AEABC significantly advances mathematical optimization, especially in engineering applications. This work underscores the value of revisiting the natural inspiration of the traditional ABC algorithm to enhance the capabilities of swarm intelligence.

1. Introduction

Optimization algorithms [1] search for the best solution in a space of candidate solutions. Since its introduction by Dervis Karaboga in 2005 [2], the artificial bee colony (ABC) algorithm has been considered a cornerstone of swarm intelligence (SI). The ABC algorithm mimics the behavior of foraging honeybees using three categories of bees: employed bees, onlookers, and scouts. The algorithm governs the exploration and exploitation processes, steering the search toward the global optimum of the search landscape. Despite its proven efficiency, the ABC algorithm, like many of its contemporaries, has some disadvantages [3]. Researchers have identified key areas, particularly its exploration mechanisms, where it performs sub-optimally, especially in high-dimensional or complex landscapes. This shortcoming has encouraged ongoing research on enhancing the algorithm’s robustness through modifications and hybridizations with other metaheuristic approaches [4,5].
The inspiration behind the artificial bee colony carries a significant issue: the concept does not account for the fact that, in real life, honeybees consider the distance between an exhausted food source and a newly proposed one. Employed bees naturally tend to move to the closest food sources rather than choosing the nearest or farthest ones with equal probability. Ignoring this fact reduces the capability of ABC’s inspiration and its mathematical model (discussed in Section 3). The experiments discussed later also demonstrate this shortcoming.
Optimization has seen significant advancements in the development and application of various novel algorithms. These algorithms, which draw inspiration from natural phenomena, biological processes, and physical laws, provide varied methods for addressing complex optimization problems in fields such as engineering [6], finance, optimal design [1], logistics, control [7], and artificial intelligence [8]. This paper provides an overview of some innovative optimization algorithms used in the experiments that evaluate the adaptive exploration artificial bee colony (AEABC) algorithm. One of the critical algorithms is Ant Lion Optimization (ALO) [9], developed based on the hunting behavior of antlions. The Bayesian Optimization Algorithm (BOA) [9] uses Bayesian techniques to guide the search for optimal solutions, providing a robust strategy for exploring complex landscapes. Gray Wolf Optimization (GWO) [10] imitates the leadership hierarchy and hunting behavior of a pack of grey wolves in nature. Particle Swarm Optimization (PSO) [11] is inspired by the social behavior of fish and birds; many researchers have applied it due to its simplicity and effectiveness in navigating the search space.
Similarly, the Sine Cosine Algorithm (SCA) [12] utilizes mathematical functions to simulate the explorative moves of search agents. The Whale Optimization Algorithm (WOA) [13] was inspired by the bubble-net hunting strategy of humpback whales. Another noteworthy algorithm is Dynamic Differential Annealed Optimization (DDAO) [14,15], which introduces dynamic elements into the differential optimization framework to enhance convergence speed and accuracy. The Bat Algorithm (BA) [16] models the echolocation behavior of bats, while the Firefly Algorithm (FF) [17,18] mimics the bioluminescent communication of fireflies. The Krill Herd (KH) algorithm [19] simulates the herding behavior of krill individuals in finding denser areas of food. The Multi-Verse Optimizer (MVO) [20] originates from the multi-verse theory in physics, representing multiple possible solutions as universes. The Squirrel Search Algorithm (SSA) [21] simulates the foraging behavior of squirrels. The gravitational search algorithm (GSA) [22] uses mass interactions and the law of gravity to perform the search, and Dolphin Echolocation (DE) [23,24] simulates how dolphins identify and locate their prey through echolocation. Furthermore, the Flower Pollination Algorithm (FPA) [25] and Fast Evolutionary Programming (FEP) [26] are influenced by the natural pollination process of flowers and evolutionary strategies, respectively. The State of Matter Search (SMS) [27], Moth-Flame Optimization (MFO) [28], and the Genetic Algorithm (GA) [29] draw from physics, the navigational method of moths in nature, and the principles of biological evolution, respectively. The Fertilization Algorithm (FO) [30] emulates the process of biological fertilization, and Harmony Search (HS) was developed based on the improvisation process of musicians. Each of these algorithms has unique characteristics and has been applied successfully to specific optimization problems, demonstrating the richness and diversity of approaches in the optimization field.
This work introduces the adaptive exploration artificial bee colony (AEABC), a novel variant of the traditional ABC algorithm. Integrating new distance-based parameters into the bees’ decision-making allows AEABC to navigate the search space more effectively, improving the algorithm’s efficiency and accuracy in finding the global optimum. This enhancement addresses the deficiencies observed in ABC’s ability to handle complex mathematical optimization problems. This study performs a rigorous comparative analysis against 33 efficient metaheuristics across twenty-five benchmark functions and an engineering application. The results illustrate AEABC’s superior performance, notably in large-scale optimization problems, where it dramatically outpaces the traditional ABC. The experiments also demonstrate the robustness and superiority of the AEABC algorithm on small-scale and constrained optimization problems. The following sections guide the reader through the theoretical underpinnings of AEABC, followed by detailed experimental results and analyses highlighting the algorithm’s versatility and robustness. By considering both theoretical and practical aspects, this work contributes a substantial advancement to the field of mathematical optimization and demonstrates the applicability of AEABC in critical engineering contexts.

2. The ABC Algorithm

Dervis Karaboga [31] introduced the artificial bee colony, an optimization method based on SI principles. The algorithm is a metaheuristic that successfully solves multidimensional optimization problems. It mimics the foraging behavior of a honey bee colony, following a model presented by Tereshko and Loengarov [32]. The model of the ABC algorithm consists of the following components:
  • Employed bees: Each employed bee (proposed solution) is associated with a specific food source (solution), which it explores locally to find better positions (solutions). The employed bee randomly modifies the current solution to generate a new one in the neighborhood. If this new solution has a higher fitness value (indicating a more desirable solution), the food source’s position is updated. Employed bees share information about the food source’s quality with onlooker bees upon returning to the hive.
  • Onlooker bees: They remain in the hive and receive information from the employed bees, which share their information using the waggle dance [33]. In optimization terms, onlooker bees are proposed solutions that improve themselves based on the fitness values of the employed bee solutions.
  • Scout bees: When a food source (proposed solution) fails to improve after several attempts, it is abandoned. In other words, if a proposed solution does not improve over a specific number of iterations, then it should be deleted and replaced by a new randomly generated solution. The associated employed bee then becomes a scout and begins a random search for a new food source. This mechanism introduces randomness, helping the algorithm escape local optima and discover new, potentially more fruitful areas of the search space.
  • Food source: This is the location where nectar resides in multidimensional space; in optimization, it is a proposed solution that can be improved over iterations.
Bees in the ABC algorithm communicate through shared information about food source quality (the fitness value of the solutions), similar to the “waggle dance” observed in natural bee colonies. Employed bees returning to the hive signal their food source’s quality, allowing onlooker bees to make informed choices. This communication enhances the colony’s search efficiency. In the ABC algorithm, the employed bee solutions communicate with onlooker solutions via Equation (3). Additionally, bees transition between roles dynamically: employed bees can become scouts, and scouts can become employed when they discover new sources. The ABC algorithm (Algorithm 1) can be summarized as follows:
Algorithm 1: Pseudocode of the ABC algorithm
Scout bee phase (initialization)
do
  Employed bee phase
  Onlooker bee phase
  If the employed bee phase and onlooker bee phase yield no improvement
    then call the scout bee phase
  Memorize the best solution in the current trial
until (stop condition)
The low-level behaviors of the algorithm’s phases interact to shape the global search. At the initialization stage, all bees can be considered scouts, randomly looking for new food sources (solutions). Let x be a solution vector:
$$x = (x_1, x_2, \ldots, x_j, \ldots, x_{m-1}, x_m) \tag{1}$$
where $x \in \mathbb{R}^m$ and $j = 1, \ldots, m$.
The foraging behavior in the ABC algorithm is expressed by the following equation for generating new solutions in the search space [2]:
$$v_{ij} = x_{ij} + \varphi_i \,(x_{ij} - x_{kj}) \tag{2}$$
where $v_i$ is a new solution vector; $\varphi_i$ is a randomly generated number between −1 and 1; $i$ ($i = 1$ to $n$) is the solution index in the population of $n$ solutions; $j$ is the index of a variable in a solution; and $k$ is a random index of a different solution in the population.
The onlooker bees use a probability based on the fitness value to choose among the food sources. The roulette wheel selection method [34] provides a means to accomplish this. The probability of a solution ($P_i$) is:
$$P_i = \frac{f_i}{\sum_{i=1}^{n} f_i} \tag{3}$$
$$f_i = \begin{cases} \dfrac{1}{1 + O_i} & O_i \ge 0 \\ 1 + \lvert O_i \rvert & O_i < 0 \end{cases} \tag{4}$$
where $f_i$ is the fitness value, and $O_i$ is the value of the objective function. Initially, all the bees in the algorithm are considered scouts. However, as the algorithm progresses, these bees exchange roles between employed bees and onlookers. Bees whose solutions remain unchanged after several trials must relinquish their employment and convert into scouts. The abandonment criterion, also known as the limit control, plays a crucial role in escaping local minima and enabling the search for the global minimum of an optimization problem. The ABC algorithm’s MATLAB R2021a implementation was obtained from the MathWorks website [35].
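To make the mechanics above concrete, the following Python sketch implements Equations (1)–(4) in a minimal ABC loop. It is an illustrative reconstruction, not the MATLAB implementation cited at [35]; the helper names (`neighbour`, `greedy`), the per-iteration single-scout policy, and the boundary clipping are assumptions of this sketch.

```python
import numpy as np

def fitness(obj_val):
    """Fitness transform of Equation (4)."""
    return 1.0 / (1.0 + obj_val) if obj_val >= 0 else 1.0 + abs(obj_val)

def abc(objective, bounds, n_food=50, max_iter=500, limit=100, rng=None):
    """Minimal ABC sketch (employed, onlooker, and scout phases)."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = np.array(bounds, dtype=float).T
    dim = lo.size
    foods = lo + rng.random((n_food, dim)) * (hi - lo)   # Eq. (1): random food sources
    costs = np.array([objective(x) for x in foods])
    trials = np.zeros(n_food, dtype=int)                 # stagnation counters (limit control)

    def neighbour(i):
        """Equation (2): perturb one variable toward/away from a random partner k."""
        k = rng.integers(n_food)
        while k == i:
            k = rng.integers(n_food)
        j = rng.integers(dim)
        v = foods[i].copy()
        v[j] = np.clip(v[j] + rng.uniform(-1, 1) * (foods[i, j] - foods[k, j]), lo[j], hi[j])
        return v

    def greedy(i, v):
        """Keep the candidate only if it improves the food source."""
        c = objective(v)
        if c < costs[i]:
            foods[i], costs[i], trials[i] = v, c, 0
        else:
            trials[i] += 1

    for _ in range(max_iter):
        for i in range(n_food):                            # employed bee phase
            greedy(i, neighbour(i))
        fit = np.array([fitness(c) for c in costs])
        probs = fit / fit.sum()                            # Eq. (3): roulette-wheel probabilities
        for _ in range(n_food):                            # onlooker bee phase
            i = rng.choice(n_food, p=probs)
            greedy(i, neighbour(i))
        worst = int(np.argmax(trials))                     # scout bee phase (abandonment)
        if trials[worst] > limit:
            foods[worst] = lo + rng.random(dim) * (hi - lo)
            costs[worst] = objective(foods[worst])
            trials[worst] = 0
    best = int(np.argmin(costs))
    return foods[best], costs[best]

# Example: the sphere function (F1 in Table 2) in 30 dimensions.
x_best, f_best = abc(lambda x: float(np.sum(x**2)), [(-100, 100)] * 30)
```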

3. The AEABC Algorithm

With its three segments, the ABC algorithm pays no attention to the distance to a food source. New solutions emerge using the neighborhood structure expressed in Equation (2), and the random index k in Equation (2) means that a randomly chosen solution from the population provides the basis for a new solution. Thus, the original ABC algorithm ignores the bees’ preference for the nearest food source when they have multiple choices. A distant food source has a lower probability of being chosen than the nearest one, and a bee leaves its current position for another position with a higher probability. This probability is calculated from the distance between the current position of a bee and the randomly chosen position of another bee in the swarm. This probability parameter enhances the knowledge shared among the agents of the swarm: bees share their positions with a random set of other bees, which enhances the swarm’s intelligence. The same applies to onlooker bees in the hive when extracting information from employed bees through the waggle dance; they favor the nearest food source if different employed bees come from various sources. The proposed algorithm, the adaptive exploration artificial bee colony (AEABC), redesigns the original ABC algorithm to include a distance criterion in the mathematical model. In machine learning, various distance metrics measure the similarity or dissimilarity between data points, such as the Manhattan or Hamming distance. The Euclidean distance can be used for Cartesian spaces:
$$d = \lvert x_{ij} - x_{kj} \rvert \tag{5}$$
where $d$ is the distance between any solution in the population and a solution of random index $k$ in the population. The distance probability $P_d$, based on the distance parameter, can be calculated as follows:
$$P_d = e^{-\frac{1}{d}} \tag{6}$$
The distance probability significantly impacts the search engine, as revealed in Section 5. If a random number $r \in [0, 1] > P_d$, then the candidate solution generated by Equation (2) is shifted as follows:
$$S_{ij} = r \, v_{ij} \tag{7}$$
where $S_{ij}$ is the shifted solution. Table 1 explains how the distance between two solutions affects the probability of shifting a solution in a search landscape.
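A minimal sketch of the candidate generation with the AEABC distance shift follows, combining Equations (2) and (5)–(7). The per-dimension reading of the distance in Equation (5) and the handling of a zero distance are assumptions of this sketch, not statements from the authors' implementation; the function is meant to replace the plain neighborhood move inside the employed and onlooker phases of an ABC loop such as the one sketched at the end of Section 2.

```python
import numpy as np

def aeabc_candidate(foods, i, lo, hi, rng):
    """Generate a candidate for solution i with the AEABC distance-based shift."""
    n_food, dim = foods.shape
    k = rng.integers(n_food)
    while k == i:
        k = rng.integers(n_food)
    j = rng.integers(dim)
    v = foods[i].copy()
    v[j] = foods[i, j] + rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])  # Eq. (2)

    d = abs(foods[i, j] - foods[k, j])        # Eq. (5): distance to the random partner
    p_d = np.exp(-1.0 / d) if d > 0 else 0.0  # Eq. (6); d = 0 treated as "very close" (assumption)
    r = rng.random()
    if r > p_d:                               # more likely when the partner is nearby
        v[j] = r * v[j]                       # Eq. (7): shift the candidate
    return np.clip(v, lo, hi)
```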

4. Designing and Executing the Experiment

Several benchmarks of different complexities provide the basis for a comprehensive examination of the AEABC algorithm. These benchmarks have also been used with other metaheuristics, allowing them to be compared with AEABC on a given benchmark. Table 2 shows seven unimodal test functions with their variable size (D), search space (Range), and global minimum (fmin).
Table 3 lists six multimodal benchmarks with a more complex search landscape containing many local minima/maxima. The benchmarks in Table 4 and Table 5 are used to evaluate the AEABC algorithm on large-scale and small-scale variable sizes, respectively, where F1 to F7 refer to the test function numbers.

5. Contribution of the Distance Parameter

Where the distance parameter is applied within the ABC algorithm is crucial. Table 6 reveals that applying the distance criterion to both the employed and onlooker sections produces better results than applying it to a single section. The experiments in Table 6 used 30 independent runs, a population size of 50, and a maximum of 500 iterations, while the variable size was set to 100, 500, and 10, respectively. The best results in Table 6 are highlighted in bold.
The results indicate that applying the distance parameter in the ABC algorithm’s employed and onlooker bee phases significantly enhances its performance, leading to lower mean values and minimal variability (as shown by the standard deviation). This effect is consistent across different variable sizes, with the best results occurring when the distance parameter is applied in both phases. When the distance parameter is applied only in the employed phase, the algorithm’s performance is still relatively strong, though not as robust as when used in both phases. Conversely, applying the distance parameter only in the onlooker phase results in the worst performance, with higher mean values and greater variability, indicating that the onlooker phase alone is insufficient to capitalize on the benefits of the distance parameter. These findings underscore the importance of integrating the distance parameter across multiple phases of the ABC algorithm to achieve optimal performance, particularly in handling complex optimization tasks.

6. Experimental Results

This section presents a comprehensive examination of the new optimization algorithm AEABC by comparing its results with those of 33 other metaheuristic algorithms. The internal parameters of the competing algorithms and the final versions of their source codes can be freely downloaded from [36,37]; this work is not responsible for tuning these competing algorithms.

6.1. Time-Based Experiment

AEABC was evaluated on the seven unimodal benchmarks in Table 2 and compared with ABC, dynamic differential annealed optimization (DDAO) [14], the Whale Optimization Algorithm (WOA), the Sine Cosine Algorithm (SCA), Particle Swarm Optimization (PSO), Gray Wolf Optimization (GWO), the Butterfly Optimization Algorithm (BOA), and the Ant Lion Optimizer (ALO). All the algorithms used 30 independent runs, a population size of 50, and a maximum runtime of one second for each independent run. The results are shown in Table 7 and Table 8, which show how the AEABC algorithm significantly improves on its origin, ABC, and outperforms the other metaheuristics in this experiment. The AEABC algorithm consistently outperforms the other metaheuristics on most functions, as shown in Table 7, especially in terms of best values and standard deviations, indicating both high accuracy and consistency. Its performance is particularly notable on functions F1, F2, F3, and F4, where it significantly outclasses all other algorithms. On functions F5, F6, and F7, while AEABC is still highly competitive, a few other algorithms like GWO, WOA, and BOA show comparable or slightly better performance in certain cases. Consistency: the low standard deviations across all functions for AEABC suggest that it is highly reliable and produces consistent results, which is critical in optimization problems. Competitors: GWO and WOA are the closest competitors to AEABC, performing well across most functions but still generally lagging behind AEABC in terms of best values. AEABC demonstrates superior performance across a wide range of test functions compared to other metaheuristic algorithms, making it a highly effective choice for optimization tasks.
Table 8 reveals that AEABC is among the top performers, particularly on functions F9, F10, and F11, where it achieves near-perfect scores with zero variation. This indicates that AEABC is highly effective for certain types of multimodal functions. The zero standard deviation in functions F9, F10, and F11 highlights AEABC’s consistency and reliability in optimization. For other functions like F8 and F13, its standard deviation is moderate, suggesting that while its performance is good, there is some variability. While AEABC is competitive, especially regarding consistency, algorithms like WOA and GWO sometimes achieve slightly better best values, particularly for F8 and F12. However, AEABC’s balance of good performance across functions and low standard deviation makes it a robust choice. Overall, AEABC demonstrates strong and consistent performance across a range of multimodal benchmark functions, making it a competitive algorithm. Its particular strength lies in its ability to find optimal or near-optimal solutions consistently, even though it might not always achieve the absolute best performance on every function compared to other top algorithms like WOA or GWO.
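The one-second-per-run budget used in this experiment can be enforced with a simple wall-clock check around the optimizer's iteration loop; a sketch is given below, where `step` is assumed to run one iteration and return the best cost found so far.

```python
import time

def run_with_time_budget(step, budget_s=1.0):
    """Repeat step() until the wall-clock budget (in seconds) is exhausted."""
    start = time.monotonic()
    best = float("inf")
    while time.monotonic() - start < budget_s:
        best = min(best, step())
    return best
```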

6.2. Comparison Against ABC

This section demonstrates how the AEABC algorithm outperforms its previous form in large-scale optimization. The variable size for the functions in Table 9 is 1000; the run conditions were 30 independent runs, a population size of 100, and a maximum of 1000 iterations.
AEABC shows a dramatic improvement in the mean values across all functions compared to the original ABC algorithm. For functions F1, F14, F15, F16, and F17, the reductions in error are several orders of magnitude, indicating that AEABC is far more effective in solving these large-scale optimization problems. AEABC consistently shows a lower standard deviation across all functions, often reaching zero, suggesting that it performs better on average and does so with much greater reliability and stability. This consistency is essential in optimization tasks where predictable performance is crucial. In function F18, both algorithms perform equally well, suggesting that AEABC does not lose any effectiveness compared to ABC on simpler tasks, even though it shows significant improvements in more complex ones. AEABC outperforms its predecessor, ABC, across almost all tested functions, particularly in accuracy and consistency. The improvements are most notable in more challenging functions, making AEABC a more powerful tool for large-scale optimization problems.

6.3. AEABC on F1 and F16

Comparison results for a set of metaheuristics on the F1 and F16 benchmarks were found in the literature [14,21] and used to evaluate the performance of the AEABC algorithm, as shown in Table 10. We followed the same running conditions for a fair comparison: 25,000 function evaluations, 30 independent runs, and a variable size of 30. Table 10 shows that AEABC outperforms its competitors, which are the Bat Algorithm (BA), Firefly (FF) optimization, the Multi-Verse Optimizer (MVO), Krill Herd (KH) optimization, and the Squirrel Search Algorithm (SSA). The optimal solutions of the test functions F1 and F16 are zero, and AEABC converged exactly to zero, while the other algorithms in this experiment could not converge as well.
AEABC consistently achieves a mean value of 0 for both F1 and F16, meaning it solves these benchmark functions without any error, which none of the other algorithms achieve. The standard deviation of 0 for both functions indicates that AEABC’s performance is perfectly consistent across all runs, a significant advantage over the other algorithms, which exhibit varying degrees of variability. Among the other algorithms, SSA shows the closest performance to AEABC in terms of low mean values and relatively low standard deviations; however, it still cannot match AEABC’s perfect results. Algorithms like PSO, BA, and MVO show significantly worse performance in accuracy (higher mean values) and consistency (higher standard deviations). AEABC performs vastly better than the other metaheuristics on the F1 and F16 benchmark functions. Its ability to achieve perfect optimization with zero error and zero variability makes it the most reliable and effective algorithm among those compared.

6.4. Results of Large-Scale Benchmarks

In this experiment, AEABC is evaluated on the benchmarks shown in Table 11 and compared with five metaheuristics: GWO, PSO, GSA, DE, and FEP. The statistical results of these algorithms were found in the literature [10], while AEABC results were obtained by following the same run conditions, which are 30 independent runs, a maximum number of iterations of 500, and a population size equal to 30.
The AEABC achieves optimal or near-optimal mean values across all large-scale optimization functions, often matching the performance of other top algorithms like DE, GWO, PSO, and GSA. The standard deviation for AEABC is minimal across all functions, indicating that it reliably produces results close to the best-known solutions with slight variation across multiple runs. On functions like F14, AEABC shows impressive consistency, outperforming even the best alternatives. While DE slightly outperforms AEABC in one or two cases (e.g., F15), AEABC remains highly competitive and often superior to algorithms like PSO, GWO, and FEP, particularly regarding consistency and reliability. AEABC performs strongly on large-scale optimization problems, combining high accuracy with remarkable consistency. It is a robust and reliable algorithm that competes well with, and often surpasses, other state-of-the-art metaheuristics, making it a highly effective choice for complex optimization tasks.

6.5. Results of Small-Scale Benchmarks

In this experiment, AEABC is evaluated on the benchmarks shown in Table 12 and compared with five metaheuristics: GWO, PSO, the gravitational search algorithm (GSA), differential evolution (DE), and Fast Evolutionary Programming (FEP). The statistical results of these algorithms were found in the literature [10], while the AEABC results were obtained by following the same run conditions as in Section 6.4.
AEABC consistently performs well across all functions, often achieving mean values close to those of the best-performing algorithms. While not the lowest in every case, its standard deviations are generally moderate, indicating that AEABC provides reliable and consistent results. In particular, on F20, AEABC matches the best-performing algorithm (GSA) in mean value while showing more consistent results. In some cases, particularly on functions F19 and F21, AEABC’s mean values are slightly less optimal than those of GWO and DE, and its standard deviations are higher, suggesting room for improvement in achieving more consistent performance. While AEABC may not consistently achieve the absolute best performance, it remains highly competitive, outperforming algorithms like PSO, GSA, and FEP in most cases. Its overall performance across these small-scale optimization problems demonstrates its robustness and reliability as an optimization tool. In conclusion, AEABC is a strong contender for small-scale optimization problems, often producing results close to or better than many leading metaheuristics. Its performance is characterized by good accuracy and reasonable consistency, making it a valuable algorithm for solving complex optimization challenges.

6.6. AEABC on Multimodal Benchmarks

For a comprehensive examination, the AEABC algorithm was compared with other optimization algorithms using their results from the literature [28], shown in Table 13. Using the same run conditions (30 independent runs, a maximum of 1000 iterations, and a population size of 30), AEABC was compared with the six metaheuristics shown in Table 13.
AEABC consistently achieves the best or near-best average values, especially on functions F9, F10, F11, and F12, significantly outperforming all other algorithms. This demonstrates AEABC’s superior ability to optimize complex multimodal functions. The AEABC often achieves a standard deviation of zero, indicating perfect consistency, particularly on F9, F10, and F11. This reliability is unmatched by other algorithms, which show varying degrees of performance. Even in cases where AEABC does not have the best average (e.g., F8 and F13), it remains highly competitive and offers strong performance with relatively low variability. The AEABC demonstrates excellent overall performance on multimodal benchmark functions, frequently outperforming other state-of-the-art metaheuristics. Its ability to achieve near-perfect optimization with minimal variability makes it a highly reliable and effective algorithm for complex optimization tasks.

6.7. Small-Scale Optimization

This section presents the evaluation of AEABC on the small-scale optimization problems listed in Table 5, all of which have a variable size of D = 2. This experiment was performed with 30 independent runs and 25,000 function evaluations and compared with six optimization algorithms. The statistical results of the competitors in Table 14 were adopted from the literature [14], while the results of AEABC were obtained using the abovementioned run conditions.
AEABC consistently achieves optimal or near-optimal best values across all functions in this experiment. In many cases (F19, F21, F24, and F25), it reaches the perfect solution (zero error) with no variability. For several functions (F19, F21, and F24), AEABC demonstrates zero standard deviation, consistently finding the optimal solution in every run and outperforming all other algorithms in reliability. Even in cases where AEABC does not achieve a perfect solution (e.g., F20 and F23), it still demonstrates strong performance, with low mean errors and relatively low standard deviations compared to other algorithms. In conclusion, AEABC performs exceptionally well on small-scale, two-variable test functions, often outperforming other state-of-the-art metaheuristics in accuracy and consistency. This makes AEABC a highly reliable and effective algorithm for solving small-scale optimization problems.

7. Statistical Analysis

The Wilcoxon rank-sum test was used to evaluate the statistical significance of the performance differences between the adaptive exploration artificial bee colony algorithm and the other metaheuristic algorithms in Table 7 and Table 8 across multiple benchmark functions (F1 to F13). The purpose of this statistical analysis is to assess whether the observed performance differences are statistically significant or due to random variation. The results are summarized in three parts: Table 15 (test functions F1–F5), Table 16 (test functions F6–F10), and Table 17 (test functions F11–F13). In each case, the Wilcoxon statistic and p-value are reported, which indicate the direction and significance of the performance differences.
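The pairwise tests can be reproduced with SciPy's rank-sum implementation. In the sketch below, the two arrays are placeholders for the 30 per-run results of AEABC and one competitor on a single benchmark; the random data are only for illustration.

```python
import numpy as np
from scipy.stats import ranksums

# Placeholder 30-run samples; in practice these are the per-run results of
# AEABC and a competing algorithm (e.g., ABC) on one benchmark function.
aeabc_runs = np.random.default_rng(0).normal(0.0, 1e-6, 30)
abc_runs = np.random.default_rng(1).normal(5.0, 1.0, 30)

stat, p = ranksums(aeabc_runs, abc_runs)   # two-sided Wilcoxon rank-sum test
print(f"statistic = {stat:.3f}, p-value = {p:.3e}")
# A negative statistic with a small p-value indicates the first sample (AEABC)
# tends to produce lower values than the second.
```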
Across functions F1 to F5, AEABC outperforms most algorithms with highly significant p-values (2.872 × 10^−11) in all cases, indicating a robust difference in performance in favor of AEABC. Overall, the Wilcoxon rank-sum test results highlight that AEABC demonstrates superior performance compared to many of the competing metaheuristic algorithms across the majority of benchmark functions. However, specific cases such as GWO and WOA on functions F5, F8, F9, and F13 suggest that these algorithms may perform better on certain types of optimization problems. These findings provide valuable insights into AEABC’s strengths and weaknesses, offering direction for future improvements to enhance its robustness across a broader range of functions.

8. Practical Application

For an extensive examination, this section employs statistical results from the literature [10] for the welded beam design problem. This work followed the same run conditions mentioned in the original work to provide a fair comparison and to give the reader broader insight into the proposed algorithm. This engineering problem, represented in Figure 1, has four design variables, and the objective is to minimize the fabrication cost. The problem is fully described in [10] and can be stated as follows:
Consider $\vec{x} = [x_1\; x_2\; x_3\; x_4] = [h\; l\; t\; b]$,
Minimize $f(\vec{x}) = 1.10471\, x_1^2 x_2 + 0.04811\, x_3 x_4 (14.0 + x_2)$,
subject to
$g_1(\vec{x}) = \tau(\vec{x}) - \tau_{\max} \le 0$,
$g_2(\vec{x}) = \sigma(\vec{x}) - \sigma_{\max} \le 0$,
$g_3(\vec{x}) = \delta(\vec{x}) - \delta_{\max} \le 0$,
$g_4(\vec{x}) = x_1 - x_4 \le 0$,
$g_5(\vec{x}) = P - P_C(\vec{x}) \le 0$,
$g_6(\vec{x}) = 0.125 - x_1 \le 0$,
$g_7(\vec{x}) = 0.10471\, x_1^2 + 0.04811\, x_3 x_4 (14.0 + x_2) - 5.0 \le 0$,
where
$\tau(\vec{x}) = \sqrt{(\tau')^2 + 2 \tau' \tau'' \dfrac{x_2}{2R} + (\tau'')^2}$, $\tau' = \dfrac{P}{\sqrt{2}\, x_1 x_2}$, $\tau'' = \dfrac{M R}{J}$, $M = P \left( L + \dfrac{x_2}{2} \right)$, $R = \sqrt{\dfrac{x_2^2}{4} + \left( \dfrac{x_1 + x_3}{2} \right)^2}$,
$J = 2 \left\{ \sqrt{2}\, x_1 x_2 \left[ \dfrac{x_2^2}{4} + \left( \dfrac{x_1 + x_3}{2} \right)^2 \right] \right\}$, $\sigma(\vec{x}) = \dfrac{6 P L}{x_4 x_3^2}$, $\delta(\vec{x}) = \dfrac{6 P L^3}{E x_3^2 x_4}$, $P_C(\vec{x}) = \dfrac{4.013\, E \sqrt{x_3^2 x_4^6 / 36}}{L^2} \left( 1 - \dfrac{x_3}{2L} \sqrt{\dfrac{E}{4G}} \right)$,
$P = 6000\ \mathrm{lb}$, $L = 14\ \mathrm{in.}$, $\delta_{\max} = 0.25\ \mathrm{in.}$, $E = 30 \times 10^6\ \mathrm{psi}$, $G = 12 \times 10^6\ \mathrm{psi}$, $\tau_{\max} = 13600\ \mathrm{psi}$, $\sigma_{\max} = 30000\ \mathrm{psi}$,
$0.1 \le x_1 \le 2$, $0.1 \le x_2 \le 10$, $0.1 \le x_3 \le 10$, $0.1 \le x_4 \le 2$.
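For readers who want to reproduce the problem, the formulation above can be evaluated in code. The sketch below encodes the cost and the seven constraints and handles them with a static penalty, which is an assumption of this sketch; the penalty weight `w` is illustrative and not taken from the paper or from [10].

```python
import numpy as np

P, L, E, G = 6000.0, 14.0, 30e6, 12e6
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def cost(x):
    h, l, t, b = x
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

def constraints(x):
    """Return g1..g7; each value must be <= 0 for a feasible design."""
    h, l, t, b = x
    tau_p = P / (np.sqrt(2.0) * h * l)
    M = P * (L + l / 2.0)
    R = np.sqrt(l**2 / 4.0 + ((h + t) / 2.0) ** 2)
    J = 2.0 * (np.sqrt(2.0) * h * l * (l**2 / 4.0 + ((h + t) / 2.0) ** 2))
    tau_pp = M * R / J
    tau = np.sqrt(tau_p**2 + 2.0 * tau_p * tau_pp * l / (2.0 * R) + tau_pp**2)
    sigma = 6.0 * P * L / (b * t**2)
    delta = 6.0 * P * L**3 / (E * t**2 * b)
    p_c = 4.013 * E * np.sqrt(t**2 * b**6 / 36.0) / L**2 * (1.0 - t / (2.0 * L) * np.sqrt(E / (4.0 * G)))
    return np.array([
        tau - TAU_MAX,                                        # g1
        sigma - SIGMA_MAX,                                    # g2
        delta - DELTA_MAX,                                    # g3
        h - b,                                                # g4
        P - p_c,                                              # g5
        0.125 - h,                                            # g6
        0.10471 * h**2 + 0.04811 * t * b * (14.0 + l) - 5.0,  # g7
    ])

def penalized_cost(x, w=1e6):
    """Static-penalty objective usable by any population-based optimizer."""
    g = constraints(x)
    return cost(x) + w * float(np.sum(np.maximum(g, 0.0) ** 2))
```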
Table 18 reveals the performance of AEABC and the other ten algorithms in the reference study [10]. The algorithms adopted from that study are the grey wolf optimizer (GWO), the gravitational search algorithm (GSA) [22], the genetic algorithm (GA) [38,39], harmony search (HS) [40], Random (Richardson’s random method), the Simplex method, Davidon–Fletcher–Powell (David), and APPROX [41]. The results of 30 independent runs and 500 iterations reveal that AEABC reaches a lower cost than the other metaheuristics.
AEABC’s performance was also compared with the results of CGWO and other algorithms [43], and it achieved the best performance, as shown in Table 19.
AEABC demonstrates the best overall performance, achieving the lowest best cost and maintaining a low mean and worst cost, along with a very low standard deviation. This indicates that AEABC is the most cost-effective and reliable algorithm, consistently producing near-optimal solutions with minimal variation. The comparison underscores AEABC’s superiority over other state-of-the-art algorithms in solving the welded beam design problem, making it an excellent choice for complex engineering optimization tasks.

9. Exploration and Exploitation in AEABC

The adaptive exploration artificial bee colony (AEABC) algorithm introduces a novel distance-based parameter to enhance the balance between exploration and exploitation, two crucial phases in optimization. Exploration involves searching new regions of the solution space to avoid local optima, while exploitation refines existing solutions to achieve better convergence. In traditional ABC, the neighborhood search process does not consider the distance between solutions when generating new candidates. The AEABC algorithm addresses this limitation by introducing a distance-based probability, which governs the likelihood of shifting solutions based on their distance from each other. This distance is calculated using Equation (5), the shift probability by Equation (6), and the shift itself by Equation (7).
Exploitation in AEABC is enhanced by prioritizing solutions in close proximity. When the distance d between two solutions is small, the probability Pd (Equation (6)) approaches 0, so the shift of Equation (7) is applied more often, refining nearby solutions. This increases the likelihood of local search intensification, helping the algorithm converge more effectively. Conversely, if d is large, Pd is close to 1, and the current solution is less likely to shift, allowing broader exploration of the search space. Table 1 further explains how this probability mechanism functions, illustrating how long or short distances between solutions influence the likelihood of a solution shift. The results in Table 6 also demonstrate AEABC’s enhanced exploitation capabilities. For example, on function F1 with a variable size of 100, applying the distance parameter to the onlooker bees leads to a substantial reduction in the standard deviation (e.g., 8.31 × 10^−115). This indicates that the algorithm converges more consistently when prioritizing the exploitation phase, as onlooker bees focus on refining nearby food sources. The distance-based probability Pd enables AEABC to dynamically adjust its behavior, enhancing exploration during the early stages of optimization and shifting toward exploitation as the algorithm progresses. This adaptability is key to avoiding premature convergence while ensuring rapid convergence once high-quality solutions are identified.
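The adaptive behavior described above can be checked numerically: the short loop below evaluates the distance probability of Equation (6) for a few distances (the sample distances are arbitrary illustrations).

```python
import numpy as np

# P_d = exp(-1/d): short distances give P_d near 0, so the shift of Eq. (7)
# is applied often (local refinement); long distances give P_d near 1, so the
# candidate is rarely shifted (broader exploration is preserved).
for d in [0.01, 0.1, 1.0, 10.0, 100.0]:
    print(f"d = {d:7.2f}  ->  P_d = {np.exp(-1.0 / d):.4f}")
# d =    0.01  ->  P_d = 0.0000
# d =    0.10  ->  P_d = 0.0000
# d =    1.00  ->  P_d = 0.3679
# d =   10.00  ->  P_d = 0.9048
# d =  100.00  ->  P_d = 0.9900
```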

10. Conclusions

The artificial bee colony (ABC) algorithm, renowned for its simplicity and efficiency, has been widely applied across various domains as a robust swarm intelligence approach. However, this article introduces a significant advancement to the original ABC through the development of the adaptive exploration artificial bee colony (AEABC) algorithm. Unlike mere hybridization or combination with other algorithms, AEABC reimagines the fundamental mechanisms of ABC by incorporating a crucial new parameter: the distance between food sources. This adjustment reflects a more realistic modeling of the foraging behavior observed in natural bee colonies, enhancing the algorithm’s exploration and exploitation capabilities. The performance of AEABC was evaluated against 33 state-of-the-art metaheuristic algorithms across 25 benchmark functions, where it consistently demonstrated superior efficiency and accuracy. The results showed improvement over the traditional ABC algorithm, validating the effectiveness of this innovative approach. AEABC achieved remarkable consistency in producing optimal solutions, often outperforming its counterparts in mean performance, best and worst-case scenarios, and standard deviation, reflecting its reliability and robustness.
Furthermore, AEABC’s application to a useful engineering problem underscored its practical utility, outperforming leading algorithms in producing cost-effective and reliable solutions. This success highlights AEABC’s potential as a powerful tool for solving real-world optimization problems. In conclusion, the AEABC algorithm represents a robust and straightforward approach to optimization. This work enhances the ABC algorithm and opens new avenues for future research, encouraging the exploration and reinvention of existing algorithms with innovative, nature-inspired ideas.

Author Contributions

Conceptualization, S.A., and H.A.; methodology, S.A.; software, H.A.; validation, S.A. and H.A.; formal analysis, S.A.; investigation, S.A.; resources, S.A.; data curation, H.A.; writing—original draft preparation, S.A.; writing—review and editing, R.R.; visualization, H.A.; supervision, E.K.; project administration, E.K.; funding acquisition, E.K. All authors have read and agreed to the published version of the manuscript.

Funding

This publication was financially supported by Széchenyi István University.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not available.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ghafil, H.N.; Jármai, K. Optimum dynamic analysis of a robot arm using flower pollination algorithm. In Advances and Trends in Engineering Sciences and Technologies III—Proceedings of the 3rd International Conference on Engineering Sciences and Technologies, ESaT 2018; CRC Press: Boca Raton, FL, USA, 2019. [Google Scholar]
  2. Karaboga, D. Artificial bee colony algorithm. Scholarpedia 2010, 5, 6915. [Google Scholar] [CrossRef]
  3. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  4. Thammano, A.; Phu-ang, A. A hybrid artificial bee colony algorithm with local search for flexible job-shop scheduling problem. Procedia Comput. Sci. 2013, 20, 96–101. [Google Scholar] [CrossRef]
  5. Choong, S.S.; Wong, L.-P.; Lim, C.P. An artificial bee colony algorithm with a modified choice function for the traveling salesman problem. Swarm Evol. Comput. 2019, 44, 622–635. [Google Scholar] [CrossRef]
  6. Alsamia, S.; Albedran, H.; Mahmood, M.S. Contamination depth prediction in sandy soils using fuzzy rule-based expert system. Int. Rev. Appl. Sci. Eng. 2023, 14, 87–99. [Google Scholar] [CrossRef]
  7. Albedran, H.; Jármai, K. Evolutionary control system of asymmetric quadcopter. Int. Rev. Appl. Sci. Eng. 2023, 14, 374–382. [Google Scholar] [CrossRef]
  8. Ghafil, H.N.; László, K.; Jármai, K. Investigating three learning algorithms of a neural networks during inverse kinematics of robots. In Solutions for Sustainable Development; CRC Press: Boca Raton, FL, USA, 2019; pp. 33–40. [Google Scholar]
  9. Mirjalili, S. The ant lion optimizer. Adv. Eng. Softw. 2015, 83, 80–98. [Google Scholar] [CrossRef]
  10. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  11. Alsamia, S.; Albedran, H.; Jármai, K. Comparative Study of Different Metaheuristics on CEC 2020 Benchmarks. In Vehicle and Automotive Engineering 4: Select Proceedings of the 4th VAE2022, Miskolc, Hungary; Springer: Berlin/Heidelberg, Germany, 2022; pp. 709–719. [Google Scholar]
  12. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  13. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  14. Ghafil, H.N.; Jármai, K. Dynamic differential annealed optimization: New metaheuristic optimization algorithm for engineering applications. Appl. Soft Comput. 2020, 93, 106392. [Google Scholar] [CrossRef]
  15. Alsamia, S.M.; Mahmood, M.S.; Akhtarpour, A. Prediction of the contamination track in Al-Najaf city soil using numerical modelling. IOP Conf. Ser. Mater. Sci. Eng. 2020, 888, 12050. [Google Scholar] [CrossRef]
  16. Yang, X.; Gandomi, A.H. Bat algorithm: A novel approach for global engineering optimization. Eng. Comput. 2012, 29, 464–483. [Google Scholar] [CrossRef]
  17. Yang, X.-S.; He, X. Firefly algorithm: Recent advances and applications. Int. J. Swarm Intell. 2013, 1, 36. [Google Scholar] [CrossRef]
  18. Alsamia, S.; Koch, E. Evaluation the behavior of pullout force and displacement for a single pile: Experimental validation with plaxis 3D. Kufa J. Eng. 2023, 14, 105–116. [Google Scholar] [CrossRef]
  19. Gandomi, A.H.; Alavi, A.H. Krill herd: A new bio-inspired optimization algorithm. Commun. Nonlinear Sci. Numer. Simul. 2012, 17, 4831–4845. [Google Scholar] [CrossRef]
  20. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
  21. Jain, M.; Singh, V.; Rani, A. A novel nature-inspired algorithm for optimization: Squirrel search algorithm. Swarm Evol. Comput. 2019, 44, 148–175. [Google Scholar] [CrossRef]
  22. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  23. Kaveh, A.; Farhoudi, N. A new optimization method: Dolphin echolocation. Adv. Eng. Softw. 2013, 59, 53–70. [Google Scholar] [CrossRef]
  24. Alsamia, S.; Koch, E. Random forest regression on pullout resistance of a pile. Pollack Period. 2024, 19, 28–33. [Google Scholar] [CrossRef]
  25. Yang, X.-S. Flower pollination algorithm for global optimization. In International Conference on Unconventional Computing and Natural Computation; Springer: Berlin/Heidelberg, Germany, 2012; pp. 240–249. [Google Scholar]
  26. Yao, X.; Liu, Y. Fast Evolutionary Programming. Evol. Program. 1996, 3, 451–460. [Google Scholar]
  27. Cuevas, E.; Echavarría, A.; Ramírez-Ortegón, M.A. An optimization algorithm inspired by the States of Matter that improves the balance between exploration and exploitation. Appl. Intell. 2014, 40, 256–272. [Google Scholar] [CrossRef]
  28. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  29. Mirjalili, S.; Mirjalili, S. Genetic algorithm. Evol. Algorithms Neural Netw. Theory Appl. 2019, 780, 43–55. [Google Scholar]
  30. Alsamia, S.; Koch, E.; Hamadi, H.S. Comparative study of metaheuristics on optimal design of gravity retaining wall. Pollack Period. 2023, 18, 35–40. [Google Scholar] [CrossRef]
  31. Karaboga, D. An Idea Based on Honey Bee Swarm for Numerical Optimization; Technical Report-tr06; Erciyes University, Engineering Faculty, Computer Engineering Department: Kayseri, Türkiye, 2005. [Google Scholar]
  32. Tereshko, V.; Loengarov, A. Collective decision making in honey-bee foraging dynamics. Comput. Inf. Syst. 2005, 9, 1. [Google Scholar]
  33. Georgia Tech College of Computing. The Waggle Dance of the Honeybee. YouTube, 2011. Available online: https://www.youtube.com/watch?v=bFDGPgXtK-U&t=312s (accessed on 26 October 2024).
  34. Goldberg, D.E.; Holland, J.H. Genetic algorithms and machine learning. Mach. Learn. 1988, 3, 95–99. [Google Scholar] [CrossRef]
  35. Heris, M. Artificial Bee Colony. 2015. Available online: https://www.mathworks.com/matlabcentral/fileexchange/52966-artificial-bee-colony-abc-in-matlab (accessed on 26 October 2024).
  36. MathWorks. Available online: https://www.mathworks.com/matlabcentral/ (accessed on 26 October 2024).
  37. Yarpiz. Optimization Algorithms. Available online: https://yarpiz.com/ (accessed on 26 October 2024).
  38. Coello, C.A.C. Constraint-handling using an evolutionary multiobjective optimization technique. Civ. Eng. Syst. 2000, 17, 319–346. [Google Scholar] [CrossRef]
  39. Deb, K. Optimal design of a welded beam via genetic algorithms. AIAA J. 1991, 29, 2013–2015. [Google Scholar] [CrossRef]
  40. Deb, K. An efficient constraint handling method for genetic algorithms. Comput. Methods Appl. Mech. Eng. 2000, 186, 311–338. [Google Scholar] [CrossRef]
  41. Mahdavi, M.; Fesanghary, M.; Damangir, E. An improved harmony search algorithm for solving optimization problems. Appl. Math. Comput. 2007, 188, 1567–1579. [Google Scholar] [CrossRef]
  42. Ragsdell, K.M.; Phillips, D.T. Optimal design of a class of welded structures using geometric programming. J. Eng. Ind. 1976, 98, 1021–1025. [Google Scholar] [CrossRef]
  43. Kohli, M.; Arora, S. Chaotic grey wolf optimization algorithm for constrained optimization problems. J. Comput. Des. Eng. 2018, 5, 458–472. [Google Scholar] [CrossRef]
Figure 1. The welded beam design problem.
Table 1. Probabilities of shifting a new solution based on its distance from the current population.
d | $P_d$ | Description | Condition: is $r \in [0,1] > P_d$?
Long distance | $P_d = e^{-1/\text{high value}}$ | High value of $P_d$ (close to 1) | There is a lower probability that the current solution shifts according to Equation (7).
Short distance | $P_d = e^{-1/\text{low value}}$ | Low value of $P_d$ (close to 0) | There is a higher probability that the current solution shifts according to Equation (7).
Table 2. Unimodal benchmarks.
Symbol | Description | Dimension | Range | fmin
F1 | $f(x) = \sum_{i=1}^{n} x_i^2$ | 30 | [−100, 100] | 0
F2 | $f(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | 30 | [−100, 100] | 0
F3 | $f(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$ | 30 | [−100, 100] | 0
F4 | $f(x) = \max_i \{ |x_i|,\ 1 \le i \le n \}$ | 30 | [−100, 100] | 0
F5 | $f(x) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$ | 30 | [−100, 100] | 0
F6 | $f(x) = \sum_{i=1}^{n} ([x_i + 0.5])^2$ | 30 | [−100, 100] | 0
F7 | $f(x) = \sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0, 1)$ | 30 | [−1.28, 1.28] | 0
Table 3. Multimodal benchmarks.
Symbol | Description | Dimension | Range | fmin
F8 | $f(x) = \sum_{i=1}^{n} -x_i \sin\left(\sqrt{|x_i|}\right)$ | 30 | [−500, 500] | −418.9829 × 5
F9 | $f(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10 \cos(2\pi x_i) + 10 \right]$ | 30 | [−5.12, 5.12] | 0
F10 | $f(x) = -20 \exp\left(-0.2 \sqrt{\tfrac{1}{n} \sum_{i=1}^{n} x_i^2}\right) - \exp\left(\tfrac{1}{n} \sum_{i=1}^{n} \cos(2\pi x_i)\right) + 20 + e$ | 30 | [−32, 32] | 0
F11 | $f(x) = \tfrac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\tfrac{x_i}{\sqrt{i}}\right) + 1$ | 30 | [−600, 600] | 0
F12 | $f(x) = \tfrac{\pi}{n} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$, where $y_i = 1 + \tfrac{x_i + 1}{4}$ and $u(x_i, a, k, m) = \begin{cases} k (x_i - a)^m & x_i > a \\ 0 & -a \le x_i \le a \\ k (-x_i - a)^m & x_i < -a \end{cases}$ | 30 | [−50, 50] | 0
F13 | $f(x) = 0.1 \left\{ \sin^2(3\pi x_1) + \sum_{i=1}^{n} (x_i - 1)^2 \left[ 1 + \sin^2(3\pi x_i + 1) \right] + (x_n - 1)^2 \left[ 1 + \sin^2(2\pi x_n) \right] \right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$ | 30 | [−50, 50] | 0
Table 4. Large-scale benchmarks.
Symbol | Description | Dimension | Range | fmin
F14 | $f(x) = \sum_{i=1}^{D} i x_i^2$ | 1000 | [−100, 100] | 0
F15 | $f(x) = \sum_{i=2}^{D} i (2 x_i^2 - x_{i-1})^2 + (x_1 - 1)^2$ | 1000 | [−65.535, 65.535] | 0
F16 | $f(x) = \sum_{i=1}^{D} \left( x_i^2 - 10 \cos(2\pi x_i) + 10 \right)$ | 1000 | [−5.12, 5.12] | 0
F17 | $f(x) = \sum_{i=1}^{D} \tfrac{(x_i - 100)^2}{4000} - \prod_{i=1}^{D} \cos\left(\tfrac{x_i - 100}{\sqrt{i}}\right) + 1$ | 1000 | [−600, 600] | 0
F18 | $f(x) = -20 \exp\left(-0.2 \sqrt{\tfrac{1}{D} \sum_{i=1}^{D} x_i^2}\right) - \exp\left(\tfrac{1}{D} \sum_{i=1}^{D} \cos(2\pi x_i)\right) + 20 + e$ | 1000 | [−32.768, 32.768] | 0
Table 5. Two-dimension benchmarks.
Symbol | Description | Dimension | Range | fmin
F19 | $f(x) = x_1^2 + 2 x_2^2 - 0.3 \cos(3\pi x_1) - 0.4 \cos(4\pi x_2) + 0.7$ | 2 | [−100, 100] | 0
F20 | $f(x) = (x_1 + 2 x_2 - 7)^2 + (2 x_1 + x_2 - 5)^2$ | 2 | [−10, 10] | 0
F21 | $f(x) = 0.26 (x_1^2 + x_2^2) - 0.48 x_1 x_2$ | 2 | [−10, 10] | 0
F22 | $f(x) = -\cos(x_1) \cos(x_2) \exp\left(-(x_1 - \pi)^2 - (x_2 - \pi)^2\right)$ | 2 | [−100, 100] | −1
F23 | $f(x) = (1.5 - x_1 + x_1 x_2)^2 + (2.25 - x_1 + x_1 x_2^2)^2 + (2.625 - x_1 + x_1 x_2^3)^2$ | 2 | [−4.5, 4.5] | 0
F24 | $f(x) = 0.5 + \dfrac{\sin^2(x_1^2 + x_2^2) - 0.5}{\left[ 1 + 0.001 (x_1^2 + x_2^2) \right]^2}$ | 2 | [−100, 100] | 0
F25 | $f(x) = 4 x_1^2 - 2.1 x_1^4 + \tfrac{1}{3} x_1^6 + x_1 x_2 + (-4 + 4 x_2^2) x_2^2$ | 2 | [−2, 2] | −1.03163
Table 6. Effect of applying the distance parameter to a specific section of the ABC algorithm (✓ = applied, X = not applied).
Function | Variable Size | Mean | SD | d on Employed | d on Onlooker
F1 | 100 | 1.0723 × 10^−239 | 0 | ✓ | ✓
F1 | 100 | 1.516 × 10^−115 | 8.308 × 10^−115 | ✓ | X
F1 | 100 | 6.1473 × 10^−2 | 1.0334 × 10^−1 | X | ✓
F1 | 500 | 8.6509 × 10^−97 | 4.7383 × 10^−96 | ✓ | ✓
F1 | 500 | 1.4291 × 10^−47 | 7.8273 × 10^−47 | ✓ | X
F1 | 500 | 4.575 × 10^−1 | 8.540 × 10^−1 | X | ✓
F1 | 10 | 7.9746 × 10^−83 | 4.3679 × 10^−82 | ✓ | ✓
F1 | 10 | 7.487 × 10^−30 | 4.101 × 10^−29 | ✓ | X
F1 | 10 | 7.552 × 10^−1 | 1.698 | X | ✓
Table 7. Evaluating AEABC on unimodal benchmarks.
Algorithm | Metric | F1 | F2 | F3 | F4 | F5 | F6 | F7
ALO | Best | 6.151 × 10^3 | 3.210 × 10^1 | 9.584 × 10^3 | 2.654 × 10^1 | 1.245 × 10^6 | 5.550 × 10^3 | 1.317
ALO | SD | 2.958 × 10^3 | 1.113 × 10^10 | 7.199 × 10^3 | 4.653 | 2.856 × 10^6 | 2.623 × 10^3 | 1.403
BOA | Best | 2.146 × 10^−16 | 1.149 × 10^−36 | 1.341 × 10^−7 | 1.655 × 10^−12 | 2.860 × 10^1 | 7.309 × 10^−1 | 7.975 × 10^−4
BOA | SD | 5.378 × 10^−15 | 9.932 × 10^−12 | 9.753 × 10^−7 | 1.216 × 10^−11 | 3.869 × 10^−2 | 4.699 × 10^−1 | 8.289 × 10^−4
GWO | Best | 8.996 × 10^−83 | 1.334 × 10^−48 | 5.896 × 10^−15 | 1.976 × 10^−19 | 2.496 × 10^1 | 2.355 × 10^−5 | 2.056 × 10^−4
GWO | SD | 7.811 × 10^−79 | 2.491 × 10^−46 | 4.247 × 10^−10 | 2.566 × 10^−8 | 8.810 × 10^−1 | 2.621 × 10^−1 | 4.478 × 10^−4
PSO | Best | 3.880 × 10^−10 | 1.952 × 10^−5 | 1.057 × 10^1 | 3.171 × 10^−1 | 1.238 × 10^1 | 1.142 × 10^−10 | 3.356 × 10^−2
PSO | SD | 2.004 × 10^−6 | 2.955 × 10^−3 | 1.032 × 10^1 | 1.735 × 10^−1 | 5.347 × 10^1 | 9.456 × 10^−7 | 3.545 × 10^−2
SCA | Best | 7.230 × 10^−7 | 2.613 × 10^−10 | 3.019 × 10^2 | 3.428 | 2.782 × 10^1 | 3.877 | 1.485 × 10^−3
SCA | SD | 2.565 × 10^−2 | 9.183 × 10^−6 | 2.114 × 10^3 | 8.788 | 6.807 × 10^2 | 7.101 × 10^−1 | 2.838 × 10^−2
WOA | Best | 1.343 × 10^−228 | 1.828 × 10^−142 | 4.146 × 10^2 | 1.579 × 10^−6 | 1.261 × 10^−2 | 1.222 × 10^−9 | 2.770 × 10^−5
WOA | SD | 0.0 | 2.481 × 10^−129 | 5.808 × 10^3 | 2.607 × 10^1 | 6.192 | 2.958 × 10^−9 | 1.383 × 10^−3
DDAO | Best | 4.980 × 10^−5 | 3.869 × 10^−3 | 1.104 × 10^−3 | 3.915 × 10^−3 | 2.901 × 10^1 | 6.439 | 3.980 × 10^−4
DDAO | SD | 1.090 × 10^1 | 1.291 | 4.291 × 10^1 | 3.344 × 10^−1 | 1.267 × 10^1 | 3.534 | 4.219 × 10^−2
ABC | Best | 7.255 | 3.950 × 10^−1 | 2.341 × 10^4 | 4.091 × 10^1 | 1.777 × 10^4 | 7.264 | 2.332 × 10^−1
ABC | SD | 2.119 × 10^1 | 2.410 × 10^−1 | 4.527 × 10^3 | 5.401 | 7.007 × 10^4 | 5.955 | 1.906 × 10^−1
AEABC | Best | 1.988 × 10^−173 | 7.755 × 10^−119 | 1.060 × 10^−34 | 3.258 × 10^−86 | 2.784 × 10^1 | 2.985 × 10^−1 | 1.309 × 10^−5
AEABC | SD | 1.271 × 10^−85 | 1.393 × 10^−105 | 7.106 × 10^−10 | 6.232 × 10^−64 | 2.309 × 10^−1 | 5.818 × 10^−2 | 8.189 × 10^−5
Table 8. Evaluating AEABC on multimodal benchmarks.
Algorithm | Metric | F8 | F9 | F10 | F11 | F12 | F13
ALO | Best | −5.537 × 10^3 | 2.337 × 10^2 | 1.381 × 10^1 | 5.618 × 10^1 | 9.461 × 10^3 | 1.381 × 10^6
ALO | SD | 4.409 × 10^1 | 2.110 × 10^1 | 9.181 × 10^−1 | 2.300 × 10^1 | 2.336 × 10^6 | 8.849 × 10^6
BOA | Best | −3.597 × 10^3 | 0.0 | 4.751 × 10^−12 | 0.0 | 1.675 × 10^−1 | 1.536
BOA | SD | 4.677 × 10^2 | 0.0 | 5.433 × 10^−12 | 0.0 | 6.462 × 10^−2 | 3.853 × 10^−1
GWO | Best | −7.369 × 10^3 | 0.0 | 7.994 × 10^−15 | 0.0 | 4.014 × 10^−6 | 1.017 × 10^−1
GWO | SD | 5.302 × 10^2 | 1.751 × 10^1 | 4.580 × 10^−15 | 1.061 × 10^−2 | 1.755 × 10^−2 | 1.664 × 10^−1
PSO | Best | −8.850 × 10^3 | 2.494 × 10^1 | 2.806 × 10^−6 | 2.452 × 10^−11 | 5.164 × 10^−13 | 1.584 × 10^−11
PSO | SD | 7.533 × 10^2 | 1.147 × 10^1 | 1.686 × 10^−1 | 9.501 × 10^−3 | 1.178 × 10^−8 | 4.987 × 10^−3
SCA | Best | −4.757 × 10^3 | 2.793 × 10^−7 | 7.387 × 10^−4 | 7.744 × 10^−7 | 4.028 × 10^−1 | 2.656
SCA | SD | 2.082 × 10^2 | 1.505 × 10^1 | 9.586 | 2.702 × 10^−1 | 3.481 × 10^1 | 4.884 × 10^4
WOA | Best | −1.257 × 10^4 | 0.0 | 8.882 × 10^−16 | 0.0 | 8.898 × 10^−4 | 3.633 × 10^−2
WOA | SD | 5.622 × 10^2 | 0.0 | 2.312 × 10^−15 | 2.177 × 10^−3 | 8.655 × 10^−3 | 9.132 × 10^−2
DDAO | Best | −5.214 × 10^3 | 1.089 × 10^−4 | 5.817 × 10^−2 | 6.800 × 10^−4 | 1.147 | 3.015
DDAO | SD | 3.388 × 10^2 | 4.450 | 9.099 × 10^−1 | 3.915 × 10^−1 | 3.337 × 10^−1 | 1.463
ABC | Best | −5.700 × 10^3 | 1.664 × 10^2 | 3.005 | 1.341 | 5.297 × 10^4 | 9.635 × 10^5
ABC | SD | 2.439 × 10^2 | 1.776 × 10^1 | 5.221 × 10^−1 | 1.804 × 10^−1 | 1.025 × 10^6 | 2.727 × 10^6
AEABC | Best | −6.428 × 10^3 | 0.0 | 8.882 × 10^−16 | 0.0 | 4.855 × 10^−2 | 8.790 × 10^−1
AEABC | SD | 4.076 × 10^2 | 0.0 | 0.0 | 0.0 | 2.540 × 10^−2 | 4.212 × 10^−1
Table 9. AEABC against its ancestor on large-scale optimization.
Function | ABC Mean | ABC SD | AEABC Mean | AEABC SD
F1 | 3.086 × 10^6 | 4.975 × 10^4 | 2.0596 × 10^−255 | 0
F14 | 4.616 × 10^−16 | 2.063 × 10^−16 | 4.091 × 10^−50 | 1.9 × 10^−49
F15 | 3.542 × 10^9 | 7.708 × 10^7 | 6.76 × 10^−1 | 4.081 × 10^−3
F16 | 1.68 × 10^4 | 2.132 × 10^2 | 0 | 0
F17 | 2.758 × 10^4 | 6.663 × 10^2 | 3.526 × 10^−1 | 1.024 × 10^−1
F18 | 8.881 × 10^−16 | 0 | 8.881 × 10^−16 | 0
Table 10. Statistical results of different metaheuristics and AEABC on the F1 and F16 benchmarks.
Function | Metric | PSO | BA | FF | MVO | KH | SSA | DDAO | AEABC
F1 | Mean | 1.35 × 10^3 | 3.93 × 10^4 | 1.15 × 10^−2 | 7.85 × 10^−1 | 5.75 × 10^−2 | 4.16 × 10^−8 | 0.086 | 0
F1 | SD | 6.42 × 10^2 | 1.07 × 10^4 | 4.32 × 10^−3 | 2.47 × 10^−1 | 5.03 × 10^−2 | 1.43 × 10^−7 | 0.087 | 0
F16 | Mean | 1.03 × 10^2 | 1.21 × 10^2 | 2.50 × 10^1 | 1.18 × 10^2 | 1.23 × 10^1 | 4.90 × 10^−7 | 0.180 | 0
F16 | SD | 2.47 × 10^1 | 3.93 × 10^1 | 6.95 | 3.39 × 10^1 | 5.41 | 1.50 × 10^−6 | 0.097 | 0
Table 11. AEABC on large-scale optimization problems.
F | Metric | GWO | PSO | GSA | DE | FEP | AEABC
F14 | Mean | 4.042 | 3.627 | 5.86 | 9.98 × 10^−1 | 1.22 | 9.98 × 10^−1
F14 | SD | 4.253 | 2.561 | 3.831 | 3.3 × 10^−16 | 5.6 × 10^−1 | 2.302 × 10^−13
F15 | Mean | 3.37 × 10^−4 | 5.77 × 10^−4 | 3.673 × 10^−3 | 4.5 × 10^−14 | 5.0 × 10^−4 | 5.793 × 10^−4
F15 | SD | 6.25 × 10^−4 | 2.22 × 10^−4 | 1.647 × 10^−3 | 3.3 × 10^−4 | 3.2 × 10^−4 | 7.948 × 10^−5
F16 | Mean | −1.032 | −1.032 | −1.032 | −1.032 | −1.03 | −1.032
F16 | SD | −1.032 | 6.25 × 10^−16 | 4.88 × 10^−16 | 3.1 × 10^−13 | 4.9 × 10^−7 | 2.868 × 10^−9
F17 | Mean | 3.979 × 10^−1 | 3.979 × 10^−1 | 3.979 × 10^−1 | 3.979 × 10^−1 | 3.98 × 10^−1 | 3.979 × 10^−1
F17 | SD | 3.979 × 10^−1 | 0.0 | 0.0 | 9.9 × 10^−9 | 1.5 × 10^−7 | 1.453 × 10^−11
F18 | Mean | 3.0 | 3.0 | 3.0 | 3.0 | 3.02 | 3.0
F18 | SD | 3.0 | 1.33 × 10^−15 | 4.17 × 10^−15 | 2 × 10^−15 | 1.1 × 10^−1 | 4.134 × 10^−4
Table 12. AEABC on small-scale optimization problems.
F | Metric | GWO | PSO | GSA | DE | FEP | AEABC
F19 | Mean | −3.863 | −3.863 | −3.863 | N/A | −3.86 | −3.862
F19 | SD | −3.863 | 2.58 × 10^−15 | 2.29 × 10^−15 | N/A | 1.4 × 10^−5 | 4.262 × 10^−4
F20 | Mean | −3.287 | −3.266 | −3.318 | N/A | −3.27 | −3.318
F20 | SD | −3.251 | 6.052 × 10^−2 | 2.308 × 10^−2 | N/A | 5.9 × 10^−2 | 5.025 × 10^−3
F21 | Mean | −1.015 × 10^1 | −6.865 | −5.955 | −1.015 × 10^1 | −5.52 | −1.007 × 10^1
F21 | SD | −9.14 | 3.02 | 3.737 | 2.5 × 10^−6 | 1.59 | 3.132 × 10^−1
F22 | Mean | −1.04 × 10^1 | −8.457 | −9.684 | −1.04 × 10^1 | −5.53 | −1.025 × 10^1
F22 | SD | −8.584 | 3.087 | 2.014 | 3.9 × 10^−7 | 2.12 | 6.643 × 10^−1
F23 | Mean | −1.053 × 10^1 | −9.953 | −1.054 × 10^1 | −1.054 × 10^1 | −6.57 | −1.021 × 10^1
F23 | SD | −8.559 | 1.783 | 2.6 × 10^−15 | 1.9 × 10^−7 | 3.14 | 1.103
N/A: Not available.
Table 13. AEABC on multimodal benchmarks.
F | MFO Mean | MFO SD | PSO Mean | PSO SD | GSA Mean | GSA SD | AEABC Mean | AEABC SD
F8 | −8.497 × 10^3 | 7.259 × 10^2 | −3.571 × 10^3 | 4.308 × 10^2 | −2.352 × 10^3 | 3.822 × 10^2 | −5.527 × 10^3 | 4.323 × 10^2
F9 | 8.460 × 10^1 | 1.617 × 10^1 | 1.243 × 10^2 | 1.425 × 10^1 | 3.100 × 10^1 | 1.366 × 10^1 | 0.0 | 0.0
F10 | 1.260 | 7.296 × 10^−1 | 9.168 | 1.569 | 3.741 | 1.713 × 10^−1 | 8.882 × 10^−16 | 0.0
F11 | 1.908 × 10^−2 | 2.173 × 10^−2 | 1.242 × 10^1 | 4.166 | 4.868 × 10^−1 | 4.979 × 10^−2 | 0.0 | 0.0
F12 | 8.940 × 10^−1 | 8.813 × 10^−1 | 1.387 × 10^1 | 5.854 | 4.634 × 10^−1 | 1.376 × 10^−1 | 4.981 × 10^−4 | 9.375 × 10^−5
F13 | 1.158 × 10^−1 | 1.930 × 10^−1 | 1.181 × 10^4 | 3.070 × 10^4 | 7.617 | 1.225 | 2.245 × 10^−1 | 1.736 × 10^−1
F | FPA Mean | FPA SD | SMS Mean | SMS SD | FA Mean | FA SD | GA Mean | GA SD
F8 | −8.087 × 10^3 | 1.553 × 10^6 | −3.943 × 10^3 | 4.042 × 10^6 | −3.662 × 10^3 | 2.142 × 10^2 | −6.331 × 10^3 | 3.326 × 10^2
F9 | 9.269 × 10^1 | 1.422 × 10^1 | 1.528 × 10^2 | 1.855 × 10^1 | 2.149 × 10^6 | 1.722 × 10^1 | 2.368 × 10^6 | 1.903 × 10^1
F10 | 6.845 | 1.250 | 1.913 × 10^1 | 2.385 × 10^−1 | 1.457 × 10^1 | 4.675 × 10^−1 | 1.785 × 10^1 | 5.311 × 10^−1
F11 | 2.716 | 7.277 × 10^−1 | 4.205 × 10^2 | 2.526 × 10^1 | 6.966 × 10^1 | 1.211 × 10^1 | 1.799 × 10^2 | 3.244 × 10^1
F12 | 4.105 | 1.043 | 8.743 × 10^6 | 1.406 × 10^6 | 3,684,008 | 1.721 × 10^5 | 3.413 × 10^7 | 1.893 × 10^6
F13 | 6.240 × 10^1 | 9.484 × 10^1 | 1.0 × 10^8 | 0.0 | 5.558 × 10^6 | 1.690 × 10^6 | 1.080 × 10^8 | 3.850 × 10^6
Table 14. Statistical results of AEABC and other metaheuristics on two-dimensional benchmarks.
Function | Metric | PSO | BA | MVO | KH | DDAO | AEABC
F19 | Best | 4.4298 × 10^−14 | 1.6438 | 1.0021 × 10^−5 | 2.8890 × 10^−8 | 9.64105 × 10^−7 | 0
F19 | SD | 2.7559 × 10^−10 | 1.1194 × 10^2 | 3.7081 × 10^−4 | 2.9457 × 10^−7 | 0.018695917 | 0
F20 | Best | 1.3482 × 10^−14 | 3.0619 × 10^−11 | 4.2336 × 10^−9 | 5.9289 × 10^−12 | 4.4529 × 10^−5 | 9.431 × 10^−7
F20 | SD | 2.3629 × 10^−10 | 5.4689 × 10^−10 | 6.5125 × 10^−7 | 3.1189 × 10^−10 | 0.01952182 | 5.491 × 10^−5
F21 | Best | 8.6209 × 10^−17 | 1.4036 × 10^−12 | 2.0125 × 10^−10 | 2.1689 × 10^−14 | 5.66862 × 10^−12 | 0.0
F21 | SD | 1.5676 × 10^−12 | 4.8089 × 10^−11 | 1.1549 × 10^−8 | 1.2854 × 10^−11 | 9.8777 × 10^−7 | 0.0
F22 | Best | −1 | −1 | −1 | −1 | −0.99843 | −1.0
F22 | SD | 2.8316 × 10^−11 | 1.8257 × 10^−1 | 1.8257 × 10^−1 | 1.8257 × 10^−1 | −0.80388 | 4.139 × 10^−5
F23 | Best | 1.3624 × 10^−14 | 2.8649 × 10^−11 | 9.2179 × 10^−9 | 4.4607 × 10^−13 | 9.06 × 10^−5 | 7.950 × 10^−9
F23 | SD | 1.6315 × 10^−1 | 3.1004 × 10^−1 | 1.9334 × 10^−1 | 3.0481 × 10^−10 | 0.003105 | 1.327 × 10^−5
F24 | Best | 0 | 9.7159 × 10^−3 | 2.1517 × 10^−6 | 1.2317 × 10^−7 | 1.7525 × 10^−9 | 0.0
F24 | SD | 4.7621 × 10^−3 | 1.0699 × 10^−1 | 3.3529 × 10^−3 | 4.1334 × 10^−6 | 1.6223 × 10^−6 | 0.0
F25 | Best | −1.03163 | −1.03163 | −1.03163 | −1.03163 | −1.03157 | −1.032
F25 | SD | 8.2049 × 10^−11 | 2.4904 × 10^−1 | 1.6383 × 10^−7 | 6.3582 × 10^−10 | −1.03066 | 2.120 × 10^−8
Table 15. Wilcoxon test results for F1–F5.
Algorithm | Metric | F1 | F2 | F3 | F4 | F5
ALO | Statistic | −6.653 | −6.653 | −6.653 | −6.653 | −6.653
ALO | p-value | 2.872 × 10^−11 | 2.872 × 10^−11 | 2.872 × 10^−11 | 2.872 × 10^−11 | 2.872 × 10^−11
BOA | Statistic | −6.653 | −6.653 | −6.653 | −6.653 | −6.638
BOA | p-value | 2.872 × 10^−11 | 2.872 × 10^−11 | 2.872 × 10^−11 | 2.872 × 10^−11 | 3.175 × 10^−11
GWO | Statistic | −6.653 | −6.653 | −5.130 | −6.653 | 5.736
GWO | p-value | 2.872 × 10^−11 | 2.872 × 10^−11 | 2.894 × 10^−7 | 2.872 × 10^−11 | 9.673 × 10^−9
PSO | Statistic | −6.653 | −6.653 | −6.653 | −6.653 | −2.218
PSO | p-value | 2.872 × 10^−11 | 2.872 × 10^−11 | 2.872 × 10^−11 | 2.872 × 10^−11 | 2.658 × 10^−2
SCA | Statistic | −6.653 | −6.653 | −6.653 | −6.653 | −6.017
SCA | p-value | 2.872 × 10^−11 | 2.872 × 10^−11 | 2.872 × 10^−11 | 2.872 × 10^−11 | 1.774 × 10^−9
WOA | Statistic | 6.653 | 6.653 | −6.653 | −6.653 | 6.653
WOA | p-value | 2.872 × 10^−11 | 2.872 × 10^−11 | 2.872 × 10^−11 | 2.872 × 10^−11 | 2.872 × 10^−11
DDAO | Statistic | −6.653 | −6.653 | −6.653 | −6.653 | −6.653
DDAO | p-value | 2.872 × 10^−11 | 2.872 × 10^−11 | 2.872 × 10^−11 | 2.872 × 10^−11 | 2.872 × 10^−11
ABC | Statistic | −6.653 | −6.653 | −6.653 | −6.653 | −6.653
ABC | p-value | 2.872 × 10^−11 | 2.872 × 10^−11 | 2.872 × 10^−11 | 2.872 × 10^−11 | 2.872 × 10^−11
Table 16. Wilcoxon test results for F6–F10.
Algorithm | Metric | F6 | F7 | F8 | F9 | F10
ALO | Statistic | −6.653 | −6.653 | 5.396315278 | −6.652991439 | −6.652991439
ALO | p-value | 2.872 × 10^−11 | 2.872 × 10^−11 | 6.80234 × 10^−8 | 2.87195 × 10^−11 | 2.87195 × 10^−11
BOA | Statistic | −6.653 | −6.653 | −6.652991439 | 0 | −6.652991439
BOA | p-value | 2.872 × 10^−11 | 2.872 × 10^−11 | 2.87195 × 10^−11 | 1 | 2.87195 × 10^−11
GWO | Statistic | 1.493 | −6.579 | 6.224243101 | −4.435327626 | −6.652991439
GWO | p-value | 1.354 × 10^−1 | 4.734 × 10^−11 | 4.83886 × 10^−10 | 9.19324 × 10^−6 | 2.87195 × 10^−11
PSO | Statistic | 6.653 | −6.653 | 6.224243101 | −6.652991439 | −6.652991439
PSO | p-value | 2.872 × 10^−11 | 2.872 × 10^−11 | 4.83886 × 10^−10 | 2.87195 × 10^−11 | 2.87195 × 10^−11
SCA | Statistic | −6.653 | −6.653 | −6.120752124 | −6.652991439 | −6.652991439
SCA | p-value | 2.872 × 10^−11 | 2.872 × 10^−11 | 9.31347 × 10^−10 | 2.87195 × 10^−11 | 2.87195 × 10^−11
WOA | Statistic | 6.653 | −5.219 | 6.652991439 | 0 | −4.878860388
WOA | p-value | 2.872 × 10^−11 | 1.800 × 10^−7 | 2.87195 × 10^−11 | 1 | 1.06701 × 10^−6
DDAO | Statistic | −6.653 | −6.653 | −5.677219361 | −6.652991439 | −6.652991439
DDAO | p-value | 2.872 × 10^−11 | 2.872 × 10^−11 | 1.36902 × 10^−8 | 2.87195 × 10^−11 | 2.87195 × 10^−11
ABC | Statistic | −6.653 | −6.653 | 1.907190879 | −6.652991439 | −6.652991439
ABC | p-value | 2.872 × 10^−11 | 2.872 × 10^−11 | 0.056495874 | 2.87195 × 10^−11 | 2.87195 × 10^−11
Table 17. Wilcoxon test results for F11–F13.
Algorithm | Metric | F11 | F12 | F13
ALO | Statistic | −6.653 | −6.653 | −6.653
ALO | p-value | 2.872 × 10^−11 | 2.872 × 10^−11 | 2.872 × 10^−11
BOA | Statistic | 0.000 | −6.653 | −2.927
BOA | p-value | 1.000 | 2.872 × 10^−11 | 3.419 × 10^−3
GWO | Statistic | −1.109 | 6.195 | 6.653
GWO | p-value | 2.675 × 10^−1 | 5.841 × 10^−10 | 2.872 × 10^−11
PSO | Statistic | −6.653 | 6.653 | 6.653
PSO | p-value | 2.872 × 10^−11 | 2.872 × 10^−11 | 2.872 × 10^−11
SCA | Statistic | −6.653 | −6.653 | −6.638
SCA | p-value | 2.872 × 10^−11 | 2.872 × 10^−11 | 3.175 × 10^−11
WOA | Statistic | −4.435 × 10^−1 | 6.653 | 6.653
WOA | p-value | 6.574 × 10^−1 | 2.872 × 10^−11 | 2.872 × 10^−11
DDAO | Statistic | −6.653 | −6.653 | −6.653
DDAO | p-value | 2.872 × 10^−11 | 2.872 × 10^−11 | 2.872 × 10^−11
ABC | Statistic | −6.653 | −6.653 | −6.653
ABC | p-value | 2.872 × 10^−11 | 2.872 × 10^−11 | 2.872 × 10^−11
Table 18. Comparing the AEABC optimum solution with other methods from [10] on welded beam design.
Algorithm | h | l | t | b | Optimum Cost
AEABC | 0.1071 | 5.4402 | 8.8660 | 0.1 | 0.8981
GWO | 0.2056 | 3.4783 | 9.0368 | 0.2057 | 1.7262
APPROX | 0.2444 | 6.2189 | 8.2915 | 0.2444 | 2.3815
GSA | 0.1821 | 3.8569 | 10.000 | 0.2023 | 1.8799
HS | 0.2442 | 6.2231 | 8.2915 | 0.2443 | 2.3807
GA [38] | N/A | N/A | N/A | N/A | 1.8245
Random | 0.4575 | 4.7313 | 5.0853 | 0.66 | 4.1185
GA [42] | N/A | N/A | N/A | N/A | 2.38
Simplex | 0.2792 | 5.6256 | 7.7512 | 0.2796 | 2.5307
GA [39] | 0.2489 | 6.1730 | 8.1789 | 0.2533 | 2.4331
David | 0.2434 | 6.2552 | 8.2915 | 0.2444 | 2.3841
N/A: not available.
Table 19. Comparing the performance of AEABC with other algorithms adopted by [43].
Algorithm | Worst | Mean | Best | SD
AEABC | 1.1505 | 1.0394 | 0.8981 | 0.0642
GWO | 2.9136 | 2.8594 | 1.9421 | 2.6908
CPSO | 1.7821 | 1.7488 | 1.7280 | 1.29 × 10^−2
CDe | N/A | 1.7681 | 1.7334 | N/A
GA4 | 1.9934 | 1.7926 | 1.7282 | 7.47 × 10^−2
CGWO | 2.4357 | 2.4289 | 1.7254 | 1.3578
SC | 6.3996 | 3.0025 | 2.3854 | 9.60 × 10^−1
UPSO | N/A | 2.8372 | 1.9219 | 0.683
GA3 | 1.785835 | 1.7719 | 1.7483 | 1.12 × 10^−2
N/A: Not available.
