1. Introduction
The problem of finding the global minimum of a multidimensional function occurs in various scientific areas and has wide applications. In this context, one seeks the absolute minimum of a function subject to some assumptions or constraints. The objective function is defined as $f: S \rightarrow R$, $S \subset R^{n}$, and the mathematical formulation of the global optimization problem is as follows:

$$x^{*} = \arg\min_{x \in S} f(x) \tag{1}$$

where the set $S$ is as follows:

$$S = [a_{1}, b_{1}] \otimes [a_{2}, b_{2}] \otimes \cdots \otimes [a_{n}, b_{n}]$$
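To make the notation concrete, the following minimal C++ fragment illustrates the setting of Equation (1): an objective function f over a box S and a point sampled uniformly from S. The sphere function and the bounds used here are placeholder assumptions for illustration only.

```cpp
// Illustrative setup for Equation (1); the objective and bounds are placeholders.
#include <cstdio>
#include <random>
#include <vector>

// Example objective: the sphere function, f(x) = sum of x_i^2.
double f(const std::vector<double> &x) {
    double s = 0.0;
    for (double xi : x) s += xi * xi;
    return s;
}

int main() {
    const int n = 3;                              // problem dimension
    std::vector<double> a(n, -10.0), b(n, 10.0);  // S = [a_1,b_1] x ... x [a_n,b_n]
    std::mt19937 gen(42);
    std::vector<double> x(n);
    for (int i = 0; i < n; i++)                   // draw x uniformly from S
        x[i] = std::uniform_real_distribution<double>(a[i], b[i])(gen);
    std::printf("f(x) = %f\n", f(x));
    return 0;
}
```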
Such problems frequently arise in sciences like physics, where genetic algorithms have proven effective in locating positions in magnetic plasmas and in developing optimization tools [1]. Additionally, the combined use of various optimization techniques enhances performance and stability in complex problems [2]. Furthermore, a hybrid approach that combines artificial physics techniques with particle swarm optimization has proven particularly effective in solving energy allocation problems [3]. In the field of chemistry, minimizing potential energy functions is crucial for understanding the ground states of molecular clusters and proteins [4]. Optimizing these functions has been used for accurate protein structure prediction, with results closely matching crystal structures [5]. Moreover, a computational framework combining variance-based methods with artificial neural networks has significantly improved accuracy and reduced computational cost in sensitivity analyses [6]. Even in the field of economics, optimization plays a crucial role: the particle swarm optimization (PSO) method has been shown to be highly effective in solving the economic dispatch problem in power systems, providing higher-quality solutions with greater efficiency than genetic algorithms [7]. Additionally, a multi-objective optimization approach for economic load dispatch and emission reduction in hydroelectric and thermal plants achieves optimal solutions that align with the user’s preferences and goals [8]. Finally, optimization is also applied in biological process models [9] and in medical data classification, achieving high accuracy in diagnosing diseases and analyzing genomes [10].
A series of methods have been proposed in the recent literature to handle problems described by Equation (1); they are usually divided into two categories: deterministic and stochastic methods. In the first category, the most common approach is the interval method [11,12], in which the set S is divided through a series of steps into subregions, and subregions that do not contain the global solution are discarded using predefined criteria. The second category contains stochastic techniques, which do not guarantee finding the global minimum but are easier to program, since they do not rely on assumptions about the objective function; they also constitute the vast majority of global optimization methods. Among them are methods based on considerations derived from physics, such as Simulated Annealing [13], Henry’s Gas Solubility Optimization (HGSO) [14], the Gravitational Search Algorithm (GSA) [15], and the Small World Optimization Algorithm (SWOA) [16]. A series of evolutionary techniques have also been suggested for global optimization problems, such as Genetic Algorithms [17], the Differential Evolution method [18,19], Particle Swarm Optimization (PSO) [20,21], Ant Colony Optimization (ACO) [22], the Bat Algorithm (BA) [23], the Whale Optimization Algorithm (WOA) [24], and the Grasshopper Optimization Algorithm (GOA) [25].
However, the above optimization methods require significant computational power and time. Therefore, parallel processing of these methods is essential, and various parallel approaches have recently been proposed. Specifically, one study details a parallel implementation of the Particle Swarm Optimization (PSO) algorithm, which improves performance for load-balanced problems but shows reduced efficiency for load-imbalanced ones; the study suggests using larger particle populations and asynchronous processing to enhance performance [26]. Another review provides a comprehensive overview of derivative-free optimization methods, which are useful for problems where objective and constraint functions are available only through a black-box or simulator interface; it focuses on recent developments and categorizes methods based on the assumptions made about the functions and their characteristics [27]. Lastly, the PDoublePop version 1.0 software implements parallel genetic algorithms with advanced features, such as an enhanced stopping rule and advanced mutation schemes; it allows the objective function to be coded in C++ or Fortran77 and has been tested on well-known benchmark functions with promising results [28]. Moreover, there are methods that utilize GPU architectures for solving complex optimization problems. In the case of the Traveling Salesman Problem (TSP), a parallel GPU implementation has been developed that achieves high performance at large scales, solving problems of up to 6000 cities with great efficiency [29]. Additionally, the multi-start model in local search algorithms has proven particularly effective when combined with GPUs, reducing computation time and improving solution quality for large and time-intensive problems [30]. Finally, a parallel algorithm using Peano curves for dimension reduction has demonstrated significant improvements in speed and efficiency when run on GPUs, compared to CPU-only implementations, for multidimensional problems with multiple local minima [31]. Parallel optimization is also applied across a broad range of applications, including the optimization of machine learning model parameters. GraphLab is a platform that enhances existing abstractions, such as MapReduce, for the parallel execution of machine learning algorithms with high performance and accuracy [32]. Additionally, the MARSAOP method for hyperparameter optimization uses multidimensional spirals as surrogates and dynamic coordinate search to achieve high-quality solutions at limited computational cost [33]. Lastly, one study proposes a processing-time estimation system that uses machine learning models to adapt to complex distributions and improve production scheduling, achieving a significant reduction in completion time [34].
The use of parallel optimization has brought significant advancements in solving complex control and design problems [35]. In one case, a parallel implementation of the Hierarchical Fair Competition Genetic Algorithm (HFCGA) has been used to design an optimized cascade controller for ball-and-beam systems. This approach automates controller parameter tuning, improving performance compared to traditional methods, with the parallel computations helping to avoid premature convergence to suboptimal solutions [36]. In another case, an advanced Teaching–Learning-Based Optimization (TLBO) method has been applied to optimize the design of hydrogen peroxide propulsion control systems. This method, which includes improvements in the teaching and learning phases as well as in the search process, has proven highly effective in addressing uncertainties in real-world design problems, enhancing both the accuracy and the convergence speed of the optimization process [37].
Similarly, parallel optimization has significant applications in energy and resource management. For pump and cooling system management, a two-stage method based on an innovative multidimensional optimization algorithm has been proposed, which reduces overall energy consumption and the discrepancies between pump and cooling system flow rates, achieving significant improvements in energy efficiency [38]. In energy network management, a coordinated scheduling optimization method has been proposed that simultaneously considers various energy sources and uncertainty conditions; this method, which uses the Competitive Swarm Optimization (CSO) algorithm adjusted on the basis of chaos theory, has proven 50% faster than other methods and effective in managing uncertainties and energy [39]. In topology optimization, parallel processing on CPUs and GPUs has drastically improved speed and energy consumption for complex problems, achieving up to 25-times-faster processing and reducing energy consumption by up to 93% [40]. Lastly, in building energy retrofitting, a multi-objective platform was used to evaluate and optimize upgrade strategies, demonstrating that removing subsidies and providing incentives for energy upgrades yield promising results [41].
In conclusion, parallel optimization has provided significant solutions to problems related to sustainable development. In increasing the resilience of energy systems, the development of strategies for managing severe risks, such as extreme weather and supplier disruptions, has been improved through parallel optimization, enabling safer and more cost-effective energy system operation [42]. In addressing groundwater contamination, parallel optimization has enabled faster and more effective design of remediation methods, significantly reducing computational budgets while ensuring better results than other methods [43]. Finally, in sustainable production, metaheuristic optimization algorithms have proven useful for improving production scheduling, promoting resource efficiency, and reducing environmental impacts, thereby contributing to the achievement of sustainable development goals [44].
By harnessing multiple computational resources, parallel optimization allows for the simultaneous execution of multiple algorithms, leading to faster convergence and improved performance. These computational resources can communicate with each other to exchange information and synchronize processes, thereby contributing to faster convergence towards common solutions. Additionally, leveraging multiple resources enables more effective handling of exceptions and errors, while the increased computational power available for conducting more trials or utilizing more complex models leads to enhanced performance [45]. Of course, this process requires the development of suitable algorithms and techniques to effectively manage and exploit the available resources. Each parallel optimization algorithm requires a coherent strategy for distributing the workload among the available resources, as well as an efficient method for collecting and evaluating results.
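As a rough sketch of these ideas (and not the specific algorithm proposed in this paper), the following C++/OpenMP fragment runs one independent stochastic search per thread and periodically exchanges information through a shared best solution; the random-walk search, the Rastrigin objective, and all parameter values are illustrative assumptions. Compile with -fopenmp.

```cpp
// Hedged sketch: independent searches per thread with periodic propagation of
// the best solution found so far. Not the method proposed in this paper.
#include <omp.h>
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

const double PI = 3.141592653589793;

double f(const std::vector<double> &x) {           // example objective: Rastrigin
    double s = 10.0 * x.size();
    for (double xi : x) s += xi * xi - 10.0 * std::cos(2.0 * PI * xi);
    return s;
}

int main() {
    const int n = 10, iters = 10000, exchangeEvery = 100;
    std::vector<double> bestX(n, 0.0);             // shared best point
    double bestY = 1e100;                          // shared best value

    #pragma omp parallel
    {
        std::mt19937 gen(1234 + omp_get_thread_num());
        std::uniform_real_distribution<double> U(-5.12, 5.12);
        std::normal_distribution<double> step(0.0, 0.1);
        std::vector<double> x(n);
        for (double &xi : x) xi = U(gen);          // random start inside S
        double y = f(x);

        for (int it = 1; it <= iters; it++) {
            // Simple random local move (bounds handling omitted for brevity).
            std::vector<double> trial = x;
            for (double &ti : trial) ti += step(gen);
            double ty = f(trial);
            if (ty < y) { x = trial; y = ty; }

            if (it % exchangeEvery == 0) {         // periodic propagation step
                #pragma omp critical
                {
                    if (y < bestY) { bestY = y; bestX = x; }
                    else { x = bestX; y = bestY; } // adopt the shared best
                }
            }
        }
    }
    std::printf("best value found: %.6f\n", bestY);
    return 0;
}
```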
Various researchers have developed such parallel techniques. One study develops and evaluates five different parallel Simulated Annealing (SA) algorithms for solving global optimization problems, comparing their performance across an extensive set of tests with a focus on various synchronization and information exchange approaches, and highlights their particularly noteworthy performance on functions with large search spaces where other methods have failed [46]. Additionally, other research focuses on a parallel implementation of the Particle Swarm Optimization (PSO) method, analyzing the performance of the parallel PSO algorithm on two categories of problems: analytical problems with low computational costs and industrial problems with high computational costs [47]. This research demonstrates that the parallel PSO method performs exceptionally well on problems with evenly distributed loads, while in cases of uneven loads, the use of asynchronous approaches and larger particle populations proves more effective. Another study develops a parallel implementation of a stochastic Radial Basis Function (RBF) algorithm for global optimization, which does not require derivatives and is suitable for computationally intensive functions; it compares the performance of the proposed method with other parallel optimization methods, noting that the RBF method achieves good results with one, four, or eight processors across various optimization problems [48]. Parallel techniques are also applied in real-time image processing, which is critical for applications such as video surveillance, diagnostic medicine, and autonomous vehicles. A related article examines parallel architectures and algorithms, identifying effective applications and evaluating the challenges and limitations of their practical implementation, with the aim of developing more efficient solutions [49]. Another study reviews parallel computing strategies for computational fluid dynamics (CFD), focusing on tools like OpenMP, MPI, and CUDA to reduce computational time [50]. Finally, a significant study explores parallel programming technologies for processing genetic sequences, presenting three main parallel computing models and analyzing applications such as sequence alignment, single-nucleotide-polymorphism calling, sequence preprocessing, and pattern detection [51].
Genetic Algorithms (GAs) are methods that can be easily parallelized, and several researchers have examined them thoroughly in the literature. For example, a universal parallel execution scheme for Genetic Algorithms, known as IIP (Independent and Identical Processing), has been presented; it achieves acceleration through the use of m processors, and its execution speed is calculated and compared on small-size problems and on the non-parametric Inverse Fractal Problem [52]. Similarly, another paper presents a distributed mechanism for improving resource protection in a digital ecosystem; this mechanism can be used not only for secure and reliable transactions but also to enhance collaboration among digital ecosystem community members in securing the environment, and it employs a Public Key Infrastructure to provide strong protection for access workflows [53]. Genetic Algorithms have also been used as a strategy for optimizing large and complex problems, specifically in wavelength selection for multi-component analysis; the study examines the effectiveness of the genetic algorithm in finding acceptable solutions in reasonable time and notes that the algorithm incorporates prior information to improve performance, based on mathematically grounded frameworks such as the schema theorem [54]. Additionally, another paper develops two genetic-algorithm-based algorithms to improve lifetime and energy consumption in mobile wireless sensor networks (MWSNs): the first is an improvement of the Unequal Clustering Genetic Algorithm, while the second combines the K-means Clustering Algorithm with Genetic Algorithms; both aim to adapt better to dynamic changes in network topology, thereby extending network lifetime and reducing energy consumption [55]. Lastly, Genetic Algorithms have promising applications across various medical specialties, including radiology, radiotherapy, oncology, pediatrics, cardiology, endocrinology, surgery, obstetrics and gynecology, pulmonology, infectious diseases, orthopedics, rehabilitation medicine, neurology, pharmacotherapy, and health care management. A related paper reviews applications of Genetic Algorithms in disease screening, diagnosis, treatment planning, pharmacovigilance, prognosis, and health care management, enabling physicians to envision potential applications of this metaheuristic method in their medical careers [56].
In this paper, a new optimization method is proposed which is a mixture of existing global optimization techniques running in parallel on a number of available computing units. Each technique is executed independently of the others, and periodically the optimal values retrieved from each are distributed to the rest of the computing units using the propagation techniques presented here. In addition, for the most efficient termination of the overall algorithm, intelligent termination techniques based on stochastic observations are used, suitably modified to adapt to the parallel computing environment. The contributions of this work are summarized as follows:
Periodic local search was integrated into the Differential Evolution (DE) and Particle Swarm Optimization (PSO) methods.
A mechanism for disseminating the optimal solution to all processing units (PUs) was added in each iteration of the methods.
The overall algorithm terminates based on the proposed termination criterion (a simplified illustration is sketched after this list).
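To make the third contribution more tangible, the sketch below shows one hypothetical variance-based instance of such a stochastic termination rule (the precise rule used in this work is described in Section 3): the best value is recorded at every generation, and the run stops once its standard deviation has halved relative to the generation of the last improvement.

```cpp
// Hypothetical variance-based stopping rule, for illustration only.
#include <cmath>
#include <cstdio>
#include <vector>

class VarianceStopRule {
    std::vector<double> history;              // best value per generation
    double sigmaAtLastImprovement = -1.0;
    double bestSoFar = 1e100;

    double sigma() const {                    // std. deviation of the history
        double mean = 0.0;
        for (double v : history) mean += v;
        mean /= history.size();
        double var = 0.0;
        for (double v : history) var += (v - mean) * (v - mean);
        return std::sqrt(var / history.size());
    }

public:
    bool shouldStop(double currentBest) {     // true when the run should end
        history.push_back(currentBest);
        if (currentBest < bestSoFar) {        // improvement: reset the baseline
            bestSoFar = currentBest;
            sigmaAtLastImprovement = sigma();
            return false;
        }
        return history.size() > 1 && sigma() <= sigmaAtLastImprovement / 2.0;
    }
};

int main() {
    VarianceStopRule rule;
    double improving[] = {5.0, 3.0, 2.0, 1.5, 1.4};
    int k = 0;
    for (; k < 5; k++) rule.shouldStop(improving[k]);  // improvement phase
    while (!rule.shouldStop(1.4)) k++;                 // stagnation phase
    std::printf("terminated after %d generations\n", k + 1);
    return 0;
}
```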
The following sections are organized as follows: In Section 2, the three main algorithms participating in the overall method are described. In Section 3, the parallelization of the three methods is described, along with the proposed mechanism for propagating the optimal solution to the remaining methods. In Section 4, the experimental setup and the experimental results are presented. Finally, in Section 5, the conclusions drawn from the current work are discussed.