Article

Hybrid Particle Swarm Optimization Algorithm Based on the Theory of Reinforcement Learning in Psychology

1 School of Business, Hunan University of Science and Technology, Xiangtan 411201, China
2 School of Artificial Intelligence, Hunan Institute of Engineering, Xiangtan 411104, China
* Author to whom correspondence should be addressed.
Systems 2023, 11(2), 83; https://doi.org/10.3390/systems11020083
Submission received: 8 January 2023 / Revised: 25 January 2023 / Accepted: 2 February 2023 / Published: 6 February 2023

Abstract

To more effectively solve the complex optimization problems that exist in nonlinear, high-dimensional, large-sample and complex systems, many intelligent optimization methods have been proposed. Among these algorithms, the particle swarm optimization (PSO) algorithm has attracted scholars’ attention. However, the traditional PSO can easily become trapped in the individual optimal solution, causing the optimization process to shift prematurely from global exploration to local exploitation. To solve this problem, in this paper, we propose a Hybrid Reinforcement Learning Particle Swarm Algorithm (HRLPSO) based on the theory of reinforcement learning in psychology. First, we used the reinforcement learning strategy to optimize the initial population in the population initialization stage; then, chaotic adaptive weights and adaptive learning factors were used to balance global exploration and local exploitation, and the individual optimal solution and the global optimal solution were refined using dimension learning. Finally, the improved reinforcement learning strategy and mutation strategy were applied to the traditional PSO to improve the quality of the individual optimal solution and the global optimal solution. The HRLPSO algorithm was tested on 12 benchmark functions as well as the CEC2013 test suite, and the results show that it can balance individual learning ability and social learning ability, verifying its effectiveness.

1. Introduction

In order to solve real-world problems in many fields more effectively, scholars model them mathematically; that is, they establish an optimization model [1]. In this mathematical modeling process, some problems prove difficult to model or solve accurately. To make such problems tractable for traditional methods, the objective usually needs to be transformed, which increases the complexity of the problem [2]. Intelligent optimization methods do not have this limitation and can solve the target model more conveniently [3,4]. Li et al. [5], Xue et al. [6] and Dokeroglu et al. [7] presented comprehensive surveys of state-of-the-art intelligent optimization schemes for feature selection, which is helpful for optimization performance. As a result, intelligent optimization methods have developed rapidly. They include the genetic algorithm (GA) [8], the artificial bee colony algorithm (ABC) [9], the simulated annealing algorithm (SA) [10], the particle swarm optimization (PSO) algorithm [11], and others. Among them, PSO has attracted scholars’ attention because of its simple structure and easy implementation [12].
PSO was first proposed by Kennedy and Eberhart [12]. The optimization performance of the original PSO was unexceptional. Later, scholars attempted to improve PSO by using a nonfixed inertia weight ω; the corresponding particle renewal formula was first proposed by Shi and Eberhart [13]. Subsequent scholars have carried out a great deal of research on how the optimization ability of PSO can be improved. PSO usually generates various potential solutions, called “particles”, randomly within the range of the solution of the optimization problem. In reference [14], in order to improve the quality of the initial particles, Tian et al. replaced the random mapping used to generate initial particles in PSO with logistic mapping. Chen et al. [15] first used random mapping to generate initial particles and then combined this method with a reinforcement learning strategy [16] to generate another batch of reinforced particles; after comparing the fitness values of the particles generated by the two methods, the particles with better fitness values are kept as the initial particles. Gao et al. [17] first initialized particles via sinusoidal mapping and then used a reinforcement learning strategy to generate a batch of reinforced particles, comparing the two batches to keep the particles closer to the optimal solution. In PSO, new particles are generated through its two core update formulas for velocity and displacement. In the velocity formula, the degree to which the velocity of the new particle is affected by the previous velocity is determined by the inertia weight ω, while the influence of the global optimal solution and the individual optimal solution is controlled by the acceleration coefficients c1 and c2. Therefore, ω, c1 and c2 have a great influence on the final optimization results. To this end, the strategies used to improve ω include the linear strategy [13], the nonlinear strategy [18], the fuzzy rule [19], the chaotic strategy [15], etc. With regard to the acceleration coefficients, variable acceleration coefficients [20] and fixed-value acceleration coefficients [21] are used. Other scholars have improved other parts of the update formulas. For example, Xu et al. proposed a dimension learning strategy to improve the individual optimal solution: the value of each dimension of each individual optimal solution is replaced, one by one, by the value of the corresponding dimension of the global optimal solution; if the effect is positive, the replaced value is retained, and if not, the original value is kept [3]. Liang et al. proposed a comprehensive learning strategy that removes the social learning term from the velocity update formula of classical PSO so that the remaining individual optimal solutions can draw on the historical individual optimal solutions of other particles, giving particles the opportunity to learn from all of the individual optimal solutions [22]. Li et al. combined the comprehensive learning strategy with a mutation strategy to improve the optimization ability of PSO [23]. Mendes et al. established a velocity update strategy in which the particle velocity update depends not only on the historical optimal solution of the particle itself, but also on the historical optimal solutions of all other particles [24].
Some scholars have applied a mutation strategy to the positions of particles to help them jump out of local optimal solutions. After updating the historical individual optimal particles and the historical global optimal particles in PSO, Wang et al. used a mutation strategy to mutate them [25]. This mutation strategy includes Cauchy, Levy, and Gaussian mutations, and a roulette selection mechanism is then used to select the mutation factor [26]. Li et al. performed a mutation operation on the global optimal solution when improving PSO, with the mutation factor generated from the difference between two random particles in the population [23]. These works represent the main improvements made to the PSO algorithm itself, while other scholars have combined PSO with other algorithms to form better hybrids. For example, in reference [27], PSO and the gravitational search algorithm (GSA) were combined into a hybrid algorithm, with the aim of combining the local exploitation performance of the GSA and the global exploration performance of PSO into a complementary algorithm. PSO can also be hybridized with the sine cosine algorithm [28], the genetic algorithm [29], etc. However, these modified PSOs are still prone to becoming trapped in an individual optimal solution, causing the optimization process to shift prematurely from global exploration to local exploitation.
In summary, the main challenge for the PSO algorithm is to improve both its local exploitation ability and its global exploration ability, even when it is combined with various other algorithms; otherwise, the optimization process shifts prematurely from global exploration to local exploitation.
To improve the optimization performance, in this paper we propose a Hybrid Reinforcement Learning Particle Swarm Algorithm (HRLPSO) based on the theory of reinforcement learning in psychology; like PSO itself, the algorithm relies on the cooperation of a swarm of particles that are updated in parallel. The main work of this paper is summarized as follows:
(1)
A Hybrid Reinforcement Learning Particle Swarm Algorithm was proposed. To enhance the optimization capability of HRLPSO, five strategies were applied to improve the traditional PSO in this work: (i) an opposition-based learning strategy was combined with random mapping to generate the initial population; (ii) cubic mapping and adaptive strategies were combined and applied to the inertia weight; (iii) the learning factors c1 and c2 were controlled to vary nonlinearly within a certain range; (iv) a dimension learning strategy was applied to the optimal solutions; and (v) Cauchy and Gaussian mutation strategies were applied to the optimal solutions to increase the diversity of the solutions.
(2)
The results regarding standard functions show that the proposed HRLPSO strategy works well in both stand-alone and ensemble applications, and the results regarding the CEC2013 test suite further demonstrate the good optimization capability of HRLPSO.
(3)
Compared with the existing schemes, the main contributions of the proposed HRLPSO are as follows: (i) the theory of reinforcement learning in psychology is applied for the first time, and an opposition-based learning strategy is used to generate the initial population of the PSO; (ii) unlike traditional PSO variants, which only use a few hybrid methods, the proposed HRLPSO fully considers improvement measures at each stage, and the five hybrid methods stated above in (1) are applied to improve the optimization performance.

2. Particle Swarm Optimization Algorithm

The particle swarm optimization algorithm is an evolutionary algorithm. The algorithm first generates a set of “solutions” within the approximate range of the solution of the optimization problem, that is, “particles” Xi = (xi1, xi2, …, xiD). The value of i is an integer from 1 to N, N is the number of particles, and D is the dimension of particles. Then, by comparing the corresponding objective function values of these particles in the optimization problem, the historical individual optimal solution Pbesti = (pbesti1, pbesti2, …, pbestiD) and the historical global optimal solution Gbest = (gbest1, gbest2, …, gbestD) are obtained. The new particles are updated using the following formula:
v_{(i+1)d} = ω·v_{id} + c_1·rand()·(pbest_{id} − x_{id}) + c_2·rand()·(gbest_d − x_{id}),  (1)
x_{(i+1)d} = x_{id} + v_{(i+1)d},  (2)
In Equation (1), v represents the velocity of particles, and all the velocity vectors are represented by Vi = (vi1, vi2, …, viD). The values of c1 and c2 are weight factors that control particles’ individual learning and social learning, and ω is the inertia weight that controls the influence of the previous particle velocity on the updated particle velocity.
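For concreteness, the following minimal Python sketch implements the update of Equations (1) and (2) for a minimization problem; the sphere objective, the search bounds, and the parameter values here are illustrative assumptions rather than the settings used in this paper.

```python
import numpy as np

def pso(fitness, dim=30, n_particles=30, iters=1000,
        lb=-100.0, ub=100.0, w=0.7298, c1=2.0, c2=2.0):
    """Minimal PSO loop following Equations (1) and (2)."""
    rng = np.random.default_rng(0)
    x = rng.uniform(lb, ub, (n_particles, dim))      # particle positions X_i
    v = np.zeros((n_particles, dim))                 # particle velocities V_i
    pbest = x.copy()                                 # historical individual best positions
    pbest_fit = np.array([fitness(p) for p in pbest])
    gbest = pbest[pbest_fit.argmin()].copy()         # historical global best position
    gbest_fit = pbest_fit.min()

    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Equation (1): inertia term + cognitive term + social term
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        # Equation (2): move each particle with its new velocity
        x = np.clip(x + v, lb, ub)
        fit = np.array([fitness(p) for p in x])
        better = fit < pbest_fit                     # update individual bests
        pbest[better], pbest_fit[better] = x[better], fit[better]
        if pbest_fit.min() < gbest_fit:              # update global best
            gbest, gbest_fit = pbest[pbest_fit.argmin()].copy(), pbest_fit.min()
    return gbest, gbest_fit

# Usage: minimize the sphere function, an illustrative benchmark
best, best_fit = pso(lambda p: float(np.sum(p ** 2)))
```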

3. Hybrid Reinforcement Learning Particle Swarm Optimization Algorithm

3.1. Initial Population Based on Positive Reinforcement Learning

Reinforcement theory, on which our initial population strategy is based, was proposed by Skinner, an American psychologist and behavioral scientist and one of the founders of neo-behaviorism. He believed that people or animals display certain behaviors that act on the environment in order to achieve a certain purpose. When the consequences of such behavior are beneficial to the individual, the behavior is repeated in the future; when they are unfavorable, the behavior weakens or disappears. Positive or negative reinforcement can thus be used to change the consequences of behavior and thereby modify it. This is reinforcement theory, also known as behavior modification theory [5]. The convergence speed and accuracy of the particle swarm optimization algorithm are easily affected by the quality of the initial population. In order to improve the quality of the initial population, reinforcement learning is applied to the population initialization process.
In the optimization process of various algorithms, random individuals are generated within the range of solutions as potential solutions and then continuously approach the optimal solution through various iterative mechanisms. These algorithms can be improved so that they approach the optimal solution faster and more accurately. In this study, reinforcement learning was applied to the algorithm. Reinforcement learning [12] is defined as follows:
Suppose a real number x_{rn} ∈ [A, B]; its opposite number x_{on} is defined as follows:
x_{on} = A + B − x_{rn},  (3)
The remaining definitions build on the one above. Applying it to the particle positions of an algorithm such as PSO, for a particle X_i^{rn} = (x_{i1}^{rn}, x_{i2}^{rn}, …, x_{iD}^{rn}), the enhanced (opposite) particle is X_i^{on} = (x_{i1}^{on}, x_{i2}^{on}, …, x_{iD}^{on}), where x_{ij}^{rn} ∈ [A_i, B_i], and
x_{ij}^{on} = A_i + B_i − x_{ij}^{rn},  (4)
Then, by comparing the fitness values of X_i^{rn} and X_i^{on} under the objective function f(x), the particles with the better fitness values are retained.
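The following is a minimal sketch of this initialization step, assuming scalar bounds for simplicity; the bounds, seed, and sphere objective are illustrative assumptions.

```python
import numpy as np

def reinforced_init(fitness, n_particles, dim, lb, ub, seed=0):
    """Population initialization sketch: random particles X^rn plus their
    opposite particles X^on from Equation (4); the fitter of each pair is kept.
    Scalar bounds lb/ub are used here for simplicity."""
    rng = np.random.default_rng(seed)
    x_rn = rng.uniform(lb, ub, (n_particles, dim))   # randomly generated particles
    x_on = lb + ub - x_rn                            # opposite ("reinforced") particles
    fit_rn = np.apply_along_axis(fitness, 1, x_rn)
    fit_on = np.apply_along_axis(fitness, 1, x_on)
    keep_opposite = fit_on < fit_rn                  # compare fitness values pairwise
    return np.where(keep_opposite[:, None], x_on, x_rn)

# Example with illustrative bounds [-100, 100] and the sphere function
population = reinforced_init(lambda p: float(np.sum(p ** 2)), 30, 30, -100.0, 100.0)
```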

3.2. Chaos Adaptive Inertia Weight

The optimization ability of PSO can be effectively improved by reasonably setting the change in the inertia weight coefficient. It has been proved in [9,26] that a linear decline in inertia weight within a certain range can effectively enhance the performance of PSO. The linear decline formula is
ω = ω_{max} − ((ω_{max} − ω_{min}) / maxgen)·i,  (5)
where ω is the value of the inertia weight coefficient under the current number of iterations, ωmax/ωmin is the maximum/minimum value of the inertia weight coefficient, i is the current number of iterations, and maxgen is the maximum number of iterations. At present, the most commonly used ωmax/ωmin values in this formula are 0.9/0.4, respectively. In this study, cubic mapping was applied to linearly decreasing weight coefficients as follows [27]:
x_{n+1} = a·x_n^3 + (1 − a)·x_n,  (6)
where x_n denotes the n-th chaotic state in the range [−1, 1]; the initial value x_0 cannot be 0; and a is the bifurcation coefficient in the semi-open interval (0, 4]. As a increases from zero, the number of fixed points in Figure 1, the bifurcation graph generated by Equation (6), doubles from 1 to 2, then to 4, and so on up to 2^n. This period-doubling behavior remains bounded and stable, but as a approaches 3.598076211 the period becomes infinite, i.e., aperiodic. When a lies in the range [3.598076211, 4], the chaotic state occurs, and the system becomes unstable when a is greater than 4, as depicted in Figure 1, where the different random numbers are displayed in different colors.
The absolute value of the mapped fluctuation is confined to a range obtained through repeated experimental parameter tuning and is given as follows:
V(i) = Max − (i / maxgen)·Max,  (7)
where V(i) represents the absolute value of the fluctuation of the mapping at the current iteration, and Max is the absolute value of the fluctuation at the first iteration. Combining this with the cubic map forms a linearly decreasing mixed disturbance, with the combined formula as follows:
C(i) = x(i)·V(i),  (8)
Finally, the chaotic adaptive inertia weight is obtained by adding this disturbance to the linearly decreasing weight of Equation (5):
ω(i) = ω(i) + C(i),  (9)
The whole process is shown in Figure 2, where the variables are depicted as blue curves.
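A minimal sketch of this weight schedule is given below. The values of w_max/w_min, a, and Max follow the HRLPSO row of Table 1, while the non-zero cubic-map seed x0 is an illustrative assumption.

```python
import numpy as np

def chaotic_adaptive_weights(maxgen, w_max=0.9, w_min=0.6,
                             a=4.0, amp_max=0.05, x0=0.3):
    """Chaotic adaptive inertia weight sketch (Equations (5)-(9)).
    w_max/w_min, a and Max (amp_max) follow the HRLPSO row of Table 1;
    the non-zero seed x0 of the cubic map is an illustrative assumption."""
    w = np.empty(maxgen)
    x = x0
    for i in range(maxgen):
        x = a * x ** 3 + (1.0 - a) * x                   # Equation (6): cubic map in [-1, 1]
        w_linear = w_max - (w_max - w_min) * i / maxgen  # Equation (5): linear decrease
        amplitude = amp_max - amp_max * i / maxgen       # Equation (7): shrinking fluctuation
        w[i] = w_linear + x * amplitude                  # Equations (8)-(9): add the disturbance
    return w

# Example: inertia weights for a 1000-iteration run
weights = chaotic_adaptive_weights(1000)
```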

3.3. Adaptive Learning Factor

Research on learning factors usually focuses on two aspects. On the one hand, the learning factors can be set to fixed constants; the most typical example is the original PSO algorithm, in which both learning factors are set to 2 [1]. On the other hand, the learning factors can be made adaptive: their values are confined to a certain range and change with the number of iterations. In the typical research literature [11,16,28,29], this value increases or decreases linearly or nonlinearly between 0.5 and 2.5 as the number of iterations changes. This study adopts adaptive learning factors, with the following formulas:
c_1(i) = α × [1 − (1 − i / maxgen)^2] + β,  (10)
c_2(i) = α × {1 − [1 − (1 − i / maxgen)^2]} + β,  (11)
where α = 2, β = 0.5. The iterative curve of the learning factors is shown in Figure 3.
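The short sketch below evaluates the two factors, assuming the reconstruction of Equations (10) and (11) given above; with α = 2 and β = 0.5 both factors stay within [0.5, 2.5] and vary complementarily over the run.

```python
def learning_factors(i, maxgen, alpha=2.0, beta=0.5):
    """Adaptive learning factors sketch, assuming the reconstruction of
    Equations (10) and (11) given above, with alpha = 2 and beta = 0.5."""
    t = 1.0 - i / maxgen
    c1 = alpha * (1.0 - t ** 2) + beta          # Equation (10)
    c2 = alpha * (1.0 - (1.0 - t ** 2)) + beta  # Equation (11)
    return c1, c2

# Endpoints of the schedule: (0.5, 2.5) at the first iteration, (2.5, 0.5) at the last
print(learning_factors(0, 1000), learning_factors(1000, 1000))
```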

3.4. Update Strategy

3.4.1. Dimension Learning

Xu et al. proposed a dimension learning strategy. The principle of this strategy is to replace the value of each dimension of the historical individual optimal solution with the value of the corresponding dimension of the historical global optimal solution. If the objective function value of the optimization problem corresponding to the replaced historical individual optimal solution is better, the value of the replaced dimension will be retained [4]. The advantage of this work is that the best solution is selected from the historical individual optimal solution with the reinforcement learning strategy, and it is compared with the historical global optimal solution, improving the historical global optimal solution. The updated formula is as follows:
v_{(i+1)d} = ω·v_{id} + c_1·rand()·(pbest_{id}^{dl} − x_{id}) + c_2·rand()·(gbest_d^{dl} − x_{id}),  (12)
Pbest_i^{dl} and Gbest^{dl} represent the historical individual optimal solution and the historical global optimal solution of the reinforcement learning strategy, respectively.
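The following is a minimal per-vector sketch of the dimension learning step described above: each dimension of the individual best is replaced, one at a time, by the corresponding dimension of the global best, and the replacement is kept only if the fitness improves. Minimization, a greedy in-order sweep, and the sphere objective in the example are assumptions.

```python
import numpy as np

def dimension_learning(pbest, gbest, fitness):
    """Dimension learning sketch: borrow one dimension at a time from the
    global best and keep the replacement only if the fitness value improves."""
    improved = np.array(pbest, dtype=float).copy()
    best_fit = fitness(improved)
    for d in range(improved.size):
        trial = improved.copy()
        trial[d] = gbest[d]                 # borrow dimension d from the global best
        trial_fit = fitness(trial)
        if trial_fit < best_fit:            # keep the replaced dimension only if it helps
            improved, best_fit = trial, trial_fit
    return improved

# Example on the sphere function with illustrative 5-dimensional vectors
pb = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
gb = np.array([0.1, 0.2, -0.1, 0.0, 0.3])
better_pb = dimension_learning(pb, gb, lambda p: float(np.sum(p ** 2)))
```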

3.4.2. Mutation

The PSO algorithm has inherent defects and can easily become trapped in local optima. Particle mutation is an effective strategy to alleviate this situation. The random numbers drawn from a Gaussian distribution are mainly concentrated near 0, meaning that Gaussian mutation is suitable for particle exploration. Compared with the Gaussian distribution, the numbers randomly generated by the Cauchy distribution are farther from 0, meaning that Cauchy mutation is suitable for particle exploitation. In our work, when a particle of the PSO algorithm became trapped in a local optimum, we simultaneously carried out Gaussian mutation and Cauchy mutation on it and adopted the mutation that produced the better result. The mutation formula is as follows:
p_{id}^{dlm} = p_{id}^{dl} + mutation_d(),  (13)
p_{gd}^{dlm} = p_{gd}^{dl} + mutation_d(),  (14)
where mutation_d() is the mutation factor.
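A minimal sketch of this mutation step is shown below. The mutation scale, the random seed, and the fallback to the unmutated solution when neither mutant improves it are illustrative assumptions; the paper itself only states that the better of the two mutation results is adopted.

```python
import numpy as np

def mutate_best(solution, fitness, scale=1.0, seed=0):
    """Mutation sketch for Equations (13) and (14): a Gaussian mutant and a
    Cauchy mutant are generated and the candidate with the better fitness
    value is adopted."""
    rng = np.random.default_rng(seed)
    gaussian = solution + rng.normal(0.0, scale, solution.shape)     # Gaussian mutation
    cauchy = solution + scale * rng.standard_cauchy(solution.shape)  # Cauchy mutation
    candidates = [solution, gaussian, cauchy]
    fits = [fitness(c) for c in candidates]
    return candidates[int(np.argmin(fits))]                          # keep the best result

# Example: mutate a historical global best on the sphere function
gbest = np.full(5, 0.2)
gbest_mutated = mutate_best(gbest, lambda p: float(np.sum(p ** 2)))
```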

4. Experimental Setup

In this study, classical test functions were used to test the performance of the algorithm, including seven unimodal functions (F1–F7) and five multimodal functions (F8–F12) [30]. When comparing HRLPSO with the other four algorithms, the population size was set to 30, and each algorithm optimized each test function 20 times. When comparing the performance of each improvement separately, the number of iterations was 1000; when comparing the performance of HRLPSO with the other four algorithms, the number of iterations was 10,000. The maximum speed limit was consistent with the range of the test function. In addition, the value reported for each algorithm is the average of the final results over the 20 optimization runs, with the best values displayed in bold for easy observation. In the original literature, CIPSO was applied directly to engineering problems and its performance was analyzed based on the results of those problems, whereas CLPSO and DLPSO were mainly assessed with standard test functions. The parameter settings of all the algorithms in this work are shown in Table 1.
Figure 4 displays the flow chart of HRLPSO, in which Fit is the fitness value of the solution. As the test functions are minimization problems, the solution with the smaller fitness value was taken as the better solution when comparing fitness values.

5. Discussion

5.1. Test Results of the PSO Variants under Benchmark

Taking the 12 benchmark functions as experimental objects, we compared the optimization results of HRLPSO and the four other algorithms, using 10,000 iterations. The results of the comparison are shown in Table 2. For the four algorithms other than HRLPSO, the global optimal value of 0 could be obtained only on a few functions; even the original PSO obtained the global optimal value of 0 on function F4. However, the global optimal results of HRLPSO on functions F1, F2, F3, F4, F6, F8, and F10 were all 0, and the standard deviation was also 0, which shows that HRLPSO obtained the optimal value of 0 every time it was run on these test functions, reflecting its better global optimization ability. Moreover, HRLPSO ranked first on all 12 test functions, as well as in the average ranking and final ranking. In the table, F represents the function name, D represents the dimension of the test function, Mean represents the average value of the objective function, and S.D. represents the standard deviation of the objective function value.
The average evolution curves for the 12 test functions are shown in Figure 5. Among the 12 evolution curves, the final convergence accuracy of CLPSO on test functions F1, F2, F3, F6, F7, F8, and F10 was better than that of PSO, but its accuracy was worse than that of PSO at 1000 iterations. Although HRLPSO had not yet converged at 1000 iterations on test functions F1, F2, F3, F4, and F6, it had already achieved good accuracy by that point; it converged within 1000 iterations on test functions F8, F9, and F10, and only converged after more than 1000 iterations on test functions F5, F6, F7, F11, and F12. Therefore, in this test function experiment, the number of iterations was set to 10,000. Among the 12 evolution curves, HRLPSO had the highest convergence accuracy. Furthermore, as shown in the figures, HRLPSO had the fastest convergence speed on unimodal functions F1, F2, F3, and F4 and multimodal functions F8, F9, and F10. Combining these results with the preceding analysis of convergence accuracy, it can be concluded that HRLPSO has both good convergence accuracy and good convergence speed.
Table 3 presents a quantitative comparison of the performance indicators of the five algorithms, giving the average computational time and the average rank over 30 runs on the standard test functions F1~F12. From Table 3, it can be seen that the average rank of HRLPSO is the best among the five algorithms. Meanwhile, the average computational time of HRLPSO is slightly shorter than that of CLPSO and DLPSO and close to that of the standard PSO and CIPSO. Taken together, these indicators show that HRLPSO performs best overall.

5.2. Test Results of the PSO Variants under CEC2013 Test Suite

The optimization performance of HRLPSO was verified in the previous experiments using 12 benchmark test functions. To make the optimization capability of HRLPSO more convincing, in this section, the experimental results of HRLPSO and the other four algorithms under the CEC2013 test suite are provided; see reference [30] for the specific test suite. The experimental parameters were set to the same values used in the previous experiments. In order to distinguish them from the previous 12 benchmark test functions, the 28 functions in the CEC2013 test suite were renumbered by adding 12 to their indices (F13–F40).
The optimization results of the five algorithms under the CEC2013 test suite are shown in Table 4 and show that the combined ranking of the five algorithms differed from the combined ranking under the 12 benchmark test functions. The relative order of HRLPSO, DLPSO, CLPSO, and PSO remained unchanged, with these algorithms in first, third, fourth, and last place, respectively, while CIPSO ranked second overall; this reflects the fact that no algorithm can achieve the best results on every optimization problem. Taken together, however, the optimization results of the five algorithms under the CEC2013 test suite still show that HRLPSO has excellent optimization capabilities.

6. Conclusions

In order to improve the optimization ability of PSO, five improvement strategies were applied to the PSO algorithm. The reinforcement learning strategy from psychology was applied to the random generation of the initial population to retain better particles. The combination of cubic mapping and an adaptive strategy was applied to ω, which provides the advantages of chaotic mapping and adaptivity at the same time. An adaptive strategy was used to adjust c1 and c2 to balance the individual learning ability and social learning ability of the algorithm. The dimension learning strategy was applied to improve the convergence speed and accuracy of the algorithm. Finally, Cauchy and Gaussian mutation strategies were applied to the historical individual optimal solution and the historical global optimal solution, retaining better solutions and helping the algorithm jump out of local optima. The algorithm and the existing strategies were verified using 12 benchmark functions, and the experimental results demonstrate the effectiveness and good optimization ability of the proposed strategies.
Future work will further refine the HRLPSO algorithm and its parameters so that it can be applied to complex economic models.

Author Contributions

Conceptualization, W.H. and Y.L.; methodology, W.H.; software, X.Z.; validation, W.H. and X.Z.; formal analysis, W.H.; investigation, W.H.; resources, Y.L.; writing—original draft preparation, W.H.; writing—review and editing, X.Z.; supervision, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Social Science Foundation of China, grant number 17ZDA046; the National Natural Science Foundation of China (NSFC), grant number 62173134; and the key scientific research project of Hunan Province, grant numbers 21A0452 and HNJG-2021-0168.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Sheng, X.; Lan, K.; Jiang, X.; Yang, J. Adaptive Curriculum Sequencing and Education Management System via Group-Theoretic Particle Swarm Optimization. Systems 2023, 11, 34.
2. Wang, R.; Hao, K.; Chen, L.; Wang, T.; Jiang, C. A novel hybrid particle swarm optimization using adaptive strategy. Inf. Sci. 2021, 579, 231–250.
3. Li, T.; Liu, Y.; Chen, Z. Application of Sine Cosine Egret Swarm Optimization Algorithm in Gas Turbine Cooling System. Systems 2022, 10, 201.
4. Shi, L.; Cheng, Y.; Shao, J.; Sheng, H.; Liu, Q. Cucker-Smale flocking over cooperation-competition networks. Automatica 2022, 135, 109988.
5. Li, J.; Cheng, K.; Wang, S.; Morstatter, F.; Trevino, R.P.; Tang, J.; Liu, H. Feature Selection: A Data Perspective. ACM Comput. Surv. 2016, 50, 94.
6. Xue, B.; Zhang, M.; Browne, W.N.; Yao, X. A Survey on Evolutionary Computation Approaches to Feature Selection. IEEE Trans. Evol. Comput. 2016, 20, 606–626.
7. Dokeroglu, T.; Deniz, A.; Kiziloz, H.E. A comprehensive survey on recent metaheuristics for feature selection. Neurocomputing 2022, 494, 269–296.
8. Schockenhoff, F.; Zähringer, M.; Brönner, M.; Lienkamp, M. Combining a Genetic Algorithm and a Fuzzy System to Optimize User Centricity in Autonomous Vehicle Concept Development. Systems 2021, 9, 25.
9. Ganguli, C.; Shandilya, S.K.; Nehrey, M.; Havryliuk, M. Adaptive Artificial Bee Colony Algorithm for Nature-Inspired Cyber Defense. Systems 2023, 11, 27.
10. Abdelbari, H.; Shafi, K. A System Dynamics Modeling Support System Based on Computational Intelligence. Systems 2019, 7, 47.
11. Li, Y.; Wei, K.; Yang, W.; Wang, Q. Improving wind turbine blade based on multi-objective particle swarm optimization. Renew. Energy 2020, 161, 525–542.
12. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995.
13. Shi, Y.; Eberhart, R. A modified particle swarm optimizer. In Proceedings of the 1998 IEEE International Conference on Evolutionary Computation Proceedings, Anchorage, AK, USA, 4–9 May 1998.
14. Tian, D.; Shi, Z. MPSO: Modified particle swarm optimization and its applications. Swarm Evol. Comput. 2018, 41, 49–68.
15. Chen, K.; Zhou, F.; Yin, L.; Wang, S.; Wang, Y.; Wan, F. A hybrid particle swarm optimizer with sine cosine acceleration coefficients. Inf. Sci. 2018, 422, 218–241.
16. Ahandani, M.A. Opposition-based learning in the shuffled bidirectional differential evolution algorithm. Swarm Evol. Comput. 2016, 26, 64–85.
17. Gao, W.F.; Liu, S.Y.; Huang, L.L. Particle swarm optimization with chaotic opposition-based population initialization and stochastic search technique. Commun. Nonlinear Sci. Numer. Simul. 2012, 17, 4316–4327.
18. Malik, R.F.; Rahman, T.A.; Hashim, S.Z.M.; Ngah, R. New particle swarm optimizer with sigmoid increasing inertia weight. Int. J. Comput. Sci. Secur. 2007, 1, 35–44.
19. Robati, A.; Barani, G.A.; Pour, H.N.A.; Fadaee, M.J.; Anaraki, J.R.P. Balanced fuzzy particle swarm optimization. Appl. Math. Model. 2012, 36, 2169–2177.
20. Ratnaweera, A.; Halgamuge, S.K.; Watson, H.C. Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Trans. Evol. Comput. 2004, 8, 240–255.
21. Tanweer, M.R.; Suresh, S.; Sundararajan, N. Self regulating particle swarm optimization algorithm. Inf. Sci. 2015, 294, 182–202.
22. Liang, J.J.; Qin, A.K.; Suganthan, P.N.; Baskar, S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295.
23. Li, W.; Meng, X.; Huang, Y.; Fu, Z.H. Multipopulation cooperative particle swarm optimization with a mixed mutation strategy. Inf. Sci. 2020, 529, 179–196.
24. Mendes, R.; Kennedy, J.; Neves, J. The fully informed particle swarm: Simpler, maybe better. IEEE Trans. Evol. Comput. 2004, 8, 204–210.
25. Wang, L.; Yang, B.; Orchard, J. Particle swarm optimization using dynamic tournament topology. Appl. Soft Comput. 2016, 48, 584–596.
26. Wang, H.; Wang, W.; Wu, Z. Particle swarm optimization with adaptive mutation for multimodal optimization. Appl. Math. Comput. 2013, 221, 296–305.
27. Mirjalili, S.; Hashim, S.Z.M. A new hybrid PSOGSA algorithm for function optimization. In Proceedings of the 2010 International Conference on Computer and Information Application, Tianjin, China, 2–4 November 2010.
28. Fakhouri, H.N.; Hudaib, A.; Sleit, A. Hybrid particle swarm optimization with sine cosine algorithm and Nelder–Mead simplex for solving engineering design problems. Arab. J. Sci. Eng. 2020, 45, 3091–3109.
29. Sedki, A.; Ouazar, D. Hybrid particle swarm optimization and differential evolution for optimal design of water distribution systems. Adv. Eng. Inform. 2012, 26, 582–591.
30. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133.
31. Rogers, T.D.; Whitley, D.C. Chaos in the cubic mapping. Math. Model. 1983, 4, 9–25.
Figure 1. The bifurcation graph with 3 ≤ a ≤ 4.
Figure 2. The chaotic adaptive inertia weight coefficient process.
Figure 3. Adaptive learning factors.
Figure 4. Flow chart of HRLPSO.
Figure 5. Average evolution curve of 5 algorithms under 12 test functions. (a) F1, (b) F2, (c) F3, (d) F4, (e) F5, (f) F6, (g) F7, (h) F8, (i) F9, (j) F10, (k) F11 and (l) F12.
Table 1. Parameter settings. Common settings for all algorithms: the population size is 30, each algorithm is optimized 20 times, the number of iterations is 10,000, and the maximal speed is within the range of F1~F12.

| Algorithm | Parameters | Reference |
| --- | --- | --- |
| PSO | w: 1, c1: 2, c2: 2 | [8] |
| CIPSO | w: 0.9~0.4, c1: 3.5~0.5, c2: 0.5~3.5 | [31] |
| CLPSO | w: 0.9~0.4, c: 1.5 | [18] |
| DLPSO | w: 0.7298, c1: 1.5, c2: 0.5~2.5 | [3] |
| HRLPSO | w: 0.9~0.6, c1: 2.5~0.5, c2: 0.5~2.5, a: 4, Max: 0.05 | - |
Table 2. Optimization results of HRLPSO and other algorithms under benchmark.

| F | D | Item | PSO | CIPSO | CLPSO | DLPSO | HRLPSO |
| --- | --- | --- | --- | --- | --- | --- | --- |
| F1 | 30 | Mean | 2 × 10^3 | 1.72 × 10^1 | 4.43 × 10^−9 | 0.00 | 0.00 |
| | | S.D. | 5.23 × 10^3 | 8.42 | 2.53 × 10^−9 | 0.00 | 0.00 |
| | | Rank | 4 | 3 | 2 | 1 | 1 |
| F2 | 30 | Mean | 1.50 × 10^1 | 5.86 × 10^−1 | 8.66 × 10^−8 | 3.49 × 10^−43 | 0.00 |
| | | S.D. | 8.89 | 3.02 × 10^−1 | 3.54 × 10^−8 | 1.53 × 10^−48 | 0.00 |
| | | Rank | 5 | 4 | 3 | 2 | 1 |
| F3 | 30 | Mean | 1.15 × 10^4 | 4.96 × 10^2 | 8.40 × 10^3 | 1.07 × 10^3 | 0.00 |
| | | S.D. | 1.08 × 10^4 | 2.25 × 10^2 | 8.92 × 10^3 | 2.35 × 10^3 | 0.00 |
| | | Rank | 5 | 2 | 4 | 3 | 1 |
| F4 | 30 | Mean | 0.00 | 4.35 | 1.28 | 2.01 × 10^−13 | 0.00 |
| | | S.D. | 0.00 | 1.06 | 5.87 × 10^−1 | 8.98 × 10^−13 | 0.00 |
| | | Rank | 1 | 4 | 3 | 2 | 1 |
| F5 | 30 | Mean | 5.70 × 10^1 | 6.31 × 10^2 | 2.31 × 10^2 | 4.91 × 10^1 | 1.99 × 10^−1 |
| | | S.D. | 1.26 × 10^2 | 4.24 × 10^2 | 6.75 × 10^2 | 4.01 × 10^1 | 8.91 × 10^−1 |
| | | Rank | 3 | 5 | 4 | 2 | 1 |
| F6 | 30 | Mean | 1.01 × 10^3 | 1.80 × 10^1 | 6.38 × 10^−9 | 0.00 | 0.00 |
| | | S.D. | 3.11 × 10^3 | 6.69 | 4.53 × 10^−9 | 0.00 | 0.00 |
| | | Rank | 4 | 3 | 2 | 1 | 1 |
| F7 | 30 | Mean | 1.34 | 1.19 × 10^−2 | 9.17 × 10^−4 | 2.16 × 10^−2 | 3.49 × 10^−4 |
| | | S.D. | 3.54 | 5.20 × 10^−3 | 3.78 × 10^−4 | 1.46 × 10^−2 | 3.13 × 10^−4 |
| | | Rank | 5 | 3 | 2 | 4 | 1 |
| F8 | 30 | Mean | 6.51 × 10^1 | 6.20 × 10^1 | 7.06 | 5.42 | 0.00 |
| | | S.D. | 4.26 × 10^1 | 1.54 × 10^1 | 2.58 | 3.88 | 0.00 |
| | | Rank | 5 | 4 | 3 | 2 | 1 |
| F9 | 30 | Mean | 7.24 | 3.11 | 1.90 × 10^1 | 1.25 × 10^−1 | 8.88 × 10^−16 |
| | | S.D. | 8.44 | 4.73 × 10^−1 | 4.70 × 10^−1 | 3.85 × 10^−1 | 0.00 |
| | | Rank | 4 | 3 | 5 | 2 | 1 |
| F10 | 30 | Mean | 9.02 × 10^1 | 1.11 | 1.85 × 10^−10 | 1.41 × 10^−2 | 0.00 |
| | | S.D. | 2.78 × 10^1 | 3.84 × 10^−2 | 1.54 × 10^−1 | 2.24 × 10^−2 | 0.00 |
| | | Rank | 5 | 4 | 2 | 3 | 1 |
| F11 | 30 | Mean | 1.49 × 10^−1 | 7.92 × 10^−1 | 4.23 × 10^−11 | 3.63 × 10^−2 | 1.57 × 10^−32 |
| | | S.D. | 3.87 × 10^−2 | 3.83 × 10^−1 | 2.78 × 10^−11 | 5.07 × 10^−2 | 2.81 × 10^−48 |
| | | Rank | 4 | 5 | 2 | 3 | 1 |
| F12 | 30 | Mean | 2.07 | 1.93 | 4.38 × 10^−10 | 2.69 × 10^−2 | 1.35 × 10^−32 |
| | | S.D. | 3.20 | 1.13 | 3.05 × 10^−10 | 9.00 × 10^−2 | 2.81 × 10^−48 |
| | | Rank | 5 | 4 | 2 | 3 | 1 |
| Average Rank | | | 4 | 3.89 | 2.78 | 2.44 | 1 |
| Final Rank | | | 5 | 4 | 3 | 2 | 1 |
Table 3. Performance indicators of different PSO algorithms.

| Indicators | PSO | CIPSO | CLPSO | DLPSO | HRLPSO |
| --- | --- | --- | --- | --- | --- |
| Running times | 30 | 30 | 30 | 30 | 30 |
| Average computational time (s) | 13.57 | 14.11 | 14.92 | 14.89 | 14.88 |
| Average rank | 4 | 3.89 | 2.78 | 2.44 | 1 |
Table 4. Optimization results of HRLPSO and other algorithms under the CEC2013 test suite (M: mean; S: standard deviation; R: rank).

| Functions | Dimensions | Indicators | PSO | CIPSO | CLPSO | DLPSO | HRLPSO |
| --- | --- | --- | --- | --- | --- | --- | --- |
| F13 | 30 | M | 1.21 × 10^4 | −1.38 × 10^3 | −1.01 × 10^3 | −1.40 × 10^3 | −1.40 × 10^3 |
| | | S | 5.90 × 10^3 | 7.15 | 4.79 × 10^2 | 3.54 × 10^−13 | 1.88 × 10^−13 |
| | | R | 6 | 2 | 3 | 1 | 1 |
| F14 | 30 | M | 1.31 × 10^8 | 6.17 × 10^6 | 3.96 × 10^7 | 6.61 × 10^6 | 2.31 × 10^4 |
| | | S | 8.72 × 10^7 | 2.37 × 10^6 | 2.76 × 10^7 | 3.74 × 10^6 | 2.25 × 10^4 |
| | | R | 7 | 2 | 5 | 3 | 1 |
| F15 | 30 | M | 6.53 × 10^13 | 2.75 × 10^8 | 3.31 × 10^10 | 2.36 × 10^9 | 3.22 × 10^8 |
| | | S | 1.91 × 10^14 | 1.44 × 10^8 | 1.83 × 10^10 | 2.20 × 10^9 | 5.37 × 10^8 |
| | | R | 7 | 1 | 4 | 3 | 2 |
| F16 | 30 | M | 7.83 × 10^4 | 4.19 × 10^3 | 1.65 × 10^4 | 9.77 × 10^3 | −6.81 × 10^2 |
| | | S | 5.77 × 10^4 | 1.77 × 10^3 | 8.83 × 10^3 | 3.85 × 10^3 | 2.98 × 10^2 |
| | | R | 7 | 2 | 4 | 3 | 1 |
| F17 | 30 | M | 7.80 × 10^3 | −9.80 × 10^2 | −6.36 × 10^2 | −1.00 × 10^3 | −1.00 × 10^3 |
| | | S | 5.16 × 10^3 | 9.81 | 5.56 × 10^2 | 1.37 × 10^−9 | 1.14 × 10^−13 |
| | | R | 6 | 2 | 3 | 1 | 1 |
| F18 | 30 | M | 1.02 × 10^3 | −8.34 × 10^2 | −8.30 × 10^2 | −8.62 × 10^2 | −8.81 × 10^2 |
| | | S | 1.73 × 10^3 | 1.53 × 10^1 | 3.18 × 10^1 | 1.99 × 10^1 | 1.68 × 10^1 |
| | | R | 7 | 3 | 4 | 2 | 1 |
| F19 | 30 | M | 9.21 × 10^2 | 7.73 × 10^2 | −6.88 × 10^2 | −7.06 × 10^2 | −7.27 × 10^2 |
| | | S | 4.22 × 10^3 | 8.66 | 3.79 × 10^1 | 1.66 × 10^1 | 1.99 × 10^1 |
| | | R | 7 | 1 | 4 | 3 | 2 |
| F20 | 30 | M | −6.79 × 10^2 | 6.79 × 10^2 | −6.79 × 10^2 | −6.79 × 10^2 | −6.79 × 10^2 |
| | | S | 5.67 × 10^−2 | 5.05 × 10^−2 | 6.81 × 10^−2 | 4.32 × 10^−2 | 6.91 × 10^−2 |
| | | R | 1 | 1 | 1 | 1 | 1 |
| F21 | 30 | M | −5.67 × 10^2 | 5.80 × 10^2 | −5.62 × 10^2 | −5.70 × 10^2 | −5.78 × 10^2 |
| | | S | 2.40 | 2.15 | 1.24 | 3.16 | 3.33 |
| | | R | 5 | 1 | 6 | 3 | 2 |
| F22 | 30 | M | 1.19 × 10^3 | −4.76 × 10^2 | −1.91 × 10^2 | −4.88 × 10^2 | −5.00 × 10^2 |
| | | S | 9.47 × 10^2 | 1.32 × 10^1 | 1.79 × 10^2 | 1.74 × 10^1 | 4.31 × 10^−2 |
| | | R | 7 | 3 | 5 | 2 | 1 |
| F23 | 30 | M | 1.74 × 10^1 | −2.94 × 10^2 | −3.54 × 10^2 | −3.82 × 10^2 | −3.64 × 10^2 |
| | | S | 8.17 × 10^1 | 2.13 × 10^1 | 2.26 × 10^1 | 6.48 | 9.35 |
| | | R | 7 | 4 | 3 | 1 | 2 |
| F24 | 30 | M | 8.52 × 10^1 | −1.91 × 10^2 | −1.14 × 10^2 | −1.95 × 10^2 | −2.19 × 10^2 |
| | | S | 9.44 × 10^1 | 2.08 × 10^1 | 2.32 × 10^1 | 3.29 × 10^1 | 1.85 × 10^1 |
| | | R | 7 | 3 | 4 | 2 | 1 |
| F25 | 30 | M | 1.71 × 10^2 | 7.48 × 10^1 | −2.15 × 10^1 | −4.37 × 10^1 | −5.41 × 10^1 |
| | | S | 7.00 × 10^1 | 2.08 × 10^1 | 1.62 × 10^1 | 3.04 × 10^1 | 3.26 × 10^1 |
| | | R | 7 | 1 | 4 | 3 | 2 |
| F26 | 30 | M | 6.50 × 10^3 | 4.33 × 10^3 | 2.26 × 10^3 | 1.47 × 10^2 | 1.16 × 10^3 |
| | | S | 4.74 × 10^2 | 5.89 × 10^2 | 4.72 × 10^2 | 1.88 × 10^2 | 3.45 × 10^2 |
| | | R | 7 | 6 | 3 | 1 | 2 |
| F27 | 30 | M | 7.42 × 10^3 | 4.60 × 10^3 | 7.20 × 10^3 | 5.03 × 10^3 | 4.08 × 10^3 |
| | | S | 3.63 × 10^2 | 5.32 × 10^2 | 3.28 × 10^2 | 6.75 × 10^2 | 5.63 × 10^2 |
| | | R | 7 | 2 | 6 | 3 | 1 |
| F28 | 30 | M | 2.02 × 10^2 | 2.02 × 10^2 | 2.02 × 10^2 | 2.02 × 10^2 | 2.01 × 10^2 |
| | | S | 3.12 × 10^−1 | 2.47 × 10^−1 | 2.74 × 10^−1 | 3.40 × 10^−1 | 2.06 × 10^−1 |
| | | R | 2 | 2 | 2 | 2 | 1 |
| F29 | 30 | M | 8.38 × 10^2 | 4.61 × 10^2 | 3.42 × 10^2 | 3.44 × 10^2 | 3.42 × 10^2 |
| | | S | 1.41 × 10^2 | 3.09 × 10^1 | 2.51 | 4.72 | 7.69 |
| | | R | 6 | 3 | 1 | 2 | 1 |
| F30 | 30 | M | 8.66 × 10^2 | 5.72 × 10^2 | 5.87 × 10^2 | 5.52 × 10^2 | 4.87 × 10^2 |
| | | S | 1.27 × 10^2 | 1.90 × 10^1 | 1.01 × 10^1 | 2.72 × 10^1 | 1.70 × 10^1 |
| | | R | 7 | 3 | 4 | 2 | 1 |
| F31 | 30 | M | 1.35 × 10^5 | 5.12 × 10^2 | 2.23 × 10^3 | 5.03 × 10^2 | 5.03 × 10^2 |
| | | S | 2.64 × 10^5 | 2.26 | 2.50 × 10^3 | 1.04 | 1.25 |
| | | R | 6 | 2 | 5 | 1 | 1 |
| F32 | 30 | M | 6.13 × 10^2 | 6.11 × 10^2 | 6.12 × 10^2 | 6.14 × 10^2 | 6.12 × 10^2 |
| | | S | 3.90 × 10^−1 | 1.37 | 4.38 × 10^−1 | 8.16 × 10^−1 | 9.72 × 10^−1 |
| | | R | 3 | 1 | 2 | 4 | 2 |
| F33 | 30 | M | 2.24 × 10^3 | 1.10 × 10^3 | 1.10 × 10^3 | 1.03 × 10^3 | 1.02 × 10^3 |
| | | S | 5.14 × 10^2 | 5.13 × 10^1 | 1.65 × 10^2 | 1.53 × 10^2 | 5.91 × 10^1 |
| | | R | 6 | 3 | 3 | 2 | 1 |
| F34 | 30 | M | 7.96 × 10^3 | 5.09 × 10^3 | 2.89 × 10^3 | 1.38 × 10^3 | 2.02 × 10^3 |
| | | S | 5.85 × 10^2 | 5.47 × 10^2 | 5.56 × 10^2 | 3.83 × 10^2 | 3.87 × 10^2 |
| | | R | 7 | 4 | 3 | 1 | 2 |
| F35 | 30 | M | 8.13 × 10^3 | 5.55 × 10^3 | 8.14 × 10^3 | 6.53 × 10^3 | 5.11 × 10^3 |
| | | S | 4.65 × 10^2 | 7.54 × 10^2 | 2.75 × 10^2 | 6.16 × 10^2 | 8.07 × 10^2 |
| | | R | 6 | 2 | 7 | 4 | 1 |
| F36 | 30 | M | 1.30 × 10^3 | 1.26 × 10^3 | 1.28 × 10^3 | 1.28 × 10^3 | 1.27 × 10^3 |
| | | S | 7.28 | 6.62 | 5.07 | 1.03 × 10^1 | 7.17 |
| | | R | 5 | 1 | 3 | 3 | 2 |
| F37 | 30 | M | 1.42 × 10^3 | 1.38 × 10^3 | 1.39 × 10^3 | 1.39 × 10^3 | 1.38 × 10^3 |
| | | S | 1.45 × 10^1 | 1.07 × 10^1 | 9.56 | 7.53 | 8.22 |
| | | R | 5 | 1 | 2 | 2 | 1 |
| F38 | 30 | M | 1.56 × 10^3 | 1.47 × 10^3 | 1.50 × 10^3 | 1.40 × 10^3 | 1.40 × 10^3 |
| | | S | 7.22 × 10^1 | 7.39 × 10^1 | 9.29 × 10^1 | 4.01 × 10^−1 | 1.30 × 10^−3 |
| | | R | 5 | 2 | 3 | 1 | 1 |
| F39 | 30 | M | 2.57 × 10^3 | 2.08 × 10^3 | 2.51 × 10^3 | 2.39 × 10^3 | 2.25 × 10^3 |
| | | S | 1.16 × 10^2 | 7.64 × 10^1 | 9.04 × 10^1 | 8.60 × 10^1 | 9.18 × 10^1 |
| | | R | 7 | 1 | 6 | 3 | 2 |
| F40 | 30 | M | 4.69 × 10^3 | 1.87 × 10^3 | 3.18 × 10^3 | 2.08 × 10^3 | 1.76 × 10^3 |
| | | S | 6.74 × 10^2 | 6.70 × 10^1 | 3.92 × 10^2 | 5.11 × 10^2 | 2.59 × 10^2 |
| | | R | 7 | 2 | 4 | 3 | 1 |
| Average R | | | 5.96 | 2.18 | 3.71 | 2.21 | 1.36 |
| Final R | | | 7 | 2 | 4 | 3 | 1 |
