Article

Hybrid Strategy to Improve the High-Dimensional Multi-Target Sparrow Search Algorithm and Its Application

1
China Aerospace Academy of Systems Science and Engineering, Beijing 100035, China
2
School of Economics and Management, Xi’an University of Posts and Telecommunications, Xi’an 710061, China
3
School of Modern Postal, Xi’an University of Posts and Telecommunications, Xi’an 710061, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(6), 3589; https://doi.org/10.3390/app13063589
Submission received: 14 February 2023 / Revised: 26 February 2023 / Accepted: 5 March 2023 / Published: 11 March 2023
(This article belongs to the Section Mechanical Engineering)

Abstract

This research combines an improved reference point selection strategy with a sparrow search algorithm that uses an enhanced competition mechanism, yielding a high-dimensional multi-objective sparrow search algorithm with an incorporated improved reference point selection strategy. First, the reference point selection approach is used to establish the reference points and the sparrow population, and the most important reference points are chosen dynamically to increase the global search ability. Then, the size of the search population and the method of updating the discoverer positions are adjusted dynamically according to the entropy difference between two adjacent generations of the population. Next, the convergence speed is increased by improving the follower position formula and extending the competition mechanism to high-dimensional multi-objective optimization, while a Cauchy mutation operator improves the algorithm’s capacity to break out of local optima. Finally, we evaluated MaOISSA (Many/Multi-Objective Sparrow Search Algorithm based on Improved reference points) on 12 standard benchmark test functions and compared it with several high-dimensional multi-objective algorithms; it obtained significantly better IGD values on nine functions and significantly better HV values on eight. The findings show that MaOISSA has good convergence and diversity. Simulation results of the performance model of the defense science and technology innovation ecosystem demonstrate that MaOISSA offers a superior solution for this high-dimensional, multi-objective problem, confirming the method’s efficacy.

1. Introduction

The swarm intelligence algorithm is a computational technique that solves distributed problems based on the behavioral laws of biological groups; it is mainly used in combinatorial optimization, image processing, and data mining. Because of its simple operation and strong problem-solving ability, swarm intelligence optimization is favored by many scholars. The sparrow search algorithm (SSA) [1] is a new swarm optimization algorithm proposed by Jiankai Xue and Bo Shen in 2020. It converges better than the bat algorithm [2], the dragonfly algorithm [3], and the grasshopper optimization algorithm [4] on both unimodal and multimodal test functions [5]. Compared with other population optimization algorithms, such as the gray wolf algorithm [6] and the bat algorithm, the sparrow search algorithm accounts for more factors of population behavior and offers fast convergence together with good local search capability and stability. However, like other swarm intelligence algorithms, it converges “prematurely” on complex optimization problems, leading to local optima and poor convergence. Scholars at home and abroad have proposed corresponding performance enhancements to the sparrow search algorithm: the literature [7,8,9,10,11] increases initial population diversity using elite learning, random walks, backward learning, improved Logistic chaos, or Tent chaotic sequences. The discoverer population in the literature [11,12,13,14,15] was enhanced by adding an adaptive crossover-mutation operator, adaptive inertia weights, an adaptive t-distribution strategy, and a reverse learning strategy based on the lens principle to increase the merit-seeking capacity of individual sparrows; tactics such as t-distribution perturbation, differential mutation, and a variable spiral search strategy were used to boost convergence speed and the capacity to escape local optima. Most of these improvements target single-objective optimization. For multi-objective problems, the literature [16] performed multi-objective sparrow search through a novel crowding distance calculation strategy and external archiving, and the literature [17] performed multi-objective optimization by balancing global and local optima through a competition mechanism for population selection. These enhanced algorithms make extensive use of external archiving to store multi-dimensional solutions and perform well on multi-objective optimization problems. However, in high-dimensional multi-objective problems with more than three objectives, their convergence speed decreases significantly with increasing dimensionality, and it is difficult to approximate the true Pareto front.
A multi-objective optimization problem is one in which many sub-objectives within a system must reach an optimal trade-off under certain conditions. Most current economic and social activities, as well as practical engineering applications, are multi-objective problems, and the high-dimensional multi-objective optimization problem is an extension of the Multi-objective Optimization Problem (MOP). Problems with more than three objectives are referred to as Many-Objective Optimization Problems (MaOPs) [18]. In contrast to single-objective optimization, the answer to a high-dimensional multi-objective problem is typically a collection of non-dominated alternatives known as the Pareto optimal solution set [19]. Solving such problems therefore requires finding as many non-dominated solutions with good convergence and diversity as possible. However, for traditional optimization algorithms, the selection pressure exerted by non-dominated solutions increases dramatically as the dimensionality grows, and the convergence speed drops sharply. Determining how to obtain optimal solutions with a reasonable allocation among objectives has thus been a hot topic in high-dimensional multi-objective optimization in recent years [20].
Combining high-dimensional multi-objective optimization with swarm intelligence has emerged in recent years as an efficient way to tackle many-objective problems. A set of reference points can be used to evaluate solution quality and to manage the population distribution in the objective space. As an extension of NSGA-II, NSGA-III tackles high-dimensional multi-objective optimization problems more effectively and efficiently. The literature [21] maintains the diversity of a many-objective particle swarm algorithm by incorporating a reference point regeneration strategy; researchers [22] perform a global optimal selection strategy by counting the number of individuals associated with the centerline; the literature [23] evaluated individual merit and directed the population crossover operation based on the number of generations the algorithm has run; and the literature [24] ranked individuals by finding the vector closest to each individual in the population and its reference point to obtain the most efficient solution. However, the positions of the reference points in these algorithms are given in advance, i.e., set as parameters, while the shape of the Pareto front of the optimization problem is generally unknown. Predetermined reference point placements may therefore degrade search performance. In this article, the reference-space-based reference point selection technique suggested in the literature [20] adapts the number of reference points to the total population size N. This reduces the parameters that must be set during the search; the association counts of the reference points are then used to judge which iteration stage the population is in and to accelerate convergence.
Considering the foregoing issues, this article combines the improved reference point selection strategy with the sparrow search algorithm to address the convergence degradation that occurs when solving high-dimensional multi-objective optimization problems and to improve overall performance. First, the initial population and a matching number of reference points are obtained by the reference point selection strategy without knowledge of the Pareto front, reducing the need for human judgment. The entropy values of two successive generations are used to assess the evolutionary phase of the population: in the early stage of evolution, the entropy difference between the two generations is large, the complete set of reference points is used for environment selection, and the population is in the “exploration” stage, needing to expand its search range; in the late stage of evolution, the entropy difference becomes small, the filtered reference points are used for environment selection, and the population accelerates convergence and enters the “exploitation” stage. Depending on the stage, the size of the discoverer population is adjusted according to the difference in population entropy between the two generations to reduce search time. Second, since the search ability of the sparrow search algorithm depends on the discoverers, their positions are updated by introducing the competition mechanism of CMOPSO [25]. Unlike the competition mechanism used in [18], which only determines the optimal and inferior solutions of the initial population, this study employs the competition mechanism throughout the discoverer position update to enhance search ability in the “exploration” phase. To improve the global search ability and the ability to jump out of local optima, the Cauchy mutation operator is introduced. Simulation tests on various test functions and comparisons with several high-dimensional multi-objective algorithms show that MaOISSA (Many/Multi-Objective Sparrow Search Algorithm based on Improved reference points) has good search capability and convergence. Applying the algorithm to multi-objective optimization and regulation of the economy and society shows that it has practical application value.

2. Sparrow Search Algorithm and High-Dimensional Multi-Objective Optimization Problem

2.1. Principle of Sparrow Search Algorithm

The sparrow search algorithm is inspired by the foraging and anti-predation behavior of sparrows. The sparrow population is split into discoverers and joiners: the discoverers search for regions with more food, the joiners follow the discoverers to forage, and once any sparrow detects a predator, it alerts the community and directs it to a safe feeding place.
With an initial population of sparrows, their initial positions are represented by the matrix:
$$X = \begin{bmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,d} \\ x_{2,1} & x_{2,2} & \cdots & x_{2,d} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n,1} & x_{n,2} & \cdots & x_{n,d} \end{bmatrix}$$
n is the number of sparrows, and d is the dimension of the optimization variable. The value of sparrows’ fitness is thus stated as:
$$F(X) = \begin{bmatrix} f\big([\, x_{1,1} \;\; x_{1,2} \;\; \cdots \;\; x_{1,d} \,]\big) \\ f\big([\, x_{2,1} \;\; x_{2,2} \;\; \cdots \;\; x_{2,d} \,]\big) \\ \vdots \\ f\big([\, x_{n,1} \;\; x_{n,2} \;\; \cdots \;\; x_{n,d} \,]\big) \end{bmatrix}$$
where $f([\, x_{i,1} \;\; x_{i,2} \;\; \cdots \;\; x_{i,d} \,])$ denotes the fitness value of the $i$th individual.
Within SSA, the discoverer is responsible for hunting for food and guiding the foraging of the whole colony; the greater an individual’s fitness value, the greater its energy reserves. The discoverer’s position is updated as follows:
$$X_{i,j}^{t+1} = \begin{cases} X_{i,j}^{t} \cdot \exp\left(\dfrac{-i}{\alpha \cdot iter_{\max}}\right), & R_2 < ST \\ X_{i,j}^{t} + Q \cdot L, & R_2 \ge ST \end{cases} \tag{1}$$
where $t$ denotes the current iteration number, $j = 1, 2, \ldots, d$, and $iter_{\max}$ is a constant representing the maximum number of iterations. $X_{i,j}$ is the position of the $i$th sparrow in dimension $j$, and $\alpha$ ($\alpha \in (0, 1]$) is a random number. $R_2$ ($R_2 \in [0, 1]$) and $ST$ ($ST \in [0.5, 1]$) represent the warning value and the safety value, respectively; $Q$ is a random number that follows a normal distribution; $L$ is a $1 \times d$ matrix whose elements are all 1.
The remainder of the population, excluding the discoverer, is the follower, whose position is updated as follows:
$$X_{i,j}^{t+1} = \begin{cases} Q \cdot \exp\left(\dfrac{X_{\mathrm{worst}} - X_{i,j}^{t}}{i^{2}}\right), & i > n/2 \\ X_{P}^{t+1} + \left|X_{i,j}^{t} - X_{P}^{t+1}\right| \cdot A^{+} \cdot L, & \text{otherwise} \end{cases} \tag{2}$$
where $X_P$ represents the best position currently occupied by the discoverers and $X_{\mathrm{worst}}$ denotes the current global worst position. $A$ is a $1 \times d$ matrix whose elements are randomly assigned 1 or −1, and $A^{+} = A^{T}(AA^{T})^{-1}$. When $i > n/2$, the $i$th entrant has a poor fitness value and must fly elsewhere to forage; when $i \le n/2$, the entrant forages near the current best position.
In SSA, the scout is responsible for monitoring the foraging area and immediately sends out a danger signal when it becomes aware of danger. Its position update formula is:
$$X_{i,j}^{t+1} = \begin{cases} X_{\mathrm{best}}^{t} + \beta \cdot \left|X_{i,j}^{t} - X_{\mathrm{best}}^{t}\right|, & f_i > f_g \\ X_{i,j}^{t} + K \cdot \left(\dfrac{\left|X_{i,j}^{t} - X_{\mathrm{worst}}^{t}\right|}{(f_i - f_w) + \varepsilon}\right), & f_i = f_g \end{cases} \tag{3}$$
where $X_{\mathrm{best}}^{t}$ symbolizes the current global best position of the sparrow population, and $\beta$ is the step-length control parameter, a normally distributed random number with mean 0 and standard deviation 1. $K \in [-1, 1]$ is a random number, $f_i$ represents the fitness of the current sparrow, $f_g$ and $f_w$ are the current global best and worst fitness values, respectively, and $\varepsilon$ is a small constant that avoids a zero denominator. When $f_i > f_g$, the sparrow is on the periphery of the population and susceptible to predation. When $f_i = f_g$, the sparrows in the middle of the population are aware of the threat and must move closer to the others to reduce the risk of predation. $K$ defines the direction of the sparrow’s movement and also controls the step length.
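For readers who prefer code to notation, the three update rules above can be condensed into a single iteration step. The following is a minimal NumPy sketch of Equations (1)–(3), not the authors’ implementation; the array layout, the discoverer/scout fractions PD and SD, and the handling of the $f_i = f_g$ branch are illustrative assumptions.

```python
import numpy as np

def ssa_step(X, fit, iter_max, ST=0.8, PD=0.2, SD=0.2):
    """One illustrative SSA iteration implementing Eqs. (1)-(3).
    X: (n, d) positions; fit: (n,) fitness values (smaller is better).
    PD/SD are assumed discoverer/scout fractions; fitness is not re-evaluated
    inside the step, which a full implementation would do."""
    n, d = X.shape
    order = np.argsort(fit)                      # best individuals first
    X, fit = X[order].copy(), fit[order].copy()
    n_disc = max(1, int(PD * n))                 # number of discoverers

    # Discoverers, Eq. (1)
    R2 = np.random.rand()                        # warning value
    for i in range(n_disc):
        if R2 < ST:
            alpha = np.random.uniform(1e-12, 1.0)
            X[i] = X[i] * np.exp(-(i + 1) / (alpha * iter_max))
        else:
            X[i] = X[i] + np.random.randn() * np.ones(d)   # Q * L

    # Followers, Eq. (2)
    X_best, X_worst = X[0].copy(), X[-1].copy()  # stand-ins for X_P^{t+1} and X_worst
    for i in range(n_disc, n):
        if i > n / 2:
            X[i] = np.random.randn() * np.exp((X_worst - X[i]) / (i + 1) ** 2)
        else:
            A = np.random.choice([-1.0, 1.0], size=d)      # 1 x d of +/- 1
            A_plus = A / (A @ A)                           # A^T (A A^T)^(-1)
            X[i] = X_best + (np.abs(X[i] - X_best) @ A_plus) * np.ones(d)

    # Scouts (danger-aware sparrows), Eq. (3)
    f_best, f_worst = fit[0], fit[-1]
    for i in np.random.choice(n, max(1, int(SD * n)), replace=False):
        if fit[i] > f_best:
            X[i] = X_best + np.random.randn() * np.abs(X[i] - X_best)
        else:                                    # treated as the f_i = f_g branch
            K = np.random.uniform(-1, 1)
            X[i] = X[i] + K * np.abs(X[i] - X_worst) / ((fit[i] - f_worst) + 1e-50)
    return X
```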

2.2. High-Dimensional Multi-Objective Optimization Problem

Generally speaking, a minimization multi-objective optimization problem with $n \ge 1$ decision variables and $m \ge 3$ optimization objectives can be expressed as:
$$\begin{aligned} \min\; F(x) &= \left[f_1(x), f_2(x), \ldots, f_m(x)\right]^{T} \\ \text{s.t.}\;\; & g_i(x) \le 0, \quad i = 1, 2, \ldots, p; \\ & h_j(x) = 0, \quad j = 1, 2, \ldots, q. \end{aligned}$$
where $x$ is the decision variable, $F(x)$ is the $m$-dimensional objective vector, $f_i(x)$ is the $i$th objective function, $g_i(x) \le 0$ are the inequality constraints, and $h_j(x) = 0$ are the equality constraints.
Definition 1. 
(Pareto dominance) [26]. A decision vector $x_v = (v_1, v_2, \ldots, v_n)$ Pareto-dominates another decision vector $x_u$, written $x_v \prec x_u$, if and only if $\forall i = 1, 2, \ldots, m,\; f_i(x_v) \le f_i(x_u)$ and $\exists j \in \{1, 2, \ldots, m\},\; f_j(x_v) < f_j(x_u)$.
Definition 2. 
(Pareto optimal solution set) [26]. $P^* = \{\, x \in \mathbb{R}^n \mid \neg\exists\, x' \in \mathbb{R}^n,\; x' \prec x \,\}$ is the Pareto optimal solution set.
Definition 3. 
(Pareto front, $PF^*$) [26]. The mapping of the Pareto optimal solution set onto the objective space, $PF^* = \{\, F(x) \mid x \in P^* \,\}$, is called the Pareto front.
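As a concrete illustration of Definition 1, the dominance test for a minimization problem can be written as a small helper (a hedged sketch; the function name is ours, not part of the original algorithm):

```python
import numpy as np

def dominates(f_v, f_u):
    """True if objective vector f_v Pareto-dominates f_u (minimization):
    no worse in every objective and strictly better in at least one."""
    f_v, f_u = np.asarray(f_v), np.asarray(f_u)
    return bool(np.all(f_v <= f_u) and np.any(f_v < f_u))

# (1, 2, 3) dominates (2, 2, 4); neither of (1, 3) and (3, 1) dominates the other.
assert dominates([1, 2, 3], [2, 2, 4])
assert not dominates([1, 3], [3, 1]) and not dominates([3, 1], [1, 3])
```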

3. High-Dimensional Multi-Objective Sparrow Search Algorithm

3.1. Improved Reference Point Selection Strategy

The NSGA-III proposed by Deb et al. [27] performs environmental selection through Pareto dominance relations and associates individuals with reference lines to balance the convergence and distribution of the population. The reference points are defined in a structured way on a normalized hyperplane; assuming that M is the number of objectives and each objective dimension is divided into p parts, the number of reference points is computed as:
$$H = \binom{M + p - 1}{p} \tag{4}$$
For example, in a three-objective optimization problem, dividing each objective dimension into four parts (p = 4) produces 15 reference points on the two-dimensional hyperplane. NSGA-III balances the convergence and distributivity of the population by associating individuals with reference lines for environment selection. The number of reference points H grows combinatorially with the objective dimension M and the number of divisions p. As can be seen in Table 1, H increases sharply with M and p in high-dimensional multi-objective optimization problems. The number of reference points determines the algorithm’s running time, yet the quality of the final evolved population does not improve correspondingly as the number of reference points increases. In NSGA-III, p is usually set by hand, which causes redundant reference points and slows evolution.
Since the true Pareto front of many high-dimensional multi-objective optimization problems is not uniformly distributed, the importance of each reference point can be evaluated by counting, in every generation, the number of individuals associated with it; redundant reference points are eliminated and the points with more associated individuals are retained, which reduces the interference of invalid reference points and accelerates convergence. To achieve a screening effect, the proportion of deleted reference points must exceed 20% of the population size. Hence, the initial number of reference points should be at least 1.2N, while the number obtained with p − 1 divisions should be fewer than 1.2N; that is, $H_m \ge 1.2N$ and $H_{m-1} < 1.2N$, where N is the population size and $H_m$ is the number of reference points obtained with p divisions. With this improved strategy, the number of reference points adapts to the population size without the need to set the parameter p manually.
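Under the rule just described ($H_m \ge 1.2N$ and $H_{m-1} < 1.2N$), the number of divisions p, and hence the number of structured reference points, follows from the population size alone. A minimal sketch, with function names of our choosing:

```python
from math import comb

def ref_point_count(M, p):
    """Number of structured reference points H = C(M + p - 1, p), as in Eq. (4)."""
    return comb(M + p - 1, p)

def choose_divisions(M, N):
    """Smallest p whose reference-point count reaches 1.2 * N,
    so that H_{p-1} < 1.2N <= H_p (the adaptive rule of Section 3.1)."""
    p = 1
    while ref_point_count(M, p) < 1.2 * N:
        p += 1
    return p, ref_point_count(M, p)

# Example: for M = 5 objectives and N = 100 sparrows,
# p = 4 gives H = C(8, 4) = 70 < 120, while p = 5 gives H = C(9, 5) = 126 >= 120.
print(choose_divisions(5, 100))   # -> (5, 126)
```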
In order to eliminate redundant reference points, the stage of population evolution must be judged. Let two adjacent generations be generation t and generation t − 1; the difference $\Delta e_t$ between their entropy values $e_t$ and $e_{t-1}$ is then:
$$e_t = -\sum_{i=1}^{n} \inf_i \times \lg \inf_i - \sum_{i=1}^{n} \Delta mid_i^{t} \times \lg \Delta mid_i^{t} \tag{5}$$
$$\Delta e_t = \left| e_t - e_{t-1} \right| \tag{6}$$
where $\inf_i$ denotes the population’s standardized quartile difference and $\Delta mid_i^{t}$ denotes the standardized median difference.
For each finite one-dimensional interval E, a threshold μ is used as the criterion for judging the evolution stage from $\Delta e_t$:
$$\mu = D \times \overline{\inf} \times \lg \overline{\inf} - D \times \left(\overline{\inf} + \frac{1}{N}\right) \times \lg\left(\overline{\inf} + \frac{1}{N}\right) \tag{7}$$
$$\overline{\inf} = \frac{0.75E - 0.25E}{E} = \frac{1}{2} \tag{8}$$
When the population is renewed, if $\Delta e_t$ is large and $\Delta e_t \ge \mu$, the population is in the global search stage, that is, the “exploration” stage. At this time, the number of individuals associated with each point of the reference point set Z is accumulated over the generations as Zsum. If $\Delta e_t$ is small and $\Delta e_t < \mu$, the population has begun to converge and enters the “exploitation” stage. At this time, the N reference points in Z with the largest association counts in Zsum are retained, the remaining reference points are eliminated, and the new reference point set $Z_n$ is formed. In the traditional sparrow search algorithm, the ratio between the discoverer and entrant populations is fixed (γ is a constant), which may lead to poor global and local search performance. Therefore, in order to balance early global exploration and late local exploitation, the population scale factor γ should change as follows: in early iterations, a large scale factor should be maintained to ensure a sufficient number of discoverers for extensive search; as the iterations proceed, the number of discoverers should decrease linearly or nonlinearly and the number of entrants should increase, while population diversity is preserved, so that finer local exploitation can be performed. Since $\Delta e_t$ reflects exactly this search stage, γ is set to $\Delta e_t$ to dynamically adjust the population search scale during evolution.
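One possible reading of this stage test is sketched below: the per-dimension interquartile range and median shift of the normalized population give the entropy $e_t$, and the change $\Delta e_t$ between generations is compared with the threshold μ. The normalization, the definition of the median shift, and the helper names are our assumptions, not the authors’ code.

```python
import numpy as np

def population_entropy(X, lb, ub, eps=1e-12):
    """Entropy-like measure e_t built from the standardized quartile difference
    (inf_i) and standardized median difference (dmid_i) of each decision
    dimension; one reading of Eq. (5). lb/ub are the variable bounds."""
    Xn = (X - lb) / (ub - lb + eps)                 # normalize to [0, 1]
    q25, q50, q75 = np.percentile(Xn, [25, 50, 75], axis=0)
    inf_i = np.clip(q75 - q25, eps, None)           # standardized quartile difference
    dmid_i = np.clip(np.abs(q50 - 0.5), eps, None)  # assumed standardized median difference
    return -np.sum(inf_i * np.log10(inf_i)) - np.sum(dmid_i * np.log10(dmid_i))

def stage_threshold(D, N):
    """Threshold mu of Eq. (7): on a unit interval the ideal quartile spread is 1/2."""
    inf_bar = 0.5
    return abs(D * inf_bar * np.log10(inf_bar)
               - D * (inf_bar + 1.0 / N) * np.log10(inf_bar + 1.0 / N))

def in_exploration_stage(e_t, e_prev, D, N):
    """'Exploration' while the entropy change between generations stays large."""
    return abs(e_t - e_prev) >= stage_threshold(D, N)
```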

3.2. Competitive Mechanism for Discoverer Position Update

The competition mechanism is mostly used in particle swarm optimization: elite particles compete in pairs, and the winner guides the movement of the current particle. In the sparrow search process, the discoverer randomly selects two elite individuals p and q from the current Pareto front and calculates the angles formed between the discoverer w and the elite individuals p and q. The elite individual forming the narrower angle wins and directs the evolution of the population.
Population renewal is guided by the competition mechanism, and the position update formula of the i-th discoverer at iteration t + 1 becomes:
$$X_{i,j}^{t+1} = \begin{cases} X_{w}^{t} \cdot \exp\left(\dfrac{-i}{\alpha \cdot iter_{\max}}\right), & R_2 < ST \\ X_{w}^{t} + Q \cdot L, & R_2 \ge ST \end{cases} \tag{9}$$
where $X_w^t$ is the position of the winning competing individual at the t-th iteration.
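A hedged sketch of this competition-guided discoverer update follows; drawing the two elites, measuring angles in objective space (as in CMOPSO [25]), and the alarm branch are written with illustrative names and defaults rather than the authors’ exact code.

```python
import numpy as np

def angle(u, v, eps=1e-12):
    """Angle (radians) between two objective vectors."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def competitive_discoverer_update(x_disc, f_disc, elite_X, elite_F, t_ratio, ST=0.8):
    """Competition-guided discoverer move in the spirit of Eq. (9): two elites are
    drawn from the current non-dominated set, the one forming the smaller angle
    with the discoverer in objective space wins, and the winner's position drives
    the standard discoverer update. t_ratio plays the role of i / iter_max."""
    if len(elite_X) < 2:
        return x_disc                               # not enough elites to compete
    p, q = np.random.choice(len(elite_X), 2, replace=False)
    winner = p if angle(f_disc, elite_F[p]) < angle(f_disc, elite_F[q]) else q
    x_win = elite_X[winner]

    d = x_disc.shape[0]
    if np.random.rand() < ST:                       # safe: exploit around the winner
        alpha = np.random.uniform(1e-12, 1.0)
        return x_win * np.exp(-t_ratio / alpha)
    return x_win + np.random.randn() * np.ones(d)   # alarmed: Q * L style jump
```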

3.3. Cauchy Mutation Arithmetic

To address the tendency of the sparrow algorithm to fall into local optima, some individuals in the sparrow population are mutated to change their original values and form new individuals. The Cauchy mutation operator has a larger variation range than other operators, which helps the population subsequently escape from local optima.
The Cauchy mutation operator has the form:
$$s_k = \frac{t_k}{\pi\left(x_k^{2} + t_k^{2}\right)}, \quad -\infty < x_k < +\infty \tag{10}$$
From Equation (10), it can be seen that the Cauchy distribution resembles the normal distribution in that it decreases from its peak towards both sides; however, the decrease is flatter and the peak is lower. After Cauchy mutation, the ability of individual sparrows to leap out of local extrema therefore increases: less time is spent searching an individual’s immediate neighborhood and more time is spent on global search. The sparrow search algorithm exploits these properties of the Cauchy distribution by updating the follower formula as:
$$X_{i,j}^{t+1} = \begin{cases} Cauchy(0,1) \cdot \exp\left(\dfrac{X_{\mathrm{worst}} - X_{i,j}^{t}}{i^{2}}\right), & i > n/2 \\ X_{P}^{t+1} + \left|X_{i,j}^{t} - X_{P}^{t+1}\right| \cdot A^{+} \cdot L, & \text{otherwise} \end{cases} \tag{11}$$
In Equation (11), $Cauchy(0,1)$ is the standard Cauchy random distribution, i.e., the Cauchy distribution with scale parameter $t = 1$.
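The Cauchy-mutated follower update of Equation (11) can be sketched as follows (variable names and the handling of the index i are ours):

```python
import numpy as np

def cauchy_follower_update(x_i, i, n, x_p_next, x_worst):
    """Sketch of Eq. (11): the normal random factor Q of Eq. (2) is replaced by a
    standard Cauchy(0, 1) draw, whose heavier tails produce occasional long jumps
    that help individuals escape local optima."""
    d = x_i.shape[0]
    if i > n / 2:
        c = np.random.standard_cauchy()               # Cauchy(0, 1) sample
        return c * np.exp((x_worst - x_i) / (i + 1) ** 2)
    A = np.random.choice([-1.0, 1.0], size=d)         # 1 x d vector of +/- 1
    A_plus = A / (A @ A)                              # A^+ = A^T (A A^T)^(-1)
    return x_p_next + (np.abs(x_i - x_p_next) @ A_plus) * np.ones(d)
```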

3.4. Algorithm Process

Combining the four strategies of improving the reference point selection, improving the discoverer position formula, dynamically regulating the discoverer population size, and applying Cauchy mutation to the population, the flow chart of MaOISSA, a multi-strategy high-dimensional multi-objective sparrow search algorithm, is shown in Figure 1. MaOISSA is summarized in Algorithm 1, and a compact code sketch follows the listing.
Algorithm 1: MaOISSA
Input: initialized population P_0; population size N; number of objectives M; decision variable dimension D; current generation t; maximum number of generations maxT; constrained boundaries ub and lb; number of warning (scout) sparrows SD = 0.2N; warning value R_2.
Output: Pareto optimal frontier.
(a) Initialize the population P_0 and the initial reference point set Z_0;
(b) Calculate the threshold according to Formulas (7) and (8), initialize the reference point set Z_0, and set the total number of individuals associated with the reference points Zsum = 0;
(c) For t = 1: max T ;
(d) Sort the population by non-domination, select the elite individual leader, and select the best individual and the worst individual from the results;
(e) Calculate Δ e and judge the population scale factor γ ;
(f) For i = 1 : γ N
(g) Update the discoverer’s location in the sparrow population according to Formula (9);
(h) End for
(i) For i = γ N : N
(j) Add Cauchy variation operator and update the follower’s position of sparrow population according to Formula (11);
(k) End for
(l) For i = 1 : S D
(m) Update sparrow population location according to Formula (3);
(n) End for
(o) Form a population P t with the size of N
(p) Calculate the entropy e_t of the population P_t according to Formula (5), and compute Δe_t;
(q) If Δ e t < μ
(r) Delete the |Z_t| − N reference points with the fewest associated individuals according to Zsum;
(s) End if
(t) t = t + 1 ;
(u) End for
(v) Output the optimal solution.
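The listing above can be tied together in a compact driver. The sketch below reuses the helper sketches introduced earlier (choose_divisions, population_entropy, competitive_discoverer_update, cauchy_follower_update); the scout update and the reference-point-based environment selection are abbreviated, so it is a reading aid rather than the authors’ implementation.

```python
import numpy as np

def non_dominated(F):
    """Boolean mask of the non-dominated rows of F (minimization)."""
    n = len(F)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                mask[i] = False
                break
    return mask

def maoissa(evaluate, M, D, N=100, max_T=300, lb=0.0, ub=1.0, ST=0.8):
    """Skeleton of Algorithm 1 built from the earlier sketches; the scout step and
    the reference-point screening of steps (q)-(s) are only indicated in comments."""
    X = lb + (ub - lb) * np.random.rand(N, D)
    F = np.array([evaluate(x) for x in X])                 # (N, M) objective values
    p, H = choose_divisions(M, N)                          # steps (a)-(b): adaptive reference points
    e_prev = population_entropy(X, lb, ub)
    gamma = 0.5                                            # initial discoverer fraction
    for t in range(max_T):                                 # step (c)
        elite = np.flatnonzero(non_dominated(F))           # step (d): elite leaders
        if len(elite) < 2:                                 # guard for degenerate fronts
            elite = np.arange(N)
        n_disc = max(1, int(gamma * N))
        for i in range(n_disc):                            # steps (f)-(h): Eq. (9)
            X[i] = competitive_discoverer_update(X[i], F[i], X[elite], F[elite],
                                                 (i + 1) / max_T, ST)
        x_p = X[elite[0]]                                  # best-discoverer placeholder
        x_worst = X[np.argmax(F.sum(axis=1))]              # worst-individual placeholder
        for i in range(n_disc, N):                         # steps (i)-(k): Eq. (11)
            X[i] = cauchy_follower_update(X[i], i, N, x_p, x_worst)
        # (scout update over SD*N warners and reference-point-based
        #  environment selection omitted for brevity)
        X = np.clip(X, lb, ub)
        F = np.array([evaluate(x) for x in X])             # step (o)
        e_t = population_entropy(X, lb, ub)                # step (p)
        gamma = float(np.clip(abs(e_t - e_prev), 0.1, 0.9))  # entropy-driven scale factor
        e_prev = e_t
    mask = non_dominated(F)
    return X[mask], F[mask]
```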

3.5. Time Complexity Analysis

Assume that M is the number of objectives of the high-dimensional multi-objective optimization problem, N is the sparrow population size, and D is the number of decision variables. The time complexity is discussed in two parts: the first concerns the adaptive reference point strategy, and the second the sparrow optimization itself. The time complexity of the original NSGA-III algorithm is $O(N^2 \log^{M-2} N)$ or $O(MN^2)$. The improved adaptive reference point strategy adds comparison operations at two stages: computing the entropy difference $\Delta e_t$ between two generations costs $O(N \log N)$, and the reference point screening stage also costs $O(N \log N)$; therefore, in the worst case, the time complexity of the improved adaptive reference point strategy is $\max\{O(N^2 \log^{M-2} N),\ O(MN^2),\ O(N \log N)\}$. The time complexity of the original sparrow search algorithm is $O(DN)$, and that of the polynomial mutation operation is $O(DN^2)$. Combining the two parts, the overall time complexity of the algorithm is $\max\{O(N^2 \log^{M-2} N),\ O(MN^2),\ O(N \log N),\ O(DN^2),\ O(DN)\}$.

4. Algorithm Performance Test

4.1. Algorithm Flow

To validate the performance of MaOISSA, four high-dimensional multi-objective algorithms, CMOPSO, SPEA/R [28], NSGA-III, and SMPSO, were selected as comparison algorithms and tested on 12 standard test functions from the DTLZ (DTLZ1-6) [29], MaF (MaF2-5, MaF7) [30], and SMOP [31] test sets. The properties of the test functions are shown in Table 2. The algorithm parameters were set as follows: population size 100 and maximum number of iterations 300, where D is the dimensionality of the decision variables. Each algorithm was run 30 times on each test function, and the experiments were conducted on the PlatEMO 3.5 [32] platform.

4.2. Performance Indicators

4.2.1. Inverted Generation Distance (IGD) Index

Inverted generational distance (IGD) [33] measures the average minimum distance between the Pareto reference solution set and the obtained solution set. As a comprehensive index, it evaluates both the distribution and the convergence of a solution set. The smaller the IGD value, the better the obtained solution set fits the Pareto reference set and the closer it is to the true Pareto front. Assuming that $P^*$ is a reference set sampled from the true Pareto front and $P$ is the optimal solution set obtained by the multi-objective optimization algorithm, IGD is calculated as follows:
$$IGD(P, P^*) = \frac{1}{|P^*|} \sum_{x \in P^*} dist(x, P)$$
where $dist(x, P)$ is the minimum Euclidean distance between a reference point $x$ in $P^*$ and the solution set $P$ obtained by the algorithm, and $|P^*|$ is the number of solutions in $P^*$.
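A direct implementation of the IGD definition (a short sketch; the array conventions are ours):

```python
import numpy as np

def igd(P, P_ref):
    """Inverted generational distance: mean distance from each reference point in
    P_ref (sampled from the true Pareto front) to its nearest solution in P.
    Both arguments are arrays of objective vectors; smaller is better."""
    P, P_ref = np.asarray(P), np.asarray(P_ref)
    dists = np.linalg.norm(P_ref[:, None, :] - P[None, :, :], axis=2)  # |P_ref| x |P|
    return dists.min(axis=1).mean()

# Example: a solution set lying exactly on the reference set gives IGD = 0.
ref = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
print(igd(ref, ref))   # 0.0
```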

4.2.2. Hypervolume Index

The hypervolume (HV) index [34] measures the volume of the objective space covered by the population with respect to a reference point. The greater the HV value, the higher the quality of the population. Unlike IGD, the calculation of HV does not require prior knowledge of the PF; however, its computational complexity grows exponentially with the number of objectives. Assume that the reference point $z^* = (z_1^*, \ldots, z_m^*)^T$ is dominated by all Pareto optimal objective vectors. HV is calculated by the following formulas:
$$HV(P) = volume\left( \bigcup_{f \in P} \left[f_1, z_1^*\right] \times \cdots \times \left[f_m, z_m^*\right] \right)$$
$$HV = \delta\left( \bigcup_{i=1}^{|S|} v_i \right)$$
where $P$ is the obtained approximate Pareto front, whose HV is bounded by $z^*$; $|S|$ is the number of non-dominated solutions; $v_i$ is the hypercube formed by the $i$th solution and the reference point; and $volume(\cdot)$ is the Lebesgue measure used to compute the volume.
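Exact hypervolume computation is expensive in high dimensions; the Monte Carlo estimate below (a hedged sketch, not the PlatEMO routine used in the experiments) illustrates the definition by sampling the box between the ideal point and the reference point:

```python
import numpy as np

def hv_monte_carlo(P, z_ref, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the hypervolume dominated by solution set P
    (minimization) with respect to reference point z_ref: the fraction of uniform
    samples in [ideal, z_ref] dominated by some member of P, scaled by the box volume."""
    rng = np.random.default_rng(seed)
    P, z_ref = np.asarray(P, float), np.asarray(z_ref, float)
    ideal = P.min(axis=0)
    box = np.prod(z_ref - ideal)
    S = rng.uniform(ideal, z_ref, size=(n_samples, len(z_ref)))
    dominated = np.any(np.all(P[None, :, :] <= S[:, None, :], axis=2), axis=1)
    return box * dominated.mean()

# Example: a single point at the ideal corner dominates the whole unit box.
print(hv_monte_carlo(np.array([[0.0, 0.0]]), np.array([1.0, 1.0])))  # ~1.0
```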

5. Experimental Results and Analysis

Figure 2 compares the Pareto approximation fronts obtained in three-dimensional objective space by MaOISSA and the four comparison algorithms on the test function SMOP3. The fronts of NSGA-III and MaOISSA are essentially the same, while Table 3 and Table 4 show that, for the 4-objective and 5-objective problems, MaOISSA’s overall indices are better than those of NSGA-III, indicating that MaOISSA gains an advantage as the number of objectives increases. Compared with SPEA/R and CMOPSO, MaOISSA has better diversity and convergence, while SMPSO shows poorer convergence and diversity on the three-dimensional Pareto approximation front.
Table 5 lists the IGD values of the five algorithms on the test function SMOP3 with three, five, eight, and thirteen objectives. When the number of objectives is three, NSGA-III performs best, but MaOISSA obtains similar results. As the objective dimension increases, MaOISSA converges better than the other algorithms, indicating that it retains good convergence and diversity in high-dimensional multi-objective optimization.
Table 3 and Table 4 show the IGD and HV statistics of MaOISSA, CMOPSO, SPEA/R, NSGA-III, and SMPSO on the 12 test functions. The non-parametric Wilcoxon rank-sum test with a significance level of 5% is used to compare the five algorithms. In the tables, “≈” means the two algorithms perform equivalently, “+” means the algorithm is better than MaOISSA, and “−” means it is worse than MaOISSA. M is the objective space dimension of a test function and D is its decision variable dimension. Each method was run 30 times on each test function to compute the mean and standard deviation of the results, with the best result for each test function shown in bold.
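The statistical protocol can be sketched with scipy.stats.ranksums; the wrapper below is our own helper with synthetic example numbers, reproducing the “+”, “−”, “≈” labeling used in Tables 3 and 4:

```python
import numpy as np
from scipy.stats import ranksums

def compare_runs(igd_other, igd_maoissa, alpha=0.05):
    """Wilcoxon rank-sum comparison of 30 independent runs (smaller IGD is better).
    Returns '+', '-', or '≈' from the point of view of the competing algorithm."""
    stat, p = ranksums(igd_other, igd_maoissa)
    if p >= alpha:
        return '≈'                                   # no significant difference
    return '+' if np.median(igd_other) < np.median(igd_maoissa) else '-'

# Example with synthetic run results (illustrative numbers only):
rng = np.random.default_rng(1)
print(compare_runs(rng.normal(0.15, 0.01, 30), rng.normal(0.12, 0.01, 30)))  # '-'
```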
As can be seen from Table 3, the proposed algorithm obtains the best IGD values on 9 of the 12 test functions. Compared with NSGA-III and SPEA/R, both of which are high-dimensional multi-objective optimization algorithms, MaOISSA shows better convergence and diversity on DTLZ5, MaF2, MaF4, MaF7, and SMOP3, indicating that it handles multi-modal problems well. For the high-dimensional multi-objective particle swarm algorithms, the IGD values of CMOPSO and SMPSO differ considerably from those of MaOISSA, so they cannot match its convergence speed. MaOISSA does not obtain the best results on DTLZ4, DTLZ6, or MaF5; however, its results are not far from the best, indicating good robustness across different types of problems.
As can be seen from Table 4, MaOISSA obtains the best average HV values on 8 of the 12 test functions, with the best performance on DTLZ1-3, DTLZ5, MaF2, MaF3, MaF7, and SMOP3, and a result close to that of SPEA/R on DTLZ4. NSGA-III obtains better results on the DTLZ6 test function, SMPSO is better than the other algorithms on MaF4, and SPEA/R is better on MaF5. These results show that MaOISSA has the best overall performance in most cases.
Figure 3 depicts box plots of the IGD metrics obtained by the five algorithms over 30 independent runs on the 12 chosen problems. The horizontal axis gives the names of the compared algorithms, while the vertical axis represents the IGD value. The outer top and bottom lines of each box mark the sample maximum and minimum, the inner top and bottom lines mark the upper and lower quartiles, and “+” marks outliers. MaOISSA’s IGD performance is more favorable on eleven test functions, particularly on DTLZ5, MaF2, MaF3, MaF4, MaF7, and SMOP3, where its IGD box plots are flatter than those of the other algorithms; on DTLZ1, DTLZ2, and DTLZ3 the box plots of MaOISSA and NSGA-III are virtually identical, indicating that the algorithm fits the Pareto front well.
Figure 4 depicts box plots of the HV metrics obtained by the five algorithms over 30 independent runs on the 12 test functions. It can be seen that the HV values of MaOISSA are generally high and its box plots are flat, particularly on DTLZ2, DTLZ5, MaF2, MaF7, and SMOP3, where its HV values improve significantly over the other algorithms; on DTLZ1, MaF3, and MaF4, the HV values of the algorithms are essentially the same, indicating that MaOISSA is as robust as the other algorithms. Therefore, although MaOISSA cannot achieve the optimum on every test function, it is able to solve most high-dimensional multi-objective optimization problems.

6. Case Analysis

To validate the hybrid-strategy-enhanced high-dimensional multi-objective sparrow search algorithm described in this study, a simulation case is conducted. An evaluation model of the innovation performance of the defense science and technology innovation ecosystem is used, constructed from three primary indicators and ten secondary indicators affecting innovation performance. Using defense science and technology data of Shaanxi Province, a multi-objective optimization model is created to determine the Pareto optimum and to achieve the optimal configuration of innovation performance for the defense science and technology innovation ecosystem.

6.1. Establishment of Objective Function

In the past, most evaluation methods of innovation performance focused on measuring three aspects: innovation output, innovation input, and innovation efficiency, mainly with DEA models or fuzzy comprehensive evaluation; multi-objective optimal configuration has rarely been used. Drawing on established measurement indicators of innovation performance from the literature, this paper adopts indicators of innovation input, innovation output, and innovation industry agglomeration to measure the innovation performance of the defense science and technology innovation ecosystem of Shaanxi Province. The index system is established as follows. Relevant Shaanxi Province statistics from 2010 to 2020 were collected from the “China Statistical Yearbook”, the “Shaanxi Provincial Statistical Yearbook”, the official website of the Shaanxi Provincial Office of Science, Technology and Industry for National Defense, the “China National Defense White Paper”, etc. The weights of the indicators were calculated using the entropy weighting method, as shown in Table 6.
Based on this, the objective function of innovation performance of the defense science and technology innovation ecosystem is established:
(a)
Investment in innovation resources
In an ecosystem, obtaining greater innovation performance and output with a lower input of innovation resources is one of its purposes, so the input of innovation resources should be minimized. $x_{1i}$ represents the $i$th secondary indicator under the first primary indicator. The decision variable $R\&Dexp_{indus}$ is the experimental development expenditure of R&D funds of industrial enterprises above the scale, $R\&Dexp_{def}$ is the internal expenditure of R&D funds of the defense science and technology industry, $R\&Dexp_{Scires}$ is the internal expenditure on R&D funds of research and development institutions, and $R\&Dexp_{Hedu}$ is the internal expenditure on R&D funds of higher education institutions. The following objective function is established:
$$\min f_1 = C_{11}\, R\&Dexp_{indus} + C_{12}\, R\&Dexp_{def} + C_{13}\, R\&Dexp_{Scires} + C_{14}\, R\&Dexp_{Hedu}$$
(b)
Innovation performance output
The essence of the transformation of scientific and technological achievements is the transfer, diffusion, and application of innovative knowledge, which is an important way of promoting the close integration of science and technology with the economy and of improving the operational efficiency of the defense science and technology innovation ecosystem. Therefore, one goal of the defense science and technology innovation ecosystem is to maximize the innovation performance of the system, which yields the following objective function:
$$\max f_2 = C_{21}\, Invpat_{scires} + C_{22}\, Invpat_{Hedu} + C_{23}\, Invpat_{indus} + C_{24}\, Invpat_{def}$$
The decision variable $Invpat_{scires}$ is the number of valid invention patents of research and development institutions, $Invpat_{Hedu}$ is the number of valid invention patents of colleges and universities, $Invpat_{indus}$ is the number of invention patents of industrial enterprises above the scale, and $Invpat_{def}$ is the number of defense invention patents granted.
(c)
Industrial agglomeration degree
When the defense science and technology innovation ecosystem develops to a certain stage, it has a clustering effect not only on the enterprises within the system but also on those outside it, thereby playing a driving role in regional economic development. Therefore, the higher the degree of industrial agglomeration, the greater the benefit to the surrounding areas. The objective function of industrial agglomeration is as follows:
$$\max f_3 = C_{31}\, Cmp + C_{32}\, Defc$$
The decision variable $Cmp$ is the proportion of the technology output of military-civilian integration innovation demonstration zones in the national output, and $Defc$ is the construction degree of the national defense science and technology innovation platform.
China’s total investment in research and experimental development (R&D) was estimated to reach 3.087 trillion yuan in 2022, according to preliminary estimates of the National Bureau of Statistics, and the ratio of R&D expenditure to gross domestic product (GDP) was expected to reach 2.55%, 0.12 percentage points higher than the previous year. The state’s investment in R&D funding is thus increasing year by year. Within a certain range, R&D investment can promote industry performance, but exceeding a critical point has the opposite effect. Therefore, referring to the ratio of R&D funding to GDP in other countries, the ratio of R&D investment in defense science and technology to GDP in Shaanxi Province is limited to at most 4%. This gives the constraint:
$$0 \le C_{11}\, R\&Dexp_{indus} + C_{12}\, R\&Dexp_{def} + C_{13}\, R\&Dexp_{Scires} + C_{14}\, R\&Dexp_{Hedu} \le 0.04\, GDP$$
The evaluation of the ecosystem supporting innovations in defense science and technology is multi-objective and must account for its many moving parts. For the remaining indicators, there are no mandatory investment intensity or other constraints, so the constraints are set to be greater than or equal to zero for each indicator.
In accordance with the preceding, the multi-objective optimization model of the defense science and technology innovation ecosystem’s innovation performance is:
$$\begin{cases} \min f_1 = C_{11}\, R\&Dexp_{indus} + C_{12}\, R\&Dexp_{def} + C_{13}\, R\&Dexp_{Scires} + C_{14}\, R\&Dexp_{Hedu} \\ \max f_2 = C_{21}\, Invpat_{scires} + C_{22}\, Invpat_{Hedu} + C_{23}\, Invpat_{indus} + C_{24}\, Invpat_{def} \\ \max f_3 = C_{31}\, Cmp + C_{32}\, Defc \\ 0 \le C_{11}\, R\&Dexp_{indus} + C_{12}\, R\&Dexp_{def} + C_{13}\, R\&Dexp_{Scires} + C_{14}\, R\&Dexp_{Hedu} \le 0.04\, GDP \\ R\&Dexp_{indus},\; R\&Dexp_{def},\; R\&Dexp_{Scires},\; R\&Dexp_{Hedu} \ge 0 \\ Cmp,\; Defc \ge 0 \\ Invpat_{scires},\; Invpat_{Hedu},\; Invpat_{indus},\; Invpat_{def} \ge 0 \end{cases}$$
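To make the model concrete, the three objectives and the R&D-intensity constraint can be wrapped in a single evaluation routine. The sketch below uses the entropy weights of Table 6; the decision-vector ordering, the GDP argument, and the assumption that the inputs are already standardized are ours:

```python
import numpy as np

# Entropy weights taken from Table 6
C1 = np.array([0.2074, 0.1646, 0.1956, 0.4324])   # innovation resource input
C2 = np.array([0.2509, 0.2297, 0.3232, 0.1961])   # innovation performance output
C3 = np.array([0.1158, 0.1193])                    # industrial agglomeration degree

def evaluate(x, gdp):
    """x = [R&Dexp_indus, R&Dexp_def, R&Dexp_Scires, R&Dexp_Hedu,
            Invpat_scires, Invpat_Hedu, Invpat_indus, Invpat_def, Cmp, Defc],
    assumed standardized as in Section 6.2. Returns (f1, f2, f3) and the violation
    of the constraints 0 <= f1 <= 0.04 * GDP and x >= 0. f2 and f3 are maximized,
    so a minimizing solver would optimize (f1, -f2, -f3)."""
    x = np.asarray(x, dtype=float)
    rd, pat, agg = x[:4], x[4:8], x[8:]
    f1 = float(C1 @ rd)           # minimize: innovation resource input
    f2 = float(C2 @ pat)          # maximize: innovation performance output
    f3 = float(C3 @ agg)          # maximize: industrial agglomeration degree
    violation = max(0.0, f1 - 0.04 * gdp) + float(np.sum(np.maximum(0.0, -x)))
    return (f1, f2, f3), violation
```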

6.2. Optimization Model Solving

By standardizing the objective functions and constraints of the constructed model, the representative feasible solutions in Table 7 and Table 8 are obtained using MaOISSA:
Based on Table 7 and Table 8, it can be seen that when the resource input f1 is smaller and the performance output f2 is higher, the industrial agglomeration degree f3 is lower, which indicates that, although the transformation of achievements is better, industrial agglomeration is affected to a certain extent when resource input is low. Corresponding to scheme No. 1 in Table 7, when all other resource inputs are normal, the internal R&D expenditure of defense science and technology is smaller, which affects the industrial agglomeration degree more strongly; the agglomeration degree is thus sensitive to the intensity of defense science and technology resource input. When the resource input f1 is basically unchanged but the innovation performance f2 and the industrial agglomeration f3 are both small, corresponding to scheme No. 2, the number of defense invention patents granted is 66% lower than in scheme No. 1, a larger drop than for the other innovation performance indicators, indicating that the transformation of defense science and technology has a larger impact on innovation performance. In scheme No. 3, compared with scheme No. 2, the resource investment f1 increases by 1.7% while the innovation performance f2 decreases by 64% and the industrial agglomeration f3 decreases by 10.23%; the investment intensity of innovation resources therefore has a large impact on industrial agglomeration, and all innovation performance indicators except those of defense science and technology decrease sharply. This indicates that a small change in resource investment can cause large changes in defense science and technology innovation. It is thus evident from the representative solutions that increases in both input and output can raise regional industrial agglomeration; to obtain higher performance and regional industrial agglomeration with the least resource input, investment in defense science and technology innovation should be greatly strengthened, because a small increase in resources can accelerate performance output and increase regional defense industry agglomeration.

7. Conclusions

This paper proposes MaOISSA, a hybrid-strategy-enhanced high-dimensional multi-objective sparrow search algorithm, to address the deficiencies of the sparrow algorithm in high-dimensional multi-objective optimization. Without manually setting the number of reference points in the objective space, reference points with little association are screened out automatically according to the population size and evolutionary generation, which efficiently retains non-dominated solutions with strong diversity and convergence. The global distribution of individual sparrows raises the selection pressure so that the growth in the number of objectives does not weaken the search capability of the sparrow algorithm. Introducing the competition mechanism into the sparrow algorithm, so that winning sparrow individuals guide population updates, and improving the discoverer position update formula effectively increase its convergence. In addition, the Cauchy mutation operator is added to the follower search formula to improve the algorithm’s ability to escape local optima. Simulations on high-dimensional multi-objective test functions such as SMOP3 and comparisons with the CMOPSO, SPEA/R, NSGA-III, and SMPSO algorithms show that the proposed algorithm has advantages in convergence and diversity when solving high-dimensional multi-objective optimization problems. MaOISSA is also applied to the multi-objective optimal design of the innovation ecosystem, and the results demonstrate its applicability.

Author Contributions

Conceptualization, L.R. and Y.Y.; Data curation, L.R.; Formal analysis, L.R.; Funding acquisition, L.R.; Investigation, X.L.; Methodology, L.R.; Project administration, L.R.; Resources, W.Z.; Software, L.R.; Supervision, W.Z.; Validation, L.R. and W.Z.; Visualization, X.L.; Writing—original draft, L.R.; Writing—review & editing, L.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by The Key Project of Shaanxi Education Department, grant number 19JZ056.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xue, J.K.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. 2020, 8, 22–34. [Google Scholar] [CrossRef]
  2. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  3. Seyedali, S. Dragonfly algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput. Appl. 2015, 28, 673–687. [Google Scholar]
  4. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper optimisation algorithm: Theory and application. Adv. Eng. Softw. 2017, 105, 30–47. [Google Scholar] [CrossRef] [Green Version]
  5. Li, Y.; Wang, S.; Chen, Q.; Wang, X. Comparative Study of Several New Swarm Intelligence Optimization Algorithms. Comput. Eng. Appl. 2020, 56, 1–12. [Google Scholar]
  6. Yang, X.S. A new metaheuristic bat-inspired algorithm. Comput. Knowl. Technol. 2010, 284, 65–74. [Google Scholar]
  7. Gad, A.G.; Sallam, K.M.; Chakrabortty, R.K.; Ryan, M.J.; Abohany, A.A. An improved binary sparrow search algorithm for feature selection in data classification. Neural Comput. Appl. 2022, 34, 15705–15752. [Google Scholar] [CrossRef]
  8. Zhang, Z.; He, R.; Yang, K. A bioinspired path planning approach for mobile robots based on improved sparrow search algorithm. Adv. Manuf. 2021, 10, 114–130. [Google Scholar] [CrossRef]
  9. Nie, F.; Wang, Y. Sparrow Search Algorithm Based on Adaptive t-Distribution and Random Walk. Electron. Sci. Technol. 2023, 1–7. [Google Scholar] [CrossRef]
  10. Hu, X.J.; Meng, B.M.; Li, P.; Ouyang, X.J.; Huang, S.P. Research and Application of Improved Sparrow Search Algorithm Based on Multi-Strategy. J. Chin. Comput. Syst. 2022, 1–10. Available online: http://kns.cnki.net/kcms/detail/21.1106.tp.20221018.0920.004.html (accessed on 7 March 2023).
  11. Li, D.; Wu, Y.; Zhu, C. Improved Sparrow Search Algorithm Based on A Variety of Improved Strategies. Comput. Sci. 2022, 49, 217–222. [Google Scholar]
  12. Tang, Y.; Li, C.; Li, S.; Cao, B.; Chen, C. A Fusion Crossover Mutation Sparrow Search Algorithm. Math. Probl. Eng. 2021, 2021, 9952606. [Google Scholar] [CrossRef]
  13. Ouyang, C.; Zhu, D.; Qiu, Y. Lens Learning Sparrow Search Algorithm. Math. Probl. Eng. 2021, 2021, 9935090. [Google Scholar] [CrossRef]
  14. Fu, H.; Liu, H. Improved sparrow search algorithm with multi-strategy integration and its application. Control. Decis. 2022, 37, 87–96. [Google Scholar] [CrossRef]
  15. Kong, F.; Song, J.; Yang, Z. A daily carbon emission prediction model combining two-stage feature selection and optimized extreme learning machine. Environ. Sci. Pollut. Res. 2022, 29, 87983–87997. [Google Scholar] [CrossRef]
  16. Wen, Z.Y.; Xie, J.; Xie, G.; Xu, X.Y. Multi-objective sparrow search algorithm based on new crowding distance. Comput. Eng. Appl. 2021, 57, 102–109. [Google Scholar]
  17. Wu, W.; Tian, L.; Wang, Z.; Zhang, Y.; Wu, J.; Gui, F. Novel multi-objective sparrow optimization algorithm with improved non-dominated ranking. Appl. Res. Comput. 2022, 39, 2012–2019. [Google Scholar] [CrossRef]
  18. Farina, M.; Amato, P. On the optimal solution definition for many-criteria optimization problems. In Proceedings of the Annual Meeting of the North American Fuzzy Information Processing Society, New Orleans, LA, USA, 27–29 June 2002; pp. 232–238. [Google Scholar]
  19. Deb, K. Multi-Objective Optimization Using Evolutionary Algorithms; John Wiley & Sons: Hoboken, NJ, USA, 2001; pp. 28–55. [Google Scholar]
  20. Geng, H.T.; Dai, Z.B.; Wang, T.L.; Xu, K. Improved NSGA-III Algorithm Based on Reference Point Selection Strategy. Pattern Recognit. Artif. Intell. 2020, 33, 191–201. [Google Scholar]
  21. Yang, W. Research on Many-objective Particle Swarm Optimization Algorithm. Ph.D. Thesis, Northwest University, Xi’an, China, 2020. [Google Scholar] [CrossRef]
  22. Han, M.; He, Y.; Zheng, D. Reference-point-based particle swarm optimization algorithm for many-objective optimization. Control. Decis. 2017, 32, 607–612. [Google Scholar] [CrossRef]
  23. Cai, X.J.; Hu, Z.M.; Zhang, Z.X.; Wang, Q.; Cui, Z.H.; Zhang, W.S. Multi-UAV coordinated path planning based on many-objective optimization. Sci. Sin. Inform. 2021, 51, 985–996. (In Chinese) [Google Scholar] [CrossRef]
  24. Yang, W.; Chen, L.; Wang, Y.; Zhang, M. Many-objective particle swarm optimization algorithm for fitness ranking. J. Xidian Univ. 2021, 48, 78–84. [Google Scholar] [CrossRef]
  25. Zhang, X.; Zheng, X.; Cheng, R.; Qiu, J.; Jin, Y. A competitive mechanism based multi-objective particle swarm optimizer with fast convergence. Inf. Sci. 2018, 427, 63–76. [Google Scholar] [CrossRef]
  26. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T.A. A fast and elitist multi-objective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef] [Green Version]
  27. Reyes-Sierra, M.; Coello, C.C. Multi-objective particleswarm optimizers: A survey of the state-of-the-art. Int. J. Comput. Intell. Res. 2012, 2, 287–308. [Google Scholar]
  28. Li, K.; Deb, K.; Zhang, Q.; Kwong, S. An evolutionary many-objective optimization algorithm based on dominance and decomposition. IEEE Trans. Evol. Comput. 2014, 19, 694–716. [Google Scholar] [CrossRef]
  29. Deb, K.; Thiele, L.; Laumanns, M.; Zitzler, E. Scalable Test Problems for Evolutionary Multi-Objective Optimization; Springer: London, UK, 2005. [Google Scholar]
  30. Cheng, R.; Li, M.Q.; Tian, Y.; Zhang, X.; Yang, S.; Jin, Y.; Yao, X. A Benchmark Test Suite for Evolutionary Many-Objective Optimization. Complex Intell. Syst. 2017, 3, 67–81. [Google Scholar] [CrossRef]
  31. Tian, Y.; Zhang, X.; Wang, C.; Jin, Y. An evolutionary algorithm for large-scale sparse multiobjective optimization problems. IEEE Trans. Evol. Comput. 2020, 24, 380–393. [Google Scholar] [CrossRef]
  32. Tian, Y.; Cheng, R.; Zhang, X.; Jin, Y. PlatEMO: A MATLAB platform for evolutionary multi-objective optimization [educational forum]. IEEE Comput. Intell. Mag. 2017, 12, 73–87. [Google Scholar] [CrossRef] [Green Version]
  33. He, X.; Zhou, Y.; Chen, Z. An Evolution Path Based Reproduction Operator for Many-Objective Optimization. IEEE Trans. Evol. Comput. 2017, 23, 29–43. [Google Scholar] [CrossRef]
  34. Zhou, C.; Dai, G.; Wang, M.; Li, X. Indicator and Reference Points Co-Guided Evolutionary Algorithm for Many-Objective Optimization Problems. Knowl. Based Syst. 2018, 140, 50–63. [Google Scholar]
Figure 1. MaOISSA Algorithm flow chart.
Figure 2. Comparison of Pareto Approximation Frontiers for Five Comparison Algorithms on Three-objective SMOP3.
Figure 3. Five algorithms for IGD indicator box plots on 12 test functions.
Figure 4. Five algorithms for HV indicator box plots on 12 test functions.
Table 1. Number of reference points H for objective dimension M and number of divisions p.
p \ M | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
2 | 3 | 6 | 10 | 15 | 21 | 28 | 36 | 45 | 55
3 | 4 | 10 | 20 | 35 | 56 | 84 | 120 | 165 | 220
4 | 5 | 15 | 35 | 70 | 126 | 210 | 330 | 495 | 715
5 | 6 | 21 | 56 | 126 | 252 | 462 | 792 | 1287 | 2002
6 | 7 | 28 | 84 | 210 | 462 | 924 | 1716 | 3003 | 5005
7 | 8 | 36 | 120 | 330 | 792 | 1716 | 3432 | 6435 | 11,440
8 | 9 | 45 | 165 | 495 | 1287 | 3003 | 6435 | 12,870 | 24,310
9 | 10 | 55 | 220 | 715 | 2002 | 5005 | 11,440 | 24,310 | 48,620
10 | 11 | 66 | 286 | 1001 | 3003 | 8008 | 19,448 | 43,758 | 92,378
Table 2. Characteristics of the test function set.
Characteristic | Problem
Linear Pareto frontier | DTLZ1, SMOP3
Concave Pareto front | DTLZ2-6, MaF2, MaF4
Convex Pareto frontier | MaF3, MaF5
Hybrid frontier | MaF7
Multimode | DTLZ1, DTLZ3, MaF3, MaF4, MaF7, SMOP3
Table 3. IGD values of MaOISSA, CMOPSO, SPEA/R, NSGA-III, and SMPSO on the test functions.
Problem | M | D | CMOPSO | NSGA-III | SPEA/R | SMPSO | MaOISSA
DTLZ15201.5355 × 10+2 (2.16 × 10+1) −6.1949 × 10+0 (2.23 × 10+0) =8.4207 × 10+0 (3.55 × 10+0) −1.4025 × 10+2 (6.76 × 10+1) −5.8102 × 10+0 (2.68 × 10+0) +
DTLZ24131.6753 × 10−1 (6.61 × 10−3) −1.4032 × 10−1 (1.37 × 10−5) −1.4498 × 10−1 (2.02 × 10−3) −3.8239 × 10−1 (4.40 × 10−2) −1.2362 × 10−1 (1.36 × 10−3) +
DTLZ34131.3917 × 10+2 (2.86 × 10+1) −1.2793 × 10+0 (1.08 × 10+0) =4.0254 × 10+0 (1.67 × 10+0) −1.5461 × 10+1 (2.01 × 10+1) −1.2553 × 10+0 (1.10 × 10+0) +
DTLZ44131.9280 × 10−1 (1.35 × 10−2) =2.6811 × 10−1 (1.59 × 10−1) +1.4599 × 10−1 (2.73 × 10−3) =3.7316 × 10−1 (3.48 × 10−2) =2.7557 × 10−1 (1.61 × 10−1) −
DTLZ54131.1479 × 10−1 (2.33 × 10−2) −5.5092 × 10−2 (1.41 × 10−2) −1.6755 × 10−1 (4.71 × 10−2) −7.2698 × 10−2 (2.27 × 10−2) −4.2965 × 10−2 (1.02 × 10−2) +
DTLZ64131.3460 × 10+0 (8.52 × 10−1) −1.0311 × 10−1 (3.91 × 10−2) +2.0997 × 10−1 (7.44 × 10−2) −2.5911 × 10+0 (1.12 × 10+0) −1.5857 × 10−1 (9.57 × 10−2) −
MaF25201.4812 × 10−1 (6.41 × 10−3) −1.4463 × 10−1 (5.84 × 10−3) −1.5543 × 10−1 (2.37 × 10−3) −1.6306 × 10−1 (5.15 × 10−3) −1.3384 × 10−1 (3.59 × 10−3) +
MaF35201.1606 × 10+6 (6.80 × 10+5) −5.7684 × 10+2 (4.90 × 10+2) =3.7808 × 10+3 (4.61 × 10+3) −1.4137 × 10+5 (1.06 × 10+5) −5.2688 × 10+2 (4.35 × 10+2) +
MaF45207.5265 × 10+3 (1.39 × 10+3) −2.1965 × 10+2 (1.11 × 10+2) −3.3508 × 10+2 (1.58 × 10+2) −7.7206 × 10+2 (7.74 × 10+2) −1.5868 × 10+2 (8.53 × 10+1) +
MaF55204.9650 × 10+0 (5.87 × 10−1) −3.0733 × 10+0 (1.32 × 10+0) −2.4431 × 10+0 (3.35 × 10−2) +6.1773 × 10+0 (7.56 × 10−1) −2.9255 × 10+0 (1.10 × 10+0) −
MaF75206.6403 × 10−1 (7.46 × 10−2) −3.8894 × 10−1 (2.15 × 10−2) −5.0488 × 10−1 (1.60 × 10−2) −7.0573 × 10−1 (1.34 × 10−1) −3.5604 × 10−1 (1.10 × 10−2) +
SMOP35202.8987 × 10+0 (1.05 × 10−1) −1.4187 × 10+0 (3.18 × 10−3) −1.4241 × 10+0 (7.34 × 10−3) −2.6776 × 10+0 (9.47 × 10−2) −1.3525 × 10+0 (7.71 × 10−2) +
+/−/= | 0/11/1 | 2/7/3 | 1/10/1 | 0/11/1 | 9/3/0
Table 4. HV values of MaOISSA, CMOPSO, SPEA/R, NSGA-III, and SMPSO on the test functions.
Problem | M | D | CMOPSO | NSGA-III | SPEA/R | SMPSO | MaOISSA
DTLZ15200.0000 × 10+0 (0.00 × 10+0) =0.0000 × 10+0 (0.00 × 10+0) =0.0000 × 10+0 (0.00 × 10+0) =0.0000 × 10+0 (0.00 × 10+0) =0.0000 × 10+0 (0.00 × 10+0) =
DTLZ24135.8341 × 10−1 (1.45 × 10−2) −6.9112 × 10−1 (5.29 × 10−4) −6.8638 × 10−1 (2.49 × 10−3) −2.0194 × 10−1 (4.40 × 10−2) −6.9849 × 10−1 (1.60 × 10−3) +
DTLZ34130.0000 × 10+0 (0.00 × 10+0) −1.7215 × 10−1 (2.36 × 10−1) =0.0000 × 10+0 (0.00 × 10+0) −4.9317 × 10−2 (7.02 × 10−2) =2.1667 × 10−1 (2.92 × 10−1) +
DTLZ44135.5090 × 10−1 (3.50 × 10−2) −6.2507 × 10−1 (8.22 × 10−2) −6.8375 × 10−1 (2.88 × 10−3) =4.8318 × 10−1 (4.39 × 10−2) −6.2713 × 10−1 (8.31 × 10−2) +
DTLZ54137.7229 × 10−2 (2.00 × 10−2) −1.3475 × 10−1 (3.14 × 10−3) −9.5340 × 10−2 (1.85 × 10−2) −1.3717 × 10−1 (2.95 × 10−3)=1.3806 × 10−1 (3.07 × 10−3) +
DTLZ64132.2421 × 10−2 (4.40 × 10−2) −1.2581 × 10−1 (1.03 × 10−2) +7.7745 × 10−2 (3.00 × 10−2) −4.0200 × 10−3 (2.17 × 10−2) −1.1446 × 10−1 (1.65 × 10−2) −
MaF25201.0783 × 10−1 (7.07 × 10−3) −1.5186 × 10−1 (4.38 × 10−3) −1.3081 × 10−1 (3.16 × 10−3) −9.8532 × 10−2 (6.34 × 10−3) −1.6640 × 10−1 (2.94 × 10−3) +
MaF35200.0000 × 10+0 (0.00 × 10+0) =0.0000 × 10+0 (0.00 × 10+0) =0.0000 × 10+0 (0.00 × 10+0) =0.0000 × 10+0 (0.00 × 10+0) =0.0000 × 10+0 (0.00 × 10+0) +
MaF45200.0000 × 10+0 (0.00 × 10+0) =0.0000 × 10+0 (0.00 × 10+0) =0.0000 × 10+0 (0.00 × 10+0) =3.7082 × 10−3 (8.66 × 10−3) +0.0000 × 10+0 (0.00 × 10+0) −
MaF55201.8146 × 10−1 (1.04 × 10−1) −7.3079 × 10−1 (6.76 × 10−2) −7.6085 × 10−1 (3.97 × 10−3) +1.6672 × 10−1 (7.74 × 10−2) −7.5769 × 10−1 (5.38 × 10−2) −
MaF75208.1061 × 10−2 (1.65 × 10−2) −2.2874 × 10−1 (5.70 × 10−3) −2.2162 × 10−1 (3.41 × 10−3) −6.4835 × 10−2 (6.20 × 10−2) −2.3392 × 10−1 (3.73 × 10−3) +
SMOP35200.0000 × 10+0 (0.00 × 10+0) −1.9189 × 10−3 (8.86 × 10−5) −1.7169 × 10−3 (2.04 × 10−4) −0.0000 × 10+0 (0.00 × 10+0) −2.2147 × 10−3 (1.25 × 10−3) +
+/−/= | 0/11/1 | 2/7/3 | 0/9/3 | 1/7/4 | 8/3/1
Table 5. IGD values for five algorithms with different target dimensions on SMOP3 test function.
Problem | M | D | CMOPSO | NSGA-III | SPEA/R | SMPSO | MaOISSA
SMOP3 | 3 | 20 | 1.3217 × 10+0 (6.91 × 10−2) − | 1.1998 × 10+0 (9.97 × 10−3) + | 1.2044 × 10+0 (1.03 × 10−2) + | 2.3338 × 10+0 (1.20 × 10−1) − | 1.2158 × 10+0 (1.10 × 10−2) −
SMOP3 | 5 | 20 | 2.8987 × 10+0 (1.05 × 10−1) − | 1.4187 × 10+0 (3.18 × 10−3) − | 1.4241 × 10+0 (7.34 × 10−3) − | 2.6776 × 10+0 (9.47 × 10−2) − | 1.3525 × 10+0 (7.71 × 10−2) +
SMOP3 | 8 | 20 | 3.3209 × 10+0 (1.48 × 10−1) − | 2.3996 × 10+0 (8.08 × 10−2) + | 2.9469 × 10+0 (1.54 × 10−1) − | 2.9915 × 10+0 (1.53 × 10−1) = | 2.2820 × 10+0 (1.51 × 10−1) +
SMOP3 | 13 | 20 | 2.7189 × 10+0 (2.06 × 10−1) − | 1.9253 × 10+0 (2.56 × 10−1) − | 2.7637 × 10+0 (2.48 × 10−1) − | 2.3267 × 10+0 (1.83 × 10−1) − | 1.2894 × 10+0 (1.17 × 10−1) +
Table 6. Innovation Performance Evaluation Indexes and Weights of Defense Science and Technology Innovation Ecosystem.
Primary Indicator | Secondary Indicator | Weight | Unit
Innovation resource input C1 | Experimental development expenditure of R&D funds for industrial enterprises above the scale C11 | 0.2074 | million
Innovation resource input C1 | Internal expenditure of R&D funds for defense science and technology industry C12 | 0.1646 | thousand
Innovation resource input C1 | Internal expenditure on R&D funding for research and development institutions C13 | 0.1956 | million
Innovation resource input C1 | Internal expenditure on R&D funds of higher education institutions C14 | 0.4324 | million
Innovation performance output C2 | Number of valid invention patents of research and development institutions C21 | 0.2509 | Pieces
Innovation performance output C2 | Number of valid invention patents of colleges and universities C22 | 0.2297 | Pieces
Innovation performance output C2 | Number of valid invention patents of industrial enterprises above the scale C23 | 0.3232 | Pieces
Innovation performance output C2 | Number of defense invention patents granted C24 | 0.1961 | Pieces
Industrial agglomeration degree C3 | Proportion of the technology output of military-civilian integration innovation demonstration zones in the national output C31 | 0.1158 | %
Industrial agglomeration degree C3 | Construction degree of national defense science and technology innovation platform C32 | 0.1193 | %
Table 7. Optimization results of secondary indicators.
No. | R&Dexp_indus | R&Dexp_def | R&Dexp_Scires | R&Dexp_Hedu | Invpat_scires | Invpat_Hedu | Invpat_indus | Invpat_def | Cmp | Defc
1 | 1,478,445.40 | 580,714.234 | 1,282,345.78 | 2322.246 | 13,000 | 23,513.48 | 25,000 | 6895.171 | 0.7 | 0.517
2 | 1,531,095.58 | 593,114.814 | 1,269,482.86 | 2110 | 12,999.96 | 25,000 | 20,772.08 | 2322.246 | 0.75 | 0.441
3 | 1,541,209.22 | 596,982.586 | 1,295,022.20 | 2110 | 2347 | 8985.675 | 4492 | 2110 | 0.69 | 0.56
Table 8. Optimization results of the objective function.
No. | f1 | f2 | f3
1 | 363,655.7245 | 16,763 | 69.77
2 | 363,758.5397 | 10,075.7138 | 67.41
3 | 370,019.2439 | 3588.47 | 60.5
Share and Cite

Ren, L.; Zhang, W.; Ye, Y.; Li, X. Hybrid Strategy to Improve the High-Dimensional Multi-Target Sparrow Search Algorithm and Its Application. Appl. Sci. 2023, 13, 3589. https://doi.org/10.3390/app13063589