1. Introduction
1.1. Overview of PSO Algorithm
Particle swarm optimization (PSO) is a meta-heuristic, population-based swarm intelligence algorithm first proposed by [1]. Recently, the PSO algorithm has attracted a great deal of attention from researchers for its powerful ability to solve a wide range of complicated optimization problems without requiring any assumptions on the objective function. The applications of the PSO algorithm are summarized by [2,3], which divide the applications into 26 different categories, including but not limited to: image and video analysis, control, design, power generation and power systems, and combinatorial optimization problems.
The PSO algorithm is a bionic algorithm which simulates the preying behavior of a bird flock. In the particle swarm optimization algorithm, each solution of the optimization problem is considered to be a "bird" in the search space, and it is called a "particle". The whole population of solutions is termed a "swarm", and all of the particles search by following the current best particle in the swarm. Each particle i has two characteristics: one is its position (denoted by $x_i$), which determines the particle's fitness value; the other is its velocity (denoted by $v_i$), which determines the direction and distance of the search. In iteration t (t is a positive integer), to avoid confusion, the position and velocity of particle i are usually denoted by $x_i^t$ and $v_i^t$. Each particle tracks two "best" positions: the first is the best position found by the particle itself so far, which is denoted by "$pbest_i$"; the second is the best position found by the whole swarm so far, denoted by "$gbest$". When the algorithm terminates, $gbest$ is declared to be the solution to our problem.
The velocity and position of each particle are updated by the equations:

$$v_i^{t+1} = \omega v_i^t + c_1 r_1 (pbest_i^t - x_i^t) + c_2 r_2 (gbest^t - x_i^t) \quad (1)$$

$$x_i^{t+1} = x_i^t + v_i^{t+1} \quad (2)$$

Here, $v_i^t$ is the velocity of particle i, $x_i^t$ is the position of particle i, and $\omega$ is the inertia weight. $pbest_i^t$ and $gbest^t$ are the local best position for particle i and the global best position found by all of the particles in iteration t, respectively. Here, $r_1$ and $r_2$ are two random numbers in [0, 1], while $c_1$ and $c_2$ are "learning factors", with $c_1$ termed the "cognitive learning factor" and $c_2$ the "social learning factor" [1].
From these formulas, the update of each velocity $v_i^t$ is composed of three parts: the first part is the inertia velocity before the change; the second part is the cognitive learning part, which represents the learning process of the particle from its own experience; the third part is the social learning part, which represents the learning process of the particle from the experience of the other particles.
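To make the update rule concrete, the following Python sketch applies Formulas (1) and (2) to a small swarm minimizing a toy quadratic objective. The objective, swarm size, and parameter values are illustrative assumptions rather than the settings used in this paper (whose implementation is in Matlab).

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Toy objective: sum of squares (minimum 0 at the origin)."""
    return float(np.sum(x ** 2))

n_particles, dim, iters = 20, 2, 100
w, c1, c2 = 0.7, 2.0, 2.0          # inertia weight and learning factors (illustrative values)

x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # positions
v = rng.uniform(-1.0, 1.0, (n_particles, dim))   # velocities
pbest = x.copy()
pbest_val = np.array([sphere(p) for p in x])
gbest = pbest[np.argmin(pbest_val)].copy()

for t in range(iters):
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    # Formula (1): inertia + cognitive learning + social learning
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    # Formula (2): move each particle along its velocity
    x = x + v
    vals = np.array([sphere(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best value found:", sphere(gbest))
```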
1.2. Related Works and Main Improvements of This Manuscript
Recent related studies of particle swarm optimization methods include the following. Ref. [4] proposed a multi-hierarchical hybrid particle swarm optimization algorithm. Ref. [5] proposed a combination of the genetic algorithm and particle swarm optimization for global optimization. Ref. [6] proposed an improved PSO algorithm for power system network reconfiguration. Ref. [7] introduced a multiobjective PSO algorithm based on a multistrategy. Ref. [8] proposed a vector angles-based many-objective PSO algorithm using an archive. Ref. [9] proposed a novel hybrid gravitational search particle swarm optimization algorithm. Ref. [10] proposed the application of particle swarm optimization to portfolio construction.
Previous works provide a significant foundation for particle swarm optimization algorithms in optimal designs. However, there are some shortcomings in previous work:
- (i)
In order to facilitate the operation, previous works use the same dynamic parameters for the whole particle swarm in each iteration. However, in some situations, different kinds of particles may benefit from different dynamic parameters.
- (ii)
Previous works mainly focus on the pessimistic criterion. The combination of the PSO algorithm with other useful decision making criteria, including the optimistic coefficient criterion and the minimax regret criterion, is seldom considered in previous research.
To solve these problems, in this paper, utilizing the core ideology of the genetic algorithm and dynamic parameters, an improved particle swarm optimization algorithm is proposed in Section 3. Then, based on the improved algorithm and combining the PSO algorithm with decision making, nested PSO algorithms are proposed in Section 4. The main improvements of this manuscript are as follows: in the improved PSO algorithm in Section 3, the top 10 percent of particles are chosen as "superior particles". Then, different cognitive learning factors and social learning factors are used for superior particles and normal particles. In each iteration, the superior particles split, and the inferior particles are eliminated. In the nested PSO algorithms in Section 4, two useful decision making criteria, the optimistic coefficient criterion and the minimax regret criterion, are combined with the PSO algorithm. These research studies have not yet been conducted by other researchers. The details of the improvements are given in Section 3 and Section 4.
The proposed algorithms are implemented on various mathematical functions, as shown in Section 5.
2. Background
2.1. Experimental Design and the Fisher Information Matrix
An experimental design $\xi$ which has n support points can be written in the form:

$$\xi = \begin{pmatrix} x_1 & x_2 & \cdots & x_n \\ w_1 & w_2 & \cdots & w_n \end{pmatrix}$$

Here, $x_1, \ldots, x_n$ are the values of the support points within the allowed design region, and $w_1, \ldots, w_n$ are the weights, which sum to 1 and represent the relative frequencies of observations at the corresponding design points.
For several nonlinear models, the design criterion to be optimized contains unknown parameters. The general form of the regression model can be written as $y = \eta(x, \theta) + \varepsilon$. Here, $\eta$ can be either a linear or nonlinear function, $\theta$ is the vector of unknown parameters, and $\xi$ is the design (it includes the information for both the weights and the values of the support points). The range of $\theta$ is Z, and the support points lie in the design space X. The value of a design is measured by its Fisher information matrix, which is defined to be the negative of the expectation of the matrix of second derivatives (with respect to $\theta$) of the log-likelihood function.
The Fisher information matrix for a given design $\xi$ is

$$M(\xi, \theta) = -E\left[ \frac{\partial^2 \log L(y; \xi, \theta)}{\partial \theta \, \partial \theta^{T}} \right],$$

where $L(y; \xi, \theta)$ is the probability function of the observations. To estimate the parameters accurately, the objective function (a loss function based on the Fisher information matrix) should be minimized over all designs on the design space.
For example, for the popular Michaelis–Menten model in the biological sciences, which is presented by [11]:

$$y = \frac{\theta_1 x}{\theta_2 + x} + \varepsilon,$$

the Fisher information matrix for a given design $\xi$ is defined by

$$M(\xi, \theta) = \sum_{i=1}^{n} w_i \left( \frac{x_i}{\theta_2 + x_i} \right)^{2} \begin{pmatrix} 1 & -\dfrac{\theta_1}{\theta_2 + x_i} \\ -\dfrac{\theta_1}{\theta_2 + x_i} & \dfrac{\theta_1^{2}}{(\theta_2 + x_i)^{2}} \end{pmatrix}.$$
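As an illustration of how such a matrix can be evaluated numerically, the short Python sketch below assembles the Michaelis–Menten information matrix of a candidate two-point design from the gradient form reconstructed above. The function names, the candidate design, and the nominal parameter values are hypothetical, chosen only for illustration.

```python
import numpy as np

def mm_gradient(x, theta):
    """Gradient of the Michaelis-Menten mean theta1*x/(theta2+x) with respect to theta."""
    t1, t2 = theta
    return np.array([x / (t2 + x), -t1 * x / (t2 + x) ** 2])

def information_matrix(support, weights, theta):
    """Fisher information M(xi, theta) = sum_i w_i f(x_i, theta) f(x_i, theta)^T."""
    M = np.zeros((2, 2))
    for x, w in zip(support, weights):
        f = mm_gradient(x, theta)
        M += w * np.outer(f, f)
    return M

# Hypothetical two-point design on [0, 200] and nominal parameter values.
support, weights = [60.0, 200.0], [0.5, 0.5]
theta = np.array([1.0, 100.0])

M = information_matrix(support, weights, theta)
print("log det M =", np.log(np.linalg.det(M)))   # one common design criterion
```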
Chen et al. [12] proposed an equivalence theorem for experimental designs based on the Fisher information matrix: for a heteroscedastic linear model with mean function $f(x)^{T}\theta$, where $\lambda(x)$ is the assumed reciprocal variance of the response at x, the variance of the fitted response at a point z is proportional to $f(z)^{T} M^{-1}(\xi) f(z)$, where $M(\xi) = \sum_{i} w_i \lambda(x_i) f(x_i) f(x_i)^{T}$. A design $\xi^{*}$ is optimal if there exists a probability measure $\mu$ on a subset of Z determined by $\xi^{*}$, where Z is the range of the unknown parameter, such that for all $x \in X$,

$$c(x, \mu, \xi^{*}) \le 0,$$

with equality at the support points of $\xi^{*}$. Here, $c(x, \mu, \xi^{*})$ denotes the dispersion function of the design.
2.2. Essential Elements of Decision Making
A decision making problem is composed of four elements [13,14]:
- (i)
A number of actions to be taken;
- (ii)
A number of states which cannot be controlled by the decision maker;
- (iii)
Objective function: payoff function or loss function which depends on both an action and a state (our objective is to maximize the payoff function or minimize the loss function);
- (iv)
Criterion: according to a certain criterion, the decision maker decides which action to take.
In the objective function $L(\xi, \theta)$, the state $\theta$ belongs to the set of states which are out of our control, and the design $\xi$ is an action to be taken.
2.3. Optimization Criterion for Decision Making
Decision-making with loss functions is proposed in several papers, such as [15]. Clearly, our objective is to minimize the loss function. Based on the loss function, there are several popular criteria for decision making:
- (i)
Pessimistic criterion: the pessimistic decision maker always considers the worst case; that is, it is supposed that the state will maximize the loss function. The decision maker will take the action that minimizes the loss function in the worst case. This criterion is also known as the minimax criterion. The formula for this criterion is:

$$\min_{\xi} \max_{\theta \in Z} L(\xi, \theta)$$
- (ii)
Optimistic coefficient criterion: usually, the decision maker will trade off between optimism and pessimism in decision making. This gives rise to the optimistic coefficient criterion, which takes a weighted average of the maximum and minimum of the loss function. The weight is called the optimistic coefficient, which is between 0 and 1 and reflects the degree of optimism of the decision maker. The formula for this criterion is:

$$\min_{\xi} \left[ \alpha \min_{\theta \in Z} L(\xi, \theta) + (1 - \alpha) \max_{\theta \in Z} L(\xi, \theta) \right]$$

Here, $\alpha$ is the optimistic coefficient. When $\alpha = 0$, the optimistic coefficient criterion reduces to the pessimistic criterion.
- (iii)
Minimax regret criterion: under this criterion, our objective is to minimize the maximum possible regret value. The regret value is defined as the difference between the loss under a certain action and the minimum loss possible under the same state. The formula for this criterion is:

$$\min_{\xi} \max_{\theta \in Z} R(\xi, \theta)$$

Here,

$$R(\xi, \theta) = L(\xi, \theta) - \min_{\xi'} L(\xi', \theta).$$
The significance of criteria (ii) and (iii) is as follows: usually, the decision maker will trade off between optimism and pessimism in decision making. This gives rise to the optimistic coefficient criterion, which takes a weighted average of the maximum and minimum of the possible loss. The weight is called the optimistic coefficient, which reflects the degree of optimism of the decision maker.
Sometimes, after the decision maker makes a decision, he or she may regret it when certain states appear. In this case, people want to minimize the maximum regret value, which is the distance between the loss value of the action they take and the minimum loss value possible in the relevant state. The regret value is also called the opportunity cost, which represents regret in the sense of lost opportunities.
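For the discrete case, the three criteria can be written down in a few lines. The Python sketch below tabulates the loss of every (action, state) pair in a matrix and selects an action under each criterion; the numbers are purely illustrative.

```python
import numpy as np

# Rows are actions (designs), columns are states; entries are losses (illustrative numbers).
loss = np.array([[4.0, 9.0, 2.0],
                 [6.0, 5.0, 5.0],
                 [3.0, 8.0, 7.0]])

# (i) Pessimistic (minimax) criterion: minimize the worst-case loss.
pessimistic_action = np.argmin(loss.max(axis=1))

# (ii) Optimistic coefficient criterion: weighted average of the best and worst loss per action.
alpha = 0.5   # optimistic coefficient in [0, 1]; alpha = 0 recovers the pessimistic criterion
hurwicz = alpha * loss.min(axis=1) + (1 - alpha) * loss.max(axis=1)
optimistic_action = np.argmin(hurwicz)

# (iii) Minimax regret criterion: regret = loss minus the best achievable loss in each state.
regret = loss - loss.min(axis=0)
regret_action = np.argmin(regret.max(axis=1))

print(pessimistic_action, optimistic_action, regret_action)
```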
3. Improved Particle Swarm Optimization for Experimental Design
In this section, combining the core ideology of the genetic algorithm and dynamic parameters, an improved PSO algorithm is proposed as the foundation of the nested PSO algorithm.
3.1. Main Improvement of Our Algorithm
- (i)
The superior particles are split and the inferior particles are eliminated in each iteration. That is, in each iteration, the fitness values of the particles are ranked from high to low, and the particles with the top 10 percent fitness values are taken as "superior particles". Then, each superior particle is split into two particles with the same velocities and positions, and the particles with the bottom 10 percent fitness values are deleted to keep the swarm at a constant size. This improvement adopts the core ideology of the genetic algorithm: the individuals with higher fitness values reproduce more offspring. The splitting procedure in optimization methods is usually called individual cloning.
- (ii)
Dynamic parameters $c_1$ and $c_2$ are utilized in the algorithm:

$$c_1 = c_{max} - (c_{max} - c_{min}) \cdot \frac{iter}{maxiter} \quad (9)$$

$$c_2 = c_{min} + (c_{max} - c_{min}) \cdot \frac{iter}{maxiter} \quad (10)$$

Here, iter is the current number of iterations and maxiter is the maximum number of iterations. $c_{max}$ and $c_{min}$ are the upper and lower bounds of the learning factors, respectively.
According to the ideology of the genetic algorithm and common sense, the superior particles have better learning abilities; thus, the learning factors for superior particles have higher upper and lower bounds than those for normal particles. The specific bounds were chosen after repeated attempts and comparisons, with larger values taken for superior particles than for normal particles.
Consequently, in the running process of the algorithm, the cognitive learning factor is linearly decreased and the social learning factor is linearly increased.
- (iii)
Dynamic parameter $\omega$ is utilized in the algorithm:

$$\omega = \omega_{max} - (\omega_{max} - \omega_{min}) \cdot \frac{iter}{maxiter} \quad (11)$$

$\omega_{max}$ and $\omega_{min}$ are the upper and lower bounds of $\omega$, respectively. According to common sense, the superior particles are more active; thus, they have lower inertia. In our algorithm, smaller values of $\omega_{max}$ and $\omega_{min}$ are set for superior particles than for normal particles.
Improvements (ii) and (iii) are utilized in the algorithm because these approaches are in accordance with the idea of particle swarm optimization: at the beginning, each bird has a large cognitive learning factor and small social learning factor, and each bird searches mainly by its own experience. After a period of time, as each bird gets more and more knowledge from the bird population, it relies increasingly on the social knowledge for its search. In addition, the effect of inertia velocity will decrease over time since the particles obtain more and more information from cognitive learning and social learning in the process of searching, so they rely increasingly on their learning instead of the inertia.
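A minimal Python sketch of the linear schedules in Formulas (9)–(11) follows; the numeric bounds are placeholders chosen only for illustration and are not the tuned values used for superior and normal particles in this paper.

```python
def learning_factors(it, max_it, c_max=2.5, c_min=0.5):
    """Formulas (9)-(10): c1 decreases and c2 increases linearly with the iteration count."""
    frac = it / max_it
    c1 = c_max - (c_max - c_min) * frac
    c2 = c_min + (c_max - c_min) * frac
    return c1, c2

def inertia_weight(it, max_it, w_max=0.9, w_min=0.4):
    """Formula (11): the inertia weight decreases linearly with the iteration count."""
    return w_max - (w_max - w_min) * it / max_it

# Superior particles would call these helpers with larger learning-factor bounds and
# smaller inertia bounds than normal particles; the defaults above are placeholders.
print(learning_factors(0, 500), learning_factors(500, 500), inertia_weight(250, 500))
```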
After applying these improvements, the improved PSO algorithms are as follows. Following the common notations in related research, in each iteration, the $v_i^t$, $x_i^t$, $pbest_i^t$ and $gbest^t$ in Formulas (1) and (2) are simply written as $v_i$, $x_i$, $pbest_i$ and $gbest$, without causing confusion.
3.2. Improved PSO Algorithm for Minimization/Maximization Problem
The stopping criterion is satisfied when a maximum number of iterations is reached or when the change of the fitness value in successive searches is negligible. In Algorithm 1, we set the maximum number of iterations to 500. When the difference of the fitness values between two adjacent iterations is less than 0.002, the change is considered negligible.
If any value of $x_i$ or $v_i$ exceeds the upper or lower bound of its search space, then we take the corresponding upper or lower bound instead of that value.
Clearly, this improved algorithm can be used to solve either a minimization or maximization problem. For a minimization problem, in step 1.3, the local best $pbest_i$ is the $x_i$ with the minimum fitness value. The update process of $pbest_i$ and $gbest$ in step 2.4 is: for each particle i, if the updated fitness value is less than the fitness value of the current $pbest_i$, then $pbest_i$ is updated to the new solution; otherwise, $pbest_i$ remains unchanged. $gbest$ is the particle which takes the minimum of the fitness values of all $pbest_i$.
For a maximization problem, in step 1.3, the local best $pbest_i$ is the $x_i$ with the maximum fitness value. The update process of $pbest_i$ and $gbest$ in step 2.4 is: for each particle i, if the updated fitness value is greater than the fitness value of the current $pbest_i$, then $pbest_i$ is updated to the new solution; otherwise, $pbest_i$ remains unchanged. $gbest$ is the particle which takes the maximum of the fitness values of all $pbest_i$ (see Figure A1, the flowchart of Algorithm 1, in Appendix A).
Algorithm 1: Improved PSO algorithm.
Initialization process:
1.1 For each of the n particles, initialize the particle position and velocity with random values in the corresponding search space.
1.2 Evaluate the fitness value of each particle according to the objective function.
1.3 Determine the local best and global best positions $pbest_i$ and $gbest$.
Update process:
2.1 Rank the fitness values of the particles from high to low, and take the particles with the top 10 percent fitness values as "superior particles". Then, split each superior particle into two particles with the same velocities and positions, and update the velocity of the particles with Formula (1).
2.2 Based on the velocity, update the position of the particles with Formula (2).
2.3 Update the fitness value of each particle.
2.4 Update the local and global best positions $pbest_i$ and $gbest$. Then, update the fitness values of $pbest_i$ and $gbest$.
2.5 Eliminate the particles with the bottom 10 percent fitness values.
If the stopping criterion is satisfied, output $gbest$ and its fitness value. If not, update $c_1$, $c_2$, and $\omega$ by Formulas (9)–(11), and repeat the update process.
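The following Python sketch mirrors the structure of Algorithm 1 for a maximization problem: splitting the top 10 percent of particles, eliminating the bottom 10 percent, and updating $c_1$, $c_2$, and $\omega$ with linear schedules. The parameter bounds, the stopping tolerance handling, and the test objective are illustrative assumptions, not the paper's tuned settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def improved_pso(fitness, dim, lo, hi, n=50, max_iter=500, tol=0.002):
    """Sketch of Algorithm 1 for a maximization problem."""
    x = rng.uniform(lo, hi, (n, dim))
    v = rng.uniform(-(hi - lo), hi - lo, (n, dim)) * 0.1
    fit = np.array([fitness(p) for p in x])
    pbest, pbest_fit = x.copy(), fit.copy()
    g = np.argmax(pbest_fit)
    gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]

    for it in range(max_iter):
        frac = it / max_iter
        c1, c2 = 2.5 - 2.0 * frac, 0.5 + 2.0 * frac   # Formulas (9)-(10), placeholder bounds
        w = 0.9 - 0.5 * frac                          # Formula (11), placeholder bounds

        # Step 2.1: split the top 10 percent ("superior particles").
        top = np.argsort(-fit)[: max(1, n // 10)]
        x, v = np.vstack([x, x[top]]), np.vstack([v, v[top]])
        pbest = np.vstack([pbest, pbest[top]])
        pbest_fit = np.concatenate([pbest_fit, pbest_fit[top]])

        # Steps 2.1-2.2: velocity and position updates, Formulas (1)-(2).
        m = x.shape[0]
        r1, r2 = rng.random((m, dim)), rng.random((m, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)            # clamp positions to the search space

        # Steps 2.3-2.4: update fitness values, local bests, and the global best.
        fit = np.array([fitness(p) for p in x])
        better = fit > pbest_fit
        pbest[better], pbest_fit[better] = x[better], fit[better]
        g = np.argmax(pbest_fit)
        previous_gbest_fit = gbest_fit
        if pbest_fit[g] > gbest_fit:
            gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]

        # Step 2.5: eliminate the bottom 10 percent to restore the swarm size.
        keep = np.argsort(-fit)[:n]
        x, v, fit = x[keep], v[keep], fit[keep]
        pbest, pbest_fit = pbest[keep], pbest_fit[keep]

        # Stopping rule from the paper: negligible change between adjacent iterations.
        if it > 0 and abs(gbest_fit - previous_gbest_fit) < tol:
            break
    return gbest, gbest_fit

# Example: maximize 100 - sum(x^2) on [-5, 5]^2 (maximum value 100 at the origin).
best, best_fit = improved_pso(lambda p: 100.0 - float(np.sum(p ** 2)), 2, -5.0, 5.0)
print(best, best_fit)
```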
4. Nested Particle Swarm Optimization for Experimental Design
The application of the particle swarm optimization algorithm to maximization and minimization problems and a nested PSO algorithm for the pessimistic criterion are presented by Chen et al. [12]. Chen's paper is a milestone in the research on applying particle swarm optimization to experimental design. The combination of decision making and particle swarm optimization has been studied in several previous papers, such as [16,17], Yang et al. [18] and Yang and Shu [19].
However, the combination of the PSO algorithm with other decision making criteria, including the optimistic coefficient criterion and the minimax regret criterion, is seldom considered in previous research. These combination problems are more interesting and challenging, and are worthy of in-depth study.
To solve these problems, nested PSO algorithms with multiple decision making criteria are proposed in this section. The implementations of these algorithms are presented in Section 5.
4.1. Introduction of Nested PSO Algorithms
For regression with a Fisher information matrix involving unknown parameters, we need two swarms of particles (one for the design $\xi$ and the other for the unknown parameter $\theta$) to solve the problem with a nested PSO algorithm. These two swarms of particles are used in different layers of iterations. In each layer, the fitness value is determined by one of the two swarms of particles. For convenience of expression, we denote the two swarms corresponding to $\xi$ and $\theta$ by swarm 1 and swarm 2, and their positions and velocities accordingly. Each swarm consists of 50 particles.
4.2. PSO Algorithm for Optimistic Coefficient Criterion
Define $g(\xi) = \alpha \min_{\theta \in Z} L(\xi, \theta) + (1 - \alpha) \max_{\theta \in Z} L(\xi, \theta)$. Our objective is to find $\min_{\xi} g(\xi)$.
The stopping criterion is satisfied when a maximum number of iterations is reached or when the change of the fitness value in successive searches is negligible. In Algorithm 2, we set the maximum number of iterations to 100. When the difference of the fitness values between two adjacent iterations is less than 0.2 percent of the current fitness value, the change is considered negligible.
Algorithm 2: PSO algorithm for optimistic coefficient criterion.
Initialization process:
1.1 For each of the n particles in each of the two swarms (swarm 1 for the design and swarm 2 for the unknown parameter), initialize the particle position and velocity with random vectors in the corresponding search space.
1.2 Evaluate the fitness values of the two swarms by the improved PSO algorithm. Then, initialize the local and global best positions.
1.3 Determine the local best and global best positions $pbest_i$ and $gbest$.
Update process:
2.1 Rank the fitness values of the particles from high to low, and take the particles with the top 10 percent fitness values as "superior particles". Then, split each superior particle into two particles with the same velocities and positions, and update the velocity of the particles by Formula (1).
2.2 Based on the velocity, update the position of the particles by Formula (2).
2.3 Based on the new position, calculate the fitness value by Algorithm 1.
2.4 Update the local and global best positions $pbest_i$ and $gbest$. Then, update the fitness values of $pbest_i$ and $gbest$.
2.5 Eliminate the particles with the bottom 10 percent fitness values.
If the stopping criterion is satisfied, output $gbest$ and its fitness value. If not, update $c_1$, $c_2$, and $\omega$ by Formulas (9)–(11), and repeat the update process.
If any value of the positions or velocities in either swarm exceeds the upper or lower bound of its search space, then we take the corresponding upper or lower bound instead of that value.
In this algorithm, the process of evaluating the maximum and minimum of the loss over the unknown parameter $\theta$ for a fixed design is the inner circulation, and the process of searching over the designs $\xi$ is the outer circulation. The pessimistic criterion is the special case of this algorithm with $\alpha = 0$.
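The nesting structure can be sketched in Python as follows: an inner PSO search evaluates the worst-case and best-case losses over the unknown parameter for a fixed design, and an outer PSO search minimizes the resulting weighted combination over designs. The stand-in loss function, search ranges, and swarm sizes are illustrative assumptions, not the design criterion used in this paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def pso_max(f, dim, lo, hi, n=20, iters=40):
    """Plain PSO maximizer used for each circulation of the nested search (sketch only)."""
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    fit = np.array([f(p) for p in x])
    pbest, pbest_fit = x.copy(), fit.copy()
    g = np.argmax(pbest_fit)
    gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fit = np.array([f(p) for p in x])
        better = fit > pbest_fit
        pbest[better], pbest_fit[better] = x[better], fit[better]
        g = np.argmax(pbest_fit)
        if pbest_fit[g] > gbest_fit:
            gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]
    return gbest, gbest_fit

def loss(design, theta):
    """Stand-in loss L(design, theta); a real application would use the design criterion."""
    return float((design - theta) ** 2 + 0.1 * theta ** 2)

def hurwicz_value(design, alpha, t_lo, t_hi):
    """Inner circulation: alpha * min_theta L + (1 - alpha) * max_theta L for a fixed design."""
    _, worst = pso_max(lambda th: loss(design, float(th[0])), 1, t_lo, t_hi)
    _, neg_best = pso_max(lambda th: -loss(design, float(th[0])), 1, t_lo, t_hi)
    return alpha * (-neg_best) + (1.0 - alpha) * worst

# Outer circulation: minimize the optimistic-coefficient value over designs
# (implemented as maximizing its negative). alpha = 0 gives the pessimistic criterion.
alpha = 0.5
best_design, neg_val = pso_max(lambda d: -hurwicz_value(float(d[0]), alpha, 0.0, 2.0),
                               1, -3.0, 3.0, n=10, iters=20)
print("design:", best_design, "criterion value:", -neg_val)
```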
4.3. PSO Algorithm for Minimax Regret Criterion
Define the regret $R(\xi, \theta) = L(\xi, \theta) - \min_{\xi'} L(\xi', \theta)$. Then, this optimization problem is to find $\min_{\xi} \max_{\theta \in Z} R(\xi, \theta)$. Since evaluating the regret itself requires an inner minimization over designs, this is a three-fold nested algorithm.
In this algorithm, the process of evaluating the regret over the unknown parameter $\theta$ is the inner circulation, and the process of searching over the designs $\xi$ is the outer circulation. The stopping criterion is satisfied when a maximum number of iterations is reached or when the change of the fitness value in successive searches is negligible. In Algorithm 3, we set the maximum number of iterations to 100. When the difference of the fitness values between two adjacent iterations is less than 0.2 percent of the current fitness value, the change is considered negligible.
Algorithm 3: PSO algorithm for minimax regret criterion.
Initialization process:
1.1 For each of the n particles in each of the two swarms, initialize the particle position and velocity with random vectors.
1.2 Compute the fitness value by the improved algorithm. Based on that, compute the regret value.
1.3 Determine the local best and global best positions $pbest_i$ and $gbest$.
Update process:
2.1 Rank the fitness values of the particles from high to low, and take the particles with the top 10 percent fitness values as "superior particles". Then, split each superior particle into two particles with the same velocities and positions, and update the velocity of the particles in swarm 2 by Formula (1).
2.2 Based on the velocity, update the position of the particles in swarm 2 by Formula (2).
2.3 Update the fitness value with Algorithm 1.
2.4 Update the velocity of the particles in swarm 1 by Formula (1).
2.5 Based on the velocity, update the position of the particles in swarm 1 by Formula (2).
2.6 Update the fitness value (the loss function) by Algorithm 1. Then, update $pbest_i$ and $gbest$, and update the fitness values of $pbest_i$ and $gbest$.
2.7 Eliminate the particles with the bottom 10 percent fitness values.
If the stopping criterion is satisfied, output $gbest$ and its fitness value. If not, update $c_1$, $c_2$, and $\omega$ by Formulas (9)–(11), and repeat the update process.
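The three-fold nesting can be illustrated with a deliberately tiny, self-contained Python sketch: one PSO search computes the best achievable loss at a given state (to obtain the regret), a second search finds the worst-case regret of a fixed design, and a third search minimizes that worst-case regret over designs. The loss function, search ranges, and the very small swarm budgets are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(3)

def pso_max(f, lo, hi, n=10, iters=12):
    """Tiny one-dimensional PSO maximizer; small budgets keep the three-level nesting fast."""
    x = rng.uniform(lo, hi, n)
    v = np.zeros(n)
    pbest = x.copy()
    pbest_fit = np.array([f(p) for p in x])
    g = np.argmax(pbest_fit)
    gbest, gbest_fit = pbest[g], pbest_fit[g]
    for _ in range(iters):
        r1, r2 = rng.random(n), rng.random(n)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fit = np.array([f(p) for p in x])
        better = fit > pbest_fit
        pbest[better], pbest_fit[better] = x[better], fit[better]
        g = np.argmax(pbest_fit)
        if pbest_fit[g] > gbest_fit:
            gbest, gbest_fit = pbest[g], pbest_fit[g]
    return gbest, gbest_fit

def loss(design, theta):
    """Stand-in loss L(design, theta)."""
    return (design - theta) ** 2 + 0.1 * theta ** 2

def regret(design, theta, d_lo=-3.0, d_hi=3.0):
    """R(design, theta) = L(design, theta) - min over designs of L(., theta): first nested search."""
    _, neg_min_loss = pso_max(lambda d: -loss(d, theta), d_lo, d_hi, n=10, iters=10)
    return loss(design, theta) - (-neg_min_loss)

def max_regret(design, t_lo=0.0, t_hi=2.0):
    """Worst-case regret over the unknown parameter: second nested search."""
    _, worst = pso_max(lambda th: regret(design, th), t_lo, t_hi, n=10, iters=12)
    return worst

# Outermost search (third level of nesting): the design minimizing the maximum regret.
best_design, neg_val = pso_max(lambda d: -max_regret(d), -3.0, 3.0, n=6, iters=8)
print("minimax-regret design:", best_design, "maximum regret:", -neg_val)
```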
5. Results and Comparisons
In this section, the improved PSO algorithm of Section 3 is first compared, in Section 5.1, with the traditional PSO on two unimodal functions and two multimodal functions to show the ability of the algorithm. In Section 5.2, we present comparisons of our improved PSO algorithm of Section 3 with a typical algorithm on one unimodal function and two multimodal functions. The test functions of Section 5.1 and Section 5.2 are mainly chosen from [4]. After that, the nested PSO algorithms of Section 4 are applied to two representative models with unknown parameters as examples. The algorithms are programmed in Matlab 2020a and run on an Intel Core i7 under Windows 7.
5.1. Comparisons of Traditional PSO Algorithm and Improved PSO Algorithm
In this subsection, the improved PSO algorithm of Section 3 is compared with the traditional PSO on two unimodal functions and two multimodal functions. Table 1 gives the information on the test functions. In order to facilitate the calculation, the original minimization problems are transformed into maximization problems with a maximum value of 100, which is why the fitness values are defined accordingly.
To eliminate the effect of randomness, for each function, each algorithm is run 50 times independently, and the statistical results are analyzed. In each run, when the fitness value exceeds 99.9, the computation is considered successful. In the table, the success rate indicates the percentage of successful runs (not the number of successes), and the average indicates the average of the fitness values.
Table 2 presents the comparison of the performance of the traditional PSO and the improved PSO.
From Table 2, the results of our improved PSO algorithm are much better than those of the traditional PSO for all four test functions, which confirms the ability of the improved PSO algorithm.
5.2. Comparisons of Our Improved PSO Algorithm with a Typical Combination of Genetic and PSO Algorithm
Ref. [5] proposed a typical combination of the genetic algorithm and particle swarm optimization for global optimization, which incorporates the crossover and mutation operations of the genetic algorithm into the PSO algorithm. This combination method is a common PSO variant. In this subsection, we present comparisons of our improved PSO algorithm with that typical combined algorithm on one unimodal function and two multimodal functions to show the ability of our improved algorithm. Two new test functions, listed in Table 3, are used together with one test function from Table 1 of Section 5.1.
The number of runs of each algorithm is the same as in Section 5.1. In each run, when the fitness value exceeds 99.9, the computation is considered successful. In the table, the success rate indicates the percentage of successful runs (not the number of successes), and the average indicates the average of the fitness values.
Table 4 presents the comparison of the performance of our improved PSO algorithm with the typical combination of the genetic and PSO algorithms in [5].
From Table 4, the results of our improved PSO algorithm are much better than those of the typical combined algorithm for all three test functions, which confirms the ability of the improved PSO algorithm.
5.3. Implementation of Nested PSO Algorithm
In this subsection, the nested PSO algorithms of Section 4 are applied to two representative models with unknown parameters as examples. Since the most often used values of the optimistic coefficient are 0.3, 0.5 and 0.7, these values are mainly used in the computations reported in Table 5 and Table 6. Extensions to other models are immediate, with a simple change of the objective function. More results of other models applying our algorithms are available upon request.
Example 1. Michaelis–Menten model. This model and its Fisher information matrix were introduced in Section 2. For the Michaelis–Menten model on a bounded design space, Ref. [11] showed that an optimal design is supported at two support points. In this section, the nested algorithms of Section 4 are applied to the Michaelis–Menten model with design space X = [0, 200].
Example 2. Two-parameter logistic regression model [20]. The probability of response is assumed to follow a logistic function of the design point x with a two-dimensional unknown parameter vector $\theta$; the information matrix of this model is the standard logistic regression information matrix (see [20]). The nested algorithms of Section 4 are applied to two-parameter logistic regression models.
From Table 5 and Table 6 above, the loss value under the optimistic coefficient criterion is better than that under the pessimistic criterion, and it decreases as the optimistic coefficient increases. That is because the pessimistic criterion always considers the worst case, whereas the optimistic coefficient criterion takes a trade-off between the optimistic case and the pessimistic case. When the optimistic coefficient increases, the extent of optimism gets larger, so the loss function gets smaller (and therefore better).
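As a companion sketch for Example 2, the information matrix of a two-parameter logistic model can be assembled as follows under the common parameterization $p(x) = 1/(1 + \exp(-(\theta_1 + \theta_2 x)))$. This parameterization, the candidate design, and the nominal parameter values are assumptions made for illustration; the exact form used in the paper follows [20].

```python
import numpy as np

def logistic_information(support, weights, theta):
    """M(xi, theta) = sum_i w_i p_i (1 - p_i) f(x_i) f(x_i)^T with f(x) = (1, x)^T,
    assuming p(x) = 1 / (1 + exp(-(theta1 + theta2 * x)))."""
    t1, t2 = theta
    M = np.zeros((2, 2))
    for x, w in zip(support, weights):
        p = 1.0 / (1.0 + np.exp(-(t1 + t2 * x)))
        f = np.array([1.0, x])
        M += w * p * (1.0 - p) * np.outer(f, f)
    return M

# Hypothetical two-point design and nominal parameter values.
M = logistic_information([-1.0, 1.0], [0.5, 0.5], theta=(0.0, 1.0))
print("log det M =", np.log(np.linalg.det(M)))
```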
Figure 1 plots the value and weight of support point 1 for the Michaelis–Menten model with 50 particles. The vertical coordinate represents the value of support point 1, and the horizontal coordinate represents the weight of support point 1.
Figure 1 shows how the particles converge to the best solution after 50 iterations (the best solution is indicated by a red star).
Figure 2 plots the equivalence theorem check, that is, the dispersion function $c(x, \mu, \xi^{*})$ versus x, for the two-parameter logistic regression models. The vertical coordinate represents the value of $c(x, \mu, \xi^{*})$, and the horizontal coordinate represents the value of x. In all four plots, $c(x, \mu, \xi^{*}) \le 0$ for all x ∈ X, which confirms that all of the results obtained by our series of algorithms are optimal.
6. Conclusions and Future Works
In this paper, an improved particle swarm optimization (PSO) algorithm is proposed and implemented on two unimodal functions and two multimodal functions. Then, combining the PSO algorithm with the theory of decision making under uncertainty, nested PSO algorithms with two decision making criteria are proposed and implemented on the Michaelis–Menten model and two-parameter logistic regression models. For the Michaelis–Menten model, the particles converge to the best solution after 50 iterations. For the two-parameter logistic regression models, the optimality of the algorithms is verified by the equivalence theorem. In the nested PSO algorithms, the loss value decreases as the optimistic coefficient increases.
The PSO algorithm is a powerful algorithm that needs only a well-defined objective function to minimize or maximize under different optimality criteria; here, this is the loss function based on the Fisher information matrix. Thus, extensions to other models are immediate, with a simple change of the objective function. More results for other models applying our algorithms are available upon request. A limitation of our PSO method is that it does not work very efficiently on problems with a complicated matrix structure, which are more suitably solved with the simulated annealing algorithm.
Future work includes, but is not limited to, the comprehensive comparison of this series of PSO algorithms with other metaheuristic algorithms, such as the genetic algorithm and simulated annealing algorithm. This is interesting and challenging work, which will be researched in the near future.
Author Contributions
Supervision, D.C.C.; Writing—original draft, C.L. All authors have read and agreed to the published version of the manuscript.
Funding
This work is supported by NNSFC 72171133 (National Natural Science Foundation of China).
Data Availability Statement
All data supporting reported results are presented in the manuscript.
Acknowledgments
This manuscript is an improved version of the research results in the Ph.D. dissertation of Chang Li under the supervision of Daniel Coster in the Department of Mathematics and Statistics at Utah State University (https://digitalcommons.usu.edu/cgi/viewcontent.cgi?amp=&article=2541&context=etd, accessed on 5 May 2022). Neither the manuscript nor any parts of its content are published in another journal. We really appreciate the editors and reviewers for their very careful reading and helpful comments.
Conflicts of Interest
The authors declare no conflict of interest.
Appendix A. Flowcharts of Algorithms 1–3
Figure A1. Flowchart of Algorithm 1.
Figure A2. Flowchart of Algorithm 2.
Figure A3. Flowchart of Algorithm 3.
References
- Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948.
- Poli, R. Analysis of the publications on the applications of particle swarm optimisation. J. Artif. Evol. Appl. 2008, 28, 685175.
- Poli, R. Particle swarm optimization—An overview. Swarm Intell. 2007, 1, 33–57.
- Jin, M.; Lu, H. A multi-subgroup hierarchical hybrid of genetic algorithm and particle swarm optimization (in Chinese). Control Theory Appl. 2013, 30, 1231–1238.
- Hachimi, H.; Ellaia, R.; Hami, A.E. A New Hybrid Genetic Algorithm and Particle Swarm Optimization. Key Eng. Mater. 2012, 498, 115–125.
- Wu, Y.; Song, Q. Improved Particle Swarm Optimization Algorithm in Power System Network Reconfiguration. Math. Probl. Eng. 2021, 2021, 5574501.
- Zou, K.; Liu, Y.; Wang, S.; Li, N.; Wu, Y. A Multiobjective Particle Swarm Optimization Algorithm Based on Grid Technique and Multistrategy. J. Math. 2021, 2021, 1626457.
- Lei, Y.A.; Xin, H.A.; Ke, L.B. A vector angles-based many-objective particle swarm optimization algorithm using archive. Appl. Soft Comput. 2021, 106, 107299.
- Khan, T.A.; Ling, S.H. A novel hybrid gravitational search particle swarm optimization algorithm. Eng. Appl. Artif. Intell. 2021, 102, 104263.
- Chen, R.R.; Huang, W.K.; Yeh, S.K. Particle swarm optimization approach to portfolio construction. Intell. Syst. Account. Financ. Manag. 2021, 28, 182–194.
- Dette, H.; Wong, W. E-optimal designs for the Michaelis–Menten model. Stat. Probab. Lett. 1999, 44, 405–408.
- Chen, R.B.; Chang, S.P.; Wang, W.; Tung, H.C.; Weng, K.W. Minimax optimal designs via particle swarm optimization methods. Stat. Comput. 2015, 25, 975–988.
- Diao, Z.; Zhen, H.D.; Liu, J.; Liu, G. Operations Research; Higher Education Press: Beijing, China, 2001.
- Stoegerpollach, M.; Rubino, S.; Hebert, C.; Schattschneider, P. Decision Making and Problem Solving Strategies; Kogan Page: London, UK, 2007.
- Fozunbal, M.; Kalker, T. Decision-Making with Unbounded Loss Functions. In Proceedings of the 2006 IEEE International Symposium on Information Theory, Seattle, WA, USA, 9–14 July 2006; IEEE: Piscataway, NJ, USA, 2006; pp. 2171–2175.
- Rahmat-Samii, Y. Evolutionary Optimization Methods for Engineering: Part II—Particle Swarm Optimization; IEEE: Piscataway, NJ, USA, 2011.
- Abido, A.A. Particle swarm optimization for multimachine power system stabilizer design. In Proceedings of the Power Engineering Society Summer Meeting, Vancouver, BC, Canada, 15–19 July 2001.
- Yang, R.; Wang, L.; Wang, Z. Multi-objective particle swarm optimization for decision-making in building automation. In Proceedings of the 2011 IEEE Power and Energy Society General Meeting, Detroit, MI, USA, 24–28 July 2011; pp. 1–5.
- Yang, L.; Shu, L. Application of Particle Swarm Optimization in the Decision-Making of Manufacturers' Production and Delivery. In Electrical, Information Engineering and Mechatronics 2011; Springer: Berlin/Heidelberg, Germany, 2012; pp. 83–89.
- King, J.; Wong, W.K. Minimax D-optimal Designs for the Logistic Model. Biometrics 2000, 56, 1263–1267.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).