Comparison of Optimization Methods for the Attitude Control of Satellites
Abstract
1. Introduction
2. Materials and Methods
2.1. OPS-SAT Scenario
2.2. Fuzzy Controller
2.3. Optimization Algorithms
2.3.1. Modified Genetic Algorithm
- (i) The first modification is related to the calculation of the spread. This calculation was proposed in [23] but was incorrectly implemented in the MATLAB® code, so it was corrected.
- (ii) The second modification concerns how the spread change, represented by the variable “spreadChange” and involved in the stopping criterion, is calculated. The provided stopping criterion was defined using abstract terms such as “geometric mean of the relative change”, which needed clarification. The original calculation of this relative change was therefore replaced by the mean of the differences in the spread value between consecutive iterations over a window of iterations (see the spread-change sketch after this list). These changes were made in the file “gamultiobjConverged”.
- (iii) The third modification was made to the “stepgamultiobj.m” file, where the tolerance for detecting duplicates in the population was relaxed from 10⁻¹² to 10⁻⁴, as the former was deemed excessively strict.
- (iv) The final modification is related to the “DistanceMeasureFcn” option. The default distance function, “distancecrowding”, was replaced by “distancecrowdingCustom”. The latter normalizes the objective values before computing the distances, which are obtained by summing the contributions of all dimensions of the objective space. Points at the extremes, which do not have two neighbors, use only the distance to their nearest neighbor, whereas each remaining point contributes, for each objective, half the difference between the objective values of its immediately preceding and following neighbors. In the original function, these distances were not divided by two and the normalization of the values was not intuitive. Finally, the resulting distance vector is divided by the number of objectives (a step also omitted in the original function), and its extreme values are assigned an infinite distance, which is discarded in the subsequent calculation of the spread (see the crowding-distance sketch after this list).
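To make the window-based spread change of item (ii) more concrete, the following MATLAB sketch shows one plausible form of that calculation; the variable names, window length, tolerance, and sample data are assumptions for illustration and are not taken from the modified “gamultiobjConverged” file.

```matlab
% Hedged sketch of the window-based spread change (names and values assumed).
spreadHistory     = [0.90 0.70 0.55 0.50 0.48 0.47 0.465 0.463 0.462 0.462 0.461];
functionTolerance = 1e-3;   % assumed tolerance
windowSize        = 10;     % assumed number of iterations in the window

recent       = spreadHistory(max(1, end - windowSize):end); % spread values inside the window
deltas       = abs(diff(recent));                           % change between consecutive iterations
spreadChange = mean(deltas);                                % mean change over the window
converged    = spreadChange < functionTolerance;            % stopping criterion
```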
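Likewise, the following sketch illustrates the kind of crowding-distance computation described in item (iv): objectives are normalized first, extreme points use only their nearest neighbor, interior points use half the neighbor gap per objective, the sum is divided by the number of objectives, and the extreme points finally receive an infinite distance. The function and variable names are illustrative and do not reproduce the actual “distancecrowdingCustom” source.

```matlab
function d = crowdingDistanceSketch(F)
% Hedged sketch of a crowding-distance measure similar to the one described
% above. F: nPoints-by-nObj matrix of objective values of one non-dominated
% set (at least two points and minimization of all objectives are assumed).
[n, nObj] = size(F);
range = max(F, [], 1) - min(F, [], 1);
Fn    = (F - min(F, [], 1)) ./ max(range, eps);   % normalize each objective
d        = zeros(n, 1);
boundary = false(n, 1);
for m = 1:nObj
    [vals, order] = sort(Fn(:, m));
    dm = zeros(n, 1);
    dm(order(1))   = vals(2)   - vals(1);         % extremes: nearest neighbor only
    dm(order(end)) = vals(end) - vals(end - 1);
    for k = 2:n - 1                               % interior points: half the neighbor gap
        dm(order(k)) = (vals(k + 1) - vals(k - 1)) / 2;
    end
    d = d + dm;                                   % sum contributions of all objectives
    boundary(order([1 end])) = true;              % remember the extreme points
end
d = d / nObj;                                     % divide by the number of objectives
d(boundary) = Inf;                                % extremes get an infinite distance
end
```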
2.3.2. Modified Particle Swarm Optimization
- (i) The first modification is tied to the calculation of the algorithm’s convergence, since it is advisable to have well-defined and analogous stopping criteria to facilitate future comparisons between algorithms. For this reason, the final stopping criterion was changed. Originally, the algorithm’s stopping criterion belonged to the “Improvement-Based Criteria” category [38], accounting for any improvement of the best objective function value in each iteration. The stopping criterion proposed here is also of the “Improvement-Based Criteria” type; however, instead of monitoring only the best objective function value in each iteration, a certain fraction of the best objective function values is monitored, and a calculation analogous to the spread change is performed. It is important to note that these best objective function values consider, for every particle, the best position it has reached up to and including the current iteration, and not only the particles that occupy the best positions in each iteration. To implement this change, an additional optional parameter called “ParetoFraction” was introduced into the algorithm, which sets the fraction of the total population that participates in the stopping criterion. For example, for a swarm of 100 particles and ParetoFraction = 0.25, the best objective function values of 25 particles are collected for the stopping-criterion calculation. The resulting window is a matrix with as many columns as “MaxStallIterations” + 26 and as many rows as ParetoFraction × SwarmSize. Once the columns are sorted from the oldest to the most recent iteration and the rows from the best to the worst value, the relative change is computed over this window (see the stopping-criterion sketch after this list).
- (ii) The second modification addresses the fact that two optimization methods of different types are compared: single-objective optimization (PSO) and multi-objective optimization (GA). To facilitate the comparison, this study constructs the set of non-dominated points from all the positions attained by every particle in the swarm. For this purpose, the functions “outputFcnPSO” and “rateFcnOptimOPSSAT_PSO” were modified to store relevant information throughout all iterations, such as the positions of the particles in each iteration, their associated utility function value, and the value of each objective of the original multi-objective problem. A dominance calculation based on this per-iteration information allowed the Pareto front to be built incrementally: a preliminary Pareto front was constructed from the initialization information; the positions of the particles in the first iteration were then added to it and the set of non-dominated particles was recalculated; and this process was repeated until all iterations had been processed (see the Pareto-front sketch after this list). Since the dominance calculation relies on variables stored during each iteration, all the necessary information is only available once the algorithm stops; the dominance calculation is therefore not part of the optimization itself, but part of the post-processing of the data.
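As an illustration of the ParetoFraction-based stopping criterion of item (i), the following MATLAB sketch computes a relative change over a window of stored best objective function values. The window length, the normalization, the tolerance, and the dummy data are assumptions for illustration and do not reproduce the exact formula used in the paper.

```matlab
% Hedged sketch of the window-based relative change (assumed names and data).
swarmSize         = 100;
paretoFraction    = 0.25;
windowLength      = 12;                      % assumed number of stored iterations (columns)
functionTolerance = 1e-4;                    % assumed tolerance

% Dummy window: one row per tracked particle (best-so-far objective values),
% one column per stored iteration (oldest first), rows sorted best to worst.
bestWindow = sort(rand(paretoFraction * swarmSize, windowLength), 1);

deltas    = abs(diff(bestWindow, 1, 2));     % change between consecutive iterations
relChange = mean(deltas(:)) / max(mean(abs(bestWindow(:))), eps);  % assumed normalization
converged = relChange < functionTolerance;   % stopping criterion
```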
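The incremental construction of the Pareto front described in item (ii) can be sketched as follows; the data layout (“history” as a cell array of per-iteration objective matrices) and the function names are assumptions for illustration, and all objectives are treated as minimized.

```matlab
function front = incrementalParetoSketch(history)
% Hedged sketch of the post-processing described above. "history" is assumed
% to hold one cell per iteration, each an nParticles-by-nObj matrix of
% objective values gathered while the PSO was running.
front = zeros(0, size(history{1}, 2));
for it = 1:numel(history)
    candidates = [front; history{it}];            % previous front + new positions
    front      = candidates(isNonDominated(candidates), :);
end
end

function keep = isNonDominated(F)
% True for rows of F that are not dominated by any other row
% (all objectives are assumed to be minimized).
n    = size(F, 1);
keep = true(n, 1);
for i = 1:n
    for j = 1:n
        if j ~= i && all(F(j, :) <= F(i, :)) && any(F(j, :) < F(i, :))
            keep(i) = false;                      % row i is dominated by row j
            break
        end
    end
end
end
```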
2.3.3. Multi-objective Particle Swarm Optimization
- (i) Implementation by Víctor Martínez-Cagigal: This implementation is a vectorized version of the particle swarm, meaning that it evaluates the objective function simultaneously for all particles. It requires the objective function to handle all particles concurrently, which makes it impractical when the objective function involves a separate prior simulation for each individual particle.
- (ii) Implementation by Mostapha Kalami Heris and Yarpiz: This implementation shares key characteristics with MOPSO, focusing on keeping the design variables within the specified boundaries. It does not incorporate the strict constraint handling of the original MOPSO formulation, nor does it include the innovations from [40]. One notable feature is the progressive damping of the inertia weight, which emphasizes global exploration initially and gradually shifts towards local exploitation with each iteration (a sketch of this damping is given after this list).
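For reference, the progressive inertia damping mentioned in the second implementation typically multiplies the inertia weight by a constant factor at the end of every iteration; the values below are illustrative and are not taken from the cited code.

```matlab
% Hedged sketch of progressive inertia damping (illustrative values only).
w     = 0.9;        % initial inertia weight: favours global exploration
wDamp = 0.99;       % damping ratio applied once per iteration
maxIt = 200;
for it = 1:maxIt
    % ... velocity and position updates of the swarm would go here ...
    w = w * wDamp;  % inertia decays, gradually shifting the search towards exploitation
end
```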
- (i) To adapt it to the simulation framework, the original script containing the MOPSO algorithm was transformed into a function that accepts all problem-specific data as input arguments. This function returns the same data as the GAMULTIOBJ function: the Pareto front in the design and objective spaces, the reason for termination, and a structured output containing relevant information (such as the final population with its objectives, the number of iterations and evaluations, and metrics of the Pareto front like the spread or the average distance). Additionally, a history was added to track the values of certain variables throughout the entire optimization process. This adaptation ensures that the MOPSO algorithm meets the requirements and constraints of the specific simulation environment and problem being addressed.
- (ii) Another change was the incorporation of two stopping criteria, since the algorithm was originally designed to run for a predefined number of iterations. On the one hand, the spread and “distancecrowdingCustom” functions from the GA were adapted to implement exactly the same stopping criteria used in the multi-objective genetic algorithm; on the other hand, the algorithm also stops when a maximum optimization time is exceeded.
- (iii) The last change was the replacement of the dominance-calculation function with a vectorized and computationally more efficient version proposed by Víctor Martínez-Cagigal. Since this calculation represents a significant portion of the computational cost of the algorithm, this modification led to considerable savings in computational time.
2.4. Metrics
3. Results
3.1. Calibration Maneuver
- Initial phase: The angular error is approximately zero as the body axes align with the target axes. This phase lasts 10 s and serves to evaluate initial stability metrics, assessing the controller’s ability to maintain the commanded attitude with an error signal below the established limit.
- Step phase: A new attitude is commanded that deviates 10 degrees from the body axis being calibrated. The controller must act accordingly to achieve the target attitude within a variable time depending on the dynamics of the axis being calibrated.
- Final phase: The target attitude is maintained for another 10 s, which serves to evaluate residual stability metrics, assessing the controller’s ability to maintain an approximately zero angular error after the step phase (a sketch of the commanded-angle profile is given after this list).
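A sketch of the commanded-angle profile of one calibration maneuver is given below. The 10 s initial and final phases and the 10-degree step follow the description above, while the step-phase duration and the sampling step are assumed placeholders, since the actual duration depends on the dynamics of the axis being calibrated.

```matlab
% Hedged sketch of the commanded-angle profile of one calibration maneuver.
dt        = 0.1;                          % assumed sampling step [s]
tInitial  = 10;                           % initial phase [s]
tStep     = 60;                           % assumed step-phase duration [s]
tFinal    = 10;                           % final phase [s]
stepAngle = 10;                           % commanded deviation [deg]

t         = 0:dt:(tInitial + tStep + tFinal);
commanded = zeros(size(t));
commanded(t >= tInitial) = stepAngle;     % new attitude commanded after the initial phase

plot(t, commanded), grid on
xlabel('Time [s]'), ylabel('Commanded angle [deg]')
```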
3.2. Optimization Results
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Bello, Á.; del Castañedo, Á.; Olfe, K.S.; Rodríguez, J.; Lapuerta, V. Parameterized Fuzzy-Logic Controllers for the Attitude Control of Nanosatellites in Low Earth Orbits. A Comparative Study with PID Controllers. Expert Syst. Appl. 2021, 174, 114679. [Google Scholar] [CrossRef]
- Walker, A.R.; Putman, P.T.; Cohen, K. Solely Magnetic Genetic/Fuzzy-Attitude-Control Algorithm for a CubeSat. J. Spacecr. Rocket. 2015, 52, 1627–1639. [Google Scholar] [CrossRef]
- Calvo, D.; Avilés, T.; Lapuerta, V.; Laverón-Simavilla, A. Fuzzy Attitude Control for a Nanosatellite in Low Earth Orbit. Expert Syst. Appl. 2016, 58, 102–118. [Google Scholar] [CrossRef]
- Bello, A.; Olfe, K.S.; Rodríguez, J.; Ezquerro, J.M.; Lapuerta, V. Experimental Verification and Comparison of Fuzzy and PID Controllers for Attitude Control of Nanosatellites. Adv. Space Res. 2023, 71, 3613–3630. [Google Scholar] [CrossRef]
- European Space Agency. Spacecraft Data Systems and Architectures. Available online: https://www.esa.int/Enabling_Support/Space_Engineering_Technology/Onboard_Computers_and_Data_Handling/Onboard_Computers_and_Data_Handling (accessed on 16 July 2024).
- Meß, J.-G.; Dannemann, F.; Greif, F. Techniques of Artificial Intelligence for Space Applications—A Survey. In Proceedings of the European Workshop on On-Board Data Processing, Noordwijk, The Netherlands, 25–27 February 2019. [Google Scholar]
- Parouha, R.P.; Verma, P. State-of-the-Art Reviews of Meta-Heuristic Algorithms with Their Novel Proposal for Unconstrained Optimization and Applications. Arch. Comput. Methods Eng. 2021, 28, 4049–4115. [Google Scholar] [CrossRef]
- Konak, A.; Coit, D.W.; Smith, A.E. Multi-Objective Optimization Using Genetic Algorithms: A Tutorial. Reliab. Eng. Syst. Saf. 2006, 91, 992–1007. [Google Scholar] [CrossRef]
- Hassan, R.; Cohanim, B.; De Weck, O.; Venter, G. A Comparison of Particle Swarm Optimization and the Genetic Algorithm. In Proceedings of the 46th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Austin, TX, USA, 18 April 2005. [Google Scholar]
- Sa’adah, A.; Sasmito, A.; Pasaribu, A.A. Comparison of Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) for Estimating the Susceptible-Exposed-Infected-Recovered (SEIR) Model Parameter Values. J. Inf. Syst. Eng. Bus. Intell. 2024, 10, 290–301. [Google Scholar] [CrossRef]
- Azam, M.H.; Ahmad, A.; Altaf, U.; Sarwar, S. Comparison of Genetic Algorithm and Particle Swarm Optimization for DC Optimal Power Flow. In Proceedings of the 2023 25th International Multitopic Conference (INMIC), Lahore, Pakistan, 17 November 2023; pp. 1–5. [Google Scholar]
- Calloquispe-Huallpa, R.; Huaman-Rivera, A.; Ordoñez-Benavides, A.F.; Garcia-Garcia, Y.V.; Andrade-Rengifo, F.; Aponte-Bezares, E.E.; Irizarry-Rivera, A. A Comparison Between Genetic Algorithm and Particle Swarm Optimization for Economic Dispatch in a Microgrid. In Proceedings of the 2023 IEEE PES Innovative Smart Grid Technologies Latin America (ISGT-LA), San Juan, PR, USA, 6 November 2023; pp. 415–419. [Google Scholar]
- Zhang, X.; Li, Y.; Chu, G. Comparison of Parallel Genetic Algorithm and Particle Swarm Optimization for Parameter Calibration in Hydrological Simulation. Data Intell. 2023, 5, 904–922. [Google Scholar] [CrossRef]
- Cheng, S.; Lu, H.; Lei, X.; Shi, Y. A Quarter Century of Particle Swarm Optimization. Complex Intell. Syst. 2018, 4, 227–239. [Google Scholar] [CrossRef]
- Wang, J.W.; Wang, H.F.; Ip, W.H.; Furuta, K.; Kanno, T.; Zhang, W.J. Predatory Search Strategy Based on Swarm Intelligence for Continuous Optimization Problems. Math. Probl. Eng. 2013, 2013, 1–11. [Google Scholar] [CrossRef]
- Jain, M.; Saihjpal, V.; Singh, N.; Singh, S.B. An Overview of Variants and Advancements of PSO Algorithm. Appl. Sci. 2022, 12, 8392. [Google Scholar] [CrossRef]
- Katoch, S.; Chauhan, S.S.; Kumar, V. A Review on Genetic Algorithm: Past, Present, and Future. Multimed. Tools Appl. 2021, 80, 8091–8126. [Google Scholar] [CrossRef] [PubMed]
- OPS-SAT Mission Overview. Available online: https://www.esa.int/Enabling_Support/Operations/OPS-SAT (accessed on 16 July 2024).
- Fratini, S.; Policella, N.; Silva, R.; Guerreiro, J. On-Board Autonomy Operations for OPS-SAT Experiment. Appl. Intell. 2022, 52, 6970–6987. [Google Scholar] [CrossRef]
- Kubicka, M.; Zeif, R.; Henkel, M.; Hörmer, A.J. Thermal Vacuum Tests for the ESA’s OPS-SAT Mission. Elektrotech. Inftech. 2022, 139, 16–24. [Google Scholar] [CrossRef]
- Zeif, R.; Kubicka, M.; Hörmer, A.J. Development and Application of an Embedded Computer System for CubeSats Exemplified by the OPS-SAT Space Mission. Elektrotech. Inftech. 2022, 139, 8–15. [Google Scholar] [CrossRef]
- Takagi, T.; Sugeno, M. Fuzzy Identification of Systems and Its Applications to Modeling and Control. IEEE Trans. Syst. Man Cybern. 1985, SMC-15, 116–132. [Google Scholar] [CrossRef]
- Laumanns, M.; Thiele, L.; Deb, K.; Zitzler, E. Combining Convergence and Diversity in Evolutionary Multiobjective Optimization. Evol. Comput. 2002, 10, 263–282. [Google Scholar] [CrossRef]
- Poli, R. An Analysis of Publications on Particle Swarm Optimization Applications; University of Essex: Colchester, UK, 2007. [Google Scholar]
- Bonyadi, M.R.; Michalewicz, Z. Particle Swarm Optimization for Single Objective Continuous Space Problems: A Review. Evol. Comput. 2017, 25, 1–54. [Google Scholar] [CrossRef]
- Lee, K.; Park, J. Application of Particle Swarm Optimization to Economic Dispatch Problem: Advantages and Disadvantages. In Proceedings of the 2006 IEEE PES Power Systems Conference and Exposition, Atlanta, GA, USA, 29 October–1 November 2006; pp. 188–192. [Google Scholar]
- Gad, A.G. Particle Swarm Optimization Algorithm and Its Applications: A Systematic Review. Arch. Comput. Methods Eng. 2022, 29, 2531–2561. [Google Scholar] [CrossRef]
- Pedersen, M.E.H. Good Parameters for Particle Swarm Optimization; Hvass Laboratories, 2010. Available online: https://api.semanticscholar.org/CorpusID:7496444 (accessed on 16 July 2024).
- Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
- Mezura-Montes, E.; Coello Coello, C.A. Constraint-Handling in Nature-Inspired Numerical Optimization: Past, Present and Future. Swarm Evol. Comput. 2011, 1, 173–194. [Google Scholar] [CrossRef]
- Iadevaia, S.; Lu, Y.; Morales, F.C.; Mills, G.B.; Ram, P.T. Identification of Optimal Drug Combinations Targeting Cellular Networks: Integrating Phospho-Proteomics and Computational Network Analysis. Cancer Res. 2010, 70, 6704–6714. [Google Scholar] [CrossRef] [PubMed]
- Liu, M.; Shin, D.; Kang, H.I. Parameter Estimation in Dynamic Biochemical Systems Based on Adaptive Particle Swarm Optimization. In Proceedings of the 2009 7th International Conference on Information, Communications and Signal Processing (ICICS), Macau, China, 8–10 December 2009; pp. 1–5. [Google Scholar]
- Shi, Y.; Eberhart, R.C. Parameter Selection in Particle Swarm Optimization. In Evolutionary Programming VII; Porto, V.W., Saravanan, N., Waagen, D., Eiben, A.E., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 1998; Volume 1447, pp. 591–600. ISBN 978-3-540-64891-8. [Google Scholar]
- Eberhart, R.C.; Shi, Y. Comparing Inertia Weights and Constriction Factors in Particle Swarm Optimization. In Proceedings of the 2000 Congress on Evolutionary Computation. CEC00 (Cat. No.00TH8512), La Jolla, CA, USA, 16–19 July 2000; Volume 1, pp. 84–88. [Google Scholar]
- Clerc, M.; Kennedy, J. The Particle Swarm—Explosion, Stability, and Convergence in a Multidimensional Complex Space. IEEE Trans. Evol. Comput. 2002, 6, 58–73. [Google Scholar] [CrossRef]
- Trelea, I.C. The Particle Swarm Optimization Algorithm: Convergence Analysis and Parameter Selection. Inf. Process. Lett. 2003, 85, 317–325. [Google Scholar] [CrossRef]
- Taherkhani, M.; Safabakhsh, R. A Novel Stability-Based Adaptive Inertia Weight for Particle Swarm Optimization. Appl. Soft Comput. 2016, 38, 281–295. [Google Scholar] [CrossRef]
- Zielinski, K.; Laur, R. Stopping Criteria for a Constrained Single-Objective Particle Swarm Optimization Algorithm. Informatica 2007, 31, 51–59. [Google Scholar]
- Coello, C.C.A.; Lechuga, M.S. MOPSO: A Proposal for Multiple Objective Particle Swarm Optimization. In Proceedings of the 2002 Congress on Evolutionary Computation. CEC’02 (Cat. No.02TH8600), Honolulu, HI, USA, 12–17 May 2002; Volume 2, pp. 1051–1056. [Google Scholar]
- Sierra, M.R.; Coello Coello, C.A. Improving PSO-Based Multi-Objective Optimization Using Crowding, Mutation and ∈-Dominance. In Evolutionary Multi-Criterion Optimization; Coello Coello, C.A., Hernández Aguirre, A., Zitzler, E., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2005; Volume 3410, pp. 505–519. [Google Scholar]
- Tan, K.C.; Lee, T.H.; Khor, E.F. Evolutionary Algorithms for Multi-Objective Optimization: Performance Assessments and Comparisons. Artif. Intell. Rev. 2002, 17, 251–290. [Google Scholar] [CrossRef]
- Song, K.-Y.; Gupta, M.M.; Jena, D.; Subudhi, B. Design of a Robust Neuro-Controller for Complex Dynamic Systems. In Proceedings of the NAFIPS 2009—2009 Annual Meeting of the North American Fuzzy Information Processing Society, Cincinnati, OH, USA, 14–17 June 2009; pp. 1–5. [Google Scholar]
| Metric | Statistic | PSO | GA | MOPSO |
|---|---|---|---|---|
| AE | Mean | 0.4215 | 1.3857 | 0.7630 |
| AE | Standard Deviation | 0.0620 | 0.0142 | 0.0803 |
| RNDI | Mean | 0.2080 | 0.1853 | 0.1773 |
| RNDI | Standard Deviation | 0.0544 | 0.0492 | 0.0813 |
| APD | Mean | 0.1077 | 0.0717 | 0.1654 |
| APD | Standard Deviation | 0.0307 | 0.0051 | 0.0950 |
| EPD | Mean | 0.0754 | 0.0247 | 0.0340 |
| EPD | Standard Deviation | 0.1616 | 0.0633 | 0.1209 |
| SD | Mean | 0.1588 | 0.1255 | 0.1119 |
| SD | Standard Deviation | 0.0614 | 0.0806 | 0.0316 |
| Spread | Mean | 0.2484 | 0.1813 | 0.1570 |
| Spread | Standard Deviation | 0.1194 | 0.1255 | 0.0844 |