A Comparative Analysis of Optimization Algorithms for Finite Element Model Updating on Numerical and Experimental Benchmarks
Abstract
1. Introduction
2. Benchmarked Optimization Algorithms and Performance Metrics
2.1. Generalized Pattern Search (GPS)
- The search bounds of the input parameters are linearly scaled to the interval [0, 1]. This is needed since the employed mesh size is equal in all dimensions.
- At every successful poll, the mesh size is doubled. Conversely, it is halved after any unsuccessful poll.
- The algorithm is stopped when the maximum number of objective function evaluations is reached.
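As an illustration, the poll-and-mesh-update loop described above can be sketched in Python as follows (a minimal sketch: the coordinate poll directions, the initial mesh size, and all names are illustrative assumptions, not the implementation used in the paper):

```python
import numpy as np

def pattern_search(f, x0, max_evals=200, mesh0=0.25):
    """Sketch of a Generalized Pattern Search on the unit hypercube.

    Inputs are assumed pre-scaled to [0, 1]; the mesh size is doubled
    after a successful poll, halved after an unsuccessful one, and the
    search stops when the evaluation budget is exhausted.
    """
    x = np.clip(np.asarray(x0, dtype=float), 0.0, 1.0)
    fx = f(x)
    evals, mesh = 1, mesh0
    d = len(x)
    directions = np.vstack([np.eye(d), -np.eye(d)])  # coordinate poll set
    while evals < max_evals:
        success = False
        for v in directions:
            if evals >= max_evals:
                break
            xt = np.clip(x + mesh * v, 0.0, 1.0)  # respect scaled bounds
            ft = f(xt)
            evals += 1
            if ft < fx:  # successful poll: move and expand the mesh
                x, fx, success = xt, ft, True
                break
        mesh = mesh * 2.0 if success else mesh / 2.0
    return x, fx
```

Because the bounds are pre-scaled, a single scalar mesh size is meaningful in every dimension, which is exactly why the scaling step above is required.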
2.2. Simulated Annealing (SA)
Strategy 1:
- The initial temperature is set at 100.
- The temperature gradually decreases at each iteration according to the (exponential) cooling schedule T_k = T_0 · 0.95^k, where k is equal to the iteration number.
- The reannealing function selects the next point in a random direction, with a step length equal to the current temperature T.
- If a sampled point is less fit, it is accepted with probability 1/(1 + exp(Δ/T)), where Δ is the difference between the objective values.
- Reannealing occurs every 100 consecutively accepted sampled points.
Strategy 2:
- The initial temperature is set at 50.
- The temperature gradually decreases at each iteration according to a linear cooling schedule in the iteration number k.
- The reannealing function selects the next point in a random direction, with a step length equal to the current temperature T.
- If a sampled point is less fit, it is accepted according to the same acceptance function as in Strategy 1.
- Reannealing never occurs.
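The two strategies share the same basic loop; a minimal sketch of Strategy 1 follows (the explicit cooling and acceptance formulas here are assumed forms modeled on common defaults, not taken verbatim from the paper, and the reannealing step is omitted for brevity):

```python
import math
import random

def simulated_annealing(f, x0, bounds, t0=100.0, max_iters=500, seed=0):
    """Sketch of the SA loop: exponential cooling T_k = t0 * 0.95**k and
    acceptance probability 1 / (1 + exp(delta / T)) for worse points.
    The random step has length tied to the current temperature.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    x = list(x0)
    fx = f(x)
    best_x, best_f = x[:], fx
    for k in range(1, max_iters + 1):
        t = t0 * 0.95 ** k          # exponential cooling schedule
        step = min(t, hi - lo)      # step length tied to temperature
        cand = [min(hi, max(lo, xi + rng.uniform(-1.0, 1.0) * step))
                for xi in x]
        fc = f(cand)
        delta = fc - fx
        # always accept improvements; accept worse points stochastically
        if delta <= 0 or rng.random() < 1.0 / (1.0 + math.exp(min(delta / t, 700.0))):
            x, fx = cand, fc
        if fx < best_f:
            best_x, best_f = x[:], fx
    return best_x, best_f
```

The `min(delta / t, 700.0)` guard only prevents floating-point overflow once the temperature becomes vanishingly small.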
2.3. Genetic Algorithm (GA)
- Compliance with the optimization bounds is enforced by ensuring that each individual is generated (through suitable crossover and mutation operators) within the given constraints.
- The initial population, necessary to initialize the algorithm, consists of 50 randomly chosen points (i.e., 10 times the number of dimensions).
- The crossover fraction is set to 0.8.
- The elite size is set at 5% of the population size.
- The mutation fraction varies dynamically, according to the genetic diversity of each generation.
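A compact sketch of the GA configuration above (population of 10·dim individuals, crossover fraction 0.8, 5% elitism; the adaptive, diversity-driven mutation fraction is replaced here by a fixed-rate placeholder, and the operators are illustrative choices):

```python
import random

def genetic_algorithm(f, dim, bounds=(0.0, 1.0), pop_size=None, gens=30,
                      crossover_frac=0.8, elite_frac=0.05, seed=0):
    """Sketch of a real-coded GA with elitism and blend crossover.

    The blend (convex-combination) crossover keeps children inside the
    bounds automatically, enforcing the constraints by construction.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    pop_size = pop_size or 10 * dim
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    n_elite = max(1, int(elite_frac * pop_size))
    for _ in range(gens):
        pop.sort(key=f)  # fittest (lowest objective) first
        nxt = [ind[:] for ind in pop[:n_elite]]          # elitism
        while len(nxt) < pop_size:
            p1, p2 = rng.sample(pop[:pop_size // 2], 2)  # truncation selection
            if rng.random() < crossover_frac:
                a = rng.random()                          # blend crossover
                child = [a * u + (1.0 - a) * v for u, v in zip(p1, p2)]
            else:
                child = p1[:]
            if rng.random() < 0.1:                        # fixed-rate mutation
                i = rng.randrange(dim)
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0.0, 0.1)))
            nxt.append(child)
        pop = nxt
    best = min(pop, key=f)
    return best, f(best)
```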
2.4. Bayesian Sampling Optimization (BO)
- I. The seed points are randomly chosen within the optimization space, defined by the optimization bounds of each input parameter.
- II. The optimization procedure is initialized by computing the objective function at these seed points.
- III. A Gaussian Process (GP) is fitted by maximizing the marginal log-likelihood, which enables the selection of the optimal set of hyperparameters. Moreover, a small amount of Gaussian noise is added to the observations (such that the prior distribution has covariance K).
- IV. To maximize the acquisition function, several thousand predictions are computed at points randomly chosen within the optimization space. A selection of the best points is then further refined via local search (to this end, the MATLAB function fmincon, https://it.mathworks.com/help/optim/ug/fmincon.html, last visited on 15 October 2023, is used in this application), and the overall best point is finally chosen.
- V. The objective function is computed at the point corresponding to the acquisition function maximum.
- An ARD Matérn 5/2 kernel function is selected.
- The input variables are log-transformed.
- Four acquisition functions, as previously described, are considered. As will be shown, the best overall results are achieved with the UCB acquisition function; all four options are discussed in a dedicated paragraph.
- The seed size was set to 50 points, following the advice of ref. [22] to set it to (at least) 10d, where d is the number of dimensions of the optimization problem (i.e., the number of updating parameters).
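Steps III–V can be illustrated with a single BO iteration (a simplified sketch: one shared Matérn 5/2 length scale instead of the paper's ARD kernel, fixed hyperparameters instead of the marginal-likelihood fit, and no fmincon refinement of the candidate maxima — all names and defaults are assumptions):

```python
import numpy as np

def matern52(A, B, ell=0.3, sf=1.0):
    """Matern 5/2 kernel with a single shared length scale (for brevity;
    an ARD variant would use one length scale per input dimension)."""
    r = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)) / ell
    s5 = np.sqrt(5.0) * r
    return sf ** 2 * (1.0 + s5 + s5 ** 2 / 3.0) * np.exp(-s5)

def bo_step(f, X, y, bounds, kappa=2.0, n_cand=2000, noise=1e-6, rng=None):
    """One BO iteration: GP posterior at random candidates, then query
    the UCB maximizer. For minimization, UCB maximizes -mu + kappa*sigma."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    K = matern52(X, X) + noise * np.eye(n)   # noisy prior covariance
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    lo, hi = bounds
    C = rng.uniform(lo, hi, size=(n_cand, d))  # random candidate points
    Ks = matern52(C, X)
    mu = Ks @ alpha                            # posterior mean
    V = np.linalg.solve(L, Ks.T)
    var = np.maximum(1.0 - (V ** 2).sum(axis=0), 1e-12)  # sf = 1 prior
    acq = -mu + kappa * np.sqrt(var)           # UCB acquisition
    x_new = C[np.argmax(acq)]
    return x_new, f(x_new)
```

Repeating `bo_step` and appending each queried point to (X, y) reproduces the sequential loop of steps III–V.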
2.5. Objective Function
2.6. Performance Metrics
- Since SA and GA are stochastic/metaheuristic optimization techniques, the final optimization results are variable and non-deterministic. Hence, several optimization runs are performed with both algorithms: in the following case studies, the results shown stem from 10 different runs. Either the average or a representative case of the 10 executions is then considered.
- The GPS algorithm must be initialized from a starting point. The distance between the selected starting point and the global optimum greatly impacts the algorithm’s efficiency and effectiveness; hence, at each run, the algorithm is initialized at a different, randomly chosen point.
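The performance measures reported in the results tables can be computed as follows (assumed definitions: RMSRE as the root-mean-square of the relative frequency errors in percent, and the standard Modal Assurance Criterion for mode-shape pairs):

```python
import numpy as np

def rmsre(pred, target):
    """Root-mean-square relative error, in percent."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    return 100.0 * np.sqrt(np.mean(((pred - target) / target) ** 2))

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two real mode-shape vectors:
    1 for parallel shapes, 0 for orthogonal ones."""
    num = np.abs(phi_a @ phi_b) ** 2
    return num / ((phi_a @ phi_a) * (phi_b @ phi_b))
```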
3. Case Study
3.1. Cranfield Optical Test System
3.1.1. Acquisition Setup and Experimental Dataset
3.1.2. Finite Element Model
3.1.3. Model Updating Setup and Numerical Dataset
- EAl: the Young’s modulus of the aluminum (assumed identical for all the beam and shell elements).
- νAl: the Poisson’s ratio of the same.
- kx, ky, kz: the linear stiffness of the springs at the feet of the structure, along the x-, y-, and z-axis, respectively (assumed identical for all feet).
4. Results
4.1. Results for the Numerically Simulated Data
4.1.1. Results of the Four BO Acquisition Functions
4.1.2. Hyperparameters of Fitted Gaussian Process
4.2. Results for the Experimental Data
- In absolute terms, the comparison addresses the discrepancy between the modal properties of the four calibrated FE models and those identified from the experimental acquisitions.
- In relative terms, the assessment is based on the differences between the parameter values estimated by the four candidate models.
5. Conclusions
- (i) Its geometry and material are very similar to those of many metallic truss structures commonly found in civil engineering applications.
- (ii) It was dynamically tested under strictly controlled environmental and operating conditions.
- (iii) Its unknown parameters comprise both the structure’s material properties (EAl, νAl) and its boundary conditions (kx, ky, kz).
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Haag, S.; Anderl, R. Digital Twin—Proof of Concept. Manuf. Lett. 2018, 15, 64–66. [Google Scholar] [CrossRef]
- Boscato, G.; Russo, S.; Ceravolo, R.; Fragonara, L.Z. Global Sensitivity-Based Model Updating for Heritage Structures. Comput.-Aided Civ. Infrastruct. Eng. 2015, 30, 620–635. [Google Scholar] [CrossRef]
- Friswell, M.; Mottershead, J.E. Finite Element Model Updating in Structural Dynamics; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1995. [Google Scholar]
- Friswell, M.I. Damage Identification Using Inverse Methods. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2007, 365, 393–410. [Google Scholar] [CrossRef] [PubMed]
- Mottershead, J.E.; Friswell, M.I. Model Updating In Structural Dynamics: A Survey. J. Sound Vib. 1993, 167, 347–375. [Google Scholar] [CrossRef]
- Girardi, M.; Padovani, C.; Pellegrini, D.; Porcelli, M.; Robol, L. Finite Element Model Updating for Structural Applications. J. Comput. Appl. Math. 2020, 370, 112675. [Google Scholar] [CrossRef]
- Ereiz, S.; Duvnjak, I.; Fernando Jiménez-Alonso, J. Review of Finite Element Model Updating Methods for Structural Applications. Structures 2022, 41, 684–723. [Google Scholar] [CrossRef]
- Nicoletti, V.; Gara, F. Modelling Strategies for the Updating of Infilled RC Building FEMs Considering the Construction Phases. Buildings 2023, 13, 598. [Google Scholar] [CrossRef]
- Arezzo, D.; Quarchioni, S.; Nicoletti, V.; Carbonari, S.; Gara, F.; Leonardo, C.; Leoni, G. SHM of Historical Buildings: The Case Study of Santa Maria in Via Church in Camerino (Italy). Procedia Struct. Integr. 2023, 44, 2098–2105. [Google Scholar] [CrossRef]
- Xiao, F.; Zhu, W.; Meng, X.; Chen, G.S. Parameter Identification of Structures with Different Connections Using Static Responses. Appl. Sci. 2022, 12, 5896. [Google Scholar] [CrossRef]
- Xiao, F.; Zhu, W.; Meng, X.; Chen, G.S. Parameter Identification of Frame Structures by Considering Shear Deformation. Int. J. Distrib. Sens. Netw. 2023, 2023, 6631716. [Google Scholar] [CrossRef]
- Xiao, F.; Sun, H.; Mao, Y.; Chen, G.S. Damage Identification of Large-Scale Space Truss Structures Based on Stiffness Separation Method. Structures 2023, 53, 109–118. [Google Scholar] [CrossRef]
- Torczon, V. On the Convergence of Pattern Search Algorithms. SIAM J. Optim. 1997, 7, 1–25. [Google Scholar] [CrossRef]
- van Laarhoven, P.J.M.; Aarts, E.H.L. Simulated Annealing. In Simulated Annealing: Theory and Applications; Springer: Dordrecht, The Netherlands, 1987; pp. 7–15. [Google Scholar] [CrossRef]
- Holland, J.H. Genetic Algorithms. Sci. Am. 1992, 267, 66–73. [Google Scholar] [CrossRef]
- Brochu, E.; Cora, V.M.; De Freitas, N. A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning. arXiv 2010, arXiv:1012.2599. [Google Scholar]
- Hooke, R.; Jeeves, T.A. “Direct Search” Solution of Numerical and Statistical Problems. J. ACM (JACM) 1961, 8, 212–229. [Google Scholar] [CrossRef]
- Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by Simulated Annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
- Metropolis, N.; Rosenbluth, A.W.; Rosenbluth, M.N.; Teller, A.H.; Teller, E. Equation of State Calculations by Fast Computing Machines. J. Chem. Phys. 1953, 21, 1087–1092. [Google Scholar] [CrossRef]
- Ingber, L. Adaptive Simulated Annealing (ASA): Lessons Learned. Control Cybern. 1996, 25, 33–54. [Google Scholar] [CrossRef]
- Civera, M.; Pecorelli, M.L.; Ceravolo, R.; Surace, C.; Zanotti Fragonara, L. A Multi-objective Genetic Algorithm Strategy for Robust Optimal Sensor Placement. Comput.-Aided Civ. Infrastruct. Eng. 2021, 36, 1185–1202. [Google Scholar] [CrossRef]
- Jones, D.R.; Schonlau, M.; Welch, W.J. Efficient Global Optimization of Expensive Black-Box Functions. J. Glob. Optim. 1998, 13, 455–492. [Google Scholar] [CrossRef]
- Rasmussen, C.E.; Williams, C.K.I. Gaussian Processes for Machine Learning; Massachusetts Institute of Technology, Ed.; MIT Press: Cambridge, MA, USA, 2006; ISBN 026218253X. [Google Scholar]
- Hutter, F.; Hoos, H.H.; Leyton-Brown, K. Sequential Model-Based Optimization for General Algorithm Configuration. In Proceedings of the Learning and Intelligent Optimization: 5th International Conference, LION 5, Rome, Italy, 17–21 January 2011; Springer: Berlin/Heidelberg, Germany, 2011; Volume 6683, pp. 507–523. [Google Scholar] [CrossRef]
- Snoek, J.; Rippel, O.; Swersky, K.; Kiros, R.; Satish, N.; Sundaram, N.; Patwary, M.M.A.; Adams, R.P. Scalable Bayesian Optimization Using Deep Neural Networks. In Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 6–11 July 2015. [Google Scholar]
- Springenberg, J.T.; Klein, A.; Falkner, S.; Hutter, F. Bayesian Optimization with Robust Bayesian Neural Networks. Adv. Neural Inf. Process. Syst. 2016, 29, 4134–4142. [Google Scholar]
- Wang, Z.; Gehring, C.; Kohli, P.; Jegelka, S. Batched Large-Scale Bayesian Optimization in High-Dimensional Spaces. In Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS), Lanzarote, Spain, 9–11 April 2018. [Google Scholar]
- Rasmussen, C.E.; Nickisch, H. Gaussian Processes for Machine Learning (GPML) Toolbox. J. Mach. Learn. Res. 2010, 11, 3011–3015. [Google Scholar]
- Kushner, H.J. A New Method of Locating the Maximum Point of an Arbitrary Multipeak Curve in the Presence of Noise. J. Basic Eng. 1964, 86, 97–106. [Google Scholar] [CrossRef]
- Cox, D.D.; John, S. A Statistical Method for Global Optimization. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, Chicago, IL, USA, 18–21 October 1992; pp. 1241–1246. [Google Scholar] [CrossRef]
- Bull, A.D. Convergence Rates of Efficient Global Optimization Algorithms. J. Mach. Learn. Res. 2011, 12, 2879–2904. [Google Scholar]
- Gelbart, M.A.; Snoek, J.; Adams, R.P. Bayesian Optimization with Unknown Constraints. arXiv 2014, arXiv:1403.5607. [Google Scholar] [CrossRef]
- Snoek, J.; Larochelle, H.; Adams, R.P. Practical Bayesian Optimization of Machine Learning Algorithms. Adv. Neural Inf. Process. Syst. 2012, 4, 2951–2959. [Google Scholar] [CrossRef]
- Allemang, R.J.; Brown, D.L. A Correlation Coefficient for Modal Vector Analysis. In Proceedings of the 1st International Modal Analysis Conference (IMAC 1982), Orlando, FL, USA, 8–10 November 1982; pp. 110–116. [Google Scholar]
- Xiao, F.; Hulsey, J.L.; Chen, G.S.; Xiang, Y. Optimal Static Strain Sensor Placement for Truss Bridges. Int. J. Distrib. Sens. Netw. 2017, 13, 1550147717707929. [Google Scholar] [CrossRef]
- Golanó, P.G.; Zanotti Fragonara, L.; Morantz, P.; Jourdain, R. Numerical and Experimental Modal Analysis Applied to an Optical Test System Designed for the Form Measurements of Metre-Scale Optics. Shock Vib. 2018, 2018, 3435249. [Google Scholar] [CrossRef]
- Grimes, R.G.; Lewis, J.G.; Simon, H.D. A Shifted Block Lanczos Algorithm for Solving Sparse Symmetric Generalized Eigenproblems. Soc. Ind. Appl. Math. 1994, 15, 228–272. [Google Scholar] [CrossRef]
Structural Component | Mass (kg) |
---|---|
Upper and lower aluminum platens | (each platen) |
Interferometer and interferometer support structure | |
Test piece 1 | |
Test piece support structure 1 |
Mode Number | Mode Description | Natural Frequency (Hz) |
---|---|---|
1 | First bending mode along the x-axis | |
2 | First bending mode along the y-axis | |
3 | First axial (vertical) mode | |
4 | First torsional mode |
Parameters to Be Updated | Search Bounds | Measurement Unit | |
---|---|---|---|
lower bound | upper bound | ||
- | |||
Target Input Parameters | Target Value |
---|---|
Mode Number | Mode Description | Natural Frequency (Hz) |
---|---|---|
1 | First bending mode along the x-axis | |
2 | First bending mode along the y-axis | |
3 | First axial (vertical) mode | |
4 | First torsional mode |
Mode Number | (Hz) | GPS (200 Fun. Eval.) | SA (200 Fun. Eval.) | GA (200 Fun. Eval.) | BO (100 Fun. Eval.) | ||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
(Hz) | Error (%) | MAC (-) | (Hz) | Error (%) | MAC (-) | (Hz) | Error (%) | MAC (-) | (Hz) | Error (%) | MAC (-) | ||
1 | 4.39 | 4.38 | −0.228 | 1.0000 | 4.41 | 0.456 | 1.0000 | 4.40 | 0.228 | 1.0000 | 4.39 | 0.000 | 1.0000
2 | 6.50 | 6.53 | 0.462 | 0.9999 | 6.48 | −0.308 | 1.0000 | 6.54 | 0.615 | 1.0000 | 6.51 | 0.154 | 1.0000
3 | 22.70 | 22.66 | −0.176 | 0.9995 | 22.54 | −0.705 | 1.0000 | 22.74 | 0.176 | 0.9998 | 22.74 | 0.176 | 1.0000 |
4 | 29.15 | 29.84 | 2.367 | 0.9998 | 29.89 | 2.539 | 0.9999 | 29.52 | 1.269 | 1.0000 | 29.17 | 0.069 | 1.0000 |
RMSRE (%) | 2.16 | 1.43 | 1.92 | 0.29 | |||||||||
Best cost fun. | 0.0612 | 0.0502 | 0.0511 | 0.0070 |
Parameter | Target Value | GPS (200 Fun. Eval.) | SA (200 Fun. Eval.) | GA (200 Fun. Eval.) | BO (100 Fun. Eval.) | ||||
---|---|---|---|---|---|---|---|---|---|
Updated Value | Error (%) | Updated Value | Error (%) | Updated Value | Error (%) | Updated Value | Error (%) | ||
EAl | 70.00 GPa | 72.00 | 2.857 | 72.36 | 3.371 | 72.28 | 3.257 | 70.37 | 0.529 |
νAl | 0.32 | 0.40 | 25.000 | 0.32 | 0.000 | 0.37 | 15.625 | 0.34 | 6.250 |
kx | 1.00 × 10^4 | 3.00 × 10^4 | 200.000 | 2.47 × 10^4 | 147.000 | 8.11 × 10^3 | −18.900 | 9.47 × 10^3 | −5.300 |
ky | 2.00 × 10^3 | 3.12 × 10^3 | 56.000 | 2.08 × 10^3 | 4.000 | 2.44 × 10^3 | 22.000 | 1.93 × 10^3 | −3.500 |
kz | 2.00 × 10^4 | 1.46 × 10^4 | −27.000 | 1.31 × 10^4 | −34.500 | 1.53 × 10^4 | −23.500 | 1.98 × 10^4 | −1.000 |
RMSRE (%) | 94.33 | 67.68 | 18.08 | 3.62 |
GPS (200 Fun. Eval.) | SA (200 Fun. Eval.) | GA (200 Fun. Eval.) | BO (100 Fun. Eval.) | |
---|---|---|---|---|
Total optimization time [s] | 1358 | 1363 | 1378 | 756 |
Parameter | Length Scale (-) |
---|---|
EAl | 0.59 |
νAl | 6239.04 |
kx | 23.28 |
ky | 0.98 |
kz | 8.29 |
Mode Number | (Hz) | GPS (200 Fun. Eval.) | SA (200 Fun. Eval.) | GA (200 Fun. Eval.) | BO (100 Fun. Eval.) | ||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
(Hz) | Error (%) | MAC (-) | (Hz) | Error (%) | MAC (-) | (Hz) | Error (%) | MAC (-) | (Hz) | Error (%) | MAC (-) | ||
1 | 4.27 | 4.26 | −0.23 | 0.92 | 4.28 | 0.23 | 0.92 | 4.24 | −0.70 | 0.92 | 4.26 | −0.23 | 0.92
2 | 6.18 | 6.45 | 4.37 | 0.95 | 6.40 | 3.56 | 0.95 | 6.39 | 3.40 | 0.95 | 6.36 | 2.91 | 0.95
3 | 22.52 | 22.31 | −0.93 | 0.95 | 22.10 | −1.86 | 0.94 | 22.07 | −1.99 | 0.95 | 22.10 | −1.87 | 0.95
4 | 28.50 | 28.52 | 0.07 | 0.95 | 28.91 | 1.44 | 0.95 | 28.51 | 0.04 | 0.94 | 28.53 | 0.11 | 0.95
RMSRE (%) | 4.46 | 4.59 | 4.55 | 4.34 | |||||||||
Best cost fun. | 0.1205 | 0.1313 | 0.1229 | 0.1201 |
Parameter | GPS (200 Fun. Eval.) | SA (200 Fun. Eval.) | GA (200 Fun. Eval.) | BO (100 Fun. Eval.) |
---|---|---|---|---|
EAl | 65139 | 66302 | 64310 | 65131 |
νAl | 0.3311 | 0.3103 | 0.3068 | 0.30883 |
kx | 6928 | 24341 | 18102 | 28696 |
ky | 8976 | 6234 | 8020 | 3242.5 |
kz | 29456 | 18234 | 26478 | 22939 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Raviolo, D.; Civera, M.; Zanotti Fragonara, L. A Comparative Analysis of Optimization Algorithms for Finite Element Model Updating on Numerical and Experimental Benchmarks. Buildings 2023, 13, 3010. https://doi.org/10.3390/buildings13123010