Parameter Tuning of Agent-Based Models: Metaheuristic Algorithms
Abstract
1. Introduction
2. Related Work
3. Parameter-Tuning Algorithms
3.1. Markov Chain Monte Carlo Method
- In the first step, the intervals of possible parameter values are determined and their initial values are set.
- Next, at each step t of the algorithm, a candidate set of model parameter values is generated from the set accepted at the previous step. The previous parameter values are perturbed in such a way that the candidates stay within the parameter boundaries.
- Next, we accept or reject the candidates based on the loss function: if the loss for the candidate set is lower than the loss at the previous step, the candidate set of parameters is accepted as the new set for the next step of the algorithm; otherwise, it is rejected.
- To avoid getting stuck in local minima, we accept a new set of parameters after 10 consecutive rejections.
- Repeat the algorithm from Step 2 until the specified number of iterations is reached and select the set of parameter values with the lowest loss (the RMSE).
Algorithm 1: Markov chain Monte Carlo algorithm
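The steps above can be sketched as a short random-walk search. This is a minimal illustration, not the paper's implementation: the perturbation scale `sigma` matches the standard deviation of 0.1 used for candidates, and a candidate is force-accepted after `max_fails` consecutive rejections, as described in the text.

```python
import numpy as np

def mcmc_tune(loss, bounds, n_iter=200, sigma=0.1, max_fails=10, rng=None):
    """Random-walk MCMC-style tuning sketch: propose candidates near the
    current parameters, accept on improvement, and force-accept after
    `max_fails` consecutive rejections to escape local minima."""
    rng = np.random.default_rng(0) if rng is None else rng
    lo, hi = np.array(bounds, dtype=float).T
    theta = rng.uniform(lo, hi)                 # initial parameter values
    cur_loss = loss(theta)
    best_theta, best_loss = theta.copy(), cur_loss
    fails = 0
    for _ in range(n_iter):
        # Gaussian perturbation, clipped so candidates stay inside bounds
        cand = np.clip(theta + rng.normal(0.0, sigma * (hi - lo)), lo, hi)
        cand_loss = loss(cand)
        if cand_loss < cur_loss or fails >= max_fails:
            theta, cur_loss, fails = cand, cand_loss, 0
        else:
            fails += 1
        if cand_loss < best_loss:               # track the overall best set
            best_theta, best_loss = cand.copy(), cand_loss
    return best_theta, best_loss
```

In the paper the loss would be the RMSE between the ABM output and the observed data; here any callable works.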
3.2. Surrogate Modelling Method
- In the first step, we build the initial training set for the surrogate model by conducting numerical experiments using sets of parameter values obtained from the Latin hypercube sampling method.
- We use XGBoost [22], a gradient-boosting algorithm that creates an ensemble of decision trees for training the surrogate model on the resulting training set.
- We run the Markov chain Monte Carlo method (Section 3.1), but use the trained surrogate model to predict the value of the loss function, obtaining a set of parameter values that minimises the loss according to the surrogate model.
- We conduct a numerical experiment with the ABM using the obtained set of parameter values and compute the corresponding loss.
- We update the training set with this new sample.
- Repeat the algorithm from Step 2 until the specified number of iterations is reached and select the set of parameter values with the lowest loss.
Algorithm 2: Surrogate modelling algorithm
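Step 1 above relies on Latin hypercube sampling [20] to spread the initial design points evenly over the parameter space (the surrogate itself is then trained with XGBoost [22], which is not shown here). A minimal NumPy sketch of LHS, not the paper's code:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    """Latin hypercube sampling: split each parameter range into
    n_samples equal strata, draw one point inside each stratum, then
    shuffle the strata independently per dimension."""
    rng = np.random.default_rng(0) if rng is None else rng
    lo, hi = np.array(bounds, dtype=float).T
    d = len(bounds)
    # one uniform draw inside each stratum, per dimension
    u = (rng.random((n_samples, d)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(d):                     # decouple the dimensions
        rng.shuffle(u[:, j])
    return lo + u * (hi - lo)
```

The guarantee is that every column contains exactly one sample per stratum, so even a small design (e.g. 10 points per algorithm run, as in the experiments) covers each parameter's range.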
3.3. Particle Swarm Optimisation Algorithm
- In the first step, the number of particles, the values of the hyperparameters of the algorithm, the initial positions of the particles and their boundaries, and the initial velocities are specified.
- At each step t, we update the velocity of each particle.
- We update the position of each particle; if it goes out of the boundaries, we resample the parameter value from the uniform distribution.
- We conduct numerical experiments with the model at the new positions. If the loss of a particle is less than the lowest loss found by this particle, or the lowest loss found by all particles, we update the corresponding values and best positions.
- Repeat the algorithm from Step 2 until the specified number of iterations is reached and select the set of parameter values with the lowest loss.
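A compact sketch of this loop follows. It is an illustration under simplifying assumptions: a fixed inertia weight and slightly damped acceleration coefficients are used for numerical stability, whereas the study uses an inertia weight decreasing from 0.9 to 0.4 and acceleration coefficients of 2.0; out-of-bounds coordinates are resampled uniformly, as in the text.

```python
import numpy as np

def pso(loss, bounds, n_particles=10, n_iter=100,
        w=0.7, c1=1.5, c2=1.5, rng=None):
    """Particle swarm sketch: each velocity update mixes inertia, a pull
    toward the particle's own best position, and a pull toward the
    swarm's best position."""
    rng = np.random.default_rng(0) if rng is None else rng
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(bounds)))
    v = np.zeros_like(x)
    p_best = x.copy()
    p_loss = np.array([loss(p) for p in x])
    g_best = p_best[p_loss.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = x + v
        # resample any coordinate that left the feasible box
        out = (x < lo) | (x > hi)
        x = np.where(out, rng.uniform(lo, hi, x.shape), x)
        cur = np.array([loss(p) for p in x])
        improved = cur < p_loss                 # update personal bests
        p_best[improved] = x[improved]
        p_loss[improved] = cur[improved]
        g_best = p_best[p_loss.argmin()].copy() # update the global best
    return g_best, float(p_loss.min())
```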
3.4. Genetic Algorithm
- In the first step, we choose the size of the population, which consists of combinations of parameter values, and the number of parents to select for reproduction. We also generate the initial population of parameter values using the Latin hypercube sampling method.
- Selection: we select potential parents from the existing population using binary tournament selection, where in each tournament round we pick the better of two random individuals from the population based on their loss.
- Crossover: we randomly select two parents from the pool of potential parents. Crossovers occur randomly with a given probability using one-point crossover, where we split each set of parameter values into two parts and exchange them between the parents, creating two offspring.
- Mutation: mutations occur randomly with a given probability by adding Gaussian noise to one or more parameter values of an offspring, which helps avoid stagnation of the algorithm. If a parameter value goes out of the boundaries, it is reset to stay within them.
- Finally, following the elitism strategy, we replace the current population with the best individuals from the current and offspring populations combined.
- Repeat the algorithm from Step 2 until the specified number of iterations is reached and select the set of parameter values with the lowest loss.
Algorithm 3: Particle swarm optimisation algorithm
Algorithm 4: Genetic algorithm
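The genetic-algorithm loop can be sketched as below. The population size, number of parents, crossover and mutation probabilities, and mutation deviation coefficient follow the values reported for the experiments; the clipping of out-of-bounds values to the boundary is an assumption, since the exact reset rule is described only in general terms.

```python
import numpy as np

def ga(loss, bounds, pop_size=10, n_parents=5, n_iter=100,
       p_cross=0.8, p_mut=0.15, sigma=0.33, rng=None):
    """GA sketch: binary tournament selection, one-point crossover,
    Gaussian mutation (clipped to the bounds; an assumption), and
    elitist replacement of the population."""
    rng = np.random.default_rng(0) if rng is None else rng
    lo, hi = np.array(bounds, dtype=float).T
    d = len(bounds)
    pop = rng.uniform(lo, hi, (pop_size, d))
    fit = np.array([loss(p) for p in pop])
    for _ in range(n_iter):
        # binary tournament: the better of two random individuals
        parents = []
        for _ in range(n_parents):
            a, b = rng.integers(0, pop_size, 2)
            parents.append(pop[a] if fit[a] < fit[b] else pop[b])
        # one-point crossover between random parent pairs
        children = []
        while len(children) < pop_size:
            i, j = rng.choice(len(parents), 2, replace=False)
            c1, c2 = parents[i].copy(), parents[j].copy()
            if rng.random() < p_cross and d > 1:
                k = rng.integers(1, d)          # crossover point
                c1[k:], c2[k:] = parents[j][k:], parents[i][k:]
            children += [c1, c2]
        children = np.array(children[:pop_size])
        # Gaussian mutation, clipped back into the feasible box
        mask = rng.random(children.shape) < p_mut
        noise = rng.normal(0.0, sigma * (hi - lo), children.shape)
        children = np.clip(np.where(mask, children + noise, children), lo, hi)
        c_fit = np.array([loss(c) for c in children])
        # elitism: keep the best pop_size of parents + offspring
        allp = np.vstack([pop, children])
        allf = np.concatenate([fit, c_fit])
        idx = np.argsort(allf)[:pop_size]
        pop, fit = allp[idx], allf[idx]
    return pop[0], float(fit[0])
```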
3.5. Chaos Game Optimisation
- In the first step, we choose the number of seeds, each consisting of a combination of parameter values, and set the initial seed positions using the Latin hypercube sampling method.
- Find the best seed according to the loss.
- For each seed i, we determine the mean position of a group of randomly selected seeds, where each seed, including the current one, is included with equal probability.
- For each seed i, we find the positions of four new potential seeds and check their boundary conditions. If a parameter value goes out of the boundaries, it is reset to stay within them.
- Replace the current seeds with the best seeds from the current and potential seeds combined.
- Repeat the algorithm from Step 2 until the specified number of iterations is reached and select the set of parameter values with the lowest loss.
Algorithm 5: Chaos game optimisation algorithm
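A simplified sketch of the loop follows. The four candidate positions combine the current seed, the global best position, and the mean of a random group, which mirrors the structure of CGO; the exact mixing coefficients and update formulas are in the original paper [25], so the ones below are illustrative.

```python
import numpy as np

def cgo(loss, bounds, n_seeds=10, n_iter=100, rng=None):
    """Chaos-game-optimisation sketch (simplified): each seed spawns
    four candidates built from the seed itself, the global best
    position, and the mean of a random group of seeds; the best
    n_seeds of the current and candidate seeds survive."""
    rng = np.random.default_rng(0) if rng is None else rng
    lo, hi = np.array(bounds, dtype=float).T
    seeds = rng.uniform(lo, hi, (n_seeds, len(bounds)))
    fit = np.array([loss(s) for s in seeds])
    for _ in range(n_iter):
        gb = seeds[fit.argmin()]                      # global best seed
        cand, cfit = [], []
        for i in range(n_seeds):
            group = rng.integers(0, n_seeds, rng.integers(1, n_seeds + 1))
            mg = seeds[group].mean(axis=0)            # mean of a random group
            a = rng.random(3)                         # random mixing factors
            b, g = rng.integers(1, 3, 2)              # random integer factors
            news = [seeds[i] + a[0] * (b * gb - g * mg),
                    gb + a[1] * (b * mg - g * seeds[i]),
                    mg + a[2] * (b * gb - g * seeds[i]),
                    rng.uniform(lo, hi)]              # fourth: random restart
            for s in news:
                s = np.clip(s, lo, hi)                # reset out-of-bounds values
                cand.append(s)
                cfit.append(loss(s))
        # keep the best n_seeds of current + potential seeds
        allp = np.vstack([seeds] + [np.array(cand)])
        allf = np.concatenate([fit, cfit])
        idx = np.argsort(allf)[:n_seeds]
        seeds, fit = allp[idx], allf[idx]
    return seeds[0], float(fit[0])
```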
4. Agent-Based Models
4.1. Agent-Based Model of the Spread of Contagious Disease (ABM-SIR)
4.2. Model of the Circulation of Respiratory Viruses in a City (ABM-ARI)
- First, we determine whether the current day is a holiday or a day off for different social settings. National holidays and Sundays are days off for everyone. Children have summer, autumn, spring, and winter vacations. Workers do not work on Saturdays.
- We iterate over the agents, and if we find an infectious agent, we iterate over the agents with whom that agent makes contact at the current step. If a contacted agent is susceptible or partially susceptible to the virus, we sample the duration of their contact from a given distribution [29] and evaluate the risk of infection transmission. If the transmission is successful, the contacted agent becomes exposed.
- Next, we update the agents’ properties based on their health state:
  - (a) Susceptible. There is a small chance that the agent may have been exposed to a virus from an unknown source at the current step. The virus is selected randomly, and if it is able to overcome the level of specific immunity, the agent becomes exposed.
  - (b) Immune to at least one of the viruses. We decrease the level of specific immunity to the virus.
  - (c) Exposed. The agent progresses towards the infectious stage.
  - (d) Infectious. The agent can exhibit symptoms or progress to the asymptomatic stage after the end of the incubation period, self-isolate and become diagnosed [33], or recover.
  - (e) Recovered. We check whether the agent is able to be infected with the viruses again.
- In the end, we update the model date and air temperature.
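The core of such a simulation step can be illustrated with a deliberately simplified, hypothetical sketch. The state names, transition probabilities, and the `step` function below are illustrative inventions, not the ABM-ARI implementation, which additionally models contact durations, specific immunity levels, holidays, and air temperature.

```python
import random

# Hypothetical health states of an agent (simplified SEIR-style labels)
SUSCEPTIBLE, EXPOSED, INFECTIOUS, RECOVERED = "S", "E", "I", "R"

def step(states, contacts, p_transmit=0.3, p_unknown=0.001,
         p_progress=0.5, p_recover=0.2, rng=None):
    """One simplified simulation step: infectious agents try to infect
    their contacts, then every agent's health state is advanced.
    All probabilities here are illustrative, not the paper's values."""
    rng = random.Random(0) if rng is None else rng
    new = dict(states)
    # transmission along the contacts of infectious agents
    for agent, state in states.items():
        if state == INFECTIOUS:
            for other in contacts.get(agent, []):
                if states[other] == SUSCEPTIBLE and rng.random() < p_transmit:
                    new[other] = EXPOSED
    # per-agent state updates based on the previous state
    for agent, state in states.items():
        if state == SUSCEPTIBLE and rng.random() < p_unknown:
            new[agent] = EXPOSED        # infection from an unknown source
        elif state == EXPOSED and rng.random() < p_progress:
            new[agent] = INFECTIOUS     # progression to the infectious stage
        elif state == INFECTIOUS and rng.random() < p_recover:
            new[agent] = RECOVERED
    return new
```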
5. Results
5.1. ABM-SIR Model Results
5.2. ABM-ARI Model Results
6. Discussion
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Meaning |
|---|---|
| ABM | Agent-based model |
| MCMC | Markov chain Monte Carlo method |
| SM | Surrogate modelling method |
| PSO | Particle swarm optimisation algorithm |
| GA | Genetic algorithm |
| CGO | Chaos game optimisation |
| LHS | Latin hypercube sampling |
| RMSE | Root mean square error |
References
1. Macal, C.M.; North, M.J. Tutorial on agent-based modeling and simulation. In Proceedings of the Winter Simulation Conference, Orlando, FL, USA, 4–7 December 2005; pp. 2–15.
2. Macal, C.M. Everything you need to know about agent-based modelling and simulation. J. Simul. 2016, 10, 144–156.
3. Kotthoff, F.; Hamacher, T. Calibrating agent-based models of innovation diffusion with gradients. J. Artif. Soc. Soc. Simul. 2022, 25, 4.
4. Carrella, E. No free lunch when estimating simulation parameters. J. Artif. Soc. Soc. Simul. 2021, 24, 7.
5. Hussain, K.; Mohd Salleh, M.N.; Cheng, S.; Shi, Y. Metaheuristic research: A comprehensive survey. Artif. Intell. Rev. 2019, 52, 2191–2233.
6. Gandomi, A.H.; Yang, X.S.; Talatahari, S.; Alavi, A.H. Metaheuristic algorithms in modeling and optimization. Metaheuristic Appl. Struct. Infrastruct. 2013, 1, 1–24.
7. Abdel-Basset, M.; Abdel-Fatah, L.; Sangaiah, A.K. Metaheuristic algorithms: A comprehensive review. In Computational Intelligence for Multimedia Big Data on the Cloud with Engineering Applications; Elsevier: Amsterdam, The Netherlands, 2018; pp. 185–231.
8. Hare, W.; Nutini, J.; Tesfamariam, S. A survey of non-gradient optimization methods in structural engineering. Adv. Eng. Softw. 2013, 59, 19–28.
9. Gilli, M.; Winker, P. A global optimization heuristic for estimating agent based models. Comput. Stat. Data Anal. 2003, 42, 299–312.
10. Chen, S.; Desiderio, S. A regression-based calibration method for agent-based models. Comput. Econ. 2022, 59, 687–700.
11. Lamperti, F.; Roventini, A.; Sani, A. Agent-based model calibration using machine learning surrogates. J. Econ. Dyn. Control 2018, 90, 366–389.
12. Sallans, B.; Pfister, A.; Karatzoglou, A.; Dorffner, G. Simulation and validation of an integrated markets model. J. Artif. Soc. Soc. Simul. 2003, 6. Available online: https://jasss.soc.surrey.ac.uk/6/4/2.html (accessed on 12 July 2024).
13. Merler, S.; Ajelli, M.; Fumanelli, L.; Gomes, M.F.; Piontti, A.P.; Rossi, L.; Chao, D.L.; Longini, I.M.; Halloran, M.E.; Vespignani, A. Spatio-temporal spread of the Ebola 2014 outbreak in Liberia and the effectiveness of non-pharmaceutical interventions: A computational modelling analysis. Lancet Infect. Dis. 2015, 15, 204–211.
14. Tan, R.K.; Bora, S. Adaptive parameter tuning for agent-based modeling and simulation. SIMULATION Trans. Soc. Model. Simul. Int. 2019, 95.
15. Zhang, Y.; Li, Z.; Zhang, Y. Validation and calibration of an agent-based model: A surrogate approach. Discret. Dyn. Nat. Soc. 2020, 2020, 6946370.
16. Perumal, R.; van Zyl, T.L. Surrogate assisted methods for the parameterisation of agent-based models. In Proceedings of the 2020 7th International Conference on Soft Computing & Machine Intelligence (ISCMI), Stockholm, Sweden, 14–15 November 2020; pp. 78–82.
17. Angione, C.; Silverman, E.; Yaneske, E. Using machine learning as a surrogate model for agent-based simulations. PLoS ONE 2022, 17, e0263150.
18. Calvez, B.; Hutzler, G. Automatic tuning of agent-based models using genetic algorithms. In Proceedings of the International Workshop on Multi-Agent Systems and Agent-Based Simulation, Utrecht, The Netherlands, 25 July 2005; Springer: Berlin/Heidelberg, Germany, 2005; pp. 41–57.
19. Heppenstall, A.J.; Evans, A.J.; Birkin, M.H. Genetic algorithm optimisation of an agent-based model for simulating a retail market. Environ. Plan. B Urban Anal. City Sci. 2007, 34, 1051–1070.
20. McKay, M.D.; Beckman, R.J.; Conover, W.J. A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics 1979, 21, 239–245.
21. Brooks, S.P. Markov chain Monte Carlo method and its application. J. R. Stat. Soc. Ser. D Stat. 1998, 47, 69–100.
22. Chen, T.; Guestrin, C. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16), 13–17 August 2016; pp. 785–794.
23. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN'95 International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
24. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73.
25. Talatahari, S.; Azizi, M. Chaos game optimization: A novel metaheuristic algorithm. Artif. Intell. Rev. 2021, 54, 917–1004.
26. Kermack, W.O.; McKendrick, A.G. A contribution to the mathematical theory of epidemics. Proc. R. Soc. Lond. Ser. A 1927, 115, 700–721.
27. Vlad, A.I.; Romanyukha, A.A.; Sannikova, T.E. Circulation of respiratory viruses in the city: Towards an agent-based ecosystem model. Bull. Math. Biol. 2023, 85, 100.
28. Albert, R.; Barabási, A.L. Statistical mechanics of complex networks. Rev. Mod. Phys. 2002, 74, 47–97.
29. Mossong, J.; Hens, N.; Jit, M.; Beutels, P.; Auranen, K.; Mikolajczyk, R.; Massari, M.; Salmaso, S.; Tomba, G.S.; Wallinga, J.; et al. Social contacts and mixing patterns relevant to the spread of infectious diseases. PLoS Med. 2008, 5, e74.
30. Lessler, J.; Reich, N.G.; Brookmeyer, R.; Perl, T.M.; Nelson, K.E.; Cummings, D.A.T. Incubation periods of acute respiratory viral infections: A systematic review. Lancet Infect. Dis. 2009, 9, 291–300.
31. Carrat, F.; Vergu, E.; Ferguson, N.M.; Lemaitre, M.; Cauchemez, S.; Leach, S.; Valleron, A.J. Time lines of infection and disease in human influenza: A review of volunteer challenge studies. Am. J. Epidemiol. 2008, 167, 775–785.
32. Warrell, D.A.; Cox, T.M.; Firth, J.D.; Torok, E. Oxford Textbook of Medicine: Infection; Oxford University Press: Oxford, UK, 2012.
33. Elveback, L.R.; Fox, J.P.; Ackerman, E.; Langworthy, A.; Boyd, M.; Gatewood, L. An influenza simulation model for immunization studies. Am. J. Epidemiol. 1976, 103, 152–165.
34. Romanyukha, A.A.; Sannikova, T.E.; Drynov, I.D. The origin of acute respiratory epidemics. Her. Russ. Acad. Sci. 2011, 81, 31–34.
35. Karpova, L.S.; Volik, K.M.; Smorodintseva, E.A.; Stolyarova, T.P.; Popovtseva, N.M.; Stolyarov, K.A. The impact of influenza of different etiologies on other ARVI in children and adults in 2014 to 2016. Epidemiol. Vaccinal Prev. 2018, 17, 35–47.
36. Platt, D. A comparison of economic agent-based model calibration methods. J. Econ. Dyn. Control 2020, 113, 103859.
37. Dyer, J.; Cannon, P.; Farmer, J.D.; Schmon, S.M. Black-box Bayesian inference for agent-based models. J. Econ. Dyn. Control 2024, 161, 104827.
| Symbol | Description |
|---|---|
| | Set of parameters |
| | Root mean square error associated with a set of parameters |
| | Number of parameters |
| t | Step of the algorithm |
| | Maximum step of the algorithm |
| ABM | Numerical experiment with the agent-based model |
| **Markov chain Monte Carlo** | |
| | Candidate set of parameters |
| | Number of consecutive candidate rejections |
| **Surrogate modelling** | |
| SM | Surrogate model for ABM approximation |
| X | Features of the training set |
| Y | Labels of the training set |
| **Particle swarm optimisation** | |
| | Number of particles |
| | Velocity of particle i |
| | Inertia weight hyperparameter |
| | Personal acceleration hyperparameter |
| | Social acceleration hyperparameter |
| | Best found position of particle i |
| G | Best found solution by all particles |
| **Genetic algorithm** | |
| | Size of the population of parameter sets |
| | Number of parents for the crossover procedure |
| | Offspring set of parameters |
| **Chaos game optimisation** | |
| | Number of seeds |
| | Global best position of the seeds |
| | Mean position of a group of random seeds |
| | Position of the k-th potential seed spawned by the i-th seed |
| Method | Parameter | Value |
|---|---|---|
| MCMC | Standard deviation for the parameter candidates | 0.1 |
| SM | Number of initial training samples | 1000 |
| | Learning rate | 0.1 |
| | Maximum depth of a tree | 10 |
| | Number of boosting rounds | 150 |
| PSO | Number of particles | 10 |
| | Minimum inertia weight | 0.4 |
| | Maximum inertia weight | 0.9 |
| | Personal acceleration coefficient | 2.0 |
| | Social acceleration coefficient | 2.0 |
| GA | Size of the population | 10 |
| | Number of parents | 5 |
| | Tournament size | 2 |
| | Probability of crossover | 0.8 |
| | Probability of mutation | 0.15 |
| | Mutation deviation coefficient | 0.33 |
| CGO | Number of seeds | 10 |
| Parameter | Description | Range | Reference value |
|---|---|---|---|
| | Probability of transmission | [0.02, 0.2] | 0.05 |
| c | Contact rate | [5, 25] | 10 |
| | Probability of recovery | [0.01, 0.05] | 0.02 |
| | Initial number of infectious agents | [1, 50] | 25 |
Method * | Avg Initial RMSE | Avg Best RMSE | Best RMSE | Avg Best Decrease | Best Decrease | Avg Best Run | Best Run ** | Avg Time *** |
---|---|---|---|---|---|---|---|---|
MCMC LHS | 17,868 | 791 | 243 | 96% | 99% | 148 | 171 | 169 s |
MCMC manual | 52,899 | 3342 | 1248 | 94% | 98% | 171 | 150 | 158 s |
SM | 17,842 | 7728 | 310 | 57% | 98% | 96 | 103 | 171 s |
PSO | 16,163 | 2360 | 627 | 85% | 96% | 121 | 101 | 100 s |
GA | 17,475 | 3175 | 1140 | 82% | 93% | 164 | 181 | 115 s |
CGO | 14,219 | 1614 | 704 | 89% | 95% | 27 | 11 | 105 s |
| Parameter | Description | Group * | Range |
|---|---|---|---|
| c | Influence of the duration of contact | All | [0.1, 1] |
| | Susceptibility parameters | FluA | [1, 7] |
| | | FluB | |
| | | RV | |
| | | RSV | |
| | | AdV | |
| | | PIV | |
| | | CoV | |
| | Temperature parameters | FluA | [0.01, 1] |
| | | FluB | |
| | | RV | |
| | | RSV | |
| | | AdV | |
| | | PIV | |
| | | CoV | |
| | Mean immunity durations | FluA | [30, 365] |
| | | FluB | |
| | | RV | |
| | | RSV | |
| | | AdV | |
| | | PIV | |
| | | CoV | |
| | Probability of infection from an unknown source | 0–2 y.o. | [0.0008, 0.0012] |
| | | 3–6 y.o. | [0.0005, 0.001] |
| | | 7–14 y.o. | [0.0002, 0.0005] |
| | | 15+ y.o. | [0.000005, 0.00001] |
Method * | Avg Initial RMSE | Avg Best RMSE | Best RMSE | Avg Best Decrease | Best Decrease | Avg Best Run | Best Run ** | Avg Time *** |
---|---|---|---|---|---|---|---|---|
MCMC LHS | 3214 | 2104 | 1945 | 35% | 39% | 167 | 196 | ≈60 h |
MCMC manual | 4580 | 1757 | 1639 | 62% | 64% | 145 | 164 | ≈9.5 h |
SM | 3214 | 2093 | 1978 | 35% | 38% | 146 | 116 | ≈60.5 h |
PSO | 11,927 | 2889 | 2622 | 76% | 78% | 111 | 191 | ≈10 h |
GA | 9120 | 2428 | 2147 | 73% | 76% | 164 | 141 | ≈10 h |
CGO | 12,460 | 2813 | 2592 | 77% | 79% | 38 | 31 | ≈10 h |
Vlad, A.I.; Romanyukha, A.A.; Sannikova, T.E. Parameter Tuning of Agent-Based Models: Metaheuristic Algorithms. Mathematics 2024, 12, 2208. https://doi.org/10.3390/math12142208