Data Reduction of Digital Twin Simulation Experiments Using Different Optimisation Methods
Abstract
1. Introduction
2. Literature Background
2.1. Simulation Optimisation
- $S$ denotes the search space—the domain of the input parameters of the discrete event simulation model;
- $i$ denotes the index of the $i$-th decision variable (simulation model input parameter) of the simulation model;
- $n$ denotes the dimension of the search space;
- $a_i$ denotes the lower bound of the interval of the $i$-th decision variable;
- $b_i$ denotes the upper bound of the interval of the $i$-th decision variable;
- $\mathbf{x}$ denotes a possible solution candidate;
- $x_i$ denotes the value of the $i$-th decision variable;
- $\mathbf{x}^{\star}$ denotes the global minimum of the objective function $f$;
- $f(\mathbf{x})$ denotes the objective function value of the solution candidate. The objective function represents the goal of the simulation study. Each solution candidate (a possible solution of the modelled problem, representing the vector of values of the decision variables in the search space) is evaluated by its objective function value—the range is the real numbers $\mathbb{R}$. Objective function maximisation can be converted to function minimisation.
- $\mathbf{x}_j$ denotes the $j$-th generated solution candidate;
- $P$ denotes the list—population—of generated solution candidates;
- $|P|$ denotes the length of the list $P$—the population size.
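With this notation, the simulation optimisation task can be stated compactly. The following restatement is a standard formulation implied by the definitions above, not an equation quoted from the paper:

```latex
\min_{\mathbf{x} \in S} f(\mathbf{x}),
\qquad
S = \prod_{i=1}^{n} [a_i, b_i],
\qquad
\mathbf{x} = (x_1, \ldots, x_n),
\qquad
\text{with}
\quad
\max_{\mathbf{x} \in S} f(\mathbf{x}) = -\min_{\mathbf{x} \in S} \bigl(-f(\mathbf{x})\bigr).
```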
2.2. Optimisation Methods
- Discrete event simulation;
- Systems of stochastic nonlinear and/or differential equations [20].
- Ranking and selection;
- Black-box search methods that directly work with the simulation estimates of objective values;
- Meta-model-based methods;
- Gradient-based methods;
- Sample path optimisation;
- Stochastic constraints and multi-objective simulation optimisation.
2.2.1. Random Search
2.2.2. Downhill Simplex
- $\Delta$ denotes the simplex;
- $\mathbf{x}_1$ denotes the first solution candidate;
- $n$ denotes the dimension of the search space;
- $S$ denotes the search space—the domain of the input parameters of the discrete event simulation model—solution candidates;
- $\mathbf{x}_w$ denotes the worst solution candidate of the simplex;
- $\mathbf{x}_b$ denotes the best solution candidate;
- $f(\mathbf{x})$ denotes the objective function value of the solution candidate;
- $\bar{\mathbf{x}}$ denotes the simplex centroid—the centre of gravity (calculated from all solution candidates apart from the worst solution candidate).
- Reflection—mirroring the worst solution candidate;
- Expansion—searching for a better solution in the reflection direction;
- Contraction—testing the solution candidate between the reflection and the simplex centroid;
- Reduction—shrinkage towards the best solution candidate of the simplex.
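The four operations can be read directly off a single iteration of the method. The sketch below uses the textbook coefficient values for reflection, expansion, contraction and reduction; the paper instead sweeps these coefficients over the ranges in Table 2, so the defaults here are assumptions.

```python
import numpy as np

def nelder_mead_step(simplex, f, alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5):
    """One iteration of the downhill simplex method for minimising f.
    simplex: (n+1, n) array of vertices."""
    # Order vertices from best (lowest f) to worst.
    order = np.argsort([f(v) for v in simplex])
    simplex = simplex[order]
    best, worst = simplex[0], simplex[-1]
    centroid = simplex[:-1].mean(axis=0)  # centre of gravity, worst vertex excluded

    reflected = centroid + alpha * (centroid - worst)         # reflection
    if f(reflected) < f(best):
        expanded = centroid + gamma * (reflected - centroid)  # expansion
        simplex[-1] = expanded if f(expanded) < f(reflected) else reflected
    elif f(reflected) < f(simplex[-2]):                       # better than second worst
        simplex[-1] = reflected
    else:
        contracted = centroid + rho * (worst - centroid)      # contraction
        if f(contracted) < f(worst):
            simplex[-1] = contracted
        else:
            simplex[1:] = best + sigma * (simplex[1:] - best) # reduction (shrink)
    return simplex

# Usage: shrink a triangle towards the minimum of the sphere function.
f = lambda x: float(np.sum(x ** 2))
s = np.array([[2.0, 2.0], [2.5, 2.0], [2.0, 2.5]])
for _ in range(60):
    s = nelder_mead_step(s, f)
print(min(s, key=f))  # close to [0, 0]
```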
2.2.3. Stochastic Hill Climbing
Algorithm 1: Stochastic Hill-Climbing Algorithm Pseudocode

begin
  P ← CreateInitialPopulation();              // initial population
  Sort(P, f);                                 // sort P in ascending order according to f
  xBest ← P[1];                               // the best solution candidate in P
  x' ← xBest;
  while not TerminationCriterion() do begin   // termination criterion
    Clear(P);                                 // delete P
    for j := 1 to |P| do begin
      x' ← Mutate(xBest);                     (* mutation of the best solution candidate in P—uniform distribution *)
      for i := 1 to n do                      // for each gene of x'
        x'[i] ← Perturbation(x', i);          // perturbation
      Add(P, x');                             // add the new solution candidate to P
    end;
    Sort(P, f);
    x' ← P[1];
    if f(x') < f(xBest) then                  // a better solution candidate was found
      xBest ← x';
  end;
  result ← xBest;
end;
Algorithm 2: Perturbation Pseudocode

begin
  v ← x'[i] + U(−d, d);                       // perturbed value of the i-th coordinate; d denotes the perturbation width
  if UseStep then
    v ← a_i + Round((v − a_i) / step) · step; // snap the value to the step grid
  while (v < a_i) or (v > b_i) do begin
    if v < a_i then
      v ← a_i + (a_i − v)                     // mirror the solution candidate coordinate back into the search space
    else if v > b_i then
      v ← b_i − (v − b_i);
    v ← Nearest(v);                           (* round or truncate the coordinate of the solution candidate in the search space to the nearest neighbour *)
  end;
  result ← v;
end;
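Algorithms 1 and 2 combine into a short executable routine. The following Python sketch is a minimal rendering under assumed defaults (population size, perturbation width and iteration budget are illustrative, and the final clamp after grid snapping is an added safety not in the listing):

```python
import random

def perturb(v, low, high, step):
    """Mirror a coordinate back into [low, high] (Algorithm 2), then snap it
    to the step grid of admissible values."""
    while v < low or v > high:
        if v < low:
            v = low + (low - v)    # mirror below the lower bound
        else:
            v = high - (v - high)  # mirror above the upper bound
    v = low + round((v - low) / step) * step  # nearest neighbour on the grid
    return min(high, max(low, v))

def stochastic_hill_climbing(f, bounds, step, pop_size=20, width=0.5, iters=200):
    """Algorithm 1: each iteration rebuilds the population by mutating the
    incumbent with uniform noise and keeps the best candidate found."""
    best = [perturb(random.uniform(a, b), a, b, step) for a, b in bounds]
    for _ in range(iters):                     # termination criterion
        population = [
            [perturb(v + random.uniform(-width, width), a, b, step)
             for v, (a, b) in zip(best, bounds)]
            for _ in range(pop_size)
        ]
        candidate = min(population, key=f)     # best of the new population
        if f(candidate) < f(best):             # a better candidate was found
            best = candidate
    return best

# Usage: minimise the sphere function on a 0.1 grid over [-5, 5]^3.
f = lambda x: sum(v * v for v in x)
print(stochastic_hill_climbing(f, [(-5, 5)] * 3, step=0.1))
```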
2.2.4. Stochastic Local Search
Algorithm 3: Stochastic Local Search Pseudocode

begin
  xBest ← GenerateRandomSolution();           // generating the individual
  while not TerminationCriterion() do begin   // termination criterion
    x' ← Mutate(xBest);                       (* mutation of the best-found individual *)
    for i := 1 to n do                        // for each gene of the individual
      x'[i] ← Perturbation(x', i);            // perturbation
    if f(x') < f(xBest) then                  // a better solution candidate was found
      xBest ← x';
  end;
  result ← xBest;
end;
2.2.5. Stochastic Tabu Search
Algorithm 4: Stochastic Tabu Search Pseudocode

begin
  P ← CreateInitialPopulation();              // initial population
  Sort(P, f);                                 // sort the population
  xBest ← P[1];                               // best solution candidate from P
  x' ← xBest;
  TL ← ∅;                                     // empty Tabu List
  Add(TL, xBest);
  while not TerminationCriterion() do begin   // termination criterion
    Clear(P);                                 // empty P
    for j := 1 to |P| do begin
      repeat
        x' ← Mutate(xBest);
        for i := 1 to n do                    // for each gene of the solution candidate
          x'[i] ← Perturbation(x', i);        // perturbation
        Add(P, x');                           // add x' to P
      until (x' ∉ TL) or (f(x') < f(xBest));  // the solution candidate is not an element of TL or meets the aspiration criterion
      if Length(TL) ≥ TabuLength then
        DeleteOldest(TL);
      Add(TL, x');
    end;
    Sort(P, f);
    x' ← P[1];
    if f(x') < f(xBest) then                  // a better solution candidate was found
      xBest ← x';
  end;
  result ← xBest;
end;
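The core of Algorithm 4 is the repeat/until acceptance loop: a mutated candidate is admitted only if it is not on the tabu list, or if it satisfies the aspiration criterion by beating the best value found so far. A minimal Python sketch, with an assumed tabu length:

```python
from collections import deque

def admissible(candidate, f, tabu, best_value):
    """Acceptance test from Algorithm 4: not tabu, or better than the
    best-found value (the aspiration criterion)."""
    return tuple(candidate) not in tabu or f(candidate) < best_value

def next_candidate(mutate, f, best, best_value, tabu):
    """Repeat mutation until an admissible candidate appears (the
    repeat/until loop); 'mutate' stands in for mutation plus perturbation."""
    while True:
        cand = mutate(best)
        if admissible(cand, f, tabu, best_value):
            tabu.append(tuple(cand))
            return cand

# Bounded tabu list: when full, the oldest entry falls out automatically,
# matching 'delete the oldest element when the tabu length is reached'.
tabu = deque(maxlen=50)  # the tabu length of 50 is an assumed setting
```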
2.2.6. Stochastic Simulated Annealing
2.2.7. Differential Evolution
2.2.8. Self-Organising Migrating Algorithm (SOMA)
2.2.9. Evolution Strategy
- They usually use vectors of real numbers as solution candidates. In other words, both the search and the problem space are fixed-length strings of floating-point numbers, as in real-encoded genetic algorithms;
- Mutation and selection are the primary operators, and recombination is less common;
- Mutation most often changes a gene of the solution candidate to a number drawn from a normal distribution;
- In all other aspects, they behave exactly like basic evolutionary algorithms.
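A minimal sketch of the mutation operator described in the list above; the clamping to the decision-variable bounds is an added assumption:

```python
import random

def es_mutation(parent, sigma, bounds):
    """Evolution-strategy mutation: each gene receives a draw from a normal
    distribution centred on its current value; sigma is the step size,
    which is often self-adapted during the run."""
    return [min(b, max(a, x + random.gauss(0.0, sigma)))
            for x, (a, b) in zip(parent, bounds)]

# Example: mutate a 3-gene individual with step size 0.1.
print(es_mutation([0.5, 1.0, 1.5], 0.1, [(0, 2)] * 3))
```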
2.2.10. Particle Swarm Optimisation (PSO)
3. Digital Twin
- $f$ denotes the resulting objective function (maximisation);
- $n_{\mathrm{same}}$ denotes the number of AGVs of the same type;
- $n_{\mathrm{diff}}$ denotes the number of AGVs of different types;
- $u_i$ denotes the utilisation of the $i$-th production line;
- $m$ denotes the number of production lines.
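The paper's exact objective function is not reproduced here; one plausible shape consistent with the listed terms (maximise average line utilisation while penalising the AGV counts, with illustrative weights $w_1$ and $w_2$) is:

```latex
f \;=\; \frac{1}{m}\sum_{i=1}^{m} u_i \;-\; w_1\, n_{\mathrm{same}} \;-\; w_2\, n_{\mathrm{diff}}
\;\longrightarrow\; \max
```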
4. Methodology
- Step 1 and 2—A simulation run of a discrete event simulation model can be time-consuming, depending on the complexity of the model. The developed simulation optimiser, based on a client–server architecture, allows simulation optimisation experiments to be performed on many remote servers. Simulation optimisers are installed on different remote servers and perform optimisation experiments with the discrete event simulation model. Each simulation optimiser communicates with the client via its assigned port number and waits for the instruction to start the execution of optimisation experiments, issued via the application for remote control of the optimisation experiments. If the version of a running simulation optimiser is older than the current version, the optimiser automatically upgrades to the latest version.
- Step 3, 4, and 5—The application for remote control of optimisation experiments is started on the client’s computer. The application downloads a list of IP addresses of available server computers and detects the state of the simulation optimisers. For each IP address, the following attributes of the series are listed: the name of the computer performing the optimisation process; the status of the simulation optimiser (whether an optimisation experiment is running, ready to run the series, sending files with results, etc.); the remaining time to the completion of the current series (updated after the completion of each optimisation experiment); the remaining time to the completion of all series; the index of the currently running series/the total number of series to perform; and the version of the simulation optimiser application. The user can launch multiple simulation optimisers on a server (depending on the computational capacity of the computer). The server can perform multiple independent series with different simulation models, because each running optimiser communicates with the client using its assigned port (depending on the number of licenses of the simulation software). The next figure shows the GUI of the proposed application for remote control of the optimisation experiments running on the simulation optimisers—see Figure 8.
- Step 6—Some methods are very sensitive to their parameter settings, and these settings greatly affect the behaviour of the method when searching for the global optimum of the objective function. The user can test different parameter settings of the optimisation methods—series.
- Step 7—Simulation optimisers are connected to a remote SQL Server database. This database contains all solution candidates in the search space that have been tested by the remote simulation optimisers (i.e., all performed simulation experiments with the discrete event simulation model). All simulation optimisers update this SQL Server database with the simulation experiments they perform. A simulation optimiser therefore does not have to perform a simulation run in the simulation software; it can instead look the solution candidate up among those already tested by any other simulation optimiser. The user can also launch another module for collecting statistics on the use of candidate solutions at the SQL server. We use this module to analyse the relative frequency with which the optimisation methods reuse candidate solutions and to compare their efficiency in generating new candidate solutions during the optimisation process.
- Step 8—The application for remote control of optimisation experiments sends all necessary files with the settings of the optimisation methods, their parameters, and the simulation model with its concrete settings from the client’s computer to selected simulation optimisers running on servers. The execution of parallel simulation experiments allows the user to distribute simulation models to different computers (using the simulation optimisers) and enhance computing power in the optimisation process with the DT. The use of parallelisation in the simulation (sharing the SQL server database) also eliminates redundant data that would result from simulating the same solution of the modelled problem.
- Step 9 to 12—The performance of an optimisation method is significantly affected by the settings of its parameters; thus, we tested different settings of each optimisation method to screen out poor parameter settings. The user can also test different settings of the implemented optimisation methods. Each simulation optimiser can use different optimisation methods with different parameter settings. The user can set the range of the optimisation method parameters, and the simulation optimiser can perform optimisation experiments using all possible settings of the method parameters within the specified range. We distinguish the following levels of experiments:
- Simulation experiment—simulation run of a discrete event simulation model and calculating the objective function value using the outputs from the simulation model.
- Optimisation experiment—performed with a specific optimisation method setting to find the optimum of the objective function (to find the best possible solution to the modelled problem using the digital twin).
- Series—replication of optimisation experiments with a specific optimisation method setting (to test the behaviour of the optimisation method and partially screen out wrong settings of the optimisation method parameters). We tested different settings of the optimisation methods and replicated the optimisation experiments with each concrete setting (a series) to reduce the influence of randomness on the optimisation method. The next table shows the tested settings of the implemented optimisation methods; e.g., we tested 470,400,000 different settings of the particle swarm optimisation method—see Table 2, where $n$ denotes the dimension of the search space and $b$ denotes the number of bits in the gene. A sketch of how such counts arise from the parameter grids follows this step list.
- Step 13—The behaviour of the tested optimisation methods is random. We had to repeat many optimisation experiments to identify the pure nature of the optimisation methods (reduce the randomness of the method behaviour) to compare the efficiency of the tested optimisation methods.
- Step 14—The next flowchart shows the basic phases of simulation optimisation running on the simulation optimiser—see Figure 9 in Section 5 Simulation optimiser.
- Step 15—The simulation optimiser performs an evaluation of each series after its completion. The evaluation contains the box plot characteristics of the objective function values of the generated solution candidates (the smallest observation—sample minimum, lower quartile, median, upper quartile, and largest observation—sample maximum) calculated for each series. The evaluation also includes the objective function value of the best-found solution candidate in each optimisation experiment in the series, the range of objective function values produced during the optimisation experiment, and the number of simulation experiments performed until the termination criterion is met.
- Step 16 to 17—Each simulation optimiser sends all the calculated evaluation results to the client after the completion of all performed series. Sending data also prevents the loss of data from simulation experiments performed on the simulation optimiser if the optimiser fails. These results can be used for the evaluation of the optimisation processes or the evaluation of the behaviour of the methods.
- Step 18—We proposed different criteria which express the success or the failure of an optimisation method in different ways. Each criterion value lies in the interval [0, 1] and is calculated from the box plot characteristics: the smallest observation (sample minimum) Q1, the lower quartile Q2, the median Q3, the upper quartile Q4, and the largest observation (sample maximum) Q5. These characteristics are calculated for each series (each setting of the optimisation method parameters)—see Section 6 Evaluation.
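Steps 9 to 12 sweep each method parameter over a grid, so the number of optimisation experiments per method is the product of the per-parameter value counts. A small counting sketch over a few of the PSO ranges from Table 2 (the full 470,400,000 figure also includes dimension-dependent parameters omitted here, so this product is purely illustrative):

```python
from math import prod

def n_values(lower, upper, step):
    """Number of grid points in [lower, upper] with the given step."""
    return int(round((upper - lower) / step)) + 1

# A few PSO parameter ranges from Table 2: (lower, upper, step).
pso_ranges = [(0.8, 1.4, 0.2),   # weight
              (0.2, 3.0, 0.7),   # particle constant
              (0.2, 3.0, 0.7),   # global constant
              (0.1, 0.5, 0.1)]   # reducing weight constant

print(prod(n_values(a, b, s) for a, b, s in pso_ranges))  # combinations of these four
```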
5. Simulation Optimiser
- Step 1 and 2—The simulation optimiser selects the optimisation method and sets the range of the optimisation method parameters for the series. The method is set according to the settings sent from the client. If a suitable optimisation method is used, the volume of generated data stored in the memory is significantly reduced.
- Step 3 and 4—The simulation optimiser also sets the termination criteria and the range of decision variables (lower and upper bound of the simulation model input parameters) according to the sent settings. The same termination criteria must be satisfied for each optimisation method in the series.
- $|S|$ denotes the size of the search space—the number of all possible solutions in the search space;
- $r$ denotes the coefficient of search space reduction.
- Step 5—The simulation optimiser can download the database of the simulation experiments from the SQL servers to its local memory at the start of the optimisation process.
- Step 6—The optimisation method generates the initial population of the solution candidates. We use one fixed solution candidate (starting point) in the initial population to reduce the randomness of the initial searching for the optimum.
- Step 7 and 8—When the simulation optimiser searches for the best solution candidate, representing the best setting of the discrete event simulation model parameters, it generates a large volume of data. This volume increases with the number of simulation optimisers launched on the computer. The generated data are stored in internal memory to speed up the optimisation process, because many optimisation methods (especially population-based algorithms) reuse individuals that were generated in previous populations. The problem is mainly the maximum size of memory that the simulation optimiser can use (e.g., a 32-bit/64-bit application/operating system; the amount of available memory on the computer), and local memory can run short when storing the data of a simulation optimisation run. We implemented a hashing algorithm to reduce the volume of data generated by the optimisation process. The principle of the algorithm is shown in Algorithm 5; a runnable rendering follows the listing.
Algorithm 5: Hashing Algorithm Pseudocode

Hashing function returning a list of 64-bit values to reduce the memory required for storing decision variable values.

Input:
- DV—list of all decision variables (simulation model input parameters);
- DV[i].v—the i-th decision variable value;
- DV[i].numBits—number of bits of the i-th decision variable;
- DV[i].a—lower bound of the i-th decision variable;
- DV[i].b—upper bound of the i-th decision variable;
- DV[i].step—step (difference between two consecutive values) of the i-th decision variable.

Data:
- Decimals—returns the number of decimal places;
- ToInt—converts a specified value to a 32-bit signed integer;
- ToBin—converts a specified value to a binary number (string data type);
- ToStr—converts a specified value to its equivalent string representation;
- ToInt64—converts a specified string to a 64-bit signed integer;
- SubString—returns a new string containing the section of the current string starting at index position Start and containing the specified number of characters;
- intNum—integer number (local variable);
- binNum—binary number (local variable, string data type);
- FinalBinNum—binary value of the converted number (string data type).

Output:
- HashList—list of all hashed values of the converted decision variable values.

begin
  FinalBinNum ← '';
  // Go through all decision variables
  for i := 0 to Length(DV) − 1 do begin
    // Conversion of the real number to an integer number
    intNum ← ToInt(DV[i].v · Power(10, Decimals(DV[i].step)));
    // Convert the integer number to a string binary number
    binNum ← ToBin(Abs(intNum));
    // Pad '0' to the string binary number
    for j := Length(binNum) to DV[i].numBits − 2 do
      binNum ← '0' + binNum;
    // Concatenate the sign bit to the string binary number
    if intNum < 0 then
      binNum ← '1' + binNum
    else
      binNum ← '0' + binNum;
    // Final binary number
    FinalBinNum ← FinalBinNum + binNum;
  end;
  // Clear the output list
  HashList ← ∅;
  // Split the string into 64-bit parts and add the converted decision variable values to the list
  for k := 0 to (Length(FinalBinNum) − 1) div 64 do
    Add(HashList, ToInt64(SubString(FinalBinNum, k · 64, 64)));
  result ← HashList;
end;
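A runnable counterpart to Algorithm 5. This simplified Python variant packs each value's index on its step grid into fixed-width bit fields; the published routine additionally handles sign bits and decimal scaling, so treat this as a sketch of the idea rather than the exact encoding:

```python
def hash_decision_variables(dvs):
    """Pack all decision-variable values into a short list of 64-bit integers.
    Each value is replaced by its index on the step grid, rendered as a
    fixed-width bit field; the fields are concatenated and split into
    64-bit chunks."""
    bits = ""
    for dv in dvs:  # dv: dict with keys v, numBits, a, step
        index = int(round((dv["v"] - dv["a"]) / dv["step"]))  # real -> grid index
        bits += format(index, "b").zfill(dv["numBits"])       # pad '0' on the left
    # Split the concatenated bit string into 64-bit chunks, one integer each.
    return [int(bits[i:i + 64], 2) for i in range(0, len(bits), 64)]

# Example: 15 variables of 4 bits each pack into a single 60-bit integer,
# so one indexed SQL column can replace 15 conjunctive WHERE conditions.
dvs = [{"v": 3.5, "a": 0.0, "step": 0.5, "numBits": 4}] * 15
print(hash_decision_variables(dvs))  # -> one packed integer
```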
- Step 9–12 and 19–20—The simulation optimiser can also download records (encoded data) from the remote database, including all simulation experiments performed with the discrete event simulation model. The simulation optimiser then does not have to perform the simulation run in the simulation software; it first searches for the simulation experiment in its local memory and, if the local memory does not contain the solution candidate, it searches for the record in the SQL server database. A simulation experiment with the AGVs DT takes about 10 s; if the simulation experiment was already performed by any other simulation optimiser, executing the SQL query, including loading the record representing the simulation experiment, takes only about 0.001 s. Because we use the hashing algorithm, an SQL query with only one parameter value is used to find a simulation experiment with the AGVs DT instead of searching for 15 parameter values (the SQL query would otherwise have to contain 15 conjunctive conditions, which means a multiple slowdown). A further advantage is that this field can be indexed for faster searching of database records. A sketch of this tiered lookup follows this step list.
- Step 13 and 14—If neither the local memory nor the remote SQL Server database contains the needed solution candidate, the simulation optimiser generates a candidate solution. This candidate solution represents the settings of the simulation model input parameters. The simulation optimiser launches the simulation run in the simulation software; we used the Tecnomatix Plant Simulation software developed by Siemens PLM Software. The simulation optimiser then calculates the objective function value of this candidate solution using the simulation outputs.
- Step 15—The values of the decision variables of the solution candidate and its objective function value are encoded by the hashing method to reduce the data volume. The hashing method also supports indexing the records on the SQL server, which speeds up record searches in the database while minimising memory requirements.
- Step 16 and 17—One way of speeding up the optimisation process is to store the data generated by the optimisation process in the internal memory allocated to the simulation optimiser. We found that the optimisation algorithms often reuse the same solution candidates to map the objective function landscape (searching for the path to the optimum of the objective function). This fact can be used to speed up the optimisation process and reduce the data stored in the memory of the simulation optimiser. The simulation optimisers are connected to the SQL server database containing the records—the tested solution candidates and their objective values. Each simulation optimiser can download or upload the data of a simulation experiment to the SQL server. A record is stored in the local database, and in the SQL server database if it does not already contain this specific record (another simulation optimiser could perform the same simulation experiment and send the record to the SQL Server database at the same time). This reduces duplicated simulation experiments in the SQL server database. The simulation optimiser does not download the same simulation experiments in the initial stage of the simulation optimisation of the DT. Each DT has its own database of simulation experiments. The developed application supports managing the volume of data (generated by previous optimisation processes) held in the memory by setting the following options:
- No previously performed optimisation experiments are held in the memory—the simulation optimiser checks whether the SQL server contains the solution candidate, that is, a record of a performed simulation run (the concrete settings of the simulation model input parameters and the objective function value calculated from the output of the simulation run with those settings). If the database contains this record, the simulation optimiser downloads the data encoded by the hashing algorithm and decodes them; otherwise, the simulation is run with the concrete settings of the simulation model input parameters.
- The simulation optimiser stores a specified number of records (executed simulation runs and calculated objective function values) in the local memory—this option is used when only limited memory is available to the simulation optimiser. The optimiser scans its internal memory first to find a specific record (the concrete settings of the simulation model input parameters). If it does not find the record in the internal memory, it searches the SQL server. If the record is still not found, the simulation optimiser runs the simulation experiment with the concrete settings of the simulation model input parameters.
- The simulation optimiser stores all the performed optimisation experiments (all records)—the logic is the same as the previous variant except that all the simulation runs in all optimisation experiments are held in the computer’s memory. The simulation optimiser downloads all records from the SQL server database. This approach will cause a certain time delay before the first simulation run is started, but it will save time spent on the communication between the SQL server and the simulation optimiser.
- Step 18—The simulation optimiser continuously displays the state of the optimisation experiment. It displays the objective function value of the proposed solution candidate; the decision variable values (the settings of the simulation model input parameters of the proposed solution); the objective function value and the number of the simulation experiment in which the best solution candidate was found by the optimisation method; the objective function value of the current solution; the number of the current simulation experiment; the simulation experiment computation time; the number of loaded/saved simulation experiments in the internal or external database, etc.—see Figure 13. The simulation optimiser also calculates the box plot evaluation characteristics of the objective function values of the best-found solution candidates and of all generated solution candidates, and the number of performed simulation experiments for each series.
- Step 21—If any of the termination criteria are met, the optimisation experiment is terminated.
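Steps 9 to 20 amount to a three-tier evaluation: local memory first, then the shared SQL database, and a full simulation run only as a last resort. A Python sketch with placeholder callables (the three helpers and the cache structure are assumptions, not the paper's API):

```python
def objective_value(candidate, local_cache, sql_lookup, run_simulation):
    """Tiered evaluation used by the simulation optimiser: local memory
    first, then the shared SQL database, and only as a last resort a
    roughly 10 s simulation run."""
    key = tuple(candidate)
    if key in local_cache:                 # near-instant in-memory hit
        return local_cache[key]
    value = sql_lookup(key)                # ~0.001 s if another optimiser ran it
    if value is None:
        value = run_simulation(candidate)  # ~10 s simulation experiment
    local_cache[key] = value               # remember for later reuse
    return value
```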
6. Evaluation
6.1. Quality of the Found Solution Candidate
6.2. The Speed in Finding a Solution Candidate
6.3. Settings of the Optimisation Methods
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Hartmann, D.; Van der Auweraer, H. Digital Twins. In Progress in Industrial Mathematics: Success Stories; Cruz, M., Parés, C., Quintela, P., Eds.; Springer International Publishing: Cham, Switzerland, 2021; Volume 5, pp. 3–17.
2. Fryer, T. Digital Twin—Introduction. This is the age of the Digital Twin. Eng. Technol. 2019, 14, 28–29.
3. Andronas, D.; Kokotinis, G.; Makris, S. On modelling and handling of flexible materials: A review on Digital Twins and planning systems. Procedia CIRP 2021, 97, 447–452.
4. Rao, S.V. Using a Digital Twin in Predictive Maintenance. J. Pet. Technol. 2020, 72, 42–44.
5. Liljaniemi, A.; Paavilainen, H. Using Digital Twin Technology in Engineering Education—Course Concept to Explore Benefits and Barriers. Open Eng. 2020, 10, 377–385.
6. Lv, Z.; Chen, D.; Lou, R.; Alazab, A. Artificial intelligence for securing industrial-based cyber–physical systems. Future Gener. Comput. Syst. 2021, 117, 291–298.
7. Vogel-Heuser, B.; Lee, J.; Leitão, P. Agents enabling cyber-physical production systems. at-Automatisierungstechnik 2015, 63, 777–789.
8. Tao, F.; Qi, Q.; Wang, L.; Nee, A.Y.C. Digital Twins and Cyber–Physical Systems toward Smart Manufacturing and Industry 4.0: Correlation and Comparison. Engineering 2019, 5, 653–661.
9. Uhlemann, T.H.-J.; Lehmann, C.; Steinhilper, R. The Digital Twin: Realizing the Cyber-Physical Production System for Industry 4.0. Procedia CIRP 2017, 61, 335–340.
10. Lee, J.; Bagheri, B.; Kao, H.-A. Recent Advances and Trends of Cyber-Physical Systems and Big Data Analytics in Industrial Informatics. In Proceedings of the International Conference on Industrial Informatics, Porto Alegre, Brazil, 27–30 July 2014.
11. Lu, Y.; Liu, C.; Wang, K.I.-K.; Huang, H.; Xu, X. Digital Twin-driven smart manufacturing: Connotation, reference model, applications and research issues. Robot. Comput. Manuf. 2020, 61, 101837.
12. Zhang, Y.; Ren, S.; Liu, Y.; Si, S. A big data analytics architecture for cleaner manufacturing and maintenance processes of complex products. J. Clean. Prod. 2017, 142, 626–641.
13. Gandomi, A.; Haider, M. Beyond the hype: Big data concepts, methods, and analytics. Int. J. Inf. Manag. 2015, 35, 137–144.
14. Alwan, H.B.; Ku-Mahamud, K.R. Big data: Definition, characteristics, life cycle, applications, and challenges. IOP Conf. Ser. Mater. Sci. Eng. 2020, 769, 012007.
15. Kendzierskyj, S.; Jahankhani, H.; Jamal, A.; Jimenez, J.I. The Transparency of Big Data, Data Harvesting and Digital Twins; Jahankhani, H., Kendzierskyj, S., Jamal, A., Epiphaniou, G., Al-Khateeb, H., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 139–148.
16. Da Xu, L.; Duan, L. Big data for cyber physical systems in industry 4.0: A survey. Enterp. Inf. Syst. 2019, 13, 148–169.
17. He, W.; Da Xu, L. Integration of Distributed Enterprise Applications: A Survey. IEEE Trans. Ind. Inform. 2012, 10, 35–42.
18. Vieira, A.A.; Dias, L.M.; Santos, M.Y.; Pereira, G.A.B.; Oliveira, J.A. On the use of simulation as a Big Data semantic validator for supply chain management. Simul. Model. Pract. Theory 2020, 98, 101985.
19. Cheng, S.; Liu, B.; Ting, T.O.; Qin, Q.; Shi, Y.; Huang, K. Survey on data science with population-based algorithms. Big Data Anal. 2016, 1, 35.
20. Amaran, S.; Sahinidis, N.V.; Sharda, B.; Bury, S.J. Simulation optimization: A review of algorithms and applications. Ann. Oper. Res. 2016, 240, 351–380.
21. Xu, J.; Huang, E.; Chen, C.-H.; Lee, L.H. Simulation Optimization: A Review and Exploration in the New Era of Cloud Computing and Big Data. Asia-Pac. J. Oper. Res. 2015, 32, 1550019-1–1550019-34.
22. Alrabghi, A.; Tiwari, A.; Savill, M. Simulation-based optimisation of maintenance systems: Industrial case studies. J. Manuf. Syst. 2017, 44, 191–206.
23. Longo, C.S.; Fantuzzi, C. Simulation and optimization of industrial production lines. at-Automatisierungstechnik 2018, 66, 320–330.
24. Shahbazi, S.; Sajadi, S.M.; Jolai, F. A Simulation-Based Optimization Model for Scheduling New Product Development Projects in Research and Development Centers. Iranian J. Manag. Stud. 2017, 10.
25. Van Der Auweraer, H.; Anthonis, J.; De Bruyne, S.; Leuridan, J. Virtual engineering at work: The challenges for designing mechatronic products. Eng. Comput. 2012, 29, 389–408.
26. Tasoglu, G.; Yildiz, G. Simulated annealing based simulation optimization method for solving integrated berth allocation and quay crane scheduling problems. Simul. Model. Pract. Theory 2019, 97, 101948.
27. Grzybowska, H.; Kerferd, B.; Gretton, C.; Travis Waller, S. A simulation-optimisation genetic algorithm approach to product allocation in vending machine systems. Expert Syst. Appl. 2020, 145, 113110.
28. Bovim, T.R.; Christiansen, M.; Gullhav, A.N.; Range, T.M.; Hellemo, L. Stochastic master surgery scheduling. Eur. J. Oper. Res. 2020, 285, 695–711.
29. Rouky, N.; Abourraja, M.N.; Boukachour, J.; Boudebous, D.; Alaoui, A.E.H.; El Khoukhi, F. Simulation optimization based ant colony algorithm for the uncertain quay crane scheduling problem. Int. J. Ind. Eng. Comput. 2019, 10, 111–132.
30. Yang, T.; Kuo, Y.; Chang, I. Tabu-search simulation optimization approach for flow-shop scheduling with multiple processors—A case study. Int. J. Prod. Res. 2004, 42, 4015–4030.
31. Hong, L.J.; Nelson, B.L. A brief introduction to optimization via simulation. In Proceedings of the 2009 Winter Simulation Conference (WSC), Austin, TX, USA, 13–16 December 2009; pp. 75–85.
32. Trigueiro de Sousa Junior, W.; Barra Montevechi, J.A.; de Carvalho Miranda, R.; Teberga Campos, A. Discrete simulation-based optimization methods for industrial engineering problems: A systematic literature review. Comput. Ind. Eng. 2019, 128, 526–540.
33. Raska, P.; Ulrych, Z. Methodology for evaluating optimization experiments. In Proceedings of the 32nd European Modeling and Simulation Symposium, EMSS 2020, Athens, Greece, 16–18 September 2020.
34. Nelder, J.A.; Mead, R. A Simplex Method for Function Minimization. Comput. J. 1965, 7, 308–313.
35. Maehara, N.; Shimoda, Y. Application of the genetic algorithm and downhill simplex methods (Nelder–Mead methods) in the search for the optimum chiller configuration. Appl. Therm. Eng. 2013, 61, 433–442.
36. Weise, T. Global Optimization Algorithms—Theory and Application. 2009. Available online: http://www.it-weise.de (accessed on 2 February 2011).
37. Raska, P.; Ulrych, Z. Comparison of optimisation methods tested on testing functions and discrete event simulation models. Int. J. Simul. Process. Model. 2015, 10, 279.
38. Monticelli, A.J.; Romero, R.; Asada, E. Fundamentals of Tabu Search. In Modern Heuristic Optimization Techniques; Wiley-IEEE Press: Piscataway, NJ, USA, 2007; pp. 101–122.
39. Dréo, J.; Pétrowski, A.; Taillard, E. Metaheuristics for Hard Optimization; Springer: Berlin/Heidelberg, Germany, 2006.
40. Monticelli, A.J.; Romero, R.; Asada, E. Fundamentals of Simulated Annealing. In Modern Heuristic Optimization Techniques: Theory and Applications to Power Systems; Wiley-IEEE Press: Hoboken, NJ, USA, 2007; pp. 123–146.
41. Gonzalez, T.F. Handbook of Approximation Algorithms and Metaheuristics; Chapman and Hall/CRC: New York, NY, USA, 2007; 1432p.
42. Storn, R.; Price, K.V. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359.
43. Eltaeib, T.; Mahmood, A. Differential Evolution: A Survey and Analysis. Appl. Sci. 2018, 8, 1945.
44. Zelinka, I. SOMA—Self-organizing Migrating Algorithm. In Studies in Computational Intelligence; Springer: Cham, Switzerland, 2016.
45. Zelinka, I.; Davendra, D.D.; Senkerik, R.; Pluhacek, M. Investigation on evolutionary predictive control of chemical reactor. J. Appl. Log. 2015, 13, 156–166.
46. Li, Y.-L.; Zhan, Z.-H.; Gong, Y.-J.; Chen, W.-N.; Zhang, J.; Li, Y. Differential Evolution with an Evolution Path: A DEEP Evolutionary Algorithm. IEEE Trans. Cybern. 2014, 45, 1798–1810.
47. Pant, M.; Zaheer, H.; Garcia-Hernandez, L.; Abraham, A. Differential Evolution: A review of more than two decades of research. Eng. Appl. Artif. Intell. 2020, 90, 103479.
48. Zelinka, I.; Snasel, V.; Abraham, A. (Eds.) Handbook of Optimization; Springer: Berlin/Heidelberg, Germany, 2013; Volume 38.
49. Rechenberg, I. Cybernetic Solution Path of an Experimental Problem (Kybernetische Lösungsansteuerung einer experimentellen Forschungsaufgabe). In Evolutionary Computation: The Fossil Record; Wiley-IEEE Press: Hoboken, NJ, USA, 1998.
50. Bäck, T.; Foussette, C.; Krause, P. Contemporary Evolution Strategies. Nat. Comput. Ser. 2013, 47.
51. Beyer, H.-G.; Schwefel, H.-P. Evolution strategies—A comprehensive introduction. Nat. Comput. 2002, 1, 3–52.
52. Hansen, N.; Ostermeier, A.; Gawelczyk, A. On the Adaptation of Arbitrary Normal Mutation Distributions in Evolution Strategies: The Generating Set Adaptation. In Proceedings of the Sixth International Conference on Genetic Algorithms, San Francisco, CA, USA, 15–19 July 1995.
53. Meyer-Nieberg, S.; Beyer, H.-G. Self-Adaptation in Evolutionary Algorithms. Stud. Comput. Intell. 2007, 47–75.
54. Akhtar, J.; Awais, M.M.; Koshul, B.B. Evolutionary Algorithms based on non-Darwinian theories of evolution. In Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–6 June 2008; pp. 2554–2560.
55. Beyer, H.-G.; Sendhoff, B. Covariance Matrix Adaptation Revisited—The CMSA Evolution Strategy. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2008.
56. Igel, C.; Suttorp, T.; Hansen, N. Steady-State Selection and Efficient Covariance Matrix Update in the Multi-Objective CMA-ES. In Evolutionary Multi-Criterion Optimization; Springer: Berlin/Heidelberg, Germany, 2007; Volume 4403, pp. 171–185.
57. Müller, C.L.; Sbalzarini, I.F. Gaussian Adaptation. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2010.
58. Hansen, N.; Ostermeier, A. Convergence Properties of Evolution Strategies with the Derandomized Covariance Matrix Adaptation: The (μ/μ_I, λ)-CMA-ES. In Proceedings of the 5th European Congress on Intelligent Techniques and Soft Computing, Aachen, Germany, 8–11 September 1997.
59. Jastrebski, G.; Arnold, D. Improving Evolution Strategies through Active Covariance Matrix Adaptation. In Proceedings of the 2006 IEEE International Conference on Evolutionary Computation, Vancouver, BC, Canada, 16–21 July 2006.
60. Tripathy, A.; Schwefel, H.-P. Numerical Optimization of Computer Models. J. Oper. Res. Soc. 1982, 33, 1166.
61. Beyer, H.-G. Toward a Theory of Evolution Strategies: Self-Adaptation. Evol. Comput. 1995, 3, 311–347.
62. Matsumura, Y.; Ohkura, K.; Ueda, K. Advantages of global discrete recombination in (μ/μ, λ)-evolution strategies. In Proceedings of the 2002 Congress on Evolutionary Computation, CEC 2002, Honolulu, HI, USA, 12–17 May 2002.
63. Geyer, H.; Ulbig, P.; Schulz, S. Verschachtelte Evolutionsstrategien zur Optimierung nichtlinearer verfahrenstechnischer Regressions-Probleme (Nested evolution strategies for optimising nonlinear process-engineering regression problems). Chem. Ing. Tech. 2000, 72, 369–373.
64. Rechenberg, I. Evolutionsstrategie—Optimieren wie in der Natur (Evolution strategy—optimising as in nature). Technik und Natur 1994, 227–244.
65. Raska, P.; Ulrych, Z. Testing different evolution strategy selection strategies. MM Sci. J. 2018, 2018, 2290–2299.
66. Eberhart, R.; Kennedy, J. A New Optimizer Using Particle Swarm Theory. In Proceedings of the International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995.
67. Parrish, J.K.; Hamner, W.M. (Eds.) Animal Groups in Three Dimensions; Cambridge University Press: Cambridge, UK, 1997.
68. Wang, D.; Tan, D.; Liu, L. Particle swarm optimization algorithm: An overview. Soft Comput. 2018, 22, 387–408.
69. Marini, F.; Walczak, B. Particle swarm optimization (PSO). A tutorial. Chemom. Intell. Lab. Syst. 2015, 149, 153–165.
70. Piotrowski, A.P.; Napiorkowski, J.; Piotrowska, A.E. Population size in Particle Swarm Optimization. Swarm Evol. Comput. 2020, 58, 100718.
71. Bonyadi, M.R.; Michalewicz, Z. Particle Swarm Optimization for Single Objective Continuous Space Problems: A Review. Evol. Comput. 2017, 25, 1–54.
72. Nagpal, R.; Singh, P.; Garg, B.P. Smart Particle Swarm Optimization. In Proceedings of the 2021 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), Greater Noida, India, 19–20 February 2021.
73. Xu, G.; Cui, Q.; Shi, X.; Ge, H.; Zhan, Z.-H.; Lee, H.P.; Liang, Y.; Tai, R.; Wu, C. Particle swarm optimization based on dimensional learning strategy. Swarm Evol. Comput. 2019, 45, 33–51.
74. Tawhid, M.A.; Dsouza, K.B. Hybrid binary bat enhanced particle swarm optimization algorithm for solving feature selection problems. Appl. Comput. Inform. 2018, 16, 117–136.
75. Zhan, Z.-H.; Wang, Z.-J.; Jin, H.; Zhang, J. Adaptive Distributed Differential Evolution. IEEE Trans. Cybern. 2020, 50, 4633–4647.
76. Raska, P.; Ulrych, Z. Testing Different Particle Swarm Optimization Strategies. In Proceedings of the 30th International Business Information Management Association Conference, IBIMA 2017—Vision 2020: Sustainable Economic Development, Innovation Management, and Global Growth, Madrid, Spain, 8–9 November 2017.
77. Raska, P.; Ulrych, Z. Evaluation of a Self-Organizing Migrating Algorithm applied to discrete event simulation optimization. In Proceedings of the 31st European Modeling and Simulation Symposium, EMSS 2019, Lisbon, Portugal, 18–20 September 2019.
78. Arora, K.; Kumar, A.; Kamboj, V.K.; Prashar, D.; Jha, S.; Shrestha, B.; Joshi, G.P. Optimization Methodologies and Testing on Standard Benchmark Functions of Load Frequency Control for Interconnected Multi Area Power System in Smart Grids. Mathematics 2020, 8, 980.
| Optimisation Method | Papers | Proceedings | Total | Cum. % |
| --- | --- | --- | --- | --- |
| Heuristics: | | | | |
| Local Search | 7/11.7% | 3/5.0% | 10/16.7% | 18.2% |
| Random Search | 5/8.3% | 2/3.3% | 7/11.7% | 30.9% |
| Hill Climbing | 3/5.0% | 2/3.3% | 5/8.3% | 40.0% |
| Others | 24/40.0% | 14/23.3% | 38/63.3% | 100% |
| Metaheuristics: | | | | |
| Evolutionary | 60/20.7% | 65/22.4% | 125/43.1% | 43.1% |
| Simulated Annealing | 11/3.8% | 12/4.1% | 23/7.9% | 51.0% |
| Tabu Search | 14/4.8% | 6/2.1% | 20/6.9% | 57.9% |
| VNS | 5/1.7% | 1/0.3% | 6/2.1% | 60.0% |
| Others | 87/30.0% | 29/10.0% | 116/40.0% | 100% |
| Optimisation Method | Optimisation Method Parameter | Step | Lower Boundary | Upper Boundary |
| --- | --- | --- | --- | --- |
| Random search | Option to generate the same solution candidates (Boolean) | 1 | 0 | 1 |
| Downhill simplex (Nelder–Mead method) | Expansion coefficient | 0.2 | 0 | 1.6 |
| | Reflection coefficient | 0.2 | 0 | 1.6 |
| | Contraction coefficient | 0.2 | 0 | 0.8 |
| | Reduction coefficient | 0.2 | 0 | 0.8 |
| Local search | Number of steps | 2 | 1 | 29 |
| Hill climbing | Population size | | | |
| | Number of steps | 2 | 1 | 29 |
| Tabu search | Population size | | | |
| | Number of steps | 2 | 1 | 29 |
| | Tabu length | | | |
| Simulated annealing | Mutate randomly selected gene (Boolean) | 0 | 0 | 1 |
| | Non-negative parameter of temperature reduction | 0.1 | 0.1 | 0.4 |
| | Minimum temperature | 0.005 | 0.005 | 0.01 |
| | Use step (Boolean) | 1 | 0 | 1 |
| | Number of steps | 2 | 1 | 29 |
| | Reduce temperature on acceptance of a worse solution candidate (Boolean) | 1 | 0 | 1 |
| | Return to the best solution candidate found over all iterations (Boolean) | 1 | 0 | 1 |
| | Number of previous iterations to remember the found best solution candidate | 5 | 5 | 10 |
| Differential evolution | Population size | | | |
| | Parameter of the adaptive rule | 0.2 | 0.2 | 0.8 |
| | Probability of a crossover | 0.2 | 0.2 | 0.8 |
| Evolution strategy | Population size | | | |
| | Number of offspring | | | |
| | Number of successes (the offspring is better than the parent) to be monitored | | | |
| | Number of other contestants per tournament | | | |
| | Probability of individual selection | 0.1 | 0.1 | 0.6 |
| | Cut-off value | | | |
| Self-organising migrating algorithm (SOMA) | Mass (stopping distance from the leader) | 0.5 | 1.1 | 2.6 |
| | Step (distance between the currently selected individual and the leader) | 0.4 | 0.11 | 1.31 |
| | Perturbation (probability of movement towards the leader) | 0.1 | 0.1 | 0.6 |
| | Population size | | | |
| | Number of migration loops | 10 | 10 | 10 |
| | Type of strategy of individual movement | 1 | 0 | 3 |
| Particle swarm optimisation | Number of particles | | | |
| | Weight | 0.2 | 0.8 | 1.4 |
| | Particle constant | 0.7 | 0.2 | 3 |
| | Global constant | 0.7 | 0.2 | 3 |
| | Reduce weight (Boolean) | 1 | 0 | 1 |
| | Reducing weight constant | 0.1 | 0.1 | 0.5 |
| | Life span | 20 | 30 | 90 |
| | Leader age | 5 | 0 | 20 |
| | Leader—maximum evaluation | 1 | 2 | 5 |
| | Dimension | 1 | 2 | 5 |
| | Reassigning gap | 3 | 2 | 20 |
| | Parameter of calculated particle velocity | 0.1 | 0.2 | 0.8 |
| | Type of strategy | 0 | 0 | 5 |
| Genetic algorithm | Population minimum size | | | |
| | Population maximum size | | | |
| | Type of generation strategy | 1 | 0 | 1 |
| | Number of generations | 5 | 1 | 6 |
| | Type of selection | 1 | 0 | 3 |
| | Selection tournament size | | | |
| | Crossover probability | 0.25 | 0.5 | 1 |
| | Type of crossover | 1 | 0 | 4 |
| | Crossover swap point index | 0 | | |
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).