Evolutionary Multi-objective Optimization: An Honorary Issue Dedicated to Professor Kalyanmoy Deb

A special issue of Mathematical and Computational Applications (ISSN 2297-8747).

Deadline for manuscript submissions: closed (1 November 2022) | Viewed by 35103

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Prof. Dr. Carlos Coello
Guest Editor
Depto de Computacion, CINVESTAV-IPN, 07360 Mexico City, Mexico
Interests: multi-objective optimization using metaheuristics; bio-inspired metaheuristics for optimization (e.g., evolutionary algorithms, artificial immune systems, particle swarm optimization)

Prof. Dr. Erik Goodman
Guest Editor
BEACON Center for the Study of Evolution in Action, Department of Computer Science and Engineering, Department of Mechanical Engineering, Michigan State University, East Lansing, MI 48824, USA
Interests: evolutionary multi-objective optimization

Prof. Dr. Kaisa Miettinen
Guest Editor
Faculty of Information Technology, University of Jyvaskyla, P.O. Box 35 (Agora), FI-40014 University of Jyvaskyla, Finland
Interests: multiobjective optimization; interactive methods; evolutionary algorithms; visualization; decision analytics and support

Prof. Dr. Dhish Saxena
Guest Editor
Department of Mechanical and Industrial Engineering, Indian Institute of Technology, Roorkee, India
Interests: evolutionary multi-objective optimization; multi-criteria decision making; AI-assisted optimization

Prof. Dr. Oliver Schütze
Guest Editor
Depto de Computacion, CINVESTAV, Mexico City 07360, Mexico
Interests: multi-objective optimization; evolutionary computation (genetic algorithms and evolution strategies); numerical analysis; engineering applications

Prof. Dr. Lothar Thiele
Guest Editor
Computer Engineering and Networks Laboratory, ETH Zürich, CH-8092 Zurich, Switzerland
Interests: evolutionary multi-objective optimization

Special Issue Information

Dear Colleagues,

This Honorary Special Issue is dedicated to the 60th birthday of Professor Kalyanmoy Deb, a pioneer of Evolutionary Multi-objective Optimization (EMO) and one of its most influential proponents since 1994.

We are gathering contributions from collaborators, former students, and scientists working in the research areas most strongly shaped by Professor Deb's work. We therefore invite researchers to submit high-quality articles related to (but not limited to) the following topics:

- Evolutionary multi-objective optimization (EMO):
  • multi-objective evolutionary algorithms;
  • genetic operators;
  • theory of EMO;
  • performance indicators and proximity measures;
  • archiving strategies;
  • surrogate-assisted optimization;
  • integration of user preferences;
  • many-objective optimization;
  • constraint handling;
  • hybrid methods;
  • test functions;

- Multi-criteria decision making (MCDM);
- Scalar optimization;
- Bi-level optimization;
- AI-assisted optimization;
- Applications to real-world problems.

Both research articles and surveys are welcome. Accepted papers will be published free of charge.

Kalyanmoy Deb is currently the Koenig Endowed Chair Professor and a University Distinguished Professor in the Department of Electrical and Computer Engineering at Michigan State University, USA, and holds additional appointments in Mechanical Engineering and in Computer Science and Engineering. Professor Deb's research interests lie in evolutionary optimization and its application to multi-criteria optimization, modeling, machine learning, and multi-criteria decision making. He has been a visiting professor at various universities across the world, including IITs in India, Aalto University in Finland, the University of Skövde in Sweden, and Nanyang Technological University in Singapore. He was awarded the IEEE Evolutionary Computation Pioneer Award, the Infosys Prize, the TWAS Prize in Engineering Sciences, the CajAstur Mamdani Prize, the Distinguished Alumni Award from IIT Kharagpur, the Edgeworth–Pareto Award, the Bhatnagar Prize in Engineering Sciences, and the Bessel Research Award from Germany. He is a fellow of the IEEE, ASME, and three Indian science and engineering academies. He has published over 550 research papers, which have received more than 165,000 Google Scholar citations, giving him an h-index of 126. More information about his research contributions can be found at https://www.coin-lab.org.

Prof. Dr. Carlos Coello
Prof. Dr. Erik Goodman
Prof. Dr. Kaisa Miettinen
Prof. Dr. Dhish Saxena
Prof. Dr. Oliver Schütze
Prof. Dr. Lothar Thiele
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematical and Computational Applications is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (12 papers)

Editorial

10 pages, 198 KiB  
Editorial
Interview: Kalyanmoy Deb Talks about Formation, Development and Challenges of the EMO Community, Important Positions in His Career, and Issues Faced Getting His Works Published
by Carlos Coello, Erik Goodman, Kaisa Miettinen, Dhish Saxena, Oliver Schütze and Lothar Thiele
Math. Comput. Appl. 2023, 28(2), 34; https://doi.org/10.3390/mca28020034 - 1 Mar 2023
Viewed by 2122
Abstract
Kalyanmoy Deb was born in Udaipur, Tripura, the smallest state of India at the time, in 1963 [...]

Research

20 pages, 821 KiB  
Article
Single-Loop Multi-Objective Reliability-Based Design Optimization Using Chaos Control Theory and Shifting Vector with Differential Evolution
by Raktim Biswas and Deepak Sharma
Math. Comput. Appl. 2023, 28(1), 26; https://doi.org/10.3390/mca28010026 - 17 Feb 2023
Viewed by 1689
Abstract
Multi-objective reliability-based design optimization (MORBDO) is an efficient tool for generating reliable Pareto-optimal (PO) solutions. However, generating such PO solutions requires many function evaluations for reliability analysis, thereby increasing the computational cost. In this paper, a single-loop multi-objective reliability-based design optimization formulation is proposed that approximates the reliability analysis using the Karush–Kuhn–Tucker (KKT) optimality conditions. Further, chaos control theory is used to update the point estimated through the KKT conditions in order to avoid convergence issues. To generate the reliable point in the feasible region, the proposed formulation also incorporates the shifting-vector approach. The proposed MORBDO formulation is solved using differential evolution (DE) with a heuristic convergence parameter based on the hypervolume indicator for applying different mutation operators. DE incorporating the proposed formulation is tested on two mathematical examples and one engineering example. The results demonstrate that the proposed method generates a better set of reliable PO solutions than the double-loop variant of multi-objective DE. Moreover, the proposed method requires 6× to 377× fewer function evaluations than the double-loop-based DE.
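For readers less familiar with differential evolution, the sketch below shows the classic DE/rand/1/bin variation step that solvers of this kind build on. It is a generic illustration only: the function name, the parameter values F and CR, and the population layout are our own assumptions, and it does not reproduce the authors' multi-objective DE, its hypervolume-based convergence parameter, or the reliability analysis.

```python
import numpy as np

def de_rand_1_bin(pop, i, F=0.5, CR=0.9, rng=np.random.default_rng()):
    """Create a trial vector for individual i of population pop (shape: NP x D)."""
    NP, D = pop.shape
    candidates = [j for j in range(NP) if j != i]   # indices distinct from i
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])      # differential mutation
    cross = rng.random(D) < CR                      # binomial crossover mask
    cross[rng.integers(D)] = True                   # guarantee at least one mutated gene
    return np.where(cross, mutant, pop[i])
```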

19 pages, 4005 KiB  
Article
Many-Objectives Optimization: A Machine Learning Approach for Reducing the Number of Objectives
by António Gaspar-Cunha, Paulo Costa, Francisco Monaco and Alexandre Delbem
Math. Comput. Appl. 2023, 28(1), 17; https://doi.org/10.3390/mca28010017 - 30 Jan 2023
Cited by 2 | Viewed by 4260
Abstract
Solving real-world multi-objective optimization problems using multi-objective optimization algorithms becomes difficult when the number of objectives is high, since the types of algorithms generally used to solve these problems are based on the concept of non-dominance, which ceases to work as the number of objectives grows. This problem is known as the curse of dimensionality. Simultaneously, the existence of many objectives, a characteristic of practical optimization problems, makes choosing a solution to the problem very difficult. Different approaches are used in the literature to reduce the number of objectives required for optimization. This work proposes a machine learning methodology, designated FS-OPA, to tackle this problem. The proposed methodology was assessed using the DTLZ benchmark problems suggested in the literature and compared with similar algorithms, showing good performance. Finally, the methodology was applied to a difficult real-world problem in polymer processing, demonstrating its effectiveness. The proposed algorithm has some advantages over a similar machine-learning-based algorithm in the literature (NL-MVU-PCA), namely, the possibility of establishing variable–variable and objective–variable relations (not only objective–objective ones) and the elimination of the need to define/choose a kernel or to optimize algorithm parameters. Collaboration with the decision maker(s) allows explainable solutions to be obtained.
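To make the idea of objective reduction concrete, here is a deliberately simple correlation-based pass over an objective matrix. This is not FS-OPA (which derives variable–variable and objective–variable relations via machine learning); the function name and threshold are illustrative assumptions, shown only to convey what "dropping largely redundant objectives" means.

```python
import numpy as np

def reduce_objectives(F, threshold=0.95):
    """F: (n_solutions x m_objectives) objective matrix.
    Keep an objective only if it is not strongly positively correlated
    with an objective that was already kept."""
    m = F.shape[1]
    corr = np.corrcoef(F, rowvar=False)   # pairwise correlations between objectives
    keep = []
    for j in range(m):
        if all(corr[j, k] < threshold for k in keep):
            keep.append(j)
    return keep
```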

13 pages, 457 KiB  
Article
Knowledge Transfer Based on Particle Filters for Multi-Objective Optimization
by Xilu Wang and Yaochu Jin
Math. Comput. Appl. 2023, 28(1), 14; https://doi.org/10.3390/mca28010014 - 18 Jan 2023
Cited by 1 | Viewed by 1968
Abstract
Particle filters, also known as sequential Monte Carlo (SMC) methods, constitute a class of importance sampling and resampling techniques designed to use simulations to perform on-line filtering. Recently, particle filters have been extended for optimization by exploiting their ability to track a sequence of distributions. In this work, we incorporate transfer learning capabilities into the optimizer by using particle filters. To achieve this, we propose a novel particle-filter-based multi-objective optimization algorithm (PF-MOA) that transfers knowledge acquired from the search experience. The key insight adopted here is that, if we can construct a sequence of target distributions that balance the multiple objectives and make the degree of the balance controllable, we can approximate the Pareto-optimal solutions by simulating each target distribution via particle filters. Since the importance-weight updating step uses the previous target distribution as the proposal for the current target distribution, the knowledge acquired from the previous run can be utilized in the current run by carefully designing the set of target distributions. The experimental results on the DTLZ and WFG test suites show that the proposed PF-MOA achieves competitive performance compared with state-of-the-art multi-objective evolutionary algorithms on most test instances.
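The building block this relies on, reweighting particles from one target distribution to the next and then resampling, can be sketched as follows. This is a textbook sequential-importance-resampling step under our own naming and interface assumptions, not the PF-MOA algorithm itself.

```python
import numpy as np

def sir_step(particles, log_prev_target, log_curr_target, rng=np.random.default_rng()):
    """Reweight particles from the previous target (used as proposal) to the
    current target, then apply multinomial resampling.
    log_prev_target, log_curr_target: callables returning log-densities."""
    logw = np.array([log_curr_target(x) - log_prev_target(x) for x in particles])
    w = np.exp(logw - logw.max())                  # stabilize before normalizing
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return [particles[i] for i in idx]
```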

15 pages, 1293 KiB  
Article
COVID-19 Data Analysis with a Multi-Objective Evolutionary Algorithm for Causal Association Rule Mining
by Santiago Sinisterra-Sierra, Salvador Godoy-Calderón and Miriam Pescador-Rojas
Math. Comput. Appl. 2023, 28(1), 12; https://doi.org/10.3390/mca28010012 - 13 Jan 2023
Viewed by 2159
Abstract
Association rule mining plays a crucial role in the medical area in discovering interesting relationships among the attributes of a data set. Traditional association rule mining algorithms such as Apriori, FP-Growth, or Eclat require considerable computational resources and generate large volumes of rules. Moreover, these techniques depend on user-defined thresholds, which can inadvertently cause the algorithm to omit some interesting rules. To address these challenges, we propose an evolutionary multi-objective algorithm based on NSGA-II to guide the mining process in a data set composed of 15.5 million records of official data describing the COVID-19 pandemic in Mexico. We tested different scenarios optimizing classical and causal estimation measures in four waves, defined as the periods of time in which the number of people with COVID-19 increased. The proposed contributions generate, recombine, and evaluate patterns, focusing on recovering promising high-quality rules with actionable cause–effect relationships among the attributes, in order to identify which groups are more susceptible to disease or which combinations of conditions are necessary to receive certain types of medical care.
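For orientation, a multi-objective rule-mining search of this kind has to score each candidate rule. The sketch below evaluates the classical support and confidence of a rule A → B on a Boolean record matrix; the function name and interface are our own, and the causal estimation measures optimized in the paper are not reproduced.

```python
import numpy as np

def rule_objectives(records, antecedent, consequent):
    """records: (n x d) Boolean matrix; antecedent/consequent: column indices."""
    holds_a = records[:, antecedent].all(axis=1)          # records satisfying A
    holds_ab = holds_a & records[:, consequent].all(axis=1)
    support = holds_ab.mean()                             # support of A -> B
    confidence = holds_ab.sum() / max(holds_a.sum(), 1)   # confidence of A -> B
    return support, confidence
```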

22 pages, 1297 KiB  
Article
The Hypervolume Newton Method for Constrained Multi-Objective Optimization Problems
by Hao Wang, Michael Emmerich, André Deutz, Víctor Adrián Sosa Hernández and Oliver Schütze
Math. Comput. Appl. 2023, 28(1), 10; https://doi.org/10.3390/mca28010010 - 9 Jan 2023
Viewed by 3395
Abstract
Recently, the Hypervolume Newton Method (HVN) has been proposed as a fast and precise indicator-based method for solving unconstrained bi-objective optimization problems with twice continuously differentiable objective functions. The HVN is defined on the space of (vectorized) fixed-cardinality sets of decision-space vectors for a given multi-objective optimization problem (MOP) and seeks to maximize the hypervolume indicator by adopting the Newton–Raphson method for deterministic numerical optimization. To extend its scope to non-convex optimization problems, the HVN method was hybridized with a multi-objective evolutionary algorithm (MOEA), which resulted in a competitive solver for continuous unconstrained bi-objective optimization problems. In this paper, we extend the HVN to constrained MOPs with, in principle, any number of objectives. As in the original variant, the first- and second-order derivatives of the involved functions have to be given either analytically or numerically. We demonstrate the applicability of the extended HVN on a set of challenging benchmark problems and show that the new method can readily handle equality constraints with high precision and, to some extent, inequality constraints as well. Finally, we use the HVN as a local search engine within an MOEA and show the benefit of this hybrid method on several benchmark problems.
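As a rough orientation, the unconstrained hypervolume Newton iteration underlying the method can be written as below, where X_k stacks the μ decision vectors of the current archive and H denotes the hypervolume indicator. This is only the generic Newton–Raphson step for maximizing H, not the constrained variant developed in the paper.

```latex
X_{k+1} = X_k - \left[\nabla^2 H(X_k)\right]^{-1} \nabla H(X_k),
\qquad
X_k = \bigl(x_k^{(1)}, \dots, x_k^{(\mu)}\bigr) \in \mathbb{R}^{\mu n}.
```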

29 pages, 1019 KiB  
Article
An Experimental Study of Grouping Mutation Operators for the Unrelated Parallel-Machine Scheduling Problem
by Octavio Ramos-Figueroa, Marcela Quiroz-Castellanos, Efrén Mezura-Montes and Nicandro Cruz-Ramírez
Math. Comput. Appl. 2023, 28(1), 6; https://doi.org/10.3390/mca28010006 - 5 Jan 2023
Cited by 2 | Viewed by 2052
Abstract
The Grouping Genetic Algorithm (GGA) is an extension of the standard Genetic Algorithm that uses a group-based representation scheme and variation operators that work at the group level. This metaheuristic is one of the most widely used for solving combinatorial grouping problems. Its optimization process consists of different components, of which the crossover and mutation operators are the most recurrent. This article aims to highlight the impact that a well-designed operator can have on the final performance of a GGA. We present a comparative experimental study of different mutation operators for a GGA designed to solve the parallel-machine scheduling problem with unrelated machines and makespan minimization, which comprises scheduling a collection of jobs on a set of machines. The proposed approach focuses on identifying the strategies involved in the mutation operations and adapting them to the characteristics of the studied problem. As a result of this experimental study, knowledge of the problem domain was gained and used to design a new mutation operator called 2-Items Reinsertion. Experimental results indicate that the performance of the state-of-the-art GGA improves considerably when the original mutation operator is replaced with the new one, achieving better results, with an improvement rate of 52%.
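To illustrate what a reinsertion-style grouping mutation does in this setting, the sketch below removes a few jobs from a machine assignment and greedily reinserts them. The function name, interface, and greedy reinsertion rule are illustrative assumptions; it does not reproduce the authors' 2-Items Reinsertion operator or its GGA context.

```python
import random

def reinsertion_mutation(schedule, proc_time, k=2, rng=random.Random(0)):
    """schedule: one job list per machine; proc_time[j][m]: time of job j on machine m.
    Remove k random jobs, then greedily reinsert each on the machine whose
    completion time grows the least."""
    placed = [(m, j) for m, jobs in enumerate(schedule) for j in jobs]
    removed = [j for _, j in rng.sample(placed, min(k, len(placed)))]
    for m in range(len(schedule)):
        schedule[m] = [j for j in schedule[m] if j not in removed]
    load = [sum(proc_time[j][m] for j in schedule[m]) for m in range(len(schedule))]
    for j in removed:
        m_best = min(range(len(schedule)), key=lambda m: load[m] + proc_time[j][m])
        schedule[m_best].append(j)
        load[m_best] += proc_time[j][m_best]
    return schedule
```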

17 pages, 920 KiB  
Article
Knowledge-Driven Multi-Objective Optimization for Reconfigurable Manufacturing Systems
by Henrik Smedberg, Carlos Alberto Barrera-Diaz, Amir Nourmohammadi, Sunith Bandaru and Amos H. C. Ng
Math. Comput. Appl. 2022, 27(6), 106; https://doi.org/10.3390/mca27060106 - 9 Dec 2022
Cited by 2 | Viewed by 2937
Abstract
Current market requirements force manufacturing companies to face production changes more often than ever before. Reconfigurable manufacturing systems (RMS) are considered a key enabler in today's manufacturing industry for coping with such dynamic and volatile markets. The literature confirms that the use of simulation-based multi-objective optimization offers a promising approach that leads to improvements in RMS. However, due to the dynamic behavior of real-world RMS, applying conventional optimization approaches can be very time-consuming, specifically when there is no general knowledge about the quality of solutions. Meanwhile, Pareto-optimal solutions may share some common design principles that can be discovered with data mining and machine learning methods and exploited by the optimization. In this study, the authors investigate a novel knowledge-driven optimization (KDO) approach to speed up convergence in RMS applications. This approach generates generalized knowledge from previous scenarios, which is then applied to improve the efficiency of the optimization of new scenarios. This study applies the proposed approach to a multi-part flow-line RMS that considers scalable capacities while addressing the task assignment to workstations and the buffer allocation problems. The results demonstrate how a KDO approach leads to convergence-rate improvements in a real-world RMS case.

17 pages, 929 KiB  
Article
Is NSGA-II Ready for Large-Scale Multi-Objective Optimization?
by Antonio J. Nebro, Jesús Galeano-Brajones, Francisco Luna and Carlos A. Coello Coello
Math. Comput. Appl. 2022, 27(6), 103; https://doi.org/10.3390/mca27060103 - 30 Nov 2022
Cited by 5 | Viewed by 5060
Abstract
NSGA-II is, by far, the most popular metaheuristic that has been adopted for solving multi-objective optimization problems. However, its most common usage, particularly when dealing with continuous problems, is circumscribed to a standard algorithmic configuration similar to the one described in its seminal paper. In this work, our aim is to show that the performance of NSGA-II, when properly configured, can be significantly improved in the context of large-scale optimization. Our approach leverages irace, a tool for automated algorithmic tuning, together with a highly configurable version of NSGA-II available in the jMetal framework. Two scenarios are devised: first, solving the Zitzler–Deb–Thiele (ZDT) test problems, and second, dealing with a binary real-world problem from the telecommunications domain. Our experiments reveal that an auto-configured version of NSGA-II can properly address the test problems ZDT1 and ZDT2 with up to 2^17 = 131,072 decision variables. The same methodology, when applied to the telecommunications problem, shows that significant improvements over the original NSGA-II algorithm can be obtained when solving problems with thousands of bits.
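Since the scalability claim hinges on the ZDT benchmarks, a plain implementation of the standard ZDT1 formulation (decision variables in [0, 1], two objectives to be minimized) is sketched below to make the "up to 2^17 = 131,072 variables" setting concrete. The function is written by us for illustration; the jMetal/irace tooling used in the study is not shown.

```python
import numpy as np

def zdt1(x):
    """Standard ZDT1 benchmark: x is a 1-D array with entries in [0, 1]."""
    f1 = x[0]
    g = 1.0 + 9.0 * x[1:].sum() / (len(x) - 1)
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return f1, f2

# e.g. a single random evaluation with 2**17 decision variables:
# f1, f2 = zdt1(np.random.default_rng(0).random(2**17))
```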

29 pages, 1082 KiB  
Article
Scarce Sample-Based Reliability Estimation and Optimization Using Importance Sampling
by Kiran Pannerselvam, Deepanshu Yadav and Palaniappan Ramu
Math. Comput. Appl. 2022, 27(6), 99; https://doi.org/10.3390/mca27060099 - 22 Nov 2022
Cited by 2 | Viewed by 2496
Abstract
Importance sampling is a variance reduction technique that is used to improve the efficiency of Monte Carlo estimation. It uses the trick of sampling from a distribution located around the zone of interest of the primary distribution, thereby reducing the number of realizations required for an estimate. In the context of reliability-based structural design, the limit state is usually separable and of the form Capacity (C) − Response (R). The zone of interest for importance sampling is the region where these distributions overlap each other. However, the distribution information of C and R themselves is often not known, and one has only scarce realizations of them. In this work, we propose approximating the probability density function and the cumulative distribution function using kernel functions and employing these approximations to find the parameters of the importance sampling density (ISD) and eventually estimate the reliability. In the proposed approach, in addition to the ISD parameters, the approximations also play a critical role in the accuracy of the probability estimates. We assume an ISD that follows a normal distribution whose mean is defined by the most probable point (MPP) of failure and whose standard deviation is chosen empirically such that most of the importance sample realizations lie within the means of R and C. Since the probability estimate depends on the approximation, which in turn depends on the underlying samples, we use bootstrapping to quantify the variation associated with the low failure probability estimate. The method is investigated with differently tailed distributions of R and C. Based on the observations, a modified Hill estimator is utilized to address scenarios with heavy-tailed distributions where the distribution approximations perform poorly. The proposed approach is tested on benchmark reliability examples and, together with surrogate modeling techniques, is implemented on four reliability-based design optimization examples, one of which is a multi-objective optimization problem.
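As a minimal illustration of the estimation target, the sketch below computes P(C − R < 0) by importance sampling with importance densities placed near the overlap region. The function name, interface, and example parameters are assumptions; the kernel-based approximations, MPP search, modified Hill estimator, and bootstrap analysis from the paper are not reproduced.

```python
import numpy as np
from scipy import stats

def failure_prob_is(f_C, f_R, q_C, q_R, n=100_000, rng=np.random.default_rng(0)):
    """f_C, f_R: frozen scipy.stats distributions of capacity C and response R.
    q_C, q_R: importance densities centred near the C/R overlap region."""
    c = q_C.rvs(size=n, random_state=rng)
    r = q_R.rvs(size=n, random_state=rng)
    w = (f_C.pdf(c) * f_R.pdf(r)) / (q_C.pdf(c) * q_R.pdf(r))  # importance weights
    return float(np.mean((c - r < 0.0) * w))                   # estimate of P(C - R < 0)

# example with hypothetical distributions: C ~ N(10, 1), R ~ N(6, 1.5),
# both importance densities shifted toward the overlap around x = 8:
# p_f = failure_prob_is(stats.norm(10, 1), stats.norm(6, 1.5),
#                       stats.norm(8, 1), stats.norm(8, 1.5))
```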

13 pages, 2123 KiB  
Article
Multi-Objective Optimization of an Elastic Rod with Viscous Termination
by Siyuan Xing and Jian-Qiao Sun
Math. Comput. Appl. 2022, 27(6), 94; https://doi.org/10.3390/mca27060094 - 15 Nov 2022
Cited by 1 | Viewed by 1423
Abstract
In this paper, we study the multi-objective optimization of the viscous boundary condition of an elastic rod using a hybrid method combining a genetic algorithm and simple cell mapping (GA-SCM). The method proceeds with the NSGA-II algorithm to seek a rough Pareto set, followed by a local recovery process based on one-step simple cell mapping to complete the branch of the Pareto set. To accelerate computation, the rod response under impulsive loading is calculated with a particular-solution method that provides accurate structural responses with less computational effort. The Pareto set and Pareto front of a case study are obtained with the GA-SCM hybrid method. Optimal designs for each objective function are illustrated through numerical simulations.

37 pages, 3040 KiB  
Article
A Bounded Archiver for Hausdorff Approximations of the Pareto Front for Multi-Objective Evolutionary Algorithms
by Carlos Ignacio Hernández Castellanos and Oliver Schütze
Math. Comput. Appl. 2022, 27(3), 48; https://doi.org/10.3390/mca27030048 - 1 Jun 2022
Cited by 3 | Viewed by 2545
Abstract
Multi-objective evolutionary algorithms (MOEAs) have been successfully applied to the numerical treatment of multi-objective optimization problems (MOPs) during the last three decades. One important task within MOEAs is the archiving (or selection) of the computed candidate solutions, since one can expect an MOP to have infinitely many solutions. In this work, we present and analyze ArchiveUpdateHD, a bounded archiver that aims for Hausdorff approximations of the Pareto front. We show that, under certain (mild) assumptions, the sequence of archives generated by ArchiveUpdateHD yields, with probability one and after finitely many steps, a Δ+-approximation of the Pareto front, where the value Δ+ is computed by the archiver during the run of the algorithm without any prior knowledge of the Pareto front. Knowledge of this value is of great importance for the decision maker, since it is a measure of the “completeness” of the Pareto front approximation. Numerical results on several well-known academic test problems, as well as the usage of ArchiveUpdateHD as an external archiver within three state-of-the-art MOEAs, indicate the benefit of the novel strategy.
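For contrast with the bounded archiver analyzed above, here is the simplest possible unbounded Pareto-archive update (minimization assumed, all names our own). It only illustrates the role an archiver plays inside an MOEA and makes no attempt at bounding the archive size or controlling the Δ+ value, which is precisely what ArchiveUpdateHD adds.

```python
import numpy as np

def dominates(a, b):
    """a Pareto-dominates b (minimization): no worse everywhere, better somewhere."""
    return bool(np.all(a <= b) and np.any(a < b))

def update_archive(archive, f_new):
    """Insert objective vector f_new into the list `archive` of non-dominated vectors."""
    if any(dominates(f_a, f_new) for f_a in archive):
        return archive                                  # newcomer is dominated: discard
    kept = [f_a for f_a in archive if not dominates(f_new, f_a)]
    kept.append(f_new)                                  # keep the non-dominated newcomer
    return kept
```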