Before the investigations described below, a sensitivity analysis was conducted on the time steps of the propulsion and aerodynamics propagators, leading to an average simulation time on the order of seconds on a 13th Gen Intel® Core™ i9-13900KF 3.00 GHz processor.
To gain confidence in the current problem, various investigations were conducted to scrutinize the hyperparameters of the NSGA-II algorithm, namely the population size, the number of generations, and the mutation probability, as well as the influence of the constraints and the role of the bounds of the DVs.
To mitigate the risk of over-constraining the problem, Constraints 3, 4, 5, and 8 were treated as non-mandatory during the sensitivity analysis of the NSGA-II algorithm.
Preliminary findings revealed that the optimizer tended to converge towards solutions located at the upper or lower bounds (UB or LB) of the DVs, which limited exploration beyond these extremes. Consequently, a decision was made to expand the bounds, aiming to understand the behavior of the optimizer without the limits they impose. It is important to note that the expansion of the bounds was purely numerical, executed without considering the physical implications, and undertaken bilaterally solely to assess their influence on the final results.
The sensitivity analysis began by varying one parameter at a time while keeping the remaining parameters at their reference values (baseline). A simulation with a simultaneously increased population size and number of generations was performed, as well as a simulation with the baseline values and the original bounds. Moreover, three additional simulations were performed to clarify the role of randomness in the process.
Following the assessment of hyperparameter influence, the role of constraints was investigated by comparing the unconstrained version with the constrained one. This analysis also considered the relations between original and enlarged bounds and their correlation with the selected objective.
It should be acknowledged that expanding the bounds and removing constraints could result in unfeasible or unrealistic designs. However, these investigations were undertaken solely to understand their impact on the optimization process: the primary objective was not to generate practically implementable designs, but rather to gain insight into the behavior of the optimizer.
Subsequently, reverting to the original bounds and constraints, a final optimization was executed with hyperparameters deemed suitable for the problem. The optimization results underwent post-processing through a gradient-based optimizer, culminating in the presentation of the final Pareto front and three proposed designs.
4.1. NSGA-II Sensitivity Analysis
The code initiates by creating an initial set of solutions, forming a population. Following an evaluation process, half of the population undergoes recombination to produce offspring, and these offspring are subject to random mutation based on the specified mutation probability. Each cycle of this procedure is termed a generation.
In the initial generation, the entire population undergoes evaluation, while in each subsequent generation, only the newly generated half is assessed. The total number of function calls, $N_{calls}$, can thus be computed as follows, with $N_{pop}$ denoting the population size and $N_{gen}$ the number of generations:
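$$
N_{calls} = N_{pop} + \left(N_{gen} - 1\right)\frac{N_{pop}}{2}
$$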
The Latin Hypercube Sampling (LHS) method [56] has been employed to generate the initial population of designs. The parameters for the conducted simulations are summarized in Table 6.
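Purely as an illustration of this sampling step, a minimal sketch using SciPy's qmc module (rather than the actual framework) is given below; the bounds lb and ub and the population size of 200 are placeholders, not the values of Table 3 or Table 6.

```python
import numpy as np
from scipy.stats import qmc

# Placeholder bounds for the ten design variables (not the values of Table 3)
lb = np.full(10, 0.1)
ub = np.full(10, 1.0)

# Latin Hypercube sample in the unit hypercube, one row per individual,
# scaled to the design-variable bounds to form the initial population
sampler = qmc.LatinHypercube(d=len(lb), seed=0)
initial_population = qmc.scale(sampler.random(n=200), lb, ub)
```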
To conduct the initial parametric study, only one parameter is altered at a time, while the remaining parameters are maintained at their reference value, i.e., the baseline.
Three objectives have been considered: the wet mass (mass of the rocket including all the propellant), the specific impulse, and the GWP, measured in kg of CO₂-equivalent emissions.
Furthermore, simulations were executed using a population size of 100. Nevertheless, the results obtained suggested that, given the nature of the specific problem and the number of DVs, utilizing such a modest population size would result in an inefficient optimization process, yielding a Pareto front constrained to two or three points. Consequently, it would not provide sufficient information to draw meaningful conclusions or make accurate comparisons.
The initial simulations unveiled a linear relationship between mass and emissions, whereby minimizing one objective correlated with the reduction of the other. This relationship is illustrated in Figure 4 for Simulations 1 and 6.
Hence, while all simulations encompassed three objectives, the reported figures focus solely on the projections concerning the specific impulse and the GWP.
Many considerations can be derived from the charts in Figure 5:
A lower mutation probability leads to a diminished exploration of the design space, while excessive mutation may hinder the exploitation of promising results by overly altering them. These results do not provide a conclusive determination regarding whether a 30% mutation probability represents an optimal compromise.
Augmenting the population size results in broader coverage of the design space, enhancing exploration. However, without a corresponding increase in the number of generations, exploitation may be compromised.
Increasing the number of generations enhances exploitation, driving all designs towards the Pareto front and improving its extremes.
Fine-tuning of these parameters is essential to identify the optimal compromise between exploration and exploitation for the specific problem under study. For improved comparison, the Pareto fronts of Simulations 1–6 are presented in Figure 6.
The Pareto fronts from Simulations 1 to 5 exhibit similar results and trends. As anticipated, capturing the precise influence of the mutation probability proves challenging. Simulation 3 showcases designs with a favorable compromise between the two objectives, benefiting from increased exploration of the design space while maintaining similar extremes. In contrast, Simulation 4 highlights a design with a significantly higher specific impulse due to improved exploitation of promising designs. Simulation 5 illustrates how simultaneous increases in population size and number of generations can synergistically contribute to achieving a superior optimal front.
Notably, optimization with the original bounds yields a highly restricted Pareto front, yet overall superior results compared to other simulations. This can be attributed to the smaller design space being more accessible for exploration and the original bounds being optimized for the initial framework definition. Any expansion in the design space would necessitate a corresponding increase in population and generations to adequately cover it, as well as an adjustment of the fixed inputs.
It is noteworthy that among the eight configurations comprising the optimal front of Simulation 6, only one featured all DVs within their specified bounds. The remaining configurations exhibited two to five variables reaching the boundary limits.
Finally, for a more in-depth understanding of the role of the mutation probability, Simulations 1 & 2 were compared to 5 & 7, and 8 & 9, to investigate whether its effect became more pronounced with increases in population number and generations.
Analysis of the optimization process, depicted in Figure 7, reveals an intricate relationship between the mutation rate, the population size, and the number of generations. Notably, for a population of 200 individuals and a maximum of 200 generations, the comparison between mutation rates of 30% and 10% is initially ambiguous as to which rate is superior.
Observing the trend across varying population sizes and numbers of generations, a pattern emerges that indicates the influence of the mutation rate on the optimization outcome. With an increase in both population size and generations, the advantage of a higher mutation rate becomes progressively evident. This observation aligns with the ability of the genetic algorithm to explore a broader solution space and mitigate premature convergence to suboptimal solutions.
Thus, considering the potential for greater exploration and the avoidance of premature convergence, the mutation rate of 30% was selected as the preferred parameter for subsequent optimizations. This choice aims to leverage the inherent capability of the algorithm to diversify and enhance the search for more optimal solutions within the given problem space.
4.2. Unconstrained Problem
To enhance the performance of the optimizer, the objectives were reduced from three to two. Given the linear relationship between mass and emissions, only the emissions were retained. Additionally, the specific impulse was replaced by the apogee altitude, aligning with the primary objective of designing an orbital vehicle.
To gain deeper insights into the problem, all constraints except the apogee one were removed, and the bounds were further expanded in an attempt to obtain a well-defined Pareto front. The number of generations was set equal to the population size to prevent under-exploitation, and it was increased for several of the cases reported in Figure 8. The mutation rate was fixed at 30%.
Additionally, in this version, the ballast loop was enhanced by incorporating an additional mass at the bottom, to set the maximum SM to 5.
Clearly, with an increase in population size and number of generations, the optimality front is progressively pushed towards the origin, leading to better intermediate results.
In a final analysis, the original bounds were reinstated, and an optimization with $N_{pop}$ = $N_{gen}$ = 200 was conducted and then compared to the one with expanded bounds. In this scenario, the obtained fronts exhibit similar trends and values. The front with larger bounds demonstrates superior extremes and compromise designs overall (refer to Figure 9), albeit with fewer points due to the increased challenge of exploring a larger design space.
Given that the primary changes from the previous version were the alteration of the objectives and the removal of the constraints, a similar comparison was undertaken using two hybrid versions: an unconstrained one with the specific impulse as one of the objectives and a constrained one retaining the apogee altitude.
Upon comparing the unconstrained and constrained versions, it becomes evident that the imposition of constraints introduces a noticeable offset in the identified fronts (Figure 9). Furthermore, this effect is more pronounced in the version with larger bounds, once again indicating poorer performance when utilizing the larger bounds. It is hypothesized that, as the constraints and functions have been specifically defined for sounding rockets, the application of larger bounds increases the likelihood of failures throughout the optimization process, ultimately resulting in inferior performance. This phenomenon is not observed in the unconstrained version, where the optimizer has the freedom to explore the entire design space, facilitating the discovery of superior solutions.
When employing the unconstrained version with the specific impulse as one of the objectives, the version with the original bounds once again demonstrated superior performance, as depicted in Figure 10. It is noteworthy that, in this case, the change in specific impulse along the fronts was relatively small. However, for the original bounds, consistently lower emissions were observed for the same specific impulse.
This behavior is challenging to interpret, given the intricate computation of the specific impulse, which involves various aspects of the design. Further investigation is required to elucidate this phenomenon.
4.3. Optimization and Post-Processing
After gaining insights into the problem, the following decisions were made for the final optimization:
The constraints in Table 5 were reinstated to ensure a feasible design.
The original bounds in Table 3 were retained, considering the expectation of better performance when the constraints are applied and the absence of an investigation into the physical implications of enlarging them.
A population size and number of generations of $N_{pop}$ = $N_{gen}$ = 1000 were chosen to enhance the exploration of the design space, exploit promising designs, and obtain a well-defined Pareto front.
A mutation probability of 30% was selected, deemed suitable for populations of this order of magnitude.
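For illustration only, these settings correspond to a configuration along the following lines, sketched here with the open-source pymoo library (pymoo ≥ 0.6 module paths) rather than the actual framework; RocketProblem, its objectives, and its constraint are placeholders, not the developed models.

```python
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.operators.sampling.lhs import LHS
from pymoo.operators.mutation.pm import PM
from pymoo.optimize import minimize


class RocketProblem(ElementwiseProblem):
    """Placeholder problem: 10 DVs in [0, 1], two objectives, one constraint."""

    def __init__(self):
        super().__init__(n_var=10, n_obj=2, n_ieq_constr=1,
                         xl=np.zeros(10), xu=np.ones(10))

    def _evaluate(self, x, out, *args, **kwargs):
        f1 = float(np.sum(x ** 2))          # dummy stand-in for GWP (minimized)
        f2 = -float(np.sum(np.sqrt(x)))     # dummy stand-in for apogee, cast as minimization
        g1 = 0.5 - x[0]                     # dummy inequality constraint (feasible when <= 0)
        out["F"] = [f1, f2]
        out["G"] = [g1]


algorithm = NSGA2(pop_size=1000,            # population size
                  sampling=LHS(),           # Latin Hypercube initial population
                  mutation=PM(prob=0.3),    # 30% mutation probability
                  eliminate_duplicates=True)

res = minimize(RocketProblem(), algorithm, ("n_gen", 1000), seed=1, verbose=False)
# res.X and res.F hold the non-dominated designs and their objective values
```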
To validate the simulations, the time steps of the propulsion and aerodynamics propagators were decreased by an order of magnitude, and the points on the obtained Pareto front were reanalyzed. All solutions remained feasible, with maximum relative differences for the apogee altitude and GWP at 0.6224% and 0.0693%, respectively.
To maximize the exploitation of promising results, as well as to mitigate the influence of randomness inherent to genetic algorithms, the results of the NSGA-II algorithm underwent post-processing using the gradient-based optimizer fmincon from MATLAB®, through single-objective optimizations that used the points on the front as starting points, with the aim of obtaining a better Pareto front. Given that the designs were already within the feasible region, the sequential quadratic programming ('sqp') algorithm was employed, with the tolerance on the optimization steps and the tolerance on constraint violations set to values selected through a sensitivity analysis. This analysis demonstrated that there were no changes in the objective function for smaller values of the former. Additionally, for 28 out of 30 designs, no changes were observed due to the latter, since the starting points were already within the feasible region. The remaining two designs experienced some iterations in the infeasible region, but with constraint violations below the selected tolerance; a change in this factor was therefore considered unnecessary.
To accommodate the two objectives in a single-objective process, both were normalized, and the objective function was defined as the weighted average of the two.
For normalization, the extremes of the front were extracted, and the respective values were used as minima and maxima in the following relation:
$$
\bar{f} = \frac{f - f_{min}}{f_{max} - f_{min}},
$$
assigning 0 to the minimum and 1 to the maximum.
However, if the objective function were simply defined as the sum of the two normalized objectives, the optimizer would tend to optimize both to the same extent, which is not conducive to creating a Pareto front. To address this, with N defined as the number of designs on the front, sorted in descending order according to one of the objectives, an index-dependent weighting was applied, assigning full weight to the first objective at one end of the front, full weight to the second objective at the other end, and weighted combinations of the two objectives in between.
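Purely for illustration, a minimal sketch of this weighting and refinement step is given below; it assumes a linear index-based weighting (reproducing the pure single-objective cases at the two ends of the front) and uses SciPy's SLSQP method as a stand-in for MATLAB's fmincon with the 'sqp' algorithm. The functions gwp, neg_apogee, and refine_front are placeholders, not the framework's actual evaluations.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder objective evaluations standing in for the framework models
def gwp(x):
    return float(np.sum(x ** 2))

def neg_apogee(x):
    # Apogee maximization cast as a minimization
    return -float(np.sum(np.sqrt(np.abs(x))))

def refine_front(front_x, front_f, bounds):
    """Refine each front point with a weighted single-objective SQP-style run.

    front_x : (N, n_var) design variables on the NSGA-II front
    front_f : (N, 2) objective values [GWP, -apogee] of the front
    bounds  : list of (lower, upper) tuples for the design variables
    """
    f_min = front_f.min(axis=0)
    f_max = front_f.max(axis=0)
    order = np.argsort(front_f[:, 0])[::-1]   # sort descending by the first objective
    n_designs = len(order)
    refined = []
    for rank, idx in enumerate(order):
        w = rank / (n_designs - 1)            # weight sweeps linearly from 0 to 1 along the front
        def weighted(x):
            f = np.array([gwp(x), neg_apogee(x)])
            f_norm = (f - f_min) / (f_max - f_min)   # min-max normalization to [0, 1]
            return (1.0 - w) * f_norm[0] + w * f_norm[1]
        res = minimize(weighted, front_x[idx], method="SLSQP", bounds=bounds)
        refined.append(res.x)
    return np.array(refined)
```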
The optimal compromise was identified as the design with the best value of the adopted compromise criterion. As the best compromise from the multi-objective optimization was not located at the position corresponding to equal weights, its post-processing through fmincon was not performed with equal weights for the two objectives. The optimization was therefore repeated with different weightings to assess their effect on the final result. Specifically, the optimization was rerun with equal weights, and the weights were also slightly adjusted to better define that portion of the Pareto front. The following weight pairs were utilized: 0.35–0.65, 0.45–0.55, 0.5–0.5, 0.55–0.45, and 0.65–0.35. The results of these optimizations were incorporated into the post-processed Pareto front. Additionally, optimizations with the weight pairs 1–0 and 0–1 were also conducted from that same starting point.
Lastly, it was observed that the Pareto front obtained in this iteration closely resembled, in terms of trend and values, the one obtained in the previous version, achieved using $N_{pop}$ = $N_{gen}$ = 200. A final optimization with these parameters was consequently executed, and the extremes and the optimal compromise were once again optimized through fmincon.
All the comparisons mentioned have been reported in Figure 11.
As evident, the application of fmincon to the results of the NSGA-II optimization leads to a notable enhancement of the Pareto front. This improvement can be attributed to the distinct functioning principles of the two solvers: while the NSGA-II excels at exploring the design space, its exploitation of promising designs is limited to recombination and random mutations. In contrast, a gradient-based solver can discern the direction of minimization, thereby refining the results.
The improvement is more conspicuous in the middle part of the front, with limited impact at the extremes. This is presumed to be linked to the utilized bounds for the DVs, and further clarification will be provided later.
Regarding the optimization of the compromise design, the obtained results were very similar to each other, and though differences are reported in the figure, they are not visually discernible. Furthermore, the best compromise was achieved through the post-processing of another design. Optimization to maximize apogee from this design yielded unsatisfactory results, whereas minimizing emissions found a design with the lowest emissions and a superior apogee altitude compared to the others in that region.
While this might seem peculiar initially, it can be explained by the propensity of gradient-based algorithms to become entrenched in specific regions of the design space. This results in significant dependency on the starting point, and therefore in the possibility of obtaining a superior final design by altering the initial set of DVs.
Finally, optimizations based on the extremes of the $N_{pop}$ = $N_{gen}$ = 200 simulation yielded suboptimal results for the apogee and the compromise, while the emissions minimization mirrored the outcomes of the other optimal results.
In conclusion, it is posited that the optimizer excels at optimizing the emissions because, in the developed framework, they depend solely on the propulsion module. Achieving an optimal apogee, in contrast, involves the interplay of all disciplines, making it more intricate to correlate with the DVs.
Please note that, although the randomness involved in the optimization process affects its outcome, as noted in Figure 6, Figure 7, and Figure 8, obtained from the simulations performed at the beginning of the study, it did not affect the overall trends of the results. Moreover, as can be seen in Figure 11, where a gradient-based optimizer was used to improve the Pareto front resulting from the final NSGA-II optimization, the randomness is attenuated for such a large number of individuals and generations.
For the extreme configurations and the best compromise, various data are presented in Table 7, comparing the results of NSGA-II and the outcomes of the post-processing.
The configurations of these points are reported in Figure 12. It is notable that all the identified configurations exhibit a high degree of slenderness, characterized by fineness ratios of around 40. While these values adhere to the imposed constraints, it is crucial to acknowledge that the absence of a comprehensive structural analysis makes it challenging to assess the risk of bending or buckling. Incorporating structural considerations could result in excessively heavy structures for these designs, consequently diminishing the overall performance.
An additional analysis was conducted along the Pareto front to observe the behavior of the DVs within their boundaries. The indices along the Pareto front are ordered from the design minimizing the emissions to the design maximizing the apogee. DVs 6, 7, 8, and 10, which exhibit unrestricted movement within their bounds, will not be presented.
Furthermore, DVs 2, 3, and 5 consistently remained at their LBs along the Pareto front, irrespective of the objective, whether minimizing the Global Warming Potential (GWP) or maximizing the apogee.
While it may seem intuitive that increasing the length of the fuel grain would lead to a higher apogee, its interrelations with other DVs are more intricate than they appear. The increase in fuel grain length would elevate the fuel mass flow, consequently reducing the oxidizer-to-fuel ratio from optimal values. To counterbalance this effect, an increase in oxidizer mass flow would be required, achievable through a larger pressure drop or a bigger injector port. However, the DVs are interconnected; a larger injector port leads to a smaller pressure drop due to an increase in CC pressure, necessitating a larger throat diameter. Furthermore, an increase in oxidizer mass flow results in a greater mass flux through the grain port, already optimized to the maximum value allowed by the constraint for apogee maximization. To decrease it, a larger initial port is needed, once again altering the fuel mass flow and creating a loop that cannot be linearly handled.
The natural question arises: why does a configuration with low values for all these design variables (DVs 2, 3, and 5 on the lower bound and DV 4 limited to a maximum of 0.4) outperform a configuration where all these variables are increased proportionally to each other? It is believed that, as the code computes the external diameter of the rocket based on the total consumption of the fuel grain given the OT volume, the bounds of the latter would lead to an even thinner configuration if a longer grain were used.
This hypothesis has been substantiated through a gradient-based optimization initiated from the design that maximizes the apogee, wherein an increase in the upper bound of the OT volume led to higher values of DVs 2-5 and an elevated apogee altitude. However, as anticipated, any modification in the bounds of the DVs must be meticulously analyzed to understand the physical implications of such choices.
From Figure 13, it can be seen that the volume of the oxidizer tank changes almost linearly with the index. This correlation aligns with the logic that minimizing the volume of the oxidizer tank directly reduces the overall oxidizer mass used during operation. This reduction in oxidizer mass directly impacts the emissions, especially because the emissions profile considered is related solely to the combustion process.
DV 9 exhibits an increasing pattern, with peaks where DV 7 shows its lowest points (see Figure 14a). This suggests potential trade-offs between aerodynamic and stability considerations, where changes in the root chord might affect the aerodynamic characteristics of the fins, potentially leading to higher aspect ratios with smaller root chords. As these fin-related variables are interconnected, further analysis could be conducted to correlate and optimize them.
Lastly, from Figure 14b, it can be observed that both DVs 4 and 9 appear to be confined within a normalized range of approximately 0.1 to 0.4. This suggests that the bounds set for these variables might be too wide for the specific problem at hand. Refining these bounds could potentially lead to a more focused and effective exploration of the design space.
From Figure 15 and Table 8, it is evident that, after applying fmincon to the three results, most of the DVs move away from the bounds. This observation suggests a potential limitation of the NSGA-II: if the crossover operation predominantly occurs between solutions situated near the bounds, due to Pareto dominance or diversity-preservation mechanisms, it might perpetuate values at the boundaries across generations, restricting exploration away from those bounds. Additionally, it is noteworthy that the first three DVs remain on the LB in the first fmincon solution (minimizing the GWP) as well.
To emphasize the comprehensiveness of LCA, it is crucial to consider multiple impact categories. This holistic approach ensures a more complete understanding of the environmental impacts, preventing the resolution of one environmental issue from inadvertently giving rise to others. For instance, although all categories seem to follow the same trend (see Figure 16), the results of the environmental impact assessment presented in Table 7 show that, for the extreme configuration of minimum GWP, the post-processed design reduces the GWP while the impacts associated with the Ozone Formation category increase compared to the results of NSGA-II.