1. Introduction
The design and optimization of mesoscale lattice materials have become increasingly significant in advanced manufacturing due to their exceptional mechanical and functional properties. These materials, characterized by their periodic cellular architectures, offer high specific stiffness [1], good strength [2], sound energy absorption [3], and even unconventional properties, such as a negative Poisson’s ratio [4]. Such attributes make them ideal for high-end industrial applications, including aircraft components [5], thermal management systems [6], and medical implants [7]. However, customizing optimized parametric lattice materials to meet specific performance criteria remains a considerable challenge.
Traditional design approaches are predominantly empirical, relying on iterative, manual processes involving geometric modeling, parametric adjustments, and extensive numerical analysis. These methods are time-consuming, costly, and often fail to fully exploit the vast design space available for lattice configurations. The lack of a systematic methodology for exploring and optimizing both the lattice configuration and the morphological features of lattice materials limits innovation and hinders the development of solutions tailored to specific application needs.
Recent advancements in computational design and artificial intelligence (AI) have introduced a generative design paradigm that facilitates the customization and optimization of lattice materials [8,9,10,11]. Generative design replaces manual design processes, enabling the automatic and efficient generation of highly complex and optimized lattice structures. AI technologies, including neural networks [12], generative adversarial networks (GANs) [13], and variational autoencoders (VAEs) [14], have expedited the generation of high-performance lattice materials. For example, Lee et al. combined a hybrid neural network and genetic optimization with Bézier curves to optimize lattice profiles, enhancing the elastic modulus [15].
Furthermore, data-driven design methods make it possible to elucidate the relationship between design variables and mechanical properties, allowing both forward property prediction and the inverse design of lattice materials with desired properties. Yayati et al., using a denoising diffusion-based model, accelerated the design of TPMS-like lattice unit cells whose mechanical properties outperformed traditional simple cubic cells [16]. Challapalli and Li et al. used machine learning to design and optimize lattice configurations; their optimized lattice cells significantly outperformed octet cells in terms of buckling loads and demonstrated enhanced compressive performance in both experimental and simulation validations [17]. Despite these advancements, a gap remains in fully integrating generative design, parametric modeling, and machine learning for forward-to-inverse lattice material design.
In the generative design paradigm, digital geometry design determines the design space of lattice materials. A lattice material can be considered a combination of its skeleton and morphology [18]. In terms of skeleton design, research has mainly addressed the spatial positions of lattice components and their connections. For example, Chen et al. enhanced the stiffness, expansibility, and energy absorption of materials by designing self-similar concave tensile lattice configurations [19]. Ding et al. improved the negative Poisson’s ratio effect of lattice materials by parametrically altering the bending and twisting angles of lattice rods [20]. Guo et al. used a fast Fourier transform-based homogenization method to explore the mechanical properties of mixed materials with multi-lattice configurations based on triply periodic minimal surfaces (TPMSs), enhancing compressive energy absorption and other properties [21]. Rahman et al. demonstrated the potential of lattice materials to improve energy absorption and mechanical efficiency by exploring various rod-based hybrid lattice configurations [22]. Compared with skeleton design, strategies for refining lattice morphologies have evolved from smoothing lattice nodes to parametrically modifying their shapes. Cao et al. found that cross-sectional optimization improves the energy absorption and mechanical performance of the rhombic dodecahedron lattice [23]. Bernard et al. demonstrated that non-circular sections, such as squares or rectangles, markedly improve the resilience and energy absorption of strut-based lattices [24]. Uddin et al. designed I-shaped rod sections that improved the compressive performance of pyramid lattice structures and their resistance to buckling and bending [25]. However, existing research often focuses on either the lattice skeletal configuration or its morphological details in isolation, which restricts the potential for holistic optimization.
To address these challenges, this research proposes a data-driven bi-directional framework that synergizes generative design, machine learning, and optimization algorithms for advanced mesoscale lattice material design. The core objective is to develop a unified approach that simultaneously optimizes both the skeleton and morphology of lattice structures, enhancing their customization and performance across various applications. Parametric SubD modeling is utilized for the detailed digital representation of the lattice structure. A small sample dataset of mechanical properties is collected using the homogenization method, which simplifies the complex lattice structure into an equivalent homogeneous material for efficient analysis. To enhance predictive accuracy, a two-tiered machine learning framework is proposed. The first tier uses polynomial regression to estimate relative density, which is then used as an input feature for the second tier, a Random Forest model that predicts the elastic modulus. A genetic algorithm is employed to customize and optimize lattice designs that meet the desired mechanical specifications and restrictions.
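A minimal sketch of this two-tiered structure, assuming a scikit-learn-style implementation (variable names and settings are illustrative, not a definitive implementation), is shown below:

```python
# Minimal sketch of the two-tiered surrogate: tier 1 maps geometric parameters to
# relative density (RD); tier 2 maps the parameters plus the predicted RD to the
# elastic modulus. All names and settings are illustrative.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

def fit_two_tier(X_geo, rd, modulus, poly_order=4):
    """X_geo: (n_samples, n_params) geometric parameters; rd, modulus: targets."""
    tier1 = make_pipeline(PolynomialFeatures(degree=poly_order), LinearRegression())
    tier1.fit(X_geo, rd)                                 # geometry -> RD
    X_aug = np.column_stack([X_geo, tier1.predict(X_geo)])
    tier2 = RandomForestRegressor(n_estimators=300, random_state=0)
    tier2.fit(X_aug, modulus)                            # [geometry, RD] -> modulus
    return tier1, tier2

def predict_two_tier(tier1, tier2, X_geo):
    rd_hat = tier1.predict(X_geo)
    return rd_hat, tier2.predict(np.column_stack([X_geo, rd_hat]))
```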
The effectiveness of the proposed approach is validated through numerical simulations and case studies, demonstrating its capability to produce optimized lattice structures that satisfy or surpass desired performance criteria. By integrating generative design, machine learning, and optimization into a cohesive framework, this research provides a comprehensive solution for customizing optimized parametric lattice materials at the mesoscale.
3. Case Study: Property Customization
A lattice unit with a nested cube was selected as a case study to validate the proposed method, as shown in Figure 6.
Five geometric parameters were defined to describe the lattice shape, as shown in Figure 7a. Four of these parameters define the lattice morphology: the size of the inner nodes, the size of the outer nodes, the strut radius, and the smoothness of the lattice struts. In terms of configuration, the lattice is formed by a cube nested with a smaller cube, whose eight vertices are connected to form the lattice skeleton; the fifth parameter determines the size of these cubes. This parametric mesh lattice model was transformed into a smoothed lattice model through the Rhino 7-Grasshopper® SubD component, as shown in Figure 7b,c.
To validate the efficacy and advantages of the proposed lattice design approach, a comparative analysis was conducted using conventional parametric lattice designs as benchmarks. The comparison focused on the mechanical properties, particularly the elastic modulus, of the designed lattice materials. As illustrated in Figure 8, L defines the box size inside a lattice unit, and D denotes the diameter of the lattice struts. The elastic moduli of the SubD parametric lattice and the regular parametric lattice are compared at the same relative density. Figure 8 shows that the lattice materials designed using our methodology reach a significantly higher upper boundary of elastic modulus than conventional lattices under equivalent mass conditions, indicating superior material utilization and performance.
The Latin Hypercube Sampling (LHS) method was used to sample the geometric parameter combinations and ensure uniformity of sampling. The specific sampling ranges for each parameter are detailed in Table 1. Considering the minimum resolution of Formlab SLA devices, each parameter was rounded to two decimal places. This defined a global design space encompassing 21 × 21 × 11 × 101 × 6 design possibilities. A total of 280 sets of geometric parameters were extracted through LHS, and the Rhino 7-Grasshopper® Anemone plug-in facilitated the automatic generation of the lattice models.
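A minimal sketch of this sampling step, assuming SciPy’s quasi-Monte Carlo module (the bounds shown are placeholders for the ranges in Table 1), is given below:

```python
# Illustrative Latin Hypercube Sampling of the five geometric parameters.
# Bounds are placeholders; the actual ranges are listed in Table 1.
import numpy as np
from scipy.stats import qmc

lower = np.array([0.2, 0.2, 0.5, 0.0, 4.0])   # hypothetical lower bounds
upper = np.array([0.4, 0.4, 1.5, 1.0, 6.0])   # hypothetical upper bounds

sampler = qmc.LatinHypercube(d=5, seed=42)
unit_samples = sampler.random(n=280)           # 280 parameter sets, as in this study
samples = qmc.scale(unit_samples, lower, upper)
samples = np.round(samples, 2)                 # round to two decimals (printer resolution)
```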
A cubic RVE consisting of 3 × 3 × 3 parametric lattice cells (shown in Figure 7b) with a side length of 18 mm was chosen. This choice not only ensures stable macroscale mechanical behavior and minimizes size effects but also reduces the computational burden of calculating the elastic modulus of all samples. FormLab White resin was chosen to fabricate all lattice materials. Its mechanical properties, according to previous testing [29], are a density of ρ = 1.10 g/cm³, a Young’s modulus of Es = 2.51 GPa, and a Poisson’s ratio of ν = 0.23. The RVE was treated as orthotropic, and a displacement of 0.09 mm was applied in the Z-axis direction under periodic boundary conditions to obtain the support reaction force and calculate the elastic modulus, as detailed in Equations (1) and (2). The numerical evaluation of all lattice samples was conducted through batch processing in ANSYS Mechanical APDL.
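Although the exact expressions are given in Equations (1) and (2), the effective modulus follows the standard relation for prescribed-displacement compression of the RVE (symbols here are used only for illustration):

```latex
\varepsilon_z = \frac{u_z}{L}, \qquad
\sigma_z = \frac{F_z}{L^2}, \qquad
E_z = \frac{\sigma_z}{\varepsilon_z} = \frac{F_z}{L\,u_z},
```

where $u_z = 0.09$ mm is the applied displacement, $L = 18$ mm is the RVE side length, and $F_z$ is the support reaction force.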
After collecting the dataset, it was essential to address the presence of outliers to enhance the model’s accuracy and robustness. Outliers were defined based on the statistical properties of the target variable (the elastic modulus), with the 99th percentile as the detection threshold. Any data point whose elastic modulus exceeded this threshold was regarded as an outlier and removed from the dataset, since the software may generate a low-quality RVE model that leads to an extremely high elastic modulus during numerical evaluation. The filtered data samples are presented in Figure 9b. Correlation analyses were then conducted, as seen in Figure 9a–d. The analysis of the first-tier data showed that most geometric variables were highly correlated with the relative density (RD), with one exception, and that there were no obvious correlations among the geometric variables themselves, as shown in Figure 9a. In the second-tier dataset, a similar trend was observed, with RD showing a strong correlation with the elastic modulus, as shown in Figure 9c. The importance of all input features is shown in Figure 9d.
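The outlier filtering and correlation screening can be reproduced with a few lines of standard data-analysis code; the sketch below assumes the dataset has been exported to a table with one column per geometric parameter plus RD and the elastic modulus (file and column names are hypothetical):

```python
# Sketch of the preprocessing described above: remove samples whose elastic modulus
# exceeds the 99th percentile, then inspect feature correlations (cf. Figure 9).
import pandas as pd

df = pd.read_csv("lattice_samples.csv")            # geometric parameters, RD, modulus E
threshold = df["E"].quantile(0.99)                 # 99th-percentile cutoff on the target
df_clean = df[df["E"] <= threshold].reset_index(drop=True)

corr = df_clean.corr()                             # Pearson correlation matrix
print(corr["RD"].sort_values(ascending=False))     # correlation of each variable with RD
```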
In the ML training process, the dataset was split into a training set and a validation set (80% of the data for training and the remaining 20% for validation) using a random split technique. To prevent bias in the sequence of the data for model training, the data were randomly shuffled before splitting.
By varying the order of the polynomial regression, we found that the model fits best at order 4. To optimize the performance of the Random Forest regression model, we performed hyperparameter tuning using a grid search. This method exhaustively searches a specified parameter grid for the combination of hyperparameters that yields the best five-fold cross-validation performance. The hyperparameters and their respective search ranges were specified as follows:
Number of Estimators (n_estimators): [100, 200, 300].
Maximum Depth (max_depth): [None, 10, 20, 30].
Minimum Samples Split (min_samples_split): [2, 5, 10].
Minimum Samples Leaf (min_samples_leaf): [1, 2, 4].
Maximum Features (max_features): [‘auto’, ‘sqrt’, ‘log2’].
After conducting the grid search, the best combination of hyperparameters was found to be n_estimators = 300, max_depth = 10, min_samples_split = 2, min_samples_leaf = 1, and max_features = ‘sqrt’.
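A minimal sketch of this grid search, assuming a scikit-learn implementation (the training arrays here are random placeholders, and recent scikit-learn versions no longer accept ‘auto’ for max_features), is shown below:

```python
# Sketch of the hyperparameter grid search with five-fold cross-validation.
# The training arrays are random placeholders; in practice X_train holds the
# geometric parameters plus the tier-1 RD prediction, and y_train the elastic moduli.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X_train = rng.random((220, 6))          # placeholder: 5 geometric parameters + RD
y_train = 100.0 * rng.random(220)       # placeholder elastic moduli (MPa)

param_grid = {
    "n_estimators": [100, 200, 300],
    "max_depth": [None, 10, 20, 30],
    "min_samples_split": [2, 5, 10],
    "min_samples_leaf": [1, 2, 4],
    "max_features": ["sqrt", "log2"],   # 'auto' omitted: rejected by recent scikit-learn
}
search = GridSearchCV(RandomForestRegressor(random_state=0), param_grid,
                      cv=5, scoring="neg_mean_absolute_error", n_jobs=-1)
search.fit(X_train, y_train)
print(search.best_params_)
```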
Additionally, five-fold cross-validation (CV) was used to split and train the model repeatedly and determine the average performance indices. This increased the robustness of the model performance evaluation and lessened the random effects of a single data split. The results, presented in Table 2, reveal the robust predictive capabilities of the ML model.
Additionally, to demonstrate the reliability of the ML model, a robustness analysis was conducted under different noise levels. Gaussian noise at levels of 1%, 2%, 5%, and 10% was added to each input feature to simulate measurement errors. The results showed that the model maintained strong robustness and stability across these noise levels, as seen in Figure 10. The R² scores fluctuated only slightly over the 1.0% to 10.0% noise interval, indicating that the model retained a high goodness-of-fit in the presence of noise disturbances; notably, it still explained the main variability in the data at the higher noise levels. The MSE and MAE increased gradually with rising noise, but only moderately, further validating the model’s adaptability under medium-to-high noise conditions. Despite the rise in error due to noise, the model showed a clear degree of noise tolerance at noise levels of 5.0% and above, which is crucial for property prediction.
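A minimal sketch of this robustness check, assuming the Gaussian noise is scaled by each feature’s standard deviation (the exact scaling is an implementation detail; function and variable names are illustrative), is given below:

```python
# Sketch of the robustness check: perturb each input feature with Gaussian noise
# proportional to its scale and re-evaluate R^2, MSE, and MAE (cf. Figure 10).
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

def evaluate_under_noise(model, X_val, y_val, levels=(0.01, 0.02, 0.05, 0.10), seed=0):
    """X_val, y_val: validation arrays; model: any fitted regressor with .predict()."""
    rng = np.random.default_rng(seed)
    scale = X_val.std(axis=0)
    results = {}
    for lvl in levels:
        X_noisy = X_val + rng.normal(0.0, lvl * scale, size=X_val.shape)
        y_pred = model.predict(X_noisy)
        results[lvl] = (r2_score(y_val, y_pred),
                        mean_squared_error(y_val, y_pred),
                        mean_absolute_error(y_val, y_pred))
    return results
```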
In the reverse design phase, a genetic algorithm was used to identify lattice materials that satisfied tailored properties while maintaining minimum relative density, as defined in Equation (13). The range of geometric parameters is defined in
Table 1.
In Equation (13), the two weighting coefficients determine the relative contributions of the elastic modulus term and the relative density term, respectively. The predicted mechanical performance is the output of the Random Forest model, which takes the geometric parameters and RD as inputs, where RD itself is predicted from the geometric parameters by the polynomial regression model; the target value is the desired performance. The population size for each generation was set to 100, with 50 iterations. Further experimental validation was carried out by customizing four distinct lattice materials, targeting elastic modulus values in the range of 80 to 120 MPa while ensuring the lowest possible relative density. Before running the genetic algorithm, a sensitivity analysis was conducted. The two weighting coefficients were explored first. We set the target elastic modulus retrieved by the genetic algorithm to 160 MPa, applied a series of weighting parameters, and observed their varied effects on the best fitness values (as shown in Table 3). The two weights sum to 1, which constrains the total weighting.
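Although the exact expression is given as Equation (13), a weighted fitness of the following general form is consistent with this description (the symbols $w_1$, $w_2$, $E_{\mathrm{pred}}$, $E_{\mathrm{target}}$, and $\mathrm{RD}$ are used here only for illustration):

```latex
\mathrm{fitness}(\mathbf{x}) \;=\;
w_1 \, \frac{\lvert E_{\mathrm{pred}}(\mathbf{x}) - E_{\mathrm{target}} \rvert}{E_{\mathrm{target}}}
\;+\; w_2 \, \mathrm{RD}(\mathbf{x}),
\qquad w_1 + w_2 = 1,
```

where $w_1$ weights the modulus-matching term and $w_2$ weights the relative-density term.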
Figure 11 illustrates the sensitivity of the best fitness value to the weighting parameters. As the weight on the elastic modulus term increases and the weight on the relative density term decreases, the optimal fitness value gradually increases. The results show that a higher relative-density weight helps reduce the fitness value and thus reduces the relative density more efficiently, whereas a higher elastic-modulus weight focuses the optimization on modulus matching but tends to lead to higher fitness values. Hence, the elastic-modulus weight and the relative-density weight were set to 0.1 and 0.9, respectively. After determining the sensitivity of the weights, we further explored the sensitivity of the genetic algorithm parameters:
Population size: [50, 100, 150].
Iteration number: [15, 20].
Crossover probability: [0.4, 0.5].
Mutation probability: [0.1, 0.2, 0.3].
Tournament size: [2, 3].
Figure 12 shows that the performance of the genetic algorithm is significantly affected by the number of iteration generations and the mutation probability. Increasing the number of iteration generations to 20 effectively improves convergence, allowing the algorithm to reach the optimal fitness value of 3.5416 repeatedly and avoiding the sub-optimal solutions caused by stopping too early. A higher mutation probability (0.2 or 0.3) helps expand the search space and avoid local optima, significantly improving the quality of the solution. In addition, a tournament size of 3 helps balance exploration and exploitation through a moderate increase in selection pressure, producing more stable results. By comparison, the crossover probability (0.4 or 0.5) and the population size (50 to 150) have less impact on the results and can be adjusted flexibly according to available resources; the population size was set to 100 to balance computational cost against algorithm performance. This combination is expected to improve solution quality while ensuring stable convergence. Overall, appropriately increasing the number of iteration generations and choosing a higher mutation probability are the key measures for improving the performance of the algorithm, and the final parameter settings were selected accordingly.
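To make the roles of these parameters concrete, the following minimal sketch outlines a real-coded genetic algorithm with tournament selection, uniform crossover, and Gaussian mutation; the parameter bounds and the fitness function are placeholders standing in for Table 1 and Equation (13), and the variable names are illustrative:

```python
# Minimal GA sketch: tournament selection, uniform crossover, Gaussian mutation.
# Bounds and fitness are placeholders; in practice fitness() calls the two-tier surrogate.
import numpy as np

rng = np.random.default_rng(0)
LOWER = np.array([0.2, 0.2, 0.5, 0.0, 4.0])    # hypothetical parameter bounds
UPPER = np.array([0.4, 0.4, 1.5, 1.0, 6.0])

def fitness(x):
    # placeholder objective to be minimized
    return float(np.sum((x - LOWER) / (UPPER - LOWER)))

def run_ga(pop_size=100, generations=20, p_cross=0.5, p_mut=0.2, tour_size=3):
    pop = rng.uniform(LOWER, UPPER, size=(pop_size, len(LOWER)))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # tournament selection (minimization)
        parents = []
        for _ in range(pop_size):
            idx = rng.integers(0, pop_size, size=tour_size)
            parents.append(pop[idx[np.argmin(scores[idx])]])
        parents = np.array(parents)
        # uniform crossover between consecutive parent pairs
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < p_cross:
                mask = rng.random(len(LOWER)) < 0.5
                children[i, mask], children[i + 1, mask] = parents[i + 1, mask], parents[i, mask]
        # Gaussian mutation, clipped to the bounds, rounded to printer resolution
        mutate = rng.random(children.shape) < p_mut
        children = children + mutate * rng.normal(0, 0.05 * (UPPER - LOWER), children.shape)
        pop = np.clip(np.round(children, 2), LOWER, UPPER)
    scores = np.array([fitness(ind) for ind in pop])
    best = int(np.argmin(scores))
    return pop[best], scores[best]

best_params, best_fit = run_ga()
```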
In the experimental section, we attempted to customize four optimal lattice materials with target elastic moduli ranging from 80 to 120 MPa and the lowest relative density. 3 × 3 × 3 lattice units were selected, and compression tests were conducted on a Sansi Zongheng universal testing machine. Each lattice sample was produced five times using stereolithography (SLA) on a FormLab3 printer. Post-processing included cleaning the samples in 99.8% isopropanol in a Xiaomei ultrasonic cleaner for 10 min to eliminate any residual resin and ensure their cleanliness and integrity. The lattices were subsequently cured in a FormLab UV curing machine, gradually heated to 60 °C over 60 min. Lattices damaged during printing and cleaning were excluded. The final selection of three lattice structures from each group for compression testing was driven primarily by two factors: first, the Formlab equipment exhibited printing delamination issues, which affected the integrity of some lattice materials; second, the removal of support structures damaged certain lattices, rendering them unsuitable for testing. Thus, the three chosen lattices were those that remained intact and retained their geometric features despite these challenges.
Table 4 presents the results of the physical experiments and the recorded stress–strain curves of each sample.
The relative error (RE) between the predicted relative volume and the actual relative volume obtained from the CAD software is defined by Equation (14).
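In its standard form, this relative error can be written as (subscripts are illustrative):

```latex
\mathrm{RE} = \frac{V_{\mathrm{pred}} - V_{\mathrm{CAD}}}{V_{\mathrm{CAD}}} \times 100\%,
```

where $V_{\mathrm{pred}}$ is the predicted relative volume and $V_{\mathrm{CAD}}$ is the value measured in the CAD software.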
In assessing relative density, the prediction error varied with the target elastic modulus. As seen in the relative density error curves in Table 5, the errors were higher at lower target values, diminished at moderate target values, and increased once again at higher target values. Conversely, the polynomial regression prediction displayed smaller positive errors at lower target values, with the errors slightly increasing, then shifting to negative at moderate target values and decreasing again at higher values, as seen in the elastic modulus error curves in Table 5. Notably, at a target of 100 MPa the error reached −1.18%, indicating a more stable prediction from the polynomial regression, as confirmed by the data in Table 5. The larger variance in the former might be attributed to the higher delta setting shown in Equation (11).
In terms of customizing the elastic modulus, the relative errors between the predicted elastic modulus and the target values were minimal. This precision demonstrates the capability of the GA to navigate the parameter space defined by the two-tiered ML model and optimize lattice designs effectively. For the errors between the predicted and experimentally measured moduli, the results suggest that the prediction capability of the ML model is not perfectly stable but remains usable as a reference, with the errors ranging from 0.75% to 8.55%; the errors were larger at higher target values and gradually decreased and stabilized at lower values. It is noteworthy that the deviation trends of the elastic modulus and RD prediction errors agree closely: as the RD error increases, the elastic modulus error also grows, consistent with the feature importance analysis, since the RD error of the polynomial regression propagates into and amplifies the elastic modulus error of the Random Forest regression.
Despite these challenges, all four designed lattice materials successfully met or surpassed their targeted mechanical properties, affirming the robustness and practical applicability of our design method. The GA was instrumental in determining the optimal discrete parameters within the polynomial constraints of relative density and geometric parameters. Its application highlights the algorithm’s efficiency in managing the intricate design space of lattice materials, leading to highly precise material property achievements.
4. Case Study: Performance Optimization
The detailed parametric characterization of lattice materials significantly facilitates the exploration of their morphological features. This approach not only makes the exploration of lattice morphology more flexible but also broadens the spectrum of achievable mechanical performance. Initially, a body-centered cubic (BCC) lattice unit was modeled using quadrilateral meshes, with each strut having a hexagonal cross-section. Four geometric parameters were defined to describe the lattice morphology, as shown in Figure 13a. S1 is a distance parameter that locates construction points extending from the strut midpoint along both sides; S2 and S3 specify the strut radius at the S1 points and at the strut midpoint, respectively; and N1 represents the lattice node size. This parametric mesh lattice model was transformed into a smoothed one using the Rhino 7-Grasshopper® SubD component, as shown in Figure 13b. The use of subdivision modeling technology ensures an organic morphology while significantly reducing stress concentrations.
To efficiently explore the vast potential design space and collect geometric features, Latin Hypercube Sampling (LHS) was used. This statistical approach helps to sample four geometric parameters, ensuring uniform coverage across the range of each variable. The specific sampling ranges for each parameter are detailed in
Table 6.
Considering the minimum resolution of our additive manufacturing equipment, each parameter was rounded to two decimal places. This defined a global design space encompassing 11 × 36 × 36 × 36 design possibilities. A total of 238 sets of geometric parameters were sampled, generated, and evaluated following the procedure described in Section 3. The deformed 3 × 3 × 3 RVE and its von Mises stress distribution under compressive load are illustrated in Figure 14.
The distribution of the lattice samples is presented in Figure 15a, and the parameter sensitivity is shown in Figure 15b. Relative density plays a vital role in determining the elastic modulus, while S1, the distance parameter, has less influence on the mechanical properties of the lattice structure. The node size and strut radii (N1, S2, and S3) contribute significantly to the lattice’s mechanical performance.
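A sensitivity ranking of this kind can be obtained, for example, from permutation importances of the trained Random Forest; the sketch below uses placeholder data and is only illustrative of the procedure:

```python
# Sketch of a parameter-sensitivity check via permutation importance
# (an illustrative stand-in for the analysis shown in Figure 15b).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["S1", "S2", "S3", "N1", "RD"]
X = rng.random((238, len(feature_names)))      # placeholder design matrix
y = 100.0 * rng.random(238)                    # placeholder elastic moduli

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
perm = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, perm.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```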
The dataset was collected and pre-processed in the same way as in
Section 3. This dataset was split into 80% for training and 20% for validation, and five-fold cross-validation was applied to assess the models’ effectiveness after grid searching hyperparameters of the Random Forest model. The predictive outcomes, illustrated in
Table 7, highlight the strong predictive performance of the ML models.
The two-tiered ML model produced RD and Young’s modulus forecasts for the entire global design space of parametric lattice materials. This dataset allowed designers to efficiently search and filter lattice materials within the Rhino 7-Grasshopper® environment, pinpointing options that meet specific mechanical performance criteria and constraints across the 11 × 36 × 36 × 36 possible lattice design configurations, as shown in Figure 15a.
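A minimal sketch of this forward search, with placeholder grid levels and stand-in predictors for the two tiers (the actual levels follow Table 6 and the trained models), is shown below:

```python
# Sketch of the forward-search step: enumerate the discrete design grid, predict RD
# and the elastic modulus, and filter candidates meeting the designer's constraints.
# Grid levels and the predictors are placeholders for Table 6 and the trained tiers.
import numpy as np
from itertools import product

s1 = np.round(np.linspace(0.0, 1.0, 11), 2)      # 11 hypothetical levels
s2 = np.round(np.linspace(0.30, 0.65, 36), 2)    # 36 hypothetical levels
s3, n1 = s2, s2

grid = np.array(list(product(s1, s2, s3, n1)))   # 11 x 36 x 36 x 36 = 513,216 designs

def predict_rd(X):       # placeholder for the tier-1 polynomial-regression model
    return 0.1 + 0.1 * X.mean(axis=1)

def predict_E(X, rd):    # placeholder for the tier-2 Random Forest model
    return 60.0 * rd + 5.0 * X[:, 1]

rd = predict_rd(grid)
E = predict_E(grid, rd)

mask = (rd >= 0.155) & (rd <= 0.160)             # designer-specified RD window
candidates = grid[mask][np.argsort(E[mask])[::-1][:4]]   # four stiffest admissible designs
```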
The performance of the machine learning models was validated using the metrics provided in Table 7. The first-tier polynomial regression achieved a CV MAE of 0.0049 and a CV R² of 0.98187, indicating high predictive accuracy for relative density. The Random Forest regression used in the second tier achieved an MAE of 4.5046 and an R² of 0.91912, showing good predictive capability for Young’s modulus despite the increased complexity of the task. These results highlight the robust performance of the ML models in capturing the non-linear relationships between lattice parameters and mechanical properties, even when handling complex design configurations.
In the experimental phase, we customized four optimal lattice materials with a relative density in the range of 0.155 to 0.160 and maximum elastic modulus. 6 × 6 × 6 lattice units were selected for compression tests on a Sansi Zongheng universal testing machine. In the optimization process, we selected the four parameter combinations with the best fitness at convergence of the genetic algorithm; these combinations represent the best-performing design solutions under the current optimization conditions. Three lattice materials were produced for each parameter combination and subjected to the same post-processing as in Section 3. They were then compressed, and their compression moduli were averaged. A conventional BCC lattice served as the benchmark for comparing mechanical performance, as detailed in Table 8.
Table 8 presents a detailed comparison between the traditional BCC lattice material and the four customized lattice materials (C1, C2, C3, and C4), together with the stress–strain curves of all samples during compression testing. The images show that the FormLab3 SLA printer accurately captured the geometric features of the digital models, with all samples maintaining an RD strictly within the target range of 0.155–0.160. The compressive modulus (Es) metrics reveal the mechanical performance enhancements achieved through the proposed filtering strategy. Specifically, C1 exhibited the highest compressive modulus at 11.27 MPa, an increase of about 25.6% over the conventional BCC lattice’s 8.97 MPa, and C4 showed an elastic modulus 21.5% higher than the conventional lattice. Notably, C1 and C4 exhibited closely matched parameter values, indicating similar geometric features that likely explain their similar mechanical performance. This significant improvement highlights the benefits of lattice customization in enhancing material stiffness under compressive loads.
5. Discussion
The data-driven bi-directional framework proposed in this study demonstrates significant potential in lattice material design optimization, successfully integrating generative design, machine learning (ML), and genetic algorithms (GA) to achieve highly accurate performance prediction and lattice material optimizations. However, we also recognize that there are some limitations and assumptions in this study, which need to be discussed in depth to clarify the scope of applicability of the methodology and the direction of improvement.
Firstly, we assume that the geometric parameters and relative density can accurately predict the elastic modulus. This simplification may not fully capture the effect of print orientation on the mechanical behavior of the lattice structure in additive manufacturing, especially when the lattice structure is complex and exhibits significant anisotropy. Although relative density is an important factor influencing the elastic modulus, ignoring the effects of microgeometry and local stress concentrations introduced during printing may lead to deviations of the predicted results from the actual properties. Secondly, we adopted the homogenization technique to simplify the complex lattice structure and thus improve computational efficiency. However, this method may introduce inaccuracies in approximating the mechanical properties of heterogeneous materials, especially for lattices with complex topologies and significant size effects. Such simplifications may not adequately reflect the subtle interactions within the lattice, leading to discrepancies between numerical predictions and physical results. In addition, this study assumes that the material exhibits linear elastic behavior during loading and does not consider non-linear behaviors such as plastic deformation, viscoelastic effects, or damage evolution.
During customization and optimization, the two-tiered machine learning models (polynomial regression and Random Forest) rely on the quality and diversity of the training data. Since the training data come mainly from numerical simulations, poor meshing accuracy during finite element analysis may increase the deviation of the numerically obtained properties from the actual mechanical properties of the lattice materials and thus reduce prediction accuracy. Meanwhile, the performance of genetic algorithms is highly sensitive to their parameters (e.g., population size, crossover probability, mutation probability, and selection strategy). Although we performed parameter sensitivity analyses to identify appropriate settings, the selected parameters may not guarantee a globally optimal solution. In addition, this study does not consider manufacturing constraints such as minimum feature sizes, tolerances, and defects that may occur during processes such as additive manufacturing. These factors may significantly affect the performance and feasibility of the lattice structure in actual production, and ignoring them may lead to design solutions that are difficult to realize in practice.
The above uncertainties are the main reasons for deviations between the actual performance and the predicted results. The limited number of experimental tests we conducted may not be sufficient to fully assess the accuracy of the model, and experimental errors and the randomness of the samples may affect the reliability of the results.
To address these limitations, future research should focus on improving model accuracy and generalization by expanding the dataset to include more diverse lattice geometries, cell types, and material behaviors. Enhancing homogenization techniques and adopting multi-scale modeling can provide more accurate predictions by capturing both overall structural behavior and local effects. Further, integrating fabrication constraints into the optimization framework will ensure designed lattice structures are not only performance-optimized but also highly manufacturable. Comprehensive experimental validation is crucial, involving increased sample sizes and a wider range of tests to validate predicted performances. Establishing a feedback loop to iteratively improve ML models and optimization algorithms using experimental results will enhance model accuracy and robustness. Finally, developing adaptive optimization strategies, such as adaptive genetic algorithms that dynamically adjust parameters during optimization, can enhance convergence to global optimal solutions.