Design Optimization of Truss Structures Using a Graph Neural Network-Based Surrogate Model
Abstract
1. Introduction
2. Research Significance
3. Materials and Methods
3.1. Truss Size Optimization Problem
3.2. GNN-Based Optimization
- Data Preparation. Generating a dataset of trusses whose members are assigned randomly sampled cross-sectional areas; each truss is then analyzed and transformed into a graph to be used as input to the GNN for training and validation.
- Surrogate Model. Building and training the GNN as a surrogate model for the constraints.
- Optimization. Using a PSO algorithm integrated with the trained GNN to attain the optimal cross-sectional areas of the truss members (a minimal sketch of the full workflow follows this list).
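The three steps can be chained into a single routine in which the trained surrogate replaces finite-element analysis inside the optimizer. The Python sketch below is given only for orientation: the truss object and the helper functions (analyze_truss, truss_to_graph, train_gnn, pso_minimize) are hypothetical stand-ins for the structural analysis, graph conversion, GNN training, and PSO routines described in this section, not the authors' implementation.

```python
import numpy as np

def build_dataset(truss, n_samples, a_min, a_max):
    """Step 1: sample random member areas, analyze each design, store graphs."""
    graphs, targets = [], []
    for _ in range(n_samples):
        areas = np.random.uniform(a_min, a_max, size=truss.n_members)
        response = analyze_truss(truss, areas)        # hypothetical FEM call
        graphs.append(truss_to_graph(truss, areas))   # hypothetical graph builder
        targets.append(response.max_displacement)     # constraint quantity to learn
    return graphs, np.array(targets)

def optimize(truss, a_min, a_max, allowable_disp):
    graphs, targets = build_dataset(truss, 5000, a_min, a_max)
    surrogate = train_gnn(graphs, targets)            # Step 2: GNN surrogate for constraints

    def objective(areas):                             # Step 3: PSO evaluates the surrogate
        weight = truss.density * np.dot(truss.lengths, areas)
        disp = surrogate.predict(truss_to_graph(truss, areas))
        penalty = max(0.0, disp - allowable_disp) ** 2
        return weight + 1e6 * penalty                 # penalized weight objective

    return pso_minimize(objective, bounds=(a_min, a_max), dim=truss.n_members)
```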
3.2.1. Data Preparation
3.2.2. Surrogate Model
3.2.3. Optimization
3.3. Graphs and Graph Neural Networks
- Directed or undirected graphs. In directed graphs, directional dependencies exist between nodes, and edges only go from one node to another. In contrast, in undirected graphs, edges are considered undirected or bidirectional. Figure 2 illustrates examples of the two edge types.
- Homogeneous or heterogeneous graphs. Vertices and edges are of the same type in homogeneous graphs, whereas they have different types in heterogeneous graphs. The distinction can carry important information for downstream use of graph datasets.
- Static or dynamic graphs. A graph is regarded as dynamic when its input features or configuration change over time; otherwise, it is static.
- Node-level prediction tasks revolve around node attributes and their roles within a graph. Node classification, node regression, and node clustering are examples of node-level prediction tasks. In node classification, nodes are categorized in a variety of predetermined classes. Node regression aims to predict continuous variables for each node. On the other hand, the goal of node clustering is to make groups of unlabeled nodes with similar attributes.
- Edge-level prediction tasks are edge classification, regression, and link prediction. Here, edge classification pertains to classifying edge types, while edge regression deals with continuous variables associated with edges. Link prediction involves determining the presence or absence of an edge between two given nodes.
- Graph-level prediction tasks seek to predict the properties of an entire graph, including graph classification, graph regression, and graph matching. While the notion of graph classification and regression mirrors that of the other prediction tasks, the classes and continuous variables now describe the entire graph. Graph matching, meanwhile, assesses the similarity between graphs. (A truss cast as a graph for graph-level regression is illustrated in the sketch after this list.)
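To relate these notions to the truss problem, the following sketch builds an undirected, homogeneous, static graph from a small planar truss: joints become nodes (with coordinates, loads, and support conditions as node features), members become edges (with cross-sectional area and length as edge features), and a response quantity of the design serves as the graph-level regression target. The specific feature choices and numbers are illustrative assumptions, not the exact inputs used in this study.

```python
import numpy as np

# A 3-bar planar truss: 3 joints, 3 members (illustrative geometry and loads).
coords   = np.array([[0.0, 0.0], [120.0, 0.0], [60.0, 100.0]])   # joint x, y (in)
loads    = np.array([[0.0, 0.0], [0.0, 0.0], [0.0, -10.0]])      # Px, Py (kips)
supports = np.array([[1, 1],     [1, 1],      [0, 0]])           # 1 = restrained DOF
members  = [(0, 1), (0, 2), (1, 2)]                               # joint index pairs
areas    = np.array([2.0, 1.5, 1.5])                              # sampled areas (in^2)

# Node features: coordinates, applied loads, support conditions.
x = np.hstack([coords, loads, supports]).astype(float)            # shape (3, 6)

# Undirected graph: symmetric adjacency matrix.
a = np.zeros((3, 3))
for i, j in members:
    a[i, j] = a[j, i] = 1.0

# Edge features: cross-sectional area and member length, one row per member.
lengths = np.array([np.linalg.norm(coords[j] - coords[i]) for i, j in members])
e = np.stack([areas, lengths], axis=1)                             # shape (3, 2)

# Graph-level target for regression, e.g., max nodal displacement of this design.
y = np.array([0.85])   # placeholder value; would come from a structural analysis

print(x.shape, a.shape, e.shape, y)
```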
3.3.1. Computational Modules
- Propagation Modules. Propagation modules are used to propagate information through nodes so that an aggregated representation of the graph’s configuration and features is created. Notably, convolution operators serve as propagation modules, as they allow for aggregation of information from the neighboring nodes.
- Sampling Modules. For large-scale graphs, i.e., graphs that cannot be stored and processed by the device, sampling modules work in conjunction with propagation modules to propagate information effectively across the graph.
- Pooling Modules. Pooling modules play a crucial role in extracting information from nodes to construct high-level representations of subgraphs or entire graphs (see the sketch following this list).
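As a minimal illustration of propagation and pooling, the NumPy sketch below performs one mean-aggregation message-passing step followed by a mean readout over the nodes. The random weight matrix stands in for parameters that a real GNN layer would learn during training.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, f_in, f_out = 4, 6, 8
x = rng.normal(size=(n_nodes, f_in))          # node features
a = np.array([[0, 1, 1, 0],                   # undirected adjacency matrix
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Propagation: each node aggregates (here, averages) its neighbors' features
# and passes the result through a linear transform plus a ReLU nonlinearity.
w = rng.normal(size=(f_in, f_out))            # stands in for trainable weights
deg = a.sum(axis=1, keepdims=True)
h = np.maximum((a @ x) / deg @ w, 0.0)        # one message-passing layer

# Pooling: collapse node representations into a single graph-level vector,
# which a downstream regressor maps to the predicted constraint value.
graph_embedding = h.mean(axis=0)
print(graph_embedding.shape)                  # (8,)
```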
3.3.2. Encode-Process-Decode Architecture
3.3.2.1. Encoder
3.3.2.2. Processor
- ECCConv Layer is an Edge-Conditioned Convolution (ECC) layer that not only captures the features of individual nodes but also conditions the layer weights on the edge features at each layer. The output of the ECCConv layers in this study is a 64-dimensional vector. The ECC layer is mathematically formulated as a function of both the node and edge features [51]; the formulation is reproduced after this list.
- CrystalConv is a Crystal Convolution layer introduced by Xie and Grossman [52]; its update rule is likewise given after this list.
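For reference, the standard forms of the two convolutions are given below; they follow [51] and [52] and may differ in notation from the paper's own equations. In the edge-conditioned convolution, a filter-generating network $F^{(l)}$ maps the features of edge $(j, i)$ to a weight matrix that is applied to the neighbor's features:

$$\mathbf{x}_i^{(l)} = \frac{1}{|\mathcal{N}(i)|}\sum_{j\in\mathcal{N}(i)} F^{(l)}\!\left(\mathbf{e}_{ji};\,\mathbf{w}^{(l)}\right)\mathbf{x}_j^{(l-1)} + \mathbf{b}^{(l)}$$

In the crystal graph convolution, node $i$ is updated through a gated sum over its neighbors, where $\mathbf{z}_{ij}^{(t)} = \mathbf{v}_i^{(t)} \oplus \mathbf{v}_j^{(t)} \oplus \mathbf{e}_{ij}$ concatenates the two node feature vectors with the edge features, $\sigma$ is a sigmoid gate, $g$ is a nonlinear activation, and $\odot$ denotes element-wise multiplication:

$$\mathbf{v}_i^{(t+1)} = \mathbf{v}_i^{(t)} + \sum_{j\in\mathcal{N}(i)} \sigma\!\left(\mathbf{z}_{ij}^{(t)}\mathbf{W}_f^{(t)} + \mathbf{b}_f^{(t)}\right)\odot g\!\left(\mathbf{z}_{ij}^{(t)}\mathbf{W}_s^{(t)} + \mathbf{b}_s^{(t)}\right)$$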
3.3.2.3. Decoder
3.4. PSO Implementation
Algorithm 1. Particle Swarm Optimization algorithm

INPUT: Initialize the population of particles:
- Set the maximum number of iterations.
- Set the number of particles in the swarm.
- Randomly initialize the position and velocity of each particle and determine the fitness value of each particle.
- Set the personal best position (pbest) of each particle as its initial position.
- Determine the global best position (gbest) of the population.

OUTPUT: gbest and its fitness value found in the final iteration.

1. For each iteration until the maximum number of iterations is reached:
2. Update the velocity and position of each particle using the formulas:
3. $v_i^{t+1} = w\,v_i^{t} + c_1 r_1\,(pbest_i - x_i^{t}) + c_2 r_2\,(gbest - x_i^{t})$
4. $x_i^{t+1} = x_i^{t} + v_i^{t+1}$
5. Evaluate the fitness of the new position of each particle.
6. Update pbest and gbest:
7. If the fitness of the new position is better than its pbest fitness, update pbest.
8. If the fitness of pbest is better than the gbest fitness, update gbest.
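A compact, self-contained Python implementation of Algorithm 1 is sketched below, using the inertia-weight velocity update of Shi and Eberhart [56]. The coefficient values, bounds, and the quadratic test function are illustrative assumptions rather than the settings used in this study; in the proposed framework, the fitness function would combine the truss weight with a penalty on constraint violations predicted by the trained GNN surrogate.

```python
import numpy as np

def pso(fitness, dim, bounds, n_particles=30, n_iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `fitness` over a box-constrained search space with PSO."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds

    # Randomly initialize positions and velocities; evaluate initial fitness.
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = rng.uniform(-(hi - lo), hi - lo, size=(n_particles, dim))
    f = np.apply_along_axis(fitness, 1, x)

    pbest, pbest_f = x.copy(), f.copy()            # personal bests
    gbest = pbest[np.argmin(pbest_f)].copy()       # global best
    gbest_f = pbest_f.min()

    for _ in range(n_iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Velocity and position updates (steps 3 and 4 of Algorithm 1).
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)

        # Evaluate new positions and update pbest / gbest (steps 5-8).
        f = np.apply_along_axis(fitness, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        if pbest_f.min() < gbest_f:
            gbest_f = pbest_f.min()
            gbest = pbest[np.argmin(pbest_f)].copy()

    return gbest, gbest_f

# Example usage on a simple test function (minimum at the origin).
best_x, best_f = pso(lambda x: np.sum(x**2), dim=10, bounds=(-5.0, 5.0))
print(best_f)
```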
3.4.1. Handling of Size Optimization Constraints
4. Results and Discussion
4.1. Ten-Bar Planar Truss
4.1.1. Data Preparation
4.1.2. GNN Model Training
4.1.3. GNN-Based Optimization
4.2. Seventy-Two-Bar Space Truss
4.2.1. Data Preparation
4.2.2. GNN Model Training
4.2.3. GNN-Based Optimization
4.3. Two-Hundred-Bar Planar Truss Example
4.3.1. Data Preparation
4.3.2. GNN Model Training
4.3.3. GNN-Based Optimization
4.4. Accuracy and Effectiveness
5. Summary, Conclusions, and Future Work
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Horta, I.M.; Camanho, A.S.; Johnes, J.; Johnes, G. Performance trends in the construction industry worldwide: An overview of the turn of the century. J. Product. Anal. 2013, 39, 89–99.
- Mei, L.; Wang, Q. Structural Optimization in Civil Engineering: A Literature Review. Buildings 2021, 11, 66.
- Kaveh, A.; Khayatazad, M. Ray Optimization for Size and Shape Optimization of Truss Structures. Comput. Struct. 2013, 117, 82–94.
- Dorn, W.; Gomory, R.; Greenberg, H.J. Automatic Design of Optimal Structures. J. Mec. 1964, 3, 25–52.
- Hajela, P.; Lee, E. Genetic algorithms in truss topological optimization. Int. J. Solids Struct. 1995, 32, 3341–3357.
- Wang, D.; Zhang, W.H.; Jiang, J.S. Truss shape optimization with multiple displacement constraints. Comput. Methods Appl. Mech. Eng. 2002, 191, 3597–3612.
- Miguel, L.F.F.; Fadel Miguel, L.F. Shape and Size Optimization of Truss Structures Considering Dynamic Constraints through Modern Metaheuristic Algorithms. Expert Syst. Appl. 2012, 39, 9458–9467.
- Stolpe, M. Truss Optimization with Discrete Design Variables: A Critical Review. Struct. Multidiscip. Optim. 2016, 53, 349–374.
- Kaveh, A.; Malakouti Rad, S. Hybrid genetic algorithm and particle swarm optimization for the force method-based simultaneous analysis and design. Iran. J. Sci. Technol. Trans. B Eng. 2010, 34, 15–34.
- Li, L.J.; Huang, Z.B.; Liu, F. A Heuristic Particle Swarm Optimization Method for Truss Structures with Discrete Variables. Comput. Struct. 2009, 87, 435–443.
- Renkavieski, C.; Parpinelli, R.S. Meta-heuristic algorithms to truss optimization: Literature mapping and application. Expert Syst. Appl. 2021, 182, 22.
- Saka, M.P.; Hasançebi, O.; Geem, Z.W. Metaheuristics in Structural Optimization and Discussions on Harmony Search Algorithm. Swarm Evol. Comput. 2016, 28, 88–97.
- Du, F.; Dong, Q.Y.; Li, H.S. Truss Structure Optimization with Subset Simulation and Augmented Lagrangian Multiplier Method. Algorithms 2017, 10, 128.
- Desale, S.; Rasool, A.; Andhale, S.; Rane, P. Heuristic and Meta-Heuristic Algorithms and Their Relevance to the Real World: A Survey. Int. J. Comput. Eng. Res. Trends 2015, 2, 296–304.
- Salehi, H.; Burgueño, R. Emerging Artificial Intelligence Methods in Structural Engineering. Eng. Struct. 2018, 171, 170–189.
- Flood, I. Towards the Next Generation of Artificial Neural Networks for Civil Engineering. Adv. Eng. Inform. 2008, 22, 4–14.
- Lee, S.; Ha, J.; Zokhirova, M.; Moon, H.; Lee, J. Background Information of Deep Learning for Structural Engineering. Arch. Comput. Methods Eng. 2018, 25, 121–129.
- Gu, G.X.; Chen, C.T.; Buehler, M.J. De Novo Composite Design Based on Machine Learning Algorithm. Extrem. Mech. Lett. 2018, 18, 19–28.
- Nguyen, H.; Vu, T.; Vo, T.P.; Thai, H.T. Efficient Machine Learning Models for Prediction of Concrete Strengths. Constr. Build. Mater. 2021, 266, 17.
- Abueidda, D.W.; Koric, S.; Sobh, N.A. Topology Optimization of 2D Structures with Nonlinearities Using Deep Learning. Comput. Struct. 2020, 237, 14.
- Kollmann, H.T.; Abueidda, D.W.; Koric, S.; Guleryuz, E.; Sobh, N.A. Deep Learning for Topology Optimization of 2D Metamaterials. Mater. Des. 2020, 196, 14.
- Yu, Y.; Hur, T.; Jung, J.; Jang, I.G. Deep Learning for Determining a Near-Optimal Topological Design without any Iteration. Struct. Multidiscip. Optim. 2019, 59, 787–799.
- Chandrasekhar, A.; Suresh, K. TOuNN: Topology Optimization Using Neural Networks. Struct. Multidiscip. Optim. 2021, 63, 1135–1149.
- Moghadas, K.R.; Choong, K.K.; Bin Mohd, S. Prediction of Optimal Design and Deflection of Space Structures Using Neural Networks. Math. Probl. Eng. 2012, 2012, 712974.
- Yücel, M.; Bekdaş, G.; Nigdeli, S.M. Prediction of Optimum 3-Bar Truss Model Parameters with an ANN Model. In Proceedings of the 6th International Conference on Harmony Search, Soft Computing and Applications, ICHSA 2020, Advances in Intelligent Systems and Computing, Istanbul, Turkey, 22–24 April 2020; Volume 1275, pp. 317–324.
- Nguyen, T.-H.; Vu, A.-T. Prediction of Optimal Cross-Sectional Areas of Truss Structures Using Artificial Neural Networks. In Proceedings of the 6th International Conference on Geomatics, Civil Engineering and Structures, CIGOS 2021, Emerging Technologies and Applications for Green Infrastructure, Ha Long, Vietnam, 28–29 October 2021; Volume 203, pp. 1897–1905.
- Nourian, N.; El-Badry, M.; Jamshidi, M. Design Optimization of Pedestrian Truss Bridges Using Deep Neural Network. In Proceedings of the 11th International Conference on Short and Medium Span Bridges, SMSB XI, Toronto, ON, Canada, 19–22 July 2022; p. 10.
- Hajela, P.; Berke, L. Neurobiological computational models in structural analysis and design. Comput. Struct. 1991, 41, 657–667.
- Hajela, P.; Berke, L. Neural Network Based Decomposition in Optimal Structural Synthesis. Comput. Syst. Eng. 1991, 2, 473–481.
- Papadrakakis, M.; Lagaros, N.D.; Tsompanakis, Y. Optimization of Large-Scale 3-D Trusses Using Evolution Strategies and Neural Networks. Int. J. Space Struct. 1999, 14, 211–223.
- Liu, Y.; Lu, N.; Noori, M.; Yin, X. System Reliability-Based Optimisation for Truss Structures Using Genetic Algorithm and Neural Network. Int. J. Reliab. Saf. 2014, 8, 51–69.
- Zhou, Y.; Zhan, H.; Zhang, W.; Zhu, J.; Bai, J.; Wang, Q.; Gu, Y. A New Data-Driven Topology Optimization Framework for Structural Optimization. Comput. Struct. 2020, 239, 16.
- Nguyen, T.H.; Vu, A.T. Using Neural Networks as Surrogate Models in Differential Evolution Optimization of Truss Structures. In Proceedings of the 12th International Conference on Computational Collective Intelligence, ICCCI 2020, Da Nang, Vietnam, 30 November–3 December 2020; Volume 12496, pp. 152–163.
- Mai, H.T.; Kang, J.; Lee, J. A Machine Learning-Based Surrogate Model for Optimization of Truss Structures with Geometrically Nonlinear Behavior. Finite Elem. Anal. Des. 2021, 196, 14.
- Gori, M.; Monfardini, G.; Scarselli, F. A New Model for Learning in Graph Domains. In Proceedings of the 2005 IEEE International Joint Conference on Neural Networks, Montreal, QC, Canada, 31 July–4 August 2005; Volume 2, pp. 729–734.
- Scarselli, F.; Hagenbuchner, M.; Yong, S.L.; Tsoi, A.C.; Gori, M.; Maggini, M. Graph Neural Networks for Ranking Web Pages. In Proceedings of the 2005 IEEE/WIC/ACM International Conference on Web Intelligence, WI’05, Compiegne, France, 19–22 September 2005; pp. 666–672.
- Li, Y.; Tarlow, D.; Brockschmidt, M.; Zemel, R. Gated Graph Sequence Neural Networks. In Proceedings of the International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, 2–4 May 2016; p. 20.
- Bronstein, M.M.; Bruna, J.; Lecun, Y.; Szlam, A.; Vandergheynst, P. Geometric Deep Learning: Going beyond Euclidean Data. IEEE Signal Process Mag. 2017, 34, 18–42.
- Zhang, S.; Tong, H.; Xu, J.; Maciejewski, R. Graph Convolutional Networks: A Comprehensive Review. Comput. Soc. Netw. 2019, 6, 23.
- Wu, Z.; Pan, S.; Chen, F.; Long, G.; Zhang, C.; Yu, P.S. A Comprehensive Survey on Graph Neural Networks. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 4–24.
- Chami, I.; Abu-El-Haija, S.; Perozzi, B.; Ré, C.; Murphy, K. Machine Learning on Graphs: A Model and Comprehensive Taxonomy. J. Mach. Learn. Res. 2022, 23, 3840–3903.
- Zhang, Z.; Cui, P.; Zhu, W. Deep Learning on Graphs: A Survey. IEEE Trans. Knowl. Data Eng. 2022, 34, 249–270.
- Duvenaud, D.; Maclaurin, D.; Aguilera-Iparraguirre, J.; Gómez-Bombarelli, R.; Hirzel, T.; Aspuru-Guzik, A.; Adams, R.P. Convolutional Networks on Graphs for Learning Molecular Fingerprints. In Proceedings of the 28th International Conference on Neural Information Processing Systems, NIPS’15, Montreal, QC, Canada, 7–12 December 2015; Volume 2, pp. 2224–2232.
- Hamaguchi, T.; Oiwa, H.; Shimbo, M.; Matsumoto, Y. Knowledge Transfer for Out-of-Knowledge-Base Entities: A Graph Neural Network Approach. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, 19–25 August 2017; p. 7.
- Battaglia, P.; Pascanu, R.; Lai, M.; Rezende, D.J. Interaction Networks for Learning about Objects, Relations and Physics. In Proceedings of the 29th International Conference on Neural Information Processing Systems, NIPS’16, Barcelona, Spain, 5–10 December 2016; p. 9.
- Maurizi, M.; Gao, C.; Berto, F. Predicting stress, strain and deformation fields in materials and structures with graph neural networks. Sci. Rep. 2022, 12, 21834.
- Battaglia, P.W.; Hamrick, J.B.; Bapst, V.; Sanchez-Gonzalez, A.; Zambaldi, V.; Malinowski, M.; Tacchetti, A.; Raposo, D.; Santoro, A.; Faulkner, R.; et al. Relational inductive biases, deep learning, and graph networks. arXiv 2018, arXiv:1806.01261.
- Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. In Proceedings of the International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015; p. 15.
- Grattarola, D.; Alippi, C. Graph Neural Networks in TensorFlow and Keras with Spektral. IEEE Comput. Intell. Mag. 2021, 16, 99–106.
- Zhou, J.; Cui, G.; Hu, S.; Zhang, Z.; Yang, C.; Liu, Z.; Wang, L.; Li, C.; Sun, M. Graph Neural Networks: A Review of Methods and Applications. AI Open 2020, 1, 57–81.
- Simonovsky, M.; Komodakis, N. Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR, Honolulu, HI, USA, 21–26 July 2017; pp. 3693–3702.
- Xie, T.; Grossman, J.C. Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties. Phys. Rev. Lett. 2018, 120, 6.
- Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the International Conference on Neural Networks, ICNN’95, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
- Kennedy, J.; Eberhart, R.C. Swarm Intelligence; Elsevier: Amsterdam, The Netherlands, 2001.
- Liang, J.J.; Qin, A.K.; Suganthan, P.N.; Baskar, S. Comprehensive Learning Particle Swarm Optimizer for Global Optimization of Multimodal Functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295.
- Shi, Y.; Eberhart, R. Modified Particle Swarm Optimizer. In Proceedings of the 1998 IEEE International Conference on Evolutionary Computation, ICEC, Anchorage, AK, USA, 4–9 May 1998; pp. 69–73.
- Rajeev, B.S.; Krishnamoorthy, C.S. Discrete Optimization of Structures Using Genetic Algorithms. J. Struct. Eng. 1992, 118, 1233–1250.
- Jawad, F.K.J.; Mahmood, M.; Wang, D.; AL-Azzawi, O.; Al-Jamely, A. Heuristic Dragonfly Algorithm for Optimal Design of Truss Structures with Discrete Variables. Structures 2021, 29, 843.
Parameter | Value
---|---
Modulus of elasticity | 10,000 ksi
Material density | 0.1 lb/in³
Allowable stresses | ±25 ksi
Allowable nodal displacement | 2 in
Metric | Training | Validation
---|---|---
MSE (in²) | 0.236 | 0.240
RMSE (in) | 0.271 | 0.288
Element Group | Members | Cross-Sectional Area (in²), FEM-Based Optimization (Benchmark) | Cross-Sectional Area (in²), GNN-Based Optimization
---|---|---|---
1 | 1 | 15.000 | 15.000
2 | 2 | 1.000 | 1.000
3 | 3 | 12.268 | 12.012
4 | 4 | 7.644 | 7.660
5 | 5 | 1.000 | 1.000
6 | 6 | 1.000 | 1.000
7 | 7 | 4.684 | 4.943
8 | 8 | 11.202 | 11.088
9 | 9 | 10.913 | 10.966
10 | 10 | 1.000 | 1.000
Maximum displacement (in) | | 2.00 | 2.00
Maximum stress (ksi) | | 12.49 | 12.04
Weight (lb) | | 2780.09 | 2781.56
Number of analyses | | 7500 | 2500
Parameter | Value
---|---
Modulus of elasticity | 10,000 ksi
Material density | 0.1 lb/in³
Allowable stresses | ±25 ksi
Allowable nodal displacement | 0.25 in
Joint | Px (kips) | Py (kips) | Pz (kips) |
---|---|---|---|
18 | 5 | 5 | −5 |
19 | 5 | 5 | −5 |
Metric | Training | Validation
---|---|---
MSE (in²) | 0.038 | 0.039
RMSE (in) | 0.162 | 0.164
Element Group | Members | Cross-Sectional Area (in²), FEM-Based Optimization (Benchmark) | Cross-Sectional Area (in²), GNN-Based Optimization
---|---|---|---
1 | 1–4 | 3.000 | 3.000
2 | 5–12 | 2.210 | 2.186
3 | 13–16 | 0.109 | 0.100
4 | 17, 18 | 0.100 | 0.100
5 | 19–22 | 3.000 | 3.000
6 | 23–30 | 2.207 | 2.184
7 | 31–34 | 0.100 | 0.100
8 | 35, 36 | 0.100 | 0.100
9 | 37–40 | 3.000 | 3.000
10 | 41–48 | 2.148 | 2.036
11 | 49–52 | 0.100 | 0.100
12 | 53, 54 | 0.100 | 0.100
13 | 55–58 | 1.627 | 1.851
14 | 59–66 | 2.113 | 2.200
15 | 67–70 | 0.719 | 0.817
16 | 71, 72 | 0.250 | 0.264
Maximum displacement (in) | | 0.25 | 0.25
Maximum stress (ksi) | | 7.91 | 7.90
Weight (lb) | | 1254.68 | 1256.94
Number of analyses | | 7500 | 2500
Parameter | Value
---|---
Modulus of elasticity | 30,000 ksi
Material density | 0.283 lb/in³
Allowable nodal displacement | 4 in
Joints | Px (kips) | Py (kips) | Pz (kips) |
---|---|---|---|
1, 6, 15, 20, 29, 34, 43, 48, 57, 62 | 1 | 0 | 0 |
1, 2, 3, 4, 5, 6, 8, 10, 12, 14, 15, 16, 17, 18, 19, 20, 22, 24, 26, 28, 29, 30, 31, 32, 33, 34, 36, 38, 40, 42, 43, 44, 45, 46, 47, 48, 50, 52, 54, 56, 57, 58, 59, 60, 61, 62, 64, 66, 68, 70, 71, 72, 73, 74, 75 | 0 | −10 | 0 |
Metric | Training | Validation
---|---|---
MSE (in²) | 0.939 | 0.881
RMSE (in) | 0.943 | 0.882
Element Group | Members | Cross-Sectional Area (in²), FEM-Based Optimization (Benchmark) | Cross-Sectional Area (in²), GNN-Based Optimization
---|---|---|---
1 | 1–4 | 0.100 | 0.100
2 | 5, 8, 11, 14, 17 | 0.435 | 0.523
3 | 19–24 | 0.100 | 0.100
4 | 18, 25, 56, 63, 94, 101, 132, 139, 170, 177 | 0.100 | 0.100
5 | 26, 29, 32, 35, 38 | 0.660 | 0.586
6 | 6, 7, 9, 10, 12, 13, 15, 16, 27, 28, 30, 31, 33, 34, 36, 37 | 0.100 | 0.100
7 | 39–42 | 0.100 | 0.100
8 | 43, 46, 49, 52, 55 | 0.754 | 0.855
9 | 57–62 | 0.100 | 0.100
10 | 64, 67, 70, 73, 76 | 0.925 | 0.802
11 | 44, 45, 47, 48, 50, 51, 53, 54, 65, 66, 68, 69, 71, 72, 74, 75 | 0.100 | 0.100
12 | 77–80 | 0.100 | 0.100
13 | 81, 84, 87, 90, 93 | 0.929 | 0.910
14 | 95–100 | 0.100 | 0.100
15 | 102, 105, 108, 111, 114 | 1.135 | 1.095
16 | 82, 83, 85, 86, 88, 89, 91, 92, 103, 104, 106, 107, 109, 110, 112, 113 | 0.100 | 0.100
17 | 115–118 | 0.100 | 0.100
18 | 119, 122, 125, 128, 131 | 1.220 | 1.065
19 | 133–138 | 0.100 | 0.100
20 | 140, 143, 146, 149, 152 | 1.240 | 1.162
21 | 120, 121, 123, 124, 126, 127, 129, 130, 141, 142, 144, 145, 147, 148, 150, 151 | 0.100 | 0.100
22 | 153–156 | 0.100 | 0.100
23 | 157, 160, 163, 166, 169 | 1.434 | 1.816
24 | 171–176 | 0.100 | 0.100
25 | 178, 181, 184, 187, 190 | 1.446 | 1.619
26 | 158, 159, 161, 162, 164, 165, 167, 168, 179, 180, 182, 183, 185, 186, 188, 189 | 0.133 | 0.116
27 | 191–194 | 0.839 | 0.823
28 | 195, 197, 198, 200 | 1.497 | 1.496
29 | 196, 199 | 2.000 | 2.000
Maximum displacement (in) | | 4.00 | 4.00
Weight (lb) | | 4166.81 | 4204.00
Number of analyses | | 11,250 | 3750
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).