Performance Evaluation of Artificial Neural Networks (ANN) Predicting Heat Transfer through Masonry Walls Exposed to Fire
Abstract
Featured Application
1. Introduction
1.1. Purpose and Innovation of This Work
1.2. Advantages and Disadvantages of the Proposed Evaluation Method
- The use of ANNs can reduce the cost and resources associated with further fire and heat transfer testing. This also contributes towards the sustainability targets of research institutions and academic organisations by reducing the consumption of fuel and test samples and the need to construct testing facilities. The proposed methodology provides an opportunity for a more efficient and accurate assessment of the ANN’s results.
- Artificial Neural Networks can also help reduce the required time for developing and analysing computationally heavy 3D Finite Element heat transfer models. This further reduces the cost associated with acquiring powerful computers to support the aforementioned models. Despite this powerful feature, the use of ANNs needs to be regulated and evaluated; the methodology developed herein sets the foundation for such an evaluation framework.
- ANNs can improve and simplify the reproducibility of heat transfer and fire performance experiments, and they introduce efficiency into the exploratory amendment of experiment parameters. Previously recorded measurements can feed into and help construct the ANN model, whose parameters can then be tweaked to replicate and explore variations of the original experimental arrangements. As this process involves the amendment and adjustment of the ANN’s input data, the workflow introduced in this study can highlight the potential pitfalls arising from such retrospective processing.
- The methodology for input data review proposed herein aims to prevent misleading or insufficiently precise ANN results from being integrated into further research or field applications.
- This body of work also raises awareness regarding the need for a standardised methodology for assessing ANN performance and ANN model development.
- The development of ANNs usually requires a large amount of input data, which are not always easy or affordable to obtain. The proposed methodology essentially introduces additional filters and methods for stricter input data quality assessment, which can make the above process even more involved.
- The interpretation of the ANN output is not always straightforward, and its precision can be difficult to judge unless a solid understanding of the expected results has been developed beforehand. The following sections of this study touch on this specific matter and make recommendations.
- To extend the scope of the scientific and industrial application of the proposed model within the context of the heat transfer and fire performance of wall assemblies, further research is needed to incorporate a wider range of material properties, fire loadings, and geometrical wall configurations.
- There are a plethora of proposed model architectures and ML approaches that could potentially enhance the performance of the proposed model. Although the detailed review and integration of such methodologies fall outside the scope of the present study, it would be beneficial for those to be reviewed and compared against the proposed model structure and topology.
1.3. Scope Limitations and Extended Application
1.4. Intuition of Artificial Neural Networks
1.5. Input Data Reference
1.6. Software and Hardware Utilised for This Research
2. Modelling and Methods
2.1. Masonry Wall Assembly Finite Element (FE) Models
2.1.1. Geometrical Features of Models’ Components
2.1.2. Model Material Properties and Combinations
- Clay brick density (ρ): 2000 kg/m3 and 1000 kg/m3.
- Clay brick thermal conductivity coefficient (λ): 0.8 W/(m·K) and 0.4 W/(m·K).
- Clay brick thermal emissivity coefficient (ε): 0.1, 0.5, 0.9.
- Insulation thickness (d): 0 mm, 50 mm, 100 mm.
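These combinations define the parametric grid of wall samples summarised in Table 1. The short Python sketch below reproduces that grid under the assumption, consistent with Table 1, that density and conductivity are paired per brick type while emissivity and the insulation arrangement vary independently; the variable names are illustrative only.

```python
from itertools import product

# Sketch of the parametric grid behind Table 1 (names are illustrative).
# Density and conductivity are paired per brick type; emissivity and the
# insulation arrangement are varied independently.
brick_types = [(1000, 0.4), (2000, 0.8)]          # (rho in kg/m3, lambda in W/(m.K))
emissivities = [0.1, 0.5, 0.9]
insulation = [("NoIns", "AbsIns", 0),             # non-insulated wall
              ("EPS", "Ext", 50), ("EPS", "Int", 50),
              ("EPS", "Ext", 100), ("EPS", "Int", 100)]

samples = [
    {"density": rho, "conductivity": lam, "emissivity": eps,
     "ins_type": ins, "ins_position": pos, "ins_thickness": d}
    for (rho, lam), (ins, pos, d), eps in product(brick_types, insulation, emissivities)
]
assert len(samples) == 30  # 2 brick types x 5 insulation arrangements x 3 emissivities
```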
2.2. Heat Transfer Analysis, Fire Load, Assumptions, and Conventions
2.2.1. Heat Transfer Fundamentals
2.2.2. Boundary Conditions and Fire Load
2.2.3. Modelling Heat Transfer Assumptions and Conventions
2.3. ANN Complete Input Dataset (CID)
- Index—variable purely counting the number of unique observations included in the data. Each temperature measurement generated by the FE model analysis (time step of 30 s) is used as a separate observation. The complete dataset includes a total of 21,630 observations. The index was excluded from any training or testing of the algorithm.
- Sample reference—this variable enabled the research team to easily cross-reference between the dataset tables and the analysis files. Similarly, it was excluded from any training or testing of the neural network.
- Insulation type—the data structure and ANN algorithm were developed with the intention of incorporating and analysing various insulation materials. Although the present study only considers EPS, it was considered useful to build some flexibility in the algorithm, enabling the further expansion of the scope of work in the future. It takes the values “EPS” and “NoIns”, representing the insulated and non-insulated wall samples, respectively. This categorical variable was encoded into a numerical one as part of the pre-processing phase.
- Insulation position—this variable represents the position of the insulation. It takes three values: “Int”, “Ext”, and “AbsIns”, representing insulation exposed to fire (internal insulation), insulation not exposed to fire (external insulation), and non-insulated wall samples, respectively. As a categorical variable, this was also encoded as part of the pre-processing phase.
- Time—to enable the close observation of the temperature development on the non-exposed face of the wall over time, it was considered necessary to include a “timestamp” variable. This was obtained directly from the FE model analysis output, where time and temperature are given as two separate columns. The temperature is recorded every 30 s over a duration of 6 h, yielding 720 observations for each of the 30 models.
- Temperature of the non-exposed face—this is the output of the finite element analysis models in °C as generated by COMSOL Multiphysics® simulation software. It reflects the temperature developed gradually within a reference area on the non-exposed face of the wall panels. This constitutes the dependent variable of the dataset and ultimately is the figure that the ANN algorithm will be trying to predict in the following steps.
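As a minimal illustration of the pre-processing described above, the sketch below loads the CID into a pandas DataFrame, drops the bookkeeping columns, and encodes the two categorical variables numerically; the file name and column labels are hypothetical, since the authors’ exact storage format is not restated here.

```python
import pandas as pd

# Hypothetical file name and column labels mirroring the CID description above.
cid = pd.read_csv("complete_input_dataset.csv")

# "Index" and "SampleRef" are bookkeeping columns excluded from training/testing.
cid = cid.drop(columns=["Index", "SampleRef"])

# Encode the categorical variables numerically (one-hot/dummy encoding).
cid = pd.get_dummies(cid, columns=["InsulationType", "InsulationPosition"])

# Dependent variable: temperature of the non-exposed face; everything else is a predictor.
y = cid["Temperature"]
X = cid.drop(columns=["Temperature"])
```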
2.4. Test Cases Examined
- ANN 1: As mentioned above, this uses the complete dataset for training and testing purposes.
- ANN 2: The second algorithm was developed using only the extreme values of insulation thickness. As such, the wall assemblies considered included the non-insulated ones and those insulated with 100 mm of EPS internally and externally.
- ANN 3: Only the extreme values of the emissivity coefficient were used for the development of the third algorithm. Wall assemblies with ε = 0.5 were disregarded and only those with ε = 0.1 and ε = 0.9 were included in the dataset.
- ANN 4: This was the most input data-deprived algorithm—a combination of the previous two cases. Only the extreme cases of insulation and thermal emissivity coefficient were offered to the algorithm at the training stage, considerably reducing the density of the offered input data.
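One possible way of deriving the four training datasets from the CID is sketched below (pandas; the column labels follow the previous sketch and are therefore hypothetical).

```python
import pandas as pd

cid = pd.read_csv("complete_input_dataset.csv")  # column labels are hypothetical

ann1 = cid.copy()                                            # ANN 1: complete dataset
ann2 = cid[cid["InsulationThickness"].isin([0, 100])]        # ANN 2: extreme insulation thicknesses only
ann3 = cid[cid["ThermalEmissivity"].isin([0.1, 0.9])]        # ANN 3: extreme emissivity values only
ann4 = ann2[ann2["ThermalEmissivity"].isin([0.1, 0.9])]      # ANN 4: both restrictions combined
```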
2.5. ANN Development Protocol
2.5.1. Input Data Selection and Organisation
2.5.2. Input Data Preprocessing
2.5.3. Data Splitting
2.5.4. Model Architecture and Structure Selection
2.5.5. Model Calibration
2.5.6. Model Validation
2.5.7. From Evaluation Methodology to Structured Results
- An initial review of the existing bibliography proposing a protocol for the development of ANNs.
- The development of one ANN algorithm based on the peer-reviewed protocol.
- The training of the ANN algorithm with varying degrees of input data (ending up with 4 regressors of the same architecture but different levels of input data).
- A comparison of the performance of the 4 regressor models (same architecture/different input data) and an evaluation of the impact of the quality of offered input data on each model’s predictive capability.
- The identification of the best-performing ANN and validation against a completely new set of data.
- An outline of observations, conclusions, and recommendations regarding the impact of input data quality and ways of mitigating the problem.
3. Results
3.1. Impact of Data Quality on ANN Performance
3.2. Performance of the Dominant ANN Model
4. Discussion
5. Conclusions and Further Research
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Kurfess, F.J. Artificial Intelligence; Elsevier: Amsterdam, The Netherlands, 2003; pp. 609–629.
- Soni, N.; Sharma, E.K.; Singh, N.; Kapoor, A. Artificial Intelligence in Business: From Research and Innovation to Market Deployment. Procedia Comput. Sci. 2020, 167, 2200–2210.
- Demianenko, M.; de Gaetani, C.I. A Procedure for Automating Energy Analyses in the BIM Context Exploiting Artificial Neural Networks and Transfer Learning Technique. Energies 2021, 14, 2956.
- Paliwal, M.; Kumar, U.A. Neural networks and statistical techniques: A review of applications. Expert Syst. Appl. 2009, 36, 2–17.
- Naser, M.; Kodur, V.; Thai, H.-T.; Hawileh, R.; Abdalla, J.; Degtyarev, V.V. StructuresNet and FireNet: Benchmarking databases and machine learning algorithms in structural and fire engineering domains. J. Build. Eng. 2021, 44, 102977.
- Tran-Ngoc, H.; Khatir, S.; de Roeck, G.; Bui-Tien, T.; Wahab, M.A. An efficient artificial neural network for damage detection in bridges and beam-like structures by improving training parameters using cuckoo search algorithm. Eng. Struct. 2019, 199, 109637.
- Srikanth, I.; Arockiasamy, M. Deterioration models for prediction of remaining useful life of timber and concrete bridges: A review. J. Traffic Transp. Eng. 2020, 7, 152–173.
- Ngarambe, J.; Yun, G.Y.; Santamouris, M. The use of artificial intelligence (AI) methods in the prediction of thermal comfort in buildings: Energy implications of AI-based thermal comfort controls. Energy Build. 2020, 211, 109807.
- Wu, W.; Dandy, G.C.; Maier, H. Protocol for developing ANN models and its application to the assessment of the quality of the ANN model development process in drinking water quality modelling. Environ. Model. Softw. 2014, 54, 108–127.
- Abdolrasol, M.G.M. Artificial Neural Networks Based Optimization Techniques: A Review. Electronics 2021, 10, 2689.
- Naser, M. Properties and material models for modern construction materials at elevated temperatures. Comput. Mater. Sci. 2019, 160, 16–29.
- Véstias, M.P.; Duarte, R.P.; de Sousa, J.T.; Neto, H.C. Moving Deep Learning to the Edge. Algorithms 2020, 13, 125.
- Moon, G.E.; Kwon, H.; Jeong, G.; Chatarasi, P.; Rajamanickam, S.; Krishna, T. Evaluating Spatial Accelerator Architectures with Tiled Matrix-Matrix Multiplication. IEEE Trans. Parallel Distrib. Syst. 2022, 33, 1002–1014.
- Naser, M.Z. Mechanistically Informed Machine Learning and Artificial Intelligence in Fire Engineering and Sciences. Fire Technol. 2021, 57, 2741–2784.
- Olawoyin, A.; Chen, Y. Predicting the Future with Artificial Neural Network. Procedia Comput. Sci. 2018, 140, 383–392.
- Kim, P. MATLAB Deep Learning; Springer: Berlin/Heidelberg, Germany, 2017.
- Salehi, H.; Burgueño, R. Emerging artificial intelligence methods in structural engineering. Eng. Struct. 2018, 171, 170–189.
- Al-Jabri, K.; Al-Alawi, S.; Al-Saidy, A.; Alnuaimi, A. An artificial neural network model for predicting the behaviour of semi-rigid joints in fire. Adv. Steel Constr. 2009, 5, 452–464.
- Tealab, A. Time series forecasting using artificial neural networks methodologies: A systematic review. Futur. Comput. Inform. J. 2018, 3, 334–340.
- Kanellopoulos, G.; Koutsomarkos, V.; Kontoleon, K.; Georgiadis-Filikas, K. Numerical Analysis and Modelling of Heat Transfer Processes through Perforated Clay Brick Masonry Walls. Procedia Environ. Sci. 2017, 38, 492–499.
- Kontoleon, K.J.; Theodosiou, T.G.; Saba, M.; Georgiadis-Filikas, K.; Bakas, I.; Liapi, E. The effect of elevated temperature exposure on the thermal behaviour of insulated masonry walls. In Proceedings of the 1st International Conference on Environmental Design, Athens, Greece, 24–25 October 2020; pp. 231–238.
- Pedregosa, F.; Michel, V.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Vanderplas, J.; Cournapeau, D.; Varoquaux, G.; Gramfort, A.; et al. Scikit-learn: Machine Learning in Python. 2011. Available online: http://scikit-learn.sourceforge.net (accessed on 17 October 2021).
- Nguyen, T.-D.; Meftah, F.; Chammas, R.; Mebarki, A. The behaviour of masonry walls subjected to fire: Modelling and parametrical studies in the case of hollow burnt-clay bricks. Fire Saf. J. 2009, 44, 629–641.
- Nguyen, T.D.; Meftah, F. Behavior of clay hollow-brick masonry walls during fire. Part 1: Experimental analysis. Fire Saf. J. 2012, 52, 55–64.
- Nguyen, T.D.; Meftah, F. Behavior of hollow clay brick masonry walls during fire. Part 2: 3D finite element modeling and spalling assessment. Fire Saf. J. 2014, 66, 35–45.
- Bergman, T.L.; Lavine, A.S.; Incropera, F.P.; DeWitt, D.P. Introduction to Heat Transfer, 6th ed.; John Wiley and Sons: Hoboken, NJ, USA, 2011.
- Fioretti, R.; Principi, P. Thermal Performance of Hollow Clay Brick with Low Emissivity Treatment in Surface Enclosures. Coatings 2014, 4, 715–731.
- British Standards Institution. Eurocode 1: Actions on Structures: Part 1.2 General Actions: Actions on Structures Exposed to Fire; BSI: London, UK, 2002.
- Du, Y.; Li, G.-Q. A new temperature-time curve for fire-resistance analysis of structures. Fire Saf. J. 2012, 54, 113–120.
- Mehta, S.; Biederman, S.; Shivkumar, S. Thermal degradation of foamed polystyrene. J. Mater. Sci. 1995, 30, 2944–2949.
- Maier, H.R.; Jain, A.; Dandy, G.C.; Sudheer, K. Methods used for the development of neural networks for the prediction of water resource variables in river systems: Current status and future directions. Environ. Model. Softw. 2010, 25, 891–909.
- Lu, Y.; Wang, S.; Li, S.; Zhou, C. Particle swarm optimizer for variable weighting in clustering high-dimensional data. Mach. Learn. 2009, 82, 43–70.
- Suits, D.B. Use of Dummy Variables in Regression Equations. J. Am. Stat. Assoc. 1957, 52, 548.
- Gardner, M.W.; Dorling, S.R. Artificial neural networks (the multilayer perceptron)—A review of applications in the atmospheric sciences. Atmos. Environ. 1998, 32, 2627–2636.
- Buitinck, L.; Louppe, G.; Blondel, M.; Pedregosa, F.; Müller, A.C.; Grisel, O.; Niculae, V.; Prettenhofer, P.; Gramfort, A.; Grobler, J.; et al. API Design for Machine Learning Software: Experiences from the Scikit-learn Project. 2013. Available online: https://github.com/scikit-learn (accessed on 17 October 2021).
- Jørgensen, C.; Grastveit, R.; Garzón-Roca, J.; Payá-Zaforteza, I.; Adam, J.M. Bearing capacity of steel-caged RC columns under combined bending and axial loads: Estimation based on Artificial Neural Networks. Eng. Struct. 2013, 56, 1262–1270.
- Kulathunga, N.; Ranasinghe, N.; Vrinceanu, D.; Kinsman, Z.; Huang, L.; Wang, Y. Effects of Nonlinearity and Network Architecture on the Performance of Supervised Neural Networks. Algorithms 2021, 14, 51.
- Kingma, D.; Ba, J. Adam: A Method for Stochastic Optimization. Available online: https://arxiv.org/abs/1412.6980 (accessed on 22 December 2014).
- Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536.
- Stanovov, V.; Akhmedova, S.; Semenkin, E. Differential Evolution with Linear Bias Reduction in Parameter Adaptation. Algorithms 2020, 13, 283.
- Dawson, C.; Abrahart, R.; See, L. HydroTest: A web-based toolbox of evaluation metrics for the standardised assessment of hydrological forecasts. Environ. Model. Softw. 2007, 22, 1034–1052.
Insulation Position and Thickness | Brick Density (kg/m3) | Thermal Conductivity Coefficient (W/(m·K)) | Thermal Emissivity Coefficient | Sample Reference
---|---|---|---|---
No insulation | 1000 | 0.4 | 0.1 | Smpl1-1
No insulation | 1000 | 0.4 | 0.5 | Smpl1-2
No insulation | 1000 | 0.4 | 0.9 | Smpl1-3
No insulation | 2000 | 0.8 | 0.1 | Smpl2-1
No insulation | 2000 | 0.8 | 0.5 | Smpl2-2
No insulation | 2000 | 0.8 | 0.9 | Smpl2-3
Non-exposed EPS (50 mm) | 1000 | 0.4 | 0.1 | Smpl3-1
Non-exposed EPS (50 mm) | 1000 | 0.4 | 0.5 | Smpl3-2
Non-exposed EPS (50 mm) | 1000 | 0.4 | 0.9 | Smpl3-3
Non-exposed EPS (50 mm) | 2000 | 0.8 | 0.1 | Smpl4-1
Non-exposed EPS (50 mm) | 2000 | 0.8 | 0.5 | Smpl4-2
Non-exposed EPS (50 mm) | 2000 | 0.8 | 0.9 | Smpl4-3
EPS exposed to fire (50 mm) | 1000 | 0.4 | 0.1 | Smpl5-1
EPS exposed to fire (50 mm) | 1000 | 0.4 | 0.5 | Smpl5-2
EPS exposed to fire (50 mm) | 1000 | 0.4 | 0.9 | Smpl5-3
EPS exposed to fire (50 mm) | 2000 | 0.8 | 0.1 | Smpl6-1
EPS exposed to fire (50 mm) | 2000 | 0.8 | 0.5 | Smpl6-2
EPS exposed to fire (50 mm) | 2000 | 0.8 | 0.9 | Smpl6-3
Non-exposed EPS (100 mm) | 1000 | 0.4 | 0.1 | Smpl7-1
Non-exposed EPS (100 mm) | 1000 | 0.4 | 0.5 | Smpl7-2
Non-exposed EPS (100 mm) | 1000 | 0.4 | 0.9 | Smpl7-3
Non-exposed EPS (100 mm) | 2000 | 0.8 | 0.1 | Smpl8-1
Non-exposed EPS (100 mm) | 2000 | 0.8 | 0.5 | Smpl8-2
Non-exposed EPS (100 mm) | 2000 | 0.8 | 0.9 | Smpl8-3
EPS exposed to fire (100 mm) | 1000 | 0.4 | 0.1 | Smpl9-1
EPS exposed to fire (100 mm) | 1000 | 0.4 | 0.5 | Smpl9-2
EPS exposed to fire (100 mm) | 1000 | 0.4 | 0.9 | Smpl9-3
EPS exposed to fire (100 mm) | 2000 | 0.8 | 0.1 | Smpl10-1
EPS exposed to fire (100 mm) | 2000 | 0.8 | 0.5 | Smpl10-2
EPS exposed to fire (100 mm) | 2000 | 0.8 | 0.9 | Smpl10-3
Material | Density (kg/m3) | Thermal Conductivity Coefficient (W/(m·K)) | Specific Heat Capacity (J/(kg·K))
---|---|---|---
Clay bricks | As above | As above | 1000
Insulation (EPS) | 30 | 0.035 | 1500
Cement mortar | 2000 | 1.400 | 1000
Index | Sample Ref | Brick Density (kg/m3) | Thermal Conductivity Coef. (W/(m·K)) | Thermal Emissivity Coef. | Insulation Thickness (mm) | Insulation Type | Insulation Position | Time (s) | Temperature of Non-Exposed Face (°C) |
---|---|---|---|---|---|---|---|---|---|
… | … | … | … | … | … | … | … | … | … |
11,449 | Smpl6-1 | 2000 | 0.8 | 0.1 | 50 | EPS | Int | 18,990 | 41.77246 |
11,450 | Smpl6-1 | 2000 | 0.8 | 0.1 | 50 | EPS | Int | 19,020 | 41.85622 |
11,451 | Smpl6-1 | 2000 | 0.8 | 0.1 | 50 | EPS | Int | 19,050 | 41.94014 |
11,452 | Smpl6-1 | 2000 | 0.8 | 0.1 | 50 | EPS | Int | 19,080 | 42.02421 |
11,453 | Smpl6-1 | 2000 | 0.8 | 0.1 | 50 | EPS | Int | 19,110 | 42.10843 |
… | … | … | … | … | … | … | … | … | … |
Sample Reference | Properties of Wall Sample | ANN 1 | ANN 2 | ANN 3 | ANN 4 |
---|---|---|---|---|---|
Smpl1-1 | ρ = 1000 kg/m3, λ: 0.4 W/(m∙K), ε = 0.1 | ✓ | ✓ | ✓ | ✓ |
Smpl1-2 | ρ = 1000 kg/m3, λ: 0.4 W/(m∙K), ε = 0.5 | ✓ | ✓ | ||
Smpl1-3 | ρ = 1000 kg/m3, λ: 0.4 W/(m∙K), ε = 0.9 | ✓ | ✓ | ✓ | ✓ |
Smpl2-1 | ρ = 2000 kg/m3, λ: 0.8 W/(m∙K), ε = 0.1 | ✓ | ✓ | ✓ | ✓ |
Smpl2-2 | ρ = 2000 kg/m3, λ: 0.8 W/(m∙K), ε = 0.5 | ✓ | ✓ | ||
Smpl2-3 | ρ = 2000 kg/m3, λ: 0.8 W/(m∙K), ε = 0.9 | ✓ | ✓ | ✓ | ✓ |
Smpl3-1 | ρ = 1000 kg/m3, λ: 0.4 W/(m∙K), ε = 0.1, d = 50 mm, External | ✓ | ✓ | ||
Smpl3-2 | ρ = 1000 kg/m3, λ: 0.4 W/(m∙K), ε = 0.5, d = 50 mm, External | ✓ | |||
Smpl3-3 | ρ = 1000 kg/m3, λ: 0.4 W/(m∙K), ε = 0.9, d = 50 mm, External | ✓ | ✓ | ||
Smpl4-1 | ρ = 2000 kg/m3, λ: 0.8 W/(m∙K), ε = 0.1, d = 50 mm, External | ✓ | ✓ | ||
Smpl4-2 | ρ = 2000 kg/m3, λ: 0.8 W/(m∙K), ε = 0.5, d = 50 mm, External | ✓ | |||
Smpl4-3 | ρ = 2000 kg/m3, λ: 0.8 W/(m∙K), ε = 0.9, d = 50 mm, External | ✓ | ✓ | ||
Smpl5-1 | ρ = 1000 kg/m3, λ: 0.4 W/(m∙K), ε = 0.1, d = 50 mm, Internal | ✓ | ✓ | ||
Smpl5-2 | ρ = 1000 kg/m3, λ: 0.4 W/(m∙K), ε = 0.5, d = 50 mm, Internal | ✓ | |||
Smpl5-3 | ρ = 1000 kg/m3, λ: 0.4 W/(m∙K), ε = 0.9, d = 50 mm, Internal | ✓ | ✓ | ||
Smpl6-1 | ρ = 2000 kg/m3, λ: 0.8 W/(m∙K), ε = 0.1, d = 50 mm, Internal | ✓ | ✓ | ||
Smpl6-2 | ρ = 2000 kg/m3, λ: 0.8 W/(m∙K), ε = 0.5, d = 50 mm, Internal | ✓ | |||
Smpl6-3 | ρ = 2000 kg/m3, λ: 0.8 W/(m∙K), ε = 0.9, d = 50 mm, Internal | ✓ | ✓ | ||
Smpl7-1 | ρ = 1000 kg/m3, λ: 0.4 W/(m∙K), ε = 0.1, d = 100 mm, External | ✓ | ✓ | ✓ | ✓ |
Smpl7-2 | ρ = 1000 kg/m3, λ: 0.4 W/(m∙K), ε = 0.5, d = 100 mm, External | ✓ | ✓ | ||
Smpl7-3 | ρ = 1000 kg/m3, λ: 0.4 W/(m∙K), ε = 0.9, d = 100 mm, External | ✓ | ✓ | ✓ | ✓ |
Smpl8-1 | ρ = 2000 kg/m3, λ: 0.8 W/(m∙K), ε = 0.1, d = 100 mm, External | ✓ | ✓ | ✓ | ✓ |
Smpl8-2 | ρ = 2000 kg/m3, λ: 0.8 W/(m∙K), ε = 0.5, d = 100 mm, External | ✓ | ✓ | ||
Smpl8-3 | ρ = 2000 kg/m3, λ: 0.8 W/(m∙K), ε = 0.9, d = 100 mm, External | ✓ | ✓ | ✓ | ✓ |
Smpl9-1 | ρ = 1000 kg/m3, λ: 0.4 W/(m∙K), ε = 0.1, d = 100 mm, Internal | ✓ | ✓ | ✓ | ✓ |
Smpl9-2 | ρ = 1000 kg/m3, λ: 0.4 W/(m∙K), ε = 0.5, d = 100 mm, Internal | ✓ | ✓ | ||
Smpl9-3 | ρ = 1000 kg/m3, λ: 0.4 W/(m∙K), ε = 0.9, d = 100 mm, Internal | ✓ | ✓ | ✓ | ✓ |
Smpl10-1 | ρ = 2000 kg/m3, λ: 0.8 W/(m∙K), ε = 0.1, d = 100 mm, Internal | ✓ | ✓ | ✓ | ✓ |
Smpl10-2 | ρ = 2000 kg/m3, λ: 0.8 W/(m∙K), ε = 0.5, d = 100 mm, Internal | ✓ | ✓ | ||
Smpl10-3 | ρ = 2000 kg/m3, λ: 0.8 W/(m∙K), ε = 0.9, d = 100 mm, Internal | ✓ | ✓ | ✓ | ✓ |
Sample Ref | Brick Density (kg/m3) | Thermal Conductivity Coef. (W/(m·K)) | Thermal Emissivity Coef. | Insulation Thickness (mm) | Insulation Type | Insulation Position |
---|---|---|---|---|---|---|
Test Sample 1 | 2000 | 0.8 | 0.9 | 25 | EPS | Ext |
Test Sample 2 | 2000 | 0.8 | 0.9 | 25 | EPS | Int |
Test Sample 3 | 2000 | 0.8 | 0.7 | 0 | NoIns | AbsIns |
Test Sample 4 | 2000 | 0.8 | 0.3 | 0 | NoIns | AbsIns |
Test Sample 5 | 1500 | 0.6 | 0.9 | 0 | NoIns | AbsIns |
Test Sample 6 | 1500 | 0.6 | 0.7 | 75 | EPS | Int |
Hyperparameter | Value | Comments |
---|---|---|
Number of epochs | 50 | Selected following ad hoc experimentation with various values. |
Learning rate | 0.001 | Default value of the Adam optimiser. |
Batch size | 10 | Allows more efficient processing of the large dataset. |
Activation function | ReLU | Rectified linear unit activation function with default settings. |
Optimiser | Adam | Default settings of the Adam optimiser [38] were used. |
Loss function | MSE | Mean squared error loss function. |
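By way of illustration, a Keras-style regressor reflecting the hyperparameters tabulated above could be assembled as sketched below; the two-hidden-layer topology and layer sizes are assumptions made only for this sketch, as the exact layer sizes are not restated in this section.

```python
from tensorflow import keras

def build_model(n_features: int) -> keras.Model:
    # Illustrative two-hidden-layer topology (layer sizes are an assumption).
    model = keras.Sequential([
        keras.layers.Input(shape=(n_features,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(1),  # predicted temperature of the non-exposed face
    ])
    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=0.001),  # default Adam learning rate
        loss="mse",                                             # mean squared error
    )
    return model

# Training with the tabulated values:
# model = build_model(X_train.shape[1])
# model.fit(X_train, y_train, epochs=50, batch_size=10)
```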
Metric | Reference | Formula 1 | Perfect Score |
---|---|---|---|
Absolute maximum error | AME | $\mathrm{AME} = \max_i \lvert \hat{y}_i - y_i \rvert$ | 0.0 |
Mean absolute error | MAE | $\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n} \lvert \hat{y}_i - y_i \rvert$ | 0.0 |
Relative absolute error | RAE | $\mathrm{RAE} = \frac{\sum_{i=1}^{n} \lvert \hat{y}_i - y_i \rvert}{\sum_{i=1}^{n} \lvert y_i - \bar{y} \rvert}$ | 0.0 |
Peak difference | PDIFF | $\mathrm{PDIFF} = \max_i (y_i) - \max_i (\hat{y}_i)$ | 0.0 |
Per cent error in peak | PEP | $\mathrm{PEP} = 100 \, \frac{\max_i (y_i) - \max_i (\hat{y}_i)}{\max_i (y_i)}$ | 0.0 |
Root mean squared error | RMSE | $\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} (\hat{y}_i - y_i)^2}$ | 0.0 |
Coefficient of determination | R2 | $R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}$ | 1.0 |
1 $y_i$: observed (FE model) temperature; $\hat{y}_i$: ANN prediction; $\bar{y}$: mean of the observed values; $n$: number of observations.
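These metrics can be computed directly from the observed (FE) and predicted temperature series; a minimal NumPy sketch is given below, using the HydroTest-style sign convention (observed peak minus predicted peak) assumed in the reconstructed formulas above.

```python
import numpy as np

def evaluate(y_obs: np.ndarray, y_pred: np.ndarray) -> dict:
    """Evaluation metrics for one wall assembly (observed = FE result, predicted = ANN output)."""
    err = y_pred - y_obs
    return {
        "AME":   np.max(np.abs(err)),
        "MAE":   np.mean(np.abs(err)),
        "RAE":   np.sum(np.abs(err)) / np.sum(np.abs(y_obs - y_obs.mean())),
        "PDIFF": np.max(y_obs) - np.max(y_pred),
        "PEP":   100.0 * (np.max(y_obs) - np.max(y_pred)) / np.max(y_obs),
        "RMSE":  np.sqrt(np.mean(err ** 2)),
        "R2":    1.0 - np.sum(err ** 2) / np.sum((y_obs - y_obs.mean()) ** 2),
    }
```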
Wall Assembly | AME | MAE | RAE | PDIFF | PEP | RMSE | R2 |
---|---|---|---|---|---|---|---|
WA1 | 1.80 | 0.56 | 2.33% | 1.80 | 1.7% | 0.75 | 0.9993 |
WA2 | 1.63 | 0.78 | 2.78% | 1.55 | 1.4% | 0.88 | 0.9992 |
WA3 | 3.03 | 1.10 | 4.62% | 2.55 | 2.2% | 1.29 | 0.9981 |
WA4 | 2.48 | 0.83 | 2.60% | 1.39 | 1.2% | 1.05 | 0.9991 |
WA5 | 1.69 | 0.45 | 1.42% | −0.65 | −0.5% | 0.55 | 0.9998 |
WA6 | 1.67 | 0.64 | 1.97% | 0.70 | 0.6% | 0.78 | 0.9995 |
Average | 2.05 | 0.73 | 2.62% | 1.22 | 1.1% | 0.88 | 0.9992 |
Wall Assembly | AME | MAE | RAE | PDIFF | PEP | RMSE | R2 |
---|---|---|---|---|---|---|---|
WA1 | 8.45 | 2.69 | 11.3% | 6.76 | 6.27% | 3.45 | 0.9844 |
WA2 | 11.44 | 3.85 | 13.7% | 2.83 | 2.51% | 5.07 | 0.9741 |
WA3 | 32.04 | 5.06 | 21.3% | 2.34 | 2.03% | 9.74 | 0.8921 |
WA4 | 22.88 | 6.87 | 21.6% | 11.85 | 10.37% | 8.84 | 0.9363 |
WA5 | 13.09 | 4.86 | 15.3% | 3.90 | 2.97% | 6.25 | 0.9702 |
WA6 | 9.96 | 5.73 | 17.7% | −0.30 | −0.24% | 6.57 | 0.9668 |
Average | 16.31 | 4.85 | 16.82% | 4.56 | 4.0% | 6.65 | 0.9540 |
Wall Assembly | AME | MAE | RAE | PDIFF | PEP | RMSE | R2 |
---|---|---|---|---|---|---|---|
WA1 | 28.46 | 10.87 | 45.59% | 22.58 | 20.95% | 15.05 | 0.7023 |
WA2 | 45.00 | 16.21 | 57.80% | 45.00 | 39.93% | 22.55 | 0.4870 |
WA3 | 41.09 | 9.64 | 40.57% | 27.81 | 24.18% | 16.01 | 0.7081 |
WA4 | 48.84 | 15.12 | 47.49% | 27.71 | 24.25% | 22.35 | 0.5922 |
WA5 | 28.77 | 11.37 | 35.85% | 14.56 | 11.07% | 15.06 | 0.8271 |
WA6 | 30.34 | 14.99 | 46.29% | 24.19 | 19.55% | 19.18 | 0.7171 |
Average | 37.08 | 13.03 | 45.60% | 26.98 | 23.3% | 18.37 | 0.6723 |
Wall Assembly | AME | MAE | RAE | PDIFF | PEP | RMSE | R2 |
---|---|---|---|---|---|---|---|
WA1 | 18.13 | 8.36 | 35.08% | 18.13 | 16.82% | 10.18 | 0.8639 |
WA2 | 22.03 | 10.46 | 37.30% | 14.55 | 12.92% | 12.39 | 0.8451 |
WA3 | 29.39 | 9.82 | 41.35% | 28.83 | 25.06% | 13.25 | 0.8002 |
WA4 | 33.29 | 15.44 | 48.49% | 29.33 | 25.66% | 19.49 | 0.6898 |
WA5 | 31.66 | 8.25 | 26.01% | 31.65 | 24.06% | 12.56 | 0.8797 |
WA6 | 22.70 | 7.91 | 24.42% | 22.70 | 18.34% | 10.58 | 0.9140 |
Average | 26.20 | 10.04 | 35.44% | 24.20 | 20.5% | 13.07 | 0.8321 |
Neural Network | Loss |
---|---|
ANN 1 | 5.0840 × 10⁻⁴ |
ANN 2 | 5.0721 × 10⁻⁴ |
ANN 3 | 2.1836 × 10⁻⁴ |
ANN 4 | 2.4811 × 10⁻⁴ |
Test Sample | AME | MAE | RAE | PDIFF | PEP | RMSE | R2 |
---|---|---|---|---|---|---|---|
TS1 | 3.83 | 0.90 | 1.89% | −0.44 | −0.28% | 1.20 | 0.9995 |
TS 2 | 4.26 | 1.93 | 3.72% | 3.72 | 2.03% | 2.48 | 0.9981 |
TS 3 | 6.06 | 2.08 | 6.37% | 4.42 | 3.30% | 2.73 | 0.9946 |
TS 4 | 11.42 | 3.80 | 26.50% | 11.42 | 15.04% | 5.36 | 0.8990 |
TS 5 | 12.43 | 4.16 | 10.32% | −4.02 | −2.70% | 5.81 | 0.9832 |
TS 6 | 4.13 | 1.92 | 4.64% | 4.03 | 2.66% | 2.25 | 0.9976 |
Average | 7.02 | 2.47 | 8.91% | 3.19 | 3.3% | 3.31 | 0.9787 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).