Article

Performance Evaluation of Artificial Neural Networks (ANN) Predicting Heat Transfer through Masonry Walls Exposed to Fire

by Iasonas Bakas * and Karolos J. Kontoleon
Laboratory of Building Construction & Building Physics, Department of Civil Engineering, Faculty of Engineering, Aristotle University of Thessaloniki, GR-54124 Thessaloniki, Greece
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(23), 11435; https://doi.org/10.3390/app112311435
Submission received: 24 October 2021 / Revised: 23 November 2021 / Accepted: 27 November 2021 / Published: 2 December 2021
(This article belongs to the Special Issue New Trends in Efficient Buildings)

Featured Application

The use of Artificial Neural Networks for the prediction of heat transfer through a variety of masonry wall build-ups exposed to elevated temperatures on one side.

Abstract

The multiple benefits Artificial Neural Networks (ANNs) bring in terms of time expediency and reduction in required resources establish them as an extremely useful tool for engineering researchers and field practitioners. However, the blind acceptance of their predicted results needs to be avoided, and a thorough review and assessment of the output are necessary prior to adopting them in further research or field operations. This study explores the use of ANNs on a heat transfer application. It features masonry wall assemblies exposed to elevated temperatures on one side, as generated by the standard fire curve proposed by Eurocode EN1991-1-2. A juxtaposition with previously published ANN development processes and protocols is attempted, while the end results of the developed algorithms are evaluated in terms of accuracy and reliability. The significance of the careful consideration of the density and quality of input data offered to the model, in conjunction with an appropriate algorithm architecture, is highlighted. The risk of misleading metric results is also brought to attention, while useful steps for mitigating such risks are discussed. Finally, proposals for the further integration of ANNs in heat transfer research and applications are made.

1. Introduction

1.1. Purpose and Innovation of This Work

Machine Learning, and Artificial Neural Networks in particular, is increasingly becoming the method of choice for engineering researchers and practitioners alike. Although the notion was introduced in 1956 [1], its practical adoption was delayed until the early 1990s [2], mainly due to inadequate computational power.
Arguably, the benefits Artificial Neural Networks (ANNs) can bring to a scientific project make them an attractive method for analysing data. Enabling researchers to make predictions regarding the phenomenon they study without the need for computationally heavy numerical models or expensive testing facilities, in conjunction with Finite Element or BIM model analysis [3], significantly reduces the cost of their research. In the same direction, the reduction in resources used for experiments and the ability to draw information from previous experimental work could put the use of ANNs at the forefront of efforts for sustainability within the scientific community. In terms of reliability and performance, it has been shown that, usually, ANNs outperform or are of equivalent accuracy to traditional linear and nonlinear statistical analysis methods [4]. Despite these apparent advantages, the adoption of ANNs in certain research fields, such as heat transfer and fire engineering in the context of civil engineering, is still slow [5].
This study’s contribution is twofold: it aims to fill part of the ANN utilisation gap in heat transfer research on the one hand, while stimulating a constructive dialogue regarding the evaluation and significance of input selection for ANNs on the other. The first aim is achieved through the development of an ANN architecture able to make predictions on the thermal performance of masonry wall assemblies exposed to elevated temperatures. Specifically, it proposes a neural network algorithm capable of predicting the temperature development on the non-exposed face of the wall assemblies over time, when the standard fire curve (as described in Eurocode EN1991-1-2) is applied to the other face of the wall. Although a number of studies explore the use of Machine Learning and Neural Networks in applications such as structural health monitoring [6] and the residual capacity of structural members [7], the adoption of these methods is still not as widespread within the field of fire engineering as in other scientific domains [5]. Equally, there are studies exploring the optimisation of the internal environment for user comfort through the use of Artificial Intelligence [8]; their main focus, though, largely deviates from heat transfer through structural members. The first part of this study’s unique contribution is that it combines aspects of fire engineering (exposure to fire and the change of phase of the components of the structural member) with the transient movement of heat through a masonry wall. This work paves the way for a range of practical applications involving heat transfer through composite, non-uniform, and perforated elements of varying material thermal properties, with prospects in the fields of civil engineering and materials science.
The second aim is approached through a systematic evaluation of the aforementioned algorithm’s performance in terms of accuracy and reliability. Different metrics are used to measure the credibility of the ANN in this specific application, and an intuitive comparison to the “ground truth” data is also employed to evaluate the final predictions. To ensure a robust model development and evaluation process, a step-by-step approach is adopted, closely following previously published ANN development protocols as a guideline [9]. The developed evaluation methodology constitutes an initial step towards the establishment of a standardised process for quantifying the impact of input data on the ANN’s performance. A parallel investigation of a wide range of research facets (other ML techniques, optimisation algorithms, and the integration of scientific fields beyond fire engineering and heat transfer, to name a few) could provide additional value to the current proposal. Nonetheless, this initial attempt already contributes towards building confidence in the accuracy and reliability of ANN results, and it sets a foundation for developing a filtering and evaluation process for the selection of input datasets through a quantitative and visual assessment of ANN predictive performance.
Several scientific efforts have focused on optimising and assessing the impact of alternative algorithm architectures and the choice of hyperparameters on the performance of ANNs [10]; on the other hand, this study explores the impact of varying degrees of input data quantity and quality. The initially developed ANN architecture is maintained identically throughout the study, while portions of the input data are gradually withheld, with the intention to observe the degradation of the ANN’s predictive capabilities. This results in four different regressor models with identical structures but significantly different ultimate performance. The predictions of each model are assessed and the risk of receiving misleading results is commented upon. Eventually, conclusions and recommendations on how to mitigate such risks arising from specific ANN metrics are made.
Arguably, the field of Machine Learning is constantly evolving and expanding to incorporate new bodies of theoretical study and practical applications. An extensive and analytical review of such works would undeniably be fruitful and beneficial. Nevertheless, the nature of such a thorough investigation does not align directly with the objectives of the current study. Cutting edge research currently attempts to optimise the architecture and performance of Machine Learning algorithms by exploring solutions ranging from the use of genetic algorithms [11] and dispersing the computational and data processing load to peripheral computers (the Edge) [12], to developing state of the art spatial accelerators to expedite big data processing [13]. Despite the utmost significance of these advancements, this study aims to establish an evaluation regime for the more conventional form of ANNs, which are still in the early stages of their implementation within the field of fire engineering and built-environment heat transfer applications.

1.2. Advantages and Disadvantages of the Proposed Evaluation Method

Artificial Neural Networks are being used ever more extensively, both in scientific research and in industrial applications. Despite the multifaceted benefits their use can bring to research and industrial processes, careful consideration needs to be given to achieving a balanced assessment of their advantages and drawbacks. The following list outlines some of the main advantages of using ANNs within the context of heat transfer and fire engineering and highlights the benefits of the methodology proposed herein in terms of enhancing these favourable attributes:
  • The use of ANNs can lead to a reduction in the cost and resources associated with further fire and heat transfer testing. This also contributes towards achieving the sustainability targets of research institutions and academic organisations by removing the need for the unnecessary use of fuel, test samples, and the construction of testing facilities. The proposed methodology provides an opportunity for a more efficient and accurate assessment of the ANN’s results.
  • Artificial Neural Networks can also help reduce the required time for developing and analysing computationally heavy 3D Finite Element heat transfer models. This further reduces the cost associated with acquiring powerful computers to support the aforementioned models. Despite this powerful feature, the use of ANNs needs to be regulated and evaluated; the methodology developed herein sets the foundation for such an evaluation framework.
  • ANNs can increase and simplify the reproducibility of heat transfer and fire performance experiments. ANNs introduce efficiency in the exploratory amendment of experiment parameters. Previously recorded figures can feed in and help construct the ANN model, providing the ability to then tweak and replicate the parameters to explore variations of the original experimental arrangements. As this process involves the amendment and adjustment of the input data of ANNs, the workflow introduced in this study can highlight the potential pitfalls arising from this retrospective processing.
  • The methodology for input data review proposed herein aims to contribute towards the prevention of misleading or not adequately precise results generated by the ANN, for integration into further research or field applications.
  • This body of work also raises awareness regarding the need for a standardised methodology for assessing ANN performance and ANN model development.
On the other hand, some of the main disadvantages and limitations that need to be addressed before ANNs and the proposed assessment methodology are fully integrated into the scientific and industrial workflow include:
  • The development of ANNs usually requires a large amount of input data, which are not always easy or affordable to obtain. The proposed methodology essentially introduces additional filters and methods for stricter input data quality assessment, which can make the above process even more involved.
  • The interpretation of the ANN output is not always straightforward, and its precision can be difficult to judge unless a solid understanding of the expected results has been developed beforehand. The following sections of this study touch on this specific matter and make recommendations.
  • To extend the scope of the scientific and industrial application of the proposed model within the context of the heat transfer and fire performance of wall assemblies, further research is needed to incorporate a wider range of material properties, fire loadings, and geometrical wall configurations.
  • There are a plethora of proposed model architectures and ML approaches that could potentially enhance the performance of the proposed model. Although the detailed review and integration of such methodologies fall outside the scope of the present study, it would be beneficial for those to be reviewed and compared against the proposed model structure and topology.

1.3. Scope Limitations and Extended Application

The evaluation process presented in the following paragraphs focuses on a specific application (heat transfer through masonry wall elements exposed to fire on one side) and, as such, it is optimised for the parameters describing this particular case. The parameters used as part of this study could be extended to capture additional features of the experimental and/or modelling data. This would provide a wider scope for the application of the evaluation methodology within the context of heat transfer and fire engineering. The additional parameters could include, but not be limited to, the moisture content of the wall assembly materials, the different types and positions of insulation, a range of different fire loads, ventilation parameters, the different compositions of masonry units with the associated variations in their thermal properties, and the different qualities of bedding and render mortar. The inclusion of a wider array of material properties, geometries, and thermal loading could greatly increase the scope of the modelling process and evaluation method.
Another aspect that could enhance the flexibility of the current proposal would be the integration of other Machine Learning (ML) techniques. Although an extensive comparative study is beyond the scope of the present work, a replication of the proposed process on the output generated by other ML algorithms and models would help build a more holistic understanding of the method’s transferability.
Although there is wide scope for the further expansion of the proposed methodology, a conscious effort has been made to primarily focus on its underpinning principles. This provides an overview of the cornerstones of the evaluation concept and enables a straightforward transfer of its universally applicable values. The aim of the final conclusions and recommendations is to furnish fellow researchers with a guideline on how to assess their ANN output regardless of their specific scientific field and practical application.

1.4. Intuition of Artificial Neural Networks

Artificial Neural Networks (ANN) belong to the sphere of supervised Machine Learning (ML), which in turn falls under the wider concept of Artificial Intelligence (AI) [14]. The cornerstones of ANNs are their architecture and training data. In terms of architecture, each ANN comprises multiple layers of interconnected nodes. These include the input layer (input nodes), responsible for receiving and offering the input data to the network; the output layer (output node), which represents the final prediction of the network; and the hidden layer(s) in between, which capture and factor the various features present in the available dataset. In the case of Deep Learning, the ANN category that the present study falls under, the networks involve more than one hidden layer. These are fully connected; all nodes of one layer are connected to all nodes of the previous and following layers.
The structure of ANNs resembles the structure of the human brain. Each node, mentioned above, represents a neuron, and the connections between those perform similar functions to brain synapses, as seen in Figure 1. Each neuron represents a feature of the training data and can be outlined by Equation (1), where f(x) is an appropriately selected activation function [15,16]:
$P = f\left(\sum_{i=1}^{n} w_i x_i + b\right)$ (1)
where P is the output of the neuron, $w_i$ are the weights applied to each feature fed to the neuron, $x_i$ are the input features themselves, and b is the applied bias.
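To make Equation (1) concrete, the following minimal Python sketch (our own illustration, not code from the study) computes the output of a single neuron; the feature values, weights, and bias below are arbitrary:

import numpy as np

def neuron_output(x, w, b, activation):
    # Equation (1): P = f(sum_i w_i * x_i + b)
    return activation(np.dot(w, x) + b)

relu = lambda s: np.maximum(0.0, s)   # the activation later used in this study's hidden layers
x = np.array([0.5, -1.2, 3.0])        # illustrative input features x_i
w = np.array([0.8, 0.1, -0.4])        # illustrative (e.g., randomly initialised) weights w_i
b = 0.1                               # illustrative bias b
print(neuron_output(x, w, b, relu))   # neuron output P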
Each neuron is initially assigned a random weight, quantifying the significance of the represented feature to the value of the dependent variable. These weights are constantly updated and ultimately optimised through an iterative training process, where each complete pass over the training data is termed an epoch. In each training loop, the contribution of each neuron to the dependent variable and the overall accuracy of the ANN prediction against a known value are evaluated. The process of comparing the network predictions against known values (the ground truth) is the founding principle of supervised learning [17]. To enable this internal assessment and feedback process, the use of a loss function at the end of each epoch is necessary. This feedforward, backpropagation training process is illustrated schematically in Figure 2.
Similar to the architecture of the network, the choice of an appropriate loss function, along with parameters including, but not limited to, the activation function, number of epochs, learning rate, and batch size, is dependent on the type, quantity, and quality of the input data and type of expected predictions [18]. These are called hyperparameters, and in conjunction with the ANN structure (number of hidden layers and number of neurons on each hidden layer) turn an ANN algorithm into a trainable and functioning predictive model. There is no prescriptive method for standardising the process of ANN development at the moment, in terms of hyperparameters and architecture selection [19]. However, previous scientific studies summarise useful conclusions that provide some guidance in this respect.
Apart from developing an effective architecture and choosing appropriate hyperparameters, an aspect that requires attention is network overfitting [1]. Although an algorithm can perform extremely well in making predictions on the given training and test set, there is the risk of excessive focus on the provided observations, leading to the reduced general applicability of the model. Overfitting limits the scope of use of an artificial neural network and can potentially lead to misleading prediction results. The impact of input data quality and quantity on the overfitting of the ANN is discussed in subsequent sections.
Fundamental details regarding the architecture of the ANN built and utilised for this study are presented in the following paragraphs, allowing for a more holistic understanding of the factors impacting the performance of the model. Similarly, the structure of the input and ground truth data is thoroughly explained in the relevant chapter of this paper, provoking thought on the relationship between input and output precision.

1.5. Input Data Reference

The development, implementation, and evaluation of the ANN algorithm constitute the focal point of this work. The basis of the heat transfer input data, used to train the ANN algorithm, became available after adjusting and interrogating finite element analysis models developed as part of previous scientific research. Specifically, Kanellopoulos and Koutsomarkos [20] set the foundation with their research on heat transfer through clay brick walls, focusing on the contribution of radiation through the voids of the masonry units. This was further developed by Kontoleon et al. [20,21] and their work on heat transfer through insulated and non-insulated masonry wall assemblies. The various amendments and pre-processing routines imposed on this foundation dataset are described in the following paragraphs.

1.6. Software and Hardware Utilised for This Research

The analysis of the physical phenomenon of heat transfer was undertaken using COMSOL Multiphysics® simulation software (v 5.3a). The development, training, and review of the ANN algorithm were carried out using the programming language Python and the application programming interface Keras along with its associated libraries. Other libraries used include Pandas, NumPy, Statistics, Scikit-learn [22], Matplotlib, and Seaborn (the last two for visualisation purposes). Finally, the ANN predictions were exported to MS Excel for ease of diagram formatting and presentation.
The finite element analysis and the development and training of the ANN were performed using a 64-bit operating system, running on an Intel Core i7-920 processor at 2.67 GHz, with 24 GB of RAM installed.

2. Modelling and Methods

2.1. Masonry Wall Assembly Finite Element (FE) Models

2.1.1. Geometrical Features of Models’ Components

The training of the ANN algorithm was based on data extracted from 30 different wall assembly FE models. Despite having a core model onto which the structure of the wall samples was founded, a range of variables was incorporated to enable a more accurate representation of the phenomenon of heat transfer through the wall samples. This pluralism also enhanced the understanding of the impact of various parameters on the process, through the use of ANNs in the following stages of the study.
The foundation model involved the use of a single skin of perforated clay bricks stacked with cement bedding mortar. In the simplest modelling case, either face of this brick core was covered with a single layer of cement render before it was exposed to fire on one side (details regarding the boundary conditions and fire load applied to the model are included in the following paragraphs). Specifically, the masonry core consisted of 250 mm wide × 140 mm deep × 330 mm long clay bricks incorporating an array of 18 full-length holes (12 holes of 26 mm × 34.7 mm and 6 more elongated holes of 26 mm × 56 mm), as indicated by the sections in Figure 3. The bedding mortar and cement render were consistently 10 mm thick.
Two variations of this basic matrix model were then developed and analysed; namely, a brick wall insulated with EPS internally (exposed to fire) and a brick wall insulated with EPS externally (non-exposed face of the wall). For each case, two EPS layer thicknesses were analysed: 50 mm and 100 mm.

2.1.2. Model Material Properties and Combinations

The range of material properties incorporated into the FE models enriched the available analysis cases and consequently provided a wide variety of output data. The properties and their values considered as part of the FE analysis included:
  • Clay brick density (ρ): 2000 kg/m3 and 1000 kg/m3.
  • Clay brick thermal conductivity coefficient (λ): 0.8 W/(m·K) and 0.4 W/(m·K).
  • Clay brick thermal emissivity coefficient (ε): 0.1, 0.5, 0.9.
  • Insulation thickness (d): 0 mm, 50 mm, 100 mm.
The above range of variables and material property values allowed for the formation of the following combinations. Each combination appearing in Table 1 represents a separate FE model whose analysis output subsequently offered portions of the overall input data for training, testing, and evaluating the ANN model.
The development of a complete finite element analysis model required the definition of some additional parameters (which were not explicitly used for the development of the ANN). The specific heat capacity (Cp) of all components needed to be specified, along with the density (ρ) and thermal conductivity coefficient (λ) of the cement mortar and insulation panels. Table 2 summarises this additional information.
It is worth highlighting that although most of the thermophysical and mechanical properties of the masonry structure fluctuate depending on the temperature of the components [23], for the needs of this study they have all been assumed to remain constant throughout the development of the phenomenon. It has also been shown that clay brick spalling affects the thermal performance of the masonry wall when exposed to fire [24,25]; this parameter has not been considered in the finite element models. The dramatic impact on the insulation’s performance, due to phase change when exposed to high temperatures, could not be ignored. The modelling convention used to reflect this characteristic behaviour is explained in the following paragraphs.

2.2. Heat Transfer Analysis, Fire Load, Assumptions, and Conventions

To effectively assess the performance of the ANN and evaluate the accuracy of its predictions, a basic description and understanding of the physical phenomenon under consideration are deemed necessary. The founding principles and equations of heat transfer are presented herein, in conjunction with the various assumptions, simplifications, and conventions used when constructing the relevant finite element analysis model.

2.2.1. Heat Transfer Fundamentals

Although the aim of this study is not to review and present the fundamental principles of heat transfer, it is worth briefly presenting the basic mechanisms operating on the finite element analysis model. The analysis commences with the wall samples in thermal balance, with an ambient room temperature applied on either side. At time t = 0 sec, an increasing fire load (as described in the following paragraph) is applied on one side (hereon referred to as “exposed”), initiating the combined heat transfer mechanism of convection and radiation. The fundamental Equations (2) and (3) govern the convective and radiation heat transfers, respectively [26], from the fire front towards the exposed wall surface:
$q_{conv} = h \left( T_s - T_\infty \right)$ (2)
$q_{rad} = \varepsilon \cdot \sigma \left( T_s^4 - T_{sur}^4 \right)$ (3)
where $q_{conv}$ and $q_{rad}$ are the resulting heat fluxes due to convection and radiation, respectively, h is the convection heat transfer coefficient (W/(m²·K)), $T_s$ is the surface temperature, $T_\infty$ is the surrounding fluid temperature (for convection), $T_{sur}$ is the surrounding environment temperature (for radiation), ε is the emissivity coefficient, and σ is the Stefan–Boltzmann constant (σ = 5.67 × 10⁻⁸ W/(m²·K⁴)).
The heat transfer mechanism of conduction is also activated as heat progressively travels through the solid parts of the wall assembly. Equation (4) is the governing relationship for heat conduction, and it was used to replicate the phenomenon on the finite element analysis model. The clay brick cavities play an integral part in the transfer of heat through the sample [27], and thus attention was paid to modelling them accurately. Heat transfer through radiation between the cavity walls and convection through air movement within the cavities has been considered. The previously described convection and radiation formulas apply:
$q_{cond,x} = -\lambda \frac{\partial T}{\partial x}$ (4)
where $q_{cond,x}$ is the resulting heat flux due to conduction, λ is the thermal conductivity, and $\partial T / \partial x$ is the temperature gradient in the direction of the heat’s transient movement.
The final stage of the transient movement of heat through the wall sample is its release to the environment through the non-exposed face of the section. The applicable heat transfer mechanisms are convection, through the surrounding air, and radiation.
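As an illustration only (the study’s heat transfer analysis was performed in COMSOL Multiphysics®, not in Python), Equations (2)–(4) can be expressed directly as functions; note that temperatures must be absolute (K) for the radiation term:

SIGMA = 5.67e-8  # Stefan–Boltzmann constant, W/(m^2·K^4)

def q_conv(h, T_s, T_inf):
    # Equation (2): convective heat flux; h in W/(m^2·K)
    return h * (T_s - T_inf)

def q_rad(eps, T_s, T_sur):
    # Equation (3): radiative heat flux; temperatures in K
    return eps * SIGMA * (T_s**4 - T_sur**4)

def q_cond(lam, dT_dx):
    # Equation (4): conductive heat flux (Fourier's law); lam in W/(m·K)
    return -lam * dT_dx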

2.2.2. Boundary Conditions and Fire Load

A definition of the applicable boundary conditions is necessary before the solution of the differential equations can be attempted. The analysis initiates assuming an ambient room temperature of 20 °C on both faces of the wall assembly. This condition is adhered to throughout the analysis for the non-exposed wall face.
At time t = 0 sec, the “exposed” wall face becomes subject to the increasing fire load represented by the standard fire curve ISO 834 as present in Eurocode EN1991-1-2 [28]. Although scientific research is being carried out to identify and compile new, more accurate fire curves and loading regimes [29], it was considered that the well-established methodology proposed by the Eurocodes currently fulfils the needs of this study. Equation (5) mathematically describes the relationship of temperature increase due to the applied fire load over time, while Figure 4 is the corresponding visual representation:
$Q_g = 20 + 345 \log_{10}(8t + 1)$ (5)
where t is the time in minutes and $Q_g$ is the developed gas temperature in °C.
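For reference, Equation (5) is straightforward to reproduce; the minimal sketch below is our own illustration and not part of the study’s toolchain:

import math

def standard_fire_curve(t_min):
    # Equation (5): ISO 834 / EN1991-1-2 standard fire curve
    # t_min: elapsed time in minutes; returns gas temperature Qg in °C
    return 20.0 + 345.0 * math.log10(8.0 * t_min + 1.0)

print(round(standard_fire_curve(30), 1))   # ~841.8 °C after 30 min
print(round(standard_fire_curve(360), 1))  # ~1213.6 °C after 6 h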
The boundaries between the various layers of the wall assembly follow the fundamental heat transfer equations, as described in the previous paragraph.

2.2.3. Modeling Heat Transfer Assumptions and Conventions

An element that required special attention was the behaviour of EPS when exposed to significantly high temperatures. The material is a combustible thermoplastic that remains stable when exposed to temperatures up to 100 °C. Its thermal properties start to degrade rapidly shortly after that, while temperatures upwards of 160 °C lead to the complete loss of the inflated bead structure and the liquefaction of the insulation boards. Approaching temperatures of 278 °C leads to the gasification of the material [30]. Given that the wall samples were subjected to temperatures significantly higher than 100 °C, the above behaviour had to be simulated.
As a modelling convention, and accepting that no material can be removed from the finite element analysis model once the analysis has commenced, a temperature-dependent variable was utilised to reproduce the reduced performance and ultimate collapse of EPS. The thermal conductivity coefficient (λEPS) of the insulating material was artificially increased when temperatures beyond 150 °C were encountered. That allowed for the unhindered transfer of heat through the melting EPS boards, resembling the gradual removal of the physical barrier. The coefficient was linearly increased from a value of λEPS = 0.035 W/(m∙K) at 150 °C to λEPS = 20 W/(m∙K) at 200 °C (practically no heat transfer resistance). Similarly, its density (ρEPS) and specific heat capacity (CpEPS) were reduced to negligible figures. Figure 5 demonstrates the steep material property changes for the melting EPS layer.
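A piecewise-linear interpolation reproduces this ramp; the sketch below is our own illustration of the convention (the actual ramp was defined inside the COMSOL model):

import numpy as np

def lambda_eps(T):
    # Thermal conductivity of the melting EPS layer, W/(m·K):
    # 0.035 below 150 °C, rising linearly to 20 at 200 °C, constant above
    return np.interp(T, [150.0, 200.0], [0.035, 20.0])

print(lambda_eps(100.0))  # 0.035 (intact EPS)
print(lambda_eps(175.0))  # ~10.02 (mid-ramp)
print(lambda_eps(250.0))  # 20.0 (fully degraded)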
It is worth highlighting that the failure of the EPS material would naturally result in the detachment and collapse of the attached render. To ensure the removal of the render’s beneficial insulating function, a similar approach was adopted, wherein all associated thermophysical properties were altered to reproduce its collapse. To ensure the change was introduced in a timely manner, an exploratory analysis was performed to identify the time for the interface between render and EPS to reach the critical temperature of 150 °C. This threshold temperature was achieved at t = 240 sec. The thermal conductivity coefficient (λrender) was increased from λrender = 1.40 W/(m∙K) to λrender = 2000 W/(m∙K), while its density (ρrender) and specific heat capacity (Cprender) were reduced to negligible values.
Although the same process would normally apply to both faces of the wall (exposed to fire and non-exposed), only the properties of the exposed insulation and render were altered within the scope of the present study. Similarly, no allowance has been made for the ignition of EPS volatiles.

2.3. ANN Complete Input Dataset (CID)

Since part of the algorithm’s structure (the size of the input layer) depends on the structure of the input dataset, it was considered important to finalise the data format prior to commencing the development of the ANN itself. A brief outline of the parameters used has been given in the description of the FE models; these were further refined and structured in an appropriate CSV file format ready for reading by the algorithm.
The file incorporated columns representing the independent variables defining the FE models described in previous paragraphs. A “timestamp” column was also added to allow for the observation and correlation of the temperature magnitude on the non-exposed face of the wall over time as the phenomenon developed. The last column of the dataset included the output of the FE model analysis, the temperature observed on the non-exposed face of the wall at a 30 sec time step when the standard Eurocode fire curve was applied to the other face. Table 3 provides a typical section of the dataset file offered to the ANN algorithm; the example reflects the values used for an internally insulated wall panel.
It is apparent that not all columns were necessary for the training, testing, and evaluation of the algorithm, thus the input data was further modified and stripped into a more appropriate format as part of the “preprocessing” phase (see following paragraphs). Nevertheless, these variables and references are useful for an intuitive understanding of the input data content and structure. Variables referring to material properties were discussed in detail in previous paragraphs. Other columns include:
  • Index—variable purely counting the number of unique observations included in the data. Each temperature measurement generated by the FE model analysis (time step of 30 s) is used as a separate observation. The complete dataset includes a total of 21,630 observations. The index was excluded from any training or testing of the algorithm.
  • Sample reference—this variable enabled the research team to easily cross-reference between the dataset tables and the analysis files. Similarly, it was excluded from any training or testing of the neural network.
  • Insulation type—the data structure and ANN algorithm were developed with the intention of incorporating and analysing various insulation materials. Although the present study only considers EPS, it was considered useful to build some flexibility in the algorithm, enabling the further expansion of the scope of work in the future. It takes the values “EPS” and “NoIns”, representing the insulated and non-insulated wall samples, respectively. This categorical variable was encoded into a numerical one as part of the pre-processing phase.
  • Insulation position—this variable represents the position of the insulation. It takes three values; “Int”, “Ext”, and “AbsIns”, representing insulation exposed to fire (internal insulation), insulation not exposed to fire (external insulation), and non-insulated wall samples, respectively. As a categorical variable, this was also encoded as part of the pre-processing phase.
  • Time—to enable the close observation of the temperature development on the non-exposed face of the wall over time, it was considered necessary to include a “timestamp” variable. This was obtained directly from the FE model analysis output, where time and temperature are given as two separate columns. The temperature is measured every 30 s for a duration of 6 h, yielding 721 observations (720 measurements plus the initial reading at t = 0 s) for each of the 30 models, consistent with the 21,630 observations of the complete dataset.
  • Temperature of the non-exposed face—this is the output of the finite element analysis models in °C as generated by COMSOL Multiphysics® simulation software. It reflects the temperature developed gradually within a reference area on the non-exposed face of the wall panels. This constitutes the dependent variable of the dataset and ultimately is the figure that the ANN algorithm will be trying to predict in the following steps.
It is apparent that the above set of parameters and values are relevant only to the specific masonry wall heat transfer application presented in this study. However, the process of compiling a group of independent variables and organising them into a dataset structure appropriate for analysis in Python is universal. Depending on the number of the recorded independent variables, the architecture of the network might need to be adapted (more or fewer input neurons), different hyperparameters might generate better results, or slightly different preprocessing methods might be applicable (scaling might/might not be necessary, encoding of categoric variables might be needed or not, etc). The list of used variables, and their range of values given in the preceding Table 1 and Table 3, should provide a guide for understanding the form of the dataset and possibly substituting it with other data available to interested research parties. Similarly, the list of hyperparameters included in the following sections of the study, along with the values used for this research, should provide adequate detail for understanding and replicating the structure of the ANN itself, if desired.
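As a hypothetical sketch of this step (the file name and column headers below are illustrative, not the study’s actual CSV schema), the dataset can be loaded and the bookkeeping columns dropped as follows:

import pandas as pd

# Illustrative file and column names; the study's actual headers may differ
df = pd.read_csv("wall_assembly_dataset.csv")

# "Index" and "SampleRef" are excluded from training and testing, as noted above
features = df.drop(columns=["Index", "SampleRef", "Temperature"])
target = df["Temperature"]   # non-exposed face temperature (°C), the dependent variable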

2.4. Test Cases Examined

Part of this study’s unique contribution is to examine the impact of varying degrees of input data quantity and quality on the performance of the ANN model. An attempt was made to isolate and assess the influence of data by keeping the same algorithm architecture and gradually altering the amount of information provided for training. This provided a level ground for comparing the algorithms, without introducing inconsistencies due to hyperparameter and architecture variations.
Table 4 summarises the input data used for training each of the 4 ANN models. The original algorithm (ANN 1) was developed using the full dataset, comprising the entirety of the data obtained through the FE analysis, as described in detail in previous paragraphs. Each subsequent algorithm was trained with a subset of the original input information. Specifically, the cases examined include:
  • ANN 1: As mentioned above, this uses the complete dataset for training and testing purposes.
  • ANN 2: The second algorithm was developed using only the extreme values of insulation thickness. As such, the wall assemblies considered included the non-insulated ones and those insulated with 100 mm of EPS internally and externally.
  • ANN 3: Only the extreme values of the emissivity coefficient were used for the development of the third algorithm. Wall assemblies with ε = 0.5 were disregarded and only those with ε = 0.1 and ε = 0.9 were included in the dataset.
  • ANN 4: This was the most input data-deprived algorithm—a combination of the previous two cases. Only the extreme cases of insulation and thermal emissivity coefficient were offered to the algorithm at the training stage, considerably reducing the density of the offered input data.
Each algorithm was eventually compared to the values included in the full set of information, with the aim of identifying the level of inaccuracy introduced by withholding part of the input data. The comparison was carefully made against the wall assemblies incorporating the variable values that the algorithms were deprived of. Since the regressor models were generally trained using extreme values of insulation (with the exception of ANN 1, which utilised the full dataset), the comparison was made against wall assemblies featuring mid-range values (i.e., 50 mm of insulation or ε = 0.5). Although it was anticipated that ANN 1 would have an extremely good predictive score (since it was already trained with full data), it was included in the resulting graphs for comparative reasons.
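Assuming the hypothetical dataframe from the earlier loading sketch, the reduced training subsets could be derived with simple filters such as the following (column names remain illustrative):

# ANN 2: extreme insulation thicknesses only (0 mm and 100 mm)
ann2 = df[df["InsulationThickness"].isin([0, 100])]
# ANN 3: extreme emissivity coefficients only (0.1 and 0.9)
ann3 = df[df["Emissivity"].isin([0.1, 0.9])]
# ANN 4: both restrictions combined
ann4 = ann2[ann2["Emissivity"].isin([0.1, 0.9])]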
In addition to assessing each algorithm’s performance, it was important to ensure that overfitting was avoided. Once the most accurate regressor was identified, it was considered necessary to evaluate its applicability on wall assemblies that had not been offered to the algorithm at any stage of its development and were, as such, completely unknown to the model. The variable values and basic structure of these new models had to be within the range of the algorithm’s training space, as shown in Table 5; however, they featured completely new parameter value combinations. To enable the comparison of the ANN predictions with actual ground truth values, six new FE models were developed and analysed. Their output was finally compared to the regressor predictions, as shown in the results section of this study. The six new models included the following:

2.5. ANN Development Protocol

It is a common realisation of the scientific community (at least, the parts working with Machine Learning and Artificial Neural Networks, in particular) that no formal standards, guidance or commonly accepted scientific methods of developing an ANN are available at the moment [9]. Efforts to establish such methods have indeed been made, providing the first step towards standardisation and a defined set of criteria for choosing the algorithm input data, architecture, training parameters, and evaluation metrics. The following paragraphs are dedicated to a step-by-step description of the ANN development process, following relevant, recently proposed protocols [9,31].

2.5.1. Input Data Selection and Organisation

As mentioned previously, data selection and organisation are the first steps towards the development of a functional and effective ANN. The importance of careful input selection is underlined, with a particular focus on data significance and independence. Although there are various statistical ways of determining the significance of data, the use of previous experience and domain knowledge is deemed acceptable and valid [9]. In this case, the independent variables selected are all known to have a considerable contribution to the development of the final values of the dependent variable.
Data filtering and clustering can reduce the number of input parameters, enabling the development of a more efficient and computationally manageable algorithm [32]. Such techniques ensure that any interdependence present in input variables is identified and either removed or merged, reducing the dimensionality of the necessary analysis. The number of independent variables used in the present study is considered small. Although, at present, some dependency between variables is known to exist (i.e., brick density with its thermal conductivity coefficient), the data is structured in a way that allows for the introduction of further test cases able to remove such issues (i.e., varying combinations of brick density and thermal conductivity that are not proportionally related).

2.5.2. Input Data Preprocessing

Although data preprocessing does not appear explicitly as a separate step in the ANN development protocol, it was considered necessary to be mentioned here; it did form an integral part of the work method followed for this study. From the input data presentation, it became apparent that categorical variables have also been included in the dataset (i.e., insulation type, insulation position, etc.). These variables were encoded to ensure that, ultimately, the input offered to the ANN was consistently numerical. To prevent the negative effect of a “dummy trap” [33], some of the resulting encoded numerical variables were removed.
Acknowledging that the range of the variable values was quite wide, taking values from 1 for the encoded variables to 21,600 sec for the timestamps, a scaling of the data was considered appropriate [34]. To prevent an implicit bias of the algorithm towards the higher values of the dataset, standardisation was applied throughout the dataset values. The default values of the “StandardScaler” function from the library sklearn.preprocessing were utilised [35]. Equation (6) describes the scaling method followed for both the input and output data. Once the predictions were made, the same scaling tool was used to reverse the scaling and allow for the inspection of the actual predicted temperature figures generated by the algorithm.
$z = \frac{x - u}{s}$ (6)
where z is the standard score of the sample being converted, x is the feature value being converted, u is the mean, and s is the standard deviation.
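A minimal sketch of this preprocessing, assuming the hypothetical column names used earlier (one-hot encoding with one dummy level dropped to avoid the dummy trap, followed by the StandardScaler implementing Equation (6)):

import pandas as pd
from sklearn.preprocessing import StandardScaler

# Encode categorical variables; drop_first=True removes one dummy per category
X = pd.get_dummies(features, columns=["InsulationType", "InsulationPosition"],
                   drop_first=True)

# Standardise features and target: z = (x - u) / s
x_scaler, y_scaler = StandardScaler(), StandardScaler()
X_scaled = x_scaler.fit_transform(X)
y_scaled = y_scaler.fit_transform(target.to_frame())

# After prediction, the transform is inverted to recover temperatures in °C:
# y_pred_degC = y_scaler.inverse_transform(y_pred_scaled)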

2.5.3. Data Splitting

Splitting the dataset into training and testing sets is a fundamental part of the ANN algorithm development process. To avoid introducing figure selection bias, the use of automatic random selection was opted for: the “train_test_split” function from the “sklearn.model_selection” library, under the Scikit Learn API [35]. Random selection is currently the most-used unsupervised splitting technique [9]. Following the example of several other research studies, a ratio of 80–20% of the available observations was allocated to training and testing the algorithm, respectively [14,36].
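In code, the split reduces to a single call (the random seed below is illustrative, not a value reported in the study):

from sklearn.model_selection import train_test_split

# 80% training / 20% testing, selected at random
X_train, X_test, y_train, y_test = train_test_split(
    X_scaled, y_scaled, test_size=0.2, random_state=42)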

2.5.4. Model Architecture and Structure Selection

The feedforward multilayer perceptron (MLP), the ANN setup that was utilised for this study, is one of the most popular artificial neural network architectures [31]. The number of input and output nodes was defined by the number of input and output variables, respectively. The input layer features 8 nodes, reflecting the number of input variables. This incorporates the dummy variables introduced due to encoding the categorical features, as mentioned previously. The output layer includes a single node representing the dependent variable, which the ANN has to make predictions against (non-exposed wall-face temperature through time). To achieve a Deep Learning model, a minimum of two hidden layers had to be constructed. The number of nodes on each layer was based on previous experience of ANNs achieving satisfactory prediction results. Figure 6 represents the structure of the ANN used as part of this study.
Although methods of optimising the architecture and hyperparameters of the model do exist, achieving the optimum model was beyond the scope of this study, which instead focused on the impact of the quality and quantity of input data. Given that an identical model architecture was utilised for all 4 models, the final comparison was undertaken on a level playing field. Since the activation function has an impact on the performance of the model [37], it is worth mentioning that the rectified linear unit (ReLU) activation function was used in both hidden layers of all models. For clarity, the other hyperparameters used for the development of the ANN are presented in Table 6.
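A Keras sketch of the topology described above is given below; the hidden-layer widths are assumptions made for illustration, since the exact node counts appear in Figure 6 rather than in the text:

from tensorflow import keras
from tensorflow.keras import layers

# Hidden-layer widths are illustrative assumptions; see Figure 6 for the
# structure actually used in the study
model = keras.Sequential([
    keras.Input(shape=(8,)),               # 8 input features (incl. dummy variables)
    layers.Dense(16, activation="relu"),   # hidden layer 1 (width assumed)
    layers.Dense(16, activation="relu"),   # hidden layer 2 (width assumed)
    layers.Dense(1),                       # single output node: scaled temperature
])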

2.5.5. Model Calibration

Through training, the ANN converges to the optimum values of the initially random weights assigned to its neurons; this enables an accurate prediction of the dependent variable to eventually be made. One of the most computationally efficient algorithms for achieving such optimisation is backpropagation [39]. A loss function is used at the end of every epoch to calculate the difference between the ANN prediction and the ground truth. This study utilises the Mean Squared Error (MSE) loss function, since the physical phenomenon under consideration is not linear and the final predictions are not distinct categories (i.e., the task is the regression of a continuous range of values). With the level of inaccuracy thus quantified, an optimisation function, in this instance Gradient Descent, seeks a (local) minimum of the loss and feeds back through the network, updating the neuron weights to reduce the difference between prediction and ground truth (the loss). Although other methodologies for optimising the performance of the model are available [40], they are beyond the scope of this study.
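Continuing the sketch above, the calibration step amounts to compiling with the MSE loss and a gradient-descent optimiser and then fitting; the learning rate, batch size, and epoch count shown are placeholders, not the values of Table 6:

# Placeholder hyperparameters; see Table 6 for the values used in the study
model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.01), loss="mse")
history = model.fit(X_train, y_train,
                    epochs=100, batch_size=32,
                    validation_data=(X_test, y_test), verbose=0)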

2.5.6. Model Validation

Once the ANN is constructed, trained, and functional, it is necessary to evaluate its performance before deploying it in field operations or using it for further scientific research. The evaluation process of the ANN constitutes the focal point of this study, and although it will be explained thoroughly in the results section, it is worth briefly mentioning the three core evaluation aspects, along with the various metrics to be considered. An outline of the implementation method of the above, within the context of this research effort, will also be given, with the intention of gradually introducing the structure of the upcoming results section.
There are three main steps in validating the functionality and reliability of an artificial neural network [9]. The “replicative validity” is the first thing to check to ensure that the ANN is functional and captures the underlying mechanisms of the physical phenomenon. Essentially, the algorithm needs to be able to replicate the known data observations that were offered as input (including the training and testing sets). This process yields “obvious” results, but it does also provide a sanity check that the algorithm has captured at least the minimum relationship between the independent and dependent variables. The use of fitness metrics or visual observation of comparative graphs between predictions and provided data can aid in this direction. In this study, both methods, metrics and visual inspection of the algorithm’s ability to replicate the data, have been employed.
The validation of the predictive capacity of the algorithm is the second stage in building confidence before its implementation in real applications. In this step, the ability of the algorithm to make accurate predictions, when provided with unknown (not included in the original training data set) input observations, is assessed by the use of specific efficiency indices. The impact of training some models with progressively fewer input data becomes apparent at this stage. An observation of the diminishing scores of the various metrics and also the deviation of the graphical representation of the predictions from the ground truth values elucidates the major contribution that appropriately rich and diverse input data can have on the development of an effective ANN.
Finally, the “structural validity” of the model needs to be confirmed. As part of this step, the neural network is checked against “a priori knowledge” of how the physical system under consideration works [9]. Apart from making correct predictions on specific values, the ANN needs to prove a holistic understanding of the underlying mechanisms that define the phenomenon that is being studied. In this study, instead of generating only individual predictions, the ANN is requested to predict the whole time series of the phenomenon. Thus, the structural validity of the ANN is evaluated through a comparison of the predicted physical behaviour against the known development of the heat transfer process through the various wall samples.
In the previous paragraphs, reference was made to the metrics used to assess the performance of the ANN in terms of accuracy and reliability. Table 7 presents the main indices used as part of this work [41].
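For reproducibility, the indices discussed above can be computed with scikit-learn and NumPy; a minimal sketch, assuming predictions are first returned to °C by inverting the scaler from the earlier preprocessing step:

import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score, max_error

y_pred = y_scaler.inverse_transform(model.predict(X_test)).ravel()
y_true = y_scaler.inverse_transform(y_test).ravel()

mae  = mean_absolute_error(y_true, y_pred)            # Mean Absolute Error (°C)
rmse = np.sqrt(mean_squared_error(y_true, y_pred))    # Root Mean Square Error
r2   = r2_score(y_true, y_pred)                       # coefficient of determination
emax = max_error(y_true, y_pred)                      # absolute maximum error (°C)
# Relative absolute error: total |error| relative to total deviation from the mean
rae  = np.abs(y_true - y_pred).sum() / np.abs(y_true - y_true.mean()).sum()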

2.5.7. From Evaluation Methodology to Structured Results

The above outlines the main steps and methods for the performance and validity evaluation of the developed artificial neural network algorithms; it also informs the structure of the investigation and results of this study. Instead of exploring methods of algorithm optimisation, the research interest, in this case, focuses on the impact of quality and quantity of the provided input data. As such, a single algorithm was developed, adhering to the requirements of the “Architecture and Structure” section above. Then, the same algorithm was trained with varying numbers of input data (as presented in Section 2.4 Test Cases) and was thereon treated as 4 separate predictive models. Each model was subsequently evaluated, using the aforementioned indices and metrics, to reach a conclusion regarding the most effective regressor. As a final step, the dominant model was tested and evaluated again, against completely unknown data combinations. A brief outline of the work sequence followed so far and leading to the following results would include:
  • An initial review of the existing bibliography proposing a protocol for the development of ANNs.
  • The development of one ANN algorithm based on the peer-reviewed protocol.
  • The training of the ANN algorithm with varying degrees of input data (ending up with 4 regressors of the same architecture but different levels of input data).
  • A comparison of the performance of the 4 regressor models (same architecture/different input data) and an evaluation of the impact of the quality of offered input data on each model’s predictive capability.
  • The identification of the best-performing ANN and validation against a completely new set of data.
  • An outline of observations, conclusions, and recommendations regarding the impact of input data quality and ways of mitigating the problem.

3. Results

A rigorous testing procedure, following the development and training of the various ANN models, enabled the assessment of the input data contribution. The graphs included below allow for a visual interpretation of the impact that varying degrees of input quality have on the performance of the ANN, while the accompanying metrics help quantify the same more accurately. For ease of reference, the results are organised in accordance with the structure and nomenclature of the test cases presented earlier herein.

3.1. Impact of Data Quality on ANN Performance

An observation of the following graphs (Figure 7) highlights the excellent fit of the network trained with the full training data (ANN 1). The thick grey line on all graphs represents the ground truth—that is, the actual temperature development on the non-exposed face of the wall through time. The dotted line representing the predictions made by ANN 1 coincides almost entirely with the ground truth line. The above observation is very well recorded and reinforced by the metric results.
The indices used to evaluate the performance of the fully trained network (ANN 1) when tested against the ground truth are summarised in Table 8. The algorithm manages to predict the final temperature with a maximum error of 2.55 °C, which translates to 2.2% (test on Wall Assembly 3) of the scale of the overall developed temperatures throughout the analysis. The good fit is evident not only on the final temperature prediction but throughout the process, where the maximum error appears to be 3.03 °C. The overall agreement between the trained algorithm and the ground truth is demonstrated by the Mean Absolute Error, which on average is 0.73 °C (or a 2.62% difference between the observed values and their mean). An average of 0.9992 coefficient of determination provides robust evidence of the agreement between the observed and predicted data.
The training data offered to ANN 2 was deprived of any middle values of insulation thickness (removed observations incorporating 50 mm insulation externally or internally). The graphical representation of the network’s predictions reveals that the algorithm, despite the reduced data, still captures the “essence” of the physical phenomenon and follows the route of the ground truth curve. Clearly, a perfect fit is not achieved; the prediction lines generally run close but parallel to the ground truth curve. Wall Assembly 4 (WA4) is an exception, where the two lines cross at timestamp 15,300 s; the temperature predictions decline from there onwards.
The above is reflected in the performance indices in Table 9. The predictions of the final temperature are generally within −0.30 °C and 6.76 °C (−0.24% and 6.27% of the observed values, respectively) of the actual values. As observed graphically, the final prediction error for WA4 lies at 11.85 °C (10.37% error compared to the actual temperature value), slightly beyond the range seen on the other tests. Although the prediction and ground truth lines are generally parallel, indicating a good capture of the heat transfer mechanisms, the absolute maximum error of 32.04 °C observed in Wall Assembly 3 is a reminder of the inferior performance of this ANN. This is not an outlier, as on average there is a 16.82% error in all observations made by this network. The degraded performance of this network is also reflected by the lower coefficient of determination and higher root mean square error, which on average are 0.9540 (compared to ANN 1’s score: 0.9992) and 6.65 (compared to ANN 1’s excellent fit score of 0.88), respectively.
A graphical observation of the third network’s (ANN 3) performance reveals similar trends to ANN 2. The predictive capacity of the model is clearly inferior to ANN 1. However, the general mechanism of the heat transfer process has been captured, as indicated by the fact that the ground truth and prediction curves are approximately parallel. The distance between the two lines is larger than observed in the case of ANN 2. It is also worth noting that the curves generated by the ANN 3 predictions appear to be smoother compared to the ones resulting from plotting the predictions of ANN 2.
In a similar fashion, the indices in Table 10 reflect the reduced performance of ANN 3. The degradation from the removal of the middle values of the emissivity coefficient appears to be more severe, with absolute maximum error values in the range of 28.46 °C to 48.84 °C. Throughout the six tests, algorithm ANN 3 generates a relative absolute error of 45.60% on average, with a maximum of 57.80% for the values predicted for Wall Assembly 2. The overall performance reduction is reflected by the low average coefficient of determination (R2) and root mean square error (RMSE) of 0.6723 and 18.37, respectively.
The most data-deprived network, ANN 4, produces visibly irregular prediction curves. In all tests, there is an approximate agreement between predictions and observed values in the lower range of temperatures. However, when the impact of heat transfer becomes more pronounced in the finite element analysis model (i.e., the ground truth values), ANN 4 fails to react accordingly.
The indices describing the performance of ANN 4 are presented in Table 11. The average coefficient of determination for ANN 4 across the six tests is 0.8321, and the network's average maximum error is 26.20 °C. ANN 4 generates ultimate temperature predictions with errors of 14.55 °C to 31.65 °C (12.92% to 24.06% deviation from the actual ultimate temperature). Across the six wall assembly tests, the ANN 4 predictions yield a mean absolute error of 10.04 °C (a 35.44% relative absolute error).
Although the above results give an indication of the performance each ANN achieves depending on the quality and completeness of the offered training data, it is worth presenting the final metric scores recorded at the end of each algorithm's training. The loss values in Table 12 indicate the goodness of fit between the predictions made by each algorithm and the corresponding observed values in its test set. All four networks appear to perform extremely well at this stage; this contrasting behaviour is reflected upon further in the Discussion section.

3.2. Performance of the Dominant ANN Model

Following the review of the results presented in the previous paragraphs, the superiority of ANN 1 became apparent. To assess the "dominant" model's performance against completely unknown data, six more tests were carried out. Figure 8 is the visual representation of the results of these additional tests. The goodness of fit between the ground truth (data obtained from finite element model analysis) and the predictions made by the ANN is easy to observe. This is further supported by the metrics included in Table 13.
The ANN manages to predict the ultimate temperature developed on the non-exposed surface of the wall test samples with a relatively low error of 3.3% on average. Although on one occasion (Test Sample 4) the prediction of the peak temperature differs from the observed value by 11.42 °C (a 15.04% deviation from the actual temperature), on average the predictions lie within 3.19 °C of the ground truth. This high predictive performance is not an isolated success at the peak temperature; it is also reflected in the overall low mean absolute error, which ranges from 0.90 °C to 4.16 °C.
Tests TS4 and TS5 appear to have more onerous metric results. Their absolute maximum errors are 11.42 °C (in the peak temperature, as explained above) and 12.43 °C, respectively. These can be observed in the graphical representation of the results as deviations of the prediction curve from the ground truth curve. Despite these local inconsistencies with the observed figures, the overall high coefficient of determination (0.9787 on average across tests) and low root mean squared error (3.31 on average) indicate a high-performing model.

4. Discussion

Although an algorithm capable of predicting the temperature developed on the wall samples' non-exposed face was eventually constructed, a few items worth highlighting and discussing further emerged during the development and evaluation process. These are listed in the following paragraphs, with the intention of prompting thought and discussion on potential pitfalls, and their respective solutions, when ANNs are employed in heat transfer applications involving masonry walls.
The perfect fit of ANN 1 with the ground truth is not a surprising result. The network was trained with the full training data, which means that its comparison with the ground truth merely verifies whether its predictions can replicate already-known patterns and figures. Despite its obvious results, however, this step is far from unnecessary, as it ensures the replicative validity of the developed algorithm. It helps in building confidence that the network is not only functional but that it is also able to identify and capture the underlying patterns in the provided dataset.
After achieving this first level of validation, the research could proceed by exploring the limits of the algorithm architecture by varying the quality and quantity of the provided training data. ANN 2, deprived of mid-range insulation thickness values, and ANN 3, deprived of mid-range emissivity coefficient values, both failed to achieve predictive rates as high as those of ANN 1, which was trained with the full range of data (a minimal sketch of how such deprived training sets can be constructed is given below). It is worth highlighting that the predictions of ANN 2 lay closer to the ground truth but demonstrated some local irregularities. On the contrary, the predictions generated by ANN 3 lay further from the ground truth curve (as reflected by the more onerous performance indices); however, they were largely offset from the observed figures, following their smooth curvature. This raises questions as to the impact different variables may have on the performance of ANNs, depending on their contribution to the physical phenomenon under consideration. The emissivity coefficient is a constant value throughout the finite element analysis, while properties relating to the EPS are time-dependent (within the context of this study). The physical importance of the independent variables needs to be well understood before they are incorporated into an ANN structure.
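By way of illustration, the deprived training sets could be produced by simple filtering of the full dataset. The snippet below is a minimal sketch in Python/pandas; the file and column names are assumptions mirroring the dataset structure of Table 3, and the combination of deprivations assigned to ANN 4 is likewise an assumption, since the exact composition of its training set is described above only qualitatively.

```python
import pandas as pd

# 'full' is assumed to hold the complete FE-analysis output (cf. Table 3 layout);
# the file name is hypothetical.
full = pd.read_csv("fe_analysis_output.csv")

ann1_train = full                                          # ANN 1: complete dataset
ann2_train = full[full["insulation_thickness"] != 50]      # ANN 2: drop 50 mm insulation samples
ann3_train = full[full["thermal_emissivity"] != 0.5]       # ANN 3: drop mid-range emissivity
ann4_train = full[(full["insulation_thickness"] != 50) &
                  (full["thermal_emissivity"] != 0.5)]     # ANN 4: assumed to combine both
```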
The most data-deprived network, ANN 4, presents the most irregular prediction curves. The comparison between the performance indices of ANN 4 and ANN 3 shows that the former performed better. However, visual observation clearly shows that the prediction line of ANN 4 is more erratic than that of ANN 3, whose line is merely "offset" (i.e., consistently overestimating or underestimating relative to the ground truth). Although networks ANN 2 and ANN 3 failed to make accurate predictions (at least to the degree ANN 1 did), they appeared to capture the underlying principles and function of the physical phenomenon. On the other hand, the lack of resemblance to the ground truth curve demonstrates that ANN 4's capture of the underlying mechanisms of the physical phenomenon is poor, despite its better metrics.
At the training stage, all algorithms returned high indices of fitness to the test data. However, their performance on predicting values beyond their training sets varied greatly depending on the amount and quality of data that was offered at that initial stage. This underlines the need for the validation of the algorithms and a multifaceted evaluation of their performance prior to their application in further research or field operations. It also shows that, despite developing a functioning and potentially effective ANN architecture, the ultimate performance might be compromised by a lack of representative observations in the training set.

5. Conclusions and Further Research

This paper contributes towards the further integration of ANNs in the field of heat transfer through building materials and assemblies. Arguably, there is wide scope for further development of this research in directions such as the use of different algorithm types and structures, an extended range of building assemblies and materials, and different thermal loadings. Nevertheless, some first conclusions can be drawn from this initial effort, informing future scientific work aiming to employ Artificial Neural Networks for the description of heat transfer phenomena. Although good metric results quantifying the replicative validity of the algorithm offer an indication of a functional ANN, they do not necessarily constitute evidence of a fully operational and reliable network. As such, the use of further validation techniques is of paramount importance. A comparison against unknown data (the methodology followed in this study) is an objective way of ensuring the constructed model behaves and performs as expected. When "external" data is not available or is difficult to obtain, k-fold cross validation could provide a route for building some confidence in the performance of the model (a minimal sketch follows below). To mitigate the risk of overfitting, neuron "dropout" can be introduced as part of the algorithm's hyperparameters; this artificially weakens the fitting of each neuron to the supplied training data.
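As an indication of how such a validation step might look, the sketch below runs a five-fold cross validation with scikit-learn; the `model`, `X`, and `y` objects are assumptions carried over from the hypothetical snippets accompanying Tables 3 and 6 further below. Note that scikit-learn's MLPRegressor does not expose a dropout hyperparameter; implementing the dropout suggestion above would require a framework such as Keras or PyTorch.

```python
from sklearn.model_selection import KFold, cross_val_score

# Five-fold cross validation: each fold is held out once while the model
# is trained on the remaining four, approximating a test on "unseen" data.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
print(f"R2 per fold: {scores.round(4)}, mean: {scores.mean():.4f}")
```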
As seen in the previous paragraphs, metrics and indices can sometimes be misleading. An intuitive understanding of the predictions and their relevance to factual data is necessary to mitigate the impact of the "black box" phenomenon of blindly accepting output generated by an AI model. Outlining the expectations of the model's output in advance could provide a measure against which a preliminary assessment of the ANN's predictions can be undertaken. In the same direction, producing visual representations of the output at every stage can enhance the understanding of the relationship between the ground truth and the predictions and can immediately highlight subtle inaccuracies or major errors (a plotting sketch is given below).
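A minimal plotting helper of the kind used for Figures 7 and 8 might look as follows; this is an illustrative matplotlib sketch rather than the authors' plotting code, and the function name is our own.

```python
import matplotlib.pyplot as plt

def plot_comparison(time_s, y_true, y_pred, label="WA1"):
    """Overlay FE ground truth and ANN predictions so deviations are visible at a glance."""
    plt.plot(time_s, y_true, label=f"{label}: ground truth (FE analysis)")
    plt.plot(time_s, y_pred, "--", label=f"{label}: ANN prediction")
    plt.xlabel("Time [s]")
    plt.ylabel("Temperature of non-exposed face [°C]")
    plt.legend()
    plt.tight_layout()
    plt.show()
```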
Depending on the nature and contribution of each parameter to the physical phenomenon, its loss from the training data can have a more or less severe effect on the predictive capability of the neural network. It is worth researching this relationship by quantifying the contribution of the various parameters to the development of a physical phenomenon and then training ANNs with a gradual deprivation of the variables in question. This could provide an opportunity for quantifying the correlation between various physical parameters and the performance of the artificial neural model (one established technique for such a ranking is sketched below).
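One established technique for such a ranking, offered here as a suggestion rather than as the method used in this study, is permutation importance: each input feature is shuffled in turn and the resulting drop in the held-out score is recorded. The sketch assumes the fitted `model` and a held-out split (`X_test`, `y_test`) from the earlier hypothetical snippets.

```python
from sklearn.inspection import permutation_importance

# Shuffle each feature in turn and measure the drop in R2 on held-out data;
# larger drops indicate inputs the network relies on more heavily.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, scoring="r2", random_state=0)
for name, imp in sorted(zip(X_test.columns, result.importances_mean),
                        key=lambda pair: -pair[1]):
    print(f"{name}: {imp:.3f}")
```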
The ANN’s development, analysis, and interrogation need to be considered within the wider context of a specific scientific study or practical application. This contextualisation can have a significant effect on the specification of required accuracy levels for the chosen evaluation methodology. Different scientific applications have different required levels of accuracy in their produced results. As such, it is prudent to avoid trying to establish ill-defined accuracy thresholds without specific application requirements to hand.
This study presented the underpinning principles of an assessment methodology for the evaluation of ANN predictions and highlighted potential pitfalls arising from the use of ANNs within a masonry wall heat transfer context. The focus of the present work is to identify the impact of varying quantities and qualities of input data on the accuracy of a specific ANN architecture and to propose a methodology for demonstrating this relationship. This objective has been accomplished by presenting the inferior results generated by the models trained with reduced amounts of data. Ultimately, the methodology for the assessment and validation of the ANN's performance is proposed not as a panacea for all ML evaluation problems, or as an optimum precision benchmark, but as a first step towards preventing (or at least enabling the quantification of) accuracy deficiencies in models developed using the data each research team has available.
It is hoped by the authors that future work can feed into the existing proposed protocols for the development of ANNs while aligning such documentation to the needs of heat transfer and building fire research. Expanding the existing understanding of the factors impacting the performance of ANNs and incorporating elements specific to fire engineering and heat transfer through building elements could help safeguard future research in this field from misleading results or discrepancies caused by different ANN models and parameters.

Author Contributions

Conceptualisation, methodology, code writing, data curating, original draft preparation, illustrations: I.B.; review, supervision, heat transfer theoretical background, thermal simulations, foundation input data: K.J.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data utilised for the development of the ANN algorithms is not currently available as it forms part of ongoing research.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Kurfess, F.J. Artificial Intelligence; Elsevier: Amsterdam, The Netherlands, 2003; pp. 609–629.
2. Soni, N.; Sharma, E.K.; Singh, N.; Kapoor, A. Artificial Intelligence in Business: From Research and Innovation to Market Deployment. Procedia Comput. Sci. 2020, 167, 2200–2210.
3. Demianenko, M.; de Gaetani, C.I. A Procedure for Automating Energy Analyses in the BIM Context Exploiting Artificial Neural Networks and Transfer Learning Technique. Energies 2021, 14, 2956.
4. Paliwal, M.; Kumar, U.A. Neural networks and statistical techniques: A review of applications. Expert Syst. Appl. 2009, 36, 2–17.
5. Naser, M.; Kodur, V.; Thai, H.-T.; Hawileh, R.; Abdalla, J.; Degtyarev, V.V. StructuresNet and FireNet: Benchmarking databases and machine learning algorithms in structural and fire engineering domains. J. Build. Eng. 2021, 44, 102977.
6. Tran-Ngoc, H.; Khatir, S.; de Roeck, G.; Bui-Tien, T.; Wahab, M.A. An efficient artificial neural network for damage detection in bridges and beam-like structures by improving training parameters using cuckoo search algorithm. Eng. Struct. 2019, 199, 109637.
7. Srikanth, I.; Arockiasamy, M. Deterioration models for prediction of remaining useful life of timber and concrete bridges: A review. J. Traffic Transp. Eng. 2020, 7, 152–173.
8. Ngarambe, J.; Yun, G.Y.; Santamouris, M. The use of artificial intelligence (AI) methods in the prediction of thermal comfort in buildings: Energy implications of AI-based thermal comfort controls. Energy Build. 2020, 211, 109807.
9. Wu, W.; Dandy, G.C.; Maier, H. Protocol for developing ANN models and its application to the assessment of the quality of the ANN model development process in drinking water quality modelling. Environ. Model. Softw. 2014, 54, 108–127.
10. Abdolrasol, M.G.M. Artificial Neural Networks Based Optimization Techniques: A Review. Electronics 2021, 10, 2689.
11. Naser, M. Properties and material models for modern construction materials at elevated temperatures. Comput. Mater. Sci. 2019, 160, 16–29.
12. Véstias, M.P.; Duarte, R.P.; de Sousa, J.T.; Neto, H.C. Moving Deep Learning to the Edge. Algorithms 2020, 13, 125.
13. Moon, G.E.; Kwon, H.; Jeong, G.; Chatarasi, P.; Rajamanickam, S.; Krishna, T. Evaluating Spatial Accelerator Architectures with Tiled Matrix-Matrix Multiplication. IEEE Trans. Parallel Distrib. Syst. 2022, 33, 1002–1014.
14. Naser, M.Z. Mechanistically Informed Machine Learning and Artificial Intelligence in Fire Engineering and Sciences. Fire Technol. 2021, 57, 2741–2784.
15. Olawoyin, A.; Chen, Y. Predicting the Future with Artificial Neural Network. Procedia Comput. Sci. 2018, 140, 383–392.
16. Kim, P. MATLAB Deep Learning; Springer: Berlin/Heidelberg, Germany, 2017.
17. Salehi, H.; Burgueño, R. Emerging artificial intelligence methods in structural engineering. Eng. Struct. 2018, 171, 170–189.
18. Al-Jabri, K.; Al-Alawi, S.; Al-Saidy, A.; Alnuaimi, A. An artificial neural network model for predicting the behaviour of semi-rigid joints in fire. Adv. Steel Constr. 2009, 5, 452–464.
19. Tealab, A. Time series forecasting using artificial neural networks methodologies: A systematic review. Futur. Comput. Inform. J. 2018, 3, 334–340.
20. Kanellopoulos, G.; Koutsomarkos, V.; Kontoleon, K.; Georgiadis-Filikas, K. Numerical Analysis and Modelling of Heat Transfer Processes through Perforated Clay Brick Masonry Walls. Procedia Environ. Sci. 2017, 38, 492–499.
21. Kontoleon, K.J.; Theodosiou, T.G.; Saba, M.; Georgiadis-Filikas, K.; Bakas, I.; Liapi, E. The effect of elevated temperature exposure on the thermal behaviour of insulated masonry walls. In Proceedings of the 1st International Conference on Environmental Design, Athens, Greece, 24–25 October 2020; pp. 231–238.
22. Pedregosa, F.; Michel, V.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Vanderplas, J.; Cournapeau, D.; Varoquaux, G.; Gramfort, A.; et al. Scikit-Learn: Machine Learning in Python. 2011. Available online: http://scikit-learn.sourceforge.net (accessed on 17 October 2021).
23. Nguyen, T.-D.; Meftah, F.; Chammas, R.; Mebarki, A. The behaviour of masonry walls subjected to fire: Modelling and parametrical studies in the case of hollow burnt-clay bricks. Fire Saf. J. 2009, 44, 629–641.
24. Nguyen, T.D.; Meftah, F. Behavior of clay hollow-brick masonry walls during fire. Part 1: Experimental analysis. Fire Saf. J. 2012, 52, 55–64.
25. Nguyen, T.D.; Meftah, F. Behavior of hollow clay brick masonry walls during fire. Part 2: 3D finite element modeling and spalling assessment. Fire Saf. J. 2014, 66, 35–45.
26. Bergman, T.L.; Lavine, A.S.; Incropera, F.P.; DeWitt, D.P. Introduction to Heat Transfer, 6th ed.; John Wiley and Sons: Hoboken, NJ, USA, 2011.
27. Fioretti, R.; Principi, P. Thermal Performance of Hollow Clay Brick with Low Emissivity Treatment in Surface Enclosures. Coatings 2014, 4, 715–731.
28. British Standards Institution. Eurocode 1: Actions on Structures: Part 1.2 General Actions: Actions on Structures Exposed to Fire; BSI: London, UK, 2002.
29. Du, Y.; Li, G.-Q. A new temperature-time curve for fire-resistance analysis of structures. Fire Saf. J. 2012, 54, 113–120.
30. Mehta, S.; Biederman, S.; Shivkumar, S. Thermal degradation of foamed polystyrene. J. Mater. Sci. 1995, 30, 2944–2949.
31. Maier, H.R.; Jain, A.; Dandy, G.C.; Sudheer, K. Methods used for the development of neural networks for the prediction of water resource variables in river systems: Current status and future directions. Environ. Model. Softw. 2010, 25, 891–909.
32. Lu, Y.; Wang, S.; Li, S.; Zhou, C. Particle swarm optimizer for variable weighting in clustering high-dimensional data. Mach. Learn. 2009, 82, 43–70.
33. Suits, D.B. Use of Dummy Variables in Regression Equations. J. Am. Stat. Assoc. 1957, 52, 548.
34. Gardner, M.W.; Dorling, S.R. Artificial neural networks (the multilayer perceptron)—A review of applications in the atmospheric sciences. Atmos. Environ. 1998, 32, 2627–2636.
35. Buitinck, L.; Louppe, G.; Blondel, M.; Pedregosa, F.; Müller, A.C.; Grisel, O.; Niculae, V.; Prettenhofer, P.; Gramfort, A.; Grobler, J.; et al. API Design for Machine Learning Software: Experiences from the Scikit-Learn Project. 2013. Available online: https://github.com/scikit-learn (accessed on 17 October 2021).
36. Jørgensen, C.; Grastveit, R.; Garzón-Roca, J.; Payá-Zaforteza, I.; Adam, J.M. Bearing capacity of steel-caged RC columns under combined bending and axial loads: Estimation based on Artificial Neural Networks. Eng. Struct. 2013, 56, 1262–1270.
37. Kulathunga, N.; Ranasinghe, N.; Vrinceanu, D.; Kinsman, Z.; Huang, L.; Wang, Y. Effects of Nonlinearity and Network Architecture on the Performance of Supervised Neural Networks. Algorithms 2021, 14, 51.
38. Kingma, D.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980. Available online: https://arxiv.org/abs/1412.6980.
39. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536.
40. Stanovov, V.; Akhmedova, S.; Semenkin, E. Differential Evolution with Linear Bias Reduction in Parameter Adaptation. Algorithms 2020, 13, 283.
41. Dawson, C.; Abrahart, R.; See, L. HydroTest: A web-based toolbox of evaluation metrics for the standardised assessment of hydrological forecasts. Environ. Model. Softw. 2007, 22, 1034–1052.
Figure 1. Graphical representation of the neurons the ANN consists of. The illustrated activation function is indicative only and represents the one used as part of this study.
Figure 2. Graphical representation of the feedforward and backpropagation process of a typical ANN.
Figure 3. Sections of typical masonry wall assemblies modelled as part of this study. (a) Wall section insulated externally, (b) non-insulated wall section, (c) wall section insulated internally.
Figure 4. Typical fire curve (ISO 834) applied to finite element analysis models as seen in EN1991-1-2.
Figure 5. Artificial steep modification of insulation material properties to emulate its degradation until complete destruction due to fire exposure.
Figure 6. Graphical representation of the employed ANN architecture.
Figure 7. Graphical representation of the predictive performance of the 4 ANN models (same algorithm, varying quality of input data).
Figure 8. Graphical representation of the predictive performance of the dominant ANN model against completely unknown wall assembly combinations.
Table 1. Material property combinations defining the various FE models analysed.
| Insulation Position and Thickness | Brick Density (kg/m³) | Thermal Conductivity Coefficient (W/(m·K)) | Thermal Emissivity Coefficient | Sample Reference ¹ |
|---|---|---|---|---|
| No insulation | 1000 | 0.4 | 0.1 | Smpl1-1 |
|  | 1000 | 0.4 | 0.5 | Smpl1-2 |
|  | 1000 | 0.4 | 0.9 | Smpl1-3 |
| No insulation | 2000 | 0.8 | 0.1 | Smpl2-1 |
|  | 2000 | 0.8 | 0.5 | Smpl2-2 |
|  | 2000 | 0.8 | 0.9 | Smpl2-3 |
| Non-exposed EPS (50 mm) | 1000 | 0.4 | 0.1 | Smpl3-1 |
|  | 1000 | 0.4 | 0.5 | Smpl3-2 |
|  | 1000 | 0.4 | 0.9 | Smpl3-3 |
| Non-exposed EPS (50 mm) | 2000 | 0.8 | 0.1 | Smpl4-1 |
|  | 2000 | 0.8 | 0.5 | Smpl4-2 |
|  | 2000 | 0.8 | 0.9 | Smpl4-3 |
| EPS exposed to fire (50 mm) | 1000 | 0.4 | 0.1 | Smpl5-1 |
|  | 1000 | 0.4 | 0.5 | Smpl5-2 |
|  | 1000 | 0.4 | 0.9 | Smpl5-3 |
| EPS exposed to fire (50 mm) | 2000 | 0.8 | 0.1 | Smpl6-1 |
|  | 2000 | 0.8 | 0.5 | Smpl6-2 |
|  | 2000 | 0.8 | 0.9 | Smpl6-3 |
| Non-exposed EPS (100 mm) | 1000 | 0.4 | 0.1 | Smpl7-1 |
|  | 1000 | 0.4 | 0.5 | Smpl7-2 |
|  | 1000 | 0.4 | 0.9 | Smpl7-3 |
| Non-exposed EPS (100 mm) | 2000 | 0.8 | 0.1 | Smpl8-1 |
|  | 2000 | 0.8 | 0.5 | Smpl8-2 |
|  | 2000 | 0.8 | 0.9 | Smpl8-3 |
| EPS exposed to fire (100 mm) | 1000 | 0.4 | 0.1 | Smpl9-1 |
|  | 1000 | 0.4 | 0.5 | Smpl9-2 |
|  | 1000 | 0.4 | 0.9 | Smpl9-3 |
| EPS exposed to fire (100 mm) | 2000 | 0.8 | 0.1 | Smpl10-1 |
|  | 2000 | 0.8 | 0.5 | Smpl10-2 |
|  | 2000 | 0.8 | 0.9 | Smpl10-3 |

¹ Only used for ease of reference in the following sections of this study and for cross-referencing with model analysis files by the research team.
Table 2. Other material properties used for the development of the FE models.
| Material | Density (kg/m³) | Thermal Conductivity Coefficient (W/(m·K)) | Specific Heat Capacity (J/(kg·K)) |
|---|---|---|---|
| Clay bricks | As above | As above | 1000 |
| Insulation (EPS) | 30 | 0.035 | 1500 |
| Cement mortar | 2000 | 1.400 | 1000 |
Table 3. Representative example of the dataset structure and contents.
| Index | Sample Ref | Brick Density (kg/m³) | Thermal Conductivity Coef. (W/(m·K)) | Thermal Emissivity Coef. | Insulation Thickness (mm) | Insulation Type | Insulation Position | Time (s) | Temperature of Non-Exposed Face (°C) |
|---|---|---|---|---|---|---|---|---|---|
| 11,449 | Smpl6-1 | 2000 | 0.8 | 0.1 | 50 | EPS | Int | 18,990 | 41.77246 |
| 11,450 | Smpl6-1 | 2000 | 0.8 | 0.1 | 50 | EPS | Int | 19,020 | 41.85622 |
| 11,451 | Smpl6-1 | 2000 | 0.8 | 0.1 | 50 | EPS | Int | 19,050 | 41.94014 |
| 11,452 | Smpl6-1 | 2000 | 0.8 | 0.1 | 50 | EPS | Int | 19,080 | 42.02421 |
| 11,453 | Smpl6-1 | 2000 | 0.8 | 0.1 | 50 | EPS | Int | 19,110 | 42.10843 |
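Since the dataset mixes numeric and categorical fields (insulation type and position), the categorical columns need a numeric encoding, such as the dummy-variable approach [33], before being fed to the network. The snippet below is a hedged sketch of such a preparation step; the column names are assumptions mirroring the Table 3 layout, and the two rows are a hypothetical extract.

```python
import pandas as pd

# Hypothetical two-row extract mirroring the Table 3 layout; column names are assumed.
df = pd.DataFrame({
    "brick_density": [2000, 2000],
    "thermal_conductivity": [0.8, 0.8],
    "thermal_emissivity": [0.1, 0.1],
    "insulation_thickness": [50, 50],
    "insulation_type": ["EPS", "EPS"],
    "insulation_position": ["Int", "Int"],
    "time_s": [18990, 19020],
    "temp_unexposed": [41.77246, 41.85622],
})

# One-hot (dummy-variable) encoding of the categorical fields so the MLP receives numeric input.
X = pd.get_dummies(df.drop(columns="temp_unexposed"),
                   columns=["insulation_type", "insulation_position"])
y = df["temp_unexposed"]
```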
Table 4. List of wall assembly analysis output used for training each algorithm.
| Sample Reference | Properties of Wall Sample | ANN 1 | ANN 2 | ANN 3 | ANN 4 |
|---|---|---|---|---|---|
| Smpl1-1 | ρ = 1000 kg/m³, λ = 0.4 W/(m·K), ε = 0.1 |  |  |  |  |
| Smpl1-2 | ρ = 1000 kg/m³, λ = 0.4 W/(m·K), ε = 0.5 |  |  |  |  |
| Smpl1-3 | ρ = 1000 kg/m³, λ = 0.4 W/(m·K), ε = 0.9 |  |  |  |  |
| Smpl2-1 | ρ = 2000 kg/m³, λ = 0.8 W/(m·K), ε = 0.1 |  |  |  |  |
| Smpl2-2 | ρ = 2000 kg/m³, λ = 0.8 W/(m·K), ε = 0.5 |  |  |  |  |
| Smpl2-3 | ρ = 2000 kg/m³, λ = 0.8 W/(m·K), ε = 0.9 |  |  |  |  |
| Smpl3-1 | ρ = 1000 kg/m³, λ = 0.4 W/(m·K), ε = 0.1, d = 50 mm, External |  |  |  |  |
| Smpl3-2 | ρ = 1000 kg/m³, λ = 0.4 W/(m·K), ε = 0.5, d = 50 mm, External |  |  |  |  |
| Smpl3-3 | ρ = 1000 kg/m³, λ = 0.4 W/(m·K), ε = 0.9, d = 50 mm, External |  |  |  |  |
| Smpl4-1 | ρ = 2000 kg/m³, λ = 0.8 W/(m·K), ε = 0.1, d = 50 mm, External |  |  |  |  |
| Smpl4-2 | ρ = 2000 kg/m³, λ = 0.8 W/(m·K), ε = 0.5, d = 50 mm, External |  |  |  |  |
| Smpl4-3 | ρ = 2000 kg/m³, λ = 0.8 W/(m·K), ε = 0.9, d = 50 mm, External |  |  |  |  |
| Smpl5-1 | ρ = 1000 kg/m³, λ = 0.4 W/(m·K), ε = 0.1, d = 50 mm, Internal |  |  |  |  |
| Smpl5-2 | ρ = 1000 kg/m³, λ = 0.4 W/(m·K), ε = 0.5, d = 50 mm, Internal |  |  |  |  |
| Smpl5-3 | ρ = 1000 kg/m³, λ = 0.4 W/(m·K), ε = 0.9, d = 50 mm, Internal |  |  |  |  |
| Smpl6-1 | ρ = 2000 kg/m³, λ = 0.8 W/(m·K), ε = 0.1, d = 50 mm, Internal |  |  |  |  |
| Smpl6-2 | ρ = 2000 kg/m³, λ = 0.8 W/(m·K), ε = 0.5, d = 50 mm, Internal |  |  |  |  |
| Smpl6-3 | ρ = 2000 kg/m³, λ = 0.8 W/(m·K), ε = 0.9, d = 50 mm, Internal |  |  |  |  |
| Smpl7-1 | ρ = 1000 kg/m³, λ = 0.4 W/(m·K), ε = 0.1, d = 100 mm, External |  |  |  |  |
| Smpl7-2 | ρ = 1000 kg/m³, λ = 0.4 W/(m·K), ε = 0.5, d = 100 mm, External |  |  |  |  |
| Smpl7-3 | ρ = 1000 kg/m³, λ = 0.4 W/(m·K), ε = 0.9, d = 100 mm, External |  |  |  |  |
| Smpl8-1 | ρ = 2000 kg/m³, λ = 0.8 W/(m·K), ε = 0.1, d = 100 mm, External |  |  |  |  |
| Smpl8-2 | ρ = 2000 kg/m³, λ = 0.8 W/(m·K), ε = 0.5, d = 100 mm, External |  |  |  |  |
| Smpl8-3 | ρ = 2000 kg/m³, λ = 0.8 W/(m·K), ε = 0.9, d = 100 mm, External |  |  |  |  |
| Smpl9-1 | ρ = 1000 kg/m³, λ = 0.4 W/(m·K), ε = 0.1, d = 100 mm, Internal |  |  |  |  |
| Smpl9-2 | ρ = 1000 kg/m³, λ = 0.4 W/(m·K), ε = 0.5, d = 100 mm, Internal |  |  |  |  |
| Smpl9-3 | ρ = 1000 kg/m³, λ = 0.4 W/(m·K), ε = 0.9, d = 100 mm, Internal |  |  |  |  |
| Smpl10-1 | ρ = 2000 kg/m³, λ = 0.8 W/(m·K), ε = 0.1, d = 100 mm, Internal |  |  |  |  |
| Smpl10-2 | ρ = 2000 kg/m³, λ = 0.8 W/(m·K), ε = 0.5, d = 100 mm, Internal |  |  |  |  |
| Smpl10-3 | ρ = 2000 kg/m³, λ = 0.8 W/(m·K), ε = 0.9, d = 100 mm, Internal |  |  |  |  |
Table 5. Additional FE model parameters.
| Sample Ref | Brick Density (kg/m³) | Thermal Conductivity Coef. (W/(m·K)) | Thermal Emissivity Coef. | Insulation Thickness (mm) | Insulation Type | Insulation Position |
|---|---|---|---|---|---|---|
| Test Sample 1 | 2000 | 0.8 | 0.9 | 25 | EPS | Ext |
| Test Sample 2 | 2000 | 0.8 | 0.9 | 25 | EPS | Int |
| Test Sample 3 | 2000 | 0.8 | 0.7 | 0 | NoIns | AbsIns |
| Test Sample 4 | 2000 | 0.8 | 0.3 | 0 | NoIns | AbsIns |
| Test Sample 5 | 1500 | 0.6 | 0.9 | 0 | NoIns | AbsIns |
| Test Sample 6 | 1500 | 0.6 | 0.7 | 75 | EPS | Int |
Table 6. Hyperparameters used for the development of the ANN.
| Hyperparameter | Value | Comments |
|---|---|---|
| Number of epochs | 50 | Following ad hoc experimentation with various values. |
| Learning rate | 0.001 | Default value of the Adam optimiser. |
| Batch size | 10 | To allow more efficient processing of the large dataset. |
| Activation function | ReLU | Default values of the rectified linear unit activation function were used. |
| Optimiser | Adam | Default values of the Adam [38] optimiser were used. |
| Loss function | MSE | Mean squared error loss function. |
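For orientation, a network matching these hyperparameters could be assembled as sketched below with scikit-learn, which is cited in this study [22,35]. This is an illustrative sketch rather than the authors' code; in particular, the hidden-layer sizes are an assumption, as Table 6 does not fix the architecture.

```python
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hidden-layer sizes (64, 64) are assumed for illustration only.
model = make_pipeline(
    StandardScaler(),                      # feature scaling aids MLP convergence
    MLPRegressor(hidden_layer_sizes=(64, 64),
                 activation="relu",        # Table 6: ReLU activation
                 solver="adam",            # Table 6: Adam optimiser
                 learning_rate_init=0.001, # Table 6: default Adam learning rate
                 batch_size=10,            # Table 6: batch size
                 max_iter=50,              # Table 6: 50 epochs (epochs for stochastic solvers)
                 random_state=0),
)
# model.fit(X_train, y_train)  # MLPRegressor minimises squared error, matching the MSE loss.
```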
Table 7. Metrics used for the evaluations of the performance of the ANN.
| Metric | Reference | Formula ¹ | Perfect Score |
|---|---|---|---|
| Absolute maximum error | AME | $\mathrm{AME} = \max_i \lvert Q_i - \hat{Q}_i \rvert$ | 0.0 |
| Mean absolute error | MAE | $\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n} \lvert Q_i - \hat{Q}_i \rvert$ | 0.0 |
| Relative absolute error | RAE | $\mathrm{RAE} = \frac{\sum_{i=1}^{n} \lvert Q_i - \hat{Q}_i \rvert}{\sum_{i=1}^{n} \lvert Q_i - \bar{Q} \rvert}$ | 0.0 |
| Peak difference | PDIFF | $\mathrm{PDIFF} = \max(Q_i) - \max(\hat{Q}_i)$ | 0.0 |
| Per cent error in peak | PEP | $\mathrm{PEP} = \frac{\max(Q_i) - \max(\hat{Q}_i)}{\max(\hat{Q}_i)} \times 100$ | 0.0 |
| Root mean squared error | RMSE | $\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{n} (Q_i - \hat{Q}_i)^2}{n}}$ | 0.0 |
| Coefficient of determination | R² | $R^2 = \left( \frac{\sum_{i=1}^{n} (Q_i - \bar{Q})(\hat{Q}_i - \tilde{Q})}{\sqrt{\sum_{i=1}^{n} (Q_i - \bar{Q})^2 \sum_{i=1}^{n} (\hat{Q}_i - \tilde{Q})^2}} \right)^2$ | 1.0 |

¹ Nomenclature of the above formulas: $n$ = number of data points; $Q_i$ = observed value; $\hat{Q}_i$ = ANN value prediction; $\bar{Q}$ = mean of the observed data points; $\tilde{Q}$ = mean of the values predicted by the ANN.
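For transparency, the metrics of Table 7 can be reproduced with a few lines of NumPy. The helper below is an illustrative sketch written for this purpose (the function name is ours, not the authors'); it follows the formulas above, including the squared-correlation form of R².

```python
import numpy as np

def evaluate(q_obs, q_pred):
    """Compute the Table 7 metrics for one test series (illustrative helper)."""
    q_obs, q_pred = np.asarray(q_obs, dtype=float), np.asarray(q_pred, dtype=float)
    err = q_obs - q_pred
    ame = np.max(np.abs(err))                                         # absolute maximum error
    mae = np.mean(np.abs(err))                                        # mean absolute error
    rae = np.sum(np.abs(err)) / np.sum(np.abs(q_obs - q_obs.mean()))  # relative absolute error
    pdiff = np.max(q_obs) - np.max(q_pred)                            # peak difference
    pep = pdiff / np.max(q_pred) * 100                                # per cent error in peak
    rmse = np.sqrt(np.mean(err ** 2))                                 # root mean squared error
    # R^2 as the squared correlation between the observed and predicted series
    num = np.sum((q_obs - q_obs.mean()) * (q_pred - q_pred.mean()))
    den = np.sqrt(np.sum((q_obs - q_obs.mean()) ** 2) * np.sum((q_pred - q_pred.mean()) ** 2))
    r2 = (num / den) ** 2
    return {"AME": ame, "MAE": mae, "RAE": rae, "PDIFF": pdiff,
            "PEP": pep, "RMSE": rmse, "R2": r2}
```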
Table 8. Metric scores for ANN 1 in the testing phase.
| Wall Assembly | AME (°C) | MAE (°C) | RAE | PDIFF (°C) | PEP | RMSE (°C) | R² |
|---|---|---|---|---|---|---|---|
| WA1 | 1.80 | 0.56 | 2.33% | 1.80 | 1.7% | 0.75 | 0.9993 |
| WA2 | 1.63 | 0.78 | 2.78% | 1.55 | 1.4% | 0.88 | 0.9992 |
| WA3 | 3.03 | 1.10 | 4.62% | 2.55 | 2.2% | 1.29 | 0.9981 |
| WA4 | 2.48 | 0.83 | 2.60% | 1.39 | 1.2% | 1.05 | 0.9991 |
| WA5 | 1.69 | 0.45 | 1.42% | −0.65 | −0.5% | 0.55 | 0.9998 |
| WA6 | 1.67 | 0.64 | 1.97% | 0.70 | 0.6% | 0.78 | 0.9995 |
| Average | 2.05 | 0.73 | 2.62% | 1.22 | 1.1% | 0.88 | 0.9992 |
Table 9. Metric scores for ANN 2 in the testing phase.
| Wall Assembly | AME (°C) | MAE (°C) | RAE | PDIFF (°C) | PEP | RMSE (°C) | R² |
|---|---|---|---|---|---|---|---|
| WA1 | 8.45 | 2.69 | 11.3% | 6.76 | 6.27% | 3.45 | 0.9844 |
| WA2 | 11.44 | 3.85 | 13.7% | 2.83 | 2.51% | 5.07 | 0.9741 |
| WA3 | 32.04 | 5.06 | 21.3% | 2.34 | 2.03% | 9.74 | 0.8921 |
| WA4 | 22.88 | 6.87 | 21.6% | 11.85 | 10.37% | 8.84 | 0.9363 |
| WA5 | 13.09 | 4.86 | 15.3% | 3.90 | 2.97% | 6.25 | 0.9702 |
| WA6 | 9.96 | 5.73 | 17.7% | −0.30 | −0.24% | 6.57 | 0.9668 |
| Average | 16.31 | 4.85 | 16.82% | 4.56 | 4.0% | 6.65 | 0.9540 |
Table 10. Metric scores for ANN 3 in the testing phase.
| Wall Assembly | AME (°C) | MAE (°C) | RAE | PDIFF (°C) | PEP | RMSE (°C) | R² |
|---|---|---|---|---|---|---|---|
| WA1 | 28.46 | 10.87 | 45.59% | 22.58 | 20.95% | 15.05 | 0.7023 |
| WA2 | 45.00 | 16.21 | 57.80% | 45.00 | 39.93% | 22.55 | 0.4870 |
| WA3 | 41.09 | 9.64 | 40.57% | 27.81 | 24.18% | 16.01 | 0.7081 |
| WA4 | 48.84 | 15.12 | 47.49% | 27.71 | 24.25% | 22.35 | 0.5922 |
| WA5 | 28.77 | 11.37 | 35.85% | 14.56 | 11.07% | 15.06 | 0.8271 |
| WA6 | 30.34 | 14.99 | 46.29% | 24.19 | 19.55% | 19.18 | 0.7171 |
| Average | 37.08 | 13.03 | 45.60% | 26.98 | 23.3% | 18.37 | 0.6723 |
Table 11. Metric scores for ANN 4 in the testing phase.
| Wall Assembly | AME (°C) | MAE (°C) | RAE | PDIFF (°C) | PEP | RMSE (°C) | R² |
|---|---|---|---|---|---|---|---|
| WA1 | 18.13 | 8.36 | 35.08% | 18.13 | 16.82% | 10.18 | 0.8639 |
| WA2 | 22.03 | 10.46 | 37.30% | 14.55 | 12.92% | 12.39 | 0.8451 |
| WA3 | 29.39 | 9.82 | 41.35% | 28.83 | 25.06% | 13.25 | 0.8002 |
| WA4 | 33.29 | 15.44 | 48.49% | 29.33 | 25.66% | 19.49 | 0.6898 |
| WA5 | 31.66 | 8.25 | 26.01% | 31.65 | 24.06% | 12.56 | 0.8797 |
| WA6 | 22.70 | 7.91 | 24.42% | 22.70 | 18.34% | 10.58 | 0.9140 |
| Average | 26.20 | 10.04 | 35.44% | 24.20 | 20.5% | 13.07 | 0.8321 |
Table 12. Final metric scores for each ANN following the completion of their training with their respective training dataset.
| Neural Network | Loss (MSE) |
|---|---|
| ANN 1 | 5.0840 × 10⁻⁴ |
| ANN 2 | 5.0721 × 10⁻⁴ |
| ANN 3 | 2.1836 × 10⁻⁴ |
| ANN 4 | 2.4811 × 10⁻⁴ |
Table 13. Metric scores for ANN 1, as the dominant network, on unknown data sets (wall test samples).
| Test Sample | AME (°C) | MAE (°C) | RAE | PDIFF (°C) | PEP | RMSE (°C) | R² |
|---|---|---|---|---|---|---|---|
| TS1 | 3.83 | 0.90 | 1.89% | −0.44 | −0.28% | 1.20 | 0.9995 |
| TS2 | 4.26 | 1.93 | 3.72% | 3.72 | 2.03% | 2.48 | 0.9981 |
| TS3 | 6.06 | 2.08 | 6.37% | 4.42 | 3.30% | 2.73 | 0.9946 |
| TS4 | 11.42 | 3.80 | 26.50% | 11.42 | 15.04% | 5.36 | 0.8990 |
| TS5 | 12.43 | 4.16 | 10.32% | −4.02 | −2.70% | 5.81 | 0.9832 |
| TS6 | 4.13 | 1.92 | 4.64% | 4.03 | 2.66% | 2.25 | 0.9976 |
| Average | 7.02 | 2.47 | 8.91% | 3.19 | 3.3% | 3.31 | 0.9787 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
