Article

Application of Artificial Neural Networks in Predicting the Thermal Performance of Heat Pipes

by
Thomas Siqueira Pereira
1,†,
Pedro Leineker Ochoski Machado
2,†,
Barbara Dora Ross Veitia
3,†,
Felipe Mercês Biglia
2,†,
Paulo Henrique Dias dos Santos
4,†,
Yara de Souza Tadano
1,†,
Hugo Valadares Siqueira
3,† and
Thiago Antonini Alves
1,2,*,†
1
Graduate Program in Mechanical Engineering, Federal University of Technology—Parana, Ponta Grossa 84017-220, Brazil
2
Graduate Program in Mechanical and Materials Engineering, Federal University of Technology—Parana, Curitiba 81280-340, Brazil
3
Graduate Program in Industrial Engineering, Federal University of Technology—Parana, Ponta Grossa 84017-220, Brazil
4
Mechanical Engineering Department, Federal University of Paraiba, Joao Pessoa 58051-900, Brazil
*
Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Energies 2024, 17(21), 5387; https://doi.org/10.3390/en17215387
Submission received: 19 September 2024 / Revised: 18 October 2024 / Accepted: 26 October 2024 / Published: 29 October 2024
(This article belongs to the Section J: Thermal Management)

Abstract

The loss of energy as heat is a common problem in almost all areas of industry, and heat pipes are essential for increasing efficiency and reducing energy waste. In many cases, however, their theoretical models are complex and carry high percentages of error, limiting their development and creating a dependence on empirical methods that waste time and material, resulting in significant expense and reducing the viability of their use. Artificial Neural Networks (ANNs) can therefore be an excellent option to facilitate the construction and development of heat pipes without requiring knowledge of the complex theory behind the problem. This investigation uses experimental data from previous studies to evaluate the ability of three different ANNs to predict the thermal performance of heat pipes with different capillary structures, each in various configurations of slope, filling ratio, and heat load. The goal is to examine results in as many different scenarios as possible to clearly understand the networks’ capacity for modeling heat pipes and their operating parameters. We chose two classic ANNs (the most widely used, the Multilayer Perceptron (MLP) network, and the Radial Basis Function (RBF) network) and the Extreme Learning Machine (ELM), which had not yet been applied to heat pipe studies. The ELM is an Unorganized Machine with a fast training process and a simple implementation. The ANN results were very close to the experimental ones, showing that ANNs can successfully simulate the thermal performance of heat pipes. Based on the RMSE (the error metric minimized during the training step), the ELM presented the best results (RMSE = 0.384), followed by the MLP (RMSE = 0.409), proving their capacity to generalize the problem. These results show the importance of applying different ANNs to evaluate the system in depth. Using ANNs in the development of heat pipes is an excellent option for accelerating and improving the project phase, reducing the loss of material, time, and other resources.

1. Introduction

In response to the rising global energy demand driven by modernization and industrialization, improving the efficiency of energy systems is critical [1]. A significant amount of energy is lost as heat during various transformation processes, creating an opportunity for technologies that can recapture and utilize this wasted heat [2]. Heat exchangers play a crucial role in this process, and enhancing their performance improves energy efficiency in industrial applications [3]. Heat pipes have shown considerable promise among these technologies due to their high thermal conductivity and capacity to transport heat efficiently [4].
Numerous experimental and numerical studies have been conducted to analyze how operating parameters affect the thermal resistance of heat pipes. These studies have examined factors such as geometric dimensions (diameter and length) [5,6,7,8], the filling ratio (the ratio between the working fluid volume and the evaporator) [9,10,11,12], the working slope [13,14,15], the type of working fluid [16,17], and the different wick structures used [18,19].
Even with the significant development of heat pipes, their theoretical modeling retains considerable complexity, mainly arising from the convective and phase-change characteristics of the processes involved, often limiting the feasibility of using these devices [20,21,22]. Mathematical/numerical models that predict the temperature distribution of heat pipes can deviate from experimental results by 4% [23] to 30 K [24,25]. For CFD simulations, the predicted wall temperature can differ by 11.1 to 37.7 K [26], and the heat flux can show an average relative difference of up to 59.9% [27].
Recent advancements have incorporated Artificial Neural Networks (ANNs) into the modeling and optimization of heat pipe systems, allowing for more precise predictions and control over performance [28]. Table 1 summarizes the application of ANNs in predicting and optimizing the thermal performance of heat pipes. More studies on the application of ANNs in heat pipe modeling and optimization can be found in [29]. The ANNs in Table 1 are listed under the names used in each study; note, however, that several of these names (MFFNN, backpropagation, three-layered backpropagation) refer to the same ANN, the MLP.
The literature review presented on the use of ANNs in developing and evaluating heat pipes and other heat exchangers demonstrates a growing interest in this type of study, much of it with good results. Still, many works are in initial phases and do not delve into the possibilities of different networks and variations in heat pipes and their operating parameters. Furthermore, many studies exhibit issues such as limited transparency about the algorithm used, little explanation of the error metrics and their meaning, and few variations in algorithms and heat pipe types.
In this context, this work seeks a deeper assessment of the variation in results when different ANNs are used to evaluate heat pipes with different configurations and capillary structures, as well as to verify the networks’ ability to predict the thermal behavior of heat pipes before they are built. With these results, we hope to obtain a clearer view of the usability of neural networks in this type of problem. Considering the complexity of modeling the behavior of heat pipes before their construction, ANNs could be used to improve the design of such equipment and avoid rework and the loss of material during the manufacturing of these devices.
In this study, three Artificial Neural Networks (ANNs) were selected: the most commonly used, the Multilayer Perceptron (MLP) network (as shown in Table 1), the Radial Basis Function (RBF) network, and the Extreme Learning Machine (ELM). The first two are classical ANNs, while the third is an Unorganized Machine (UM) that features a fast training process and simple implementation. In terms of novelty, this study highlights the use of ELM networks to predict the thermal performance of heat pipes for the first time. Additionally, this work uses the working slope, filling ratio, and heat load as inputs for the ANN, a combination that has only been applied to wickless heat pipes (thermosyphons). As shown in Table 1, only one study has investigated different capillary structures using neural networks; this has also been investigated in this study.

2. Methodology

This section provides information on the theoretical background on heat pipes and the experimental data regarding the performance of heat pipes, which serve as the database for the application of neural networks, as well as a description of the ANNs used in this research.

2.1. Heat Pipes

Heat pipes are heat exchangers capable of transferring large amounts of heat, even with small differences in temperatures and without any external pumping. These devices are usually made from metal tubes with internal capillary structures. The tubes are subsequently evacuated, filled with working fluid, and sealed. The resulting tube has a controlled pressure, allowing the working fluid to change phase easily. The operation of heat pipes begins with the heat transfer between the working fluid and the heat source, which causes the working fluid to change from liquid to vapor and carry the heat toward the cold source. Then, the working fluid loses heat and changes to a liquid phase, restarting the thermodynamic cycle [45]. The pumping of the working fluid occurs through the phenomenon of capillarity and can vary depending on the type of capillary structure added to the heat pipe. This structure is of great importance, as it is mainly responsible for fluid movement within the equipment and allows the heat pipe to operate with the equipment in different positions and microgravity conditions [46].
Even though it is a versatile component and can be built in different sizes and shapes, the operating principle of a heat pipe is the same [47]. As shown in Figure 1, a heat pipe can be divided into three different parts: evaporator, adiabatic section, and condenser. The evaporator region remains in contact with the heat source. In contrast, the condenser region transfers heat to the cold source. The adiabatic section does not exchange heat and is kept isolated from the environment. In some cases, the adiabatic section may not be present. More detailed information about the operation of heat pipes can be found in [48,49,50].

2.2. Database

The database is composed of data previously obtained in the works of [51,52,53]. All authors employed similar procedures, differing only in the wick structures of the heat pipes. This section describes the heat pipes’ characteristics, the parameters’ range, the experimental apparatus, and the experimental procedure.

2.2.1. Characteristics of the Heat Pipes

Krambeck [51], Nishida et al. [52], and Krambeck et al. [53] used heat pipes with the same characteristics, as seen in Table 2. The manufacturing methodology (cleaning and assembly of the parts, the tightness test, the evacuation procedure, and filling with the working fluid), as well as the experimental design and data analysis, was based on the information provided in [54].
Krambeck [51] experimentally evaluated metal mesh screens composed of phosphor bronze using three different types of mesh. From that study, this research used the results for the heat pipe with a single layer of #100 mesh screen (Figure 2a).
Nishida et al. [52] presented data regarding heat pipes with capillary structures of axial microgrooves fabricated by wire electrical discharge machining (wire-EDM) directly into the copper tube. In their work, data were obtained for three different variations of the capillary structure, varying the depth and distance between the microgrooves. For the database of this work, the results were obtained using microgrooves with a thickness of 0.035 mm and a depth of 0.030 mm (Figure 2b).
Krambeck et al. [53] used sintered copper powder capillary structures in heat pipes. The sintered capillary structure used in the study was manufactured from copper powder obtained through gas atomization. The material was then sintered, producing structures with three different internal diameters. For this work, data were obtained for the structure with an internal diameter of 2.125 mm (Figure 2c).

2.2.2. Experimental Analysis

Figure 3 presents the apparatus used in the experimental investigations, which was composed of a Keysight™ 34970A data acquisition system with a Keysight™ 34901A 20-channel multiplexer, a Keysight™ U8002A power supply unit, an uninterruptible power supply, an Ultrar™ fan, a universal support, and a Dell™ microcomputer. Omega Engineering™ K-type thermocouples were used to evaluate the thermal performance of the different heat pipes.
Figure 4 presents a schematic diagram of the experimental apparatus, which consists of a heat pipe equipped with thermocouples in its various operating regions, thermal insulation, a ribbon resistor connected to a power supply unit to heat the evaporator region, a fan for air cooling in the condenser region, a data acquisition system, and a microcomputer.
The heat pipe slope and filling ratio were varied for each capillary structure used to evaluate different configurations. The dissipated power was also varied, starting from 5 W and increasing in steps of 5 W to a maximum of 50 W or until reaching the critical temperature of 150 °C, a safety limit for heat pipes sealed with tin, which has a low melting temperature. Each heat load was maintained for about 15 min to attain steady-state temperatures. The experimental uncertainties are associated with the thermocouples, the data logger, and the power supply. The experimental temperature uncertainty is estimated at approximately ±1.27 °C and that of the thermal load at ±1%. Table 3 shows a summary of the configurations used in the experimental procedure.

2.2.3. Data Reduction

The parameter used to evaluate the thermal performance of heat pipes is their thermal resistance, which can be defined as the ratio between the temperature drop through the device and the power dissipated in the evaporator [55]. It can be calculated using Equation (1).
$$ R_{th} = \frac{T_{evap} - T_{cond}}{q_{in}} \quad (1) $$
where $T_{evap}$ and $T_{cond}$ are the average temperatures of the evaporator and condenser, respectively, $R_{th}$ is the thermal resistance, and $q_{in}$ is the dissipated power.
The complete database compilation can be viewed in Appendix A, Table A1.
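As a minimal illustration, Equation (1) can be computed directly; the function name and example values below are ours, not taken from the experimental database:

```python
def thermal_resistance(t_evap, t_cond, q_in):
    """Equation (1): thermal resistance (K/W) from the average
    evaporator and condenser temperatures and the dissipated power."""
    if q_in <= 0:
        raise ValueError("dissipated power must be positive")
    return (t_evap - t_cond) / q_in

# Example: evaporator averaging 80 degC, condenser 40 degC, and a
# 20 W heat load give R_th = 2.0 K/W.
r_th = thermal_resistance(80.0, 40.0, 20.0)
```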

2.3. Artificial Neural Networks

This section describes the Artificial Neural Networks (ANNs) used in this investigation and their operating principles. The three networks used are the Multilayer Perceptron (MLP) network, the Radial Basis Function (RBF) network, and the Extreme Learning Machine (ELM) network.
Different networks can often produce very different results for the same problem. Therefore, it is preferable to evaluate several networks to obtain a clear view of the ANN application. The MLP and RBF networks were selected because they are widely known universal approximators with two different approaches to solving the problem [56]. The ELM, on the other hand, uses an analytical training procedure that differs from the others; despite being a relatively new network, it has demonstrated good results in several problems. Each of the proposed networks was programmed in Python using the well-developed open-source machine learning libraries TensorFlow and Keras, following the literature and theory presented in this section.
ANNs are computational models inspired by the nervous system of higher organisms. These algorithms are formed by connecting small modules, usually called neurons, which are mathematical expressions capable of processing information nonlinearly and connecting and communicating with other neurons to form structures like the one represented in Figure 5 [57].
Even though there are many variations of ANNs, they are usually divided into interconnected layers that receive data from an input layer and perform predetermined mathematical operations to obtain new values that will be transmitted to the following layers until, finally, the results from the output layer are obtained [58]. Usually, at least one input layer and one output layer are used in addition to one or more hidden layers, which are located between the input and output layers and do not connect with the outside directly [59]. The direction of communication is usually from the input layer towards the output layer (black arrows in Figure 5), but in some cases, there may be data feedback, and the information flows in the opposite direction or between neurons on the same layer (blue arrows in Figure 5).
As an advantage, these networks can map problems in a nonlinear way, obtaining good results even when using data with interference and without needing a physical analysis of the problem. Some of the negative points are related to the iterative nature of the training process, the dependence on the quality of the database, and the difficulty in adjusting the network parameters, which can be costly depending on the problem and the database used [60]. Additionally, these algorithms have a certain complexity when compared to methods such as linear models [61].
Several ANN models have been designed to solve a wide range of problems related to learning and pattern recognition in the most diverse situations. Currently, these neural networks are used, for example, to estimate health risks related to air pollution [62,63,64,65,66,67], predict the useful life of materials subject to fatigue [68], forecast long-term energy demand [69], predict power curves for wind turbines [70], forecast time series [71,72,73,74,75], and forecast oil prices [76], among many other applications.

2.3.1. Multilayer Perceptron

The Multilayer Perceptron (MLP) is one of the most widely used ANN architectures. It can be defined as a feedforward multilayer network with one or more hidden (intermediate) layers in addition to one input and one output layer. In most cases, the numbers of neurons in the input and output layers are defined by the problem’s format and are usually equal, respectively, to the number of inputs and outputs of the network. The number of neurons in the intermediate layers directly impacts the mapping quality of the MLP network: too few neurons can lead to an insufficient approximation of the desired function, generating high errors, while an excessive number of neurons can lead to another problem, overfitting. In this case, the network reduces its error relative to the training group but has a lower generalization capacity, that is, a lower ability to predict the behavior of new data, as it adapts excessively to the specific training group [56].
Even though there are approximations for defining the number of neurons in the hidden layer, they usually do not take the type of problem into account and, in some cases, may not lead to the best results; it is therefore often preferable to check a large range of neuron counts using a grid search. This method is limited, for each network, by the processing power required for the many tests performed. Thus, depending on the needs of each network, a different range of neuron counts can be tested. For the MLP, the grid search on the hidden layer ranged from 3 to 200 neurons.
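The grid search described above can be sketched as a simple loop; `train_fn` and `score_fn` are hypothetical placeholders for the actual training and validation routines, which are not detailed in the text:

```python
import math

def grid_search_neurons(train_fn, score_fn, n_min=3, n_max=200):
    # Train one candidate model per hidden-layer size and keep the size
    # whose validation score (e.g. RMSE) is lowest.
    best_n, best_score, best_model = None, math.inf, None
    for n in range(n_min, n_max + 1):
        model = train_fn(n)       # train an ANN with n hidden neurons
        score = score_fn(model)   # evaluate it on held-out data
        if score < best_score:
            best_n, best_score, best_model = n, score, model
    return best_n, best_score, best_model

# Toy stand-in: the "model" is just the neuron count and the score is
# minimized at n = 42, so the search should select 42.
best_n, best_score, _ = grid_search_neurons(lambda n: n, lambda m: (m - 42) ** 2)
```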
Each neuron in the input layer receives one of the data points applied to the network as an input. Each hidden layer neuron will typically receive all data from the previous layer, multiplied by its respective connection weight. These values are then added together with the bias value, which can be considered an input with value 1. The sum of the values is then applied to an activation function. Different functions, such as the hyperbolic tangent or the sigmoid function, can be used. The activation function’s resulting value is the neuron’s output, which is then passed to the next layer. For some functions, the network inputs must be normalized within the function’s valid range [59].
Several algorithms have been developed for MLP training. Among them, the most used and well-known is the Backpropagation Algorithm, which is based on the error correction learning rule and consists of two phases: (a) propagation: input data are applied to the network input, propagating through the following layers and producing a set of outputs. In this step, there is no change in weights; (b) backpropagation: the response obtained in the propagation step is used together with the known output data to produce an error signal, which is then backpropagated through the network and used to modify the weights. According to Haykin [56], the propagation step of the output of each neuron can be represented by Equation (2):
$$ y_j(n) = \sum_{i=0}^{m} w_{ji}(n)\, y_i(n) \quad (2) $$
where $w_{ji}$ is the weight that connects the output of index $i$ to the input of index $j$, $y_i(n)$ is the output of neuron $i$, and $m$ is the number of neurons in the previous layer. For the first hidden layer, the value of $y_i(n)$ is the same as that of the input $x_i(n)$.
During backpropagation, the values of the weights $w_{ji}(n)$ are modified in a supervised way (with a group of already known data as a reference). The new weight for each connection is found by applying a correction $\Delta w$ to the current weight so as to minimize the error.
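A minimal sketch of one neuron's forward pass (Equation (2)) and an illustrative delta-rule weight correction, assuming a hyperbolic tangent activation; the function names and values are ours, not the paper's implementation:

```python
import numpy as np

def neuron_forward(y_prev, w):
    # Equation (2): weighted sum over the previous layer's outputs;
    # the bias is folded in as w[0] acting on a constant input of 1.
    v = float(np.dot(w, y_prev))
    return np.tanh(v)             # hyperbolic tangent activation

def delta_rule_update(w, y_prev, delta, eta=0.1):
    # Backpropagation step for a single neuron: apply the correction
    # Delta_w = eta * delta * y_prev to the current weights.
    return w + eta * delta * y_prev

# One forward pass through a neuron with three inputs plus a bias.
y_prev = np.array([1.0, 0.5, -0.2, 0.8])   # [bias input, x1, x2, x3]
w = np.array([0.1, 0.4, 0.3, -0.2])
out = neuron_forward(y_prev, w)            # tanh(0.08)
```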

2.3.2. Radial Basis Function Network

A Radial Basis Function network, or RBF, is a neural network with two layers. While its output layer is analogous to that of the MLP, its only hidden layer receives the input data directly and applies a nonlinear transformation from the input space to a higher-dimensional space. The name of this Artificial Neural Network comes from the type of nonlinear activation function used, usually a Gaussian function, which results in decision boundaries of elliptical shape in a two-dimensional plane [56].
Training an RBF requires different treatments for the hidden and output layers. During the first part, it is necessary to find the properties of each neuron in the hidden layer, which uses a Gaussian function as an activation function. The Gaussian function can be written as follows:
$$ \varphi(x) = a\, e^{-\frac{\|x - c_j\|^2}{2\sigma^2}} \quad (3) $$
where
$$ \|x - c_j\|^2 = \sum_{i=0}^{3} (x_i - c_i)^2 \quad (4) $$
and where $c_i$ is the center closest to the input $i$, defined in the training phase.
The parameter $a$ can be viewed as the height of the function’s peak, the parameter $c$ represents the position of the peak (the center) of the Gaussian function, and $\sigma$ is the standard deviation. The positions of the centers are usually found using algorithms such as PSO and k-means; the dispersions can differ for each centroid but are usually given by a single fixed value found by the following function:
$$ \sigma = \frac{d_{max}}{\sqrt{2m}} \quad (5) $$
where $m$ is the number of centers used, which comes from the number of neurons in the hidden layer, and $d_{max}$ is the maximum distance between two centers.
The training of the hidden layer of an RBF is performed in an unsupervised manner. That is, it uses only the input data with no relation to the respective output data. This part of the training aims to find groupings of these data in space. The positioning of the Gaussian functions carried out in the first part of the training is done to acceptably separate the clusters found [59,77].
In the first stage of training, the centroids are initialized randomly for each neuron in the hidden layer and then located iteratively using algorithms such as PSO and k-means, after which the radii of the Gaussian functions are found. Once this first stage is complete, the input weights of the output layer must be found, either in a manner analogous to the MLP or using the Moore–Penrose pseudoinverse method, which can be defined by the following equation:
$$ A^{\dagger} = (A^{*}A)^{-1}A^{*} \quad (6) $$
where $A^{\dagger}$ is the pseudoinverse and $A^{*}$ is the conjugate transpose (adjoint matrix) of $A$.
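The pieces of the RBF hidden layer described above can be sketched as follows; the function names are ours, and note that for real-valued matrices the adjoint reduces to the transpose, while `np.linalg.pinv` computes the Moore–Penrose pseudoinverse via SVD rather than through the explicit product:

```python
import numpy as np

def dispersion(centers):
    # Equation (5): sigma = d_max / sqrt(2 m), with d_max the largest
    # distance between any two of the m centers.
    m = len(centers)
    d_max = max(np.linalg.norm(a - b) for a in centers for b in centers)
    return d_max / np.sqrt(2 * m)

def rbf_hidden(x, centers, sigma, a=1.0):
    # Equations (3)-(4): Gaussian activation of each hidden neuron.
    d2 = np.array([np.sum((x - c) ** 2) for c in centers])
    return a * np.exp(-d2 / (2 * sigma ** 2))

def output_weights(Phi, Y):
    # Output layer solved with the Moore-Penrose pseudoinverse.
    return np.linalg.pinv(Phi) @ Y

# Two centers one unit apart: sigma = 1 / sqrt(4) = 0.5, and an input
# sitting exactly on a center activates that neuron fully (phi = 1).
centers = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]
sigma = dispersion(centers)
act = rbf_hidden(np.array([0.0, 0.0]), centers, sigma)
```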
Similar to the MLP, the number of neurons in the hidden layer of the RBF was determined using a grid search. Due to the greater computational demand, fewer neuron numbers were tested for this network, starting at 3 neurons and going up to 150.

2.3.3. Extreme Learning Machines

The Extreme Learning Machine (ELM), an Unorganized Machine (UM), is a learning algorithm proposed in [78] for feedforward networks with a single hidden layer. It uses constant random weights in the intermediate layer and an analytical method to determine the weights of the output layer, with no need for iterative methods based on gradient descent. The advantage of this method over other ANNs is its training speed, which, according to [79], can be thousands of times faster than training via backpropagation, in addition to avoiding several other problems, such as convergence to local minima and overfitting. The most significant difference in ELM training is that the hidden layer is not adjusted; only the output layer is, which speeds up the training process.
Training begins with the creation of the matrix W of random hidden-layer weights. This matrix, which is not modified during training, is multiplied by the data set matrix X to generate a matrix J containing the calculated values, which are then applied to the activation function, generating matrix H.
$$ W = \begin{bmatrix} w_{11} & \cdots & w_{1d} \\ \vdots & \ddots & \vdots \\ w_{m1} & \cdots & w_{md} \\ b_1 & \cdots & b_d \end{bmatrix} \quad (7) $$

$$ X = \begin{bmatrix} x_1^1 & \cdots & x_m^1 & 1 \\ \vdots & & \vdots & \vdots \\ x_1^i & \cdots & x_m^i & 1 \end{bmatrix} \quad (8) $$

$$ J = X W \quad (9) $$

$$ H = \begin{bmatrix} f(J_{11}) & \cdots & f(J_{1d}) \\ \vdots & \ddots & \vdots \\ f(J_{i1}) & \cdots & f(J_{id}) \end{bmatrix} \quad (10) $$
where the index $i$ represents the number of training samples, $m$ the number of network inputs, $d$ the number of neurons in the hidden layer, and $b$ the bias of each neuron.
Having the value of Matrix H, the formulation for calculating the output of an ELM network is given by Equation (11):
$$ Y = H\beta \quad (11) $$
Y is the vector of desired outputs and can be expressed by the following:
$$ Y = [y_1, \ldots, y_n]^{T} \quad (12) $$
and β is the weight matrix that stores the training information, and can be obtained by solving the system in Equation (13):
$$ \beta = H^{\dagger} Y \quad (13) $$
where $H^{\dagger}$ is the generalized Moore–Penrose inverse of the matrix H, as in Equation (6).
Thus, Equation (11) is as follows:
$$ Y = H H^{\dagger} Y \quad (14) $$
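A compact sketch of ELM training under Equations (7)–(14), assuming a tanh activation; the function names and toy data are ours, not the paper's implementation:

```python
import numpy as np

def elm_train(X, Y, d, seed=0):
    # Hidden weights and biases are drawn at random and never adjusted;
    # only the output weights beta are computed, analytically.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], d))   # hidden weights (Eq. (7))
    b = rng.standard_normal(d)                 # hidden biases
    H = np.tanh(X @ W + b)                     # Equations (9)-(10), f = tanh
    beta = np.linalg.pinv(H) @ Y               # Equation (13): beta = H^+ Y
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta           # Equation (11): Y = H beta

# Toy data: 10 samples with 2 inputs; with enough hidden neurons the
# network reproduces the training targets almost exactly, since
# H H^+ Y ~= Y (Equation (14)).
X = np.linspace(0.0, 1.0, 20).reshape(10, 2)
Y = np.sin(X[:, 0]) + X[:, 1] ** 2
W, b, beta = elm_train(X, Y, d=30)
```

Note that `np.linalg.pinv` performs the Moore–Penrose inversion via SVD, so no iterative, gradient-based loop is needed, which is the source of the ELM's training speed.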
Like the other networks, the number of neurons in the hidden layer also had to be determined. As the ELM has a faster training process, its search ranged from 3 to 300 neurons.

3. Results

This section presents the results obtained during this study, their meaning, and the evaluation metrics used.
The literature rarely compares values predicted by theoretical equations with experimental values for heat pipes. Thus, a threshold derived from experimental practice is used as a basis for evaluating the results obtained. The value adopted to define an acceptable result is a Mean Absolute Percentage Error (MAPE) of 30% (Equation (15)); higher values represent a variation in the expected thermal resistance that generates significant losses from an experimental point of view.
$$ \mathrm{MAPE} = \frac{1}{N} \sum_{t=1}^{N} \left| \frac{d_t - y_t}{d_t} \right| \times 100 \quad (15) $$
In addition to the MAPE, we also used the Mean Absolute Error (MAE), the average of the absolute errors (Equation (16)), and the Root Mean Square Error (RMSE), which is similar to the MAE but penalizes larger absolute errors more heavily (Equation (17)).
$$ \mathrm{MAE} = \frac{1}{N} \sum_{t=1}^{N} |d_t - y_t| \quad (16) $$

$$ \mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{t=1}^{N} (d_t - y_t)^2} \quad (17) $$
where $d_t$ represents the experimental output from the database, $y_t$ is the output of the neural networks, and $N$ is the number of data points used.
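The three metrics can be written directly from their definitions; the sample values below are illustrative, not taken from the database:

```python
import numpy as np

def mape(d, y):
    # Equation (15): mean absolute percentage error, in percent.
    return float(np.mean(np.abs((d - y) / d)) * 100)

def mae(d, y):
    # Equation (16): mean absolute error, in the units of d.
    return float(np.mean(np.abs(d - y)))

def rmse(d, y):
    # Equation (17): like the MAE, but squaring penalizes large
    # individual errors more heavily.
    return float(np.sqrt(np.mean((d - y) ** 2)))

# Illustrative values: experimental thermal resistances d and
# hypothetical network predictions y (K/W).
d = np.array([2.0, 4.0, 5.0])
y = np.array([2.0, 3.0, 5.5])
```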
Different error assessment methods are important since, in many cases, one metric cannot clearly express the results. Using different methods to understand the values obtained is essential in these cases.
Table 4 presents the results for each ANN model used for the heat pipe database. The NN value represents the number of neurons in the hidden layer. The values were obtained from the average of errors between 30 independent tests.
The results show that the MLP and ELM networks generalize the problem better, generating consistent errors between tests and within the expected levels. The MLP network presents better results for this problem in terms of MAPE and MAE: its MAPE is around 45% lower than that of the ELM network, and its MAE around 15% smaller. The lowest RMSE was found for the ELM network, approximately 6% lower than that of the MLP network. The RBF network showed larger errors during the tests, with results of much lower quality than those of the other neural networks. It is important to highlight that the RMSE is the error metric minimized during training of the neural models [64,65,66,67]. Figure 6, Figure 7 and Figure 8 show boxplots of the MAPE, RMSE, and MAE, respectively, computed over the 30 simulations performed for each neural network.
Compared to the MLP network, the ELM network presents more dispersion in the MAPE (Figure 6). The RMSE values found for these two networks were very close, in both average and dispersion (Figure 7), while the MAE values are slightly better for the ELM network (Figure 8). Both neural networks produce results within expectations and with acceptable values for evaluating heat pipes, which usually work with significant design errors. The RBF network shows a large dispersion, as expected given its largest average error, indicating that this ANN could not correctly map the problem.
Figure 9 compares the thermal resistances of the heat pipes obtained experimentally and those obtained using ANNs. The thermal resistance represents the system resistance to heat flow and can be used to evaluate the heat pipe. The results obtained in the works of [51,52,53] were compared to those obtained by applying the heat pipe’s characteristics as inputs to the ANN.
In addition to generating good average error values, the neural networks generate consistent results for the individual values obtained, as shown in the box diagram. Most of the results obtained are within the 30% error range, in addition to being concentrated close to the central line, which represents the optimal values. In this case, both neural networks (ELM and MLP) have similar behaviors, although the ELM network presents some discrepant points (outliers). The RBF network, once again, presents the worst results, with many points outside the 30% error area.
The results demonstrate that using ANNs is viable, since two of the three networks show an evident ability to adapt to the problem, with results concentrated in the lowest-error areas of the graph. The RBF network, by contrast, has more difficulty adapting to the problem, and some hypotheses can be made to explain this behavior. Different neural networks sometimes behave differently on the same problem, because each ANN has its own method of reaching the result, which can cause divergences. Still, the evident gap between the RBF results and those of the other networks suggests additional causes. One of these is the RBF network’s dependence on the initial condition, usually caused by the unsupervised algorithm responsible for the first part of training. This dependence was alleviated in this work by using the PSO algorithm in conjunction with k-means. However, this approach is limited when the number of neurons increases significantly, since the number of PSO iterations and agents must increase at the same time, generating an exponential increase in processing time and limiting the simulations. Thus, the algorithms used become less efficient as the number of neurons grows, which may explain some of the results.
Given these problems, the RBF network can be considered inadequate for this problem. However, since the objective of this work was to find at least one neural network capable of successfully generalizing the problem, the results are still of great value: both the MLP and the ELM networks generated results below 30% MAPE.
Most of the highest percentage errors of all neural networks are concentrated in the region of lowest thermal resistance (between 0 and 1.2 K/W). This can largely be explained by the low values involved: in this range, variations in the environment and all types of errors, such as measurement errors, have a more significant influence on the system and the results obtained. The percentage errors in this region also correspond to smaller absolute errors, since a 30% error on a thermal resistance of 7 K/W represents a much larger absolute value than a 30% error on a thermal resistance of 0.5 K/W.

4. Conclusions

Finding the ideal operating configuration of a heat pipe for a specific application is difficult: it requires considerable experimental effort, material loss, and rework. The variation of several parameters, such as geometry, filling ratio, slope, working fluid, and thermal load, creates difficulties that limit traditional methods. As an alternative to reduce the experimental workload, computational methods such as Artificial Neural Networks (ANNs) are an important topic to evaluate. These algorithms are universal nonlinear mappers capable of approximating any function under certain conditions. Among them are many candidates still not widely explored in the literature. Some of these networks were used in this work to predict the thermal behavior of heat pipes with different capillary structures. The ELM and MLP networks presented good results, as expected, proving capable of generalizing the problem. Based on the RMSE, the ELM showed the best results (RMSE = 0.384), followed by the MLP (RMSE = 0.409); based on the MAPE, however, the MLP performed better (MAPE = 13.96%). This shows the importance of using different networks and error metrics to evaluate the system in depth. Therefore, ELM and MLP networks can be used to evaluate the thermal behavior of a heat pipe even before it is manufactured. The errors obtained for both databases are within acceptable values for the evaluated devices, which involve phase change and convection and therefore constitute complex systems. The results indicate that using ANNs to support the thermal design of heat pipes can be beneficial, increasing assertiveness in producing prototypes and reducing the rework and time needed to carry out studies.
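The ranking flip between RMSE and MAPE noted above is a general property of these metrics, easy to reproduce on toy numbers (the data below are illustrative, not the paper's): a model that errs on a small target value can have the lower RMSE yet the higher MAPE.

```python
import numpy as np

def mae(y, p):  return np.mean(np.abs(y - p))
def rmse(y, p): return np.sqrt(np.mean((y - p) ** 2))
def mape(y, p): return 100 * np.mean(np.abs((y - p) / y))

# Toy illustration: model A misses the small resistance (bad in relative
# terms), model B misses the large one (bad in absolute terms).
y_true = np.array([0.5, 5.0])
pred_a = np.array([0.7, 5.0])   # +0.2 on the small resistance
pred_b = np.array([0.5, 4.5])   # -0.5 on the large resistance

print(rmse(y_true, pred_a), mape(y_true, pred_a))  # lower RMSE, higher MAPE
print(rmse(y_true, pred_b), mape(y_true, pred_b))  # higher RMSE, lower MAPE
```

This is why reporting a single metric can be misleading and why both RMSE and MAPE are given for the ELM and MLP results.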

Author Contributions

Conceptualization, T.A.A., T.S.P. and Y.d.S.T.; methodology, H.V.S., T.A.A., T.S.P. and Y.d.S.T.; software, B.D.R.V., H.V.S. and T.S.P.; validation, P.H.D.d.S., T.A.A., T.S.P. and Y.d.S.T.; formal analysis, P.L.O.M. and T.A.A.; investigation, F.M.B., P.H.D.d.S., P.L.O.M., T.A.A. and T.S.P.; resources, T.A.A. and Y.d.S.T.; data curation, F.M.B., T.A.A. and T.S.P.; writing—original draft preparation, P.L.O.M. and T.S.P.; writing—review and editing, T.A.A. and Y.d.S.T.; visualization, B.D.R.V., F.M.B., P.L.O.M., T.A.A. and T.S.P.; supervision, T.A.A. and Y.d.S.T.; project administration, T.A.A.; funding acquisition, H.V.S., T.A.A. and Y.d.S.T. All authors have read and agreed to the published version of the manuscript.

Funding

This study was financed in part by the Coordination for the Improvement of Higher Education Personnel—Brazil (CAPES)—Finance Code 001. The authors thank the Brazilian National Council for Scientific and Technological Development (CNPq), process numbers 315298/2020-0, 409631/2021-3, and 312367/2022-8, and Araucária Foundation.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Database results.

Test | Type | Slope [°] | Filling Ratio [%] | qin [W] | Rth [K/W]
1 | Microgrooves | 0 | 60 | 5 | 5.66
2 | Microgrooves | 0 | 60 | 5 | 5.09
3 | Microgrooves | 0 | 60 | 5 | 1.22
4 | Microgrooves | 45 | 60 | 5 | 4.54
5 | Microgrooves | 45 | 60 | 5 | 1.66
6 | Microgrooves | 45 | 60 | 5 | 5.01
7 | Microgrooves | 90 | 60 | 5 | 5.10
8 | Microgrooves | 90 | 60 | 5 | 2.11
9 | Microgrooves | 90 | 60 | 5 | 4.79
10 | Microgrooves | 0 | 60 | 10 | 5.04
11 | Microgrooves | 0 | 60 | 10 | 5.44
12 | Microgrooves | 0 | 60 | 10 | 0.73
13 | Microgrooves | 45 | 60 | 10 | 5.23
14 | Microgrooves | 45 | 60 | 10 | 0.79
15 | Microgrooves | 45 | 60 | 10 | 4.70
16 | Microgrooves | 90 | 60 | 10 | 4.84
17 | Microgrooves | 90 | 60 | 10 | 0.92
18 | Microgrooves | 90 | 60 | 10 | 5.09
19 | Microgrooves | 0 | 60 | 15 | 4.07
20 | Microgrooves | 0 | 60 | 15 | 0.60
21 | Microgrooves | 0 | 60 | 15 | 4.16
22 | Microgrooves | 45 | 60 | 15 | 3.67
23 | Microgrooves | 45 | 60 | 15 | 3.63
24 | Microgrooves | 45 | 60 | 15 | 0.55
25 | Microgrooves | 90 | 60 | 15 | 3.68
26 | Microgrooves | 90 | 60 | 15 | 3.73
27 | Microgrooves | 90 | 60 | 15 | 0.62
28 | Microgrooves | 0 | 60 | 20 | 2.98
29 | Microgrooves | 0 | 60 | 20 | 0.55
30 | Microgrooves | 0 | 60 | 20 | 2.96
31 | Microgrooves | 45 | 60 | 20 | 2.55
32 | Microgrooves | 45 | 60 | 20 | 0.48
33 | Microgrooves | 45 | 60 | 20 | 2.55
34 | Microgrooves | 90 | 60 | 20 | 0.54
35 | Microgrooves | 90 | 60 | 20 | 2.60
36 | Microgrooves | 90 | 60 | 20 | 2.43
37 | Microgrooves | 0 | 60 | 25 | 2.14
38 | Microgrooves | 0 | 60 | 25 | 0.50
39 | Microgrooves | 0 | 60 | 25 | 2.12
40 | Microgrooves | 45 | 60 | 25 | 1.79
41 | Microgrooves | 45 | 60 | 25 | 0.43
42 | Microgrooves | 45 | 60 | 25 | 1.78
43 | Microgrooves | 90 | 60 | 25 | 0.46
44 | Microgrooves | 90 | 60 | 25 | 1.68
45 | Microgrooves | 90 | 60 | 25 | 1.84
46 | Microgrooves | 0 | 60 | 30 | 1.72
47 | Microgrooves | 0 | 60 | 30 | 0.49
48 | Microgrooves | 0 | 60 | 30 | 1.62
49 | Microgrooves | 45 | 60 | 30 | 0.40
50 | Microgrooves | 45 | 60 | 30 | 1.29
51 | Microgrooves | 45 | 60 | 30 | 1.36
52 | Microgrooves | 90 | 60 | 30 | 1.45
53 | Microgrooves | 90 | 60 | 30 | 1.27
54 | Microgrooves | 90 | 60 | 30 | 0.44
55 | Microgrooves | 0 | 60 | 35 | 0.49
56 | Microgrooves | 0 | 60 | 35 | 1.32
57 | Microgrooves | 45 | 60 | 35 | 0.37
58 | Microgrooves | 45 | 60 | 35 | 1.03
59 | Microgrooves | 45 | 60 | 35 | 1.05
60 | Microgrooves | 90 | 60 | 35 | 0.41
61 | Microgrooves | 90 | 60 | 35 | 1.08
62 | Microgrooves | 90 | 60 | 35 | 0.94
63 | Microgrooves | 0 | 60 | 40 | 0.44
64 | Microgrooves | 45 | 60 | 40 | 0.35
65 | Microgrooves | 45 | 60 | 40 | 0.75
66 | Microgrooves | 90 | 60 | 40 | 0.38
67 | Microgrooves | 90 | 60 | 40 | 0.77
68 | Microgrooves | 90 | 60 | 40 | 0.89
69 | Microgrooves | 45 | 60 | 45 | 0.34
70 | Microgrooves | 90 | 60 | 45 | 0.37
71 | Microgrooves | 45 | 60 | 50 | 0.33
72 | Screen mesh | 0 | 60 | 5 | 6.93
73 | Screen mesh | 45 | 60 | 5 | 7.13
74 | Screen mesh | 90 | 60 | 5 | 6.94
75 | Screen mesh | 0 | 60 | 5 | 2.63
76 | Screen mesh | 45 | 60 | 5 | 2.43
77 | Screen mesh | 90 | 60 | 5 | 2.65
78 | Screen mesh | 0 | 60 | 10 | 6.61
79 | Screen mesh | 45 | 60 | 10 | 6.39
80 | Screen mesh | 90 | 60 | 10 | 6.44
81 | Screen mesh | 0 | 60 | 10 | 1.50
82 | Screen mesh | 45 | 60 | 10 | 1.28
83 | Screen mesh | 90 | 60 | 10 | 1.36
84 | Screen mesh | 0 | 60 | 15 | 1.16
85 | Screen mesh | 45 | 60 | 15 | 0.89
86 | Screen mesh | 90 | 60 | 15 | 0.95
87 | Screen mesh | 0 | 60 | 15 | 4.49
88 | Screen mesh | 45 | 60 | 15 | 4.42
89 | Screen mesh | 90 | 60 | 15 | 4.36
90 | Screen mesh | 0 | 60 | 20 | 0.99
91 | Screen mesh | 45 | 60 | 20 | 0.76
92 | Screen mesh | 90 | 60 | 20 | 0.80
93 | Screen mesh | 0 | 60 | 20 | 3.14
94 | Screen mesh | 45 | 60 | 20 | 3.13
95 | Screen mesh | 90 | 60 | 20 | 3.06
96 | Screen mesh | 0 | 60 | 25 | 2.42
97 | Screen mesh | 45 | 60 | 25 | 2.48
98 | Screen mesh | 90 | 60 | 25 | 2.38
99 | Screen mesh | 0 | 60 | 25 | 0.85
100 | Screen mesh | 45 | 60 | 25 | 0.66
101 | Screen mesh | 90 | 60 | 25 | 0.71
102 | Screen mesh | 0 | 60 | 30 | 0.80
103 | Screen mesh | 45 | 60 | 30 | 0.59
104 | Screen mesh | 90 | 60 | 30 | 0.64
105 | Screen mesh | 0 | 60 | 30 | 1.89
106 | Screen mesh | 45 | 60 | 30 | 1.99
107 | Screen mesh | 90 | 60 | 30 | 1.91
108 | Screen mesh | 0 | 60 | 35 | 0.70
109 | Screen mesh | 45 | 60 | 35 | 0.50
110 | Screen mesh | 90 | 60 | 35 | 0.55
111 | Screen mesh | 0 | 60 | 40 | 0.67
112 | Screen mesh | 45 | 60 | 40 | 0.46
113 | Screen mesh | 90 | 60 | 40 | 0.52
114 | Screen mesh | 45 | 60 | 45 | 0.42
115 | Screen mesh | 90 | 60 | 45 | 0.47
116 | Screen mesh | 45 | 60 | 50 | 0.42
117 | Sintered | 0 | 60 | 5 | 3.48
118 | Sintered | 0 | 100 | 5 | 5.28
119 | Sintered | 45 | 100 | 5 | 4.89
120 | Sintered | 45 | 60 | 5 | 3.28
121 | Sintered | 90 | 60 | 5 | 3.35
122 | Sintered | 90 | 100 | 5 | 4.73
123 | Sintered | 0 | 120 | 5 | 5.21
124 | Sintered | 0 | 80 | 5 | 2.94
125 | Sintered | 45 | 120 | 5 | 4.63
126 | Sintered | 45 | 80 | 5 | 3.06
127 | Sintered | 90 | 120 | 5 | 5.43
128 | Sintered | 90 | 80 | 5 | 2.93
129 | Sintered | 0 | 60 | 10 | 1.58
130 | Sintered | 0 | 100 | 10 | 2.60
131 | Sintered | 45 | 60 | 10 | 1.54
132 | Sintered | 45 | 100 | 10 | 2.41
133 | Sintered | 90 | 60 | 10 | 1.52
134 | Sintered | 90 | 100 | 10 | 2.21
135 | Sintered | 0 | 80 | 10 | 1.44
136 | Sintered | 0 | 120 | 10 | 2.65
137 | Sintered | 45 | 120 | 10 | 2.32
138 | Sintered | 45 | 80 | 10 | 1.49
139 | Sintered | 90 | 120 | 10 | 2.29
140 | Sintered | 90 | 80 | 10 | 1.43
141 | Sintered | 0 | 100 | 15 | 1.43
142 | Sintered | 0 | 60 | 15 | 1.03
143 | Sintered | 45 | 100 | 15 | 1.40
144 | Sintered | 45 | 60 | 15 | 1.00
145 | Sintered | 90 | 60 | 15 | 0.98
146 | Sintered | 90 | 100 | 15 | 1.38
147 | Sintered | 0 | 120 | 15 | 1.77
148 | Sintered | 0 | 80 | 15 | 1.12
149 | Sintered | 45 | 120 | 15 | 1.51
150 | Sintered | 45 | 80 | 15 | 1.02
151 | Sintered | 90 | 120 | 15 | 1.52
152 | Sintered | 90 | 80 | 15 | 1.01
153 | Sintered | 0 | 80 | 20 | 0.93
154 | Sintered | 0 | 60 | 20 | 0.81
155 | Sintered | 0 | 120 | 20 | 1.37
156 | Sintered | 0 | 100 | 20 | 1.24
157 | Sintered | 45 | 120 | 20 | 1.16
158 | Sintered | 45 | 100 | 20 | 1.04
159 | Sintered | 45 | 60 | 20 | 0.77
160 | Sintered | 45 | 80 | 20 | 0.78
161 | Sintered | 90 | 80 | 20 | 0.80
162 | Sintered | 90 | 120 | 20 | 1.18
163 | Sintered | 90 | 60 | 20 | 0.74
164 | Sintered | 90 | 100 | 20 | 1.05
165 | Sintered | 0 | 60 | 25 | 0.73
166 | Sintered | 0 | 120 | 25 | 1.17
167 | Sintered | 0 | 100 | 25 | 1.03
168 | Sintered | 0 | 80 | 25 | 0.81
169 | Sintered | 45 | 60 | 25 | 0.67
170 | Sintered | 45 | 80 | 25 | 0.69
171 | Sintered | 45 | 100 | 25 | 0.84
172 | Sintered | 45 | 120 | 25 | 0.97
173 | Sintered | 90 | 80 | 25 | 0.69
174 | Sintered | 90 | 100 | 25 | 0.84
175 | Sintered | 90 | 120 | 25 | 1.00
176 | Sintered | 90 | 60 | 25 | 0.64
177 | Sintered | 0 | 60 | 30 | 0.68
178 | Sintered | 0 | 100 | 30 | 0.96
179 | Sintered | 0 | 120 | 30 | 1.05
180 | Sintered | 0 | 80 | 30 | 0.74
181 | Sintered | 45 | 80 | 30 | 0.63
182 | Sintered | 45 | 60 | 30 | 0.62
183 | Sintered | 45 | 120 | 30 | 0.87
184 | Sintered | 45 | 100 | 30 | 0.71
185 | Sintered | 90 | 120 | 30 | 0.88
186 | Sintered | 90 | 80 | 30 | 0.63
187 | Sintered | 90 | 60 | 30 | 0.59
188 | Sintered | 90 | 100 | 30 | 0.73
189 | Sintered | 0 | 120 | 35 | 0.93
190 | Sintered | 45 | 120 | 35 | 0.79
191 | Sintered | 90 | 120 | 35 | 0.79
192 | Sintered | 0 | 80 | 35 | 0.70
193 | Sintered | 0 | 100 | 35 | 0.85
194 | Sintered | 45 | 80 | 35 | 0.59
195 | Sintered | 45 | 100 | 35 | 0.62
196 | Sintered | 90 | 100 | 35 | 0.65
197 | Sintered | 90 | 80 | 35 | 0.59
198 | Sintered | 0 | 60 | 35 | 0.65
199 | Sintered | 45 | 60 | 35 | 0.58
200 | Sintered | 90 | 60 | 35 | 0.53
201 | Sintered | 0 | 80 | 40 | 0.65
202 | Sintered | 45 | 80 | 40 | 0.55
203 | Sintered | 90 | 80 | 40 | 0.56
204 | Sintered | 0 | 100 | 40 | 0.78
205 | Sintered | 0 | 60 | 40 | 0.62
206 | Sintered | 0 | 120 | 40 | 0.91
207 | Sintered | 45 | 60 | 40 | 0.54
208 | Sintered | 45 | 100 | 40 | 0.56
209 | Sintered | 45 | 120 | 40 | 0.74
210 | Sintered | 90 | 60 | 40 | 0.50
211 | Sintered | 90 | 120 | 40 | 0.72
212 | Sintered | 90 | 100 | 40 | 0.62
213 | Sintered | 0 | 100 | 45 | 0.69
214 | Sintered | 45 | 80 | 45 | 0.52
215 | Sintered | 45 | 100 | 45 | 0.54
216 | Sintered | 90 | 60 | 45 | 0.47
217 | Sintered | 90 | 80 | 45 | 0.51
218 | Sintered | 90 | 100 | 45 | 0.60
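For readers who want to reuse these data, the table rows can be turned into network inputs along the following lines. This is a hypothetical sketch: the integer wick-type encoding and the four transcribed rows are illustrative, not the paper's preprocessing.

```python
import numpy as np

# A few rows transcribed from Table A1 (test, type, slope, filling ratio, qin, Rth);
# the full table has 218 entries.
rows = [
    (1,   "Microgrooves", 0,  60,  5,  5.66),
    (72,  "Screen mesh",  0,  60,  5,  6.93),
    (117, "Sintered",     0,  60,  5,  3.48),
    (218, "Sintered",     90, 100, 45, 0.60),
]

wick_code = {"Microgrooves": 0, "Screen mesh": 1, "Sintered": 2}  # assumed encoding

# Inputs: wick type, slope, filling ratio, heat load; target: thermal resistance
X = np.array([[wick_code[t], s, fr, q] for _, t, s, fr, q, _ in rows], dtype=float)
y = np.array([rth for *_, rth in rows])
```

Any such categorical encoding (integer codes here, one-hot being another option) should be fixed before training so that results remain comparable across networks.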

References

  1. International Energy Agency. Available online: https://www.iea.org/energy-system/energy-efficiency-and-demand/energy-efficiency (accessed on 15 May 2024).
  2. Cullen, J.M.; Allwood, J.M. Theoretical efficiency limits for energy conversion devices. Energy 2010, 35, 2059–2069. [Google Scholar] [CrossRef]
  3. Antonini Alves, T.; Altemani, C.A.C. An invariant descriptor for heaters temperature prediction in conjugate cooling. Int. J. Therm. Sci. 2012, 58, 92–101. [Google Scholar] [CrossRef]
  4. Krambeck, L.; Nishida, F.B.; Aguiar, V.M.; Santos, P.H.D.; Antonini Alves, T. Thermal performance evaluation of different passive devices for electronics cooling. Therm. Sci. 2019, 23, 1151–1160. [Google Scholar] [CrossRef]
  5. Seo, J.; Lee, J.Y. Length effect on entrainment limitation of vertical wickless heat pipe. Int. J. Heat Mass Transf. 2016, 101, 373–378. [Google Scholar] [CrossRef]
  6. Santos, P.H.D.; Antonini Alves, T.; Oliveira Junior, A.A.M.; Bazzo, E. Analysis of a flat capillary evaporator with a bi-layered porous wick. Therm. Sci. 2020, 24, 1951–1962. [Google Scholar] [CrossRef]
  7. Shen, C.; Zhang, Y.; Wang, Z.; Zhang, D.; Liu, Z. Experimental investigation on the heat transfer performance of a flat parallel flow heat pipe. Int. J. Heat Mass Transf. 2021, 168, 120856. [Google Scholar] [CrossRef]
  8. Machado, P.L.O.; Dimbarre, V.V.; Szmoski, R.M.; Antonini Alves, T. Experimental Investigation on the Influence of the Diameter on Thermosyphons for Application in a Hybrid Photovoltaic/Thermal System. In Proceedings of the 9th International Renewable and Sustainable Energy Conference IRSEC, Tetouan, Morocco, 23–27 November 2021; pp. 1–5. [Google Scholar]
  9. Xu, Z.; Zhang, Y.; Li, B.; Wang, C.C.; Ma, Q. Heat performances of a thermosyphon as affected by evaporator wettability and filling ratio. Appl. Therm. Eng. 2018, 29, 665–673. [Google Scholar] [CrossRef]
  10. Kim, Y.; Shin, D.H.; Kim, J.S.; You, S.M.; Lee, J. Boiling and condensation heat transfer of inclined two-phase closed thermosyphon with various filling ratios. Appl. Therm. Eng. 2018, 145, 328–342. [Google Scholar] [CrossRef]
  11. Babu, E.R.; Reddy, N.C.; Babbar, A.; Chandrashekar, A.; Kumar, R.; Bains, P.S.; Alsubih, M.; Islam, S.; Joshi, S.K.; Rizal, A.; et al. Characteristics of pulsating heat pipe with variation of tube diameter, filling ratio, and SiO2 nanoparticles: Biomedical and engineering implications. Case Stud. Therm. Eng. 2024, 55, 104065. [Google Scholar] [CrossRef]
  12. Markal, B.; Aksoy, K. The combined effects of filling ratio and inclination angle on thermal performance of a closed loop pulsating heat pipe. Heat Mass Transf. 2021, 57, 751–763. [Google Scholar] [CrossRef]
  13. Xu, Z.; Zhang, Y.; Li, B.; Wang, C.C.; Li, Y. The influences of the inclination angle and evaporator wettability on the heat performance of a thermosyphon by simulation and experiment. Int. J. Heat Mass Transf. 2018, 116, 675–684. [Google Scholar] [CrossRef]
  14. Arat, H.; Arslan, O.; Ercetin, U.; Akbulut, A. Experimental study on heat transfer characteristics of closed thermosyphon at different volumes and inclination angles for variable vacuum pressures. Case Stud. Therm. Eng. 2021, 26, 101117. [Google Scholar] [CrossRef]
  15. Wang, Z.; Zhang, H.; Yin, L.; Yang, D.; Yang, G.; Akkurt, N.; Liu, D.; Zhu, L.; Qiang, Y.; Yu, F.; et al. Experimental study on heat transfer properties of gravity heat pipes in single/hybrid nanofluids and inclination angles. Case Stud. Therm. Eng. 2022, 34, 102064. [Google Scholar] [CrossRef]
  16. Gallego, A.; Herrera, B.; Buitrago-Sierra, R.; Zapata, C.; Cacua, K. Influence of filling ratio on the thermal performance and efficiency of a thermosyphon operating with Al2O3-water based nanofluids. Nano-Struct. Nano-Objects 2020, 22, 100448. [Google Scholar] [CrossRef]
  17. Kim, J.S.; Kim, Y.; Shin, D.H.; You, S.M.; Lee, J. Heat transfer and flow visualization of a two-phase closed thermosiphon using water, acetone, and HFE7100. Appl. Therm. Eng. 2021, 187, 116571. [Google Scholar] [CrossRef]
  18. Krambeck, L.; Bartmeyer, G.A.; Fusão, D.; Santos, P.H.D.; Antonini Alves, T. Experimental Research of Capillary Structure Technologies for Heat Pipes. Acta Sci.-Technol. 2020, 42, 48189. [Google Scholar] [CrossRef]
  19. Krambeck, L.; Bartmeyer, G.A.; Souza, D.O.; Fusão, D.; Santos, P.H.D.; Antonini Alves, T. Experimental thermal performance of different capillary structures for heat pipes. Energy Eng. 2021, 118, 1–14. [Google Scholar] [CrossRef]
  20. Vieira, G.C.; Flórez, J.P.; Mantelli, M.B.H. Improving heat transfer and eliminating Geyser boiling in loop thermosyphons: Model and experimentation. Int. J. Heat Mass Transf. 2020, 156, 119832. [Google Scholar] [CrossRef]
  21. Souza, D.O.; Machado, P.L.O.; Chiarello, C.; Santos, E.N.; Silva, M.J.; Santos, P.H.D.; Antonini Alves, T. Experimental study of hydrodynamic parameters regarding on geyser boiling phenomenon in glass thermosyphon using wire-mesh sensor. Therm. Sci. 2022, 26, 1391–1404. [Google Scholar]
  22. Souza, F.G.; Cisterna, L.H.R.; Milanez, F.H.; Mantelli, M.B.H. Geyser boiling experiments in thermosyphons filled with immiscible working fluids. Int. J. Therm. Sci. 2023, 185, 108066. [Google Scholar] [CrossRef]
  23. Chhokar, C.; Ashouri, M.; Bahrami, M. Modeling the thermal and hydrodynamic performance of grooved wick flat heat pipes. Appl. Therm. Eng. 2024, 257, 124281. [Google Scholar] [CrossRef]
  24. Zhong, R.; Feng, W.; Ma, Y.; Deng, J.; Liu, Y.; Ding, S.; Wang, X.; Liang, Y.; Yang, G. Experimental study of heat pipe start-up characteristics and development of an enhanced model considering gas diffusion effects. Appl. Therm. Eng. 2024, 257, 124460. [Google Scholar] [CrossRef]
  25. Ma, Y.; Zhang, Y.; Yu, H.; Huang, J.; Zhang, S.; Wang, X.; Huang, S.; Su, G.H.; Zhang, M. Numerical modeling of alkali metal heat pipes. Ann. Nucl. Energy 2025, 210, 110855. [Google Scholar] [CrossRef]
  26. Su, Z.; Li, Z.; Wang, K.; Kuang, Y.; Wang, H.; Yang, J. Investigation of improved VOF method in CFD simulation of sodium heat pipes using a multi-zone modeling method. Int. Commun. Heat Mass Transf. 2024, 157, 107669. [Google Scholar] [CrossRef]
  27. Biglia, F.M.; Dimbarre, V.V.; Bartmeyer, G.A.; Santos, P.H.D.; Antonini Alves, T. Numerical-experimental study of the boiling heat transfer coefficient in a thermosyphon. Therm. Sci. 2024, 181. [Google Scholar] [CrossRef]
  28. Machado, P.L.O.; Pereira, T.S.; Guerreiro, M.T.; Biglia, F.M.; Santos, P.H.D.; Tadano, Y.S.; Siqueira, H.V.; Antonini Alves, T. Estimating thermal performance of thermosyphons by Artificial Neural Networks. Alex. Eng. J. 2023, 79, 93–104. [Google Scholar] [CrossRef]
  29. Olabi, A.G.; Haridy, S.; Sayed, E.T.; Radi, M.A.; Alami, A.H.; Zwayyed, F.; Salameh, T.; Abdelkareem, M.A. Implementation of Artificial Intelligence in Modeling and Control of Heat Pipes: A Review. Energies 2023, 16, 760. [Google Scholar] [CrossRef]
  30. Sivaraman, B.; Mohan, N.K. Analysis of heat pipe solar collector using artificial neural network. J. Sci. Ind. Res. 2007, 66, 995–1001. [Google Scholar]
  31. Chen, R.H.; Su, G.H.; Qiu, S.Z.; Fukuda, K. Prediction of CHF in concentric-tube open thermosiphon using artificial neural network and genetic algorithm. Heat Mass Transf. 2010, 46, 345–353. [Google Scholar] [CrossRef]
  32. Salehi, H.; Heris, S.Z.; Salooki, M.K.; Noei, S.H. Designing a neural network for closed thermosyphon with nanofluid using a genetic algorithm. Braz. J. Chem. Eng. 2011, 28, 157–168. [Google Scholar] [CrossRef]
  33. Shanbedi, M.; Jafari, D.; Amiri, A.; Heris, S.Z.; Baniadam, M. Prediction of temperature performance of a two-phase closed thermosyphon using Artificial Neural Network. Heat Mass Transf. 2013, 49, 65–73. [Google Scholar] [CrossRef]
  34. Wang, X.; Yan, Y.; Meng, X.; Chen, G. A general method to predict the performance of closed pulsating heat pipe by artificial neural network. Appl. Therm. Eng. 2019, 157, 113761. [Google Scholar] [CrossRef]
  35. Kahani, M.; Vatankhah, G. Thermal performance prediction of wickless heat pipe with Al2O3/water nanofluid using artificial neural network. Chem. Eng. Commun. 2019, 206, 509–523. [Google Scholar] [CrossRef]
  36. Maddah, H.; Ghazvini, M.; Ahmadi, M.H. Predicting the efficiency of CuO/water nanofluid in heat pipe heat exchanger using neural network. Int. Commun. Heat Mass Transf. 2019, 104, 33–40. [Google Scholar] [CrossRef]
  37. Liang, F.; Gao, J.; Xu, L. Thermal performance investigation of the miniature revolving heat pipes using artificial neural networks and genetic algorithms. Int. J. Heat Mass Transf. 2020, 151, 119394. [Google Scholar] [CrossRef]
  38. Rajab, R.H.; Ahmad, H.H. Analysis of thermosiphon heat pipe performance using an Artificial Neural Network. J. Inst. Eng. (India) Ser. C 2021, 102, 243–255. [Google Scholar] [CrossRef]
  39. Nair, A.; Ramkumar, P.; Mahadevan, S.; Prakash, C.; Dixit, S.; Murali, G.; Vatin, N.I.; Epifantsev, K.; Kumar, K. Machine Learning for Prediction of Heat Pipe Effectiveness. Energies 2022, 15, 3276. [Google Scholar] [CrossRef]
  40. Kim, M.; Moon, J.H. Deep neural network prediction for effective thermal conductivity and spreading thermal resistance for flat heat pipe. Int. J. Numer. Methods Heat Fluid Flow 2023, 33, 437–455. [Google Scholar] [CrossRef]
  41. Taghipour Kani, G.; Ghahremani, A. Predicting the thermal performance of heat pipes applying various machine learning methods and a proposed correlation. Int. Commun. Heat Mass Transf. 2023, 142, 106671. [Google Scholar] [CrossRef]
  42. Bakhirathan, A.; Lachireddi, G.K.K. Comparative predictive analysis using ANN and RCA for experimental investigation on branched and conventional micro heat pipe. Therm. Sci. Eng. Prog. 2024, 54, 102811. [Google Scholar] [CrossRef]
  43. Jin, I.J.; Park, Y.Y.; Bang, I.C. Heat transfer performance prediction for heat pipe using deep learning based on wick type. Int. J. Therm. Sci. 2024, 197, 108806. [Google Scholar] [CrossRef]
  44. Li, X.; Zhao, X.; Shi, X.; Zhang, Z.; Zhang, C.; Liu, S. Developing a machine learning model for heat pipes considering different input features. Int. J. Therm. Sci. 2025, 208, 109398. [Google Scholar] [CrossRef]
  45. Groll, M.; Rösler, S. Operation principles and performance of heat pipes and closed two-phase thermosyphons. J. Non-Equilib. Thermodyn. 1992, 17, 91–151. [Google Scholar]
  46. Mantelli, M.B.H. Thermosyphons and Heat Pipes: Theory and Applications, 1st ed.; Springer Nature: Cham, Switzerland, 2021. [Google Scholar]
  47. Peterson, G.P. An Introduction to Heat Pipes: Modeling, Testing, and Applications, 1st ed.; John Wiley & Sons: Hoboken, NJ, USA, 1994. [Google Scholar]
  48. Faghri, A. Heat Pipe Science and Technology, 2nd ed.; Global Digital Press: Rajasthan, India, 2016. [Google Scholar]
  49. Reay, D.A.; Kew, P.A.; McGlen, R.J. Heat Pipe: Theory, Design and Applications, 6th ed.; Butterworth-Heinemann: Oxford, UK, 2014. [Google Scholar]
  50. Zohuri, B. Heat Pipe Design and Technology: Modern Applications for Practical Thermal Management, 2nd ed.; Springer Nature: Cham, Switzerland, 2016. [Google Scholar]
  51. Krambeck, L. Experimental Investigation of Wire Mesh Thermal Performance in Heat Pipes. Bachelor’s Thesis, Mechanical Engineering, Federal University of Technology—Paraná (UTFPR), Ponta Grossa, Brazil, 2016. (In Portuguese). [Google Scholar]
  52. Nishida, F.B.; Krambeck, L.; Santos, P.H.D.; Antonini Alves, T. Experimental investigation of heat pipe thermal performance with microgrooves fabricated by wire electrical discharge machining (wire-EDM). Therm. Sci. 2020, 24, 701–711. [Google Scholar]
  53. Krambeck, L.; Bartmeyer, G.A.; Souza, D.O.; Fusão, D.; Santos, P.H.D.; Antonini Alves, T. Selecting sintered capillary structure for heat pipes based on experimental thermal performance. Acta Scientiarum. Technol. 2022, 44, 57099. [Google Scholar] [CrossRef]
  54. Antonini Alves, T.; Krambeck, L.; Santos, P.H.D. Heat pipe and thermosyphon for thermal management of thermoelectric cooling. In Bringing Thermoelectricity into Reality; Aranguren, P., Ed.; IntechOpen: London, UK, 2018; pp. 353–374. [Google Scholar]
  55. Rohsenow, W.M.; Hartnett, J.P.; Cho, Y.I. Handbook of Heat Transfer, 1st ed.; McGraw-Hill: New York, NY, USA, 1998. [Google Scholar]
  56. Haykin, S. Neural Networks and Learning Machines, 3rd ed.; Pearson Prentice Hall: New York, NY, USA, 2009. [Google Scholar]
  57. Ozturk, M.C.; Xu, D.; Príncipe, J.C. Analysis and design of Echo State Networks. Neural Comput. 2007, 19, 111–138. [Google Scholar] [CrossRef]
  58. Graupe, D. Principles of Artificial Neural Networks: Basic Designs to Deep Learning, 4th ed.; World Scientific: Singapore, 2019. [Google Scholar]
  59. Siqueira, H.; Luna, I. Performance comparison of feedforward neural networks applied to stream flow series forecasting. Math. Eng. Sci. Aerosp. 2019, 10, 41–53. [Google Scholar]
  60. Ewim, D.R.E.; Okwu, M.O.; Onyiriuka, E.J.; Abiodun, A.S.; Abolarin, S.M.; Kaood, A. A quick review of the applications of Artificial Neural Networks (ANN) in the modelling of thermal systems. Eng. Appl. Sci. Res. 2022, 49, 444–458. [Google Scholar]
  61. Gedik, E.; Kurt, H.; Pala, M.; Alakour, A.; Kaya, M. Experimental and Artificial Neural Network investigation on the thermal efficiency of two-phase closed thermosyphon. Int. J. Therm.-Fluid Eng. Mod. Energetics 2022, 1, 19–33. [Google Scholar] [CrossRef]
  62. Polezer, G.; Tadano, Y.S.; Siqueira, H.V.; Godoi, A.F.L.; Yamamoto, C.I.; André, P.A.; Pauliquevis, T.; Andrade, M.F.; Oliveira, A.; Saldiva, P.H.N.; et al. Assessing the impact of PM2.5 on respiratory disease using Artificial Neural Networks. Environ. Pollut. 2018, 235, 394–403. [Google Scholar] [CrossRef]
  63. Belotti, J.T.; Castanho, D.S.; Araujo, L.N.; Silva, L.V.; Antonini Alves, T.; Tadano, Y.S.; Stevan Junior, S.L.; Correa, F.C.; Siqueira, H. Air pollution epidemiology: A simplified generalized linear model approach optimized by bio-inspired metaheuristics. Environ. Res. 2020, 1, 110106. [Google Scholar] [CrossRef] [PubMed]
  64. Araujo, L.N.; Belotti, J.T.; Antonini Alves, T.; Tadano, Y.S.; Siqueira, H. Ensemble method based on Artificial Neural Networks to estimate air pollution health risks. Environ. Model. Softw. 2020, 123, 104567. [Google Scholar] [CrossRef]
  65. Kachba, Y.; Chiroli, D.M.G.; Belotti, J.T.; Antonini Alves, T.; Tadano, Y.S.; Siqueira, H. Artificial Neural Networks to estimate the influence of vehicular emission variables on morbidity and mortality in the largest metropolis in South America. Sustainability 2020, 12, 2621. [Google Scholar] [CrossRef]
  66. Tadano, Y.S.; Bacalhau, E.T.; Casacio, L.; Puchta, E.D.P.; Pereira, T.S.; Antonini Alves, T.; Ugaya, C.M.L.; Siqueira, H. Unorganized Machines to estimate the number of hospital admissions due to respiratory diseases caused by PM 10 concentration. Atmosphere 2021, 12, 1345. [Google Scholar] [CrossRef]
  67. Siqueira, H.; Bacalhau, E.T.; Casacio, L.; Puchta, E.D.P.; Antonini Alves, T.; Tadano, Y.S. Hybrid unorganized machines to estimate the number of hospital admissions caused by PM10 concentration. Environ. Sci. Pollut. Res. 2023, 30, 113175–113192. [Google Scholar] [CrossRef]
  68. Kumar, C.H.; Swamy, R.P. Fatigue life prediction of glass fiber reinforced epoxy composites using Artificial Neural Networks. Compos. Commun. 2021, 26, 100812. [Google Scholar] [CrossRef]
  69. Tai, V.C.; Tan, Y.C.; Rahman, N.F.A.; Che, H.X.; Chia, C.M.; Saw, L.H.; Ali, M.F. Long-term electricity demand forecasting for Malaysia using Artificial Neural Networks in the presence of input and model uncertainties. Energy Eng. 2021, 118, 715–725. [Google Scholar]
  70. Tai, V.C.; Tan, Y.C.; Rahman, N.F.A.; Chia, C.M.; Zhakiya, M.; Saw, L.H. A novel power curve prediction method for horizontal-axis wind turbines using Artificial Neural Networks. Energy Eng. 2021, 118, 507–516. [Google Scholar]
  71. De Mattos Neto, P.S.G.; Marinho, M.H.N.; Siqueira, H.; Tadano, Y.S.; Machado, V.; Antonini Alves, T.; Oliveira, J.F.L.; Madeiro, F. A methodology to increase the accuracy of particulate matter predictors based on time decomposition. Sustainability 2020, 12, 7310. [Google Scholar] [CrossRef]
  72. Campos, D.S.; Tadano, Y.S.; Antonini Alves, T.; Siqueira, H.V.; Marinho, M.H.N. Unorganized Machines and linear multivariate regression model applied to atmospheric pollutants forecasting. Acta Scientiarum. Technol. 2020, 42, 48203. [Google Scholar] [CrossRef]
  73. Siqueira, H.; Macedo, M.; Tadano, Y.S.; Antonini Alves, T.; Stevan, S.L., Jr.; Oliveira, D.S., Jr.; Marinho, M.H.N.; De Mattos Neto, P.S.G.; Oliveira, J.F.L.; Luna, I.; et al. Selection of temporal lags for predicting riverflow series from hydroelectric plants using variable selection methods. Energies 2020, 13, 4236. [Google Scholar] [CrossRef]
  74. Belotti, J.T.; Siqueira, H.; Araujo, L.N.; Stevan, S.L., Jr.; De Mattos Neto, P.S.G.; Marinho, M.H.N.; Oliveira, J.F.L.; Usberti, F.L.; Leone Filho, M.A.; Converti, A.; et al. Neural-based ensembles and Unorganized Machines to predict streamflow series from brazilian hydroelectric plants. Energies 2020, 13, 4769. [Google Scholar] [CrossRef]
  75. De Mattos Neto, P.S.G.; Firmino, P.R.A.; Siqueira, H.; Tadano, Y.S.; Antonini Alves, T.; Oliveira, J.F.L.; Marinho, M.H.N.; Madeiro, F. Neural-based ensembles for particulate matter forecasting. IEEE Access 2021, 9, 14470–14490. [Google Scholar] [CrossRef]
  76. Santos, J.L.F.; Vaz, A.J.C.; Kachba, Y.R.; Stevan Junior, S.L.; Antonini Alves, T.; Siqueira, H.V. Linear Ensembles for WTI Oil Price Forecasting. Energies 2024, 17, 4058. [Google Scholar] [CrossRef]
  77. Tadano, Y.S.; Siqueira, H.; Antonini Alves, T. Unorganized Machines to Predict Hospital Admissions for Respiratory Diseases. In Proceedings of the 2016 IEEE Latin American Conference on Computational Intelligence (LA-CCI 2016), Cartagena, Colombia, 2–4 November 2016; pp. 25–30. [Google Scholar]
  78. Huang, G.B.; Zhu, Q.Y.; Siew, C.K. Extreme Learning Machine: A new learning scheme of feedforward neural networks. IEEE Int. Jt. Conf. Neural Netw. 2004, 2, 985–990. [Google Scholar]
  79. Huang, G.B.; Zhu, Q.Y.; Siew, C.K. Extreme Learning Machine: Theory and applications. Neurocomputing 2006, 70, 489–501. [Google Scholar] [CrossRef]
Figure 1. Main components and operation of a heat pipe. Red arrows: “heat in”; blue arrows: “heat out”.
Figure 2. Capillary structures: (a) mesh #100, (b) microgrooves, and (c) sintered copper powder.
Figure 3. Experimental apparatus.
Figure 4. Schematic diagram of the experimental setup.
Figure 5. Neural network architecture.
Figure 6. Boxplot for the MAPE of 30 simulations for the neural networks used.
Figure 7. Boxplot for the RMSE of 30 simulations for the neural networks used.
Figure 8. Boxplot for the MAE of 30 simulations for the neural networks used.
Figure 9. Comparison between the values obtained for each ANN with the experimental value.
Table 1. Application of ANNs for heat pipes.

Reference | Device | ANN | Input | Output | Error
Sivaraman and Mohan [30] | Heat pipe solar collector | MFFNN 1,* | Total length/inner diameter of heat pipe, condenser length/evaporator length, tilt angle, solar intensity, water inlet temperature | Water outlet temperature | 0.64%
Chen et al. [31] | Concentric-tube open thermosyphon | ANN + GA 2 | Density ratio, ratio of heated tube length to inner diameter of the outer tube, ratio of frictional area, and ratio of equivalent heated diameter to characteristic bubble size | Kutateladze number | 18.4%
Salehi et al. [32] | Closed thermosyphon | MLP 3 + Backpropagation Algorithm | Magnetic field intensity, volume fraction of nanofluid in water, and dissipated power | Thermal efficiency and resistance | R2 = 0.99
Shanbedi et al. [33] | Two-phase closed thermosyphon | MLP + Levenberg–Marquardt Algorithm | Working fluid vapor quality parameters, power dissipated in the heat pipe, and length of the heat pipe | Expected temperature distribution | R2 = 0.99
Wang et al. [34] | Closed pulsating heat pipe | Backpropagation learning algorithm * | Kutateladze, Bond, Prandtl, and Jacob numbers, number of turns (N), and ratio of evaporation section length to diameter | Thermal resistance | MSE = 0.0138
Kahani and Vatankhah [35] | Wickless heat pipe | MLP | Input power, volume concentration of nanofluid, filling ratio, and mass rate in condenser section | Thermal efficiency | MAE = 0.84%
Maddah et al. [36] | Heat pipe heat exchanger | Three-layered feed-forward neural network * + Levenberg–Marquardt training algorithm | Filling ratio, nanofluid concentration, and input power | Heat exchanger efficiency | R2 > 0.99
Liang et al. [37] | Miniature revolving heat pipes | Backpropagation * + GA | Bond, Jacob, Prandtl, and Froude numbers, and filling ratio | Kutateladze number | R2 = 0.87977; R2 = 0.8812
Rajab and Ahmad [38] | Thermosyphon | MLP + Backpropagation | Working fluid, mixing ratio, and dissipated power | Thermal resistance | RMSE = 0.098
Nair et al. [39] | Heat pipe | 30 different algorithms | Angle, temperature, mass flow rate | Effectiveness | MAE = 1.176
Kim and Moon [40] | Flat heat pipe | Deep neural network | Thermal conductivity, heat sink area, heater area, thickness, and heat transfer coefficient | Thermal resistance | MAPE = 10.8%
Machado et al. [28] | Thermosyphons | ELM, ESN, RBF, and MLP | Slope, filling ratio, heat load | Thermal resistance | 25%
Kani and Ghahremani [41] | Heat pipes | 9 machine learning regression methods | Inner and outer diameters, lengths of evaporator and condenser sections, number of turns, working fluids, inclination angle, filling ratio, and heat input | Thermal resistance | R2 = 0.6–0.95
Bakhirathan and Lachireddi [42] | Micro heat pipe | MLP | Heat input, heat rejected, geometry, and thermophysical properties | Thermal resistance | 3%
Jin et al. [43] | Heat pipes | ANN + Deep Neural Network + Convolutional Neural Networks | Wick type, nanoparticle type, and operating conditions | Thermal resistance | 20%
Li et al. [44] | Heat pipes | Genetic-algorithm-based backpropagation neural network | 13 different inputs | Effective thermal conductivity | R2 = 0.9580
1 Multilayer Feed-Forward Neural Network (MFFNN). 2 Genetic Algorithm (GA). 3 Multilayer Perceptron (MLP). * All of these ANNs refer to the MLP.
Table 2. Summary of the main physical characteristics of the heat pipes.

Characteristic | Value
Inner diameter [mm] | 7.75
Outer diameter [mm] | 9.45
Evaporator length [mm] | 80
Adiabatic section length [mm] | 20
Condenser length [mm] | 100
Table 3. Summary of the main parameters used in the experimental investigation.

Parameter | Screen Mesh | Axial Microgrooves | Sintered
Working slope [°] | 0, 45, and 90 | 0, 45, and 90 | 0, 45, and 90
Filling ratio [%] | 60 | 60 | 60, 80, 100, and 120
Heat load [W] | 5, 10, 15, 20, 25, 30, 35, 40, 45, and 50 | 5, 10, 15, 20, 25, 30, 35, 40, 45, and 50 | 5, 10, 15, 20, 25, 30, 35, 40, and 45
Table 4. Errors found for the best configuration of each network.

Model | Neurons | Hidden-Layer Function | Hidden Layers | MAE | RMSE | MAPE [%]
ELM | 84 | Logistic | 1 | 0.285 | 0.384 | 25.74
MLP | 15 | Logistic | 1 | 0.241 | 0.409 | 13.96
RBF | 102 | Gaussian | 1 | 0.669 | 0.882 | 67.03
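The three error metrics reported in Table 4 follow their standard definitions. A self-contained sketch, using illustrative thermal-resistance values rather than the paper's data:

```python
import math

def mae(y_true, y_pred):
    # Mean absolute error: average magnitude of the prediction error.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    # Root mean squared error: penalizes large deviations more heavily than MAE.
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mape(y_true, y_pred):
    # Mean absolute percentage error, in percent; undefined if y_true contains 0.
    return 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

# Illustrative thermal resistances [°C/W] (not the paper's measurements).
y_true = [0.5, 1.0, 2.0, 4.0]
y_pred = [0.6, 0.9, 2.2, 3.8]
print(mae(y_true, y_pred), rmse(y_true, y_pred), mape(y_true, y_pred))
# MAE = 0.15, RMSE ≈ 0.158, MAPE = 11.25%
```

Because MAPE weights each error by the true value, it can rank models differently from RMSE, which is consistent with Table 4, where ELM has the lowest RMSE but MLP has the lowest MAPE.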