This study examined the effects of volume fraction, heat sink, and Coriolis force on the behavior of water conveying 47 nm alumina nanoparticles over a uniform surface using a soft computing technique. To generate nanofluids, researchers used Al₂O₃ nanoparticles ranging in size from 13 nm to 302 nm, which produced a 2% to 36% increase in thermal conductivity (see [29]). As a result of ever-increasing heat production, industries face cooling issues and product maintenance obstacles. Scientists therefore value the precise nature and thermal properties of fluids obtained by adding solid particles (on the micrometer and millimeter scales), which reduce energy consumption and processing time. Nanofluids are employed in a variety of applications, including automotive engine coolants, cancer therapies, nano-drug delivery, syphilis diagnosis, and nanofluid-based detergents (see [8]). Particles with diameters below 100 nm have better mechanical, thermal, optical, magnetic, chemical, and electrical properties than ordinary solids (see [16]). Alumina nanoparticles with a diameter of 47 nm are suitable for the above applications because they have a substantial surface-area-to-volume ratio. Furthermore, Ali et al. (see [10]) discussed how to keep the temperature at the heat sink's base as low as possible as the heat transfer rate increases. Compared with CuO–water nanofluid and distilled water, the results demonstrate that Al₂O₃–water nanofluid has a higher heat transfer rate. In addition, the heat sink can reduce the generated heat by 89.6% between the mini-channels. At all levels of volume fraction, it was revealed that the local skin friction proportional to friction at the wall is negligible during the motion of water transporting alumina nanoparticles. Nguyen et al. [9] described the effect of particle size on the dynamic viscosity of Al₂O₃–water nanofluids.
This means that nanoparticles with a higher viscosity have better thermal, electrical, chemical, mechanical, magnetic, and optical capabilities. The viscosity of the 47 nm Al₂O₃–water nanofluid is significantly higher than that of the 36 nm one when the particle volume fraction exceeds 4%, which is why we are solely interested in studying 47 nm alumina particles. Additionally, Al₂O₃ and TiO₂ were employed as nanoparticles, with thermal oil serving as the base fluid, and the required amounts of nanoparticles and base fluid were mixed. According to the manufacturer (Sigma-Aldrich Co.), the spherical Al₂O₃ nanoparticles ranged in diameter from 5 nm to 250 nm, with a mean diameter of 47 nm (see [30]). This is a further reason why our study focuses only on 47 nm alumina nanoparticles.
This section presents the method used in this paper and all the results obtained. All 1001 data points were generated with the “NDSolve” command in Mathematica using the fourth-order Runge–Kutta (RK4) method, with a step size of 0.004 over the interval from 0 to 4. These data points were then passed to the neural network toolbox in Matlab, where we applied the LMB-NN method to obtain the best results.
Tabular and Statistical Analysis
Two scenarios are discussed in this paper; see Table 1. In scenario 1, we vary the rotation parameter K (0.1, 0.2, 0.3), discuss the behavior of the solution of the system of ODEs, and train a neural network for each case individually. Similarly, in scenario 2, we vary the volume fraction φ (0.1, 0.2, 0.3), observe the behavior of the solution of the given system of ODEs, train the neural network for each case, and compare the results with the RK4 data obtained using the NDSolve command in Mathematica. We thus discuss two effects on the water containing 47 nm alumina nanoparticles: the rotation parameter K and the volume fraction φ. Algorithm 1 gives all the steps of the LMB-NN in detail.
In Table 2 we present the numerical results obtained from the solution of LMB-NN. We cover all the cases of both scenarios and report the time taken and the mean square error of the training, testing, and validation samples. Also shown are the performance, gradient, and Mu values, together with the epoch at which these values were obtained.
Table 3, Table 4, Table 5 and Table 6 compare RK4 and LMB-NN in all three cases of scenario 1 for the solutions of all the ODEs, i.e., F, H, G, and θ, respectively. As mentioned above, the first column in all the tables presents the input value. The second column shows the results obtained by RK4, while the third column shows the results obtained through LMB-NN when K = 0.1. Similarly, the fourth and sixth columns give the RK4 results, while the fifth and seventh columns give the LMB-NN results when K = 0.2 and K = 0.3, respectively. Comparing the data sets of the two methods shows that the error between them is tiny.
Algorithm 1: The whole process of LMB-NN, given in pseudocode.
Start of LMB-NN
Step 1: Construction. Construct the input and target data set.
Step 2: Data selection. Target data and input data are chosen in non-linear (matrix) form.
Step 3: Initialization. Set the number of neurons and the training, validation, and testing ratios:
▸ 90 percent for training
▸ 5 percent for validation
▸ 5 percent for testing
▸ Number of hidden neurons: 40
▸ Number of hidden layers: 4
Step 4: Training of weights. The selected data are trained through the activation function in LMB-NN.
Step 5: Stopping criteria. Step 4 stops automatically when one of the following conditions is satisfied:
⋇ Mu reaches its maximum value
⋇ Performance reaches its minimum value
⋇ The maximum number of epochs is reached
⋇ Validation performance exceeds the maximum number of fails
⋇ The performance gradient falls below the minimum gradient
The testing data help us determine whether the network generalizes. If the outputs are good and useful, proceed to Step 7; if the outputs are not satisfactory, retrain the network.
Step 6: Retraining. For retraining, change the number of hidden neurons and the ratios of testing, training, and validation, then return to Step 4 and repeat the procedure.
Step 7: Output saving. The process ends by saving the output simulation data statistically as well as numerically.
End of LMB-NN
Figure 5, Figure 6, Figure 7 and Figure 8 show the solution and absolute error graphs for the variation of the rotation parameter K in F, H, G, and θ, respectively. These graphs show how the solution of the given system of ODEs changes when we change the value of the rotation parameter K. The error graphs show the error between the RK4 and ANN solutions: this error is minimal, and the graphs of the two methods overlap. The error graphs thus display the residual error in the solutions of both methods.
Figure 5a shows the numerical solutions of F for all the cases of scenario 1, while Figure 5b–d show the residual errors of F between the RK4 and LMB-NN solutions in case 1, case 2, and case 3 of scenario 1, respectively. Figure 6a shows the numerical solutions of H for all the cases of scenario 1, while Figure 6b–d show the corresponding residual errors of H in the three cases. Figure 7a shows the numerical solutions of G for all the cases of scenario 1, while Figure 7b–d show the corresponding residual errors of G. Figure 8a shows the numerical solutions of θ for all the cases of scenario 1, while Figure 8b–d show the corresponding residual errors of θ.
Scenario 1 concerns the variation of the rotation parameter K. We change the value of K and apply the ANN for every individual value of K. In each case, we obtain the mean square error, training, error histogram, regression, and fitness graphs for the system of ODEs, and we note how these quantities change as the rotation parameter K changes; the effect of varying K can be observed clearly from the values shown in the graphs. We train a neural network for each case using the “nftool” command in the Matlab environment, setting the input and output data for the network and using the LMB-NN method to find the best solution for the present system of ODEs. In this problem, the network has one input and four hidden layers before the output; moreover, we used 40 hidden neurons to obtain good results.
The results obtained from LMB-NN for cases 1, 2, and 3 of scenario 1 are shown in Figure 9, Figure 10 and Figure 11, respectively. These figures present the performance, training, error histogram, regression, and fitness graphs of the ANNs for all the cases of scenario 1. The input data are the RK4 results obtained in Mathematica using the “NDSolve” command.
Figure 9a shows the performance of case 1 of scenario 1, in which we show the mean square error. Figure 9b shows the training samples of the ANNs of case 1 of scenario 1, and Figure 9c shows the corresponding error histogram. The regression of case 1 of scenario 1 is shown in Figure 9d, and the fitness graphs of case 1 of scenario 1 are shown in Figure 9e. Similarly, the performance, training, error histogram, regression, and fitness figures of the remaining two cases of scenario 1 are shown in subfigures (a–e) of Figure 10 and Figure 11, respectively.
The smaller the MSE of a system, the more stable the system. Among all the cases discussed in the paper, the MSE of case 1 of scenario 1 is the smallest; the MSE values of all the cases are reported in Table 2. The Mu and gradient values indicate a good rate of convergence, and the auto-correlation shows the relationship between two variables. The error histograms indicate the reliability of the technique, and the regression results of all the discussed cases are good; regression shows the linear relationship between the target data and the output data. The fitness plots confirm that the data are well trained and that the solution is accurate.
We obtained the RK4 outputs from Mathematica and then applied the LMB-NN method in the neural network to get the outcomes of the technique. The errors between the results of the two methods are given in Table 7, Table 8, Table 9 and Table 10, from which the residual error under variation of the rotation parameter K can easily be observed for all four ODEs. Column 1 of these tables gives the input values, while columns 2, 3, and 4 give the residual error between RK4 and LMB-NN, obtained from Matlab, as the rotation parameter K varies. The errors in the tables are minimal, which shows that the technique has high accuracy; moreover, the method is simple, easy to implement on problems from many fields, and yields fruitful results.
Table 11, Table 12, Table 13 and Table 14 show the solutions of all the ODEs of the system (F, H, G, and θ) obtained by the RK4 and LMB-NN methods under variation of the volume fraction φ, respectively, giving a comparison of the RK4 and LMB-NN outputs. The first column of Table 11, Table 12, Table 13 and Table 14 shows the input values. The second column shows the results obtained by the RK4 method, while the third column shows the results generated by LMB-NN at different input values. Similarly, the fourth and sixth columns give the results generated by RK4, and the fifth and seventh columns give the results generated by LMB-NN at different input points, respectively. A separate comparison table is given for each ODE, and all three cases are present in the tables.
Figure 12, Figure 13, Figure 14 and Figure 15 show the solutions under variation of the volume fraction φ for all the ODEs, together with the absolute error plots. The effect of the volume fraction φ on F is given in Figure 12a, and the errors for the three values of φ (i.e., 0.1, 0.2, 0.3) are given in Figure 12b–d, respectively. Similarly, the effect of φ on H is shown in Figure 13a, and the corresponding absolute errors are shown in Figure 13b–d. In the same manner, the effects of φ on the remaining ODEs, together with the corresponding errors, are shown in Figure 14 and Figure 15.
Figure 16, Figure 17 and Figure 18 show the solutions obtained by the ANNs for scenario 2, presenting the performance, training, error histogram, regression, and fitness graphs for all the cases; the input data are again the RK4 results from Mathematica obtained with the “NDSolve” command. Figure 16a shows the performance of case 1 of scenario 2, in which we show the mean square error. Figure 16b shows the training samples of the ANNs of case 1 of scenario 2, and Figure 16c shows the corresponding error histogram. The regression of case 1 of scenario 2 is shown in Figure 16d, and the fitness graphs of case 1 of scenario 2 are shown in Figure 16e. Similarly, the performance, training, error histogram, regression, and fitness graphs of the remaining two cases of scenario 2 are shown in subfigures (a–e) of Figure 17 and Figure 18, respectively.
In this paper, the outputs are first obtained from Mathematica using the RK4-based NDSolve command, and these outputs are then used in Matlab, via the “nftool” command, to obtain new outcomes for both parameters. The errors between the outputs of both methods for all four ODEs of the system when φ varies are given in Table 15, Table 16, Table 17 and Table 18. These tables show the residual errors in the outputs of the two methods for the different values of the volume fraction φ. The first column of Table 15, Table 16, Table 17 and Table 18 gives the input values. The second column gives the residual error between the two methods when φ = 0.1, and the last two columns give the residual error when φ = 0.2 and φ = 0.3, respectively. The residual errors were obtained from Matlab.
In our problem, we compared the solutions of the artificial neural networks with the RK4 method, and it is clear from all the analyses that the ANN solutions overlap entirely with the RK4 solutions. Our approach performs well because the ANN has four hidden layers and delivers fast and accurate results. Moreover, it is a computer-based, advanced method in which the chances of calculation mistakes are negligible, so the results can be fully trusted. The results of the cases in both scenarios are accurate, and the very small errors show that the method is highly consistent; it is also less time-consuming. The method may be applied to various problems in different fields to obtain accurate, fruitful, and valuable results, such as flows of water over a rotating surface in medicine, biodiesel production, and cooling applications.