Article

Possibilities of Using Specific Jominy Distance in ANN Models for Predicting Low-Alloy Steels’ Microstructure

University of Rijeka, Faculty of Engineering, Vukovarska 58, 51000 Rijeka, Croatia
* Author to whom correspondence should be addressed.
Materials 2025, 18(3), 564; https://doi.org/10.3390/ma18030564
Submission received: 20 December 2024 / Revised: 17 January 2025 / Accepted: 22 January 2025 / Published: 26 January 2025

Abstract

Understanding the volume fractions of microstructure constituents such as ferrite, pearlite, bainite, and martensite in low-alloy steels is critical for tailoring mechanical properties to specific engineering applications. To address the complexity of these relationships, this study explores the use of artificial neural networks (ANNs) as a robust tool for predicting these microstructure constituents based on alloy composition, specific Jominy distance, and heat treatment parameters. Unlike previous ANN-based predictions that rely on the hardness after quenching as an input parameter, this study excludes hardness. The developed model relies on readily available input parameters, enabling accurate estimation of microstructure composition prior to heat treatment, which significantly improves its practicality for process planning, optimization, and reducing trial-and-error in industrial applications. Three different input configurations were tested to evaluate the predictive capabilities of ANNs, with results showing that the use of specific Jominy distance as an input variable enhances model performance. Furthermore, the findings suggest that specific Jominy distance could serve as a practical alternative to detailed chemical composition data in industrial applications. The predictions for ferrite, pearlite, and martensite were more accurate than those for bainite, which can be attributed to the complex nature of bainite formation.

1. Introduction

Quenching is a frequently used heat treatment process, usually involving the rapid cooling of steel from its austenitizing temperature. The primary goal of quenching is to achieve the desired mechanical properties by forming martensite, or by stabilizing high-temperature phases in alloys. However, achieving these goals must be balanced with minimizing deformation and residual stresses, which remain significant challenges due to the complex nature of quenching. For example, in [1], strain and stress evolution during the coupled transformation of bainite and martensite is modeled, and mechanism-based cooling strategies are proposed to design tailored residual stress configurations in multiphase steel tubes. In [2], the authors studied the quenching process of AISI 4340 steel samples using a FEM model to predict dimensional behavior, internal stress evolution, and mechanical properties. In [3], the impact of titanium microalloying on the quenching residual stress of H13 steel is investigated to analyze the temperature and stress evolution of Ti microalloyed H13 steel.
The prediction of microstructure transformations is essential for modeling and optimizing the quenching process [4,5]. Such predictions are not only important for estimating mechanical properties but also for anticipating stresses and strains generated during quenching [6]. The traditional approaches to modeling these transformations rely on mathematical models; however, these models often involve simplifying assumptions that limit their accuracy [7,8,9]. Furthermore, the theory of microstructural transformations in steels is still a subject of controversy [10]. As a result, numerical simulations based on such models may deviate from experimental observations and could be improved.
Advanced computational tools for prediction, such as artificial neural networks (ANNs), have shown immense promise in addressing these challenges. ANNs, a subset of machine learning, are designed to model nonlinear and complex relationships by learning directly from data. This capability makes them particularly well suited not only for predicting microstructure transformations in steels, but also for forecasting their mechanical properties, such as hardness, tensile strength, and yield strength.
For example, Sitek and Trzaska (2021) reviewed the practical aspects of designing and using artificial neural networks in materials engineering, emphasizing the importance of dataset quality and ANN architecture in accurately predicting steel properties under varying processing conditions [11]. Patel et al. (2024) developed an integrated model combining artificial neural networks and genetic algorithms to predict the relationships between chemical composition, microstructure, and mechanical properties in additively manufactured steels, enabling more precise tailoring of material characteristics for specific applications [12].
The aim of this research was to explore the application of artificial neural networks in predicting the volume fractions of microstructure constituents—ferrite, pearlite, bainite, and martensite—with a particular focus on steels for quenching and tempering as well as case-hardening steels designed for carburizing. The research builds upon the findings of a previous study focused on developing ANN models to predict the hardness of low-alloy steels when the detailed chemical composition is unknown [13].
Steels for quenching and tempering as well as case-hardening steels designed for carburizing are hypoeutectoid low-alloy steels, containing less than ~0.8 wt.% carbon and a total content of alloying elements, including carbon, of up to ~5.0 wt.%. Steels intended for heat treatment, which enables the achievement of the desired mechanical properties, should exhibit high hardenability to ensure that high strength and toughness are attained throughout the entire heat-treated component.
The developed ANN-based prediction of steel microstructure relies on readily available input parameters, including the detailed chemical composition, the specific Jominy distance, and the heat treatment parameters, all of which can be determined before the heat treatment process. Unlike previous ANN-based models that rely on post-quenching hardness as an input parameter [14,15], this approach excludes hardness, since it is not known before the heat treatment. Moreover, excluding hardness avoids the direct correlation between hardness and microstructure. This methodology enables the development of a more versatile predictive model that can estimate microstructure composition prior to performing the heat treatment, enhancing its practical utility in process planning and optimization.

2. Materials and Data

2.1. Materials

Table 1 presents the chemical composition of studied steels for quenching and tempering as well as case-hardening steels designed for carburizing. Quenching, tempering, and case-hardening are essential heat treatment processes for steels, enhancing their mechanical properties for various industrial applications. Quenching increases hardness and wear resistance, tempering reduces brittleness and improves toughness, and case-hardening creates a hard, wear-resistant surface with a tough core. Together, these processes achieve an optimal balance for durability and performance, making steel suitable for high-stress and wear-intensive conditions. In Table 1, rows No. 1–8 correspond to case-hardening steels, while rows No. 9–24 refer to steels intended for quenching and tempering. Steels marked with superscript “1” have a higher carbon content compared to the standard carbon content in steels.
The primary alloying elements in these steels are carbon, silicon, manganese, chromium, molybdenum, and nickel. Manganese, chromium, molybdenum, and nickel improve the hardenability of steels, while silicon increases yield strength and further enhances hardenability when combined with manganese or molybdenum [16].
Table 1. The chemical composition of the studied steels, wt.% (balance Fe) [13,17].

| Data No. | Designation (DIN) | C | Si | Mn | P | S | Al | Cr | Cu | Mo | Ni | V |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1. | Ck15 | 0.15 | 0.22 | 0.41 | 0.021 | 0.024 | <0.005 | 0.06 | 0.15 | - | 0.06 | - |
| 2. | Ck15 ¹ | 0.30 | 0.29 | 0.39 | 0.012 | 0.026 | 0.003 | 0.12 | 0.215 | - | - | - |
| 3. | 16MnCr5 | 0.16 | 0.22 | 1.12 | 0.030 | 0.008 | 0.015 | 0.99 | - | 0.02 | 0.12 | 0.01 |
| 4. | 15CrNi6 | 0.13 | 0.31 | 0.51 | 0.023 | 0.009 | 0.010 | 1.50 | - | 0.06 | 1.55 | <0.01 |
| 5. | 20MoCr4 ¹ | 0.28 | 0.30 | 0.66 | 0.018 | 0.011 | 0.049 | 0.56 | 0.18 | 0.44 | 0.15 | - |
| 6. | 20MoCr4 ¹ | 0.57 | 0.30 | 0.66 | 0.018 | 0.011 | 0.049 | 0.56 | 0.18 | 0.44 | 0.15 | - |
| 7. | 25MoCr4 ¹ | 0.31 | 0.20 | 0.67 | 0.017 | 0.022 | 0.034 | 0.50 | - | 0.45 | 0.11 | - |
| 8. | 20NiMoCr6 ¹ | 0.28 | 0.15 | 0.62 | 0.015 | 0.020 | 0.015 | 0.47 | - | 0.48 | 1.58 | - |
| 9. | Ck45 | 0.44 | 0.22 | 0.66 | 0.022 | 0.029 | - | 0.15 | - | - | - | 0.02 |
| 10. | 37MnSi5 | 0.38 | 1.05 | 1.14 | 0.035 | 0.019 | - | 0.23 | - | - | - | 0.02 |
| 11. | 42MnV7 | 0.43 | 0.28 | 1.67 | 0.021 | 0.008 | - | 0.32 | 0.06 | 0.03 | 0.11 | 0.10 |
| 12. | 34Cr4 | 0.35 | 0.23 | 0.65 | 0.026 | 0.013 | - | 1.11 | 0.18 | 0.05 | 0.23 | <0.01 |
| 13. | 34Cr4 | 0.36 | 0.29 | 0.69 | 0.011 | 0.014 | - | 1.09 | 0.12 | 0.07 | 0.08 | 0.01 |
| 14. | 41Cr4 | 0.44 | 0.22 | 0.80 | 0.030 | 0.023 | - | 1.04 | 0.17 | 0.04 | 0.26 | <0.01 |
| 15. | 41Cr4 | 0.41 | 0.25 | 0.71 | 0.031 | 0.024 | - | 1.06 | 0.17 | 0.02 | 0.22 | <0.01 |
| 16. | 36Cr6 | 0.36 | 0.25 | 0.49 | 0.021 | 0.020 | - | 1.54 | 0.16 | 0.03 | 0.21 | <0.01 |
| 17. | 25CrMo4 | 0.22 | 0.25 | 0.64 | 0.010 | 0.011 | - | 0.97 | 0.16 | 0.23 | 0.33 | <0.01 |
| 18. | 34CrMo4 | 0.30 | 0.22 | 0.64 | 0.011 | 0.012 | - | 1.01 | 0.19 | 0.24 | 0.11 | <0.01 |
| 19. | 42CrMo4 | 0.38 | 0.23 | 0.64 | 0.019 | 0.013 | - | 0.99 | 0.17 | 0.16 | 0.08 | <0.01 |
| 20. | 50CrMo4 | 0.50 | 0.32 | 0.80 | 0.017 | 0.022 | - | 1.04 | 0.17 | 0.24 | 0.11 | <0.01 |
| 21. | 50CrMo4 | 0.46 | 0.22 | 0.50 | 0.015 | 0.014 | - | 1.00 | 0.26 | 0.21 | 0.22 | <0.01 |
| 22. | 27MnCrV4 | 0.24 | 0.21 | 1.06 | 0.014 | 0.020 | - | 0.79 | 0.17 | 0.02 | 0.18 | <0.01 |
| 23. | 50CrV4 | 0.55 | 0.22 | 0.98 | 0.017 | 0.013 | - | 1.02 | 0.07 | - | 0.01 | 0.11 |
| 24. | 50CrV4 | 0.47 | 0.35 | 0.82 | 0.035 | 0.015 | - | 1.20 | 0.14 | - | 0.04 | 0.11 |

¹ Higher carbon content relative to the standard carbon content in steels.

2.2. Input Variables and Data

The volume fractions of steel microconstituents, namely martensite, bainite, and ferrite-pearlite mixture, primarily depend on the chemical composition and the temperature evolution during the heat treatment. For this reason, the prediction of the volume fractions of steel microconstituents using ANNs was designed to utilize 10 input variables: the main alloying elements, the heat treatment parameters, and the specific Jominy distance, as detailed in Table 2. In line with this, an original dataset was generated. All input data were derived from experimental results obtained from the literature [17]. Based on these input data, a model for microstructure prediction has been established, which can be used even for steels whose experimental data are not known or available in the literature.
Different alloying elements suppress the diffusional pearlitic and bainitic transformations in steels to varying degrees, significantly influencing the volume fractions of microconstituents.
The temperature variation at any point in the component is the key driving force for phase transformations. Higher cooling rates during the cooling of steel from the austenitizing temperature result in a higher volume fraction of hard martensite, whereas lower cooling rates lead to a higher volume fraction of the softer ferrite-pearlite mixture. From this, it can be concluded that the volume fractions of these microconstituents are primarily determined by the cooling rate of the steel, which is effectively defined by the cooling time from the austenitizing temperature to 500 °C. This is further supported by the established relationship between the as-quenched hardness of steel and the cooling time from 800 °C to 500 °C, as widely recognized in the literature [17] and practice.
Furthermore, heat treatment parameters, such as the austenitizing temperature and the austenitizing time, are also included as input parameters, as higher austenitizing temperatures and longer austenitizing times promote austenite grain growth and enhance the solubility of carbon and other alloying elements in austenite, thereby influencing the kinetics of austenite decomposition.
The microstructure achieved through steel quenching also depends on the steel’s hardenability. Therefore, in addition to the chemical composition, the specific Jominy distance was included as an input variable. This distance is directly related to the hardenability of steel, depending mostly on alloying elements, and corresponds to the Jominy distance at which 50% of the microstructure consists of martensite [13].
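To make this definition concrete, the specific Jominy distance can be read off a measured martensite profile along a Jominy specimen by interpolating where the martensite fraction crosses 50%. The following MATLAB sketch illustrates this; the distance and fraction values are hypothetical placeholders, not data from this study.

```matlab
% Illustrative sketch: estimating the specific Jominy distance Ed, defined
% as the Jominy distance at which 50% of the microstructure is martensite.
% The profile below is hypothetical example data.
d     = [1.5  3    5    7    9    11   13   15   20   25];    % Jominy distance, mm
mFrac = [0.98 0.95 0.90 0.80 0.65 0.52 0.40 0.30 0.15 0.08];  % martensite fraction

% Linear interpolation of the monotonically decreasing martensite profile
% at a fraction of 0.5 gives Ed.
Ed = interp1(mFrac, d, 0.5);
fprintf('Specific Jominy distance Ed = %.1f mm\n', Ed);
```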

3. Methods

Development of Artificial Neural Networks for Prediction of Low-Alloy Steels’ Volume Fractions of Microstructure Constituents

By its nature, the prediction of the volume fractions of microstructure constituents in steels is a regression, i.e., function approximation, problem. For such problems, in various research areas including material property prediction, ANNs are used as versatile, nonlinear computational models inspired by biological neural systems. The proposed procedure for estimating the volume fractions of microstructure constituents of low-alloy steels using artificial neural networks, including data extraction and preparation, ANN model building and training, and analysis of ANN performance, is given in the flow chart in Figure 1 and explained in more detail in the following paragraphs.
ANNs consist of an input layer, one or more hidden layers, and an output layer. Input and output layers have one neuron per input and output variable, while the size and the number of hidden layer(s) can vary. In a fully connected ANN, all neurons are interconnected by weights. A fully connected multilayer perceptron with one hidden layer and all input variables relevant for this study (listed in Table 2) is shown in Figure 2.
In this study, several two-layer multilayer perceptrons (MLPs) were developed to estimate the steels' volume fractions of microstructure constituents, taking advantage of MLPs' ability to model complex, nonlinear relationships in the data. According to [18], a two-layer MLP with a hyperbolic tangent transfer function in the hidden layer and a linear transfer function in the output layer is considered a universal approximator and is efficient in solving most regression problems. Because the dataset is relatively small, a single hidden layer was chosen, which is appropriate for avoiding overfitting and ensuring generalization with limited data. However, the combination of tansig and linear activation functions did not prove adequate for the problem presented in this research. Two conditions should be met when estimating the volume fractions of microstructure constituents: first, the estimated values should be non-negative, and second, the sum of all outputs should equal 1. In addition to tansig, the logsig function was explored for the hidden layer. In the output layer, in addition to the linear transfer function, the sigmoid and softmax transfer functions were considered. The final configuration used the logsig transfer function in the hidden layer and softmax in the output layer. The outputs of the softmax transfer function can be interpreted as the probabilities associated with each class (or, here, the volume fraction of each phase). Each output falls between 0 and 1, and the outputs sum to 1 [18]. Although softmax is more commonly used in pattern recognition or classification problems, here it aligns with the nature of the research problem.
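As a minimal sketch, the selected architecture can be configured in MATLAB (the software used in this study; see below); the hidden layer size and variable names are illustrative assumptions, not the finally selected values.

```matlab
% Sketch of the final MLP layout: one logsig hidden layer and a softmax
% output layer, so the three predicted volume fractions are non-negative
% and sum to 1. H = 5 is an example hidden layer size.
H   = 5;
net = feedforwardnet(H, 'trainlm');      % two-layer MLP, Levenberg-Marquardt

net.layers{1}.transferFcn = 'logsig';    % hidden layer: log-sigmoid
net.layers{2}.transferFcn = 'softmax';   % output layer: softmax
net.performFcn            = 'mse';       % error function: mean square error

% With X an I-by-N input matrix and T a 3-by-N target matrix:
% [net, tr] = train(net, X, T);
% Y = net(X);    % each column is non-negative and sums to 1
```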
The backpropagation algorithm, which is often used for supervised learning with a multilayer perceptron, is also used in this research. The primary goal of artificial neural network training is to adjust the weights so that the error function is minimized. Here, the mean square error, MSE, is chosen as the error function. The initialization of weights marks the beginning of training, followed by the propagation of input signals through the network from the input layer to the output layer (forward phase). The forward phase is followed by backpropagation (backward phase), in which error signals, calculated by comparing predicted (i.e., output) and target values, propagate from the output layer to the input layer. During this phase, the weights are updated iteratively to reduce the error. The process continues until a specified stopping criterion is met.
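For illustration, the forward and backward phases can be written out explicitly for the logsig-softmax MLP. The sketch below performs one plain gradient-descent step on the MSE; note that the networks in this study were actually trained with the Levenberg-Marquardt algorithm described later, and all sizes and values here are arbitrary examples.

```matlab
% One forward/backward backpropagation pass for a logsig-softmax MLP
% (didactic sketch with plain gradient descent, arbitrary example values).
logsig = @(z) 1 ./ (1 + exp(-z));
sftmax = @(z) exp(z - max(z)) ./ sum(exp(z - max(z)));

I = 4; H = 5; O = 3;                      % example layer sizes
rng(1);                                    % reproducible initialization
W1 = 0.1*randn(H, I);  b1 = zeros(H, 1);
W2 = 0.1*randn(O, H);  b2 = zeros(O, 1);
x  = rand(I, 1);                           % one input vector
t  = [0.2; 0.3; 0.5];                      % target volume fractions (sum to 1)
lr = 0.1;                                  % learning rate

% Forward phase: propagate the input signal to the output layer.
a1 = logsig(W1*x + b1);
y  = sftmax(W2*a1 + b2);

% Backward phase: propagate error signals from output to input layer.
g  = (2/O) * (y - t);                      % derivative of MSE w.r.t. y
d2 = y .* (g - sum(g .* y));               % through the softmax Jacobian
d1 = (W2' * d2) .* a1 .* (1 - a1);         % through the logsig derivative

% Iterative weight update: one gradient-descent step reducing the error.
W2 = W2 - lr * (d2 * a1');   b2 = b2 - lr * d2;
W1 = W1 - lr * (d1 * x');    b1 = b1 - lr * d1;
```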
Different combinations of input variables were considered and used for the development of the artificial neural networks. The first configuration of input variables included the main alloying elements; the austenitizing temperature, Ta; the austenitizing time, ta; the cooling time to 500 °C, t500; and the specific Jominy distance, Ed. The second configuration omits the specific Jominy distance, while the third configuration uses the specific Jominy distance (instead of the main alloying elements) along with the heat treatment parameters Ta, ta, and t500. The input variables for the individual configurations are listed in Table 3. Every MLP had three output variables: the volume fractions of ferrite-pearlite, bainite, and martensite. This was decided after an initial investigation in which the prediction of each microstructural constituent was performed with a separate ANN. In theory, three ANNs with one output each should yield the same result as one ANN with three outputs; however, this did not give good results in this case, since the outputs are strongly interdependent.
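A possible way of assembling the three input configurations from a data matrix is sketched below; the column order and the synthetic placeholder data are assumptions for illustration only.

```matlab
% Hypothetical layout: one row per dataset, columns assumed to be
% [C Si Mn Cr Mo Ni Ta ta t500 Ed]; rand() stands in for the real data.
data = rand(423, 10);

X1 = data(:, 1:10)';    % Configuration No. 1: composition + Ta, ta, t500 + Ed
X2 = data(:, 1:9)';     % Configuration No. 2: composition + Ta, ta, t500
X3 = data(:, 7:10)';    % Configuration No. 3: Ta, ta, t500 + Ed

% Targets: volume fractions of ferrite-pearlite, bainite, and martensite,
% one column per dataset (placeholder values normalized to sum to 1).
T = rand(3, 423);  T = T ./ sum(T, 1);
```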
For the development and testing of the artificial neural networks with all three configurations of input variables, 423 datasets for 24 steels were used (Supplementary Materials, Table S1). For the purpose of this research, ANNs were developed using the computer software MATLAB R2022b [19].
ANN robustness is ensured by preventing overlearning and overfitting, as well as by evaluating the ANNs' performance on test data, i.e., data that were not used for ANN development.
Overlearning was prevented by combining the “growth method” for determining the number of neurons in the hidden layer with early stopping as a principle for improving generalization. With early stopping, the weights are updated on the training dataset while the error function (mean square error, MSE) is calculated on the validation dataset. The training is stopped once the MSE on the validation dataset reaches a minimum and then increases for a predefined number of epochs.
The size of the hidden layer, i.e., the maximum number of neurons in the hidden layer, H, for which the ANNs with different configurations of input variables were trained, is determined depending on the number of available training equations, Ntraineq, the number of input variables, I, and the number of output variables, O:
$$H \le \frac{N_{\mathrm{traineq}} - O}{I + O + 1}. \qquad (1)$$
Limiting the maximum number of neurons in the hidden layer, H, is important for preventing overfitting. The Levenberg–Marquardt algorithm with early stopping was used for the training of networks with three different combinations of input variables (as listed in Table 3), and hidden layer size from one neuron to H neurons (“growth method”). According to [18], this learning algorithm with backpropagation appears to be among the fastest ANN training algorithms for moderate numbers of network parameters. The maximum number of neurons H was different for each configuration, in line with Equation (1). Each architecture was trained 10 times with random initial weights and data divisions. In total, 2410 networks were trained—630 for Configuration No. 1, 680 for Configuration No. 2, and 1100 for Configuration No. 3.
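The training protocol described above (growth method, ten random restarts per architecture, random data division, and Levenberg-Marquardt with early stopping) can be sketched as follows. The placeholder data, the small Hmax, and the single selection criterion (overall RMSE) are simplifications for illustration; the study trained up to configuration-dependent limits from Equation (1) and selected networks using several metrics, as described below.

```matlab
% "Growth method" sketch: hidden layer sizes 1..Hmax, each trained 10 times
% with random initial weights and random 0.7/0.15/0.15 data division.
X = rand(10, 423);                         % placeholder inputs (I = 10)
T = rand(3, 423);  T = T ./ sum(T, 1);     % placeholder targets (O = 3)
Hmax = 5;                                  % small demo limit; Eq. (1) allows 63
bestRMSE = Inf;
for H = 1:Hmax
    for rep = 1:10
        net = feedforwardnet(H, 'trainlm');
        net.layers{1}.transferFcn  = 'logsig';
        net.layers{2}.transferFcn  = 'softmax';
        net.divideFcn              = 'dividerand';  % random data division
        net.divideParam.trainRatio = 0.70;
        net.divideParam.valRatio   = 0.15;          % drives early stopping
        net.divideParam.testRatio  = 0.15;
        [net, tr] = train(net, X, T);               % LM + early stopping
        rmse = sqrt(perform(net, T, net(X)));       % RMSE over all data
        if rmse < bestRMSE
            bestRMSE = rmse;  bestNet = net;  bestTr = tr;
        end
    end
end
```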
The initialization of weights determines the starting point of ANN training; when it is performed randomly over 10 iterations, the odds are increased that a good starting point is found and that the MSE will reach the global minimum.
The most important hyperparameters explored for the development of the ANNs in this research are given in Table 4. Hyperparameters selected for the final ANN configurations are shown in bold.
The Levenberg–Marquardt algorithm requires data division into training, validation, and testing datasets. Using ten trainings per architecture also ensures that the random data division is performed in such a way that these three datasets represent the entire population. The fractions of data assigned to the training, validation, and testing datasets are commonly set to 0.7/0.15/0.15, respectively, which is adopted here as well.
If N is the total number of datasets used for the development of ANNs, the number of training examples, Ntrain, is then:
$$N_{\mathrm{train}} = 0.7\,N, \qquad (2)$$
while the number of training equations Ntraineq is:
$$N_{\mathrm{traineq}} = O \cdot N_{\mathrm{train}}. \qquad (3)$$
The number of training equations Ntraineq was constant for all networks, regardless of the number of input variables and architecture. Output variables were always volume fractions of ferrite and pearlite, bainite, and martensite, which gives the number of outputs, O = 3.
The number of unknown weights, Nw, in a fully connected MLP with one hidden layer is:
$$N_w = (I + 1)\,H + (H + 1)\,O. \qquad (4)$$
Several limitations should be kept in mind when determining the size of the hidden layer. The number of weights, Nw, should be much smaller than the number of training equations, i.e., Nw << Ntraineq; in the extreme case, their difference, the number of degrees of freedom, Ndof, must be greater than zero:
$$N_{\mathrm{dof}} = N_{\mathrm{traineq}} - N_w > 0. \qquad (5)$$
The number of training equations, Ntraineq, should be 4–5 times greater than the number of unknown weights, Nw. For the worst-case scenario with the maximum number of input variables (10, Configuration No. 1), the maximum number of neurons in the hidden layer, H, that fulfills these conditions is between 12 and 15.
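As a worked check of Equations (1)-(5) using this study's numbers (N = 423 datasets, O = 3 outputs, I = 10 inputs for Configuration No. 1):

```matlab
% Worked sizing check for Configuration No. 1.
N        = 423;
Ntrain   = round(0.7 * N);                        % Eq. (2): 296 training examples
O        = 3;
Ntraineq = O * Ntrain;                            % Eq. (3): 888 training equations
I        = 10;
Hmax     = floor((Ntraineq - O) / (I + O + 1));   % Eq. (1): 63

H     = 12;                                       % candidate hidden layer size
Nw    = (I + 1)*H + (H + 1)*O;                    % Eq. (4): 171 weights
Ndof  = Ntraineq - Nw;                            % Eq. (5): 717 > 0
ratio = Ntraineq / Nw;                            % approx. 5.2, within the 4-5 rule
```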
Since the original data were divided into training, validation, and testing subsets, the regression analysis between target and output values should be performed on each subset individually, as well as on the full dataset. Should the ANN show accurate fitting on the training subset, but poor results on the validation and test subsets, this would indicate overfitting. If training and validation results are good, but the testing results are poor, this could indicate extrapolation [18]. Since ANNs learn by example, they are only reliable if applied to the same data distribution, as was the case with the learning dataset.
The selection of the best-performing artificial neural network, across all three configurations and all trained architectures (hidden layer sizes up to H), was based on the coefficient of correlation, r, and the root mean square error, RMSE, for the whole dataset. RMSE (the square root of the MSE) is a useful indicator of model accuracy because it expresses the prediction errors of different models in the same unit as the variable to be predicted. The greater the r and the smaller the RMSE, the better the network's performance is considered to be. If several networks had similar results, the one with the smaller hidden layer was chosen. In view of the discrepancies that can occur between the training, validation, and testing subsets, the coefficients of correlation rtrain, rval, and rtest, as well as the root mean square errors RMSEtrain, RMSEval, and RMSEtest, were also analyzed for the individual subsets, so that balanced prediction of all three microconstituents is ensured where possible. Results for the selected architectures of all three configurations are summarized in Table 5 and Table 6.
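Continuing the earlier sketch, the per-subset metrics can be computed from the training record returned by train(); the variables bestNet, bestTr, X, and T are those from the growth-method sketch above.

```matlab
% r and RMSE on the training, validation, and test subsets and overall.
Y       = bestNet(X);                              % predictions, 3-by-N
subsets = {bestTr.trainInd, bestTr.valInd, bestTr.testInd, 1:size(X, 2)};
names   = {'train', 'val', 'test', 'all'};
for k = 1:numel(subsets)
    idx  = subsets{k};
    C    = corrcoef(T(:, idx), Y(:, idx));         % pooled over all outputs
    r    = C(1, 2);
    rmse = sqrt(mean((T(:, idx) - Y(:, idx)).^2, 'all'));
    fprintf('%-5s  r = %.3f   RMSE = %.3f\n', names{k}, r, rmse);
end
```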
The comparison of the given metrics, overall and for the individual subsets, shows that they do not differ significantly. This is especially important for the training and testing values of the coefficient of correlation r and the root mean square error RMSE, i.e., rtrain and rtest, and RMSEtrain and RMSEtest. Based on these indicators, it can be concluded that the design and selection of the ANNs ensured robustness and good generalization capabilities.

4. Results

When training the ANNs, the error function MSE was calculated as a mean for all three outputs. Consequently, RMSE values in Table 5 are given as an average for all three outputs. However, it is also useful to see ANNs’ performances for each microstructure constituent individually. Coefficients of correlation rtest, as well as the values of root mean square error RMSEtest for each microstructure constituent, are given in Table 6. Indices “F-P”, “B” and “M” refer to ferrite–pearlite, bainite, and martensite, respectively.
It can be seen that Configuration No. 1, in which all input variables are included, results in the best performance overall, as well as for each microconstituent. Configuration No. 2, which includes the chemical composition along with the heat treatment parameters, and Configuration No. 3, which includes the specific Jominy distance and the heat treatment parameters as inputs, perform similarly, with the latter configuration being marginally better in almost all metrics. This shows that the specific Jominy distance can successfully be used for the prediction of the microstructure of low-alloy steels in situations where the detailed chemical composition of the steel is not known. This somewhat aligns with [13], where similar findings were reported for the prediction of the total hardness after continuous cooling, HVtot. When the prediction of the individual microstructure constituents is observed, it can be seen that all models predict the volume fractions of ferrite–pearlite and martensite better than that of bainite.
A common practice in the evaluation of ANNs, which also contributes to comparability with results published in the literature, is to use metrics other than the coefficient of correlation r and the root mean square error RMSE to estimate an ANN's performance. Various metrics are used for this purpose, such as the coefficient of determination, R2; the mean absolute error, MAE; the mean absolute percentage error, MAPE; and others. MAPE is the most common metric used to measure the accuracy of estimated vs. actual values, i.e., as a forecasting goodness indicator [13]. However, calculating MAPE involves dividing by target values which, given the nature of the problem, are sometimes zero, making MAPE an inadequate metric for assessing an ANN's performance here. Consequently, in this paper, the ANNs were additionally evaluated using the coefficient of determination Rtest2 (Equation (6)) and the mean absolute error MAE (Equation (7)).
$$R^2 = 1 - \frac{\sum_{i}^{n}\left(t_i - y_i\right)^2}{\sum_{i}^{n}\left(t_i - \bar{y}\right)^2}, \qquad (6)$$

$$\mathrm{MAE} = \frac{\sum_{i}^{n}\left|t_i - y_i\right|}{n}. \qquad (7)$$
In Equation (6), the numerator represents the residual sum of squares (sum of squared differences between target values ti and the predicted values yi), i.e., the variability that the model cannot explain. The denominator represents the total sum of squares (sum of squared differences between target value, ti, and the mean of the target values, y ¯ ), i.e., the total variability in the target values of dependent variables. The coefficient of determination Rtest2 thus measures how well the predicted values approximate the true data. In Equation (7), the numerator represents the sum of absolute differences between target values ti and the predicted values yi for each observation, while n in the denominator is the number of observations. Mean absolute error MAE measures the average absolute difference between the predicted and target values.
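Both metrics follow directly from the definitions above; a minimal sketch of their computation on the test subset, reusing bestNet, bestTr, X, and T from the earlier sketches, might read:

```matlab
% Coefficient of determination (Eq. (6)) and MAE (Eq. (7)) on the test subset.
idx = bestTr.testInd;
t   = T(:, idx);                  % targets
y   = bestNet(X(:, idx));         % predictions
R2  = 1 - sum((t - y).^2, 'all') / sum((t - mean(t, 'all')).^2, 'all');
MAE = mean(abs(t - y), 'all');
```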
Values of MAE are obtained in the same unit as the target variable, which enables interpretability, similarly to the RMSE. However, MAE is more robust to outliers, since the errors are not squared as they are in the calculation of RMSE. Significant differences between RMSE and MAE values could therefore indicate large error outliers. Table 7 shows the Rtest2 and MAEtest values for each microstructure constituent. Both metrics show the same trend as RMSEtest for all three configurations, with Configuration No. 1 being the best (Rtest2 = 0.887, MAEtest = 0.084 and RMSEtest = 0.132) and Configurations No. 2 (Rtest2 = 0.828, MAEtest = 0.100 and RMSEtest = 0.163) and No. 3 (Rtest2 = 0.820, MAEtest = 0.102 and RMSEtest = 0.167) performing slightly worse, but still within acceptable limits.
The trend for the individual microconstituents is the same as for the RMSEtest values, i.e., the ANNs' predictions for bainite are somewhat less accurate than those for ferrite–pearlite and martensite. For example, the best configuration, No. 1, yields the following MAEtest values per microstructure constituent: MAEtest,F-P = 0.059, MAEtest,B = 0.105, and MAEtest,M = 0.087. Although RMSE and MAE are not directly comparable, their differences indicate some variance in the errors.

5. Discussion and Conclusions

In this study, an innovative methodology was developed and applied for predicting the volume fractions of microstructure constituents in heat-treated low-alloy steels using artificial neural networks. The research includes a systematic process for ANN development, from the selection of input variables through network training to performance evaluation. By considering three configurations of input variables, the predictive potential of the specific Jominy distance as an alternative input when the detailed chemical composition is not available was explored. This is advantageous, as it enables practical modeling in scenarios where full material data might be limited.
The use of the “growth method” to limit the ANN’s hidden layer size and the Levenberg–Marquardt algorithm with early stopping to train the networks ensures robust optimization and prevents overfitting. The adoption of the softmax transfer function in the output layer, although not typical in the application of ANNs to regression problems, ensures non-negative outputs that sum to 1.
Selection and performance evaluation of ANNs was based on multiple metrics, including root mean square error, RMSE, mean absolute error, MAE, and coefficient of determination, R2, ensuring that the networks’ predictive accuracy is thoroughly assessed for both overall and individual microstructure constituents.
Configuration No. 1, which includes all input variables (alloying elements, austenitizing temperature and time, cooling time to 500 °C, and specific Jominy distance), yields the best overall predictive performance. This result was consistent across all metrics, including RMSE, MAE, and R2, suggesting that a combination of both chemical composition and specific Jominy distance provides the most accurate representation of microstructure transformations.
Configuration No. 3, in which the chemical composition is replaced with specific Jominy distance as an input variable, performs comparably well. This indicates that specific Jominy distance can effectively replace the chemical composition as an input variable for the prediction of the volume fractions of microstructure constituents. The result aligns well with findings from an earlier study investigating the prediction of total hardness of low-alloy steels after continuous cooling [13], based on Jominy distance as a practical input parameter.
As for individual microstructure constituents, the predictions for ferrite and pearlite, as well as martensite, are more accurate than those for bainite across all configurations. This could be attributed to the complexity of bainite formation and its sensitivity to variations in cooling rates, making it more challenging to model accurately. The research in the field of bainite is extensive [20,21,22,23], and the qualitative theory explaining bainite formation remains somewhat controversial [21,24]. One theory suggests a diffusion-controlled transformation where bainitic growth occurs via a diffusional ledge mechanism, while another proposes that the bainite reaction is a displacive transformation [21]. Both theories have led to the development of models to predict the transformation kinetics [9,10,25].
Overall, this study showed that ANNs are an effective tool for modeling the complex, nonlinear relationships relevant to steel microstructure prediction. Once again, it is important to highlight the novelty of this research. The ability to use the specific Jominy distance as a substitute for the chemical composition expands the applicability of these models, particularly in industrial settings where detailed material data may not always be available. The research introduces an ANN-based model for predicting steel microstructure using only pre-heat treatment parameters, excluding hardness, which is typically unknown before heat treatment. Additionally, the model can be applied to steels for which experimental data are neither known nor available in the literature, expanding its practical use in process planning and optimization. For industrial implementation, this means that, based on the chemical composition and/or the specific Jominy distance and the heat treatment parameters (from which the cooling time to 500 °C at every location of a heat-treated part can be calculated), the microstructure of a machine part can be estimated within seconds using the trained ANN models, which is crucial for its behavior in applications. Future work could focus on improving the prediction of bainite by incorporating additional parameters or exploring alternative network architectures.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/ma18030564/s1: Table S1: Data used for development and testing of artificial neural networks.

Author Contributions

Conceptualization, S.S.H. and D.I.; methodology, S.S.H., T.M. and D.I.; software, T.M.; validation, S.S.H., T.M., D.I. and R.B.; formal analysis, T.M. and R.B.; investigation, S.S.H. and D.I.; writing, S.S.H., T.M., D.I. and R.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Croatian Science Foundation under the project IP-2020-02-5764 and by the University of Rijeka under the project numbers uniri-iskusni-tehnic-23-302 and uniri-iskusni-tehnic-23-233.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Nomenclature

Variables

| Symbol | Description |
|---|---|
| Al | Aluminum (wt.%) |
| C | Carbon (wt.%) |
| Cr | Chromium (wt.%) |
| Cu | Copper (wt.%) |
| Ed | Specific Jominy distance (mm) |
| H | Number of neurons in hidden layer, i.e., hidden layer size (-) |
| HVtot | Total hardness after continuous cooling (HV) |
| I | Number of input variables (-) |
| Mn | Manganese (wt.%) |
| Mo | Molybdenum (wt.%) |
| n | Number of observations (-) |
| N | Total number of datasets/observations (-) |
| Ndof | Number of degrees of freedom of ANN (-) |
| Ni | Nickel (wt.%) |
| Ntrain | Number of training examples (-) |
| Ntraineq | Number of training equations (-) |
| Nw | Number of weights in ANN (-) |
| O | Number of output variables (-) |
| P | Phosphorus (wt.%) |
| r | Coefficient of correlation (-) |
| R² | Coefficient of determination (-) |
| S | Sulfur (wt.%) |
| Si | Silicon (wt.%) |
| t500 | Cooling time to 500 °C (s) |
| Ta | Austenitizing temperature (°C) |
| ta | Austenitizing time (min) |
| ti | i-th observation target value (-) |
| V | Vanadium (wt.%) |
| yi | i-th observation output value (estimated) (-) |
| ȳ | Mean value of yi (-) |

Abbreviations

| Abbreviation | Meaning |
|---|---|
| ANN | Artificial neural network |
| MAE | Mean absolute error |
| MSE | Mean squared error |
| RMSE | Root mean squared error |

Subscripts

| Subscript | Meaning |
|---|---|
| B | Bainite |
| F-P | Ferrite–pearlite |
| M | Martensite |
| test | Testing subset |
| train | Training subset |
| val | Validation subset |

References

1. Leitner, S.; Winter, G.; Klarner, J.; Antretter, T.; Ecker, W. Model-Based Residual Stress Design in Multiphase Seamless Steel Tubes. Materials 2020, 13, 439.
2. Lopez-Garcia, R.D.; Medina-Juarez, I.; Maldonado-Reyes, A. Effect of Quenching Parameters on Distortion Phenomena in AISI 4340 Steel. Metals 2022, 12, 759.
3. Feng, X.; Wang, Y.; Han, J.; Li, Z.; Jiang, L.; Yang, B. Numerical Simulation and Experimental Verification of the Quenching Process for Ti Microalloying H13 Steel Used to Shield Machine Cutter Rings. Metals 2024, 14, 313.
4. Zener, C. Kinetics of decomposition of austenite. Trans. AIME 1946, 167, 550–595.
5. Lusk, M.T.; Lee, Y.-K. A global material model for simulating the transformation kinetics of low alloy steels. In Heat Treatment and Surface Engineering of Light Alloys: Proceedings of the 7th International Seminar of IFHT, Budapest, Hungary, 15–17 September 1999; Lendvai, J., Réti, T., Eds.; Hungarian Scientific Society of Mechanical Engineering: Budapest, Hungary, 1999; pp. 273–282.
6. Smoljan, B.; Iljkić, D.; Smokvina Hanza, S.; Jokić, M.; Štic, L.; Borić, A. Mathematical Modeling and Computer Simulation of Steel Quenching. Mater. Perform. Charact. 2019, 8, 17–36.
7. Serajzadeh, S. A Mathematical Model for Prediction of Austenite Phase Transformation. Mater. Lett. 2004, 58, 1597–1601.
8. Militzer, M.; Hoyt, J.J.; Provatas, N.; Rottler, J.; Sinclair, C.W.; Zurob, H.S. Multiscale Modeling of Phase Transformations in Steels. JOM 2024, 66, 740–746.
9. Smoljan, B.; Iljkić, D.; Smokvina Hanza, S.; Hajdek, K. Mathematical Modelling of Isothermal Decomposition of Austenite in Steel. Metals 2021, 11, 1292.
10. Quidort, D.; Brechet, Y.J.M. A model of isothermal and non-isothermal transformation kinetics of bainite in 0.5% C steels. ISIJ Int. 2002, 42, 1010–1017.
11. Sitek, W.; Trzaska, J. Practical Aspects of the Design and Use of the Artificial Neural Networks in Materials Engineering. Metals 2021, 11, 1832.
12. Patel, S.; Nathani, A.; Poozesh, A.; Xu, S.; Kazempoor, P.; Ghamarian, I. Combining Neural Networks and Genetic Algorithms to Understand Composition-Microstructure-Property Relationships in Additively Manufactured Metals. J. Manuf. Mater. Process. 2024, 8, 269.
13. Smokvina Hanza, S.; Marohnić, T.; Iljkić, D.; Basan, R. Artificial Neural Networks-Based Prediction of Hardness of Low-Alloy Steels Using Specific Jominy Distance. Metals 2021, 11, 714.
14. Smoljan, B.; Smokvina Hanza, S.; Filetin, T. Prediction of Phase Transformation Using Neural Networks. In Proceedings of the 2nd International Conference Heat Treatment and Surface Engineering in Automotive Applications, Riva del Garda, Italy, 20–22 June 2005.
15. Smokvina Hanza, S.; Iljkić, D.; Tomašić, N. Modelling of Microstructure Transformation During the Steel Quenching. In Proceedings of the 4th International Ph.D. Conference on Mechanical Engineering, Pilsen, Czech Republic, 11–13 September 2006.
16. Liščić, B. Hardenability. In Steel Heat Treatment Handbook: Metallurgy and Technologies, 2nd ed.; Totten, G.E., Ed.; CRC Press, Taylor & Francis Group: Boca Raton, FL, USA, 2007; pp. 213–276.
17. Rose, A.; Hougardy, H. Atlas zur Wärmebehandlung der Stähle; Verlag Stahleisen: Düsseldorf, Germany, 1972.
18. Hagan, M.T.; Demuth, H.B.; Beale, M.H.; De Jesús, O. Neural Network Design, 2nd ed.; 2014. Available online: https://hagan.okstate.edu/NNDesign.pdf (accessed on 5 December 2024).
19. MATLAB, version 9.13.0 (R2022b); The MathWorks Inc.: Natick, MA, USA, 2022.
20. Davenport, E.S.; Bain, E.C. Transformation of austenite at constant subcritical temperatures. Metall. Trans. 1970, 1, 3503–3530.
21. Bhadeshia, H.K.D.H. Bainite in Steels: Transformations, Microstructure and Properties, 2nd ed.; IOM Communications: London, UK, 2001.
22. Fielding, L.C.D. The bainite controversy. Mater. Sci. Technol. 2013, 29, 383–399.
23. Yang, Z.G.; Fang, H.-S. An overview on bainite formation in steels. Curr. Opin. Solid State Mater. Sci. 2005, 9, 277–286.
24. Hillert, M. The nature of bainite. ISIJ Int. 1995, 35, 1134–1140.
25. Rees, G.I.; Bhadeshia, H.K.D.H. Bainite transformation kinetics Part 1 Modified model. Mater. Sci. Technol. 1992, 8, 985–993.
Figure 1. Flow chart of the development procedure of ANNs for prediction of low-alloy steels' volume fractions of microstructure constituents.
Figure 2. Fully connected multilayer perceptron with one hidden layer.
Table 2. The input variables.

| Data No. | Variable | Data No. | Variable |
|---|---|---|---|
| 1. | Carbon (C, wt.%) | 6. | Nickel (Ni, wt.%) |
| 2. | Silicon (Si, wt.%) | 7. | Austenitizing temperature (Ta, °C) |
| 3. | Manganese (Mn, wt.%) | 8. | Austenitizing time (ta, min) |
| 4. | Chromium (Cr, wt.%) | 9. | Cooling time to 500 °C (t500, s) |
| 5. | Molybdenum (Mo, wt.%) | 10. | Specific Jominy distance (Ed, mm) |
Table 3. Variables used for the development of artificial neural networks.

| | Variable | Configuration No. 1 | Configuration No. 2 | Configuration No. 3 |
|---|---|---|---|---|
| Inputs | Carbon (C, wt.%) | + | + | |
| | Silicon (Si, wt.%) | + | + | |
| | Manganese (Mn, wt.%) | + | + | |
| | Chromium (Cr, wt.%) | + | + | |
| | Molybdenum (Mo, wt.%) | + | + | |
| | Nickel (Ni, wt.%) | + | + | |
| | Austenitizing temperature (Ta, °C) | + | + | + |
| | Austenitizing time (ta, min) | + | + | + |
| | Cooling time to 500 °C (t500, s) | + | + | + |
| | Specific Jominy distance (Ed, mm) | + | | + |
| Outputs | Volume fraction of ferrite–pearlite (F-P, -) | + | + | + |
| | Volume fraction of bainite (B, -) | + | + | + |
| | Volume fraction of martensite (M, -) | + | + | + |
Table 4. Overview of the most important hyperparameters explored for the development of ANNs (selected values in bold).

| Hyperparameter | Explored Variables/Values |
|---|---|
| Number of outputs | 1, **3** |
| Normalization of input variables | **min–max** |
| Learning algorithm | **Levenberg–Marquardt with early stopping** |
| Loss function | **mean squared error (MSE)** |
| Number of hidden layers | **1** |
| Number of neurons in a hidden layer | **determined by the growth method** |
| Transfer function (hidden layer) | tansig, **logsig** |
| Transfer functions (output layer) | linear, sigmoid, **softmax** |
| Cross-validation | **cross-validation (train, val, and test subsets, random division)**; k-fold cross-validation |
| Weight initialization | **10 iterations** |
Table 5. Performance of selected artificial neural networks.

| Config. No. | Hidden Layer Size H | Training No. | r | RMSE | rtrain | RMSEtrain | rval | RMSEval | rtest | RMSEtest |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 5 | 5 | 0.932 | 0.141 | 0.941 | 0.131 | 0.874 | 0.187 | 0.947 | 0.132 |
| 2 | 9 | 1 | 0.922 | 0.147 | 0.932 | 0.138 | 0.896 | 0.171 | 0.905 | 0.163 |
| 3 | 14 | 7 | 0.911 | 0.158 | 0.912 | 0.154 | 0.902 | 0.170 | 0.913 | 0.167 |
Table 6. Detailed performance information for selected artificial neural networks on test subsets, per microstructure constituent: coefficients of correlation rtest and root mean square errors RMSEtest.

| Config. No. | rtest | RMSEtest | rtest,F-P | RMSEtest,F-P | rtest,B | RMSEtest,B | rtest,M | RMSEtest,M |
|---|---|---|---|---|---|---|---|---|
| 1 | 0.963 | 0.132 | 0.962 | 0.113 | 0.916 | 0.155 | 0.963 | 0.125 |
| 2 | 0.905 | 0.163 | 0.931 | 0.164 | 0.861 | 0.165 | 0.922 | 0.160 |
| 3 | 0.913 | 0.167 | 0.933 | 0.161 | 0.868 | 0.188 | 0.937 | 0.148 |
Table 7. Detailed performance of selected artificial neural networks on the test dataset, per microstructure constituent: coefficient of determination, Rtest², and mean absolute errors, MAEtest.

| Config. No. | Rtest² | MAEtest | MAEtest,F-P | MAEtest,B | MAEtest,M |
|---|---|---|---|---|---|
| 1 | 0.887 | 0.084 | 0.059 | 0.105 | 0.087 |
| 2 | 0.828 | 0.100 | 0.087 | 0.105 | 0.108 |
| 3 | 0.820 | 0.102 | 0.083 | 0.126 | 0.096 |

