Article

An Improved Particle-Swarm-Optimization Algorithm for a Prediction Model of Steel Slab Temperature

1 School of Mechanical & Automotive Engineering, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China
2 Shandong Institute of Mechanical Design and Research, Jinan 250353, China
3 Key Laboratory of High-Efficiency and Clean Mechanical Manufacture, Ministry of Education, School of Mechanical Engineering, Shandong University, Jinan 250061, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(22), 11550; https://doi.org/10.3390/app122211550
Submission received: 10 September 2022 / Revised: 3 November 2022 / Accepted: 6 November 2022 / Published: 14 November 2022
(This article belongs to the Topic Numerical Modeling on Metallic Materials)

Abstract

To address the low accuracy of temperature prediction, a mathematical model for predicting the temperature of a steel billet is developed. For the prediction process, an improved particle-swarm-optimization algorithm (called XPSO) is proposed. The XPSO is built on a multiple-swarm scheme that improves the global search capability and robustness, thereby raising the prediction accuracy and overcoming the tendency of the basic PSO to become trapped in local optima. The multiple-swarm scheme comprises four modified components: (1) an improved positional-initialization strategy; (2) a mutation strategy for the particle swarms; (3) an adjustment strategy for the inertia weights; (4) a strategy for jumping out of local optima. The effectiveness of the XPSO algorithm was verified on widely used unimodal, multimodal and composite benchmark functions by comparing it with several popular PSO variants (PSO, IPSO, IPSO2, HPSO, CPSO). The XPSO was then applied to predict the temperatures of steel billets on both simulated and measured data sets. The results show that the XPSO is more accurate than the other PSO variants and other optimization approaches (WOA, IA, GWO, DE, ABC) for the temperature prediction of steel billets.

1. Introduction

In the steel industry, a reheating furnace must reheat the material (slabs) to the desired uniform temperature profile at the exit. However, the operation of a slab reheating furnace is a complex physical and chemical process [1]. To better control and optimize the furnace's operation, a suitable temperature prediction model is needed to accurately predict the temperatures of the slabs inside the furnace. Given the continuous development of artificial-intelligence techniques, the demand for such a model is increasing [2]. Therefore, a mathematical model that can predict the discharging temperatures of billets accurately and quickly should be developed for the control and optimization of reheating furnaces. In general, prediction models can be divided into two categories [3]: mechanism models based on first principles [4] and empirical models based on production data and black-box approaches [5]. The first kind of model requires a full understanding of the physical and chemical processes inside the reheating furnace, e.g., [1,6,7,8]. The computational requirements of these models vary widely depending on their level of complexity [3], and the heat-transfer process is usually described by partial differential equations (PDEs). Mechanism models are therefore typically complicated and nonlinear (such as those in [9,10,11,12,13,14]). Hong et al. [13] investigated the sequential function specification method coupled with the Broyden combined method (BC-SFSM) to obtain the temperature field of a steel billet based on the inverse heat-conduction problem; the results show that the majority of relative errors over the whole reheating time are less than 5%. Chen et al. [14] presented a novel method to obtain the heat-transfer boundary conditions by combining a "black box" test with a billet heat-transfer model; the relative errors of the surface and center temperatures were 2.34% and 3.51%, respectively. However, the numerical solution of PDEs requires substantial computational time, which exceeds any practical time criterion, so this kind of model cannot satisfy the requirements of an online system.
Empirical models are often determined by identification methods involving the genetic algorithm (GA), support vector machines (SVMs), neural networks (NNs), particle swarm optimization (PSO) and other intelligent techniques ([15,16,17,18]). For instance, Pongam et al. [19] investigated a prediction model using production data from the Ratchasima Steel Products Company, with the GA used to identify the model's unknown factors. Tang et al. [20] presented an SVM predictive model for the temperature control problem; PSO was applied to determine proper parameters for the SVM model, and good performance was finally obtained. Wang et al. [21] constructed a prediction model of the slab discharging temperature by combining a GA with a BP neural network; the mean-square error of the network was 72.3477, and the error was below 20 °C. Yang et al. [22] used the relevance vector machine (RVM) method to predict the slab temperature, with a maximum prediction error of 10.46 °C. In general, an empirical model is often reduced to a simple formula, so its computation time is small enough for online production [23]. The quality of an empirical model is determined by the production data and by the performance of the intelligent algorithm used. As industrial software and database techniques continue to develop, more and more production data are being collected [24]. Thus, an intelligent algorithm with excellent performance is indispensable, and careful study of such algorithms is warranted.
The PSO algorithm offers high solution accuracy and approaches the optimal solution quickly. However, the performance of the basic PSO varies across application contexts, and it easily falls into local optima. Researchers have therefore studied various strategies for improving PSO. For instance, Alsaidy et al. [25] proposed the longest-job-to-fastest-processor PSO (LJFP-PSO) and minimum-completion-time PSO (MCT-PSO) algorithms for the task-scheduling problem; simulation results demonstrated their effectiveness. Yue et al. [26] presented a novel multi-objective PSO with a ring topology for solving multimodal multi-objective optimization problems; the ring topology is used to form stable niches and locate multiple optima. Peng et al. [27] proposed a symbiotic PSO (SPSO) algorithm to control a water-bath temperature system; a multiple-swarm scheme with three major components (creating initial swarms, evaluating fitness values and updating each particle) is used to escape from locally optimal solutions.
Inspired by these algorithms, we propose an improved PSO algorithm (named XPSO) that uses a multiple-swarm scheme. The scheme comprises four modified components: (1) improving the positional initialization of the particle swarms, with most positions generated randomly and one dimension generated uniformly; (2) adding a mutation strategy to the particle swarm to increase its population diversity; (3) adjusting the inertia weight through a "stepped" adaptive model; (4) adding a strategy for escaping from local minima.
This paper is presented as follows: the prediction model of the slab temperature is established in Section 2; the detailed strategies for improvement of XPSO are described in Section 3; the simulation and discussion are given in Section 4; Section 5 summarizes the conclusions.

2. The Prediction Model of the Slab Temperature

2.1. The Structure of a Reheating Furnace

The heat-transfer processes in the furnace mainly consist of radiative heat transfer and convective heat exchange. The main function of a reheating furnace is to bring the slabs to the desired discharging temperatures for the subsequent rolling process. A slab passes through the preheating zone, the multistage heating zone and the soaking zone on its way from the furnace's inlet to its outlet, as shown in Figure 1. The multistage heating zone can be divided into four subzones (upper zones 1 and 2; bottom zones 4 and 5), and the soaking zone is divided into two subzones (upper zone 3; bottom zone 6). The key quantities of a subzone are a series of regenerative burners, the corresponding furnace nozzle temperatures $T_{N_j}$ and the furnace zone temperature $T_{P_k}$. Here, $T_{N_j}$ is easily obtained from the thermocouples near burner nozzle $j$, and $T_{P_k} = \frac{1}{n}\sum_{j=1}^{n} T_{N_j}$ is the average of the nozzle temperatures $T_{N_j}$ in zone $k$.
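As a small illustration of the zone-temperature definition above, the following Python sketch averages the nozzle thermocouple readings of one zone; the function name and the sample readings are hypothetical:

```python
import numpy as np

def zone_temperature(nozzle_temps):
    """Average the nozzle readings T_Nj of one zone to obtain the
    zone temperature T_Pk (hypothetical helper)."""
    return float(np.mean(nozzle_temps))

# Example: three burner nozzles in one zone (illustrative values, deg C)
print(zone_temperature([1180.0, 1195.0, 1188.0]))  # -> 1187.67
```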

2.2. The Prediction Model and Optimization Problem

The most important quality criterion of a reheating furnace is the average outlet temperature of the steel slab, and many factors affect the slab's discharging temperature. By analyzing the field data and related research ([22,23]), the following key factors can be confirmed: the initial temperature of the billet, $T_0$; the furnace nozzle temperatures, $T_{N_j}$; the mean temperature of each zone, $T_{P_k}$; the reheating time of the billet in the furnace, $\theta$; the material of the slab, $\alpha$; and the thickness of the slab, $d$. Thus, the inputs of the prediction model are $X = [X_1, X_2, X_3]^T$, which consist of the slab's physical parameters, $X_1 = [\alpha, d, \theta, T_0]^T$; the vector of furnace nozzle temperatures, $X_2 = [T_{N_1}, T_{N_2}, \dots, T_{N_m}]^T$; and the vector of furnace zone temperatures, $X_3 = [T_{P_1}, T_{P_2}, \dots, T_{P_6}]^T$. Note that $m$ is the total number of burners in the reheating furnace. The output is the predicted temperature $Y = T_E$, i.e., the vector of predicted temperatures of the slabs when discharged from the furnace. Finally, the prediction model is constructed as:
$$Y = f(X, W) = w_0 d + w_1 \alpha + w_2 T_0 + w_3 T_0^2 + \sum_{i=1}^{m} w_{i+3} T_{N_i} + \sum_{j=1}^{6} w_{j+m+3} \cos(T_{P_j} \theta) + \sum_{j=1}^{6} w_{j+m+9} \sin(T_{P_j} \theta) \quad (1)$$
where $W = [w_0, w_1, \dots, w_{m+15}]$ represents the vector of unknown weight parameters, which will be determined by the proposed XPSO algorithm. The formula $f$ is composed of a polynomial and a Fourier function and was designed by the authors based on experience. To make the analytical response $Y$ match the measured value $Y^*$, an optimization problem is established with the objective of minimizing the error between the theoretically calculated values and the measured data. To alleviate overfitting and improve generalization performance, regularization is employed in the optimization problem. The resulting formulation is as follows:
$$\text{Minimize } J = \| Y - Y^* \|^2 + \lambda_1 \| W \| + \lambda_2 \| W \|^2 \quad \text{s.t.} \quad Y = f(X, W) \quad (2)$$
where $Y^*$ represents the measured temperature of the slab, and $\lambda_1, \lambda_2$ are the regularization parameters.
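To make the structure of Equations (1) and (2) concrete, the following Python sketch evaluates the prediction formula and the regularized objective. The weight indexing follows the definitions above; the interpretation of $\|W\|$ as an L1 penalty and the helper names are assumptions, not the authors' implementation.

```python
import numpy as np

def predict_temperature(X1, X2, X3, W):
    """Sketch of prediction formula (1). X1 = [alpha, d, theta, T0];
    X2 = nozzle temperatures T_N1..T_Nm; X3 = zone temperatures T_P1..T_P6;
    W = [w_0, ..., w_{m+15}] is the weight vector identified by XPSO."""
    alpha, d, theta, T0 = X1
    X2, X3, W = np.asarray(X2), np.asarray(X3), np.asarray(W)
    m = len(X2)
    y = W[0] * d + W[1] * alpha + W[2] * T0 + W[3] * T0 ** 2
    y += W[4:m + 4] @ X2                        # sum_i w_{i+3} * T_Ni
    y += W[m + 4:m + 10] @ np.cos(X3 * theta)   # cosine Fourier terms
    y += W[m + 10:m + 16] @ np.sin(X3 * theta)  # sine Fourier terms
    return y

def objective(W, samples, lam1=1.2, lam2=1.0):
    """Regularized fitness J of problem (2); ||W|| is taken here as the
    L1 norm (an assumption) and ||W||^2 as the squared L2 norm."""
    W = np.asarray(W)
    err = sum((predict_temperature(X1, X2, X3, W) - y_meas) ** 2
              for X1, X2, X3, y_meas in samples)
    return err + lam1 * np.abs(W).sum() + lam2 * (W ** 2).sum()
```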

3. Improved Particle-Swarm-Optimization Algorithm

3.1. The Basic PSO Algorithm

The PSO algorithm originates from researchers' observations of the social behavior of flocks of birds and schools of fish [28]. Each particle in the population is a potential solution to the objective function. A particle has two attributes: velocity $V_i = [v_{i1}, v_{i2}, \dots, v_{ij}, \dots, v_{iD}]$ and position $X_i = [x_{i1}, x_{i2}, \dots, x_{ij}, \dots, x_{iD}]$, where $i = 1, 2, \dots, N$ ($N$ is the size of the particle swarm) and $j = 1, 2, \dots, D$ ($D$ is the dimension of the solution space).
In each iteration, the information of each individual particle is filtered to obtain the historically optimal position $p_i$ of the individual and the historically optimal position $g$ of the population. This information is substituted into the velocity-update equation, Equation (3), and the position-update equation, Equation (4), to adjust the search direction of the population so that the particle swarm approaches the global optimal solution, i.e., the optimal position of the population.
$$v_{ij}(k+1) = w\, v_{ij}(k) + r_1 c_1 \left( p_i - x_{ij}(k) \right) + r_2 c_2 \left( g - x_{ij}(k) \right) \quad (3)$$
$$x_{ij}(k+1) = x_{ij}(k) + v_{ij}(k+1) \quad (4)$$
where $w$ represents the inertia weight; $r_1, r_2 \in [0, 1]$ are two uniformly distributed random values; $c_1, c_2$ are the acceleration parameters, which are non-negative constants; and $k = 1, 2, 3, \dots, G$ is the current iteration, where $G$ is the maximum number of iterations. The velocity update of a particle is influenced by three factors: the particle's current velocity $v_{ij}(k)$, the particle's own experience $\Delta V_P$ and the experience of the swarm $\Delta V_g$. $\Delta V_P$ is the component through which the particle learns from its own historical information, and $\Delta V_g$ is the component through which it learns from the historical information of the other particles in the population.
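The following minimal vectorized Python sketch performs one iteration of the update rules (3) and (4), with velocity clamping to $[V_{\min}, V_{\max}]$ added as in the experimental setup of Section 4; the broadcasting of the personal and global bests is an implementation choice, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(X, V, p_best, g_best, w=0.8, c1=2.5, c2=1.5, v_max=0.1):
    """One basic PSO iteration: X, V, p_best are (N, D) arrays,
    g_best is (D,). c1, c2 follow the paper's settings."""
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = w * V + c1 * r1 * (p_best - X) + c2 * r2 * (g_best - X)  # Eq. (3)
    V = np.clip(V, -v_max, v_max)   # velocity limits V_min, V_max
    return X + V, V                 # Eq. (4)
```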

3.2. Improvement Strategies of the XPSO Algorithm

The basic PSO algorithm is a non-globally-convergent optimization algorithm [29]. To reduce the premature probability of falling into a local optimal solution and improve the convergence speed of the basic PSO, an improved PSO (named XPSO) algorithm is proposed based on the multi-strategy co-evolutionary approach. Four specific improvements are described as follows.
  • Improving the positional initialization of the particle swarm: most dimensions are generated randomly, while one randomly selected dimension is generated uniformly.
    In a basic PSO algorithm, all of the particles are randomly initialized. The expression is given as follows:
    $$X_{N \times D} = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1D} \\ x_{21} & x_{22} & \cdots & x_{2D} \\ \vdots & \vdots & \ddots & \vdots \\ x_{N1} & x_{N2} & \cdots & x_{ND} \end{bmatrix} \quad (5)$$
    An increase in the positional diversity of a particle swarm facilitates exploration of the global range. However, greater diversity also makes it harder to converge to the global optimum every time. Hence, an improved approach, based on this "double-edged sword" nature of swarm diversity, is proposed to improve the algorithm's stability. During initialization, one dimension of the position matrix, called X-Dim, is randomly selected, and the positions of the particles in X-Dim are generated according to a uniform distribution, as shown in Equation (6).
    $$x_{ij} = \frac{O_{\max} - O_{\min}}{N}\, i \quad (6)$$
    where $j$ is the randomly selected dimension, and $O_{\max}$ and $O_{\min}$ are the upper and lower limits of the independent variables in each dimension, respectively.
  • The mutation strategy is introduced into the position updating of the particle swarm to compensate for the decline in overall swarm diversity caused by the improved initialization.
    Unlike some other meta-heuristic algorithms, the standard PSO has no evolutionary operators such as crossover or mutation. The mutation strategy is implemented by screening the particles in each iteration: if the fitness value of a particle is lower than the average fitness value of the swarm, the mutation strategy is applied in the current iteration. The positional mutation formula is:
    $$x_{ij}^{*}(k) = x_{ij}(k) - w\, v_{ij}(k) - w \left( g - p_i \right) \quad (7)$$
    where $x^{*}$ denotes the position of the particle after mutation.
  • The inertia weight is adjusted to improve the flexibility of particle flight-speed changes, and the idea of a "stepped" adaptive change is injected into the inertia-weight update.
    The inertia weight $w$ is directly related to the convergence speed. Most studies use a decreasing function as the inertia-weight update formula [30]; others use a "stepped" improvement method [31]. In our method, the inertia weight is adjusted by combining a decreasing function with the "stepped" improvement (a Python sketch of this strategy and of the jump-out strategy appears after this list). The specific changes are given as follows:
    • A "three-step" strategy is proposed to switch the range $[w_s, w_e]$ of the inertia weight according to the fitness value of the best position found so far. The switching formula is:
      $$[w_s, w_e] = \begin{cases} [w_{s1}, w_{e1}], & f(g) \geq Fitness_1 \\ [w_{s2}, w_{e2}], & Fitness_2 < f(g) < Fitness_1 \\ [w_{s3}, w_{e3}], & f(g) \leq Fitness_2 \end{cases} \quad (8)$$
      where $[w_s, w_e]$ is the range of the inertia weight; $f(g)$ is the fitness value of the global optimal solution; and $Fitness_1$ and $Fitness_2$ are autonomously set values. The ranges $[w_{s1}, w_{e1}]$, $[w_{s2}, w_{e2}]$ and $[w_{s3}, w_{e3}]$ must be adjusted according to the objective function of the application at hand.
    • After the range $[w_s, w_e]$ of the inertia weight has been determined, a decreasing function is introduced to adjust $w$. The switching condition depends on the fitness value of the best position found so far. The update formula is:
      $$w = \begin{cases} r \sin\left( \dfrac{w_s \pi}{4} \right), & f(g) \geq Fitness_3 \\ w_s - (w_s - w_e)\dfrac{k}{G}, & f(g) < Fitness_3 \end{cases} \quad (9)$$
      where $r \in [0, 4)$ is a uniformly distributed random number; $w_s$ and $w_e$ are the initial and final values of the range $[w_s, w_e]$, respectively; $Fitness_3$ is an autonomously set value; $k$ is the current iteration; and $G$ is the maximum number of iterations.
  • A strategy for jumping out of local optima is proposed.
    A slope-based counter $t_r$ is used to judge whether the particle swarm has fallen into a local optimum: $t_r$ counts how many times the slope $Slope$ falls below the threshold $\varepsilon$ over windows of five iterations. The slope is calculated as:
    $$Slope = \frac{f_k(g) - f_{k-5}(g)}{5} \quad (10)$$
    If $t_r$ reaches the autonomously set value $s$, the particle swarm is considered trapped in a local optimum, and a "jumping out" operation is performed to change its position. The specific formula is:
    $$x_{ij}(k) = x_{ij}(k) - r_1 c_1 \left( g - x_{ij}(k) \right) + r_2 c_2 \left( bad - x_{ij}(k) \right) \quad (11)$$
    where $bad$ represents the global worst position. The core of this strategy is that the particle swarm moves toward the global worst position while moving away from the local optimum.
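As referenced above, the following Python sketch illustrates the third and fourth strategies: the stepped inertia-weight update of Equations (8) and (9), the slope test of Equation (10) and the jump-out move of Equation (11). The threshold values for $Fitness_1$–$Fitness_3$ are hypothetical placeholders that must be tuned per problem, and the counter bookkeeping is simplified.

```python
import numpy as np

def update_inertia(f_g, k, G, rng,
                   ranges=((0.9, 0.4), (0.65, 0.0), (0.55, 0.05)),
                   fitness1=1e3, fitness2=1.0, fitness3=1e-2):
    """Stepped adaptive inertia weight, Equations (8) and (9);
    the fitness thresholds here are placeholder values."""
    # Eq. (8): choose the range [w_s, w_e] from the best fitness so far
    if f_g >= fitness1:
        ws, we = ranges[0]
    elif f_g > fitness2:
        ws, we = ranges[1]
    else:
        ws, we = ranges[2]
    # Eq. (9): chaotic-style weight early on, linear decrease once f(g) < Fitness3
    if f_g >= fitness3:
        return rng.uniform(0.0, 4.0) * np.sin(ws * np.pi / 4)
    return ws - (ws - we) * k / G

def slope_is_flat(f_hist, eps=1e-3):
    """Eq. (10): slope of the best-fitness curve over the last five
    iterations; a near-zero slope suggests stagnation."""
    return len(f_hist) > 5 and abs(f_hist[-1] - f_hist[-6]) / 5 <= eps

def jump_out(X, g, bad, rng, c1=2.5, c2=1.5):
    """Eq. (11): repel the swarm from the global best region and attract
    it toward the recorded worst position `bad` to escape a local optimum."""
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    return X - r1 * c1 * (g - X) + r2 * c2 * (bad - X)
```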
Finally, the pseudo-code of the XPSO algorithm is demonstrated in Algorithm 1.
Algorithm 1: The pseudo-code of the XPSO algorithm
1: Initialize the parameters (N, G, D, O_max, O_min, V_max, V_min, t, s, ε)
2: Combine uniform and random distributions to initialize the position matrix X_{N×D}
3: Generate the initial velocity V_i of each particle randomly
4: Evaluate the fitness value of each particle
5: Set p_i to a copy of X_i
6: Initialize g and bad with the best and worst positions in the population
7: While k < G
8:    If k ≥ 6
9:       Update the slope of the fitness-function curve:
10:      Slope = (f(g)_k − f(g)_{k−5}) / 5
11:      If Slope ≤ ε
12:         t = t + 1
13:      End If
14:   End If
15:   Update the inertia weight w by Equations (8) and (9)
16:   For i = 1 : N
17:      Update the velocity V_i:
18:      v_ij(k+1) = w · v_ij(k) + r_1 c_1 (p_i − x_ij(k)) + r_2 c_2 (g − x_ij(k))
19:      Update the position X_i:
20:      If t = s
21:         For m = 1 : 50
22:            x_ij(k) = x_ij(k) − r_1 c_1 (g − x_ij(k)) + r_2 c_2 (bad − x_ij(k))
23:         End For
24:      Else
25:         x_ij(k+1) = x_ij(k) + v_ij(k+1)
26:      End If
27:      Calculate the fitness value of the new particle X_i
28:      Execute position mutation:
29:      x*_ij(k) = x_ij(k) − w · v_ij(k) − w · (g − p_i)
30:      Calculate the fitness value of the mutated particle X*
31:      If X_i or X* is better than p_i
32:         Update p_i
33:      End If
34:      If X_i or X* is better than g
35:         Update g
36:      End If
37:      If X_i or X* is worse than bad
38:         Update bad
39:      End If
40:   End For
41:   k = k + 1
42: End While
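For readers who prefer runnable code, the sketch below condenses Algorithm 1 into a simplified Python loop on the Sphere benchmark (f1). It keeps the uniform initialization of one dimension, the stagnation test and the jump-out move, but substitutes a plain decreasing inertia weight for Equations (8) and (9) and omits the mutation step; it is an illustration under these assumptions, not the authors' MATLAB implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
N, D, G = 50, 10, 2000                      # swarm size, dimension, iterations
o_min, o_max = -100.0, 100.0                # Sphere search range
c1, c2, v_lim = 2.5, 1.5, 0.1               # coefficients from Section 4.1
f = lambda x: np.sum(x ** 2, axis=-1)       # Sphere benchmark f1

X = rng.uniform(o_min, o_max, (N, D))
X[:, rng.integers(D)] = np.linspace(o_min, o_max, N)  # strategy 1: one uniform dimension
V = rng.uniform(-v_lim, v_lim, (N, D))
P, fp = X.copy(), f(X)                      # personal bests and their fitness
g = P[np.argmin(fp)].copy()                 # global best position
bad = P[np.argmax(fp)].copy()               # global worst position
hist = []

for k in range(G):
    w = 0.9 - 0.5 * k / G                   # simplified stand-in for Eqs. (8)-(9)
    r1, r2 = rng.random((N, D)), rng.random((N, D))
    V = np.clip(w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X), -v_lim, v_lim)
    hist.append(fp.min())
    if k >= 5 and abs(hist[-1] - hist[-6]) / 5 <= 1e-12:  # Eq. (10): stagnation
        X = X - r1 * c1 * (g - X) + r2 * c2 * (bad - X)   # Eq. (11): jump out
    else:
        X = X + V                                          # Eq. (4)
    X = np.clip(X, o_min, o_max)
    fx = f(X)
    better = fx < fp
    P[better], fp[better] = X[better], fx[better]
    g = P[np.argmin(fp)].copy()
    if fx.max() > f(bad):
        bad = X[np.argmax(fx)].copy()

print("best value found:", fp.min())
```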

4. Simulations and Discussion

To verify the performance of the XPSO algorithm, simulations were designed covering both the performance and the application of the algorithm. To obtain an unbiased CPU-time comparison, all simulations were programmed in MATLAB R2017b and run on a computer with an Intel i5-11400F CPU (2.60 GHz) and 16 GB of RAM.

4.1. Validation of XPSO by Benchmark Test Functions

The XPSO algorithm was compared with several popular PSO variants (PSO [32], IPSO [33], IPSO2 [30], HPSO [34], CPSO [35]) on a series of widely used optimization benchmark functions selected from [36,37].
The benchmark set consisted of three main groups of functions: 4 unimodal (UM) functions, 2 multimodal (MM) functions and 3 composition (CM) functions. The UM functions (f1–f4), each with a unique global optimum, expose the intensification capacities of the algorithms. The MM functions (f5–f6) expose their diversification. The CM functions (f7–f9) were selected from the IEEE CEC 2005 competition [37] and are widely used to test how well algorithms balance exploration and exploitation and escape from local optima.
The mathematical formulations and characteristics of the UM and MM functions are shown in Table 1, and details of the CM functions are shown in Table 2. The parameters of the PSO algorithms and of the optimization problem (2) were as follows. The specific parameter combinations for the inertia weight were $[w_{s1}, w_{e1}] = [0.9, 0.4]$, $[w_{s2}, w_{e2}] = [0.65, 0]$ and $[w_{s3}, w_{e3}] = [0.55, 0.05]$. The maximum and minimum particle velocities were $V_{\max} = 0.1$ and $V_{\min} = -0.1$, respectively. The acceleration coefficients $c_1$ and $c_2$ were set to 2.5 and 1.5, respectively. The inertia weight of the HPSO varied randomly in the range (0, 1). The parameters related to jumping out of local optima were $s = 270$ and $\varepsilon = 0.001$. The values of the regularization parameters in the optimization problem were $\lambda_1 = 1.2$ and $\lambda_2 = 1.0$.
The maximum number of iterations for all benchmark functions (f1–f9) was selected as 8000. The dimensions of these benchmark functions (f1–f9) were selected as 10, 30 and 50. Thus, the performances of the six variant PSO algorithms can be compared in different dimensions.
Each algorithm was run independently 10 times, and the average statistical error was calculated. The mean of the objective values (Mean) and the standard deviation of the solving error (S.D.) were chosen as the performance measures for each algorithm. The simulation results are shown in Table 3 and Figures 2–10; the best values in Table 3 are shown in bold.
According to Table 3, the XPSO algorithm is superior to the other PSO algorithms in terms of solution accuracy. Considering the UM benchmark functions, the results for f1–f3 obtained by XPSO are better than those of the other algorithms. For f4, the best Mean values in the 10-, 30- and 50-dimensional cases were obtained by the XPSO algorithm, while the best S.D. values were obtained by XPSO, HPSO and IPSO2. Except for f5 in the 10-dimensional case and f6 in the 50-dimensional case, the results for the MM benchmark functions obtained by XPSO are superior to those of the other algorithms. Based on the Mean values of the hybrid CM functions (f7–f9) in Table 3, high-quality solutions can be obtained by the XPSO algorithm.
The convergence curves of the mean best function values are plotted in Figures 2–10 to enable clearer visualization of each algorithm's performance. In these figures, the XPSO algorithm does not show outstanding performance in the early stage of the search, but its search capability is better in the late stage, and it significantly improves the convergence speed in the late evolution. The reason is that the updating strategy of the inertia weight includes a formula based on chaotic mapping; consequently, the global search converges quickly. In addition, the inertia-weight update formula switches during local search, which slows the particles' flight speed but improves the accuracy of the solution. It can therefore be concluded that the XPSO algorithm performs excellently on numerical optimization problems and can be used in various application contexts.

4.2. Validation of XPSO with the Temperature Prediction Model

Firstly, 1280 simulated data sets were generated based on the existing mechanism model to verify the validity of the XPSO algorithm; 1200 of them were randomly selected as training sets and the remaining 80 as test sets. Here, the XPSO algorithm is compared not only with other PSO algorithms (PSO [32], IPSO [33], IPSO2 [30], HPSO [34], CPSO [35]), but also with other optimization algorithms (WOA [38], IA [39], GWO [40], DE [41], ABC [42]). Secondly, actual data sets were collected from an industrial reheating furnace at Angang. Forty-three sets were randomly selected as training sets, and the remaining 10 sets were used as test sets.
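A sketch of the random train/test split described above (the file name is a hypothetical placeholder):

```python
import numpy as np

# Hypothetical split mirroring the paper's setup: 1280 simulated samples,
# 1200 for training and 80 for testing, chosen at random.
rng = np.random.default_rng(42)
idx = rng.permutation(1280)
train_idx, test_idx = idx[:1200], idx[1200:]
# data = np.load("simulated_slab_data.npy")   # placeholder file name
# train, test = data[train_idx], data[test_idx]
```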

4.2.1. Comparison of XPSO and Other PSO Algorithms

Due to the random initialization of the PSO algorithms, each PSO algorithm was run independently 10 times. The relevant parameters of the PSO algorithms are shown in Table 4. Each algorithm was evaluated by the mean, maximum, median, variance and standard deviation of its errors. The simulation results are shown in Table 5 and Figures 11 and 12.
In Figure 11, the XPSO algorithm clearly has a faster search speed than the other algorithms in the early iterations and can quickly approach convergence. Table 5 shows that XPSO is also more accurate in the later stages of the iterations. Figure 12 confirms the accuracy of the proposed XPSO method for temperature prediction. The IPSO algorithm gave the worst results, with a maximum error of almost 24 °C. The mean and median prediction errors of the XPSO algorithm were 0.55 and 0.46 °C, respectively, and 99% of its prediction errors were within 2 °C.

4.2.2. Comparison of XPSO and Other Optimization Algorithms

In this section, the XPSO algorithm is compared with other optimization algorithms proposed in recent years (WOA [38], IA [39], GWO [40], DE [41], ABC [42]). The parameters of these algorithms are listed in Table 6. Each algorithm was tested 10 times independently to reduce statistical errors. The mean, maximum, median, variance and standard deviation of the simulation results were recorded and are shown in Table 7, with the best results in bold. The convergence graph of each algorithm is shown in Figure 13, and the slab-temperature prediction errors of the XPSO algorithm and the other optimization algorithms are shown in Figure 14.
Table 7 shows that the XPSO algorithm produced the best values. In Figure 13, the XPSO algorithm is more successful than all of the other optimization approaches, reaching the global optimal solution after approximately 500 generations. As shown in Figure 14, the temperature prediction errors of the XPSO algorithm were much lower than those of the other optimization algorithms; the WOA algorithm gave the worst results, with a maximum error of almost 29 °C. In summary, the proposed XPSO algorithm exhibited fast search performance and high accuracy when predicting the billet temperature on the simulated data sets. These results show that the prediction model of the billet temperature is credible and reliable. Its accurate predictions are expected to satisfy the future control needs of industrial reheating-furnace systems, letting operators adjust production plans in time to ensure efficiency and reliability.

4.2.3. Validation of the Temperature Prediction Model with Measured Data

The proposed XPSO algorithm was applied to predict the slabs' discharging temperatures based on actual data sets from Angang. The algorithm was run independently 20 times, and the average prediction error was calculated. The predicted temperatures are compared with the actual temperatures in Figure 15, and the errors between the prediction results and the measured data, $|Y - Y^*|$, are shown in Table 8.
From Figure 15 and Table 8, the minimum error of the XPSO algorithm was 1.17 °C, and the average error was 4.99 °C. For this reheating-furnace plant, the error between calculated results and measured data should be below 10 °C, so the accuracy of the proposed method is sufficient. As the actual discharging temperatures were between 1150 and 1250 °C, the mean relative error was only about 0.4%. The slab-temperature prediction by the XPSO algorithm thus achieved the desired effect.

5. Conclusions

In this paper, the XPSO algorithm was proposed to establish a prediction model of the slab temperature in a reheating furnace. A novel weight-updating strategy that combines a decreasing function with an adaptive "stepped" strategy was introduced into the XPSO algorithm, greatly improving its search capability in the later stages. The validity and feasibility of the XPSO were verified using nine classical benchmark functions, simulated data sets generated by the existing mechanism model and actual data sets from Angang. The following conclusions are drawn.
  • The benchmark results indicate that the XPSO algorithm has a superior performance to other PSO algorithms (PSO, IPSO, IPSO2, HPSO, CPSO).
  • The XPSO algorithm, which can accurately predict the billet temperature (99% of the prediction errors were less than 2 °C) while ensuring faster convergence, was more successful than all of the other optimization approaches (WOA, IA, GWO, DE, ABC).
  • The prediction model based on the XPSO algorithm can predict more accurate discharging temperatures for the operators. Consequently, the paper verifies the feasibility of the XPSO algorithm and the success of the establishment of the prediction model of slab temperature, and provides a theoretical basis for subsequent research.

Author Contributions

Conceptualization, M.L., P.Y. and Z.Y.; methodology, M.L.; software, M.L. and P.L.; validation, M.L. and J.Q.; formal analysis, J.Q.; investigation, P.L. and Z.Y.; resources, M.L. and P.L.; data curation, M.L.; writing—original draft preparation, M.L.; writing—review and editing, M.L. and Z.Y.; visualization, Z.Y.; supervision, P.Y. and Z.Y.; project administration, P.Y. and Z.Y.; funding acquisition, Z.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the PhD research startup foundation of Qilu University of Technology, grant number (81110535), and the Industry-University-Research Collaborative Innovation Fund (grant number 2020-CXY46)—Development of an automated system for casting post-processing processes.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gu, M.; Chen, G.; Liu, X.; Wu, C.; Chu, H. Numerical simulation of slab heating process in a regenerative walking beam reheating furnace. Int. J. Heat Mass Transf. 2014, 76, 405–410.
  2. Gao, Q.; Pang, Y.; Sun, Q.; Liu, D.; Zhang, Z. Modeling approach and numerical analysis of a roller-hearth reheating furnace with radiant tubes and heating process optimization. Case Stud. Therm. Eng. 2021, 28, 101618.
  3. Hu, Y.; Tan, C.; Broughton, J.; Roach, P.A.; Varga, L. Model-based multi-objective optimisation of reheating furnace operations using genetic algorithm. Energy Procedia 2017, 142, 2143–2151.
  4. Pantelides, C.C.; Renfro, J.G. The online use of first-principles models in process operations: Review, current status and future needs. Comput. Chem. Eng. 2013, 51, 136–148.
  5. Staalman, D.F.; Kusters, A. On-line slab temperature calculation and control. Manuf. Sci. Eng. 1996, 4, 307–314.
  6. Ji, W.; Li, G.; Wei, L.; Yi, Z. Modeling and determination of total heat exchange factor of regenerative reheating furnace based on instrumented slab trials. Case Stud. Therm. Eng. 2021, 24, 100838.
  7. Emadi, A.; Saboonchi, A.; Taheri, M.; Hassanpour, S. Heating characteristics of billet in a walking hearth type reheating furnace. Appl. Therm. Eng. 2014, 63, 396–405.
  8. Tang, G.; Wu, B.; Bai, D.; Wang, Y.; Bodnar, R.; Zhou, C.Q. Modeling of the slab heating process in a walking beam reheating furnace for process optimization. Int. J. Heat Mass Transf. 2017, 113, 1142–1151.
  9. Kim, M.Y. A heat transfer model for the analysis of transient heating of the slab in a direct-fired walking beam type reheating furnace. Int. J. Heat Mass Transf. 2007, 50, 3740–3748.
  10. Singh, V.K.; Talukdar, P. Comparisons of different heat transfer models of a walking beam type reheat furnace. Int. Commun. Heat Mass Transf. 2013, 47, 20–26.
  11. Morgado, T.; Coelho, P.J.; Talukdar, P. Assessment of uniform temperature assumption in zoning on the numerical simulation of a walking beam reheating furnace. Appl. Therm. Eng. 2015, 76, 496–508.
  12. Casal, J.M.; Porteiro, J.; Míguez, J.L.; Vázquez, A. New methodology for CFD three-dimensional simulation of a walking beam type reheating furnace in steady state. Appl. Therm. Eng. 2015, 86, 69–80.
  13. Hong, D.; Li, G.; Wei, L.; Yi, Z. An improved sequential function specification coupled with Broyden combined method for determination of transient temperature field of the steel billet. Int. J. Heat Mass Transf. 2022, 186, 122489.
  14. Chen, D.; Xu, H.; Lu, B.; Chen, G.; Zhang, L. Solving the heat transfer boundary condition of billet in reheating furnace by combining "black box" test with mathematic model. Case Stud. Therm. Eng. 2022, 40, 102486.
  15. Kim, Y.I.; Moon, K.C.; Kang, B.S.; Han, C.; Chang, K.S. Application of neural network to the supervisory control of a reheating furnace in the steel industry. Control Eng. Pract. 1998, 6, 1009–1014.
  16. Laurinen, P.; Röning, J. An adaptive neural network model for predicting the post roughing mill temperature of steel slabs in the reheating furnace. J. Mater. Process. Technol. 2005, 168, 423–430.
  17. Liao, Y.; Wu, M.; She, J.H. Modeling of reheating-furnace dynamics using neural network based on improved sequential-learning algorithm. In Proceedings of the 2006 IEEE Conference on Computer Aided Control System Design, 2006 IEEE International Conference on Control Applications, 2006 IEEE International Symposium on Intelligent Control, Munich, Germany, 4–6 October 2006; pp. 3175–3181.
  18. Tan, C.; Wilcox, S.; Ward, J. Use of artificial intelligence techniques for optimisation of co-combustion of coal with biomass. J. Energy Inst. 2006, 79, 19–25.
  19. Pongam, T.; Khomphis, V.; Srisertpol, J. System modeling and temperature control of reheating furnace walking hearth type in the setting up process. J. Mech. Sci. Technol. 2014, 28, 3377–3385.
  20. Tang, Z.; Yang, Y. Two-stage particle swarm optimization-based nonlinear model predictive control method for reheating furnace process. ISIJ Int. 2014, 54, 1836–1842.
  21. Aoxiang, W.; Xiaohua, L.; Xiaolin, W. Temperature optimization setting model of the reheating furnace on 1700 line in Tangsteel. In Proceedings of the 2018 Chinese Control and Decision Conference (CCDC), Shenyang, China, 9–11 June 2018; pp. 4099–4103.
  22. Yang, Y.; Liu, Y.; Liu, X.; Qin, S. Billet temperature soft sensor model of reheating furnace based on RVM method. In Proceedings of the 2011 Chinese Control and Decision Conference (CCDC), Mianyang, China, 23–25 May 2011; pp. 4003–4006.
  23. Yi, Z.; Su, Z.; Li, G.; Yang, Q.; Zhang, W. Development of a double model slab tracking control system for the continuous reheating furnace. Int. J. Heat Mass Transf. 2017, 113, 861–874.
  24. Chen, Y.W.; Chai, T.Y. Modelling and prediction for steel billet temperature of heating furnace. Int. J. Adv. Mechatron. Syst. 2010, 2, 342–349.
  25. Alsaidy, S.A.; Abbood, A.D.; Sahib, M.A. Heuristic initialization of PSO task scheduling algorithm in cloud computing. J. King Saud Univ. Comput. Inf. Sci. 2020, 34, 2370–2382.
  26. Yue, C.; Qu, B.; Liang, J. A multiobjective particle swarm optimizer using ring topology for solving multimodal multiobjective problems. IEEE Trans. Evol. Comput. 2017, 22, 805–817.
  27. Peng, C.C.; Chen, C.H. Compensatory neural fuzzy network with symbiotic particle swarm optimization for temperature control. Appl. Math. Model. 2015, 39, 383–395.
  28. Eberhart, R.; Kennedy, J. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
  29. Kennedy, J. The particle swarm: Social adaptation of knowledge. In Proceedings of the 1997 IEEE International Conference on Evolutionary Computation (ICEC '97), Indianapolis, IN, USA, 13–16 April 1997; pp. 303–308.
  30. Ravi, K.; Rajaram, M. Optimal location of FACTS devices using improved particle swarm optimization. Int. J. Electr. Power Energy Syst. 2013, 49, 333–338.
  31. Zhang, L.; Zhao, L. High-quality face image generation using particle swarm optimization-based generative adversarial networks. Future Gener. Comput. Syst. 2021, 122, 98–104.
  32. Ouyang, A.; Tang, Z.; Zhou, X.; Xu, Y.; Pan, G.; Li, K. Parallel hybrid PSO with CUDA for 1D heat conduction equation. Comput. Fluids 2015, 110, 198–210.
  33. Gao, Z.; Lu, H. Logistics route optimization based on improved particle swarm optimization. J. Phys. Conf. Ser. 2021, 1995, 012044.
  34. Wu, J.; Long, J.; Liu, M. Evolving RBF neural networks for rainfall prediction using hybrid particle swarm optimization and genetic algorithm. Neurocomputing 2015, 148, 136–142.
  35. Liu, B.; Wang, L.; Jin, Y.H.; Tang, F.; Huang, D.X. Improved particle swarm optimization combined with chaos. Chaos Solitons Fractals 2005, 25, 1261–1271.
  36. Yao, X.; Liu, Y.; Lin, G. Evolutionary programming made faster. IEEE Trans. Evol. Comput. 1999, 3, 82–102.
  37. Suganthan, P.N.; Hansen, N.; Liang, J.J.; Deb, K.; Chen, Y.P.; Auger, A.; Tiwari, S. Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real-Parameter Optimization; KanGAL Report Number 2005005; Nanyang Technological University: Singapore, 2005.
  38. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
  39. Hong, G.; Zong-Yuan, M. Immune algorithm. In Proceedings of the 4th World Congress on Intelligent Control and Automation, Shanghai, China, 10–14 June 2002; Volume 3, pp. 1784–1788.
  40. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
  41. Arslan, M.; Çunkaş, M.; Sağ, T. Determination of induction motor parameters with differential evolution algorithm. Neural Comput. Appl. 2012, 21, 1995–2004.
  42. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471.
Figure 1. Schematic diagram of a heating furnace's structure.
Figure 2. Comparison of performances on f1 by XPSO and other PSO algorithms.
Figure 3. Comparison of performances on f2 by XPSO and other PSO algorithms.
Figure 4. Comparison of performances on f3 by XPSO and other PSO algorithms.
Figure 5. Comparison of performances on f4 by XPSO and other PSO algorithms.
Figure 6. Comparison of performances on f5 by XPSO and other PSO algorithms.
Figure 7. Comparison of performances on f6 by XPSO and other PSO algorithms.
Figure 8. Comparison of performances on f7 by XPSO and other PSO algorithms.
Figure 9. Comparison of performances on f8 by XPSO and other PSO algorithms.
Figure 10. Comparison of performances on f9 by XPSO and other PSO algorithms.
Figure 11. Comparison of fitness for XPSO and other PSO algorithms.
Figure 12. Prediction error of XPSO and other PSO algorithms.
Figure 13. Comparison of XPSO with other optimization algorithms.
Figure 14. Prediction error of XPSO and other optimization algorithms.
Figure 15. The slab temperature prediction by the XPSO algorithm.
Table 1. Descriptions of unimodal and multimodal benchmark functions.

| Function | Name | Expression | Search Range | Global Opt. 1 |
|---|---|---|---|---|
| f1 | Sphere | $f_1 = \sum_{i=1}^{n} x_i^2$ | $[-100, 100]^n$ | 0 |
| f2 | Schwefel's 1.2 | $f_2 = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$ | $[-100, 100]^n$ | 0 |
| f3 | Schwefel's 2.21 | $f_3 = \max_i \{ \lvert x_i \rvert, 1 \le i \le n \}$ | $[-100, 100]^n$ | 0 |
| f4 | Quartic Noise | $f_4 = \sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0, 1)$ | $[-1.28, 1.28]^n$ | 0 |
| f5 | Generalized Rastrigin | $f_5 = \sum_{i=1}^{n} \left[ x_i^2 - 10 \cos(2\pi x_i) + 10 \right]$ | $[-5.12, 5.12]^n$ | 0 |
| f6 | Generalized Penalized Function 2 | $f_6 = 0.1 \{ \sin^2(3\pi x_1) + \sum_{i=1}^{n-1} (x_i - 1)^2 [1 + \sin^2(3\pi x_{i+1})] + (x_n - 1)^2 [1 + \sin^2(2\pi x_n)] \} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$ | $[-50, 50]^n$ | 0 |

1 Global Opt.: global optimal solution.
Table 2. Details of hybrid composition functions.

| Function (CEC2005 ID) | Description | Properties | Range | Global Opt. |
|---|---|---|---|---|
| f7 (C16) | Rotated Hybrid Composition Function | MM 1, R 2, NS 3, S 4 | $[-5, 5]^n$ | 120 |
| f8 (C18) | Rotated Hybrid Composition Function | MM, R, NS, S | $[-5, 5]^n$ | 10 |
| f9 (C21) | Rotated Hybrid Composition Function | MM, R, NS, S | $[-5, 5]^n$ | 360 |

1 MM: Multi-modal, 2 R: Rotated, 3 NS: Non-Separable, 4 S: Scalable.
Table 3. Results of benchmark functions (Dim = 10, 30, 50).

| F 1 | D 2 | XPSO Mean 3 | XPSO S.D. 4 | CPSO Mean | CPSO S.D. | IPSO Mean | IPSO S.D. | IPSO2 Mean | IPSO2 S.D. | PSO Mean | PSO S.D. | HPSO Mean | HPSO S.D. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| f1 | 10 | 1.29 × 10^−102 | 2.89 × 10^−102 | 3.42 × 10^2 | 2.07 × 10^2 | 5.81 × 10^−8 | 2.39 × 10^−8 | 1.07 × 10^−3 | 2.38 × 10^−4 | 7.95 × 10^−9 | 1.63 × 10^−8 | 2.33 × 10^−8 | 6.15 × 10^−8 |
| | 30 | 6.41 × 10^−57 | 1.92 × 10^−56 | 4.40 × 10^2 | 2.25 × 10^2 | 4.15 × 10^−6 | 1.22 × 10^−6 | 0.31 | 0.41 | 7.51 × 10^−6 | 4.04 × 10^−6 | 4.22 × 10^−7 | 2.57 × 10^−7 |
| | 50 | 1.59 × 10^−8 | 3.89 × 10^−8 | 4.57 × 10^2 | 2.80 × 10^2 | 1.93 × 10^−5 | 5.87 × 10^−6 | 6.73 | 13.42 | 2.00 × 10^−5 | 1.30 × 10^−5 | 1.02 × 10^−6 | 3.08 × 10^−7 |
| f2 | 10 | 1.06 × 10^−77 | 1.50 × 10^−77 | 1.06 × 10^3 | 8.87 × 10^2 | 5.93 × 10^−7 | 3.33 × 10^−7 | 2.95 × 10^2 | 3.70 × 10^2 | 4.08 × 10^−7 | 4.74 × 10^−7 | 1.35 × 10^−8 | 1.71 × 10^−8 |
| | 30 | 8.67 × 10^−12 | 1.23 × 10^−11 | 4.36 × 10^3 | 1.95 × 10^3 | 1.44 × 10^−3 | 4.36 × 10^−4 | 9.07 × 10^2 | 8.11 × 10^2 | 5.74 × 10^−3 | 2.33 × 10^−3 | 3.11 × 10^−5 | 1.98 × 10^−5 |
| | 50 | 1.36 × 10^−5 | 1.35 × 10^−2 | 7.74 × 10^3 | 6.45 × 10^3 | 1.88 × 10^−2 | 3.33 × 10^−3 | 1.25 × 10^3 | 2.74 × 10^3 | 2.49 × 10^−2 | 9.20 × 10^−3 | 1.91 × 10^−4 | 8.34 × 10^−5 |
| f3 | 10 | 1.11 × 10^−21 | 1.91 × 10^−21 | 4.59 | 1.13 | 2.70 × 10^−4 | 1.47 × 10^−4 | 6.41 × 10^−2 | 7.51 × 10^−2 | 4.45 × 10^−4 | 2.61 × 10^−4 | 8.41 × 10^−4 | 7.47 × 10^−4 |
| | 30 | 1.03 × 10^−19 | 1.79 × 10^−19 | 6.15 | 1.65 | 9.01 × 10^−3 | 2.77 × 10^−3 | 0.71 | 0.16 | 3.03 × 10^−2 | 1.62 × 10^−2 | 1.96 × 10^−2 | 1.99 × 10^−2 |
| | 50 | 6.84 × 10^−5 | 1.37 × 10^−4 | 4.87 | 0.65 | 0.23 | 0.22 | 5.76 | 0.41 | 0.36 | 0.33 | 0.13 | 0.11 |
| f4 | 10 | 0.24 | 0.16 | 0.43 | 0.26 | 0.44 | 0.26 | 0.46 | 0.32 | 0.61 | 0.24 | 0.5 | 0.29 |
| | 30 | 0.27 | 0.2 | 0.58 | 0.28 | 0.53 | 0.32 | 0.35 | 0.3 | 0.64 | 0.31 | 0.52 | 0.18 |
| | 50 | 0.35 | 0.24 | 0.65 | 0.3 | 0.54 | 0.31 | 0.38 | 0.23 | 0.53 | 0.23 | 0.58 | 0.3 |
| f5 | 10 | 12.93 | 3.98 | 12.08 | 4.46 | 10.45 | 4.43 | 9.3 | 3.4 | 10.94 | 4.09 | 9.83 | 4.16 |
| | 30 | 18.05 | 4.16 | 48.43 | 17.81 | 26.96 | 6.91 | 24.81 | 6.5 | 27.16 | 9.67 | 28.56 | 6.13 |
| | 50 | 20.9 | 2.54 | 67.23 | 18.56 | 38.51 | 11.94 | 34.18 | 4.93 | 39.99 | 14.36 | 33.73 | 10.42 |
| f6 | 10 | 1.45 | 0.94 | 9.41 | 4.09 | 2.13 | 1.37 | 2.62 | 2.47 | 2.81 | 3.26 | 1.48 | 1.46 |
| | 30 | 5.08 | 3.56 | 25.39 | 5.56 | 14.51 | 8.42 | 22.09 | 5.69 | 9.49 | 5.42 | 6.59 | 6.24 |
| | 50 | 9.83 | 6.98 | 37.21 | 7.54 | 24.36 | 9.83 | 31.42 | 8.14 | 14.71 | 8.78 | 9.29 | 8.89 |
| f7 | 10 | 196.35 | 26.50 | 502.96 | 179.38 | 381.96 | 94.03 | 731.48 | 80.86 | 362.11 | 36.18 | 428.34 | 59.63 |
| | 30 | 542.93 | 46.56 | 867.86 | 204.51 | 584.27 | 122.68 | 1223.15 | 110.51 | 505.28 | 80.91 | 623.86 | 90.26 |
| | 50 | 563.92 | 97.82 | 937.18 | 241.44 | 593.34 | 132.26 | 1378.35 | 129.68 | 599.16 | 106.17 | 651.98 | 126.37 |
| f8 | 10 | 1090.72 | 41.53 | 1364.68 | 23.43 | 1135.18 | 22.51 | 1492.86 | 39.30 | 1136.88 | 65.80 | 1098.15 | 59.30 |
| | 30 | 1147.03 | 123.61 | 1408.69 | 50.61 | 1182.64 | 70.86 | 1561.77 | 56.23 | 1194.76 | 120.68 | 1287.80 | 84.09 |
| | 50 | 1167.73 | 134.83 | 1486.24 | 56.47 | 1202.59 | 132.75 | 1620.53 | 70.99 | 1249.79 | 131.05 | 1464.34 | 151.77 |
| f9 | 10 | 1066.48 | 28.84 | 1387.39 | 19.11 | 1120.63 | 38.33 | 1538.2 | 19.21 | 1286.09 | 23.52 | 1287.69 | 39.38 |
| | 30 | 1291.78 | 38.31 | 1440.31 | 31.75 | 1319.86 | 43.93 | 1604.27 | 41.32 | 1303.97 | 30.27 | 1309.14 | 52.25 |
| | 50 | 1316.15 | 54.37 | 1485.75 | 41.98 | 1338.22 | 68.71 | 1633.61 | 82.42 | 1326.48 | 43.43 | 1524.77 | 68.02 |

1 F: Function, 2 D: dimension, 3 Mean: mean of objective values, 4 S.D.: standard deviation.
Table 4. Parameter settings of the PSO algorithms.

| Symbol | Name | Size |
|---|---|---|
| N | Particle swarm size | 125 |
| D | Particle swarm dimension | 35 |
| G | Maximum number of iterations | 8000 |
| $w_s$ | Initial value of inertia weight | 0.8 |
| $w_e$ | Final value of inertia weight | 0.05 |
| $c_1$ | Acceleration coefficient 1 | 2.5 |
| $c_2$ | Acceleration coefficient 2 | 1.5 |
| $V_{max}$ | Maximum particle velocity | 0.1 |
| $V_{min}$ | Minimum particle velocity | −0.1 |
Table 5. Prediction errors of XPSO and the other PSO algorithms (°C).

| Algorithm | Mean | Maximum | Median | Variance | S.D. |
|---|---|---|---|---|---|
| XPSO | 0.55 | 2.29 | 0.46 | 0.216 | 0.465 |
| CPSO | 3.9 | 13.99 | 3.41 | 8.098 | 2.846 |
| IPSO | 7 | 23.75 | 5.85 | 26.582 | 5.156 |
| IPSO2 | 3.76 | 13.58 | 3 | 8.388 | 2.896 |
| PSO | 3.58 | 11.65 | 2.77 | 6.818 | 2.611 |
| HPSO | 0.78 | 2.94 | 0.61 | 0.446 | 0.668 |
Table 6. Parameters of other optimization algorithms.

| Algorithm | Population | Maximum Iterations | Dim | Other |
|---|---|---|---|---|
| WOA | 125 | 8000 | 35 | $r_1, r_2 \in [0, 1]$ are random numbers |
| IA | 125 | 8000 | 35 | $p_m = 0.7$, $\alpha = \beta = 1$, $\delta = 0.2$, $n_{cl} = 10$ |
| GWO | 125 | 8000 | 35 | $r_1, r_2 \in [0, 1]$ are random numbers |
| DE | 125 | 8000 | 35 | $F_0 = 0.4$, $CR = 0.1$ |
| ABC | 125 | 8000 | 35 | $\alpha = 1$ |
Table 7. Results of XPSO and other optimization algorithms.

| Algorithm | Mean | Maximum | Median | Variance | S.D. |
|---|---|---|---|---|---|
| XPSO | 0.55 | 2.29 | 0.46 | 0.216 | 0.465 |
| WOA | 4.53 | 28.27 | 3.45 | 18.94 | 4.3519 |
| IA | 6.92 | 27.52 | 6.43 | 28.0032 | 5.2918 |
| GWO | 2.28 | 13.39 | 1.2 | 6.6249 | 2.5739 |
| DE | 2.3 | 10.53 | 2.1 | 2.8971 | 1.7021 |
| ABC | 6.59 | 27.23 | 9.79 | 24.2055 | 4.9199 |
Table 8. Prediction errors between calculation results and measured data.

| Points | 1 (°C) | 2 (°C) | 3 (°C) | 4 (°C) | 5 (°C) | 6 (°C) | 7 (°C) | 8 (°C) | 9 (°C) | 10 (°C) | Mean (°C) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| XPSO | 5.43 | 2.91 | 8.93 | 7.32 | 3.47 | 5.17 | 1.17 | 5.09 | 4.09 | 6.43 | 4.99 |
