Review

Optimization Algorithms and Their Applications and Prospects in Manufacturing Engineering

1 Department of Basic Courses, Suzhou City University, Suzhou 215104, China
2 College of Mechanical Engineering, Suzhou University of Science and Technology, Suzhou 215009, China
* Author to whom correspondence should be addressed.
Materials 2024, 17(16), 4093; https://doi.org/10.3390/ma17164093
Submission received: 22 June 2024 / Revised: 28 July 2024 / Accepted: 12 August 2024 / Published: 17 August 2024
(This article belongs to the Special Issue Advanced Abrasive Processing Technology and Applications)

Abstract:
In modern manufacturing, optimization algorithms have become a key tool for improving the efficiency and quality of machining. As computing technology advances and artificial intelligence evolves, these algorithms play an increasingly vital role in the parameter optimization of machining processes. The response surface method, genetic algorithm, Taguchi method, and particle swarm optimization algorithm are now relatively mature and widely applied to process parameter optimization, with optimization objectives such as surface roughness, subsurface damage, cutting force, and mechanical properties in both conventional and non-conventional machining. This article provides a systematic review of the application and development trends of optimization algorithms in practical engineering production. It examines the classification, definition, and current state of research on process parameter optimization algorithms in engineering manufacturing, both domestically and internationally, and explores in detail the specific applications of these algorithms in real-world scenarios. The continued evolution of optimization algorithms is expected to strengthen the competitiveness of the future manufacturing industry and drive manufacturing technology toward greater efficiency, sustainability, and customization.

1. Introduction

In modern manufacturing, optimization algorithms have become a key tool for improving processing technology efficiency and quality [1]. With the development of computing technology and the advancement of artificial intelligence, these algorithms play an increasingly important role in the parameter optimization of machining processes [2]. By precisely adjusting processing parameters, optimization algorithms can significantly improve material utilization, reduce energy consumption, shorten production cycles, and improve the quality and consistency of the final product [3]. The ultimate goal of manufacturing is to produce high-quality products with minimal cost and time consumption [4]. To achieve these goals, one of the considerations is to optimize the processing parameters.
The quality, microstructure, performance, cost, and lifespan of machined parts are directly affected by process parameters such as spindle speed, feed rate, cutting depth, tool material, and cooling conditions [5]. The determination and design of process parameters are fundamental activities in implementing part manufacturing for a specific process. Process parameters are influenced by the process method, component material, and part shape, according to theoretical knowledge in materials science and related fields [6]. For process parameters that are not fully determined and cannot be calculated from formulas, the “trial and error” method is predominantly used [7]: candidate parameters are varied experimentally, the factors influencing part processing quality are analyzed, and the optimal parameters are selected. However, a purely trial-and-error approach has a long cycle and high cost and is constrained by experimental conditions. The optimization of process parameters aims to maximize production efficiency while fulfilling product quality requirements [8].
Studying process parameter optimization algorithms for intelligent manufacturing can effectively compensate for the shortcomings of empirical or trial-and-error methods and improve the stability and efficiency of the production process [9]. Intelligent optimization algorithms were introduced to assist engineers in process planning across multiple machining processes, and in practical industrial applications they have improved the manufacturing performance of parts [4]. Among existing optimization algorithms, the genetic algorithm, simulated annealing algorithm, and particle swarm optimization algorithm are widely used for process parameter optimization problems [10]. The genetic algorithm has succeeded in many fields by simulating biological evolution, searching for optimal solutions through selection, crossover, and mutation. The simulated annealing algorithm simulates solid annealing, accepting better or worse solutions with a certain probability in order to escape local extremes [11]. Beyond these common intelligent optimization algorithms, deep learning has also achieved breakthroughs in process parameter optimization in recent years [12]. Combining predictive models with improved intelligent algorithms has emerged as an effective approach: the predictive model establishes the correlation between inputs and outputs, and the improved intelligent algorithm serves as an effective solver to obtain highly optimized solutions [4,13].
However, in practical applications, process parameter optimization must also balance multiple objectives, such as simultaneously minimizing production cost and time while maximizing profit margins [14,15]. The constraints must respect the limits of machine tool power, feed rate, spindle speed, and other related parameters. Optimization algorithms must also account for interactions between process parameters, avoiding the over-optimization of a single parameter while ignoring the influence of the others [16,17]. With the development of artificial intelligence and deep learning, methods from machine learning, artificial intelligence, and expert systems can be combined and applied to practical problems [15,18]. For example, machine learning can predict and optimize process parameters by training models on data, and deep learning algorithms can learn patterns and rules from large datasets to provide corresponding predictions and optimization results [19,20].
In summary, the application and development status of optimization algorithms in process parameters within the realm of engineering manufacturing are summarized in this article. The definitions, classifications, advantages, and disadvantages, as well as specific applications of optimization algorithms (response surface method, genetic algorithm, and particle swarm optimization algorithm), are analyzed. This article also looks forward to the future use of new machine learning and other optimization algorithms in parameter optimization, offering a reference for relevant engineering problems and researchers.

2. Classification, Advantages and Disadvantages, and Application Areas of Optimization Algorithms

Optimization of processing technology parameters is an important research direction in the manufacturing industry, aimed at improving product quality, reducing production costs, and enhancing production efficiency [21]. In the traditional process of optimizing process parameters, the most common methods are single-factor experiments, orthogonal experiments, response surface methods, and so on [22]. With the development of computational technology, the optimization methods of process parameters have also shifted from traditional empirical methods to more systematic and scientific algorithm optimizations [23,24]. The single-factor experiment and orthogonal experiment are simple and widely used, and they will not be described in detail in this section. This section mainly describes the response surface method, genetic algorithm, particle swarm optimization algorithm, and other algorithms applied to process parameter optimization, introducing their definitions, usage methods, advantages, and disadvantages.

2.1. Response Surface Method

Response surface design is a statistical method that utilizes reasonable experimental design methods and obtains certain data through experiments [25]. It uses multiple quadratic regression equations to fit the functional relationship between factors and response values, and it analyzes the regression equations to seek the optimal process parameters and solve multivariate problems [26]. The response surface method first requires experimental design. The design process of the response surface method is shown in Figure 1 [27,28]. A suitable mathematical model is established based on a large amount of experimental data, and a graph is drawn based on this mathematical model. This method has the advantages of fewer experiments, shorter cycles, and higher accuracy [29].
The specific design process of response surface methodology can be described as follows: Firstly, a suitable experimental design (e.g., a central composite design) is selected to determine the range of processing parameters on the basis of a one-factor experiment, and then the response surface methodology experiment is carried out [30]. Secondly, the experimental results are analyzed using analysis of variance, range analysis, and normal residual plots to inspect the interactions between processing parameters. Finally, the optimal combination of processing parameters is obtained.
The response surface method establishes a mathematical model between variables and response values. Based on the Taylor expansion, a polynomial fitting function can be established that only considers constant and linear terms, as shown in Equation (1) [31]:
y = β0 + Σ_{i=1}^{k} βi xi + ε
where y is the response value in the fitting function, β0 is a constant term, k is the number of variables, βi is the linear impact regression coefficient of xi, xi is the input independent variable, and ε is the residual of the mathematical model.
In addition, a quadratic polynomial model is used to describe the main effects in process variables and the mutual effects between variables. The quadratic polynomial fitting model is shown in Equation (2):
y = β0 + Σ_{i=1}^{m} βi xi + Σ_{i=1}^{m} βii xi² + Σ_{i<j} βij xi xj + ε
where βii is the quadratic regression coefficient with respect to the independent variable xi, βij is the linear interaction influence coefficient with respect to xi and xj, and the other terms have the same meaning as in Equation (1) above.
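As an illustration, the coefficients of a model of the form of Equation (2) can be estimated by ordinary least squares. The following Python sketch fits a single-variable quadratic model from made-up data (the feed-rate/roughness values are purely illustrative, not from any cited experiment):

```python
# Minimal sketch: least-squares fit of y = b0 + b1*x + b11*x^2, the
# one-variable case of the quadratic response-surface model of Equation (2).

def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_quadratic(xs, ys):
    """Fit y = b0 + b1*x + b11*x^2 via the normal equations."""
    X = [[1.0, x, x * x] for x in xs]          # design matrix
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(3)]
           for i in range(3)]
    Xty = [sum(X[r][i] * ys[r] for r in range(len(X))) for i in range(3)]
    return solve_linear(XtX, Xty)

# Hypothetical data generated from y = 2 + 0.5x + 0.1x^2 (noiseless).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 + 0.5 * x + 0.1 * x * x for x in xs]
b0, b1, b11 = fit_quadratic(xs, ys)
print(round(b0, 3), round(b1, 3), round(b11, 3))  # recovers 2.0 0.5 0.1
```

In practice, response surface software also reports the residuals ε and significance tests for each coefficient, which this sketch omits.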

2.2. Genetic Algorithm

The genetic algorithm is an optimization search method based on natural selection and genetic principles, proposed by American computer scientist John H. Holland [32,33]. The genetic algorithm is an algorithm that iteratively updates the solution of a problem by simulating the Darwinian natural selection and genetic laws in the natural evolution process of organisms in order to search for the optimal solution or approximate optimal solution [34]. The basic process of genetic algorithm is roughly as follows (as shown in Figure 2). Firstly, map the problem into a mathematical problem, that is, establish a mathematical model. Secondly, initialize a population consisting of multiple individuals, each representing a solution. Then, evaluate the individuals based on the fitness function and select excellent individuals for reproduction [35]. Subsequently, perform a crossover operation by randomly selecting two individuals for chromosome crossover, generating new offspring, and updating the optimal solution. Finally, repeat the above steps until the preset number of iterations is reached or other stopping conditions are met [36].
The standard method of genetic algorithm includes steps such as population initialization, selection, crossover (hybridization), mutation, and substitution [37,38]. The advantage of genetic algorithms lies in their adaptability and ability to handle complex nonlinear, non-convex optimization problems, and dynamic environments. In addition, genetic algorithms also have good global search ability, which can avoid becoming stuck in local optimal solutions [39]. However, the convergence speed of genetic algorithms may be slow, and it is necessary to adjust parameters such as population size, crossover probability, and mutation probability to achieve better performance.
The specific application areas of genetic algorithm mainly include function optimization, combinatorial optimization, machine learning, bioinformatics, and control signal systems [40,41]. For example, combinatorial optimization strategies, audio and image processing, signal regulation of control systems, parameters in machine learning, and protein structure prediction have broad applications and development prospects but need to solve problems such as convergence and operational efficiency [42].
NSGA-II has fast convergence and better population diversity at the Pareto front, with high algorithm accuracy. Compared to NSGA, which mainly focuses on non-dominated sorting, NSGA-II has three main operations: fast non-dominated sorting (constructing non-dominated sets), crowding-distance calculation, and constructing the partially ordered set [43]. The time complexity of NSGA is O(MN³), while that of NSGA-II is O(MN²), where M is the number of objectives and N is the number of individuals in the population. The non-dominated sorting method is based on Pareto dominance among feasible solutions. For a typical multi-objective optimization problem, the vectors of the design variables for population individuals i and j are represented by Equation (3):
Xi = (x1^i, x2^i, x3^i, …, xm^i),  Xj = (x1^j, x2^j, x3^j, …, xm^j)
Furthermore, the vectors of the response variables for individuals i and j in the population are represented by Equation (4):
Ri = (r1^i, r2^i, r3^i, …, rn^i),  Rj = (r1^j, r2^j, r3^j, …, rn^j)
If every response variable in Ri is no worse than the corresponding variable in Rj, and at least one response variable in Ri is strictly better, then Xi dominates Xj. If Xi is not dominated by any other solution, Ri can be represented as a point on the Pareto front, which describes the optimal trade-off solutions between the objectives: no objective can be improved without degrading another [34].
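The dominance relation of Equations (3) and (4) can be sketched directly in Python, assuming (as is conventional) that all objectives are to be minimized; the objective vectors below are made-up (roughness, cost) pairs:

```python
# Pareto-dominance test underlying NSGA-II's non-dominated sorting:
# ri dominates rj when it is no worse in every objective and strictly
# better in at least one.

def dominates(ri, rj):
    no_worse = all(a <= b for a, b in zip(ri, rj))
    strictly_better = any(a < b for a, b in zip(ri, rj))
    return no_worse and strictly_better

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (roughness, cost) pairs for four parameter sets:
pts = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (1.5, 2.5)]
print(pareto_front(pts))  # [(1.0, 5.0), (1.5, 2.5)]
```

NSGA-II builds on this test with fast non-dominated sorting and crowding distances, but the dominance check itself is exactly this comparison.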
Future work on genetic algorithms may innovate crossover and mutation strategies and improve population initialization and diversity preservation. In addition, genetic algorithms can be hybridized with other optimization algorithms (such as particle swarm optimization and simulated annealing) to exploit their respective advantages, and the convergence and parameter adaptability of genetic algorithms merit further study.

2.3. Particle Swarm Optimization

Particle swarm optimization is an evolutionary computing technique proposed by Dr. Eberhart and Dr. Kennedy in 1995, originating from the study of foraging behavior in bird populations (as shown in Figure 3) [44]. The basic idea of this algorithm is to find the optimal solution through collaboration and information sharing among individuals in the group.
The particle swarm algorithm simulates birds in a flock by designing massless particles that have only two attributes, velocity and position, where velocity represents the speed of a bird's movement and position represents its direction [45]. The flowchart of the particle swarm optimization algorithm is shown in Figure 4 [46].
The particle swarm optimization (PSO) algorithm proceeds through the following specific steps:
(1) Initialize the velocity and position of the particle swarm, the inertia factor, the acceleration constants, and the maximum number of iterations [47].
(2) Evaluate the initial fitness value of each particle by substituting it into the objective function.
(3) Take the initial fitness value as the current local optimum of each particle (dependent variable) and its position as the current local optimal position (independent variable).
(4) Take the best local optimal value (initial fitness value) among all particles as the current global optimal value and its position as the global optimal position.
(5) Update the flight velocity of each particle using the velocity update relation.
(6) Update the position of each particle.
(7) Compare each particle's fitness value with its historical local optimum. If it is better, take the current fitness value as the particle's local optimal value and the corresponding position as its local optimal position.
(8) Find the global optimal value in the current particle swarm and take the corresponding position as the global optimal position.
(9) Repeat steps (5)–(8) until the set minimum error or the maximum number of iterations is reached, then output the global optimal value and position, as well as the local optimal values and positions of the other particles [48].
In the particle swarm algorithm, particles need to update their velocity and position via Equation (5), with a specific expression, as follows [49,50]:
v_i^d = ω v_i^{d−1} + c1 r1 (pbest_i^d − x_i^d) + c2 r2 (gbest^d − x_i^d)
where n is the number of particles, c1 is the individual learning factor of the particle, c2 is the social learning factor of the particle, ω is the inertia weight of the velocity, r1 and r2 are random numbers uniformly distributed in [0, 1], v_i^d is the velocity of the ith particle at the dth iteration, x_i^d is the position of the ith particle at the dth iteration, f(x) is the fitness at position x, pbest_i^d is the best position found by the ith particle up to the dth iteration, and gbest^d is the best position found by all particles up to the dth iteration [51].
Further, the position of the particle at step d + 1 is the sum of its position at step d and its velocity at step d multiplied by the movement time (taken as one unit), which can be described as Equation (6) [52]:
x_i^{d+1} = x_i^d + v_i^d
where the inertia weight ω decreases as the number of iterations increases, ultimately allowing the particle swarm algorithm to exhibit a local convergence ability; the expression for ω is shown in Equation (7) [53]:
ω = ωmax − (ωmax − ωmin) · iter / itermax
where ωmax and ωmin are the maximum and minimum inertia weights, respectively, iter is the current iteration number, and itermax is the maximum iteration number.
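The update rules of Equations (5)–(7) can be sketched as a complete PSO loop in Python. The sphere function stands in for a real machining objective, and the bounds and coefficients are illustrative values, not taken from any cited study:

```python
import random

# Minimal PSO: velocity update with linearly decreasing inertia weight
# (Equation (7)), then position update (Equation (6)).

def pso_minimize(f, lo, hi, dim=2, n=20, iters=200,
                 c1=2.0, c2=2.0, w_max=0.9, w_min=0.4, seed=1):
    rng = random.Random(seed)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in x]
    pbest_val = [f(p) for p in x]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for it in range(iters):
        w = w_max - (w_max - w_min) * it / iters          # Equation (7)
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v[i][d] = (w * v[i][d]                     # Equation (5)
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))  # Equation (6)
            val = f(x[i])
            if val < pbest_val[i]:                         # steps (7) and (8)
                pbest[i], pbest_val[i] = x[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = x[i][:], val
    return gbest, gbest_val

sphere = lambda p: sum(c * c for c in p)
best, val = pso_minimize(sphere, -5.0, 5.0)
print(val)  # close to 0
```

Swapping the sphere function for a fitted process model (and the bounds for machine-tool limits) turns this sketch into a process parameter optimizer.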
Furthermore, the above is an introduction to the basic particle swarm algorithm. With the development of technology and application needs, a hybrid particle swarm algorithm has been developed to solve related engineering problems and optimize process parameters [54,55]. As shown in Figure 5, the calculation process of the hybrid particle swarm optimization algorithm is introduced.
Firstly, during the initialization process, some machining process parameters will be selected based on the relevant performance parameters of the machine tool. Then, the appropriate fitting degree of the process parameter position will be calculated through the objective function, and the PBest solution set will be obtained. In addition, the adaptive density grid algorithm is integrated into the particle swarm algorithm to select the appropriate solution set. Subsequently, the dynamic inertia weight algorithm is used to determine the weight of the current motion’s impact on the new position [4]. Finally, after a certain number of iteration steps, the final process parameter solution can be obtained.
The particle swarm optimization algorithm can be applied to optimize the parameters of many manufacturing processes in the following ways: milling, turning, drilling, injection molding, welding process, laser cutting, laser welding, other laser processing, surface treatment, electroplating, painting, heat treatment, and 3D printing [56,57]. The application of the particle swarm optimization algorithm in these manufacturing processes usually needs to be realized by adjusting the algorithm parameters and designing a reasonable fitness function in combination with specific process characteristics and optimization objectives [58].

2.4. Machine Learning and Other Algorithms

Machine learning is an interdisciplinary field covering probability theory, statistics, approximation theory, and complex algorithms. It uses computers as tools and is dedicated to simulating human learning; it organizes existing knowledge into structures so as to continually improve learning efficiency. Optimization algorithms in machine learning are mathematical methods used to solve for model parameters or features, improving the accuracy and generalization ability of the model. Common algorithms include the decision tree algorithm, naive Bayes algorithm, artificial neural network algorithm, and so on.

2.4.1. Decision Tree Algorithm

The decision tree is a machine learning algorithm that can handle both complete and incomplete data. Decision trees and their variants divide the input space into different regions, each with independent parameters. The decision tree algorithm makes full use of the tree model, as shown in Figure 6. The path from the root node to a leaf node is a classification rule, and each leaf node represents a predicted category. First, the samples are divided into subsets; the subsets are then split recursively until each subset contains samples of a single class. Starting from the root node, a test sample descends through the subtrees to a leaf node, which gives its predicted category. This method is characterized by a simple structure and efficient data acquisition [59].
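The core splitting step described above can be illustrated with a short Python sketch that scans candidate thresholds on one numeric feature and keeps the one with the lowest weighted Gini impurity; the (feature value, class label) pairs are made up:

```python
# Sketch of one split of a classification tree: choose the threshold that
# minimizes the weighted Gini impurity of the two resulting subsets.

def gini(labels):
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)          # fraction of class 1
    return 2 * p * (1 - p)

def best_split(samples):
    """samples: list of (x, y) with y in {0, 1}. Returns (threshold, impurity)."""
    best = (None, float("inf"))
    xs = sorted({x for x, _ in samples})
    for a, b in zip(xs, xs[1:]):
        t = (a + b) / 2                    # candidate threshold between values
        left = [y for x, y in samples if x <= t]
        right = [y for x, y in samples if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(samples)
        if score < best[1]:
            best = (t, score)
    return best

data = [(1.0, 0), (2.0, 0), (3.0, 1), (4.0, 1)]
print(best_split(data))  # (2.5, 0.0): the threshold separates the classes
```

A full tree applies this split recursively to each subset until the stopping condition (pure subsets, or a depth limit) is met.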
The decision tree algorithm is widely used in fields such as machine learning, image recognition, traffic control, and biomedicine [60]. The decision tree algorithm can effectively compare numerical features and thresholds during the testing process [61,62]. The decision tree algorithm is commonly used for grouping functions, achieving precise analysis of classification category features through data mining [63]. Figure 7 shows an example of the application of the decision tree algorithm [64].
The main types of decision tree algorithms are iterative dichotomization, classification and regression trees, multivariate adaptive regression splines, conditional association trees, and so on [65,66]. In addition, the main objective of decision tree algorithms is to build a training model and then use machine learning to compute decision rules from the training data, which in turn gives the thresholds for predicting the target variables [67,68]. The advantages and disadvantages of decision tree algorithms are both significant. On the one hand, the decision tree algorithm is easy to understand and can handle both categorical and numerical outcomes [69], and it makes no a priori assumptions about the results [70]. On the other hand, the decision tree algorithm may make suboptimal decisions because its greedy splitting can block the globally optimal choice, and it requires a relatively large number of training samples, which increases the computational cost [71,72].

2.4.2. Naive Bayes Algorithm

The naive Bayes algorithm is a classification method based on Bayes' theorem and the conditional independence assumption of features. For a given training dataset, the joint probability distribution of the input and output is first learned under this independence assumption. Then, for a given input x, Bayes' theorem is used to find the output y with the highest posterior probability. The algorithm process is shown in Figure 8.
To follow the description of the naive Bayes algorithm above, it is necessary to first understand Bayes' theorem. Bayes' theorem relates the probability of event A given that event B has occurred to the probability of event B given that event A has occurred; the two probabilities differ, but there is a definite relationship between them, as shown in Equation (8).
P(A|B) = P(B|A) P(A) / P(B)
Furthermore, the final naive Bayesian distribution model can be obtained by calculating the posterior probability distribution, as shown in Equation (9).
y = f(x) = argmax_{ck} [ P(Y = ck) ∏j P(X^(j) = x^(j) | Y = ck) ] / [ Σk P(Y = ck) ∏j P(X^(j) = x^(j) | Y = ck) ]
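Since the denominator of Equation (9) is the same for every class, a classifier only needs to maximize the numerator. The following Python sketch implements this for categorical features with Laplace smoothing; the training rows are hypothetical (feature tuple, label) pairs:

```python
from collections import Counter, defaultdict

# Minimal categorical naive Bayes: pick the class maximizing the prior times
# the product of per-feature likelihoods (Equation (9) without the constant
# denominator), with Laplace (+1) smoothing for unseen values.

def train_nb(rows):
    priors = Counter(label for _, label in rows)
    likelihood = defaultdict(Counter)   # (feature index, class) -> value counts
    for features, label in rows:
        for j, v in enumerate(features):
            likelihood[(j, label)][v] += 1
    return priors, likelihood, len(rows)

def predict_nb(model, features):
    priors, likelihood, n = model
    def score(c):
        s = priors[c] / n
        for j, v in enumerate(features):
            counts = likelihood[(j, c)]
            s *= (counts[v] + 1) / (sum(counts.values()) + 2)  # smoothing
        return s
    return max(priors, key=score)

# Hypothetical (cutting-condition, coolant) observations with a binary label:
rows = [(("high", "dry"), 1), (("high", "wet"), 1),
        (("low", "dry"), 0), (("low", "wet"), 0)]
model = train_nb(rows)
print(predict_nb(model, ("high", "dry")))  # 1
```

The smoothing constant (+1 over 2 categories) is an assumption of this sketch; real implementations parameterize it per feature cardinality.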
The naive Bayes algorithm has stable classification efficiency, can handle multi-class tasks, is suitable for incremental training on small datasets, is relatively simple, and is well suited to text classification. However, because of the assumed prior model and the feature-independence assumption, prediction performance can be poor, there is a certain error rate in classification decisions, and the method is sensitive to the representation of the input data.

2.4.3. Support Vector Machine Algorithm

The support vector machine (SVM) algorithm is a classification algorithm that represents instances as points in space and separates them with a hyperplane. The optimal hyperplane is found by solving a convex quadratic programming problem that minimizes model complexity (the sum of squared weights) while limiting the misclassification of training samples. The SVM algorithm can map input features to a high-dimensional space through a kernel function, making data that are linearly inseparable in the original space linearly separable in the high-dimensional space. Assume that the training set on a given feature space is given by Equation (10), as follows:
T = {(x1, y1), (x2, y2), (x3, y3), …, (xN, yN)}
where xi ∈ R^n, yi ∈ {−1, +1}, i = 1, 2, …, N, xi is the ith sample, and yi is the label of xi.
Based on the linearly separable training dataset proposed above, the separation hyperplane can be obtained by maximizing the interval, as described in Equation (11) [73]:
y(x) = ω^T φ(x) + b
where φ(x) is a transformation that maps x into a higher-dimensional feature space, i.e., the feature map associated with the kernel function.
In addition, its corresponding classification decision function is given in Equation (12):
f(x) = sign(ω^T φ(x) + b)
Therefore, we need to find the hyperplane y(x) to optimally separate two sets. The schematic diagram of the optimal hyperplane is shown in Figure 9a.
The SVM algorithm can be used for classification, regression, and outlier detection, with good robustness. The process of the SVM model is shown in Figure 9b. The SVM algorithm is relatively efficient in high-dimensional space, but its explanatory power for high-dimensional mapping of kernel functions is not strong [74].
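As a rough illustration of the optimization problem above, the following Python sketch trains a linear SVM (identity feature map, φ(x) = x) by stochastic sub-gradient descent on the hinge loss, a common practical substitute for solving the quadratic program directly. The toy data and hyperparameters are illustrative assumptions:

```python
import random

# Linear SVM via Pegasos-style stochastic sub-gradient descent:
# minimize lam/2 * |w|^2 + average hinge loss max(0, 1 - y*(w.x + b)).

def svm_train(data, lam=0.1, epochs=500, seed=0):
    rng = random.Random(seed)
    w, b = [0.0, 0.0], 0.0
    t = 0
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:
            t += 1
            eta = 1.0 / (lam * t)                    # decaying step size
            margin = y * (w[0] * x[0] + w[1] * x[1] + b)
            if margin < 1:       # hinge loss active: step toward the sample
                w = [wi - eta * (lam * wi - y * xi) for wi, xi in zip(w, x)]
                b += eta * y
            else:                # only the regularizer contributes
                w = [wi - eta * lam * wi for wi in w]
    return w, b

def svm_predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1

# Made-up linearly separable 2-D data with labels in {-1, +1}:
data = [([2.0, 2.0], 1), ([3.0, 3.0], 1), ([-2.0, -2.0], -1), ([-3.0, -1.0], -1)]
w, b = svm_train(list(data))
print([svm_predict(w, b, x) for x, _ in data])
```

For nonlinear problems, the dot product would be replaced by a kernel evaluation, which is where the feature map φ(x) of Equation (11) enters.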

2.4.4. Random Forest Algorithm

The random forest algorithm can be viewed as a set of decision trees, where each decision tree in the forest estimates a classification, a process known as voting [75]. The forest then chooses the classification that receives the most votes across all decision trees. In regression problems, the average of the individual regression results is taken as the final result. The specific process of the random forest algorithm is shown in Figure 10 [76].
The random forest algorithm can handle a large number of input variables and is able to assess the importance of variables when deciding on categories. In addition, for many kinds of information, the random forest algorithm can produce highly accurate classifiers that balance the error. However, the random forest algorithm loses the interpretability of the decision tree and may not improve the accuracy of the base learner in problems with multiple categorical variables.
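The bootstrap-and-vote idea above can be sketched in Python with one-split trees ("stumps") as the base learners; real forests grow full trees and also randomize feature subsets. The (feature value, label) data are made up:

```python
import random
from collections import Counter

# Random-forest sketch: train several stumps on bootstrap resamples of the
# data, then classify a new point by majority vote over the stumps.

def train_stump(samples):
    """Pick the threshold minimizing misclassifications for a 1-D feature,
    predicting class 1 for x > threshold."""
    best = (0.0, len(samples) + 1)
    for t in sorted({x for x, _ in samples}):
        errs = sum(1 for x, y in samples if (1 if x > t else 0) != y)
        if errs < best[1]:
            best = (t, errs)
    return best[0]

def forest_predict(samples, x, n_trees=15, seed=0):
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_trees):
        boot = [rng.choice(samples) for _ in samples]   # bootstrap resample
        t = train_stump(boot)
        votes[1 if x > t else 0] += 1                   # each tree votes
    return votes.most_common(1)[0][0]

data = [(1.0, 0), (2.0, 0), (3.0, 1), (4.0, 1)]
print(forest_predict(data, 4.5), forest_predict(data, 0.5))  # 1 0
```

The averaging over resamples is what reduces the variance of the individual trees, at the cost of the single-tree interpretability noted above.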

2.4.5. Artificial Neural Network Algorithm

An artificial neural network is a highly complex network of neurons: individual units connected to each other, each with numerical inputs and outputs that may take the form of real numbers or linear combination functions. The structure of the artificial neural network is shown in Figure 11. An artificial neural network is a massively parallel interconnected network composed of adaptive simple units, and its organization can simulate the interactive responses of a biological nervous system to the real world.
Artificial neural network algorithms have a wide range of applications, including data analysis and mining, engineering pattern recognition, bioinformatics processing, and humanoid robots, among others [77]. For example, their applications include being virtual assistants (Siri, Alexa, etc.), traffic prediction (GPS navigation services), filtering spam and malware, rapid detection of cellular internal structures, deep space exploration, and so on [78,79].
On the other hand, the adaptive neuro-fuzzy inference system (ANFIS) is a hybrid intelligent system that combines neural networks with fuzzy logic inference [80]. It uses the learning ability of neural networks to adjust the parameters of the fuzzy logic system, thereby optimizing the fuzzy rules [81]. ANFIS is commonly used for modeling and controlling complex systems with uncertainty and fuzziness. ANFIS implements a Takagi-Sugeno fuzzy model consisting of a series of If/Then rules, where the If part defines fuzzy sets and the Then part expresses the output through linear or nonlinear functions [82]. ANFIS learns and optimizes the parameters of these fuzzy rules through a neural network structure [83,84]. ANFIS can be applied to various scenarios in manufacturing, such as quality control (predicting product quality from production data and monitoring and controlling the process in real time) [85], fault diagnosis (detecting potential faults and anomalies from equipment operating data in a timely manner) [86], and process optimization (optimizing key parameters of the manufacturing process) [81,87].
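The If/Then inference at the heart of ANFIS can be illustrated with a zero-order Sugeno sketch in Python: each rule's firing strength is a Gaussian membership of the input, and the output is the firing-strength-weighted average of the rule outputs. In ANFIS the membership and consequent parameters would be tuned by the learning procedure; here both rules and all parameter values are hypothetical and fixed by hand:

```python
import math

# Zero-order Sugeno fuzzy inference with Gaussian membership functions.

def gauss(x, center, sigma):
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def sugeno(x, rules):
    """rules: list of (center, sigma, output). Weighted-average defuzzification."""
    weights = [gauss(x, c, s) for c, s, _ in rules]
    return sum(w * out for w, (_, _, out) in zip(weights, rules)) / sum(weights)

# Two hypothetical rules: "if feed is low then wear = 0.2",
#                         "if feed is high then wear = 0.8".
rules = [(1.0, 0.5, 0.2), (3.0, 0.5, 0.8)]
print(round(sugeno(1.0, rules), 3))   # near 0.2: the "low" rule dominates
print(round(sugeno(2.0, rules), 3))   # 0.5: both rules fire equally
```

A first-order Sugeno model would replace the constant rule outputs with linear functions of the input, which is the form ANFIS typically trains.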
In addition, deep learning models include, but are not limited to, convolutional neural networks, recurrent neural networks, long short-term memory networks, and generative adversarial networks [88,89]. These models have achieved breakthrough results in fields such as image recognition, speech recognition, natural language processing, and gaming [90]. Deep learning can automatically extract features and process complex data; however, training deep learning models demands large amounts of data and intensive computing resources [91,92]. In manufacturing, deep learning can be applied to quality inspection, predictive maintenance, process optimization, supply chain management, and more [93,94]. Overall, deep learning can significantly improve production efficiency, product quality, and the intelligence of supply chain management, but its data and computing requirements, as well as the interpretability of the models, must be taken into account [95,96,97].

3. Optimization Objectives and Constraints of Process Parameters

3.1. Optimization Objective

Machining force, material removal rate, subsurface damage depth, residual stress, and residual strength are common optimization objectives in process parameter research. Reducing the machining force suppresses surface damage and improves machined surface quality, so minimizing the grinding force is necessary. The material removal rate reflects the processing efficiency at the actual production site, so it should be maximized. In addition, the service performance of machined parts is affected by the subsurface damage depth, so it should be reduced to improve machining quality and thereby increase the retention of the material's mechanical properties [98]. After searching Web of Science with "process parameter" and "optimization" as keywords, 90 articles were randomly selected and the types of optimization objectives were tallied, as shown in Figure 12.
According to Figure 12, most existing studies focus on optimizing surface roughness and material removal rate, while part production cost and workpiece subsurface damage also draw researchers' attention. In addition, optimization objectives can generally be expressed as objective functions; a common form is shown in Equation (13) [78].
minimize { f(MRR), f(Sa), f(Fn) } (13)

3.2. Constraint Condition

Finding a set of parameter values that optimizes one or more objective functions under a series of constraints is the constrained problem in process parameter optimization [1]. The constraints may be equalities or inequalities; the key is to satisfy them while achieving the optimal objective value. The constraints are mostly the parameter ranges set by the process experiment, and some also include performance requirements such as the surface integrity of the workpiece [99].
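One common way to handle such inequality constraints is to fold them into the objective as a penalty term, so that an unconstrained search method can then be applied. The sketch below uses a toy one-variable objective and constraint (both assumed for illustration, not from any cited study):

```python
def penalized(f, constraints, rho=1e3):
    """Wrap objective f with a quadratic penalty for violated
    inequality constraints g(x) <= 0."""
    def wrapped(x):
        penalty = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + rho * penalty
    return wrapped

f = lambda x: (x[0] - 2.0) ** 2   # toy objective: minimum at x = 2
g1 = lambda x: x[0] - 1.0         # inequality constraint: x <= 1

fp = penalized(f, [g1])
print(fp([0.5]))   # feasible point: plain objective value, 2.25
print(fp([1.5]))   # infeasible point: objective plus a large penalty
```

With the penalty in place, any of the unconstrained optimizers discussed in Section 4 can search for the constrained optimum (here, pushed toward the boundary x = 1).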

4. Typical Applications of Optimization Algorithms in Practical Working Conditions

4.1. Genetic Algorithm

This section reviews the literature on using genetic algorithms to optimize process parameters, introducing the optimization objectives, constraints, and research results. Li et al. [100] established a mapping model between process parameters and surface roughness based on orthogonal experimental results, analyzed the influence weights of the process parameters, and further built a process parameter optimization model using an improved genetic algorithm (as shown in Figure 13) to obtain the optimal process parameters for ultrasonic vibration-assisted grinding. Compared with the parameters before optimization, the surface quality was significantly improved and the surface roughness was reduced. The results show that both the surface roughness prediction model and the process parameter optimization model based on the improved genetic algorithm have acceptable accuracy and reliability [101].
Optimizing the process parameters in ultrasonic vibration-assisted grinding is crucial for improving the surface integrity of parts. In the genetic algorithm, parameters such as the population size N, crossover probability Pc, and mutation probability Pm are set within prescribed intervals [100]. The improved genetic algorithm is then used to find the optimal solution of the objective function. A process parameter optimization model incorporating the constraints and objective function is developed, as detailed in Equation (14). The model is calibrated by setting the iteration count and verifying convergence (as depicted in Figure 13b). Finally, the surface roughness outcomes and their residuals are compared across the various optimization strategies, as illustrated in Figure 13c.
min YRa = YRa(q, v, ap, A), subject to 0.2 ≤ q ≤ 0.8, 50 ≤ v ≤ 250, 20 ≤ ap ≤ 60, 0 ≤ A ≤ 2.4 (14)
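As a sketch of how such a model can be searched, the following minimal real-coded genetic algorithm minimizes a hypothetical surrogate roughness function under the box constraints of Equation (14). The surrogate function and the GA settings (population size, crossover probability pc, mutation probability pm) are illustrative assumptions, not the fitted model from [100].

```python
import random

random.seed(0)

# Box constraints from Equation (14): q, v, ap, A
BOUNDS = [(0.2, 0.8), (50.0, 250.0), (20.0, 60.0), (0.0, 2.4)]

def roughness(x):
    """Hypothetical surrogate for Y_Ra(q, v, ap, A); stands in for the
    fitted regression model, which is not given in the text."""
    q, v, ap, A = x
    return 0.5 * q + 0.002 * (v - 150.0) ** 2 / 100.0 + 0.01 * ap - 0.05 * A

def clamp(x):
    """Keep a candidate inside the box constraints."""
    return [min(max(xi, lo), hi) for xi, (lo, hi) in zip(x, BOUNDS)]

def ga_minimize(f, n_pop=40, n_gen=100, pc=0.8, pm=0.1):
    pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(n_pop)]
    for _ in range(n_gen):
        pop.sort(key=f)                    # rank by fitness (lower is better)
        elite = pop[: n_pop // 2]          # truncation selection with elitism
        children = []
        while len(children) < n_pop - len(elite):
            p1, p2 = random.sample(elite, 2)
            if random.random() < pc:       # arithmetic crossover
                a = random.random()
                child = [a * x1 + (1 - a) * x2 for x1, x2 in zip(p1, p2)]
            else:
                child = p1[:]
            for i, (lo, hi) in enumerate(BOUNDS):   # Gaussian mutation
                if random.random() < pm:
                    child[i] += random.gauss(0.0, 0.1 * (hi - lo))
            children.append(clamp(child))
        pop = elite + children
    return min(pop, key=f)

best = ga_minimize(roughness)
print(best, roughness(best))
```

For this surrogate, the GA should drive q and ap toward their lower bounds, v toward 150, and A toward its upper bound, where the surrogate's minimum lies.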
Padhi et al. [102] used the genetic algorithm weighted-sum method to search for the optimal machining parameter values, aiming to maximize cutting efficiency and minimize surface roughness and dimensional deviation. They formulated the search for optimal machining parameter values as a multi-objective, multivariate, nonlinear optimization problem and evaluated its performance. Huang et al. [103] optimized the process parameters (laser power, cutting speed, auxiliary gas pressure, and focusing position) using a non-dominated sorting genetic algorithm (NSGA-II) and output a complete optimal solution set, ultimately achieving nonlinear optimization of multi-objective parameters such as incision width, incision taper, and incision section roughness. Moreover, Zhao et al. [104] used the genetic algorithm (GA) to optimize the machining parameters of the NiTi shape memory alloy during dry turning in order to obtain better surface roughness and residual depth ratio. The optimized machining parameters were vc = 126 m/min, f = 0.11 mm/rev, and ap = 0.14 mm, yielding a surface roughness of 0.489 μm and a residual depth ratio of 64.13%.
In summary, genetic algorithms can be used to optimize various parameters, such as cutting parameters (cutting speed, feed rate, cutting depth, etc.), heat treatment parameters (temperature, time, cooling rate, etc.), and injection molding parameters (injection pressure, holding time, cooling time, etc.) [105,106]. Optimizing these parameters can improve product quality, reduce production costs, and enhance production efficiency. The genetic algorithm is well suited to complex process parameter optimization problems [107]. However, achieving optimal performance requires careful selection and tuning of the algorithm parameters, possibly in combination with other optimization methods.

4.2. Response Surface Method

Response surface methodology is used in manufacturing process parameter optimization to determine the optimal combination of process parameters and thereby improve product quality, reduce costs, and increase production efficiency [108]. Experimental design methods such as central composite design, Box–Behnken design, and orthogonal design are selected according to the research objectives and the number of parameters. Scholars have applied response surface methodology to tool wear detection during machining, machining trajectory optimization for complex components, and process parameter optimization [109,110]. Response surface methodology requires attention to the experimental design, the factors and their levels, and the choice of an appropriate model; model prediction and optimization algorithms (such as gradient descent or the simplex method) are then used to find the optimal combination of process parameters. Camposeco-Negrete [111] conducted central composite design experiments and obtained regression models for energy consumption, specific energy, surface roughness, and material removal rate using response surface methodology. The adequacy of the models was demonstrated through analysis of variance, enabling the selection of process parameters for minimum specific energy consumption and minimum surface roughness [112]. Li et al. [100] designed an experiment on ultrasonic-assisted grinding and dressing of white alumina grinding wheels using the Box–Behnken method, established a predictive model for the surface roughness of bearing ring grinding, and analyzed the influence of the process parameters (speed ratio, feed rate, dressing depth, and ultrasonic amplitude) on surface roughness using response surface methodology [113]. It was found that the speed ratio and ultrasonic amplitude are the main factors influencing Ra.
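The core of response surface methodology is fitting a low-order polynomial model to designed-experiment data. The following sketch fits a second-order surface by least squares to synthetic data; the two coded factors, the "true" coefficients, and the noise level are illustrative assumptions, not values from the cited studies.

```python
import numpy as np

# Quadratic response surface in two coded factors:
# Ra = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
rng = np.random.default_rng(1)
x1 = rng.uniform(-1, 1, 30)   # coded factor 1 (e.g., speed ratio)
x2 = rng.uniform(-1, 1, 30)   # coded factor 2 (e.g., ultrasonic amplitude)

true_b = np.array([0.8, -0.2, 0.1, 0.3, 0.15, -0.05])  # assumed coefficients
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
ra = X @ true_b + rng.normal(0, 0.01, 30)   # synthetic response with noise

# Ordinary least squares recovers the surface coefficients.
b_hat, *_ = np.linalg.lstsq(X, ra, rcond=None)
print(np.round(b_hat, 2))   # should be close to true_b
```

In practice, the design points would come from a central composite or Box–Behnken plan rather than random sampling, and ANOVA on the fitted model would check its adequacy before optimization.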
In practice, response surface methodology is often combined with genetic algorithms to optimize process parameters [114]. Response surface methodology is used to establish a predictive model relating the objective function to the process parameters, and the accuracy of the model is verified experimentally. A genetic algorithm or improved genetic algorithm is then used to find the optimal solution of the objective function [115]. Li et al. [116] used response surface methodology and a genetic algorithm to optimize the process parameters in laser-assisted grinding of RB-SiC. The schematic diagram and apparatus of the experimental system are shown in Figure 14a,b. A four-factor, five-level process parameter experimental table was designed, as shown in Table 1, to minimize surface roughness and subsurface damage and maximize the material removal rate.
In addition, the genetic algorithm was used to optimize four process parameters: feed rate, spindle speed, laser power, and grinding depth. The objective function is shown in Equation (15), with the surface roughness empirical model, the subsurface damage prediction model, and the input variable ranges as constraints, to maximize material removal efficiency. The variance of the surface roughness and the probability distribution of the standardized residuals were analyzed, and response surface graphs between surface roughness and the process parameters were obtained [117]. By applying the weighted-sum method, the multi-objective optimization problem was reduced to a single-objective one, and a desirability function with minimum normalization error was established, as shown in Equation (16). The contour maps of the process parameters and the optimal solution of the calculation were then analyzed.
Y = φ(P, V, F, D) ± ε (15)
where Y is the response function and ε is the error.
U(P, F, D, V) = WRa (Ra − Ra′)/Ra′ + WDsub (Dsub − Dsub′)/Dsub′ + WMR (MR′ − MR)/MR′ (16)
where Ra′, Dsub′, and MR′ are the constraint (reference) values, and WRa, WDsub, and WMR are the weighting factors.
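A weighted-sum scalarization following the pattern of Equation (16) can be sketched as below. The reference values (primed quantities), the weights, and the sign convention for the material removal term (flipped so that maximizing MR lowers U) are illustrative assumptions, not values from the cited study.

```python
def desirability(ra, d_sub, mr,
                 ra_ref=0.4, d_sub_ref=5.0, mr_ref=2.0,   # assumed references
                 w_ra=0.4, w_dsub=0.3, w_mr=0.3):         # assumed weights
    """Weighted sum of normalized deviations from the reference values.
    Lower U is better: Ra and Dsub terms reward values below their
    references; the MR term is sign-flipped because MR is maximized."""
    return (w_ra * (ra - ra_ref) / ra_ref
            + w_dsub * (d_sub - d_sub_ref) / d_sub_ref
            + w_mr * (mr_ref - mr) / mr_ref)

# A candidate that beats the reference on all three objectives scores below zero.
print(desirability(ra=0.3, d_sub=4.0, mr=2.5))
```

A single-objective optimizer (e.g., the GA of Section 4.1) can then minimize U directly, which is exactly how the weighted-sum method collapses the multi-objective problem.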

4.3. Taguchi Method

The Taguchi method identifies and optimizes key factors affecting product performance through experimental design, while reducing process variability [118]. It particularly emphasizes reducing sensitivity to environmental changes and manufacturing process fluctuations in product design and manufacturing processes [119]. Some researchers focus on optimizing the process parameters of abrasive waterjet, laser drilling, dry cutting, and energy field-assisted mechanical manufacturing using the Taguchi method. They establish a mapping relationship between process parameters and processing quality, material removal rate, and tool wear; use residual distribution, regression equations, and experimental analysis to verify the accuracy of the prediction model; and obtain the optimal process parameters and influence weights [120,121]. Fan et al. [122] established an orthogonal experiment using the Taguchi method to analyze the influence of process parameters on the surface quality of microcrystalline glass optical free-form surfaces in laser-assisted rapid tool servo machining and established a mapping relationship between process parameters and surface integrity. Pradhan et al. [123] used the Taguchi method to analyze the relationship between material removal rate, tool wear rate, and machining process parameters during micro electrical discharge machining of mold steel and obtained the optimal electrical discharge machining process parameters (current, pulse time, and gap voltage).
Moreover, Siva Prasad et al. [124] investigated the effects of water pressure, standoff distance, abrasive flow rate, fiber orientation, material thickness, and abrasive particle size on surface quality during abrasive water jet machining (AWJM) of GFRP/epoxy composite materials. L27 orthogonal experiments were designed based on the Taguchi method, and analysis of variance (ANOVA) was used to statistically analyze the results. The AWJM process parameters and response parameters were correlated, and the experiments showed that abrasive particle size is the main factor affecting surface roughness. Song et al. [125] studied the influence of spindle speed, feed speed, cutting depth, and laser pulse duty ratio on the cutting force during laser-assisted machining of fused silica based on Taguchi method (TM) experiments (as shown in Figure 15). An orthogonal experiment and a central composite design experiment were used, with analysis of variance (ANOVA), the signal-to-noise ratio (S/N), main effect diagrams, three-dimensional response surfaces, and their corresponding contour maps, to evaluate the influence of each factor on the cutting force [126,127,128].
It should be noted that the definition of the signal-to-noise ratio used in reference [20] is shown in Equation (17).
S/N = −10 log10[(1/n)(y1² + y2² + ⋯ + yn²)] (17)
In this formula, S/N represents the response value, and y1, y2, …, yn are the output values from n repeated tests [129].
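Equation (17) is the standard smaller-the-better form of the Taguchi signal-to-noise ratio (note the leading minus sign, so that smaller outputs yield a larger S/N). A direct transcription, with illustrative repeated-measurement values:

```python
import math

def sn_smaller_better(y):
    """Smaller-the-better S/N: -10 * log10( (1/n) * sum(y_i^2) )."""
    n = len(y)
    return -10.0 * math.log10(sum(yi ** 2 for yi in y) / n)

# Repeated measurements of, e.g., cutting force at one parameter setting
# (invented numbers): halving the outputs raises S/N by ~6 dB.
print(sn_smaller_better([1.0, 1.0, 1.0]))   # baseline, 0.0 dB
print(sn_smaller_better([0.5, 0.5, 0.5]))   # smaller outputs, higher S/N
```

In a Taguchi analysis, the S/N ratio is computed for each row of the orthogonal array, and the parameter level with the highest mean S/N is selected for each factor.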
Simsek et al. [130] used the Taguchi method with an L27 orthogonal experiment to design three-level control parameters and optimized them to obtain the optimal combination of control parameters; the maximum error between the optimization and experimental results was 9.42%. Camposeco-Negrete [111] analyzed the effects of cutting depth, feed rate, and cutting speed on the response variables using orthogonal experiments, the signal-to-noise ratio (S/N), and analysis of variance (ANOVA), applying the Taguchi method to identify the main effects. In addition, many researchers have combined the Taguchi method with optimization algorithms such as the genetic algorithm and response surface methodology to develop prediction models for surface roughness, cutting force, subsurface damage, or other objectives [131]. They have also used Design-Expert 13 and Minitab 21.1 software to estimate and analyze the significance of the regression models, finally obtaining the optimal process parameters for analyzing surface integrity. Li et al. [11] proposed a combination of the Taguchi method, the response surface method, and NSGA-II to optimize the injection molding process of fiber-reinforced composite materials. Based on an orthogonal experimental design, the importance of each parameter to the three quality objectives was studied through analysis of variance (ANOVA). Three response surface models were created to map the complex nonlinear relationships between design parameters and quality objectives. Finally, the non-dominated sorting genetic algorithm II (NSGA-II) was coupled to the prediction models to find the optimal design parameter values.

4.4. Particle Swarm Optimization

The particle swarm optimization algorithm simulates the foraging behavior of bird flocks and seeks the optimal solution through collaboration and information sharing among individuals in the group [132]. Particle swarm optimization can handle multi-objective problems, optimizing several objectives simultaneously, and can be combined with other techniques such as genetic algorithms and simulated annealing to improve performance [132,133]. Applications of particle swarm optimization in practical manufacturing are increasing, with successful cases in mechanical processing, material processing, chemical process optimization, and other fields [134]. Latchoumi et al. [132] used particle swarm optimization (PSO) to solve cavitation shot peening problems in water jet machining. The swarm was initialized over water pressure, standoff distance, and traverse speed, and the PSO fitness was evaluated on residual stress, hardness, and surface profile roughness; these related and independent parameters were used in water jet machining to induce beneficial residual stresses in the surface layer. Chen et al. [4] proposed a multi-objective optimization method for multi-pass roller grinding based on hybrid particle swarm optimization (the principle of multi-pass roller grinding is shown in Figure 16a), combined with a response surface model of surface roughness evolution. The hybrid particle swarm optimization treats the parameters of the entire grinding process as a whole and achieves parameter optimization under multiple objectives and constraints. A response surface model of surface roughness evolution was established from the process parameters of rough grinding, semi-precision grinding, and precision grinding [98]. The effectiveness of the hybrid particle swarm multi-objective optimization method was verified experimentally (Figure 16c).
The results showed that the error between the predicted and experimental roughness was less than 16.53%, and the grinding efficiency was improved by 17.00% (Figure 16d).
Li et al. [28] proposed a cutting parameter optimization method based on the Taguchi method, response surface methodology (RSM), and a multi-objective particle swarm optimization algorithm (MOPSO), with energy efficiency and processing time as the objectives. A response regression model was established using RSM, and the process parameters with the smallest specific energy consumption and processing time were determined using an improved MOPSO algorithm, finally striking a balance between processing time and energy consumption. In optimizing process parameters, the particle swarm optimization algorithm can adopt a parameter adjustment strategy, improve local search ability, and prevent premature convergence of the swarm to non-optimal solutions by introducing a diversity preservation or penalty mechanism [135]. In addition, effective strategies can be developed to handle constraints in the optimization problem, such as upper and lower limits on process parameters and process stability requirements, and to improve the algorithm's adaptability to problems of different types and scales [136].
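The velocity-and-position update underlying these studies can be sketched as a minimal global-best PSO. The toy sphere objective, the bounds, and the coefficients (inertia w, cognitive pull c1, social pull c2) are illustrative assumptions, not settings from the cited grinding work.

```python
import random

random.seed(2)

BOUNDS = [(-5.0, 5.0), (-5.0, 5.0)]   # assumed search box

def sphere(x):
    """Toy objective with its minimum (0) at the origin."""
    return sum(xi ** 2 for xi in x)

def pso_minimize(f, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    dim = len(BOUNDS)
    pos = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]        # each particle's personal best
    gbest = min(pbest, key=f)          # swarm's global best (shared info)
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
                lo, hi = BOUNDS[d]
                pos[i][d] = min(max(pos[i][d], lo), hi)   # box constraints
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=f)
    return gbest

best = pso_minimize(sphere)
print(best, sphere(best))
```

In a process parameter setting, `sphere` would be replaced by a fitted response model (e.g., specific energy or roughness), with the bounds taken from the experimental parameter ranges.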

4.5. Other Algorithms

Khalilpourazari et al. [137] proposed a sine cosine whale optimization algorithm, as shown in Figure 17a, to minimize the total production time in the multi-pass milling process. Zheng et al. [138] combined the update operator of the sine cosine algorithm with the whale optimization algorithm and experimentally compared the performance of three optimization algorithms.
In addition, Guo et al. [139] used the multi-objective particle swarm optimization algorithm, based on crowding distance sorting, to optimize prediction models of surface roughness and cutting force and determined the optimal combination of process parameters for reducing surface roughness while achieving a smaller cutting force. Pramanik et al. [140] studied the influence of laser beam processing parameters on the cutting of titanium high-temperature alloys and applied metaheuristic particle swarm optimization to model and optimize the quality characteristics in laser cutting. Khalilpourazari et al. [114] optimized the grinding process parameters for surface quality, grinding cost, and total processing time, adopting a multi-objective dragonfly algorithm (as shown in Figure 17b) and using a weighting method to combine the three objectives into one. The results showed that the proposed optimization algorithm has significant advantages over traditional genetic algorithms [141]. Mellal et al. [18] used the cuckoo optimization algorithm to minimize the total production time in multi-pass milling processes, successfully handled the constraints, and compared the optimal results with those of other optimization algorithms. Lin et al. [142] used a one-dimensional convolutional neural fuzzy network (1DCNFN) to establish a surface roughness prediction model and then applied the particle swarm optimization algorithm to optimize the process parameters of an ultrasonic-assisted grinding process. Li et al. [16] used the black hole–continuous ant colony algorithm to optimize the grinding specific energy and surface roughness of CNC machine tools during grinding and solved for the optimal parameter combination. They found that this algorithm can effectively avoid the local optima in which traditional algorithms are prone to become trapped.

5. Conclusions

This article systematically reviews the application and development trends of optimization algorithms in practical engineering production. It explores the classification, definition, and research status of process parameter optimization algorithms in engineering manufacturing processes, both domestically and internationally. Furthermore, it provides a detailed introduction to the specific applications of various optimization algorithms in actual working conditions. Based on this analysis, the following conclusions regarding the current development status of process parameters in engineering applications using optimization algorithms were drawn.
(1)
Currently, the response surface method, genetic algorithm, Taguchi method, and particle swarm optimization algorithm are relatively mature and have been extensively applied in process parameter optimization. They are increasingly applied to optimization objectives such as surface roughness, subsurface damage, cutting force, and mechanical properties, in both conventional machining and special machining. However, these algorithms have reached maturity and are no longer novel.
(2)
In addition, as existing optimization algorithms continue to evolve and be updated, machine learning and other advanced optimization algorithms will emerge. When establishing mathematical optimization models, these approaches analyze the specific problem at hand and set objective functions and constraints according to the optimization objectives.
(3)
However, the constraints encountered in real-world engineering manufacturing processes are frequently intricate and often entail multi-objective optimization challenges, where distinct optimization goals are inter-related by specific constraints. This intricate nature poses a significant obstacle to the formulation of an optimization model that can effectively address multiple objectives at once. Subsequent research has demonstrated that employing a weighting method can effectively consolidate multiple objectives into a unified target, thereby facilitating the identification of optimal process parameter solutions.
(4)
With the advent of intelligent manufacturing, the application of optimization algorithms in machining technology increasingly relies on data-driven and real-time feedback mechanisms. This trend enhances the intelligence and automation of the production process. These optimization algorithms not only bolster the competitiveness of the manufacturing industry but also drive the evolution of manufacturing technology towards greater efficiency, sustainability, and customization.

6. The Development Trend of Optimization Algorithms

The aforementioned analysis suggests that in the future, intelligent manufacturing processes will predominantly focus on technologies such as deep reinforcement learning and digital twinning for process optimization. Process optimization algorithms represent an effective technical approach to achieving intelligent manufacturing. These algorithms can substantially enhance the manufacturing efficiency, decrease production costs, elevate the quality of components, and provide new technical support for the optimization of process parameters. Looking ahead, the following areas are poised to become the primary research focal points, as depicted in Figure 18.
(1)
Adaptation and self-optimization
In the future, optimization algorithms will need stronger adaptive and self-adjusting capabilities. In actual production, process parameter optimization can then be achieved through self-adjustment and dynamic parameter tuning, reducing manual intervention. Moreover, advanced optimization algorithms will be capable of adjusting model parameters online or adaptively optimizing the model structure.
(2)
Complex systems and multi-objective optimization
The challenges of practical engineering applications are notably complex. Consequently, future optimization algorithms must concentrate more on multi-objective optimization processes, multi-constraint interactions, and high-precision function design to effectively search for optimal solutions across multiple objectives. The optimization of process parameters ultimately aims to enhance production quality, reduce waste and costs, and meet production goals such as low-carbon environmental protection and green manufacturing. Therefore, complex systems and multi-objective optimization in production can be systematically addressed in the future.
(3)
Algorithm fusion and integrated innovation
Future optimization algorithms could consider the integration and fusion of multiple algorithms (including deep learning, particle swarm optimization, genetic algorithms, and other intelligent optimization techniques) to create a more efficient, flexible, intelligent, and high-performance hybrid optimization strategy. This strategy can not only be utilized to find certain engineering parameters but can also be applicable to a range of other optimization challenges and engineering domains.

Author Contributions

J.S.: methodology, writing—original draft preparation, and writing—review and editing; B.W.: writing—review and editing; X.H.: supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zheng, S.; Tan, J.; Jiang, C.; Li, L. Automated Multi-Modal Transformer Network (AMTNet) for 3D Medical Images Segmentation. Phys. Med. Biol. 2023, 68, 025014. [Google Scholar] [CrossRef] [PubMed]
  2. Babu, K. Comparison of PSO, AGA, SA and Memetic Algorithms for Surface Grinding Optimization. Appl. Mech. Mater. 2016, 852, 241–247. [Google Scholar] [CrossRef]
  3. Liao, Z.; Xu, C.; Chen, W.; Chen, Q.; Wang, F.; She, J. Effective Throughput Optimization of SAG Milling Process Based on BPNN and Genetic Algorithm. In Proceedings of the 2023 IEEE 6th International Conference on Industrial Cyber-Physical Systems (ICPS), Wuhan, China, 8–11 May 2023. [Google Scholar] [CrossRef]
  4. Chen, Z.; Li, X.; Wang, L.; Zhang, S.; Cao, Y.; Jiang, S.; Rong, Y. Development of a Hybrid Particle Swarm Optimization Algorithm for Multi-Pass Roller Grinding Process Optimization. Int. J. Adv. Manuf. Technol. 2018, 99, 97–112. [Google Scholar] [CrossRef]
  5. Ding, W.; Zhao, L.; Chen, M.; Cheng, J.; Chen, G.; Lei, H.; Liu, Z.; Geng, F.; Wang, S.; Xu, Q. Determination of Stress Waves and Their Effect on the Damage Extension Induced by Surface Defects of KDP Crystals under Intense Laser Irradiation. Optica 2023, 10, 671. [Google Scholar] [CrossRef]
  6. Mellal, M.A.; Williams, E.J. Parameter Optimization of Advanced Machining Processes Using Cuckoo Optimization Algorithm and Hoopoe Heuristic. J. Intell. Manuf. 2016, 27, 927–942. [Google Scholar] [CrossRef]
  7. Li, Z.; Grandhi, R.V.; Srinivasan, R. Distortion Minimization during Gas Quenching Process. J. Mater. Process. Technol. 2006, 172, 249–257. [Google Scholar] [CrossRef]
  8. Gholami, M.H.; Azizi, M.R. Constrained Grinding Optimization for Time, Cost, and Surface Roughness Using NSGA-II. Int. J. Adv. Manuf. Technol. 2014, 73, 981–988. [Google Scholar] [CrossRef]
  9. Wang, Z.; Dong, Z.; Ran, Y.; Kang, R. On Understanding the Mechanical Properties and Damage Behavior of CF/SiC Composites by Indentation Method. J. Mater. Res. Technol. 2023, 26, 3784–3802. [Google Scholar] [CrossRef]
  10. Huang, Q.; Zhao, B.; Qiu, Y.; Cao, Y.; Fu, Y.; Chen, Q.; Tang, M.; Deng, M.; Liu, G.; Ding, W. MOPSO Process Parameter Optimization in Ultrasonic Vibration-Assisted Grinding of Hardened Steel. Int. J. Adv. Manuf. Technol. 2023, 128, 903–914. [Google Scholar] [CrossRef]
  11. Li, K.; Yan, S.; Zhong, Y.; Pan, W.; Zhao, G. Multi-Objective Optimization of the Fiber-Reinforced Composite Injection Molding Process Using Taguchi Method, RSM, and NSGA-II. Simul. Model. Pract. Theory 2019, 91, 69–82. [Google Scholar] [CrossRef]
  12. Ding, W.; Cheng, J.; Zhao, L.; Wang, Z.; Yang, H.; Liu, Z.; Xu, Q.; Wang, J.; Geng, F.; Chen, M. Determination of Intrinsic Defects of Functional KDP Crystals with Flawed Surfaces and Their Effect on the Optical Properties. Nanoscale 2022, 14, 10041–10050. [Google Scholar] [CrossRef] [PubMed]
  13. Cao, Y.; Yin, J.; Ding, W.; Xu, J. Alumina Abrasive Wheel Wear in Ultrasonic Vibration-Assisted Creep-Feed Grinding of Inconel 718 Nickel-Based Superalloy. J. Mater. Process. Technol. 2021, 297, 117241. [Google Scholar] [CrossRef]
  14. Zhang, S.; Zhang, G.; Ran, Y.; Wang, Z.; Wang, W. Multi-Objective Optimization for Grinding Parameters of 20CrMnTiH Gear with Ceramic Microcrystalline Corundum. Materials 2019, 12, 1352. [Google Scholar] [CrossRef] [PubMed]
  15. Jiang, W.; Lv, L.; Xiao, Y.; Fu, X.; Deng, Z.; Yue, W. A Multi-Objective Modeling and Optimization Method for High Efficiency, Low Energy, and Economy. Int. J. Adv. Manuf. Technol. 2023, 128, 2483–2498. [Google Scholar] [CrossRef]
  16. Li, H.; Xu, L.; Li, J.; He, K.; Zhao, Y. Research on Grinding Parameters Optimization Method of CNC Grinding Machine Based on Black Hole-Continuous Ant Colony Algorithm. In Proceedings of the 2022 IEEE 6th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Beijing, China, 3–5 October 2022; pp. 1182–1186. [Google Scholar] [CrossRef]
  17. Liu, Z.; Li, X.; Wu, D.; Qian, Z.; Feng, P.; Rong, Y. The Development of a Hybrid Firefly Algorithm for Multi-Pass Grinding Process Optimization. J. Intell. Manuf. 2019, 30, 2457–2472. [Google Scholar] [CrossRef]
  18. Mellal, M.A.; Williams, E.J. Total Production Time Minimization of a Multi-Pass Milling Process via Cuckoo Optimization Algorithm. Int. J. Adv. Manuf. Technol. 2016, 87, 747–754. [Google Scholar] [CrossRef]
  19. Xing, C.; Ping, X.; Guo, R.; Zhang, H.; Yang, F.; Yu, M.; Yang, A.; Wang, Y. Machine Learning-Based Multi-Objective Optimization and Thermodynamic Evaluation of Organic Rankine Cycle (ORC) System for Vehicle Engine under Road Condition. Appl. Therm. Eng. 2023, 231, 120904. [Google Scholar] [CrossRef]
  20. Zhang, X.; Li, A.; Chen, J.; Ma, M.; Ding, P.; Huang, X.; Yu, T.; Zhao, J. Sustainability-Driven Optimization of Ultrasonic Atomization-Assisted Micro Milling Process with Ceramic Matrix Composite. Sustain. Mater. Technol. 2022, 33, e00465. [Google Scholar] [CrossRef]
  21. He, Z.; Shi, T.; Xuan, J.; Li, T. Research on Tool Wear Prediction Based on Temperature Signals and Deep Learning. Wear 2021, 478–479, 203902. [Google Scholar] [CrossRef]
  22. Chen, W.C.; Nguyen, M.H.; Chiu, W.H.; Chen, T.N.; Tai, P.H. Optimization of the Plastic Injection Molding Process Using the Taguchi Method, RSM, and Hybrid GA-PSO. Int. J. Adv. Manuf. Technol. 2016, 83, 1873–1886. [Google Scholar] [CrossRef]
  23. Chen, W.J.; Huang, C.K.; Yang, Q.Z.; Yang, Y.L. Optimal Prediction and Design of Surface Roughness for Cnc Turning of AL7075-T6 by Using the Taguchi Hybrid QPSO Algorithm. Trans. Can. Soc. Mech. Eng. 2016, 40, 883–895. [Google Scholar] [CrossRef]
  24. Ding, W.; Zhao, L.; Chen, M.; Cheng, J.; Yin, Z.; Liu, Q.; Chen, G.; Lei, H. Quantitative Identification of Deposited Energy in UV-Transmitted KDP Crystals from Perspectives of Electronic Defects, Atomic Structure and Sub-Bandgap Disturbance. J. Mater. Chem. C 2024, 12, 4699–4710. [Google Scholar] [CrossRef]
  25. Vamsi Krishna, M.; Anthony Xavior, M. A New Hybrid Approach to Optimize the End Milling Process for AL/SiC Composites Using RSM and GA. Indian J. Sci. Technol. 2016, 9, 1–7. [Google Scholar] [CrossRef]
  26. Yi, J.; Yuanguang, X.; Zhengjia, L. Optimization Design and Performance Evaluation of a Novel Asphalt Rejuvenator. Front. Mater. 2022, 9, 1081858. [Google Scholar] [CrossRef]
  27. Tao, T.; Li, L.; He, Q.; Wang, Y.; Guo, J. Mechanical Behavior of Bio-Inspired Honeycomb–Core Composite Sandwich Structures to Low-Velocity Dynamic Loading. Materials 2024, 17, 1191. [Google Scholar] [CrossRef] [PubMed]
  28. Li, C.; Xiao, Q.; Tang, Y.; Li, L. A Method Integrating Taguchi, RSM and MOPSO to CNC Machining Parameters Optimization for Energy Saving. J. Clean. Prod. 2016, 135, 263–275. [Google Scholar] [CrossRef]
  29. Lucas, J.M. How to Achieve a Robust Process Using Response Surface Methodology. J. Qual. Technol. 1994, 26, 248–260. [Google Scholar] [CrossRef]
  30. Masoudi, S.; Mirabdolahi, M.; Dayyani, M.; Jafarian, F.; Vafadar, A.; Dorali, M.R. Development of an Intelligent Model to Optimize Heat-Affected Zone, Kerf, and Roughness in 309 Stainless Steel Plasma Cutting by Using Experimental Results. Mater. Manuf. Process. 2019, 34, 345–356. [Google Scholar] [CrossRef]
  31. Guo, R.; Chen, M.; Wang, G.; Zhou, X. Milling Force Prediction and Optimization of Process Parameters in Micro-Milling of Glow Discharge Polymer. Int. J. Adv. Manuf. Technol. 2022, 122, 1293–1310. [Google Scholar] [CrossRef]
  32. Hussain, S.; Qazi, M.I.; Abas, M. Investigation and Optimization of Plasma Arc Cutting Process Parameters for AISI 304 by Integrating Principal Component Analysis and Composite Desirability Method. J. Braz. Soc. Mech. Sci. Eng. 2024, 46, 33. [Google Scholar] [CrossRef]
  33. Aravind, S.; Shunmugesh, K.; Akhil, K.T.; Pramod Kumar, M. Process Capability Analysis and Optimization in Turning of 11SMn30 Alloy. Mater. Today Proc. 2017, 4, 3608–3617. [Google Scholar] [CrossRef]
  34. Shehadeh, H.A.; Idris, M.Y.I.; Ahmedy, I. Multi-Objective Optimization Algorithm Based on Sperm Fertilization Procedure (MOSFP). Symmetry 2017, 9, 241. [Google Scholar] [CrossRef]
  35. Vishnu Vardhan, M.; Sankaraiah, G.; Yohan, M.; Jeevan Rao, H. Optimization of Parameters in CNC Milling of P20 Steel Using Response Surface Methodology and Taguchi Method. Mater. Today Proc. 2017, 4, 9163–9169. [Google Scholar] [CrossRef]
  36. Mohamed, L.; Christie, M.; Demyanov, V. Comparison of Stochastic Sampling Algorithms for Uncertainty Quantification. SPE J. 2010, 15, 31–38. [Google Scholar] [CrossRef]
  37. Bousnina, K.; Hamza, A.; Ben Yahia, N. An Energy Survey to Optimize the Technological Parameters during the Milling of AISI 304L Steel Using the RSM, ANN and Genetic Algorithm. Adv. Mater. Process. Technol. 2023, 1–19. [Google Scholar] [CrossRef]
  38. Godreau, V.; Ritou, M.; de Castelbajac, C.; Furet, B. Diagnosis of Spindle Failure by Unsupervised Machine Learning from in-Process Monitoring Data in Machining. Int. J. Adv. Manuf. Technol. 2024, 131, 749–759. [Google Scholar] [CrossRef]
  39. Soorya Prakash, K.; Gopal, P.M.; Karthik, S. Multi-Objective Optimization Using Taguchi Based Grey Relational Analysis in Turning of Rock Dust Reinforced Aluminum MMC. Meas. J. Int. Meas. Confed. 2020, 157, 107664. [Google Scholar] [CrossRef]
  40. Gu, P.; Zhu, C.; Sun, Y.; Wang, Z.; Tao, Z.; Shi, Z. Surface Roughness Prediction of SiCp/Al Composites in Ultrasonic Vibration-Assisted Grinding. J. Manuf. Process. 2023, 101, 687–700. [Google Scholar] [CrossRef]
  41. Gu, P.; Zhu, C.; Wu, Y.; Mura, A. Energy Consumption Prediction Model of SiCp/Al Composite in Grinding Based on PSO-BP Neural Network. Solid State Phenom. 2020, 305, 163–168. [Google Scholar] [CrossRef]
  42. Chen, X.; Tianfield, H.; Mei, C.; Du, W.; Liu, G. Biogeography-Based Learning Particle Swarm Optimization. Soft Comput. 2017, 21, 7519–7541. [Google Scholar] [CrossRef]
  43. Adalarasan, R.; Santhanakumar, M.; Rajmohan, M. Application of Grey Taguchi-Based Response Surface Methodology (GT-RSM) for Optimizing the Plasma Arc Cutting Parameters of 304L Stainless Steel. Int. J. Adv. Manuf. Technol. 2015, 78, 1161–1170. [Google Scholar] [CrossRef]
  44. Chen, F.; Xu, Z.; Yang, X. Capacity Optimization Configuration of Grid-Connected Microgrid Considering Green Certificate Trading Mechanism. In Proceedings of the 2023 26th International Conference on Electrical Machines and Systems, ICEMS 2023, Zhuhai, China, 5–8 November 2023. [Google Scholar]
  45. Yin, P.; Peng, M. Station Layout Optimization and Route Selection of Urban Rail Transit Planning: A Case Study of Shanghai Pudong International Airport. Mathematics 2023, 11, 1539. [Google Scholar] [CrossRef]
  46. Duarte, A.; Carrão, L.; Espanha, M.; Viana, T.; Freitas, D.; Bártolo, P.; Faria, P.; Almeida, H.A. Segmentation Algorithms for Thermal Images. Procedia Technol. 2014, 16, 1560–1569. [Google Scholar] [CrossRef]
  47. Wen, C.; Xia, B.; Liu, X. Solution of Second Order Ackley Function Based on SAPSO Algorithm. In Proceedings of the 2017 3rd IEEE International Conference on Control Science and Systems Engineering, ICCSSE 2017, Beijing, China, 17–19 August 2017. [Google Scholar]
  48. Dong, Y.; Huo, L.; Zhao, L. An Improved Two-Layer Model for Rumor Propagation Considering Time Delay and Event-Triggered Impulsive Control Strategy. Chaos Solitons Fractals 2022, 164, 112711. [Google Scholar] [CrossRef]
  49. Li, F.; Li, Y.; Yan, C.; Ma, C.; Liu, C.; Suo, Q. Swing Speed Control Strategy of Fuzzy PID Roadheader Based on PSO-BP Algorithm. In Proceedings of the IEEE 6th Information Technology and Mechatronics Engineering Conference, ITOEC 2022, Chongqing, China, 4–6 March 2022. [Google Scholar]
  50. Zhang, Q.; Wen, X.; Na, D. Research on Intelligent Bus Scheduling Based on QPSO Algorithm. In Proceedings of the 8th International Conference on Intelligent Computation Technology and Automation, ICICTA 2015, Nanchang, China, 14–15 June 2015. [Google Scholar]
  51. Gao, X.; Tian, Y.; Jiao, J.; Li, C.; Gao, J. Non-Destructive Measurements of Thickness and Elastic Constants of Plate Structures Based on Lamb Waves and Particle Swarm Optimization. Meas. J. Int. Meas. Confed. 2022, 204, 111981. [Google Scholar] [CrossRef]
  52. Cheng, P.; Xu, Z.; Li, R.; Shi, C. A Hybrid Taguchi Particle Swarm Optimization Algorithm for Reactive Power Optimization of Deep-Water Semi-Submersible Platforms with New Energy Sources. Energies 2022, 15, 4565. [Google Scholar] [CrossRef]
  53. Yang, Z.; Li, N.; Zhang, Y.; Li, J. Mobile Robot Path Planning Based on Improved Particle Swarm Optimization and Improved Dynamic Window Approach. J. Robot. 2023, 2023, 6619841. [Google Scholar] [CrossRef]
  54. Thepsonthi, T.; Özel, T. Multi-Objective Process Optimization for Micro-End Milling of Ti-6Al-4V Titanium Alloy. Int. J. Adv. Manuf. Technol. 2012, 63, 903–914. [Google Scholar] [CrossRef]
  55. Zhang, S.; Wang, W.; Jiang, R.; Xiong, Y.; Huang, B.; Wang, J. Multi-Objective Optimization for the Machining Performance during Ultrasonic Vibration-Assisted Helical Grinding Hole of Thin-Walled CF/BMI Composite Laminates. Thin Walled Struct. 2023, 192, 111086. [Google Scholar] [CrossRef]
  56. Osorio-Pinzon, J.C.; Abolghasem, S.; Marañon, A.; Casas-Rodriguez, J.P. Cutting Parameter Optimization of Al-6063-O Using Numerical Simulations and Particle Swarm Optimization. Int. J. Adv. Manuf. Technol. 2020, 111, 2507–2532. [Google Scholar] [CrossRef]
  57. Faisal, N.; Kumar, K. Utilization of Particle Swarm Optimization Technique for Process Parameter Optimization in Electrical Discharge Machining. Adv. Mater. Manuf. Charact. 2018, 8, 58–68. [Google Scholar]
  58. Sen, B.; Debnath, S.; Bhowmik, A. Sustainable Machining of Superalloy in Minimum Quantity Lubrication Environment: Leveraging GEP-PSO Hybrid Optimization Algorithm. Int. J. Adv. Manuf. Technol. 2024, 130, 4575–4601. [Google Scholar] [CrossRef]
  59. Bishnoi, S.; Hooda, B.K. Decision Tree Algorithms and Their Applicability in Agriculture for Classification. J. Exp. Agric. Int. 2022, 44, 20–27. [Google Scholar] [CrossRef]
  60. Mrva, J.; Neupauer, S.; Hudec, L.; Sevcech, J.; Kapec, P. Decision Support in Medical Data Using 3D Decision Tree Visualisation. In Proceedings of the 2019 7th E-Health and Bioengineering Conference, EHB 2019, Iasi, Romania, 21–23 November 2019. [Google Scholar]
  61. Damanik, I.S.; Windarto, A.P.; Wanto, A.; Poningsih; Andani, S.R.; Saputra, W. Decision Tree Optimization in C4.5 Algorithm Using Genetic Algorithm. Proc. J. Phys. Conf. Ser. 2019, 1255, 012012. [Google Scholar] [CrossRef]
  62. Barros, R.C.; Basgalupp, M.P.; De Carvalho, A.C.P.L.F.; Freitas, A.A. A Survey of Evolutionary Algorithms for Decision-Tree Induction. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 2012, 42, 291–312. [Google Scholar] [CrossRef]
  63. Mahesh, B. Machine Learning Algorithms-a Review. Int. J. Sci. Res. IJSR Internet 2020, 9, 381–386. [Google Scholar] [CrossRef]
  64. Charbuty, B.; Abdulazeez, A. Classification Based on Decision Tree Algorithm for Machine Learning. J. Appl. Sci. Technol. Trends 2021, 2, 20–28. [Google Scholar] [CrossRef]
  65. Tso, G.K.F.; Yau, K.K.W. Predicting Electricity Energy Consumption: A Comparison of Regression Analysis, Decision Tree and Neural Networks. Energy 2007, 32, 1761–1768. [Google Scholar] [CrossRef]
  66. Singh, S.; Gupta, P. Comparative Study of ID3, CART and C4.5 Decision Tree Algorithms: A Survey. Int. J. Adv. Inf. Sci. Technol. 2014, 27, 97–103. [Google Scholar]
  67. Loh, W.Y. Fifty Years of Classification and Regression Trees. Int. Stat. Rev. 2014, 82, 329–348. [Google Scholar] [CrossRef]
  68. Jiao, S.R.; Song, J.; Liu, B. A Review of Decision Tree Classification Algorithms for Continuous Variables. Proc. J. Phys. Conf. Ser. 2020, 1651, 012083. [Google Scholar] [CrossRef]
  69. Taneja, S.; Gupta, C.; Goyal, K.; Gureja, D. An Enhanced K-Nearest Neighbor Algorithm Using Information Gain and Clustering. In Proceedings of the International Conference on Advanced Computing and Communication Technologies, ACCT, Rohtak, India, 8–9 February 2014. [Google Scholar]
  70. Wen, J. The Investigation of Prediction for Stroke Using Multiple Machine Learning Models. Highlights Sci. Eng. Technol. 2024, 81, 143–152. [Google Scholar] [CrossRef]
  71. Mittal, K.; Khanduja, D.; Tewari, P.C. An Insight into “Decision Tree Analysis”. World Wide J. Multidiscip. Res. Dev. 2017, 3, 111–115. [Google Scholar]
  72. Priyanka; Kumar, D. Decision Tree Classifier: A Detailed Survey. Int. J. Inf. Decis. Sci. 2020, 12, 246. [Google Scholar] [CrossRef]
  73. Dai, A.; Zhou, X.; Dang, H.; Sun, M.; Wu, Z. Intelligent Modeling Method for a Combined Radiation-Convection Grain Dryer: A Support Vector Regression Algorithm Based on an Improved Particle Swarm Optimization Algorithm. IEEE Access 2018, 6, 14285–14297. [Google Scholar] [CrossRef]
  74. Zhang, Y. Glucose Prediction Based on the Recurrent Neural Network Model. In Proceedings of the 2023 International Conference on Intelligent Supercomputing and BioPharma, ISBP 2023, Zhuhai, China, 6–8 January 2023. [Google Scholar]
  75. Yuazhi, Z.; Guozheng, Z. Prediction of Surface Roughness and Optimization of Process Parameters for Efficient Cutting of Aluminum Alloy. Adv. Mech. Eng. 2024, 16, 16878132231197906. [Google Scholar] [CrossRef]
  76. Li, D.; You, S.; Liao, Q.; Sheng, M.; Tian, S. Prediction of Shale Gas Production by Hydraulic Fracturing in Changning Area Using Machine Learning Algorithms. Transp. Porous Media 2023, 149, 373–388. [Google Scholar] [CrossRef]
  77. Lu, Y.J.; Lee, W.C.; Wang, C.H. Using Data Mining Technology to Explore Causes of Inaccurate Reliability Data and Suggestions for Maintenance Management. J. Loss Prev. Process. Ind. 2023, 83, 105063. [Google Scholar] [CrossRef]
  78. Ren, X.; Fan, J.; Pan, R.; Sun, K. Modeling and Process Parameter Optimization of Laser Cutting Based on Artificial Neural Network and Intelligent Optimization Algorithm. Int. J. Adv. Manuf. Technol. 2023, 127, 1177–1188. [Google Scholar] [CrossRef]
  79. Wang, W.; Tian, G.; Chen, M.; Tao, F.; Zhang, C.; AI-Ahmari, A.; Li, Z.; Jiang, Z. Dual-Objective Program and Improved Artificial Bee Colony for the Optimization of Energy-Conscious Milling Parameters Subject to Multiple Constraints. J. Clean. Prod. 2020, 245, 118714. [Google Scholar] [CrossRef]
  80. Tao, H.; Abba, S.I.; Al-Areeq, A.M.; Tangang, F.; Samantaray, S.; Sahoo, A.; Siqueira, H.V.; Maroufpoor, S.; Demir, V.; Dhanraj Bokde, N.; et al. Hybridized Artificial Intelligence Models with Nature-Inspired Algorithms for River Flow Modeling: A Comprehensive Review, Assessment, and Possible Future Research Directions. Eng. Appl. Artif. Intell. 2024, 129, 107559. [Google Scholar] [CrossRef]
  81. Usman, A.G.; Işik, S.; Abba, S.I. Hybrid Data-Intelligence Algorithms for the Simulation of Thymoquinone in HPLC Method Development. J. Iran. Chem. Soc. 2021, 18, 1537–1549. [Google Scholar] [CrossRef]
  82. Rathnayake, N.; Rathnayake, U.; Dang, T.L.; Hoshino, Y. Water Level Prediction Using Soft Computing Techniques: A Case Study in the Malwathu Oya, Sri Lanka. PLoS ONE 2023, 18, e0282847. [Google Scholar] [CrossRef]
  83. Pham, Q.B.; Abba, S.I.; Usman, A.G.; Linh, N.T.T.; Gupta, V.; Malik, A.; Costache, R.; Vo, N.D.; Tri, D.Q. Potential of Hybrid Data-Intelligence Algorithms for Multi-Station Modelling of Rainfall. Water Resour. Manag. 2019, 33, 5067–5087. [Google Scholar] [CrossRef]
  84. Yaseen, Z.M.; Mohtar, W.H.M.W.; Ameen, A.M.S.; Ebtehaj, I.; Razali, S.F.M.; Bonakdari, H.; Salih, S.Q.; Al-Ansari, N.; Shahid, S. Implementation of Univariate Paradigm for Streamflow Simulation Using Hybrid Data-Driven Model: Case Study in Tropical Region. IEEE Access 2019, 7, 74471–74481. [Google Scholar] [CrossRef]
  85. Lin, Z.C.; Liu, C.Y. Analysis and Application of the Adaptive Neuro-Fuzzy Inference System in Prediction of CMP Machining Parameters. Int. J. Comput. Appl. Technol. 2003, 17, 80. [Google Scholar] [CrossRef]
  86. Phate, M.; Bendale, A.; Toney, S.; Phate, V. Prediction and Optimization of Tool Wear Rate during Electric Discharge Machining of Al/Cu/Ni Alloy Using Adaptive Neuro-Fuzzy Inference System. Heliyon 2020, 6, e05308. [Google Scholar] [CrossRef] [PubMed]
  87. Jović, S.; Arsić, N.; Vukojević, V.; Anicic, O.; Vujičić, S. Determination of the Important Machining Parameters on the Chip Shape Classification by Adaptive Neuro-Fuzzy Technique. Precis. Eng. 2017, 48, 18–23. [Google Scholar] [CrossRef]
  88. Bengio, Y. Learning Deep Architectures for AI. In Foundations and Trends® in Machine Learning; Now Publishers Inc.: Hanover, MA, USA, 2009; Volume 2. [Google Scholar] [CrossRef]
  89. Wegayehu, E.B.; Muluneh, F.B. Multivariate Streamflow Simulation Using Hybrid Deep Learning Models. Comput. Intell. Neurosci. 2021, 2021, 5172658. [Google Scholar] [CrossRef]
  90. Zakhrouf, M.; Hamid, B.; Kim, S.; Madani, S. Novel Insights for Streamflow Forecasting Based on Deep Learning Models Combined the Evolutionary Optimization Algorithm. Phys. Geogr. 2023, 44, 31–54. [Google Scholar] [CrossRef]
  91. Dahou, A.; Abd Elaziz, M.; Chelloug, S.A.; Awadallah, M.A.; Al-Betar, M.A.; Al-qaness, M.A.A.; Forestiero, A. Intrusion Detection System for IoT Based on Deep Learning and Modified Reptile Search Algorithm. Comput. Intell. Neurosci. 2022, 2022, 6473507. [Google Scholar] [CrossRef]
  92. Haznedar, B.; Kilinc, H.C.; Ozkan, F.; Yurtsever, A. Streamflow Forecasting Using a Hybrid LSTM-PSO Approach: The Case of Seyhan Basin. Nat. Hazards 2023, 117, 681–701. [Google Scholar] [CrossRef]
  93. Yang, R.; Singh, S.K.; Tavakkoli, M.; Amiri, N.; Yang, Y.; Karami, M.A.; Rai, R. CNN-LSTM Deep Learning Architecture for Computer Vision-Based Modal Frequency Detection. Mech. Syst. Signal Process. 2020, 144, 106885. [Google Scholar] [CrossRef]
  94. Li, W.; Li, B.; He, S.; Mao, X.; Qiu, C.; Qiu, Y.; Tan, X. A Novel Milling Parameter Optimization Method Based on Improved Deep Reinforcement Learning Considering Machining Cost. J. Manuf. Process. 2022, 84, 1362–1375. [Google Scholar] [CrossRef]
  95. Wu, P.; He, Y.; Li, Y.; He, J.; Liu, X.; Wang, Y. Multi-Objective Optimisation of Machining Process Parameters Using Deep Learning-Based Data-Driven Genetic Algorithm and TOPSIS. J. Manuf. Syst. 2022, 64, 40–52. [Google Scholar] [CrossRef]
  96. Jiang, Y.; Chen, J.; Zhou, H.; Yang, J.; Hu, P.; Wang, J. Contour Error Modeling and Compensation of CNC Machining Based on Deep Learning and Reinforcement Learning. Int. J. Adv. Manuf. Technol. 2022, 118, 551–570. [Google Scholar] [CrossRef]
  97. Serin, G.; Sener, B.; Ozbayoglu, A.M.; Unver, H.O. Review of Tool Condition Monitoring in Machining and Opportunities for Deep Learning. Int. J. Adv. Manuf. Technol. 2020, 109, 953–974. [Google Scholar] [CrossRef]
  98. Li, R.; Wang, Z.; Yan, J. Multi-Objective Optimization of the Process Parameters of a Grinding Robot Using LSTM-MLP-NSGAII. Machines 2023, 11, 882. [Google Scholar] [CrossRef]
  99. Deng, X. Multiconstraint Fuzzy Prediction Analysis Improved the Algorithm in Internet of Things. Wirel. Commun. Mob. Comput. 2021, 2021, 5499173. [Google Scholar] [CrossRef]
  100. Li, C.; Jiao, F.; Ma, X.; Niu, Y.; Tong, J. Dressing Principle and Parameter Optimization of Ultrasonic-Assisted Diamond Roller Dressing WA Grinding Wheel Using Response Surface Methodology and Genetic Algorithm. Int. J. Adv. Manuf. Technol. 2024, 131, 2551–2568. [Google Scholar] [CrossRef]
  101. Manav, O.; Chinchanikar, S. Multi-Objective Optimization of Hard Turning: A Genetic Algorithm Approach. Mater. Today Proc. 2018, 5, 12240–12248. [Google Scholar] [CrossRef]
  102. Padhi, P.C.; Mahapatra, S.S.; Yadav, S.N.; Tripathy, D.K. Multi-Objective Optimization of Wire Electrical Discharge Machining (WEDM) Process Parameters Using Weighted Sum Genetic Algorithm Approach. J. Adv. Manuf. Syst. 2016, 15, 85–100. [Google Scholar] [CrossRef]
  103. Huang, S.; Fu, Z.; Liu, C.; Li, J. Multi-Objective Optimization of Fiber Laser Cutting Quality Characteristics of Glass Fiber Reinforced Plastic (GFRP) Materials. Opt. Laser Technol. 2023, 167, 109720. [Google Scholar] [CrossRef]
  104. Zhao, Y.; Cui, L.; Sivalingam, V.; Sun, J. Understanding Machining Process Parameters and Optimization of High-Speed Turning of NiTi SMA Using Response Surface Method (RSM) and Genetic Algorithm (GA). Materials 2023, 16, 5786. [Google Scholar] [CrossRef]
  105. Meng, M.; Zhou, C.; Lv, Z.; Zheng, L.; Feng, W.; Wu, T.; Zhang, X. Research on a Method of Robot Grinding Force Tracking and Compensation Based on Deep Genetic Algorithm. Machines 2023, 11, 1075. [Google Scholar] [CrossRef]
  106. Xiao, G.; Gao, H.; Zhang, Y.; Zhu, B.; Huang, Y. An Intelligent Parameters Optimization Method of Titanium Alloy Belt Grinding Considering Machining Efficiency and Surface Quality. Int. J. Adv. Manuf. Technol. 2023, 125, 513–527. [Google Scholar] [CrossRef]
  107. Mallick, B.; Sarkar, B.R.; Doloi, B.; Bhattacharyya, B. Analysis on the Effect of ECDM Process Parameters during Micro-Machining of Glass Using Genetic Algorithm. J. Mech. Eng. Sci. 2018, 12, 3942–3960. [Google Scholar] [CrossRef]
  108. Sharma, A.; Chaturvedi, R.; Sharma, K.; Saraswat, M. Force Evaluation and Machining Parameter Optimization in Milling of Aluminium Burr Composite Based on Response Surface Method. Adv. Mater. Process. Technol. 2022, 8, 4073–4094. [Google Scholar] [CrossRef]
  109. Murthy, B.R.N.; Rao, U.S.; Naik, N.; Potti, S.R.; Nambiar, S.S. A Study to Investigate the Influence of Machining Parameters on Delamination in the Abrasive Waterjet Machining of Jute-Fiber-Reinforced Polymer Composites: An Integrated Taguchi and Response Surface Methodology (RSM) Optimization to Minimize Delamination. J. Compos. Sci. 2023, 7, 475. [Google Scholar] [CrossRef]
  110. Jiang, E.; Yue, Q.; Xu, J.; Fan, C.; Song, G.; Yuan, X.; Ma, Y.; Yu, X.; Yang, P.; Feng, P.; et al. A Wear Testing Method of Straight Blade Tools for Nomex Honeycomb Composites Machining. Wear 2024, 546–547, 205325. [Google Scholar] [CrossRef]
  111. Camposeco-Negrete, C. Optimization of Cutting Parameters for Minimizing Energy Consumption in Turning of AISI 6061 T6 Using Taguchi Methodology and ANOVA. J. Clean. Prod. 2013, 53, 195–203. [Google Scholar] [CrossRef]
  112. Pan, Y.; Zhou, P.; Yan, Y.; Agrawal, A.; Wang, Y.; Guo, D.; Goel, S. New Insights into the Methods for Predicting Ground Surface Roughness in the Age of Digitalisation. Precis. Eng. 2021, 67, 393–418. [Google Scholar] [CrossRef]
  113. Kwak, J.S. Application of Taguchi and Response Surface Methodologies for Geometric Error in Surface Grinding Process. Int. J. Mach. Tools Manuf. 2005, 45, 327–334. [Google Scholar] [CrossRef]
  114. Khalilpourazari, S.; Khalilpourazary, S. Optimization of Time, Cost and Surface Roughness in Grinding Process Using a Robust Multi-Objective Dragonfly Algorithm. Neural Comput. Appl. 2020, 32, 3987–3998. [Google Scholar] [CrossRef]
  115. Bijukumar, B.; Chakkarapani, M.; Ganesan, S.I. On the Importance of Blocking Diodes in Thermoelectric Generator Arrays and Their Effect on MPPs Under Temperature Mismatch Conditions. IEEE Trans. Energy Convers. 2023, 38, 2730–2743. [Google Scholar] [CrossRef]
  116. Li, Z.; Zhang, F.; Luo, X.; Chang, W.; Cai, Y.; Zhong, W.; Ding, F. Material Removal Mechanism of Laser-Assisted Grinding of RB-SiC Ceramics and Process Optimization. J. Eur. Ceram. Soc. 2019, 39, 705–717. [Google Scholar] [CrossRef]
  117. Alajmi, M.S.; Alfares, F.S.; Alfares, M.S. Selection of Optimal Conditions in the Surface Grinding Process Using the Quantum Based Optimisation Method. J. Intell. Manuf. 2019, 30, 1469–1481. [Google Scholar] [CrossRef]
  118. Wang, D.A.; Lin, Y.C.; Chow, H.M.; Fan, S.F.; Wang, A.C. Optimization of Machining Parameters Using EDM in Gas Media Based on Taguchi Method. Adv. Mater. Res. 2012, 459, 170–175. [Google Scholar] [CrossRef]
  119. Kucukoglu, A.; Yuce, C.; Sozer, I.E.; Karpat, F. Multi-Response Optimization for Laser Transmission Welding of PMMA to ABS Using Taguchi-Based TOPSIS Method. Adv. Mech. Eng. 2023, 15, 16878132231193260. [Google Scholar] [CrossRef]
  120. Abdul Shukor, J.; Said, S.; Harun, R.; Husin, S.; Kadir, A. Optimising of Machining Parameters of Plastic Material Using Taguchi Method. Adv. Mater. Process. Technol. 2016, 2, 50–56. [Google Scholar] [CrossRef]
  121. Saravanan, K.; Francis Xavier, J.; Sudeshkumar, M.P.; Maridurai, T.; Suyamburajan, V.; Jayaseelan, V. Optimization of SiC Abrasive Parameters on Machining of Ti-6Al-4V Alloy in AJM Using Taguchi-Grey Relational Method. Silicon 2022, 14, 997–1004. [Google Scholar] [CrossRef]
  122. Fan, M.; Zhou, X.; Chen, S.; Jiang, S.; Song, J. Study of the Surface Roughness and Optimization of Machining Parameters during Laser-Assisted Fast Tool Servo Machining of Glass-Ceramic. Surf. Topogr. Metrol. Prop. 2023, 11, 025017. [Google Scholar] [CrossRef]
  123. Pradhan, B.B.; Tiwary, A.P.; Masanta, M.; Bhattacharyya, B. Investigation of Micro-Electro-Discharge Machining Process Parameters during Machining of M2 Hardened Die-Steel. Proc. Inst. Mech. Eng. Part E J. Process Mech. Eng. 2024, 09544089241234626. [Google Scholar] [CrossRef]
  124. Siva Prasad, K.; Chaitanya, G. Optimization of Process Parameters on Surface Roughness during Drilling of GFRP Composites Using Taguchi Technique. Mater. Today Proc. 2020, 39, 1553–1558. [Google Scholar] [CrossRef]
  125. Song, H.; Dan, J.; Li, J.; Du, J.; Xiao, J.; Xu, J. Experimental Study on the Cutting Force during Laser-Assisted Machining of Fused Silica Based on the Taguchi Method and Response Surface Methodology. J. Manuf. Process. 2019, 38, 9–20. [Google Scholar] [CrossRef]
  126. Kilickap, E. Determination of Optimum Parameters on Delamination in Drilling of GFRP Composites by Taguchi Method. Indian J. Eng. Mater. Sci. 2010, 17, 265–274. [Google Scholar]
  127. Sarikaya, M.; Güllü, A. Taguchi Design and Response Surface Methodology Based Analysis of Machining Parameters in CNC Turning under MQL. J. Clean. Prod. 2014, 65, 604–616. [Google Scholar] [CrossRef]
  128. Gill, J.S.; Singh, P.; Singh, L. Taguchi’s Design Optimization for Finishing of Plane Surface with Diamond-Based Sintered Magnetic Abrasives. Eng. Res. Express 2022, 4, 035004. [Google Scholar] [CrossRef]
  129. Rao, R.V. Modeling and Optimization of Modern Machining Processes. In Advanced Modeling and Optimization of Manufacturing Process; Springer Series in Advanced Manufacturing; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  130. Simsek, S.; Uslu, S.; Simsek, H.; Uslu, G. Multi-Objective-Optimization of Process Parameters of Diesel Engine Fueled with biodiesel/2-Ethylhexyl Nitrate by Using Taguchi Method. Energy 2021, 231, 120866. [Google Scholar] [CrossRef]
  131. Krishna Madhavi, S.; Sreeramulu, D.; Venkatesh, M. Evaluation of Optimum Turning Process of Process Parameters Using DOE and PCA Taguchi Method. Mater. Today Proc. 2017, 4, 1937–1946. [Google Scholar] [CrossRef]
  132. Latchoumi, T.P.; Balamurugan, K.; Dinesh, K.; Ezhilarasi, T.P. Particle Swarm Optimization Approach for Waterjet Cavitation Peening. Meas. J. Int. Meas. Confed. 2019, 141, 184–189. [Google Scholar] [CrossRef]
  133. Diyaley, S.; Das, P.P. Metaheuristic-Based Parametric Optimization of Abrasive Water-Jet Machining Process—A Comparative Analysis. In International Conference on Production and Industrial Engineering; Lecture Notes in Mechanical Engineering; Springer Nature: Berlin/Heidelberg, Germany, 2024. [Google Scholar]
  134. Sahu, A.K.; Mahapatra, S.S.; Thomas, J.; Patterson, A.E.; Leite, M.; Goel, S. Optimization of Electro-Discharge Machining Process Using Rapid Tool Electrodes via Metaheuristic Algorithms. J. Braz. Soc. Mech. Sci. Eng. 2023, 45, 470. [Google Scholar] [CrossRef]
  135. Feng, Q.; Li, Q.; Quan, W.; Pei, X.M. Overview of Multiobjective Particle Swarm Optimization Algorithm. Gongcheng Kexue Xuebao Chin. J. Eng. 2021, 43, 745–753. [Google Scholar] [CrossRef]
  136. Huang, H.; Qiu, J.; Riedl, K. On the Global Convergence of Particle Swarm Optimization Methods. Appl. Math. Optim. 2023, 88, 30. [Google Scholar] [CrossRef]
  137. Khalilpourazari, S.; Khalilpourazary, S. SCWOA: An Efficient Hybrid Algorithm for Parameter Optimization of Multi-Pass Milling Process. J. Ind. Prod. Eng. 2018, 35, 135–147. [Google Scholar] [CrossRef]
  138. Zheng, J.; Chen, Y.; Pan, H.; Tong, J. Composite Multi-Scale Phase Reverse Permutation Entropy and Its Application to Fault Diagnosis of Rolling Bearing. Nonlinear Dyn. 2023, 111, 459–479. [Google Scholar] [CrossRef]
  139. Guo, C.; Chen, X.; Li, Q.; Ding, G.; Yue, H.; Zhang, J. Milling Optimization of GH4169 Nickel-Based Superalloy under Minimal Quantity Lubrication Condition Based on Multi-Objective Particle Swarm Optimization Algorithm. Int. J. Adv. Manuf. Technol. 2022, 123, 3983–3994. [Google Scholar] [CrossRef]
  140. Pramanik, D.; Roy, N.; Kuar, A.S.; Sarkar, S.; Mitra, S. Experimental Investigation of Sawing Approach of Low Power Fiber Laser Cutting of Titanium Alloy Using Particle Swarm Optimization Technique. Opt. Laser Technol. 2022, 147, 107613. [Google Scholar] [CrossRef]
  141. Chen, T.J.; Zheng, Y.L. Modified Multiobjective Dynamic Multi-Swarm Particle Swarm Optimization for Mineral Grinding Process. Adv. Mater. Res. 2014, 971–973, 1242–1246. [Google Scholar] [CrossRef]
  142. Lin, C.J.; Jhang, J.Y.; Young, K.Y. Parameter Selection and Optimization of an Intelligent Ultrasonic-Assisted Grinding System for SiC Ceramics. IEEE Access 2020, 8, 195721–195732. [Google Scholar] [CrossRef]
Figure 1. Design process of response surface method [27,28].
Figure 2. Design process of genetic algorithm.
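The loop in Figure 2 (initialize a population, evaluate fitness, select parents, apply crossover and mutation, repeat) can be sketched as a minimal real-coded genetic algorithm. The test function, parameter values, and operator choices below are illustrative assumptions, not taken from any of the reviewed studies:

```python
import random

def genetic_algorithm(fitness, bounds, pop_size=30, generations=60,
                      crossover_rate=0.9, mutation_rate=0.1):
    """Minimal real-coded GA: initialize -> evaluate -> select ->
    crossover -> mutate -> repeat (a sketch, not a production solver)."""
    lo, hi = bounds
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Tournament selection: keep the fitter of two random parents.
            a, b = random.sample(pop, 2)
            return a if fitness(a) < fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            # Arithmetic crossover blends the two parents.
            if random.random() < crossover_rate:
                w = random.random()
                child = w * p1 + (1 - w) * p2
            else:
                child = p1
            # Gaussian mutation, clipped back into the feasible range.
            if random.random() < mutation_rate:
                child += random.gauss(0, 0.1 * (hi - lo))
            children.append(min(hi, max(lo, child)))
        pop = children
    return min(pop, key=fitness)

random.seed(0)
best = genetic_algorithm(lambda x: (x - 2.0) ** 2, bounds=(-10, 10))
```

In a machining context the fitness function would instead be a fitted response model (e.g., surface roughness versus speed and feed) with the decision variable encoding the process parameters.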
Figure 3. Schematic diagram of birds foraging for food.
Figure 4. Design process of particle swarm optimization algorithm.
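The flow in Figure 4 corresponds to the canonical PSO update: each particle's velocity combines an inertia term, a pull toward its personal best, and a pull toward the swarm's global best. A minimal one-dimensional sketch follows; the cost function and coefficient values are illustrative assumptions:

```python
import random

def pso(cost, bounds, n_particles=20, iters=80, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO: init swarm -> evaluate -> update personal/global
    bests -> update velocities and positions (a sketch only)."""
    lo, hi = bounds
    x = [random.uniform(lo, hi) for _ in range(n_particles)]
    v = [0.0] * n_particles
    pbest = x[:]                      # personal best positions
    gbest = min(x, key=cost)          # global best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            # Canonical update: inertia + cognitive + social terms.
            v[i] = (w * v[i]
                    + c1 * r1 * (pbest[i] - x[i])
                    + c2 * r2 * (gbest - x[i]))
            x[i] = min(hi, max(lo, x[i] + v[i]))
            if cost(x[i]) < cost(pbest[i]):
                pbest[i] = x[i]
                if cost(x[i]) < cost(gbest):
                    gbest = x[i]
    return gbest

random.seed(1)
best = pso(lambda x: (x + 3.0) ** 2, bounds=(-10, 10))
```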
Figure 5. Calculation process of hybrid particle swarm optimization algorithm [4]. Reprinted with permission from Ref. [4]; 2024, Int. J. Adv. Manuf. Technol.
Figure 6. Steps in the process of generating decision trees.
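The splitting step in the tree-generation process of Figure 6 is typically driven by an impurity measure; ID3/C4.5-style algorithms pick the feature with the largest information gain (entropy reduction). A minimal sketch on toy categorical data (feature and label names are illustrative):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, feature):
    """Entropy reduction from splitting the data on one categorical feature."""
    gain = entropy(labels)
    n = len(labels)
    for value in set(r[feature] for r in rows):
        subset = [y for r, y in zip(rows, labels) if r[feature] == value]
        gain -= len(subset) / n * entropy(subset)
    return gain

# Toy data: 'outlook' separates the classes perfectly, 'windy' not at all.
rows = [{"outlook": "sunny", "windy": "no"},
        {"outlook": "sunny", "windy": "yes"},
        {"outlook": "rain",  "windy": "no"},
        {"outlook": "rain",  "windy": "yes"}]
labels = ["play", "play", "stop", "stop"]
best = max(("outlook", "windy"), key=lambda f: information_gain(rows, labels, f))
```

The tree grower would split on `best`, then recurse on each branch until the leaves are pure or a stopping rule fires.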
Figure 7. Example of the application of the decision tree algorithm [64]. Reprinted with permission from Ref. [64]; 2024, J. Appl. Sci. Technol. Trends.
Figure 8. Process of naive Bayesian algorithm.
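Figure 8's flow (estimate class priors and per-class feature likelihoods from training data, then predict the class with the highest posterior) can be sketched for categorical features. The data, feature names, and Laplace-smoothing variant below are illustrative assumptions:

```python
import math
from collections import Counter, defaultdict

def train_nb(rows, labels):
    """Fit class priors and per-class feature-value counts."""
    priors = Counter(labels)
    counts = defaultdict(Counter)           # (class, feature) -> value counts
    for row, y in zip(rows, labels):
        for f, v in row.items():
            counts[(y, f)][v] += 1
    return priors, counts

def predict_nb(model, row):
    """Pick the class maximizing log P(class) + sum of log P(value | class),
    with simple add-one smoothing so unseen values get nonzero mass."""
    priors, counts = model
    n = sum(priors.values())
    best, best_lp = None, -math.inf
    for y, c in priors.items():
        lp = math.log(c / n)
        for f, v in row.items():
            vc = counts[(y, f)]
            lp += math.log((vc[v] + 1) / (sum(vc.values()) + len(vc) + 1))
        if lp > best_lp:
            best, best_lp = y, lp
    return best

rows = [{"speed": "high", "feed": "low"},
        {"speed": "high", "feed": "high"},
        {"speed": "low",  "feed": "low"},
        {"speed": "low",  "feed": "high"}]
labels = ["rough", "rough", "smooth", "smooth"]
model = train_nb(rows, labels)
```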
Figure 9. Schematic diagram of hyperplane in SVM algorithm (a) and modeling process (b).
Figure 10. Process of random forest algorithm.
Figure 11. Schematic diagram of artificial neural network structure.
Figure 12. Typical types of optimization objectives reported in the literature.
Figure 13. Optimization of process parameters in ultrasonic-assisted grinding based on improved genetic algorithm: (a) process parameter optimization process, (b) iterative optimization process, and (c) comparison of surface roughness before and after optimization.
Figure 14. Optimization of RB-SiC machining process with laser-assisted grinding: (a,b) schematic diagrams of the experimental system and the actual device, (c) predicted and actual response values of surface roughness, (d) probability distribution of standard residuals of surface roughness, (e) three-dimensional response surface diagram of surface roughness and process parameters, and (f) feasibility window of process parameters constructed through optimization, constraints, and objective functions.
Figure 15. Optimization of process parameters for laser-assisted cutting based on the Taguchi method: (a) parameter design of the L16 orthogonal experiment, (b) average signal-to-noise ratio of cutting force, and (c) response surface of the influence of spindle speed and feed rate on cutting force [125]. Reprinted with permission from Ref. [125]; 2024, J. Manuf. Process.
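The signal-to-noise ratio averaged in panel (b) of Figure 15 follows the Taguchi smaller-the-better criterion, since lower cutting force is desired. A direct implementation of the formula (the force readings below are made-up illustration values, not data from Ref. [125]):

```python
import math

# Smaller-the-better signal-to-noise ratio from the Taguchi method:
#   S/N = -10 * log10( (1/n) * sum(y_i^2) )
# A larger S/N indicates a lower and more stable cutting force.
def sn_smaller_the_better(values):
    mean_square = sum(y * y for y in values) / len(values)
    return -10.0 * math.log10(mean_square)

# Illustrative repeated cutting-force readings (N) for one L16 trial
trial_forces = [12.4, 13.1, 12.8]
sn = sn_smaller_the_better(trial_forces)
```

In a Taguchi analysis, `sn` is computed per trial and then averaged over all trials that share a factor level; the level with the highest average S/N is selected.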
Figure 16. Optimization study of process parameters for multi-pass roller grinding: (a) schematic of multi-pass roller grinding, (b) solution of particle position update program, (c) hybrid particle swarm optimization algorithm flow, and (d) changes in surface roughness prediction and actual values.
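The particle position update of panel (b) in Figure 16 follows the standard PSO velocity and position equations. The sketch below shows plain PSO on a hypothetical roughness objective (the cited work uses a hybrid variant; the objective, bounds, and coefficients here are assumptions for illustration):

```python
import random

random.seed(1)

# Hypothetical roughness objective with a known minimum at v = 35, f = 0.1;
# stands in for the roughness predictor used in multi-pass roller grinding.
def objective(x):
    v, f = x
    return (v - 35.0) ** 2 / 100.0 + 50.0 * (f - 0.1) ** 2 + 0.2

BOUNDS = [(20.0, 60.0), (0.02, 0.3)]  # assumed wheel speed (m/s) and feed (mm/r)

def pso(n_particles=20, iterations=60, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(n_particles)]
    vel = [[0.0] * len(BOUNDS) for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # personal best positions
    gbest = min(pbest, key=objective)[:]      # global best position
    for _ in range(iterations):
        for i in range(n_particles):
            for d, (lo, hi) in enumerate(BOUNDS):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)  # clamp to bounds
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
                if objective(pos[i]) < objective(gbest):
                    gbest = pos[i][:]
    return gbest

best = pso()
```

Hybrid schemes such as the one in Figure 16 typically wrap this loop with an additional operator (e.g., a genetic mutation step) to escape local optima.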
Figure 17. Applications of other optimization algorithms: (a) sine cosine whale optimization algorithm; (b) multi-objective dragonfly algorithm and its feasible trade-off solution set [114,137]. “Reprinted with permission from Refs. [114,137]. 2024, Neural Comput. Appl. and J. Ind. Prod. Eng.”
Figure 18. Future research directions for optimization algorithms.
Table 1. Level of independent factors.

Factor                  | Level −2 | Level −1 | Level 0 | Level 1 | Level 2
Power P (W)             | 10       | 20       | 30      | 40      | 50
Speed V (rpm)           | 6000     | 8000     | 10,000  | 12,000  | 14,000
Feed rate F (mm/min)    | 5        | 15       | 25      | 35      | 45
Grinding depth D (μm)   | 2.5      | 5        | 7.5     | 10      | 12.5
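Table 1 is a five-level coded design: each coded level (−2 to 2) maps linearly onto an actual setting via the level-0 center and the per-level step. A small helper makes this coding explicit (centers and steps are read directly off the table; the factor labels are shorthand):

```python
# Decode Table 1's coded levels (-2..2) into actual machine settings:
#   actual = center (level 0) + coded_level * step (one level spacing)
FACTORS = {
    "P (W)":      (30.0, 10.0),       # power
    "V (rpm)":    (10000.0, 2000.0),  # speed
    "F (mm/min)": (25.0, 10.0),       # feed rate
    "D (um)":     (7.5, 2.5),         # grinding depth
}

def decode(name, coded_level):
    center, step = FACTORS[name]
    return center + coded_level * step
```

For example, `decode("P (W)", -2)` recovers the lowest power setting of 10 W, matching the first row of the table.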

Song, J.; Wang, B.; Hao, X. Optimization Algorithms and Their Applications and Prospects in Manufacturing Engineering. Materials 2024, 17, 4093. https://doi.org/10.3390/ma17164093

