Based on the single-factor tests, the changes in the threshing performance indicators were analyzed. Taking the drum rotation speed, feed volume, and vibration frequency of the concave sieve as variables, the corresponding simulation experiments were designed, and the loss rate and impurity rate of the threshing performance indicators were obtained. A BP neural network model and a linear regression model were established from the experimental results.
3.2. Establishment and Results of the BP Neural Network Model
Because of the uncertainty of the threshing conditions and the complexity of the factors influencing the threshing device, the relationship governing threshing performance is nonlinear. Predicting the threshing performance of the combine harvester under different influencing parameters provides ideas and methods for optimizing the threshing device. The prediction of threshing performance is treated as a nonlinear problem jointly determined by multiple influencing factors; the strong nonlinear fitting and generalization ability of the BP neural network is used to predict the threshing performance test indicators under different parameters and to reduce the number of repeated threshing performance tests [26].
According to Table 7, a three-layer neural network structure with two hidden layers is established, as shown in Figure 7, and a three-factor, three-level comprehensive experiment was designed to generate 27 groups of sample data, as shown in Table 8.
As can be seen from Figure 7, Wij and Wjk are the connection weights. In designing the BP neural network structure for threshing performance prediction, key parameters such as the number of neurons in each layer, the activation function, the number of hidden layers, and the numbers of input and output nodes should be fully considered. Based on the principle of nonlinear BP neural network modeling and the dimension of the training samples, the input layer has three nodes corresponding to the three factors, namely drum rotation speed, feed volume, and vibration frequency, while the output layer has two nodes corresponding to the loss rate and impurity rate.
When the number of hidden layers is large, the reliability of the error signal propagating back from the output layer to the input layer decreases. In addition, increasing the number of hidden layers reduces learning efficiency and increases the time cost of network training. It is therefore generally better to use as few hidden layers as possible while still meeting the requirements, and the problem dealt with in this paper is relatively simple. Following this principle, a three-layer neural network with two hidden layers was selected for training and learning.
The number of neurons indirectly affects the performance of the neural network. The neurons in each layer of the network structure are connected to all neurons in the next layer through the corresponding transfer function, while there are no connections between neurons within the same layer and no direct connections to the outside world.
The threshing performance prediction model is trained by the Levenberg–Marquardt method using the trainlm training function. The maximum number of training epochs is 100, the training goal is 0, the maximum number of validation failures is 6, the minimum gradient is 1 × 10⁻¹⁰, the initial value of Mu is 0.001, and the maximum value of Mu is 1 × 10¹⁰. The hidden layer is divided into two layers, with the purelin transfer function in the first layer and the logsig transfer function in the second layer; the display interval is 10 and the error target is 10⁻³, while the remaining settings take their default values during network training. Since the network structure, network parameters, and calculation process are interrelated, the final values were obtained by studying different combinations of these parameters and debugging step by step.
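The network structure and transfer functions described above can be sketched as a simple forward pass. The following is an illustrative Python reconstruction, not the paper's MATLAB implementation: the weights and input values are random placeholders (the actual weights come from Levenberg–Marquardt training), and the 3–7–7–2 layer sizes follow the structure assumed in this section.

```python
import numpy as np

def logsig(x):
    # Log-sigmoid transfer function (analogous to MATLAB's logsig)
    return 1.0 / (1.0 + np.exp(-x))

def purelin(x):
    # Linear transfer function (analogous to MATLAB's purelin)
    return x

rng = np.random.default_rng(0)

# Illustrative layer sizes: 3 inputs (drum speed, feed volume,
# vibration frequency), two hidden layers of 7 neurons each
# (2R + 1 with R = 3), and 2 outputs (loss rate, impurity rate).
sizes = [3, 7, 7, 2]

# Random placeholder weights and biases; in practice these are
# fitted by Levenberg-Marquardt training rather than sampled.
W = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
b = [rng.standard_normal(m) for m in sizes[1:]]

def forward(x):
    # First hidden layer: purelin; second hidden layer: logsig;
    # output layer: purelin, matching the description above.
    h1 = purelin(W[0] @ x + b[0])
    h2 = logsig(W[1] @ h1 + b[1])
    return purelin(W[2] @ h2 + b[2])

x = rng.standard_normal(3)   # stands in for a normalized input sample
y = forward(x)
print(y.shape)  # (2,)
```

With trained weights, `forward` would map a normalized (drum speed, feed volume, vibration frequency) triple to predicted (loss rate, impurity rate) values.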
According to the BP neural network, the experimental indicator system, and the Kolmogorov theorem [27], the input layer has three nodes, so the number of neurons in the hidden layer, 2R + 1, is seven, where R is the number of nodes in the input layer. Equation (3) can also be used to calculate the number of hidden layer nodes:

n1 = √(n + k) + c,    (3)

where n denotes the number of nodes in the output layer, k represents the number of nodes in the input layer, n1 is the number of nodes in the hidden layer, and c is a constant ranging from approximately 1 to 10.
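The two node-count rules above can be checked numerically. This is a minimal sketch; the √(n + k) + c form of Equation (3) is assumed here from the variable definitions given in the text, and the value of c is chosen for illustration.

```python
import math

def hidden_nodes_kolmogorov(R):
    # Kolmogorov-style rule: 2R + 1 hidden nodes for R input nodes.
    return 2 * R + 1

def hidden_nodes_empirical(n, k, c):
    # Empirical rule n1 = sqrt(n + k) + c, with n output nodes,
    # k input nodes, and c a constant in roughly [1, 10].
    return round(math.sqrt(n + k) + c)

# Three inputs and two outputs, as in the network above.
print(hidden_nodes_kolmogorov(3))       # 7
print(hidden_nodes_empirical(2, 3, 5))  # sqrt(5) + 5 rounds to 7
```

Both rules are consistent with the seven hidden-layer neurons used in this section.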
As the units of the experimental data in the BP neural network input layer are inconsistent, their values differ greatly, which leads to slow convergence of the neural network and a longer training time on the sample set. All input data are therefore normalized, as shown in Equation (4), to prevent information with small values in the dataset from being swamped by information with large values:

y = (x − xmin)/(xmax − xmin),    (4)

where x denotes the original data, xmax and xmin represent the maximum and minimum values of the original training set, respectively, and y is the normalized data.
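The normalization step can be sketched as follows. Mapping onto [0, 1] is one common convention for this min-max form, and the drum speed values are hypothetical examples, not data from the experiments.

```python
import numpy as np

def normalize(x, x_min, x_max):
    # Min-max normalization: maps the training-set range
    # [x_min, x_max] onto [0, 1].
    return (x - x_min) / (x_max - x_min)

drum_speed = np.array([700.0, 800.0, 900.0])  # hypothetical rpm values
y = normalize(drum_speed, drum_speed.min(), drum_speed.max())
print(y)  # [0.  0.5 1. ]
```

Note that xmin and xmax must come from the training set only, and the same values are reused when normalizing test samples.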
The objective function of the neural network is to minimize, with respect to the parameters to be estimated, the sum of squared errors of the output variables over the training samples, as shown in Equation (5):

min Σᵢ Σⱼ (yij − fj(xi1, xi2, …, xiq; B))²,    (5)

where xit denotes the value of the i-th sample on the t-th input variable; yij represents the value of the i-th sample on the j-th output variable; fj is the input/output function of the neural network; and B is the vector of parameters to be estimated, B = [β1, β2, …, βn].
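The least-squares objective can be written as a short function. This is a toy sketch: the "network" used here is a linear map chosen only so the example is self-contained, not the BP network of this section.

```python
import numpy as np

def sse(f, X, Y, B):
    # Sum of squared errors of the model outputs over the
    # training samples (the least-squares objective in Eq. (5)).
    pred = np.array([f(x, B) for x in X])
    return float(np.sum((Y - pred) ** 2))

# Toy example with a linear stand-in model f(x; B) = B @ x:
B = np.array([[1.0, 0.0], [0.0, 2.0]])
X = np.array([[1.0, 1.0], [2.0, 0.5]])
Y = np.array([[1.0, 2.0], [2.0, 1.0]])
print(sse(lambda x, B: B @ x, X, Y, B))  # 0.0
```

Training searches for the B that drives this quantity toward its minimum; the Levenberg–Marquardt method does so using second-order curvature information.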
Here n = (q + m) ∗ r, where q is the number of input variables, m is the number of output variables, r is the number of intermediate (hidden) neurons, and βs (s = 1, 2, 3, ···, n) is the parameter to be estimated. The estimate B̂ = [β̂1, β̂2, …, β̂n] satisfies Equation (6):

yij = fj(xi1, xi2, …, xiq; B̂) + uij,    (6)

where p is the number of training samples and uij is the random error term. When p ∗ m > n, B̂ is an unbiased estimate. For this paper, q = 3, r = 7, m = 2, and n = 35; that is, the sample size must be at least 18, and the total number of tests in this paper is 27 groups, which meets this condition.
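The parameter and sample counts above can be verified with a few lines of arithmetic. The reading of the identifiability condition as p ∗ m > n (each training sample contributing m output equations) is an interpretation consistent with the stated minimum of 18 samples.

```python
q, m, r = 3, 2, 7     # input variables, output variables, hidden neurons
n = (q + m) * r       # number of parameters to estimate
print(n)              # 35

# Each training sample supplies m output equations, so p samples give
# p * m constraints; requiring p * m > n implies the minimum p below.
min_p = n // m + 1
print(min_p)          # 18
```

With 27 experimental groups in total and 23 used for training, the condition is satisfied with room to spare.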
The 27 groups of experiments were divided into two parts, with 23 groups as the training set and 4 groups as the test set. When the error reaches the set stop condition or the number of iterations ends, the training is stopped, as shown in Figures 8 and 9.
It can be seen from Figure 8 that the root mean square error of the training samples decreases as the number of iterations increases; after 70 iterations, the best training result is 0.00098789.
As shown in Figure 9, the predicted values of the trained neural network are quite close to the actual values, and the error accuracy also satisfies the set requirements.
3.3. Establishment and Results of the Linear Regression Model
A mathematical model relating the loss rate and impurity rate test indicators to the influencing factors can be obtained by fitting a quadratic central composite test, and this model can be used to predict the test indicators. The level coding table of the design factors is shown in Table 9.
The test scheme designed according to the selected factor levels is shown in Table 10; following Reference [28], the total number of tests was calculated as 23 groups. The resulting threshing performance indicators, the loss rate Y1 and impurity rate Y2, are shown in Table 10.
The optimal combination of the influencing factors was obtained by regression fitting, and the threshing performance indicators were then predicted. A regression equation of the following general form is adopted, as shown in Equation (7):

Y = a + Σⱼ bj xj + Σₖ₍ⱼ bkj xk xj + Σⱼ bjj xj²  (j = 1, …, m; k < j),    (7)

where a, bj, bkj, and bjj denote the regression coefficients; m represents the number of factors; xk and xj are the variable factors; and Y is the test indicator.
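Fitting a quadratic model of this form reduces to ordinary least squares on an expanded design matrix. The sketch below uses synthetic coded factors and a known quadratic law (all values hypothetical, not the paper's data) purely to show that the procedure recovers the coefficients a, bj, bkj, and bjj.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical coded factors x1..x3 (23 runs, matching the test count)
# and a synthetic response built from a known quadratic law.
X = rng.uniform(-1, 1, size=(23, 3))
x1, x2, x3 = X.T
Y = 2.0 + 0.5*x1 - 0.3*x2 + 0.2*x1*x2 + 0.4*x3**2

# Design matrix for Y = a + sum(bj*xj) + sum(bkj*xk*xj) + sum(bjj*xj^2)
D = np.column_stack([
    np.ones(len(X)),            # intercept a
    x1, x2, x3,                 # linear terms bj
    x1*x2, x1*x3, x2*x3,        # interaction terms bkj
    x1**2, x2**2, x3**2,        # quadratic terms bjj
])
coef, *_ = np.linalg.lstsq(D, Y, rcond=None)
print(np.round(coef, 3))
```

Because the synthetic response contains no noise and there are more runs (23) than coefficients (10), the least-squares fit recovers the generating coefficients exactly; with real experimental data the fit instead minimizes the residual error.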
A polynomial fitting method was used to obtain the regression equations Y1 and Y2 relating the loss rate and impurity rate test indicators to the drum rotation speed, feed volume, and vibration frequency of the concave sieve; the parameter values in equations Y1 and Y2 are shown in Table 11.