Article

A Novel Method for Battery SOC Estimation Based on Slime Mould Algorithm Optimizing Neural Network under the Condition of Low Battery SOC Value

School of Information Science and Technology, Shijiazhuang Tiedao University, Shijiazhuang 266100, China
*
Author to whom correspondence should be addressed.
Electronics 2023, 12(18), 3924; https://doi.org/10.3390/electronics12183924
Submission received: 14 August 2023 / Revised: 12 September 2023 / Accepted: 14 September 2023 / Published: 18 September 2023

Abstract

The State of Charge (SOC) is a crucial parameter in battery management systems, so accurate SOC estimation is essential for adjusting control strategies in automotive energy management and ensuring the performance of electric vehicles. To solve the problem that the estimation error of the traditional BP neural network increases sharply under complex conditions and at low battery SOC values, a recurrent neural network estimation method optimized by the slime mould algorithm is proposed. Firstly, the data are serialized so that each input sample contains discharge data from multiple time steps. Secondly, the data are input into a recurrent neural network for SOC estimation, with a self-attention mechanism added to the network. Furthermore, the experiments show that the network parameters affect the estimation accuracy of the neural network, so the slime mould algorithm is introduced to optimize these parameters. The experiment results show that the maximum error of the novel method remains within 5% under both conditions. Notably, the SOC estimation error at low SOC values decreases instead of increasing, which demonstrates the advantage of the novel method.

1. Introduction

As the global growth of new energy vehicles accelerates, battery-related technologies have become a central area of research. The SOC is among the most crucial battery parameters [1]; it expresses the remaining charge in a battery. Precise SOC estimation helps prevent overcharging and over-discharging, ensuring the healthy operation of batteries, and is therefore of great importance [2].
Numerous researchers have recently explored SOC estimation. The open-circuit voltage method calculates the battery’s SOC using its terminal voltage when in an open-circuit state [3]. This method offers high estimation accuracy, but it necessitates that the battery remains idle for an extended period to attain voltage stability, posing significant challenges in practical application [4]. To tackle this issue, some researchers have introduced the coulomb counting method [5]. This approach estimates the battery’s SOC by integrating current over time, which is relatively straightforward to implement [6]. However, as it may lead to error accumulation [7], other studies have suggested the use of the Kalman filtering technique [8]. The Kalman filtering method implements the minimum mean squared error concept: the MSE between the estimated and observed values is minimized to update the system parameters [9], achieving optimal battery SOC estimation [10]. Prashant Shrivastava and colleagues proposed a dual forgetting factor-based adaptive extended Kalman filter for SOC estimation and developed a combined estimation approach for SOC and State of Energy (SOE) [11]. Zhi Cheng He and others suggested a novel method for online SOC estimation that combines an adaptive extended Kalman filter with a parameter identification algorithm grounded in adaptive recursive least squares [12]. Experiment results indicate that this method accurately estimates battery SOC and exhibits strong robustness [13]. Dong-Ji Xuan and his team proposed an improved central difference transform Kalman filtering technique based on the square root second-order central difference transform. Experiments demonstrate that the algorithm converges rapidly and provides high estimation accuracy [14]. In comparison to other methods, it is particularly suited for battery SOC estimation when significant current fluctuations are present, but its estimation accuracy heavily relies on the battery equivalent model.
In recent years, due to neural networks’ excellent adaptability to nonlinear systems, numerous researchers worldwide have investigated battery SOC estimation using neural networks. How et al. [15] employed a deep neural network (DNN) for estimating SOC, demonstrating that increasing the number of hidden layers in the DNN can reduce the error rate. Mohammad et al. [16] combined an autoencoder neural network with a Long Short-Term Memory (LSTM) neural network for high-precision battery SOC estimation. Yang et al. [17] suggested an RNN with gated recurrent units for SOC estimation. Feng et al. [18] designed a novel structure for a standard RNN to estimate battery SOC. Hannan et al. [19] utilized a deep fully convolutional network model for SOC estimation. Chen et al. [20] proposed a method based on the autoregressive LSTM network and Moving Horizon Estimation. Jiao et al. [21] examined a Gated Recurrent Unit RNN-based momentum gradient technique for estimating battery SOC. Ephrem et al. [22] employed an RNN with LSTM for accurate SOC estimation in li-ion batteries. Bian et al. [23] suggested a stacked bidirectional LSTM neural network for SOC estimation.
Moreover, as hyperparameters influence neural network estimation outcomes, several researchers have proposed combining optimization algorithms with neural networks to improve battery SOC estimation accuracy. Che et al. [24] implemented a self-adaptive weight Particle Swarm Optimization (SWPSO) algorithm for training the Dynamic RNN (DRNN). The results show that the SWPSO algorithm enhances error convergence speed and avoids local optima. Ren et al. [25] utilized a Particle Swarm Optimization-based LSTM neural network (PSO-LSTM) for SOC estimation.
Although many researchers have adopted neural networks or combined them with other methods for SOC estimation and effectively controlled errors, there are still some challenges in SOC estimation under specific conditions. On the one hand, while the estimation errors in some of these studies can be lower than 5%, their research focuses on constant current discharge or periodic variable current discharge with mild variations. In such discharge scenarios, the training samples and test data show good consistency, so estimating the battery SOC is relatively easy and the experiment results have higher estimation accuracy. However, in this study, intermittent constant current discharge is used, as shown in Figures 2–5, where dramatic fluctuations in current increase the estimation difficulty. On the other hand, few studies pay attention to the estimation of battery SOC at low battery SOC values. When the SOC value of the battery is low, the external characteristics of the battery, such as its voltage characteristics, and its internal characteristics are completely different from those seen when the SOC value is high, showing a sharp nonlinear change, which makes estimation difficult.
The limitations of the above-mentioned methods are mainly as follows. Firstly, almost all the methods do not involve SOC estimation when the SOC value is low. When the SOC value is low, the external characteristics of the battery, such as battery voltage characteristics and internal characteristics, are completely different from those seen under the condition of high SOC value, showing a sharp nonlinear change, which leads to increases in the difficulty of estimation. Secondly, manually adjusting and assigning network parameters will add more uncertainty factors, ultimately affecting the effectiveness of the model.
In order to solve the above problems, this paper proposes adding a self-attention mechanism and the SMA optimization algorithm to a GRU neural network. Firstly, using an RNN to process time series is an effective method for estimating battery SOC; compared with the traditional Back Propagation (BP) neural network method, the data input to the neural network are more abundant, which improves the overall estimation ability. Secondly, a self-attention mechanism is added to the GRU neural network, which allows the network to notice the data that have a greater influence on the current SOC estimate, thus improving the estimation accuracy. When the SOC value is low, the voltage drops sharply; at this time, the self-attention mechanism can give greater weight to the data that strongly influence the current SOC estimate, correcting the estimation. Finally, the Slime Mould Algorithm (SMA) is used to optimize the parameters of the GRU neural network to further improve its stability and estimation performance. Because the neural network fits the whole SOC curve, it pays more attention to the overall fit when learning its weights from the data, whereas the SMA searches for the optimal parameters using the estimation error as its evaluation criterion. The smaller the neural network error, the higher the fitness of the SMA candidate and the better the parameters. Therefore, when the estimation error increases at low SOC values, the fitness of the SMA candidate decreases, so the SMA searches again for more suitable parameters, further reducing the estimation error of the battery at low SOC values.
In this paper, the novel method is tested in two condition data sets, and, compared with the traditional BP neural network method, it is found that the novel method greatly reduces the SOC estimation error. Moreover, when the SOC value is low, the SOC estimation error of the traditional method increases obviously, but the SOC estimation error of the novel method decreases instead of increasing, and there is little difference with the overall error, which shows the advantages of the novel method.
The rest of this paper is as follows. Section 2 describes the battery discharge data. In Section 3, a GRU neural network battery SOC estimation model with a self-attention mechanism is proposed. Section 4 presents the experiment analysis. In Section 5, the method and experiment analysis of optimizing the neural network parameters using the SMA are put forward. Section 6 summarizes this paper.

2. The Battery Model

2.1. Data Acquisition

The experiment setup included test specimens, a temperature-controlled chamber, a LANHE CT-2001 battery testing apparatus, and a computer, as illustrated in Figure 1. The test specimens were 18650 LiNiMnCoO2/graphite lithium-ion cells. Key specifications can be found in Table 1.
In the experiment, the battery test equipment was used to carry out the charge and discharge experiment, and several groups of battery SOC discharge data were obtained. This provided a large amount of training data for the next neural network experiment.

2.2. Data Analysis

The testing samples underwent a Dynamic Stress Test (DST) and an Urban Dynamometer Driving Schedule (UDDS) test.
The current–voltage variations under the two test conditions are shown in Figure 2, Figure 3, Figure 4 and Figure 5. During the UDDS test, the battery discharged from 100% to 0% over 10 cycles. The DST uses a larger discharge current and longer discharge and rest times. It can be observed that during the DST, once the SOC drops below 20%, the voltage changes sharply and shows enhanced nonlinear characteristics, which makes SOC estimation difficult.

3. Network Structure

In order to solve the problem that the traditional Back Propagation (BP) method has low SOC estimation accuracy and that its estimation error increases sharply when the SOC value is low, a recurrent neural network SOC estimation method with a self-attention mechanism is proposed. The model is shown in Figure 6, where Ct−1 is the weight shared from the previous time step to the next, ht is the output of the recurrent part at the current time, ht* is the output of the recurrent neural network at the current time after attention correction, at is the feature weight calculated by the self-attention mechanism at the current time, and SOCt is the estimated SOC value at the current time.

3.1. Serializing the Discharge Data

According to the input format of the recurrent neural network, the data are serialized, and each sample after processing contains both current and historical discharge data. As shown in Figure 7, Xt = [Ut, It] is the discharge data at one moment. When the sequence length is set to n, the input samples are [Xt−n+1, Xt−n+2, …, Xt].
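As a concrete illustration of this serialization, the sketch below builds sliding-window samples from a discharge log; the array layout, variable names, and the window length n = 20 are assumptions for the example rather than values taken from the paper.

```python
import numpy as np

def serialize_discharge(data, soc, n):
    """Build sliding-window samples [X_{t-n+1}, ..., X_t] for the recurrent network.

    data: array of shape (T, 2) holding Xt = [Ut, It] per time step (assumed layout).
    soc:  array of shape (T,) with the reference SOC labels.
    n:    sequence length.
    Returns (X, y) with X of shape (T - n + 1, n, 2) and y of shape (T - n + 1,).
    """
    X, y = [], []
    for t in range(n - 1, len(data)):
        X.append(data[t - n + 1 : t + 1])  # current plus historical discharge data
        y.append(soc[t])                   # label is the SOC at the current moment
    return np.asarray(X), np.asarray(y)

# Toy usage: 1000 time steps, window of 20 past measurements per sample
raw = np.random.rand(1000, 2)              # placeholder [voltage, current] log
labels = np.linspace(1.0, 0.0, 1000)       # placeholder SOC trace
X, y = serialize_discharge(raw, labels, n=20)
print(X.shape, y.shape)                    # (981, 20, 2) (981,)
```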

3.2. Using Recurrent Neural Network to Estimate Battery SOC

Recurrent neural networks are unique structures that retain previous information and utilize it for current output calculations. However, conventional RNNs often suffer from vanishing or exploding gradients. To address this issue, this study employs LSTM networks or their variant, GRU, both of which possess information filtering capabilities.
In contrast to conventional recurrent neural networks, the LSTM model incorporates a unique gating structure in each repeating module, as depicted in Figure 8. The gate serves as an information filter, composed of a sigmoid layer and an element-by-element multiplication. The sigmoid layer produces values ranging from 0 to 1, indicating the amount allowed to pass through. A value of 0 signifies no passage, while 1 permits full passage. LSTM contains three gates as follows:
Forget gate: determines the extent of information retention at the current moment;
Input gate: regulates the amount of information preserved from the current input to the unit;
Output gate: affects the amount of information from the current time unit released externally.
The GRU is a modified version of the LSTM network. It combines the forget gate and input gate into a single update gate, thus simplifying the structure and speeding up the training compared to the LSTM network. The GRU neural network structure, depicted in Figure 9, comprises only two gates: the update gate and the reset gate. The update gate controls how much information from the previous moment is carried into the current moment, while the reset gate determines the extent to which information from the prior moment is disregarded.
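To make the two gates concrete, here is a minimal NumPy sketch of a single GRU step under one common formulation; the weight names, toy dimensions, and the omission of bias terms are assumptions of the example, not details from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, W_z, U_z, W_r, U_r, W_h, U_h):
    """One GRU step: z_t (update gate) controls how much past information is
    carried forward; r_t (reset gate) controls how much past information is
    discarded when forming the candidate state."""
    z_t = sigmoid(W_z @ x_t + U_z @ h_prev)              # update gate
    r_t = sigmoid(W_r @ x_t + U_r @ h_prev)              # reset gate
    h_cand = np.tanh(W_h @ x_t + U_h @ (r_t * h_prev))   # candidate state
    return (1.0 - z_t) * h_prev + z_t * h_cand           # blended new hidden state

# Toy dimensions: 2 input features ([Ut, It]), 4 hidden units
rng = np.random.default_rng(0)
shapes = [(4, 2), (4, 4)] * 3
W_z, U_z, W_r, U_r, W_h, U_h = [rng.standard_normal(s) * 0.1 for s in shapes]
h = np.zeros(4)
h = gru_step(np.array([3.7, 1.5]), h, W_z, U_z, W_r, U_r, W_h, U_h)
print(h)
```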

3.3. Adding Self-Attention Mechanism to Recurrent Neural Network

The input samples contain discharge data at multiple times, and each set of data has different effects on the neural network. The essence of the self-attention mechanism is to ensure the neural network can notice the data that have a great influence on the output results, so that it can learn more useful information. Since the neural network involves the transmission of many numbers, and the characteristics of the voltage and current in the discharge data are also composed of numbers, changing the weight of these features can make the neural network notice the features it needs to pay attention to.
A self-attention layer is added after the network layer of the recurrent neural network, and the output of the recurrent neural network contains the characteristics of each moment, which is used as the input of the self-attention layer. After axis inversion, a fully connected layer is added and the SoftMax activation function is used. Because the SoftMax function maps the outputs of multiple neurons into the (0, 1) interval, the weight of each moment can be calculated after several operations, and then the dimension can be restored after axis inversion. Finally, this set of weights is multiplied by the eigenvalues, and the original feature combination of the recurrent neural network becomes a weighted feature combination.
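A sketch of this attention block in Keras is shown below, following the described sequence of axis inversion (Permute), a fully connected SoftMax layer, a second axis inversion, and element-wise weighting; the layer sizes and the output head are illustrative assumptions rather than the exact architecture used in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_gru_attention(seq_len=20, n_features=2, gru_units=100):
    """GRU followed by the self-attention weighting described above (a sketch)."""
    inputs = layers.Input(shape=(seq_len, n_features))        # sequence of [Ut, It]
    h = layers.GRU(gru_units, return_sequences=True)(inputs)  # features at every moment

    # Axis inversion -> Dense + SoftMax -> invert back -> weight the features
    a = layers.Permute((2, 1))(h)                              # (units, seq_len)
    a = layers.Dense(seq_len, activation="softmax")(a)         # per-moment weights in (0, 1)
    a = layers.Permute((2, 1))(a)                              # restore (seq_len, units)
    weighted = layers.Multiply()([h, a])                       # weighted feature combination

    out = layers.Flatten()(weighted)
    out = layers.Dense(1, activation="sigmoid")(out)           # SOC estimate in [0, 1]
    return models.Model(inputs, out)

model = build_gru_attention()
model.compile(optimizer="adam", loss="mse")
model.summary()
```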
As the recurrence proceeds, the features of each moment carry their own weights, and the weight of a moment that has a greater impact on the current result will be larger than that of other moments, so the neural network pays more attention to these moments.

4. Experiment Analysis

4.1. Overall Experiment Analysis

We input the two sets of experiment data into the above neural networks for training, where the number of neurons and the number of training iterations of the recurrent neural networks are both 100. In this paper, the mean squared error (MSE) and the maximum error are used as the evaluation criteria, as shown in Equation (1), where SOC′(t) is the estimated value at time t and SOC(t) is the real value at time t.
$\mathrm{MSE} = \frac{1}{T}\sum_{t=1}^{T}\left(\mathrm{SOC}'(t)-\mathrm{SOC}(t)\right)^{2}$, (1)
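For reference, the two evaluation criteria can be computed as follows; variable names are illustrative, and SOC values are assumed to be expressed in percent so that the MSE figures match the scale used in the tables.

```python
import numpy as np

def soc_metrics(soc_est, soc_true):
    """Equation (1): mean squared error, plus the maximum absolute error."""
    err = np.asarray(soc_est, dtype=float) - np.asarray(soc_true, dtype=float)
    mse = float(np.mean(err ** 2))
    max_err = float(np.max(np.abs(err)))
    return mse, max_err

# Toy check with SOC expressed in percent
mse, max_err = soc_metrics([80.5, 60.2, 40.9], [80.0, 61.0, 40.0])
print(f"MSE = {mse:.2f}, maximum error = {max_err:.2f}%")
```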
After using different neural networks to estimate the SOC of the two sets of discharge data, Figure 10 and Figure 11 show the SOC estimation errors of the different methods under the two conditions. From these two waterfall diagrams, it can be intuitively seen that the recurrent neural networks have a clear advantage and that the GRU neural network with the self-attention mechanism performs best. Figure 12 and Figure 13 show the comparison between the estimated and real SOC values for the different methods under the two conditions. Table 2 shows the numerical results.
Under the DST condition, the maximum error of SOC estimation using the traditional BP method is 16.38%, with a mean squared error of 7.21. The maximum error of estimating SOC by the CNN method is 14.86%, and the mean squared error is 4.28. In comparison, the maximum errors for the LSTM and GRU methods with the added self-attention mechanism are 9.61% and 6.99%, respectively, with mean squared errors of 1.87 and 0.62.
Under the UDDS condition, the maximum error of SOC estimation using the traditional BP method is 11%, with a mean squared error of 4.27. The maximum error of estimating SOC by the CNN method is 9.50%, and the mean squared error is 3.79. The maximum errors for the LSTM and GRU methods with the added self-attention mechanism are 4.33% and 3.30%, respectively, with mean squared errors of 0.56 and 0.47.
From the above results, it is evident that adding a self-attention mechanism to the recurrent neural networks significantly reduces the SOC estimation error compared to the traditional BP neural network method. Furthermore, the GRU neural network with the added self-attention mechanism performs optimally.
The reason why the novel method significantly reduces both the maximum error and the mean squared error is twofold: firstly, the introduction of the time series enables the neural network to learn more information; and secondly, the attention mechanism added to the neural network allows the network to focus on the one or few groups of discharge data that have a greater impact on the current result during training. As noted above, the recurrent neural networks here use 100 neurons and 100 training iterations.

4.2. Experiment Analysis When the SOC Value Is Low

Figure 14 and Figure 15 show the SOC estimation errors of the different methods under the DST condition at normal and low SOC values, respectively. Figure 16 and Figure 17 show the comparison between the estimated and real SOC values of the different methods in these two SOC ranges. Table 3 shows the numerical results. Under the DST condition, the experiment analysis shows that, when the SOC is greater than 20%, the maximum estimation error of the traditional BP method is 15.72%, and the mean squared error is 8.34. However, when the SOC is below 20%, due to the dramatic changes in the discharge data, the maximum error and mean squared error of the traditional BP method increase accordingly, with an especially significant increase in the mean squared error. In contrast, the estimation method based on the GRU neural network with the self-attention mechanism not only does not increase the error when the SOC value is low, but actually reduces it.
Figure 18 and Figure 19 show the errors of the different methods under the UDDS condition at normal and low SOC values, respectively. Figure 20 and Figure 21 show the comparison between the estimated and real SOC values of the different methods in these two SOC ranges. Table 4 shows the numerical results. Under the UDDS condition, since large currents and long idle times are not used during the battery discharge process, the change in the discharge data at low SOC values is relatively mild compared to that under the DST condition. Nevertheless, the error of the traditional BP method still increases when the SOC value is low, whereas the SOC estimation performance of the GRU neural network with the self-attention mechanism remains good at low SOC values.
To investigate whether different parameters affect the estimation accuracy, the number of neurons in the GRU-Attention network was reduced to 30 and the network was trained under the DST condition. The results are shown in Table 5. When the number of neurons is 30, the maximum error decreases by 0.56%, but the mean squared error increases by 0.62.
To examine this effect further, the SOC of the two groups of discharge data was estimated using different neural network parameters, and the results are shown in Table 6 and Table 7. Under the DST condition, when the number of neurons is 30, the maximum error is 5.23 percentage points lower than when the number of neurons is 150. Under the UDDS condition, when the number of neurons is 30, the maximum error is 1.02 percentage points higher than when the number of neurons is 100.
As shown above, the parameters of the neural network do have an impact on the SOC estimation accuracy. Therefore, in the next section, a swarm intelligence optimization algorithm is used to find suitable numbers of neurons and other parameters for different discharge data scenarios, with the aim of further improving the estimation accuracy of the neural network.

5. Adding Optimization Algorithm

The Slime Mould Algorithm (SMA) is an innovative heuristic algorithm that mirrors the foraging behavior and morphological transformations of slime mould. Introduced by Li Shimin et al. in 2020, it boasts several benefits such as straightforward principles, minimal adjustment parameters, robust optimization capabilities, and ease of implementation. Slime mould organisms primarily derive nutrition from external organic substances, with food concentration impacting the propagating wave created by the biological oscillator and the cytoplasmic flow rate. The SMA employs this foraging behavior for function optimization.

5.1. Algorithm Principle

This algorithm has three parts. During the first stage, known as the food approach phase, slime moulds move towards food based on airborne scents. Their approach behavior is represented by Formulas (2) and (3).
X(t + 1) = Xb(t) + vb × (W × XA(t) − XB(t)), r < p, (2)
X(t + 1) = vc × X(t), r ≥ p (3)
where X(t + 1) and X(t) represent the positions of the slime mould in the (t + 1)th and tth iterations, respectively, Xb(t) represents the position with the highest food concentration (the optimal position) in the tth iteration, and XA(t) and XB(t) represent two randomly selected slime mould individuals in the tth iteration; the range of vb is [−a, a], where a = arctanh(1 − (t/T)), t is the current iteration number, and T is the maximum number of iterations; the range of vc decreases linearly from 1 to 0; r is a random number between 0 and 1; p = tanh(|S(i) − DF|), where i = 1, 2, …, N, S(i) represents the fitness value of the ith slime mould, DF represents the optimal fitness value over all iterations, and N represents the population size of the slime moulds; and W represents the weight of the slime moulds, as shown in Formulas (4)–(6).
W(SortIndex(i)) = 1 + r × log((bF − S(i))/(bF − wF) + 1), condition, (4)
W(SortIndex(i)) = 1 − r × log((bF − S(i))/(bF − wF) + 1), others, (5)
SortIndex = sort(S) (6)
The second stage is food wrapping, which simulates the contraction pattern of slime mould veins when encircling food: a higher food concentration results in stronger biological oscillator waves and faster cytoplasmic flow. In this stage, individual positions are updated according to changes in the optimal position Xb and the fine-tuning of vb, vc, and W, as shown in Formulas (7)–(9). In Formulas (4)–(6), condition represents the individuals ranked in the first half of the fitness list, bF and wF represent the best and worst fitness values in the current iteration, respectively, and SortIndex represents the sequence of sorted fitness values.
X(t + 1) = rand × (ub − lb) + lb, r < z, (7)
X(t + 1) = Xb(t) + vb × (W × XA(t) − XB(t)), r < p, (8)
X(t + 1) = vc × X(t), r ≥ p (9)
where rand and r represent the random values generated between 0 and 1, ub and lb represent the upper and lower bounds of the search space, respectively, and z represents the proportion of randomly distributed slime moulds.
The third stage is oscillation. In this stage, slime moulds use the propagating wave generated by the biological oscillator to modify the cytoplasmic flow speed. They simulate vein width changes through vb, vc, and W, allowing slime moulds to approach food more slowly when the food concentration is low and more quickly when high-quality food is found.
The algorithm flow chart is shown in Figure 22; the specific steps are as follows (a code sketch is given after the list):
Step 1: Set parameters, initialize the population, and calculate fitness;
Step 2: Calculate the weight W and parameter a;
Step 3: Generate a random number r and compare it with z. If r is smaller than z, update the position according to Formula (7); otherwise, update the parameters p, vb, and vc, and compare r with p: if r is smaller than p, update the position according to Formula (8); otherwise, update the position according to Formula (9);
Step 4: Recalculate the fitness and update the global optimal solution;
Step 5: Judge whether the termination condition is satisfied: if yes, output the global optimal solution; otherwise, repeat Steps 2 to 5.
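The following NumPy sketch implements Steps 1–5 with the position updates of Formulas (2)–(9) for a generic minimization problem; details such as the logarithm base, the linearly shrinking vc range, and the boundary clipping are implementation assumptions rather than choices stated in this paper.

```python
import numpy as np

def sma_minimize(fitness, lb, ub, pop=15, max_iter=50, z=0.03, seed=0):
    """Minimal Slime Mould Algorithm: minimizes `fitness` over box bounds [lb, ub]."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    X = rng.uniform(lb, ub, size=(pop, dim))             # Step 1: initialize population
    fit = np.array([fitness(x) for x in X])
    best_x, best_f = X[fit.argmin()].copy(), float(fit.min())

    for t in range(1, max_iter + 1):
        order = np.argsort(fit)                          # best individuals first
        bF, wF = fit[order[0]], fit[order[-1]]
        # Step 2: weight W (Formulas (4)-(6)) and parameter a
        frac = np.log((fit[order] - bF) / (wF - bF + 1e-12) + 1.0)[:, None]
        r = rng.random((pop, dim))
        W = np.ones((pop, dim))
        half = pop // 2
        W[order[:half]] = 1.0 + r[:half] * frac[:half]   # first half of the fitness list
        W[order[half:]] = 1.0 - r[half:] * frac[half:]
        a = np.arctanh(1.0 - t / max_iter)               # vb range [-a, a]
        b = 1.0 - t / max_iter                           # vc range shrinks linearly to 0
        # Step 3: position update (Formulas (7)-(9))
        for i in range(pop):
            if rng.random() < z:
                X[i] = rng.uniform(lb, ub)               # Formula (7): random redistribution
            else:
                p = np.tanh(abs(fit[i] - best_f))
                vb = rng.uniform(-a, a, dim)
                vc = rng.uniform(-b, b, dim)
                A, B = X[rng.integers(pop)], X[rng.integers(pop)]
                if rng.random() < p:
                    X[i] = best_x + vb * (W[i] * A - B)  # Formula (8)
                else:
                    X[i] = vc * X[i]                     # Formula (9)
            X[i] = np.clip(X[i], lb, ub)
        # Steps 4-5: recompute fitness, update the global best, check termination
        fit = np.array([fitness(x) for x in X])
        if fit.min() < best_f:
            best_f, best_x = float(fit.min()), X[fit.argmin()].copy()
    return best_x, best_f

# Toy usage: minimize the 4-dimensional sphere function
x_best, f_best = sma_minimize(lambda x: float(np.sum(x ** 2)), [-5] * 4, [5] * 4)
print(x_best, f_best)
```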

5.2. Fusion of Algorithm and Neural Network

The SMA is integrated with the neural network, utilizing the neural network’s MSE as the fitness for the SMA. The optimized parameters comprise the number of GRU neurons (GRU-neurons), the count of fully connected layer neurons (fc-neurons), epochs, and batch size. The search range for these four parameters has upper limits of [150, 30, 32] and lower limits of [30, 5, 1]. We set the population number of the optimization algorithm to 15, the search dimension to 4, and the initial value of the position parameter z to 0.03.
As shown in Figure 23, the SMA is used to optimize the network parameters as follows (a sketch of the fitness function is given after the list):
  1. Parameter initialization. Set the position parameters, initialize the population, and randomly generate the slime mould individuals, each encoding [GRU-neurons, Fc-neurons, Batch_size];
  2. Calculate the fitness value of each individual and select the individual with the lowest fitness value as the current optimal solution. As shown in Formula (10), the fitness value of an individual is the mean squared error of the neural network:
$\mathrm{fit} = \frac{1}{T}\sum_{t=1}^{T}\left(\mathrm{SOC}'(t)-\mathrm{SOC}(t)\right)^{2}$ (10)
  3. Update the position of each individual according to the flow of the SMA, then recalculate the fitness values and update the optimal solution;
  4. Judge the termination condition: if it is satisfied, the algorithm ends and outputs the optimization result;
  5. Establish and train the neural network with the parameters optimized by the SMA.
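As an illustration of this fusion, the sketch below wraps the training of a candidate GRU network into a fitness function that the sma_minimize sketch from Section 5.1 could search; the network layout (attention omitted for brevity), the epoch count, and the in-memory arrays X_train, y_train, X_val, y_val are assumptions of the example.

```python
import numpy as np
from tensorflow.keras import layers, models

def make_fitness(X_train, y_train, X_val, y_val, epochs=10):
    """Fitness for the SMA (Formula (10)): train a candidate network and return its
    validation MSE. A candidate position encodes [GRU-neurons, Fc-neurons, Batch_size]."""
    def fitness(position):
        gru_n, fc_n, batch = (max(1, int(round(v))) for v in position)
        inp = layers.Input(shape=X_train.shape[1:])      # (sequence length, features)
        h = layers.GRU(gru_n)(inp)                       # GRU-neurons
        h = layers.Dense(fc_n, activation="relu")(h)     # fully connected layer neurons
        out = layers.Dense(1, activation="sigmoid")(h)   # SOC output
        model = models.Model(inp, out)
        model.compile(optimizer="adam", loss="mse")
        model.fit(X_train, y_train, epochs=epochs, batch_size=batch, verbose=0)
        pred = model.predict(X_val, verbose=0).ravel()
        return float(np.mean((pred - y_val) ** 2))       # Formula (10)
    return fitness

# Search bounds from this section: lower limits [30, 5, 1], upper limits [150, 30, 32]
# best_params, best_mse = sma_minimize(
#     make_fitness(X_train, y_train, X_val, y_val),
#     lb=[30, 5, 1], ub=[150, 30, 32], pop=15, max_iter=20, z=0.03)
```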

5.3. Experiment Analysis after Adding SMA

Figure 24 and Figure 25 show the comparison between the estimated SOC value and the real SOC value after optimization of the SMA. Table 8 shows the parameter optimization results. Table 9 shows the numerical results. Under the DST condition, the optimal parameter group is [136, 28, 17]. The maximum error is reduced from 6.99% to 4.52% and the mean squared error is reduced from 0.62 to 0.38. Under the UDDS condition, the optimal parameter group is [130, 13, 10]. The maximum error is reduced from 3.30% to 2.78%, and the mean squared error is reduced from 0.47 to 0.22.

5.4. Experiment Analysis When the SOC Value Is Low after Adding SMA

Figure 26 and Figure 27 show the comparison between the estimated and real SOC values under the DST condition. Table 10 shows the numerical results. Compared with the estimation error when the SOC is greater than 20%, the estimation error at low SOC values is again reduced.
Figure 28 and Figure 29 show the comparison between the estimated and real SOC values under the UDDS condition. Table 11 shows the numerical results. Under the UDDS condition, the error is further reduced after adding the optimization algorithm, and the behavior of the original method at high and low SOC values is not changed by the optimization algorithm: the improved method still has an advantage in SOC estimation when the SOC value is low.

6. Conclusions

To enhance SOC estimation accuracy under complex conditions, particularly when the SOC value of the battery is low, this study introduces an SOC estimation approach that utilizes the slime mould algorithm for optimizing recurrent neural networks. This technique is evaluated under two discharge conditions, DST and UDDS, and is compared with the traditional BP neural network; three findings are summarized.
First, the main body of the novel method is the GRU neural network. When the input data have few features, increasing the diversity of the input data by processing the time series with a recurrent neural network is very effective. Compared with the traditional BP network, the novel method greatly improves the accuracy of SOC estimation, and the error is limited to within 5%. Secondly, when dealing with discharge data with strong nonlinear characteristics, the self-attention mechanism shows clear advantages. Its addition effectively solves the problem that the SOC estimation error rises sharply due to the drastic change in discharge data when the battery SOC value is low. Thirdly, the SMA optimization algorithm can improve the overall SOC estimation ability without changing the network characteristics. The experiment results show that the proposed network has an advantage for SOC estimation at low battery SOC values both before and after adding the optimization algorithm, and that the SMA optimizes the number of neurons and the batch size of the neural network, overcoming the randomness of manually determined parameters and improving the prediction ability of the neural network.
The experiment results show that, compared with the traditional BP neural network method, the battery SOC estimation method based on the slime mould algorithm and the recurrent neural network greatly reduces the error. The estimation error of the traditional BP neural network increases obviously when the battery SOC value is low, but the estimation error of the novel method decreases instead of increasing in this case, which is a major feature of the novel method.
The novel method has two limitations. On the one hand, the optimization algorithm can easily fall into a local optimum: when the SMA initializes the population, the individuals are randomly distributed within the search range, so the distribution is not uniform. Since the quality of the initial population is very important to the optimization algorithm, the SMA may become trapped at a local minimum during the iterations and find it difficult to escape. On the other hand, battery life is not considered; the capacity of the battery decreases with frequent charging and discharging, which affects the SOC estimation of the battery.
In future work, firstly, the initial population of the SMA will be improved using a chaotic mapping method; exploiting the uniform distribution of chaotic states allows the slime mould individuals to be spread more evenly over the search range, thereby improving the convergence speed and optimization ability of the SMA. Secondly, the battery life factor can be incorporated into SOC estimation, with battery aging simulated by reducing the battery capacity.

Author Contributions

Conceptualization, J.L. and X.Z.; methodology, J.L. and X.Z.; software, J.L.; validation, X.Z., J.L. and X.L.; formal analysis, J.L.; investigation, X.L.; resources, J.L.; data curation, X.Z.; writing—original draft preparation, X.Z. and X.L.; writing—review and editing, X.Z. and J.L.; visualization, X.Z.; supervision, J.L.; project administration, J.L.; funding acquisition, J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Research Project of Hebei Education Department, grant number ZD2021334; the Key Research and Development Program of Hebei Province, grant number 20310101D; the S&T Program of Hebei, grant number 22375801D; and the National Natural Science Foundation of China, grant number 12072203.

Data Availability Statement

Data are available upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, S.L.; Xiong, X.; Zou, C.Y.; Chen, L.; Jiang, C.; Xie, Y.X.; Stroe, D.I. An improved coulomb counting method based on dual open-circuit voltage and real-time evaluation of battery dischargeable capacity considering temperature and battery aging. Int. J. Energy Res. 2021, 45, 17609–17621. [Google Scholar] [CrossRef]
  2. Meng, J.; Ricco, M.; Luo, G.; Swierczynski, M.; Stroe, D.-I.; Stroe, A.-I.; Teodorescu, R. An overview and comparison of online implementable SOC estimation methods for lithium-ion battery. IEEE Trans. Ind. Appl. 2017, 54, 1583–1591. [Google Scholar] [CrossRef]
  3. Lin, C.; Yu, Q.; Xiong, R.; Wang, L.Y. A study on the impact of open circuit voltage tests on state of charge estimation for lithium-ion batteries. Appl. Energy 2017, 205, 892–902. [Google Scholar] [CrossRef]
  4. Ren, Z.; Du, C.; Wu, Z.; Shao, J.; Deng, W. A comparative study of the influence of different open circuit voltage tests on model-based state of charge estimation for lithium-ion batteries. Int. J. Energy Res. 2021, 45, 13692–13711. [Google Scholar] [CrossRef]
  5. Li, Y.; Guo, H.; Qi, F.; Guo, Z.; Li, M. Comparative study of the influence of open circuit voltage tests on state of charge online estimation for lithium-ion batteries. IEEE Access 2020, 8, 17535–17547. [Google Scholar] [CrossRef]
  6. Zhang, S.; Guo, X.; Dou, X.; Zhang, X. A rapid online calculation method for state of health of lithium-ion battery based on coulomb counting method and differential voltage analysis. J. Power Sources 2020, 479, 228740. [Google Scholar] [CrossRef]
  7. Zhang, S.; Guo, X.; Dou, X.; Zhang, X. A data-driven coulomb counting method for state of charge calibration and estimation of lithium-ion battery. Sustain. Energy Technol. Assess. 2020, 40, 100752. [Google Scholar] [CrossRef]
  8. Movassagh, K.; Raihan, A.; Balasingam, B.; Pattipati, K. A critical look at coulomb counting approach for state of charge estimation in batteries. Energies 2021, 14, 4074. [Google Scholar] [CrossRef]
  9. Qiu, X.; Wu, W.; Wang, S. Remaining useful life estimation of lithium-ion battery based on improved cuckoo search particle filter and a novel state of charge estimation method. J. Power Sources 2020, 450, 227700. [Google Scholar] [CrossRef]
  10. Shrivastava, P.; Soon, T.K.; Idris, M.Y.I.B.; Mekhilef, S. Overview of model-based online state-of-charge estimation using Kalman filter family for lithium-ion batteries. Renew. Sustain. Energy Rev. 2019, 113, 109233. [Google Scholar] [CrossRef]
  11. Shrivastava, P.; Soon, T.K.; Idris, M.Y.I.B.; Mekhilef, S.; Adnan, S.B.R.S. Combined state of charge and state of energy estimation of lithium-ion battery using dual forgetting factor-based adaptive extended Kalman filter for electric vehicle applications. IEEE Trans. Veh. Technol. 2021, 70, 1200–1215. [Google Scholar] [CrossRef]
  12. Afshar, S.; Morris, K.; Khajepour, A. State-of-charge estimation using an EKF-based adaptive observer. IEEE Trans. Control Syst. Technol. 2018, 27, 1907–1923. [Google Scholar] [CrossRef]
  13. He, Z.; Yang, Z.; Cui, X.; Li, E. A method of state-of-charge estimation for EV power lithium-ion battery using a novel adaptive extended Kalman filter. IEEE Trans. Veh. Technol. 2020, 69, 14618–14630. [Google Scholar] [CrossRef]
  14. Xuan, D.J.; Shi, Z.; Chen, J.; Zhang, C.; Wang, Y.-X. Real-time estimation of state-of-charge in lithium-ion batteries using improved central difference transform method. J. Clean. Prod. 2020, 252, 119787. [Google Scholar] [CrossRef]
  15. How, D.N.T.; Hannan, M.A.; Lipu, M.S.H.; Sahari, K.S.M.; Ker, P.J.; Muttaqi, K. State-of-charge estimation of li-ion battery in electric vehicles: A deep neural network approach. IEEE Trans. Ind. Appl. 2020, 56, 5565–5574. [Google Scholar] [CrossRef]
  16. Fasahat, M.; Manthouri, M. State of charge estimation of lithium-ion batteries using hybrid autoencoder and Long Short Term Memory neural networks. J. Power Sources 2020, 469, 228375. [Google Scholar] [CrossRef]
  17. Yang, F.; Li, W.; Li, C.; Miao, Q. State-of-charge estimation of lithium-ion batteries based on gated recurrent neural network. Energy 2019, 175, 66–75. [Google Scholar] [CrossRef]
  18. Feng, X.; Chen, J.; Zhang, Z.; Miao, S.; Zhu, Q. State-of-charge estimation of lithium-ion battery based on clockwork recurrent neural network. Energy 2021, 236, 121360. [Google Scholar] [CrossRef]
  19. Hannan, M.A.; How, D.N.; Lipu, M.S.; Ker, P.J.; Dong, Z.Y.; Mansur, M.; Blaabjerg, F. SOC estimation of li-ion batteries with learning rate-optimized deep fully convolutional network. IEEE Trans. Power Electron. 2020, 36, 7349–7353. [Google Scholar] [CrossRef]
  20. Chen, Y.; Li, C.; Chen, S.; Ren, H.; Gao, Z. A combined robust approach based on auto-regressive long short-term memory network and moving horizon estimation for state-of-charge estimation of lithium-ion batteries. Int. J. Energy Res. 2021, 45, 12838–12853. [Google Scholar] [CrossRef]
  21. Jiao, M.; Wang, D.; Qiu, J. A GRU-RNN based momentum optimized algorithm for SOC estimation. J. Power Sources 2020, 459, 228051. [Google Scholar] [CrossRef]
  22. Chemali, E.; Kollmeyer, P.J.; Preindl, M.; Ahmed, R.; Emadi, A.; Kollmeyer, P. Long short-term memory networks for accurate state-of-charge estimation of Li-ion batteries. IEEE Trans. Ind. Electron. 2017, 65, 6730–6739. [Google Scholar] [CrossRef]
  23. Bian, C.; He, H.; Yang, S. Stacked bidirectional long short-term memory networks for state-of-charge estimation of lithium-ion batteries. Energy 2020, 191, 116538. [Google Scholar] [CrossRef]
  24. Che, Y.; Liu, Y.; Cheng, Z.; Zhang, J. SOC and SOH identification method of li-ion battery based on SWPSO-DRNN. IEEE J. Emerg. Sel. Top. Power Electron. 2020, 9, 4050–4061. [Google Scholar] [CrossRef]
  25. Ren, X.; Liu, S.; Yu, X.; Dong, X. A method for state-of-charge estimation of lithium-ion batteries based on PSO-LSTM. Energy 2021, 234, 121236. [Google Scholar] [CrossRef]
Figure 1. Battery test bench.
Figure 2. Current data under DST condition.
Figure 3. Voltage data under DST condition.
Figure 4. Current data under UDDS condition.
Figure 5. Voltage data under UDDS condition.
Figure 6. RNN model for SOC estimation.
Figure 7. Data serialization.
Figure 8. LSTM.
Figure 9. GRU.
Figure 10. SOC estimation error under DST condition.
Figure 11. SOC estimation error under UDDS condition.
Figure 12. SOC estimation under DST condition.
Figure 13. SOC estimation under UDDS condition.
Figure 14. SOC estimation error of normal SOC value under DST condition.
Figure 15. SOC estimation error of low SOC value under DST condition.
Figure 16. SOC estimation of normal SOC value under DST condition.
Figure 17. SOC estimation of low SOC value under DST condition.
Figure 18. SOC estimation error of normal SOC value under UDDS condition.
Figure 19. SOC estimation error of low SOC value under UDDS condition.
Figure 20. SOC estimation of normal SOC value under UDDS condition.
Figure 21. SOC estimation of low SOC value under UDDS condition.
Figure 22. SMA flow chart.
Figure 23. Combination of SMA and neural network.
Figure 24. SOC estimation under DST condition after SMA.
Figure 25. SOC estimation under UDDS condition after SMA.
Figure 26. SOC estimation of normal SOC value under DST condition after SMA.
Figure 27. SOC estimation of low SOC value under DST condition after SMA.
Figure 28. SOC estimation of normal SOC value under UDDS condition after SMA.
Figure 29. SOC estimation of low SOC value under UDDS condition after SMA.
Table 1. Battery parameters.

| Parameter | Value |
| --- | --- |
| Nominal voltage | 3.6 V |
| Nominal capacity | 1.5 Ah |
| Working voltage | 2.5 V, 4.2 V |
| Maximum current | 15 A (20 °C) |
| C rate | 10 C |
Table 2. Estimation error of different networks.

| Method | DST Maximum Error | DST MSE | UDDS Maximum Error | UDDS MSE |
| --- | --- | --- | --- | --- |
| BP | 16.38% | 7.21 | 11% | 4.27 |
| CNN | 14.86% | 4.28 | 9.50% | 3.79 |
| LSTM | 9.93% | 2.24 | 5.53% | 1.20 |
| GRU | 8.47% | 0.84 | 4.42% | 0.59 |
| LSTM-Attention | 9.61% | 1.87 | 4.33% | 0.56 |
| GRU-Attention | 6.99% | 0.62 | 3.30% | 0.47 |
Table 3. SOC estimation error under different discharge conditions in DST condition.

| Method | SOC > 20% Maximum Error | SOC > 20% MSE | SOC < 20% Maximum Error | SOC < 20% MSE |
| --- | --- | --- | --- | --- |
| BP | 15.72% | 8.34 | 16.38% | 11.91 |
| LSTM-Att | 9.61% | 2.42 | 8.13% | 2.11 |
| GRU-Att | 6.99% | 0.89 | 4.75% | 0.43 |
Table 4. SOC estimation error under different discharge conditions in UDDS condition.

| Method | SOC > 20% Maximum Error | SOC > 20% MSE | SOC < 20% Maximum Error | SOC < 20% MSE |
| --- | --- | --- | --- | --- |
| BP | 11.49% | 5.50 | 11.73% | 7.87 |
| LSTM-Att | 4.33% | 0.63 | 3.06% | 0.30 |
| GRU-Att | 3.30% | 0.52 | 3.00% | 0.27 |
Table 5. Error of different neurons under DST condition.

| Neurons | Maximum Error | MSE |
| --- | --- | --- |
| 100 | 6.99% | 0.62 |
| 30 | 6.43% | 1.24 |
Table 6. Error of different parameters under DST condition.

| Neurons | Maximum Error | MSE |
| --- | --- | --- |
| 30 | 6.43% | 1.24 |
| 50 | 6.84% | 0.98 |
| 100 | 6.99% | 0.62 |
| 150 | 11.66% | 0.67 |
Table 7. Error of different parameters under UDDS condition.

| Neurons | Maximum Error | MSE |
| --- | --- | --- |
| 30 | 4.32% | 0.46 |
| 50 | 3.90% | 0.39 |
| 100 | 3.30% | 0.47 |
| 150 | 4.00% | 0.45 |
Table 8. Parameters after optimization by the SMA.

| Parameter | DST | UDDS |
| --- | --- | --- |
| GRU-neurons | 136 | 130 |
| Fc-neurons | 28 | 13 |
| Batch_size | 17 | 10 |
Table 9. Error after optimization by the SMA.

| Method | DST Maximum Error | DST MSE | UDDS Maximum Error | UDDS MSE |
| --- | --- | --- | --- | --- |
| BP | 16.38% | 7.21 | 11% | 4.27 |
| CNN | 14.86% | 4.28 | 9.50% | 3.79 |
| GRU-Att | 6.99% | 0.62 | 3.30% | 0.47 |
| SMA-GRU-Att | 4.52% | 0.38 | 2.78% | 0.22 |
Table 10. SOC estimation error of DST under different discharge conditions after SMA.

| Method | SOC > 20% Maximum Error | SOC > 20% MSE | SOC < 20% Maximum Error | SOC < 20% MSE |
| --- | --- | --- | --- | --- |
| BP | 15.72% | 8.34 | 16.38% | 11.91 |
| GRU-Att | 6.99% | 0.89 | 4.75% | 0.43 |
| SMA-GRU-Att | 4.52% | 0.50 | 4.20% | 0.42 |
Table 11. SOC estimation error of UDDS under different discharge conditions after SMA.

| Method | SOC > 20% Maximum Error | SOC > 20% MSE | SOC < 20% Maximum Error | SOC < 20% MSE |
| --- | --- | --- | --- | --- |
| BP | 11.49% | 5.50 | 11.73% | 7.87 |
| GRU-Att | 3.30% | 0.52 | 3.00% | 0.27 |
| SMA-GRU-Att | 2.78% | 0.33 | 2.51% | 0.13 |

