Article

The Estimation Life Cycle of Lithium-Ion Battery Based on Deep Learning Network and Genetic Algorithm

1 Department of Electrical Engineering, National Taiwan Ocean University, Keelung City 202301, Taiwan
2 Center of Excellence for Ocean Engineering, National Taiwan Ocean University, Keelung City 202301, Taiwan
3 Institute of Food Safety and Risk Management, National Taiwan Ocean University, Keelung City 202301, Taiwan
4 Department of Computer Science and Engineering, National Taiwan Ocean University, Keelung City 202301, Taiwan
* Authors to whom correspondence should be addressed.
Energies 2021, 14(15), 4423; https://doi.org/10.3390/en14154423
Submission received: 31 May 2021 / Revised: 12 July 2021 / Accepted: 15 July 2021 / Published: 22 July 2021

Abstract

This study uses deep learning to model the discharge characteristic curve of a lithium-ion battery. A battery measurement instrument was used to charge and discharge the battery and establish the discharge characteristic curve. Manual extraction of the parameters of the discharge characteristic curve was improved upon with the MLP (multilayer perceptron), RNN (recurrent neural network), LSTM (long short-term memory), and GRU (gated recurrent unit). Since the results obtained by these methods were only graphs, we used a genetic algorithm (GA) to obtain the parameters of the discharge characteristic curve equation.

1. Introduction

Energy demand keeps increasing, from early forms of power generation to today's nuclear power generation. In recent years, environmental protection issues have gradually risen in importance, and the environmental pollution caused by power generation has become a limit that technological development must not cross [1].
For convenience, countless devices have been invented that use electrical energy as their main source. People have invented, improved, and simplified these devices. Portable electronic products integrate various high-tech features, and modern high-tech products can be seen everywhere: notebook computers, mobile phones, navigation devices, smart watches, tablet computers, etc. To realize these devices, the battery is an indispensable component [2].
Nowadays, batteries are used in almost every device, which has a significant impact on the environment. Therefore, it would be ideal to optimize rechargeable batteries, since the runtime and endurance of a device depend on them. In addition to emphasizing large capacity and long battery life, the market is also committed to research on battery health management systems. An automatic power-off system for charging can prevent the battery from overcharging and thus avoid reduced battery life [3,4,5].
In recent years, different energy storage equipment has been developed but some challenges remain. These challenges include the reduction in the cost of energy storage equipment and its size, extended lifespan and improved performance, and the system for measuring the remaining battery power. Each discharge characteristic curve is different for different manufacturers. It is very important to establish a battery discharge characteristic module. To build a battery model, we need to understand the health of the battery, the discharge current, and the battery capacity. Then, the life of the battery can be predicted.
As it is difficult to build a battery model, the voltage is measured at different time points with different current levels. To reduce time and cost, deep learning was used to solve the problem. We used different deep learning algorithms to train our model and to find the most similar characteristic curve.
The discharge characteristic curve of the lithium-ion battery has been modeled by several methods [3,4,5], including the discharge test [6], ampere hour counting [4], open circuit voltage [4], linear modeling [7], physical properties of the electrolyte [4], internal resistance [4], impedance spectroscopy [8], the Kalman filter [9], and artificial intelligence [10,11]. The discharge test, ampere hour counting, open circuit voltage, impedance spectroscopy, Kalman filter, and artificial intelligence methods are suitable for all battery types and have the advantages of online operation and accuracy, but they respectively suffer from the need to adjust the battery state, the need for regular re-calibration points, low dynamics, high cost, the difficulty of determining initial parameters, and the need for training data. Linear modeling, the physical properties of the electrolyte, and internal resistance are usually applied to lead-acid, Ni/Cd, and Zn/Br batteries and are not suitable for lithium-ion batteries [4].
The goal of our study was to predict the battery life of different models based on an optimization algorithm and a deep learning method. We adopted the optimization algorithm to find the charge and discharge regression parameters of different batteries and used a recurrent deep neural network to predict the real discharge battery lifetime. Another objective is our hope that these studies can help in the prediction of electric vehicle batteries.
This paper used different deep learning methods to study the discharge behavior of the battery, to establish a discharge model, and to understand the characteristics of the battery discharge. The measured discharge characteristic curve values were input into the multilayer perceptron, recurrent neural network, long short-term memory, gated recurrent unit, and genetic algorithm (five different algorithms) for training. The parameter values obtained were then compared with the characteristic values of the originally measured discharge curve.
The rest of this article is organized as follows. Section 1 describes our study motivation and Section 2 introduces related work on battery life prediction. Section 3 presents our proposed method. Section 4 reports the experimental results for the different battery models. Finally, we conclude our results in the last section.

2. Related Works

In recent years, the use of artificial intelligence has greatly increased in modeling the discharge of lithium-ion batteries [4,5,11]. Artificial intelligence is realized through machine learning and deep learning methods. Deep learning is a branch of machine learning and is the main mode of operation of artificial intelligence today. In this era of automation, applications of deep learning can be seen everywhere [12].
Since the 1980s, the reduction in computer hardware costs and the advancement of storage equipment have helped machine learning flourish. It has evolved from single-layer networks to today's multi-layer networks [13].
Deep learning is used for image recognition in self-driving cars. For artificial intelligence, the real challenge lies in solving problems that are intuitive for humans. Deep-learning-based image recognition extracts efficient features with convolution operators, because some features are difficult for humans to extract by hand. Deep learning has continued to evolve, being improved and simplified; there are many algorithms and models to choose from, because different algorithms and models have different characteristics, training methods, performance, and efficiency.

2.1. MLP (Multilayer Perceptron)

Multi-layer perceptron is a supervised learning algorithm. The multi-layer architecture deals with nonlinear problems. MLP is roughly divided into three levels: input layer, hidden layer, and output layer. Each neuron is fully connected, and each connected neuron has weights that are used to calculate whether the input data has the information we need [14,15,16].
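As a concrete illustration (a minimal NumPy sketch with layer sizes and random weights of our own choosing, not the authors' implementation), a forward pass through the three levels of an MLP looks like this:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass: input layer -> hidden layer -> output layer."""
    h = sigmoid(W1 @ x + b1)   # fully connected hidden layer
    return W2 @ h + b2         # linear output layer (e.g., predicted voltage)

rng = np.random.default_rng(0)
x = rng.normal(size=3)                            # 3 input features
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)     # 4 hidden neurons
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)     # 1 output
y = mlp_forward(x, W1, b1, W2, b2)
print(y.shape)
```

Training would adjust the weights `W1`, `W2` by backpropagation; only the forward computation is shown here.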

2.2. RNN (Recurrent Neural Network)

The recurrent neural network was invented by David Rumelhart in 1986 [17]. It is roughly divided into three levels: input layer, state layer, and output layer. The state layer can be regarded as a hidden layer, but the difference from the MLP is that the state layer of the RNN also takes the output value of the previous time step as input. Therefore, the RNN is a kind of neural network with short-term memory [11,16,18,19].
RNN is a kind of neural network that is good at dealing with sequence problems. It specializes in dealing with related topics such as weather observation, stock trading, video data, and other temporal data. RNN emphasizes a concept: if a specific message appears multiple times in a sequence, then the matter of sharing information will become extremely important.
RNN has a vanishing gradient problem. As sequences grow longer, earlier data become less important, the weight updates become incomplete, and the network effectively cannot remember what happened long ago. This problem was addressed by Sepp Hochreiter and Jürgen Schmidhuber, who invented LSTM to mitigate the vanishing gradient problem.

2.3. LSTM (Long Short-Term Memory)

LSTM is a special RNN model proposed by Sepp Hochreiter and Jürgen Schmidhuber in 1997 to improve the vanishing gradient problem [20]. Compared with the RNN, LSTM is more complicated. The algorithm introduces three gates to control memory, namely the input gate, forget gate, and output gate, which give the machine the ability to select information [21,22,23,24]. Here, we must mention the three activation functions commonly used in deep learning: the sigmoid function, the tanh function, and the rectified linear unit (ReLU). The sigmoid function is monotonic, continuous, and easy to differentiate [25]; its disadvantage is the vanishing gradient in the saturated regions at both ends. The advantages of the tanh function are similar to those of the sigmoid function, but its slope is larger, so convergence and training are faster [26]. Its output range is from −1.0 to 1.0, so a closer approximation can be found; its disadvantage is the same as the sigmoid function's (a vanishing gradient). The rectified linear unit (ReLU) has no vanishing gradient problem and no complicated exponential calculations [27], with a fast convergence rate. The disadvantage of ReLU is that when the input is less than zero, it cannot update the data for the next calculation.
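The behavior of these three activation functions can be sketched in a few lines of NumPy (the sample points are our own, chosen to show the saturation and cutoff behavior described above):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(0.0, x)

x = np.array([-5.0, 0.0, 5.0])
# sigmoid saturates toward 0 and 1 at the extremes (vanishing gradient there)
print(sigmoid(x))
# tanh is zero-centred with outputs in (-1, 1) and a steeper slope
print(np.tanh(x))
# ReLU passes positives unchanged and zeroes out negatives (no saturation for x > 0)
print(relu(x))
```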

2.4. GRU (Gated Recurrent Unit)

The gated recurrent unit is a type of recurrent neural network. It was proposed by Junyoung Chung et al. in 2014 [28]. Like LSTM, it is designed to solve the vanishing gradient problem. The biggest difference is that this algorithm merges the forget gate and the input gate into a single update gate. Because of this, the calculation time and resources required by the GRU are greatly reduced [29,30,31,32].
The equations of the three models (RNN, LSTM, and GRU) are listed below.
Recurrent Neural Network Equation
The output of the RNN:
h_t = σ(U_h × X_t + W_h × h_{t−1} + b_h)
where σ(·) is the activation function, h_t is the hidden-layer activation at time t, X_t is the input vector, U_h is the weight of the input vector, W_h is the weight of the hidden-layer activations, and b_h is the bias.
Long Short-Term Memory Equations
Forget gate:
f_t = σ(W_xf × X_t + W_hf × h_{t−1} + b_f)
where f_t is the current forget gate, X_t is the current input vector, W_xf is the weight of the input vector, W_hf is the weight of the hidden state, h_{t−1} is the hidden state at time t − 1, and b_f is the bias.
Input gate and cell state:
i_t = σ(W_xi × X_t + W_hi × h_{t−1} + b_i)
s_t = i_t ⊙ σ(W_xs × X_t + W_hs × h_{t−1} + b_s) + f_t ⊙ s_{t−1}
where i_t is the input gate, s_t is the cell state, and ⊙ denotes elementwise multiplication.
Output gate:
O_t = σ(W_xo × X_t + W_ho × h_{t−1} + b_o)
h_t = O_t ⊙ tanh(s_t)
where O_t is the output gate and h_t is the output.
Gated Recurrent Unit Equations
Update gate:
Z_t = σ(W_xz × X_t + W_hz × h_{t−1} + b_z)
where Z_t is the update gate.
Reset gate:
R_t = σ(W_xr × X_t + W_hr × h_{t−1} + b_r)
where R_t is the reset gate.
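The single-step updates of the three cells can be sketched in NumPy. This is an illustration only: the tanh activations, the GRU candidate-state line, and all sizes and weights are standard conventions or our own choices, not values from this study.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rnn_step(x, h_prev, U_h, W_h, b_h):
    # h_t = sigma(U_h X_t + W_h h_{t-1} + b_h), with tanh as the activation
    return np.tanh(U_h @ x + W_h @ h_prev + b_h)

def lstm_step(x, h_prev, s_prev, p):
    f = sigmoid(p["Wxf"] @ x + p["Whf"] @ h_prev + p["bf"])   # forget gate f_t
    i = sigmoid(p["Wxi"] @ x + p["Whi"] @ h_prev + p["bi"])   # input gate i_t
    g = np.tanh(p["Wxs"] @ x + p["Whs"] @ h_prev + p["bs"])   # candidate state
    s = i * g + f * s_prev                                    # cell state s_t
    o = sigmoid(p["Wxo"] @ x + p["Who"] @ h_prev + p["bo"])   # output gate O_t
    return o * np.tanh(s), s                                  # h_t, s_t

def gru_step(x, h_prev, p):
    z = sigmoid(p["Wxz"] @ x + p["Whz"] @ h_prev + p["bz"])   # update gate Z_t
    r = sigmoid(p["Wxr"] @ x + p["Whr"] @ h_prev + p["br"])   # reset gate R_t
    g = np.tanh(p["Wxh"] @ x + p["Whh"] @ (r * h_prev) + p["bh"])  # candidate
    return (1.0 - z) * h_prev + z * g                         # new hidden state

rng = np.random.default_rng(0)
n_in, n_h = 1, 4                      # one input (e.g., voltage), 4 hidden units
x, h0, s0 = rng.normal(size=n_in), np.zeros(n_h), np.zeros(n_h)

def W(rows, cols):
    return rng.normal(scale=0.1, size=(rows, cols))

h_rnn = rnn_step(x, h0, W(n_h, n_in), W(n_h, n_h), np.zeros(n_h))
lstm_p = {k: W(n_h, n_in) for k in ("Wxf", "Wxi", "Wxs", "Wxo")}
lstm_p.update({k: W(n_h, n_h) for k in ("Whf", "Whi", "Whs", "Who")})
lstm_p.update({k: np.zeros(n_h) for k in ("bf", "bi", "bs", "bo")})
h_lstm, s_lstm = lstm_step(x, h0, s0, lstm_p)
gru_p = {k: W(n_h, n_in) for k in ("Wxz", "Wxr", "Wxh")}
gru_p.update({k: W(n_h, n_h) for k in ("Whz", "Whr", "Whh")})
gru_p.update({k: np.zeros(n_h) for k in ("bz", "br", "bh")})
h_gru = gru_step(x, h0, gru_p)
```

Unrolling any of these steps over a voltage sequence gives the recurrent models compared in the experiments.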

3. Our Proposed Battery Model and Prediction Method

Through the description in the previous section, we found that recurrent deep neural networks had better prediction performance and that battery life is a sequential prediction problem. Therefore, our study must discover the parameters of different batteries with a battery regression function, so we designed a battery charge/discharge activation process to study the battery characteristics. We also introduce our main optimization algorithm, the genetic algorithm, to help find the parameters of the battery charge/discharge function.

3.1. Battery Characteristics

This paper used 18650 commercial lithium-ion cylindrical batteries, which were common brands with good reliability and currently available on the market. We named them L brand, P brand, and S brand. The basic specifications and charging and discharging conditions are shown in Table 1. The charge cut-off voltage was 4.2 V and the discharge cut-off voltage was 3 V.

3.2. Lithium-Ion Battery Charge and Discharge

After understanding the charging and discharging conditions of the battery, one can use the battery tester to set multiple groups of different charging and discharging rates as an electronic load to test the battery characteristics of a single cell. Automated battery testing equipment can measure various parameters of the battery in operation (such as voltage, current, and battery surface temperature), analyze these data to understand the battery characteristics, and build the battery characteristic data on which the fuel gauge model is based [3].

3.3. Lithium-Ion Battery Activation

Since the selected lithium-ion batteries were being used for the first time, they had to be activated before the charge and discharge study so that the measured charge and discharge characteristics reflect the true characteristics of the battery. The battery activation flow chart is shown in Figure 1 below.
The procedure is as follows: charge with a constant current of 88 mA (0.04 C) and charge to 4.2 V with a constant voltage until the current is less than 88 mA; after the battery is fully charged, wait for 30 min to restore the voltage to a stable state; discharge with a constant current of 88 mA until the terminal voltage reaches 3 V; again, wait for 30 min; repeat this process five times to activate the battery and make the measured charge and discharge characteristics more convincing.
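The activation cycle can be written out as a loop. The sketch below is a simulation that only records the steps (the helper function and its defaults are hypothetical; the 88 mA, 4.2 V, 3 V, and 30 min values come from the procedure above):

```python
def activation_procedure(cycles=5, cc_ma=88, cv_v=4.2, cutoff_v=3.0, rest_min=30):
    """Record the activation steps described in the text (no real hardware)."""
    steps = []
    for _ in range(cycles):
        steps.append(f"CC charge at {cc_ma} mA, then CV at {cv_v} V until I < {cc_ma} mA")
        steps.append(f"rest {rest_min} min for the voltage to stabilize")
        steps.append(f"CC discharge at {cc_ma} mA until V <= {cutoff_v} V")
        steps.append(f"rest {rest_min} min")
    return steps

plan = activation_procedure()
print(len(plan))  # 4 steps per cycle, 5 cycles
```

On real equipment each recorded step would instead be a command to the battery tester.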

3.4. Lithium-Ion Battery Charging and Discharging

The 18650 commercial lithium-ion cylindrical batteries were discharged with constant currents at different C rates. The L brand and S brand were each discharged in five different ways, at 0.04 C, 0.1 C, 0.2 C, 0.5 C, and 1 C. The P brand was discharged in seven different ways, at 0.025 C, 0.1 C, 0.15 C, 0.2 C, 0.5 C, 1 C, and 1.5 C. After the batteries were activated, we treated four batteries in series as one unit. Taking the L brand battery as an example, we first charged it with constant current/constant voltage at 0.1 C until the terminal voltage reached 16.8 V, then separated the four batteries and discharged each with constant current from a terminal voltage of 4.2 V down to 3 V. Discharge was stopped when the discharge current dropped to 1/3 or 1/4, and we waited 15 min before repeating this process 300 times, as shown in Figure 2.

3.5. Multilayer Perceptron

Multilayer perceptron is a kind of feed-forward neural network that contains at least three layers (input layer, hidden layer, and output layer) and uses backpropagation to achieve supervised learning. In the current development of deep learning, MLP is actually a special case of a deep neural network. The concepts behind the recurrent neural network, long short-term memory, and gated recurrent unit are basically the same as the MLP; a DNN simply applies more techniques and layers in the learning process, making the network larger and deeper. Therefore, our study emphasized the three recurrent-type deep neural networks described in the following section.

3.6. Recurrent Neural Network, Long Short Term Memory, and Gated Recurrent Unit

The simplest kind of neural network, the multilayer perceptron (MLP), was introduced above. The output of each layer is passed forward to the input of the next layer in a single direction; that is, input and output are independent. A more advanced variant is the recurrent neural network (RNN). The difference between the RNN and the MLP is that the RNN passes the computed output of a layer back to that same layer as input: the output also becomes one of its own inputs at the next point in time (rather than feeding another hidden layer). The RNN therefore has memory.
Many application scenarios involve sequences, such as the battery charge/discharge process (the probability of the next battery state depends on the previous state). Training an RNN therefore requires sequential data, where the input is the value of each variable at each time step. However, the RNN has a shortcoming: earlier information has less influence on subsequent decisions, and as the sequence grows, the influence of earlier information approaches zero. This motivates an improved network, long short-term memory (LSTM). LSTM introduces three mechanisms to control memory, namely the input gate, output gate, and forget gate. The opening and closing of these three gates are themselves learned variables; the machine learns from data when to open or close them, thereby determining which information is signal and which is noise. LSTM uses memory to enhance current decision-making and uses the three control gates to determine how memory is stored and used.
  • In addition to the predicted output, a memory branch is added and updated over time. The current memory is represented by the “forget gate”, and “input gate” is used to determine whether to update the memory.
  • Forget Gate: If the current sentence is a new topic or the opposite of the previous sentence, the previous sentence will be filtered out by this gate. Otherwise, it may continue to be retained in memory. This gate is usually a Sigmoid function.
  • Input Gate: This determines whether the current input and the newly generated memory cell are added to the long term memory. This gate is also a Sigmoid function, which means that it needs to be added or not.
  • Output Gate: This determines whether the current state is added to the output. This gate is also a Sigmoid function, indicating whether to add it or not.
  • Finally, whether the long-term memory is added to the output usually uses the tanh function, whose output falls in [−1, 1]; a value of −1 means the long-term memory is removed.
LSTM also has the problem of slow execution speed, so the gated recurrent unit (GRU) was proposed to speed up execution and reduce memory consumption. The difference between the GRU and the LSTM is that the GRU uses only two gates, namely the update gate and the reset gate. The reset gate controls what fraction of the previous hidden state is used, together with the new input, to compute the candidate hidden state. The update gate adjusts the ratio between the candidate and the previous hidden state to obtain the final hidden state.

3.7. Genetic Algorithm

Genetic algorithm (GA) [33,34,35,36,37] was proposed by Professor John Holland and his students around the 1970s and has been widely used to obtain near-optimal results. It is applied to optimization problems, artificial intelligence, data retrieval, machine learning, and deep learning. It is a computational method that simulates the evolution of natural organisms: species compete with each other in the environment, and only the fittest survive.
There are some commonly used terms and concepts in genetic algorithms. The population is composed of several different individuals. Individuals are composed of genes and genes are the basic elements of forming chromosomes. A generation refers to the process of evolution. Holland believes that the process of natural evolution occurs within the genes of chromosomes. Evolution refers to the changes that occur in each generation of organisms. The characteristics of each organism are the genes of the previous generation, which determine the level of fitness. Therefore, the principle of survival of the fittest will leave the excellent genes behind and weed out the unsuitable ones. The evolutionary processes of these simulated organisms include reproduction, crossover, and mutation.
Crossover is the most important operation method in genetic algorithms. The process of evolution in the biological world may take tens of thousands of years, but it only takes a few seconds or minutes to use machines to execute genetic algorithms. If you want to obtain strong offspring, you must choose different genes for mating. The common selection methods are roulette wheel selection and tournament selection.
The roulette-style selection method is that each generation of individuals represents a block on the roulette. The size of the block is proportional to the fitness value of the individual. The two selected individuals will be sent to the mating pool for mating to obtain excellent offspring.
For tournament selection, two or more individuals are selected, and the individual with the higher fitness value is sent to the mating pool to wait. In the mating process, two chromosomes produce offspring with some parent-like characteristics. The goal of mating is that the offspring have highly adapted chromosomes. However, the offspring may also inherit the shortcomings of the parents, and mating may not produce better offspring. After eliminating the offspring with shortcomings, the excellent offspring can continue to survive.
The mutation process will make random changes to the chromosomes. The common method will change a certain gene in the chromosome. The purpose of mutation is to let the genetic algorithm search for genes that have not appeared before and bring new genes into the population. However, too many mutations will destroy the structure of the genetic algorithm and cause the offspring to be quite different from their parents. If the number of mutations is too small, the offspring and their parents will not change in any way, so mutations will be regarded as a secondary calculation method.
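A minimal GA with roulette wheel selection, arithmetic crossover, and Gaussian mutation might look like the following sketch. The toy objective and all hyperparameters are our own, for illustration only; the study's GA instead searches the parameter space of the discharge characteristic curve equation.

```python
import random

def fitness(x):
    # toy objective: maximize f(x) = -(x - 3)^2, optimum at x = 3
    return -(x - 3.0) ** 2

def roulette_select(pop, fits):
    # shift fitness values to be positive so they can serve as wheel weights
    lo = min(fits)
    weights = [f - lo + 1e-9 for f in fits]
    return random.choices(pop, weights=weights, k=2)

def evolve(generations=200, size=30, p_mut=0.2):
    random.seed(1)
    pop = [random.uniform(-10, 10) for _ in range(size)]
    for _ in range(generations):
        fits = [fitness(x) for x in pop]
        children = []
        while len(children) < size:
            a, b = roulette_select(pop, fits)
            child = 0.5 * (a + b)              # arithmetic crossover
            if random.random() < p_mut:        # mutation: random perturbation
                child += random.gauss(0, 1)
            children.append(child)
        pop = children                         # next generation
    return max(pop, key=fitness)

best = evolve()
print(best)
```

Replacing the toy `fitness` with the RMSE between the measured discharge curve and the curve generated by candidate equation parameters turns this skeleton into a parameter-extraction tool of the kind used in Section 4.6.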

4. Experiment Result

4.1. Manual Extraction Parameters

Before starting the experiment, we needed to know the discharge characteristic curve of the battery. Taking the L brand 18650 as an example, Figure 3 shows the voltage curve of the battery under different discharge rates, measured with a battery measuring instrument. From the viewpoint of energy conservation, a higher discharge rate reaches the discharge cut-off voltage in a shorter time, which is a normal phenomenon.
The following equation employs three series subcells to describe the discharge characteristics of 18650 commercial lithium-ion cylindrical batteries [4,38], where Vo1, Vo2, and Vo3 are the open-circuit voltages of subcells 1 to 3, Vc is the open-circuit voltage of the battery, Vk is the voltage drop of the external resistance, I is the discharge current, K is the decline constant of the internal resistance, and τ1 to τ5 are time constants 1 to 5.
[Discharge characteristic equation; rendered as an image in the original article.]
Using the discharge characteristic equation, we can find Figure 4 by manually extracting the parameters. Model is the original parameter and measurement is the value obtained from the manual extraction parameter. As shown in Figure 4, the manual extraction parameters differed from the original data. Considering the time cost, different algorithms were used to solve this problem.
We used the root mean square error (RMSE) to express the training score, where a smaller value is better.
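As a reminder of the metric (a small NumPy sketch; the sample values are our own):

```python
import numpy as np

def rmse(y_true, y_pred):
    # root mean square error: sqrt(mean((y_true - y_pred)^2))
    diff = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.sqrt(np.mean(diff ** 2)))

print(rmse([3.0, 4.0], [3.0, 4.0]))  # perfect fit gives 0.0
print(rmse([0.0, 0.0], [3.0, 4.0]))  # sqrt((9 + 16) / 2) ~= 3.5355
```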

4.2. MLP Result

Figure 5, Figure 6 and Figure 7 show the measurement data of the L brand, P brand, and S brand batteries with varying discharging current: Epoch = 100, look_back = 10; model is the training data.
Table 2 shows the score after the completion of training for Epoch = 100 and look_back = 10.

4.3. RNN Result

Figure 8, Figure 9 and Figure 10 show the measurement data of the L brand, P brand, and S brand batteries with varying discharging current: Epoch = 100, look_back = 10; model is the training data.
Table 3 shows the score after the completion of training for Epoch = 100 and look_back = 10.

4.4. LSTM Result

Figure 11, Figure 12 and Figure 13 show the measurement data of the L brand, P brand, and S brand batteries with varying discharging current: Epoch = 100, look_back = 10; model is the training data.
Table 4 shows the score after the completion of training for Epoch = 100 and look_back = 10.

4.5. GRU Result

Figure 14, Figure 15 and Figure 16 show the measurement data of the L brand, P brand, and S brand batteries with varying discharging current: Epoch = 100, look_back = 10; model is the training data.
Table 5 shows the score after the completion of training for Epoch = 100 and look_back = 10.

4.6. GA Results

4.6.1. L Brand 18650

Figure 17 shows the results of L brand 18650 training with genetic algorithm (GA). The arrow points to the location where improvement is needed.
Figure 18 utilizes the improved data. The arrow in Figure 18 is consistent with that in Figure 17, and the arrow in Figure 19 is narrower than that in Figure 17. We added an extra capacitor to the discharge characteristic curve equation and increased the mutation rate and mating rate to make it easier for the program to find approximate values. Table 6 shows the parameters of GA for L brand 18650.

4.6.2. P Brand 18650

Figure 20 is the result of P brand 18650 training with genetic algorithm (GA). The arrow points to the locations where improvement is needed.
Figure 21 represents the improved data. The arrow in Figure 20 is significantly consistent with that in Figure 21. The arrow in Figure 22 is narrower than that in Figure 20. We added two more capacitors to the discharge characteristic curve equation and increased the mutation rate and mating rate to make it easier for the program to find approximate values. Table 7 shows the parameters of GA for P brand 18650.

4.6.3. S Brand 18650

Figure 23 shows the results of S-brand 18650 training with genetic algorithm (GA). The arrow marks the points where improvement is needed.
Figure 24 shows the improved data. The arrow in Figure 24 is consistent with that in Figure 23 and the arrow in Figure 25 is narrower than that in Figure 23. We added two more capacitors to the discharge characteristic curve equation and increased the mutation rate and mating rate to make it easier for the program to find approximate values. Table 8 shows the parameters of GA for S brand 18650.

5. Discussion

From the data, it is evident that the training results produce curves similar to the output. By a simple table lookup, the data at different times, voltages, and currents can be found. However, these four methods cannot provide the values of the time parameters and temperature.
To obtain all the parameter values that make up the equation and to solve this problem, we chose to use the genetic algorithm (GA). This method can set the parameters and range that are required to be solved. The program can imitate the natural world’s “survival of the fittest” and the rule of “elimination” to screen data.
In Table 3, Table 4 and Table 5, we can compare the scores (RMSE) of the three recurrent-type models: RNN, LSTM, and GRU. The average scores of LSTM were better than those of RNN and GRU, and RNN had the worst average score. The curve fitting shown in Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15 and Figure 16 also indicates that the LSTM curve described the data most reliably among the recurrent-type models. From Figure 17, Figure 18, Figure 19, Figure 20, Figure 21, Figure 22, Figure 23, Figure 24 and Figure 25, the predicted battery life of the P brand 18650 was found to fit the battery life curve with the GA parameters. However, the S brand 18650 and L brand 18650 did not discharge smoothly, and their discharge curves in our study were not always monotonically decreasing. Overall, the battery discharge equation with GA-derived parameters is efficient for estimating battery life.

6. Conclusions

In this paper, deep learning was used to describe the discharge characteristic curve of the battery. The discharge characteristic curve was used as the basis to establish the discharge model. The battery measuring instrument was used to charge and discharge the battery to establish the discharge characteristic curve.
First, we tried to find the discharge characteristic curve by manually extracting the parameters and found that the effect was poor and the time cost was huge. Therefore, MLP (multilayer perceptron), RNN (recurrent neural network), LSTM (long short-term memory), and GRU (gated recurrent unit) were used to reduce this cost. The results obtained by these methods were graphs, but the requirement was to obtain the parameters of the discharge characteristic curve equation. Finally, we used the genetic algorithm (GA) to find these parameters. This method can effectively find the parameter values that constitute the discharge characteristic curve equation.

Author Contributions

Conceptualization: Y.-Z.H., S.-W.H. and S.-W.T.; methodology, Y.-Z.H. and S.-W.T.; software, S.-W.H.; validation, S.-W.T.; formal analysis, Y.-Z.H.; investigation, Y.-Z.H.; resources, S.-W.T. and S.-S.L.; data curation, Y.-Z.H. and S.-W.T.; writing—original draft preparation, Y.-Z.H. and S.-W.H.; writing—review and editing, Y.-Z.H. and S.-W.T.; supervision, Y.-Z.H.; project administration, Y.-Z.H.; funding acquisition, Y.-Z.H. All authors have read and agreed to the published version of the manuscript.

Funding

This paper was partly supported by the Ministry of Science and Technology, Taiwan, under grants MOST 110-2221-E-019-051-, MOST 109-2622-E-019-010-, MOST 109-2221-E-019-057-, MOST 110-2634-F-019-001-, MOST 110-2634-F-008-005-, MOST 110-2221-E-019-052-MY3, and MOST 108-2221-E-019-038-MY2.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Activation flowchart (LOOP 5 times).
Figure 2. Discharging flowchart (LOOP 300 times).
Figure 3. Discharge curve of L brand 18650.
Figure 4. Measurement of L brand 18650.
Figure 5. Curve of L brand 18650, Epoch = 100, look_back = 10.
Figure 6. Curve of P brand 18650, Epoch = 100, look_back = 10.
Figure 7. Curve of S brand 18650, Epoch = 100, look_back = 10.
Figure 8. Curve of L brand 18650, Epoch = 100, look_back = 10.
Figure 9. Curve of P brand 18650, Epoch = 100, look_back = 10.
Figure 10. Curve of S brand 18650, Epoch = 100, look_back = 10.
Figure 11. Curve of L brand 18650, Epoch = 100, look_back = 10.
Figure 12. Curve of P brand 18650, Epoch = 150, look_back = 20.
Figure 13. Curve of S brand 18650, Epoch = 100, look_back = 10.
Figure 14. Curve of L brand 18650, Epoch = 100, look_back = 10.
Figure 15. Curve of P brand 18650, Epoch = 100, look_back = 10.
Figure 16. Curve of S brand 18650, Epoch = 100, look_back = 10.
Figure 17. GA results (L brand 18650).
Figure 18. GA results 2 (L brand 18650).
Figure 19. GA results 2 (L brand 18650): reduce the range at the arrow.
Figure 20. GA results (P brand 18650).
Figure 21. GA results 2 (P brand 18650).
Figure 22. GA results 2 (P brand 18650): reduce the range at the arrow.
Figure 23. GA results (S brand 18650).
Figure 24. GA results 2 (S brand 18650).
Figure 25. GA results 2 (S brand 18650): reduce the range at the arrow.
Table 1. Specifications for 18650 commercial lithium-ion cylindrical batteries.

| Specification | L Brand | P Brand | S Brand |
|---|---|---|---|
| Average voltage | 3.7 V | 3.7 V | 3.7 V |
| Charging cut-off voltage | 4.2 V | 4.2 V | 4.2 V |
| Discharge cut-off voltage | 3.0 V | 3.0 V | 3.0 V |
| Nominal capacity | 2200 mAh | 2200 mAh | 2200 mAh |
| Maximum discharge rate | 1.5 C | 1.5 C | 1.5 C |
| Cycle life | ≥300 | ≥300 | ≥300 |
| Charging working temperature | 10 °C~45 °C | 10 °C~45 °C | 10 °C~45 °C |
| Discharge working temperature | −10 °C~60 °C | −10 °C~60 °C | −10 °C~60 °C |
Table 2. Results of MLP, Epoch = 100, look_back = 10.

| L Brand 18650 | Score | P Brand 18650 | Score | S Brand 18650 | Score |
|---|---|---|---|---|---|
| C1E100LB10 | 2.3962 | C1E100LB10 | 1.9665 | C1E100LB10 | 1.416 |
| C2E100LB10 | 1.0355 | C2E100LB10 | 2.5641 | C2E100LB10 | 5.1457 |
| C3E100LB10 | 1.9235 | C3E100LB10 | 2.1702 | C3E100LB10 | 1.4973 |
| C4E100LB10 | 2.5563 | C4E100LB10 | 1.2151 | C4E100LB10 | 1.9197 |
| C5E100LB10 | 1.8657 | C5E100LB10 | 3.4141 | C5E100LB10 | 2.2744 |
| C6E100LB10 | 2.1928 | | | | |
| C7E100LB10 | 1.8609 | | | | |
Table 3. Results of RNN, Epoch = 100, look_back = 10.

| L Brand 18650 | Score | P Brand 18650 | Score | S Brand 18650 | Score |
|---|---|---|---|---|---|
| C1E100LB10 | 22.7628 | C1E100LB10 | 26.1699 | C1E100LB10 | 10.2164 |
| C2E100LB10 | 12.8867 | C2E100LB10 | 15.3611 | C2E100LB10 | 11.8313 |
| C3E100LB10 | 19.2566 | C3E100LB10 | 7.8641 | C3E100LB10 | 16.3885 |
| C4E100LB10 | 16.4656 | C4E100LB10 | 13.1751 | C4E100LB10 | 47.0949 |
| C5E100LB10 | 14.4838 | C5E100LB10 | 27.3929 | C5E100LB10 | 23.4548 |
| C6E100LB10 | 12.1015 | | | | |
| C7E100LB10 | 11.0649 | | | | |
Table 4. Results of LSTM, Epoch = 100, look_back = 10.

| L Brand 18650 | Score | P Brand 18650 | Score | S Brand 18650 | Score |
|---|---|---|---|---|---|
| C1E100LB10 | 35.2445 | C1E100LB10 | 5.3272 | C1E100LB10 | 5.2539 |
| C2E100LB10 | 34.7757 | C2E100LB10 | 7.2181 | C2E100LB10 | 30.271 |
| C3E100LB10 | 16.183 | C3E100LB10 | 9.3963 | C3E100LB10 | 8.6395 |
| C4E100LB10 | 4.7719 | C4E100LB10 | 3.2441 | C4E100LB10 | 16.1969 |
| C5E100LB10 | 7.4273 | C5E100LB10 | 7.3323 | C5E100LB10 | 11.1258 |
| C6E100LB10 | 5.8578 | | | | |
| C7E100LB10 | 3.7878 | | | | |
Table 5. Results of GRU, Epoch = 100, look_back = 10.

| L Brand 18650 | Score | P Brand 18650 | Score | S Brand 18650 | Score |
|---|---|---|---|---|---|
| C1E100LB10 | 11.7349 | C1E100LB10 | 13.9564 | C1E100LB10 | 7.9086 |
| C2E100LB10 | 22.561 | C2E100LB10 | 10.8346 | C2E100LB10 | 14.9015 |
| C3E100LB10 | 15.0054 | C3E100LB10 | 8.2631 | C3E100LB10 | 7.614 |
| C4E100LB10 | 5.245 | C4E100LB10 | 5.8903 | C4E100LB10 | 21.8592 |
| C5E100LB10 | 5.2089 | C5E100LB10 | 12.3002 | C5E100LB10 | 8.8804 |
| C6E100LB10 | 7.3077 | | | | |
| C7E100LB10 | 5.1895 | | | | |
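The C*E100LB10 runs compared in Tables 2–5 all frame the discharge series as sliding windows (Epoch = 100, look_back = 10) before it is fed to the MLP, RNN, LSTM, or GRU. A minimal sketch of that windowing, assuming a 1-D voltage series (the function name `create_dataset` and the synthetic trace are illustrative, not the paper's measured data):

```python
import numpy as np

def create_dataset(series, look_back=10):
    """Slice a 1-D discharge-voltage series into supervised
    (window, next-value) pairs: X[i] holds look_back consecutive
    samples and y[i] is the sample that follows them."""
    X, y = [], []
    for i in range(len(series) - look_back):
        X.append(series[i:i + look_back])
        y.append(series[i + look_back])
    return np.asarray(X), np.asarray(y)

# Example: a synthetic, monotonically decaying "discharge" trace
# from the 4.2 V charging cut-off down to the 3.0 V discharge cut-off.
voltage = np.linspace(4.2, 3.0, 300)
X, y = create_dataset(voltage, look_back=10)
print(X.shape, y.shape)   # (290, 10) (290,)
```

A larger look_back (e.g. the Epoch = 150, look_back = 20 run of Figure 12) gives each prediction more history at the cost of fewer training pairs.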
Table 6. The parameters of GA (L brand 18650).

| Parameter | Range | True Value |
|---|---|---|
| I | 0.01~110 | 72.6875 |
| K | 1.4 × 10^−13~1.4 × 10^−19 | 1.3633 × 10^−13 |
| Vk | 0~4000 | 3674 |
| Vc | 0~42,000 | 1292 |
| Vo1 | 0~3000 | 2404 |
| Vo2 | 0~35,000 | 10,603 |
| Vo3 | 0~4000 | 2555 |
| Vo4 | 0~70,000 | 30,444 |
| τ1 | 0~140,000 | 5339 |
| τ2 | 0~450,000,000 | 113,874,966 |
| τ3 | 0~187,000 | 55,007 |
| τ4 | 0~170,000 | 81,575 |
| τ5 | 0~10,000 | 604 |
| τ6 | 0~900,000,000 | 195,378,993 |
Table 7. The parameters of GA (P brand 18650).

| Parameter | Range | True Value |
|---|---|---|
| I | 0.01~110 | 76.8125 |
| K | 1.4 × 10^−13~1.4 × 10^−19 | 9.1103 × 10^−14 |
| Vk | 0~4000 | 2941 |
| Vc | 0~42,000 | 1087 |
| Vo1 | 0~3000 | 697 |
| Vo2 | 0~35,000 | 9232 |
| Vo3 | 0~4000 | 3554 |
| Vo4 | 0~70,000 | 8213 |
| Vo5 | 0~70,000 | 34,247 |
| τ1 | 0~140,000 | 35,186 |
| τ2 | 0~450,000,000 | 151,985,257 |
| τ3 | 0~187,000 | 7117 |
| τ4 | 0~170,000 | 87,017 |
| τ5 | 0~10,000 | 651 |
| τ6 | 0~900,000,000 | 155,079,888 |
| τ7 | 0~900,000,000 | 712,175,902 |
Table 8. The parameters of GA (S brand 18650).

| Parameter | Range | True Value |
|---|---|---|
| I | 0.01~110 | 82.75 |
| K | 1.4 × 10^−13~1.4 × 10^−19 | 1.3061 × 10^−13 |
| Vk | 0~4000 | 2965 |
| Vc | 0~42,000 | 1144 |
| Vo1 | 0~3000 | 2456 |
| Vo2 | 0~35,000 | 33,368 |
| Vo3 | 0~4000 | 2632 |
| Vo4 | 0~70,000 | 52,797 |
| Vo5 | 0~70,000 | 27,897 |
| τ1 | 0~140,000 | 139,951 |
| τ2 | 0~450,000,000 | 210,499,733 |
| τ3 | 0~187,000 | 2942 |
| τ4 | 0~170,000 | 47,774 |
| τ5 | 0~10,000 | 671 |
| τ6 | 0~900,000,000 | 818,734,152 |
| τ7 | 0~900,000,000 | 457,604,887 |