Temporal Attention Mechanism Based Indirect Battery Capacity Prediction Combined with Health Feature Extraction
Abstract
1. Introduction
- Advanced feature extraction for battery health monitoring: In this study, key features indicative of battery degradation are extracted from easily measurable parameters of the battery charge/discharge cycle. The paper details a systematic approach to extracting battery health indicators from available data, categorizing them into measured and calculated health indicators. It also quantifies the correlation between the extracted health indicators and battery capacity, which helps identify the most relevant and effective parameters for capacity prediction. These metrics deepen the understanding of battery aging trajectories and provide a solid foundation for predictive analysis.
- Indirect battery capacity prediction model based on a temporal attention mechanism: The model adds a temporal attention mechanism to capture the time-dependent nuances in the battery charge/discharge data, revealing its intrinsic dynamic behavior and degradation mechanisms. This focus on temporal dynamics greatly enhances the model's ability to adapt to datasets of different sizes and characteristics, expanding its applicability across battery capacity prediction studies.
- Empirical validation through extensive experimentation: This paper demonstrates the proposed model's consistency and adaptability through experiments on multiple battery datasets. The accuracy and robustness of the predictions are verified through in-depth error distribution analysis and visualization. In addition, a comparative analysis against four well-established models highlights the proposed model's superior prediction accuracy and reliability, particularly the gains from the temporal attention mechanism and the feature extraction process.
2. Hybrid Network for Battery Capacity Prediction
2.1. Temporal Attention Mechanism
- Formulation of Attention Scores: In the first stage, an attention score is formulated for each time step within a defined series of hidden representations $\{h_1, h_2, \ldots, h_T\}$. This entails deploying a parameterized function to evaluate the resemblance between the focal time step $t$ and its counterparts, represented as

  $e_{t,k} = \mathrm{score}\left(h_t, h_k\right), \quad k = 1, \ldots, T$

- Synthesis of Attention Weights: Following this, attention weights are synthesized by normalizing the attention scores with the softmax function, ensuring they sum to one. This computation is mathematically illustrated as

  $\alpha_{t,k} = \frac{\exp\left(e_{t,k}\right)}{\sum_{j=1}^{T}\exp\left(e_{t,j}\right)}$

  These weights mirror the significance attributed to each time step, dictating its contribution to the representation of the current time step.

- Computation of the Weighted Sum: Subsequently, a weighted sum of the input series is constructed from the attention weights, yielding a refined representation (context vector) of the present time step. This is mathematically conveyed as

  $c_t = \sum_{k=1}^{T} \alpha_{t,k}\, h_k$
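To make these three steps concrete, the following is a minimal NumPy sketch of one common instantiation. The paper does not specify the score function, so the bilinear form used here (and all variable names) are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np

def temporal_attention(h, W):
    """Dot-product temporal attention over a sequence of hidden states.

    h : (T, d) array of hidden states h_1..h_T
    W : (d, d) parameter matrix of the (assumed bilinear) score function
    Returns (T, d) context vectors, one weighted sum per time step.
    """
    scores = h @ W @ h.T                         # e[t, k] = score(h_t, h_k)
    scores -= scores.max(axis=1, keepdims=True)  # stabilize the softmax
    alpha = np.exp(scores)
    alpha /= alpha.sum(axis=1, keepdims=True)    # weights in each row sum to one
    return alpha @ h                             # c_t = sum_k alpha[t, k] * h_k

# Example: 5 time steps with hidden size 3
rng = np.random.default_rng(0)
h = rng.normal(size=(5, 3))
W = rng.normal(size=(3, 3))
c = temporal_attention(h, W)  # (5, 3) context vectors
```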
2.2. Convolutional Neural Network
2.3. Long Short-Term Memory Networks
- Forget Gate ($f_t$): Employs the sigmoid function to decide which information is forgotten or retained from the cell state.
- Input Gate ($i_t$): Also uses the sigmoid function to determine which new information is added to the cell state.
- Candidate Cell State ($\tilde{C}_t$): Generates potential new values for the cell state, using the tanh function.
- Cell State ($C_t$): Maintains the long-term flow of information within the LSTM unit, updating based on the outputs of the forget gate and input gate.
- Output Gate ($o_t$): Utilizes the sigmoid function to control which information from the cell state is outputted.
- Hidden State ($h_t$): Represents the output of the LSTM unit at the current time step, based on the decisions of the output gate and the current cell state.
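In the standard LSTM formulation, these components are computed as follows, with $\sigma$ the sigmoid function, $\odot$ element-wise multiplication, and $[h_{t-1}, x_t]$ the concatenation of the previous hidden state and the current input:

```latex
\begin{aligned}
f_t &= \sigma\left(W_f\,[h_{t-1}, x_t] + b_f\right) && \text{forget gate} \\
i_t &= \sigma\left(W_i\,[h_{t-1}, x_t] + b_i\right) && \text{input gate} \\
\tilde{C}_t &= \tanh\left(W_C\,[h_{t-1}, x_t] + b_C\right) && \text{candidate cell state} \\
C_t &= f_t \odot C_{t-1} + i_t \odot \tilde{C}_t && \text{cell state update} \\
o_t &= \sigma\left(W_o\,[h_{t-1}, x_t] + b_o\right) && \text{output gate} \\
h_t &= o_t \odot \tanh\left(C_t\right) && \text{hidden state}
\end{aligned}
```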
2.4. Hybrid Network for Battery Capacity Prediction
- Network Structure Diagram: To illustrate the network structure in detail, Figure 2 diagrams the prediction model based on the temporal attention mechanism, clarifying the model's functionality and the interplay between its layers.
- Initial Data Processing: Before the network operates, the data pass through several transformation stages to ensure optimal performance. Data obtained from the battery datasets are normalized with MinMaxScaler, which preserves the data's intrinsic distribution while rescaling values to a predetermined range (0 to 1 in this case). The data are then divided into training and test sets at an 80-20 ratio. Using a custom function called create_dataset, the data are shaped into a format consistent with LSTM inputs, with a look-back parameter that incorporates past time steps into the prediction model. A minimal sketch of this preprocessing appears after this list.
- Input Layer: The network begins at the input layer, which accepts the transformed data; its shape is aligned with the feature dimensions of the training dataset.
- Convolution Phase: After the input layer, the data enter the convolution phase, where salient features are extracted using 64 filters with a kernel size of 3. The Rectified Linear Unit (ReLU) activation introduces nonlinearity, enhancing the network's ability to capture complex patterns in the data.
- Dimensionality Reduction Layer: To simplify computation and highlight the main features, a max pooling layer with a pool size of 1 is incorporated. This layer reduces the spatial dimensionality of the output volumes, preserving essential information and speeding up computation.
- Sequential LSTM Layers: For temporal analysis, the LSTM layers are the core element, proficient at capturing time-related dependencies in the data. Stacked in sequence, these layers excel at identifying long-term dependencies, providing a nuanced understanding of the temporal patterns critical to accurate prediction.
- Attention Mechanism: This mechanism allows the model to selectively focus on different segments of the input sequence, highlighting important patterns and dependencies and thus improving prediction accuracy.
- Output Segment: The final part of the network is a dense layer in which a single unit synthesizes the extracted information to produce the final battery capacity prediction, translating the insights drawn from the data into an actual predicted capacity value.
- Model Configuration and Training: Once the network structure is defined, the model enters the configuration and training phase. The Adam optimizer, known for its adaptive learning rate, is chosen to minimize the mean squared error loss; training runs for 30 epochs with a batch size of 2.
- Prediction and Performance Evaluation: After training, the model enters the prediction phase, producing predictions for both the training and test datasets. These predictions are then inverse-transformed to undo the normalization applied earlier, a crucial step for recovering predictions on the original scale and accurately assessing the model's efficiency.
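As referenced in the list above, the following is a minimal sketch of the preprocessing pipeline, assuming scikit-learn and NumPy. The look-back value and the placeholder data are illustrative; the paper does not state the exact look-back used.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

def create_dataset(series, look_back=3):
    """Slice a normalized series into (samples, look_back, features) windows,
    each paired with the next step's capacity as the prediction target."""
    X, y = [], []
    for i in range(len(series) - look_back):
        X.append(series[i:i + look_back])
        y.append(series[i + look_back, 0])  # column 0 assumed to hold capacity
    return np.array(X), np.array(y)

# Placeholder for real data: one row per cycle, capacity plus health indicators
series = np.random.rand(120, 4)

scaler = MinMaxScaler(feature_range=(0, 1))  # rescale each feature to [0, 1]
series_scaled = scaler.fit_transform(series)

split = int(len(series_scaled) * 0.8)        # 80-20 train/test split
train, test = series_scaled[:split], series_scaled[split:]
X_train, y_train = create_dataset(train)
X_test, y_test = create_dataset(test)
```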
3. Feature Extraction
3.1. Data Description
3.1.1. NASA Dataset
3.1.2. MIT Dataset
3.2. Extraction of Health Indicators
3.2.1. Directly Measured Indicators of Battery Aging
3.2.2. Calculated Indicators of Battery Aging
3.3. Correlation Analysis
4. Model Framework and Model Training
4.1. The Framework of Capacity Estimation
- Data Collection and Feature Extraction: The analytical process initiates with the extraction of key HIs from the raw battery data, supplemented by data normalization to ensure compatibility with the model requirements. These HIs are pivotal in providing a comprehensive depiction of the battery's health status.

  In terms of measured HIs, the focus is placed on parameters directly observable from the battery's operational data: the duration of the constant current (CC) charging stage ($t_{CC}$), variations in charging voltage over equidistant time intervals (ETCV), and the temporal location of the peak temperature during the discharge phase (TPL). These indicators directly reflect the battery's aging process, encapsulating the effects of wear and tear.

  Pertaining to calculated HIs, the analysis delves into the subtle electrochemical changes occurring within the battery. This involves examining the incremental capacity (IC) curve to extract its peak (ICP) and the location of that peak (ICPL). Additionally, the sample entropy (SampEn) of the discharge voltage is computed. These calculated metrics provide a holistic view of the battery's health, enriching the analysis.

  Concluding this phase, a comprehensive correlation analysis establishes the relationship between these HIs and the battery's capacity. This step isolates the HIs that significantly impact capacity, ensuring the predictive model is grounded in relevant and substantial data; the quantitative relationships it reveals facilitate feature selection and enhance the model's accuracy and reliability. The results are succinctly presented in a heatmap, providing a clear visual representation of the correlation between each indicator and battery capacity and laying the groundwork for the subsequent predictive modeling. A minimal sketch of this screening step appears after this list.
- Capacity Prediction Model: The construction of the capacity prediction model hinges on integrating a temporal attention mechanism with a deep learning network. After preprocessing, the data are fed through an input layer, followed by a convolutional layer that extracts local features from the time series. To optimize computational efficiency, a max pooling layer reduces both the parameter count and the computational load.

  LSTM layers then capture the inherent temporal dependencies in the data, a critical component for time series prediction tasks such as battery capacity forecasting. To bolster the model's focus on significant temporal features, a temporal attention layer is combined with the LSTM layers, allowing the model to allocate different attention weights to time steps based on their relevance.

  Training proceeds sequentially: local features are extracted by the convolutional layer, long-term dependencies are captured by the LSTM layers, and the temporal attention mechanism maintains focus on pivotal time steps, enhancing prediction accuracy.

  Throughout the training phase, appropriate optimizers and loss functions are chosen to guarantee both the accuracy and stability of the predictions. This approach to model training and architecture selection culminates in a robust, reliable capacity prediction model adept at navigating the complexities of battery capacity data; a Keras-style sketch of the architecture appears after this list.
- Model Evaluation and Comparison: The capacity prediction model is evaluated with a series of metrics, including Mean Squared Error (MSE), Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), and Maximum Absolute Error (MaxAE). Together these metrics provide insight into the accuracy, stability, and robustness of the model across scenarios.

  The proposed model is rigorously compared with several established network structures to validate its effectiveness and demonstrate its superiority: a CNN-LSTM network, a CNN-LSTM network enhanced with transformers, a standalone CNN, and a standalone LSTM.

  To facilitate a thorough understanding of the comparative performance, a variety of visual representations such as prediction plots, error curves, and radar charts are employed. This extensive comparative analysis highlights the unique advantages and superior predictive capabilities of the indirect battery capacity prediction model on complex battery capacity estimation tasks.

  With this composite network framework, this study aims to mine the hidden information in battery data and construct an efficient and accurate time series prediction model.
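As noted above, the correlation screening step can be sketched as follows. This is a minimal example assuming pandas and seaborn, with Pearson correlation and random placeholder data; the column names mirror the HIs in the correlation table but are otherwise illustrative.

```python
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

cols = ["ETCV", "t_CC", "TPL", "ICP", "ICPL", "SampEn", "capacity"]
# Placeholder data; in practice each row holds one cycle's extracted HIs
df = pd.DataFrame(np.random.rand(100, len(cols)), columns=cols)

corr = df.corr(method="pearson")  # correlation of every HI with capacity
sns.heatmap(corr, annot=True, cmap="coolwarm", vmin=-1, vmax=1)
plt.title("Correlation between health indicators and capacity")
plt.show()
```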
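Likewise, a Keras-style sketch of the prediction network is given below. The layer sizes (64 filters, kernel size 3, dropout 0.3, two 50-unit LSTM layers, 4-head attention with key dimension 2) follow the layer table later in the paper; the padding choice, input shape, and exact wiring of the attention layer are assumptions.

```python
from tensorflow.keras import layers, models

look_back, n_features = 3, 4                # illustrative input shape

inputs = layers.Input(shape=(look_back, n_features))
x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(inputs)
x = layers.MaxPooling1D(pool_size=1)(x)     # pool size of 1, as stated
x = layers.Dropout(0.3)(x)
x = layers.LSTM(50, return_sequences=True)(x)
# Temporal attention: self-attention across time steps, 4 heads, key_dim 2
x = layers.MultiHeadAttention(num_heads=4, key_dim=2)(x, x)
x = layers.LSTM(50)(x)                      # second LSTM collapses the sequence
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(1)(x)                # single-unit capacity output

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
# model.fit(X_train, y_train, epochs=30, batch_size=2)
```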
4.2. Model Training
5. Simulation and Verification
5.1. The Evaluation Criteria
- Mean Squared Error (MSE): This metric quantifies the average of the squared errors between the true and predicted values, offering insight into the magnitude of the error generated by the model. With $y_i$ the true value, $\hat{y}_i$ the predicted value, and $n$ the number of samples, the mathematical representation is as follows:

  $\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2$

- Mean Absolute Error (MAE): MAE computes the average of the absolute differences between the actual and forecasted values, portraying a clear picture of the model's prediction accuracy:

  $\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|$

- Root Mean Squared Error (RMSE): This metric is the square root of the MSE, expressed in the original data units and hence offering a more interpretable view of the average error magnitude:

  $\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}$

- Mean Absolute Percentage Error (MAPE): MAPE furnishes a relative measure of prediction accuracy, computing the average percentage error between the actual and predicted values:

  $\mathrm{MAPE} = \frac{100\%}{n}\sum_{i=1}^{n}\left|\frac{y_i - \hat{y}_i}{y_i}\right|$

- Maximum Absolute Error (MaxAE): This metric captures the largest absolute error between the true and predicted values, highlighting the worst-case error in the model's predictions:

  $\mathrm{MaxAE} = \max_{1 \le i \le n}\left|y_i - \hat{y}_i\right|$
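The five criteria reduce to a few lines of NumPy; the following sketch computes them together (MAPE is returned as a percentage):

```python
import numpy as np

def evaluation_metrics(y_true, y_pred):
    """Compute MSE, MAE, RMSE, MAPE (%), and MaxAE for a set of predictions."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    mse = np.mean(err ** 2)
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(mse)
    mape = 100.0 * np.mean(np.abs(err / y_true))  # assumes no zero true values
    maxae = np.max(np.abs(err))
    return {"MSE": mse, "MAE": mae, "RMSE": rmse, "MAPE": mape, "MaxAE": maxae}
```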
5.2. Capacity Prediction Based on the Proposed Method
5.2.1. Analysis of Prediction Curves
5.2.2. Error Analysis
5.2.3. Quantitative Evaluation
- MSE: The proposed model manifests low MSE values across all datasets, indicating its precision and accuracy in predicting battery capacities.
- RMSE: Being more sensitive to outliers, the RMSE better reflects the presence of larger errors. As the table shows, the proposed model attains relatively low RMSE values on all battery datasets, further confirming the accuracy of its predictions.
- MAPE: The MAPE values underline the model’s precision, reflecting minor deviations from the true values.
- MaxAE: From the MaxAE values in the table, it can be seen that the maximum prediction errors of the model are relatively small on all datasets, further validating the reliability and robustness of the model.
5.3. Method Comparison and Error Analysis
5.3.1. Performance Metrics Analysis
- B_5 battery dataset: The time_attention model demonstrates exceptional predictive accuracy on the B_5 battery dataset, achieving a Test MSE of 0.0126%, approximately 79.7% lower than that of the next best model, CNN_LSTM (relative reduction computed as (0.0620 − 0.0126)/0.0620 ≈ 79.7%). This remarkable performance underscores its precision and reliability in capacity estimation. In terms of Test MAPE, the model outperforms the CNN_LSTM model by approximately 68.1%, showcasing its superior predictive accuracy and reliability.
- B_7 battery dataset: On the B_7 battery dataset, the time_attention model maintains a high level of accuracy, achieving a Test MSE that is 85.5% lower than that of the CNN_LSTM model. Even though some metrics show slight fluctuations, the model invariably stands out, particularly in minimizing the Test MAPE, where it achieves a 70.5% reduction compared to the CNN_LSTM model. This consistent performance cements its position as a top-tier predictive model.
- B_18 battery dataset: The time_attention model continues to exhibit strong predictive ability on the B_18 battery dataset, achieving a Test MSE that is 14.7% lower than that of the CNN_LSTM model (and 5.0% lower than the standalone CNN, the next best in this metric). It also reduces the Test MAPE by 58.6% relative to the CNN_LSTM_trans model, although the standalone CNN attains a slightly lower Test MAPE on this dataset. These results underline the model's reliable performance in battery capacity prediction.
- B_MIT battery dataset: On the larger B_MIT battery dataset, the time_attention model showcases an exceptional performance, achieving a Test MSE approximately 60.1% lower than that of the CNN model, the next best in this metric. Its Test MAPE is also the smallest among all models, reinforcing its reliability and precision. A 47.3% reduction in Test MaxAE compared to the LSTM model highlights its reliability in worst-case scenarios.
5.3.2. Prediction Charts Analysis
5.3.3. Error Curves Analysis
5.3.4. Radar Chart Analysis
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Method | Application in Battery Prediction | Advantages | Drawbacks |
---|---|---|---|
CNN-LSTM Network | Multidimensional time series data processing | Efficient in extracting spatial and temporal features | May struggle with long time series data |
Graph Convolutional Networks + RNN | Modeling complex chemical and physical processes | Accurate representation of battery processes | High complexity, not suitable for real-time applications |
Transformer Networks + Other Networks | Handling various battery prediction tasks | Parallel processing, attention mechanisms | Requires a large amount of training data |
Generative Adversarial Networks (GANs) | Data augmentation for training | Improves model’s generalization ability and prediction accuracy | Training instability issues |
Hybrid Models and Integrated Learning | Enhancing predictive accuracy and stability | Higher predictive accuracy and stability | – |
Feature | B_5 Correlation with Capacity | B_7 Correlation with Capacity | B_18 Correlation with Capacity | B_MIT Correlation with Capacity |
---|---|---|---|---|
ETCV | −0.9895 | −0.9869 | −0.9295 | −0.9603 |
t_CC | 0.9955 | 0.9935 | 0.9889 | 0.9308
TPL | 0.9976 | 0.9992 | 0.9994 | 0.9798
ICP | 0.9880 | 0.9813 | 0.9789 | 0.9829 |
ICPL | 0.9360 | 0.8655 | 0.9228 | 0.9249 |
SampEn | −0.9841 | −0.8260 | −0.9740 | −0.9753 |
Layer Type | Units/Parameters | Description |
---|---|---|
Input Layer | - | Accepts fixed time-step multivariate time series data |
Convolutional Layer | 64 filters, kernel size of 3 | Extracts local features from time series data |
Max Pooling Layer | - | Reduces the spatial dimensions of the data |
Dropout Layer | Dropout rate: 0.3 | Mitigates the phenomenon of overfitting |
LSTM Layer | 50 units per layer | Captures long-term dependencies in the data |
Time-Attention Mechanism | 4 heads, key dimension of 2 | Enhances the model’s ability to understand time series by processing information from different representation subspaces in parallel |
LSTM Layer | 50 units per layer | Captures long-term dependencies in the data after processing by the time-attention mechanism |
Dropout Layer | Dropout rate: 0.3 | Mitigates the phenomenon of overfitting |
Output Layer | - | Produces the prediction results |
Battery | MSE (%) | RMSE (%) | MAE (%) | MAPE (%) | MaxAE (%) |
---|---|---|---|---|---|
B_5 | 0.0126 | 1.1218 | 0.7365 | 0.5550 | 4.4539 |
B_7 | 0.0110 | 1.0474 | 0.7409 | 0.5137 | 3.9064 |
B_18 | 0.0264 | 1.6239 | 1.1483 | 0.8275 | 5.1729 |
B_MIT | 0.0055 | 0.7439 | 0.4616 | 0.5063 | 2.6950 |
Battery | Model | Test MSE (%) | Test RMSE (%) | Test MAE (%) | Test MAPE (%) | Test MaxAE (%)
---|---|---|---|---|---|---
B_5 | time_attention | 0.0126 | 1.1218 | 0.7365 | 0.5550 | 4.4539
B_5 | cnn_lstm | 0.0620 | 2.4898 | 2.2861 | 1.7408 | 4.4446
B_5 | cnn_lstm_trans | 0.3856 | 6.2094 | 5.8276 | 4.4320 | 9.4027
B_5 | cnn | 0.0965 | 3.1060 | 2.8752 | 2.1894 | 5.4075
B_5 | lstm | 0.2004 | 4.4762 | 4.3240 | 3.2851 | 6.5544
B_7 | time_attention | 0.0110 | 1.0474 | 0.7409 | 0.5137 | 3.9064
B_7 | cnn_lstm | 0.0755 | 2.7477 | 2.4967 | 1.7478 | 4.7860
B_7 | cnn_lstm_trans | 1.2348 | 11.1120 | 10.8869 | 7.5932 | 15.2542
B_7 | cnn | 0.0909 | 3.0144 | 2.6673 | 1.8689 | 5.1959
B_7 | lstm | 0.3358 | 5.7945 | 5.6041 | 3.9127 | 8.5977
B_18 | time_attention | 0.0264 | 1.6239 | 1.1483 | 0.8275 | 5.1729
B_18 | cnn_lstm | 0.0309 | 1.7586 | 1.3034 | 0.9376 | 5.7559
B_18 | cnn_lstm_trans | 0.1048 | 3.2372 | 2.7706 | 1.9968 | 6.7867
B_18 | cnn | 0.0278 | 1.6684 | 0.9436 | 0.6748 | 6.0050
B_18 | lstm | 0.0851 | 2.9173 | 2.6979 | 1.9538 | 5.3611
B_MIT | time_attention | 0.0055 | 0.7439 | 0.4616 | 0.5063 | 2.6950
B_MIT | cnn_lstm | 0.0395 | 1.9873 | 1.4818 | 1.61 | 4.9751
B_MIT | cnn_lstm_trans | 0.0240 | 1.5480 | 1.1348 | 1.24 | 3.9657
B_MIT | cnn | 0.0138 | 1.1729 | 0.6852 | 0.75 | 3.8693
B_MIT | lstm | 0.0534 | 2.3107 | 1.9783 | 2.13 | 5.1158