A Vehicle Velocity Prediction Method with Kinematic Segment Recognition
Abstract
1. Introduction
2. Data Preprocessing
2.1. Kinematic Fragment Feature Parameter Extraction
2.2. Principal Component Analysis
1. The initial variable matrix X is constructed from the characteristic parameter information of the kinematic fragments above, where l is the number of characteristic parameters (l = 10) and n is the number of samples (n = 317).
2. The correlation coefficient matrix R is calculated from the correlations between the elements of X.
3. The eigenvalues and eigenvectors of the correlation coefficient matrix are obtained by solving the characteristic equation of R.
4. The principal component contribution rate is calculated from the eigenvalues. Each eigenvalue measures how much of the information contained in the sample data is captured by the corresponding principal component: the larger the eigenvalue, the more of the original information that component describes. Principal components can therefore be selected according to eigenvalue size; the contribution rate represents the degree to which each principal component retains the initial information, and the cumulative contribution rate determines the number of principal components needed to express the features of the original data. The calculation formula of the principal component contribution rate is given below.
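The quantities referred to in these steps take the standard PCA form. The following summary is a reconstruction from those descriptions, not the paper's original notation (λ_j denotes the j-th eigenvalue of R, and X̃ the standardized data matrix):

```latex
% Standard PCA quantities, reconstructed from the steps above.
% X: data matrix of characteristic parameters (n = 317 samples, l = 10 parameters).
\begin{align}
  R &= \frac{1}{n-1}\,\tilde{X}^{\mathsf T}\tilde{X}
  && \text{(correlation matrix of the standardized data)} \\
  \lvert R - \lambda I \rvert &= 0
  && \text{(characteristic equation giving } \lambda_1 \ge \dots \ge \lambda_l\text{)} \\
  \alpha_j &= \frac{\lambda_j}{\sum_{i=1}^{l}\lambda_i}\times 100\%
  && \text{(contribution rate of the } j\text{-th principal component)} \\
  A_m &= \sum_{j=1}^{m}\alpha_j
  && \text{(cumulative contribution rate of the first } m \text{ components)}
\end{align}
```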
2.3. Kinematic Fragment Clustering
1. Following the randomness principle, the algorithm selects k objects from the dataset as the centers of k clusters; these are called the initial cluster centers.
2. The distances from all other objects to the initial cluster centers are calculated, and every object is assigned to one of the k clusters according to the minimum Euclidean distance principle, yielding the k initial clusters. Distance characterizes the similarity between objects, describing the real distance between two objects in space; its calculation formula is given after this list.
3. The cluster centers are then moved: the mean of the data objects in each initial cluster is calculated, and the centroid of each cluster is taken as the new cluster center.
4. The distances from all data objects to the new cluster centers are calculated, and the data objects are reassigned according to the minimum-distance principle. The above steps are repeated; the algorithm ends when the objective function value E reaches its minimum and no longer changes.
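The distance in step 2 and the objective function E in step 4 take the standard K-means form. The following is a reconstruction from those descriptions, written here for the three-dimensional principal component features; the symbols x, ξ_j, and C_j are generic notation rather than the paper's:

```latex
% Standard K-means quantities, reconstructed from steps 2 and 4 above.
\begin{align}
  d(x,\xi_j) &= \sqrt{\sum_{p=1}^{3}\left(x_p - \xi_{j,p}\right)^2}
  && \text{(Euclidean distance between sample } x \text{ and cluster center } \xi_j\text{)} \\
  E &= \sum_{j=1}^{k}\sum_{x \in C_j}\left\lVert x - \xi_j \right\rVert^2
  && \text{(total within-cluster squared distance over the } k \text{ clusters)}
\end{align}
```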
3. Methodology
3.1. QPSO Model
3.2. LSTM Model
3.3. Prediction Model Based on K-Means-QPSO-LSTM
- Data processing. The data processing steps were detailed in Section 2 and are not repeated here. Three types of driving condition datasets were obtained through data processing. For the first type of dataset, because it contained the most data, the first 90% was used as the training set and the last 10% as the test set. For the second and third types of datasets, the first 70% of the driving condition data was used as the training set and the last 30% as the test set. The training set was used to train the sub-LSTM models and optimize their parameters, while the test set was used to evaluate model performance.
- Initialize the particle swarm parameters, including the swarm size, particle dimension, number of iterations, and particle position (the hidden layer neuron numbers (La_i, Lb_i), training number (K_i), and learning rate (lr_i), with i = 1, 2, 3 indexing the driving condition type).
- The velocity and position of each particle are updated, and the fitness value of each particle is calculated; the fitness value is defined as a function of the mean square error (MSE) on the training set.
- Update each particle's individual historical optimal fitness value and position, the population's historical optimal fitness value and position, and the other related parameters.
- Check whether the end condition is satisfied, i.e., whether the maximum number of iterations has been reached or the change in fitness value between two successive iterations has fallen below a minimum threshold. If not, return to the particle update step to continue the iteration.
- If the conditions are met, the global optimal values are output to the sub-LSTM networks as the optimized hyperparameters. A simplified sketch of this optimization loop is given after this list.
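The loop described above can be sketched as follows. This is a minimal illustration rather than the authors' implementation: `train_and_evaluate_lstm` is a hypothetical placeholder that would train a sub-LSTM with the given hyperparameters and return its training-set MSE, and the bounds, swarm size, and contraction-expansion schedule are assumed values.

```python
import numpy as np

# Hypothetical placeholder: train a sub-LSTM with hidden sizes La and Lb,
# K training epochs, and learning rate lr, then return the training-set MSE.
def train_and_evaluate_lstm(La, Lb, K, lr):
    # Stand-in fitness so the sketch runs; replace with real LSTM training.
    return ((La - 80) ** 2 + (Lb - 90) ** 2 + (K - 95) ** 2) * 1e-4 + (lr - 0.009) ** 2

def fitness(position):
    La, Lb, K = (int(round(v)) for v in position[:3])
    lr = float(position[3])
    return train_and_evaluate_lstm(La, Lb, K, lr)

def qpso_search(lower, upper, n_particles=20, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size

    # Initialize particle positions and personal/global bests.
    x = rng.uniform(lower, upper, size=(n_particles, dim))
    pbest = x.copy()
    pbest_fit = np.array([fitness(p) for p in pbest])
    g = pbest_fit.argmin()
    gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]

    for t in range(n_iter):
        # Contraction-expansion coefficient, linearly decreased (assumed schedule).
        beta = 1.0 - 0.5 * t / n_iter
        mbest = pbest.mean(axis=0)  # mean of all personal best positions
        for i in range(n_particles):
            phi = rng.random(dim)
            u = rng.random(dim)
            # Local attractor between the personal and global best positions.
            p = phi * pbest[i] + (1.0 - phi) * gbest
            # QPSO position update (no explicit velocity term).
            sign = np.where(rng.random(dim) < 0.5, 1.0, -1.0)
            x[i] = np.clip(p + sign * beta * np.abs(mbest - x[i]) * np.log(1.0 / u),
                           lower, upper)

            f = fitness(x[i])
            if f < pbest_fit[i]:
                pbest[i], pbest_fit[i] = x[i].copy(), f
                if f < gbest_fit:
                    gbest, gbest_fit = x[i].copy(), f
    return gbest, gbest_fit

# Search ranges for (La, Lb, K, lr); the bounds here are illustrative assumptions.
best, best_fit = qpso_search(lower=[10, 10, 10, 0.001], upper=[150, 150, 150, 0.01])
print("best hyperparameters (La, Lb, K, lr):", best, "fitness:", best_fit)
```

In the actual method, the fitness evaluation would train each candidate sub-LSTM on the corresponding driving-condition training set, and the loop would also terminate early once the best fitness stops changing.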
4. Prediction and Analysis
4.1. Training Setting
4.2. Parameter Setting and Training
4.3. Simulation Results Based on K-Means-QPSO-LSTM
5. Conclusions
- Kinematic segments of typical driving cycles were extracted, and ten characteristic parameters, including the average velocity, the mean square error of velocity, and the maximum velocity, were calculated. To reduce computation and avoid information overlap, principal component analysis was used to reduce the dimension of the feature vector composed of these characteristic parameters. Each principal component is a combination of the ten features, and the first three principal components were used to describe the ten characteristic parameters of each kinematic segment, reducing the feature vector from ten dimensions to three.
- According to the three principal component values of each kinematic segment sample, the K-means algorithm was used to cluster the samples, and sample sets under three typical driving conditions were obtained. These respectively represent Type 3 (high velocity and long driving mileage), Type 1 (low velocity, long idling time, and short driving mileage), and Type 2 (characteristics between the other two driving conditions).
- The LSTM-based vehicle velocity prediction sub-models were trained with the three kinds of kinematic fragment samples, respectively, and the optimal LSTM hyperparameters were found by iterative correction with the QPSO algorithm. Combining kinematic segment recognition with the velocity prediction sub-models yields the vehicle velocity prediction method based on K-means-QPSO-LSTM. The method adaptively switches among the prediction sub-models in real time according to the actual driving information, and its prediction accuracy is improved compared with traditional neural network-based vehicle velocity prediction methods.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Table: Characteristic parameters of the extracted kinematic fragments (excerpt).

Fragment | v_ave (km/h) | v_max (km/h) | σ | η_a | η_d | a_max (m/s²) | a_min (m/s²) | a_a_ave (m/s²) | a_d_ave (m/s²) | S (m)
---|---|---|---|---|---|---|---|---|---|---
1 | 31.44 | 51.8 | 19.37 | 0.38 | 0.56 | 7.2 | −5 | 2.94 | −1.99 | 1603.6
2 | 18.05 | 30.3 | 11.08 | 0.62 | 0.38 | 4.2 | −5.8 | 2.33 | −3.79 | 397.2
… | … | … | … | … | … | … | … | … | … | …
317 | 10.47 | 18.4 | 6.56 | 0.53 | 0.32 | 3.17 | −4.44 | 1.84 | −3.07 | 209.45
Table: Eigenvalues, contribution rates, and cumulative contribution rates of the principal components.

Principal Component | Eigenvalue | Contribution Rate (%) | Cumulative Contribution Rate (%)
---|---|---|---
1 | 4.5073 | 45.0732 | 45.0732
2 | 1.8052 | 18.0520 | 63.1251
3 | 1.5257 | 15.2566 | 78.3817
4 | 0.6908 | 6.9083 | 85.2901
5 | 0.5545 | 5.5446 | 90.8346
6 | 0.4695 | 4.6949 | 95.5295
7 | 0.2602 | 2.6018 | 98.1312
8 | 0.1190 | 1.1901 | 99.3213
9 | 0.0586 | 0.5859 | 99.9072
10 | 0.0093 | 0.0928 | 100.0000
Table: Eigenvector loadings of the first three principal components on the normalized characteristic parameters.

Normalized Variable | Eigenvector 1 | Eigenvector 2 | Eigenvector 3
---|---|---|---
v_ave | 0.4367 | −0.0049 | 0.2382
v_max | 0.4429 | 0.0262 | 0.2279
σ | 0.4423 | 0.0423 | 0.0872
η_a | −0.0909 | 0.5603 | 0.1888
η_d | −0.1656 | −0.3673 | 0.4079
a_max | 0.2461 | −0.4672 | −0.1525
a_min | −0.3896 | −0.2130 | 0.1238
a_a_ave | 0.1629 | −0.4478 | −0.3609
a_d_ave | −0.2842 | −0.2846 | 0.4992
S | 0.2488 | −0.0563 | 0.5132
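The scores (f1, f2, f3) listed for the samples in the next table follow from projecting each fragment's normalized characteristic parameters onto these eigenvectors. The projection is written below in generic notation as a reconstruction (x̃ is the normalized 10-dimensional feature vector and e_j the j-th eigenvector column above):

```latex
% Projection of a normalized feature vector onto the first three eigenvectors.
f_j = \mathbf{e}_j^{\mathsf T}\,\tilde{\mathbf{x}}
    = \sum_{p=1}^{10} e_{j,p}\,\tilde{x}_p,
\qquad j = 1, 2, 3 .
```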
Sample | f1 | f2 | f3 |
---|---|---|---|
1 | −3.1689 | −1.2193 | 0.8208 |
2 | −3.164 | −1.2196 | 0.8176 |
3 | −3.1628 | −1.2197 | 0.8161 |
4 | −3.1605 | −1.2198 | 0.8144 |
5 | −3.159 | −1.2195 | 0.8138 |
Table: Cluster centers obtained by K-means in the principal component space.

Cluster Center | First Principal Component | Second Principal Component | Third Principal Component
---|---|---|---
ξ1 | −1.03 | 0.09 | 0.13
ξ2 | 2.34 | −0.98 | −1.17
ξ3 | 5.50 | −0.57 | 3.71
Table: Sub-LSTM hyperparameters obtained for each driving condition type: hidden layer sizes La_i and Lb_i, training number K_i, and learning rate lr_i (i = 1, 2, 3).

Driving Condition Type | Prediction Method | La_i | Lb_i | K_i | lr_i
---|---|---|---|---|---
Type 1 | LSTM | 120 | 80 | 100 | 0.0035
Type 1 | PSO-LSTM | 77 | 80 | 71 | 0.0079
Type 1 | QPSO-LSTM | 75 | 116 | 90 | 0.0094
Type 2 | LSTM | 100 | 80 | 100 | 0.005
Type 2 | PSO-LSTM | 75 | 53 | 96 | 0.0096
Type 2 | QPSO-LSTM | 61 | 80 | 99 | 0.00977
Type 3 | LSTM | 90 | 60 | 100 | 0.0055
Type 3 | PSO-LSTM | 100 | 134 | 94 | 0.0094
Type 3 | QPSO-LSTM | 85 | 95 | 92 | 0.0093
Table: Prediction performance of the three methods for each driving condition type over 2 s, 5 s, and 10 s prediction horizons.

Driving Condition Type | Prediction Method | R² | RMSE (km/h) | MAE (km/h) | Time Horizon (s)
---|---|---|---|---|---
Type 1 | LSTM | 0.9723 | 1.9662 | 1.5097 | 2
Type 1 | PSO-LSTM | 0.97456 | 1.8845 | 1.404 | 2
Type 1 | QPSO-LSTM | 0.97583 | 1.8367 | 1.3522 | 2
Type 2 | LSTM | 0.98785 | 1.8531 | 1.288 | 2
Type 2 | PSO-LSTM | 0.99161 | 1.6076 | 1.0857 | 2
Type 2 | QPSO-LSTM | 0.99188 | 1.5153 | 1.002 | 2
Type 3 | LSTM | 0.99134 | 2.8681 | 2.0725 | 2
Type 3 | PSO-LSTM | 0.99199 | 2.7587 | 2.0203 | 2
Type 3 | QPSO-LSTM | 0.99481 | 2.2208 | 1.8171 | 2
Type 1 | LSTM | 0.79538 | 6.3759 | 3.4762 | 5
Type 1 | PSO-LSTM | 0.87641 | 4.1451 | 2.9673 | 5
Type 1 | QPSO-LSTM | 0.90036 | 4.0226 | 2.1233 | 5
Type 2 | LSTM | 0.94099 | 4.2559 | 2.9106 | 5
Type 2 | PSO-LSTM | 0.94736 | 4.022 | 2.8344 | 5
Type 2 | QPSO-LSTM | 0.95043 | 3.9041 | 2.67 | 5
Type 3 | LSTM | 0.9722 | 5.738 | 3.9431 | 5
Type 3 | PSO-LSTM | 0.96385 | 4.7484 | 3.2997 | 5
Type 3 | QPSO-LSTM | 0.96473 | 4.6913 | 3.2972 | 5
Type 1 | LSTM | 0.32817 | 9.6329 | 7.0982 | 10
Type 1 | PSO-LSTM | −0.035857 | 11.9613 | 8.3113 | 10
Type 1 | QPSO-LSTM | 0.22749 | 10.3295 | 6.9411 | 10
Type 2 | LSTM | 0.78341 | 8.1622 | 5.7281 | 10
Type 2 | PSO-LSTM | 0.74334 | 8.8853 | 5.7359 | 10
Type 2 | QPSO-LSTM | 0.80565 | 7.7318 | 5.2808 | 10
Type 3 | LSTM | 0.87092 | 8.9326 | 5.7073 | 10
Type 3 | PSO-LSTM | 0.8788 | 8.6566 | 5.6616 | 10
Type 3 | QPSO-LSTM | 0.8655 | 9.1182 | 5.5855 | 10
Table: Comparison of the K-means-QPSO-LSTM method with other prediction methods over a 5 s prediction horizon.

Prediction Method | R² | RMSE (km/h) | MAE (km/h) | Time Horizon (s)
---|---|---|---|---
RBF | 0.98996 | 2.9754 | 2.3851 | 5
LSTM | 0.98947 | 2.9836 | 2.4226 | 5
CNN-LSTM | 0.99134 | 2.8205 | 2.2161 | 5
K-means-QPSO-LSTM | 0.99397 | 2.2578 | 1.6976 | 5
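For reference, the three metrics reported in the tables above (R², RMSE, and MAE) can be computed as in the following sketch; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Return R^2, RMSE, and MAE for predicted vs. actual velocity (km/h)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    residual = y_true - y_pred
    ss_res = np.sum(residual ** 2)                      # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)      # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean(residual ** 2))
    mae = np.mean(np.abs(residual))
    return r2, rmse, mae

# Example with dummy velocity sequences (km/h).
actual = [30.0, 32.5, 35.0, 33.0, 31.0]
predicted = [29.5, 33.0, 34.0, 33.5, 30.0]
print(regression_metrics(actual, predicted))
```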