Article

CEEMDAN-IPSO-LSTM: A Novel Model for Short-Term Passenger Flow Prediction in Urban Rail Transit Systems

1 School of Electrical Engineering, Jiangxi University of Science and Technology, Ganzhou 341000, China
2 State Key Lab of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing 100044, China
3 Ganjiang Innovation Academy, Chinese Academy of Sciences, Ganzhou 341000, China
* Author to whom correspondence should be addressed.
Int. J. Environ. Res. Public Health 2022, 19(24), 16433; https://doi.org/10.3390/ijerph192416433
Submission received: 9 November 2022 / Revised: 25 November 2022 / Accepted: 2 December 2022 / Published: 7 December 2022

Abstract

Urban rail transit (URT) is a key mode of public transport that serves the bulk of urban travel demand. Short-term passenger flow prediction aims to improve the effectiveness of operations management and avoid wasting public transport resources. Handling the nonlinearity, correlation, and periodicity of passenger flow series within a single model is difficult. This paper proposes a combined short-term passenger flow prediction model based on complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and the long short-term memory neural network (LSTM) to predict the short-term passenger flow of URT more accurately, with the hyperparameters of the LSTM tuned by improved particle swarm optimization (IPSO). First, the CEEMDAN-IPSO-LSTM model decomposes the passenger flow data with CEEMDAN, obtaining uncoupled intrinsic mode functions and a residual sequence after removing noisy data. Second, a CEEMDAN-IPSO-LSTM prediction model was built for each decomposed component and the component predictions were extracted. Third, the experimental results showed that, compared with the single LSTM model, the CEEMDAN-IPSO-LSTM model reduced SD, RMSE, MAE, and MAPE by 40/35 persons, 44/35 persons, 37/31 persons, and 46.89%/35.1% (inbound/outbound), and increased R and R2 by 2.32%/3.63% and 2.19%/1.67%, respectively. The model can reduce public health risks caused by excessive crowding of passengers (especially during the COVID-19 period), lessen negative environmental impacts by optimizing traffic flows, and support the development of low-carbon transportation.

1. Introduction

The influence of human activities on the global climate, characterized by global warming, has had serious negative impacts on public health. Energy conservation and carbon reduction have become serious environmental development issues to address. At the 75th United Nations General Assembly on 22 September 2020, China announced it would reach a peak in CO2 emissions by 2030 and achieve carbon neutrality before 2060 (hereinafter referred to as double carbon goals) [1].
With the continuous improvement of China’s urbanization level and the diversification of urban transport logistics and travel demand, the transport sector has become the main driver of China’s energy consumption and carbon emissions growth [2]. A key strategy for lowering urban carbon emissions is the expansion of public transportation [3,4]. Urban rail transit (hereinafter referred to as URT) is a large-capacity public transport infrastructure and the backbone of low-carbon transportation in cities. URT in China has been expanding rapidly, and the pressure to reduce its energy consumption and carbon emissions remains high. As of 30 September 2022, 52 mainland Chinese cities had put into operation 9788.64 km of URT lines, including 7655.32 km of subway, accounting for 78.21% [5]. Passenger volume is growing rapidly along with URT’s quick expansion, producing severe congestion in URT systems. Accurately predicting short-term passenger flow and then carrying out the necessary management procedures is one way to relieve such congestion [6,7]. By properly forecasting the inflow and outflow of each station in a URT system, travelers can change their preferred mode of transportation, route, or travel dates in advance, which reduces travel time and costs [8,9]. Using the prediction data, operators can identify crowded stations and put passenger control measures in place at severely congested ones to prevent congestion. Moreover, the timetable can be optimized in time to transport more passengers during peak hours according to the prediction results.
At present, research on short-term passenger flow prediction for URT, both in China and abroad, mainly falls into three categories: statistical methods, traditional machine learning methods, and deep learning methods. Statistical methods are sensitive to linear relationships between variables but cannot capture nonlinear relationships in the data; they mainly include the Kalman filter model [10,11], the ARMA model [12], and the ARIMA model [13,14,15]. Traditional machine learning methods capture nonlinear features in time series better and achieve higher accuracy for rail transit passenger flow prediction; they mainly include the support vector machine [16,17] and neural networks [18,19,20]. However, prediction models based on traditional machine learning are prone to over-fitting or under-fitting when dealing with massive passenger flow data, which affects their accuracy [21]. With the advancement of related theories and technologies, researchers have begun to use deep learning models to predict URT passenger flow [22]. Owing to the LSTM model’s strong suitability for processing time series data, it has been widely used in passenger flow forecasting research [23,24,25].
Achieving good prediction performance with a single model in real-world case studies is undoubtedly difficult, so researchers have increasingly concentrated on combination forecasting models. Gong et al. [26] set up a passenger flow forecasting framework combining a seasonal ARIMA-based method and a Kalman filter-based method, and applied it to a real bus line. Qin et al. [27] coupled a seasonal-trend decomposition approach with an adaptive boosting framework to forecast monthly passenger flow on China Railway. Guo et al. [28] presented a prediction model for irregular passenger flow based on the combination of support vector regression and LSTM. Liu and Chen [29] developed a three-stage passenger flow forecasting model using a deep neural network and a stacked autoencoder, and showed that the selection and combination of important features significantly affect prediction performance.
Although the accuracy of the aforementioned prediction methods has somewhat increased, neither the interference of noise in passenger flow data nor the manual trial-and-error determination of neural network hyperparameters from empirical values alone has been addressed. To tackle these issues, this paper combines the CEEMDAN algorithm, which reduces data noise interference, with the IPSO algorithm, which optimizes the hyperparameters of the LSTM neural network, to create a new CEEMDAN-IPSO-LSTM short-term passenger flow prediction model for URT. The model’s predictive performance is then thoroughly assessed using benchmark functions, prediction errors, and a Taylor diagram. In short, accurate short-term passenger flow prediction for URT can improve the efficiency of transport infrastructure and means of transport. It can also inform optimization suggestions for URT operation management during the post-epidemic period and provide a reference for the early realization of the double carbon goals.

2. Methods

2.1. CEEMDAN Algorithm

The complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) algorithm is a time-frequency domain analysis method that excels at nonlinear and non-stationary data thanks to its excellent adaptivity and convergence [30]. By adding adaptive noise, the mode-mixing effect is further diminished. The algorithm decomposes complex time series data into intrinsic mode functions (IMFs) and a residual (Res), effectively overcoming problems such as boundary effects and low computational efficiency to which EMD [31], EEMD [32], and CEEMD [33] are prone.
The following are the specific steps of the CEEMDAN algorithm.
$x(t)$ is the original passenger flow time series; $\overline{IMF_k}(t)$ is the $k$th IMF obtained by CEEMDAN decomposition; $\mathrm{EMD}_j(\cdot)$ denotes the $j$th IMF obtained by EMD decomposition; $\beta_k\,(k = 2, 3, \ldots, K)$ is a scalar coefficient used to adjust the signal-to-noise ratio at each stage, determining the standard deviation of the Gaussian white noise in that step; $\omega^i(t)\,(i = 1, 2, \ldots, n)$ is Gaussian white noise drawn from the standard normal distribution.
Step 1: The acquired $x(t)$ is used for the first decomposition by adding white noise $\omega^i(t)$ with signal-to-noise coefficient $\beta_0$ to the original time series to form $x^i(t)$, as indicated in Equation (1).
$$x^i(t) = x(t) + \beta_0\,\omega^i(t) \tag{1}$$
where $t$ denotes the time index, $i$ the $i$th addition of white noise, and $n$ the total number of white noise additions.
Step 2: EMD is used to decompose each $x^i(t)$, for $i = 1, \ldots, n$, to obtain $IMF_1^i(t)$. The average over the $n$ trials, computed with Equation (2), gives the first IMF of CEEMDAN. The first residual $R_1(t)$ follows from Equation (3), where $\mathrm{EMD}_1(\cdot)$ denotes the first IMF obtained through EMD. Theoretically, since white noise has a mean of zero, its influence can be reduced by averaging.
$$\overline{IMF_1}(t) = \frac{1}{n}\sum_{i=1}^{n} IMF_1^i(t) = \frac{1}{n}\sum_{i=1}^{n} \mathrm{EMD}_1\!\left(x^i(t)\right) \tag{2}$$
$$R_1(t) = x(t) - \overline{IMF_1}(t) \tag{3}$$
Step 3: The adaptive noise term is the first IMF derived by EMD from the white noise $\omega^i(t)$, scaled by the signal-to-noise coefficient $\beta_1$. The first residual $R_1(t)$ is combined with the adaptive noise term to create a new time series, which is decomposed to give the second IMF of CEEMDAN via Equation (4). Equation (5) generates the second residual $R_2(t)$.
$$\overline{IMF_2}(t) = \frac{1}{n}\sum_{i=1}^{n} \mathrm{EMD}_1\!\left(R_1(t) + \beta_1\,\mathrm{EMD}_1\!\left(\omega^i(t)\right)\right) \tag{4}$$
$$R_2(t) = R_1(t) - \overline{IMF_2}(t) \tag{5}$$
Step 4: Repeat Step 3, adding a new adaptive noise component to the residual term to create a new time series, which is then decomposed to obtain the $k$th IMF of CEEMDAN, as given by Equations (6) and (7):
$$\overline{IMF_k}(t) = \frac{1}{n}\sum_{i=1}^{n} \mathrm{EMD}_1\!\left(R_{k-1}(t) + \beta_{k-1}\,\mathrm{EMD}_{k-1}\!\left(\omega^i(t)\right)\right) \tag{6}$$
$$R_k(t) = R_{k-1}(t) - \overline{IMF_k}(t) \tag{7}$$
Step 5: The CEEMDAN algorithm terminates when the residual can no longer be decomposed, i.e., when it has no more than two extreme points. The final residual $R(t)$ at that point is a clear trend term. Equation (8) relates the complete set of IMFs to the original passenger flow time series.
$$x(t) = \sum_{k=1}^{K} \overline{IMF_k}(t) + R_K(t) \tag{8}$$
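As an illustration, the ensemble loop of Steps 1-5 can be sketched in pure Python. The helper `emd_first_imf` below is a hypothetical stand-in for $\mathrm{EMD}_1(\cdot)$ (a simple moving-average detrend rather than real sifting with envelope interpolation); production code would use a full EMD implementation such as the one provided by the PyEMD package.

```python
import random

def emd_first_imf(series):
    """Hypothetical stand-in for EMD_1: a 3-point moving-average detrend,
    NOT a real sifting procedure with envelope interpolation."""
    n = len(series)
    detrended = []
    for t in range(n):
        window = series[max(0, t - 1):min(n, t + 2)]
        detrended.append(series[t] - sum(window) / len(window))
    return detrended

def ceemdan_sketch(x, n_trials=20, beta=0.2, max_imfs=3, seed=0):
    """Outer CEEMDAN loop of Steps 1-5: average the first mode over
    noise-perturbed copies of the residual (Eqs. (2)/(6)), subtract it
    (Eqs. (3)/(7)), and repeat on the new residual."""
    rng = random.Random(seed)
    residual = list(x)
    imfs = []
    for _ in range(max_imfs):
        trial_sum = [0.0] * len(x)
        for _ in range(n_trials):
            # adaptive noise term: first EMD mode of fresh white noise
            noise_mode = emd_first_imf([rng.gauss(0.0, 1.0) for _ in x])
            noisy = [r + beta * m for r, m in zip(residual, noise_mode)]
            mode = emd_first_imf(noisy)
            trial_sum = [a + b for a, b in zip(trial_sum, mode)]
        imf = [s / n_trials for s in trial_sum]   # ensemble mean over trials
        imfs.append(imf)
        residual = [r - m for r, m in zip(residual, imf)]
    return imfs, residual
```

By construction the decomposition satisfies the reconstruction property of Equation (8): the IMFs plus the final residual sum back to the original series.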

2.2. LSTM Neural Network

The long short-term memory neural network (LSTM) is a special variant of the recurrent neural network (RNN) [34]. Compared with the original RNN, it introduces a gating mechanism and can recognize long-term dependencies in the input data. It addresses issues such as gradient explosion, gradient vanishing, and the difficulty of capturing long-term dependencies in deep network layers. Although URT passenger flow varies significantly over short periods, it still depends on both long-term and current passenger flow levels. Therefore, accurate short-term passenger flow estimates can be made using the LSTM model. Figure 1 depicts the LSTM model structure.
In the architecture, the forget gate $f_t$ determines how much of the previous cell state is retained; $i_t$ is the input gate; $C_{t-1}$ and $C_t$ are the cell states at the previous and current moments; and $O_t$ is the output gate. The current input and output are $x_t$ and $h_t$, respectively. The hyperbolic tangent function is denoted by $\tanh$ and the sigmoid function by $\sigma$. The $w_f$, $w_i$, $w_o$, and $w_c$ are the weight matrices of the forget gate, input gate, output gate, and cell state, respectively, and $b_f$, $b_i$, $b_o$, and $b_c$ are the corresponding offset vectors. The calculation principle of each control gate is described below.
First, the candidate state value $\tilde{C}_t$ of the input cell at time $t$ and the value of the input gate $i_t$ are calculated:
$$i_t = \sigma\!\left(w_i\,[h_{t-1}, x_t] + b_i\right) \tag{9}$$
$$\tilde{C}_t = \tanh\!\left(w_c\,[h_{t-1}, x_t] + b_c\right) \tag{10}$$
The forget gate’s activation value ft is then determined at time t:
$$f_t = \sigma\!\left(w_f\,[h_{t-1}, x_t] + b_f\right) \tag{11}$$
It is possible to determine the cell state Ct at time t by using the values discovered in the previous two steps:
$$C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t \tag{12}$$
The output gate values can be derived after getting the cell state update values:
$$O_t = \sigma\!\left(w_o\,[h_{t-1}, x_t] + b_o\right) \tag{13}$$
$$h_t = O_t \odot \tanh(C_t) \tag{14}$$
For the LSTM model selected in this paper, the number of training iterations $K$, the learning rate $L_r$, and the numbers of neurons in the two LSTM hidden layers, $L_1$ and $L_2$, are four hyperparameters with a significant impact on performance. The IPSO algorithm is used to tune the LSTM model, with these four essential hyperparameters serving as the features of the particle search.
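The gate computations above can be condensed into a single scalar LSTM cell step. This is a minimal sketch for exposition, not the paper's implementation; the parameter names in `p` are illustrative.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """One forward step of a scalar LSTM cell. p holds the weights
    applied to [h_{t-1}, x_t] and the biases for each gate."""
    concat = lambda wh, wx: wh * h_prev + wx * x_t
    i_t = sigmoid(concat(p["wi_h"], p["wi_x"]) + p["bi"])        # input gate
    c_tilde = math.tanh(concat(p["wc_h"], p["wc_x"]) + p["bc"])  # candidate state
    f_t = sigmoid(concat(p["wf_h"], p["wf_x"]) + p["bf"])        # forget gate
    c_t = f_t * c_prev + i_t * c_tilde                           # new cell state
    o_t = sigmoid(concat(p["wo_h"], p["wo_x"]) + p["bo"])        # output gate
    h_t = o_t * math.tanh(c_t)                                   # hidden output
    return h_t, c_t
```

With all weights and biases zero, every gate evaluates to $\sigma(0) = 0.5$ and the candidate state to $\tanh(0) = 0$, so the cell simply halves the previous cell state.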

2.3. PSO Algorithm and Improvement

A swarm intelligence optimization technique called particle swarm optimization (PSO) mimics the social behavior of animals like fish and birds [35]. Velocity and position are the only two characteristics of the particle. Each particle’s position indicates a potential resolution to the issue, and the information that describes it is provided by its position, velocity, and fitness value. Calculating a certain fitness function yields the fitness value.
PSO begins with a set of random particles and locates the best solution through continual updating and iteration. During each iteration, every particle adjusts its position and velocity based on its personal best position $p_b$ and the global best position $g_b$. After these two best values are determined, Equations (15) and (16) update the particle’s velocity and position.
$$v_i^{t+1} = w\,v_i^{t} + c_1 r_1\!\left(p_{b_i}^{t} - x_i^{t}\right) + c_2 r_2\!\left(g_b^{t} - x_i^{t}\right) \tag{15}$$
$$x_i^{t+1} = x_i^{t} + v_i^{t+1} \tag{16}$$
where $v_i$ is the velocity of the particle; $x_i$ is the particle’s position; $c_1$ and $c_2$ are the learning factors; $r_1$ and $r_2$ are random numbers in $[0, 1]$; $w$ is the inertia weight.
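A minimal standard PSO, using exactly the velocity and position updates of Equations (15) and (16), can be sketched as follows (the search bounds, coefficient values, and sphere test function are illustrative choices, not taken from the paper):

```python
import random

def pso(fitness, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain PSO minimizing `fitness`, with the updates of Eqs. (15)-(16)."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pb = [row[:] for row in X]                      # personal best positions
    pb_val = [fitness(row) for row in X]
    g_idx = min(range(n_particles), key=lambda i: pb_val[i])
    gb, gb_val = pb[g_idx][:], pb_val[g_idx]        # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pb[i][d] - X[i][d])   # cognitive term
                           + c2 * r2 * (gb[d] - X[i][d]))     # social term
                X[i][d] += V[i][d]                            # Eq. (16)
            val = fitness(X[i])
            if val < pb_val[i]:
                pb_val[i], pb[i] = val, X[i][:]
                if val < gb_val:
                    gb, gb_val = X[i][:], val
    return gb, gb_val
```

For example, `pso(lambda x: sum(v * v for v in x))` drives the sphere function toward its minimum at the origin.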
PSO has been successful in many real-world applications; however, standard PSO still struggles with local optima and has poor convergence accuracy. This study focuses on the three improvements described below to address these issues.

2.3.1. Improved Adaptive Inertia Weight

The inertia weight plays a major role in determining the convergence of PSO. When the inertia weight is high, the global search capability is strong but the local search capability is poor, and vice versa. Because neural network parameters vary widely, the typical linear decreasing scheme of Equation (17) easily falls into a local extremum. The adaptively changing inertia weight of Equation (18) is used in this research to work around this limitation.
$$\omega = \omega_{\max} - \frac{\omega_{\max} - \omega_{\min}}{t_{\max}} \times t \tag{17}$$
$$W = 0.1^{\left(\frac{1}{D}\sum_{d=1}^{D} g_b^{\,d}\right) \Big/ \left(\frac{1}{N}\sum_{i=1}^{N} \frac{1}{D}\sum_{d=1}^{D} p_{b_i}^{\,d}\right)} \tag{18}$$
where $\omega_{\max}$ and $\omega_{\min}$ are the maximum and minimum inertia weights; $t$ and $t_{\max}$ are the current and maximum iteration numbers; $D$ is the particle dimension and $N$ is the population size.
In the early stages of the IPSO algorithm, $W$ declines slowly, giving a strong global search capability and broadly applicable candidate solutions. In the later stages, the declining trend of $W$ accelerates, so the convergence of IPSO speeds up once a good solution has been identified early on.

2.3.2. Improvement of Learning Factors

The learning factors $c_1$ and $c_2$ regulate the step length and pull the particles toward the personal and global best positions. In practice, $c_1$ is typically decreased from large to small as the iterations proceed, to speed up the search in the initial iterations and enhance global search capability, while $c_2$ is increased from small to large to support local refinement in later iterations and enhance local search capability. The PSO algorithm typically sets $c_1 = c_2 = 2$, which falls short of what real-world applications require. The linearly changing learning factors $C_1$ and $C_2$ of Equations (19) and (20) are therefore introduced to improve the global and local search performance of PSO.
$$C_1(t) = 2.5 - 2 \times \frac{t}{t_{\max}} \tag{19}$$
$$C_2(t) = 0.5 + 2 \times \frac{t}{t_{\max}} \tag{20}$$

2.3.3. Improvement of Velocity and Position Update Equation

By introducing linear combinations of $p_b$ and $g_b$, as indicated in Equations (21) and (22), the improved particle velocity update of Equation (23) is obtained.
$$P_b = \frac{p_b + g_b}{2} \tag{21}$$
$$G_b = \frac{p_b - g_b}{2} \tag{22}$$
$$V_i^{t+1} = W V_i^{t} + C_1 r_1\!\left(P_{b_i}^{t} - x_i^{t}\right) + C_2 r_2\!\left(G_{b_i}^{t} - x_i^{t}\right) \tag{23}$$
In addition, the average dimensional information of Equation (24) and the adaptive decision condition of Equation (25) are introduced to further enhance the local and global search capability of the particles, with positions updated adaptively by switching between the segments $X = X + V$ and $X = WX + (1 - W)V$.
$$\delta = \frac{1}{D}\sum_{d=1}^{D} x_{i,d}^{t} \tag{24}$$
$$Q_i = \frac{\exp\!\left(f(x_i^{t})\right)}{\exp\!\left(\frac{1}{N}\sum_{i=1}^{N} f(x_i^{t})\right)} \tag{25}$$
$$f(x_i^{t}) = \frac{1}{n}\sum_{t=1}^{n}\left(\hat{x}_i^{t} - x_i^{t}\right)^2 \tag{26}$$
$$X_i^{t+1} = \begin{cases} W X_i^{t} + (1 - W)\,V_i^{t+1}, & Q_i > \delta \\ X_i^{t} + V_i^{t+1}, & Q_i < \delta \end{cases} \tag{27}$$
where $\delta$ is the average of the current particle’s position over its dimensions; $Q_i$ is the ratio between the current particle’s fitness value and the population’s average fitness value; $f(\cdot)$ is the fitness function. $Q_i > \delta$ implies that IPSO is in the early stage of its search or that the current particles are dispersed, whereas $Q_i < \delta$ indicates the middle or late stage of the search or a concentrated particle distribution.
In summary, the IPSO algorithm finally improves Equations (15) and (16) to Equation (28).
$$\begin{cases} V_i^{t+1} = W V_i^{t} + C_1 r_1\!\left(P_{b_i}^{t} - X_i^{t}\right) + C_2 r_2\!\left(G_{b_i}^{t} - X_i^{t}\right), \quad X_i^{t+1} = W X_i^{t} + (1 - W)\,V_i^{t+1}, & Q_i > \delta \\ V_i^{t+1} = W V_i^{t} + C_1 r_1\!\left(P_{b_i}^{t} - X_i^{t}\right) + C_2 r_2\!\left(G_{b_i}^{t} - X_i^{t}\right), \quad X_i^{t+1} = X_i^{t} + V_i^{t+1}, & Q_i < \delta \end{cases} \tag{28}$$
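One IPSO iteration can be sketched by combining the linear learning factors of Equations (19) and (20), the combined bests of Equations (21) and (22), and the adaptive position rule above. The adaptive inertia weight $W$ of Equation (18) is assumed to be supplied by the caller here, and all parameter values are illustrative.

```python
import math
import random

def ipso_update(X, V, pb, gb, fitness, t, t_max, W=0.5, rng=random):
    """One IPSO iteration sketch over a population X with velocities V,
    personal bests pb, and global best gb. W is treated as given;
    the adaptive weight of Eq. (18) would be computed elsewhere."""
    N, D = len(X), len(X[0])
    c1 = 2.5 - 2.0 * t / t_max                      # linear learning factor C1
    c2 = 0.5 + 2.0 * t / t_max                      # linear learning factor C2
    Pb = [[(p + g) / 2 for p, g in zip(pb[i], gb)] for i in range(N)]
    Gb = [[(p - g) / 2 for p, g in zip(pb[i], gb)] for i in range(N)]
    mean_fit = sum(fitness(x) for x in X) / N
    for i in range(N):
        delta = sum(X[i]) / D                       # mean over dimensions
        # ratio of this particle's fitness to the population average
        Q = math.exp(fitness(X[i])) / math.exp(mean_fit)
        for d in range(D):
            r1, r2 = rng.random(), rng.random()
            V[i][d] = (W * V[i][d]
                       + c1 * r1 * (Pb[i][d] - X[i][d])
                       + c2 * r2 * (Gb[i][d] - X[i][d]))
        if Q > delta:   # dispersed particles: damped position update
            X[i] = [W * x + (1 - W) * v for x, v in zip(X[i], V[i])]
        else:           # concentrated particles: plain position update
            X[i] = [x + v for x, v in zip(X[i], V[i])]
    return X, V
```

Note that the exponentials are only safe for modest fitness values; a production implementation would guard against overflow.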

2.4. Evaluation Indicators

2.4.1. Benchmark Function

The performance of the proposed IPSO algorithm was evaluated through simulation experiments on the 10 common benchmark functions shown in Table 1 [36]. The closer the optimized value of a test function ($f_{opt}$) is to zero, the higher the convergence precision of the prediction model.

2.4.2. Prediction Errors

For evaluating model performance, choosing suitable criteria is crucial. All models in this research are statistically evaluated using the standard deviation (SD), root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), correlation coefficient (R), and coefficient of determination (R2). A perfect match between predicted and actual values would correspond to SD = 0, RMSE = 0, MAE = 0, MAPE = 0, R = 1, and R2 = 1. The mathematical definitions are as follows:
$$\mathrm{SD} = \sqrt{\frac{1}{n}\sum_{t=1}^{n}\left\{\left[y(t) - \bar{y}(t)\right] - \left[x(t) - \bar{x}(t)\right]\right\}^2} \tag{29}$$
$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{t=1}^{n}\left[y(t) - x(t)\right]^2} \tag{30}$$
$$\mathrm{MAE} = \frac{1}{n}\sum_{t=1}^{n}\left|y(t) - x(t)\right| \tag{31}$$
$$\mathrm{MAPE} = \frac{1}{n}\sum_{t=1}^{n}\left|\frac{y(t) - x(t)}{x(t)}\right| \tag{32}$$
$$R = \frac{\sum_{t=1}^{n}\left[y(t) - \bar{y}(t)\right]\left[x(t) - \bar{x}(t)\right]}{\sqrt{\sum_{t=1}^{n}\left[y(t) - \bar{y}(t)\right]^2}\,\sqrt{\sum_{t=1}^{n}\left[x(t) - \bar{x}(t)\right]^2}} \tag{33}$$
$$R^2 = 1 - \frac{\sum_{t=1}^{n}\left[y(t) - x(t)\right]^2}{\sum_{t=1}^{n}\left[\bar{x}(t) - x(t)\right]^2} \tag{34}$$
where $n$ is the total number of time series samples; $y(t)$ and $x(t)$ are the predicted and actual values at time $t$; $\bar{y}(t)$ and $\bar{x}(t)$ are the mean predicted and actual values.
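The RMSE, MAE, MAPE, R, and R2 metrics translate directly into code. The sketch below follows the paper's convention that $y(t)$ is the predicted value and $x(t)$ the actual value; MAPE is returned as a fraction, which is often multiplied by 100 when reported as a percentage.

```python
import math

def prediction_errors(y, x):
    """Error metrics for predictions y against actual values x."""
    n = len(y)
    ybar = sum(y) / n
    xbar = sum(x) / n
    rmse = math.sqrt(sum((yi - xi) ** 2 for yi, xi in zip(y, x)) / n)
    mae = sum(abs(yi - xi) for yi, xi in zip(y, x)) / n
    mape = sum(abs((yi - xi) / xi) for yi, xi in zip(y, x)) / n
    r = (sum((yi - ybar) * (xi - xbar) for yi, xi in zip(y, x))
         / math.sqrt(sum((yi - ybar) ** 2 for yi in y)
                     * sum((xi - xbar) ** 2 for xi in x)))
    r2 = 1 - (sum((yi - xi) ** 2 for yi, xi in zip(y, x))
              / sum((xbar - xi) ** 2 for xi in x))
    return {"RMSE": rmse, "MAE": mae, "MAPE": mape, "R": r, "R2": r2}
```

A perfect prediction (`y == x`) yields RMSE = MAE = MAPE = 0 and R = R2 = 1, matching the ideal values stated above.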

2.4.3. Taylor Diagram

In addition, this paper qualitatively evaluates the performance of the prediction models through a Taylor diagram [37]. This diagram provides a statistical assessment of how well each model matches the observations in terms of SD, RMSE, and R, and gives a concise summary of the degree of correspondence between simulated and observed fields. In a Taylor diagram, the R, RMSE, and SD of each prediction model are represented by a single point on a two-dimensional plot. Although the diagram’s structure is generic, it is particularly helpful when assessing complex models.

2.5. CEEMDAN-IPSO-LSTM Model

The complexity and non-smoothness of the original URT passenger flow time series interfere with neural network prediction, and determining neural network hyperparameters by trial-and-error from empirical values alone seriously affects the accuracy of the prediction model. In this study, we use the CEEMDAN algorithm to decompose the passenger flow time series, take the LSTM hyperparameters as the optimization objects, determine their optimal values with the IPSO algorithm, and build a combined CEEMDAN-IPSO-LSTM model to accurately predict the short-term passenger flow of URT systems. Figure 2 depicts the precise prediction workflow, and its steps are presented below.
Step 1: Data decomposition. CEEMDAN is used to decompose passenger flow data to obtain IMFs and Res.
Step 2: A training set and a test set are created from the passenger flow sequence that was obtained from CEEMDAN decomposition.
Step 3: Construct LSTM neural network. Initialize the batch size, hidden layer unit number, gradient limit, and other parameters of LSTM.
Step 4: Initialize the IPSO parameters at random. The size of the population, the maximum number of iterations, and the size of the particles are chosen at random.
Step 5: Create the CEEMDAN-IPSO-LSTM prediction model as a combined model; the hyperparameters ($L_1$, $L_2$, $L_r$, $K$) of the LSTM are computed using IPSO. If the iteration termination condition is met, output the optimal values of the LSTM hyperparameters; otherwise, set $t = t + 1$ and repeat Steps 2-5.
Step 6: Evaluate the prediction model. The CEEMDAN-IPSO-LSTM model is evaluated by the prediction errors and the Taylor diagram.

3. Results

3.1. Data Set

The experimental data are the inbound and outbound passenger flow data of Yangji Station of Guangzhou Metro from 1 July 2019 to 28 July 2019 from 6:15 to 23:15. The time series was smoothed by aggregating flow data into nonoverlapping 15-min intervals [38]. This resulted in 96 samples per day. Based on the above CEEMDAN-IPSO-LSTM model, the first 75% of the data were taken as the training set and the last 25% as the test set. The sliding window length was 3; that is, the data of the first 3 weeks were used to predict the next week.
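As a simplified illustration of this setup, a generic sliding-window sample construction and chronological 75%/25% split might look as follows. The paper's window is defined over weeks rather than single time steps, so the helpers below are hypothetical simplifications.

```python
def make_windows(series, window=3):
    """Each sample uses the previous `window` observations
    to predict the next one (a hypothetical per-step window)."""
    X = [series[i:i + window] for i in range(len(series) - window)]
    y = [series[i + window] for i in range(len(series) - window)]
    return X, y

def train_test_split(X, y, train_frac=0.75):
    """Chronological split: first 75% for training, last 25% for testing."""
    cut = int(len(X) * train_frac)
    return X[:cut], y[:cut], X[cut:], y[cut:]
```

Keeping the split chronological (rather than shuffled) matters for time series: the test set must lie strictly after the training data to avoid leaking future information.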
Figure 3 depicts how Yangji Station’s inbound/outbound passenger flow changed throughout the experiment. Because the station is close to sizable residential neighborhoods, commuters frequently use it during the working week, and significant morning and evening peak characteristics exist, which aids forecast performance. As Figure 3 shows, the passenger flow varies significantly over the course of a single day, with a very similar pattern across the working week and two peaks visible each day. The first inbound/outbound peak typically occurs between 7:30 and 8:45 and between 7:30 and 9:30 in the morning, and the second inbound/outbound peak usually occurs between 17:15 and 19:15 and between 17:45 and 19:00 in the afternoon. The passenger volume during the morning and afternoon peaks is often two to three times that of off-peak times. Weekend trends diverge from weekday trends, with no clear morning and afternoon peaks; passenger loads are frequently high between 11:00 and 19:00. In general, Saturday has a greater passenger volume than Sunday. An increase in passenger traffic is also observed late on Friday and Saturday nights due to entertainment and social events.

3.2. CEEMDAN Decomposition

The inbound passenger flow time series was decomposed by CEEMDAN into a total of 12 subseries with various amplitudes and frequencies, comprising 11 IMF components and a Res component, as shown in Figure 4. As the decomposition proceeds, the IMF components become less volatile and more clearly cyclical, which is consistent with the expected features of decomposed IMFs. IMF1 has the highest frequency and the shortest wavelength; from IMF2 to IMF11, the frequency drops in turn as the wavelength rises. The residual term represents the trend of the inbound passenger flow sequence.

3.3. Benchmark Function and Comparison Algorithm

Four other evolutionary algorithms (SOA [39], WOA [40], GWO [41], and PSO) were chosen for comparison with IPSO to assess the IPSO algorithm’s performance. All comparison algorithms made use of the same set of parameters to ensure fairness. The maximum number of iterations was 1000, and the population size was set at 50. Additionally, each algorithm was individually run 50 times on each benchmark function to lessen the effect of random numbers on algorithm performance.
Table 2 compares the five evolutionary algorithms across the ten benchmark functions. The results show that, for the same benchmark function, the minimum, maximum, mean, and SD values of the IPSO algorithm are smaller than those of the other algorithms in most cases. The IPSO algorithm performs better throughout the whole iteration process, enabling particles to gather more stably near the global optimum and find the global optimal solution more easily.
Figure 5 displays the ideal iterative convergence curves for each benchmark function. The convergence curve of the IPSO algorithm on most benchmark functions is below that of other algorithms. It demonstrates that IPSO not only has great convergence accuracy throughout the whole search process for each specified benchmark function, but also a faster convergence speed. The IPSO algorithm’s adaptive strategy significantly enhances the efficiency of particle optimization, avoids PSO’s inefficient iteration process, and achieves a balance between local and global search.

3.4. CEEMDAN-IPSO-LSTM Results

The fitness function employed in this study is the best mean square error (MSE) that the LSTM attains during training, and the hyperparameters derived from the optimization are the $L_1$, $L_2$, $L_r$, and $K$ corresponding to the minimum MSE. Figure 6a depicts the error convergence curve during training. As the iteration count increased, the error of the CEEMDAN-IPSO-LSTM model quickly converged: the fitness evolution curve attained the necessary precision within four iterations and then maintained the ideal fitness value, demonstrating strong learning ability. The initial and final errors of CEEMDAN-IPSO-LSTM are one order of magnitude smaller than those of CEEMDAN-PSO-LSTM, and the model accuracy increases significantly. Figure 6b displays the LSTM hyperparameters optimized by PSO and IPSO, which are $L_1$ = 65, $L_2$ = 173, $L_r$ = 0.007, and $K$ = 60.

3.5. Prediction Results of Inbound and Outbound Passenger Flow

The LSTM, CEEMDAN-LSTM, and CEEMDAN-PSO-LSTM models were employed in comparison tests to confirm the accuracy of the proposed CEEMDAN-IPSO-LSTM model. Figure 7 displays the outcomes of the different models’ predictions for the inbound and outbound passenger flow data. As can be observed, the trends of the forecast curves derived by the various models are largely consistent with the actual value curve, in both the peak and off-peak periods. On closer local inspection, however, the prediction curve of the CEEMDAN-IPSO-LSTM model has greater forecast accuracy than the other models and lies closer to the real monitoring curve, indicating that the CEEMDAN-IPSO-LSTM model has strong robustness.

3.6. Evaluation Indicators of Prediction Models

3.6.1. Quantitative Analysis Based on Prediction Errors

Table 3 shows the performance of the CEEMDAN-IPSO-LSTM model compared with the other models (LSTM, CEEMDAN-LSTM, CEEMDAN-PSO-LSTM) for both inbound and outbound passenger flow data. Over the whole day of the month, the CEEMDAN-IPSO-LSTM model reduces the SD, RMSE, MAE, and MAPE of the inbound/outbound passenger flow predictions by 12~40 persons/13~35 persons, 13~44 persons/12~35 persons, 6~37 persons/12~31 persons, and 5.08~46.89%/6.5~35.1%, respectively, while R and R2 increase by 0.07~2.32%/0.86~3.63% and 0.13~2.19%/0.67~1.67%, respectively. At the same time, the proposed model achieves favorable prediction results for different periods on weekdays as well as on the weekend. This again demonstrates the higher prediction accuracy of the CEEMDAN-IPSO-LSTM model proposed in this study.

3.6.2. Qualitative Analysis Based on Taylor Diagram

Additionally, a Taylor diagram was created for each model’s prediction errors in order to qualitatively assess the characteristics of how prediction errors are distributed among different prediction models. According to Figure 8, the comprehensive ranking of prediction results is as follows: LSTM < EMD-LSTM < EEMD-LSTM < CEEMD-LSTM < CEEMDAN-LSTM < EMD-PSO-LSTM < EEMD-PSO-LSTM < CEEMD-PSO-LSTM < CEEMDAN-PSO-LSTM < EMD-IPSO-LSTM < EEMD-IPSO-LSTM < CEEMD-IPSO-LSTM < CEEMDAN-IPSO-LSTM. Among the peer models, the CEEMDAN-IPSO-LSTM model has the highest accuracy and can meet the demands for accurate short-term predictions of passenger flow.

4. Discussion

In this paper, we verified that the CEEMDAN-IPSO-LSTM model can accurately predict the short-term passenger flow of URT. The error statistics for the inbound and outbound passenger flows demonstrate that the proposed model, combining the strong noise-resistant robustness of CEEMDAN with the nonlinear mapping of the LSTM, outperforms the other models in prediction performance. Compared with the single LSTM model, the CEEMDAN-IPSO-LSTM model reduces SD, RMSE, MAE, and MAPE by 40/35 persons, 44/35 persons, 37/31 persons, and 46.89%/35.1% (inbound/outbound), and increases R and R2 by 2.32%/3.63% and 2.19%/1.67%, respectively. The performance improvement of CEEMDAN-IPSO-LSTM over the LSTM is significantly larger than that of the other models.
Because the short-term prediction model is sensitive to the original passenger flow time series, it can reflect the impact of various factors on the passenger flow series. For further study, more effective noise-reduction pretreatment methods for passenger flow data should be explored to further enhance algorithm performance, including variational mode decomposition [42], the synchrosqueezing wavelet transform [43], and the Savitzky-Golay filter [44].
In this paper, we only analyzed a basic LSTM prediction model, but other variants exist, such as the bidirectional LSTM [45] and the gated recurrent unit network [46]. Therefore, more base models combined with various denoising methods should be compared and analyzed to further strengthen the applicability of the IPSO-LSTM model to passenger flow prediction.
In addition, the CEEMDAN-IPSO-LSTM model proposed in this paper is also valuable for time series prediction of other traffic flows. At the same time, the model can be further extended from one subway station to one subway line, or even to the entire subway network, to improve the accurate prediction of short-term passenger flow in the URT system.

5. Conclusions

There are increasing traffic pollution problems in the process of urbanization in many countries. URT is low-carbon and widely regarded as an effective way to address them. Accurate short-term passenger flow prediction in URT systems can improve the efficiency of transport infrastructure and vehicles, and provide a reference for the development of low-carbon transportation. In this study, a short-term passenger flow prediction model for URT was proposed based on CEEMDAN-IPSO-LSTM, including the framework design of CEEMDAN-IPSO-LSTM and the determination of model parameters; the IPSO successfully addresses the conventional PSO algorithm's tendency to fall into local optima, its slow convergence in later iterations, and its premature convergence. The experimental findings showed that the CEEMDAN-IPSO-LSTM model outperformed the other comparison models in overall performance. Specifically, the CEEMDAN-IPSO-LSTM model reduced the SD, RMSE, MAE, and MAPE of inbound/outbound passenger flow data for the whole-day period of the month by 12~40 persons/13~35 persons, 13~44 persons/12~35 persons, 6~37 persons/12~31 persons, and 5.08~46.89%/6.5~35.1%, respectively, while R and R2 increased by 0.07~2.32%/0.86~3.63% and 0.13~2.19%/0.67~1.67%, respectively. The proposed model also achieved favorable prediction results on both weekdays and weekends. In summary, this research validates the applicability and robustness of the CEEMDAN-IPSO-LSTM model for predicting short-term passenger flow in URT systems, and extends the use of ensemble learning techniques.
However, this study still has a number of limitations. For instance, the current case study examined a single station's passenger flow statistics, but did not address the relationships with other lines, nor did it investigate how service interruptions and spatiotemporal effects can affect passenger flow. Additionally, multi-source data on factors such as weather, traffic, and accidents could be investigated in the future. Further research into the proposed model's applicability to other spatial-temporal data mining tasks, such as trajectory prediction, would also be interesting.

Author Contributions

Conceptualization, L.Z. and Z.L.; methodology, L.Z. and Z.L.; software, L.Z. and Z.L.; validation, L.Z. and Z.L.; formal analysis, L.Z. and Z.L.; investigation, L.Z. and Z.L.; resources, L.Z., J.Y. and X.X.; data curation, L.Z. and Z.L.; writing—original draft preparation, L.Z. and Z.L.; writing—review and editing, L.Z., Z.L., J.Y. and X.X.; visualization, L.Z. and Z.L.; supervision, L.Z., Z.L., J.Y. and X.X.; project administration, L.Z., Z.L., J.Y. and X.X.; funding acquisition, L.Z., J.Y. and X.X. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the National Natural Science Foundation of China (No. 62063009); the State Key Laboratory of Rail Traffic Control and Safety (Contract No. RCS2020K005), Beijing Jiaotong University; the Science and Technology Project of the Education Department of Jiangxi Province (No. GJJ200825); the Scientific Research Project of Ganjiang Innovation Academy, Chinese Academy of Sciences (No. E255J001); and the Jiangxi University of Science and Technology research fund for high-level talents (No. 205200100428).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

Acknowledgments

The authors would like to sincerely thank the editors and the anonymous reviewers for their constructive comments that greatly contributed to improving the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xi, J. Statement by H.E. Xi Jinping President of the People’s Republic of China at the General Debate of the 75th Session of the United Nations General Assembly; Ministry of Foreign Affairs, the People’s Republic of China: Beijing, China, 2020.
  2. Zhang, Y.; Jiang, L.; Shi, W. Exploring the growth-adjusted energy-emission efficiency of transportation industry in China. Energy Econ. 2020, 90, 104873.
  3. Mao, R.; Bao, Y.; Duan, H.; Liu, G. Global urban subway development, construction material stocks, and embodied carbon emissions. Humanit. Soc. Sci. Commun. 2021, 8, 83.
  4. Wei, T.; Chen, S. Dynamic energy and carbon footprints of urban transportation infrastructures: Differentiating between existing and newly-built assets. Appl. Energy 2020, 277, 115554.
  5. China Association of Metros. Available online: https://www.camet.org.cn/xxfb (accessed on 18 October 2022).
  6. Liu, Y.; Liu, Z.; Jia, R. DeepPF: A deep learning based architecture for metro passenger flow prediction. Transp. Res. Part C Emerg. Technol. 2019, 101, 18–34.
  7. Li, G.; Knoop, V.L.; van Lint, H. Multistep traffic forecasting by dynamic graph convolution: Interpretations of real-time spatial correlations. Transp. Res. Part C Emerg. Technol. 2021, 128, 103185.
  8. Cheng, Z.; Trépanier, M.; Sun, L. Incorporating travel behavior regularity into passenger flow forecasting. Transp. Res. Part C Emerg. Technol. 2021, 128, 103200.
  9. Noursalehi, P.; Koutsopoulos, H.N.; Zhao, J. Predictive decision support platform and its application in crowding prediction and passenger information generation. Transp. Res. Part C Emerg. Technol. 2021, 129, 103139.
  10. Jiao, P.; Li, R.; Sun, T.; Hou, Z.; Ibrahim, A. Three Revised Kalman Filtering Models for Short-Term Rail Transit Passenger Flow Prediction. Math. Probl. Eng. 2016, 2016, 9717582.
  11. Liang, S.; Ma, M.; He, S.; Zhang, H. Short-Term Passenger Flow Prediction in Urban Public Transport: Kalman Filtering Combined K-Nearest Neighbor Approach. IEEE Access 2019, 7, 120937–120949.
  12. Cao, L.; Liu, S.G.; Zeng, X.H.; He, P.; Yuan, Y. Passenger Flow Prediction Based on Particle Filter Optimization. Appl. Mech. Mater. 2013, 373–375, 1256–1260.
  13. Liu, S.Y.; Liu, S.; Tian, Y.; Sun, Q.L.; Tang, Y.Y. Research on Forecast of Rail Traffic Flow Based on ARIMA Model. J. Phys. Conf. Ser. 2021, 1792, 012065.
  14. Guo, J.; Huang, W.; Williams, B.M. Adaptive Kalman filter approach for stochastic short-term traffic flow rate prediction and uncertainty quantification. Transp. Res. Part C Emerg. Technol. 2014, 43, 50–64.
  15. Shahriari, S.; Ghasri, M.; Sisson, S.A.; Rashidi, T. Ensemble of ARIMA: Combining parametric and bootstrapping technique for traffic flow prediction. Transp. A Transp. Sci. 2020, 16, 1552–1573.
  16. Hu, Y.; Wu, C.; Liu, H. Prediction of passenger flow on the highway based on the least square support vector machine. Transport 2011, 26, 197–203.
  17. Zhou, G.; Tang, J. Forecast of Urban Rail Transit Passenger Flow in Holidays Based on Support Vector Machine Model. In Proceedings of the 5th International Conference on Electromechanical Control Technology and Transportation (ICECTT), Nanchang, China, 15–17 May 2020.
  18. Li, H.; Zhang, J.; Yang, L.; Qi, J.; Gao, Z. Graph-GAN: A spatial-temporal neural network for short-term passenger flow prediction in urban rail transit systems. Transp. Res. Part C Emerg. Technol. 2022, 1–24.
  19. Zhang, J.; Chen, F.; Cui, Z.; Guo, Y.; Zhu, Y. Deep Learning Architecture for Short-Term Passenger Flow Forecasting in Urban Rail Transit. IEEE Trans. Intell. Transp. Syst. 2020, 22, 7004–7014.
  20. Yu, S.; Shang, C.; Yu, Y.; Zhang, S.; Yu, W. Prediction of bus passenger trip flow based on artificial neural network. Adv. Mech. Eng. 2016, 8, 1–7.
  21. Long, X.; Li, J.; Chen, Y. Metro short-term traffic flow prediction with deep learning. Control Decis. 2019, 34, 1589–1600.
  22. Polson, N.G.; Sokolov, V. Deep learning for short-term traffic flow prediction. Transp. Res. Part C Emerg. Technol. 2017, 79, 1–17.
  23. Zhao, Z.; Chen, W.; Wu, X. LSTM network: A deep learning approach for short-term traffic forecast. IET Intell. Transp. Syst. 2019, 13, 68–75.
  24. He, P.; Jiang, G.; Lam, S.; Sun, Y. Learning heterogeneous traffic patterns for travel time prediction of bus journeys. Inf. Sci. 2020, 512, 1394–1406.
  25. Jing, Y.; Hu, H.; Guo, S.; Wang, X.; Chen, F. Short-Term Prediction of Urban Rail Transit Passenger Flow in External Passenger Transport Hub Based on LSTM-LGB-DRS. IEEE Trans. Intell. Transp. Syst. 2020, 22, 4611–4621.
  26. Gong, M.; Fei, X.; Wang, Z.H.; Qiu, Y.J. Sequential Framework for Short-Term Passenger Flow Prediction at Bus Stop. Transp. Res. Rec. 2014, 2417, 58–66.
  27. Lan, Q.; Weide, L.; Shijia, L. Effective passenger flow forecasting using STL and ESN based on two improvement strategies. Neurocomputing 2019, 356, 244–256.
  28. Guo, J.; Xie, Z.; Qin, Y.; Jia, L.; Wang, Y. Short-term abnormal passenger flow prediction based on the fusion of SVR and LSTM. IEEE Access 2019, 7, 42946–42955.
  29. Liu, L.; Chen, R. A novel passenger flow prediction model using deep learning methods. Transp. Res. Part C Emerg. Technol. 2017, 84, 74–91.
  30. Torres, M.E.; Colominas, M.A.; Schlotthauer, G.; Flandrin, P. A complete ensemble empirical mode decomposition with adaptive noise. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Prague, Czech Republic, 22–27 May 2011.
  31. Huang, N.E.; Shen, Z.; Long, S.R.; Wu, M.C.; Shih, H.H.; Zheng, Q.; Yen, N.-C.; Tung, C.C.; Liu, H.H. The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 1998, 454, 903–995.
  32. Wu, Z.; Huang, N.E. Ensemble empirical mode decomposition: A noise-assisted data analysis method. Adv. Adapt. Data Anal. 2009, 1, 1–41.
  33. Yeh, J.-R.; Shieh, J.-S.; Huang, N.E. Complementary Ensemble Empirical Mode Decomposition: A Novel Noise Enhanced Data Analysis Method. Adv. Adapt. Data Anal. 2010, 2, 135–156.
  34. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
  35. Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995; pp. 39–43.
  36. Zhang, X.; Liu, H.; Zhang, T.; Wang, Q.; Wang, Y.; Tu, L. Terminal crossover and steering-based particle swarm optimization algorithm with disturbance. Appl. Soft Comput. 2019, 85, 105841.
  37. Taylor, K.E. Summarizing multiple aspects of model performance in a single diagram. J. Geophys. Res. Atmos. 2001, 106, 7183–7192.
  38. Li, P.; Ma, C.; Ning, J.; Wang, Y.; Zhu, C. Analysis of Prediction Accuracy under the Selection of Optimum Time Granularity in Different Metro Stations. Sustainability 2019, 11, 5281.
  39. Dhiman, G.; Kumar, V. Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowl.-Based Syst. 2018, 165, 169–196.
  40. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
  41. Mirjalili, S.; Mirjalili, S.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
  42. Dragomiretskiy, K.; Zosso, D. Variational mode decomposition. IEEE Trans. Signal Process. 2014, 62, 531–544.
  43. Zhang, Y.; Wen, X.; Jiang, L.; Liu, J.; Yang, J.; Liu, S. Prediction of high-quality reservoirs using the reservoir fluid mobility attribute computed from seismic data. J. Pet. Sci. Eng. 2020, 190, 107007.
  44. Bi, J.; Lin, Y.; Dong, Q.; Yuan, H.; Zhou, M. Large-scale water quality prediction with integrated deep neural network. Inf. Sci. 2021, 571, 191–205.
  45. Li, Z.; Ge, H.; Cheng, R. Traffic flow prediction based on BILSTM model and data denoising scheme. Chin. Phys. B 2022, 31, 040502.
  46. Wang, S.; Zhao, J.; Shao, C.; Dong, C.D.; Yin, C. Truck Traffic Flow Prediction Based on LSTM and GRU Methods With Sampled GPS Data. IEEE Access 2020, 8, 208158–208169.
Figure 1. LSTM structure diagram.
Figure 2. Flowchart of CEEMDAN-IPSO-LSTM prediction model.
Figure 3. Four weeks of inbound and outbound passenger flow data.
Figure 4. IMFs and Res obtained from the daily inbound passenger flow data after CEEMDAN decomposition.
Figure 5. Average convergence curves of 10 benchmark functions. (a–j) represent the average convergence curves of functions f1–f10.
Figure 6. (a) The error convergence curves of the CEEMDAN-PSO-LSTM model and CEEMDAN-IPSO-LSTM model in the training process. (b) The hyperparameter optimization results of the CEEMDAN-PSO-LSTM model and CEEMDAN-IPSO-LSTM model.
Figure 7. Prediction results of inbound and outbound passenger flow of different models in the last week.
Figure 8. Taylor diagram of 13 prediction models. (a) represents the inbound passenger flow; (b) represents the outbound passenger flow.
Table 1. Benchmark functions.

Function | Formulation | Range | $f_{opt}$
Sphere | $f_1(x) = \sum_{i=1}^{D} x_i^2$ | $[-100, 100]$ | 0
Sum Squares | $f_2(x) = \sum_{i=1}^{D} i x_i^2$ | $[-5.21, 5.21]$ | 0
Sum of Different Powers | $f_3(x) = \sum_{i=1}^{D} |x_i|^{i+1}$ | $[-1, 1]$ | 0
Rosenbrock | $f_4(x) = \sum_{i=1}^{D-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$ | $[-30, 30]$ | 0
Quartic | $f_5(x) = \sum_{i=1}^{D} i x_i^4 + \mathrm{random}[0, 1)$ | $[-1.28, 1.28]$ | 0
Rastrigin | $f_6(x) = \sum_{i=1}^{D} \left[ x_i^2 - 10 \cos(2 \pi x_i) + 10 \right]$ | $[-5.21, 5.21]$ | 0
Ackley | $f_7(x) = -20 \exp\left( -0.2 \sqrt{\tfrac{1}{D} \sum_{i=1}^{D} x_i^2} \right) - \exp\left( \tfrac{1}{D} \sum_{i=1}^{D} \cos(2 \pi x_i) \right) + 20 + e$ | $[-32, 32]$ | 0
Griewank | $f_8(x) = \tfrac{1}{4000} \sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D} \cos\left( \tfrac{x_i}{\sqrt{i}} \right) + 1$ | $[-600, 600]$ | 0
Penalized | $f_9(x) = \tfrac{\pi}{D} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{D-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_D - 1)^2 \right\} + \sum_{i=1}^{D} U(x_i, 10, 100, 4)$, where $y_i = 1 + \tfrac{1}{4}(x_i + 1)$ and $U(x_i, a, k, m) = \begin{cases} k (x_i - a)^m, & x_i > a \\ 0, & -a \le x_i \le a \\ k (-x_i - a)^m, & x_i < -a \end{cases}$ | $[-50, 50]$ | 0
Penalized2 | $f_{10}(x) = \tfrac{\pi}{D} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{D-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_D - 1)^2 \right\} + \sum_{i=1}^{D-1} U(x_i, 5, 100, 4)$, with $y_i$ and $U$ as above | $[-50, 50]$ | 0
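Several of the benchmarks in Table 1 are straightforward to implement; a NumPy sketch of three of them follows (each has its global optimum $f_{opt} = 0$ at the origin).

```python
import numpy as np

def sphere(x):
    """f1: sum of squared components."""
    return np.sum(x ** 2)

def rastrigin(x):
    """f6: highly multimodal, global minimum 0 at the origin."""
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

def ackley(x):
    """f7: nearly flat outer region with a deep central basin."""
    d = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / d) + 20 + np.e)
```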
Table 2. Comparison results between IPSO and other evolutionary algorithms.
Function | Value | SOA | WOA | GWO | PSO | IPSO
f1 | Min | 3.43 × 10^−3 | 4.59 × 10^−13 | 5.38 × 10^−21 | 1.12 × 10^1 | 1.83 × 10^−35
f1 | Max | 6.33 × 10^1 | 3.78 × 10^−12 | 1.96 × 10^−17 | 1.79 × 10^1 | 1.51 × 10^−32
f1 | Mean | 5.21 × 10^0 | 1.54 × 10^−12 | 1.49 × 10^−18 | 1.49 × 10^1 | 2.70 × 10^−33
f1 | Std | 1.26 × 10^1 | 8.83 × 10^−13 | 3.71 × 10^−18 | 1.46 × 10^0 | 3.62 × 10^−33
f1 | Rank | 5 | 3 | 2 | 4 | 1
f2 | Min | 3.43 × 10^−3 | 3.14 × 10^−10 | 5.38 × 10^−13 | 1.42 × 10^1 | 6.06 × 10^−21
f2 | Max | 6.33 × 10^1 | 1.30 × 10^−9 | 2.88 × 10^−9 | 1.78 × 10^1 | 3.11 × 10^−19
f2 | Mean | 5.21 × 10^0 | 6.15 × 10^−10 | 4.56 × 10^−10 | 1.59 × 10^1 | 7.02 × 10^−20
f2 | Std | 1.26 × 10^1 | 2.19 × 10^−10 | 7.18 × 10^−10 | 9.45 × 10^−1 | 6.26 × 10^−20
f2 | Rank | 4 | 3 | 2 | 5 | 1
f3 | Min | 1.59 × 10^3 | 1.15 × 10^4 | 4.11 × 10^−18 | 4.23 × 10^1 | 2.09 × 10^−11
f3 | Max | 1.40 × 10^3 | 2.34 × 10^4 | 1.73 × 10^−12 | 8.92 × 10^1 | 7.04 × 10^−7
f3 | Mean | 6.69 × 10^3 | 1.62 × 10^4 | 6.59 × 10^−14 | 6.82 × 10^1 | 6.57 × 10^−8
f3 | Std | 3.65 × 10^3 | 2.85 × 10^4 | 2.19 × 10^−10 | 1.15 × 10^1 | 1.43 × 10^−7
f3 | Rank | 4 | 5 | 1 | 3 | 2
f4 | Min | 9.74 × 10^0 | 1.25 × 10^1 | 1.41 × 10^−11 | 1.31 × 10^0 | 1.61 × 10^−9
f4 | Max | 3.96 × 10^1 | 2.68 × 10^1 | 7.01 × 10^−9 | 1.71 × 10^0 | 6.49 × 10^−8
f4 | Mean | 2.32 × 10^1 | 2.00 × 10^1 | 1.37 × 10^−9 | 1.55 × 10^0 | 1.83 × 10^−8
f4 | Std | 9.64 × 10^0 | 3.63 × 10^0 | 1.68 × 10^−9 | 9.59 × 10^−2 | 1.53 × 10^−8
f4 | Rank | 5 | 4 | 2 | 3 | 1
f5 | Min | 1.00 × 10^−4 | 9.19 × 10^1 | 1.43 × 10^1 | 1.76 × 10^2 | 0.00 × 10^0
f5 | Max | 1.26 × 10^2 | 1.87 × 10^2 | 1.44 × 10^2 | 2.57 × 10^2 | 2.07 × 10^0
f5 | Mean | 3.01 × 10^1 | 1.46 × 10^2 | 6.34 × 10^1 | 2.21 × 10^2 | 6.91 × 10^−2
f5 | Std | 2.99 × 10^1 | 2.64 × 10^1 | 3.09 × 10^1 | 1.56 × 10^1 | 3.79 × 10^−1
f5 | Rank | 5 | 3 | 2 | 4 | 1
f6 | Min | 7.18 × 10^1 | 2.47 × 10^1 | 2.85 × 10^1 | 1.66 × 10^3 | 2.54 × 10^1
f6 | Max | 5.02 × 10^4 | 4.56 × 10^2 | 2.87 × 10^1 | 5.20 × 10^3 | 2.79 × 10^1
f6 | Mean | 7.00 × 10^3 | 2.72 × 10^2 | 2.87 × 10^1 | 3.20 × 10^3 | 2.66 × 10^1
f6 | Std | 1.21 × 10^4 | 3.58 × 10^0 | 3.20 × 10^−2 | 9.90 × 10^2 | 7.18 × 10^−1
f6 | Rank | 5 | 4 | 3 | 2 | 1
f7 | Min | 4.29 × 10^0 | 3.66 × 10^−13 | 1.57 × 10^−2 | 1.07 × 10^1 | 2.05 × 10^−5
f7 | Max | 1.79 × 10^1 | 2.98 × 10^−12 | 1.39 × 10^−1 | 1.81 × 10^1 | 1.01 × 10^0
f7 | Mean | 7.75 × 10^0 | 1.52 × 10^−12 | 6.56 × 10^−2 | 1.53 × 10^1 | 4.64 × 10^−1
f7 | Std | 3.34 × 10^0 | 7.21 × 10^−13 | 3.03 × 10^−2 | 1.99 × 10^0 | 3.24 × 10^−1
f7 | Rank | 5 | 4 | 3 | 2 | 1
f8 | Min | 5.24 × 10^0 | 3.34 × 10^−2 | 2.46 × 10^−5 | 1.62 × 10^2 | 2.37 × 10^−4
f8 | Max | 2.89 × 10^5 | 8.20 × 10^−2 | 7.11 × 10^−4 | 4.18 × 10^2 | 2.62 × 10^−3
f8 | Mean | 4.69 × 10^4 | 5.33 × 10^−2 | 2.40 × 10^−4 | 2.65 × 10^2 | 1.31 × 10^−3
f8 | Std | 7.42 × 10^4 | 1.40 × 10^−2 | 1.86 × 10^−4 | 6.37 × 10^1 | 5.97 × 10^−4
f8 | Rank | 5 | 4 | 3 | 2 | 1
f9 | Min | −1.12 × 10^3 | −1.66 × 10^3 | −1.30 × 10^3 | −1.52 × 10^3 | −1.49 × 10^3
f9 | Max | −7.88 × 10^2 | −1.33 × 10^3 | −9.10 × 10^2 | −9.46 × 10^2 | −1.08 × 10^3
f9 | Mean | −9.34 × 10^2 | −9.34 × 10^2 | −1.09 × 10^3 | −1.19 × 10^4 | −1.24 × 10^3
f9 | Std | 7.27 × 10^1 | 7.97 × 10^1 | 1.03 × 10^2 | 1.47 × 10^2 | 8.49 × 10^1
f9 | Rank | 2 | 1 | 5 | 3 | 4
f10 | Min | 3.34 × 10^−1 | 1.00 × 10^−22 | 5.00 × 10^−4 | 2.02 × 10^−1 | 0.00 × 10^0
f10 | Max | 2.53 × 10^1 | 2.31 × 10^−19 | 4.85 × 10^−1 | 7.98 × 10^0 | 5.23 × 10^−2
f10 | Mean | 2.30 × 10^0 | 2.76 × 10^−20 | 6.85 × 10^−2 | 1.37 × 10^0 | 2.18 × 10^−2
f10 | Std | 5.49 × 10^0 | 5.08 × 10^−20 | 1.41 × 10^−1 | 1.81 × 10^0 | 1.26 × 10^−2
f10 | Rank | 5 | 4 | 2 | 3 | 1
Total Rank | | 45 | 35 | 25 | 31 | 14
Final Rank | | 5 | 4 | 2 | 3 | 1
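As context for the comparison in Table 2, a minimal standard (global-best) PSO of the kind introduced in [35] can be sketched as follows; the inertia and acceleration coefficients shown are illustrative defaults, not the IPSO settings used in the paper.

```python
import numpy as np

def pso(f, dim=5, n_particles=30, iters=200, lb=-100.0, ub=100.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO minimizing f over the box [lb, ub]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_particles, dim))     # positions
    v = np.zeros_like(x)                            # velocities
    pbest = x.copy()                                # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()            # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()
```

The IPSO of this paper modifies this baseline (e.g., its inertia-weight and coefficient schedules) to mitigate premature convergence; those modifications are not reproduced here.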
Table 3. Comparison of prediction errors.
Period | Error | Inbound (L 1 / C-L 1 / C-P-L 1 / C-IP-L 1) | Outbound (L 1 / C-L 1 / C-P-L 1 / C-IP-L 1)
Month, Day | SD | 67 / 55 / 39 / 27 | 89 / 83 / 67 / 54
Month, Day | RMSE | 69 / 55 / 38 / 25 | 89 / 84 / 66 / 54
Month, Day | MAE | 57 / 41 / 26 / 20 | 66 / 60 / 47 / 35
Month, Day | MAPE | 82.36 / 73.19 / 40.55 / 35.47 | 83.68 / 80.11 / 55.08 / 48.58
Month, Day | R | 97.24 / 98.15 / 99.49 / 99.56 | 95.16 / 96.65 / 97.93 / 98.79
Month, Day | R2 | 97.69 / 99.10 / 99.75 / 99.88 | 97.57 / 97.87 / 98.57 / 99.24
Weekday, Day | SD | 62 / 49 / 33 / 25 | 86 / 78 / 45 / 37
Weekday, Day | RMSE | 61 / 47 / 33 / 24 | 87 / 78 / 45 / 37
Weekday, Day | MAE | 44 / 36 / 24 / 19 | 63 / 56 / 42 / 36
Weekday, Day | MAPE | 73.34 / 65.84 / 40.02 / 35.33 | 72.04 / 70.53 / 57.82 / 40.68
Weekday, Day | R | 98.81 / 99.17 / 99.49 / 99.70 | 96.34 / 97.04 / 98.02 / 99.33
Weekday, Day | R2 | 98.34 / 99.42 / 99.74 / 99.86 | 98.21 / 98.57 / 99.52 / 99.67
Weekday, Peak | SD | 124 / 112 / 86 / 78 | 199 / 151 / 124 / 100
Weekday, Peak | RMSE | 123 / 110 / 87 / 75 | 203 / 151 / 126 / 101
Weekday, Peak | MAE | 91 / 83 / 67 / 52 | 159 / 112 / 91 / 83
Weekday, Peak | MAPE | 63.17 / 58.84 / 47.37 / 32.01 | 61.77 / 57.84 / 47.02 / 41.63
Weekday, Peak | R | 88.33 / 94.54 / 96.24 / 98.7 | 74.20 / 80.66 / 90.12 / 94.73
Weekday, Peak | R2 | 93.98 / 94.73 / 97.53 / 98.35 | 87.01 / 93.40 / 95.11 / 97.24
Weekday, Off-Peak | SD | 43 / 41 / 32 / 26 | 69 / 59 / 41 / 32
Weekday, Off-Peak | RMSE | 43 / 40 / 33 / 27 | 70 / 57 / 41 / 32
Weekday, Off-Peak | MAE | 33 / 29 / 23 / 20 | 50 / 43 / 30 / 25
Weekday, Off-Peak | MAPE | 72.82 / 62.72 / 50.49 / 47.73 | 70.25 / 68.56 / 56.14 / 52.08
Weekday, Off-Peak | R | 95.11 / 95.80 / 97.28 / 96.79 | 87.98 / 91.77 / 95.73 / 97.39
Weekday, Off-Peak | R2 | 97.54 / 97.92 / 98.70 / 99.43 | 93.91 / 95.80 / 97.97 / 98.69
Weekend, Day | SD | 116 / 92 / 60 / 41 | 120 / 99 / 73 / 52
Weekend, Day | RMSE | 119 / 93 / 60 / 42 | 118 / 100 / 73 / 55
Weekend, Day | MAE | 87 / 71 / 50 / 46 | 89 / 75 / 50 / 42
Weekend, Day | MAPE | 37.28 / 26.27 / 18.71 / 15.00 | 43.31 / 40.19 / 35.71 / 32.00
Weekend, Day | R | 79.88 / 83.48 / 88.37 / 92.84 | 73.88 / 80.48 / 89.35 / 91.84
Weekend, Day | R2 | 84.06 / 89.64 / 93.18 / 96.88 | 86.29 / 90.64 / 94.18 / 97.59
1 The names of LSTM, CEEMDAN-LSTM, CEEMDAN-PSO-LSTM, and CEEMDAN-IPSO-LSTM models are abbreviated as L, C-L, C-P-L, and C-IP-L in Table 3.
Zeng, L.; Li, Z.; Yang, J.; Xu, X. CEEMDAN-IPSO-LSTM: A Novel Model for Short-Term Passenger Flow Prediction in Urban Rail Transit Systems. Int. J. Environ. Res. Public Health 2022, 19, 16433. https://doi.org/10.3390/ijerph192416433