A Simple Dendritic Neural Network Model-Based Approach for Daily PM2.5 Concentration Prediction
Abstract
1. Introduction
2. Related Work
3. Methodology Formulation
3.1. SDNN Structure
3.1.1. Synapses
- Constant-1 connection: when $\theta < 0 < w$ or $\theta < w < 0$, where $w$ is the synaptic weight and $\theta$ is the synaptic threshold, the output of the synapse is always approximately 1, regardless of changes in the input.
- Constant-0 connection: when $0 < w < \theta$ or $w < 0 < \theta$, the output is always approximately 0, regardless of changes in the input.
- Inverse connection: when the input exceeds the threshold $\theta/w$, where $w < \theta < 0$, the output is approximately 0; otherwise, the output tends to 1.
- Direct connection: when the input exceeds the threshold $\theta/w$, where $0 < \theta < w$, the output tends to 1; otherwise, the output is approximately 0.
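To make the four connection states concrete, the following is a minimal numerical sketch of a sigmoid-type synaptic function of the kind used in dendritic neuron models; the steepness constant k = 10 and the example (w, θ) pairs are illustrative assumptions rather than the exact settings of the proposed SDNN.

```python
import numpy as np

def synapse(x, w, theta, k=10.0):
    """Sigmoid-type synaptic nonlinearity with output in (0, 1).

    x     : normalized input in [0, 1]
    w     : synaptic weight
    theta : synaptic threshold
    k     : steepness constant (assumed value, for illustration only)
    """
    return 1.0 / (1.0 + np.exp(-k * (w * x - theta)))

x = np.linspace(0.0, 1.0, 5)                      # sample inputs 0, 0.25, ..., 1
cases = {
    "direct     (0 < theta < w)": (1.0, 0.5),     # ~0 below theta/w, ~1 above
    "inverse    (w < theta < 0)": (-1.0, -0.5),   # ~1 below theta/w, ~0 above
    "constant-1 (theta < 0 < w)": (1.0, -0.5),    # ~1 for any input
    "constant-0 (0 < w < theta)": (0.5, 1.0),     # ~0 for any input
}
for name, (w, theta) in cases.items():
    print(name, np.round(synapse(x, w, theta), 2))
```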
3.1.2. Dendrites
3.1.3. Soma
3.2. Training Algorithm
3.2.1. Direction Vector
3.2.2. Collisions
3.2.3. Random Behaviour
3.3. Time Delay and Embedding Dimensions
- Calculate the Euclidean distance between a point and its nearest neighbour in the current embedding dimension d. Both points are then extended from dimension d to d + 1, and the new Euclidean distance is computed. If the relative increase in distance exceeds a tolerance threshold, the points are considered false neighbours; otherwise, verify the second condition.
- If the new distance is not small compared with the overall size of the attractor (the second condition), the points are likewise considered false neighbours.
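A minimal sketch of this false-nearest-neighbour test is given below. The tolerance values `r_tol` and `a_tol`, the helper names, and the use of the series standard deviation as a proxy for the attractor size are assumptions for illustration, not the exact settings of this study.

```python
import numpy as np

def delay_embed(x, m, tau):
    """Phase-space reconstruction: each row is a delay vector of dimension m."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

def fnn_fraction(x, m, tau, r_tol=15.0, a_tol=2.0):
    """Fraction of false nearest neighbours when going from dimension m to m + 1."""
    x = np.asarray(x, float)
    y_m, y_m1 = delay_embed(x, m, tau), delay_embed(x, m + 1, tau)
    n = len(y_m1)                        # points available in both embeddings
    sigma = np.std(x)                    # proxy for the attractor size
    false = 0
    for i in range(n):
        d = np.linalg.norm(y_m[:n] - y_m[i], axis=1)
        d[i] = np.inf                    # exclude the point itself
        j = int(np.argmin(d))            # nearest neighbour in dimension m
        r_m = d[j]
        r_m1 = np.linalg.norm(y_m1[i] - y_m1[j])
        # condition 1: the distance grows sharply once the extra coordinate is added
        # condition 2: the new distance is no longer small relative to the attractor size
        if r_m > 0 and (abs(x[i + m * tau] - x[j + m * tau]) / r_m > r_tol
                        or r_m1 / sigma > a_tol):
            false += 1
    return false / n

# usage sketch: increase m until the FNN fraction drops to (nearly) zero
# fractions = [fnn_fraction(series, m, tau=4) for m in range(1, 8)]
```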
3.4. PSR and the MLE
4. Experiments
4.1. Dataset Description
4.2. Normalization
4.3. Parameter Settings
4.4. Evaluation Criteria
- The MSE of the predictor for the normalized data is obtained as follows: $\mathrm{MSE}=\frac{1}{N}\sum_{t=1}^{N}\left(y_{t}-\hat{y}_{t}\right)^{2}$, where $y_{t}$ is the observed value, $\hat{y}_{t}$ is the predicted value, and $N$ is the number of predictions.
- The MAE is defined as follows: $\mathrm{MAE}=\frac{1}{N}\sum_{t=1}^{N}\left|y_{t}-\hat{y}_{t}\right|$.
- The MAPE of the predictions is defined as follows: $\mathrm{MAPE}=\frac{100\%}{N}\sum_{t=1}^{N}\left|\frac{y_{t}-\hat{y}_{t}}{y_{t}}\right|$.
- The RMSE for the normalized distribution can be defined as follows: $\mathrm{RMSE}=\sqrt{\frac{1}{N}\sum_{t=1}^{N}\left(y_{t}-\hat{y}_{t}\right)^{2}}$.
- The CE (coefficient of efficiency) of the prediction phase can be given by the following: $\mathrm{CE}=1-\frac{\sum_{t=1}^{N}\left(y_{t}-\hat{y}_{t}\right)^{2}}{\sum_{t=1}^{N}\left(y_{t}-\bar{y}\right)^{2}}$, where $\bar{y}$ is the mean of the observed values.
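The following is a compact sketch for computing the five criteria on one prediction run; the function and variable names are illustrative, and CE follows the coefficient-of-efficiency form given above.

```python
import numpy as np

def evaluate(y_true, y_pred):
    """Return MSE, MAE, MAPE (%), RMSE and CE for one prediction run."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    mse = np.mean(err ** 2)
    mae = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err / y_true))   # assumes no zero observations
    rmse = np.sqrt(mse)
    ce = 1.0 - np.sum(err ** 2) / np.sum((y_true - np.mean(y_true)) ** 2)
    return {"MSE": mse, "MAE": mae, "MAPE": mape, "RMSE": rmse, "CE": ce}

# example with dummy normalized PM2.5 values
print(evaluate([0.31, 0.42, 0.28, 0.55], [0.30, 0.45, 0.25, 0.50]))
```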
4.5. Performance Comparison
4.5.1. Comparison with Other Optimization Algorithms
4.5.2. Comparison with Other Prediction Approaches
5. Extension
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
State | Duration | β | α | ρ | Probability H
---|---|---|---|---|---
Gas | 50% | 0.8 | 0.8 | [0.8, 1.0] | 0.9 |
Liquid | 40% | 0.4 | 0.2 | [0.0, 0.6] | 0.2 |
Solid | 10% | 0.1 | 0.0 | [0.0, 0.1] | 0.0 |
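The table above splits an SMS run into three phases by iteration count. The snippet below is a minimal sketch of how the per-state settings could be looked up from the current iteration; the dictionary layout and function name are assumptions for illustration and follow the column reading given above (β, α, ρ, H).

```python
# Per-state settings of the SMS run, read from the table above
# (gas: first 50% of iterations, liquid: next 40%, solid: final 10%).
SMS_STATES = {
    "gas":    {"beta": 0.8, "alpha": 0.8, "rho": (0.8, 1.0), "H": 0.9},
    "liquid": {"beta": 0.4, "alpha": 0.2, "rho": (0.0, 0.6), "H": 0.2},
    "solid":  {"beta": 0.1, "alpha": 0.0, "rho": (0.0, 0.1), "H": 0.0},
}

def current_state(iteration, max_iterations):
    """Return the state name and its parameters for the given iteration."""
    frac = iteration / max_iterations
    name = "gas" if frac < 0.5 else ("liquid" if frac < 0.9 else "solid")
    return name, SMS_STATES[name]

# e.g. current_state(120, 200) -> ('liquid', {'beta': 0.4, ...})
```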
Dataset | Time Delay τ | Embedding Dimension m | MLE
---|---|---|---
PM data 1 | 3 | 4 | 0.0528 |
PM data 2 | 5 | 4 | 0.0023 |
PM data 3 | 5 | 4 | 0.0024 |
PM data 4 | 4 | 5 | 0.0380 |
PM data 5 | 4 | 4 | 0.0474 |
PM data 6 | 4 | 4 | 0.0452 |
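Given the selected time delay τ and embedding dimension m, the one-step-ahead training pairs can be assembled by phase-space reconstruction. The sketch below illustrates this for PM data 1 (τ = 3, m = 4); the function name and the use of a random placeholder series are assumptions for illustration.

```python
import numpy as np

def make_samples(series, m, tau):
    """Build one-step-ahead training pairs from a delay embedding:
    input [x(t), x(t - tau), ..., x(t - (m - 1) * tau)] -> target x(t + 1)."""
    x, y = [], []
    for t in range((m - 1) * tau, len(series) - 1):
        x.append([series[t - i * tau] for i in range(m)])
        y.append(series[t + 1])
    return np.array(x), np.array(y)

# PM data 1: tau = 3, m = 4 (see the table above); placeholder series of 547 points
series = np.random.rand(547)
X, Y = make_samples(series, m=4, tau=3)
print(X.shape, Y.shape)    # (537, 4) (537,)
```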
Dataset | Training Interval (Year/Month) | Prediction Interval (Year/Month) | Instance Number (Training/Prediction Days)
---|---|---|---
PM data 1 | 2016/01–2017/06 | 2017/07–2017/12 | 547/184
PM data 2 | 2016/07–2017/12 | 2018/01–2018/06 | 549/181
PM data 3 | 2017/01–2018/06 | 2018/07–2018/12 | 546/184
PM data 4 | 2017/07–2018/12 | 2019/01–2019/06 | 549/181
PM data 5 | 2018/01–2019/06 | 2019/07–2019/12 | 546/184
PM data 6 | 2018/07–2019/12 | 2020/01–2020/06 | 549/182
No. | K | M | | Data 1 (Mean ± Std) | Data 2 (Mean ± Std) | Data 3 (Mean ± Std) | Data 4 (Mean ± Std) | Data 5 (Mean ± Std) | Data 6 (Mean ± Std)
---|---|---|---|---|---|---|---|---|---
1 | 6 | 4 | 0.3 | 7.60 ± 2.16 | 1.06 ± 2.63 | 1.04 ± 1.50 | 1.78 ± 8.62 | 1.28 ± 2.29 | 1.92 ± 9.00 |
2 | 6 | 7 | 0.6 | 5.26 ± 1.74 | 7.67 ± 3.27 | 7.87 ± 1.88 | 1.67 ± 6.25 | 1.19 ± 3.16 | 1.83 ± 9.24 |
3 | 6 | 10 | 0.9 | 5.26 ± 1.16 | 7.66 ± 2.00 | 7.81 ± 1.93 | 1.72 ± 9.71 | 1.19 ± 2.92 | 1.82 ± 8.60 |
4 | 6 | 13 | 1.2 | 5.21 ± 1.67 | 7.78 ± 2.74 | 7.71 ± 1.77 | 1.68 ± 8.38 | 1.19 ± 2.45 | 1.89 ± 1.31 |
5 | 10 | 4 | 0.6 | 5.13 ± 1.41 | 7.73 ± 4.30 | 8.01 ± 4.22 | 1.68 ± 8.86 | 1.17 ± 2.56 | 1.84 ± 1.30 |
6 | 10 | 7 | 0.3 | 5.22 ± 1.56 | 7.59 ± 2.95 | 7.95 ± 2.96 | 1.72 ± 9.78 | 1.19 ± 3.78 | 1.88 ± 1.24 |
7 | 10 | 10 | 1.2 | 5.19 ± 1.68 | 7.69 ± 3.34 | 7.98 ± 3.31 | 1.722 ± 1.31 | 1.18 ± 4.87 | 1.91 ± 1.57 |
8 | 10 | 13 | 0.9 | 5.20 ± 1.98 | 7.65 ± 3.47 | 7.91 ± 3.86 | 1.74 ± 1.12 | 1.19 ± 3.01 | 1.97 ± 2.05 |
9 | 14 | 4 | 0.9 | 5.06 ± 8.80 | 7.53 ± 4.16 | 7.99 ± 4.59 | 1.67 ± 9.48 | 1.17 ± 2.45 | 1.87 ± 1.09 |
10 | 14 | 7 | 1.2 | 5.15 ± 2.26 | 7.68 ± 4.89 | 7.93 ± 2.88 | 1.74 ± 1.46 | 1.17 ± 4.51 | 1.93 ± 2.08 |
11 | 14 | 10 | 0.3 | 5.25 ± 2.27 | 7.62 ± 4.61 | 8.18 ± 4.35 | 1.78 ± 2.12 | 1.19 ± 5.20 | 1.96 ± 1.38 |
12 | 14 | 13 | 0.6 | 5.20 ± 1.29 | 7.56 ± 3.10 | 8.14 ± 4.92 | 1.79 ± 1.52 | 1.19 ± 4.57 | 2.02 ± 2.27 |
13 | 18 | 4 | 1.2 | 5.05 ± 6.37 | 7.97 ± 4.27 | 8.00 ± 2.43 | 1.67 ± 7.60 | 1.16 ± 1.73 | 1.88 ± 1.96 |
14 | 18 | 7 | 0.9 | 5.10 ± 9.60 | 7.54 ± 3.74 | 8.10 ± 3.92 | 1.73 ± 1.52 | 1.18 ± 4.37 | 1.97 ± 1.65 |
15 | 18 | 10 | 0.6 | 5.14 ± 1.77 | 8.13 ± 2.77 | 8.34 ± 6.48 | 1.74 ± 1.78 | 1.17 ± 2.95 | 2.03 ± .04 |
16 | 18 | 13 | 0.3 | 5.23 ± 2.40 | 7.81 ± 5.70 | 8.27 ± 5.43 | 1.78 ± 2.19 | 1.19 ± 4.69 | 2.04 ± 2.51 |
Algorithm | PM Data 1 (Mean ± Std) | PM Data 2 (Mean ± Std) | PM Data 3 (Mean ± Std) | PM Data 4 (Mean ± Std) | PM Data 5 (Mean ± Std) | PM Data 6 (Mean ± Std)
---|---|---|---|---|---|---
GA | 6.27 ± 1.47 | 1.03 ± 2.36 | 8.37 ± 3.11 | 1.91 ± 2.58 | 1.35 ± 1.40 | 2.55 ± 4.02 |
CS | 6.05 ± 2.07 | 1.08 ± 2.63 | 8.48 ± 3.10 | 1.85 ± 4.88 | 4.77 ± 1.27 | 3.97 ± 1.24 |
FA | 5.97 ± 1.14 | 1.09 ± 3.64 | 8.48 ± 3.79 | 1.88 ± 2.87 | 1.41 ± 1.85 | 2.39 ± 6.27 |
GSA | 1.75 ± 2.62 | 2.46 ± 3.55 | 1.27 ± 5.84 | 5.86 ± 1.21 | 3.20 ± 3.30 | 5.45 ± 7.14 |
JADE | 5.27 ± 5.25 | 9.90 ± 3.15 | 8.54 ± 4.18 | 1.80 ± 1.49 | 1.29 ± 1.40 | 1.96 ± 2.52 |
L-SHADE | 4.99 ± 3.18 | 8.41 ± 4.16 | 7.92 ± 5.58 | 1.51 ± 6.44 | 1.22 ± 1.04 | 1.87 ± 9.21 |
PSO | 5.10 ± 1.66 | 8.62 ± 1.92 | 8.53 ± 6.20 | 1.70 ± 1.30 | 1.23 ± 4.21 | 1.88 ± 7.17 |
SMS | 5.05 ± 6.37 | 7.53 ± 4.16 | 7.71 ± 1.77 | 1.67 ± 7.60 | 1.16 ± 1.73 | 1.82 ± 8.60 |
Algorithm | Ranking | z-Value | Unadjusted p | Adjusted p
---|---|---|---|---
GA | 5.5 | 2.9462 | 0.003216 | 0.02251 |
CS | 6.0833 | 3.3588 | 0.000783 | 0.00548 |
FA | 5.5833 | 3.0052 | 0.002654 | 0.01858 |
GSA | 7.8333 | 4.5962 | 0.000004 | 0.00003 |
JADE | 4.5 | 2.2392 | 0.025145 | 0.09601 |
L-SHADE | 1.6667 | 0.2357 | 0.813664 | 5.69564 |
PSO | 3.5 | 1.5321 | 0.125506 | 0.87855 |
SMS | 1.3333 | - | - | - |
Models | Parameter | Value
---|---|---
MLP | Hidden layer number | 4, 7, 13, 4, 4, 10
DNN-BP | K | 18, 14, 6, 18, 18, 6
 | M | 4, 7, 13, 4, 4, 10
 | | 1.2, 0.9, 1.2, 1.2, 1.2, 0.9
S-DNN | K | 18, 14, 6, 18, 18, 6
 | | 1.2, 0.9, 1.2, 1.2, 1.2, 0.9
DT | Minleaf | 25
 | Maxleaf and Maxdepth | Default
SVR-L, SVR-P, SVR-R | Cost (c) | 0.5, 0.5, 0.5
 | Epsilon of loss function (p) | 0.01, 0.2, 0.01
 | | 1/5
LSTM | Hidden units | 200
 | Maximum epochs | 1000
PM concentration data 1
Models | MSE (Mean ± Std) | p-value | MAE (Mean ± Std) | p-value | MAPE (Mean ± Std) | p-value | RMSE (Mean ± Std) | p-value | CE (Mean ± Std)
---|---|---|---|---|---|---|---|---|---
MLP | 6.54 ± 9.38 | 9.13 | 6.73 ± 6.05 | 1.24 | 6.15 ± 3.73 | 8.43 | 8.07 ± 5.74 | 9.13 | 6.10 ± 1.31 |
DNN-BP | 1.75 ± 3.89 | 9.13 | 1.07 ± 1.86 | 9.13 | 5.56 ± 6.18 | 9.13 | 1.32 ± 1.47 | 9.13 | 5.42 ± 2.60 |
S-DNN | 2.16 ± 3.64 | 1.24 | 1.09 ± 6.77 | 1.24 | 4.84 ± 2.62 | 2.04 | 1.32 ± 6.62 | 1.24 | 4.98 ± 2.57 |
DT | 6.13 ± 1.76 | 1.01 | 5.88 ± 2.82 | 1.01 | 7.73 ± 2.21 | 1.01 | 7.83 ± 0.00 | 1.01 | 5.43 ± 2.82 |
SVR-L | 5.45 ± 8.82 | 1.63 | 6.12 ± 4.23 | 9.13 | 6.31 ± 1.10 | 1.16 | 7.38 ± 2.82 | 1.63 | 5.91 ± 2.26 |
SVR-P | 7.40 ± 0.00 | 9.13 | 7.33 ± 0.00 | 9.13 | 6.07 ± 1.10 | 1.21 | 8.60 ± 1.41 | 9.13 | 4.53 ± 2.26 |
SVR-R | 5.36 ± 2.65 | 1.63 | 6.08 ± 3.53 | 9.13 | 6.06 ± 5.51 | 1.01 | 7.32 ± 2.82 | 1.63 | 7.12 ± 2.82 |
LSTM | 5.53 ± 1.95 | 2.47 | 5.93 ± 1.25 | 8.87 | 8.98 ± 9.79 | 2.01 | 7.73 ± 1.42 | 2.11 | 7.45 ± 2.37 |
SDNN | 5.05 ± 6.37 | - | 5.70 ± 6.14 | - | 5.76 ± 4.45 | - | 7.13 ± 1.62 | - | 7.92 ± 1.35 |
PM concentration data 2
Models | MSE (Mean ± Std) | p-value | MAE (Mean ± Std) | p-value | MAPE (Mean ± Std) | p-value | RMSE (Mean ± Std) | p-value | CE (Mean ± Std)
---|---|---|---|---|---|---|---|---|---
MLP | 8.51 ± 6.40 | 1.78 | 7.16 ± 3.51 | 1.01 | 1.49 ± 1.48 | 2.79 | 9.22 ± 3.46 | 1.63 | 3.37 ± 1.07 |
DNN-BP | 4.83 ± 1.40 | 9.13 | 1.42 ± 1.40 | 9.13 | 6.20 ± 3.20 | 9.13 | 1.75 ± 1.35 | 9.13 | 3.41 ± 2.04 |
S-DNN | 2.26 ± 2.28 | 9.13 | 1.16 ± 2.57 | 9.13 | 5.59 ± 9.26 | 9.13 | 1.50 ± 7.63 | 9.13 | 4.98 ± 1.66 |
DT | 1.12 ± 3.53 | 9.13 | 7.50 ± 5.65 | 9.13 | 8.18 ± 4.41 | 1.00 | 1.06 ± 2.82 | 9.13 | 5.09 ± 2.82 |
SVR-L | 7.68 ± 2.65 | 1.42 | 6.99 ± 5.65 | 9.13 | 1.27 ± 2.21 | 1.00 | 8.76 ± 7.06 | 1.42 | 5.49 ± 1.69 |
SVR-P | 9.50 ± 5.29 | 9.13 | 7.93 ± 2.82 | 9.13 | 1.56 ± 6.62 | 9.13 | 9.75 ± 0.00 | 9.13 | 5.67 ± 1.13 |
SVR-R | 7.74 ± 5.29 | 3.51 | 6.98 ± 2.82 | 9.13 | 1.24 ± 4.41 | 1.00 | 8.80 ± 1.41 | 3.36 | 5.45 ± 2.26 |
LSTM | 8.07 ± 3.01 | 6.51 | 6.68 ± 7.88 | 9.13 | 1.25 ± 2.75 | 1.00 | 8.88 ± 1.39 | 1.31 | 5.81 ± 7.37 |
SDNN | 7.53 ± 4.16 | - | 6.42 ± 5.73 | - | 1.43 ± 3.44 | - | 8.66 ± 1.96 | - | 6.57 ± 3.11 |
PM concentration data 3
Models | MSE (Mean ± Std) | p-value | MAE (Mean ± Std) | p-value | MAPE (Mean ± Std) | p-value | RMSE (Mean ± Std) | p-value | CE (Mean ± Std)
---|---|---|---|---|---|---|---|---|---
MLP | 1.06 ± 1.28 | 9.13 | 7.54 ± 4.87 | 9.13 | 1.60 ± 2.01 | 1.03 | 1.03 ± 6.24 | 9.13 | 4.30 ± 1.15 |
DNN-BP | 2.34 ± 7.51 | 9.13 | 1.10 ± 2.16 | 1.01 | 4.71 ± 1.68 | 1.85 | 1.50 ± 2.84 | 9.13 | 1.87 ± 2.86 |
S-DNN | 1.25 ± 5.55 | 9.13 | 8.39 ± 1.89 | 9.13 | 1.73 ± 1.29 | 1.57 | 1.10 ± 2.07 | 9.13 | 4.99 ± 1.17 |
DT | 1.15 ± 0.00 | 9.13 | 7.41 ± 1.41 | 9.13 | 1.20 ± 4.41 | 1.00 | 1.07 ± 1.41 | 9.13 | 4.72 ± 0.00 |
SVR-L | 9.95 ± 5.29 | 9.13 | 7.55 ± 4.23 | 9.13 | 1.45 ± 8.82 | 3.66 | 9.98 ± 1.41 | 9.13 | 5.56 ± 2.26 |
SVR-P | 1.30 ± 1.76 | 9.13 | 8.62 ± 4.23 | 9.13 | 1.93 ± 8.82 | 9.13 | 1.14 ± 1.41 | 9.13 | 4.49 ± 1.69 |
SVR-R | 9.77 ± 3.53 | 9.13 | 7.50 ± 0.00 | 9.13 | 1.42 ± 0.00 | 3.92 | 9.88 ± 0.00 | 9.13 | 5.76 ± 3.39 |
LSTM | 8.55 ± 1.17 | 9.13 | 6.46 ± 5.03 | 1.77 | 1.46 ± 1.50 | 3.92 | 9.42 ± 7.02 | 1.98 | 5.96 ± 3.39 |
SDNN | 7.71 ± 1.77 | - | 6.46 ± 1.16 | - | 1.35 ± 6.99 | - | 8.84 ± 1.25 | - | 6.32 ± 1.51 |
PM concentration data 4
Models | MSE (Mean ± Std) | p-value | MAE (Mean ± Std) | p-value | MAPE (Mean ± Std) | p-value | RMSE (Mean ± Std) | p-value | CE (Mean ± Std)
---|---|---|---|---|---|---|---|---|---
MLP | 1.97 ± 2.00 | 1.37 | 1.06 ± 4.59 | 9.13 | 1.67 ± 1.71 | 1.24 | 1.40 ± 7.11 | 3.30 | 4.78 ± 1.23
DNN-BP | 6.13 ± 5.40 | 9.13 | 1.96 ± 1.52 | 9.13 | 5.57 ± 2.47 | 9.13 | 2.48 ± 1.09 | 9.13 | 3.68 ± 1.94
S-DNN | 4.58 ± 1.18 | 8.59 | 1.38 ± 1.30 | 5.91 | 2.52 ± 2.39 | 3.53 | 1.73 ± 1.28 | 9.13 | 5.68 ± 1.94
DT | 3.44 ± 2.12 | 9.13 | 1.27 ± 0.00 | 9.13 | 8.43 ± 0.00 | 1.00 | 1.86 ± 0.00 | 9.13 | 4.31 ± 1.69
SVR-L | 1.67 ± 1.06 | 3.79 | 9.70 ± 4.23 | 1.62 | 1.57 ± 1.10 | 9.13 | 1.28 ± 0.00 | 1.00 | 5.48 ± 3.39
SVR-P | 2.24 ± 1.06 | 9.13 | 1.09 ± 7.06 | 9.13 | 2.40 ± 4.41 | 9.13 | 1.50 ± 5.65 | 1.51 | 4.14 ± 0.00
SVR-R | 1.68 ± 3.53 | 3.56 | 9.77 ± 4.23 | 3.71 | 1.60 ± 0.00 | 9.13 | 1.29 ± 5.65 | 1.00 | 5.55 ± 1.13
LSTM | 4.13 ± 4.98 | 9.13 | 1.36 ± 5.51 | 9.13 | 1.53 ± 5.19 | 9.13 | 1.89 ± 7.43 | 9.13 | 5.55 ± 1.13 |
SDNN | 1.67 ± 7.60 | - | 9.64 ± 1.67 | - | 9.37 ± 4.37 | - | 1.31 ± 7.50 | - | 5.98 ± 2.03 |
PM concentration data 5
Models | MSE (Mean ± Std) | p-value | MAE (Mean ± Std) | p-value | MAPE (Mean ± Std) | p-value | RMSE (Mean ± Std) | p-value | CE (Mean ± Std)
---|---|---|---|---|---|---|---|---|---
MLP | 1.36 ± 1.30 | 1.37 | 9.01 ± 5.16 | 9.13 | 1.18 ± 2.16 | 1.83 | 1.16 ± 5.54 | 1.37 | 3.39 ± 1.19 |
DNN-BP | 1.07 ± 2.07 | 9.13 | 2.25 ± 2.09 | 9.13 | 6.59 ± 3.42 | 9.13 | 2.62 ± 1.99 | 9.13 | 4.07 ± 2.03 |
S-DNN | 4.05 ± 1.73 | 9.13 | 1.61 ± 6.71 | 9.13 | 5.59 ± 1.20 | 9.13 | 2.01 ± 4.31 | 9.13 | 4.22 ± 1.47 |
DT | 1.39 ± 8.82 | 9.13 | 9.20 ± 5.65 | 9.13 | 8.40 ± 2.21 | 1.00 | 1.18 ± 7.06 | 9.13 | 4.69 ± 1.69
SVR-L | 1.18 ± 1.76 | 1.00 | 7.89 ± 7.06 | 1.63 | 1.18 ± 0.00 | 2.48 | 1.07 ± 5.65 | 1.00 | 4.87 ± 2.26 |
SVR-P | 1.40 ± 3.53 | 9.13 | 8.99 ± 0.00 | 9.13 | 1.83 ± 4.41 | 9.13 | 1.18 ± 2.82 | 9.13 | 5.38 ± 5.65 |
SVR-R | 1.17 ± 0.00 | 1.01 | 7.89 ± 1.41 | 1.63 | 1.10 ± 4.41 | 1.00 | 1.07 ± 1.41 | 1.00 | 4.89 ± 2.26 |
LSTM | 1.30 ± 2.17 | 8.01 | 8.33 ± 6.56 | 6.63 | 1.20 ± 3.63 | 1.79 | 1.14 ± 9.25 | 9.13 | 4.89 ± 2.26 |
SDNN | 1.16 ± 1.73 | - | 7.73 ± 1.08 | - | 1.14 ± 2.70 | - | 1.08 ± 7.61 | - | 5.97 ± 9.69 |
PM concentration data 6
Models | MSE (Mean ± Std) | p-value | MAE (Mean ± Std) | p-value | MAPE (Mean ± Std) | p-value | RMSE (Mean ± Std) | p-value | CE (Mean ± Std)
---|---|---|---|---|---|---|---|---|---
MLP | 2.35 ± 3.18 | 9.13 | 1.09 ± 6.52 | 9.13 | 2.04 ± 2.42 | 2.74 | 1.53 ± 1.03 | 9.13 | 4.27 ± 1.66 |
DNN-BP | 1.25 ± 1.89 | 9.13 | 2.47 ± 1.91 | 9.13 | 5.98 ± 1.60 | 9.13 | 3.09 ± 1.76 | 9.13 | 3.05 ± 1.61 |
S-DNN | 8.65 ± 1.12 | 9.13 | 2.13 ± 1.12 | 9.13 | 5.76 ± 9.30 | 9.13 | 2.76 ± 1.04 | 9.13 | 5.74 ± 1.52 |
DT | 2.25 ± 0.00 | 1.12 | 1.05 ± 4.23 | 9.13 | 1.40 ± 6.62 | 1.00 | 1.50 ± 2.82 | 1.12 | 5.24 ± 0.00 |
SVR-L | 1.84 ± 7.06 | 9.99 | 9.44 ± 1.41 | 6.37 | 1.73 ± 4.41 | 7.16 | 1.32 ± 5.65 | 1.00 | 6.64 ± 2.26 |
SVR-P | 2.64 ± 7.06 | 9.13 | 1.11 ± 5.65 | 9.13 | 2.69 ± 1.32 | 9.13 | 1.62 ± 2.82 | 9.13 | 5.54 ± 0.00 |
SVR-R | 1.83 ± 7.06 | 8.38 | 9.54 ± 4.23 | 8.01 | 1.88 ± 6.62 | 1.67 | 1.35 ± 5.65 | 1.00 | 6.57 ± 0.00 |
LSTM | 2.74 ± 1.37 | 9.13 | 1.11 ± 4.23 | 9.13 | 1.66 ± 2.76 | 1.37 | 1.62 ± 3.31 | 9.13 | 6.44 ± 1.37 |
SDNN | 1.82 ± 8.60 | - | 9.33 ± 2.56 | - | 1.55 ± 1.22 | - | 1.36 ± 6.24 | - | 6.94 ± 4.98 |