Hazard Susceptibility Mapping with Machine and Deep Learning: A Literature Review
Abstract
1. Introduction
2. Background
2.1. Hazard, Susceptibility and Risk
- Susceptibility models the tendency of hazardous events to occur at a specific location, based on its physical and environmental characteristics [11]. The target variable for susceptibility modelling is the occurrence (or non-occurrence) of a hazardous event at a specific location.
- Risk measures the probability and severity of the negative impact on an asset [10]. Risk is the product of the hazard, the exposed assets, and the vulnerability of those assets to the hazard [6] (see the worked formulation after this list). In this context, exposure refers to the presence of infrastructure or a population at the event location, whilst vulnerability refers to the degree to which an asset can be affected by the hazard, considering physical, social, economic, and environmental factors [5].
- Susceptibility maps depict the areas which are prone to the occurrence of hazardous events. Susceptibility is expressed quantitatively as a probability or qualitatively as low, medium, or high susceptibility. The time frame of the events is generally not considered in susceptibility maps [10].
- Risk maps show the combination of the hazard, exposure, and vulnerability maps, with the parameters of the combination customised to the case study.
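As a compact illustration of the relationship described in the risk bullet above, the product of hazard, exposure, and vulnerability can be written symbolically; the notation below is introduced here purely for illustration.

```latex
% Hypothetical notation introduced for illustration only:
% R = risk, H = hazard, E = exposure (assets present at the location),
% V = vulnerability of those assets to the hazard.
R = H \times E \times V
```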
2.2. Machine Learning, Deep Learning, and Other Methods
- Expert-knowledge-based models rely on questionnaires and the expert’s opinion [12].
- Physically based models are deterministic models which simulate the interactions between different environmental variables. They provide a theoretical framework integrating areas such as hydrology and geomorphology [13]. For example, for landslide susceptibility mapping, a physical model may integrate the soil characteristics, slope gradient, and water density [14].
- Statistical methods are data-driven methods which analyse the relationships between the previous occurrences of hazardous events and environmental variables to understand the correlations among them. For example, statistical methods include frequency ratio, weight of evidence, and analytical hierarchy process.
- Machine learning methods are also data-driven and rely on the same relationship as statistical methods. However, ML models learn the characteristics of the environmental variables associated with previous event occurrences in order to estimate the probability of occurrence at unseen locations, a process that can be described as analytical model building [15] (see the illustrative sketch after this list). ML encompasses several subcategories, including deep learning. In this context, ML includes regressions, decision trees, support vector machines, ensembles, Bayesian-based classifiers and regressors, instance-based classifiers, and artificial neural networks.
- Deep learning methods are part of ML. They are a branch of artificial neural networks, specifically those with multiple hidden layers and specific processing approaches [15]. They tend to perform better than ‘traditional’ ML [16] due to their capacity to model more complex relationships. DL includes convolutional neural networks, recurrent neural networks, transformer neural networks, autoencoders, and others.
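To make the data-driven formulation above concrete, the minimal sketch below trains a generic classifier on synthetic conditioning factors (slope, rainfall, distance to river) against a binary occurrence label and outputs a per-location probability of occurrence, i.e., a susceptibility score. All variable names, values, and hyperparameters are illustrative assumptions, not taken from any reviewed study.

```python
# Minimal, illustrative sketch of data-driven susceptibility modelling:
# conditioning factors (features) vs. past event occurrence (binary label).
# All data below are synthetic placeholders, not from any reviewed study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 1000

# Hypothetical conditioning factors for n mapping units (e.g., grid cells)
X = np.column_stack([
    rng.uniform(0, 45, n),      # slope (degrees)
    rng.uniform(200, 2000, n),  # annual rainfall (mm)
    rng.uniform(0, 5000, n),    # distance to river (m)
])
# Synthetic occurrence label: steeper, wetter cells close to rivers are more prone
p = 1 / (1 + np.exp(-(0.08 * X[:, 0] + 0.002 * X[:, 1] - 0.001 * X[:, 2] - 4)))
y = rng.binomial(1, p)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Susceptibility expressed as the probability of occurrence per mapping unit
susceptibility = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, susceptibility))
```

In practice, the feature matrix would be assembled from the conditioning factors discussed in Sections 6–10 (e.g., DEM derivatives, LULC, rainfall) and the labels from a hazard inventory.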
3. Methodology
- “air pollution” AND (mapping OR prediction OR modelling) AND (deep OR machine) learning.
- “heat island” AND (prediction OR modelling OR intensity) AND (deep OR machine) learning.
- Flood AND (mapping OR susceptibility) AND (machine OR deep) learning.
- Landslide AND (mapping OR susceptibility) AND (machine OR deep) learning.
- Articles were selected if all the following criteria were met:
  - (a) The publication date was between 1 January 2020 and 31 December 2023.
  - (b) The publication was a peer-reviewed article (i.e., not conference proceedings or other types of text).
  - (c) The publication reported on the use of ML/DL models, algorithms, or techniques in the production of hazard susceptibility mapping, forecasting, or modelling.
  - (d) The publication was related to risk assessment, but the susceptibility of hazards was modelled individually, i.e., the risk assessment could be split into hazard susceptibility modelling, with the risk then assessed based on the modelled hazard susceptibility.
- Articles were excluded if any of the following criteria were met:
  - (a) The publication was a literature review.
  - (b) The full text was not available.
  - (c) The publication was written in a language other than English, without an English translation.
  - (d) The publication did not have a DOI.
- Potentially relevant studies published before the year 2020 were excluded; therefore, the information presented in this manuscript is limited to the latest studies.
- The exclusion of articles which are not in English introduces a language bias by ignoring relevant studies written in other languages.
- The exclusion of grey literature, such as conference proceedings, tends to result in cutting-edge approaches being overlooked.
- The articles are sorted based on the number of citations per year; therefore, the sorting is biased towards older articles.
4. Algorithms Classification
- Supervised learning algorithms start from a known training dataset with labelled data and produce an inferred function used to make predictions.
- Unsupervised learning algorithms take as input data those which are not labelled or classified a priori. The goal of these models is to find hidden structures or patterns in the training data without explicit feedback or guidance.
- Semi-supervised learning algorithms combine the two previous categories, using both labelled and unlabelled data for training. The objective is to obtain better predictions than supervised learning alone when labelled data are scarce [16] (see the sketch after this list).
- Reinforcement learning algorithms learn to interact with their surrounding environment to achieve a specific goal. The model takes actions and receives rewards based on the outcome of those actions, and its goal is to learn the set of actions that maximises the cumulative reward. Example use cases for reinforcement learning are autonomous driving and robotics, which operate in an environment-driven context [16].
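As a small illustration of the difference between supervised and semi-supervised learning described above, the sketch below hides most training labels (marking them with -1) and compares a supervised tree trained only on the few labelled samples with a self-training wrapper that also exploits the unlabelled ones. Data and parameter choices are synthetic placeholders, not drawn from any reviewed study.

```python
# Sketch contrasting supervised and semi-supervised learning when labels are scarce.
# Data and parameter choices are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, y_train, X_test, y_test = X[:1500], y[:1500], X[1500:], y[1500:]

# Pretend only ~5% of the training labels are known; -1 marks unlabelled samples
rng = np.random.default_rng(0)
y_partial = y_train.copy()
unlabelled = rng.random(len(y_partial)) > 0.05
y_partial[unlabelled] = -1

# Supervised baseline: uses only the few labelled samples
supervised = DecisionTreeClassifier(random_state=0).fit(
    X_train[~unlabelled], y_train[~unlabelled])

# Semi-supervised: self-training also exploits the unlabelled samples
semi = SelfTrainingClassifier(DecisionTreeClassifier(random_state=0)).fit(
    X_train, y_partial)

print("supervised     :", accuracy_score(y_test, supervised.predict(X_test)))
print("semi-supervised:", accuracy_score(y_test, semi.predict(X_test)))
```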
4.1. Neural Networks
4.2. Regression
4.3. Decision Trees
4.4. Support Vector Machines
4.5. Ensemble Methods
- Overfitting avoidance: when the number of data points available for training is limited, learning algorithms tend to fit the training data too closely and perform poorly on unseen instances. Averaging multiple predictions from different models can effectively reduce this limitation and improve the overall predictive performance (see the sketch after this list).
- Local optima avoidance: single machine learning algorithms have the possibility of getting stuck in local optima solutions. This drawback is reduced in ensemble methods with the combination of multiple learners.
- Expansion of search space: the best solution for a problem can be outside the hypothesis space of any single model. However, the combination of different models can expand the search space and increase the chance of finding the best fit for the data.
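The sketch below illustrates the averaging idea behind the motivations listed above by combining the predicted probabilities of three heterogeneous base learners; the choice of models, data, and hyperparameters is purely illustrative.

```python
# Illustrative sketch of prediction averaging across heterogeneous learners.
# Models, data, and hyperparameters are placeholders, not from any reviewed study.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=1000, n_features=12, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

base_models = [
    LogisticRegression(max_iter=1000),
    DecisionTreeClassifier(max_depth=5, random_state=1),
    SVC(probability=True, random_state=1),
]

# Each learner explores a different hypothesis space; averaging their
# probabilities reduces the variance of any single, possibly overfitted, model.
probs = [m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1] for m in base_models]
ensemble = np.mean(probs, axis=0)

for m, p in zip(base_models, probs):
    print(type(m).__name__, roc_auc_score(y_te, p))
print("Averaged ensemble", roc_auc_score(y_te, ensemble))
```

Bagging, boosting, and stacking (Appendix A) refine this basic idea by resampling the training data, reweighting difficult samples, or learning the combination itself.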
4.6. Instance Based
4.7. Bayesian
4.8. Time Series
4.9. Other Classes
4.9.1. Dimensionality Reduction
4.9.2. Statistical
4.9.3. Clustering
4.9.4. Rule-Based Systems
5. Feature Selection
5.1. Wrappers
5.2. Filters
5.3. Embedded
5.4. Explainable
6. Hazard Susceptibility Modelling
6.1. Conditioning Factors Common to Multiple Hazards
7. Air Pollution Susceptibility Modelling
7.1. Air Pollution Data Sources and Conditioning Factors
7.2. Air Pollution Monitoring Data Preprocessing
7.3. Insights into Pollution ML/DL Modelling Techniques
8. Urban Heat Island Susceptibility Modelling
8.1. Urban Heat Island Data Sources and Conditioning Factors
8.2. Insights into the Urban Heat Island ML/DL Modelling Techniques
9. Flood Susceptibility Modelling
9.1. Flood Data Sources and Conditioning Factors
9.1.1. Flood Inventory
9.1.2. Flood Conditioning Factors
9.2. Flood Data Preprocessing
9.3. Insights into the Flood ML/DL Modelling Techniques
10. Landslide Susceptibility Modelling
10.1. Landslide Data Sources and Conditioning Factors
10.2. Landslide Inventory
10.3. Landslide Conditioning Factors
10.4. Landslide Data Preprocessing
10.5. Insights into the Landslide ML/DL Modelling Techniques
11. Discussion
11.1. The Importance of the Quality of Training Data
11.2. Model Generalisation and Transfer Learning
11.3. Spatial Agreement of Susceptibility Maps
12. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
ANN | Artificial Neural Networks |
ANOVA | Analysis of Variance |
AOD | Aerosol Optical Depth |
AOI | Area of Interest |
AQI | Air Quality Index |
AUC | Area under the ROC Curve |
BAY | Bayesian |
BLSTM | Bidirectional Long Short-Term Memory |
CF | Conditioning Factor |
CLUST | Clustering |
CNN | Convolutional Neural Network |
DEM | Digital Elevation Model |
DL | Deep Learning |
DNN | Deep Neural Network |
DR | Dimensionality Reduction |
DTs | Decision Trees |
ENS | Ensemble |
ERA5 | ECMWF (European Centre for Medium-Range Weather Forecasts) Atmospheric Reanalysis V5 |
ERT | Extremely Randomised Trees |
GAMI | Generalised Additive Models with structured Interaction |
GAN | Generative Adversarial Networks |
GCN | Graph Convolutional Network |
GE | Google Earth |
GPR | Gaussian Process Regression |
GRU | Gated Recurrent Unit |
IB | Instance-based |
IoT | Internet of Things |
LDA | Linear Discriminant Analysis |
LIME | Local Interpretable Model-agnostic Explanations |
LR | Linear Regression |
LST | Land Surface Temperature |
LSTM | Long Short-Term Memory |
LULC | Land Use–Land Cover |
MG | Model Generalisation |
MI | Mutual Information |
ML | Machine Learning |
MNDWI | Modified Normalised Difference Water Index |
MSPA | Morphological Spatial Pattern Analysis |
NDBI | Normalised Difference Built-up Index |
NDVI | Normalised Difference Vegetation Index |
NDWI | Normalised Difference Water Index |
NN | Neural Networks |
OSM | OpenStreetMap |
REG | Regression |
RF | Random Forest |
RFR | Random Forest Regression |
RNN | Recurrent Neural Network |
ROC | Receiver Operating Characteristic |
RUL | Rule-based Systems |
SDG | Sustainable Development Goal |
SHAP | Shapley Additive Explanation values |
STAT | Statistical |
SVM | Support Vector Machine |
SVR | Support Vector Regression |
TL | Transfer Learning |
TS | Time Series |
UHI | Urban Heat Island |
UHII | Urban Heat Island Intensity |
UTEI | Urban Thermal Environment Index |
UTFVI | Urban Thermal Field Variance Index |
XAI | Explainable Artificial Intelligence |
XGB | Extreme Gradient Boost |
Appendix A. Algorithms Classification
Class | Algorithms |
---|---|
Bayesian | Bayesian General Linear Model (BGLM), Bayesian Logistic Regression (BLOGR), Bayesian Moving Average (BMA), Bayesian Network (BN), Bayesian Regression (BR), Incremental Learning Bayesian Network (ILBN), Naive Bayes (NB). |
Clustering | DBScan (DBS), Gaussian Mixture Models (GMM), Hierarchical Clustering (HC), K-Means (KM), Positive Unlabelled Bagging (PUB). |
Dimensionality Reduction | Flexible Discriminant Analysis (FLDA), Functional Discriminant Analysis (FUDA), Mixture Discriminant Analysis (MIDA), Multivariate Discriminant Analysis (MDA), Partial Least Square Regression (PLSR), Quadratic Discriminant Analysis (QDA). |
Decision Trees | Alternating Decision Trees (ADT), Best First Decision Trees (BFDT), C4.5 Decision Tree (C45DT), C5.0 Decision Trees (C50DT), CHi-squared Automatic Interaction Detection (CHAID), Classification and Regression Trees (CART), Decision Table Classifier (DTC), Decision Trees (DT), Functional Trees (FT), Gradient Boosting Regression Trees (GBRT), Hoeffding Trees (HT), J48 Decision Tree (JDT), Logistic Model Tree (LMT), M5 model trees (M5P), Naive Bayes Trees (NBT), Partial Decision Tree (PDT), Reduced Error Pruning Decision Tree (REPDT), Reduced Error Pruning Tree (REPT), Regression Trees (RT), Spatiotemporal Decision Trees (STDT). |
Instance-based | Hyperpipes (HP), K-Nearest-Neighbour (KNN), K-Star (KS), Locally Weighted Linear Regression (LWLR), Subspace K-Nearest-Neighbour (SSKNN), Voting Feature Intervals (VFI). |
Rule-based Systems | Cubist (CUB), Genetic Algorithm Rule-Set Production (GARP), Rough Set (RS). |
Regression | Additive Regression (ADR), Complete Subset Regression (CRS), Elastic Regression (ER), Elasticnet Classifier (ENC), Gaussian Process Regression (GPR), General Linear Model (GLM), Generalised Additive Model (GAM), Kernel Logistic Regression (KLOGR), Kernel-based Regularised Least Squares (KRLS), LASSO Regression (LASSO), Land Use Regression (LUR), Linear Regression (LR), Logistic Regression (LOGR), Maximum Entropy (MENT), Multivariate Adaptive Regression Spline (MARS), Principal Component Regression (PCR), Ridge Regression (RR), Volterra (VOL). |
Support Vector Machine | Least Square Support Vector Machine (LSSVM), Relevance Vector Machine (RVM), Spatiotemporal Support Vector Machine (STSVM), Support Vector Machine (SVM), Support Vector Regression (SVR). |
Statistical | Dynamic Conditional Pareto (DCP), Frequency Ratio (FR), Functional Data Analysis (FDA), Response Surface Model (RSM). |
Time Series | Autoregressive model (AR), Autoregressive Integrated Moving Average (ARIMA), Autoregressive Moving Average (ARMA), Generalised Autoregressive Conditional Heteroskedasticity (GARCH), Moving Average (MA), Prophet Forecasting Model (PFM), Vector Autoregression (VAR). |
Neural Networks | Adaptive Neuro-Fuzzy Inference System (ANFIS), Artificial Neural Network (ANN), Autoencoders (AE), Autocorrelation Error Informer (AEI), Back Propagation Neural Network (BPNN), Bayesian Neural Network (BNN), Bottleneck Transformer Network (BTN), Convolutional Neural Network (CNN), Deep Autoencoder (DAE), Deep Belief Network (DBN), Deep Convolutional Neural Network (DCNN), Deep Neural Network (DNN), Dense Convolutional Networks (DCN), Echo State Neural Network (ESNN), Elman Network (EN), Extreme Learning Adaptive Neuro-Fuzzy Inference System (ELANFIS), Extreme Learning Machine (ELM), Full Connection Layers (FCL), Fuzzy Neural Network (FNN), Gated Recurrent Unit (GRU), General Regression Neural Network (GRNN), Generalised Additive Models with Structured Interactions (GAMI), Generative Adversarial Network (GAN), Graph Convolutional Network (GCN), Graph Long Short-Term Memory (GLSTM), Graph Neural Network (GNN), Hierarchical Neural Network (HNN), Long Short-Term Memory (LSTM), Model Averaged Neural Network (MANN), Multi-Graph Convolution (MGC), Multi-Layer Perceptron (MLP), Multi-Step Ahead Neural Network (MSANN), Passive Aggressive Classifier (PA), Quasi-Recurrent Neural Networks (QRNN), Radial Basis Function (RBF), Recurrent Neural Network (RNN), Residual Convolutional Neural Network (RESCNN), Residual Neural Network (RESNN), Restricted Boltzman Machine (RBM), Self-Adaptive Deep Neural Network (SADNN), Self-Organising Map (SOM), Sequence2Sequence RNN (SEQ2SEQ), Simple Recurrent Unit (SRU), Spatiotemporal Backpropagation Neural Network (STBPNN), Spatiotemporal Dynamic Advection (STDA), Spatiotemporal Extreme Learning Machine (STELM), Spatiotemporal Gated Recurrent Unit (STGRU), Spatiotemporal Informer (STI), Spatiotemporal Long Short Term Memory (STLSTM), Spatiotemporal Multi-Layer Perceptron (STMLP), Spatiotemporal Neural Networks (STNN), Spatiotemporal Orthogonal Cube (STOC), Spatiotemporal Transformer (STT), Temporal Convolutional Neural Network (TCNN), Temporal Difference-based Graph Transformer Networks (TDGTN), Transformer Neural Network (TNN), Variational AutoEncoder (VAE), Vision Transformer (ViT), Wavelet Neural Network (WNN). |
Ensemble | Adaboost (AB): AB Decision Table (ABDTA), AB Alternating Decision Trees (ABADT), AB Backpropagation Neural Network (ABBPNN), AB Classifier (ABC), AB Credal Decision Trees (ABCDT), AB Decision Trees (ABDT), AB Extreme Learning Machine (ABELM), AB Hyperpipes (ABHP), AB Partial Decision Tree (ABPDT), AB Reduced Error Pruning Decision Tree (ABREPDT), AB Rough Set (ABRS), AB Voting Feature Intervals (ABVFI), AB Credal Decision Tree (ABCDT), Real AB Hyperpipes (RABHP), Real AB J48 Decision Tree (RABJDT), Real AB Reduced Error Pruning Tree (RABREPDT); Adaptively Resample and Combine (ARC), Attribute Selected Classifier Artificial Neural Network (ASCANN); Bagging (Ba): Ba Credal Decision Tree (BCDT), Ba Forest Penalising Attribute (BFPA), Ba Functional Trees (BFT), Ba Artificial Neural Network (BANN), Ba Best First Decision Trees (BBFDT), Ba C4.5 Decision Tree (BC45DT), Ba Credal Decision Trees (BCDT), Ba Decision Table (BDTA), Bagging Decision Trees (BDT), Ba Deep Neural Network (BDNN), Ba Functional Tree (BFT), Ba Gaussian Process (BGP), Ba Hyperpipes (BHP), Ba K-Nearest-Neighbour (BKNN), Ba Logistic Model Tree (BLMT), Ba M5 model trees (BM5P), Ba Partial Decision Tree (BPDT), Ba Random Forest (BRF), Ba Random Subspace Naive Bayes Trees (BRSSNBT), Ba Random Trees (BART), Ba Reduced Error Pruning Decision Tree (BREPDT), Ba Rough Set (BRS), Ba Sequential Minimal Optimisation (BSMO), Ba Support Vector Machine (BSVM); Bayesian Additive Decision Trees (BADT); Boosted (Bo): Bo Classification Tree (BCT), Bo Decision Trees (BODT), Bo Generalised Additive Model (BGAM), Bo Generalised Linear Model (BOGLM), Bo Regression Tree (BRT), Bo Regression Trees (BRT), Bo Artificial Neural Network (BOANN), Bo C4.5 Decision Tree (BOC45DT), Bo Decision Trees (BODT), Bo Logistic Model Tree (BOLMT), Bo Support Vector Machine (BOSVM), Explainable Bo Machine (EBM), Extreme Gradient Bo (XGB), Extreme Gradient Bo Regression (XGBR), Generalised Bo Model (GBM), Gradient Bo Decision Trees (GBDT), Gradient Bo Regression Trees (GBRT), Gradient Bo Classifier (GBC), Gradient Bo Extreme Learning Machine (GBELM), Gradient Bo Machine (GBM), Histogram-based Gradient Bo (HGB), Light Gradient Bo Machine (LGBM), Logit Bo (LOGB), Natural Gradient Bo (NGB), Stochastic Gradient Bo (SGB); Bootsrap aggregation (BA), Cascade Generalisation Artificial Neural Network (CGANN), Cascade Random Forest (CRF), Catboost (CB), Conditional Inference Random Forest (CIRF), Cost Sensitive Forest (CSF), Credal Decision Trees (CDT); Dagging (Da): Da Alternating Decision Trees (DADT), Da Artificial Neural Network (DANN), Da Best First Decision Trees (DABFDT), Da Credal Decision Trees (DCDT), Da Decision Trees (DDT), Da Functional Tree (DFT), Da HyperPipes (DHP), Da M5 model trees (DM5P), Da Partial Decision Tree (DPDT), Da Reduced Error Pruning Decision Tree (DAREPDT); Decorate (De): De Best First Decision Trees (DBFDT), De Credal Decision Tree (DECDT), De Decision Trees (DDT), De Forest Penalising Attribute (DFPA), De HyperPipes (DEHP), De Reduced Error Pruning Decision Tree (DREPDT); Deep Forest (DF), Deepboost (DB), Extremely Randomised Trees (ERT), Forest Penalising Attribute (FPA), Geographical Random Forest (GRF), Isolated Forest (IF); Multiboost (MB): MB Adaboost Credal Decision Trees (MBCDT), MB J48 Decision Tree (MBJDT), MB Alternating Decision Trees (MBADT), MB Artificial Neural Network (MBANN), MB Decision Trees (MBDT), MB Voting Feature Intervals (MBVFI), MB Reduced Error Pruning Decision Tree (MBREPDT); Multiple Kernel Learning (MKL), 
Rotation forest (ROF): ROF Credal Decision Tree (ROFCDT), ROF Random Forest (ROFRF), ROF Functional Tree (ROFFT), ROF Reduced Error Pruning Decision Tree (ROFREPDT); Random Forest (RF): RF Logistic Model Tree (RFLMT), RF Machine (RFM), RF Regression (RFR); Random Naive Bayes (RNB), Random Subspace (RSS): RSS Alternating Decision Trees (RSSADT), RSS Artificial Neural Network (RSSANN), RSS Best First Decision Trees (RSSBFDT), RSS C4.5 Decision Tree (RSSC45DT), RSS Credal Decision Tree (RSSCDT), RSS Decision Trees (RSSDT), RSS Functional Trees (RSSFT), RSS J48 Decision Tree (RSSJDT), RSS Partial Decision Trees (RSSPDT), RSS Random Forest (RSSRF), RSS Reduced Error Pruning Decision Tree (RSSREPDT); Random Trees Classifier (RTC), Spatiotemporal Extreme Gradient Boost (STXGB), Spatiotemporal Extremely Randomised Trees (STERT), Spatiotemporal Gradient Boosted Decision Tree (STGBDT), Spatiotemporal Light Gradient Boosting Machine (STLGBM), Spatiotemporal Random Forest (STRF), Stacking multiple models (STACK), Subspace Discriminant (SSD), SysFor (SF), Ultraboost (UB). |
References
- IPCC. IPCC, 2023: Summary for Policymakers. In Climate Change 2023: Synthesis Report. Contribution of Working Groups I, II and III to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change; IPCC: Geneva, Switzerland, 2023; pp. 1–34. [Google Scholar] [CrossRef]
- Zampieri, M.; Russo, S.; di Sabatino, S.; Michetti, M.; Scoccimarro, E.; Gualdi, S. Global assessment of heat wave magnitudes from 1901 to 2010 and implications for the river discharge of the Alps. Sci. Total Environ. 2016, 571, 1330–1339. [Google Scholar] [CrossRef] [PubMed]
- Munich Re. Hurricanes, Cold Waves, Tornadoes: Weather Disasters in USA Dominate Natural Disaster Losses in 2021; Munich Re: Munich, Germany, 2022. [Google Scholar]
- Asian Development Bank. Moving from Risk to Resilience: Sustainable Urban Development in the Pacific; Asian Development Bank: Mandaluyong, Philippines, 2013. [Google Scholar]
- Corominas, J.; van Westen, C.; Frattini, P.; Cascini, L.; Malet, J.P.; Fotopoulou, S.; Catani, F.; Eeckhaut, M.V.D.; Mavrouli, O.; Agliardi, F.; et al. Recommendations for the quantitative analysis of landslide risk. Bull. Eng. Geol. Environ. 2014, 73, 209–263. [Google Scholar] [CrossRef]
- Schmidt, J.; Matcham, I.; Reese, S.; King, A.; Bell, R.; Henderson, R.; Smart, G.; Cousins, J.; Smith, W.; Heron, D. Quantitative multi-risk analysis for natural hazards: A framework for multi-risk modelling. Nat. Hazards 2011, 58, 1169–1192. [Google Scholar] [CrossRef]
- Zennaro, F.; Furlan, E.; Simeoni, C.; Torresan, S.; Aslan, S.; Critto, A.; Marcomini, A. Exploring machine learning potential for climate change risk assessment. Earth-Sci. Rev. 2021, 220, 103752. [Google Scholar] [CrossRef]
- Lee, J.G.; Kang, M. Geospatial Big Data: Challenges and Opportunities. Big Data Res. 2015, 2, 74–81. [Google Scholar] [CrossRef]
- Mehmood, K.; Bao, Y.; Saifullah; Cheng, W.; Khan, M.A.; Siddique, N.; Abrar, M.M.; Soban, A.; Fahad, S.; Naidu, R. Predicting the quality of air with machine learning approaches: Current research priorities and future perspectives. J. Clean. Prod. 2022, 379, 134656. [Google Scholar] [CrossRef]
- Fell, R.; Corominas, J.; Bonnard, C.; Cascini, L.; Leroi, E.; Savage, W.Z. Guidelines for landslide susceptibility, hazard and risk zoning for land use planning. Eng. Geol. 2008, 102, 85–98. [Google Scholar] [CrossRef]
- Bentivoglio, R.; Isufi, E.; Jonkman, S.N.; Taormina, R. Deep learning methods for flood mapping: A review of existing applications and future research directions. Hydrol. Earth Syst. Sci. 2022, 26, 4345–4378. [Google Scholar] [CrossRef]
- Nhu, V.H.; Zandi, D.; Shahabi, H.; Chapi, K.; Shirzadi, A.; Al-Ansari, N.; Singh, S.K.; Dou, J.; Nguyen, H. Comparison of Support Vector Machine, Bayesian Logistic Regression, and Alternating Decision Tree Algorithms for Shallow Landslide Susceptibility Mapping along a Mountainous Road in the West of Iran. Appl. Sci. 2020, 10, 5047. [Google Scholar] [CrossRef]
- Formetta, G.; Rago, V.; Capparelli, G.; Rigon, R.; Muto, F.; Versace, P. Integrated Physically based System for Modeling Landslide Susceptibility. Procedia Earth Planet. Sci. 2014, 9, 74–82. [Google Scholar] [CrossRef]
- Feng, L.; Guo, M.; Wang, W.; Chen, Y.; Shi, Q.; Guo, W.; Lou, Y.; Kang, H.; Chen, Z.; Zhu, Y. Comparative Analysis of Machine Learning Methods and a Physical Model for Shallow Landslide Risk Modeling. Sustainability 2022, 15, 6. [Google Scholar] [CrossRef]
- Sarker, I. Deep Learning: A Comprehensive Overview on Techniques, Taxonomy, Applications and Research Directions. SN Comput. Sci. 2021, 2, 420. [Google Scholar] [CrossRef] [PubMed]
- Sarker, I. Machine Learning: Algorithms, Real-World Applications and Research Directions. SN Comput. Sci. 2021, 2, 160. [Google Scholar] [CrossRef]
- Tehrani, F.S.; Calvello, M.; Liu, Z.; Zhang, L.; Lacasse, S. Machine learning and landslide studies: Recent advances and applications. Nat. Hazards 2022, 114, 1197–1245. [Google Scholar] [CrossRef]
- Al-Najjar, H.A.; Pradhan, B. Spatial landslide susceptibility assessment using machine learning techniques assisted by additional data created with generative adversarial networks. Geosci. Front. 2021, 12, 625–637. [Google Scholar] [CrossRef]
- Guo, Q.; Ren, M.; Wu, S.; Sun, Y.; Wang, J.; Wang, Q.; Ma, Y.; Song, X.; Chen, Y. Applications of artificial intelligence in the field of air pollution: A bibliometric analysis. Front. Public Health 2022, 10, 2972. [Google Scholar] [CrossRef]
- Pugliese-Viloria, A. Hazard Susceptibility Mapping with Machine and Deep Learning: A Literature Review—Data and Software, V1.0.0; Zenodo: 2024. Available online: https://zenodo.org/records/13386422 (accessed on 29 July 2024).
- Janiesch, C.; Zschech, P.; Heinrich, K. Machine learning and deep learning. Electron. Mark. 2021, 31, 685–695. [Google Scholar] [CrossRef]
- Chauhan, N.K.; Singh, K. A review on conventional machine learning vs. deep learning. In Proceedings of the 2018 International Conference on Computing, Power and Communication Technologies, GUCON 2018, Greater Noida, Uttar Pradesh, India, 28–29 September 2018; pp. 347–352. [Google Scholar] [CrossRef]
- Patgiri, R. Taxonomy of Big Data: A Survey. arXiv 2018, arXiv:1808.08474. [Google Scholar]
- Preeti, K.S.; Dhankar, A. A review on Machine Learning Techniques. Int. J. Adv. Res. Comput. Sci. 2017, 8, 778–782. [Google Scholar]
- Grossi, E.; Buscema, M. Introduction to artificial neural networks. Eur. J. Gastroenterol. Hepatol. 2008, 19, 1046–1054. [Google Scholar] [CrossRef]
- Nikparvar, B.; Thill, J.C. Machine Learning of Spatial Data. ISPRS Int. J. Geo-Inf. 2021, 10, 600. [Google Scholar] [CrossRef]
- Du, S.; Li, T.; Yang, Y.; Horng, S.J. Deep Air Quality Forecasting Using Hybrid Deep Learning Framework. IEEE Trans. Knowl. Data Eng. 2021, 33, 2412–2424. [Google Scholar] [CrossRef]
- Yan, R.; Liao, J.; Yang, J.; Sun, W.; Nong, M.; Li, F. Multi-hour and multi-site air quality index forecasting in Beijing using CNN, LSTM, CNN-LSTM, and spatiotemporal clustering. Expert Syst. Appl. 2021, 169, 114513. [Google Scholar] [CrossRef]
- Li, T.; Hua, M.; Wu, X. A Hybrid CNN-LSTM Model for Forecasting Particulate Matter (PM2.5). IEEE Access 2020, 8, 26933–26940. [Google Scholar] [CrossRef]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is All you Need. Adv. Neural Inf. Process. Syst. 2017, 30, 1–11. [Google Scholar]
- Yu, H.; Pei, W.; Zhang, J.; Chen, G. Landslide Susceptibility Mapping and Driving Mechanisms in a Vulnerable Region Based on Multiple Machine Learning Models. Remote Sens. 2023, 15, 1886. [Google Scholar] [CrossRef]
- Doreswamy; Vastrad, C.M. Performance Analysis Of Regularized Linear Regression Models For Oxazolines And Oxazoles Derivitive Descriptor Dataset. arXiv 2013, arXiv:1312.2789. [Google Scholar]
- Peng, J.; Lee, K.; Ingersoll, G. An Introduction to Logistic Regression Analysis and Reporting. J. Educ. Res. 2002, 96, 3–14. [Google Scholar] [CrossRef]
- Ali, S.A.; Parvin, F.; Pham, Q.B.; Vojtek, M.; Vojteková, J.; Costache, R.; Linh, N.T.T.; Nguyen, H.Q.; Ahmad, A.; Ghorbani, M.A. GIS-based comparative assessment of flood susceptibility mapping using hybrid multi-criteria decision-making approach, naïve Bayes tree, bivariate statistics and logistic regression: A case of Topľa basin, Slovakia. Ecol. Indic. 2020, 117, 106620. [Google Scholar] [CrossRef]
- Quinlan, J.R. Induction of Decision Trees. Mach. Learn. 1986, 1, 81–106. [Google Scholar] [CrossRef]
- Breiman, L.; Friedman, J.H.; Olshen, R.A.; Stone, C.J. Classification and Regression Trees; Chapman and Hall/CRC: Boca Raton, FL, USA, 1984. [Google Scholar]
- Quinlan, J. C4.5: Programs for Machine Learning; Morgan Kaufmann: San Francisco, CA, USA, 1993. [Google Scholar]
- Elomaa, T.; Kääriäinen, M. An Analysis of Reduced Error Pruning. arXiv 2011, arXiv:1106.0668. [Google Scholar]
- Hearst, M.; Dumais, S.; Osuna, E.; Platt, J.; Scholkopf, B. Support vector machines. IEEE Intell. Syst. Their Appl. 1998, 13, 18–28. [Google Scholar] [CrossRef]
- Cristianini, N.; Scholkopf, B. Support Vector Machines and Kernel Methods: The New Generation of Learning Machines. AI Mag. 2002, 23, 31. [Google Scholar] [CrossRef]
- Costache, R.; Hong, H.; Pham, Q.B. Comparative assessment of the flash-flood potential within small mountain catchments using bivariate statistics and their novel hybrid integration with machine learning models. Sci. Total Environ. 2020, 711, 134514. [Google Scholar] [CrossRef] [PubMed]
- Choubin, B.; Moradi, E.; Golshan, M.; Adamowski, J.; Sajedi-Hosseini, F.; Mosavi, A. An ensemble prediction of flood susceptibility using multivariate discriminant analysis, classification and regression trees, and support vector machines. Sci. Total Environ. 2019, 651, 2087–2096. [Google Scholar] [CrossRef]
- Sagi, O.; Rokach, L. Ensemble learning: A survey. WIREs Data Min. Knowl. Discov. 2018, 8, e1249. [Google Scholar] [CrossRef]
- Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
- Bui, D.T.; Tsangaratos, P.; Nguyen, V.T.; Liem, N.V.; Trinh, P.T. Comparing the prediction performance of a Deep Learning Neural Network model with conventional machine learning models in landslide susceptibility assessment. CATENA 2020, 188, 104426. [Google Scholar] [CrossRef]
- Freund, Y.; Schapire, R.E. A Short Introduction to Boosting. In Proceedings of the 16th International Joint Conference on Artificial Intelligence, IJCAI’99, Stockholm, Sweden, 31 July–6 August 1999; Volume 2, pp. 1401–1406. [Google Scholar]
- Freund, Y.; Schapire, R.E. A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting. J. Comput. Syst. Sci. 1997, 55, 119–139. [Google Scholar] [CrossRef]
- Friedman, J.H. Greedy function approximation: A gradient boosting machine. Ann. Stat. 2001, 29, 1189–1232. [Google Scholar] [CrossRef]
- Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 16, New York, NY, USA, 13–17 August 2016; pp. 785–794. [Google Scholar] [CrossRef]
- Wolpert, D.H. Stacked generalization. Neural Netw. 1992, 5, 241–259. [Google Scholar] [CrossRef]
- Yao, J.; Zhang, X.; Luo, W.; Liu, C.; Ren, L. Applications of Stacking/Blending ensemble learning approaches for evaluating flash flood susceptibility. Int. J. Appl. Earth Obs. Geoinf. 2022, 112, 102932. [Google Scholar] [CrossRef]
- Aha, W.; Kibler, D.; Albert, M. Instance-Based Learning Algorithms. Mach. Learn. 1991, 6, 37–66. [Google Scholar] [CrossRef]
- Cunningham, P.; Delany, S. k-Nearest neighbour classifiers. Mult. Classif. Syst. 2007, 54, 1–25. [Google Scholar] [CrossRef]
- Adnan, M.S.G.; Rahman, M.S.; Ahmed, N.; Ahmed, B.; Rabbi, M.F.; Rahman, R.M. Improving Spatial Agreement in Machine Learning-Based Landslide Susceptibility Mapping. Remote Sens. 2020, 12, 3347. [Google Scholar] [CrossRef]
- Zhu, J.; Chen, J.; Hu, W. Big Learning with Bayesian Methods. arXiv 2014, arXiv:1411.6370. [Google Scholar] [CrossRef]
- Vikramkumar; Vijaykumar, B.; Trilochan. Bayes and Naive Bayes Classifier. arXiv 2014, arXiv:1404.0933. [Google Scholar]
- Liu, Z.; Zhu, Z.; Gao, J.; Xu, C. Forecast Methods for Time Series Data: A Survey. IEEE Access 2021, 9, 91896–91912. [Google Scholar] [CrossRef]
- Taylor, S.; Letham, B. Forecasting at Scale. Am. Stat. 2017, 72, 37–45. [Google Scholar] [CrossRef]
- Jia, W.; Sun, M.; Lian, J.; Hou, S. Feature dimensionality reduction: A review. Complex Intell. Syst. 2022, 8, 2663–2693. [Google Scholar] [CrossRef]
- Wang, Y.; Sun, D.; Wen, H.; Zhang, H.; Zhang, F. Comparison of Random Forest Model and Frequency Ratio Model for Landslide Susceptibility Mapping (LSM) in Yunyang County (Chongqing, China). Int. J. Environ. Res. Public Health 2020, 17, 4206. [Google Scholar] [CrossRef] [PubMed]
- Xu, D.; Tian, Y. A comprehensive survey of clustering algorithms. Ann. Data Sci. 2015, 2, 165–193. [Google Scholar] [CrossRef]
- Lin, J.; Sreng, C.; Oare, E.; Batarseh, F.A. NeuralFlood: An AI-driven flood susceptibility index. Front. Water 2023, 5. [Google Scholar] [CrossRef]
- Huang, F.; Cao, Z.; Jiang, S.H.; Zhou, C.; Huang, J.; Guo, Z. Landslide susceptibility prediction based on a semi-supervised multiple-layer perceptron model. Landslides 2020, 17, 2919–2930. [Google Scholar] [CrossRef]
- Mao, Y.; Mwakapesa, D.S.; Li, Y.; Xu, K.; Nanehkaran, Y.A.; Zhang, M. Assessment of landslide susceptibility using DBSCAN-AHD and LD-EV methods. J. Mt. Sci. 2022, 19, 184–197. [Google Scholar] [CrossRef]
- Sette, S.; Boullart, L. An implementation of genetic algorithms for rule based machine learning. Eng. Appl. Artif. Intell. 2000, 13, 381–390. [Google Scholar] [CrossRef]
- Li, Z.; Yim, S.H.L.; Ho, K.F. High temporal resolution prediction of street-level PM2.5 and NOx concentrations using machine learning approach. J. Clean. Prod. 2020, 268, 121975. [Google Scholar] [CrossRef]
- Bellman, R. Dynamic programming. Science 1966, 153, 34–37. [Google Scholar] [CrossRef]
- Dwivedi, R.; Dave, D.; Naik, H.; Singhal, S.; Omer, R.; Patel, P.; Qian, B.; Wen, Z.; Shah, T.; Morgan, G.; et al. Explainable AI (XAI): Core Ideas, Techniques, and Solutions. ACM Comput. Surv. 2023, 55, 1–33. [Google Scholar] [CrossRef]
- Chandrashekar, G.; Sahin, F. A survey on feature selection methods. Comput. Electr. Eng. 2014, 40, 16–28. [Google Scholar] [CrossRef]
- Venkatesh, B.; Anuradha, J. A review of Feature Selection and its methods. Cybern. Inf. Technol. 2019, 19, 3–26. [Google Scholar] [CrossRef]
- Dosilovic, F.K.; Brcic, M.; Hlupic, N. Explainable artificial intelligence: A survey. In Proceedings of the 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics, MIPRO 2018—Proceedings, Opatija, Croatia, 21–25 May 2018; pp. 210–215. [Google Scholar] [CrossRef]
- Du, W.; Chen, L.; Wang, H.; Shan, Z.; Zhou, Z.; Li, W.; Wang, Y. Deciphering urban traffic impacts on air quality by deep learning and emission inventory. J. Environ. Sci. 2023, 124, 745–757. [Google Scholar] [CrossRef] [PubMed]
- Henckaerts, R.; Antonio, K.; Côté, M.P. When stakes are high: Balancing accuracy and transparency with Model-Agnostic Interpretable Data-driven suRRogates. Expert Syst. Appl. 2022, 202, 117230. [Google Scholar] [CrossRef]
- Shao, Y.; Zhao, W.; Liu, R.; Yang, J.; Liu, M.; Fang, W.; Hu, L.; Adams, M.; Bi, J.; Ma, Z. Estimation of daily NO2 with explainable machine learning model in China, 2007–2020. Atmos. Environ. 2023, 314, 120111. [Google Scholar] [CrossRef]
- Lalonde, M.; Oudin, L.; Bastin, S. Urban effects on precipitation: Do the diversity of research strategies and urban characteristics preclude general conclusions? Urban Clim. 2023, 51, 101605. [Google Scholar] [CrossRef]
- Nabavi, S.O.; Nolscher, A.C.; Samimi, C.; Thomas, C.; Haimberger, L.; Luers, J.; Held, A. Site-scale modeling of surface ozone in Northern Bavaria using machine learning algorithms, regional dynamic models, and a hybrid model. Environ. Pollut. 2021, 268, 115736. [Google Scholar] [CrossRef]
- Sun, D.; Gu, Q.; Wen, H.; Xu, J.; Zhang, Y.; Shi, S.; Xue, M.; Zhou, X. Assessment of landslide susceptibility along mountain highways based on different machine learning algorithms and mapping units by hybrid factors screening and sample optimization. Gondwana Res. 2022, 123, 89–106. [Google Scholar] [CrossRef]
- Martens, D.; Vanthienen, J.; Verbeke, W.; Baesens, B. Performance of classification models from a user perspective. Decis. Support Syst. 2011, 51, 782–793. [Google Scholar] [CrossRef]
- Choubin, B.; Abdolshahnejad, M.; Moradi, E.; Querol, X.; Mosavi, A.; Shamshirband, S.; Ghamisi, P. Spatial hazard assessment of the PM10 using machine learning models in Barcelona, Spain. Sci. Total Environ. 2020, 701, 134474. [Google Scholar] [CrossRef]
- Bui, D.T.; Hoang, N.D.; Martínez-Álvarez, F.; Ngo, P.T.T.; Hoa, P.V.; Pham, T.D.; Samui, P.; Costache, R. A novel deep learning neural network approach for predicting flash flood susceptibility: A case study at a high frequency tropical storm area. Sci. Total Environ. 2020, 701, 134413. [Google Scholar] [CrossRef]
- Lei, T.M.T.; Ng, S.C.W.; Siu, S.W.I. Application of ANN, XGBoost, and Other ML Methods to Forecast Air Quality in Macau. Sustainability 2023, 15, 5341. [Google Scholar] [CrossRef]
- Pourghasemi, H.R.; Kariminejad, N.; Amiri, M.; Edalat, M.; Zarafshar, M.; Blaschke, T.; Cerda, A. Assessing and mapping multi-hazard risk susceptibility using a machine learning technique. Sci. Rep. 2020, 10, 3203. [Google Scholar] [CrossRef] [PubMed]
- Ghosh, S.; Saha, S.; Bera, B. Flood susceptibility zonation using advanced ensemble machine learning models within Himalayan foreland basin. Nat. Hazards Res. 2022, 2, 363–374. [Google Scholar] [CrossRef]
- Bui, Q.T.; Nguyen, Q.H.; Nguyen, X.L.; Pham, V.D.; Nguyen, H.D.; Pham, V.M. Verification of novel integrations of swarm intelligence algorithms into deep learning neural network for flood susceptibility mapping. J. Hydrol. 2020, 581, 124379. [Google Scholar] [CrossRef]
- Aydin, H.E.; Iban, M.C. Predicting and analyzing flood susceptibility using boosting-based ensemble machine learning algorithms with SHapley Additive exPlanations. Nat. Hazards 2023, 116, 2957–2991. [Google Scholar] [CrossRef]
- Ozdemir, H.; Koçyiğit, M.B.; Akay, D. Flood susceptibility mapping with ensemble machine learning: A case of Eastern Mediterranean basin, Turkiye. Stoch. Environ. Res. Risk Assess. 2023, 37, 4273–4290. [Google Scholar] [CrossRef]
- Karakas, G.; Kocaman, S.; Gokceoglu, C. A Hybrid Multi-Hazard Susceptibility Assessment Model for a Basin in Elazig Province, Turkiye. Int. J. Disaster Risk Sci. 2023, 14, 326–341. [Google Scholar] [CrossRef]
- Pourghasemi, H.R.; Gayen, A.; Edalat, M.; Zarafshar, M.; Tiefenbacher, J.P. Is multi-hazard mapping effective in assessing natural hazards and integrated watershed management? Geosci. Front. 2020, 11, 1203–1217. [Google Scholar] [CrossRef]
- Yousefi, S.; Pourghasemi, H.R.; Emami, S.N.; Pouyan, S.; Eskandari, S.; Tiefenbacher, J.P. A machine learning framework for multi-hazards modeling and mapping in a mountainous area. Sci. Rep. 2020, 10, 12144. [Google Scholar] [CrossRef]
- Oukawa, G.Y.; Krecl, P.; Targino, A.C. Fine-scale modeling of the urban heat island: A comparison of multiple linear regression and random forest approaches. Sci. Total Environ. 2022, 815, 152836. [Google Scholar] [CrossRef]
- Zhang, X.; Huang, T.; Gulakhmadov, A.; Song, Y.; Gu, X.; Zeng, J.; Huang, S.; Nam, W.H.; Chen, N.; Niyogi, D. Deep Learning-Based 500 m Spatio-Temporally Continuous Air Temperature Generation by Fusing Multi-Source Data. Remote Sens. 2022, 14, 3536. [Google Scholar] [CrossRef]
- Vulova, S.; Meier, F.; Fenner, D.; Nouri, H.; Kleinschmit, B. Summer Nights in Berlin, Germany: Modeling Air Temperature Spatially With Remote Sensing, Crowdsourced Weather Data, and Machine Learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 5074–5087. [Google Scholar] [CrossRef]
- Chen, S.; Xu, Z.; Wang, X.; Zhang, C. Ambient air pollutants concentration prediction during the COVID-19: A method based on transfer learning. Knowl.-Based Syst. 2022, 258, 109996. [Google Scholar] [CrossRef] [PubMed]
- Oliveira, A.; Lopes, A.; Niza, S.; Soares, A. An urban energy balance-guided machine learning approach for synthetic nocturnal surface Urban Heat Island prediction: A heatwave event in Naples. Sci. Total Environ. 2022, 805, 150130. [Google Scholar] [CrossRef] [PubMed]
- dos Santos, R.S. Estimating spatio-temporal air temperature in London (UK) using machine learning and earth observation satellite data. Int. J. Appl. Earth Obs. Geoinf. 2020, 88, 102066. [Google Scholar] [CrossRef]
- Mohammad, P.; Goswami, A.; Chauhan, S.; Nayak, S. Machine learning algorithm based prediction of land use land cover and land surface temperature changes to characterize the surface urban heat island phenomena over Ahmedabad city, India. Urban Clim. 2022, 42, 101116. [Google Scholar] [CrossRef]
- Wang, Y.; Liang, Z.; Ding, J.; Shen, J.; Wei, F.; Li, S. Prediction of Urban Thermal Environment Based on Multi-Dimensional Nature and Urban Form Factors. Atmosphere 2022, 13, 1493. [Google Scholar] [CrossRef]
- Pham, B.T.; Avand, M.; Janizadeh, S.; Phong, T.V.; Al-Ansari, N.; Ho, L.S.; Das, S.; Le, H.V.; Amini, A.; Bozchaloei, S.K.; et al. GIS Based Hybrid Computational Approaches for Flash Flood Susceptibility Assessment. Water 2020, 12, 683. [Google Scholar] [CrossRef]
- Nhu, V.H.; Ngo, P.T.T.; Pham, T.D.; Dou, J.; Song, X.; Hoang, N.D.; Tran, D.A.; Cao, D.P.; Aydilek, İ.B.; Amiri, M.; et al. A New Hybrid Firefly–PSO Optimized Random Subspace Tree Intelligence for Torrential Rainfall-Induced Flash Flood Susceptible Mapping. Remote Sens. 2020, 12, 2688. [Google Scholar] [CrossRef]
- Islam, A.R.M.T.; Talukdar, S.; Mahato, S.; Kundu, S.; Eibek, K.U.; Pham, Q.B.; Kuriqi, A.; Linh, N.T.T. Flood susceptibility modelling using advanced ensemble machine learning models. Geosci. Front. 2021, 12, 101075. [Google Scholar] [CrossRef]
- AlDousari, A.E.; Kafy, A.A.; Saha, M.; Fattah, M.A.; Almulhim, A.I.; Abdullah-Al-Faisal; Rakib, A.A.; Jahir, D.M.A.; Rahaman, Z.A.; Bakshi, A.; et al. Modelling the impacts of land use/land cover changing pattern on urban thermal characteristics in Kuwait. Sustain. Cities Soc. 2022, 86, 104107. [Google Scholar] [CrossRef]
- Li, Y.; Osei, F.B.; Hu, T.; Stein, A. Urban flood susceptibility mapping based on social media data in Chengdu city, China. Sustain. Cities Soc. 2023, 88, 104307. [Google Scholar] [CrossRef]
- He, W.; Zhang, S.; Meng, H.; Han, J.; Zhou, G.; Song, H.; Zhou, S.; Zheng, H. Full-Coverage PM2.5 Mapping and Variation Assessment during the Three-Year Blue-Sky Action Plan Based on a Daily Adaptive Modeling Approach. Remote Sens. 2022, 14, 3571. [Google Scholar] [CrossRef]
- Luo, Z.; Tian, J.; Zeng, J.; Pilla, F. Resilient landscape pattern for reducing coastal flood susceptibility. Sci. Total Environ. 2023, 856, 159087. [Google Scholar] [CrossRef]
- Shen, Y.; de Hoogh, K.; Schmitz, O.; Clinton, N.; Tuxen-Bettman, K.; Brandt, J.; Christensen, J.H.; Frohn, L.M.; Geels, C.; Karssenberg, D.; et al. Europe-wide air pollution modeling from 2000 to 2019 using geographically weighted regression. Environ. Int. 2022, 168, 107485. [Google Scholar] [CrossRef] [PubMed]
- Vavassori, A.; Viloria, A.D.J.P.; Brovelli, M.A. Using open data to reveal factors of urban susceptibility to natural hazards and human-made hazards: Case of Milan and Sofia. GeoScape 2022, 16, 93–107. [Google Scholar] [CrossRef]
- Environmental Pollution Centers. What Is Air Pollution; Environmental Pollution Centers: Nairobi, Kenya, 2022. [Google Scholar]
- Khan, M.A.; Kim, H.; Park, H. Leveraging Machine Learning for Fault-Tolerant Air Pollutants Monitoring for a Smart City Design. Electronics 2022, 11, 3122. [Google Scholar] [CrossRef]
- European Environment Agency. European Air Quality Index; European Environment Agency: Copenhagen, Denmark, 2021. [Google Scholar]
- El-Magd, S.A.; Soliman, G.; Morsy, M.; Kharbish, S. Environmental hazard assessment and monitoring for air pollution using machine learning and remote sensing. Int. J. Environ. Sci. Technol. 2022, 20, 6103–6116. [Google Scholar] [CrossRef]
- Shogrkhodaei, S.Z.; Razavi-Termeh, S.V.; Fathnia, A. Spatio-temporal modeling of PM2.5 risk mapping using three machine learning algorithms. Environ. Pollut. 2021, 289, 117859. [Google Scholar] [CrossRef]
- Wei, J.; Li, Z.; Cribb, M.; Huang, W.; Xue, W.; Sun, L.; Guo, J.; Peng, Y.; Li, J.; Lyapustin, A.; et al. Improved 1km resolution PM2.5 estimates across China using enhanced space–time extremely randomized trees. Atmos. Chem. Phys. 2020, 20, 3273–3289. [Google Scholar] [CrossRef]
- Zhao, C.; Wang, Q.; Ban, J.; Liu, Z.; Zhang, Y.; Ma, R.; Li, S.; Li, T. Estimating the daily PM2.5 concentration in the Beijing-Tianjin-Hebei region using a random forest model with a 0.01° × 0.01° spatial resolution. Environ. Int. 2020, 134, 105297. [Google Scholar] [CrossRef] [PubMed]
- Long, S.; Wei, X.; Zhang, F.; Zhang, R.; Xu, J.; Wu, K.; Li, Q.; Li, W. Estimating daily ground-level NO2 concentrations over China based on TROPOMI observations and machine learning approach. Atmos. Environ. 2022, 289, 119310. [Google Scholar] [CrossRef]
- Zhang, C.; Liu, C.; Li, B.; Zhao, F.; Zhao, C. Spatiotemporal neural network for estimating surface NO2 concentrations over north China and their human health impact. Environ. Pollut. 2022, 307, 119510. [Google Scholar] [CrossRef]
- Di, Q.; Amini, H.; Shi, L.; Kloog, I.; Silvern, R.; Kelly, J.; Sabath, M.B.; Choirat, C.; Koutrakis, P.; Lyapustin, A.; et al. Assessing NO2 Concentration and Model Uncertainty with High Spatiotemporal Resolution across the Contiguous United States Using Ensemble Model Averaging. Environ. Sci. Technol. 2020, 54, 1372–1384. [Google Scholar] [CrossRef] [PubMed]
- Chu, W.; Zhang, C.; Zhao, Y.; Li, R.; Wu, P. Spatiotemporally Continuous Reconstruction of Retrieved PM2.5 Data Using an Autogeoi-Stacking Model in the Beijing-Tianjin-Hebei Region, China. Remote Sens. 2022, 14, 4432. [Google Scholar] [CrossRef]
- Just, A.C.; Arfer, K.B.; Rush, J.; Dorman, M.; Shtein, A.; Lyapustin, A.; Kloog, I. Advancing methodologies for applying machine learning and evaluating spatiotemporal models of fine particulate matter (PM2.5) using satellite data over large regions. Atmos. Environ. 2020, 239, 117649. [Google Scholar] [CrossRef]
- Yu, Y.; Li, H.; Sun, S.; Li, Y. PM2.5 concentration forecasting through a novel multi-scale ensemble learning approach considering intercity synergy. Sustain. Cities Soc. 2022, 85, 104049. [Google Scholar] [CrossRef]
- Liu, X.; Li, C.; Liu, D.; Grieneisen, M.L.; Yang, F.; Chen, C.; Zhan, Y. Hybrid deep learning models for mapping surface NO2 across China: One complicated model, many simple models, or many complicated models? Atmos. Res. 2022, 278, 106339. [Google Scholar] [CrossRef]
- Moursi, A.S.; El-Fishawy, N.; Djahel, S.; Shouman, M.A. An IoT enabled system for enhanced air quality monitoring and prediction on the edge. Complex Intell. Syst. 2021, 7, 2923–2947. [Google Scholar] [CrossRef]
- Ram, R.S.; Venkatachalam, K.; Masud, M.; Abouhawwash, M. Air Pollution Prediction Using Dual Graph Convolution LSTM Technique. Intell. Autom. Soft Comput. 2022, 33, 1639–1652. [Google Scholar] [CrossRef]
- Kang, Y.; Choi, H.; Im, J.; Park, S.; Shin, M.; Song, C.K.; Kim, S. Estimation of surface-level NO2 and O3 concentrations using TROPOMI data and machine learning over East Asia. Environ. Pollut. 2021, 288, 117711. [Google Scholar] [CrossRef] [PubMed]
- Liu, R.; Ma, Z.; Liu, Y.; Shao, Y.; Zhao, W.; Bi, J. Spatiotemporal distributions of surface ozone levels in China from 2005 to 2017: A machine learning approach. Environ. Int. 2020, 142, 105823. [Google Scholar] [CrossRef]
- Huang, C.; Sun, K.; Hu, J.; Xue, T.; Xu, H.; Wang, M. Estimating 2013–2019 NO2 exposure with high spatiotemporal resolution in China using an ensemble model. Environ. Pollut. 2022, 292, 118285. [Google Scholar] [CrossRef]
- Xiao, Q.; Zheng, Y.; Geng, G.; Chen, C.; Huang, X.; Che, H.; Zhang, X.; He, K.; Zhang, Q. Separating emission and meteorological contributions to long-term PM2.5 trends over eastern China during 2000–2018. Atmos. Chem. Phys. 2021, 21, 9475–9496. [Google Scholar] [CrossRef]
- Heidari, A.A.; Akhoondzadeh, M.; Chen, H. A Wavelet PM2.5 Prediction System Using Optimized Kernel Extreme Learning with Boruta-XGBoost Feature Selection. Mathematics 2022, 10, 3566. [Google Scholar] [CrossRef]
- Zhang, B.; Zou, G.; Qin, D.; Ni, Q.; Mao, H.; Li, M. RCL-Learning: ResNet and convolutional long short-term memory-based spatiotemporal air pollutant concentration prediction model. Expert Syst. Appl. 2022, 207, 118017. [Google Scholar] [CrossRef]
- Zhang, Y.; Zhang, R.; Ma, Q.; Wang, Y.; Wang, Q.; Huang, Z.; Huang, L. A feature selection and multi-model fusion-based approach of predicting air quality. ISA Trans. 2020, 100, 210–220. [Google Scholar] [CrossRef]
- Huang, C.; Hu, J.; Xue, T.; Xu, H.; Wang, M. High-Resolution Spatiotemporal Modeling for Ambient PM2.5 Exposure Assessment in China from 2013 to 2019. Environ. Sci. Technol. 2021, 55, 2152–2162. [Google Scholar] [CrossRef]
- Wang, D.; Wang, H.W.; Li, C.; Lu, K.F.; Peng, Z.R.; Zhao, J.; Fu, Q.; Pan, J. Roadside Air Quality Forecasting in Shanghai with a Novel Sequence-to-Sequence Model. Int. J. Environ. Res. Public Health 2020, 17, 9471. [Google Scholar] [CrossRef]
- Faraji, M.; Nadi, S.; Ghaffarpasand, O.; Homayoni, S.; Downey, K. An integrated 3D CNN-GRU deep learning method for short-term prediction of PM2.5 concentration in urban environment. Sci. Total Environ. 2022, 834, 155324. [Google Scholar] [CrossRef]
- Arowosegbe, O.O.; Roosli, M.; Kunzli, N.; Saucy, A.; Adebayo-Ojo, T.C.; Schwartz, J.; Kebalepile, M.; Jeebhay, M.F.; Dalvie, M.A.; de Hoogh, K. Ensemble averaging using remote sensing data to model spatiotemporal PM10 concentrations in sparsely monitored South Africa. Environ. Pollut. 2022, 310, 119883. [Google Scholar] [CrossRef] [PubMed]
- Liu, H.; Chen, C. Prediction of outdoor PM2.5 concentrations based on a three-stage hybrid neural network model. Atmos. Pollut. Res. 2020, 11, 469–481. [Google Scholar] [CrossRef]
- Mao, Y.; Mwakapesa, D.S.; Wang, G.; Nanehkaran, Y.; Zhang, M. Landslide susceptibility modelling based on AHC-OLID clustering algorithm. Adv. Space Res. 2021, 68, 301–316. [Google Scholar] [CrossRef]
- Pak, U.; Ma, J.; Ryu, U.; Ryom, K.; Juhyok, U.; Pak, K.; Pak, C. Deep learning-based PM2.5 prediction considering the spatiotemporal correlations: A case study of Beijing, China. Sci. Total Environ. 2020, 699, 133561. [Google Scholar] [CrossRef] [PubMed]
- Zhang, Z.; Zeng, Y.; Yan, K. A hybrid deep learning technology for PM2.5 air quality forecasting. Environ. Sci. Pollut. Res. 2021, 28, 39409–39422. [Google Scholar] [CrossRef]
- Li, S.; Xie, G.; Ren, J.; Guo, L.; Yang, Y.; Xu, X. Urban PM2.5 Concentration Prediction via Attention-Based CNN–LSTM. Appl. Sci. 2020, 10, 1953. [Google Scholar] [CrossRef]
- Gilik, A.; Ogrenci, A.S.; Ozmen, A. Air quality prediction using CNN+LSTM-based hybrid deep learning architecture. Environ. Sci. Pollut. Res. 2022, 29, 11920–11938. [Google Scholar] [CrossRef]
- Sarkar, N.; Gupta, R.; Keserwani, P.K.; Govil, M.C. Air Quality Index prediction using an effective hybrid deep learning model. Environ. Pollut. 2022, 315, 120404. [Google Scholar] [CrossRef]
- Ehteram, M.; Ahmed, A.N.; Khozani, Z.S.; El-Shafie, A. Graph convolutional network—Long short term memory neural network- multi layer perceptron- Gaussian progress regression model: A new deep learning model for predicting ozone concertation. Atmos. Pollut. Res. 2023, 14, 101766. [Google Scholar] [CrossRef]
- Guo, C.; Liu, G.; Chen, C.H. Air Pollution Concentration Forecast Method Based on the Deep Ensemble Neural Network. Wirel. Commun. Mob. Comput. 2020, 2020, 1–13. [Google Scholar] [CrossRef]
- Li, D.; Liu, J.; Zhao, Y. Prediction of Multi-Site PM2.5 Concentrations in Beijing Using CNN-Bi LSTM with CBAM. Atmosphere 2022, 13, 1719. [Google Scholar] [CrossRef]
- Heydari, A.; Nezhad, M.M.; Garcia, D.A.; Keynia, F.; Santoli, L.D. Air pollution forecasting application based on deep learning model and optimization algorithm. Clean Technol. Environ. Policy 2022, 24, 607–621. [Google Scholar] [CrossRef]
- Ma, J.; Ding, Y.; Cheng, J.C.; Jiang, F.; Gan, V.J.; Xu, Z. A Lag-FLSTM deep learning network based on Bayesian Optimization for multi-sequential-variant PM2.5 prediction. Sustain. Cities Soc. 2020, 60, 102237. [Google Scholar] [CrossRef]
- Du, P.; Wang, J.; Hao, Y.; Niu, T.; Yang, W. A novel hybrid model based on multi-objective Harris hawks optimization algorithm for daily PM2.5 and PM10 forecasting. Appl. Soft Comput. 2020, 96, 106620. [Google Scholar] [CrossRef]
- Castelli, M.; Clemente, F.M.; Popovic, A.; Silva, S.; Vanneschi, L. A Machine Learning Approach to Predict Air Quality in California. Complexity 2020, 2020, 1–23. [Google Scholar] [CrossRef]
- Yu, M.; Masrur, A.; Blaszczak-Boxe, C. Predicting hourly PM2.5 concentrations in wildfire-prone areas using a SpatioTemporal Transformer model. Sci. Total. Environ. 2023, 860, 160446. [Google Scholar] [CrossRef]
- Cai, K.; Zhang, X.; Zhang, M.; Ge, Q.; Li, S.; Qiao, B.; Liu, Y. Improving air pollutant prediction in Henan Province, China, by enhancing the concentration prediction accuracy using autocorrelation errors and an Informer deep learning model. Sustain. Environ. Res. 2023, 33, 13. [Google Scholar] [CrossRef]
- Yan, X.; Zang, Z.; Jiang, Y.; Shi, W.; Guo, Y.; Li, D.; Zhao, C.; Husi, L. A Spatial-Temporal Interpretable Deep Learning Model for improving interpretability and predictive accuracy of satellite-based PM2.5. Environ. Pollut. 2021, 273, 116459. [Google Scholar] [CrossRef]
- Guo, B.; Zhang, D.; Pei, L.; Su, Y.; Wang, X.; Bian, Y.; Zhang, D.; Yao, W.; Zhou, Z.; Guo, L. Estimating PM2.5 concentrations via random forest method using satellite, auxiliary, and ground-level station dataset at multiple temporal scales across China in 2017. Sci. Total. Environ. 2021, 778, 146288. [Google Scholar] [CrossRef]
- Ren, X.; Mi, Z.; Georgopoulos, P.G. Comparison of Machine Learning and Land Use Regression for fine scale spatiotemporal estimation of ambient air pollution: Modeling ozone concentrations across the contiguous United States. Environ. Int. 2020, 142, 105827. [Google Scholar] [CrossRef]
- Wong, P.Y.; Lee, H.Y.; Chen, Y.C.; Zeng, Y.T.; Chern, Y.R.; Chen, N.T.; Lung, S.C.C.; Su, H.J.; Wu, C.D. Using a land use regression model with machine learning to estimate ground level PM2.5. Environ. Pollut. 2021, 277, 116846. [Google Scholar] [CrossRef] [PubMed]
- Ma, J.; Li, Z.; Cheng, J.C.; Ding, Y.; Lin, C.; Xu, Z. Air quality prediction at new stations using spatially transferred bi-directional long short-term memory network. Sci. Total. Environ. 2020, 705, 135771. [Google Scholar] [CrossRef]
- Ma, W.; Yuan, Z.; Lau, A.K.; Wang, L.; Liao, C.; Zhang, Y. Optimized neural network for daily-scale ozone prediction based on transfer learning. Sci. Total. Environ. 2022, 827, 154279. [Google Scholar] [CrossRef]
- Parthiban, S.; Amudha, P.; Sivakumari, S.P. Exploitation of Advanced Deep Learning Methods and Feature Modeling for Air Quality Prediction. Rev. Dintelligence Artif. 2022, 36, 959–967. [Google Scholar] [CrossRef]
- Zhang, L.; Na, J.; Zhu, J.; Shi, Z.; Zou, C.; Yang, L. Spatiotemporal causal convolutional network for forecasting hourly PM2.5 concentrations in Beijing, China. Comput. Geosci. 2021, 155, 104869. [Google Scholar] [CrossRef]
- Wang, Z.; Hu, B.; Huang, B.; Ma, Z.; Biswas, A.; Jiang, Y.; Shi, Z. Predicting annual PM2.5 in mainland China from 2014 to 2020 using multi temporal satellite product: An improved deep learning approach with spatial generalization ability. ISPRS J. Photogramm. Remote Sens. 2022, 187, 141–158. [Google Scholar] [CrossRef]
- Kim, M.; Brunner, D.; Kuhlmann, G. Importance of satellite observations for high-resolution mapping of near-surface NO2 by machine learning. Remote Sens. Environ. 2021, 264, 112573. [Google Scholar] [CrossRef]
- Ren, X.; Mi, Z.; Cai, T.; Nolte, C.G.; Georgopoulos, P.G. Flexible Bayesian Ensemble Machine Learning Framework for Predicting Local Ozone Concentrations. Environ. Sci. Technol. 2022, 56, 3871–3883. [Google Scholar] [CrossRef]
- United States Environmental Protection Agency. Learn About Heat Islands; US EPA: Washington, DC, USA, 2022. [Google Scholar]
- Kafy, A.A.; Abdullah-Al-Faisal; Rahman, M.S.; Islam, M.; Rakib, A.A.; Islam, M.A.; Khan, M.H.H.; Sikdar, M.S.; Sarker, M.H.S.; Mawa, J.; et al. Prediction of seasonal urban thermal field variance index using machine learning algorithms in Cumilla, Bangladesh. Sustain. Cities Soc. 2021, 64, 102542. [Google Scholar] [CrossRef]
- Lin, J.; Qiu, S.; Tan, X.; Zhuang, Y. Measuring the relationship between morphological spatial pattern of green space and urban heat island using machine learning methods. Build. Environ. 2023, 228, 109910. [Google Scholar] [CrossRef]
- Kafy, A.A.; Saha, M.; Abdullah-Al-Faisal; Rahaman, Z.A.; Rahman, M.T.; Liu, D.; Fattah, M.A.; Rakib, A.A.; AlDousari, A.E.; Rahaman, S.N.; et al. Predicting the impacts of land use/land cover changes on seasonal urban thermal characteristics using machine learning algorithms. Build. Environ. 2022, 217, 109066. [Google Scholar] [CrossRef]
- Jiang, J.; Zhou, Y.; Guo, X.; Qu, T. Calculation and Expression of the Urban Heat Island Indices Based on GeoSOT Grid. Sustainability 2022, 14, 2588. [Google Scholar] [CrossRef]
- Oh, J.W.; Ngarambe, J.; Duhirwe, P.N.; Yun, G.Y.; Santamouris, M. Using deep-learning to forecast the magnitude and characteristics of urban heat island in Seoul Korea. Sci. Rep. 2020, 10, 3559. [Google Scholar] [CrossRef]
- Jato-Espino, D.; Manchado, C.; Roldán-Valcarce, A.; Moscardó, V. ArcUHI: A GIS add-in for automated modelling of the Urban Heat Island effect through machine learning. Urban Clim. 2022, 44, 101203. [Google Scholar] [CrossRef]
- Lai, J.; Zhan, W.; Quan, J.; Bechtel, B.; Wang, K.; Zhou, J.; Huang, F.; Chakraborty, T.; Liu, Z.; Lee, X. Statistical estimation of next-day nighttime surface urban heat islands. ISPRS J. Photogramm. Remote Sens. 2021, 176, 182–195. [Google Scholar] [CrossRef]
- Lan, T.; Peng, J.; Liu, Y.; Zhao, Y.; Dong, J.; Jiang, S.; Cheng, X.; Corcoran, J. The future of China's urban heat island effects: A machine learning based scenario analysis on climatic-socioeconomic policies. Urban Clim. 2023, 49, 101463. [Google Scholar] [CrossRef]
- Choudhury, U.; Singh, S.K.; Kumar, A.; Meraj, G.; Kumar, P.; Kanga, S. Assessing Land Use/Land Cover Changes and Urban Heat Island Intensification: A Case Study of Kamrup Metropolitan District, Northeast India (2000–2032). Earth 2023, 4, 503–521. [Google Scholar] [CrossRef]
- Abou, S.; Rasha, M. Investigating and mapping day-night urban heat island and its driving factors using Sentinel/MODIS data and Google Earth Engine. Case study: Greater Cairo, Egypt. Urban Clim. 2023, 52, 101729. [Google Scholar] [CrossRef]
- Han, L.; Zhao, J.; Gao, Y.; Gu, Z. Prediction and evaluation of spatial distributions of ozone and urban heat island using a machine learning modified land use regression method. Sustain. Cities Soc. 2022, 78, 103643. [Google Scholar] [CrossRef]
- Garzón, J.; Molina, I.; Velasco, J.; Calabia, A. A Remote Sensing Approach for Surface Urban Heat Island Modeling in a Tropical Colombian City Using Regression Analysis and Machine Learning Algorithms. Remote Sens. 2021, 13, 4256. [Google Scholar] [CrossRef]
- Waleed, M.; Sajjad, M.; Acheampong, A.O.; Alam, M.T. Towards Sustainable and Livable Cities: Leveraging Remote Sensing, Machine Learning, and Geo-Information Modelling to Explore and Predict Thermal Field Variance in Response to Urban Growth. Sustainability 2023, 15, 1416. [Google Scholar] [CrossRef]
- Soille, P.; Vogt, P. Morphological segmentation of binary patterns. Pattern Recognit. Lett. 2009, 30, 456–459. [Google Scholar] [CrossRef]
- NOAA National Severe Storms Laboratory. Severe Weather 101: Flood Basics; NOAA National Severe Storms Laboratory: Norman, OK, USA, 2022. [Google Scholar]
- Park, S.J.; Lee, D.K. Prediction of coastal flooding risk under climate change impacts in South Korea using machine learning algorithms. Environ. Res. Lett. 2020, 15, 094052. [Google Scholar] [CrossRef]
- El-Magd, S.A.A.; Maged, A.; Farhat, H.I. Hybrid-based Bayesian algorithm and hydrologic indices for flash flood vulnerability assessment in coastal regions: Machine learning, risk prediction, and environmental impact. Environ. Sci. Pollut. Res. 2022, 29, 57345–57356. [Google Scholar] [CrossRef]
- Costache, R.; Arabameri, A.; Costache, I.; Crăciun, A.; Pham, B.T. New Machine Learning Ensemble for Flood Susceptibility Estimation. Water Resour. Manag. 2022, 36, 4765–4783. [Google Scholar] [CrossRef]
- Parvin, F.; Ali, S.A.; Calka, B.; Bielecka, E.; Linh, N.T.T.; Pham, Q.B. Urban flood vulnerability assessment in a densely urbanized city using multi-factor analysis and machine learning algorithms. Theor. Appl. Climatol. 2022, 149, 639–659. [Google Scholar] [CrossRef]
- Yariyan, P.; Janizadeh, S.; Phong, T.V.; Nguyen, H.D.; Costache, R.; Le, H.V.; Pham, B.T.; Pradhan, B.; Tiefenbacher, J.P. Improvement of Best First Decision Trees Using Bagging and Dagging Ensembles for Flood Probability Mapping. Water Resour. Manag. 2020, 34, 3037–3053. [Google Scholar] [CrossRef]
- Costache, R.; Pham, Q.B.; Sharifi, E.; Linh, N.T.T.; Abba, S.; Vojtek, M.; Vojteková, J.; Nhi, P.T.T.; Khoi, D.N. Flash-Flood Susceptibility Assessment Using Multi-Criteria Decision Making and Machine Learning Supported by Remote Sensing and GIS Techniques. Remote Sens. 2019, 12, 106. [Google Scholar] [CrossRef]
- Hosseini, F.S.; Choubin, B.; Mosavi, A.; Nabipour, N.; Shamshirband, S.; Darabi, H.; Haghighi, A.T. Flash-flood hazard assessment using ensembles and Bayesian-based machine learning models: Application of the simulated annealing feature selection method. Sci. Total. Environ. 2020, 711, 135161. [Google Scholar] [CrossRef]
- Shahabi, H.; Shirzadi, A.; Ghaderi, K.; Omidvar, E.; Al-Ansari, N.; Clague, J.J.; Geertsema, M.; Khosravi, K.; Amini, A.; Bahrami, S.; et al. Flood Detection and Susceptibility Mapping Using Sentinel-1 Remote Sensing Data and a Machine Learning Approach: Hybrid Intelligence of Bagging Ensemble Based on K-Nearest Neighbor Classifier. Remote Sens. 2020, 12, 266. [Google Scholar] [CrossRef]
- Ma, M.; Zhao, G.; He, B.; Li, Q.; Dong, H.; Wang, S.; Wang, Z. XGBoost-based method for flash flood risk assessment. J. Hydrol. 2021, 598, 126382. [Google Scholar] [CrossRef]
- Dodangeh, E.; Choubin, B.; Eigdir, A.N.; Nabipour, N.; Panahi, M.; Shamshirband, S.; Mosavi, A. Integrated machine learning methods with resampling algorithms for flood susceptibility prediction. Sci. Total. Environ. 2020, 705, 135983. [Google Scholar] [CrossRef] [PubMed]
- Costache, R.; Pham, Q.B.; Avand, M.; Linh, N.T.T.; Vojtek, M.; Vojteková, J.; Lee, S.; Khoi, D.N.; Nhi, P.T.T.; Dung, T.D. Novel hybrid models between bivariate statistics, artificial neural networks and boosting algorithms for flood susceptibility assessment. J. Environ. Manag. 2020, 265, 110485. [Google Scholar] [CrossRef] [PubMed]
- Talukdar, S.; Ghose, B.; Shahfahad; Salam, R.; Mahato, S.; Pham, Q.B.; Linh, N.T.T.; Costache, R.; Avand, M. Flood susceptibility modeling in Teesta River basin, Bangladesh using novel ensembles of bagging algorithms. Stoch. Environ. Res. Risk Assess. 2020, 34, 2277–2300. [Google Scholar] [CrossRef]
- Ekmekcioğlu, Ö.; Koc, K.; Özger, M.; Işık, Z. Exploring the additional value of class imbalance distributions on interpretable flash flood susceptibility prediction in the Black Warrior River basin, Alabama, United States. J. Hydrol. 2022, 610, 127877. [Google Scholar] [CrossRef]
- Abu-Salih, B.; Wongthongtham, P.; Coutinho, K.; Qaddoura, R.; Alshaweesh, O.; Wedyan, M. The development of a road network flood risk detection model using optimised ensemble learning. Eng. Appl. Artif. Intell. 2023, 122, 106081. [Google Scholar] [CrossRef]
- Priscillia, S.; Schillaci, C.; Lipani, A. Flood susceptibility assessment using artificial neural networks in Indonesia. Artif. Intell. Geosci. 2021, 2, 215–222. [Google Scholar] [CrossRef]
- Adnan, M.S.G.; Siam, Z.S.; Kabir, I.; Kabir, Z.; Ahmed, M.R.; Hassan, Q.K.; Rahman, R.M.; Dewan, A. A novel framework for addressing uncertainties in machine learning-based geospatial approaches for flood prediction. J. Environ. Manag. 2023, 326, 116813. [Google Scholar] [CrossRef]
- Meliho, M.; Khattabi, A.; Asinyo, J. Spatial modeling of flood susceptibility using machine learning algorithms. Arab. J. Geosci. 2021, 14, 2243. [Google Scholar] [CrossRef]
- Costache, R.; Bui, D.T. Identification of areas prone to flash-flood phenomena using multiple-criteria decision-making, bivariate statistics, machine learning and their ensembles. Sci. Total. Environ. 2020, 712, 136492. [Google Scholar] [CrossRef]
- Al-Areeq, A.M.; Saleh, R.A.A.; Ghanim, A.A.J.; Ghaleb, M.; Al-Areeq, N.M.; Al-Wajih, E. Flood hazard assessment in Yemen using a novel hybrid approach of Grey Wolf and Levenberg Marquardt optimizers. Geocarto Int. 2023, 38, 2243884. [Google Scholar] [CrossRef]
- Razavi-Termeh, S.V.; Sadeghi-Niaraki, A.; Seo, M.; Choi, S.M. Application of genetic algorithm in optimization parallel ensemble-based machine learning algorithms to flood susceptibility mapping using radar satellite imagery. Sci. Total Environ. 2023, 873, 162285. [Google Scholar] [CrossRef]
- Pradhan, B.; Lee, S.; Dikshit, A.; Kim, H. Spatial flood susceptibility mapping using an explainable artificial intelligence (XAI) model. Geosci. Front. 2023, 14, 101625. [Google Scholar] [CrossRef]
- Liu, J.; Liu, K.; Wang, M. A Residual Neural Network Integrated with a Hydrological Model for Global Flood Susceptibility Mapping Based on Remote Sensing Datasets. Remote Sens. 2023, 15, 2447. [Google Scholar] [CrossRef]
- Kaspi, M.; Kuleshov, Y. Flood Hazard Assessment in Australian Tropical Cyclone-Prone Regions. Climate 2023, 11, 229. [Google Scholar] [CrossRef]
- Ekmekcioğlu, Ö.; Koc, K. Explainable step-wise binary classification for the susceptibility assessment of geo-hydrological hazards. CATENA 2022, 216, 106379. [Google Scholar] [CrossRef]
- United States Geological Survey. What is a Landslide and What Causes One? U.S. Geological Survey: Reston, VA, USA, 2022. [Google Scholar]
- Bera, S.; Upadhyay, V.K.; Guru, B.; Oommen, T. Landslide inventory and susceptibility models considering the landslide typology using deep learning: Himalayas, India. Nat. Hazards 2021, 108, 1257–1289. [Google Scholar] [CrossRef]
- Chang, L.; Zhang, R.; Wang, C. Evaluation and Prediction of Landslide Susceptibility in Yichang Section of Yangtze River Basin Based on Integrated Deep Learning Algorithm. Remote Sens. 2022, 14, 2717. [Google Scholar] [CrossRef]
- Sun, X.; Yu, C.; Li, Y.; Rene, N.N. Susceptibility Mapping of Typical Geological Hazards in Helong City Affected by Volcanic Activity of Changbai Mountain, Northeastern China. ISPRS Int. J. Geo-Inf. 2022, 11, 344. [Google Scholar] [CrossRef]
- Dou, J.; Yunus, A.P.; Merghadi, A.; Shirzadi, A.; Nguyen, H.; Hussain, Y.; Avtar, R.; Chen, Y.; Pham, B.T.; Yamagishi, H. Different sampling strategies for predicting landslide susceptibilities are deemed less consequential with deep learning. Sci. Total. Environ. 2020, 720, 137320. [Google Scholar] [CrossRef] [PubMed]
- Dao, D.V.; Jaafari, A.; Bayat, M.; Mafi-Gholami, D.; Qi, C.; Moayedi, H.; Phong, T.V.; Ly, H.B.; Le, T.T.; Trinh, P.T.; et al. A spatially explicit deep learning neural network model for the prediction of landslide susceptibility. CATENA 2020, 188, 104451. [Google Scholar] [CrossRef]
- Dou, J.; Yunus, A.P.; Bui, D.T.; Merghadi, A.; Sahana, M.; Zhu, Z.; Chen, C.W.; Han, Z.; Pham, B.T. Improved landslide assessment using support vector machine with bagging, boosting, and stacking ensemble machine learning framework in a mountainous watershed, Japan. Landslides 2020, 17, 641–658. [Google Scholar] [CrossRef]
- Huang, F.; Cao, Z.; Guo, J.; Jiang, S.H.; Li, S.; Guo, Z. Comparisons of heuristic, general statistical and machine learning models for landslide susceptibility prediction and mapping. CATENA 2020, 191, 104580. [Google Scholar] [CrossRef]
- Huang, F.; Zhang, J.; Zhou, C.; Wang, Y.; Huang, J.; Zhu, L. A deep learning algorithm using a fully connected sparse autoencoder neural network for landslide susceptibility prediction. Landslides 2020, 17, 217–229. [Google Scholar] [CrossRef]
- Chang, Z.; Huang, J.; Huang, F.; Bhuyan, K.; Meena, S.R.; Catani, F. Uncertainty analysis of non-landslide sample selection in landslide susceptibility prediction using slope unit-based machine learning models. Gondwana Res. 2023, 117, 307–320. [Google Scholar] [CrossRef]
- Dahal, A.; Lombardo, L. Explainable artificial intelligence in geoscience: A glimpse into the future of landslide susceptibility modeling. Comput. Geosci. 2023, 176, 105364. [Google Scholar] [CrossRef]
- Zhao, Z.; Liu, Z.; Xu, C. Slope Unit-Based Landslide Susceptibility Mapping Using Certainty Factor, Support Vector Machine, Random Forest, CF-SVM and CF-RF Models. Front. Earth Sci. 2021, 9, 589630. [Google Scholar] [CrossRef]
- Ye, C.; Tang, R.; Wei, R.; Guo, Z.; Zhang, H. Generating accurate negative samples for landslide susceptibility mapping: A combined self-organizing-map and one-class SVM method. Front. Earth Sci. 2023, 10, 1054027. [Google Scholar] [CrossRef]
- Xi, C.; Han, M.; Hu, X.; Liu, B.; He, K.; Luo, G.; Cao, X. Effectiveness of Newmark-based sampling strategy for coseismic landslide susceptibility mapping using deep learning, support vector machine, and logistic regression. Bull. Eng. Geol. Environ. 2022, 81, 174. [Google Scholar] [CrossRef]
- Gupta, S.K.; Shukla, D.P. Handling data imbalance in machine learning based landslide susceptibility mapping: A case study of Mandakini River Basin, North-Western Himalayas. Landslides 2023, 20, 933–949. [Google Scholar] [CrossRef]
- Fang, Z.; Wang, Y.; Niu, R.; Peng, L. Landslide Susceptibility Prediction Based on Positive Unlabeled Learning Coupled With Adaptive Sampling. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 11581–11592. [Google Scholar] [CrossRef]
- Wu, B.; Qiu, W.; Jia, J.; Liu, N. Landslide Susceptibility Modeling Using Bagging-Based Positive-Unlabeled Learning. IEEE Geosci. Remote Sens. Lett. 2021, 18, 766–770. [Google Scholar] [CrossRef]
- Wang, Y.; Feng, L.; Li, S.; Ren, F.; Du, Q. A hybrid model considering spatial heterogeneity for landslide susceptibility mapping in Zhejiang Province, China. CATENA 2020, 188, 104425. [Google Scholar] [CrossRef]
- Huang, F.; Xiong, H.; Yao, C.; Catani, F.; Zhou, C.; Huang, J. Uncertainties of landslide susceptibility prediction considering different landslide types. J. Rock Mech. Geotech. Eng. 2023, 15, 2954–2972. [Google Scholar] [CrossRef]
- Sun, D.; Chen, D.; Zhang, J.; Mi, C.; Gu, Q.; Wen, H. Landslide Susceptibility Mapping Based on Interpretable Machine Learning from the Perspective of Geomorphological Differentiation. Land 2023, 12, 1018. [Google Scholar] [CrossRef]
- Pradhan, B.; Dikshit, A.; Lee, S.; Kim, H. An explainable AI (XAI) model for landslide susceptibility modeling. Appl. Soft Comput. 2023, 142, 110324. [Google Scholar] [CrossRef]
- Collini, E.; Palesi, L.A.I.; Nesi, P.; Pantaleo, G.; Nocentini, N.; Rosi, A. Predicting and Understanding Landslide Events With Explainable AI. IEEE Access 2022, 10, 31175–31189. [Google Scholar] [CrossRef]
- Fang, H.; Shao, Y.; Xie, C.; Tian, B.; Shen, C.; Zhu, Y.; Guo, Y.; Yang, Y.; Chen, G.; Zhang, M. A New Approach to Spatial Landslide Susceptibility Prediction in Karst Mining Areas Based on Explainable Artificial Intelligence. Sustainability 2023, 15, 3094. [Google Scholar] [CrossRef]
- Zhang, J.; Ma, X.; Zhang, J.; Sun, D.; Zhou, X.; Mi, C.; Wen, H. Insights into geospatial heterogeneity of landslide susceptibility based on the SHAP-XGBoost model. J. Environ. Manag. 2023, 332, 117357. [Google Scholar] [CrossRef]
- Zhu, Q.; Chen, L.; Hu, H.; Pirasteh, S.; Li, H.; Xie, X. Unsupervised Feature Learning to Improve Transferability of Landslide Susceptibility Representations. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 3917–3930. [Google Scholar] [CrossRef]
- Zhiyong, F.; Changdong, L.; Wenmin, Y. Landslide susceptibility assessment through TrAdaBoost transfer learning models using two landslide inventories. CATENA 2023, 222, 106799. [Google Scholar] [CrossRef]
- Wang, Z.; Goetz, J.; Brenning, A. Transfer learning for landslide susceptibility modeling using domain adaptation and case-based reasoning. Geosci. Model Dev. 2022, 15, 8765–8784. [Google Scholar] [CrossRef]
- Youssef, A.M.; Pourghasemi, H.R. Landslide susceptibility mapping using machine learning algorithms and comparison of their performance at Abha Basin, Asir Region, Saudi Arabia. Geosci. Front. 2021, 12, 639–655. [Google Scholar] [CrossRef]
- Chang, Z.; Huang, F.; Huang, J.; Jiang, S.H.; Liu, Y.; Meena, S.R.; Catani, F. An updating of landslide susceptibility prediction from the perspective of space and time. Geosci. Front. 2023, 14, 101619. [Google Scholar] [CrossRef]
Hazard | No. of Retrieved Articles | No. of Manually Selected Articles
---|---|---
Air pollution | 1385 | 654
Urban heat island | 116 | 36
Flood | 657 | 253
Landslide | 769 | 511
Type | Variable | Source | Spatial Resolution | Temporal Resolution | References
---|---|---|---|---|---
Meteorological data | Temperature, Wind, Rain, Solar radiation, … | ERA5 1 | 0.25° × 0.25° | Hourly | [90,91]
Meteorological data | Temperature, Atmospheric pressure, Wind, Humidity, … | Netatmo 2 | Discrete | 5 min | [92]
DEM 3 | Elevation, Slope, Aspect, … | ASTER-GDEM 4 | 30 m | N/A | [93,94,95]
 | | SRTM-DEM 5 | 30 m | N/A | [96,97]
 | | ALOS-PALSAR-DEM 6 | 12.5 m | N/A | [98,99]
Derived surface indices and land cover 7 | LULC, NDVI, NDBI, MDWI, … | Landsat-8 | 30 m | Revisit time: 16 days | [80,94,100]
 | | Landsat-5 | 30 m | Revisit time: 16 days | [96,101]
 | | Sentinel-2 | 10 m | Revisit time: 5 days (twin satellites) | [90,102]
 | | MODIS Terra and Aqua | 250 m | Revisit time: 2 days | [103]
 | Land cover | Copernicus CORINE 8 Land Cover 2018 | 100 m | 6 years | [34]
 | | ESA 9 WorldCover 2020 | 10 m | Annual | [104]
Features | Roads, Waterways, Power plants, Points of interest, … | OpenStreetMap (OSM) | Vector | Updated by the community | [104,105]
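The derived indices listed in the table above (e.g., NDVI and NDBI) are simple band ratios of the optical imagery cited as sources. As a minimal illustrative sketch (the band file names and the use of the rasterio library are assumptions, not details taken from the reviewed studies), NDVI and NDBI can be computed from Landsat-8 red, NIR, and SWIR1 bands as follows:

```python
import numpy as np
import rasterio

def read_band(path):
    """Read a single-band GeoTIFF as a float array (hypothetical file layout)."""
    with rasterio.open(path) as src:
        return src.read(1).astype("float32"), src.profile

# Hypothetical Landsat-8 band files: B4 = red, B5 = NIR, B6 = SWIR1
red, profile = read_band("LC08_B4.tif")
nir, _ = read_band("LC08_B5.tif")
swir1, _ = read_band("LC08_B6.tif")

eps = 1e-6  # guard against division by zero on no-data pixels
ndvi = (nir - red) / (nir + red + eps)      # vegetation index
ndbi = (swir1 - nir) / (swir1 + nir + eps)  # built-up index

# Write NDVI back out with the same georeferencing as the inputs
profile.update(dtype="float32", count=1)
with rasterio.open("ndvi.tif", "w", **profile) as dst:
    dst.write(ndvi, 1)
```

The same pattern applies to the other indices in the table; only the band combination changes.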
Source | Pollutants | Spatial Resolution | Temporal Resolution | References
---|---|---|---|---
Sentinel-5P | O3, SO2, NO2, CO, HCHO, and CH4 | 5.5 km × 3.5 km | Revisit time: daily | [114,115,123]
Aura OMI 1 | O3, NO2, SO2, and aerosols | 13 km × 25 km | Revisit time: daily | [116,124,125]
Air Quality e-Reporting database | O3, SO2, NO2, CO, HCHO, and CH4 | Discrete | Daily/hourly | [105]
CAMS 2 reanalysis | O3, SO2, NO2, CO, HCHO, and CH4 | 10 km × 10 km | Hourly | [76,116]
MERRA-2 3 | PM2.5, BC, and aerosols | 0.625° × 0.5° | Hourly | [116,117,126]
Variable | Source | Spatial Resolution | Temporal Resolution | References
---|---|---|---|---
LST 1 | MODIS 2 Terra and Aqua | 1 km | Revisit time: daily (morning pass Terra + afternoon pass Aqua) | [168,169]
 | Landsat 8/5 | 30 m | Revisit time: 16 days | [162,170]
 | Sentinel-3 | 1 km | Revisit time: 2 days | [171]
AOD 3 | MODIS Terra and Aqua | 10 km | Revisit time: daily (morning pass Terra + afternoon pass Aqua) | [168]
Albedo | MODIS Terra and Aqua | 0.05° | Revisit time: daily (morning pass Terra + afternoon pass Aqua) | [168]
 | Landsat 8 | 30 m | Revisit time: 16 days | [172]
Anthropogenic heat flux | NOAA 4 night-time lights | 1 km | Daily/monthly | [93]
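Satellite LST products such as MOD11A1 distribute temperatures as scaled integers rather than physical units. The following minimal sketch (the file name is hypothetical, the 0.02 scale factor follows the MODIS LST product documentation, and rasterio is assumed for raster I/O) converts the daytime LST layer to degrees Celsius:

```python
import numpy as np
import rasterio

# Hypothetical GeoTIFF exported from the MOD11A1 LST_Day_1km layer
with rasterio.open("MOD11A1_LST_Day_1km.tif") as src:
    dn = src.read(1).astype("float32")

# Stored values are scaled Kelvin (scale factor 0.02); zero marks
# cloudy/invalid pixels in this product.
lst_kelvin = np.where(dn > 0, dn * 0.02, np.nan)
lst_celsius = lst_kelvin - 273.15

print("Mean daytime LST (°C):", np.nanmean(lst_celsius))
```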