Uni-Temporal Multispectral Imagery for Burned Area Mapping with Deep Learning
Abstract
1. Introduction
2. Related Studies
- (1) Poor generalization of spectral indices in heterogeneous regions.
- (2) The need for additional pre-fire image acquisitions to compute bi-temporal indices.
- (3) Omission errors caused by uni-temporal indices.
- (4) Lack of a detailed quantitative comparison between different kinds of algorithms.
- (5) Lack of further investigation into cross-sensor datasets.
3. Study Areas and Data Characteristics
3.1. Study Areas
3.2. Data Characteristics
3.2.1. Sentinel-2 and Landsat-8 Data Collection
3.2.2. Reference Data
3.2.3. Test Sites Characteristics
3.2.4. Spectral Feature Selection
4. Methods
4.1. Threshold-Based Approaches
4.2. ML-Based Approaches
4.3. DL-Based Approaches
4.3.1. U-Net
4.3.2. HRNet
4.3.3. Fast-SCNN
- (1) A learning-to-downsample module with a standard convolutional layer (Conv2D) and two depthwise separable convolutional layers (DSConv). The output feature maps are named relu_4 in A.
- (2) A coarse global feature extractor that captures the contextual information for segmentation using bottleneck blocks built on depthwise separable convolution, which reduces both the number of trainable parameters and the floating-point operations. The extractor ends with a pyramid pooling module that aggregates context from different regions. The contextual features from the extractor are given in relu_6 before the fusion operation.
- (3) A feature fusion module that merges the high-level and low-level representations by simple addition (relu_7 in C).
- (4) A standard classifier consisting of two DSConv layers (relu_11 in D), one Conv2D layer to boost the accuracy, and a softmax layer that produces the segmentation results.
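The parameter savings from depthwise separable convolution mentioned in module (2) can be illustrated with a quick count. A minimal sketch; the layer sizes below are hypothetical and not taken from the paper:

```python
def conv2d_params(k, c_in, c_out):
    """Weights of a standard k×k convolution (bias ignored)."""
    return k * k * c_in * c_out

def dsconv_params(k, c_in, c_out):
    """Depthwise k×k convolution (one filter per input channel)
    followed by a 1×1 pointwise convolution (bias ignored)."""
    return k * k * c_in + c_in * c_out

# Hypothetical layer: 3×3 kernel, 64 -> 128 channels
standard = conv2d_params(3, 64, 128)   # 73,728 weights
separable = dsconv_params(3, 64, 128)  # 8,768 weights
print(standard, separable, round(standard / separable, 1))
```

For this layer the separable variant uses roughly 8× fewer weights, and the FLOP reduction scales the same way, which is what makes Fast-SCNN lightweight.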
4.3.4. DeepLabv3+
4.3.5. Data Augmentation
4.4. Accuracy Assessment
5. Results
5.1. DL Network Evaluation
5.1.1. Test Results and Analysis
5.1.2. Feature Analysis
5.2. Burned Area Mapping with Sentinel-2 Data
5.2.1. Corinthia Fire
5.2.2. Fågelsjö-Lillåsen Fire and Trängslet Fire
5.3. Transferring Phase with Landsat-8 Data
6. Discussion
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A. Feature Selection on Sentinel-2 Spectral Bands
Appendix B. Comparison Results of ML Algorithms
Model | OA | Recall | Precision | F1 | Kappa |
---|---|---|---|---|---|
Light Gradient Boosting Machine | 0.9664 | 0.9011 | 0.9201 | 0.9105 | 0.8898 |
Gradient Boosting Classifier | 0.9656 | 0.8955 | 0.9209 | 0.908 | 0.8868 |
K Neighbors Classifier | 0.9638 | 0.8933 | 0.9136 | 0.9033 | 0.881 |
Random Forest Classifier | 0.9633 | 0.8911 | 0.9132 | 0.902 | 0.8794 |
Extra Trees Classifier | 0.9619 | 0.8875 | 0.9093 | 0.8982 | 0.8748 |
Ada Boost Classifier | 0.9603 | 0.8617 | 0.9239 | 0.8917 | 0.8674 |
Quadratic Discriminant Analysis | 0.9567 | 0.8481 | 0.9173 | 0.8813 | 0.8549 |
Logistic Regression | 0.9538 | 0.8213 | 0.9266 | 0.8707 | 0.8427 |
Decision Tree Classifier | 0.9484 | 0.8621 | 0.865 | 0.8636 | 0.8317 |
SVM - Linear Kernel | 0.9476 | 0.7791 | 0.9338 | 0.849 | 0.8176 |
Linear Discriminant Analysis | 0.9456 | 0.7827 | 0.9183 | 0.8451 | 0.8124 |
Ridge Classifier | 0.9415 | 0.7424 | 0.9355 | 0.8278 | 0.7931 |
Naive Bayes | 0.9379 | 0.6934 | 0.9705 | 0.8088 | 0.773 |
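The metrics reported above (OA, recall, precision, F1, and Cohen's kappa) all derive from a binary confusion matrix. A minimal sketch with hypothetical counts, not the paper's actual pixel counts:

```python
def binary_metrics(tp, fp, fn, tn):
    """Accuracy metrics from binary confusion-matrix counts."""
    total = tp + fp + fn + tn
    oa = (tp + tn) / total                    # overall accuracy
    recall = tp / (tp + fn)                   # producer's accuracy
    precision = tp / (tp + fp)                # user's accuracy
    f1 = 2 * precision * recall / (precision + recall)
    # Expected chance agreement for Cohen's kappa
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total ** 2
    kappa = (oa - pe) / (1 - pe)
    return {"OA": oa, "Recall": recall, "Precision": precision,
            "F1": f1, "Kappa": kappa}

# Hypothetical counts: 80 correctly detected burned pixels, 10 false
# alarms, 20 missed burned pixels, 890 correct unburned pixels
m = binary_metrics(tp=80, fp=10, fn=20, tn=890)
print({k: round(v, 4) for k, v in m.items()})
```

Note how kappa discounts the chance agreement that inflates OA on class-imbalanced scenes such as burned area maps.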
Appendix C. Burned Area Mapping Results with Landsat-8 Data
References
- Bowman, D.M.; Williamson, G.J.; Abatzoglou, J.T.; Kolden, C.A.; Cochrane, M.A.; Smith, A.M. Human exposure and sensitivity to globally extreme wildfire events. Nat. Ecol. Evol. 2017, 1, 58. [Google Scholar] [CrossRef]
- Mangeon, S.; Field, R.; Fromm, M.; McHugh, C.; Voulgarakis, A. Satellite versus ground-based estimates of burned area: A comparison between MODIS based burned area and fire agency reports over North America in 2007. Anthr. Rev. 2016, 3, 76–92. [Google Scholar] [CrossRef] [Green Version]
- Chuvieco, E.; Congalton, R.G. Mapping and inventory of forest fires from digital processing of tm data. Geocarto Int. 1988, 3, 41–53. [Google Scholar] [CrossRef]
- Chuvieco, E.; Mouillot, F.; van der Werf, G.R.; San Miguel, J.; Tanasse, M.; Koutsias, N.; García, M.; Yebra, M.; Padilla, M.; Gitas, I.; et al. Historical background and current developments for mapping burned area from satellite Earth observation. Remote Sens. Environ. 2019, 225, 45–64. [Google Scholar] [CrossRef]
- Giglio, L.; Justice, C.; Boschetti, L.; Roy, D. MCD64A1 MODIS/Terra+Aqua Burned Area Monthly L3 Global 500m SIN Grid V006. Distributed by NASA EOSDIS Land Processes DAAC. 2015. Available online: https://doi.org/10.5067/MODIS/MCD64A1.006 (accessed on 11 April 2021).
- Lizundia-Loiola, J.; Otón, G.; Ramo, R.; Chuvieco, E. A spatio-temporal active-fire clustering approach for global burned area mapping at 250 m from MODIS data. Remote Sens. Environ. 2020, 236, 111493. [Google Scholar] [CrossRef]
- Roteta, E.; Bastarrika, A.; Padilla, M.; Storm, T.; Chuvieco, E. Development of a Sentinel-2 burned area algorithm: Generation of a small fire database for sub-Saharan Africa. Remote Sens. Environ. 2019, 222, 1–17. [Google Scholar] [CrossRef]
- Li, J.; Roy, D.P. A global analysis of Sentinel-2a, Sentinel-2b and Landsat-8 data revisit intervals and implications for terrestrial monitoring. Remote Sens. 2017, 9, 902. [Google Scholar] [CrossRef] [Green Version]
- Toukiloglou, P.; Gitas, I.Z.; Katagis, T. An automated two-step NDVI-based method for the production of low-cost historical burned area map records over large areas. Int. J. Remote Sens. 2014, 35, 2713–2730. [Google Scholar] [CrossRef]
- Roy, D.P.; Huang, H.; Boschetti, L.; Giglio, L.; Yan, L.; Zhang, H.H.; Li, Z. Landsat-8 and Sentinel-2 burned area mapping—A combined sensor multi-temporal change detection approach. Remote Sens. Environ. 2019, 231. [Google Scholar] [CrossRef]
- Chen, Y.; Lara, M.J.; Hu, F.S. A robust visible near-infrared index for fire severity mapping in Arctic tundra ecosystems. ISPRS J. Photogramm. Remote Sens. 2020, 159, 101–113. [Google Scholar] [CrossRef]
- Kontoes, C.C.; Poilvé, H.; Florsch, G.; Keramitsoglou, I.; Paralikidis, S. A comparative analysis of a fixed thresholding vs. a classification tree approach for operational burn scar detection and mapping. Int. J. Appl. Earth Obs. Geoinf. 2009, 11, 299–316. [Google Scholar] [CrossRef]
- Quintano, C.; Fernández-Manso, A.; Stein, A.; Bijker, W. Estimation of area burned by forest fires in Mediterranean countries: A remote sensing data mining perspective. For. Ecol. Manag. 2011, 262, 1597–1607. [Google Scholar] [CrossRef]
- Trigg, S.; Flasse, S. An evaluation of different bi-spectral spaces for discriminating burned shrub-savannah. Int. J. Remote Sens. 2001, 22, 2641–2647. [Google Scholar] [CrossRef]
- Chu, T.; Guo, X. Remote sensing techniques in monitoring post-fire effects and patterns of forest recovery in boreal forest regions: A review. Remote Sens. 2013, 6, 470–520. [Google Scholar] [CrossRef] [Green Version]
- Fernández-Manso, A.; Fernández-Manso, O.; Quintano, C. SENTINEL-2A red-edge spectral indices suitability for discriminating burn severity. Int. J. Appl. Earth Obs. Geoinf. 2016, 50, 170–175. [Google Scholar] [CrossRef]
- Huang, H.; Roy, D.P.; Boschetti, L.; Zhang, H.K.; Yan, L.; Kumar, S.S.; Gomez-Dans, J.; Li, J.; Huang, H.; Roy, D.P.; et al. Separability analysis of Sentinel-2A Multi-Spectral Instrument (MSI) data for burned area discrimination. Remote Sens. 2016, 8, 873. [Google Scholar] [CrossRef] [Green Version]
- Navarro, G.; Caballero, I.; Silva, G.; Parra, P.C.; Vázquez, Á.; Caldeira, R. Evaluation of forest fire on Madeira Island using Sentinel-2A MSI imagery. Int. J. Appl. Earth Obs. Geoinf. 2017, 58, 97–106. [Google Scholar] [CrossRef] [Green Version]
- Quintano, C.; Fernández-Manso, A.; Fernández-Manso, O. Combination of Landsat and Sentinel-2 MSI data for initial assessing of burn severity. Int. J. Appl. Earth Obs. Geoinf. 2018, 64, 221–225. [Google Scholar] [CrossRef]
- Filipponi, F. BAIS2: Burned Area Index for Sentinel-2. Proceedings 2018, 2, 364. [Google Scholar] [CrossRef] [Green Version]
- Loboda, T.; O’Neal, K.J.; Csiszar, I. Regionally adaptable dNBR-based algorithm for burned area mapping from MODIS data. Remote Sens. Environ. 2007. [Google Scholar] [CrossRef]
- Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef] [Green Version]
- Pulvirenti, L.; Squicciarino, G.; Fiori, E.; Fiorucci, P.; Ferraris, L.; Negro, D.; Gollini, A.; Severino, M.; Puca, S. An automatic processing chain for near real-time mapping of burned forest areas using sentinel-2 data. Remote Sens. 2020, 12, 674. [Google Scholar] [CrossRef] [Green Version]
- Smith, A.M.; Drake, N.A.; Wooster, M.J.; Hudak, A.T.; Holden, Z.A.; Gibbons, C.J. Production of Landsat ETM+ reference imagery of burned areas within Southern African savannahs: Comparison of methods and application to MODIS. Int. J. Remote Sens. 2007, 28, 2753–2775. [Google Scholar] [CrossRef]
- Jain, P.; Coogan, S.C.; Subramanian, S.G.; Crowley, M.; Taylor, S.; Flannigan, M.D. A review of machine learning applications in wildfire science and management. Environ. Rev. 2020, 28, 478–505. [Google Scholar] [CrossRef]
- Hawbaker, T.J.; Vanderhoof, M.K.; Beal, Y.J.J.; Takacs, J.D.; Schmidt, G.L.; Falgout, J.T.; Williams, B.; Fairaux, N.M.; Caldwell, M.K.; Picotte, J.J.; et al. Mapping burned areas using dense time-series of Landsat data. Remote Sens. Environ. 2017, 198, 504–522. [Google Scholar] [CrossRef]
- Liu, J.; Heiskanen, J.; Maeda, E.E.; Pellikka, P.K.E. Burned area detection based on Landsat time series in savannas of southern Burkina Faso. Int. J. Appl. Earth Obs. Geoinf. 2018, 64, 210–220. [Google Scholar] [CrossRef] [Green Version]
- Stavrakoudis, D.; Katagis, T.; Minakou, C.; Gitas, I.Z.; Stavrakoudis, D.; Katagis, T.; Minakou, C.; Gitas, I.Z. Automated Burned Scar Mapping Using Sentinel-2 Imagery. J. Geogr. Inf. Syst. 2020, 12, 221–240. [Google Scholar] [CrossRef]
- Long, T.; Zhang, Z.; He, G.; Jiao, W.; Tang, C.; Wu, B.; Zhang, X.; Wang, G.; Yin, R. 30m resolution global annual burned area mapping based on landsat images and Google Earth Engine. Remote Sens. 2019, 11, 489. [Google Scholar] [CrossRef] [Green Version]
- Koutsias, N.; Karteris, M. Burned area mapping using logistic regression modeling of a single post-fire Landsat-5 Thematic Mapper image. Int. J. Remote Sens. 2000, 21, 673–687. [Google Scholar] [CrossRef]
- Petropoulos, G.P.; Kontoes, C.; Keramitsoglou, I. Burnt area delineation from a uni-temporal perspective based on landsat TM imagery classification using Support Vector Machines. Int. J. Appl. Earth Obs. Geoinf. 2011, 13, 70–80. [Google Scholar] [CrossRef]
- Mitrakis, N.E.; Mallinis, G.; Koutsias, N.; Theocharis, J.B. Burned area mapping in Mediterranean environment using medium-resolution multi-spectral data and a neuro-fuzzy classifier. Int. J. Image Data Fusion 2012, 3, 299–318. [Google Scholar] [CrossRef]
- Mallinis, G.; Koutsias, N. Comparing ten classification methods for burned area mapping in a Mediterranean environment using Landsat TM satellite data. Int. J. Remote Sens. 2012, 33, 4408–4433. [Google Scholar] [CrossRef]
- Pu, R.; Gong, P. Determination of burnt scars using logistic regression and neural network techniques from a single post-fire Landsat 7 ETM+ image. Photogramm. Eng. Remote Sens. 2004, 70, 841–850. [Google Scholar] [CrossRef]
- Stroppiana, D.; Bordogna, G.; Carrara, P.; Boschetti, M.; Boschetti, L.; Brivio, P. A method for extracting burned areas from Landsat TM/ETM+ images by soft aggregation of multiple Spectral Indices and a region growing algorithm. ISPRS J. Photogramm. Remote Sens. 2012, 69, 88–102. [Google Scholar] [CrossRef]
- Reichstein, M.; Camps-Valls, G.; Stevens, B.; Jung, M.; Denzler, J.; Carvalhais, N.; Prabhat. Deep learning and process understanding for data-driven Earth system science. Nature 2019, 566, 195–204. [Google Scholar] [CrossRef] [PubMed]
- Lateef, F.; Ruichek, Y. Survey on semantic segmentation using deep learning techniques. Neurocomputing 2019, 338, 321–348. [Google Scholar] [CrossRef]
- Milioto, A.; Lottes, P.; Stachniss, C. Real-Time Semantic Segmentation of Crop and Weed for Precision Agriculture Robots Leveraging Background Knowledge in CNNs. In Proceedings of the IEEE International Conference on Robotics and Automation, Brisbane, QLD, Australia, 21–25 May 2018; Volume 338, pp. 2229–2235. [Google Scholar] [CrossRef] [Green Version]
- Tseng, Y.H.; Jan, S.S. Combination of computer vision detection and segmentation for autonomous driving. In Proceedings of the 2018 IEEE/ION Position, Location and Navigation Symposium, PLANS, Monterey, CA, USA, 23–26 April 2018; pp. 1047–1052. [Google Scholar] [CrossRef]
- Jiang, F.; Grigorev, A.; Rho, S.; Tian, Z.; Fu, Y.S.; Jifara, W.; Adil, K.; Liu, S. Medical image semantic segmentation based on deep learning. Neural Comput. Appl. 2018, 29, 1257–1265. [Google Scholar] [CrossRef]
- Bhuiyan, M.A.E.; Witharana, C.; Liljedahl, A.K. Use of Very High Spatial Resolution Commercial Satellite Imagery and Deep Learning to Automatically Map Ice-Wedge Polygons across Tundra Vegetation Types. J. Imaging 2020, 6, 137. [Google Scholar] [CrossRef]
- Zhang, W.; Liljedahl, A.K.; Kanevskiy, M.; Epstein, H.E.; Jones, B.M.; Jorgenson, M.T.; Kent, K. Transferability of the deep learning mask R-CNN model for automated mapping of ice-wedge polygons in high-resolution satellite and UAV images. Remote Sens. 2020, 12, 1085. [Google Scholar] [CrossRef] [Green Version]
- Bonhage, A.; Eltaher, M.; Raab, T.; Breuß, M.; Raab, A.; Schneider, A. A modified Mask region-based convolutional neural network approach for the automated detection of archaeological sites on high-resolution light detection and ranging-derived digital elevation models in the North German Lowland. Archaeol. Prospect. 2021, 1–10. [Google Scholar] [CrossRef]
- Ma, L.; Liu, Y.; Zhang, X.; Ye, Y.; Yin, G.; Johnson, B.A. Deep learning in remote sensing applications: A meta-analysis and review. ISPRS J. Photogramm. Remote Sens. 2019, 152, 166–177. [Google Scholar] [CrossRef]
- Zhang, R.; Li, G.; Li, M.; Wang, L. Fusion of images and point clouds for the semantic segmentation of large-scale 3D scenes based on deep learning. ISPRS J. Photogramm. Remote Sens. 2018, 143, 85–96. [Google Scholar] [CrossRef]
- Zhu, X.X.; Tuia, D.; Mou, L.; Xia, G.S.; Zhang, L.; Xu, F.; Fraundorfer, F. Deep Learning in Remote Sensing: A Comprehensive Review and List of Resources. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–36. [Google Scholar] [CrossRef] [Green Version]
- Wurm, M.; Stark, T.; Zhu, X.X.; Weigand, M.; Taubenböck, H. Semantic segmentation of slums in satellite images using transfer learning on fully convolutional neural networks. ISPRS J. Photogramm. Remote Sens. 2019, 150, 59–69. [Google Scholar] [CrossRef]
- Liu, C.C.; Zhang, Y.C.; Chen, P.Y.; Lai, C.C.; Chen, Y.H.; Cheng, J.H.; Ko, M.H. Clouds classification from Sentinel-2 imagery with deep residual learning and semantic image segmentation. Remote Sens. 2019, 11, 119. [Google Scholar] [CrossRef] [Green Version]
- Li, R.; Liu, W.; Yang, L.; Sun, S.; Hu, W.; Zhang, F.; Li, W. DeepUNet: A Deep Fully Convolutional Network for Pixel-Level Sea-Land Segmentation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 3954–3962. [Google Scholar] [CrossRef] [Green Version]
- Zhang, Z.; Liu, Q.; Wang, Y. Road Extraction by Deep Residual U-Net. IEEE Geosci. Remote Sens. Lett. 2018, 15, 749–753. [Google Scholar] [CrossRef] [Green Version]
- Pinto, M.M.; Libonati, R.; Trigo, R.M.; Trigo, I.F.; DaCamara, C.C. A deep learning approach for mapping and dating burned areas using temporal sequences of satellite images. ISPRS J. Photogramm. Remote Sens. 2020, 160, 260–274. [Google Scholar] [CrossRef]
- Langford, Z.; Kumar, J.; Hoffman, F. Wildfire mapping in interior alaska using deep neural networks on imbalanced datasets. In Proceedings of the IEEE International Conference on Data Mining Workshops, ICDMW, Singapore, 17–20 November 2018; pp. 770–778. [Google Scholar] [CrossRef]
- Zhang, P.; Nascetti, A.; Ban, Y.; Gong, M. An implicit radar convolutional burn index for burnt area mapping with Sentinel-1 C-band SAR data. ISPRS J. Photogramm. Remote Sens. 2019, 158, 50–62. [Google Scholar] [CrossRef]
- Ban, Y.; Zhang, P.; Nascetti, A.; Bevington, A.R.; Wulder, M.A. Near Real-Time Wildfire Progression Monitoring with Sentinel-1 SAR Time Series and Deep Learning. Sci. Rep. 2020, 10. [Google Scholar] [CrossRef] [Green Version]
- Bermudez, J.D.; Happ, P.N.; Feitosa, R.Q.; Oliveira, D.A. Synthesis of Multispectral Optical Images from SAR/Optical Multitemporal Data Using Conditional Generative Adversarial Networks. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1220–1224. [Google Scholar] [CrossRef]
- de Bem, P.P.; de Carvalho, O.A., Jr.; de Carvalho, O.L.F.; Gomes, R.A.T.; Guimarães, R.F. Performance analysis of deep convolutional autoencoders with different patch sizes for change detection from burnt areas. Remote Sens. 2020, 12, 2576. [Google Scholar] [CrossRef]
- Knopp, L.; Wieland, M.; Rättich, M.; Martinis, S. A deep learning approach for burned area segmentation with Sentinel-2 data. Remote Sens. 2020, 12, 2422. [Google Scholar] [CrossRef]
- Van Der Werff, H.; Van Der Meer, F. Sentinel-2A MSI and Landsat 8 OLI provide data continuity for geological remote sensing. Remote Sens. 2016, 8, 883. [Google Scholar] [CrossRef] [Green Version]
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Volume 9351, pp. 234–241. [Google Scholar] [CrossRef] [Green Version]
- Poudel, R.P.; Liwicki, S.; Cipolla, R. Fast-SCNN: Fast semantic segmentation network. In Proceedings of the 30th British Machine Vision Conference (BMVC), Cardiff, UK, 9–12 September 2019. [Google Scholar]
- Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018. [Google Scholar]
- Wang, J.; Sun, K.; Cheng, T.; Jiang, B.; Deng, C.; Zhao, Y.; Liu, D.; Mu, Y.; Tan, M.; Wang, X.; et al. Deep High-Resolution Representation Learning for Visual Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2020. [Google Scholar] [CrossRef] [Green Version]
- Dinerstein, E.; Olson, D.; Joshi, A.; Vynne, C.; Burgess, N.D.; Wikramanayake, E.; Hahn, N.; Palminteri, S.; Hedao, P.; Noss, R.; et al. An Ecoregion-Based Approach to Protecting Half the Terrestrial Realm. BioScience 2017, 67, 534–545. [Google Scholar] [CrossRef]
- Roy, D.P.; Wulder, M.A.; Loveland, T.R.; Woodcock, C.E.; Allen, R.G.; Anderson, M.C.; Helder, D.; Irons, J.R.; Johnson, D.M.; Kennedy, R.; et al. Landsat-8: Science and product vision for terrestrial global change research. Remote Sens. Environ. 2014, 145, 154–172. [Google Scholar] [CrossRef] [Green Version]
- Mallinis, G.; Mitsopoulos, I.; Chrysafi, I. Evaluating and comparing sentinel 2A and landsat-8 operational land imager (OLI) spectral indices for estimating fire severity in a mediterranean pine ecosystem of Greece. GISci. Remote Sens. 2018, 55, 1–18. [Google Scholar] [CrossRef]
- Farasin, A.; Colomba, L.; Garza, P. Double-step U-Net: A deep learning-based approach for the estimation ofwildfire damage severity through sentinel-2 satellite data. Appl. Sci. 2020, 10, 4332. [Google Scholar] [CrossRef]
- BC Wildfire Service. Wildfires of Note—Elephant Hill (K20637). Available online: http://bcfireinfo.for.gov.bc.ca/hprScripts/WildfireNews/OneFire.asp?ID=620 (accessed on 3 February 2021).
- Matthews, J.A. CORINE land-cover map. In Encyclopedia of Environmental Change; SAGE Publications: Thousand Oaks, CA, USA, 2014. [Google Scholar] [CrossRef]
- Lutes, D.C.; Keane, R.E.; Caratti, J.F.; Key, C.H.; Benson, N.C.; Gangi, L.J. FIREMON: Fire effects monitoring and inventory system. In USDA Forest Service, Rocky Mountain Research Station, General Technical Report; U.S. Department of Agriculture, Forest Service, Rocky Mountain Research Station: Fort Collins, CO, USA, 2006. [Google Scholar] [CrossRef]
- Gascon, F.; Bouzinac, C.; Thépaut, O.; Jung, M.; Francesconi, B.; Louis, J.; Lonjou, V.; Lafrance, B.; Massera, S.; Gaudel-Vacaresse, A.; et al. Copernicus Sentinel-2A calibration and products validation status. Remote Sens. 2017, 9, 584. [Google Scholar] [CrossRef] [Green Version]
- Chastain, R.; Housman, I.; Goldstein, J.; Finco, M. Empirical cross sensor comparison of Sentinel-2A and 2B MSI, Landsat-8 OLI, and Landsat-7 ETM+ top of atmosphere spectral characteristics over the conterminous United States. Remote Sens. Environ. 2019, 221, 274–285. [Google Scholar] [CrossRef]
- Freund, Y.; Schapire, R.E. Experiments with a New Boosting Algorithm. In Proceedings of the 13th International Conference on Machine Learning, Bari, Italy, 3–6 July 1996; pp. 148–156. [Google Scholar]
- Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Q.; Liu, T.Y. LightGBM: A highly efficient gradient boosting decision tree. Adv. Neural Inf. Process. Syst. 2017, 2017, 3147–3155. [Google Scholar]
- Hardtke, L.A.; Blanco, P.D.; Del Valle, F.; Metternicht, G.I.; Sione, W.F. Semi-automated mapping of burned areas in semi-arid ecosystems using MODIS time-series imagery. Int. J. Appl. Earth Obs. Geoinf. 2015, 38, 25–35. [Google Scholar] [CrossRef]
- Imperatore, P.; Azar, R.; Calo, F.; Stroppiana, D.; Brivio, P.A.; Lanari, R.; Pepe, A. Effect of the Vegetation Fire on Backscattering: An Investigation Based on Sentinel-1 Observations. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 4478–4492. [Google Scholar] [CrossRef]
- Kato, A.; Thau, D.; Hudak, A.T.; Meigs, G.W.; Moskal, L.M. Quantifying fire trends in boreal forests with Landsat time series and self-organized criticality. Remote Sens. Environ. 2020, 237, 111525. [Google Scholar] [CrossRef]
- Ali, M. PyCaret: An Open Source, Low-Code Machine Learning Library in Python, PyCaret Version 2.3. 2020. Available online: https://pycaret.org/ (accessed on 11 April 2021).
- Bengio, Y.; Grandvalet, Y. No unbiased estimator of the variance of K-fold cross-validation. J. Mach. Learn. Res. 2004, 5, 1089–1105. [Google Scholar]
- Xie, Y.; Peng, M. Forest fire forecasting using ensemble learning approaches. Neural Comput. Appl. 2019, 31, 4541–4550. [Google Scholar] [CrossRef]
- Altman, N.S. An introduction to kernel and nearest-neighbor nonparametric regression. Am. Stat. 1992, 46, 175–185. [Google Scholar] [CrossRef] [Green Version]
- Zammit, O.; Descombes, X.; Zerubia, J. Burnt area mapping using Support Vector Machines. For. Ecol. Manag. 2006, 234, S240. [Google Scholar] [CrossRef]
- Dutta, R.; Das, A.; Aryal, J. Big data integration shows Australian bush-fire frequency is increasing significantly. R. Soc. Open Sci. 2016, 3. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Seydi, S.T.; Akhoondzadeh, M.; Amani, M.; Mahdavi, S. Wildfire damage assessment over australia using sentinel-2 imagery and modis land cover product within the google earth engine cloud platform. Remote Sens. 2021, 13, 220. [Google Scholar] [CrossRef]
- Belgiu, M.; Drăgu, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
- Gibson, R.; Danaher, T.; Hehir, W.; Collins, L. A remote sensing approach to mapping fire severity in south-eastern Australia using sentinel 2 and random forest. Remote Sens. Environ. 2020, 240. [Google Scholar] [CrossRef]
- Ramo, R.; Chuvieco, E. Developing a Random Forest algorithm for MODIS global burned area classification. Remote Sens. 2017, 9, 1193. [Google Scholar] [CrossRef] [Green Version]
- Ramo, R.; García, M.; Rodríguez, D.; Chuvieco, E. A data mining approach for global burned area mapping. Int. J. Appl. Earth Obs. Geoinf. 2018, 73, 39–51. [Google Scholar] [CrossRef]
- Breiman, L. Bagging predictors. Mach. Learn. 1996, 24, 123–140. [Google Scholar] [CrossRef] [Green Version]
- Mottaghi, R.; Chen, X.; Liu, X.; Cho, N.G.; Lee, S.W.; Fidler, S.; Urtasun, R.; Yuille, A. The role of context for object detection and semantic segmentation in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 891–898. [Google Scholar] [CrossRef]
- Gong, K.; Liang, X.; Zhang, D.; Shen, X.; Lin, L. Look into Person: Self-supervised Structure-sensitive Learning and a new benchmark for human parsing. In Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar] [CrossRef] [Green Version]
- Cohen, J. A Coefficient of Agreement for Nominal Scales. Educ. Psychol. Meas. 1960, 20, 37–46. [Google Scholar] [CrossRef]
- Pontius, R.G.; Millones, M. Death to Kappa: Birth of quantity disagreement and allocation disagreement for accuracy assessment. Int. J. Remote Sens. 2011, 32, 4407–4429. [Google Scholar] [CrossRef]
- Kingma, D.P.; Ba, J.L. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015; pp. 1–15. [Google Scholar]
- Goyal, P.; Dollár, P.; Girshick, R.; Noordhuis, P.; Wesolowski, L.; Kyrola, A.; Tulloch, A.; Jia, Y.; He, K. Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour. arXiv 2017, arXiv:1706.02677. [Google Scholar]
- Ho, Y.; Wookey, S. The Real-World-Weight Cross-Entropy Loss Function: Modeling the Costs of Mislabeling. IEEE Access 2020, 8, 4806–4813. [Google Scholar] [CrossRef]
- Shore, J.E.; Johnson, R.W. Axiomatic Derivation of the Principle of Maximum Entropy and the Principle of Minimum Cross-Entropy. IEEE Trans. Inf. Theory 1980, 26, 26–37. [Google Scholar] [CrossRef] [Green Version]
- Bastarrika, A.; Chuvieco, E.; Martín, M.P. Mapping burned areas from landsat TM/ETM+ data with a two-phase algorithm: Balancing omission and commission errors. Remote Sens. Environ. 2011, 115, 1003–1012. [Google Scholar] [CrossRef]
ID | Country | Site | Start Date | End Date | Burned Area (ha) | REF Date | POST Date (S2) | POST Date (L8) | Width × Height in 20 m Res. |
---|---|---|---|---|---|---|---|---|---|
P1 | Portugal | Leiria District | 2017-06-17 | 2017-06-24 | 45,135 | 2017-06-24 | 2017-07-04 | ⋆ | 2240 × 2022 |
P2 | Spain | Donana | 2017-06-24 | 2017-06-30 | 8446 | 2017-08-08 | 2017-07-01 | ⋆ | 1120 × 1037 |
P3 | Spain | Encinedo | 2017-08-22 | 2017-09-01 | 9934 | 2017-10-10 | 2017-09-02 | ⋆ | 1247 × 550 |
P4 | Portugal | Castelo Branco | 2019-07-20 | 2019-07-23 | 9646 | 2019-07-24 | 2019-08-03 | ⋆ | 1204 × 849 |
P5 | Canada | Elephant Hill | 2017-07-06 | 2017-09-20 | 191,865 | 2018-05-14 | 2017-10-03 | ⋆ | 3839 × 4933 |
P6 | Sweden | Enskogen | 2018-07-14 | 2018-07-18 | 8980 | 2018-08-07 | 2018-10-07 | ⋆ | 816 × 861 |
T1 | Greece | Corinthia | 2020-07-22 | 2020-07-26 | 3282 | 2020-07-28 | 2020-07-29 | 2020-08-23 | 476 × 544 |
T2 | Sweden | Fågelsjö-Lillåsen | 2018-07-13 | 2018-07-27 | 3906 | 2018-07-27 | 2018-09-02 | 2018-10-16 | 409 × 409 |
T3 | Sweden | Trängslet | 2018-07-12 | 2018-07-27 | 3136 | 2018-07-27 | 2018-10-05 | 2018-10-07 | 421 × 385 |
Event | EMSR ID | Tran. (km) | Pop. (No.) | Ele. (m) | PRE IMG. Source (GSD) | POST IMG. Source (GSD) |
---|---|---|---|---|---|---|
Corinthia | 447 | 54.4 | 1501 | 36.9 to 718.3 | SPOT6 (1.5 m) | SPOT7 (1.5 m) |
Fågelsjö-Lillåsen | 298_05 | 44.3 | n.d. | 435.0 to 597.9 | Sentinel 2A/B (10 m) | SPOT6/7 (1.5 m) |
Trängslet | 298_03 | 10.1 | n.d. | 526.9 to 698.7 | Sentinel 2A/B (10 m) | SPOT6/7 (1.5 m) |
Event | Res./Ind. (ha) | Forests (ha) | Het. Agric. (ha) | Perm. Crops (ha) | Shrubs/Herb. (ha) | In. Wetlands (ha) |
---|---|---|---|---|---|---|
Corinthia | 14.3 | 1373.6 | 329.7 | 647.5 | 920.4 | n.d. |
Fågelsjö-Lillåsen | n.d. | 2661.0 | n.d. | n.d. | 985.8 | 240.1 |
Trängslet | n.d. | 1301.2 | n.d. | n.d. | 1085.3 | 749.8 |
Methods | Parameters | Define |
---|---|---|
SCALING | min scale factor = 0.7; max scale factor = 1.2; step = 0.1 | Resize the image by a random scale factor between the minimum and maximum, sampled in steps of 0.1. |
FLIP | ratio = 0.5 | Flip the input data horizontally and vertically, each at random with the given probability (i.e., ratio). |
ROTATION | 0°–75° | Rotate the image by a random angle between 0° and 75°. |
AREA | min = 0.2 | Crop a region of random area, at least 20% of the original area. |
ASPECT | min = 0.2 | Apply a random aspect ratio, at least 0.2 of the original aspect ratio. |
COLOR JITTER | brightness jitter ratio = 0.5; saturation jitter ratio = 0.5; contrast jitter ratio = 0.5 | Randomly change the brightness, contrast, and saturation of the image with the given jitter ratios. |
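A sketch of how the parameter sampling in the table above could be implemented. This is a NumPy-based illustration under our own function names; the paper's actual training pipeline is not specified here, and the rotation/crop geometry itself is omitted:

```python
import numpy as np

def sample_augmentation(rng):
    """Draw one set of augmentation parameters per the table above."""
    # SCALING: random factor from 0.7 to 1.2 in steps of 0.1
    scale = rng.choice(np.arange(0.7, 1.2001, 0.1))
    # FLIP: horizontal and vertical flips, each with probability 0.5
    flip_h = rng.random() < 0.5
    flip_v = rng.random() < 0.5
    # ROTATION: random angle between 0° and 75°
    angle = rng.uniform(0.0, 75.0)
    return scale, flip_h, flip_v, angle

def apply_flips(img, flip_h, flip_v):
    """Apply the sampled flips to an image array of shape (H, W, C)."""
    if flip_h:
        img = img[:, ::-1]
    if flip_v:
        img = img[::-1, :]
    return img

rng = np.random.default_rng(0)
scale, fh, fv, angle = sample_augmentation(rng)
img = np.arange(12).reshape(2, 3, 2)
out = apply_flips(img, fh, fv)
print(round(float(scale), 1), fh, fv, round(angle, 1), out.shape)
```

Because the same flips and crops must be applied to the image and its burned-area mask, a real pipeline would pass the sampled parameters to both.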
Corinthia
Model | Oe (%) | Ce (%) | AD (%) | QD (%) | mIoU | Kappa |
---|---|---|---|---|---|---|
U-Net | 7.71 | 4.35 | 3.38 | 1.41 | 0.90 | 0.90 |
HRNet | 4.43 | 7.98 | 3.57 | 1.55 | 0.90 | 0.89 |
Fast-SCNN | 17.05 | 5.53 | 3.90 | 4.90 | 0.83 | 0.81 |
DeepLabv3+ | 9.56 | 5.00 | 3.83 | 1.93 | 0.89 | 0.88 |
LightGBM | 17.00 | 1.38 | 0.93 | 6.37 | 0.86 | 0.84 |
KNN | 18.93 | 1.33 | 0.88 | 7.17 | 0.84 | 0.83 |
RF | 19.58 | 1.88 | 1.24 | 7.26 | 0.83 | 0.82 |
NBRotsu | 20.12 | 3.57 | 2.38 | 6.90 | 0.82 | 0.80 |
NBRem | 29.87 | 0.79 | 0.45 | 11.79 | 0.76 | 0.73 |
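The error measures in the tables (omission error Oe, commission error Ce, allocation disagreement AD, quantity disagreement QD, and mIoU) can likewise be written directly in terms of confusion-matrix counts. A sketch for the binary burned/unburned case, using the binary form of Pontius and Millones' disagreement decomposition; the counts are hypothetical:

```python
def error_measures(tp, fp, fn, tn):
    """Oe, Ce, AD, QD, and mIoU for a binary burned/unburned map."""
    total = tp + fp + fn + tn
    oe = fn / (tp + fn)            # omission error: missed burned pixels
    ce = fp / (tp + fp)            # commission error: false alarms
    # Binary Pontius-Millones split of total disagreement (fp+fn)/total
    qd = abs(fp - fn) / total      # quantity disagreement
    ad = 2 * min(fp, fn) / total   # allocation disagreement
    iou_burned = tp / (tp + fp + fn)
    iou_unburned = tn / (tn + fp + fn)
    miou = (iou_burned + iou_unburned) / 2
    return oe, ce, ad, qd, miou

# Hypothetical counts: 80 hits, 10 false alarms, 20 misses, 890 true negatives
print(error_measures(tp=80, fp=10, fn=20, tn=890))
```

AD and QD sum to the total disagreement rate, which is why they separate "wrong amount of burned area" from "right amount in the wrong place".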
Fågelsjö-Lillåsen (left four metric columns) and Trängslet (right four metric columns)
Model | Oe (%) | Ce (%) | mIoU | Kappa | Oe (%) | Ce (%) | mIoU | Kappa |
---|---|---|---|---|---|---|---|---|
U-Net | 20.91 | 4.26 | 0.77 | 0.75 | 10.36 | 12.61 | 0.82 | 0.80 |
HRNet | 22.10 | 4.78 | 0.76 | 0.73 | 5.33 | 13.91 | 0.84 | 0.82 |
Fast-SCNN | 11.82 | 8.47 | 0.81 | 0.79 | 2.99 | 20.24 | 0.79 | 0.77 |
DeepLabv3+ | 26.32 | 4.16 | 0.73 | 0.69 | 8.63 | 12.66 | 0.83 | 0.81 |
LightGBM | 44.41 | 3.56 | 0.60 | 0.52 | 6.95 | 6.04 | 0.89 | 0.89 |
KNN | 46.58 | 3.73 | 0.58 | 0.50 | 7.54 | 5.74 | 0.89 | 0.88 |
RF | 40.71 | 3.99 | 0.63 | 0.56 | 6.37 | 7.35 | 0.89 | 0.88 |
NBRotsu | 34.92 | 5.17 | 0.66 | 0.60 | 25.94 | 2.81 | 0.77 | 0.74 |
NBRem | 65.03 | 0.73 | 0.47 | 0.34 | 8.59 | 7.31 | 0.87 | 0.86 |
Corinthia (columns 2–4), Fågelsjö-Lillåsen (columns 5–7), and Trängslet (columns 8–10)
Model | Oe (%) | Ce (%) | Kappa | Oe (%) | Ce (%) | Kappa | Oe (%) | Ce (%) | Kappa |
---|---|---|---|---|---|---|---|---|---|
U-Net | 16.24 | 2.30 | 0.85 | 21.85 | 4.74 | 0.73 | 8.45 | 12.12 | 0.82 |
HRNet | 14.17 | 4.08 | 0.85 | 12.62 | 9.19 | 0.78 | 5.63 | 13.23 | 0.83 |
Fast-SCNN | 20.39 | 3.68 | 0.80 | 10.92 | 10.92 | 0.77 | 2.64 | 20.05 | 0.77 |
DeepLabv3+ | 20.50 | 2.13 | 0.81 | 40.31 | 2.87 | 0.57 | 12.51 | 10.68 | 0.80 |
LightGBM | 21.57 | 1.09 | 0.75 | 54.66 | 3.20 | 0.43 | 10.05 | 6.53 | 0.86 |
KNN | 29.07 | 1.01 | 0.74 | 56.46 | 3.56 | 0.41 | 10.84 | 6.15 | 0.85 |
RF | 28.86 | 1.14 | 0.74 | 49.07 | 3.32 | 0.48 | 9.21 | 7.52 | 0.85 |
NBRotsu | 22.53 | 4.24 | 0.78 | 49.05 | 6.42 | 0.46 | 25.94 | 3.47 | 0.74 |
NBRem | 38.92 | 0.46 | 0.65 | 72.16 | 1.62 | 0.26 | 8.62 | 9.47 | 0.84 |
Method | AD (%) | QD (%) | OA | mIoU | Kappa |
---|---|---|---|---|---|
dNBRotsu | 0.33 | 11.99 | 0.88 | 0.76 | 0.73 |
dNBRem | 2.18 | 2.51 | 0.95 | 0.91 | 0.90 |
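Several rows in the result tables correspond to thresholding the (d)NBR histogram with Otsu's method (the NBRotsu entries). A minimal NumPy sketch of that threshold selection; this is an illustration of the technique, not the paper's exact implementation:

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Pick the threshold maximizing between-class variance
    of the value histogram (Otsu, 1979)."""
    hist, edges = np.histogram(values, bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    p = hist / hist.sum()
    omega = np.cumsum(p)             # class-0 probability up to each bin
    mu = np.cumsum(p * centers)      # class-0 cumulative mean
    mu_t = mu[-1]                    # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return centers[np.nanargmax(sigma_b)]

# Synthetic bimodal dNBR-like values: unburned near 0.05, burned near 0.60
vals = np.r_[np.full(500, 0.05), np.full(100, 0.60)]
t = otsu_threshold(vals)
print((vals > t).sum())  # pixels classified as burned
```

In practice the histogram would come from the dNBR values of a post-fire scene, and pixels above the returned threshold would be labeled burned.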
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Hu, X.; Ban, Y.; Nascetti, A. Uni-Temporal Multispectral Imagery for Burned Area Mapping with Deep Learning. Remote Sens. 2021, 13, 1509. https://doi.org/10.3390/rs13081509