Mapping Forest Burn Extent from Hyperspatial Imagery Using Machine Learning
Abstract
1. Introduction
Background
2. Materials and Methods: Enhancing Burn Extent to Include Obscured Sub-Crown Fire
2.1. Image Collection
2.2. Creating Tree and Burn Raster
2.2.1. Machine Learning
SVM: Classifying Burned versus Unburned Surface Vegetation
MR-CNN: Classifying Tree Crowns
2.2.2. Combining Rasters
2.3. Inferring Surface Burn under Unburned Tree Crowns
2.3.1. Sub-Crown Burn Mapping
2.3.2. Removing Unburned Tree Noise
Calibrating the Unburned Pixel Cluster Threshold
2.4. Creating Validation Data
3. Results
3.1. Accuracy of Methods
3.1.1. Surface Burn Classification Results
3.1.2. Sub-Crown Burn Reclassification Results
3.2. Calibrating the Unburned Tree Noise Classifier
3.3. The Final Burn Extent Output
3.4. Establishment of Statistical Significance
4. Discussion
4.1. Effects of Shadows on Classification and Validation
4.2. How Training Data Affects the Support Vector Machine
4.3. Influence of Canopy Cover on Classifications
4.4. Improving the Results of the Unburned Tree Noise Classifier
4.5. Using and Deploying the Methodology
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A
Parameter Type | Parameter Value |
---|---|
GPU Count | 1 |
Number of images to train per GPU | 1
Number of training steps per epoch | 100
Number of epochs | 50
Number of validation steps at the end of every training epoch | 50 |
Backbone network structure | resnet101 |
FPN Pyramid backbone strides | [4,8,16,32,64] |
Size of fully connected layers in the classification graph | 1024 |
Size of the top-down layers used to build the feature pyramid | 256 |
Number of classification classes | 2 (Tree & Background)
Length of square anchor side in pixels | (32,64,128,256,512) |
Width to Height ratios of anchors at each cell | [0.5,1,2] |
Anchor stride | 1 (anchors created for every cell)
Non-max suppression threshold to filter RPN proposals | 0.7 |
How many anchors per image to use for RPN training | 256 |
ROIs kept after tf.nn.top_k and before non-maximum suppression | 6000 |
ROIs kept after non-maximum suppression—training | 2000 |
ROIs kept after non-maximum suppression—inference | 1000 |
Mask reduced resolution to reduce memory load (height, width) | (56,56) |
Input image resize shape | Crop
Input image resize minimum dimension | 1024
Input image resize maximum dimension | 1024 |
Color channels per image | RGB |
Image mean pixel (RGB) | [123.7, 116.8, 103.9] |
Number of ROIs per image to feed to classifier/mask heads | 200 |
Fraction of positive ROIs used to train classifier/mask heads | 0.33
ROI Pool Size | 7 |
ROI Mask Pool Size | 14 |
Shape of output mask | [28,28] |
Maximum number of ground truth instances to use in one image | 100 |
Bounding box refinement standard deviation for RPN and final detections (RPN_BBOX_STD_DEV, BBOX_STD_DEV) | np.array([0.1, 0.1, 0.2, 0.2])
Max number of final detections | 100 |
Minimum probability value to accept a detected instance | 0.7 |
Non-maximum suppression threshold for detection | 0.3 |
Learning Rate | 0.001 |
Learning Momentum | 0.9 |
Weight decay regularization | 0.0001 |
Loss weights for more precise optimization of Mask R-CNN training | LOSS_WEIGHTS = {"rpn_class_loss": 1., "rpn_bbox_loss": 1., "mrcnn_class_loss": 1., "mrcnn_bbox_loss": 1., "mrcnn_mask_loss": 1.}
Use RPN ROIs or externally generated ROIs for training | Use RPN ROIs |
Train or freeze batch normalization layers | Freeze |
Gradient norm clipping | 5.0 |
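The parameter names in the table above follow the configuration attributes of the widely used Matterport Keras implementation of Mask R-CNN. As a hedged illustration only (the class name is an assumption, not the authors' code), the table could be expressed as a `Config` subclass:

```python
import numpy as np
from mrcnn.config import Config  # Matterport Mask R-CNN package

class TreeCrownConfig(Config):
    """Appendix A parameters expressed as a Matterport Config subclass (a sketch)."""
    NAME = "tree_crown"                        # illustrative configuration name
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1
    STEPS_PER_EPOCH = 100
    VALIDATION_STEPS = 50
    BACKBONE = "resnet101"
    BACKBONE_STRIDES = [4, 8, 16, 32, 64]      # FPN pyramid backbone strides
    FPN_CLASSIF_FC_LAYERS_SIZE = 1024
    TOP_DOWN_PYRAMID_SIZE = 256
    NUM_CLASSES = 2                            # tree + background
    RPN_ANCHOR_SCALES = (32, 64, 128, 256, 512)
    RPN_ANCHOR_RATIOS = [0.5, 1, 2]
    RPN_ANCHOR_STRIDE = 1                      # anchors created for every cell
    RPN_NMS_THRESHOLD = 0.7
    RPN_TRAIN_ANCHORS_PER_IMAGE = 256
    PRE_NMS_LIMIT = 6000
    POST_NMS_ROIS_TRAINING = 2000
    POST_NMS_ROIS_INFERENCE = 1000
    MINI_MASK_SHAPE = (56, 56)                 # reduced mask resolution
    IMAGE_RESIZE_MODE = "crop"
    IMAGE_MIN_DIM = 1024
    IMAGE_MAX_DIM = 1024
    MEAN_PIXEL = np.array([123.7, 116.8, 103.9])
    TRAIN_ROIS_PER_IMAGE = 200
    ROI_POSITIVE_RATIO = 0.33
    POOL_SIZE = 7
    MASK_POOL_SIZE = 14
    MASK_SHAPE = [28, 28]
    MAX_GT_INSTANCES = 100
    RPN_BBOX_STD_DEV = np.array([0.1, 0.1, 0.2, 0.2])
    BBOX_STD_DEV = np.array([0.1, 0.1, 0.2, 0.2])
    DETECTION_MAX_INSTANCES = 100
    DETECTION_MIN_CONFIDENCE = 0.7
    DETECTION_NMS_THRESHOLD = 0.3
    LEARNING_RATE = 0.001
    LEARNING_MOMENTUM = 0.9
    WEIGHT_DECAY = 0.0001
    USE_RPN_ROIS = True
    TRAIN_BN = False                           # freeze batch normalization layers
    GRADIENT_CLIP_NORM = 5.0
```

Note that the epoch count (50) is not a `Config` attribute in this implementation; it would be passed to the training call, e.g. `model.train(..., epochs=50)`.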
Appendix B
Number of Classes | 2 (Burn, Unburn)
Maximum Number of Samples | 500
SVM Type | C_SVC
Kernel Type | RBF
Average Cross-Validation Rate | 0.9232 ± 0.0394
Average Gamma | 17.1066 ± 9.6475
Average Cost (C) | 11,636.6961 ± 7261.51
Average Number of Support Vectors | 1882 ± 1752.8244
Fire | Section | Cross-Validation Rate | Gamma | Cost (C) | Total Support Vectors |
---|---|---|---|---|---|
Mesa | Quad 0 | 0.90200 | 4.00000 | 23,170.47501 | 702
 | Quad 1 | 0.94733 | 2.00000 | 32,768.00000 | 433
 | Quad 2 | 0.97233 | 11.31371 | 5792.61875 | 242
 | Quad 3 | 0.94840 | 2.00000 | 16,384.00000 | 348
 | Average | 0.94252 | 4.82843 | 19,528.77344 | 1725
 | Standard Deviation | 0.02544 | 3.83227 | 9837.56392 | 170.33405
Cottonwood | Quad 0 | 0.87025 | 32.00000 | 11,585.23750 | 1250
 | Quad 1 | 0.89700 | 22.62742 | 23,170.47501 | 1053
 | Quad 2 | 0.87075 | 32.00000 | 23,170.47501 | 1249
 | Quad 3 | 0.87925 | 32.00000 | 11,585.23750 | 1228
 | Average | 0.87931 | 29.65685 | 17,377.85625 | 4780
 | Standard Deviation | 0.01082 | 4.05845 | 5792.61875 | 82.45302
Corner | | 0.97800 | 22.62742 | 1448.15469 | 277
Hoodoo | | 0.89300 | 11.31371 | 8192.00000 | 746
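For illustration only, the Appendix B hyperparameters can be plugged into a C-SVC classifier with an RBF kernel. The sketch below uses scikit-learn with placeholder training data; it is not the authors' implementation, and the C and gamma values shown are the Mesa Quad 0 entries from the table above.

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder training data: per-pixel RGB feature vectors with burn labels.
# A real run would sample labeled pixels from the orthomosaic, up to the
# 500-sample maximum listed above.
rng = np.random.default_rng(0)
X_train = rng.random((500, 3))      # assumed RGB features scaled to [0, 1]
y_train = rng.integers(0, 2, 500)   # 0 = unburned, 1 = burned (placeholder labels)

# C-SVC with an RBF kernel, using the Mesa Quad 0 hyperparameters.
clf = SVC(kernel="rbf", C=23170.47501, gamma=4.0)
clf.fit(X_train, y_train)
print(clf.n_support_.sum())         # total support vectors (cf. 702 in the table)
```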
Input (SVM Burn Class & Tree Crown Class) | Output |
---|---|
Unburn & Surface | Surface |
Burn & Surface | Burn |
Unburn & Canopy | Canopy |
Burn & Canopy | Canopy |
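A minimal NumPy sketch of these combination rules, assuming the SVM burn raster and the MR-CNN tree-crown raster are co-registered arrays; the class codes are assumptions for illustration, since the table defines only the rules, not the encoding.

```python
import numpy as np

# Assumed class codes (illustrative).
UNBURN, BURN = 0, 1                           # SVM surface burn raster
SURFACE, CANOPY = 0, 1                        # tree-crown raster
OUT_SURFACE, OUT_BURN, OUT_CANOPY = 0, 1, 2   # combined raster

def combine_rasters(burn: np.ndarray, crown: np.ndarray) -> np.ndarray:
    """Apply the combination rules above: crown pixels become canopy
    regardless of burn class; surface pixels keep their burn label."""
    out = np.full(burn.shape, OUT_SURFACE, dtype=np.uint8)
    out[(burn == BURN) & (crown == SURFACE)] = OUT_BURN
    out[crown == CANOPY] = OUT_CANOPY
    return out
```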
Human Classification | Computer Classification | Result |
---|---|---|
Unburned | Surface | True Negative |
Unburned | Canopy | True Negative |
Unburned | Burned | False Positive |
Burned | Surface | False Negative |
Burned | Canopy | False Negative |
Burned | Burned | True Positive |
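Under this mapping, both Surface and Canopy count as unburned predictions when scored against the human classification; only Burned counts as a burned prediction. A small sketch of the metric definitions used in the results tables that follow:

```python
def burn_metrics(tp: int, tn: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Accuracy, specificity, and sensitivity from the confusion counts above."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    specificity = tn / (tn + fp)   # unburned pixels correctly left unburned
    sensitivity = tp / (tp + fn)   # burned pixels correctly mapped as burned
    return accuracy, specificity, sensitivity
```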
Classifier | Accuracy | Specificity | Sensitivity |
---|---|---|---|
Surface Burn Classification | 77.6% | 95.3% | 59.4% |
Sub-Crown Burn Reclassification | 77.6% | 95.3% | 59.3% |
Calibration Data Averages
Threshold | Accuracy | Specificity | Sensitivity |
---|---|---|---|
SVM | 50.6% | 99.5% | 7.5% |
0 | 50.6% | 99.5% | 7.9% |
100 | 69.7% | 98.8% | 43.6% |
200 | 75.4% | 96.2% | 56.2% |
400 | 77.3% | 95.5% | 60.4% |
800 | 81.5% | 95.4% | 68.6% |
1600 | 83.9% | 94.3% | 74.0% |
2400 | 85.0% | 92.8% | 77.4% |
3200 | 86.1% | 91.4% | 80.8% |
4000 | 86.9% | 91.4% | 82.3% |
4800 | 88.0% | 91.4% | 84.3% |
5600 | 88.6% | 91.0% | 86.0% |
6400 | 88.6% | 91.0% | 86.0% |
7200 | 88.9% | 91.0% | 86.5% |
8000 | 88.9% | 91.0% | 86.5% |
8800 | 88.9% | 91.0% | 86.5% |
9600 | 88.9% | 91.0% | 86.5% |
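The calibration sweep above shows accuracy and sensitivity rising with the cluster-size threshold until both plateau near 88.9% and 86.5%. Below is a hedged sketch of the reclassification step the threshold controls, assuming connected clusters of unburned pixels smaller than the threshold are treated as unburned crowns over a burned surface; scipy.ndimage is an assumed tooling choice, not necessarily the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def reclassify_unburned_noise(burn_mask: np.ndarray, threshold: int) -> np.ndarray:
    """burn_mask: boolean raster, True = burned.
    Flip connected unburned clusters smaller than `threshold` pixels to burned."""
    unburned = ~burn_mask
    labels, n_clusters = ndimage.label(unburned)   # 4-connectivity by default in 2D
    sizes = ndimage.sum(unburned, labels, range(1, n_clusters + 1))
    small_labels = np.nonzero(sizes < threshold)[0] + 1   # cluster labels start at 1
    out = burn_mask.copy()
    out[np.isin(labels, small_labels)] = True
    return out
```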
Classifier | Accuracy | Specificity | Sensitivity |
---|---|---|---|
Surface Burn Classification | 77.6% | 95.3% | 59.4% |
Unburned Tree Noise + Sub-Crown Burn Reclassifications | 86.7% | 94.6% | 77.7% |
Average Difference | +9.1 pp | −0.6 pp | +18.3 pp
Fire | Classification | Accuracy | Specificity | Sensitivity |
---|---|---|---|---|
Hoodoo | Initial Surface Burn Classification | 81.9% | 99.4% | 66.7% |
Noise & Sub-Crown Reclassifications | 99.4% | 99.3% | 99.5% | |
Corner | Initial Surface Burn Classification | 65.0% | 96.0% | 29.2% |
Noise & Sub-Crown Reclassifications | 70.0% | 95.8% | 40.2% | |
Cottonwood | Initial Surface Burn Classification | 84.6% | 96.2% | 71.3% |
Noise & Sub-Crown Reclassifications | 92.6% | 95.8% | 88.9% | |
Mesa | Initial Surface Burn Classification | 79.0% | 89.5% | 70.3% |
Noise & Sub-Crown Reclassifications | 84.6% | 87.6% | 82.1% | |
Average | Initial Surface Burn Classification | 77.6% | 95.3% | 59.4% |
Noise & Sub-Crown Reclassifications | 86.7% | 94.6% | 77.7% |
Fire | Accuracy Change (pp) | Specificity Change (pp) | Sensitivity Change (pp) |
---|---|---|---|
Hoodoo | +17.6 | −0.1 | +32.9 |
Corner | +5.0 | −0.2 | +11.0 |
Cottonwood | +8.1 | −0.3 | +17.6 |
Mesa | +5.6 | −1.9 | +11.8 |
Average Difference | +9.1 | −0.6 | +18.3 |