Article

The Spatially Adaptable Filter for Error Reduction (SAFER) Process: Remote Sensing-Based LANDFIRE Disturbance Mapping Updates

1 ASRC Federal Data Solutions Contractor to U.S. Geological Survey (USGS), Earth Resources Observation and Science (EROS) Center, Sioux Falls, SD 57198, USA
2 KBR Contractor to U.S. Geological Survey, Earth Resources Observation and Science (EROS) Center, Sioux Falls, SD 57198, USA
3 U.S. Geological Survey, Earth Resources Observation and Science (EROS) Center, Sioux Falls, SD 57198, USA
* Authors to whom correspondence should be addressed.
Current address: Lynker at NOAA/NWS/NCEP/EMC, 5830 University Research Ct., College Park, MD 20740, USA.
Submission received: 22 December 2023 / Revised: 29 January 2024 / Accepted: 31 January 2024 / Published: 8 February 2024
(This article belongs to the Special Issue Remote Sensing of Wildfire: Regime Change and Disaster Response)

Abstract
LANDFIRE (LF) has been producing periodic spatially explicit vegetation change maps (i.e., LF disturbance products) across the entire United States since 1999 at a 30 m spatial resolution. These disturbance products incorporate data produced by various fire programs, field-mapped vegetation and fuel treatment activity (i.e., event) submissions from various agencies, and disturbances detected by the U.S. Geological Survey Earth Resources Observation and Science (EROS)-based Remote Sensing of Landscape Change (RSLC) process. The RSLC process applies a bi-temporal change detection algorithm to Landsat satellite-based seasonal composites to generate interim disturbances that are subsequently reviewed by analysts to reduce omission and commission errors before ingestion into LF’s disturbance products. The latency of the disturbance product is contingent on timely data availability and analyst review. This work describes the development and integration of the Spatially Adaptable Filter for Error Reduction (SAFER) process and other error and latency reduction improvements to the RSLC process. SAFER is a random forest-based supervised classifier that uses predictor variables derived from multiple years of pre- and post-disturbance Landsat band observations. Predictor variables include reflectance, indices, and spatial contextual information. Spatial contextual information, unique to each contiguous disturbance region, is parameterized as Z scores using differential observations of each disturbed region relative to its undisturbed neighbors. The SAFER process was prototyped for inclusion in the RSLC process over five regions within the conterminous United States (CONUS), and regional model performance was evaluated using 2016 data. Results show that the inclusion of the SAFER process increased the accuracies of the interim disturbance detections and thus has the potential to reduce the time needed for analyst review. Because LF does not track the time taken by each analyst for each tile, the relative effort saved was parameterized as the percentage of the 30 m pixels incorrectly classified in the interim disturbance detections that are correctly classified in the SAFER outputs. The SAFER prototype outputs showed that the relative analyst effort saved could be over 95%. The regional model performance evaluation showed that SAFER’s performance depended on the nature of the disturbances and the availability of cloud-free images relative to the time of disturbance. Accuracy estimates for CONUS were inferred by comparing the 2017 SAFER outputs to the 2017 analyst-reviewed data. As expected, the SAFER outputs had higher accuracies than the interim disturbances, and the CONUS-wide relative effort saved was over 92%. Regional variations in the accuracies and effort saved are discussed in relation to the vegetation and disturbance types in each region. SAFER is now operationally integrated into the RSLC process, and LANDFIRE is well poised for annual updates, contingent on the availability of data.

1. Introduction

Monitoring land cover and land use over time is fundamental in understanding our environment and environmental change [1,2,3,4]. Over the last five decades, several organizations have developed and commissioned space-based platforms to systematically observe and map the Earth with remote sensing (RS) technologies [5,6,7,8]. This has resulted in systematic and repeatable synoptic collections of measurements and imagery for monitoring and studying the biosphere [4,6,9,10,11], leading to the rapid development of RS methodologies to derive geophysical, biophysical, and environmental variables for inventorying and detecting subsequent change.
The Landscape Fire and Resource Management Planning Tools (LANDFIRE) program [12] (http://www.LANDFIRE.gov (accessed 6 February 2024)) relies on RS imagery-based change detection for monitoring and updating its products. LANDFIRE (LF) is an interagency collaboration that provides consistent and comprehensive vegetation and wildland fuel data for the entire United States [12,13]. LF products are used for strategic planning and/or tactical decision making [14] on wildfire incidents, resource management plans [15], fuel treatment projects [16,17], and many other nonfire applications (e.g., [18,19,20]).
LANDFIRE products have been updated every two years to account for changes on the landscape due to natural and anthropogenic disturbances. These products provide the national, regional, and local information needed by fire ecologists, researchers, land managers, and conservationists to update an assortment of predictive models and planning documents. Hence, high product accuracy and frequency of delivery, along with low product latency, are important.
LANDFIRE disturbance mapping is the first step in understanding where vegetation and fuels have changed. Disturbance layers not only inform how vegetation was initially altered, but also, when used in conjunction with vegetation and fuels transition rulesets, how vegetation has recovered through time. A major component of LF disturbance mapping is the use of RS imagery and change detection algorithms (CDA) via the U.S. Geological Survey Earth Resources Observation and Science (EROS)-based Remote Sensing of Landscape Change (RSLC) process. LANDFIRE’s RSLC disturbance mapping framework (detailed in Section 2.2) uses a modified form of the Multi-Index Integrated Change Analysis (MIICA) [21] algorithm. The interim RSLC MIICA detections are subsequently reviewed in detail by analysts to verify vegetation disturbances.
CDAs often use thresholds that define the cut-off between what is considered natural variability and what is considered change. Algorithmic errors are typically caused by the improper characterization of the natural variability of a land surface because of changes in local conditions, including viewing geometry, cloud/cloud shadow, and atmospheric effects. To resolve this, globally applicable CDAs use thresholds derived from large databases [22,23,24,25]. Physically based models (e.g., [26,27,28,29]) use modeled seasonal landcover phenology to estimate observed departures. The advantage of a physically based model is its generalized applicability, restricted only by the assumptions in the model.
A conservative threshold in the CDA followed by a commission error filter [25,30] has been found useful for minimizing errors of omission and commission. Such filters often use the temporal domain (temporal persistence/consistency of disturbance) to validate the change. For example, the Continuous Change Detection and Classification (CCDC) algorithm, operationally used in the Land Change Monitoring, Assessment, and Projection (LCMAP) program, requires consecutive observations to concur before a spectral break is validated as change [28]. Similarly, operational fire characterization algorithms [25,31] use both spatial and temporal filters for omission and commission error reduction. The Landscape Change Monitoring System (LCMS; [32]) uses an ensemble of CDAs to detect changes that are later refined using decision trees trained on a reference dataset to reduce error rates.
The motivation for this research stems from the fact that human interpreters use spatial contexts [33], intuitive and adaptive skills [34], and their expertise to successfully interpret satellite imagery. Although it is perhaps not currently possible to match human skills, computer algorithms can be adapted to mimic human interpretation by including contextual information and regionally derived models.
This work describes the Spatially Adaptable Filter for Error Reduction (SAFER) process, which integrates contextual information in a machine learning environment; spatial contextual information is parameterized as spatial change Z scores, and these scores are included alongside the pre- and post-disturbance observations as predictor variables. SAFER is trained using prior years of analyst-reviewed remotely sensed disturbance data. We show that the inclusion of the SAFER process increases classification accuracy and has the potential to reduce product latency by reducing the required analyst effort.

2. Data and Preprocessing

LANDFIRE ingests data from multiple sources (refer to Supplementary Table S1) for product generation and uses Landsat satellite series reflectance observations for remotely sensed disturbance detections. Landsat-based imagery is composited (refer to Section 2.2.1) and divided into 98 nonoverlapping tiles (Figure 1) [35]. Based on analysts’ experience, tiles that encompass a variety of vegetation types and disturbances were selected for prototyping and are highlighted in Figure 1.

2.1. The LF Disturbance Product

LF disturbance products provide raster maps of annual disturbances on the landscape along with attributes for each pixel, including the type of disturbance, time since disturbance, and severity of disturbance. The sources of LF disturbance data are separated into three categories: Fire Program data, Events, and RSLC.
On an annual basis, LF actively solicits and collects field data Events from federal, state, local, and nonprofit cooperating organizations. An Event is a management practice or natural occurrence that is larger than 0.02 acre (greater than 1/10th of a 30 m pixel) that has reportedly affected the vegetation on the landscape and been captured and characterized by the managing agency. Events include disturbances that are not always detected by RS techniques, like fuel treatments, chemical/herbicide treatments, and insect or disease infestations. Submitted Events typically include information on the type of disturbance, which can be distilled down to various causative agents such as fire, development, clearcutting, harvest, thinning, mastication, other mechanical, weather, insecticide, chemical, insects, disease, insect/disease, herbicide, and biological. Some change agents take precedence over other types of disturbance if they are overlapping; for example, harvest overrides fire if both occur in the same year. Submitted event data are collected and processed into a standardized format and maintained within the LF Reference Database (LFRDB) [12] as individual vector files. These vector data are later rasterized and included in the LF disturbance product.
Fire Program data are generated by the U.S. Forest Service (USFS) and U.S. Geological Survey (USGS) and include Monitoring Trends in Burn Severity (MTBS; [36,37]), Burned Area Emergency Response (BAER; [38]), and Rapid Assessment of Vegetation Condition After Wildfire (RAVG; [39]). The MTBS program is an interagency program, run by the USGS and the USFS, that assesses burned areas that are greater than 1000 acres in the western United States and greater than 500 acres in the eastern United States. MTBS relies on the differenced Normalized Burn Ratio (dNBR) to discern fire boundaries and includes an estimate of the severity of the fire along with other attributes for each fire [36]. The BAER products were designed to assess the immediate post-fire soil effects for the USFS [40]. Burned area delineation methods for BAER are similar to those of MTBS; however, the soil burn severity is ground-validated. BAER product requests are typically restricted to U.S. Department of Interior (DOI)- and USFS-managed lands, so spatial coverage is restricted. The RAVG product [41] maps the condition of the vegetation after a wildfire. The USFS creates the RAVG burn severity maps and translates those outputs to loss of canopy cover, basal area, and other metrics designed for restoration efforts. The RAVG program, like MTBS, maps fires that are greater than 1000 acres on USFS lands in the West and 500 acres in the East using both Landsat and Sentinel imagery before and after a fire. All these data are aggregated by calendar year to be included in the LF disturbance data.
LANDFIRE actively maps remotely sensed disturbances with the RSLC process. This process is described in detail in Section 2.2. Disturbance pixels detected using the RSLC process are subsequently aggregated with fire program data and submitted Events. In this pixel-based process, fire program data typically rank the highest in topological hierarchies. More information on the LF disturbance products is available at https://www.usgs.gov/media/videos/understanding-landfire-disturbance-suite (accessed on 6 February 2024).

2.2. The Remote Sensing of Landscape Change (RSLC) Process

2.2.1. Compositing

LF uses Landsat reflectance observations to generate seasonal composites from all available Landsat scenes across a seasonal date range [35], to which change detection algorithms are applied. Image compositing helps reduce the noise in the time series data by using only the most representative observations that are not contaminated by clouds or cloud shadows or affected by other atmospheric effects. During the prototyping phase of SAFER, Top of Atmosphere (TOA) reflectance data were used operationally; however, in 2017, LF began using atmospherically corrected Landsat surface reflectance (SR) data, and the SAFER process transitioned to SR accordingly. Currently, LF uses the USGS EROS Landsat Product Generation System (LPGS), which processes the raw Landsat data into a Level 1 product. These data are terrain-corrected with well-characterized radiometry that is intercalibrated across the different Landsat instruments. Next, the Level 1 products are converted into Level 2 products by the LPGS Level 2 (LPGS2) system. Once LPGS/LPGS2 processing is finished, the Level 2 products are made publicly available via EarthExplorer. LANDFIRE compositing is currently based on the 50th percentile of the available imagery within the tile-specific season. The tile-specific seasonal date ranges were informed by phenology data from the USA National Phenology Network [42] but were generally centered around day of year (DOY) 175 and DOY 250 (Supplementary Figure S1). Two seasonal composites (early and late) were generated per year.
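For illustration, the per-pixel percentile compositing step can be sketched in R as follows; the function name, array layout, and representation of cloud/cloud-shadow pixels as NA are assumptions for this sketch and not the operational LF implementation.

# Minimal sketch of per-pixel percentile compositing (assumptions noted above).
# `scenes` is assumed to be a rows x cols x n_acquisitions array of one Landsat band
# for the tile-specific seasonal window, with cloud/cloud-shadow pixels set to NA.
composite_band <- function(scenes, prob = 0.50) {
  apply(scenes, c(1, 2), function(v) {
    v <- v[!is.na(v)]                        # keep only clear observations
    if (length(v) == 0) return(NA_real_)     # no valid observation in the season
    quantile(v, probs = prob, names = FALSE) # 50th percentile, as used by LF
  })
}

# Example usage (illustrative object names):
# early_composite <- composite_band(early_scenes)
# late_composite  <- composite_band(late_scenes)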

2.2.2. Urban, Water, and Agricultural Masks Data

LF has less confidence in changes over urban (i.e., impervious), water/wetland, and crop landcovers and excludes them by using masks if the commission error is significant. The National Land Cover Database (NLCD) urban classes [10,43] are used to derive the urban mask for LF purposes. The surface water mask, including perennial snow and ice, is derived from the Level-3 Dynamic Surface Water Extent (DSWE) science product [44,45]. The agricultural mask is derived from the U.S. Department of Agriculture Cropland Data Layer (CDL) [46,47].

2.2.3. Automated Change Detection

An adapted form of the MIICA [21] is used by RSLC to detect vegetation disturbance. MIICA uses four differenced spectral indices that are calculated using two Landsat acquisitions near anniversary dates and encompassing the year of disturbance. The four differenced indices used in MIICA are the dNBR, the differenced Normalized Difference Vegetation Index (dNDVI), the Change Vector (CV), and the Relative Change Vector Maximum (RCVMAX). These indices are derived as
$$\mathrm{dNBR} = \frac{\rho_{nir}^{t2} - \rho_{swir2}^{t2}}{\rho_{nir}^{t2} + \rho_{swir2}^{t2}} - \frac{\rho_{nir}^{t1} - \rho_{swir2}^{t1}}{\rho_{nir}^{t1} + \rho_{swir2}^{t1}} \quad (1)$$

$$\mathrm{dNDVI} = \frac{\rho_{nir}^{t2} - \rho_{red}^{t2}}{\rho_{nir}^{t2} + \rho_{red}^{t2}} - \frac{\rho_{nir}^{t1} - \rho_{red}^{t1}}{\rho_{nir}^{t1} + \rho_{red}^{t1}} \quad (2)$$

$$\mathrm{CV} = \sum_{i=red}^{swir2} \left( \rho_{i}^{t2} - \rho_{i}^{t1} \right)^{2} \quad (3)$$

$$\mathrm{RCVMAX} = \sum_{i=red}^{swir2} \left[ \frac{\rho_{i}^{t2} - \rho_{i}^{t1}}{\max\left( \rho_{i}^{t2}, \rho_{i}^{t1} \right)} \right]^{2} \quad (4)$$
where $\rho_i$ is the reflectance of Landsat band i at times t1 (pre-disturbance) and t2 (post-disturbance). The MIICA algorithm calculates regional (tile-wise) means and standard deviations for these four indices and then sorts each index into ranks/classes according to the spectral departure from its regional mean, using the standard deviation as a unit measure. A vegetation disturbance is detected by MIICA if the pixel’s CV is greater than its tile-wise mean, its RCVMAX is greater than its tile-wise mean plus 3.0 times its standard deviation, and its dNDVI is less than its tile-wise mean.
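For illustration, the index calculations and the decision rule above can be sketched in R as follows; the band list (red, nir, swir1, swir2), the list-based data structure, and the function name are assumptions for this sketch and do not represent the operational MIICA code.

# Minimal sketch of the adapted MIICA decrease-in-biomass test (assumptions noted above).
# `t1` and `t2` are lists of band matrices (red, nir, swir1, swir2) from the pre- and
# post-disturbance seasonal composites of the same tile.
miica_decrease <- function(t1, t2) {
  bands <- c("red", "nir", "swir1", "swir2")   # "i = red ... swir2" in Equations (3)-(4)
  nbr   <- function(x) (x$nir - x$swir2) / (x$nir + x$swir2)
  ndvi  <- function(x) (x$nir - x$red)   / (x$nir + x$red)

  dnbr   <- nbr(t2)  - nbr(t1)                                               # Equation (1)
  dndvi  <- ndvi(t2) - ndvi(t1)                                              # Equation (2)
  cv     <- Reduce(`+`, lapply(bands, function(b) (t2[[b]] - t1[[b]])^2))    # Equation (3)
  rcvmax <- Reduce(`+`, lapply(bands, function(b)
              ((t2[[b]] - t1[[b]]) / pmax(t2[[b]], t1[[b]]))^2))             # Equation (4)

  # Tile-wise thresholds: CV above its mean, RCVMAX above its mean plus 3 standard
  # deviations, and dNDVI below its mean indicate a decrease in biomass.
  decrease <- cv     > mean(cv, na.rm = TRUE) &
              rcvmax > mean(rcvmax, na.rm = TRUE) + 3 * sd(rcvmax, na.rm = TRUE) &
              dndvi  < mean(dndvi, na.rm = TRUE)

  list(dnbr = dnbr, dndvi = dndvi, cv = cv, rcvmax = rcvmax, decrease = decrease)
}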
MIICA was optimized for mapping spectral changes over the conterminous United States (CONUS) for the NLCD program and has relatively high regional commission and omission errors for vegetation-related disturbances in LF tiles. To reduce omissions in the LF application, MIICA is applied to seasonal composites encompassing the year of the disturbance and the previous year, with thresholds defined using tile-wide summary statistics. These thresholds were optimized [21] to minimize errors of omission for operational use over CONUS. MIICA explicitly tests for decreases or increases in biomass using a series of conditional statements that evaluate combinations of spectral indices against set thresholds that were empirically derived in areas of diverse land cover types.

2.2.4. Analysts’ Review

The two seasonal MIICA outputs indicating a decrease in biomass are combined to create the RSLC annual interim disturbance product, which is visually examined by the analyst(s) to verify whether the observed spectral change is correctly classified as a vegetation-related disturbance [48]. A wide array of geospatial tools is available in the analysts’ toolbox, including prior- and subsequent-year MIICA detections, multiple years of observations as true and false color composites, spatially explicit differenced indices, and available high-resolution imagery for manually editing residual errors in the map. Human interpreters use spatial contextual information [33] and intuitive and adaptive skills [34], along with their expertise, to successfully interpret satellite imagery. LANDFIRE analysts are therefore crucial in ensuring accuracy. Analysts visually interpret LF tile-specific seasonal dNBR images [36] to supplement vegetation disturbances that are missed or incorrectly identified by the seasonal MIICA. These supplemental detections are only applied where MIICA did not accurately map a disturbance or missed it entirely. LANDFIRE analysts frequently review each other’s work as a quality control and intercalibration measure to minimize errors and improve consistency.

3. Methods

This section describes the development and integration of the SAFER process into the heritage RSLC process (Figure 2). The RSLC interim disturbances reviewed by the analysts not only include the detections from MIICA applied to the seasonal composites but may also include optional supplemental disturbance detections. Supplemental detections are disturbance detections made using liberal thresholds on each of the seasonal dNBR images. The supplemental disturbance detection threshold is defined as the sum of the mean and a fractional multiple of the standard deviation of the dNBR distribution in the tile, excluding regions already detected by MIICA as disturbed vegetation. The parameters defining these fractional multiples are optionally set by the analyst based on prior experience.
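For illustration, the supplemental detection step can be sketched in R as follows, assuming the MTBS-style dNBR convention in which vegetation loss yields positive values; the function name and the default fractional multiple are assumptions, since the operational multiple is chosen by the analyst.

# Minimal sketch of the optional supplemental detections: liberal thresholding of a
# seasonal dNBR image outside regions already flagged by MIICA. If dNBR follows the
# Equation (1) convention (post minus pre), the inequality below would be reversed.
supplemental_detections <- function(dnbr, miica_mask, frac = 1.5) {  # frac is illustrative
  bg  <- dnbr[!miica_mask]                        # dNBR distribution excluding MIICA detections
  thr <- mean(bg, na.rm = TRUE) + frac * sd(bg, na.rm = TRUE)
  dnbr > thr & !miica_mask                        # supplemental disturbance pixels
}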
SAFER was prototyped over select tiles (Figure 1). SAFER is a supervised random forest classifier that develops statistical classification rules using training data [49]. The random forest model is trained using the previous year’s analyst-reviewed disturbance data and a suite of predictor variables (Section 3.1.1). The predictor variables are defined only at the RSLC interim disturbance locations, over the multitemporal periods encompassing the disturbances. The trained random forest model is applied to the subsequent mapping year’s predictor datasets (Figure 2). The model implicitly assumes that the relationships between the predictor variables and disturbances in a given LF tile are representative of the subsequent year in the same tile.

3.1. Random Forest Model

Random forests [50,51] are an ensemble form of decision tree classifications, where several trees are grown by recursively partitioning a random subset of the training data predictor variables into more homogeneous subsets, referred to as nodes. The predictor variable values (Section 3.1.1; Table 1) are only extracted for the RSLC interim disturbance detection pixels’ locations and include multiple years of remotely sensed reflectance observations, their corresponding indices, and spatial change Z scores (described in Section 3.2). The random forest classifier is used because it is an established supervised classifier that can accommodate nonmonotonic and nonlinear relationships between predictor variables, makes no assumptions concerning the statistical distributions of the variables, and can handle correlated variables [50,51].
Training data are generated using the prior year’s analyst-reviewed disturbances and are screened using a landcover mask (Section 2.2.2) to exclude regions classified as urban, perennial snow and ice, water/wetlands, and agricultural lands. The random forest classification algorithm also allows for the exploration of variable importance to the model using internal Gini impurity estimates. Predictor variables that yield a larger decrease in Gini impurity rank relatively higher in importance.
The ranger implementation of the random forest machine learning algorithm [52] in the R programming environment was specifically chosen for its ability to use multiple cores and handle large data volumes in the High-Performance Computing (HPC) environment. Default random forest parameter settings were used. All available data from the year prior to the mapping year were used for training the random forest. This implicitly assumes similar class distributions over consecutive years and can maximize accuracy [53].
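For illustration, the tile-wise training and prediction step can be sketched with the ranger package as follows; the data frame names, the class label column, the thread count, and the use of impurity-based importance are assumptions for this sketch rather than the operational configuration.

# Minimal sketch of the tile-wise SAFER random forest step (assumptions noted above).
# `train_df` holds the prior year's predictor variables plus the analyst-reviewed class
# label `disturbed` (a factor); `predict_df` holds the mapping year's predictors
# extracted at the RSLC interim disturbance locations.
library(ranger)

rf <- ranger(
  dependent.variable.name = "disturbed",
  data        = train_df,
  num.trees   = 500,            # ranger default settings, as in the text
  importance  = "impurity",     # Gini impurity-based variable importance
  num.threads = 16              # multi-core use in the HPC environment (illustrative)
)

rf$prediction.error                                      # internal out-of-bag (OOB) error
sort(rf$variable.importance, decreasing = TRUE)[1:30]    # top-ranked predictors (cf. Figure 4)

safer_class <- predict(rf, data = predict_df)$predictions   # SAFER output classes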

3.1.1. Multitemporal Predictor Variables

Predictor variables were developed from a suite of Landsat reflectance bands, their corresponding indices, and temporally differenced indices for each LF tile (Table 1). Four consecutive years of Landsat observations, comprising the year of disturbance and the three prior years, are used to generate the predictor variables for both training and prediction. For example, to map a 2016 disturbance, Landsat observations for the years 2016, 2015, 2014, and 2013 are used (Table 1). Data from 2015, 2014, and 2013 are used to train the random forest, which is then applied to the 2016, 2015, and 2014 data. The Landsat observations used were consistent across all four years: TOA reflectance during the development of SAFER and SR for operational purposes. All reflectance bands, from the visible to the shortwave infrared [54], contain useful vegetation-related information, but some bands are more sensitive to vegetation-related changes [55] than others. The Landsat 8 coastal aerosol band (band 1) was not used because it is strongly affected by the atmosphere [56]. In addition to reflective observations, LF’s predictor variables included a suite of indices and difference indices that are known for their ability to identify vegetation-related disturbances [55,57,58,59]. Spatially explicit change Z scores [60] are also computed (Section 3.1.2) and included as predictor variables.
The original random forest [50] was designed for a large number of predictor variables. The number of variables considered at each split is set internally by the random forest algorithm to the square root of the number of predictors, which limits the variables used in any particular split and thereby reduces the probability of over-fitting with deep trees. Because each split is independent of the other variables, including a combination of variables helps generate exploratory depth in the model. However, this may lead to a more complex model and will bias internal variable importance estimates. The inclusion of such correlated variables may also decrease computational efficiency and model parsimony [61,62]; however, the highest output accuracy was achieved when all the variables were used, including ones with low explanatory power. By design, the modularly scripted software framework of the RSLC and SAFER processes allows for the easy inclusion or exclusion of additional layers as predictor variables for future mapping efforts. This modeling framework also allows for easy adaptation from the currently used random forest machine learning algorithm to algorithms like XGBoost [63], deep learning [64], or state-of-the-art artificial intelligence-based algorithms [65] in the future.

3.1.2. Spatial Change Z Scores (SCZs) Predictor Variable

The predictor variables described above characterize per-pixel observations over the temporal domain for each of the remotely sensed disturbance detections; however, information from the spatial domain has so far been ignored. Spatial information and disturbance-specific contextual information have previously been used by remote sensing algorithms to define disturbance types [61,66]. Classification algorithms have also used textural information, which is sensitive to the spatial kernel size [67] but computationally simple. Deriving spatial contextual information is more computationally intensive than kernel-based approaches because each disturbed region has its own unique contextual neighbors. Despite their computational complexity, spatial contextual algorithms have successfully been used to reduce errors of commission [22,30] because of the additional information that they bring in. An implicit assumption is that the immediately neighboring undisturbed regions are representative of the conditions prior to the disturbance. Spatially explicit change Z scores [60] are then computed as spatial change probabilities between a pixel inside a disturbed region and the statistical summary of the immediate undisturbed region around the given disturbance region.
To compute the spatial change Z scores (SCZs), each contiguous (i.e., queen’s case) cluster of interim RSLC-detected changed pixels is first assigned a unique cluster id k. For each cluster k, an encompassing unique background ring j, n pixel(s) wide, is generated. The background ring includes only immediately adjacent undisturbed pixels, both inside (islands of undisturbed regions) and outside a given disturbance cluster. The background ring is designated with the same id as the disturbance cluster. If the background ring intersects with any other nearby ring(s), the intersecting pixels are flagged as invalid. Growing larger rings decreases the chances of the ring pixels being representative of undisturbed conditions, while keeping the ring small increases the chances of including disturbed pixels. The RSLC interim process is designed to liberally detect disturbances; hence, background rings were defined around each disturbance cluster without any buffer pixels in between. To iteratively select the background ring, two such background rings, (n = 1) and (n = 2) pixels wide, are grown independently around each cluster, and the mean and standard deviation are computed from the spatial reference image when there are at least five valid background pixel values per ring. The spatial reference image (e.g., dNBR) is expected to exhibit the contrast between the disturbed and nondisturbed regions. A valid background ring includes only pixels without clouds or cloud shadows that are unique to one disturbance. The SCZ is computed as follows:
$$SCZ_{i,k}^{Reference} = \frac{Reference_{i,k} - \mu_{k,j}^{Reference}}{\sigma_{k,j}^{Reference}} \quad (5)$$
where $SCZ_{i,k}$ is the Z score of pixel i belonging to disturbance region k, Reference is the spatial reference image, and $\mu_{k,j}^{Reference}$ and $\sigma_{k,j}^{Reference}$ are the mean and standard deviation, respectively, of the background pixel values of the jth ring in the spatial reference image. The background ring j with the lower coefficient of variation σ/μ is selected from the two rings. The rationale for this selection is that the coefficient of variation should be lower for the background ring that mainly encompasses homogeneous undisturbed regions. Figure 3 illustrates this procedure. In the case of insufficient (<5) valid background pixels, a backup algorithm is used that computes tile-wide statistics, defined using the mean and standard deviation derived over all one-pixel (n = 1)-wide background rings over all disturbance regions in the spatial reference image. In the RSLC change detection schema, which includes the two anniversary seasonal datasets, two spatial change Z scores are estimated, one for each epoch, and then combined using a function that maximizes the probability of loss of vegetation to generate the annual spatial change Z scores. Three such annual SCZ predictor variables are computed using the dNBR, dNDVI, and dNDMI (differenced Normalized Difference Moisture Index) as spatial reference images. The dNDMI is computed in the same way as the dNBR (1), but using Landsat 8 band 6 (Shortwave Infrared 1) instead of band 7 (Shortwave Infrared 2). The SCZ algorithm modules were implemented in C++ for efficient multithreaded applications deployed in an HPC environment. R scripts were used to automate the processes.
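For illustration, the ring construction and Z score computation for a single cluster can be sketched in R as follows (the operational modules are implemented in C++); the queen’s-case dilation helper and the precomputed cluster labels are assumptions, and the sketch omits the invalidation of pixels shared by neighboring rings and the tile-wide fallback statistics.

# Minimal sketch of the spatial change Z score (SCZ) computation for one cluster.
# `clusters` is an integer matrix of contiguous (queen's case) disturbance-cluster ids
# (NA where undisturbed), e.g., from a connected-components labeling step; `reference`
# is the spatial reference image (e.g., dNBR) as a matrix of the same dimensions.

dilate_queen <- function(mask) {
  # one-pixel, 8-neighbour (queen's case) dilation of a logical matrix
  out <- mask
  nr <- nrow(mask); nc <- ncol(mask)
  for (dr in -1:1) for (dc in -1:1) {
    if (dr == 0 && dc == 0) next
    shifted <- matrix(FALSE, nr, nc)
    rows <- max(1, 1 + dr):min(nr, nr + dr)
    cols <- max(1, 1 + dc):min(nc, nc + dc)
    shifted[rows, cols] <- mask[rows - dr, cols - dc]
    out <- out | shifted
  }
  out
}

scz_for_cluster <- function(k, clusters, reference, min_bg = 5) {
  inside    <- !is.na(clusters) & clusters == k
  disturbed <- !is.na(clusters)
  ring1 <- dilate_queen(inside) & !disturbed                  # n = 1 pixel wide ring
  ring2 <- dilate_queen(dilate_queen(inside)) & !disturbed    # n = 2 pixels wide ring
  best <- NULL
  for (ring in list(ring1, ring2)) {
    vals <- reference[ring]
    vals <- vals[!is.na(vals)]                   # valid (cloud-free) background pixels
    if (length(vals) < min_bg) next              # require at least five valid pixels
    cv <- abs(sd(vals) / mean(vals))             # coefficient of variation of the ring
    if (is.null(best) || cv < best$cv) best <- list(mu = mean(vals), sd = sd(vals), cv = cv)
  }
  if (is.null(best)) return(rep(NA_real_, sum(inside)))   # defer to tile-wide fallback
  (reference[inside] - best$mu) / best$sd                  # Equation (5), per disturbed pixel
}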

3.2. SAFER Evaluation

Accuracy assessments need independent and reliably accurate data sources for comparison [68,69]. The purpose of this work is to evaluate whether the SAFER process improves the disturbance detection accuracy of the interim inputs so that the analysts’ workload is reduced; it is not intended to be a validation study of the LANDFIRE disturbance suite. Three different ways of generating confusion matrices to parameterize the accuracy of disturbance mapping are used to characterize the SAFER process. Accuracy measures, including Cohen’s kappa [70], the overall accuracy, errors of omission, and errors of commission, are derived from the confusion matrices [68,70,71,72]. Although these accuracy metrics are commonly used, they do not capture the analysts’ effort of reviewing and correcting residual errors in the map. We address this with a simple metric, the relative effort saved: 100 × the number of pixels that are incorrectly classified in the RSLC interim detections but correctly classified (disturbed or undisturbed) in the SAFER outputs, divided by the total number of pixels incorrectly classified in the RSLC interim detections. This differs from the overall accuracy, which is defined as the ratio of correctly classified pixels (disturbed or undisturbed) to the total number of pixels in the study region. The relative effort saved ranges from 0% to 100%, with 0% indicating no agreement between the SAFER outputs and the analyst-reviewed disturbances over the interim errors and thus no reduction in effort, 100% indicating perfect agreement and a 100% reduction in effort, and intermediate values indicating intermediate levels of agreement. This relative effort score does not directly translate to time saved, but it provides a helpful proxy.
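Under this reading of the metric, the relative effort saved can be sketched in R as follows; the function and input names are illustrative, and each input is a per-pixel disturbance flag for the interim detections, the SAFER outputs, and the analyst-reviewed map, respectively.

# Minimal sketch of the relative-effort-saved metric, as read from the definition above.
# `interim`, `safer`, and `reviewed` are logical vectors (or matrices) of the same shape.
relative_effort_saved <- function(interim, safer, reviewed) {
  interim_wrong <- interim != reviewed                  # pixels the analyst would need to fix
  100 * sum(safer[interim_wrong] == reviewed[interim_wrong]) / sum(interim_wrong)
}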

3.2.1. SAFER Prototype Evaluation

The SAFER prototyping process is the test bed used to evaluate the feasibility of including the SAFER process within the existing RSLC framework. Accuracies and variable importance were estimated for model evaluation in the prototyping process only. Accuracy is estimated by comparing the outputs at different stages of the RSLC process to the final analyst-reviewed disturbances. Internal accuracies are parameterized using the confusion matrix created internally by the random forest’s Out of Bag (OOB) procedure [50], which uses the training samples left out of each bootstrap sample to provide an unbiased internal accuracy estimate. Random forest importance measures are also computed and used to interpret model performance. Accuracies of the RSLC interim disturbances and SAFER outputs are inferred from confusion matrices generated by comparing each of them to the corresponding year of analyst-reviewed disturbance data. Differences in the accuracies measured by the different methods are discussed vis-a-vis the vegetation type and nature of the disturbance.

3.2.2. 2017 SAFER Evaluation

The SAFER process was operationally applied over the whole CONUS to test its operational capability. Accuracies of the interim RSLC and SAFER outputs along with metrics for CONUS-wide relative effort are reported herein in both tabular form and spatially explicit maps. In this application, SAFER variable importance measures were not computed, and model parameters were not tuned to save computational time, but for future applications, studies are underway to optimize model performance.

4. Results

4.1. SAFER Prototype Accuracy Results

Table 2 lists the accuracy metrics, Cohen’s kappa and overall accuracy, for each of the study tiles used for prototyping. The classification accuracies were not uniformly high and were lower in regions known to have sparser vegetation. The kappa value is affected by the relative proportions of the classes [72], and hence, lower values were expected in regions with relatively smaller fractions of disturbance (Table 2). Hence, the relative accuracy values for corresponding tiles matter more than their individual absolute values.
The results over the prototyping tiles show that the SAFER outputs had a higher median kappa (~0.59) and overall accuracy (~99.76%) than the RSLC interim disturbances, which had a kappa of 0.16 and an overall accuracy of 95.50%. The results also indicate that SAFER can potentially reduce analyst effort by more than 95%. Of note, this saved effort does not translate directly into saved time for analysts, because they must still review all interim pixels. Nevertheless, with SAFER in the analysts’ toolbox (Section 2.2.4), one could expect higher mapping efficiencies and time savings as well.
The prototype results also show that, despite an improvement in accuracies, the increases were not uniform across tiles, and none of the tiles had a perfect accuracy score. Further, the kappa values estimated using the random forest’s internally generated contingency tables and those estimated against the analyst-reviewed disturbances were not always similar. For example, unlike tiles r01c02 and r08c12, which had similarly high kappa values for both the internal and SAFER estimates, tiles r02c15 and r06c03 had a high internal kappa but a relatively lower SAFER kappa. Tile r06c03 had a low training fraction, and as such, a low kappa was expected; despite this, its internal kappa was high, whereas its SAFER kappa was low and the relative effort saved was high at 99.11%. Tile r09c08 had relatively low internal and SAFER kappa values. Tiles r01c02, r02c15, r06c03, and r09c08 were also visually inspected for qualitative interpretation. Both the kappa and overall accuracies increased with SAFER; however, the SAFER outputs still contain errors and need analyst review. The relative analyst effort saved is fairly high at over 95% and indicates that including SAFER in the RSLC process can potentially increase the accuracy and reduce the latency of the disturbance products.
The top 30 most important variables to the random forest model during prototyping for tile-wise disturbance detection are illustrated in Figure 4. The complete list of variable importance for all variables is shown in Supplementary Figure S2. The differenced variables dNBR, dNDVI, and dNDMI generally ranked higher in importance than the other variables, which is likely due to their sensitivity in detecting vegetation-related disturbances [55], but they were not always the most influential variables in all tiles. In some tiles, single-DOY indices and TOA observations ranked the highest (e.g., tiles r06c03, r08c12, r09c08). The spatial change probability Z score variable was within the top 25 in three of the five prototyped tiles and generally exhibited a positive relationship with the difference indices. The fact that the difference indices and SCZs ranked low in some cases while single-date observations ranked high indicates that the disturbance signals in these tiles had poor spectral contrast. Tiles with a lower ranking of difference indices and SCZ scores indicate that the vegetation disturbance spectral signatures were not prominent or that the vegetation had recovered within the compositing period, thereby minimizing the spectral contrast in the pre- and post-image differences.
Tile r01c02 (Figure 5) had a high internal and SAFER kappa (Table 2). It lies in the northwestern part of the United States, where the dominant vegetation class is Tree (Figure 1), and most of the disturbances are fire-related or mechanical harvests. These disturbances leave a strong spectral contrast that lasts months to years and are reliably detected using interannual composite derivative data. In this tile, the difference indices of the late-season composite are the most important variables (Figure 4). The SAFER outputs closely match the LF-disseminated disturbance product. Tile r02c15 (Figure 6), which is from the northeastern United States and has disturbance types (fire and mechanical harvest) similar to those of tile r01c02, did not show a high SAFER kappa. The disturbances in this tile were smaller but were nevertheless expected to leave persistent disturbance scars that are readily captured by the seasonal composites. The dominant vegetation class in both the northwest and northeast tiles is Tree (Figure 1), so the difference indices could be expected to rank higher (Figure 4); however, in contrast to tile r01c02, tile r02c15 had a high internal kappa and a relatively low SAFER kappa. A closer examination of the tile illustrated in Figure 6 shows that scenes affected by the Landsat 7 scan line corrector (SLC) failure [73] resulted in substantial error. In addition, nondisturbance seasonal changes and other (non-Landsat 7) compositing artifacts make this area extremely difficult to map accurately. Despite these difficulties, the supervised algorithm was able to reject most of the commission errors. The analyst effort saved is lower (~88%) for tile r02c15 than for the other tiles (Table 2), as the SAFER process did not eliminate all SLC-off errors, and these errors had to be manually cleaned up by the analysts.
Figure 7 and Figure 8 illustrate tiles r06c03 and r09c08, respectively. These tiles each had relatively small training datasets for their tile-specific random forest model (Table 2), and the internal kappa was relatively higher than the SAFER kappa. The dominant vegetation types in r06c03 are Herb and Shrub (Figure 1), which made it difficult to distinguish between intact and disturbed vegetation. This is corroborated by the fact that the most important variables in this tile were single-date observations and not the difference indices. Figure 8 shows that most commission errors were eliminated by the supervised classification algorithm; however, certain disturbances were incorrectly discarded. The change products did not capture a new road under construction, but this was identified by an LF analyst. Such features with little vegetation can be missed by the automated processes, but usually, they are correctly identified during the analyst review and mapping. These residual errors requiring correction by LF analysts are minimal, as reflected by the relative analyst effort that is saved over these tiles (97%, Table 2).
LF’s use of analysts to review and correct the disturbance maps after the automated RSLC process produces a high-quality mapped product. For example, Figure 8 illustrates mapped disturbances for tile r09c08, where the dominant vegetation types are Herb and Shrub (Figure 1). This tile had a high internal kappa but low independent kappa, a reasonable finding given the relatively small number of training samples for the area. Like tile r06c03, the difference indices were not the most important variables in the model. The pre- and post-disturbance false color images for tile r09c08 (Figure 8) show strong spectral disturbance signals in the post-image, but similar contrast is seen over the undisturbed preimage as well, confounding change detection. Active efforts are underway at LANDFIRE to use different compositing procedures, including the use of different percentiles, e.g., the 20th and 90th, that may assist in better capturing stable and ephemeral disturbances.

4.2. 2017 SAFER Evaluation

The principal motivation for this work is to improve the accuracy and reduce the latency of data products that depend on LF disturbance mapping processes. The analysis of the prototype results indicates an increase in accuracy and a reduction in relative analyst effort, which implies a more efficiently produced disturbance product. Based on the prototype results, SAFER has been implemented in the LANDFIRE RSLC process. Presented below are the results of its first operational application, for 2017 disturbance mapping. Atmospherically corrected Landsat 2015–2018 surface reflectance was used to ensure consistent data across all years, including for training and for the predictive mapping of disturbances for the year 2017.
As expected from the prototype results, an increase in the accuracies of the SAFER outputs compared to the heritage RSLC outputs (Supplementary Table S3 and Figure 9) is seen. Similar to the prototyping results, the median accuracy was 0.11 (kappa) and 94.55% (overall) for the interim disturbance outputs and 0.47 (kappa) and 99.6% (overall) for the SAFER outputs. The median omission and commission errors for disturbances were 36.87% and 28.81% in the SAFER outputs, with a range of 5–99% depending on the tile and the fraction of disturbances. These results indicate that although the inclusion of the SAFER process increased accuracies, an analyst review is still required, albeit to a lesser extent, to ensure LANDFIRE accuracy standards. Figure 9 shows the kappa classification accuracy metrics for the interim RSLC disturbances, the SAFER outputs, and the relative effort saved. The kappa for SAFER (cyan) is higher than that for the heritage RSLC (red) for most of the tiles. The overall accuracies of SAFER (blue) are higher than those of RSLC (brown), but this is not as evident as for the kappa. The relative effort saved (gray) ranged from 47.7% (marked by a solid black circle in Figure 9) to 99.4% (dashed black circle), and the median was relatively high at 92.4%, implying that the effort saved is fairly high for most tiles. Figure 9 shows that the relative effort saved is high in most regions and is low in regions that have disturbed vegetation in proximity to croplands. SAFER relies on spectral contrasts in time and space, with an implicit assumption that regions next to disturbed regions contain undisturbed vegetation, which may not be the case when the vegetation is a mix of wildland and cropland. Despite these challenging conditions, SAFER still saved analyst effort and improved accuracy. A more comprehensive list of accuracy measures, including the kappa and overall accuracies, is given in Supplementary Table S2. It should be noted that the accuracy values reported in this table are relative to the analyst-reviewed product, not to an independent reference dataset. Future validation efforts using independent datasets, such as LCMAP’s reference dataset, are being actively considered.

5. Discussion and Conclusions

Loveland et al. (2002) [3] remarked that “The Holy Grail of change detection is still total automation and high accuracy. However, methods that reduce labor costs while maintaining consistency and accuracy are needed”. Despite substantial developments in the field of RS-based change detection algorithms over the last two decades, fully automatic and accurate change detection is still an active area of research.
Many federal programs, such as LF, LCMAP, NLCD, and LCMS, produce national-scale landcover products that include disturbance. Although similarities exist among these programs, such as the use of remotely sensed imagery and scripted algorithms for change detection, differences also exist in the kind of imagery used, the algorithms, the training data, and the characterization of disturbance. For example, LF, LCMAP, and NLCD primarily use Landsat imagery, whereas LCMS uses Landsat and Sentinel imagery. LF uses MIICA with multiple seasons and years of data for disturbance detection, LCMAP uses CCDC, LCMS uses MIICA and CCDC, and NLCD uses MIICA. With respect to the automation of disturbance mapping, LF is the only program that uses human analysts to examine and correct the disturbances mapped by these algorithms. This additional analyst review increases the product’s accuracy but comes at the cost of product latency. SAFER brings the program closer to full automation and is hence expected to reduce latency.
Feedback from LF analysts indicates that SAFER outputs are reliable and substantially reduce the level of effort when producing accurately mapped disturbance products. Estimates in time savings for editing a tile ranged from 1 h or less in dry/desert-like environments (like tile r06c03; Figure 1) to 8 h in tiles with persistent cloud cover, data gaps, and large amounts of disturbance such as in the northeastern United States (like tile r02c15; Figure 1).
The operational worthiness of SAFER is proven by the fact that LANDFIRE has now moved from bi-annual updates to annual updates and is on the cusp of reducing its product latency to less than one year. The fully scripted and automated SAFER procedure is critical for this expedited schedule. The comparison of the RSLC interim disturbances, SAFER outputs, and the analyst-reviewed LF disturbances to the LCMAP reference dataset showed that the automated SAFER process brought the accuracies closer to the analyst-reviewed product. However, an analyst’s review is still required to maintain the legacy of a high-quality, accurate LF disturbance dataset.
The variable importance results showed that the single-date observations ranked higher than difference indices and the spatial change Z scores for certain tiles. This may occur if the difference images do not have spectral contrast, because the vegetation disturbance spectral signatures are not distinct, or they have recovered within the compositing period, thereby minimizing the spectral contrast in the pre- and post-image differences. Therefore, image selection close to the time of disturbance is important, and future efforts by LANDFIRE may explore the use of newer data streams like the Harmonized Landsat Sentinel (HLS) data and the recent Landsat 9 satellite. The increased temporal frequency of scene availability may help in identifying disturbances that are closer to the actual time of disturbance.
The accuracies of automated processes are sensitive to a multitude of factors including a poor disturbance spectral contrast, a lack of clear images to composite, poor timing of composites in relation to the time of disturbance, or too large a geographic extent to fit a single random forest model. The fact that in certain tiles, the internal and independent accuracy estimates were not always correlated strongly indicates that assuming similar relationships between predictor variables and disturbance even over consecutive years may not always be valid within an LF tile. Future studies could include more regional models based on ecoregions instead of arbitrary LF tiles, attempt to remove variables that do not have any explanatory power, and fine-tune the models’ training parameters. Future studies may also include using an additional mid-season composite to capture disturbances that may recover quickly. Further, the compositing algorithm itself is being actively researched given that the advent of Landsats 8 and 9 and Sentinels 2A and 2B can increase the temporal density of image acquisitions.
Additionally, newly published algorithms are continuously tested for possible adoption in the LF disturbance mapping framework. Newer operational products, such as LCMAP using time series CDAs [28] and LCMS using an ensemble of CDAs [32], are being actively considered for ingestion into our disturbance mapping efforts. By design, the modular RSLC software framework for disturbance detection allows for the easy inclusion of additional disturbance inputs (e.g., LCMAP change or LCMS loss detections) and predictor variables (e.g., different percentile composites, indices, and newer satellite data) in the future for more accurate and computationally efficient disturbance mapping. Further, this framework also allows for the assimilation of machine learning model tuning and easy migration from the currently deployed random forest [50] to algorithms like XGBoost [63], deep learning [64], or state-of-the-art artificial intelligence-based algorithms [65] that are well suited for such applications. These updates to the LANDFIRE disturbance product are planned to continue LANDFIRE’s history of improvement between iterations.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/fire7020051/s1: Figure S1: Spatially explicit Landscape Fire and Resource Management Planning Tools (LANDFIRE) tile-specific growing season date ranges; Figure S2: Variable importance of all 73 predictor variables inferred using Gini impurity for the random forest model for each of the five prototyping tiles (refer to Section 3.1.1 of the manuscript for variable names and their formulation); Table S1: Datasets described in this work (refer to Section 2); Table S2: 2016 Spatially Adaptable Filter for Error Reduction (SAFER) prototype evaluation results after inclusion in the Remote Sensing of Landscape Change (RSLC) process; Table S3: 2017 Spatially Adaptable Filter for Error Reduction (SAFER) evaluation results.

Author Contributions

Conceptualization, S.S.K., B.P. and T.D.H.; Data curation, B.T. and I.L.P.; Formal analysis, S.S.K.; Methodology, S.S.K.; Project administration, T.D.H.; Software, R.D. and J.J.P.; Visualization, B.T.; Writing—original draft, S.S.K.; Writing—review and editing, S.S.K., B.T., R.D., J.J.P., I.L.P., B.P. and T.D.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was performed under U.S. Geological Survey (USGS) SSSC contract #140G0119C0001 and TSSC contract #140G0121D0001.

Data Availability Statement

All data used in this study were obtained from public domains and are freely available. MTBS: https://doi.org/10.5066/P9IED7RZ (accessed on 6 February 2024); BAER: https://burnseverity.cr.usgs.gov/baer/ (accessed on 6 February 2024); RAVG: https://burnseverity.cr.usgs.gov/ravg/ (accessed on 6 February 2024); USANPN: https://www.usanpn.org/data (accessed on 6 February 2024); NLCD: https://doi.org/10.5066/P9KZCM54 (accessed on 6 February 2024); DSWE: https://doi.org/10.5066/F7445KQK (accessed on 6 February 2024).

Acknowledgments

All data processing codes were deployed on the U.S. Geological Survey Denali Supercomputer: U.S. Geological Survey, https://doi.org/10.5066/P9PSW367 (accessed on 6 February 2024). Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government. We thank Janet Carter and the anonymous reviewers for their comments and suggestions that greatly improved this manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Ojima, D.; Galvin, K.; Turner, B. The global impact of land-use change. BioScience 1994, 44, 300–304.
2. Singh, A. Review article digital change detection techniques using remotely-sensed data. Int. J. Remote Sens. 1989, 10, 989–1003.
3. Loveland, T.; Sohl, T.; Stehman, S.; Gallant, A.; Sayler, K.; Napton, D. A Strategy for Estimating the Rates of Recent United States Land-Cover Changes. Photogramm. Eng. Remote Sens. 2002, 68, 1091–1099.
4. Chen, G.; Hay, G.J.; Carvalho, L.M.; Wulder, M.A. Object-based change detection. Int. J. Remote Sens. 2012, 33, 4434–4457.
5. Woodcock, C.E.; Allen, R.; Anderson, M.; Belward, A.; Bindschadler, R.; Cohen, W.; Gao, F.; Goward, S.N.; Helder, D.; Helmer, E. Free access to Landsat imagery. Science 2008, 320, 1011.
6. Zhu, Z.; Wulder, M.A.; Roy, D.P.; Woodcock, C.E.; Hansen, M.C.; Radeloff, V.C.; Healey, S.P.; Schaaf, C.; Hostert, P.; Strobl, P.; et al. Benefits of the free and open Landsat data policy. Remote Sens. Environ. 2019, 224, 382–385.
7. Wulder, M.A.; Masek, J.G.; Cohen, W.B.; Loveland, T.R.; Woodcock, C.E. Opening the archive: How free data has enabled the science and monitoring promise of Landsat. Remote Sens. Environ. 2012, 122, 2–10.
8. Turner, W.; Rondinini, C.; Pettorelli, N.; Mora, B.; Leidner, A.K.; Szantoi, Z.; Buchanan, G.; Dech, S.; Dwyer, J.; Herold, M. Free and open-access satellite data are key to biodiversity conservation. Biol. Conserv. 2015, 182, 173–176.
9. Anderson, J.R. Land use and land cover changes. A framework for monitoring. J. Res. U.S. Geol. Surv. 1977, 5, 143–153.
10. Homer, C.; Dewitz, J.; Jin, S.; Xian, G.; Costello, C.; Danielson, P.; Gass, L.; Funk, M.; Wickham, J.; Stehman, S. Conterminous United States land cover change patterns 2001–2016 from the 2016 National Land Cover Database. ISPRS J. Photogramm. Remote Sens. 2020, 162, 184–199.
11. Ingram, K.; Knapp, E.; Robinson, J. Change Detection Technique Development for Improved Urbanized Area Delineation; CSC/TM-81/6087; NASA: Washington, DC, USA; Computer Sciences Corporation: Silver Springs, MD, USA, 1981.
12. Rollins, M.G. LANDFIRE: A nationally consistent vegetation, wildland fire, and fuel assessment. Int. J. Wildland Fire 2009, 18, 235–249.
13. Ryan, K.C.; Opperman, T.S. LANDFIRE–A national vegetation/fuels data base for use in fuels treatment, restoration, and suppression planning. For. Ecol. Manag. 2013, 294, 208–216.
14. Calkin, D.E.; Thompson, M.P.; Finney, M.A.; Hyde, K.D. A real-time risk assessment tool supporting wildland fire decisionmaking. J. For. 2011, 109, 274–280.
15. Blankenship, K.; Swaty, R.; Hall, K.R.; Hagen, S.; Pohl, K.; Shlisky Hunt, A.; Patton, J.; Frid, L.; Smith, J. Vegetation dynamics models: A comprehensive set for natural resource assessment and planning in the United States. Ecosphere 2021, 12, e03484.
16. Vaillant, N.M.; Reinhardt, E.D. An evaluation of the Forest Service Hazardous Fuels Treatment Program—Are we treating enough to promote resiliency or reduce hazard? J. For. 2017, 115, 300–308.
17. Krasnow, K.; Schoennagel, T.; Veblen, T.T. Forest fuel mapping and evaluation of LANDFIRE fuel maps in Boulder County, Colorado, USA. For. Ecol. Manag. 2009, 257, 1603–1612.
18. Zarnetske, P.L.; Thomas, C., Jr.; Moisen, G.G. Modeling forest bird species’ likelihood of occurrence in Utah with Forest Inventory and Analysis and Landfire map products and ecologically based pseudo-absence points. In Proceedings of the Seventh Annual Forest Inventory and Analysis Symposium, Portland, ME, USA, 3–6 October 2005; McRoberts, R.E., Reams, G.A., Van Deusen, P.C., McWilliams, W.H., Eds.; Gen. Tech. Rep. WO-77; US Department of Agriculture, Forest Service: Washington, DC, USA, 2005; pp. 291–305.
19. Palaiologou, P.; Essen, M.; Hogland, J.; Kalabokidis, K. Locating Forest Management Units Using Remote Sensing and Geostatistical Tools in North-Central Washington, USA. Sensors 2020, 20, 2454.
20. Lott, C.A.; Akresh, M.E.; Costanzo, B.E.; D’Amato, A.W.; Duan, S.; Fiss, C.J.; Fraser, J.S.; He, H.S.; King, D.I.; McNeil, D.J. Do Review Papers on Bird–Vegetation Relationships Provide Actionable Information to Forest Managers in the Eastern United States? Forests 2021, 12, 990.
21. Jin, S.; Yang, L.; Danielson, P.; Homer, C.; Fry, J.; Xian, G. A comprehensive change detection method for updating the National Land Cover Database to circa 2011. Remote Sens. Environ. 2013, 132, 159–175.
22. Giglio, L.; Descloitres, J.; Justice, C.O.; Kaufman, Y.J. An enhanced contextual fire detection algorithm for MODIS. Remote Sens. Environ. 2003, 87, 273–282.
23. Lie, W.-N. Automatic target segmentation by locally adaptive image thresholding. IEEE Trans. Image Process. 1995, 4, 1036–1041.
24. Liu, H.; Jezek, K. Automated extraction of coastline from satellite imagery by integrating Canny edge detection and locally adaptive thresholding methods. Int. J. Remote Sens. 2004, 25, 937–958.
25. Schroeder, W.; Oliva, P.; Giglio, L.; Quayle, B.; Lorenz, E.; Morelli, F. Active fire detection using Landsat-8/OLI data. Remote Sens. Environ. 2016, 185, 210–220.
26. Kennedy, R.E.; Yang, Z.; Cohen, W.B. Detecting trends in forest disturbance and recovery using yearly Landsat time series: 1. LandTrendr—Temporal segmentation algorithms. Remote Sens. Environ. 2010, 114, 2897–2910.
27. Roy, D.; Lewis, P.; Justice, C. Burned area mapping using multi-temporal moderate spatial resolution data—A bi-directional reflectance model-based expectation approach. Remote Sens. Environ. 2002, 83, 263–286.
28. Zhu, Z.; Woodcock, C.E. Continuous change detection and classification of land cover using all available Landsat data. Remote Sens. Environ. 2014, 144, 152–171.
29. Verbesselt, J.; Zeileis, A.; Herold, M. Near real-time disturbance detection using satellite image time series. Remote Sens. Environ. 2012, 123, 98–108.
30. Kumar, S.S.; Roy, D.P. Global operational land imager Landsat-8 reflectance-based active fire detection algorithm. Int. J. Digit. Earth 2018, 11, 154–178.
31. Giglio, L.; Schroeder, W.; Justice, C.O. The collection 6 MODIS active fire detection algorithm and fire products. Remote Sens. Environ. 2016, 178, 31–41.
32. Healey, S.P.; Cohen, W.B.; Zhiqiang, Y.; Brewer, K.; Brooks, E.; Gorelick, N.; Gregory, M.; Hernandez, A.; Huang, C.; Hughes, J. Next-generation forest change mapping across the United States: The landscape change monitoring system (LCMS). In Proceedings of the Pushing Boundaries: New Directions in Inventory Techniques and Applications: Forest Inventory and Analysis (FIA) Symposium 2015, Portland, OR, USA, 8–10 December 2015; Gen. Tech. Rep. PNW-GTR-931; Stanton, S.M., Christensen, G.A., Eds.; US Department of Agriculture, Forest Service, Pacific Northwest Research Station: Portland, OR, USA, 2015; p. 217.
33. Bar, M. Visual objects in context. Nat. Rev. Neurosci. 2004, 5, 617–629.
  34. Svatonova, H. Analysis of visual interpretation of satellite data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 675–681. [Google Scholar] [CrossRef]
  35. Nelson, K.J.; Steinwand, D. A Landsat data tiling and compositing approach optimized for change detection in the conterminous United States. Photogramm. Eng. Remote Sens. 2015, 81, 573–586. [Google Scholar] [CrossRef]
  36. Eidenshink, J.; Schwind, B.; Brewer, K.; Zhu, Z.-L.; Quayle, B.; Howard, S. A project for monitoring trends in burn severity. Fire Ecol. 2007, 3, 3–21. [Google Scholar] [CrossRef]
  37. Picotte, J.J.; Bhattarai, K.; Howard, D.; Lecker, J.; Epting, J.; Quayle, B.; Benson, N.; Nelson, K. Changes to the Monitoring Trends in Burn Severity program mapping production procedures and data products. Fire Ecol. 2020, 16, 16. [Google Scholar] [CrossRef]
  38. Hudak, A.T.; Morgan, P.; Bobbitt, M.J.; Smith, A.M.; Lewis, S.A.; Lentile, L.B.; Robichaud, P.R.; Clark, J.T.; McKinley, R.A. The relationship of multispectral satellite imagery to immediate fire effects. Fire Ecol. 2007, 3, 64–90. [Google Scholar] [CrossRef]
  39. Baker, C.; Harvey, B.; Saberi, S.; Reiner, A.; Wahlberg, M. Regionally Adapted Models for the Rapid Assessment of Vegetation Condition after Wildfire Program in the Interior Northwest and Southwest United States. In Proceedings of the 2019 National Silviculture Workshop: A Focus on Forest Management-Research Partnerships, Bemidji, MN, USA, 21–23 May 2019; Gen. Tech. Rep. NRS-P-193; Pile, L.S., Deal, R.L., Dey, D.C., Gwaze, D., Kabrick, J.M., Palik, B.J., Schuler, T.M., Eds.; US Department of Agriculture, Forest Service, Northern Research Station: Madison, WI, USA, 2019; pp. 6–10. [Google Scholar]
  40. Clark, J. Remote sensing and geospatial support to burned area emergency response (BAER) teams in assessing wildfire effects to hillslopes. In Landslide Science and Practice; Springer: Berlin/Heidelberg, Germany, 2013; pp. 211–215. [Google Scholar]
  41. Miller, J.D.; Quayle, B. Calibration and validation of immediate post-fire satellite-derived data to three severity metrics. Fire Ecol. 2015, 11, 12–30. [Google Scholar] [CrossRef]
  42. USA-NPN. USA National Phenology Network. Available online: https://www.usanpn.org/usa-national-phenology-network (accessed on 6 February 2024).
  43. NLCD. The National Land Cover Database. Available online: https://www.mrlc.gov/data (accessed on 6 February 2024).
  44. Jones, J.W. Improved automated detection of subpixel-scale inundation—Revised dynamic surface water extent (DSWE) partial surface water tests. Remote Sens. 2019, 11, 374. [Google Scholar] [CrossRef]
  45. DSWE. Dynamic Surface Water Extent. Available online: https://www.usgs.gov/centers/eros/science/usgs-eros-archive-landsat-landsat-level-3-dynamic-surface-water-extent-dswe (accessed on 6 February 2024).
  46. Boryan, C.; Yang, Z.; Mueller, R.; Craig, M. Monitoring US agriculture: The US department of agriculture, national agricultural statistics service, cropland data layer program. Geocarto Int. 2011, 26, 341–358. [Google Scholar] [CrossRef]
  47. USDA-NASS. USDA National Agricultural Statistics Service Cropland Data Layer. Available online: http://nassgeodata.gmu.edu/CropScape/ (accessed on 6 February 2024).
  48. Nelson, K.J.; Long, D.G.; Connot, J.A. LANDFIRE 2010: Updates to the National Dataset to Support Improved Fire and Natural Resource Management; U.S. Geological Survey, Earth Resources Observation and Science (EROS) Center: Sioux Falls, SD, USA, 2016.
  49. Foody, G.M.; Mathur, A. Toward intelligent training of supervised image classifications: Directing training data acquisition for SVM classification. Remote Sens. Environ. 2004, 93, 107–117. [Google Scholar] [CrossRef]
  50. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  51. Strobl, C.; Boulesteix, A.-L.; Kneib, T.; Augustin, T.; Zeileis, A. Conditional variable importance for random forests. BMC Bioinform. 2008, 9, 307. [Google Scholar] [CrossRef]
  52. Wright, M.N.; Ziegler, A. ranger: A fast implementation of random forests for high dimensional data in C++ and R. arXiv 2015, arXiv:1508.04409. [Google Scholar] [CrossRef]
  53. Weiss, G.M.; Provost, F. Learning when training data are costly: The effect of class distribution on tree induction. J. Artif. Intell. Res. 2003, 19, 315–354. [Google Scholar] [CrossRef]
  54. Kumar, S.; Prihodko, L.; Lind, B.; Anchang, J.; Ji, W.; Ross, C.; Kahiu, M.; Velpuri, N.; Hanan, N. Remotely sensed thermal decay rate: An index for vegetation monitoring. Sci. Rep. 2020, 10, 9812. [Google Scholar] [CrossRef]
  55. Huang, H.; Roy, D.P.; Boschetti, L.; Zhang, H.K.; Yan, L.; Kumar, S.S.; Gomez-Dans, J.; Li, J. Separability analysis of Sentinel-2A multi-spectral instrument (MSI) data for burned area discrimination. Remote Sens. 2016, 8, 873. [Google Scholar] [CrossRef]
  56. Vermote, E.; Justice, C.; Claverie, M.; Franch, B. Preliminary analysis of the performance of the Landsat 8/OLI land surface reflectance product. Remote Sens. Environ. 2016, 185, 46–56. [Google Scholar] [CrossRef]
  57. Roy, D.P.; Boschetti, L.; Trigg, S.N. Remote sensing of fire severity: Assessing the performance of the normalized burn ratio. IEEE Geosci. Remote Sens. Lett. 2006, 3, 112–116. [Google Scholar] [CrossRef]
  58. Jin, S.; Sader, S.A. Comparison of time series tasseled cap wetness and the normalized difference moisture index in detecting forest disturbances. Remote Sens. Environ. 2005, 94, 364–372. [Google Scholar] [CrossRef]
  59. Kruse, F.A.; Lefkoff, A.; Boardman, J.; Heidebrecht, K.; Shapiro, A.; Barloon, P.; Goetz, A. The spectral image processing system (SIPS)—Interactive visualization and analysis of imaging spectrometer data. Remote Sens. Environ. 1993, 44, 145–163. [Google Scholar] [CrossRef]
  60. Kumar, S.S.; Picotte, J.J.; Tolk, B.; Dittmeier, R.; La Puma, I.P.; Peterson, B.; Hatten, T. A spatially adaptive filter for error reduction in satellite-based change detection algorithms. In AGU Fall Meeting Abstracts; American Geophysical Union: Washington, DC, USA, 2020. [Google Scholar]
  61. Roy, D.P.; Kumar, S.S. Multi-year MODIS active fire type classification over the Brazilian Tropical Moist Forest Biome. Int. J. Digit. Earth 2017, 10, 54–84. [Google Scholar] [CrossRef]
  62. Zhang, F.; Yang, X. Improving land cover classification in an urbanized coastal area by random forests: The role of variable selection. Remote Sens. Environ. 2020, 251, 112105. [Google Scholar] [CrossRef]
  63. Chen, T.; Guestrin, C. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd ACM Sigkdd International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794. [Google Scholar]
  64. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  65. Shi, W.; Zhang, M.; Zhang, R.; Chen, S.; Zhan, Z. Change detection based on artificial intelligence: State-of-the-art and challenges. Remote Sens. 2020, 12, 1688. [Google Scholar] [CrossRef]
  66. Kumar, S.S.; Roy, D.P.; Cochrane, M.A.; Souza, C.M.; Barber, C.P.; Boschetti, L. A quantitative study of the proximity of satellite detected active fires to roads and rivers in the Brazilian tropical moist forest biome. Int. J. Wildland Fire 2014, 23, 532–543. [Google Scholar] [CrossRef]
  67. Warner, T. Kernel-based texture in remote sensing image classification. Geogr. Compass 2011, 5, 781–798. [Google Scholar] [CrossRef]
  68. Olofsson, P.; Foody, G.M.; Herold, M.; Stehman, S.V.; Woodcock, C.E.; Wulder, M.A. Good practices for estimating area and assessing accuracy of land change. Remote Sens. Environ. 2014, 148, 42–57. [Google Scholar] [CrossRef]
  69. Stehman, S.V.; Pengra, B.W.; Horton, J.A.; Wellington, D.F. Validation of the US Geological Survey’s Land Change Monitoring, Assessment and Projection (LCMAP) Collection 1.0 annual land cover products 1985–2017. Remote Sens. Environ. 2021, 265, 112646. [Google Scholar] [CrossRef]
  70. Cohen, J. A coefficient of agreement for nominal scales. Educ. Psychol. Meas. 1960, 20, 37–46. [Google Scholar] [CrossRef]
  71. Chicco, D.; Jurman, G. The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation. BMC Genom. 2020, 21, 6. [Google Scholar] [CrossRef] [PubMed]
  72. Pontius, R.G., Jr.; Millones, M. Death to Kappa: Birth of quantity disagreement and allocation disagreement for accuracy assessment. Int. J. Remote Sens. 2011, 32, 4407–4429. [Google Scholar] [CrossRef]
  73. NASA. Preliminary Assessment of the Value of Landsat 7 ETM+ SLC-off Data; NASA: Washington, DC, USA, 2003.
  74. LFREMAP. Landfire 2016 Remap. Available online: https://www.landfire.gov/lf_remap.php (accessed on 6 February 2024).
Figure 1. LANDFIRE (LF) tiles (gray rectangles) over the conterminous United States. The five prototyping tiles are highlighted. The background shows the geographic extent of the LF 2016 Remap Life Form classes.
Figure 2. Flow chart of the heritage RSLC process (dashed boxes and lines) and the SAFER workflow additions (solid boxes and lines). Refer to the section numbers given in each process box for details.
Figure 3. Graphical illustration of the spatial change Z score (SCZ) computation. The left-most panel is the reference image (a), whose values for a disturbance (lighter, browner shades) and its undisturbed neighbors (greener shades) are expected to differ. Contiguous disturbance clusters (b) are each assigned a unique id k. Two rings ((c), j = 1 pixel wide, and (d), j = 2 pixels wide) are grown around each cluster and assigned the corresponding id. Pixels common to the rings of more than one cluster are identified (gray) and marked as invalid. Reference-image values with land observations at the valid ring pixel locations are extracted and used to calculate the mean (μ) and standard deviation (σ). The ring with the lowest coefficient of variation (σ/μ) is selected, and the corresponding μ and σ are used in Equation (5) for each disturbance, yielding the SCZ values (e) for each pixel in the disturbance. Refer to Section 3.1.2 for more details.
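To make the steps in the Figure 3 caption concrete, the following is a minimal sketch of the SCZ computation, assuming the interim disturbance mask and a differenced reference image (e.g., dNBR) are available as 2-D arrays. Function and parameter names (compute_scz, ring_widths) are illustrative and not from the operational RSLC/SAFER code, and the handling of ambiguous ring pixels is an approximation of the rule described in the caption.

```python
import numpy as np
from scipy import ndimage


def compute_scz(reference, disturbance_mask, valid_mask, ring_widths=(1, 2)):
    """Spatial change Z scores (SCZs), following the Figure 3 description.

    reference        : 2-D array of the differenced index (e.g., dNBR).
    disturbance_mask : boolean array of interim disturbance detections.
    valid_mask       : boolean array of usable (cloud-free, land) observations.
    """
    scz = np.full(reference.shape, np.nan, dtype=np.float32)

    # Label contiguous disturbance clusters; each cluster gets a unique id k.
    labels, n_clusters = ndimage.label(disturbance_mask)

    for k in range(1, n_clusters + 1):
        cluster = labels == k
        others = disturbance_mask & ~cluster
        # Ring pixels that could also belong to a neighbouring cluster's rings
        # are ambiguous (gray in Figure 3) and are excluded.
        ambiguous = ndimage.binary_dilation(others, iterations=max(ring_widths))

        best = None  # (coefficient of variation, mean, std) of the chosen ring
        for j in ring_widths:
            # Annulus of width j grown around the cluster.
            ring = ndimage.binary_dilation(cluster, iterations=j) & ~cluster
            ring &= valid_mask & ~ambiguous
            values = reference[ring]
            if values.size < 3:
                continue
            mu, sigma = float(values.mean()), float(values.std())
            if mu == 0.0 or sigma == 0.0:
                continue
            cv = abs(sigma / mu)
            if best is None or cv < best[0]:
                best = (cv, mu, sigma)

        if best is not None:
            _, mu, sigma = best
            # Z score of each disturbed pixel against its undisturbed
            # neighbourhood (the role played by Equation (5) in the paper).
            scz[cluster] = (reference[cluster] - mu) / sigma

    return scz
```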
Figure 4. The top 30 most important predictor variables for the random forest model for each of the five prototyping tiles. A more detailed list including all 73 variables used in the model is provided in Supplementary Figure S2.
Figure 5. LF tile r01c02 pre-disturbance (2015, DOY 175) and post-disturbance (2016, DOY 250) false-color images (Shortwave Infrared 1 as red, Near Infrared as green, Red as blue) and results of the RSLC interim disturbances, SAFER outputs, and the LF-disseminated product. Detections are red, rejections are white, and unmapped regions are shown in gray. A 2000 × 1500 pixel subset (60 km × 45 km), marked as a blue outline in the top row, is detailed in the bottom row.
Figure 6. LF tile r02c15 pre-disturbance (2015, DOY 175) and post-disturbance (2016, DOY 250) false-color images (Shortwave Infrared 1 as red, Near Infrared as green, Red as blue) and results of the RSLC interim disturbances, SAFER outputs, and the LF-disseminated product. Detections are red, rejections are white, and unmapped regions are shown in gray. A 500 × 500 pixel subset (15 km × 15 km), marked as a blue outline in the top row, is detailed in the bottom row.
Figure 7. LF tile r06c03 pre-disturbance (2015, DOY 175) and post-disturbance (2016, DOY 250) false-color images (Shortwave Infrared 1 as red, Near Infrared as green, Red as blue) and results of the RSLC interim disturbances, SAFER outputs, and the LF-disseminated product. Detections are red, rejections are white, and unmapped regions are shown in gray. A 500 × 500 pixel subset (15 km × 15 km), marked as a blue outline in the top row, is detailed in the bottom row. Analyst additions are shown in yellow.
Figure 8. LF tile r09c08 pre-disturbance (2015, DOY 175) and post-disturbance (2016, DOY 250) false-color images (Shortwave Infrared 1 as red, Near Infrared as green, Red as blue) and results of the RSLC interim disturbances, SAFER outputs, and the LF-disseminated product. Detections are red, rejections are white, and unmapped regions are shown in gray. A 500 × 500 pixel subset (15 km × 15 km), marked as a blue outline in the top row, is detailed in the bottom row.
Figure 9. The 2017 SAFER evaluation results. The color-coded bar charts represent the overall accuracy (OA) of the interim RSLC disturbances (red and brown), the SAFER outputs (cyan and blue), and the relative effort saved (gray), all scaled to unity (black). Accuracies and relative analysts' effort saved (Section 3.2) are high for the SAFER outputs across the whole CONUS. The relative effort saved ranged from 47.7% (solid black circle) to 99.4% (dashed black circle), with a median of 92.7%. LANDFIRE (LF) tiles (gray rectangles) are shown over the LF 2016 life form classes [74].
Table 1. List of predictor variables used for the SAFER random forest prototype model. Top of the Atmosphere (TOA) Landsat reflectance (ρ) values were used to derive the indices, their corresponding temporal differences, the spectral angle, and the spatial change Z scores (SCZs). For each year (Y), the two seasonal composites (DOY 175 and DOY 250) are used individually and together with the corresponding seasonal composite of the paired year. The random forest model used three years of data from 2013–2015 for training and 2014–2016 to map 2016 disturbances. A total of 73 predictor variables were used in this process.
Variable | Formulation and Variable Name | Training | Prediction
TOA multispectral band reflectance | ρ_blue(Y, DOY) | 2013, 2014, 2015 | 2014, 2015, 2016
 | ρ_green(Y, DOY) | 2013, 2014, 2015 | 2014, 2015, 2016
 | ρ_red(Y, DOY) | 2013, 2014, 2015 | 2014, 2015, 2016
 | ρ_nir(Y, DOY) | 2013, 2014, 2015 | 2014, 2015, 2016
 | ρ_swir1(Y, DOY) | 2013, 2014, 2015 | 2014, 2015, 2016
 | ρ_swir2(Y, DOY) | 2013, 2014, 2015 | 2014, 2015, 2016
Indices | (ρ_nir − ρ_swir2)/(ρ_nir + ρ_swir2), NBR(Y, DOY) | 2013, 2014, 2015 | 2014, 2015, 2016
 | (ρ_nir − ρ_red)/(ρ_nir + ρ_red), NDVI(Y, DOY) | 2013, 2014, 2015 | 2014, 2015, 2016
 | (ρ_nir − ρ_swir1)/(ρ_nir + ρ_swir1), NDMI(Y, DOY) | 2013, 2014, 2015 | 2014, 2015, 2016
Differenced variables: difference indices and Spectral Angle Mapper (SAM) | dNBR(DOY, YPost − YPre) | 2013–15, 2014–15 | 2014–16, 2015–16
 | dNDVI(DOY, YPost − YPre) | 2013–15, 2014–15 | 2014–16, 2015–16
 | dNDMI(DOY, YPost − YPre) | 2013–15, 2014–15 | 2014–16, 2015–16
 | $\cos^{-1}\!\left(\frac{\sum_{b=5}^{7}\rho_{b}^{y_1}\,\rho_{b}^{y_2}}{\sqrt{\sum_{b=5}^{7}(\rho_{b}^{y_1})^{2}}\,\sqrt{\sum_{b=5}^{7}(\rho_{b}^{y_2})^{2}}}\right)$, SAM(DOY, YPost − YPre) | 2013–15, 2014–15 | 2014–16, 2015–16
Differenced spatial change Z scores (SCZs) | Equation (6), SCZ_dNBR(YPost − YPre) | 2014–15 | 2015–16
 | Equation (6), SCZ_dNDVI(YPost − YPre) | 2014–15 | 2015–16
 | Equation (6), SCZ_dNDMI(YPost − YPre) | 2014–15 | 2015–16
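For orientation, the sketch below shows how the Table 1 indices and differenced variables could be assembled from a pair of seasonal TOA reflectance composites. The band-dictionary layout, function names, and the post-minus-pre sign convention (taken from the "YPost − YPre" naming in Table 1) are assumptions for illustration, not the operational LANDFIRE implementation.

```python
import numpy as np


def indices(rho):
    """NBR, NDVI, and NDMI from a dict of TOA band reflectance arrays (Table 1)."""
    nbr = (rho["nir"] - rho["swir2"]) / (rho["nir"] + rho["swir2"])
    ndvi = (rho["nir"] - rho["red"]) / (rho["nir"] + rho["red"])
    ndmi = (rho["nir"] - rho["swir1"]) / (rho["nir"] + rho["swir1"])
    return {"NBR": nbr, "NDVI": ndvi, "NDMI": ndmi}


def spectral_angle(pre, post, bands=("nir", "swir1", "swir2")):
    """Spectral Angle Mapper between pre- and post-year composites.

    The summation runs over Landsat bands 5-7 (NIR, SWIR1, SWIR2), mirroring
    the SAM formulation in Table 1.
    """
    x = np.stack([pre[b] for b in bands])
    y = np.stack([post[b] for b in bands])
    num = (x * y).sum(axis=0)
    den = np.sqrt((x ** 2).sum(axis=0)) * np.sqrt((y ** 2).sum(axis=0))
    return np.arccos(np.clip(num / den, -1.0, 1.0))


def differenced(pre, post):
    """dNBR, dNDVI, dNDMI, and SAM for a pre/post seasonal composite pair.

    Sign convention (post minus pre) is assumed from the Table 1 naming and
    may differ from the operational convention.
    """
    pre_i, post_i = indices(pre), indices(post)
    out = {f"d{name}": post_i[name] - pre_i[name] for name in pre_i}
    out["SAM"] = spectral_angle(pre, post)
    return out
```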
Table 2. Accuracy metrics (kappa and overall) for SAFER over the five prototyping tiles (Figure 1). Accuracy values were computed using the confusion matrix generated internally by the random forest algorithm with the training-year 2015 data, and using the confusion matrices generated by comparing the 2016 RSLC interim detections and the 2016 SAFER outputs to the 2016 analyst-reviewed disturbances. The Fraction column reports the relative proportion of disturbance in the training sample for that tile for the year 2015. More metrics are presented in Supplementary Table S2.
Tile | Fraction | Internal Kappa [0–1] | Interim Kappa [0–1] | SAFER Kappa [0–1] | Interim Overall [%] | SAFER Overall [%] | Relative Effort Saved [%]
r01c02 | 0.27 | 0.94 | 0.19 | 0.82 | 96.21 | 99.82 | 95.38
r02c15 | 0.35 | 0.94 | 0.16 | 0.59 | 92.53 | 99.10 | 87.89
r06c03 | 0.04 | 0.92 | 0.02 | 0.32 | 95.50 | 99.96 | 99.19
r09c08 | 0.24 | 0.78 | 0.06 | 0.24 | 92.17 | 99.76 | 96.91
r08c12 | 0.89 | 0.94 | 0.45 | 0.81 | 96.58 | 99.37 | 81.55
Median | 0.27 | 0.94 | 0.16 | 0.59 | 95.50 | 99.76 | 95.38
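The kappa and overall accuracy values in Table 2 follow the standard confusion-matrix formulations [70]. A minimal sketch is given below, assuming a binary disturbance/no-disturbance confusion matrix with the analyst-reviewed map as the reference; the example counts are hypothetical and not taken from the paper. The relative effort saved metric is defined in Section 3.2 and is not reproduced here.

```python
import numpy as np


def kappa_and_oa(confusion):
    """Cohen's kappa and overall accuracy from a 2x2 confusion matrix.

    confusion[i, j] = number of pixels with reference class i (rows) mapped
    as class j (columns); classes are {no disturbance, disturbance}.
    """
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    observed = np.trace(confusion) / total  # overall accuracy
    # Chance agreement from the row (reference) and column (map) marginals.
    expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total ** 2
    kappa = (observed - expected) / (1.0 - expected)
    return kappa, observed


# Hypothetical pixel counts for illustration only.
cm = np.array([[9_500_000, 20_000],
               [30_000, 450_000]])
print(kappa_and_oa(cm))
```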