Article

Near Real-Time Mapping of Tropical Forest Disturbance Using SAR and Semantic Segmentation in Google Earth Engine

1 Department of Geography, Oregon State University, Corvallis, OR 97331, USA
2 Spatial Informatics Group, LLC, 2529 Yolanda Ct., Pleasanton, CA 94566, USA
3 SERVIR-Mekong, SM Tower, 24th Floor, 979/69 Paholyothin Road, Samsen Nai Phayathai, Bangkok 10400, Thailand
4 Earth System Science Center, The University of Alabama in Huntsville, 320 Sparkman Drive, Huntsville, AL 35805, USA
5 SERVIR Science Coordination Office, NASA Marshall Space Flight Center, 320 Sparkman Drive, Huntsville, AL 35805, USA
6 USDA Forest Service, Pacific Northwest Research Station, Portland, OR 97204, USA
7 Forest Ecosystems and Society, Oregon State University, Corvallis, OR 97331, USA
8 Geospatial Analysis Lab, University of San Francisco, San Francisco, CA 94117, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(21), 5223; https://doi.org/10.3390/rs15215223
Submission received: 8 September 2023 / Revised: 19 October 2023 / Accepted: 21 October 2023 / Published: 3 November 2023
(This article belongs to the Special Issue Deep Learning Techniques Applied in Remote Sensing)

Abstract
Satellite-based forest alert systems are an important tool for ecosystem monitoring, conservation planning, and increasing public awareness of forest cover change. Continuous monitoring in tropical regions, such as those experiencing pronounced monsoon seasons, can be complicated by spatially extensive and persistent cloud cover. One solution is to use Synthetic Aperture Radar (SAR) imagery acquired by the European Space Agency’s Sentinel-1A and 1B satellites, which collect C-band radar data that penetrate cloud cover and can be acquired day or night. One challenge associated with the operational use of radar imagery is that speckle in the backscatter values can complicate traditional pixel-based analysis approaches. A potential solution is to use deep learning semantic segmentation models, which can learn predictive features that are more robust to pixel-level noise. In this analysis, we present a prototype SAR-based forest alert system that uses deep learning classifiers, deployed on the Google Earth Engine cloud computing platform, to identify forest cover change in near real time over two Cambodian wildlife sanctuaries. By leveraging a pre-existing forest cover change dataset derived from multispectral Landsat imagery, we present a method for efficiently developing a SAR-based semantic segmentation dataset. In practice, the proposed framework achieved performance comparable to an existing forest alert system while offering more flexibility and ease of development from an operational standpoint.

1. Introduction

Satellite-derived forest information plays a crucial role in conservation planning and ecosystem monitoring [1,2,3,4]. Forest alert systems provide near-real-time data, strategically guiding the allocation of local forest conservation resources. They have also raised public awareness about forest cover change (FCC), particularly in the tropics [5,6,7]. For instance, the DETER system has notably reduced deforestation rates in the Brazilian Amazon while enhancing transparency in reporting the timing and location of FCC [8]. Early forest alert systems like DETER and FORMA utilized 250 m multispectral imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor to detect FCC events [9,10,11,12,13]. More recent systems employ Landsat satellite imagery (30 m), enhancing the detection of fine-scale FCC events often linked with illicit activities [2,14]. The Global Land Analysis and Discovery (GLAD) forest alert system pioneered the use of Landsat satellite imagery [2]. Since then, several Landsat-based forest alert frameworks have been evaluated, including those by Vargas et al. [14] and Ye et al. [15]. The GLAD alert system, capable of producing an observation every eight days by leveraging the Landsat 7 ETM+ and Landsat 8 OLI sensors, is the most widely used due to its global coverage. However, in regions with persistent cloud cover, monitoring FCC events can be challenging due to long, patchy intervals between image acquisitions [1,16,17].
Synthetic Aperture Radar (SAR) serves as a powerful adjunct to multispectral satellite imagery, enabling continuous monitoring in regions with frequent cloud cover due to its cloud-penetration capabilities. The European Space Agency’s (ESA’s) Sentinel-1A/B satellites, which provide regular, global SAR coverage, operate using C-band radar with a 5.6 cm wavelength. In forest ecosystems, the signal of C-band radar imagery is usually dominated by vegetation and soil moisture [18]. The scattering of C-band SAR in these ecosystems is typically influenced by branch structure and leaf/needle density [19,20]. Moreover, C-band radar signals have been found to correlate with forest structure, particularly aboveground biomass, with the relationship saturating at biomass densities of 100–200 Mg ha−1 [21]. Owing to the side-looking nature of SAR data, FCC can be identified by the shadowing effects produced along the peripheries of harvested patches [18]. Additionally, the removal of forest biomass instigates a shift in the underlying distribution of backscatter values due to changes in surface material and roughness. In tropical ecosystems, SAR backscatter time series can exhibit cyclical patterns due to vegetation phenology and precipitation cycles. The presence of water notably alters an object’s dielectric constant, which subsequently impacts its reflectivity and conductivity in SAR backscatter [22].
SAR-based forest alert systems have proven highly effective in regions with dense cloud cover. The inaugural SAR-based forest alert system, the JJ-FAST algorithm, was developed by the Japan International Cooperation Agency (JICA) and the Japan Aerospace Exploration Agency (JAXA). This system utilized PALSAR-2/ScanSAR data to generate alerts every 1.5 months, with a spatial resolution of 56 m [23]. The launch of ESA’s Sentinel-1A satellite in 2014, followed by Sentinel-1B in 2016, addressed challenges such as irregular revisit periods and low spatial resolutions that had previously impeded the development of SAR-based alert systems [24,25]. Following the deployment of Sentinel-1, numerous SAR-based forest alert systems were developed for monitoring FCC in tropical regions. A Sentinel-1 monitoring system implemented over French Guiana, which experiences approximately 67% annual cloud cover, achieved an omission rate 43.5% lower than that of the GLAD system [26]. In the Congo Basin, a region with persistent cloud coverage, Sentinel-1 SAR alerts successfully detected small-scale (0.5 ha) forest disturbances caused by selective logging [1]. Additionally, SAR data have been effectively utilized in various near-real-time monitoring applications such as fire progression mapping [27,28], cropland monitoring [29,30,31], and water mapping [32]. These applications demonstrate the versatility of SAR data for change detection monitoring.
One challenge inherent to SAR remote sensing is the design of robust predictive features amidst speckle-induced noise. Deep learning techniques offer a powerful methodology for automating feature extraction [33,34]. These models can be trained using complex input data structures, such as a time series of multi-channel imagery, eliminating the need for user-driven feature engineering (i.e., the transformation of raw inputs to create independent variables). When identifying objects in remotely sensed imagery, deep models yield coherent patches even in the presence of considerable pixel-level noise. Deep learning models have been employed with time series of Sentinel-1 SAR imagery for tasks like surface water mapping [35,36], agricultural monitoring [37,38], forest aboveground biomass estimation [39], detection of ocean-going vessels [40,41,42], and fire-related mapping and monitoring [27,43]. A key challenge for applying deep learning techniques in ecology is the limited availability of training data. Collecting new training data tends to be costly and time-consuming and necessitates trained personnel [44]. However, some studies have shown that deep learning methodologies exhibit impressive generalization capabilities in the face of significant signal and label noise [36,45,46,47]. This indicates that pre-existing FCC labels could potentially be harnessed to construct a training dataset for a deep learning FCC classifier in a SAR-based forest alert system.
Two wildlife sanctuaries in Cambodia, Prey Lang and Beng Per, were chosen as test cases for the SAR-based system proposed in this study. Deforestation in Cambodia in the 21st century has been driven by expansive Economic Land Concessions (ELCs) and Social Land Concessions (SLCs) for agriculture and agro-forestry, particularly the development of rubber plantations [48,49]. Forests within ELCs have been subjected to aggressive deforestation, with a notable spillover effect on adjacent forest patches [48]. This issue is exacerbated by the fact that numerous large ELCs border protected forest areas. While a moratorium on the creation of new ELCs was enacted in 2012, concessions have been granted to existing projects, and illegal deforestation often occurs under the guise of existing ELCs [50]. A SAR-based forest alert system could enhance the timeliness of forest alert issuance, given the region’s frequent cloud coverage, and potentially contribute to a reduction in the local deforestation rate.
We propose a framework for developing a forest alert system that uses a SAR-based FCC semantic segmentation algorithm. It is possible to derive accurate annual forest change maps using multispectral imagery and temporal and phenological statistics derived from Landsat satellite imagery [51,52]. However, such annual labels often lack information on the intra-annual timing of FCC events. The inputs to the models used in our forest alert system are time series of four Sentinel-1 SAR subsets, with the last image being the most recent. The goal of the network is to identify areas of change that occurred between the three older SAR acquisitions and the most recent one. We tested a strategy for generating synthetic forest alert training examples using the annual Landsat-derived FCC maps as a target, drawing SAR imagery from the years both prior to and following the date of the Landsat FCC maps. The structure of these examples emulates the structure that would be obtained when the model is deployed in a near-real-time modeling context. Using a large number of these “synthetic” alert examples, we trained and deployed deep learning models on an extensive set of training examples across Cambodia and evaluated the performance of the models over two wildlife sanctuaries. We assess the effectiveness of the proposed forest alert system during both Cambodia’s wet and dry seasons.
Our specific objectives are to:
  • Construct a training dataset for the semantic segmentation models that will classify FCC using an existing multispectral-based FCC dataset.
  • Develop and validate a real-time forest alert system using deep learning classifiers for semantic segmentation of FCC.
  • Evaluate the spatial and temporal mapping of forest alerts.

2. Materials and Methods

2.1. Alert System Framework Overview

The framework for developing and deploying a SAR-based deforestation model consisted of constructing a semantic segmentation reference dataset, developing deep learning FCC classifiers, statistically validating the classifiers’ accuracy, and deploying the forest alert system for monitoring (Figure 1). Sentinel-1 SAR imagery was aggregated and pre-processed using Google Earth Engine (GEE) [53,54]. Then, feature-label pairs were exported from GEE as TensorFlow TFRecords [55], similar to the workflow described by [56,57], to produce a modeling dataset. Next, two deep learning semantic segmentation models were trained to detect FCC, one using SAR scenes acquired during descending orbits and the other using data from ascending orbits. Finally, the ascending and descending SAR classifiers were deployed on the Google AI Platform to detect near-real-time FCC. The accuracy of the FCC detection models was assessed using a photo interpretation dataset developed using Collect Earth Online (CEO) [58].
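As an illustration of the export step, the sketch below shows how stacked feature-label patches might be exported from GEE as TFRecords via the Earth Engine Python API; the asset IDs, bucket name, and region are hypothetical placeholders rather than the assets used in this study.

```python
import ee

ee.Initialize()

# Hypothetical assets: an 8-band SAR feature stack (4 acquisitions x VV/VH)
# and a 1-band binary FCC label, co-registered at 10 m.
features = ee.Image('projects/example/assets/sar_stack')
label = ee.Image('projects/example/assets/fcc_label').rename('label')
stack = features.addBands(label).toFloat()

# Export 128 x 128 pixel patches as compressed TFRecords to Cloud Storage.
task = ee.batch.Export.image.toCloudStorage(
    image=stack,
    description='fcc_training_patches',
    bucket='example-bucket',
    fileNamePrefix='training/fcc',
    region=ee.Geometry.Rectangle([104.0, 12.5, 106.5, 14.5]),
    scale=10,
    fileFormat='TFRecord',
    formatOptions={'patchDimensions': [128, 128], 'compressed': True},
    maxPixels=1e13,
)
task.start()
```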

2.2. Study Area

Our study area focuses on Cambodia’s Prey Lang and Beng Per wildlife sanctuaries (Figure 2). The Prey Lang wildlife sanctuary was established in 2016 and has an area of 431,000 ha. The Beng Per wildlife sanctuary was established in 1993 and encompasses 249,408 ha. Twenty ELCs exist along the northwestern to southwestern portions of the Prey Lang wildlife sanctuary, and 13 ELCs exist within the Beng Per wildlife sanctuary. A biodiversity assessment conducted within Prey Lang identified 198 tree species, 87 shrub species, and eight different forest types [59]. Using the Global Forest Change (2000–2021) 30 m FCC layer [51], we estimated the average (and 95% confidence interval) annual area of FCC in Prey Lang as 3258 (±3488) ha yr−1 and in Beng Per as 5764 (±5173) ha yr−1. In the past 5 years (2017–2021), FCC in Prey Lang increased to 7577 (±2463) ha yr−1, while the rate of FCC in Beng Per declined, with 5090 (±564) ha yr−1 of FCC occurring from 2017–2021. The wildlife sanctuaries receive annual rainfall of 1600–2200 mm [60]. The wildlife sanctuaries, and Cambodia broadly, experience extensive cloud coverage (Figure 2B). In 2020, for example, over the majority of the wildlife sanctuaries, 35–45% of pixel observations in the Landsat Collection 2 Tier 1 Surface Reflectance dataset were occluded by cloud cover, as indicated by the Fmask quality assurance band included with the surface reflectance imagery [61].

2.3. Sentinel-1 Satellite Imagery

The ESA’s Sentinel-1A and 1B satellites carry a C-band (5.6 cm) SAR sensor that captures dual-polarized (VV and VH) imagery. Sentinel-1 Level-1 Ground Range Detected (GRD) data that intersected Cambodia and were acquired between 2018 and 2021 were aggregated for analysis in GEE. Sentinel-1 IW GRD scenes are acquired with a spatial resolution of approximately 20 m × 22 m and are distributed with a uniform pixel size of 10 m [1,62]. The Sentinel-1 GRD dataset corrects for sensor position, border noise, and thermal noise, performs radiometric calibration, and projects the values to ground range geometry [54]. We applied a Lee sigma speckle filter [63] and an angular-based radiometric slope correction [64] to each scene to improve scene-to-scene coherence.
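To make the aggregation step concrete, the following sketch shows how such a collection might be assembled and split by orbit in the GEE Python API; the speckle filter and slope correction are left as placeholder functions rather than full implementations of [63,64].

```python
import ee

ee.Initialize()

cambodia = (ee.FeatureCollection('FAO/GAUL/2015/level0')
            .filter(ee.Filter.eq('ADM0_NAME', 'Cambodia')))

# Dual-polarized IW GRD scenes over Cambodia, 2018-2021.
s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
      .filterBounds(cambodia)
      .filterDate('2018-01-01', '2022-01-01')
      .filter(ee.Filter.eq('instrumentMode', 'IW'))
      .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VV'))
      .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
      .select(['VV', 'VH']))

# Separate orbits so each classifier sees a consistent viewing geometry.
ascending = s1.filter(ee.Filter.eq('orbitProperties_pass', 'ASCENDING'))
descending = s1.filter(ee.Filter.eq('orbitProperties_pass', 'DESCENDING'))

# Placeholders for the per-scene corrections applied in this study:
# def lee_sigma(img): ...      # Lee sigma speckle filter, per [63]
# def slope_correct(img): ...  # angular-based radiometric correction, per [64]
# ascending = ascending.map(lee_sigma).map(slope_correct)
```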

2.4. Multispectral Forest Loss Dataset

An annual forest loss dataset for 2018–2020 was developed by applying the methodology outlined by Potapov et al. [65] to more recent dates. Landsat Collection 1 Tier 1 satellite imagery (2018–2020) that intersected Cambodia’s border was aggregated for processing. During this time period, satellite imagery was acquired by Landsat 7’s Enhanced Thematic Mapper Plus (ETM+) and Landsat 8’s Operational Land Imager (OLI) sensors. The Landsat scenes were processed from digital number values to top-of-atmosphere values and then converted to normalized reflectance values using the MODIS 44C surface reflectance dataset [66,67]. The normalized imagery was then temporally aggregated to form a time series of 16-day composites covering the Lower Mekong. The time series of images was denoised using outputs from the CFMask algorithm [61] and an ensemble of bagged decision trees trained to identify clouds, cloud shadows, haze, and water [68]. The 16-day composites were used to derive metrics quantifying intra-annual variations in spectral reflectance, which in turn were used to develop a model mapping annual forest loss with an ensemble of bagged regression trees [65]. This set of annual forest loss layers, hereafter referred to as the Mekong forest loss dataset, was used as a semantic segmentation target when developing deep learning models.

2.5. Synthetic Forest Alert Dataset

To parameterize the FCC detection models used in the alert system, we developed a large training dataset that combined annual Landsat-derived maps of forest loss with Sentinel-1 SAR data. Sample locations were selected by taking a stratified random sample of locations in Cambodia that had experienced FCC in the years 2017, 2018, and 2019. A random date was assigned to each location (e.g., 1 June 2019). Three SAR images were randomly selected from 1 to 1.5 years prior to the randomly selected date (e.g., 1 January–31 May 2018). The final SAR image of the input features was selected 1 year after the randomly chosen date (e.g., 10 June 2020). Each input to the FCC models is an 8 × 128 × 128 (channels, width, height) tensor composed of four Sentinel-1 SAR images (each of which possesses a VV and a VH band). The label associated with each input was a 128 × 128 (height, width) binary map of FCC (resampled from 30 m to 10 m to match the Sentinel-1 GRD data). By randomly sampling throughout the year, the goal is to produce a classifier that is robust to variations in backscatter values caused by vegetation phenology. This sampling strategy also ensures that any FCC events manifest in a similar manner to what the network would observe during monitoring (i.e., change events should appear between the three previous acquisitions and the most recent acquisition). When the SAR deep learning classifiers are deployed for near-real-time monitoring, each model is instead given three images acquired in the preceding 1–3 weeks and then the most recent SAR acquisition.
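A minimal sketch of this sampling scheme, in plain Python, is shown below; the exact window for the final “after” image is our reading of the example dates in the text, and the function itself is a hypothetical helper.

```python
import random
from datetime import date, timedelta

def sample_alert_example(loss_year: int):
    """Draw acquisition dates for one synthetic forest alert example."""
    # Random anchor date within the labeled loss year.
    anchor = date(loss_year, 1, 1) + timedelta(days=random.randrange(365))
    # Three "before" images drawn 1 to 1.5 years prior to the anchor.
    before = sorted(
        anchor - timedelta(days=random.randrange(365, 548)) for _ in range(3)
    )
    # One "after" image roughly one year after the anchor, mirroring the
    # example in the text (1 June 2019 -> 10 June 2020); the exact window
    # here is an assumption.
    after = anchor + timedelta(days=365 + random.randrange(14))
    return before, after

before_dates, after_date = sample_alert_example(2019)
```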
We aggregated two different forest alert datasets, one consisting of scenes obtained during ascending orbits and another obtained during descending orbits. In preliminary experiments, we found that a single model whose inputs combined both ascending and descending Sentinel-1 SAR scenes was difficult to train (e.g., poor test-set performance, visually distorted outputs). We therefore developed a two-network forest alert system in which one model was trained on scenes acquired during ascending orbits and the other on scenes acquired during descending orbits. An 80–20% randomized split was used to partition the ascending and descending examples into a training group and a testing group. We did not withhold an additional subset to assess the model’s generalization error. Instead, the model’s accuracy was characterized using an independent photo interpretation dataset (described below).
The modeling dataset consisted of 74,545 examples, partitioned as described above (80–20% split). We synthetically increased the volume of training data by applying geometric data augmentation during training. These transformations consisted of horizontal and vertical flips and rotations by 90°, 180°, and 270°. The choice of transformation for each patch was governed by a uniform random variable, ensuring diversity within the dataset. In our study, approximately 60% of the patches underwent one or more of these transformations.
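The augmentation step can be sketched as follows, assuming channels-last (128, 128, C) feature tensors and matching single-channel labels; the transformation probabilities are illustrative rather than the exact values used in this study.

```python
import tensorflow as tf

def augment(features: tf.Tensor, label: tf.Tensor):
    """Apply identical random flips/rotations to a feature-label pair."""
    # Stack feature and label so the same transform hits both.
    pair = tf.concat([features, tf.cast(label, features.dtype)], axis=-1)
    if tf.random.uniform([]) < 0.5:
        pair = tf.image.flip_left_right(pair)
    if tf.random.uniform([]) < 0.5:
        pair = tf.image.flip_up_down(pair)
    # Rotate by a random multiple of 90 degrees (0, 90, 180, or 270).
    pair = tf.image.rot90(pair, k=tf.random.uniform([], 0, 4, dtype=tf.int32))
    return pair[..., :-1], pair[..., -1:]

# dataset = dataset.map(augment)  # applied on-the-fly during training
```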

2.6. Neural Network Architecture

The underlying neural network architecture for performing semantic segmentation was the U-Net architecture [69]. U-Net is a fully convolutional neural network with an encoder-decoder structure. The encoder portion of the network is a Convolutional Neural Network (CNN) that captures a set of hierarchical representations characterizing the input feature space. The decoder portion of the model uses these hierarchical features to construct a map of predicted forest cover change. The U-Net architecture significantly improves on standard encoder-decoder architectures by incorporating skip connections, which allow features generated in the encoder portion of the network to be reused in the decoder portion (and which introduce additional paths for gradient flow during optimization). We modified the simple encoder structure of the original U-Net architecture by using the MobileNetV3-Large CNN [70] as the encoder. The decoder network was similar to the original U-Net decoder, as both consist of up-sampling operations and two convolution blocks (each consisting of a convolution layer, batch normalization [71], and a ReLU activation layer [72]). While the original U-Net architecture used transposed convolutions to upsample the imagery, we utilized nearest-neighbor upsampling, as some work suggests that transposed convolutions can produce checkerboard artifacts in the output prediction maps [73]; this choice also reduces the number of learnable parameters in the network. We refer to this architecture as MobileNetV3-UNet (Figure 3).
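For illustration, one decoder block of this design might look as follows in Keras; the filter counts are illustrative, and this is a sketch of the described pattern rather than the exact implementation used in this study.

```python
import tensorflow as tf
from tensorflow.keras import layers

def decoder_block(x, skip, filters):
    """Upsample, merge the encoder skip connection, and refine features."""
    x = layers.UpSampling2D(size=2, interpolation='nearest')(x)
    x = layers.Concatenate()([x, skip])
    for _ in range(2):  # two conv blocks, as in the original U-Net decoder
        x = layers.Conv2D(filters, 3, padding='same', use_bias=False)(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    return x
```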

2.6.1. MobileNetV3-Large Encoder Network

The MobileNetV3-Large (hereafter MobileNetV3) architecture was designed to reduce the number of computational operations required by the neural network during inference while retaining performance comparable to larger, more computationally intensive architectures [74]. The network consists of a sequence of network blocks (i.e., combinations of convolutional layers, activation functions, and aggregation operations). Generally, each network block consists of a 1 × 1 expansion convolution (i.e., a convolution that increases the number of feature maps), a depth-wise separable convolution, and a squeeze-and-excite block. Depth-wise separable convolutions are more computationally efficient than traditional 2D convolutions, as they require fewer multiply and addition operations [74]. A depth-wise separable convolution consists of a depth-wise convolution followed by a 1 × 1 point-wise convolution. Squeeze-and-excite blocks are designed to selectively emphasize or de-emphasize features [75]. If the inputs and outputs of a network block in MobileNetV3 contain the same number of feature maps, a residual connection was used to combine the input feature maps with the output feature maps [70,76]. The encoder contained both the ReLU activation function and the hard-swish activation function [72,77].
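The block pattern described above can be sketched as follows; the expansion ratio, kernel size, and squeeze-and-excite reduction factor are illustrative defaults rather than the exact MobileNetV3 configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def squeeze_excite(x, reduction=4):
    """Re-weight channels using a learned, globally pooled gating vector."""
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling2D(keepdims=True)(x)   # squeeze
    s = layers.Conv2D(channels // reduction, 1, activation='relu')(s)
    s = layers.Conv2D(channels, 1, activation='hard_sigmoid')(s)
    return layers.Multiply()([x, s])                      # excite

def inverted_residual_block(x, filters, expansion=4):
    inputs = x
    x = layers.Conv2D(x.shape[-1] * expansion, 1, use_bias=False)(x)  # expand
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.DepthwiseConv2D(3, padding='same', use_bias=False)(x)  # depth-wise
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = squeeze_excite(x)
    x = layers.Conv2D(filters, 1, use_bias=False)(x)                  # project
    x = layers.BatchNormalization()(x)
    if inputs.shape[-1] == filters:  # residual connection when shapes match
        x = layers.Add()([x, inputs])
    return x
```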

2.6.2. Network Training

The ascending and descending MobileNetV3 models were each trained for a maximum of 25 epochs. Early stopping, which monitors the loss curve for a plateau, was used to prevent overfitting. The Dice loss function was used during model optimization [78]. The networks were optimized using mini-batch stochastic gradient descent with the ADAM optimizer [79]. Due to limited cloud computing resources, several short, preliminary runs were used to identify a viable hyperparameter combination using the training set alone. A batch size of 16 and a learning rate of 0.001 were selected based on these tests and prior results in the region [35]. Random data augmentation was applied during training in the form of random rotations (0°, 90°, and 180°) and horizontal flips of the imagery and label, diversifying the training dataset [80,81].
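As a reference for the loss formulation, a minimal Dice loss, assuming sigmoid probabilities and binary labels of shape (batch, 128, 128, 1), might be written as:

```python
import tensorflow as tf

def dice_loss(y_true, y_pred, smooth=1.0):
    """1 minus the Dice coefficient, averaged over the batch."""
    y_true = tf.cast(y_true, y_pred.dtype)
    intersection = tf.reduce_sum(y_true * y_pred, axis=[1, 2, 3])
    total = tf.reduce_sum(y_true + y_pred, axis=[1, 2, 3])
    dice = (2.0 * intersection + smooth) / (total + smooth)
    return 1.0 - tf.reduce_mean(dice)

# model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
#               loss=dice_loss)
```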
Four performance indicators were tracked during model training: accuracy, precision, recall, and F1-score (Table 1). Accuracy describes the fraction of all pixels, in the 128 × 128 reference label, that were correctly classified. Recall (also known as producer’s accuracy) characterizes the proportion of FCC in the reference that was correctly classified. Precision (also known as user’s accuracy) characterizes the proportion of predicted FCC events that were correct. F1-score is the harmonic mean of precision and recall and is useful for interpreting a classifier’s accuracy given the class imbalance (change vs. no-change events) in most segmentation labels in the training or testing sets.
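In standard notation, with TP, FP, FN, and TN denoting true positives, false positives, false negatives, and true negatives, these four metrics are:

```latex
\mathrm{Accuracy}  = \frac{TP + TN}{TP + TN + FP + FN}, \qquad
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall}    = \frac{TP}{TP + FN}, \qquad
F_1 = 2 \cdot \frac{\mathrm{Precision} \cdot \mathrm{Recall}}
                   {\mathrm{Precision} + \mathrm{Recall}}
```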

2.7. Alert System Deployment

The deep learning models were deployed in GEE. Sentinel-1 scenes are uploaded into GEE’s cloud storage within 1–2 days of appearing in the ESA’s Sentinel-1 data hub [82]. Using the GEE Python API, we developed a program that continuously monitors the GEE Sentinel-1 SAR repository and, upon seeing a new acquisition in our study area, feeds the new acquisition through the appropriate MobileNetV3 classifier. The final, sigmoid output layer of the MobileNetV3 classifier produces a map of probability values (ranging from 0 to 1). Forest alerts for the ascending and descending classifiers are issued for the first instance of FCC, in the current year, detected by each classifier. FCC events are issued by the combined classifier only if the ascending and descending classifiers independently detect a FCC event within 32 days of each other.
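A minimal sketch of such a monitoring loop is shown below; the polling interval, state tracking, and `run_classifier` routine are hypothetical stand-ins for the deployed system.

```python
import time
import ee

ee.Initialize()

def new_scenes(aoi, last_seen_ms):
    """Return Sentinel-1 scenes over the AOI acquired after `last_seen_ms`."""
    return (ee.ImageCollection('COPERNICUS/S1_GRD')
            .filterBounds(aoi)
            .filter(ee.Filter.gt('system:time_start', last_seen_ms))
            .sort('system:time_start'))

# Poll for new acquisitions and route each to the orbit-appropriate model.
# while True:
#     scenes = new_scenes(sanctuaries, last_seen_ms)
#     for info in scenes.toList(scenes.size()).getInfo():
#         orbit = info['properties']['orbitProperties_pass']
#         run_classifier(info['id'], orbit)  # hypothetical inference routine
#         last_seen_ms = info['properties']['system:time_start']
#     time.sleep(3600)
```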

2.8. Alert System Validation

A photo interpretation dataset was created to validate the accuracy of the SAR-based alert system (Figure 4). The interpretation database was developed using the Collect Earth Online (CEO) photo interpretation tool [58]. High-resolution (5 m), biannual Planet Labs satellite imagery, made available through the NICFI (Norway’s International Climate and Forests Initiative) program, was used by the interpreters to determine whether a FCC event had occurred. If a disturbance was observed, the interpreters recorded whether it occurred in the dry season (1 January–30 June) or the wet season (1 July–31 December). The interpretation dataset developed for this analysis was used to validate the spatial extent of the disturbances produced by the alert system and to assess the general accuracy of the timing of disturbance events.
The location of the photo interpretation “plots” was determined using a stratified sampling approach. To ensure the points were well distributed throughout the dry and wet seasons, we created seven map strata using information from the SAR-based forest alert system, the GLAD forest alert system, and the Mekong reference layer. We included samples from locations where the GLAD and SAR systems produced spatially coincident forest alerts and locations where one system independently identified a FCC event. Finally, points were distributed in locations where the Mekong reference FCC layer indicated there was FCC but neither forest alert system produced an alert. To reduce false positives, a minimum mapping unit of 0.5 ha was applied to each of the forest alert datasets. In 2018, 1134 plots were collected and found to contain 501 dry season disturbances, 489 wet season disturbances, and 243 undisturbed plots. In 2019, 1194 plots were collected and found to contain 300 dry season disturbances, 360 wet season disturbances, and 537 undisturbed plots. A different sampling strategy, emphasizing points in the wet season, was used in 2020 when more imagery from the NICFI program became available. In 2020, 921 plots were collected and found to contain 333 dry season disturbances, 345 wet season disturbances, and 141 undisturbed plots. In total, the dataset contained 3249 photo interpretation plots.
Similar to techniques like logistic regression, the deep neural networks output a continuous value between 0 and 1 that reflects the probability of a FCC event occurring. The user must determine the probability threshold at which change is considered likely. In the context of a forest alert system, an optimal threshold is the value at which classification accuracy is maximized and the classifier has a greater omission error rate (i.e., a greater number of false negatives) than commission error rate (i.e., the number of false positives) [2]. Given that forest management teams will potentially be deployed to investigate FCC events within the sanctuary, the cost of erroneously deploying a team is much greater than the cost of missing a FCC event, and the two outcomes should not be treated equally. Therefore, we used two criteria to select the final thresholds for the ascending, descending, and combined classifiers: (1) the total area of FCC mapped by the SAR classifier cannot exceed the area of FCC mapped in the reference product, and (2) the F1-score is maximized subject to the first constraint.
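This two-criterion selection can be sketched as a simple constrained search; `mapped_area_at` and `f1_at` are hypothetical helpers that evaluate a candidate threshold against the reference product and the validation data.

```python
import numpy as np

def select_threshold(thresholds, reference_area, mapped_area_at, f1_at):
    """Maximize F1 among thresholds whose mapped area stays within the reference."""
    candidates = [t for t in thresholds if mapped_area_at(t) <= reference_area]
    if not candidates:
        return max(thresholds)  # fall back to the most conservative threshold
    return max(candidates, key=f1_at)

# thresholds = np.arange(0.05, 0.50, 0.05)
# best = select_threshold(thresholds, reference_area, mapped_area_at, f1_at)
```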

3. Results

3.1. Classifier Training and Deployment

Both the ascending and descending classifiers achieved good performance on the deep learning validation sets (Table 2), with both models converging to F1-scores ≥ 0.9. The descending classifier outperformed the ascending classifier with marginally better accuracy (an improvement of 0.1%) and F1-score (an improvement of 0.01). Despite the similar convergence statistics, the two classifiers did not map disturbances in a similar manner (Figure 5). The ascending classifier generally mapped FCC at a much greater rate than the descending classifier, with the areas of mapped disturbance comparable only at low threshold values (0.05 or 0.1). While the two classifiers would often agree on the spatial location of a FCC event, these predictions were often not sufficiently temporally coincident for the combined classifier to indicate that a FCC event had occurred. This resulted in the combined classifier issuing fewer alerts than either the ascending or descending classifier.
Varying the inclusion threshold between 0.1 and 0.45, in increments of 0.05, produced a large variation in the rate of mapped cover change events (Figure 5). Each year from 2018–2020, both the Mekong FCC product and the SAR alert systems indicated progressively more deforestation within the wildlife sanctuaries. In 2020, varying the inclusion threshold for the ascending classifier produced between 18,837,998 and 2,010,035 alerts, using thresholds of 0.1 and 0.45, respectively. The descending SAR classifier issued fewer alerts, issuing between 18,096,955 and 558,234 forest alerts as the inclusion threshold was varied. Lastly, the combined SAR classifier system generally produced the fewest alerts, issuing between 210,360 and 24,648,393 alerts in 2020 as the inclusion threshold was varied from 0.45 to 0.1.
Often, the value that maximized the overall accuracy of a classifier did not correspond to the final threshold selected (Figure 6). The forest alert systems generally had low recall scores (0.2–0.4) and a wide range of precision scores (0.2–0.9). With respect to the photo interpretation dataset, this indicates that the classifiers were quite conservative in issuing forest alerts (the lower recall scores) but that the alerts they did issue had a high degree of accuracy (the higher precision scores). However, we note that the photo interpretation dataset possessed a greater number of FCC observations than non-loss observations. As such, using accuracy alone to guide our choice of threshold would produce classifiers biased towards a lower threshold that issues too many alerts. By constraining our selection to a threshold that restricted the area of the issued alerts to less than that of the reference product, we achieved a more balanced error rate. This balancing of the errors is reflected in the fact that, generally, the selected inclusion threshold is the threshold that maximizes the F1-score. A threshold of 0.4 was chosen for the ascending model, a threshold of 0.3 for the descending model, and a threshold of 0.15 for the combined model.

3.2. Alert System Performance

Using the optimal inclusion thresholds, we summarized the ascending, descending, and combined SAR alert systems’ performance with respect to the photo interpretation reference dataset (Table 3). In 2018, the descending classifier achieved the best performance with regard to all metrics evaluated, correctly classifying (within the appropriate window of time) 64% of all labels in the reference dataset. The descending system achieved an overall accuracy of 70.15% with an F1-score of 0.68 in the dry season. The classifier’s accuracy declined in the wet season, with 61.9% of FCC events being captured. The ascending classifier performed notably poorly in the 2018 wet season (F1-score of 0.5; accuracy of 46.1%), which produced a corresponding decline in the performance of the combined classifier.
In 2019, the most accurate SAR classifier was the combined classification approach, which correctly classified 59.3% of the reference dataset. In the dry season, all classifiers performed well, with the combined alert system achieving notably high accuracy (accuracy of 71.2%; F1-score of 0.7). However, in the wet season, the ascending classifier’s performance again declined markedly (F1-score of 0.5; accuracy of 46.1%), and the combined classifier’s F1-score declined by 0.12 to 0.5. The descending classifier achieved the highest performance in the wet season (accuracy of 61.9%; F1-score of 0.63).
The descending classifier was the most accurate alert system in 2020, with an overall accuracy of 50.5% and an F1-score of 0.32. The progression of alerts from year to year also closely reflects the overall patterns of cover change within our study region (Figure 7). Classification scores were generally lower in 2020 than in 2018 or 2019 due to the greater proportion of FCC events, in the more challenging wet season, in the interpretation dataset. Similar to 2018, the performance of the combined classifier was negatively impacted by the performance of the ascending classifier.

3.3. Comparison with GLAD Alerts

We compared the most performant SAR-based classifier, the descending classifier, against the GLAD forest alert system. In 2018, the GLAD alert system achieved an overall accuracy of 64.3% and achieved better accuracy than the descending SAR system in both the wet and dry seasons (Table 4). In 2019, the descending classifier was the most performant classifier by a small margin according to all accuracy metrics. In the 2019 dry season, the descending system correctly captured 71.2% of FCC events and achieved an F1-score of 0.7. In the wet season, the descending classifier’s accuracy declined below that of the GLAD alert system. Overall, the descending classifier captured 58.3% of all FCC events while the GLAD system captured 61.5% of FCC events in 2018 and 2019 according to the independent validation set. The descending alert system exhibited superior dry season performance compared to the GLAD system (69.5% vs. 66.73%) but reduced performance in the wet season (53.8% vs. 63.9%). The GLAD alert system was parameterized to provide conservative estimates of FCC, thus minimizing the risk of false positives. By adjusting the inclusion threshold of the descending classifier, we were able to produce a forest alert system whose area of issued alerts was closer to the area of FCC estimated by the reference FCC product (Figure 8). However, in 2019 and 2020, both systems underestimated the total area of FCC. The two systems show notably similar patterns of alert issuance in 2019 and 2020.

4. Discussion

4.1. Alert System Framework Evaluation

A central benefit of the proposed framework is that it significantly simplifies the logistics of operationalizing a forest alert system. Traditionally, deploying such a system would necessitate configuring a server to (1) continuously monitor an external database hosting new satellite image acquisitions, (2) download and preprocess these new images, (3) apply the FCC classifier to issue new alerts, and (4) push the classified imagery to a publicly accessible repository. By positioning Google Earth Engine (GEE) at the core of our framework, these tasks are made more accessible and can be performed without any specialized computing hardware.
A key finding of this analysis was that it was possible to train a radar-based forest alert system using pre-existing FCC data products derived from multispectral sensors. A primary challenge in implementing deep learning models in an ecological context is the scarcity of high-quality reference data, which are often expensive and time-consuming to collect. As demonstrated in this analysis, it is feasible to construct a potentially large dataset using spatial sampling and pre-existing FCC products. Notably, the models developed in this analysis were trained using an annual forest loss layer, devoid of information about the actual timing of the event. Yet, the deep classifiers, when provided with a sufficient quantity of examples, were able to generalize and identify new FCC events. Near real-time classification poses numerous additional problems for standard FCC classification. Specifically, the classifier must be robust to noise associated with seasonal variations that produce cyclical patterns in the input SAR signals. Remarkably, our classifiers achieved performance comparable to an existing, purpose-built forest alert system, despite utilizing such a naive sampling strategy and labels without information on FCC timing.

4.2. Suggestions for Future Analyses

While our approach yielded a SAR-based classifier that was comparable to existing forest alert systems, previous results in the literature suggest additional gains in performance could be achieved. Reiche et al. [1] developed a near-real-time forest alert system over the Congo Basin using Sentinel-1 SAR imagery. Their forest alert system first detrended the SAR time series using a harmonic model and then employed iterative Bayesian updates as described in Reiche et al. [25]. Ballère et al. [26] developed a SAR forest alert system over French Guiana, which achieved a producer’s accuracy of 96.2% (the complement of the omission error) and a user’s accuracy of 81.5% (the complement of the commission error). This system provided a significant improvement in the commission error rate compared with the GLAD forest alert system. The system employed by Ballère et al. [26] used a simple algorithm proposed by Bouvet et al. [18] that identifies FCC events by computing the relative change of the most recent SAR acquisition against a running average of prior SAR acquisitions.
We observed a discrepancy between the accuracy statistics derived from the deep learning test set and those from the photo interpretation dataset. On the SAR test set, the classifiers exhibited an F1-score ≥ 0.9 (0.91 for the ascending and 0.9 for the descending MobileNetV3-UNet models). However, the independent photo interpretation dataset showed an overall F1-score of 0.4 and 0.49 for the ascending and descending MobileNetV3-UNet models, respectively. We attribute this to the spatial overlap of examples in the training and testing sets used to develop the MobileNetV3-UNet models. Deep learning algorithms possess a large number of model parameters, enabling them to memorize patterns; for instance, CNNs have achieved zero training error on the ImageNet ILSVRC 2012 dataset [83] with randomized labels after extensive training [84]. Given that some examples in the training set overlap with the testing set, we would expect an optimistic estimate of error. This aligns with the results of [85], which advocate spatial cross-validation to assess the generalization error of ecological models.
Additionally, utilizing transfer learning, a technique wherein training is initiated from pre-existing weights instead of randomly initialized weights, is a common practice used to decrease model training time [86,87]. Transfer learning can be challenging in this setting, as many pre-trained weights were developed using datasets such as the ImageNet ILSVRC 2012 dataset (which consists of labeled RGB images), and the domain transfer to remote sensing imagery and tasks can make transfer learning difficult in practice. Another option would be to generate a pre-training reference dataset using continuous change detection algorithms (e.g., the Continuous Change Detection and Classification (CCDC) algorithm [88]). Continuous change detection algorithms provide information on the intra-annual timing of FCC events. Higher-accuracy FCC labels with timing information would allow training examples to be constructed in a manner that more closely aligns with the final inputs to the model, thus reducing label noise. In the absence of cloud cover, FCC labels with timing information could be automatically constructed using change detection methods such as the CCDC algorithm.

5. Conclusions

This study marks the initial step toward a sophisticated SAR-based forest alert system, demonstrating promising potential despite the challenges presented by limited data resources. The proposed framework utilizes pre-existing annual FCC maps, derived from moderate-resolution, multispectral imagery, to construct a SAR-based semantic segmentation dataset. Our analysis indicates that deep learning models are capable of generalizing despite substantial amounts of label noise. The ubiquity of FCC products developed using moderate-resolution multispectral sensors (e.g., Landsat, Sentinel-2) suggests that integrating SAR and deep learning could effectively map forest cover change in other forest ecosystems. Despite the coarse temporal (annual) resolution of the forest cover change labels used in this analysis, our SAR-based change detection approach achieved an F1-score similar to that of the GLAD forest alert system (F1-scores of 0.52 and 0.54, respectively). The capacity to streamline SAR dataset development, data aggregation, and model deployment within the Google Earth Engine cloud environment greatly simplifies the logistics of deploying the alert system, obviating the need for specialized computing hardware. The encouraging performance of our prototype, even with these constraints, underscores its potential. However, we anticipate that the ongoing collection of field data will substantially refine our training dataset, leading to enhanced model accuracy and the ability to generalize more effectively across diverse forest ecosystems.

Author Contributions

Conceptualization, J.B.K., A.P., J.S., K.T., D.B., M.G., R.K. and D.S.; Data curation, A.P.; Formal analysis, N.H.Q.; Funding acquisition, J.S., R.K. and D.S.; Investigation, N.S.T.; Methodology, J.B.K., A.P., B.B., N.S.T., N.H.Q. and J.S.; Project administration, K.T., R.K. and D.S.; Supervision, D.B. and M.G.; Validation, J.B.K., N.H.Q. and K.T.; Writing—original draft, J.B.K., B.B. and N.S.T.; Writing—review & editing, J.B.K., B.B. and R.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the joint US Agency for International Development (USAID) and National Aeronautics and Space Administration (NASA) initiative SERVIR-Mekong, Cooperative Agreement Number: AID-486-A-14-00002. Individuals affiliated with the University of Alabama in Huntsville (UAH) are funded through the NASA Applied Sciences Capacity Building Program, NASA Cooperative Agreement: NNM11AA01A.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Correction Statement

This article has been republished with a minor correction to the title. This change does not affect the scientific content of the article.

Abbreviations

The following abbreviations are used in this manuscript:
CEO     Collect Earth Online
ELC     Economic Land Concession
ESA     European Space Agency
ETM+    Enhanced Thematic Mapper Plus
FCC     Forest Cover Change
FORMA   Forest Monitoring for Action
GEE     Google Earth Engine
GLAD    Global Land Analysis and Discovery
GRD     Ground Range Detected
NICFI   Norway’s International Climate and Forests Initiative
OLI     Operational Land Imager
SAR     Synthetic Aperture Radar
SLC     Social Land Concession
VH      Vertical Horizontal
VV      Vertical Vertical

References

  1. Reiche, J.; Mullissa, A.; Slagter, B.; Gou, Y.; Tsendbazar, N.E.; Odongo-Braun, C.; Vollrath, A.; Weisse, M.J.; Stolle, F.; Pickens, A.; et al. Forest disturbance alerts for the Congo Basin using Sentinel-1. Environ. Res. Lett. 2021, 16, 024005. [Google Scholar] [CrossRef]
  2. Hansen, M.C.; Krylov, A.; Tyukavina, A.; Potapov, P.V.; Turubanova, S.; Zutta, B.; Ifo, S.; Margono, B.; Stolle, F.; Moore, R. Humid tropical forest disturbance alerts using Landsat data. Environ. Res. Lett. 2016, 11, 034008. [Google Scholar] [CrossRef]
  3. Poortinga, A.; Aekakkararungroj, A.; Kityuttachai, K.; Nguyen, Q.; Bhandari, B.; Soe Thwal, N.; Priestley, H.; Kim, J.; Tenneson, K.; Chishtie, F.; et al. Predictive analytics for identifying land cover change hotspots in the mekong region. Remote Sens. 2020, 12, 1472. [Google Scholar] [CrossRef]
  4. Moffette, F.; Alix-Garcia, J.; Shea, K.; Pickens, A.H. The impact of near-real-time deforestation alerts across the tropics. Nat. Clim. Chang. 2021, 11, 172–178. [Google Scholar] [CrossRef]
  5. Musinsky, J.; Tabor, K.; Cano, C.A.; Ledezma, J.C.; Mendoza, E.; Rasolohery, A.; Sajudin, E.R. Conservation impacts of a near real-time forest monitoring and alert system for the tropics. Remote Sens. Ecol. Conserv. 2018, 4, 189–196. [Google Scholar] [CrossRef]
  6. Finer, M.; Novoa, S.; Weisse, M.J.; Petersen, R.; Mascaro, J.; Souto, T.; Stearns, F.; Martinez, R.G. Combating deforestation: From satellite to intervention. Science 2018, 360, 1303–1305. [Google Scholar] [CrossRef]
  7. Tabor, K.M.; Holland, M.B. Opportunities for improving conservation early warning and alert systems. Remote Sens. Ecol. Conserv. 2021, 7, 7–17. [Google Scholar] [CrossRef]
  8. Oliveira, M.C.; Siqueira, L. Digitalization between environmental activism and counter-activism: The case of satellite data on deforestation in the Brazilian Amazon. Earth Syst. Gov. 2022, 12, 100135. [Google Scholar] [CrossRef]
  9. Wheeler, D.; Hammer, D.; Kraft, R.; Steele, A. Satellite-Based Forest Clearing Detection in the Brazilian Amazon: FORMA, DETER, and PRODES; World Resources Institute: Washington, DC, USA, 2014. [Google Scholar]
  10. Hammer, D.; Kraft, R.; Wheeler, D. Alerts of forest disturbance from MODIS imagery. Int. J. Appl. Earth Obs. Geoinf. 2014, 33, 1–9. [Google Scholar] [CrossRef]
  11. Wheeler, D.; Guzder-Williams, B.; Petersen, R.; Thau, D. Rapid MODIS-based detection of tree cover loss. Int. J. Appl. Earth Obs. Geoinf. 2018, 69, 78–87. [Google Scholar] [CrossRef]
  12. Hansen, M.C.; Potapov, P.; Turubanova, S. Use of coarse-resolution imagery to identify hot spots of forest loss at the global scale. In Global Forest Monitoring from Earth Observation; CRC Press: Boca Raton, FL, USA, 2012; pp. 93–109. [Google Scholar]
  13. Diniz, C.G.; de Almeida Souza, A.A.; Santos, D.C.; Dias, M.C.; da Luz, N.C.; de Moraes, D.R.V.; Maia, J.S.; Gomes, A.R.; da Silva Narvaes, I.; Valeriano, D.M.; et al. DETER-B: The new Amazon near real-time deforestation detection system. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 3619–3628. [Google Scholar] [CrossRef]
  14. Vargas, C.; Montalban, J.; Leon, A.A. Early warning tropical forest loss alerts in Peru using Landsat. Environ. Res. Commun. 2019, 1, 121002. [Google Scholar] [CrossRef]
  15. Ye, S.; Rogan, J.; Zhu, Z.; Eastman, J.R. A near-real-time approach for monitoring forest disturbance using Landsat time series: Stochastic continuous change detection. Remote Sens. Environ. 2021, 252, 112167. [Google Scholar] [CrossRef]
  16. Sannier, C.; McRoberts, R.E.; Fichet, L.V.; Makaga, E.M.K. Using the regression estimator with Landsat data to estimate proportion forest cover and net proportion deforestation in Gabon. Remote Sens. Environ. 2014, 151, 138–148. [Google Scholar] [CrossRef]
  17. Hansen, M.C.; Stehman, S.V.; Potapov, P.V.; Loveland, T.R.; Townshend, J.R.; DeFries, R.S.; Pittman, K.W.; Arunarwati, B.; Stolle, F.; Steininger, M.K.; et al. Humid tropical forest clearing from 2000 to 2005 quantified by using multitemporal and multiresolution remotely sensed data. Proc. Natl. Acad. Sci. USA 2008, 105, 9439–9444. [Google Scholar] [CrossRef]
  18. Bouvet, A.; Mermoz, S.; Ballère, M.; Koleck, T.; Le Toan, T. Use of the SAR shadowing effect for deforestation detection with Sentinel-1 time series. Remote Sens. 2018, 10, 1250. [Google Scholar] [CrossRef]
  19. Sinha, S.; Jeganathan, C.; Sharma, L.; Nathawat, M. A review of radar remote sensing for biomass estimation. Int. J. Environ. Sci. Technol. 2015, 12, 1779–1792. [Google Scholar] [CrossRef]
  20. Kasischke, E.S.; Melack, J.M.; Dobson, M.C. The use of imaging radars for ecological applications—A review. Remote Sens. Environ. 1997, 59, 141–156. [Google Scholar] [CrossRef]
  21. Omar, H.; Misman, M.A.; Kassim, A.R. Synergetic of PALSAR-2 and Sentinel-1A SAR polarimetry for retrieving aboveground biomass in dipterocarp forest of Malaysia. Appl. Sci. 2017, 7, 675. [Google Scholar] [CrossRef]
  22. Kellndorfer, J.; Flores-Anderson, A.; Herndon, K.; Thapa, R. Using SAR data for mapping deforestation and forest degradation. In The SAR Handbook. Comprehensive Methodologies for Forest Monitoring and Biomass Estimation; ServirGlobal: Hunstville, AL, USA, 2019; pp. 65–79. [Google Scholar]
  23. Watanabe, M.; Koyama, C.; Hayashi, M.; Kaneko, Y.; Shimada, M. Development of early-stage deforestation detection algorithm (advanced) with PALSAR-2/ScanSAR for JICA-JAXA program (JJ-FAST). In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 2446–2449. [Google Scholar]
  24. Raspini, F.; Bianchini, S.; Ciampalini, A.; Del Soldato, M.; Solari, L.; Novali, F.; Del Conte, S.; Rucci, A.; Ferretti, A.; Casagli, N. Continuous, semi-automatic monitoring of ground deformation using Sentinel-1 satellites. Sci. Rep. 2018, 8, 7253. [Google Scholar] [CrossRef]
  25. Reiche, J.; Hamunyela, E.; Verbesselt, J.; Hoekman, D.; Herold, M. Improving near-real time deforestation monitoring in tropical dry forests by combining dense Sentinel-1 time series with Landsat and ALOS-2 PALSAR-2. Remote Sens. Environ. 2018, 204, 147–161. [Google Scholar] [CrossRef]
  26. Ballère, M.; Bouvet, A.; Mermoz, S.; Le Toan, T.; Koleck, T.; Bedeau, C.; André, M.; Forestier, E.; Frison, P.L.; Lardeux, C. SAR data for tropical forest disturbance alerts in French Guiana: Benefit over optical imagery. Remote Sens. Environ. 2021, 252, 112159. [Google Scholar] [CrossRef]
  27. Ban, Y.; Zhang, P.; Nascetti, A.; Bevington, A.R.; Wulder, M.A. Near real-time wildfire progression monitoring with Sentinel-1 SAR time series and deep learning. Sci. Rep. 2020, 10, 1322. [Google Scholar] [CrossRef] [PubMed]
  28. Zhang, P.; Ban, Y.; Nascetti, A. Learning U-Net without forgetting for near real-time wildfire monitoring by the fusion of SAR and optical time series. Remote Sens. Environ. 2021, 261, 112467. [Google Scholar] [CrossRef]
  29. Minasny, B.; Shah, R.M.; Che Soh, N.; Arif, C.; Indra Setiawan, B. Automated near-real-time mapping and monitoring of rice extent, cropping patterns, and growth stages in Southeast Asia using Sentinel-1 time series on a Google Earth Engine platform. Remote Sens. 2019, 11, 1666. [Google Scholar]
  30. Sawant, S.; Mohite, J.; Sakkan, M.; Pappula, S. Near real time crop loss estimation using remote sensing observations. In Proceedings of the 2019 8th International Conference on Agro-Geoinformatics (Agro-Geoinformatics), Istanbul, Turkey, 16–19 July 2019; pp. 1–5. [Google Scholar]
  31. Bazzi, H.; Baghdadi, N.; Fayad, I.; Zribi, M.; Belhouchette, H.; Demarez, V. Near real-time irrigation detection at plot scale using sentinel-1 data. Remote Sens. 2020, 12, 1456. [Google Scholar] [CrossRef]
  32. DeVries, B.; Huang, C.; Armston, J.; Huang, W.; Jones, J.W.; Lang, M.W. Rapid and robust monitoring of flood events using Sentinel-1 and Landsat data on the Google Earth Engine. Remote Sens. Environ. 2020, 240, 111664. [Google Scholar] [CrossRef]
  33. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  34. Bengio, Y.; Courville, A.; Vincent, P. Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1798–1828. [Google Scholar] [CrossRef]
  35. Mayer, T.; Poortinga, A.; Bhandari, B.; Nicolau, A.P.; Markert, K.; Thwal, N.S.; Markert, A.; Haag, A.; Kilbride, J.; Chishtie, F.; et al. Deep learning approach for Sentinel-1 surface water mapping leveraging Google Earth Engine. ISPRS Open J. Photogramm. Remote Sens. 2021, 2, 100005. [Google Scholar] [CrossRef]
  36. Bonafilia, D.; Tellman, B.; Anderson, T.; Issenberg, E. Sen1Floods11: A georeferenced dataset to train and test deep learning flood algorithms for Sentinel-1. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 210–211. [Google Scholar]
  37. Ndikumana, E.; Ho Tong Minh, D.; Baghdadi, N.; Courault, D.; Hossard, L. Deep recurrent neural network for agricultural classification using multitemporal SAR Sentinel-1 for Camargue, France. Remote Sens. 2018, 10, 1217. [Google Scholar] [CrossRef]
  38. Qu, Y.; Zhao, W.; Yuan, Z.; Chen, J. Crop Mapping from Sentinel-1 Polarimetric Time-Series with a Deep Neural Network. Remote Sens. 2020, 12, 2493. [Google Scholar] [CrossRef]
  39. Ghosh, S.M.; Behera, M.D. Aboveground biomass estimates of tropical mangrove forest using Sentinel-1 SAR coherence data-The superiority of deep learning over a semi-empirical model. Comput. Geosci. 2021, 150, 104737. [Google Scholar] [CrossRef]
  40. Wang, Y.; Wang, C.; Zhang, H. Combining a single shot multibox detector with transfer learning for ship detection using sentinel-1 SAR images. Remote Sens. Lett. 2018, 9, 780–788. [Google Scholar] [CrossRef]
  41. Ren, Y.; Li, X.; Xu, H. A Deep Learning Model to Extract Ship Size From Sentinel-1 SAR Images. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5203414. [Google Scholar] [CrossRef]
  42. Kang, M.; Leng, X.; Lin, Z.; Ji, K. A modified faster R-CNN based on CFAR algorithm for SAR ship detection. In Proceedings of the 2017 International Workshop on Remote Sensing with Intelligent Processing (RSIP), Shanghai, China, 18–21 May 2017; pp. 1–4. [Google Scholar]
  43. Belenguer-Plomer, M.A.; Tanase, M.A.; Chuvieco, E.; Bovolo, F. CNN-based burned area mapping using radar and optical data. Remote Sens. Environ. 2021, 260, 112468. [Google Scholar] [CrossRef]
  44. Christin, S.; Hervet, É.; Lecomte, N. Applications for deep learning in ecology. Methods Ecol. Evol. 2019, 10, 1632–1644. [Google Scholar] [CrossRef]
  45. Rolnick, D.; Veit, A.; Belongie, S.; Shavit, N. Deep learning is robust to massive label noise. arXiv 2017, arXiv:1705.10694. [Google Scholar]
  46. Tai, X.; Wang, G.; Grecos, C.; Ren, P. Coastal image classification under noisy labels. J. Coast. Res. 2020, 102, 151–156. [Google Scholar] [CrossRef]
  47. Rahaman, M.; Hillas, M.M.; Tuba, J.; Ruma, J.F.; Ahmed, N.; Rahman, R.M. Effects of Label Noise on Performance of Remote Sensing and Deep Learning-Based Water Body Segmentation Models. Cybern. Syst. 2022, 53, 581–606. [Google Scholar] [CrossRef]
  48. Davis, K.F.; Yu, K.; Rulli, M.C.; Pichdara, L.; D’Odorico, P. Accelerated deforestation driven by large-scale land acquisitions in Cambodia. Nat. Geosci. 2015, 8, 772–775. [Google Scholar] [CrossRef]
  49. Grogan, K.; Pflugmacher, D.; Hostert, P.; Mertz, O.; Fensholt, R. Unravelling the link between global rubber price and tropical deforestation in Cambodia. Nat. Plants 2019, 5, 47–53. [Google Scholar] [CrossRef] [PubMed]
  50. Global Initiative. Forest Crimes in Cambodia: Rings of Illegality in Prey Lang Wildlife Sanctuary; Global Initiative: New York, NY, USA, 2021. [Google Scholar]
  51. Hansen, M.C.; Potapov, P.V.; Moore, R.; Hancher, M.; Turubanova, S.A.; Tyukavina, A.; Thau, D.; Stehman, S.; Goetz, S.J.; Loveland, T.R.; et al. High-resolution global maps of 21st-century forest cover change. Science 2013, 342, 850–853. [Google Scholar] [CrossRef] [PubMed]
  52. Potapov, P.; Hansen, M.C.; Kommareddy, I.; Kommareddy, A.; Turubanova, S.; Pickens, A.; Adusei, B.; Tyukavina, A.; Ying, Q. Landsat analysis ready data for global land cover and land cover change mapping. Remote Sens. 2020, 12, 426. [Google Scholar] [CrossRef]
  53. Gorelick, N.; Hancher, M.; Dixon, M.; Ilyushchenko, S.; Thau, D.; Moore, R. Google Earth Engine: Planetary-scale geospatial analysis for everyone. Remote Sens. Environ. 2017, 202, 18–27. [Google Scholar] [CrossRef]
  54. Filipponi, F. Sentinel-1 GRD preprocessing workflow. Proceedings 2019, 18, 11. [Google Scholar]
  55. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems, 2015. Software. Available online: https://www.tensorflow.org (accessed on 1 August 2021).
  56. Parekh, J.R.; Poortinga, A.; Bhandari, B.; Mayer, T.; Saah, D.; Chishtie, F. Automatic detection of impervious surfaces from remotely sensed data using deep learning. Remote Sens. 2021, 13, 3166. [Google Scholar] [CrossRef]
  57. Poortinga, A.; Thwal, N.S.; Khanal, N.; Mayer, T.; Bhandari, B.; Markert, K.; Nicolau, A.P.; Dilger, J.; Tenneson, K.; Clinton, N.; et al. Mapping sugarcane in Thailand using transfer learning, a lightweight convolutional neural network, NICFI high resolution satellite imagery and Google Earth Engine. ISPRS Open J. Photogramm. Remote Sens. 2021, 1, 100003. [Google Scholar] [CrossRef]
  58. Saah, D.; Johnson, G.; Ashmall, B.; Tondapu, G.; Tenneson, K.; Patterson, M.; Poortinga, A.; Markert, K.; Quyen, N.H.; San Aung, K.; et al. Collect Earth: An online tool for systematic reference data collection in land cover and use applications. Environ. Model. Softw. 2019, 118, 166–171. [Google Scholar] [CrossRef]
  59. Hayes, B.; Khou, E.; Neang, T.; Furey, N.; Chhin, S.; Holden, J.; Hun, S.; Phen, S.; La, P.; Simpson, V. Biodiversity Assessment of Prey Lang: Kratie, Kampong Thom, Stung Treng and Preah Vihear Provinces; Conservation International, Winrock International, USAID: Phnom Penh, Cambodia, 2015. [Google Scholar]
  60. Theilade, I.; Schmidt, L.; Chhang, P.; McDonald, J.A. Evergreen swamp forest in Cambodia: Floristic composition, ecological characteristics, and conservation status. Nord. J. Bot. 2011, 29, 71–80. [Google Scholar] [CrossRef]
  61. Zhu, Z.; Wang, S.; Woodcock, C.E. Improvement and expansion of the Fmask algorithm: Cloud, cloud shadow, and snow detection for Landsats 4–7, 8, and Sentinel 2 images. Remote Sens. Environ. 2015, 159, 269–277. [Google Scholar] [CrossRef]
  62. European Space Agency. Sentinel-1 SAR User Guide. Available online: https://sentinel.esa.int/web/sentinel/user-guides/sentinel-1-sar (accessed on 1 August 2021).
  63. Lee, J.S.; Wen, J.H.; Ainsworth, T.L.; Chen, K.S.; Chen, A.J. Improved Sigma Filter for Speckle Filtering of SAR Imagery. IEEE Trans. Geosci. Remote Sens. 2009, 47, 202–213. [Google Scholar] [CrossRef]
  64. Vollrath, A.; Mullissa, A.; Reiche, J. Angular-based radiometric slope correction for Sentinel-1 on google earth engine. Remote Sens. 2020, 12, 1867. [Google Scholar] [CrossRef]
  65. Potapov, P.; Tyukavina, A.; Turubanova, S.; Talero, Y.; Hernandez-Serna, A.; Hansen, M.; Saah, D.; Tenneson, K.; Poortinga, A.; Aekakkararungroj, A.; et al. Annual continuous fields of woody vegetation structure in the Lower Mekong region from 2000–2017 Landsat time-series. Remote Sens. Environ. 2019, 232, 111278. [Google Scholar] [CrossRef]
  66. Carroll, M.; Townshend, J.; Hansen, M.; DiMiceli, C.; Sohlberg, R.; Wurster, K. MODIS vegetative cover conversion and vegetation continuous fields. In Land Remote Sensing and Global Environmental Change; Springer: New York, NY, USA, 2010; pp. 725–745. [Google Scholar]
  67. Chander, G.; Markham, B.L.; Helder, D.L. Summary of current radiometric calibration coefficients for Landsat MSS, TM, ETM+, and EO-1 ALI sensors. Remote Sens. Environ. 2009, 113, 893–903. [Google Scholar] [CrossRef]
  68. Breiman, L.; Friedman, J.H.; Olshen, R.A.; Stone, C.J. Classification and Regression Trees; Routledge: London, UK, 1984. [Google Scholar]
  69. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
  70. Howard, A.; Sandler, M.; Chu, G.; Chen, L.C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V.; et al. Searching for mobilenetv3. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1314–1324. [Google Scholar]
  71. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 448–456. [Google Scholar]
  72. Nair, V.; Hinton, G.E. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), Haifa, Israel, 21–24 June 2010. [Google Scholar]
  73. Odena, A.; Dumoulin, V.; Olah, C. Deconvolution and checkerboard artifacts. Distill 2016, 1, e3. [Google Scholar] [CrossRef]
  74. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
  75. Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar]
  76. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
  77. Ramachandran, P.; Zoph, B.; Le, Q.V. Searching for activation functions. arXiv 2017, arXiv:1710.05941. [Google Scholar]
  78. Milletari, F.; Navab, N.; Ahmadi, S. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571. [Google Scholar]
  79. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  80. Yu, X.; Wu, X.; Luo, C.; Ren, P. Deep learning in remote sensing scene classification: A data augmentation enhanced convolutional neural network framework. GISci. Remote Sens. 2017, 54, 741–758. [Google Scholar] [CrossRef]
  81. Shorten, C.; Khoshgoftaar, T.M. A survey on image data augmentation for deep learning. J. Big Data 2019, 6, 60. [Google Scholar] [CrossRef]
  82. Ilyushchenko, V. Google Earth Engine Developers. Available online: https://groups.google.com/g/google-earth-engine-developers (accessed on 1 August 2023).
  83. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  84. Zhang, C.; Bengio, S.; Hardt, M.; Recht, B.; Vinyals, O. Understanding deep learning (still) requires rethinking generalization. Commun. ACM 2021, 64, 107–115. [Google Scholar] [CrossRef]
  85. Ploton, P.; Mortier, F.; Réjou-Méchain, M.; Barbier, N.; Picard, N.; Rossi, V.; Dormann, C.; Cornu, G.; Viennois, G.; Bayol, N.; et al. Spatial validation reveals poor predictive performance of large-scale ecological mapping models. Nat. Commun. 2020, 11, 4540. [Google Scholar] [CrossRef] [PubMed]
  86. Huang, Z.; Dumitru, C.O.; Pan, Z.; Lei, B.; Datcu, M. Classification of large-scale high-resolution SAR images with deep transfer learning. IEEE Geosci. Remote Sens. Lett. 2020, 18, 107–111. [Google Scholar] [CrossRef]
  87. Huang, Z.; Pan, Z.; Lei, B. Transfer learning with deep convolutional neural network for SAR target classification with limited labeled data. Remote Sens. 2017, 9, 907. [Google Scholar] [CrossRef]
  88. Zhu, Z.; Woodcock, C.E. Continuous change detection and classification of land cover using all available Landsat data. Remote Sens. Environ. 2014, 144, 152–171. [Google Scholar] [CrossRef]
Figure 1. The proposed framework uses Google Earth Engine (GEE) to pre-process the SAR imagery and build the deep learning training datasets. The training datasets are exported from GEE and used to train models on the Google AI Platform. The final models are then hosted on the AI Platform and accessed from GEE to classify forest cover change in new SAR acquisitions as they appear in the GEE Sentinel-1 cloud storage buckets. A validation dataset was developed using the Collect Earth Online photo interpretation tool.
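For readers who want to reproduce the deployment step sketched in Figure 1, Earth Engine exposes hosted models through its ee.Model interface. The snippet below is a minimal illustration rather than the authors' production code: the project name, model name, version, tile sizes, projection, and output band are placeholder assumptions.

```python
import ee

ee.Initialize()

# Connect to a model hosted on the Google AI Platform. All identifiers
# below are hypothetical, not the values used in this study.
model = ee.Model.fromAiPlatformPredictor(
    projectName='my-gcp-project',
    modelName='sar_fcc_unet',
    version='v1',
    # Tile and overlap sizes control how GEE slices the image into
    # patches before sending them to the hosted network.
    inputTileSize=[128, 128],
    inputOverlapSize=[16, 16],
    proj=ee.Projection('EPSG:32648').atScale(10),  # UTM zone over Cambodia (assumed)
    fixInputProj=True,
    outputBands={'probability': {'type': ee.PixelType.float()}})

# Classify a pre-processed Sentinel-1 VV/VH composite.
s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
      .filterDate('2020-01-01', '2020-01-13')
      .filterBounds(ee.Geometry.Point(105.5, 13.0))
      .select(['VV', 'VH'])
      .mosaic())

# predictImage expects an array image whose bands match the model input.
probability = model.predictImage(s1.toArray())
```

In this pattern the network weights never leave the AI Platform; GEE streams image patches to the hosted endpoint and reassembles the predictions, which is what lets new acquisitions be classified as they land in the Sentinel-1 collection.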
Figure 2. The study area of this analysis comprises two wildlife sanctuaries, Prey Lang and Beng Per (A). To aggregate sufficient training data, however, all of Cambodia was considered (black outline). The region experiences significant cloud cover during the monsoon season. Panel (B) shows the percentage of Landsat 7 ETM+ and Landsat 8 OLI observations that were flagged as occluded in 2020.
Figure 3. The MobileNetV3-UNet model used to map forest cover change events in this analysis.
Figure 4. The point locations indicate plots where photointerpreters visually assessed the timing of the SAR-based forest alerts. Locations were selected using a stratified random sampling approach in 2018, 2019, and 2020. The forest/non-forest base map was derived from the Global Forest Watch dataset [51].
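The stratified random sampling described in the Figure 4 caption can be approximated in Earth Engine with the stratifiedSample method. The sketch below is illustrative only: the stratification asset, band name, point counts, and region are hypothetical stand-ins, not the study's actual sampling design.

```python
import ee

ee.Initialize()

# A hypothetical image whose 'stratum' band encodes change/no-change strata.
strata = ee.Image('projects/example/assets/fcc_strata')  # placeholder asset

samples = strata.stratifiedSample(
    numPoints=100,       # points drawn per class by default
    classBand='stratum',
    region=ee.Geometry.Rectangle([104.0, 12.0, 107.0, 14.5]),  # assumed extent
    scale=10,
    seed=42,
    geometries=True)     # keep point geometries for export to Collect Earth Online
```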
Figure 5. The cumulative distribution of forest alerts issued by the ascending, descending, and combined alert systems over the course of 2018, 2019, and 2020. Darker colors indicate a lower inclusion threshold that results in a greater rate of mapped disturbance. The red horizontal line indicates the count of pixels in the reference FCC product (at a 10 m scale to match the SAR systems) that was used to create the neural network training dataset.
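Conceptually, each curve in Figure 5 accumulates, for a given inclusion threshold, the distinct pixels whose predicted change probability has exceeded that threshold by each date. A minimal NumPy sketch follows; the array layout and function name are illustrative assumptions, not the study's code.

```python
import numpy as np

def cumulative_alerts(prob_stack, threshold):
    """prob_stack: array of shape (dates, height, width) holding each
    acquisition's predicted change probability. Returns the cumulative
    number of distinct alerted pixels after each date."""
    alerted = np.zeros(prob_stack.shape[1:], dtype=bool)
    counts = []
    for prob in prob_stack:
        alerted |= prob >= threshold  # a pixel alerts once, then stays alerted
        counts.append(int(alerted.sum()))
    return counts

# A lower threshold admits more pixels, i.e., a greater mapped disturbance
# rate, consistent with the darker curves in Figure 5 (synthetic data here).
rng = np.random.default_rng(0)
stack = rng.random((36, 64, 64))  # 36 synthetic acquisitions
assert cumulative_alerts(stack, 0.9)[-1] < cumulative_alerts(stack, 0.5)[-1]
```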
Figure 6. The accuracy of the SAR classifiers plotted against the classifiers' precision and recall scores. The red vertical line indicates the threshold selected for each classifier.
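The operating points marked in Figure 6 can be chosen by sweeping candidate inclusion thresholds on a held-out set and scoring each with the metrics in Table 1. The sketch below picks the threshold that maximizes F1; this criterion and the function names are assumptions, since the selection rule is presented graphically rather than as code.

```python
import numpy as np

def precision_recall_f1(y_true, y_pred):
    """Precision, recall, and F1 from flat binary arrays."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

def select_threshold(y_true, y_prob, thresholds=np.arange(0.05, 1.0, 0.05)):
    """Return the candidate inclusion threshold with the highest F1."""
    f1s = [precision_recall_f1(y_true, (y_prob >= t).astype(int))[2]
           for t in thresholds]
    return float(thresholds[int(np.argmax(f1s))])
```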
Figure 7. Three 2.5 km-by-2.5 km subsets within the Prey Lang wildlife sanctuary (bold black outline) displaying the forest alerts issued by the descending SAR deep learning classifier for 2018, 2019, and 2020. The forest alerts are rendered over a false-color Sentinel-2 image.
Figure 8. The total number of alerts issued by the descending SAR classifier using an inclusion threshold of 0.25 and by the GLAD alert system. Each GLAD alert was counted nine times because the GLAD alert system maps FCC at a 30 m spatial resolution while the SAR-based system maps FCC at a 10 m resolution.
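The factor of nine follows directly from the ratio of pixel areas: $(30\,\text{m})^2 / (10\,\text{m})^2 = 900\,\text{m}^2 / 100\,\text{m}^2 = 9$, so each 30 m GLAD pixel spans a 3 × 3 block of 10 m Sentinel-1 pixels.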
Table 1. Accuracy metrics that were tracked during model training. TP stands for true positive. TN stands for true negative. FP stands for false positive. FN stands for false negative.

| Metric | Formulation |
|---|---|
| Accuracy | $\frac{TN + TP}{TP + FP + TN + FN}$ |
| Recall | $\frac{TP}{TP + FN}$ |
| Precision | $\frac{TP}{TP + FP}$ |
| F1-score | $\frac{2(\mathrm{Recall} \cdot \mathrm{Precision})}{\mathrm{Recall} + \mathrm{Precision}}$ |
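In a TensorFlow workflow such as the one referenced here [55], these quantities can be tracked with built-in Keras metrics. The following is a hedged sketch: `model`, `train_ds`, `val_ds`, and the optimizer settings are placeholders rather than the paper's configuration, and F1 is derived afterward from the logged precision and recall.

```python
import tensorflow as tf

# Assumes `model` is a binary segmentation network and `train_ds`/`val_ds`
# are tf.data pipelines of (image, mask) pairs -- both placeholders here.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # assumed rate
    loss=tf.keras.losses.BinaryCrossentropy(),
    metrics=[
        tf.keras.metrics.BinaryAccuracy(name='accuracy'),
        tf.keras.metrics.Precision(name='precision'),
        tf.keras.metrics.Recall(name='recall'),
    ])

# 25 epochs matches the training length reported in Table 2.
history = model.fit(train_ds, validation_data=val_ds, epochs=25)

# F1 follows from the logged validation precision and recall (Table 1).
p = history.history['val_precision'][-1]
r = history.history['val_recall'][-1]
f1 = 2 * p * r / (p + r)
```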
Table 2. The test set error statistics for the ascending and descending orbit MobileNetV3-UNet forest cover change classifiers after training the models for 25 epochs.

| Model | Accuracy | F1 | CE Loss | Precision | Recall |
|---|---|---|---|---|---|
| Ascending Orbit | 0.92 | 0.91 | 0.051 | 0.89 | 0.92 |
| Descending Orbit | 0.90 | 0.90 | 0.056 | 0.88 | 0.92 |
Table 3. A comparison of the accuracy of each SAR forest alert system using the optimal inclusion threshold. The accuracy assessment is conducted with respect to the photointerpretation dataset. Bolded values indicate the best performing model for each season.

| Year | Alert System | Season | Loss (n) | No Loss (n) | All (n) | Accuracy (%) | Precision | Recall | F1-Score |
|---|---|---|---|---|---|---|---|---|---|
| 2018 | Ascending | Dry | 167 | 111 | 278 | 62.23 | 0.52 | 0.77 | 0.62 |
| 2018 | Descending | Dry | 167 | 111 | 278 | **70.14** | **0.60** | **0.78** | **0.68** |
| 2018 | Combined | Dry | 167 | 111 | 278 | 67.63 | 0.58 | 0.72 | 0.64 |
| 2018 | Ascending | Wet | 100 | 111 | 211 | 51.66 | 0.53 | 0.77 | 0.62 |
| 2018 | Descending | Wet | 100 | 111 | 211 | **63.98** | **0.63** | **0.78** | **0.70** |
| 2018 | Combined | Wet | 100 | 111 | 211 | 51.18 | 0.53 | 0.72 | 0.61 |
| 2018 | Ascending | All | 267 | 111 | 378 | 52.12 | 0.35 | 0.77 | 0.48 |
| 2018 | Descending | All | 267 | 111 | 378 | **64.29** | **0.44** | **0.78** | **0.56** |
| 2018 | Combined | All | 267 | 111 | 378 | 57.14 | 0.38 | 0.72 | 0.50 |
| 2019 | Ascending | Dry | 163 | 115 | 278 | 64.39 | 0.56 | 0.70 | 0.62 |
| 2019 | Descending | Dry | 163 | 115 | 278 | 63.31 | 0.54 | 0.80 | 0.64 |
| 2019 | Combined | Dry | 163 | 115 | 278 | **71.22** | **0.61** | **0.82** | **0.70** |
| 2019 | Ascending | Wet | 120 | 115 | 235 | 45.96 | 0.47 | 0.70 | 0.56 |
| 2019 | Descending | Wet | 120 | 115 | 235 | **63.83** | **0.60** | **0.80** | **0.68** |
| 2019 | Combined | Wet | 120 | 115 | 235 | 56.17 | 0.53 | 0.82 | 0.65 |
| 2019 | Ascending | All | 283 | 115 | 398 | 52.01 | 0.34 | 0.70 | 0.46 |
| 2019 | Descending | All | 283 | 115 | 398 | 58.79 | 0.39 | 0.80 | 0.53 |
| 2019 | Combined | All | 283 | 115 | 398 | **59.30** | **0.40** | **0.82** | **0.54** |
| 2020 | Ascending | Dry | 81 | 47 | 128 | 46.88 | 0.32 | 0.38 | 0.35 |
| 2020 | Descending | Dry | 81 | 47 | 128 | 46.88 | 0.39 | 0.77 | 0.51 |
| 2020 | Combined | Dry | 81 | 47 | 128 | **57.03** | **0.45** | **0.72** | **0.55** |
| 2020 | Ascending | Wet | 179 | 47 | 226 | 41.15 | 0.15 | 0.38 | 0.21 |
| 2020 | Descending | Wet | 179 | 47 | 226 | **57.96** | **0.30** | **0.77** | **0.43** |
| 2020 | Combined | Wet | 179 | 47 | 226 | 47.35 | 0.24 | 0.72 | 0.36 |
| 2020 | Ascending | All | 260 | 47 | 307 | 43.97 | 0.11 | 0.38 | 0.17 |
| 2020 | Descending | All | 260 | 47 | 307 | **50.49** | **0.20** | **0.77** | **0.32** |
| 2020 | Combined | All | 260 | 47 | 307 | 47.56 | 0.19 | 0.72 | 0.30 |
| Overall | Ascending | All | 810 | 273 | 1083 | 49.77 | 0.29 | 0.67 | 0.40 |
| Overall | Descending | All | 810 | 273 | 1083 | **58.36** | **0.35** | **0.79** | **0.49** |
| Overall | Combined | All | 810 | 273 | 1083 | 55.22 | 0.33 | 0.76 | 0.46 |
Table 4. A comparison of the accuracy of the best performing SAR alert system and the GLAD alert system. The accuracy assessment is conducted relative to the photointerpretation dataset. The accuracy statistics for the best performing model in each season are bolded.

| Year | Alert System | Season | Loss (n) | No Loss (n) | All (n) | Accuracy (%) | Precision | Recall | F1-Score |
|---|---|---|---|---|---|---|---|---|---|
| 2018 | Descending | Dry | 167 | 111 | 278 | 67.63 | 0.58 | 0.72 | 0.64 |
| 2018 | GLAD | Dry | 167 | 111 | 278 | **70.14** | **0.60** | **0.78** | **0.68** |
| 2018 | Descending | Wet | 100 | 111 | 211 | 51.18 | 0.53 | 0.72 | 0.61 |
| 2018 | GLAD | Wet | 100 | 111 | 211 | **63.98** | **0.63** | **0.78** | **0.70** |
| 2018 | Descending | All | 267 | 111 | 378 | 57.14 | 0.38 | 0.72 | 0.50 |
| 2018 | GLAD | All | 267 | 111 | 378 | **64.29** | **0.44** | **0.78** | **0.56** |
| 2019 | Descending | Dry | 163 | 115 | 278 | **71.22** | **0.61** | **0.82** | **0.70** |
| 2019 | GLAD | Dry | 163 | 115 | 278 | 63.31 | 0.54 | 0.80 | 0.64 |
| 2019 | Descending | Wet | 120 | 115 | 235 | 56.17 | 0.53 | 0.82 | 0.65 |
| 2019 | GLAD | Wet | 120 | 115 | 235 | **63.83** | **0.60** | **0.80** | **0.68** |
| 2019 | Descending | All | 283 | 115 | 398 | **59.30** | **0.40** | **0.82** | **0.54** |
| 2019 | GLAD | All | 283 | 115 | 398 | 58.79 | 0.39 | 0.80 | 0.53 |
| Overall | Descending | Dry | 330 | 226 | 556 | **69.42** | **0.60** | **0.77** | **0.67** |
| Overall | GLAD | Dry | 330 | 226 | 556 | 66.73 | 0.56 | 0.79 | 0.66 |
| Overall | Descending | Wet | 220 | 226 | 446 | 53.81 | 0.53 | 0.77 | 0.63 |
| Overall | GLAD | Wet | 220 | 226 | 446 | **63.90** | **0.61** | **0.79** | **0.69** |
| Overall | Descending | All | 550 | 226 | 776 | 58.25 | 0.39 | 0.77 | 0.52 |
| Overall | GLAD | All | 550 | 226 | 776 | **61.47** | **0.42** | **0.79** | **0.54** |