Article

Mapping the Continuous Cover of Invasive Noxious Weed Species Using Sentinel-2 Imagery and a Novel Convolutional Neural Regression Network

1 College of Geography and Remote Sensing, Hohai University, Nanjing 211100, China
2 School of Earth Science and Engineering, Hohai University, Nanjing 211100, China
3 The National Key Laboratory of Water Disaster Prevention, Hohai University, Nanjing 210024, China
4 Yangtze Institute for Conservation and Development, Hohai University, Nanjing 210024, China
5 Department of Geography and Planning, University of Saskatchewan, Saskatoon, SK S7N 5C8, Canada
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(9), 1648; https://doi.org/10.3390/rs16091648
Submission received: 4 March 2024 / Revised: 28 April 2024 / Accepted: 1 May 2024 / Published: 6 May 2024

Abstract

Invasive noxious weed species (INWS) are typical poisonous plants and forbs that are considered an increasing threat to the native alpine grassland ecosystems in the Qinghai–Tibetan Plateau (QTP). Accurate knowledge of the continuous cover of INWS across complex alpine grassland ecosystems over a large scale is required for their control and management. However, the co-occurrence of INWS and native grass species results in highly heterogeneous grass communities and generates mixed pixels detected by remote sensors, which causes uncertainty in classification. The continuous coverage of INWS at the pixel level has not yet been achieved. In this study, objective 1 was to test the capability of Sentinel-2 imagery for estimating continuous INWS cover across complex alpine grasslands over a large scale and objective 2 was to assess the performance of the state-of-the-art convolutional neural network-based regression (CNNR) model in estimating continuous INWS cover. Therefore, a novel CNNR model and a random forest regression (RFR) model were evaluated for estimating INWS continuous cover using Sentinel-2 imagery. INWS continuous cover was estimated directly from Sentinel-2 imagery with an R2 ranging from 0.88 to 0.93 using the CNNR model. The RFR model combined with multiple features had a comparable accuracy, which was slightly lower than that of the CNNR model, with an R2 of approximately 0.85. Twelve green band-, red-edge band-, and near-infrared band-related features had important contributions to the RFR model. Our results demonstrate that the CNNR model performs well when estimating INWS continuous cover directly from Sentinel-2 imagery, and the RFR model combined with multiple features derived from the Sentinel-2 imagery can also be used for INWS continuous cover mapping. Sentinel-2 imagery is suitable for mapping continuous INWS cover across complex alpine grasslands over a large scale. Our research provides information for the advanced mapping of the continuous cover of invasive species across complex grassland ecosystems or, more widely, terrestrial ecosystems over large spatial areas using remote sensors such as Sentinel-2.

1. Introduction

Invasive noxious weed species (INWS) are typical poisonous plants and forbs [1]. INWS are currently considered a major threat to global natural ecosystems [2,3,4]. On the one hand, invasive species have dramatic impacts on biodiversity, ecosystem function, and services [5,6]. The high competitive ability, strong growth rates, and rapid vegetative reproduction of invasive species lead to the replacement of native species as they establish dominant communities [6,7], alter soil properties and ecosystem nutrient cycles [8,9], and hence threaten the sustainability of the functions and services of native ecosystems [10,11]. On the other hand, invasive species also cause economic losses due to the expense invested in their management, productivity reductions caused by overgrowth, and the altered use values of ecosystem services [12,13]. Additionally, invasive species affect human well-being through disruptions of natural environmental conditions and the loss of landscape aesthetics [14,15,16]. As the impacts of INWS on the alpine grassland ecosystem of the Qinghai–Tibetan Plateau (QTP) have lasted for decades and pose a series of severe ecological, economic, and social risks, effectively understanding and managing invasive species is particularly essential for the sustainable development of these ecosystems [17,18,19].
The fundamental management of and intervention against invasive species require a detailed understanding of their invasion and spread processes, such as their spatial distribution extent and cover [19,20]. Up-to-date maps describing the current status of invasive species are extremely valuable for the decision-making and planning of preintervention and quick-response conservation measures in the early invasion stage [6,21]. However, the evolution of invasive species is commonly complex and spatially and temporally dynamic, and the diversity of the corresponding anthropogenic and natural drivers makes them harder to detect [20]. Previous studies have accomplished these aims with field-based investigations and acquired detailed site-level invasive species data; however, this approach is laborious, time-consuming, and costly. In addition, field-based surveys primarily focus on the local scale and are often limited in spatial extent, lacking a synoptic view and being insufficient for collecting larger-scale invasive species information, which may hamper a comprehensive understanding of ongoing invasive species expansion [22,23]. In this regard, remote sensing techniques have been suggested as an efficient alternative for continuous invasive species detection, monitoring, and mapping over large spatial areas [24,25].
Previous remote sensing applications for mapping invasive species were typically based on a combination of spectral [26,27,28], texture and object-based [24,29,30], or functional [31,32,33] features of the target species extracted from optical multispectral [34,35], hyperspectral [36,37,38], and synthetic aperture radar (SAR) [39,40,41] data sources. For instance, studies have used both multispectral and hyperspectral imagery collected by airborne and satellite sources to detect woody and grass invasive species [6,42,43]. The data analysis methods commonly used in these remote sensing applications are feature-driven and rely on feature metrics derived from imagery [9,22,44,45]. Although these feature-driven approaches obtained satisfactory performance (over 85% overall accuracy) in invasive species detection studies, defining a series of appropriate feature metrics is challenging. On the one hand, this feature engineering process requires a wealth of knowledge of a target species’ biological properties and their interactions with the electromagnetic signal measured by a specific sensor; on the other hand, the manual feature selection procedure is personal and subjective and influences the accuracy of the features. Moreover, the large number of feature variables needed makes it difficult to derive sufficient predictors efficiently and flexibly [46,47]. Therefore, these methods cannot automatically and sufficiently exploit information to detect different species’ characteristics [48]. In contrast, recent studies have highlighted the capability of deep learning in remote sensing vegetation applications because it outperforms traditional data analysis methods [46,49,50].
Deep learning (DL) algorithms have demonstrated state-of-the-art performance in classification and regression tasks across the remote sensing community [51,52,53]. A DL model is composed of multiple processing layers and allows data to be learned through multiple levels of abstraction [54], which means that it can extract spatial, spectral, and temporal features automatically from complex geographic datasets and remote sensing images, circumventing the feature engineering process [47,52]. Among the mainstream DL algorithms, convolutional neural networks (CNNs) are the most popular and commonly used in remote sensing vegetation applications [46,50,52,55,56]. A study using U-net CNNs to map woody vegetation extent with high spatial resolution satellite imagery in Australia reported an overall accuracy of approximately 90% [57]. Recently, CNNs have also been used in invasive species detection and have acquired promising results [28,58,59,60,61,62,63,64]. For example, Ref. [47] developed a CNN model applied to high-spatial- and spectral-resolution WorldView-2 satellite imagery to detect invasive species across a heterogeneous landscape in Minnesota, USA, and reported a high accuracy of 96.1%.
Despite the results achieved in remote sensing invasive species detection applications with CNN-based methods, we identified gaps that currently limit a wider implementation of CNNs with remote sensing imagery for mapping invasive species. Firstly, CNNs are currently most often designed for remote sensing classification tasks that assign one class label to each pixel and are less often developed for regression problems that predict a continuous-valued variable at each pixel. However, most natural environmental factors are continuously distributed, particularly vegetation factors, which feature gradual change [65,66,67]. To this end, producing a discrete boundary map with a “hard” classifier that allocates each image pixel to only one labeled class may be inappropriate [68]. In addition, heterogeneous communities can produce mixed classes within an image pixel, which is particularly obvious in grassland communities due to the co-occurring habitats of different grass species, such as native species and invasive species [48,67,68]. This could be resolved using high-resolution data in which pixels match the individual plant size. However, sensors that can capture individual plant sizes remain rare, and even unmanned aerial vehicle (UAV)-based images often do not meet the requirements; moreover, the small spatial areas covered by UAV imagery still hamper wider application in large-scale invasive species mapping [33]. Therefore, as vegetation is more appropriately described by continuous metrics, flexible and robust methods that can map target vegetation variables (invasive species cover) on a continuous scale rather than as discrete classes are urgently needed. Recently, the convolutional neural network-based regression (CNNR) model has been widely applied in the estimation of continuous metrics across multiple disciplines, achieving promising results and outperforming traditional regression models [46,69,70,71,72].
Because noxious weed plants feature an unpalatable taste and are poisonous, they are not preferred by animals [73]. To date, the original edible plants are still dominant in communities; however, noxious weed species have become the secondary population and, in some areas, dominate grass communities due to their high adaptivity, competitiveness, and strong reproductive capacity. Therefore, it is essential to collect detailed cover information on INWS to develop a database enabling analysis for an improved understanding of their invasion patterns and dynamics for decision-making.
The Sentinel-2 images provided by the European Space Agency (ESA) are global and cost-free, with high spatial and temporal resolutions and a wide swath width, and their spatial resolution and band set may be sufficient to serve as a critical data source for remote sensing-based mapping of the continuous cover of targeted invasive species at a large scale [74]. Sentinel-2 has been utilized in a series of remote sensing-based vegetation applications [75,76,77,78]; however, only a few studies have tested the capability of Sentinel-2 data to map invasive plant species cover, especially across grassland ecosystems [6,79,80]. In this study, Sentinel-2 satellite imagery was our data source.
The INWS in the alpine grassland ecosystem of the QTP provide an excellent evaluation environment to fill the gaps in mapping the continuous cover of INWS across complex grassland ecosystems on a large scale. Accordingly, we proposed the following research hypotheses: (1) INWS continuous cover (%) can be estimated at the pixel level in alpine grassland ecosystems over a large scale and (2) the CNNR model can provide powerful support for mapping INWS continuous cover (%) at the pixel level. To test our hypotheses, we developed an operational workflow in which a proposed novel CNNR model was trained and validated using a combination of Sentinel-2 data and field-collected reference data. The fitted CNNR model was then applied to generate an INWS continuous cover (%) spatial map. For comparison, we also trained a random forest regression model with selected features to benchmark the performance of the CNNR model at mapping INWS continuous cover (%) at the pixel level across a complex alpine grassland ecosystem over a large spatial area.

2. Materials and Methods

2.1. Study Area

The study area was the eastern part of the Three River Headwaters Region (TRHR), in the hinterland of the QTP (Figure 1). This area is situated at an average altitude of over 4500 m above sea level, spanning longitudes from 98.8°E to 99.2°E and latitudes from 35.0°N to 35.3°N. The TRHR is the headstream of three important rivers, the Yangtze River, Yellow River, and Lancang (Mekong) River; it is therefore known as “China’s Water Tower” and is a critical ecological barrier for China [19,81]. The TRHR experiences a continental plateau climate; annual precipitation ranges from ~260 mm to ~780 mm, is variable in space and time, and decreases from southeast to northwest. Approximately 80% of the annual rainfall is concentrated from June to September, the peak growing season. The annual average temperature ranges from ~−6 °C to ~−4 °C, with significant diurnal variation [19,82,83,84,85]. Sampling sites were selected for sufficient quantities of original and invasive noxious weed plants, high accessibility, and homogenous grass communities.

2.2. Ground Reference Data Collection

A field survey was conducted from 8 August to 22 August 2019, during the peak growing season in the study area (mid-August), later than the Sentinel-2 data acquisition date. We designed the sampling route in advance based on prior knowledge of the area. We set our sample sites in homogenous communities in which both native species and INWS existed. At each sample site, we set up two 30 m × 30 m sample plots spaced >100 m apart (Figure 2a). For each sample plot, nine 1 m × 1 m independent subplots were set in four directions extending from the plot’s center (Figure 2b). Several ecological measurements, including native species coverage, INWS coverage, dominant plant species, and bare soil coverage, were collected. Coordinate information with an accuracy of ~0.5 m, consisting of altitude, latitude, and longitude, was recorded at the center of each subplot using Trimble Geo 7X handheld submeter GPS units. Additionally, all the 30 m × 30 m sample plots were more than 100 m from roads and rivers to reduce their influence. Moreover, from the literature and information from local herdsmen and the manager of the Sanjiangyuan National Nature Reserve, we learned that INWS often occur in riverine and flat lowland areas that are easily grazed. Considering the ability of our team to work in the high-elevation plateau environment, we set our survey sites at or below an elevation of around 4500 m. Finally, 88 sampling points were selected in the study area (Figure 1c).

2.3. Sentinel-2 Data Acquisition and Processing

In this study, four cloud-free, Level-1C Sentinel-2A satellite image scenes (Figure 1b) were acquired from the ESA. The data were acquired on 25 July 2019, during the peak growing season in the study area. The data include 13 bands spanning the visible (VIS), near-infrared (NIR), and shortwave infrared (SWIR) regions, with spatial resolutions ranging from 10 m to 60 m [86]. The default processing, involving atmospheric and topographic correction, was performed with Sen2Cor in the Sentinel-2 toolbox of the Sentinel Application Platform (SNAP) v8.0 [87]. Bands 1, 9, and 10, designed for aerosol, water vapor, and cirrus detection (radiometric correction purposes), have a relatively coarse spatial resolution (60 m) [88] and were therefore excluded for our purposes. The remaining four bands with 10 m spatial resolution, band 2 (490 nm), band 3 (560 nm), band 4 (665 nm), and band 8 (842 nm), and six bands with 20 m spatial resolution, band 5 (705 nm), band 6 (740 nm), band 7 (783 nm), band 8a (865 nm), band 11 (1610 nm), and band 12 (2190 nm), were used for further analysis. For ease of analysis, all the bands with 20 m spatial resolution were resampled to 10 m resolution using the cubic convolution method, and the data were then mosaicked and clipped to the study area boundary (Figure 1c).
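As a concrete illustration of this resampling step, the minimal sketch below assumes the rasterio library (the paper does not name its resampling tool) and a hypothetical file name; it upsamples one 20 m band to 10 m with cubic convolution:

```python
import rasterio
from rasterio.enums import Resampling

# Hypothetical input: a single 20 m Sentinel-2 band (e.g., band 5).
with rasterio.open("B05_20m.jp2") as src:
    band = src.read(
        1,
        # Doubling the pixel grid takes 20 m cells to 10 m cells.
        out_shape=(src.height * 2, src.width * 2),
        resampling=Resampling.cubic,  # cubic convolution, as in Section 2.3
    )
print(band.shape)
```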

2.4. Random Forest Regression

2.4.1. Random Forest Regression Algorithm

Random forest (RF), developed by [89], is an ensemble-learning algorithm that combines a large set of decision trees to make a prediction; it can be used for classification or regression tasks, generating categorical or continuous outputs. As a machine learning method, RF is robust against overfitting, can perform well with up to thousands of explanatory variables, and is considered one of the best machine learning algorithms in comparisons with alternatives such as the support vector machine (SVM) [90,91]. In RF, trees are built using a deterministic algorithm by selecting a random set of features and a random vector sample from the training dataset [92]. Because the random vector samples are independent and there is no correlation between the decision trees in the forest, RF does not require a hypothesis of a prior probability distribution; it is therefore flexible and stable [91,93]. In regression tasks, the output of the RF regressor (RFR) is the average of the prediction values from the individual decision trees. The individual decision trees also determine the feature importance used to describe the degree of influence of a feature on the RFR results. RFRs have been widely used in vegetation biomass estimation [94], yield prediction [95], soil moisture estimation [91], and air quality monitoring [96], among other applications [97]. The prediction procedure of the RFR model was as follows:
First, select a bootstrap sample of size N from the full training dataset. Then, grow a random forest tree $f_i$ on the bootstrapped data, repeating the following steps for each terminal node of the tree until the minimum node size is reached: select m variables at random from the p available variables, pick the best variable among the selected m variables (mtry), and split the node into two branch nodes. Finally, output the ensemble of trees $\{f_i\}_1^N$. The regression prediction at a new point x is given by Equation (1):

$$\hat{y} = \frac{1}{N}\sum_{i=1}^{N} f_i(x) \tag{1}$$

where $\hat{y}$ is the regression prediction output, N is the total number of regression trees (ntree), and $f_i(x)$ is the regression prediction of the ith decision tree at point x.
There are two important parameters in RFRs: the user-defined number of predictors (features) tested at each node (mtry) and the user-defined number of regression trees grown from bootstrap samples of the observations (ntree) [92,97]. In this study, we optimized the two parameters by minimizing the root mean square error of calibration. Through multiple iterations, the number of regression trees (ntree) was set to 800 and the number of features tested at each node (mtry) was set to 7. In our study, the RFR model was implemented in the Python 3.9 environment.
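For reference, a minimal sketch of this configuration, assuming scikit-learn (the paper states only that the model was implemented in Python 3.9) and placeholder data shaped like our 88 field samples:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((88, 20))          # placeholder feature matrix (bands/VIs/PCs)
y = rng.uniform(0.05, 0.62, 88)   # placeholder INWS cover observations

# ntree = 800 and mtry = 7, as tuned above.
rfr = RandomForestRegressor(n_estimators=800, max_features=7, random_state=42)
rfr.fit(X, y)
print(rfr.feature_importances_)   # per-feature importance (cf. Section 3.2)
```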

2.4.2. Feature Extraction

The CNNR can extract features from the imagery itself through convolution operations; however, the feature-driven random forest regression model requires relevant features for modeling, and vegetation indices (VIs) are considered powerful and widely used indicators in vegetation continuous cover mapping and species discrimination [98,99]. Therefore, in addition to the reflectance information provided by the spectral bands, we also derived a set of VIs that were assumed to be related to INWS cover. The VIs were selected on the basis of their previous performance in plant species detection and discrimination [19,99,100,101,102,103] and included simple ratio indices, plant anthocyanin-related indices, and red-edge-related indices. In addition, principal component analysis (PCA) bands were applied. Some studies have also derived texture information for vegetation applications [93]; however, previous studies in the same study area demonstrated that texture features are weak predictors for INWS detection [100], so texture features were not used here. Ultimately, band reflectance, vegetation index, and principal component features were derived from Sentinel-2 imagery and used in the random forest regression model (Table 1); an illustrative sketch of the index computation follows.
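The exact index set and formulas are given in Table 1 (not reproduced here); the sketch below uses standard published forms of a few of the named indices as assumptions for illustration, computed from 10 m reflectance arrays:

```python
import numpy as np

def compute_indices(b, eps=1e-6):
    """b: dict of Sentinel-2 reflectance arrays keyed by band name.
    These formulas are standard published forms and are assumptions;
    the definitions actually used are those in Table 1."""
    return {
        "GR":   b["B3"] / (b["B4"] + eps),                        # green/red ratio
        "CIre": b["B7"] / (b["B5"] + eps) - 1.0,                  # red-edge chlorophyll index
        "ARI1": 1.0 / (b["B3"] + eps) - 1.0 / (b["B5"] + eps),    # anthocyanin reflectance index
        "NDRE": (b["B8"] - b["B5"]) / (b["B8"] + b["B5"] + eps),  # red-edge normalized difference
    }

# Example with placeholder 2 x 2 reflectance arrays:
bands = {k: np.full((2, 2), v) for k, v in
         {"B3": 0.08, "B4": 0.05, "B5": 0.12, "B7": 0.30, "B8": 0.35}.items()}
print({k: round(float(v[0, 0]), 3) for k, v in compute_indices(bands).items()})
```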

2.5. Proposed CNN Regression Method

The continuous cover of INWS is an important structural indicator because it reflects the composition and distribution of vegetation in a grassland ecosystem, which has a significant impact on grassland health and productivity. However, due to the small physical size of INWS, their coexistence with native grass species within the alpine grassland community, limitations in data sources, and the lack of effective estimation methods, it is difficult to obtain accurate continuous cover of INWS over large scales. The advantages of Sentinel-2 and DL algorithms described in the introduction motivated us to develop a novel CNN regression model (InwsRCNN) to map INWS continuous cover at the pixel level (10 m spatial resolution) with Sentinel-2 imagery.

2.5.1. Model Definition

Convolutional layers are the basic building blocks of CNNs. Through convolution operations in several dimensions, a convolutional layer can extract features from input data automatically and accomplish an estimation task; for regression problems, results are output by a regressor layer [54,108]. In this study, a deep CNN using forward and backward propagation was constructed, consisting of convolutional layers, batch normalization layers, activation functions, fully connected layers, and an estimation (output) layer. The input data were fed into the convolutional layer with a shape of W × H × C, where W, H, and C are the width, height, and number of channels of the input data, respectively. The convolution operation was then applied to the input data with convolution filters of shape fW × fH × n, where fW, fH, and n are the width, height, and depth of the filter, respectively. The output data shape was ((W + 2P − fW)/S + 1) × ((H + 2P − fH)/S + 1) × n, where P and S represent the padding and stride, respectively. Next, a batch normalization layer was applied for regularization [109], and an activation function [110] was applied to the convolution outputs. The output of the first convolution block was fed to the next convolution block with a deeper filter size to extract higher-level features. When all the convolution block operations were complete, the output was passed to the fully connected layer [111] via a flattening operation; afterward, the outputs of the fully connected layer were fed to the final output layer.
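A small helper makes the output-shape arithmetic concrete (the floor division reflects the standard convolution size formula given above):

```python
def conv_output_size(w, f, p=0, s=1):
    """Spatial output size of a convolution: floor((w + 2p - f) / s) + 1."""
    return (w + 2 * p - f) // s + 1

# Example: a 3 x 3 input patch with a 1 x 1 kernel, no padding, stride 1
# keeps the 3 x 3 spatial size (only the channel depth changes).
assert conv_output_size(3, 1) == 3
```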

2.5.2. Network Architecture

The proposed InwsRCNN consists of seven layers: one input layer, three convolution layers with batch normalization layers, two fully connected layers, and a regression layer (Figure 3). In this study, the input images were point-centered, and each image patch comprised the neighborhood pixels centered on the field observation point. Therefore, during convolutional processing, the internal filters could extract the spatial and spectral features of the neighborhood pixels surrounding the center point of the input patch, which meant that the features around the field observation points were considered. Since our Sentinel-2 data had a 10 m spatial resolution, we initially defined a 3 × 3-pixel neighborhood as the input patch size, corresponding to a 900 m2 area on the ground. A 2D convolution kernel was used to automatically match the input image patch [112]; the filter shape was 1 × 1 × 10, where 1 × 1 (width × height) is the filter width and height and 10 is the number of Sentinel-2 bands used in this study. All the feature maps retained the input spatial dimensions and included all available values across the image patch channels. Although other types of activation exist (e.g., linear, tanh, etc.), the rectified linear unit (ReLU) [113] activation function was used for all layers. For the last regressor (output) layer, a linear function was applied to produce the real continuous cover value of INWS. In our proposed CNN, we omitted the pooling layer, which normally downsamples feature maps to reduce the number of parameters and the computational volume, because the input image size was already small and translation invariance was not a concern.
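The following is a minimal Keras sketch of this architecture; the filter counts and dense-layer width are assumptions for illustration (the full specification is in Figure 3), while the layer sequence, 1 × 1 kernels, ReLU activations, absence of pooling, and linear output follow the description above:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_inws_rcnn(patch=3, bands=10, filters=(32, 64, 128)):
    # filters and the dense width of 64 are illustrative assumptions.
    inputs = tf.keras.Input(shape=(patch, patch, bands))
    x = inputs
    for f in filters:  # three Conv + BN + ReLU blocks with 1 x 1 kernels
        x = layers.Conv2D(f, kernel_size=1, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
    x = layers.Flatten()(x)                     # no pooling, per the text
    x = layers.Dense(64, activation="relu")(x)
    x = layers.Dense(64, activation="relu")(x)  # two fully connected layers
    outputs = layers.Dense(1, activation="linear")(x)  # continuous cover
    return models.Model(inputs, outputs)
```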

2.5.3. Loss Function and Experimental Setting

Finally, for network optimization, since our objective involved a regression task, a root mean square propagation (RMSprop) optimizer that updates learning rates automatically was used, and the loss of the network was calculated using the mean squared error (MSE). The samples were randomly split into training and test datasets comprising 70% and 30% of the total samples, respectively. For the parameter settings, the initial learning rate was set to 0.001, the batch size was set to 32, and the number of training epochs was set to 100. For comparison, we also tested different input image patch sizes of 5 × 5, 7 × 7, 9 × 9, 11 × 11, 13 × 13, and 15 × 15 (Figure 4). The training and experiments in this study were carried out on a Windows 10 system with an Intel i7-8700 CPU and 32 GB of memory, using an NVIDIA GeForce GTX 1660 GPU (16 GB of RAM) with a compute capability of 7.5 to accelerate the training process. The CNN model was built with the TensorFlow 2.10.0 framework [114] in a Python 3.9 environment.
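Continuing the architecture sketch above, a minimal training setup with these settings (placeholder arrays sized like our 88 samples; the split, optimizer, loss, and hyperparameters follow the description just given):

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((88, 3, 3, 10)).astype("float32")   # placeholder patches
y = rng.uniform(0.05, 0.62, 88).astype("float32")  # placeholder cover values

# 70/30 random train/test split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

model = build_inws_rcnn()  # from the architecture sketch above
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.001),
              loss="mse")
model.fit(X_tr, y_tr, batch_size=32, epochs=100, validation_data=(X_te, y_te))
```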

2.6. Accuracy Assessment

In this study, to ensure that all the samples were tested, we ran the RFR and CNNR models ten times and assessed their performance at estimating INWS continuous cover using the coefficient of determination (R2), mean absolute error (MAE), and root mean square error (RMSE). The formulas for R2, MAE, and RMSE are presented in Equations (2)–(4). R2 was used to evaluate the correlation between estimations and observations, while MAE and RMSE measured the errors between the estimations and observations. The INWS continuous cover estimated by RFR and InwsRCNN was compared against the field-collected INWS cover. More information regarding R2, MAE, and RMSE can be found in previous studies [115,116,117].
$$R^2 = 1 - \frac{\sum_{i=1}^{n}\left(O_i - P_i\right)^2}{\sum_{i=1}^{n}\left(O_i - \bar{P}\right)^2} \tag{2}$$

$$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{n}\left|O_i - P_i\right| \tag{3}$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{n}\left(O_i - P_i\right)^2} \tag{4}$$

where $P_i$ is the estimated INWS continuous cover (%), $O_i$ is the observed INWS continuous cover (%), $\bar{P}$ is the mean of the predicted INWS cover (%), and N is the total number of samples.
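A direct NumPy translation of Equations (2)–(4) (note that, following Eq. (2) as written, the R2 denominator uses the mean of the predictions rather than the more common observed mean):

```python
import numpy as np

def evaluate(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - pred.mean()) ** 2)   # P-bar, per Eq. (2)
    r2 = 1.0 - ss_res / ss_tot
    mae = np.mean(np.abs(obs - pred))           # Eq. (3)
    rmse = np.sqrt(np.mean((obs - pred) ** 2))  # Eq. (4)
    return r2, mae, rmse
```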

3. Results

3.1. Observation Analysis of INWS Cover

The field-observed INWS cover measurements over the 88 sampling sites are shown in Table 2. Overall, the INWS cover across the study area remained at a low level: the average observed INWS cover was 0.29, the lowest cover was 0.05, and the highest cover reached 0.62. Figure 5 displays the distribution of the observed INWS cover. Figure 5a shows that the observed INWS cover exhibited a normal distribution and was mainly concentrated between 0.1 and 0.5. Figure 5b–d show the variations in INWS cover along the longitude, latitude, and elevation directions. We observed no significant variation in INWS cover with longitude or latitude in our study area, which may be because our study area was quite small, covering 3 km × 3.8 km. However, the variation along the longitude direction (R2 = 0.06) was more obvious than that along the latitude direction (R2 = 0.01). For elevation, we likewise did not detect significant variation between INWS cover and altitude (R2 = 0.01); however, our sampling sites were concentrated in an altitude range between 4150 m and 4250 m above sea level, and in this range, INWS cover was lower than in the remaining regions (Figure 5d).

3.2. RFR Model Evaluation

Table 3 shows the accuracy of the RFR model at INWS continuous cover estimation with different features derived from Sentinel-2 imagery for the study area. Overall, as shown in Table 3, the RFR model using features derived from Sentinel-2 imagery achieved an R2 of approximately 0.85 and MAE and RMSE values of approximately 0.037 and 0.045, respectively. The highest R2 with the lowest MAE and RMSE values (0.85, 0.0369%, and 0.0444%, respectively) was obtained by the RFR model using the combination of Sentinel-2 bands and VIs. Feature sets that included VIs obtained higher R2 and lower MAE and RMSE values, while the RFR model using only Sentinel-2 reflectance acquired the lowest R2 and relatively higher MAE and RMSE values, demonstrating the potential of VIs derived from Sentinel-2 imagery for estimating INWS continuous cover in the study area.
Figure 6 displays the results of feature selection for the different features used in the RFR model. Figure 6a shows that among the Sentinel-2 band features, red edge1, NIR2, green, red edge2, and red edge3 were more important than the other bands. For VI features, Cire1, VIRRE3, GR, Cire2, ARI1, VIRRE2, SVI, and VIRRE1 showed higher importance than the remaining VI features (Figure 6b). In the combination of Sentinel-2 bands and VIs, the features red edge1, VIRRE3, NIR2, Cire1, red edge2, and GR showed obviously higher importance than the other features (Figure 6c). In the PC and VI features, PC2, Cire1, VIRRE3, PC3, GR, Cire2, and ARI1 were considered more important than the remaining features (Figure 6d). Finally, Figure 6e shows that in the full combination of Sentinel-2 bands, PCA, and VI features, the importance of red edge1, VIRRE3, NIR2, Cire1, PC3, GR, Cire2, red edge2, green, red edge3, PC2, and ARI1 was higher than that of the remaining features. A comprehensive comparison of the five feature combinations indicated that, among the Sentinel-2 bands, PCs, and VIs, five Sentinel-2 bands (red edge1, NIR2, red edge2, red edge3, and green), five VIs (Cire1, VIRRE3, GR, Cire2, and ARI1), and two PCs (PC2 and PC3) were selected as important features. Additionally, the VIs SVI, VIRRE1, and VIRRE2 also showed high importance. Further analysis revealed that the selected important features among the Sentinel-2 reflectance bands and VIs were all related to the red-edge, green, and NIR bands, which consistently indicates that the green, red-edge, and NIR bands in Sentinel-2 imagery were more important for the RFR model in estimating INWS continuous cover in the study area.

3.3. InwsRCNN Model Evaluation

Table 4 shows the accuracy of the CNNR model at estimating INWS continuous cover with Sentinel-2 imagery across alpine grasslands over a large scale. Overall, as shown in Table 4, the CNNR model obtained a high R2 of approximately 0.90 along with a low MAE and RMSE of approximately 0.02% and 0.03%, respectively. For our base input patch size (3 × 3), the CNNR model using Sentinel-2 reflectance bands obtained an R2 of 0.88 along with an MAE and RMSE of 0.023% and 0.033%, respectively. The CNNR model using PCs achieved an R2 of 0.92, higher than that using Sentinel-2 reflectance bands, with MAE and RMSE values of 0.018% and 0.027%, respectively, lower than those using Sentinel-2 reflectance bands. The highest R2 obtained by the CNNR model using Sentinel-2 bands was 0.92, along with the lowest MAE and RMSE of 0.017% and 0.026%, respectively, with a 5 × 5 input patch size. The highest R2 achieved by the CNNR model using PCA was 0.93 with a 13 × 13 input patch size, and the lowest MAE and RMSE obtained were 0.016% and 0.025%, respectively, with an 11 × 11 input patch size. A comparison indicated that the CNNR model with the PCA data source acquired a higher R2 and lower MAE and RMSE than that with the Sentinel-2 reflectance bands, with a 0.04 improvement in R2 and decreases of 0.004 and 0.006 in MAE and RMSE, respectively. Regarding the input image size, compared to our base input patch size, there was a 0.04 improvement in R2 and decreases of 0.005 and 0.007 in MAE and RMSE for the CNNR model using Sentinel-2 reflectance bands, and a 0.01 improvement in R2 and decreases of 0.001 and 0.002 in MAE and RMSE using PCA. The results demonstrate that the CNNR model using Sentinel-2 imagery can achieve high accuracy when estimating INWS continuous cover, with no significant variation in results between the reflectance bands and PCA or between smaller and larger input image sizes.

3.4. Model Performance

Figure 7 displays scatterplots of the observed and estimated INWS continuous cover for the RFR and InwsRCNN models. Compared to the RFR model, the CNNR model using Sentinel-2 reflectance bands and PCA more closely captured the INWS continuous cover observations, particularly those with higher INWS cover values, and the CNNR model using PCA obtained the best performance (Figure 7c). Additionally, compared to the RFR model, the slope of the linear fit (black solid line in Figure 7) to the CNNR model estimations was closer to one, indicating the better performance of the CNNR model in INWS continuous cover estimation. Overall, Figure 7 also shows that all estimation models underestimated the INWS cover level, especially the RFR model (Figure 7a), while the CNNR models were less prone to this (Figure 7b,c).

3.5. Spatial Distribution of the Estimated INWS Continuous Cover in the Study Area

Figure 8 displays the spatial patterns of the estimated INWS continuous cover in the study area. Figure 8a shows that the INWS cover was mainly distributed at altitudes of 4000 m to 4500 m above sea level, particularly in the region from 4200 m to 4300 m above sea level, which accounted for over a quarter of the total INWS cover. This was approximately consistent with our field observations of INWS cover (Section 3.1, Figure 5) and in line with expectations. Additionally, the estimated INWS continuous cover across the study area remained at a low level, which was particularly obvious in the results from the RFR model. However, the INWS cover map obtained by the RFR model showed a less gradual transition, with many abrupt high- or low-value patches, while the results acquired by the CNNR models showed a naturally gradual transition pattern, especially those obtained by the CNNR model with PCA. Nonetheless, we could still observe that higher-level INWS cover was distributed in the piedmont, lowland, and flattened-valley areas in the mountains, since these areas are more accessible and therefore host more human activities, such as grazing, which have greater impacts on native grass species and benefit INWS growth and reproduction.

4. Discussion

4.1. RFR and CNNR Model Performance in Mapping INWS Continuous Cover

In this study, the RFR model with different features derived from Sentinel-2 imagery mapped INWS continuous cover with an R2 of approximately 0.85 along with an MAE and RMSE of approximately 0.037% and 0.045%, respectively (Table 3). These results were lower than those reported for mapping the fractional cover of invasive alien plants in a dryland ecosystem in Africa [118], which achieved an accuracy of 0.92. The reason may be that the background ecosystem in that area is relatively simple and the targeted species have a large leaf area, making them easily detected by remote sensors. Compared to a study that also detected small-physical-size invasive species across grasslands with the RF classifier, achieving accuracies ranging from 68.0% to 71.3% [22], our result is acceptable for mapping the continuous cover of INWS. The selected important features derived from Sentinel-2 imagery were green-, red-edge-, and NIR band-related reflectance bands or vegetation indices (Section 3.2 and Table 1), demonstrating that the red-edge and NIR bands in Sentinel-2 imagery were sensitive to INWS detection in this study, in line with previous studies using Sentinel-2 imagery for mapping woody invasive species [6]. Our results suggest the potential of the RFR model and Sentinel-2 imagery to estimate continuous INWS cover across alpine grasslands at a large scale.
Compared to the RFR model, the CNNR model in our study achieved higher accuracy. The R2 improved by 4.18% to 8.05%, along with decreases in MAE and RMSE ranging from 1.52% to 2.05% and 1.16% to 1.9%, respectively. The linear fit also showed that the CNNR model performed better than the RFR model (Figure 7). Our results suggest that the CNNR model outperformed RFR at estimating continuous INWS cover across alpine grasslands in the study area. This finding is consistent with those of previous studies using CNN applications [48,119]. Ref. [47] leveraged convolutional neural networks (CNNs) to map the invasive plant species leafy spurge (Euphorbia virgata) across a heterogeneous ecosystem in the USA with WorldView-2 imagery and obtained an accuracy of 96.1%. Furthermore, CNNR has proven successful in many continuous variable estimation applications [93,112,120,121,122], particularly in the field of atmospheric pollution monitoring. Ref. [119] leveraged a deep convolutional neural network and achieved a correlation coefficient (R) of 0.91 when estimating ground-level NO2 concentrations with a combination of station observation data and remote sensing data in Texas, USA, outperforming RFR, support vector machine regression (SVMR), and multiple linear regression (MLR). These findings suggest the potential of CNNR to estimate INWS continuous cover and other continuous variables.
Our results showed that the accuracy of CNNR is not strongly influenced by the input image patch size. Overall, a larger input image patch size obtained a slightly higher R2 and lower MAE and RMSE when estimating INWS continuous cover (Table 4), mainly because a larger patch contains more spatial–spectral information (Figure 4). For example, an input image patch size of 5 × 5 covers an area of 2500 m2, nearly three times the 900 m2 covered by a 3 × 3 patch in Sentinel-2 imagery with a 10 m spatial resolution. Although the larger input image sizes achieved higher accuracy, according to our results, there were no significant accuracy improvements among the different input image patch sizes (Table 4); the base input image patch size (3 × 3) still acquired comparable accuracy. Moreover, since our field observations covered 30 m × 30 m sample plots corresponding to 3 × 3 pixels in the Sentinel-2 imagery, the base input image patch size gave the CNNR model a more substantial capacity to reflect the finer-detailed ground-truth information of the INWS and yielded a more credible estimate of INWS cover. Thus, in this study, we found that a relatively smaller input image patch size was more appropriate for extracting the nonlinear spatial–spectral features of the INWS within the image patch and fit the heterogeneous information better than a larger input image patch size. Previous studies have also shown that the input window size influences the performance of CNNR [94,96,123]. Studies have demonstrated that a smaller input image patch size better represents the contextual features of finer-scale objects, while a larger input image size is more suitable for extracting deeper features characterizing larger-scale objects with a CNN model [124,125]. Ref. [112] found that using 3 × 3 pixels in Sentinel-2 imagery, covering an area of 900 m2, provided optimal results for estimating water chlorophyll-a concentrations with a CNNR model. Therefore, our results demonstrate that a smaller input image patch size is more suitable for estimating INWS continuous cover using Sentinel-2 imagery with a CNNR model, and that an appropriate input image patch size is important to the CNNR model when estimating continuously distributed variables.
Although quite high accuracy was obtained with both the RFR and CNNR models for estimating INWS cover in this study, there were still notable differences between the two algorithms, such as the different patterns of INWS cover over the region in the top left of the study area (Figure 8b–d). According to the field survey, this area is a floodplain; during the wet season or after rain and flooding, it becomes a temporary swamp, and INWS cover in this area is quite low. Figure 8b–d show that the INWS cover estimates of the feature-based RFR model were higher than those of the CNNR model; the estimates of the CNNR model were generally consistent with the actual situation observed during the field survey, while those of the RFR model differed from it. We think the possible reasons for this phenomenon may be attributed to model ability and ground-truth samples. Firstly, the differences in information capture ability between machine learning and deep learning algorithms meant that the feature-based RFR model could not accurately capture the low-INWS-cover features in this area, resulting in an overestimation of the real INWS cover, while the deep feature extraction ability offered by deep learning algorithms could provide more information for the low-INWS-cover area. Secondly, the lack of ground-truth data in this area (Figure 1c), which is inaccessible due to its soft and mushy ground surface, caused the machine learning model to overfit and overestimate the INWS cover there. Although these explanations have not yet been confirmed, we are collecting more data and conducting more experiments to verify them. Therefore, we suggest that when conducting similar studies, the number of samples and the stability of the models should be taken into careful consideration.

4.2. Limitations

We leveraged RFR and a novel CNNR model to estimate INWS continuous cover across alpine grassland; overall model performance was high, as the R2, MAE, and RMSE remained stable across the RFR and CNNR models. The R2 of the RFR model remained stable at approximately 0.85 for all feature sets, with MAE and RMSE values below 0.04 and 0.05, respectively. For the CNNR model, whether using Sentinel-2 reflectance bands or principal component features, the R2 remained stable at approximately 0.90, with MAE and RMSE below 0.02 and 0.03, respectively. However, the scatter plots showed that both the RFR and CNNR models underestimated INWS continuous cover overall (Figure 7). Additionally, we observed that lower INWS cover was overestimated while higher INWS cover was underestimated, which was especially clear in the results of the CNNR model with Sentinel-2 reflectance bands (Figure 7b). This overestimation at low cover and underestimation at high cover was also reported by previous studies of continuous cover estimation [126,127]. Ref. [128] reported a general overestimation of fractional cover at the low end and an underestimation at the high end. Furthermore, previous studies also indicated a tendency for pixels with overall low vegetation cover to be overestimated, which our results confirmed [129,130]. The possible reasons for the over- or underestimation of INWS cover are as follows. Firstly, the unresolved spectral similarity among vegetation types biased the estimation results: the mixed and coexisting habitat of INWS and native grass species leads to similar spectral reflectance, making it difficult to accurately estimate INWS cover, which is consistent with the findings of previous studies [127]. Secondly, the model architecture and input features may also have contributed to the bias. Although deep learning algorithms were used in this study, only a shallow structure was implemented for feature learning and extraction, which may have restricted the feature extraction ability; likewise, the features input into the machine learning model were manually extracted, which may have been insufficient for accurately estimating INWS cover. Additionally, the observed bias could also be due to the low overall level of INWS cover in the study area and the small number of high-cover samples (Figure 5); the unbalanced cover range also resulted in estimation bias.
In addition, the limited number of ground observation samples is another factor that influenced the regression performance. CNNs require large ground-truth training datasets to avoid model overfitting [46]. Previous studies indicated that in vegetation remote sensing applications, most reference ground-truth data are acquired through field survey-based collection using in situ plots or point observations [39]. However, field surveys normally require many resources, such as a professional team, transportation, and equipment investment, and the high costs involved constrain large-scale, long-term collection work; therefore, field survey-based reference data collection is usually limited in quantity and quality. Additionally, the natural environment of the study area also affects the accessibility of sampling activities; in particular, harsh environments greatly reduce ground data collection frequency [46]. For example, our research objects are unique to the QTP, where the natural environment is difficult to work in due to the high altitude, rough terrain, and sparse oxygen content of the air, greatly hampering our ground-truth data collection and resulting in limited sampling data (88 sample plots) for this study. The limited ground-truth data introduced uncertainty into the estimation models, such as the different INWS distribution patterns shown in the top left region of the study area (Figure 8b–d), which may have been caused by the lack of sufficient ground-truth samples in that area. Furthermore, since our study area was small, the effect of limited ground-truth data was not obvious in the results, and we still obtained high accuracy (Table 3). However, in large-scale applications, the disadvantages of limited ground-truth data would be magnified and would significantly impact the reliability and generalizability of the results. Accordingly, studies have indicated that deep learning-based geoscience models are usually under-constrained owing to the limitations of representative reference data [52]. Since field-based surveys remain challenging for large reference dataset collection, alternative methods of collecting reference data have recently been proposed, such as the use of UAVs as an alternative to field sampling for mapping woody invasive species [6].

5. Conclusions

Mapping the continuous cover of invasive species at a large scale is essential for invasive species management and control. In this study, we proposed a workflow to accomplish the following: (1) investigate the ability of the CNNR model to estimate INWS continuous cover (%) at the pixel level and (2) assess the capability of Sentinel-2 satellite imagery for large-scale INWS continuous cover mapping across complex grassland ecosystems. The CNNR model using Sentinel-2 imagery showed relatively high accuracy in estimating INWS continuous cover and achieved an overall R2 of 0.90 along with MAE and RMSE values below 0.02 and 0.03, respectively, with either reflectance bands or PCA. The RFR model using multiple features derived from Sentinel-2 obtained comparable accuracy for INWS continuous cover estimation, with an R2 of approximately 0.85 along with an MAE and RMSE of approximately 0.04 and 0.05, respectively. Five Sentinel-2 band features (red edge1, NIR2, red edge2, red edge3, and green), five VI features (Cire1, VIRRE3, GR, Cire2, and ARI1), and two PCA features (PC2 and PC3) made important contributions to the RFR model. This demonstrates the potential of the green band, red-edge bands, and NIR band in Sentinel-2 imagery to map continuous INWS cover. We conclude that a random forest regression model combined with Sentinel-2 imagery-derived features can be used for continuous INWS cover mapping. The CNNR model performed better than the RFR model at estimating INWS continuous cover directly from Sentinel-2 imagery. Moreover, Sentinel-2 imagery was suitable for INWS continuous cover mapping across complex alpine grassland ecosystems at a large scale, particularly its red-edge and near-infrared bands.

Author Contributions

F.X.: Conceptualization, Methodology, Software, Validation, Writing—original draft. R.A.: Conceptualization, Review and editing, Supervision, Funding acquisition. X.G.: Review and editing, Supervision, Language editing, Funding acquisition. X.S.: Conceptualization, Review and editing, Supervision, Project administration, Funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China under grant 2023YFC3006505, the National Natural Science Foundation of China under grants 52309017 and 41871326, the Natural Science Foundation of Jiangsu Province under grant BK20230958, the Key Laboratory of Hydrometeorology of China Meteorological Administration under grant 23SWQXM047, the Natural Sciences and Engineering Research Council of Canada under grant RGPIN-2022-04471, and the China Scholarship Council under grant 202106710115.

Data Availability Statement

All data and source codes associated with this paper are available upon request. The public datasets used are available online.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhou, H.; Zhao, X.; Tang, Y.; Gu, S.; Zhou, L. Alpine grassland degradation and its control in the source region of the Yangtze and Yellow Rivers, China. Grassl. Sci. 2005, 51, 191–203. [Google Scholar] [CrossRef]
  2. Foxcroft, L.C.; Pyšek, P.; Richardson, D.M.; Genovesi, P.; MacFadyen, S. Plant invasion science in protected areas: Progress and priorities. Biol. Invasions 2017, 19, 1353–1378. [Google Scholar] [CrossRef]
  3. Kettenring, K.M.; Adams, C.R. Lessons learned from invasive plant control experiments: A systematic review and meta-analysis. J. Appl. Ecol. 2011, 48, 970–979. [Google Scholar] [CrossRef]
  4. Vilà, M.; Espinar, J.L.; Hejda, M.; Hulme, P.E.; Jarošík, V.; Maron, J.L.; Pergl, J.; Schaffner, U.; Sun, Y.; Pyšek, P. Ecological impacts of invasive alien plants: A meta-analysis of their effects on species, communities and ecosystems. Ecol. Lett. 2011, 14, 702–708. [Google Scholar] [CrossRef] [PubMed]
  5. Duenas, M.-A.; Hemming, D.J.; Roberts, A.; Diaz-Soltero, H. The threat of invasive species to IUCN-listed critically endangered species: A systematic review. Glob. Ecol. Conserv. 2021, 26, e01476. [Google Scholar] [CrossRef]
  6. Kattenborn, T.; Lopatin, J.; Förster, M.; Braun, A.C.; Fassnacht, F.E. UAV data as alternative to field sampling to map woody invasive species based on combined Sentinel-1 and Sentinel-2 data. Remote Sens. Environ. 2019, 227, 61–73. [Google Scholar] [CrossRef]
  7. Kumar Rai, P.; Singh, J.S. Invasive alien plant species: Their impact on environment, ecosystem services and human health. Ecol. Indic. 2020, 111, 106020. [Google Scholar] [CrossRef] [PubMed]
  8. Ehrenfeld, J.G. Ecosystem consequences of biological invasions. Annu. Rev. Ecol. Evol. Syst. 2010, 41, 59–80. [Google Scholar] [CrossRef]
  9. Nininahazwe, F.; Théau, J.; Marc Antoine, G.; Varin, M. Mapping invasive alien plant species with very high spatial resolution and multi-date satellite imagery using object-based and machine learning techniques: A comparative study. GISci Remote Sens. 2023, 60, 2190203. [Google Scholar] [CrossRef]
  10. Davies, K.W.; Johnson, D.D. Are we “Missing the Boat” on preventing the spread of invasive plants in rangelands? Invas. Plant Sci. Mana 2011, 4, 166–171. [Google Scholar] [CrossRef]
  11. Kumschick, S.; Gaertner, M.; Vilà, M.; Essl, F.; Jeschke, J.M.; Pyšek, P.; Ricciardi, A.; Bacher, S.; Blackburn, T.M.; Dick, J.T. Ecological impacts of alien species: Quantification, scope, caveats, and recommendations. BioScience 2015, 65, 55–63. [Google Scholar] [CrossRef]
  12. Bradshaw, C.J.; Leroy, B.; Bellard, C.; Roiz, D.; Albert, C.; Fournier, A.; Barbet-Massin, M.; Salles, J.-M.; Simard, F.; Courchamp, F. Massive yet grossly underestimated global costs of invasive insects. Nat. Commun. 2016, 7, 12986. [Google Scholar] [CrossRef] [PubMed]
  13. Diagne, C.; Leroy, B.; Vaissière, A.-C.; Gozlan, R.E.; Roiz, D.; Jarić, I.; Salles, J.-M.; Bradshaw, C.J.; Courchamp, F. High and rising economic costs of biological invasions worldwide. Nature 2021, 592, 571–576. [Google Scholar] [CrossRef] [PubMed]
  14. Jones, B.A. Invasive species impacts on human well-being using the life satisfaction index. Ecol. Econ. 2017, 134, 250–257. [Google Scholar] [CrossRef]
  15. Kelsch, A.; Takahashi, Y.; Dasgupta, R.; Mader, A.D.; Johnson, B.A.; Kumar, P. Invasive alien species and local communities in socio-ecological production landscapes and seascapes: A systematic review and analysis. Environ. Sci. Policy 2020, 112, 275–281. [Google Scholar] [CrossRef]
  16. Ogden, N.H.; Wilson, J.R.; Richardson, D.M.; Hui, C.; Davies, S.J.; Kumschick, S.; Le Roux, J.J.; Measey, J.; Saul, W.-C.; Pulliam, J.R. Emerging infectious diseases and biological invasions: A call for a One Health collaboration in science and management. R. Soc. Open Sci. 2019, 6, 181577. [Google Scholar] [CrossRef] [PubMed]
  17. An, R.; Zhang, C.; Sun, M.; Wang, H.; Shen, X.; Wang, B.; Xing, F.; Huang, X.; Fan, M. Monitoring grassland degradation and restoration using a novel climate use efficiency (NCUE) index in the Tibetan Plateau, China. Ecol. Indic. 2021, 131, 108208. [Google Scholar] [CrossRef]
  18. Xing, F.; An, R.; Guo, X.; Shen, X. Mapping invasive noxious weed species in the alpine grassland ecosystems using very high spatial resolution UAV hyperspectral imagery and a novel deep learning model. GISci Remote Sens. 2024, 61, 2327146. [Google Scholar] [CrossRef]
  19. Xing, F.; An, R.; Wang, B.; Miao, J.; Jiang, T.; Huang, X.; Hu, Y. Mapping the occurrence and spatial distribution of noxious weed species with multisource data in degraded grasslands in the Three-River Headwaters Region, China. Sci. Total Environ. 2021, 801, 149714. [Google Scholar] [CrossRef]
  20. Courchamp, F.; Fournier, A.; Bellard, C.; Bertelsmeier, C.; Bonnaud, E.; Jeschke, J.M.; Russell, J.C. Invasion Biology: Specific Problems and Possible Solutions. Trends Ecol. Evol. 2017, 32, 13–22. [Google Scholar] [CrossRef]
  21. Guo, Y.; Zhao, Y.; Rothfus, T.A.; Avalos, A.S. A novel invasive plant detection approach using time series images from unmanned aerial systems based on convolutional and recurrent neural networks. Neural Comput. Appl. 2022, 34, 20135–20147. [Google Scholar] [CrossRef]
  22. Baron, J.; Hill, D.J. Monitoring grassland invasion by spotted knapweed (Centaurea maculosa) with RPAS-acquired multispectral imagery. Remote Sens. Environ. 2020, 249, 112008. [Google Scholar] [CrossRef]
  23. Dai, J.; Roberts, D.A.; Stow, D.A.; An, L.; Hall, S.J.; Yabiku, S.T.; Kyriakidis, P.C. Mapping understory invasive plant species with field and remotely sensed data in Chitwan, Nepal. Remote Sens. Environ. 2020, 250, 112037. [Google Scholar] [CrossRef]
  24. Bradley, B.A. Remote detection of invasive plants: A review of spectral, textural and phenological approaches. Biol. Invasions 2014, 16, 1411–1425. [Google Scholar] [CrossRef]
  25. Joshi, C.; De Leeuw, J.; van Duren, I.C. Remote sensing and GIS applications for mapping and spatial modelling of invasive species. In Proceedings of the ISPRS Congress: Geo-Imagery Bridging Continents, Istanbul, Turkey, 12–23 July 2004; p. B7. [Google Scholar]
  26. Asner, G.P.; Jones, M.O.; Martin, R.E.; Knapp, D.E.; Hughes, R.F. Remote sensing of native and invasive species in Hawaiian forests. Remote Sens. Environ. 2008, 112, 1912–1926. [Google Scholar] [CrossRef]
  27. Rizaludin Mahmud, M.; Numata, S.; Hosaka, T. Mapping an invasive goldenrod of Solidago altissima in urban landscape of Japan using multi-scale remote sensing and knowledge-based classification. Ecol. Indic. 2020, 111, 105975. [Google Scholar] [CrossRef]
  28. Rodríguez-Garlito, E.C.; Paz-Gallardo, A.; Plaza, A. Mapping invasive aquatic plants in Sentinel-2 images using convolutional neural networks trained with spectral indices. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 2889–2899. [Google Scholar] [CrossRef]
  29. Jiang, Y.; Zhang, L.; Yan, M.; Qi, J.; Fu, T.; Fan, S.; Chen, B. High-Resolution Mangrove Forests Classification with Machine Learning Using Worldview and UAV Hyperspectral Data. Remote Sens. 2021, 13, 1529. [Google Scholar] [CrossRef]
  30. Tuceryan, M.; Jain, A.K. Texture analysis. In Handbook of Pattern Recognition and Computer Vision, 2nd ed.; World Scientific: Singapore, 1999; pp. 207–248. [Google Scholar]
  31. Gholizadeh, H.; Friedman, M.S.; McMillan, N.A.; Hammond, W.M.; Hassani, K.; Sams, A.V.; Charles, M.D.; Garrett, D.R.; Joshi, O.; Hamilton, R.G.; et al. Mapping invasive alien species in grassland ecosystems using airborne imaging spectroscopy and remotely observable vegetation functional traits. Remote Sens. Environ. 2022, 271, 112887. [Google Scholar] [CrossRef]
  32. Rossi, C.; Kneubühler, M.; Schütz, M.; Schaepman, M.E.; Haller, R.M.; Risch, A.C. From local to regional: Functional diversity in differently managed alpine grasslands. Remote Sens. Environ. 2020, 236, 111415. [Google Scholar] [CrossRef]
  33. Weisberg, P.J.; Dilts, T.E.; Greenberg, J.A.; Johnson, K.N.; Pai, H.; Sladek, C.; Kratt, C.; Tyler, S.W.; Ready, A. Phenology-based classification of invasive annual grasses to the species level. Remote Sens. Environ. 2021, 263, 112568. [Google Scholar] [CrossRef]
  34. Kganyago, M.; Odindi, J.; Adjorlolo, C.; Mhangara, P. Evaluating the capability of Landsat 8 OLI and SPOT 6 for discriminating invasive alien species in the African Savanna landscape. Int. J. Appl. Earth Obs. 2018, 67, 10–19. [Google Scholar] [CrossRef]
  35. Robinson, T.P.; Wardell-Johnson, G.W.; Pracilio, G.; Brown, C.; Corner, R.; van Klinken, R.D. Testing the discrimination and detection limits of WorldView-2 imagery on a challenging invasive plant target. Int. J. Appl. Earth Obs. 2016, 44, 23–30. [Google Scholar] [CrossRef]
  36. He, K.S.; Rocchini, D.; Neteler, M.; Nagendra, H. Benefits of hyperspectral remote sensing for tracking plant invasions. Divers. Distrib. 2011, 17, 381–392. [Google Scholar] [CrossRef]
  37. Skowronek, S.; Asner, G.P.; Feilhauer, H. Performance of one-class classifiers for invasive species mapping using airborne imaging spectroscopy. Ecol. Inf. 2017, 37, 66–76. [Google Scholar] [CrossRef]
  38. Underwood, E. Mapping nonnative plants using hyperspectral imagery. Remote Sens. Environ. 2003, 86, 150–161. [Google Scholar] [CrossRef]
  39. Fassnacht, F.E.; Latifi, H.; Stereńczak, K.; Modzelewska, A.; Lefsky, M.; Waser, L.T.; Straub, C.; Ghosh, A. Review of studies on tree species classification from remotely sensed data. Remote Sens. Environ. 2016, 186, 64–87. [Google Scholar] [CrossRef]
  40. Ghulam, A.; Porton, I.; Freeman, K. Detecting subcanopy invasive plant species in tropical rainforest by integrating optical and microwave (InSAR/PolInSAR) remote sensing data, and a decision tree algorithm. ISPRS J. Photogramm. 2014, 88, 174–192. [Google Scholar] [CrossRef]
  41. Schmidt, J.; Fassnacht, F.E.; Förster, M.; Schmidtlein, S. Synergetic use of Sentinel-1 and Sentinel-2 for assessments of heathland conservation status. Remote Sens. Ecol. Conserv. 2018, 4, 225–239. [Google Scholar] [CrossRef]
  42. Elkind, K.; Sankey, T.T.; Munson, S.M.; Aslan, C.E.; Horning, N. Invasive buffelgrass detection using high-resolution satellite and UAV imagery on Google Earth Engine. Remote Sens. Ecol. Conserv. 2019, 5, 318–331. [Google Scholar] [CrossRef]
  43. Martin, F.-M.; Müllerová, J.; Borgniet, L.; Dommanget, F.; Breton, V.; Evette, A. Using single- and multi-date UAV and satellite imagery to accurately monitor invasive Knotweed Species. Remote Sens. 2018, 10, 1662. [Google Scholar] [CrossRef]
  44. Lourenço, P.; Teodoro, A.C.; Gonçalves, J.A.; Honrado, J.P.; Cunha, M.; Sillero, N. Assessing the performance of different OBIA software approaches for mapping invasive alien plants along roads with remote sensing data. Int. J. Appl. Earth Obs. 2021, 95, 102263. [Google Scholar] [CrossRef]
  45. Kishore, B.S.P.C.; Kumar, A.; Saikia, P.; Lele, N.; Srivastava, P.; Pulla, S.; Suresh, H.; Bhattarcharya, B.K.; Khan, M.L.; Sukumar, R. Mapping of understorey invasive plant species clusters of Lantana camara and Chromolaena odorata using airborne hyperspectral remote sensing. Adv. Space Res. 2022, 73, 1379–1396. [Google Scholar] [CrossRef]
  46. Kattenborn, T.; Leitloff, J.; Schiefer, F.; Hinz, S. Review on convolutional neural networks (CNN) in vegetation remote sensing. ISPRS J. Photogramm. 2021, 173, 24–49. [Google Scholar] [CrossRef]
  47. Lake, T.A.; Briscoe Runquist, R.D.; Moeller, D.A.; Sankey, T.; Ke, Y. Deep learning detects invasive plant species across complex landscapes using Worldview-2 and Planetscope satellite imagery. Remote Sens. Ecol. Conserv. 2022, 8, 875–889. [Google Scholar] [CrossRef]
  48. Kattenborn, T.; Eichel, J.; Wiser, S.; Burrows, L.; Fassnacht, F.E.; Schmidtlein, S.; Horning, N.; Clerici, N. Convolutional Neural Networks accurately predict cover fractions of plant species and communities in Unmanned Aerial Vehicle imagery. Remote Sens. Ecol. Conserv. 2020, 6, 472–486. [Google Scholar] [CrossRef]
  49. Gibril, M.B.A.; Shafri, H.Z.M.; Shanableh, A.; Al-Ruzouq, R.; Wayayok, A.; Hashim, S.J. Deep convolutional neural network for large-scale date palm tree mapping from UAV-based images. Remote Sens. 2021, 13, 2787. [Google Scholar] [CrossRef]
  50. Ma, L.; Liu, Y.; Zhang, X.; Ye, Y.; Yin, G.; Johnson, B.A. Deep learning in remote sensing applications: A meta-analysis and review. ISPRS J. Photogramm. 2019, 152, 166–177. [Google Scholar] [CrossRef]
  51. Ball, J.E.; Anderson, D.T.; Chan, C.S. Comprehensive survey of deep learning in remote sensing: Theories, tools, and challenges for the community. J. Appl. Remote Sens. 2017, 11, 042609. [Google Scholar] [CrossRef]
  52. Reichstein, M.; Camps-Valls, G.; Stevens, B.; Jung, M.; Denzler, J.; Carvalhais, N.; Prabhat, F. Deep learning and process understanding for data-driven Earth system science. Nature 2019, 566, 195–204. [Google Scholar] [CrossRef]
  53. Zhang, L.; Zhang, L.; Du, B. Deep Learning for Remote Sensing Data: A Technical Tutorial on the State of the Art. IEEE Geosci. Remote Sens. Mag. 2016, 4, 22–40. [Google Scholar] [CrossRef]
  54. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  55. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 84–90. [Google Scholar] [CrossRef]
  56. Pearse, G.D.; Tan, A.Y.S.; Watt, M.S.; Franz, M.O.; Dash, J.P. Detecting and mapping tree seedlings in UAV imagery using convolutional neural networks and field-verified data. ISPRS J. Photogramm. 2020, 168, 156–169. [Google Scholar] [CrossRef]
  57. Flood, N.; Watson, F.; Collett, L. Using a U-net convolutional neural network to map woody vegetation extent from high resolution satellite imagery across Queensland, Australia. Int. J. Appl. Earth Obs. 2019, 82, 101897. [Google Scholar] [CrossRef]
  58. Hasan, A.S.M.M.; Sohel, F.; Diepeveen, D.; Laga, H.; Jones, M.G.K. A survey of deep learning techniques for weed detection from images. Comput. Electron. Agric. 2021, 184, 106067. [Google Scholar] [CrossRef]
  59. Kamilaris, A.; Prenafeta-Boldú, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90. [Google Scholar] [CrossRef]
  60. Moazzam, S.I.; Khan, U.S.; Tiwana, M.I.; Iqbal, J.; Qureshi, W.S.; Shah, S.I. A review of application of deep learning for weeds and crops classification in agriculture. In Proceedings of the 2019 International Conference on Robotics and Automation in Industry (ICRAI), Montreal, QC, Canada, 20–24 May 2019; pp. 1–6. [Google Scholar]
  61. Wang, Q.; Cheng, M.; Huang, S.; Cai, Z.; Zhang, J.; Yuan, H. A deep learning approach incorporating YOLO v5 and attention mechanisms for field real-time detection of the invasive weed Solanum rostratum Dunal seedlings. Comput. Electron. Agric. 2022, 199, 107194. [Google Scholar] [CrossRef]
  62. Qian, W.Q.; Huang, Y.Q.; Liu, Q.; Fan, W.; Sun, Z.Y.; Dong, H.; Wan, F.H.; Qiao, X. UAV and a deep convolutional neural network for monitoring invasive alien plants in the wild. Comput. Electron. Agric. 2020, 174, 105519. [Google Scholar] [CrossRef]
  63. Kounalakis, T.; Triantafyllidis, G.A.; Nalpantidis, L. Deep learning-based visual recognition of rumex for robotic precision farming. Comput. Electron. Agric. 2019, 165, 104973. [Google Scholar] [CrossRef]
  64. Liu, T.; Abd-Elrahman, A.; Zare, A.; Dewitt, B.A.; Flory, L.; Smith, S.E. A fully learnable context-driven object-based model for mapping land cover using multi-view data from unmanned aircraft systems. Remote Sens. Environ. 2018, 216, 328–344. [Google Scholar] [CrossRef]
  65. Foody, G.M.; Campbell, N.; Trodd, N.; Wood, T. Derivation and applications of probabilistic measures of class membership from the maximum-likelihood classification. Photogramm. Eng. Remote Sens. 1992, 58, 1335–1341. [Google Scholar] [CrossRef]
  66. Rocchini, D.; Foody, G.M.; Nagendra, H.; Ricotta, C.; Anand, M.; He, K.S.; Amici, V.; Kleinschmit, B.; Förster, M.; Schmidtlein, S. Uncertainty in ecosystem mapping by remote sensing. Comput. Geosci. 2013, 50, 128–135. [Google Scholar] [CrossRef]
  67. Schmidtlein, S.; Sassin, J. Mapping of continuous floristic gradients in grasslands using hyperspectral imagery. Remote Sens. Environ. 2004, 92, 126–138. [Google Scholar] [CrossRef]
  68. Frizzelle, B.G.; Moody, A. Mapping continuous distributions of land cover: A comparison of maximum-likelihood estimation and artificial neural networks. Photogramm. Eng. Remote Sens. 2001, 67, 693–706. [Google Scholar]
  69. Li, J.; Ma, Y.; Song, R.; Xi, B.; Hong, D.; Du, Q. A Triplet Semisupervised Deep Network for Fusion Classification of Hyperspectral and LiDAR Data. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5540513. [Google Scholar] [CrossRef]
  70. Osco, L.P.; Marcato Junior, J.; Marques Ramos, A.P.; de Castro Jorge, L.A.; Fatholahi, S.N.; de Andrade Silva, J.; Matsubara, E.T.; Pistori, H.; Gonçalves, W.N.; Li, J. A review on deep learning in UAV remote sensing. Int. J. Appl. Earth Obs. 2021, 102, 102456. [Google Scholar] [CrossRef]
  71. Schiefer, F.; Schmidtlein, S.; Frick, A.; Frey, J.; Klinke, R.; Zielewska-Büttner, K.; Junttila, S.; Uhl, A.; Kattenborn, T. UAV-based reference data for the prediction of fractional cover of standing deadwood from Sentinel time series. ISPRS Open J. Photogramm. Remote Sens. 2023, 8, 100034. [Google Scholar] [CrossRef]
  72. Yang, H.; Du, Y.; Zhao, H.; Chen, F. Water Quality Chl-a Inversion Based on Spatio-Temporal Fusion and Convolutional Neural Network. Remote Sens. 2022, 14, 1267. [Google Scholar] [CrossRef]
  73. Wang, B.; An, R.; Jiang, T.; Xing, F.; Ju, F. Image Spectral Resolution Enhancement for Mapping Native Plant Species in a Typical Area of the Three-River Headwaters Region, China. Remote Sens. 2020, 12, 3146. [Google Scholar] [CrossRef]
  74. ESA. Sentinel-2 MSI User Guide; ESA: Paris, France, 2016; Volume 2023. [Google Scholar]
  75. Korhonen, L.; Hadi; Packalen, P.; Rautiainen, M. Comparison of Sentinel-2 and Landsat 8 in the estimation of boreal forest canopy cover and leaf area index. Remote Sens. Environ. 2017, 195, 259–274. [Google Scholar] [CrossRef]
  76. Liu, X.; Frey, J.; Munteanu, C.; Still, N.; Koch, B. Mapping tree species diversity in temperate montane forests using Sentinel-1 and Sentinel-2 imagery and topography data. Remote Sens. Environ. 2023, 292, 113576. [Google Scholar] [CrossRef]
  77. Sun, C.; Li, J.; Liu, Y.; Liu, Y.; Liu, R. Plant species classification in salt marshes using phenological parameters derived from Sentinel-2 pixel-differential time-series. Remote Sens. Environ. 2021, 256, 112320. [Google Scholar] [CrossRef]
  78. Yang, X.; Qiu, S.; Zhu, Z.; Rittenhouse, C.; Riordan, D.; Cullerton, M. Mapping understory plant communities in deciduous forests from Sentinel-2 time series. Remote Sens. Environ. 2023, 293, 113601. [Google Scholar] [CrossRef]
  79. Förster, M.; Schmidt, T.; Wolf, R.; Kleinschmit, B.; Fassnacht, F.E.; Cabezas, J.; Kattenborn, T. Detecting the Spread of Invasive Species in Central Chile with a Sentinel-2 time-series. In Proceedings of the 2017 9th International Workshop on the Analysis of Multitemporal Remote Sensing Images (MultiTemp), Brugge, Belgium, 27–29 June 2017. [Google Scholar] [CrossRef]
  80. Ng, W.-T.; Rima, P.; Einzmann, K.; Immitzer, M.; Atzberger, C.; Eckert, S. Assessing the potential of Sentinel-2 and Pléiades data for the detection of Prosopis and Vachellia spp. in Kenya. Remote Sens. 2017, 9, 74. [Google Scholar] [CrossRef]
  81. Sheng, W.; Zhen, L.; Xiao, Y.; Hu, Y. Ecological and socioeconomic effects of ecological restoration in China’s Three Rivers Source Region. Sci. Total Environ. 2019, 650, 2307–2313. [Google Scholar] [CrossRef] [PubMed]
  82. Ge, J.; Meng, B.; Liang, T.; Feng, Q.; Gao, J.; Yang, S.; Huang, X.; Xie, H. Modeling alpine grassland cover based on MODIS data and support vector machine regression in the headwater region of the Huanghe River, China. Remote Sens. Environ. 2018, 218, 162–173. [Google Scholar] [CrossRef]
  83. Xing, F.; An, R.; Guo, X.; Shen, X.; Soubry, I.; Wang, B.; Mu, Y.; Huang, X. Mapping alpine grassland fraction coverage using Zhuhai-1 OHS imagery in the Three River Headwaters Region, China. Remote Sens. 2023, 15, 2289. [Google Scholar] [CrossRef]
  84. Xiong, Q.; Xiao, Y.; Halmy, M.W.A.; Dakhil, M.A.; Liang, P.; Liu, C.; Zhang, L.; Pandey, B.; Pan, K.; El Kafraway, S.B.; et al. Monitoring the impact of climate change and human activities on grassland vegetation dynamics in the northeastern Qinghai-Tibet Plateau of China during 2000–2015. J. Arid Land 2019, 11, 637–651. [Google Scholar] [CrossRef]
  85. Xu, J.; Fang, S.; Li, X.; Jiang, Z. Indication of the Two Linear Correlation Methods Between Vegetation Index and Climatic Factors: An Example in the Three River-Headwater Region of China During 2000–2016. Atmosphere 2020, 11, 606. [Google Scholar] [CrossRef]
  86. Drusch, M.; Del Bello, U.; Carlier, S.; Colin, O.; Fernandez, V.; Gascon, F.; Hoersch, B.; Isola, C.; Laberinti, P.; Martimort, P.; et al. Sentinel-2: ESA’s Optical High-Resolution Mission for GMES Operational Services. Remote Sens. Environ. 2012, 120, 25–36. [Google Scholar] [CrossRef]
  87. ESA. SNAP: ESA Sentinel Application Platform; ESA: Paris, France, 2016. [Google Scholar]
  88. ESA. Sentinel-2 User Handbook; ESA: Paris, France, 2015. [Google Scholar]
  89. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  90. Liaw, A.; Wiener, M. Classification and regression by randomForest. R News 2002, 2, 18–22. [Google Scholar]
  91. Liu, Y.; Qian, J.; Yue, H. Combined Sentinel-1A with Sentinel-2A to estimate soil moisture in farmland. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 14, 1292–1310. [Google Scholar] [CrossRef]
  92. Mutanga, O.; Adam, E.; Cho, M.A. High density biomass estimation for wetland vegetation using WorldView-2 imagery and random forest regression algorithm. Int. J. Appl. Earth Obs. 2012, 18, 399–406. [Google Scholar] [CrossRef]
  93. Zhang, F.; Tian, X.; Zhang, H.; Jiang, M. Estimation of Aboveground Carbon Density of Forests Using Deep Learning and Multisource Remote Sensing. Remote Sens. 2022, 14, 22. [Google Scholar] [CrossRef]
  94. Dong, L.; Du, H.; Han, N.; Li, X.; Zhu, D.E.; Mao, F.; Zhang, M.; Zheng, J.; Liu, H.; Huang, Z.; et al. Application of Convolutional Neural Network on Lei Bamboo Above-Ground-Biomass (AGB) Estimation Using Worldview-2. Remote Sens. 2020, 12, 958. [Google Scholar] [CrossRef]
  95. Maimaitijiang, M.; Sagan, V.; Sidike, P.; Hartling, S.; Esposito, F.; Fritschi, F.B. Soybean yield prediction from UAV using multimodal data fusion and deep learning. Remote Sens. Environ. 2020, 237, 111599. [Google Scholar] [CrossRef]
  96. Zheng, T.; Bergin, M.H.; Hu, S.; Miller, J.; Carlson, D.E. Estimating ground-level PM2.5 using micro-satellite images by a convolutional neural network and random forest approach. Atmos. Environ. 2020, 230, 117451. [Google Scholar] [CrossRef]
  97. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  98. Barbosa, J.; Asner, G.; Martin, R.; Baldeck, C.; Hughes, F.; Johnson, T. Determining Subcanopy Psidium cattleianum Invasion in Hawaiian Forests Using Imaging Spectroscopy. Remote Sens. 2016, 8, 33. [Google Scholar] [CrossRef]
  99. Shoko, C.; Mutanga, O. Examining the strength of the newly-launched Sentinel 2 MSI sensor in detecting and discriminating subtle differences between C3 and C4 grass species. ISPRS J. Photogramm. 2017, 129, 32–40. [Google Scholar] [CrossRef]
  100. Ai, Z. Research on Components Coverage Information Extraction of Native Plant Species and Noxious Weeds in Typical Area of the Three-River Headwater Region by Using Hyperspectral Remote Sensing; Hohai University: Nanjing, China, 2022. [Google Scholar]
  101. Jiang, Z.; Huete, A.R.; Chen, J.; Chen, Y.; Li, J.; Yan, G.; Zhang, X. Analysis of NDVI and scaled difference vegetation index retrievals of vegetation fraction. Remote Sens. Environ. 2006, 101, 366–378. [Google Scholar] [CrossRef]
  102. Kiala, Z.; Mutanga, O.; Odindi, J.; Viriri, S.; Sibanda, M. A Hybrid Feature Method for Handling Redundant Features in a Sentinel-2 Multidate Image for Mapping Parthenium Weed. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 3644–3655. [Google Scholar] [CrossRef]
  103. Sonobe, R.; Yamaya, Y.; Tani, H.; Wang, X.; Kobayashi, N.; Mochizuki, K.-I. Crop classification from Sentinel-2-derived vegetation indices using ensemble learning. J. Appl. Remote Sens. 2018, 12, 026019. [Google Scholar] [CrossRef]
  104. Gitelson, A.A.; Chivkunova, O.B.; Merzlyak, M.N. Nondestructive estimation of anthocyanins and chlorophylls in anthocyanic leaves. Am. J. Bot. 2009, 96, 1861–1868. [Google Scholar] [CrossRef] [PubMed]
  105. Verrelst, J.; Schaepman, M.E.; Koetz, B.; Kneubühler, M. Angular sensitivity analysis of vegetation indices derived from CHRIS/PROBA data. Remote Sens. Environ. 2008, 112, 2341–2353. [Google Scholar] [CrossRef]
  106. Figueroa-Figueroa, D.K.; Ramírez-Dávila, J.F.; Antonio-Némiga, X.; Huerta, A.G. Mapping of avocado in the south of the state of Mexico by digital image processing sentinel-2. Rev. Mex. Cienc. Agrícolas 2020, 11, 865–879. [Google Scholar] [CrossRef]
  107. Kamenova, I.; Dimitrov, P. Evaluation of Sentinel-2 vegetation indices for prediction of LAI, fAPAR and fCover of winter wheat in Bulgaria. Eur. J. Remote Sens. 2020, 54, 89–108. [Google Scholar] [CrossRef]
  108. Aires, F.; Boucher, E.; Pellet, V. Convolutional neural networks for satellite remote sensing at coarse resolution. Application for the SST retrieval using IASI. Remote Sens. Environ. 2021, 263, 112553. [Google Scholar] [CrossRef]
  109. Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, Proceedings of Machine Learning Research, Lille, France, 7–9 July 2015; pp. 448–456. [Google Scholar]
  110. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1026–1034. [Google Scholar]
  111. Deep, S.; Zheng, X. Hybrid model featuring CNN and LSTM architecture for human activity recognition on smartphone sensor data. In Proceedings of the 2019 20th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT), Gold Coast, Australia, 5–7 December 2019; pp. 259–264. [Google Scholar]
  112. Aptoula, E.; Ariman, S. Chlorophyll-a retrieval from Sentinel-2 images using convolutional neural network regression. IEEE Geosci. Remote Sens. Lett. 2022, 19, 6002605. [Google Scholar] [CrossRef]
  113. Sharma, S.; Sharma, S.; Athaiya, A. Activation functions in neural networks. Int. J. Eng. Appl. Sci. Technol. 2017, 4, 310–316. [Google Scholar] [CrossRef]
  114. Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M. Tensorflow: A system for large-scale machine learning. In Proceedings of the OSDI, Savannah, GA, USA, 2–4 November 2016; pp. 265–283. [Google Scholar]
  115. Asuero, A.G.; Sayago, A.; González, A.G. The Correlation Coefficient: An Overview. Crit. Rev. Anal. Chem. 2007, 36, 41–59. [Google Scholar] [CrossRef]
  116. Chai, T.; Draxler, R.R. Root mean square error (RMSE) or mean absolute error (MAE)?—Arguments against avoiding RMSE in the literature. Geosci. Model. Dev. 2014, 7, 1247–1250. [Google Scholar] [CrossRef]
  117. Willmott, C.J.; Matsuura, K. Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance. Clim. Res. 2005, 30, 79–82. [Google Scholar] [CrossRef]
  118. Shiferaw, H.; Bewket, W.; Eckert, S. Performances of machine learning algorithms for mapping fractional cover of an invasive plant species in a dryland ecosystem. Ecol. Evol. 2019, 9, 2562–2574. [Google Scholar] [CrossRef]
  119. Ghahremanloo, M.; Lops, Y.; Choi, Y.; Yeganeh, B. Deep learning estimation of daily ground-level NO2 concentrations from remote sensing data. J. Geophys. Res. Atmos. 2021, 126, e2021JD034925. [Google Scholar] [CrossRef]
  120. Hong, S.M.; Cho, K.H.; Park, S.; Kang, T.; Kim, M.S.; Nam, G.; Pyo, J. Estimation of cyanobacteria pigments in the main rivers of South Korea using spatial attention convolutional neural network with hyperspectral imagery. GISci. Remote Sens. 2022, 59, 547–567. [Google Scholar] [CrossRef]
  121. Ou, D.; Tan, K.; Lai, J.; Jia, X.; Wang, X.; Chen, Y.; Li, J. Semi-supervised DNN regression on airborne hyperspectral imagery for improved spatial soil properties prediction. Geoderma 2021, 385, 114875. [Google Scholar] [CrossRef]
  122. Tanabe, R.; Matsui, T.; Tanaka, T.S.T. Winter wheat yield prediction using convolutional neural networks and UAV-based multispectral imagery. Field Crop Res. 2023, 291, 108786. [Google Scholar] [CrossRef]
  123. Pyo, J.; Duan, H.; Baek, S.; Kim, M.S.; Jeon, T.; Kwon, Y.S.; Lee, H.; Cho, K.H. A convolutional neural network regression for quantifying cyanobacteria using hyperspectral imagery. Remote Sens. Environ. 2019, 233, 111350. [Google Scholar] [CrossRef]
  124. Sahiner, B.; Heang-Ping, C.; Petrick, N.; Datong, W.; Helvie, M.A.; Adler, D.D.; Goodsitt, M.M. Classification of mass and normal breast tissue: A convolution neural network classifier with spatial domain and texture images. IEEE Trans. Med. Imaging 1996, 15, 598–610. [Google Scholar] [CrossRef] [PubMed]
  125. Zhang, C.; Sargent, I.; Pan, X.; Li, H.; Gardiner, A.; Hare, J.; Atkinson, P.M. An object-based convolutional neural network (OCNN) for urban land use classification. Remote Sens. Environ. 2018, 216, 57–70. [Google Scholar] [CrossRef]
  126. Nill, L.; Grünberg, I.; Ullmann, T.; Gessner, M.; Boike, J.; Hostert, P. Arctic shrub expansion revealed by Landsat-derived multitemporal vegetation cover fractions in the Western Canadian Arctic. Remote Sens. Environ. 2022, 281, 113228. [Google Scholar] [CrossRef]
  127. Viana-Soto, A.; Okujeni, A.; Pflugmacher, D.; García, M.; Aguado, I.; Hostert, P. Quantifying post-fire shifts in woody-vegetation cover composition in Mediterranean pine forests using Landsat time series and regression-based unmixing. Remote Sens. Environ. 2022, 281, 113239. [Google Scholar] [CrossRef]
  128. Okujeni, A.; van der Linden, S.; Hostert, P. Extending the vegetation–impervious–soil model using simulated EnMAP data and machine learning. Remote Sens. Environ. 2015, 158, 69–80. [Google Scholar] [CrossRef]
  129. Cooper, S.; Okujeni, A.; Jänicke, C.; Clark, M.; van der Linden, S.; Hostert, P. Disentangling fractional vegetation cover: Regression-based unmixing of simulated spaceborne imaging spectroscopy data. Remote Sens. Environ. 2020, 246, 111856. [Google Scholar] [CrossRef]
  130. Senf, C.; Laštovička, J.; Okujeni, A.; Heurich, M.; van der Linden, S. A generalized regression-based unmixing model for mapping forest cover fractions throughout three decades of Landsat data. Remote Sens. Environ. 2020, 240, 111691. [Google Scholar] [CrossRef]
Figure 1. Study area. (a) Location of the TRHR and QTP at the Asia scale. (b) Location of the study area in the TRHR. (c) Sampling sites and the study area shown by Sentinel-2 (R: b8, G: b2, B: b1) imagery.
Figure 2. Schematic diagram of the sample plots: (a) 30 m × 30 m quadrat; (b) 1 m × 1 m subplot quadrat in the field.
Figure 3. Schematic diagram of the convolutional neural network regression model used in this study. The input layer consists of Sentinel-2 image patches paired with field observation points of INWS continuous cover; the network comprises three convolutional layers, two fully connected layers, and a regression layer with a sigmoid activation function for INWS continuous cover estimation. Conv + BN + ReLU denotes a convolutional layer followed by a batch normalization layer and a ReLU activation function.
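To make the architecture in Figure 3 concrete, the following minimal Keras sketch reproduces the layer sequence named in the caption. Only that sequence comes from the paper; the filter counts, dense-layer widths, optimizer, and loss function are illustrative assumptions.

```python
# Minimal sketch of the Figure 3 CNNR architecture. Filter counts, dense
# widths, optimizer, and loss are assumptions; the layer sequence
# (3 x [Conv + BN + ReLU], 2 fully connected layers, sigmoid regression
# output) follows the caption.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnnr(patch_size: int = 5, n_bands: int = 10) -> tf.keras.Model:
    inputs = layers.Input(shape=(patch_size, patch_size, n_bands))
    x = inputs
    for filters in (32, 64, 128):           # assumed filter counts
        x = layers.Conv2D(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)  # BN before activation, as in Figure 3
        x = layers.Activation("relu")(x)
    x = layers.Flatten()(x)
    x = layers.Dense(64, activation="relu")(x)   # two fully connected layers
    x = layers.Dense(16, activation="relu")(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)  # cover fraction in [0, 1]
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model
```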
Figure 4. Input patch size. The green point represents the center of the 30 m × 30 m sample plot. The red line represents an input image patch size of 3 × 3, the yellow line represents 5 × 5, and the blue line represents 7 × 7.
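The patch sizes in Figure 4 (and the larger ones compared in Table 4) correspond to square windows of Sentinel-2 pixels centered on each sample plot. A minimal NumPy sketch of this extraction is given below; `image` and the plot-center coordinates are hypothetical inputs.

```python
# Illustrative extraction of odd-sized square patches centered on each sample
# plot. `image` is a (rows, cols, bands) reflectance array; (row, col) is the
# pixel coordinate of a plot center. Both are placeholders for this sketch.
import numpy as np

def extract_patch(image: np.ndarray, row: int, col: int, size: int) -> np.ndarray:
    # Odd sizes (3, 5, ..., 15) keep the plot center in the middle pixel.
    half = size // 2
    return image[row - half:row + half + 1, col - half:col + half + 1, :]

# e.g., the three patch sizes drawn in Figure 4:
# patches = [extract_patch(image, r, c, s) for s in (3, 5, 7)]
```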
Figure 5. Field-observed INWS cover (%). (a) INWS cover distribution. (b) INWS cover variation in the longitudinal direction. (c) INWS cover variation in the latitudinal direction. (d) INWS cover variation along the elevation gradient.
Figure 6. Feature importance evaluation based on the RFR model: (a) Bands; (b) VIs; (c) Bands + VIs; (d) PCA + VIs; (e) Bands + PCA + VIs.
Figure 7. Scatterplots showing model performance in estimating INWS continuous cover for the study area. (a) Random forest regression (RFR) model with Bands + VIs features. (b) Convolutional neural network regression (CNNR) model with Sentinel-2 bands. (c) CNNR model with PCA.
Figure 8. The spatial pattern of estimated INWS continuous cover over the study area. (a) INWS cover along the elevation gradient. (b) RFR model. (c) CNNR model with Sentinel-2 bands. (d) CNNR model with PCA. Areas other than grassland were masked using land cover/use data.
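A wall-to-wall map such as Figure 8 can be produced by sliding the trained model across the scene and predicting cover for every grassland pixel from its surrounding patch. The sketch below illustrates this under stated assumptions: `model`, `image`, and `grass_mask` are placeholders, and image-edge pixels are simply skipped rather than padded.

```python
# Sketch of wall-to-wall inference for the Figure 8 cover map. For each
# grassland pixel, predict cover from the patch centered on it; non-grassland
# pixels stay masked (NaN), mirroring the land cover/use mask in the caption.
import numpy as np

def predict_cover_map(model, image: np.ndarray, grass_mask: np.ndarray,
                      patch: int = 5) -> np.ndarray:
    half = patch // 2
    rows, cols, _ = image.shape
    cover = np.full((rows, cols), np.nan, dtype=np.float32)
    for r in range(half, rows - half):
        for c in range(half, cols - half):
            if not grass_mask[r, c]:  # keep non-grassland areas masked
                continue
            p = image[r - half:r + half + 1, c - half:c + half + 1, :]
            cover[r, c] = model.predict(p[np.newaxis], verbose=0)[0, 0]
    return cover
```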
Table 1. Features derived from Sentinel-2 data.

| Feature Types | Indices | Description | Reference |
|---|---|---|---|
| Bands | B2, B3, B4, B5, B6, B7, B8, B8a, B11, B12 | Reflectance bands, excluding the three atmospheric bands (B1, B9, and B10) | |
| Vegetation indices | NDVI | NDVI = (Band 8 − Band 4)/(Band 8 + Band 4) | [19] |
| | SVI | SVI = (Band 8 − Band 6)/(Band 8 + Band 6) | |
| | GNDVI | GNDVI = (Band 8 − Band 3)/(Band 8 + Band 3) | [80] |
| | nGNDVI | nGNDVI = (Band 7 − Band 3)/(Band 7 + Band 3) | [100] |
| | SAVI | SAVI = 1.5 × (Band 8 − Band 4)/(Band 8 + Band 4 + 0.5) | [19] |
| | ARI1 | ARI1 = (1/Band 3) − (1/Band 5) | [104,105] |
| | ARI2 | ARI2 = Band 8 × [(1/Band 3) − (1/Band 5)] | [104,106] |
| | CIgreen | CIgreen = (Band 7/Band 3) − 1 | [19] |
| | CIgreen1 | CIgreen1 = (Band 8/Band 3) − 1 | [107] |
| | CIred-edge1 | CIred-edge1 = (Band 8a/Band 5) − 1 | |
| | CIred-edge2 | CIred-edge2 = (Band 7/Band 5) − 1 | |
| | VIRRE1 | VIRRE1 = Band 8/Band 5 | [19] |
| | VIRRE2 | VIRRE2 = Band 8/Band 6 | |
| | VIRRE3 | VIRRE3 = Band 8/Band 7 | |
| | GR ratio | GR = Band 3/Band 2 | |
| PCA | PC1, PC2, PC3 | First (PC1), second (PC2), and third (PC3) components of the principal component analysis | |

The table lists each feature type and its extraction method; band numbers in the VI formulas refer to Sentinel-2 bands.
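As a worked illustration of Table 1, the sketch below computes a subset of the listed indices from a dictionary `b` that maps Sentinel-2 band names to reflectance arrays. The dictionary layout and the small `eps` guard against division by zero are assumptions of this sketch, not part of the paper.

```python
# Compute a few of the Table 1 vegetation indices from a Sentinel-2 band
# stack. `b` maps band names (e.g., "B8") to NumPy reflectance arrays.
import numpy as np

def table1_indices(b: dict) -> dict:
    eps = 1e-6  # guards divisions; not part of the original formulas
    return {
        "NDVI":        (b["B8"] - b["B4"]) / (b["B8"] + b["B4"] + eps),
        "SVI":         (b["B8"] - b["B6"]) / (b["B8"] + b["B6"] + eps),
        "GNDVI":       (b["B8"] - b["B3"]) / (b["B8"] + b["B3"] + eps),
        "SAVI":        1.5 * (b["B8"] - b["B4"]) / (b["B8"] + b["B4"] + 0.5),
        "ARI1":        1.0 / (b["B3"] + eps) - 1.0 / (b["B5"] + eps),
        "ARI2":        b["B8"] * (1.0 / (b["B3"] + eps) - 1.0 / (b["B5"] + eps)),
        "CIgreen":     b["B7"] / (b["B3"] + eps) - 1.0,
        "CIred-edge1": b["B8A"] / (b["B5"] + eps) - 1.0,
        "VIRRE1":      b["B8"] / (b["B5"] + eps),
        "GR":          b["B3"] / (b["B2"] + eps),
    }
```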
Table 2. Statistical measures of field-observed INWS cover (%).

| | Minimum | Maximum | Mean | Standard Deviation | Sample Points |
|---|---|---|---|---|---|
| INWS cover | 0.05 | 0.62 | 0.29 | 0.12 | 88 |
Table 3. Estimation of INWS continuous cover with features derived from Sentinel-2 imagery by the random forest regression (RFR) model over the study area.

| Features | R² | MAE (%) | RMSE (%) |
|---|---|---|---|
| Bands | 0.8364 | 0.0377 | 0.0464 |
| VIs | 0.8502 | 0.0369 | 0.0444 |
| Bands + VIs | 0.8505 | 0.0369 | 0.0444 |
| PCA + VIs | 0.8475 | 0.0372 | 0.0448 |
| Bands + PCA + VIs | 0.8481 | 0.0373 | 0.0447 |
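A minimal scikit-learn sketch of the RFR experiments summarized in Table 3, including the impurity-based importance ranking shown in Figure 6, follows. The number of trees, the train/test split, and the placeholder arrays `X` and `y` are assumptions; in the paper, `X` would hold the Table 1 features per plot and `y` the field-observed cover fractions.

```python
# Sketch of the RFR workflow behind Table 3 and Figure 6, with assumed
# hyperparameters and random placeholder data standing in for the 88 plots.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((88, 13))   # placeholder per-plot features (Table 1)
y = rng.random(88)         # placeholder field-observed INWS cover fractions
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rfr = RandomForestRegressor(n_estimators=500, random_state=0)  # assumed settings
rfr.fit(X_tr, y_tr)
y_pred = rfr.predict(X_te)

# Impurity-based importances, ranked most to least important (cf. Figure 6)
ranking = np.argsort(rfr.feature_importances_)[::-1]
```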
Table 4. Estimation of INWS continuous cover by the CNNR model with Sentinel-2 bands and PCA bands over the study area.

| Input Patch Size | Bands: R² | Bands: MAE (%) | Bands: RMSE (%) | PCA: R² | PCA: MAE (%) | PCA: RMSE (%) |
|---|---|---|---|---|---|---|
| 3 × 3 | 0.8782 | 0.0225 | 0.0332 | 0.9170 | 0.0181 | 0.0268 |
| 5 × 5 | 0.9163 | 0.0172 | 0.0262 | 0.9082 | 0.0188 | 0.0280 |
| 7 × 7 | 0.8982 | 0.0193 | 0.0301 | 0.8923 | 0.0197 | 0.0296 |
| 9 × 9 | 0.8804 | 0.0196 | 0.0306 | 0.9116 | 0.0164 | 0.0273 |
| 11 × 11 | 0.9009 | 0.0197 | 0.0290 | 0.9252 | 0.0160 | 0.0250 |
| 13 × 13 | 0.9081 | 0.0195 | 0.0288 | 0.9310 | 0.0170 | 0.0254 |
| 15 × 15 | 0.9087 | 0.0184 | 0.0285 | 0.9060 | 0.0189 | 0.0281 |
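For reference, the accuracy measures reported in Tables 3 and 4 can be written out explicitly as below; `y_true` and `y_pred` stand for observed and estimated cover fractions on the evaluation plots.

```python
# The three accuracy measures used in Tables 3 and 4.
import numpy as np

def r2(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Coefficient of determination: 1 - SS_res / SS_tot
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Mean absolute error
    return float(np.mean(np.abs(y_true - y_pred)))

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Root mean square error
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```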