Article

Estimating Fractional Vegetation Cover Changes in Desert Regions Using RGB Data

1 Research Center of Forestry Remote Sensing and Information Engineering, Central South University of Forestry and Technology, Changsha 410004, China
2 Research Institute of Forest Resource Information Techniques, Chinese Academy of Forestry, Beijing 100091, China
3 Research Institute of Forestry Policy and Information, Chinese Academy of Forestry, Beijing 100091, China
4 Institute of Forestry, Tribhuwan University, Kritipur 44600, Nepal
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(15), 3833; https://doi.org/10.3390/rs14153833
Submission received: 22 June 2022 / Revised: 2 August 2022 / Accepted: 2 August 2022 / Published: 8 August 2022
(This article belongs to the Special Issue UAS-Based Lidar and Imagery Data for Forest)

Abstract

Fractional vegetation cover (FVC) is an important indicator of ecosystem changes. Both satellite remote sensing and ground measurements are common methods for estimating FVC. However, desert vegetation grows sparsely and is widely dispersed across desert regions, making it challenging to accurately estimate its cover using satellite data. In this study, we used RGB images from two periods: images from 2006 captured with a small, light manned aircraft at a resolution of 0.1 m and images from 2019 captured with an unmanned aerial vehicle (UAV) at a resolution of 0.02 m. Three pixel-based machine learning algorithms, namely gradient boost decision tree (GBDT), k-nearest neighbor (KNN) and random forest (RF), were used to classify the main vegetation (woody and grass species) and calculate the coverage. An independent data set was used to evaluate the accuracy of the algorithms. The overall accuracies of GBDT, KNN and RF for the 2006 image classification were 0.9140, 0.9190 and 0.9478, respectively, with RF achieving the best classification results. The overall accuracies of GBDT, KNN and RF for the 2019 images were 0.8466, 0.8627 and 0.8569, respectively, with the KNN algorithm achieving the best results for vegetation cover classification. The vegetation coverage in the study area changed significantly from 2006 to 2019, with an increase in grass coverage from 15.47 ± 1.49% to 27.90 ± 2.79%. The results show that RGB images are suitable for mapping FVC. Determining the best spatial resolution for different vegetation features may make estimation of desert vegetation coverage more accurate. Vegetation cover changes are also important for understanding the evolution of desert ecosystems.

Graphical Abstract

1. Introduction

Fractional vegetation cover (FVC) usually refers to the percentage of ground area covered by the vertical projection area of vegetation (including leaves, stems, branches and roots). It is a comprehensive ecological index used to express the surface conditions of plant communities [1,2]. Information on FVC is required for assessment of land degradation, salinization and desertification, as well as environmental monitoring [3]. Deserts are characterized by perennial shrubs forming simple plant communities. Sparse plant species exhibit unique adaptation strategies to the harsh conditions of desert ecosystems, which play an important role in their stabilization [4]. Therefore, it is of considerable significance to study FVC and its changes over the years.
Traditional desert vegetation surveys require frequent, periodic ground surveys involving the collection of a large number of samples, which may require considerable resources [5]. Alternatively, remote sensing data, which have become the most prevalent source in recent years, can be used to estimate FVC [6]. Characteristics of such data include multiple resolutions, flexible acquisition times and coverage of large areas, providing rich spatiotemporal information on FVC [7]. Based on the radiative transfer model and machine learning algorithms, Landsat data can be used for quantitative mapping of vegetation coverage in space and time to estimate FVC with satisfactory accuracy [8]. The random forest regression method in combination with FengYun-3 reflectance data generated FVC estimates with reliable spatiotemporal continuity and high accuracy [9]. Although satellite data are suitable for estimating vegetation cover on large scales, it is generally difficult to generate coverage estimates for different species, especially in areas with sparse vegetation such as deserts. The spatial resolution of satellite images makes it difficult to obtain information on individual species, especially desert plants, which are small in size and sparsely and widely distributed [10,11,12]. Given these limitations, higher-resolution images are needed to estimate the coverage of different species in desert regions.
Unmanned aerial vehicles (UAVs), which are light, low-cost, low-altitude, ground-operated flight platforms capable of carrying a variety of equipment and performing a variety of missions, are becoming increasingly popular in the scientific field [13]. UAVs provide a remarkable opportunity for scientists to match observations to the scale of a variety of ecological variables and to monitor environments responsively at both high spatial and temporal resolution. UAV-based vegetation classification is also cost-effective [14]. High-resolution UAV images are used to extract the structural and functional attributes of vegetation and explore the mechanisms of biodiversity [15,16]. UAVs serve as an effective tool to measure the leaf angle of hardwood tree species [17]. If the flight altitude is low or the resolution of the sensor is high enough, previously undetectable object details can be identified spatially, which is the main advantage of using UAVs to study vegetation dynamics [18]. UAVs provide a large amount of data that can be used to study spatial ecology and monitor environmental phenomena. UAVs also represent an important step toward more effective and efficient monitoring and management of natural resources [14,19].
The quality of sensors is gradually improving, and a variety of sensors are now widely used [20,21,22]. RGB images may become an important form of data for investigating FVC in desert regions [10,11,12,23], yet there have been few studies on such applications. Guo et al. used UAV RGB images and remote sensing indices to obtain FVC in the Mu Us sandy land [24]. Li et al. proposed a mean-based spectral unmixing method to estimate FVC from RGB photos taken by a UAV [25]. These studies confirmed the high accuracy of FVC estimates derived from RGB data.
Recently, machine learning methods have been applied for FVC estimation and estimation of other vegetation parameters due to their robust performance and nonlinear fitting capability [8,26,27]. For example, some researchers have used machine learning methods, such as backpropagation neural network (BPNN) and random forest (RF), to estimate FVC [26,27,28]. The processing of image patterns generally includes four steps: image acquisition, image preprocessing, image feature extraction and classification [29]. Image pattern recognition can be used to study phenology [30], plant phenotype analysis [31], crop yield estimation [32] and biodiversity assessment [18]. However, only a few studies on FVC estimation have combined a machine learning method with image pattern recognition, especially for estimation of sparse vegetation in desert regions.
The desert ecosystem in northwest China is fragile and unstable, mainly characterized by an arid climate, sparse precipitation, high evaporation rates, sparse vegetation and barren land. Affected by human land use, global climate change and geographic environmental elements, these areas have become some of the fastest desertifying areas in China. Desertification not only affects regional drought conditions but also results in the release of large amounts of carbon into the atmosphere due to the degradation of vegetation and soil, which may lead to regional or global climate change. Prevention and control of desertification has become an important part of China’s ecological civilization construction policy, and monitoring sparse vegetation cover is integral to managing desert ecosystems [25]. FVC derived from high-spatial-resolution UAV imagery could be exceptionally effective in mapping desert vegetation cover and assessing degradation, salinization and desertification [23].
In this study, we used high-resolution RGB data and pixel-based machine learning classification methods to estimate vegetation coverage of major species in the study area. The objective of this study was to explore the reliability of vegetation coverage extraction using high-resolution RGB data and pixel-based machine learning classification methods. The methods and results of this study will provide insights into identification techniques for extraction of vegetation coverage information of different desert species from RGB data.

2. Materials and Methods

2.1. Study Area

2.1.1. Overview

In this study, we investigated a 1 km × 1 km area in the Ulan Buh desert in western Inner Mongolia, with central coordinates of 40°15′37.8″N, 106°56′28″E (Figure 1). The area has little topographic fluctuation, mainly comprising conical or crescent-shaped dunes with a height of less than 10 m and an average elevation of 1050 m. The area has a typical semiarid, continental, temperate climate, characterized by little precipitation, strong wind and dry conditions. The area receives an annual rainfall of 138.8 mm, with an average annual evaporation of 2258.8 mm, an average temperature of 6.8 °C, a large temperature difference between day and night and an annual sunshine duration of 3229.9 h. The sandstorm season falls between November and May of the following year. The prevailing winds are westerly and northwesterly. The annual average wind speed is 4.1 m s−1, and sandstorms are considered the main natural hazard in the study area [33].

2.1.2. Vegetation Profile

The plant communities of the Ulan Buh desert are generally xerophytic, and the dominant species have morphological characteristics and physiological functions adapted to drought. They are usually composed of small trees, shrubs and herbs that are highly adaptable and well suited to the local habitats. The main plant species in the study area are Elaeagnus angustifolia L. (Figure 2a), Artemisia desertorum Spreng. Syst. Veg. (Figure 2b), Haloxylon ammodendron (C. A. Mey.) Bunge (Figure 2c) and various herbaceous plants. These plants play an important role in desert ecosystem functioning.

2.2. Data Sources

2.2.1. Image Data

The Institute of Forest Resource Information Techniques (IFRIT), Chinese Academy of Forestry (CAF), adopted an ultralight aircraft low-altitude digital remote sensing system (LARS-1) for aerial photography in July 2006. The system used an MF-11 ultralight aircraft carrying a TOPDC45 metric aerial digital camera and collected 130 photographs with 67–77% course overlap and 30–55% lateral overlap. Furthermore, nine ground control points were deployed and measured (Figure 3a). The spatial resolution of the resulting orthomosaic imagery is 0.1 m (Figure 1a).
The IFRIT applied UAV technology to take aerial photographs of the study area in August 2019 and obtained more than 2000 ultra-high-resolution digital aerial photographs of the study area. The UAV used in this survey was a Phantom 4 Pro, and the camera was a Canon EOS 5DS RGB camera. The flying speed was 5 m/s, and the flying height above the ground was 70 m. The course overlap rate was 80%, and the lateral overlap rate was 60%. In addition, 100 ground control points were deployed to verify the accuracy of the generated image (Figure 3b). The spatial resolution of the resulting orthomosaic imagery is 0.02 m (Figure 1b).

2.2.2. Ground Measurement Data

Ground-based measurements were collected by members of the IFRIT from July to August 2006 following a grid-based sampling framework. Within the 1 km × 1 km study area, 100 sample plots of 100 m × 100 m were laid out in a regular grid (Figure 4a). Each 100 m × 100 m sample plot was further subdivided into 100 small quadrats of 10 m × 10 m (Figure 4b). A complete survey of the study area was carried out plot by plot. Taking each small quadrat (10 m × 10 m) as a unit, the crown width in different directions, plant height and diameter at breast height of each tree and the basal diameter of each shrub were measured manually. Species, as well as the numbers of trees, shrubs and herbs, were recorded. More importantly, in order to produce accurate training samples, validation samples and visual interpretation references in the follow-up work, we measured and recorded the locations of the species of interest during the investigation.
In August and September 2019, ground measurements were collected using the same methods as in 2006.

2.3. Methods

The whole process of using high-resolution RGB data and pixel-based machine learning classification methods to estimate vegetation coverage of major species includes five steps (Figure 5): (1) preparation of orthophoto mosaics; (2) acquisition of training and validation data; (3) training of models and image classification; (4) evaluation of model accuracy; and (5) calculation of area and FVC.

2.3.1. Image Processing

Aerial photographs were imported into Agisoft Photoscan for mosaicking and orthorectification. Photoscan is based on structure from motion and can generate mosaicked and orthorectified images [34]. The general processing steps in Photoscan include camera alignment, building a point cloud, building a 3D mesh, building textures, building a digital elevation model and generating the orthomosaic images. After the images are mosaicked, the precise coordinates of the ground control points are used to perform geometric correction on the generated images. A digital orthophoto map (DOM) of the study area was produced with these geometric corrections, satisfying the requirements of the subsequent research.

2.3.2. Acquiring Training and Validation Data

We combined the location information on trees, shrubs and herbaceous plants from the ground surveys with the DOM in ArcGIS and delineated representative regions of interest for Elaeagnus angustifolia L., Artemisia desertorum Spreng. Syst. Veg., Haloxylon ammodendron (C. A. Mey.) Bunge, grass and sand through visual interpretation. The representative data sets of the different categories are shown in Figure 6. The pixel values for each category were then extracted in Python (PyCharm), and the tagged data set was randomly divided into training and validation sets in a 9:1 ratio. The pixel numbers for each category are shown in Table 1. In addition, during the ground survey, we found that crops (14.31% of the area) were present in the study area in 2019 but not in 2006. We delineated the extent of the crops using control points and did not treat them as objects for machine classification [35,36]. The control points at fixed locations were measured with a Zenith15Pro, so the crops were mapped manually, which reduced the complexity of the vegetation classification.
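To make the sampling step concrete, the following is a minimal sketch of how labeled pixel samples could be extracted from the DOM using the digitized regions of interest and split 9:1 into training and validation sets. The file names, the rasterio/geopandas workflow and the 'label' attribute are illustrative assumptions rather than the exact scripts used in this study.

```python
import numpy as np
import geopandas as gpd
import rasterio
from rasterio.mask import mask
from sklearn.model_selection import train_test_split

DOM_PATH = "dom_2019.tif"    # assumed file name of the orthomosaic
ROI_PATH = "rois_2019.shp"   # assumed file name; a 'label' field holds the class code

rois = gpd.read_file(ROI_PATH)
X_parts, y_parts = [], []
with rasterio.open(DOM_PATH) as src:
    for _, roi in rois.iterrows():
        # Clip the raster to the ROI polygon; pixels outside the polygon stay masked
        img, _ = mask(src, [roi.geometry], crop=True, filled=False)
        arr = img.reshape(img.shape[0], -1).T          # (n_pixels, n_bands)
        valid = ~np.ma.getmaskarray(arr).any(axis=1)   # drop masked (outside) pixels
        X_parts.append(np.asarray(arr[valid, :3]))     # keep the three RGB bands
        y_parts.append(np.full(int(valid.sum()), roi["label"]))

X = np.vstack(X_parts)
y = np.concatenate(y_parts)

# Split the tagged pixels into training and validation sets in a 9:1 ratio
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.1, stratify=y, random_state=42)
```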

2.3.3. Training the Models and Classifying Images

The machine learning algorithms used in this study include random forest (RF) [28,37], k-nearest neighbor (KNN) [37,38] and gradient boost decision tree (GBDT) [39], which are available in the scikit-learn ('sklearn') package for Python. During model training, cross validation was carried out on the training set, and the accuracies obtained were above 0.60, which helped guard against overfitting.
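As an illustration of this step, the short sketch below trains the three classifiers on the RGB pixel samples and reports five-fold cross-validation accuracy on the training set. The hyperparameters are scikit-learn defaults or placeholders, since the exact settings used in the study are not reported.

```python
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Placeholder hyperparameters; the paper does not report the settings actually used
models = {
    "GBDT": GradientBoostingClassifier(n_estimators=100),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "RF": RandomForestClassifier(n_estimators=100, n_jobs=-1),
}

for name, model in models.items():
    # Five-fold cross validation on the training pixels (X_train, y_train from the split above)
    scores = cross_val_score(model, X_train, y_train, cv=5)
    print(f"{name}: mean cross-validation accuracy = {scores.mean():.3f}")
    model.fit(X_train, y_train)   # refit on the full training set for later prediction
```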
Below is a brief description of each algorithm chosen for our study and the reasons for choosing them.
RF is a classification algorithm that generates a large number of decision trees; in each tree, features are selected on a random basis [40]. Each decision tree acts as a base classifier, and the final classification result is obtained by synthesizing the decisions of all trees using the majority voting method. The result can be expressed as:
$$H(x) = \arg\max_{Y} \sum_{i=1}^{k} I\big(h_i(x) = Y\big) \quad (1)$$

where $H(x)$ is the final classification result of RF, $h_i(x)$ is the classification result of the $i$-th classification and regression tree (CART), $Y$ is the output variable, $k$ is the number of trees and $I(\cdot)$ is the indicator function.
The KNN algorithm finds the k objects in a provided training data set that are closest to the test point; the class is assigned by a majority vote among these k objects, with closeness measured by the ordinary Euclidean metric [41]. For each test instance, the traditional KNN algorithm calculates the distances or similarities to all labeled instances in the training set. The k nearest instances are then selected, and the class that occurs most frequently among them is taken as the result. Normally, the Euclidean distance is used as the distance measure in KNN and is expressed as:
$$d^2(x_i, x_j) = (x_i - x_j)^2 = (x_i - x_j)^{T} I\,(x_i - x_j) \quad (2)$$

where $I = \mathrm{diag}(1, \ldots, 1)$ is the identity matrix.
The GBDT algorithm generates predictive models in the form of weak learners and iteratively combines the weak learners into a stronger learner. Each decision tree is iteratively tuned to reduce the residuals of the preceding tree in the direction of the gradient [42]. Specifically, in order to minimize the loss (or error) between the output and the true values, a CART model is fitted to the residuals of the preceding model. The final predictive model is obtained by summing the outputs of all models from the previous iterations. The model can be represented by:
$$F_M(x) = \sum_{m=1}^{M} T(x, \Phi_m) \quad (3)$$

where $T(x, \Phi_m)$ is a decision tree, $\Phi_m$ represents the parameters of the next decision tree and $M$ is the number of decision trees. The loss function of the decision tree in GBDT can be represented by the square error function. The parameters of the next decision tree are determined by minimizing the loss function, which can be expressed as:

$$\hat{\Phi}_m = \arg\min_{\Phi_m} \sum_{i=1}^{N} L\left[y_i,\; F_{m-1}(x_i) + T(x_i, \Phi_m)\right] \quad (4)$$

where $F_{m-1}(x_i)$ is the current model and $L(\cdot)$ is the loss function of the decision tree.
Based on the training data set obtained in the previous step, the RGB values of the pixels are used as features to train the RF, KNN and GBDT models. The trained models are then applied to the DOM images of the whole study area, which is divided into five categories: Elaeagnus angustifolia L., Artemisia desertorum Spreng. Syst. Veg., Haloxylon ammodendron (C. A. Mey.) Bunge, grass and sand. A classification image is obtained for each algorithm.
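A minimal sketch of this per-pixel prediction step is given below. It assumes a fitted scikit-learn classifier (e.g. the RF model trained above), a 3-band DOM readable with rasterio and integer-coded class labels; the window-by-window processing is only an implementation assumption to keep memory use manageable for the 0.02 m imagery.

```python
import rasterio

def classify_dom(model, dom_path="dom_2019.tif", out_path="classified_2019.tif"):
    """Apply a fitted pixel classifier to every pixel of an RGB orthomosaic."""
    with rasterio.open(dom_path) as src:
        profile = src.profile.copy()
        profile.update(count=1, dtype="uint8")
        with rasterio.open(out_path, "w", **profile) as dst:
            # Process the image window by window instead of loading it all at once
            for _, window in src.block_windows(1):
                rgb = src.read([1, 2, 3], window=window)        # (3, h, w)
                flat = rgb.reshape(3, -1).T                     # (n_pixels, 3)
                labels = model.predict(flat).astype("uint8")    # assumes integer class codes
                dst.write(labels.reshape(1, *rgb.shape[1:]), window=window)

classify_dom(models["RF"])
```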

2.3.4. Accuracy Assessment of the Algorithms

The accuracy of the algorithms was assessed based on the confusion matrix. Each row of the confusion matrix represents a predicted category, and the row total corresponds to the number of data points predicted to belong to that category. Each column represents the actual category to which the data belong, and the column total represents the number of data instances in that class.
Four commonly used evaluation measures—overall accuracy (OA), Kappa coefficient, producer accuracy (PA) and user accuracy (UA)—were used to evaluate the accuracy of the algorithms employed for vegetation classification.
Overall accuracy (OA) represents the accuracy of all categories and can be calculated with the following formula:
$$OA = \frac{\sum_{i=1}^{n} CP_i}{T} \times 100\% \quad (5)$$

where $n$ is the total number of categories, $CP_i$ is the number of sample pixels correctly predicted in class $i$ and $T$ is the total number of pixels in the validation samples.
Kappa coefficient is a measure of classification accuracy and can be calculated with the following formula:
$$Kappa = \frac{N\sum_{i=1}^{r} x_{ii} - \sum_{i=1}^{r} x_{i+} \times x_{+i}}{N^2 - \sum_{i=1}^{r} x_{i+} \times x_{+i}} \quad (6)$$
Producer accuracy (PA) can be calculated with the following formula:
$$PA = \frac{x_{ii}}{x_{+i}} \quad (7)$$
User accuracy (UA) can be calculated with the following formula:
$$UA = \frac{x_{ii}}{x_{i+}} \quad (8)$$

where $r$ is the number of rows in the confusion matrix, $x_{ii}$ is the number of pixels along the diagonal, $x_{i+}$ is the total number of pixels in row $i$, $x_{+i}$ is the total number of pixels in column $i$ and $N$ is the total number of pixels.
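For reference, these four measures can be obtained from the validation predictions in a few lines; the sketch below assumes the fitted models and the held-out pixels (X_val, y_val) from the earlier steps and transposes the scikit-learn confusion matrix so that rows correspond to predicted classes, as in the text.

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

y_pred = models["RF"].predict(X_val)

cm = confusion_matrix(y_val, y_pred).T          # rows = predicted class, columns = actual class
oa = accuracy_score(y_val, y_pred)              # overall accuracy, Equation (5)
kappa = cohen_kappa_score(y_val, y_pred)        # Kappa coefficient, Equation (6)
pa = np.diag(cm) / cm.sum(axis=0)               # producer accuracy per class, Equation (7)
ua = np.diag(cm) / cm.sum(axis=1)               # user accuracy per class, Equation (8)
print(f"OA = {oa:.4f}, Kappa = {kappa:.4f}")
```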

2.3.5. Estimating Area and FVC in Different Species

After determining the accuracy of the algorithms, the best classification result for each of the two periods was used to map the complete study area. The classification accuracy for the study area was evaluated using a stratified random sampling design [43], whereby the random points are distributed according to the proportion of the area occupied by each category in the classification results. For rare categories, it is necessary to ensure that the number of randomly assigned points is greater than their proportional share of the area [44]. The total number of sample units in the verification process is calculated as follows:
$$n = \frac{\left(\sum_i W_i S_i\right)^2}{S(\hat{O})^2 + \frac{1}{N}\sum_i W_i S_i^2} \approx \left(\frac{\sum_i W_i S_i}{S(\hat{O})}\right)^2 \quad (9)$$
where $N$ is the number of units in the region of interest and $S(\hat{O})$ is the desired standard error of the estimated overall accuracy; here, we adopted a slightly higher value than that recommended in [43], i.e., 0.015. $W_i$ is the mapped proportion of the area of class $i$, and $S_i$ is the standard deviation of class $i$, $S_i = \sqrt{U_i(1 - U_i)}$, where $U_i$ is the a priori expected user accuracy of class $i$. In this study, we used the user accuracy values obtained from the validation data set (Table 2).
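To illustrate Equation (9), the sketch below computes the total sample size and a simple proportional allocation from the 2019 mapped proportions and expected user accuracies in Table 3; the minimum of 25 points per rare class is an assumption made only for this example, and the allocation actually used is the one given in Table 3.

```python
import numpy as np

# 2019 mapped proportions W_i and expected user accuracies U_i for A, H, E, G, S (Table 3)
W = np.array([0.0837, 0.0418, 0.1059, 0.2785, 0.3471])
U = np.array([0.8205, 0.6131, 0.9313, 0.7758, 0.8990])
S = np.sqrt(U * (1 - U))     # per-class standard deviations S_i
se_target = 0.015            # desired standard error of the estimated overall accuracy

# Equation (9), using the simplified form n ~= (sum(W_i * S_i) / S(O))^2
n = (np.sum(W * S) / se_target) ** 2
print(round(n))              # -> 400, the total number of validation points for 2019

# Proportional allocation with an assumed floor of 25 points for rare classes
alloc = np.maximum(np.round(n * W / W.sum()).astype(int), 25)
print(alloc)
```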
The Openforis accuracy assessment tool was used to estimate the unbiased area and its 95% confidence interval. These estimates are based on the confusion matrix generated by the accuracy assessment process and the proportion of the area occupied by each category according to Equations (10) and (11).
$$S(\hat{p}_{\cdot k}) = \sqrt{\sum_i W_i^2\,\frac{\frac{n_{ik}}{n_{i\cdot}}\left(1 - \frac{n_{ik}}{n_{i\cdot}}\right)}{n_{i\cdot} - 1}} = \sqrt{\sum_i \frac{W_i\,\hat{p}_{ik} - \hat{p}_{ik}^2}{n_{i\cdot} - 1}} \quad (10)$$

$$95\%\ \mathrm{CI}\big(\hat{A}_k\big) = A \sum_i W_i \frac{n_{ik}}{n_{i\cdot}} \pm 1.96\,A\,S(\hat{p}_{\cdot k}) \quad (11)$$
where $S(\hat{p}_{\cdot k})$ is the standard error of the estimated area proportion of class $k$, $W_i$ is the area proportion of map class $i$, $n_{ik}$ is the sample count in cell $(i, k)$ of the error matrix, $n_{i\cdot}$ is the row total for map class $i$ and $\hat{p}_{ik} = W_i\, n_{ik}/n_{i\cdot}$. Furthermore, 95% CI is the 95% confidence interval, $\hat{A}_k$ is the estimated area of class $k$ and $A$ is the total map area.
Finally, the FVC of each species was estimated from its estimated area, and the change in FVC between the two periods was also obtained.
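A compact sketch of Equations (10) and (11) and the conversion to FVC is given below, using the 2006 confusion matrix from Table 4 and the mapped proportions from Table 3 (total map area A = 1,000,000 m² for the 1 km × 1 km site); the printed values agree with Tables 6 and 7 to within the rounding of the inputs.

```python
import numpy as np

A_TOTAL = 1_000_000.0    # total map area in m^2 (1 km x 1 km)

# 2006 error matrix (rows = map class, columns = reference class): A, H, E, G, S (Table 4)
cm = np.array([
    [85,  4,  3,   8,   1],
    [ 0, 15,  1,   1,   0],
    [ 3, 12, 62,   4,   0],
    [ 8,  7,  3, 101,   2],
    [ 4, 12,  1,   6, 451],
], dtype=float)
W = np.array([0.1295, 0.0313, 0.0916, 0.1565, 0.5911])   # mapped proportions W_i (Table 3)

n_i = cm.sum(axis=1, keepdims=True)     # row totals n_i.
p = W[:, None] * cm / n_i               # p_hat_ik = W_i * n_ik / n_i.
area_prop = p.sum(axis=0)               # estimated area proportion of each class k
se = np.sqrt(((W[:, None] * p - p ** 2) / (n_i - 1)).sum(axis=0))   # Equation (10)

area = area_prop * A_TOTAL              # unbiased area estimate of each class
ci95 = 1.96 * se * A_TOTAL              # half-width of the 95% CI, Equation (11)
fvc = 100 * area_prop                   # FVC (%), meaningful for the vegetation classes
for name, a, c, f in zip("AHEGS", area, ci95, fvc):
    print(f"{name}: area = {a:,.0f} +/- {c:,.0f} m2, FVC = {f:.2f}%")
```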

3. Results

3.1. Vegetation Classification

The digital orthophoto map of the study area in 2006, obtained by the ultralight aircraft low-altitude digital remote sensing system (LARS-1), and the classification results obtained with the investigated machine learning algorithms are shown in Figure 7. The image data for 2019, obtained by the UAV, and the corresponding classification results are shown in Figure 8.
Although the resolution differed between the two images (from 2006 and 2019), Figure 7a and Figure 8a show the distribution of vegetation in the two years. The classification results (Figure 7b–d and Figure 8b–d) show the performance of the investigated algorithms in classifying the main land cover types in the two data sets.

3.2. Algorithm Comparison

Figure 9 shows the confusion matrices obtained using the three investigated algorithms for classification of the two types of data. According to the confusion matrices, the producer accuracy (PA), user accuracy (UA), overall accuracy (OA) and Kappa coefficient of the algorithms can be calculated using Equations (5)–(8) (Table 2). These indices are used to evaluate the classification results of the machine learning algorithms.
The overall accuracy and Kappa coefficient of the random forest (RF) algorithm for the 2006 image were 0.9478 and 0.8880, respectively (Table 2); compared with the k-nearest neighbor (KNN) and gradient boost decision tree (GBDT) algorithms, RF obtained the best classification results. For the 2019 image, however, KNN achieved slightly better classification results than RF and GBDT, with an OA and Kappa coefficient of 0.8627 and 0.8073, respectively. The user and producer accuracies also show that most pixels were correctly classified, with the exception of Haloxylon ammodendron (C. A. Mey.) Bunge, whose producer accuracy was low in both the 2006 and 2019 results (0.3058 and 0.2916, respectively). In addition, when classifying the 2006 image with a resolution of 0.1 m, the overall accuracies of the GBDT, KNN and RF algorithms were 0.9140, 0.9190 and 0.9478, respectively. When classifying the 2019 image with a resolution of 0.02 m, the overall accuracies of the GBDT, KNN and RF algorithms were 0.8466, 0.8627 and 0.8569, respectively, all of which were lower than those obtained with the lower-resolution (0.1 m) image.

3.3. Estimating FVC and Accuracy Assessment

Table 3 shows the random points assigned to each category of the classification results of the two periods. The total sample sizes for 2006 and 2019 were 794 and 400, respectively, calculated according to Equation (9). For the rarest classes (classes covering less than 5% of the map), a fixed number of points is assigned directly. For the most abundant classes, validation points are allocated from the remaining points in proportion to their area. The validation of each point was performed by a combination of visual interpretation and the ground survey results.
The accuracy assessment of the complete study area classification showed an overall accuracy of 0.90 in 2006 (Table 4) and 0.85 in 2019 (Table 5). With the exception of Haloxylon ammodendron (C. A. Mey.) Bunge, the user accuracy and producer accuracy of other categories are relatively high.
Table 6 shows the area proportions of different categories, unbiased estimates of area and 95% confidence intervals for the entire study area.
Table 7 shows the FVC of the major species, estimated from the unbiased area and its 95% CI as a proportion of the study area.

4. Discussion

In previous studies, FVC estimation methods based on remote sensing data were mainly applied at the global or large scale [45]. In order to explore the FVC of each species in a small area, in this study, we evaluated the ability of high-resolution RGB data to estimate FVC in desert regions, mainly by combining the ground-surveyed location information of each species with the images to produce more accurate training and validation samples and by applying a pixel-based machine learning method to the two phases of RGB data. According to the satisfactory validation results obtained in terms of overall accuracy, the high-resolution RGB data achieved reliable performance in FVC estimation, even at the species level, in line with results reported in previous studies. Zhou et al. used the RF method to classify desert vegetation based on RGB data [33] and proposed a method of mapping rare desert vegetation based on pixel classifiers and an RF algorithm with VDVI and texture features, which obtained high accuracy. Olariu et al. found that RGB drone sensors were indeed capable of providing highly accurate classifications of woody plant species in semiarid landscapes [46]. Furthermore, we found that good accuracy was not obtained for all species in this study, which indicates that some species may not be well distinguished in RGB data. For example, the accuracy of H (Haloxylon ammodendron (C. A. Mey.) Bunge) was the lowest, indicating that it was misclassified more often than other species, possibly due to a lack of samples of this species or because its RGB characteristics do not differ significantly from those of other types of vegetation, resulting in a poor classification effect for vegetation H. Oddi et al. combined high-resolution RGB images provided by UAVs with pixel-based classification workflows to map vegetation in recently eroded subalpine grasslands at the level of life forms (i.e., trees, shrubs and herbaceous species) [47].
Geng et al. used a boosted regression tree (BRT) model that effectively enhanced the vegetation information in RGB images and improved the accuracy of FVC estimation [48]. Wang et al. proposed a new method to classify woody and herbaceous vegetation and calculated FVC values from high-resolution RGB images using a classification and regression tree (CART) machine learning algorithm [49]. Guo et al. obtained FVC and biomass information using an object-based classification method, a single-shrub canopy biomass model and a vegetation index-based method [24]. These studies provide information that can be used in future work to further improve the estimation of vegetation coverage using RGB data at the desert species level. Measures such as expanding the color space, extracting additional vegetation indices, combining multiple models and optimizing algorithms play a considerable role in applying RGB data to the estimation of vegetation coverage in desert regions. Furthermore, it is worth exploring additional approaches. Egli et al. presented a novel CNN-based tree species classification approach using low-cost UAV RGB image data, achieving 92% accuracy on spatially and temporally independent validation data [50]. Zhou et al. evaluated the feasibility of using UAV-based RGB imagery for wetland vegetation classification at the species level and employed OBIA technology to overcome the limitations of traditional pixel-based classification methods [51]. These methods may also improve accuracy when applied to desert vegetation areas, although further studies are required to verify this hypothesis.
Our study showed that the main desert vegetation in the study area changed slightly between 2006 and 2019, which indicates that the desert environment may have had an impact on the study area in the past decade. In addition, crops that were not present in 2006 appeared in the study area in 2019, which indicates that the study area was disturbed by human factors. In fact, China has been working on desertification prevention and control, attempting to improve desert areas through various measures [52,53]. With respect to estimation of FVC and desertification control in desert areas, improvements can be made in the following aspects.
Although RGB data from UAVs achieved notable performance in vegetation classification and coverage extraction studies, the present study is subject to some limitations. Compared with other image types (e.g., multispectral and hyperspectral), RGB data lack band information [54,55]. The near-infrared spectrum is an important band with respect to the study of vegetation [56,57,58,59]. Cameras can be modified to obtain images with information encompassing the near-infrared band, which could be used to improve the classification accuracy of desert vegetation and quantify vegetation coverage more accurately.
In addition, the optimal spatial resolution, which depends on the research object, must also be taken into account. Instead of always pursuing the highest-resolution images, the most suitable spatial resolution should be selected. Therefore, it is of considerable significance to explore the optimal spatial resolution for different vegetation features and determine the most reasonable flight altitudes for UAVs.
We only obtained image data for a specific period in the study area; therefore, our results only reflect the status of desert vegetation during that period, and we failed to take full advantage of the UAV. For example, UAVs could be used to acquire time-series data. In future work, UAVs should be used to photograph the growth status of desert regions at various time periods in a given year [36].

5. Conclusions

In this study, we used high-resolution RGB data and pixel-based machine learning classification methods to classify and estimate vegetation coverage of major species in the study area. Our main conclusions are as follows: (1) The high-resolution RGB image data from UAVs had promising effects on the classification process, resulting in reliable performance in terms of FVC estimation, even at the species level. Therefore, this method should be considered a reliable technique. Moreover, using a UAV to carry an RGB camera is a convenient and fast method to obtain image data with different spatial resolutions at different heights. (2) The FVC in the study area changed significantly between 2006 and 2019, with an increase in grass coverage from 15.47 ± 1.49% to 27.90 ± 2.79%. The magnitude of this change may also contribute to the environmental improvement of desert systems. (3) Although images with high spatial resolution provide more detailed information, they are associated with considerable challenges in the subsequent data processing steps. In order to ensure the accuracy of vegetation coverage estimation, determination of the optimal spatial resolution for different vegetation features should be the focus of future research.

Author Contributions

L.X., X.Z., X.M. and L.F. collected the data; X.M. and X.Z. analyzed the data; L.X., X.M., H.S., L.F. and R.P.S. wrote the manuscript, contributed critically to improve the manuscript and gave final approval for publication. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Central Public-Interest Scientific Institution Basal Research Fund (Grant No. CAFYBB2019QD003).

Data Availability Statement

Data used in this study are available from the Chinese Forestry Science Data Center (http://www.cfsdc.org, accessed on 6 January 2021).

Acknowledgments

We thank the Central Public-Interest Scientific Institution Basal Research Fund (Grant No. CAFYBB2019QD003) and the National Natural Science Foundation of China (Grant Nos. 31470641, 31300534 and 31570628) for financial support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Henderson-Sellers, A.; McGuffie, K.; Pitman, A.J. The Project for Intercomparison of Land-surface Parametrization Schemes (PILPS): 1992 to 1995. Clim. Dyn. 1996, 12, 849–859. [Google Scholar] [CrossRef]
  2. Purevdorj, T.S.; Tateishi, R.; Ishiyama, T.; Honda, Y. Relationships between percent vegetation cover and vegetation indices. Int. J. Remote Sens. 1998, 19, 3519–3535. [Google Scholar] [CrossRef]
  3. Dymond, J.R.; Stephens, P.R.; Newsome, P.F.; Wilde, R.H. Percentage vegetation cover of a degrading rangeland from SPOT. Int. J. Remote Sens. 1992, 13, 1999–2007. [Google Scholar] [CrossRef]
  4. Sun, Q.; Zhang, P.; Wei, H.; Liu, A.; You, S.; Sun, D. Improved mapping and understanding of desert vegetation-habitat complexes from intraannual series of spectral endmember space using cross-wavelet transform and logistic regression. Remote Sens. Environ. 2020, 236, 111516. [Google Scholar] [CrossRef]
  5. Cohen, W.B.; Maiersperger, T.K.; Gower, S.T.; Turner, D.P. An improved strategy for regression of biophysical variables and Landsat ETM+ data. Remote Sens. Environ. 2003, 84, 561–571. [Google Scholar] [CrossRef] [Green Version]
  6. Yang, L.; Jia, K.; Liang, S.; Liu, J.; Wang, X. Comparison of Four Machine Learning Methods for Generating the GLASS Fractional Vegetation Cover Product from MODIS Data. Remote Sens. 2016, 8, 682. [Google Scholar] [CrossRef] [Green Version]
  7. Wang, Y.; Li, M. Annually Urban Fractional Vegetation Cover Dynamic Mapping in Hefei, China (1999–2018). Remote Sens. 2021, 13, 2126. [Google Scholar] [CrossRef]
  8. Yang, L.; Jia, K.; Liang, S.; Wei, X.; Yao, Y.; Zhang, X. A Robust Algorithm for Estimating Surface Fractional Vegetation Cover from Landsat Data. Remote Sens. 2017, 9, 857. [Google Scholar] [CrossRef] [Green Version]
  9. Liu, D.; Jia, K.; Jiang, H.; Xia, M.; Tao, G.; Wang, B.; Chen, Z.; Yuan, B.; Li, J. Fractional Vegetation Cover Estimation Algorithm for FY-3B Reflectance Data Based on Random Forest Regression Method. Remote Sens. 2021, 13, 2165. [Google Scholar] [CrossRef]
  10. Yang, J.; Weisberg, P.J.; Bristow, N.A. Landsat remote sensing approaches for monitoring long-term tree cover dynamics in semi-arid woodlands: Comparison of vegetation indices and spectral mixture analysis. Remote Sens. Environ. 2012, 119, 62–71. [Google Scholar] [CrossRef]
  11. Loarie, S.R.; Joppa, L.N.; Pimm, S.L. Satellites miss environmental priorities. Trends Ecol. Evol. 2007, 22, 630–632. [Google Scholar] [CrossRef] [PubMed]
  12. Herwitz, S.; Johnson, L.; Dunagan, S.; Higgins, R.; Sullivan, D.; Zheng, J.; Lobitz, B.; Leung, J.; Gallmeyer, B.; Aoyagi, M.; et al. Imaging from an unmanned aerial vehicle: Agricultural surveillance and decision support. Comput. Electron. Agric. 2004, 44, 49–61. [Google Scholar] [CrossRef]
  13. Watts, A.C.; Ambrosia, V.G.; Hinkley, E.A. Unmanned Aircraft Systems in Remote Sensing and Scientific Research: Classification and Considerations of Use. Remote Sens. 2012, 4, 1671. [Google Scholar] [CrossRef] [Green Version]
  14. Anderson, K.; Gaston, K.J. Lightweight unmanned aerial vehicles will revolutionize spatial ecology. Front. Ecol. Environ. 2013, 11, 138–146. [Google Scholar] [CrossRef] [Green Version]
  15. Gobbi, B.; Van Rompaey, A.; Loto, D.; Gasparri, I.; Vanacker, V. Comparing Forest Structural Attributes Derived from UAV-Based Point Clouds with Conventional Forest Inventories in the Dry Chaco. Remote Sens. 2020, 12, 4005. [Google Scholar] [CrossRef]
  16. Zhang, J.; Hu, J.; Lian, J.; Fan, Z.; Ouyang, X.; Ye, W. Seeing the forest from drones: Testing the potential of lightweight drones as a tool for long-term forest monitoring. Biol. Conserv. 2016, 198, 60–69. [Google Scholar] [CrossRef]
  17. McNeil, B.E.; Pisek, J.; Lepisk, H.; Flamenco, E.A. Measuring leaf angle distribution in broadleaf canopies using UAVs. Agric. For. Meteorol. 2016, 218–219, 204–208. [Google Scholar] [CrossRef]
  18. Getzin, S.; Wiegand, K.; Schöning, I. Assessing biodiversity in forests using very high-resolution images and unmanned aerial vehicles. Methods Ecol. Evol. 2011, 3, 397–404. [Google Scholar] [CrossRef]
  19. Faye, E.; Rebaudo, F.; Yánez-Cajo, D.; Cauvy-Fraunié, S.; Dangles, O. A toolbox for studying thermal heterogeneity across spatial scales: From unmanned aerial vehicle imagery to landscape metrics. Methods Ecol. Evol. 2015, 7, 437–446. [Google Scholar] [CrossRef] [Green Version]
  20. Nebiker, S.; Annen, A.; Scherrer, M.; Oesch, D. Light-Weight Multispectral Sensor for Micro UAV—Opportunities for Very High Resolution Airborne Remote Sensing. Int. Arch. Photogram. Rem. Sens. Spatial Inf. Sci. 2008, 37, 1193–2000. [Google Scholar]
  21. Córcoles, J.I.; Ortega, J.F.; Hernández, D.; Moreno, M.A. Estimation of leaf area index in onion (Allium cepa L.) using an unmanned aerial vehicle. Biosyst. Eng. 2013, 115, 31–42. [Google Scholar] [CrossRef]
  22. Lucieer, A.; Malenovský, Z.; Veness, T.; Wallace, L. HyperUAS-Imaging Spectroscopy from a Multirotor Unmanned Aircraft System. J. Field Robot. 2014, 31, 571–590. [Google Scholar] [CrossRef] [Green Version]
  23. Wang, B.; Jia, K.; Liang, S.; Xie, X.; Wei, X.; Zhao, X.; Yao, Y.; Zhang, X. Assessment of Sentinel-2 MSI Spectral Band Reflectances for Estimating Fractional Vegetation Cover. Remote Sens. 2018, 10, 1927. [Google Scholar] [CrossRef] [Green Version]
  24. Guo, Z.-C.; Wang, T.; Liu, S.-L.; Kang, W.-P.; Chen, X.; Feng, K.; Zhang, X.-Q.; Zhi, Y. Biomass and vegetation coverage survey in the Mu Us sandy land—Based on unmanned aerial vehicle RGB images. Int. J. Appl. Earth Obs. Geoinf. ITC J. 2020, 94, 102239. [Google Scholar] [CrossRef]
  25. Li, L.; Yan, G.; Mu, X.; Suhong, L.; Chen, Y.; Yan, K.; Luo, J.; Song, W. Estimation of fractional vegetation cover using mean-based spectral unmixing method. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 3178–3180. [Google Scholar] [CrossRef]
  26. Baret, F.; Weiss, M.; Lacaze, R.; Camacho, F.; Makhmara, H.; Pacholcyzk, P.; Smets, B. GEOV1: LAI and FAPAR essential climate variables and FCOVER global time series capitalizing over existing products. Part1: Principles of development and production. Remote Sens. Environ. 2013, 137, 299–309. [Google Scholar] [CrossRef]
  27. Jia, K.; Liang, S.; Liu, S.; Li, Y.; Xiao, Z.; Yao, Y.; Jiang, B.; Zhao, X.; Wang, X.; Xu, S.; et al. Global Land Surface Fractional Vegetation Cover Estimation Using General Regression Neural Networks From MODIS Surface Reflectance. IEEE Trans. Geosci. Remote Sens. 2015, 53, 4787–4796. [Google Scholar] [CrossRef]
  28. Ghosh, S.M.; Behera, M.D.; Paramanik, S. Canopy Height Estimation Using Sentinel Series Images through Machine Learning Models in a Mangrove Forest. Remote Sens. 2020, 12, 1519. [Google Scholar] [CrossRef]
  29. Yang, J. Image Pattern Recognition. In Encyclopedia of Biometrics; Li, S.Z., Jain, A., Eds.; Springer: Boston, MA, USA, 2009; pp. 726–729. [Google Scholar]
  30. Guo, W.; Fukatsu, T.; Ninomiya, S. Automated characterization of flowering dynamics in rice using field-acquired time-series RGB images. Plant Methods 2015, 11, 7. [Google Scholar] [CrossRef] [Green Version]
  31. Duan, T.; Chapman, S.; Holland, E.; Rebetzke, G.; Guo, Y.; Zheng, B. Dynamic quantification of canopy structure to characterize early plant vigour in wheat genotypes. J. Exp. Bot. 2016, 67, 4523–4534. [Google Scholar] [CrossRef]
  32. Kamal, M.; Schulthess, U.; Krupnik, T. Identification of Mung Bean in a Smallholder Farming Setting of Coastal South Asia Using Manned Aircraft Photography and Sentinel-2 Images. Remote Sens. 2020, 12, 3688. [Google Scholar] [CrossRef]
  33. Zhou, H.; Fu, L.; Sharma, R.; Lei, Y.; Guo, J. A Hybrid Approach of Combining Random Forest with Texture Analysis and VDVI for Desert Vegetation Mapping Based on UAV RGB Data. Remote Sens. 2021, 13, 1891. [Google Scholar] [CrossRef]
  34. Torres-Sánchez, J.; Peña-Barragán, J.M.; Gómez-Candón, D.; De Castro, A.I.; López-Granados, F. Imagery from unmanned aerial vehicles for early site specific weed management. In Precision Agriculture’13; Wageningen Academic Publishers: Wageningen, The Netherlands, 2013. [Google Scholar]
  35. John, R.; Chen, J.; Giannico, V.; Park, H.; Xiao, J.; Shirkey, G.; Ouyang, Z.; Shao, C.; Lafortezza, R.; Qi, J. Grassland canopy cover and aboveground biomass in Mongolia and Inner Mongolia: Spatiotemporal estimates and controlling factors. Remote Sens. Environ. 2018, 213, 34–48. [Google Scholar] [CrossRef]
  36. Lu, B.; He, Y. Species classification using Unmanned Aerial Vehicle (UAV)-acquired high spatial resolution imagery in a hetero-geneous grassland. ISPRS J. Photogramm. Remote Sens. 2017, 128, 73–85. [Google Scholar] [CrossRef]
  37. Adnan, M.; Rahman, S.; Ahmed, N.; Ahmed, B.; Rabbi, F.; Rahman, R. Improving Spatial Agreement in Machine Learning-Based Landslide Susceptibility Mapping. Remote Sens. 2020, 12, 3347. [Google Scholar] [CrossRef]
  38. Cariou, C.; Le Moan, S.; Chehdi, K. Improving K-Nearest Neighbor Approaches for Density-Based Pixel Clustering in Hyperspectral Remote Sensing Images. Remote Sens. 2020, 12, 3745. [Google Scholar] [CrossRef]
  39. Friedman, J.H. Greedy function approximation: A gradient boosting machine. Ann. Stat. 2001, 29, 1189–1232. [Google Scholar] [CrossRef]
  40. Fredensborg Hansen, R.M.; Rinne, E.; Skourup, H. Classification of Sea Ice Types in the Arctic by Radar Echoes from SARAL/AltiKa. Remote Sens. 2021, 13, 3183. [Google Scholar] [CrossRef]
  41. Shu, S.; Zhou, X.; Shen, X.; Liu, Z.; Tang, Q.; Li, H.; Ke, C.; Li, J. Discrimination of different sea ice types from CryoSat-2 satellite data using an Object-based Random Forest (ORF). Mar. Geod. 2019, 43, 213–233. [Google Scholar] [CrossRef]
  42. Liu, Z.; Guo, P.; Liu, H.; Fan, P.; Zeng, P.; Liu, X.; Feng, C.; Wang, W.; Yang, F. Gradient Boosting Estimation of the Leaf Area Index of Apple Orchards in UAV Remote Sensing. Remote Sens. 2021, 13, 3263. [Google Scholar] [CrossRef]
  43. Olofsson, P.; Foody, G.M.; Herold, M.; Stehman, S.V.; Woodcock, C.E.; Wulder, M.A. Good practices for estimating area and assessing accuracy of land change. Remote Sens. Environ. 2014, 148, 42–57. [Google Scholar] [CrossRef]
  44. Solórzano, J.V.; Mas, J.F.; Gao, Y.; Gallardo-Cruz, J.A. Land Use Land Cover Classification with U-Net: Advantages of Combining Sentinel-1 and Sentinel-2 Imagery. Remote Sens. 2021, 13, 3600. [Google Scholar] [CrossRef]
  45. Liu, D.; Yang, L.; Jia, K.; Liang, S.; Xiao, Z.; Wei, X.; Yao, Y.; Xia, M.; Li, Y. Global Fractional Vegetation Cover Estimation Algorithm for VIIRS Reflectance Data Based on Machine Learning Methods. Remote Sens. 2018, 10, 1648. [Google Scholar] [CrossRef] [Green Version]
  46. Olariu, H.G.; Malambo, L.; Popescu, S.C.; Virgil, C.; Wilcox, B.P. Woody Plant Encroachment: Evaluating Methodologies for Semiarid Woody Species Classification from Drone Images. Remote Sens. 2022, 14, 1665. [Google Scholar] [CrossRef]
  47. Oddi, L.; Cremonese, E.; Ascari, L.; Filippa, G.; Galvagno, M.; Serafino, D.; Cella, U. Using UAV Imagery to Detect and Map Woody Species Encroachment in a Subalpine Grassland: Advantages and Limits. Remote Sens. 2021, 13, 1239. [Google Scholar] [CrossRef]
  48. Geng, X.; Wang, X.; Fang, H.; Ye, J.; Han, L.; Gong, Y.; Cai, D. Vegetation coverage of desert ecosystems in the Qinghai-Tibet Plateau is underestimated. Ecol. Indic. 2022, 137, 108780. [Google Scholar] [CrossRef]
  49. Wang, H.; Han, D.; Mu, Y.; Jiang, L.; Yao, X.; Bai, Y.; Lu, Q.; Wang, F. Landscape-level vegetation classification and fractional woody and herbaceous vegetation cover estimation over the dryland ecosystems by unmanned aerial vehicle platform. Agric. For. Meteorol. 2019, 278, 107665. [Google Scholar] [CrossRef]
  50. Egli, S.; Höpke, M. CNN-Based Tree Species Classification Using High Resolution RGB Image Data from Automated UAV Observations. Remote Sens. 2020, 12, 3892. [Google Scholar] [CrossRef]
  51. Zhou, R.; Yang, C.; Li, E.; Cai, X.; Yang, J.; Xia, Y. Object-Based Wetland Vegetation Classification Using Multi-Feature Selection of Unoccupied Aerial Vehicle RGB Imagery. Remote Sens. 2021, 13, 4910. [Google Scholar] [CrossRef]
  52. Wang, Z.; Xu, D.; Peng, D.; Zhang, Y. Quantifying the influences of natural and human factors on the water footprint of afforestation in desert regions of northern China. Sci. Total Environ. 2021, 780, 146577. [Google Scholar] [CrossRef]
  53. Shao, Y.; Zhang, Y.; Wu, X.; Bourque, C.P.-A.; Zhang, J.; Qin, S.; Wu, B. Relating historical vegetation cover to aridity patterns in the greater desert region of northern China: Implications to planned and existing restoration projects. Ecol. Indic. 2018, 89, 528–537. [Google Scholar] [CrossRef]
  54. Gaughan, A.E.; Kolarik, N.E.; Stevens, F.R.; Pricope, N.G.; Cassidy, L.; Salerno, J.; Bailey, K.M.; Drake, M.; Woodward, K.; Hartter, J. Using Very-High-Resolution Multispectral Classification to Estimate Savanna Fractional Vegetation Components. Remote Sens. 2022, 14, 551. [Google Scholar] [CrossRef]
  55. Näsi, R.; Honkavaara, E.; Lyytikäinen-Saarenmaa, P.; Blomqvist, M.; Litkey, P.; Hakala, T.; Viljanen, N.; Kantola, T.; Tanhuanpää, T.; Holopainen, M. Using UAV-Based Photogrammetry and Hyperspectral Imaging for Mapping Bark Beetle Damage at Tree-Level. Remote Sens. 2015, 7, 15467–15493. [Google Scholar] [CrossRef] [Green Version]
  56. Kwan, C.; Gribben, D.; Ayhan, B.; Li, J.; Bernabe, S.; Plaza, A. An Accurate Vegetation and Non-Vegetation Differentiation Approach Based on Land Cover Classification. Remote Sens. 2020, 12, 3880. [Google Scholar] [CrossRef]
  57. Ahn, J.-H.; Park, Y.-J. Estimating Water Reflectance at Near-Infrared Wavelengths for Turbid Water Atmospheric Correction: A Preliminary Study for GOCI-II. Remote Sens. 2020, 12, 3791. [Google Scholar] [CrossRef]
  58. Fu, L.; Zhang, D.; Ye, Q. Recurrent Thrifty Attention Network for Remote Sensing Scene Recognition. IEEE T. Geosci. Remot. 2021, 59, 8257–8268. [Google Scholar] [CrossRef]
  59. Ye, Q.; Li, Z.; Fu, L.; Zhang, Z.; Yang, W.; Yang, G. Nonpeaked Discriminant Analysis for Data Representation. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3818–3832. [Google Scholar] [CrossRef]
Figure 1. The study area in the desert region of Ulan Buh in western Inner Mongolia. The size of the study area is 1 km × 1 km. (a) Aerial image of the study area in 2006. (b) Aerial image of the study area in 2019.
Figure 2. The main vegetation types in the study area: (a) Elaeagnus angustifolia L., (b) Artemisia desertorum Spreng. Syst. Veg. and (c) Haloxylon ammodendron (C. A. Mey.) Bunge.
Figure 3. Ground control points used in the study. The red circles show the control points. (a) 2006. (b) 2019.
Figure 4. Sample plot number setting. (a) The 1 km × 1 km study area was divided into 100 sample plots (100 m × 100 m). (b) Each 100 m × 100 m sample plot was divided into 100 small quadrats (10 m × 10 m).
Figure 5. Diagram showing the workflow of vegetation classification and FVC estimation using machine learning algorithms: RF (random forest); KNN (K-nearest neighbor); GBDT (gradient boost decision tree).
Figure 6. Representative regions of interest of different categories.
Figure 7. Orthomosaics derived from ultralight aircraft and different machine learning algorithms. (a) Digital orthophoto map (DOM). (b) Vegetation classification by gradient boost decision tree (GBDT) algorithm. (c) Vegetation classification by K-nearest neighbor (KNN) algorithm. (d) Vegetation classification by random forest (RF) algorithm.
Figure 8. Orthomosaics derived from the UAV and different machine learning classification algorithms. (a) Digital orthophoto map (DOM) of the super large sample plot, obtained from UAV aerial images; the DOM has a spatial resolution of 0.02 m/pixel. (b) Vegetation classification by gradient boost decision tree (GBDT) algorithm. (c) Vegetation classification by K-nearest neighbor (KNN) algorithm. (d) Vegetation classification by random forest (RF) algorithm.
Figure 9. Confusion matrices for algorithm classification results. (a1) Confusion matrix for the GBDT algorithm based on the 2006 image data. (b1) Confusion matrix for the KNN algorithm based on the 2006 image data. (c1) Confusion matrix for the RF algorithm based on the 2006 image data. (a2) Confusion matrix for the GBDT algorithm based on the 2019 image data. (b2) Confusion matrix for the KNN algorithm based on the 2019 image data. (c2) Confusion matrix for the RF algorithm based on the 2019 image data.
Table 1. Pixels for each category.

Category | 2006 Training | 2006 Validation | 2019 Training | 2019 Validation
Elaeagnus angustifolia L. | 18,725 | 2081 | 827,856 | 91,984
Artemisia desertorum Spreng. Syst. Veg. | 22,262 | 2473 | 179,817 | 19,980
Haloxylon ammodendron (C. A. Mey.) Bunge | 2184 | 242 | 201,730 | 22,414
Grass | 22,370 | 2486 | 804,230 | 89,359
Sand | 159,663 | 17,741 | 1,222,659 | 135,851
Table 2. The producer accuracy (PA), user accuracy (UA), overall accuracy (OA) and Kappa coefficient of each algorithm of vegetation classification results.

Year | Algorithm | Measure | Artemisia desertorum Spreng. Syst. Veg. | Haloxylon ammodendron (C. A. Mey.) Bunge | Elaeagnus angustifolia L. | Grass | Sand
2006 | GBDT | Producer accuracy | 0.7121 | 0.0289 | 0.8063 | 0.7237 | 0.9936
2006 | GBDT | User accuracy | 0.7657 | 0.5000 | 0.8703 | 0.7000 | 0.9679
2006 | GBDT | OA = 0.9140; Kappa = 0.8124
2006 | KNN | Producer accuracy | 0.7283 | 0.0165 | 0.8174 | 0.7490 | 0.9936
2006 | KNN | User accuracy | 0.7690 | 1.0000 | 0.8859 | 0.7123 | 0.9716
2006 | KNN | OA = 0.9190; Kappa = 0.8238
2006 | RF | Producer accuracy | 0.8467 | 0.3058 | 0.8885 | 0.8443 | 0.9921
2006 | RF | User accuracy | 0.8671 | 0.5827 | 0.8963 | 0.8376 | 0.9826
2006 | RF | OA = 0.9478; Kappa = 0.8880
2019 | GBDT | Producer accuracy | 0.7038 | 0.1806 | 0.9201 | 0.7757 | 0.9744
2019 | GBDT | User accuracy | 0.8284 | 0.6920 | 0.9203 | 0.7578 | 0.8634
2019 | GBDT | OA = 0.8466; Kappa = 0.7830
2019 | KNN | Producer accuracy | 0.7477 | 0.2916 | 0.9211 | 0.8109 | 0.9684
2019 | KNN | User accuracy | 0.8205 | 0.6131 | 0.9313 | 0.7758 | 0.8990
2019 | KNN | OA = 0.8627; Kappa = 0.8073
2019 | RF | Producer accuracy | 0.7247 | 0.2793 | 0.9146 | 0.8021 | 0.9686
2019 | RF | User accuracy | 0.7969 | 0.5611 | 0.9242 | 0.7723 | 0.8987
2019 | RF | OA = 0.8569; Kappa = 0.7992
Table 3. The random points assigned to each category of the classification results of the two periods. The information includes the mapped area proportions ($W_i$), conjectured values of user accuracies ($U_i$), standard deviations ($S_i$) of the strata and the allocation results (Alloc). A, Artemisia desertorum Spreng. Syst. Veg.; H, Haloxylon ammodendron (C. A. Mey.) Bunge; E, Elaeagnus angustifolia L.; G, grass; S, sand.

Class | 2006 $W_i$ | 2006 $U_i$ | 2006 $S_i$ | 2006 Alloc | 2019 $W_i$ | 2019 $U_i$ | 2019 $S_i$ | 2019 Alloc
A | 0.1295 | 0.8671 | 0.3395 | 100 | 0.0837 | 0.8205 | 0.3838 | 38
H | 0.0313 | 0.5827 | 0.4931 | 50 | 0.0418 | 0.6131 | 0.4870 | 25
E | 0.0916 | 0.8963 | 0.3049 | 70 | 0.1059 | 0.9313 | 0.2529 | 49
G | 0.1565 | 0.8376 | 0.3291 | 120 | 0.2785 | 0.7758 | 0.4171 | 128
S | 0.5911 | 0.3291 | 0.4699 | 454 | 0.3471 | 0.8990 | 0.3013 | 160
Table 4. Confusion matrix resulting from the accuracy assessment procedure of the 2006 study area classification. A, Artemisia desertorum Spreng. Syst. Veg.; H, Haloxylon ammodendron (C. A. Mey.) Bunge; E, Elaeagnus angustifolia L.; G, grass; S, sand.

Class | A | H | E | G | S | User Accuracy
A | 85 | 4 | 3 | 8 | 1 | 0.84
H | 0 | 15 | 1 | 1 | 0 | 0.88
E | 3 | 12 | 62 | 4 | 0 | 0.77
G | 8 | 7 | 3 | 101 | 2 | 0.83
S | 4 | 12 | 1 | 6 | 451 | 0.95
Producer accuracy | 0.85 | 0.30 | 0.89 | 0.84 | 0.99 |
Overall accuracy | 0.90
Table 5. Confusion matrix resulting from the accuracy assessment procedure of the 2019 study area classification. A, Artemisia desertorum Spreng. Syst. Veg.; H, Haloxylon ammodendron (C. A. Mey.) Bunge; E, Elaeagnus angustifolia L.; G, grass; S, sand.

Class | A | H | E | G | S | User Accuracy
A | 28 | 0 | 1 | 1 | 0 | 0.93
H | 1 | 7 | 1 | 3 | 0 | 0.58
E | 4 | 2 | 45 | 4 | 0 | 0.82
G | 4 | 13 | 2 | 104 | 5 | 0.81
S | 1 | 3 | 0 | 16 | 155 | 0.88
Producer accuracy | 0.74 | 0.28 | 0.92 | 0.81 | 0.97 |
Overall accuracy | 0.85
Table 6. Area estimates obtained for the classifications, as well as the unbiased area and 95% confidence intervals for the area occupied by each class. CI, confidence intervals; A, Artemisia desertorum Spreng. Syst. Veg.; H, Haloxylon ammodendron (C. A. Mey.) Bunge; E, Elaeagnus angustifolia L.; G, grass; S, sand.

Class | 2006 Proportion of Study Area (%) | 2006 Unbiased Area (m²) | 2006 95% CI (m²) | 2019 Proportion of Study Area (%) | 2019 Unbiased Area (m²) | 2019 95% CI (m²)
A | 12.77 | 127,713 | 13,131 | 10.00 | 99,992 | 15,630
H | 7.03 | 70,335 | 14,578 | 6.25 | 62,470 | 20,861
E | 8.09 | 80,929 | 11,352 | 9.73 | 97,270 | 15,208
G | 15.47 | 154,737 | 14,919 | 27.90 | 278,958 | 27,866
S | 56.63 | 566,287 | 12,251 | 31.83 | 318,310 | 18,903
Table 7. FVC of major species. A, Artemisia desertorum Spreng. Syst. Veg.; H, Haloxylon ammodendron (C. A. Mey.) Bunge; E, Elaeagnus angustifolia Linn.; G, grass.

Class | FVC 2006 (%) | FVC 2019 (%)
A | 12.77 ± 1.31 | 10.00 ± 1.56
H | 7.03 ± 1.46 | 6.25 ± 2.09
E | 8.09 ± 1.14 | 9.73 ± 1.52
G | 15.47 ± 1.49 | 27.90 ± 2.79

Citation: Xie, L.; Meng, X.; Zhao, X.; Fu, L.; Sharma, R.P.; Sun, H. Estimating Fractional Vegetation Cover Changes in Desert Regions Using RGB Data. Remote Sens. 2022, 14, 3833. https://doi.org/10.3390/rs14153833
