Article

Methods for Extracting Fractional Vegetation Cover from Differentiated Scenarios Based on Unmanned Aerial Vehicle Imagery

1 College of Ecology and Environment, Xinjiang University, Urumqi 830046, China
2 College of Geography and Remote Sensing Sciences, Xinjiang University, Urumqi 830046, China
3 Xinjiang Key Laboratory of Oasis Ecology, Xinjiang University, Urumqi 830046, China
4 Forestry and Grassland Work Station of Xinjiang Production and Construction Corps, Urumqi 830046, China
* Author to whom correspondence should be addressed.
Land 2024, 13(11), 1840; https://doi.org/10.3390/land13111840
Submission received: 27 September 2024 / Revised: 31 October 2024 / Accepted: 4 November 2024 / Published: 5 November 2024

Abstract

Fractional vegetation cover (FVC) plays a key role in assessing ecological and environmental status because it directly reflects the extent and condition of vegetation cover, and vegetation is an important component of ecosystems. FVC estimation methods have evolved from traditional manual interpretation to advanced remote sensing technologies, such as satellite data analysis and unmanned aerial vehicle (UAV) image processing. Extraction methods based on high-resolution UAV data are being increasingly studied in the fields of ecology and remote sensing. However, research on UAV-based FVC extraction against the backdrop of the high soil reflectance in arid regions remains scarce. In this paper, based on 12 UAV visible light images of differentiated scenarios in the Ebinur Lake basin, Xinjiang, China, various methods are used for high-precision FVC estimation: Otsu's thresholding method combined with 12 Visible Vegetation Indices (abbreviated as Otsu–VVIs) (excess green index, excess red index, excess red minus green index, normalized green–red difference index, normalized green–blue difference index, red–green ratio index, color index of vegetation extraction, visible-band-modified soil-adjusted vegetation index, excess green minus red index, modified green–red vegetation index, red–green–blue vegetation index, and visible-band difference vegetation index), the color space method (red, green, blue, hue, saturation, value, lightness, 'a' (green–red component), and 'b' (blue–yellow component)), a linear mixing model (LMM), and two machine learning algorithms (a support vector machine (SVM) and a neural network). The results show that the following methods exhibit high accuracy in FVC extraction across differentiated scenarios: Otsu–CIVE, the color space method ('a': green–red component), the LMM, and the SVM (Accuracy > 0.75, Precision > 0.8, kappa coefficient > 0.6). Nonetheless, higher scene complexity and image entropy reduce the applicability of precise FVC extraction methods. This study facilitates the accurate, efficient extraction of vegetation information in differentiated scenarios within arid and semiarid regions, providing key technical references for FVC estimation in similar arid areas.

1. Introduction

Fractional vegetation cover (FVC), typically expressed as a percentage, represents the proportion of land surface covered by vegetation. It serves as a critical parameter for assessing vegetation health and understanding the interactions between global change and terrestrial ecosystems [1,2]. Moreover, FVC plays an essential role in evaluating ecosystem models and predicting ecological changes [3,4], highlighting its ecological, environmental, and societal relevance. Accurate FVC estimation aids in monitoring ecosystem health and contributes to the sustainable management of natural resources, essential for both environmental conservation and human society [5,6].
Arid and semiarid regions, such as Xinjiang, have distinctive ecosystems shaped by their unique geographical and climatic conditions. These regions are dominated by desert and grassland systems, where key vegetation types include desert plants like desert willow and poplar, semiarid species like bitterweed and reed, and mountain shrubs like saxaul and tamarisk [7,8,9]. These vegetation types play significant ecological roles, including mitigating wind erosion, preventing desertification, supporting wildlife habitats, and maintaining soil stability and water cycles. Given the ecological importance of this vegetation in Xinjiang, monitoring FVC is crucial for assessing regional ecological quality. However, traditional FVC retrieval methods face significant challenges due to environmental disturbances, necessitating the development of high-precision, repeatable techniques for accurate vegetation monitoring [10,11,12].
FVC extraction methods have traditionally relied on ground measurement and remote sensing. Ground-based methods, while precise, are resource-intensive and limited to small spatial and temporal scales, making them impractical in harsh environments [13,14]. Satellite remote sensing, on the other hand, provides wide coverage and allows large-scale vegetation monitoring, but its accuracy diminishes in arid and semiarid areas where vegetation is sparse and fragmented. The heterogeneity of these regions, coupled with complex terrain, further complicates satellite-based FVC extraction, and specialized methods are required for better accuracy [15,16].
In recent years, unmanned aerial vehicle (UAV) remote sensing has emerged as a promising alternative for estimating FVC, offering high spatial resolution, flexibility in data collection, and strong resistance to environmental interference [17,18]. Additionally, advances in machine learning algorithms have enhanced the ability to extract geoinformation from high-resolution UAV images, contributing to improved monitoring of vegetation dynamics and biodiversity [19,20]. Various techniques have been applied for FVC extraction, although the effectiveness of different algorithms varies depending on the region and vegetation characteristics [21,22].
This paper discusses different FVC extraction methods for arid and semiarid areas under varying entropy conditions. The following Python-based methods are used: the Otsu–VVI method, the color space method, a linear mixing model (LMM), and two machine learning algorithms (a support vector machine (SVM) and a neural network (NN)). The precision of 24 extraction methods is validated using manually labeled points, confusion matrices, and kappa coefficients to select optimal algorithms suitable for arid and semiarid areas. This study aims to provide a scientific algorithmic basis and reference for the use of UAV monitoring in extracting FVC in such environments.

2. Materials

2.1. Study Area

The study area is in the Ebinur Lake basin, geographically situated between 43°38′ N and 45°52′ N, and 79°53′ E and 85°02′ E (Figure 1). The region features various geomorphological types, predominantly plains. Vegetation types are diverse and include the following. Saline vegetation: this vegetation dominates the area and is characterized by salt-tolerant plants; common species include Suaeda salsa and Haloxylon ammodendron [23], which survive the saline conditions typical of the region. Wetland vegetation: this includes reedbeds dominated by Phragmites australis and Typha, which provide important habitat for waterfowl and act as natural water filters. Desert shrubs: in the surrounding arid areas, the vegetation includes drought-tolerant plants such as Tamarix chinensis Lour. and Haloxylon ammodendron [24,25], which play an important role in stabilizing the soil and preventing erosion. Riparian vegetation: at the lake margins, plants such as Populus and willows (Salix) provide important habitat for birds and other wildlife [26,27]. In recent years, human activities, especially agricultural expansion, irrigation diversions, industrial development, dam construction, and the pumping of water from the rivers that feed the lake, have reduced the inflow to the lake. The water level has consequently dropped and the lake has shrunk drastically, threatening biodiversity, especially the flora and fauna that depend on these ecosystems for survival [28]. Changes in the Ebinur Lake basin therefore have important implications for scientific research and conservation efforts [29].

2.2. UAV Data and Preprocessing

The UAV used in this study is the DJI Phantom 4 RTK SE, which is equipped with a quadrotor flight system and a 20-megapixel camera capable of capturing image information in three visible light bands (red: 650 nm ± 16 nm; green: 560 nm ± 16 nm; blue: 450 nm ± 16 nm [RGB]). The experiments were conducted from 28 July 2022 to 1 August 2022 using 3D photogrammetry (grid flight). Flights took place under wind speeds below 8 m/s, clear weather, visibility greater than 5 km, and a solar altitude angle greater than 45°. The flight altitude was 40 m with 80% lateral and 80% longitudinal overlap. Ground control points were established in each flight test site (total error < 0.5 m) to ensure the correct georegistration of the UAV orthoimages (Figure 2). The collected UAV images were processed using Pix4D 4.4.10 (https://www.pix4d.com, accessed on 11 July 2023) to obtain orthoimages with a pixel spatial resolution of 0.03 m. FVC was then extracted from the UAV image data using a combination of Python-based methods: the Otsu–VVI method, the color space method, an LMM, and two machine learning algorithms (an SVM and an NN). These methods were evaluated for accuracy, and the optimal FVC extraction method was selected.

3. Methods

3.1. Scenarios and Entropy

Differentiated scenarios typically refer to specific environments influenced by varying geographic, climatic, and soil conditions and human activities. The concept of differentiated scenarios was used in this study to understand and analyze vegetation distribution and changes accurately under different environmental conditions. Based on manual visual interpretation, 12 orthorectified visible light (RGB) drone images obtained in the experiments were categorized into six scenarios according to vegetation distribution and surface complexity: (1) sparse shrub areas with similar backgrounds, (2) mixed grass–shrub areas with distinct ground vegetation demarcation, (3) mixed zone of sparse herbs and shrubs, (4) extensive shrub areas with minor grassland integration, (5) mixed grass–shrub areas with indistinguishable soil backgrounds, and (6) complex vegetation types with high cover and architectural interference.
Herein, entropy was used to describe these scenarios quantitatively. In remote sensing image processing, entropy is commonly used to measure the randomness or richness of information in image pixels [30]. Specifically for vegetation cover images, entropy can be used to analyze and differentiate the extent and type of vegetation coverage because different vegetation types and coverage exhibit various textures and gray-level variabilities. High entropy values indicate a wide and complex distribution of image pixel values, corresponding to areas with complex or mixed vegetation. Low entropy values suggest that the area is relatively uniform, with little variation in color and brightness, indicating simple vegetation areas. Entropy is typically calculated using a gray-level cooccurrence matrix (GLCM), a statistical method for characterizing image texture features [31]. A GLCM calculates the frequency distribution of gray-level similarity or dissimilarity between an image pixel and its neighboring pixels. Entropy is a statistical measure from this matrix used to describe the complexity and irregularity of image textures. The calculation formula is [31]
H = -\sum_{i} P_i \log_2 P_i ,
P_i = C_i / N ,
where P_i is the probability of the i-th gray level occurring in the image, C_i is the number of occurrences of gray level i, and N is the total number of pixels in the image.
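A minimal Python sketch of the histogram-based entropy calculation defined above is given below; the array name, the 8-bit (256-level) quantization, and the luminance-weighted grayscale conversion are illustrative assumptions rather than the exact implementation used in this study.

```python
import numpy as np

def image_entropy(gray: np.ndarray, levels: int = 256) -> float:
    """Shannon entropy H = -sum_i P_i log2(P_i) over the gray-level histogram,
    with P_i = C_i / N as defined in Section 3.1."""
    counts, _ = np.histogram(gray, bins=levels, range=(0, levels))
    probs = counts / counts.sum()      # P_i = C_i / N
    probs = probs[probs > 0]           # drop empty bins to avoid log2(0)
    return float(-np.sum(probs * np.log2(probs)))

# Illustrative usage with a hypothetical orthoimage file:
# rgb = skimage.io.imread("plot_01.tif")
# gray = (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]).astype(np.uint8)
# print(image_entropy(gray))
```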

3.2. FVC Extraction Methods

(1)
Otsu–VVI Method
Otsu thresholding, also known as the maximum interclass variance method, is an adaptive thresholding method for image segmentation based on image data. Its principle involves calculating the image’s grayscale histogram to determine a threshold that divides the image into the foreground and background. VVIs (Table 1) are used to extract vegetation information from remote sensing images using the differential reflection between vegetation and nonvegetation areas [32,33]. These indices, typically calculated using visible light reflectance values, are based on data from different spectral bands [34]. Combining Otsu thresholding with VVIs leverages their respective advantages to enhance the accuracy and reliability of vegetation cover extraction.
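As an illustration of how Otsu's threshold can be combined with a VVI, the sketch below computes the CIVE (Table 1) from an RGB orthoimage and binarizes it; the function name is hypothetical, and the assumption that vegetation corresponds to CIVE values below the Otsu threshold follows the usual convention for this index rather than a detail stated in the text.

```python
import numpy as np
from skimage.filters import threshold_otsu

def otsu_cive_fvc(rgb: np.ndarray):
    """Binarize vegetation with Otsu's threshold applied to the CIVE
    (Table 1: 0.441*R - 0.811*G + 0.385*B + 18.78745) and return the
    vegetation mask together with the fractional vegetation cover."""
    r, g, b = (rgb[..., k].astype(np.float64) for k in range(3))
    cive = 0.441 * r - 0.811 * g + 0.385 * b + 18.78745
    t = threshold_otsu(cive)
    veg = cive < t          # assumption: vegetation pixels have lower CIVE values
    return veg, float(veg.mean())
```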
(2)
Color Space Method
The color space method for FVC extraction is based on the absorption and reflection characteristics of vegetation at different wavelengths. This method involves converting color remote sensing images into an appropriate color space, commonly RGB; hue, saturation, value (HSV); and CIELab. Depending on the chosen color space, the original image is transformed into the corresponding color space, and the relevant color channels (R, G, B, H, S, V, L, a, and b) are extracted. Appropriate thresholds are set to segregate vegetated and nonvegetated areas within the image. Different color channels may require varying thresholds. Finally, FVC is obtained from the UAV image data by calculating the proportion of vegetated pixels among all pixels in the binary image.
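A hedged sketch of the color space route using the CIELab 'a' (green–red) channel, which the results below identify as the most robust component, is shown here; the use of Otsu's method to set the channel threshold is an illustrative choice, since the text only states that appropriate thresholds are set for each channel.

```python
import numpy as np
from skimage.color import rgb2lab
from skimage.filters import threshold_otsu

def fvc_from_a_channel(rgb: np.ndarray):
    """Convert an RGB orthoimage to CIELab, threshold the 'a' channel,
    and return the vegetation mask and the resulting FVC."""
    lab = rgb2lab(rgb.astype(np.float64) / 255.0)   # rgb2lab expects RGB in [0, 1]
    a = lab[..., 1]            # 'a': negative toward green, positive toward red
    t = threshold_otsu(a)
    veg = a < t                # greener pixels have lower 'a' values
    return veg, float(veg.mean())
```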
(3)
LMM
An LMM posits that the reflectance value of a pixel in a specific spectral band is a linear combination of the reflectance values of the pixel’s endmember components and their respective abundances [45].
\mathrm{Ref}_i = \sum_{j=1}^{m} P_{i,j} + \varepsilon_i ,
where i = 1, 2, 3, …, n (n is the number of spectral bands); j = 1, 2, 3, …, m (m is the number of endmember components within a pixel); Ref_i is the mixed-pixel reflectance value; P_{i,j} is the reflectance value of the j-th endmember component in the i-th spectral band; and ε_i is the error in the i-th band.
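The following is a minimal two-endmember (vegetation and bare soil) unmixing sketch based on non-negative least squares; the endmember spectra, the sum-to-one normalization, and the 0.5 abundance cut-off for counting a pixel as vegetated are illustrative assumptions, not the exact implementation used here.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(pixel_rgb: np.ndarray, endmembers: np.ndarray) -> np.ndarray:
    """Estimate endmember abundances for one pixel by non-negative least squares.
    `endmembers` is an (n_bands x m_endmembers) matrix of pure spectra,
    e.g. column 0 = vegetation, column 1 = bare soil."""
    abundances, _ = nnls(endmembers, pixel_rgb)
    s = abundances.sum()
    return abundances / s if s > 0 else abundances   # sum-to-one normalization

def lmm_fvc(rgb: np.ndarray, veg_endmember, soil_endmember, cutoff: float = 0.5):
    """Unmix every pixel of an RGB orthoimage; a pixel is counted as vegetation
    when its vegetation abundance exceeds `cutoff` (an assumed threshold)."""
    E = np.column_stack([veg_endmember, soil_endmember]).astype(np.float64)
    flat = rgb.reshape(-1, 3).astype(np.float64)
    veg_abund = np.array([unmix_pixel(p, E)[0] for p in flat])
    veg = veg_abund.reshape(rgb.shape[:2]) > cutoff
    return veg, float(veg.mean())
```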
(4)
SVM
An SVM fundamentally operates by identifying a hyperplane in the feature space that maximizes the margin between different classes. This hyperplane serves as a decision boundary that discriminates between classes. For data that are not linearly separable, an SVM uses kernel techniques to project these data into a higher-dimensional space to identify a linear separating hyperplane. An SVM solves the optimization problem [46]
\min_{w,b} \; \frac{1}{2} \| w \|^2 + C \sum_{i=1}^{n} \xi_i
subject to the constraints
y_i ( w \cdot x_i + b ) \ge 1 - \xi_i , \quad \xi_i \ge 0 , \quad i = 1, \ldots, n ,
where w is the normal vector to the hyperplane; b is the bias of the hyperplane; C is a regularization parameter that controls the misclassification penalty; ξ_i denotes the slack variables, which allow some data points to lie on the wrong side of the margin; y_i is the label of each sample, typically +1 or −1; and x_i denotes the feature vectors.
The SVM model parameter settings include the kernel type, the penalty coefficient C, and gamma. C balances the accuracy of classification with the smoothness of the decision surface. A larger C can reduce training errors but may lead to overfitting, whereas a smaller C enhances the model's robustness to noise but may increase training errors. The gamma parameter determines the reach of a single training example's influence, thereby affecting classification granularity or smoothness.
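A short sketch of pixel-level SVM classification with scikit-learn is given below; the RGB-only feature vector, the RBF kernel, and the C and gamma values are illustrative starting points rather than the parameter settings used in this study.

```python
import numpy as np
from sklearn.svm import SVC

def train_svm(train_pixels: np.ndarray, train_labels: np.ndarray) -> SVC:
    """Fit an RBF-kernel SVM on per-pixel RGB features.
    `train_pixels` has shape (n_samples, 3); labels are 1 (vegetation) or 0 (other)."""
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # illustrative parameters
    clf.fit(train_pixels / 255.0, train_labels)
    return clf

def svm_fvc(clf: SVC, rgb: np.ndarray):
    """Classify every pixel of an orthoimage and return the mask and FVC."""
    flat = rgb.reshape(-1, 3).astype(np.float64) / 255.0
    veg = clf.predict(flat).reshape(rgb.shape[:2]).astype(bool)
    return veg, float(veg.mean())
```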
(5)
NN
Numerous nodes (neurons) are organized in a hierarchical structure within an NN. Each neuron processes input signals through an activation function and produces output signals for the next layer. In an NN, each connection has a weight, which is adjusted during training through backpropagation algorithms to minimize the error between the model outputs and the true labels. The basic equations of an NN involve forward propagation and error backpropagation processes. In forward propagation, the output y of each neuron is calculated as [36]
y = f\left( \sum_{i=1}^{n} w_i x_i + b \right) ,
where x_i is the input value; w_i is the weight; b is the bias; and f is the activation function, common choices being sigmoid, tanh, and ReLU. During backpropagation, the network minimizes the loss function, typically a function of the prediction error, such as the mean squared error or cross-entropy loss.
The number of hidden layers and nodes, the activation function, the learning rate, the number of epochs, and the batch size are the parameters commonly tuned in traditional multilayer perceptron (MLP) NNs. The numbers of hidden layers and their nodes define the network depth and width, respectively, with each hidden layer potentially having a distinct number of nodes; additional layers and nodes increase model complexity and can lead to overfitting. Common activation functions include sigmoid, tanh, and ReLU, which introduce the nonlinearity that allows the network to learn complex data patterns. The learning rate, which determines the step size of weight adjustments, is a crucial parameter in optimization algorithms. The number of epochs is the number of times the entire training dataset is used to update the model weights, with multiple passes helping the network learn better. During training, the data are divided into batches, each used to compute model errors and update the weights.
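A corresponding MLP sketch using scikit-learn is shown below; the layer sizes, activation, learning rate, batch size, and epoch budget are illustrative values, not the settings used in this study.

```python
from sklearn.neural_network import MLPClassifier

def train_mlp(train_pixels, train_labels) -> MLPClassifier:
    """Fit a small multilayer perceptron on per-pixel RGB features (labels:
    1 = vegetation, 0 = nonvegetation). All hyperparameters below are illustrative."""
    clf = MLPClassifier(hidden_layer_sizes=(32, 16),  # two hidden layers
                        activation="relu",
                        learning_rate_init=1e-3,
                        batch_size=256,
                        max_iter=300)
    clf.fit(train_pixels / 255.0, train_labels)
    return clf
```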

3.3. Precision Evaluation

The vegetated and nonvegetated areas in the 12 UAV images were manually marked in this study (Figure 3). According to the area size and vegetation sparsity, type, and complexity, vegetation and nonvegetation were proportionally marked in the images. From the 1st image to the 12th image, named 1–12, the total number of point markers was 200, 400, 600, 300, 600, 500, 200, 500, 400, 400, and 400, respectively. The 12 marked images were then used to extract vegetation cover using the Otsu–VVI method, color space method, LMM, and two machine learning methods (SVM and NN), followed by accuracy validation using confusion matrices and kappa coefficients.
(1)
Confusion Matrix
The FVC extraction results of different methods were evaluated using confusion matrices and accuracy metrics [37] (Formulas (7)–(12); Table 2).
\mathrm{Accuracy} = \frac{a + d}{a + b + c + d}
\mathrm{Precision} = \frac{a}{a + c}
\mathrm{Recall} = \frac{a}{a + b}
f = \frac{2a}{2a + b + c}
OA = \frac{c}{c + d}
UA = \frac{b}{a + b}
Several key statistical metrics were used to assess the FVC extraction accuracy of the classification models: overall accuracy, precision, recall, the overestimation error, the underestimation error, and the f score. These metrics are defined as follows:
Accuracy: the proportion of correctly identified observations (vegetation and nonvegetation in this study) to the total number of observations. This is the most straightforward performance metric; a higher value indicates better overall model performance.
Precision: the proportion of correctly predicted vegetation observations out of all observations predicted as vegetation. A higher value indicates higher accuracy among the predicted vegetation samples.
Recall: the proportion of actual vegetation observations that were correctly predicted as vegetation. A higher value indicates a stronger ability of the model to extract vegetation.
f score: the harmonic mean of precision and recall, ranging between 0 and 1. Values closer to 1 indicate better FVC extraction effectiveness.
Overestimation error (OA): the probability of nonvegetation observations being incorrectly predicted as vegetation.
Underestimation error (UA): the probability of vegetation observations being missed or undetected.
These metrics constitute a comprehensive framework for evaluating the performance of vegetation classification models in different aspects. Overall accuracy reflects the overall correctness of each model, while the underestimation and overestimation errors indicate each model's potential misjudgments.
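The six metrics of Formulas (7)–(12) follow directly from the four cells of the confusion matrix; the sketch below assumes a = vegetation correctly detected, b = vegetation missed, c = nonvegetation predicted as vegetation, and d = nonvegetation correctly detected, which is how the formulas above read (Table 2 defines the exact layout).

```python
def fvc_metrics(a: int, b: int, c: int, d: int) -> dict:
    """Accuracy metrics of Formulas (7)-(12) computed from the 2x2 confusion matrix."""
    return {
        "Accuracy":  (a + d) / (a + b + c + d),
        "Precision": a / (a + c),
        "Recall":    a / (a + b),
        "f":         2 * a / (2 * a + b + c),
        "OA":        c / (c + d),   # overestimation error
        "UA":        b / (a + b),   # underestimation error
    }
```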
(2)
Kappa Coefficient
The kappa coefficient is a statistical measure used to assess classification accuracy, especially in terms of accounting for random agreement. In the classification of FVC, the kappa coefficient was used to understand the degree of consistency between the classification results and actual conditions, providing insights beyond those offered by overall accuracy alone. The kappa coefficient of each FVC extraction method was calculated as [38]
N = a + b + c + d ,
X = a + d ,
E = \frac{(a + c) \times (a + b)}{N} + \frac{(b + d) \times (c + d)}{N} ,
\mathrm{Kappa} = \frac{X - E}{N - E} ,
where N is the total number of observations, X is the number of correct classifications, and E is the number of agreements expected by chance. The kappa coefficient ranges from −1 (total disagreement) to +1 (perfect agreement); a value of 0 indicates that the observed agreement is no better than chance. In practice, a higher kappa value signifies better classification performance while accounting for the impact of random agreement.
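Using the same confusion-matrix cells, the kappa coefficient can be computed with a small sketch (same a, b, c, d convention as above):

```python
def kappa(a: int, b: int, c: int, d: int) -> float:
    """Kappa coefficient following the formulas above."""
    N = a + b + c + d                                  # total observations
    X = a + d                                          # correct classifications
    E = ((a + c) * (a + b) + (b + d) * (c + d)) / N    # chance agreement
    return (X - E) / (N - E)
```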

4. Result

4.1. Scenarios and Entropy

Entropy values were calculated for the 12 images across the six scenarios (Table 3). Based on the mean entropy value and its standard deviation, the images were classified as high entropy (values greater than the mean plus 0.5 times the standard deviation), low entropy (values less than the mean minus 0.5 times the standard deviation), or medium entropy (values in between). These categories were used to assign each of the 12 images to the low-, medium-, or high-entropy class. Subsequently, FVC was extracted from these images using the Otsu–VVI method, color space method, LMM, SVM, and NN.
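A brief sketch of the mean ± 0.5 standard deviation rule used for the low/medium/high grouping is given below (the function name and list input are illustrative):

```python
import numpy as np

def classify_entropy(entropies):
    """Label each image 'low', 'medium', or 'high' entropy using
    mean +/- 0.5 * standard deviation as cut-offs (Section 4.1)."""
    e = np.asarray(entropies, dtype=float)
    mu, sd = e.mean(), e.std()
    labels = np.where(e > mu + 0.5 * sd, "high",
             np.where(e < mu - 0.5 * sd, "low", "medium"))
    return labels.tolist()
```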

4.2. Otsu–VVIs

Twelve Otsu–VVI methods were used to extract FVC from the 12 orthorectified RGB drone images, and extraction result maps were generated (Figure 4; Table 4).

4.3. Color Space Method

FVC was extracted from the 12 images using the color space method, and FVC extraction result maps were produced (Figure 5; Table 5).

4.4. LMM

The LMM was used to extract FVC from the 12 images, and extraction outcome images were obtained (Figure 6; Table 6).

4.5. SVM

The SVM was used to extract FVC from the 12 images, and extraction outcome images were obtained (Figure 7; Table 7).

4.6. NN

The NN was used to extract FVC from the 12 images, and extraction outcome images were generated (Figure 8; Table 8).

4.7. Confusion Matrix

(1)
Otsu–VVIs
The vegetation indices EXG, VDVI, RGBVI, V-MSAVI, and CIVE display universality and stability across various entropy conditions (Figure 9). For the low-entropy images, these indices demonstrate an Accuracy range of 0.7–0.96, indicating excellent differentiation between vegetation and nonvegetation. Precision ranges from 0.85 to 1, showing high vegetation recognition precision. Although Recall is generally good, some misclassification occurs. For the medium-entropy images, Accuracy slightly drops to 0.75–0.94, but Precision increases to 0.87–1. Recall is similar to that for the low-entropy images (prone to misclassification). For the high-entropy images, Accuracy ranges from 0.74 to 0.86 and Precision is 0.98–1; however, the vegetation extraction accuracy is notably lower than that for the low- and medium-entropy images. Recall is still generally good but includes some misclassifications.
(2)
Color Space Method
Each color component (R, G, B, H, S, V, L, a, and b) exhibits distinct stability and universality across entropy conditions (Figure 9). For the low-entropy images, Accuracy is 0.53–0.97 and Precision is 0.56–1, indicating good overall model performance and high vegetation extraction precision, respectively. For the medium-entropy images, Accuracy is 0.31–0.94 and Precision is 0.07–0.98, showing notable declines in overall performance and vegetation extraction precision compared with the low-entropy images. For the high-entropy images, Accuracy ranges from 0.34 to 0.96 and Precision from 0.41 to 0.98, suggesting lower overall performance and vegetation extraction precision than for the low- and medium-entropy images. Furthermore, more misclassifications are observed here than for the low- and medium-entropy images.
(3)
LMM
For the low-, medium-, and high-entropy images, the LMM delivers strong overall performance and high vegetation extraction precision and accuracy (Figure 9). Specifically, for the low-entropy images, Accuracy is between 0.79 and 0.89, Precision is 0.87–0.97, and Recall is between 0.59 and 0.9, with few misclassifications. For the medium-entropy images, Accuracy is between 0.88 and 0.91, Precision is between 0.87 and 0.99, and Recall is 0.77–0.91, with few misclassifications. For the high-entropy images, Accuracy is 0.89–0.94, Precision is 0.87–0.99, and Recall is 0.80–0.97, with few misclassifications. Overall, the model's performance and its vegetation extraction accuracy and precision are notably better than those of the Otsu–VVI and color space methods.
(4)
SVM
For the low-entropy images, FVC extraction Accuracy is 0.89–0.94, Precision is 0.87–0.99, and Recall is 0.78–0.96, with few misclassifications (Figure 9). For the medium-entropy images, Accuracy ranges from 0.89 to 0.98, Precision from 0.86 to 0.99, and Recall from 0.86 to 0.98, with few misclassifications. For the high-entropy images, Accuracy is 0.76–0.95, Precision is 0.88–0.95, and Recall is 0.61–0.99, with few misclassifications. These results indicate high stability and precision across all entropy conditions.
(5)
NN
For the low-entropy images, Accuracy ranges from 0.74 to 0.93, Precision from 0.8 to 1, and Recall from 0.49 to 0.91, with few misclassifications (Figure 9). These results suggest good overall model performance and notably high vegetation extraction precision. For the medium-entropy images, Accuracy is 0.45–0.99, Precision is 0.45–0.99, and Recall is 0.46–0.99, with few misclassifications, showing good overall model performance and vegetation extraction precision.

4.8. Kappa Coefficient

(1)
Otsu–VVIs
For images with low, medium, and high entropy values, the VVIs EXG, VDVI, RGBVI, V-MSAVI, and CIVE show optimal vegetation extraction accuracy. In particular, the CIVE demonstrates high precision across all entropy conditions, with kappa coefficients ranging from 0.54 to 0.94 (Figure 10; Table 9).
(2)
Color Space Method
The ‘a’ component achieves high accuracy in images with low, medium, and high entropy values, with kappa coefficients ranging from 0.63 to 0.89. This indicates that this component has significant discriminative ability in extracting vegetation cover across various entropy conditions (Table 10).
(3)
LMM
This model shows higher vegetation extraction accuracy in the low-entropy images, with kappa coefficients between 0.62 and 0.78. For the medium-entropy images, the kappa coefficients range from 0.76 to 0.89, indicating an improvement in vegetation extraction accuracy compared with that for the low-entropy images. In the high-entropy images, the kappa coefficients range from 0.78 to 0.89, suggesting that the model’s overall precision in vegetation extraction is higher than that for the low-entropy images but lower than that for the medium-entropy images (Table 11).
(4)
SVM
In FVC extraction using the SVM, the low-entropy images show kappa coefficients ranging from 0.75 to 0.96, indicating high vegetation extraction accuracy. The medium-entropy images have kappa coefficients ranging from 0.79 to 0.97, showing a slight decline in accuracy compared with the low-entropy images, but the overall performance of the model remains good. The high-entropy images have kappa coefficients of 0.52 and 0.91, indicating a decrease in vegetation extraction accuracy compared with that for the low- and medium-entropy images (Table 12).
(5)
NN
The first low-entropy image has a kappa coefficient of 0.49, indicating low precision in sparse vegetation areas; the remaining low-entropy images have kappa coefficients of 0.68–0.86. Except for the fourth medium-entropy image, which has a kappa of −0.1, the medium-entropy images have kappa coefficients ranging from 0.82 to 0.97, showing high precision in extracting widely distributed vegetation but lower precision in extracting sporadically distributed vegetation. Except for the eighth high-entropy image, which has a kappa coefficient of 0.50, indicating lower precision in areas with sparse herbaceous vegetation and widespread soil backgrounds, the high-entropy images have kappa coefficients of 0.70 to 0.89 (Table 13).

5. Discussion

5.1. Comparison of Differentiated Scenarios and Entropy for FVC Extraction

(1)
Sparse Shrub Areas with Similar Backgrounds (No. 1 and No. 2)
Both are low-entropy images. In No. 1, the sparse yellow vegetation blends visually with the soil background in color and texture. This reduces the applicability of extraction methods other than the vegetation indices EXG, VDVI, RGBVI, V-MSAVI, and CIVE and the 'b' component of the color space method; the remaining methods have limitations in distinguishing subtle differences, affecting the accuracy and reliability of the overall analysis. Conversely, No. 2 displays a clearer contrast between vegetation and background, so up to 20 high-precision FVC extraction methods (including those above) become applicable and effective. This highlights the importance of environmental background differences for the selection of vegetation extraction methods and data interpretation. Xie et al. (2020) introduced a new red–green–blue ratio vegetation index that achieved 93.5% accuracy in vegetation cover extraction using simple RGB data [39]; however, such an index is not well suited to FVC extraction against the bright soil backgrounds of arid and semiarid regions.
(2)
Mixed Grass–Shrub Areas with Distinct Ground Vegetation Demarcation (No. 3 and No. 10)
Both images cover mixed grass–shrub areas but differ markedly in how distinctly the surface vegetation is demarcated; nevertheless, the same high-precision FVC extraction methods apply to both images, with the exception of the 'S' component. Specifically, No. 10, a low-entropy image, is uniform and simple, and the 'S' component (saturation) achieves high accuracy in vegetation extraction there. Therefore, in environments with simple, sparse vegetation distributions, the 'S' component can effectively distinguish between vegetated and nonvegetated areas, enhancing extraction accuracy. No. 3, a medium-entropy image, is visually more complex and diverse, containing more information and noise due to its mix of vegetation and nonvegetation and varied surface features, which reduces the performance of the 'S' component in vegetation extraction. This complexity contrasts sharply with the simpler, sparser vegetation distribution in No. 10, further highlighting how environmental complexity affects the choice and effectiveness of UAV data analysis methods, as noted by Mariana et al. (2017) [40].
(3)
Areas Cooccupied by Shrubs and Sparse Herbaceous Vegetation (No. 4 and No. 12)
In these scenarios, the same methods apply to both images except NNs, which show differing efficiencies. No. 4, a medium-entropy image, has a complex distribution of shrubs and grass, including unevenly distributed, densely interwoven vegetation structures, and variable surface features, which increase classification difficulty, compromising the performance of the NN model. Similar to the findings by Yan et al. (2019), environmental complexity significantly hinders NN performance [41]. By contrast, No. 12, a low-entropy image, may have simpler or more regular vegetation and terrain distribution despite also having shrubs and sparse grass, providing a more manageable data structure for the NN. Additionally, No. 12 may benefit from optimized lighting conditions, further enhancing data processing efficiency and accuracy. Thus, environmental complexity significantly affects the effectiveness of NN methods in vegetation classification.
(4)
Extensive Shrub Areas with Minor Grassland Integration (No. 6 and No. 7)
Both are medium-entropy images. Up to 19 methods are suitable for these widely, uniformly vegetated areas, mainly because the adequate resolution and spectral information of the images allow these high-precision methods to better process and analyze the characteristics of extensive, uniform vegetation distributions. Zhang et al. (2022) demonstrated that random forest models perform particularly well in processing such uniform vegetation distributions in arid regions [42]. Additionally, these methods benefit from their algorithms’ high data processing capability and robustness to environmental noise and background variations, which are crucial for extensively vegetated areas. Therefore, methods besides the five vegetation indices (NGRDI, EXR, EXER, RGRI, NGBDI) enable a more comprehensive assessment and accurate extraction of vegetation cover in such environments.
(5)
Mixed Grass–Shrub Areas with Indistinguishable Soil Backgrounds (No. 9)
In this medium-entropy image, the spectral properties of the soil closely resemble those of the surrounding vegetation, so traditional vegetation indices cannot effectively distinguish between vegetation and soil. This significantly reduces the number of suitable high-precision vegetation extraction methods, especially in the color space method, where only the R, S, and ‘a’ component exhibit high extraction precision. Adjusting the UAV’s shooting angle and optimizing lighting conditions, as suggested by Catherine et al. (2013), can help improve vegetation extraction in such complex scenarios [43]. Data quality should be optimized through adjustments in image capture timing, lighting conditions, and camera angles to improve vegetation extraction in complex scenarios. Flying under optimal sunlight conditions, such as early morning or late evening, can avoid the high reflectance and intense shadows caused by direct midday sunlight and reduce spectral differences between vegetation and soil due to changes in solar angle. Adjusting the UAV’s shooting angle can capture more dimensional surface information, increasing the visual and spectral distinction between vegetation and soil in images. This improves the recognition and classification accuracy of vegetation features in UAV images and optimizes vegetation cover estimates in areas with complex soil backgrounds.
(6)
Complex Vegetation Types with High Cover and Architectural Interference (No. 5, 8, and 11)
No. 5 and No. 8 are high-entropy images, whereas No. 11 is a medium-entropy one. In this scenario, the same high-precision vegetation extraction methods apply to No. 5 and No. 11 owing to their similar settings. For No. 8, only 13 FVC extraction methods are available, fewer than for the other two images; in the color space method, only six components (G, B, H, S, V, and b) are suitable. This is mainly because of the extensive sparse herbaceous vegetation and soil background in No. 8, which lowers the accuracy of the remaining color space components in distinguishing between vegetation and nonvegetation. Li et al. (2019) have similarly highlighted the limitations of vegetation indices in complex, mixed environments [44].
In conclusion, the effectiveness of vegetation cover extraction in different entropy scenarios significantly depends on the entropy level and scene complexity, so selecting extraction methods suitable for specific backgrounds and conditions is crucial. Future research should further explore how to integrate the advantages of various methods—traditional vegetation indices, machine learning techniques, and recent image processing algorithms—to adapt to diverse environmental scenarios and improve vegetation extraction precision (Figure 11).

5.2. Comparison of FVC Extraction Methods

This study uses UAV image data and 24 FVC extraction methods to explore vegetation cover extraction in typical arid areas in Xinjiang across different entropy values. These methods encompass widely used vegetation indices, the color space method, LMM, and machine learning algorithms (SVM and NN) that are becoming mainstream in various research domains. Under different entropy conditions, the optimal FVC extraction methods are the CIVE, the ‘a’ component of the color space method, LMM, and machine learning. However, each method has distinct strengths and limitations in extracting FVC under specific conditions, such as shadows, vegetation under shadows, yellow vegetation, sparse vegetation, and biological soil crusts. These are detailed as follows (Figure 12, 15 m × 14 m):
(1)
CIVE
The CIVE performs well in distinguishing pure shadows from vegetation but identifies biological soil crusts, sparse vegetation, yellow vegetation, and their branches as nonvegetation, which aligns with findings in desert regions by Hao et al. (2020) [45]. Vegetation under shadows is also frequently misclassified. This necessitates strict control over lighting and angles during UAV image capture to enhance FVC extraction accuracy under these conditions.
(2)
Color Space Method
The ‘a’ component improves extraction accuracy for sparse vegetation and pure shadow parts but identifies biological soil crusts, vegetation under shadows, yellow vegetation, and branch areas as nonvegetation, consistent with previous research on urban vegetation cover mapping [46]. Threshold adjustment can optimize the distinction between vegetation and nonvegetation for optimal accuracy.
(3)
LMM
Compared with the CIVE and the ‘a’ component, the LMM significantly improves extraction accuracy for pure shadows, sparse vegetation, vegetation under shadows, and biological soil crusts, but it has limitations with yellow vegetation and branches. Its overall accuracy is significantly higher than that of the CIVE and the ‘a’ component, as reported by Ni (2023) [46].
(4)
Machine Learning Algorithms
The SVM and NN outperform the other methods in extracting yellow vegetation and its branches. However, they have limitations in extracting pure shadow and sparse vegetation parts, with the NN performing notably worse. Vegetation under shadows and biological soil crusts are also not effectively recognized, with the SVM generally outperforming the NN. Moreover, despite these methods’ high extraction accuracy for pure green vegetation, limitations remain in extracting biological soil crusts, yellow vegetation, branches, shadows, and vegetation under shadows. Future research should explore combining these methods’ strengths to develop more accurate and practical strategies, particularly for vegetation cover extraction in arid and semiarid areas [47].

5.3. Accuracy of UAV Remote Sensing Images

UAV remote sensing bridges the gap between ground measurements and low-spatial-resolution satellite sensing, providing centimeter-level ground data without the constraints of overpass timing or other factors [48]. This particularly holds for small-scale monitoring tasks, where UAV operation costs are significantly lower than those of traditional satellite sensing methods. In practical applications, high-resolution UAV imagery captures more detail, greatly enhancing the accuracy of assessments of vegetation type, health, and cover. High spatial resolution is one of the greatest advantages of UAV imagery, allowing researchers to observe and analyze surface features meticulously. However, whether vegetation cover extraction accuracy increases linearly with the resolution of the original images, or whether the effect of resolution on FVC extraction ceases to be significant beyond a certain threshold, remains a critical open question that must be addressed to optimize UAV remote sensing applications in ecological monitoring and environmental management. Current research indicates that although high-resolution imagery provides rich surface information, it can also introduce noise, especially in areas with unclear vegetation boundaries or dense vegetation, potentially decreasing FVC extraction accuracy [49]. Moreover, high image resolution typically entails considerable data processing demands, requiring more advanced hardware and more processing time and cost. Therefore, determining the optimal image resolution to balance accuracy and cost is an important direction for future research; experimental and theoretical analyses should systematically assess FVC extraction performance at different resolutions and establish precise vegetation monitoring models. Finally, the lack of near-infrared (NIR) and multispectral data in this study greatly limits the ability to capture nuances in vegetation health, especially in the later stages of growth. Without NIR or multispectral imagery, more detailed information on plant health and soil–vegetation interactions cannot be obtained; NIR data, by contrast, can distinguish between healthy and stressed vegetation and are widely used in vegetation studies.

6. Conclusions

This study utilizes UAV data combined with Otsu–VVIs (EXG, EXR, EXER, NGRDI, NGBDI, RGRI, CIVE, V-MSAVI, EXGR, MGRVI, RGBVI, and VDVI), the color space method (R, G, B, H, S, V, L, a, and b), LMM, and two machine learning algorithms (SVM and NN) to extract fractional vegetation cover in different entropy scenarios in the arid regions of Xinjiang. The most effective methods for extracting fractional vegetation cover against the backdrop of the strong reflective soil found in arid and semiarid regions were identified and validated: the CIVE, the ‘a’ component in the color space method, LMM, and SVM.
Regarding the Otsu–VVIs in high-, medium-, and low-entropy images, the CIVE outperforms the other vegetation indices, with Accuracy = 0.77–0.97 and Precision = 0.82–1. These results highlight the CIVE's superior overall performance, accuracy, and precision in vegetation extraction, with low rates of missed and false detections.
Regarding the color space method in high-, medium-, and low-entropy images, the 'a' component demonstrates superior FVC extraction accuracy compared with the other color components, with Accuracy = 0.82–0.95 and Precision = 0.75–0.96. These results emphasize the applicability of the 'a' component for extracting vegetation cover in the arid regions of Xinjiang.
The LMM achieves Accuracy = 0.81–0.94 and Precision = 0.87–0.99 in FVC extraction across high-, medium-, and low-entropy images. Therefore, the LMM provides high precision and accuracy in extracting vegetation cover in Xinjiang's arid regions, with low rates of missed and false detections.
The SVM achieves Accuracy = 0.76–0.98 and Precision = 0.88–0.95 in FVC extraction across high-, medium-, and low-entropy images. Hence, the SVM is highly applicable for extracting FVC under different entropy conditions in Xinjiang's arid and semiarid regions. By contrast, the NN shows relatively lower accuracy in extracting vegetation cover under varying entropy conditions among sparse distributions of shrubs and grasslands in arid regions.

Author Contributions

Conceptualization, Y.M. and C.S.; methodology, C.S.; software, C.S.; validation, C.S., H.P., and J.G.; formal analysis, C.S.; investigation, C.S. and N.L.; resources, Y.M. and H.R.; data curation, C.S. and Q.W.; writing—original draft preparation, C.S.; writing—review and editing, Y.M. and C.S.; visualization, C.S. and H.P.; supervision, Y.M. and H.R.; project administration, Y.M.; funding acquisition, Y.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the carbon storage, turnover, biological origins, and future scenario prediction of representative wetland ecosystems in Xinjiang (Grant No. 2023D01D01), sponsored by the Natural Science Foundation of Xinjiang Uygur Autonomous Region, and supported by the Third Xinjiang Scientific Expedition Program (Grant No. 2021xjkk1400).

Data Availability Statement

All data included in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Liu, C.; Zhang, X.; Wang, T.; Chen, G.; Zhu, K.; Wang, Q.; Wang, J. Detection of vegetation coverage changes in the Yellow River Basin from 2003 to 2020. Ecol. Indic. 2022, 138, 108818. (In English) [Google Scholar] [CrossRef]
  2. Song, J.M.; Xia, N.; Hai, W.Y.; Tang, M.Y. Spatial-temporal variation of vegetation in Gobi region of Xinjiang based on NDVI-DFI model. Southwest China J. Agric. Sci. 2022, 35, 2867–2875. [Google Scholar]
  3. Zhao, J.; Li, J.; Zhang, Z.X.; Wu, S.L.; Zhong, B.; Liu, Q.H. A dataset of 16 m/10-day fractional vegetation cover of MuSyQ GF-series (2018–2020, China, Version 01). China Sci. Data 2022, 7, 221–230. [Google Scholar] [CrossRef]
  4. Zhang, S.; Yang, R.; Wenxing, H.; Wang, L.; Shuang, L.; Song, H.; Zhao, W.; Li, L. Analysis of Fractional Vegetation Cover Changes and Driving Forces on Both Banks of Yongding River Before and After Ecological Water Replenishment. Ecol. Environ. Sci. 2023, 32, 264–273. [Google Scholar]
  5. Chen, X.; Lv, X.; Ma, L.; Chen, A.; Zhang, Q.; Zhang, Z. Optimization and Validation of Hyperspectral Estimation Capability of Cotton Leaf Nitrogen Based on SPA and RF. Remote Sens. 2022, 14, 5201. [Google Scholar] [CrossRef]
  6. Li, Y.; Sun, J.; Wang, M.; Guo, J.; Wei, X.; Shukla, M.K.; Qi, Y. Spatiotemporal Variation of Fractional Vegetation Cover and Its Response to Climate Change and Topography Characteristics in Shaanxi Province, China. Appl. Sci. 2023, 13, 11532. [Google Scholar] [CrossRef]
  7. Cai, Y.; Zhang, M.; Lin, H. Estimating the Urban Fractional Vegetation Cover Using an Object-Based Mixture Analysis Method and Sentinel-2 MSI Imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 341–350. [Google Scholar] [CrossRef]
  8. Liang, J.; Liu, D. Automated estimation of daily surface water fraction from MODIS and Landsat images using Gaussian process regression. Int. J. Remote Sens. 2021, 42, 4261–4283. [Google Scholar] [CrossRef]
  9. Jia, X.; Shao, M.; Zhu, Y.; Luo, Y. Soil moisture decline due to afforestation across the Loess Plateau, China. J. Hydrol. 2017, 546, 113–122. [Google Scholar] [CrossRef]
  10. Elmendorf, S.C.; Henry, G.H.; Hollister, R.D.; Björk, R.G.; Bjorkman, A.D.; Callaghan, T.V.; Collier, L.S.; Cooper, E.J.; Cornelissen, H.C.; Day, T.A.; et al. Global assessment of experimental climate warming on tundra vegetation: Heterogeneity over space and time. Ecol. Lett. 2012, 15, 164–175. [Google Scholar] [CrossRef]
  11. Wang, N.; Guo, Y.; Wei, X.; Zhou, M.; Wang, H.; Bai, Y. UAV-based remote sensing using visible and multi-spectral indices for the estimation of vegetation cover in an oasis of a desert. Ecol. Indic. 2022, 141, 109155. [Google Scholar] [CrossRef]
  12. Zhong, G.; Chen, J.; Huang, R.; Yi, S.; Qin, Y.; You, H.; Han, X.; Zhou, G. High Spatial Resolution Fractional Vegetation Coverage Inversion Based on UAV and Sentinel-2 Data: A Case Study of Alpine Grassland. Remote Sens. 2023, 15, 4266. [Google Scholar] [CrossRef]
  13. Brazier, R.E.; Turnbull, L.; Wainwright, J.; Bol, R. Carbon loss by water erosion in drylands: Implications from a study of vegetation change in the south-west USA. Hydrol. Process. 2013, 28, 2212–2222. [Google Scholar] [CrossRef]
  14. Maurya, A.K.; Bhargava, N.; Singh, D. Efficient selection of SAR features using ML-based algorithms for accurate FVC estimation. Adv. Space Res. 2022, 70, 1795–1809. [Google Scholar] [CrossRef]
  15. Fernández-Guisuraga, J.M.; Verrelst, J.; Calvo, L.; Suárez-Seoane, S. Hybrid inversion of radiative transfer models based on high spatial resolution satellite reflectance data improves fractional vegetation cover retrieval in heterogeneous ecological systems after fire. Remote Sens. Environ. 2021, 255, 12304. [Google Scholar] [CrossRef]
  16. Getzin, S.; Wiegand, K.; Schöning, I. Assessing biodiversity in forests using very high-resolution images and unmanned aerial vehicles. Methods Ecol. Evol. 2011, 3, 397–404. [Google Scholar] [CrossRef]
  17. Wu, S.; Deng, L.; Zhai, J.; Lu, Z.; Wu, Y.; Chen, Y.; Guo, L.; Gao, H. Approach for Monitoring Spatiotemporal Changes in Fractional Vegetation Cover Through Unmanned Aerial System-Guided-Satellite Survey: A Case Study in Mining Area. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 5502–5513. [Google Scholar] [CrossRef]
  18. Zhang, T.; Liu, D. Estimating fractional vegetation cover from multispectral unmixing modeled with local endmember variability and spatial contextual information. ISPRS J. Photogramm. Remote Sens. 2024, 209, 481–499. [Google Scholar] [CrossRef]
  19. Wang, H.; Han, D.; Mu, Y.; Jiang, L.; Yao, X.; Bai, Y.; Lu, Q.; Wang, F. Landscape-level vegetation classification and fractional woody and herbaceous vegetation cover estimation over the dryland ecosystems by unmanned aerial vehicle platform. Agric. For. Meteorol. 2019, 278, 107665. [Google Scholar] [CrossRef]
  20. Ashapure, A.; Jung, J.; Chang, A.; Oh, S.; Maeda, M.; Landivar, J. A Comparative Study of RGB and Multispectral Sensor-Based Cotton Canopy Cover Modelling Using Multi-Temporal UAS Data. Remote Sens. 2019, 11, 2757. [Google Scholar] [CrossRef]
  21. Du, M.; Li, M.; Noguchi, N.; Ji, J.; Ye, M. Retrieval of Fractional Vegetation Cover from Remote Sensing Image of Unmanned Aerial Vehicle Based on Mixed Pixel Decomposition Method. Drones 2023, 7, 43. [Google Scholar] [CrossRef]
  22. Guo, W.; Rage, U.K.; Ninomiya, S. Illumination invariant segmentation of vegetation for time series wheat images based on decision tree model. Comput. Electron. Agric. 2013, 96, 58–66. [Google Scholar] [CrossRef]
  23. Hu, Q.; Yang, J.; Xu, B.; Huang, J.; Memon, M.S.; Yin, G.; Zeng, Y.; Zhao, J.; Liu, K. Evaluation of Global Decametric-Resolution LAI, FAPAR and FVC Estimates Derived from Sentinel-2 Imagery. Remote Sens. 2020, 12, 912. [Google Scholar] [CrossRef]
  24. Wang, B.; Jia, K.; Liang, S.; Xie, X.; Wei, X.; Zhao, X.; Yao, Y.; Zhang, X. Assessment of Sentinel-2 MSI Spectral Band Reflectances for Estimating Fractional Vegetation Cover. Remote Sens. 2018, 10, 1927. [Google Scholar] [CrossRef]
  25. Li, J.; Fan, W.; Li, M. Application of linear mixing spectral model to classification of multi-spectral remote sensing image. J. Northeast. For. Univ. 2008, 36, 45–69. [Google Scholar]
  26. Hultquist, C.; Chen, G.; Zhao, K. A comparison of Gaussian process regression, random forests and support vector regression for burn severity assessment in diseased forests. Remote Sens. Lett. 2014, 5, 723–732. [Google Scholar] [CrossRef]
  27. Durbha, S.S.; King, R.L.; Younan, N.H. Support vector machines regression for retrieval of leaf area index from multiangle imaging spectroradiometer. Remote Sens. Environ. 2007, 107, 348–361. [Google Scholar] [CrossRef]
  28. Gränzig, T.; Fassnacht, F.E.; Kleinschmit, B.; Foerster, M. Mapping the fractional coverage of the invasive shrub Ulex europaeus with multi-temporal Sentinel-2 imagery utilizing UAV orthoimages and a new spatial optimization approach. Int. J. Appl. Earth Obs. Geoinf. 2021, 96, 102281. [Google Scholar] [CrossRef]
  29. Mao, H.; Meng, J.; Ji, F.; Zhang, Q.; Fang, H. Comparison of Machine Learning Regression Algorithms for Cotton Leaf Area Index Retrieval Using Sentinel-2 Spectral Bands. Appl. Sci. 2019, 9, 1459. [Google Scholar] [CrossRef]
  30. Jamin, A.; Humeau-Heurtier, A. (Multiscale) Cross-Entropy Methods: A Review. Entropy 2019, 22, 45. [Google Scholar] [CrossRef]
  31. Rankine, W.J.M.; Tait, P.G. Miscellaneous Scientific Papers; C. Griffin: Glasgow, Scotland, 1881. [Google Scholar]
  32. Liu, Y.; Mu, X.; Wang, H.; Yan, G. A novel method for extracting green fractional vegetation cover from digital images. J. Veg. Sci. 2011, 23, 406–418. [Google Scholar] [CrossRef]
  33. Geng, X.; Wang, X.; Fang, H.; Ye, J.; Han, L.; Gong, Y.; Cai, D. Vegetation coverage of desert ecosystems in the Qinghai-Tibet Plateau is underestimated. Ecol. Indic. 2022, 137, 108780. [Google Scholar] [CrossRef]
  34. Sainui, J.; Pattanasatean, P. Color Classification based on Pixel Intensity Values. In Proceedings of the 2018 19th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), Busan, Republic of Korea, 27–29 June 2018; pp. 302–306. [Google Scholar]
  35. Pan, Y.; Wu, W.; He, J.; Zhu, J.; Su, X.; Li, W.; Li, D.; Yao, X.; Cheng, T.; Zhu, Y.; et al. A novel approach for estimating fractional cover of crops by correcting angular effect using radiative transfer models and UAV multi-angular spectral data. Comput. Electron. Agric. 2024, 222, 109030. [Google Scholar] [CrossRef]
  36. Meyer, G.E.; Mehta, T.; Kocher, M.F.; Mortensen, D.A.; Samal, A. Textural imaging and discriminant analysis for distinguishing weeds for spot spraying. Trans. ASAE 1998, 41, 1189–1197. [Google Scholar] [CrossRef]
  37. Meyer, G.E.; Hindman, T.W.; Laksmi, K. Machine vision detection parameters for plant species identification. In Proceedings of the SPIE the International Society for Optical Engineering, Boston, MA, USA, 14 January 1999; pp. 124–523. [Google Scholar]
  38. Meyer, G.E.; Neto, J.C. Verification of color vegetation indices for automated crop imaging applications. Comput. Electron. Agric. 2008, 63, 282–293. [Google Scholar] [CrossRef]
  39. Verrelst, J.; Schaepman, M.E.; Koetz, B.; Kneubühler, M. Angular sensitivity analysis of vegetation indices derived from CHRIS/PROBA data. Remote Sens. Environ. 2008, 112, 2341–2353. [Google Scholar] [CrossRef]
  40. Liu, J.; Wei, L.; Zheng, Z.; Du, J. Vegetation cover change and its response to climate extremes in the Yellow River Basin. Sci. Total Environ. 2023, 905, 167366. [Google Scholar] [CrossRef]
  41. Zaiming, Z.; Yanming, Y.; Benqing, C. Research on Vegetation Extraction and Fractional Vegetation Cover of Spartina Alterniflora Using UAV Images. Remote Sens. Technol. Appl. 2017, 32, 714–720. [Google Scholar]
  42. Huete, A.; Liu, H.; de Lira, G.; Batchily, K.; Escadafal, R. A soil color index to adjust for soil and litter noise in vegetation index imagery of arid regions. In Proceedings of the IGARSS '94—1994 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, CA, USA, 8–12 August 1994; pp. 1042–1043.
  43. Yan, G.; Li, L.; Coy, A.; Mu, X.; Chen, S.; Xie, D.; Zhang, W.; Shen, Q.; Zhou, H. Improving the estimation of fractional vegetation cover from UAV RGB imagery by colour unmixing. ISPRS J. Photogramm. Remote. Sens. 2019, 158, 23–34. [Google Scholar] [CrossRef]
  44. Li, L.; Mu, X.; Macfarlane, C.; Song, W.; Chen, J.; Yan, K.; Yan, G. A half-Gaussian fitting method for estimating fractional vegetation cover of corn crops using unmanned aerial vehicle images. Agric. For. Meteorol. 2018, 262, 379–390. [Google Scholar] [CrossRef]
  45. Van de Voorde, T.; Vlaeminck, J.; Canters, F. Comparing Different Approaches for Mapping Urban Vegetation Cover from Landsat ETM+ Data: A Case Study on Brussels. Sensors 2008, 8, 3880–3902. [Google Scholar] [CrossRef] [PubMed]
  46. Zhang, D.; Ni, H. Inversion of Forest Biomass Based on Multi-Source Remote Sensing Images. Sensors 2023, 23, 9313. (In English) [Google Scholar] [CrossRef] [PubMed]
  47. Hao, M.; Qin, L.; Mao, P.; Luo, J.; Zhao, W.; Qiu, G. Unmanned aerial vehicle (UAV)-based methodology for spatial distribution pattern analysis of desert vegetation. J. Desert Res. 2020, 40, 169–179. [Google Scholar]
  48. Keesstra, S.D.; Bouma, J.; Wallinga, J.; Tittonell, P.; Smith, P.; Cerdà, A.; Montanarella, L.; Quinton, J.N.; Pachepsky, Y.; van der Putten, W.H.; et al. The significance of soils and soil science towards realization of the United Nations Sustainable Development Goals. Soil 2016, 2, 111–128. [Google Scholar] [CrossRef]
  49. Croft, T.L.; Phillips, T.N. Least-Squares Proper Generalized Decompositions for Weakly Coercive Elliptic Problems. SIAM J. Sci. Comput. 2017, 39, A1366–A1388. [Google Scholar] [CrossRef]
Figure 1. Overview map of the study area (the red numbers represent the number of each plot (1–12), and the different colors indicate the entropy levels (high, medium, and low entropy) of the 12 plots).
Figure 2. Flowchart of FVC extraction from UAV image data.
Figure 2. Flowchart of FVC extraction from UAV image data.
Land 13 01840 g002
Figure 3. Thumbnails of 12 markers.
Figure 3. Thumbnails of 12 markers.
Land 13 01840 g003
Figure 4. Thumbnails of 12 UAV results extracted based on the Otsu–VVI method.
Figure 4. Thumbnails of 12 UAV results extracted based on the Otsu–VVI method.
Land 13 01840 g004
Figure 5. Thumbnails of 12 UAV image results extracted based on the color space method. R, G, B, H, S, V, L, a and b in Figure 5 represent the components in the color space method, respectively.
Figure 5. Thumbnails of 12 UAV image results extracted based on the color space method. R, G, B, H, S, V, L, a and b in Figure 5 represent the components in the color space method, respectively.
Land 13 01840 g005
Figure 6. Thumbnails of 12 UAV image results extracted based on the LMM.
Figure 6. Thumbnails of 12 UAV image results extracted based on the LMM.
Land 13 01840 g006
Figure 7. Thumbnail extraction of 12 UAV image results based on the SVM.
Figure 7. Thumbnail extraction of 12 UAV image results based on the SVM.
Land 13 01840 g007
Figure 8. Thumbnail extraction of 12 UAV image results based on the NN.
Figure 8. Thumbnail extraction of 12 UAV image results based on the NN.
Land 13 01840 g008
Figure 9. Thumbnails of heat maps of high, medium, and low entropy values.
Figure 9. Thumbnails of heat maps of high, medium, and low entropy values.
Land 13 01840 g009
Figure 10. Thumbnails of radars of high, medium, and low entropy values.
Figure 10. Thumbnails of radars of high, medium, and low entropy values.
Land 13 01840 g010
Figure 11. (1)–(3) Original image, SVM and NN. Red: FVC; white: non-FVC.
Figure 11. (1)–(3) Original image, SVM and NN. Red: FVC; white: non-FVC.
Land 13 01840 g011
Figure 12. (1) Original, (2) CIVE, (3) ‘a’ component, (4) LMM, (5) SVM, (6) NN. White: FVC; black: non-FVC. The red squares in subplot a highlight the particular vegetation encountered during the extraction process and its particular environment.
Figure 12. (1) Original, (2) CIVE, (3) ‘a’ component, (4) LMM, (5) SVM, (6) NN. White: FVC; black: non-FVC. The red squares in subplot a highlight the particular vegetation encountered during the extraction process and its particular environment.
Land 13 01840 g012
Table 1. Visible Vegetation Indices (VVIs) and their formulas.
VVIs | Formula | Reference
Excess green index (EXG) | 2 × G − R − B | [35]
Excess red index (EXR) | 1.4 × R − B | [36]
Excess red minus green index (EXER) | 2 × G − 2.4 | [37]
Normalized green–red difference index (NGRDI) | (G − R) / (G + R) | [38]
Normalized green–blue difference index (NGBDI) | (G − B) / (G + B) | [39]
Red–green ratio index (RGRI) | R / B | [40]
Visible-band difference vegetation index (VDVI) | (2 × G − R − B) / (2 × G + R + B) | [19]
Visible-band-modified soil-adjusted vegetation index (V-MSAVI) | (2 × G + 1 − √((2 × G + 1)² − 8 × (2 × G − R − B))) / 2 | [41]
Excess green minus red index (EXGR) | 3 × G − 2.4 × R − B | [42]
Modified green–red vegetation index (MGRVI) | (G² − R²) / (G² + R²) | [43]
Red–green–blue vegetation index (RGBVI) | (G² − B × R) / (G² + B × R) | [34]
Vegetation color index (CIVE) | 0.441 × R − 0.811 × G + 0.385 × B + 18.78745 | [44]
R, G, and B represent the reflectance of the red, green, and blue bands, respectively.
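To make the Otsu–VVI workflow behind Table 1 concrete, the sketch below computes two of the indices (EXG and CIVE) from the RGB bands and binarizes each with Otsu's threshold; the fraction of vegetated pixels is then taken as FVC. This is a minimal NumPy illustration under assumed conventions (vegetation has high EXG and low CIVE), not the authors' implementation, and the synthetic input image is a placeholder.

```python
import numpy as np

def otsu_threshold(x, bins=256):
    """Threshold that maximizes between-class variance (Otsu's method)."""
    hist, edges = np.histogram(x[np.isfinite(x)], bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    p = hist.astype(float) / hist.sum()
    omega = np.cumsum(p)                 # cumulative class-0 probability
    mu = np.cumsum(p * centers)          # cumulative class-0 mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return centers[np.nanargmax(sigma_b)]

def fvc_from_rgb(rgb):
    """Estimate FVC from an RGB array (H x W x 3) with two Table 1 indices."""
    r, g, b = [rgb[..., i].astype(float) for i in range(3)]
    exg = 2 * g - r - b                                     # EXG (Table 1)
    cive = 0.441 * r - 0.811 * g + 0.385 * b + 18.78745     # CIVE (Table 1)
    veg_exg = exg > otsu_threshold(exg)     # assumption: vegetation has high EXG
    veg_cive = cive < otsu_threshold(cive)  # assumption: vegetation has low CIVE
    return veg_exg.mean(), veg_cive.mean()  # FVC = fraction of vegetated pixels

# Synthetic stand-in for a UAV tile; a real run would load an orthomosaic tile.
rgb = np.random.randint(0, 256, size=(100, 100, 3))
print(fvc_from_rgb(rgb))
```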
Table 2. Confusion matrix for precision validation.
Marking Result | Extraction Result: FVC | Extraction Result: non-FVC
FVC | a | b
non-FVC | c | d
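The accuracy metrics reported in Tables 4–8 and the kappa coefficients in Tables 9–13 can be recomputed from the four counts a, b, c, and d of Table 2. The sketch below uses the definitions that are consistent with the published values (for example, plot 1 of Table 6 yields Accuracy = 0.845 and kappa = 0.690, matching Tables 6 and 11); the interpretation of the OA and UA columns is inferred from those values rather than stated in this section, and the function name is illustrative.

```python
def confusion_metrics(a, b, c, d):
    """Metrics derived from the Table 2 confusion matrix.

    a: marked FVC, extracted FVC          (true positives)
    b: marked FVC, extracted non-FVC      (false negatives)
    c: marked non-FVC, extracted FVC      (false positives)
    d: marked non-FVC, extracted non-FVC  (true negatives)
    """
    n = a + b + c + d
    accuracy = (a + d) / n
    precision = a / (a + c)
    recall = a / (a + b)
    f = 2 * precision * recall / (precision + recall)
    oa = c / (c + d)   # OA column of Tables 4-8 (formula inferred from the values)
    ua = b / (a + b)   # UA column of Tables 4-8 (formula inferred from the values)
    p_e = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    kappa = (accuracy - p_e) / (1 - p_e)
    return accuracy, precision, recall, f, oa, ua, kappa

# Plot 1 of the LMM (Table 6): a=72, b=28, c=3, d=97
print(confusion_metrics(72, 28, 3, 97))
# -> 0.845, 0.960, 0.720, 0.823 (rounded), 0.030, 0.280, kappa 0.690 (cf. Tables 6 and 11)
```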
Table 3. Classification of entropy and differentiated scenarios.
No. | Entropy | Differentiated Scenario
1 | Low Entropy | Sparse shrub areas with similar backgrounds
2 | Low Entropy | Sparse shrub areas with similar backgrounds
3 | Medium Entropy | Mixed grass–shrub areas with distinct ground vegetation demarcation
4 | Medium Entropy | Areas cooccupied by shrubs and sparse herbaceous vegetation
5 | High Entropy | Complex vegetation types with high cover and architectural interference
6 | High Entropy | Extensive shrub areas with minor grassland integration
7 | High Entropy | Extensive shrub areas with minor grassland integration
8 | High Entropy | Complex vegetation types with high cover and architectural interference
9 | Medium Entropy | Mixed grass–shrub areas with indistinguishable soil backgrounds
10 | Low Entropy | Mixed grass–shrub areas with distinct ground vegetation demarcation
11 | Medium Entropy | Complex vegetation types with high cover and architectural interference
12 | Low Entropy | Areas cooccupied by shrubs and sparse herbaceous vegetation
High entropy > Mean + 0.5 × SD; low entropy < Mean − 0.5 × SD; Mean − 0.5 × SD ≤ medium entropy ≤ Mean + 0.5 × SD, where SD is the standard deviation and Mean is the mean of the plot entropy values.
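Table 3 assigns each plot to a high-, medium-, or low-entropy class relative to the mean and standard deviation of the plot entropies. The sketch below illustrates one way to reproduce such a grouping; the grey-level Shannon entropy used here is an assumption made for the example, since the exact entropy measure is not specified in this table.

```python
import numpy as np

def shannon_entropy(gray, bins=256):
    """Shannon entropy (bits) of a single-band image; one common choice."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def classify_entropy(values, k=0.5):
    """Label each value high/medium/low relative to mean +- k*SD (Table 3 note)."""
    mean, sd = np.mean(values), np.std(values)
    labels = []
    for v in values:
        if v > mean + k * sd:
            labels.append("High Entropy")
        elif v < mean - k * sd:
            labels.append("Low Entropy")
        else:
            labels.append("Medium Entropy")
    return labels

# Twelve synthetic grey images stand in for the 12 UAV plots.
rng = np.random.default_rng(0)
plots = [rng.integers(0, 256, size=(64, 64)) for _ in range(12)]
entropies = [shannon_entropy(p) for p in plots]
print(classify_entropy(entropies))
```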
Table 4. Accuracy verification of the Otsu–VVI method a.
No. | VVIs | a | b | c | d | Accuracy | Precision | Recall | f | OA | UA
1NGRDI613979210.4100.4360.6100.5080.7900.390
EXG63373970.8000.9550.6300.7590.0300.370
EXR376359410.3900.3850.3700.3780.5900.630
EXER010001000.5000.0000.0000.0000.0001.000
RGRI69319280.3850.4290.6900.5290.9200.310
NGBDI29811890.4550.1540.0200.0350.1100.980
VDVI68322980.8300.9710.6800.8000.0200.320
RGBVI61391990.8000.9840.6100.7530.0100.390
MGRVI21793970.5900.8750.2100.3390.0300.790
EXGR29801000.5101.0000.0200.0390.0000.980
V-MSAVI59411990.7900.9830.5900.7380.0100.410
CIVE69314960.8250.9450.6900.7980.0400.310
2NGRDI6813221980.6650.9710.3400.5040.0100.660
EXG12674231770.7580.8460.6300.7220.1150.370
EXR16337791210.7100.6740.8150.7380.3950.185
EXER020002000.5000.0000.0000.0000.0001.000
RGRI15941461540.7830.7760.7950.7850.2300.205
NGBDI5195941060.2780.0510.0250.0330.4700.975
VDVI12674141860.7800.9000.6300.7410.0700.370
RGBVI12773151850.7800.8940.6350.7430.0750.365
MGRVI5195901100.2880.0530.0250.0340.4500.975
EXGR020002000.5000.0000.0000.0000.0001.000
V-MSAVI12575141860.7780.8990.6250.7370.0700.375
CIVE13862301700.7700.8210.6900.7500.1500.310
3NGRDI34266242760.5170.5860.1130.1900.0800.887
EXG17712322980.7920.9890.5900.7390.0070.410
EXR4325729550.0800.1270.1430.1350.9830.857
EXER1528503000.5251.0000.0500.0950.0000.950
RGRI702301561440.3570.3100.2330.2660.5200.767
NGBDI030003000.5000.0000.0000.0000.0001.000
VDVI15314722980.7520.9870.5100.6730.0070.490
RGBVI16813222980.7770.9880.5600.7150.0070.440
MGRVI12117912990.7000.9920.4030.5730.0030.597
EXGR5924112990.5970.9830.1970.3280.0030.803
V-MSAVI17112922980.7820.9880.5700.7230.0070.430
CIVE18611422980.8070.9890.6200.7620.0070.380
4NGRDI4146471030.3570.0780.0270.0400.3130.973
EXG1113901500.8701.0000.7400.8510.0000.260
EXR4410614820.1530.2290.2930.2570.9870.707
EXER411091501500.4240.2150.2730.2400.5000.727
RGRI12138108420.1800.1000.0800.0890.7200.920
NGBDI985201500.8271.0000.6530.7900.0000.347
VDVI965401500.8201.0000.6400.7800.0000.360
RGBVI1044601500.8471.0000.6930.8190.0000.307
MGRVI906001500.8001.0000.6000.7500.0000.400
EXGR807001500.7671.0000.5330.6960.0000.467
V-MSAVI1094101500.8631.0000.7270.8420.0000.273
CIVE1331701500.9431.0000.8870.9400.0000.113
5NGRDI892111571430.3870.3620.2970.3260.5230.703
EXG19610432970.8220.9850.6530.7860.0100.347
EXR11818129190.2120.2890.3950.3330.9700.605
EXER329712990.5030.7500.0100.0200.0030.990
RGRI126174264360.2700.3230.4200.3650.8800.580
NGBDI129942960.4950.2000.0030.0070.0130.997
VDVI17312722980.7850.9890.5770.7280.0070.423
RGBVI17912122980.7950.9890.5970.7440.0070.403
MGRVI10519532970.6700.9720.3500.5150.0100.650
EXGR5124912990.5830.9810.1700.2900.0030.830
V-MSAVI18311722980.8020.9890.6100.7550.0070.390
CIVE2089242960.8400.9810.6930.8130.0130.307
6NGRDI4320743620.2960.5000.1720.2560.4100.828
EXG1757502500.8501.0000.7000.8240.0000.300
EXR109141228220.2620.3230.4360.3710.9120.564
EXER624402500.5121.0000.0240.0470.0000.976
RGRI67183238120.1580.2200.2680.2410.9520.732
NGBDI3247122380.4820.2000.0120.0230.0480.988
VDVI1589202500.8161.0000.6320.7750.0000.368
RGBVI1628802500.8241.0000.6480.7860.0000.352
MGRVI8316702500.6661.0000.3320.4980.0000.668
EXGR3721302500.5741.0000.1480.2580.0000.852
V-MSAVI1688202500.8361.0000.6720.8040.0000.328
CIVE1886202500.8761.0000.7520.8580.0000.248
7NGRDI109067330.2150.1300.1000.1130.6700.900
EXG53471990.7600.9810.5300.6880.0100.470
EXR643680200.4200.4440.6400.5250.8000.360
EXER59501000.5251.0000.0500.0950.0000.950
RGRI168487130.1450.1550.1600.1580.8700.840
NGBDI01006940.4700.0000.0000.0000.0601.000
VDVI50501990.7450.9800.5000.6620.0100.500
RGBVI52481990.7550.9810.5200.6800.0100.480
MGRVI247601000.6201.0000.2400.3870.0000.760
EXGR208001000.6001.0000.2000.3330.0000.800
V-MSAVI56441990.7750.9820.5600.7130.0100.440
CIVE73271990.8600.9860.7300.8390.0100.270
8NGRDI30220155950.2500.1620.1200.1380.6200.880
EXG14710312490.7920.9930.5880.7390.0040.412
EXR149101186640.4260.4450.5960.5090.7440.404
EXER36214132370.5460.7350.1440.2410.0520.856
RGRI44206193570.2020.1860.1760.1810.7720.824
NGBDI4246472030.4140.0780.0160.0270.1880.984
VDVI1638722480.8220.9880.6520.7860.0080.348
RGBVI1529822480.8000.9870.6080.7520.0080.392
MGRVI73177322180.5820.6950.2920.4110.1280.708
EXGR6418612490.6260.9850.2560.4060.0040.744
V-MSAVI1648622480.8240.9880.6560.7880.0080.344
CIVE1836732470.8600.9840.7320.8390.0120.268
9NGRDI17921891110.7250.6680.8950.7650.4450.105
EXG15347231770.8250.8690.7650.8140.1150.235
EXR18713881120.7480.6800.9350.7870.4400.065
EXER020011990.4980.0000.0000.0000.0051.000
RGRI1919140600.6280.5770.9550.7190.7000.045
NGBDI0200471530.3830.0000.0000.0000.2351.000
VDVI17030451550.8130.7910.8500.8190.2250.150
RGBVI1604031970.8930.9820.8000.8820.0150.200
MGRVI4196231770.4530.1480.0200.0350.1150.980
EXGR219802000.5051.0000.0100.0200.0000.990
V-MSAVI1505031970.8680.9800.7500.8500.0150.250
CIVE1425891910.8330.9400.7100.8090.0450.290
10NGRDI20180621380.3950.2440.1000.1420.3100.900
EXG1821811990.9530.9950.9100.9500.0050.090
EXR10298119810.4580.4620.5100.4850.5950.490
EXER020002000.5000.0000.0000.0000.0001.000
RGRI53147181190.1800.2260.2650.2440.9050.735
NGBDI1199101900.4780.0910.0050.0090.0500.995
VDVI1841611990.9580.9950.9200.9560.0050.080
RGBVI1811911990.9500.9950.9050.9480.0050.095
MGRVI9410611990.7330.9890.4700.6370.0050.530
EXGR341662002000.3900.1450.1700.1570.5000.830
V-MSAVI1831711990.9550.9950.9150.9530.0050.085
CIVE1891121980.9680.9900.9450.9670.0100.055
11NGRDI26174154460.1800.1440.1300.1370.7700.870
EXG1257511990.8100.9920.6250.7670.0050.375
EXR47153175250.1800.2120.2350.2230.8750.765
EXER2018011990.5480.9520.1000.1810.0050.900
RGRI29171180200.1230.1390.1450.1420.9000.855
NGBDI0200181820.4550.0000.0000.0000.0901.000
VDVI1287211990.8180.9920.6400.7780.0050.360
RGBVI1247611990.8080.9920.6200.7630.0050.380
MGRVI8711341960.7080.9560.4350.5980.0200.565
EXGR4915111990.6200.9800.2450.3920.0050.755
V-MSAVI1425811990.8530.9930.7100.8280.0050.290
CIVE1673391910.8950.9490.8350.8880.0450.165
12NGRDI40160126740.2850.2410.2000.2190.6300.800
EXG1217971930.7850.9450.6050.7380.0350.395
EXR11189190100.3030.3690.5550.4430.9500.445
EXER1618402000.5401.0000.0800.1480.0000.920
RGRI73127167330.2650.3040.3650.3320.8350.635
NGBDI020051950.4880.0000.0000.0000.0251.000
VDVI7912102000.6981.0000.3950.5660.0000.605
RGBVI7812202000.6951.0000.3900.5610.0000.610
MGRVI6213802000.6551.0000.3100.4730.0000.690
EXGR3916102000.5981.0000.1950.3260.0000.805
V-MSAVI8811202000.7201.0000.4400.6110.0000.560
CIVE1336761940.8180.9570.6650.7850.0300.335
a a, b, c, and d in the Table 4 header are the confusion-matrix counts defined in Table 2, obtained by comparing the extraction results with the marked reference samples.
Table 5. Accuracy verification of the color space method a.
No. | Component | a | b | c | d | Accuracy | Precision | Recall | f | OA | UA
1R94630700.8200.7580.9400.8390.3000.060
G762431690.7250.7100.7600.7340.3100.240
B693130700.6950.6970.6900.6930.3000.310
H35655950.6500.8750.3500.5000.0500.650
S63377930.7800.9000.6300.7410.0700.370
V40608920.6600.8330.4000.5410.0800.600
L435710900.6650.8110.4300.5620.1000.570
a72283970.8450.9600.7200.8230.0300.280
b21799910.5600.7000.2100.3230.0900.790
2R15545451550.7750.7750.7750.7750.2250.225
G12476581420.6650.6810.6200.6490.2900.380
B11189641360.6180.6340.5550.5920.3200.445
H1831791910.9350.9530.9150.9340.0450.085
S15545201800.8380.8860.7750.8270.1000.225
V13169591410.6800.6890.6550.6720.2950.345
L12377501500.6830.7110.6150.6600.2500.385
a15842141860.8600.9190.7900.8490.0700.210
b1307051950.8130.9630.6500.7760.0250.350
3R297352950.9870.9830.9900.9870.0170.010
G2851532970.9700.9900.9500.9690.0100.050
B2821832970.9650.9890.9400.9640.0100.060
H156144922080.6070.6290.5200.5690.3070.480
S24654253470.4880.4930.8200.6160.8430.180
V293772930.9770.9770.9770.9770.0230.023
L2964101900.9720.9670.9870.9770.0500.013
a28218922080.8170.7540.9400.8370.3070.060
b24654225750.5350.5220.8200.6380.7500.180
4R1311921480.9300.9850.8730.9260.0130.127
G1401041460.9530.9720.9330.9520.0270.067
B1381221480.9530.9860.9200.9520.0130.080
H13812361140.8400.7930.9200.8520.2400.080
S1381251450.9430.9650.9200.9420.0330.080
V1381241460.9470.9720.9200.9450.0270.080
L1391131470.9530.9790.9270.9520.0200.073
a1391151450.9470.9650.9270.9460.0330.073
b13317321180.8370.8060.8870.8440.2130.113
5R2937162840.9620.9480.9770.9620.0530.023
G2928152850.9620.9510.9730.9620.0500.027
B26040112890.9150.9590.8670.9110.0370.133
H23664912090.7420.7220.7870.7530.3030.213
S26040192810.9020.9320.8670.8980.0630.133
V2919142860.9620.9540.9700.9620.0470.030
L2937142860.9650.9540.9770.9650.0470.023
a2425862940.8930.9760.8070.8830.0200.193
b193107282720.7750.8730.6430.7410.0930.357
6R2464591910.8740.8070.9840.8860.2360.016
G2464551950.8820.8170.9840.8930.2200.016
B22723252250.9040.9010.9080.9040.1000.092
H22129392110.8640.8500.8840.8670.1560.116
S22624861640.7800.7240.9040.8040.3440.096
V2473432070.9080.8520.9880.9150.1720.012
L2464452050.9020.8450.9840.9090.1800.016
a2143682420.9120.9640.8560.9070.0320.144
b19258352150.8140.8460.7680.8050.1400.232
7R97338620.7950.7190.9700.8260.3800.030
G93738620.7750.7100.9300.8050.3800.070
B93732680.8050.7440.9300.8270.3200.070
H95528720.8350.7720.9500.8520.2800.050
S96435650.8050.7330.9600.8310.3500.040
V98253470.7250.6490.9800.7810.5300.020
L98258420.7000.6280.9800.7660.5800.020
a94612880.9100.8870.9400.9130.1200.060
b544619810.6750.7400.5400.6240.1900.460
8R211391101400.7020.6570.8440.7390.4400.156
G210401051450.7100.6670.8400.7430.4200.160
B216342161270.5780.5000.8640.6330.6300.136
H242825000.4840.4920.9680.6521.0000.032
S1727825000.3440.4080.6880.5121.0000.312
V1995125000.3980.4430.7960.5691.0000.204
L201491331170.6360.6020.8040.6880.5320.196
a21931442060.8500.8330.8760.8540.1760.124
b23812239110.4980.4990.9520.6550.9560.048
9R94106431570.6280.6860.4700.5580.2150.530
G93107981020.4880.4870.4650.4760.4900.535
B36164551450.4530.3960.1800.2470.2750.820
H14357143570.5000.5000.7150.5880.7150.285
S1871310190.6320.6490.9350.7660.9180.065
V37163104960.3330.2620.1850.2170.5200.815
L50150411590.5230.5490.2500.3440.2050.750
a18614311690.8880.8570.9300.8920.1550.070
b9101112880.3130.0740.0820.0780.5600.918
10R1982911090.7680.6850.9900.8100.4550.010
G15347221780.8280.8740.7650.8160.1100.235
B16634191810.8680.8970.8300.8620.0950.170
H17727431570.8270.8050.8680.8350.2150.132
S191931970.9700.9850.9550.9700.0150.045
V15743241760.8330.8670.7850.8240.1200.215
L15644151850.8530.9120.7800.8410.0750.220
a1928191810.9330.9100.9600.9340.0950.040
b50150401600.5250.5560.2500.3450.2000.750
11R13565181820.7930.8820.6750.7650.0900.325
G95105191810.6900.8330.4750.6050.0950.525
B11684181820.7450.8660.5800.6950.0900.420
H17426221780.8800.8880.8700.8790.1100.130
S16040651350.7380.7110.8000.7530.3250.200
V100100171830.7080.8550.5000.6310.0850.500
L93107171830.6900.8450.4650.6000.0850.535
a18020221780.8950.8910.9000.8960.1100.100
b15644221780.8350.8760.7800.8250.1100.220
12R1722891910.9080.9500.8600.9030.0450.140
G1919471530.8600.8030.9550.8720.2350.045
B1919401600.8780.8270.9550.8860.2000.045
H18119721280.7730.7150.9050.7990.3600.095
S18119291710.8800.8620.9050.8830.1450.095
V15941101900.8730.9410.7950.8620.0500.205
L18911381620.8780.8330.9450.8850.1900.055
a16139161840.8630.9100.8050.8540.0800.195
b17129129710.6050.5700.8550.6840.6450.145
a a, b, c, and d in the Table 5 header are the confusion-matrix counts defined in Table 2; the Component column lists the components of the color space method.
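Among the color space components in Tables 5 and 10, the ‘a’ (green–red) component performs consistently well. The sketch below illustrates the general idea of thresholding the CIELAB ‘a’ channel to separate vegetation from background; the scikit-image conversion and the simple fixed threshold of 0 are assumptions made for illustration, not the segmentation rule used in the paper.

```python
import numpy as np
from skimage.color import rgb2lab   # scikit-image

def fvc_from_a_channel(rgb_uint8, threshold=0.0):
    """Estimate FVC by thresholding the CIELAB 'a' (green-red) component.

    Vegetation is greenish, so its 'a' values tend to be negative; pixels
    below the threshold are counted as vegetation. The default threshold
    of 0 is an assumption for this example, not the paper's setting.
    """
    lab = rgb2lab(rgb_uint8 / 255.0)   # shape (H, W, 3): L, a, b channels
    a = lab[..., 1]
    veg = a < threshold
    return veg.mean(), veg             # FVC fraction and the binary mask

rgb = np.random.randint(0, 256, size=(100, 100, 3), dtype=np.uint8)
fvc, mask = fvc_from_a_channel(rgb)
print(f"FVC = {fvc:.3f}")
```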
Table 6. Accuracy verification of LMM a.
No. | a | b | c | d | Accuracy | Precision | Recall | f | OA | UA
1 | 72 | 28 | 3 | 97 | 0.845 | 0.960 | 0.720 | 0.823 | 0.030 | 0.280
2 | 163 | 37 | 40 | 160 | 0.808 | 0.803 | 0.815 | 0.809 | 0.200 | 0.185
3 | 231 | 69 | 3 | 297 | 0.880 | 0.987 | 0.770 | 0.865 | 0.010 | 0.230
4 | 136 | 14 | 20 | 130 | 0.887 | 0.872 | 0.907 | 0.889 | 0.133 | 0.093
5 | 239 | 61 | 5 | 295 | 0.890 | 0.980 | 0.797 | 0.879 | 0.017 | 0.203
6 | 216 | 34 | 2 | 248 | 0.928 | 0.991 | 0.864 | 0.923 | 0.008 | 0.136
7 | 97 | 3 | 8 | 92 | 0.945 | 0.924 | 0.970 | 0.946 | 0.080 | 0.030
8 | 229 | 21 | 33 | 217 | 0.892 | 0.874 | 0.916 | 0.895 | 0.132 | 0.084
9 | 166 | 34 | 6 | 194 | 0.900 | 0.965 | 0.830 | 0.892 | 0.030 | 0.170
10 | 177 | 23 | 21 | 179 | 0.890 | 0.894 | 0.885 | 0.889 | 0.105 | 0.115
11 | 176 | 24 | 11 | 189 | 0.913 | 0.941 | 0.880 | 0.910 | 0.055 | 0.120
12 | 179 | 21 | 27 | 173 | 0.880 | 0.869 | 0.895 | 0.882 | 0.135 | 0.105
a a, b, c, and d in the Table 6 header are the confusion-matrix counts defined in Table 2.
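The linear mixing model (LMM) evaluated in Table 6 treats each pixel as a mixture of vegetation and background endmembers, so sub-pixel vegetation fractions can be solved by least squares. The two-endmember sketch below is a generic illustration; the placeholder endmember RGB values and the closed-form single-fraction solution are assumptions made for the example, not the authors' exact unmixing procedure.

```python
import numpy as np

def lmm_fvc(rgb, veg_endmember, soil_endmember):
    """Per-pixel vegetation abundance from a two-endmember linear mixing model.

    Each pixel p is modelled as p = f*veg + (1-f)*soil; solving for f in the
    least-squares sense and clipping to [0, 1] gives a sub-pixel vegetation
    fraction, whose mean over the image is the plot-level FVC.
    """
    veg = np.asarray(veg_endmember, dtype=float)
    soil = np.asarray(soil_endmember, dtype=float)
    pixels = rgb.reshape(-1, 3).astype(float)
    diff = veg - soil                              # direction between endmembers
    f = (pixels - soil) @ diff / (diff @ diff)     # closed-form least squares
    f = np.clip(f, 0.0, 1.0)
    return f.reshape(rgb.shape[:2]), float(f.mean())

# Placeholder endmembers (dark green vegetation, bright grey soil) and a synthetic tile.
veg_em, soil_em = [40, 90, 40], [180, 170, 160]
rgb = np.random.randint(0, 256, size=(50, 50, 3))
fractions, fvc = lmm_fvc(rgb, veg_em, soil_em)
print(f"Plot-level FVC = {fvc:.3f}")
```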
Table 7. Accuracy verification of SVM a.
No. | a | b | c | d | Accuracy | Precision | Recall | f | OA | UA
1 | 78 | 22 | 3 | 97 | 0.875 | 0.963 | 0.780 | 0.862 | 0.030 | 0.220
2 | 159 | 41 | 1 | 199 | 0.895 | 0.994 | 0.795 | 0.883 | 0.005 | 0.205
3 | 294 | 6 | 4 | 296 | 0.983 | 0.987 | 0.980 | 0.983 | 0.013 | 0.020
4 | 135 | 15 | 5 | 145 | 0.933 | 0.964 | 0.900 | 0.931 | 0.033 | 0.100
5 | 183 | 117 | 26 | 274 | 0.762 | 0.876 | 0.610 | 0.719 | 0.087 | 0.390
6 | 231 | 19 | 12 | 238 | 0.938 | 0.951 | 0.924 | 0.937 | 0.048 | 0.076
7 | 99 | 1 | 8 | 92 | 0.955 | 0.925 | 0.990 | 0.957 | 0.080 | 0.010
8 | 189 | 61 | 17 | 233 | 0.844 | 0.917 | 0.756 | 0.829 | 0.068 | 0.244
9 | 189 | 11 | 31 | 169 | 0.895 | 0.859 | 0.945 | 0.900 | 0.155 | 0.055
10 | 192 | 8 | 1 | 199 | 0.978 | 0.995 | 0.960 | 0.977 | 0.005 | 0.040
11 | 186 | 14 | 5 | 195 | 0.953 | 0.974 | 0.930 | 0.951 | 0.025 | 0.070
12 | 169 | 31 | 9 | 191 | 0.900 | 0.949 | 0.845 | 0.894 | 0.045 | 0.155
a a, b, c, and d in the Table 7 header are the confusion-matrix counts defined in Table 2.
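The SVM results in Table 7 come from a supervised classifier trained on marked vegetation and background pixels. The scikit-learn sketch below shows that workflow end to end; the RBF kernel, the use of raw RGB values as features, and the synthetic training samples are assumptions made for the example, not the configuration reported in the paper.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-ins for marked samples: rows are RGB values, label 1 = FVC.
rng = np.random.default_rng(42)
veg = rng.normal([60, 110, 55], 15, size=(200, 3))     # greenish vegetation pixels
soil = rng.normal([170, 160, 145], 15, size=(200, 3))  # bright soil pixels
X = np.vstack([veg, soil])
y = np.hstack([np.ones(200), np.zeros(200)])

clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # common default settings (assumed)
clf.fit(X, y)

# Classify a whole image and report the vegetated fraction as FVC.
image = rng.integers(0, 256, size=(100, 100, 3))
pred = clf.predict(image.reshape(-1, 3)).reshape(100, 100)
print(f"FVC = {pred.mean():.3f}")
```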
Table 8. Accuracy verification of NN a.
No. | a | b | c | d | Accuracy | Precision | Recall | f | OA | UA
1 | 49 | 51 | 0 | 100 | 0.745 | 1.000 | 0.490 | 0.658 | 0.000 | 0.510
2 | 163 | 37 | 2 | 198 | 0.903 | 0.988 | 0.815 | 0.893 | 0.010 | 0.185
3 | 296 | 4 | 4 | 296 | 0.987 | 0.987 | 0.987 | 0.987 | 0.013 | 0.013
4 | 138 | 162 | 168 | 132 | 0.450 | 0.451 | 0.460 | 0.455 | 0.560 | 0.540
5 | 214 | 86 | 2 | 298 | 0.853 | 0.991 | 0.713 | 0.829 | 0.007 | 0.287
6 | 214 | 36 | 38 | 212 | 0.852 | 0.849 | 0.856 | 0.853 | 0.152 | 0.144
7 | 97 | 3 | 8 | 92 | 0.945 | 0.924 | 0.970 | 0.946 | 0.080 | 0.030
8 | 174 | 76 | 48 | 202 | 0.752 | 0.784 | 0.696 | 0.737 | 0.192 | 0.304
9 | 190 | 10 | 6 | 194 | 0.960 | 0.969 | 0.950 | 0.960 | 0.030 | 0.050
10 | 172 | 28 | 0 | 200 | 0.930 | 1.000 | 0.860 | 0.925 | 0.000 | 0.140
11 | 176 | 24 | 13 | 187 | 0.908 | 0.931 | 0.880 | 0.905 | 0.065 | 0.120
12 | 182 | 18 | 46 | 154 | 0.840 | 0.798 | 0.910 | 0.850 | 0.230 | 0.090
a a, b, c, and d in the Table 8 header are the confusion-matrix counts defined in Table 2.
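The neural network (NN) extractor in Table 8 can be sketched in the same way by swapping the SVM for a small multilayer perceptron. The hidden-layer sizes, solver settings, and synthetic samples below are illustrative assumptions only, not the network configuration used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic marked samples, scaled to [0, 1]; label 1 = FVC.
rng = np.random.default_rng(7)
veg = rng.normal([60, 110, 55], 15, size=(200, 3))
soil = rng.normal([170, 160, 145], 15, size=(200, 3))
X = np.vstack([veg, soil]) / 255.0
y = np.hstack([np.ones(200), np.zeros(200)])

nn = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
nn.fit(X, y)

# Classify a synthetic image and report the vegetated fraction as FVC.
image = rng.integers(0, 256, size=(100, 100, 3)) / 255.0
pred = nn.predict(image.reshape(-1, 3)).reshape(100, 100)
print(f"FVC = {pred.mean():.3f}")
```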
Table 9. Kappa of FVC extraction using Otsu–VVIs.
No. | VVIs | Kappa
1NGRDI−0.180
EXG0.600
EXR−0.220
EXER0.000
RGRI−0.230
NGBDI−0.090
VDVI0.660
RGBVI0.600
MGRVI0.180
EXGR0.020
V-MSAVI0.580
CIVE0.650
2NGRDI0.330
EXG0.515
EXR0.420
EXER0.000
RGRI0.565
NGBDI−0.445
VDVI0.560
RGBVI0.560
MGRVI−0.425
EXGR0.000
V-MSAVI0.555
CIVE0.540
3NGRDI0.033
EXG0.583
EXR−0.840
EXER0.050
RGRI−0.287
NGBDI0.000
VDVI0.503
RGBVI0.553
MGRVI0.400
EXGR0.193
V-MSAVI0.563
CIVE0.613
4NGRDI−0.287
EXG0.740
EXR−0.693
EXER−0.212
RGRI−0.640
NGBDI0.653
VDVI0.640
RGBVI0.693
MGRVI0.600
EXGR0.533
V-MSAVI0.727
CIVE0.887
5NGRDI−0.227
EXG0.643
EXR−0.575
EXER0.007
RGRI−0.460
NGBDI−0.010
VDVI0.570
RGBVI0.590
MGRVI0.340
EXGR0.167
V-MSAVI0.603
CIVE0.680
6NGRDI−0.163
EXG0.700
EXR−0.476
EXER0.024
RGRI−0.684
NGBDI−0.036
VDVI0.632
RGBVI0.648
MGRVI0.332
EXGR0.148
V-MSAVI0.672
CIVE0.752
7NGRDI−0.570
EXG0.520
EXR−0.160
EXER0.050
RGRI−0.710
NGBDI−0.060
VDVI0.490
RGBVI0.510
MGRVI0.240
EXGR0.200
V-MSAVI0.550
CIVE0.720
8NGRDI−0.500
EXG0.584
EXR−0.148
EXER0.092
RGRI−0.596
NGBDI−0.172
VDVI0.644
RGBVI0.600
MGRVI0.164
EXGR0.252
V-MSAVI0.648
CIVE0.720
9NGRDI0.450
EXG0.650
EXR0.495
EXER−0.005
RGRI0.255
NGBDI−0.235
VDVI0.625
RGBVI0.785
MGRVI−0.095
EXGR0.010
V-MSAVI0.735
CIVE0.665
10NGRDI−0.210
EXG0.905
EXR−0.085
EXER0.000
RGRI−0.640
NGBDI−0.045
VDVI0.915
RGBVI0.900
MGRVI0.465
EXGR−0.317
V-MSAVI0.910
CIVE0.935
11NGRDI−0.640
EXG0.620
EXR−0.640
EXER0.095
RGRI−0.755
NGBDI−0.090
VDVI0.635
RGBVI0.615
MGRVI0.415
EXGR0.240
V-MSAVI0.705
CIVE0.790
12NGRDI−0.430
EXG0.570
EXR−0.395
EXER0.080
RGRI−0.470
NGBDI−0.025
VDVI0.395
RGBVI0.390
MGRVI0.310
EXGR0.195
V-MSAVI0.440
CIVE0.635
Table 10. Kappa of FVC extraction using the color space method.
No. | Components | Kappa
1R0.640
G0.450
B0.390
H0.300
S0.560
V0.320
L0.330
a0.690
b0.120
2R0.550
G0.330
B0.235
H0.870
S0.675
V0.360
L0.365
a0.720
b0.625
3R0.973
G0.940
B0.930
H0.213
S−0.023
V0.953
L0.941
a0.633
b0.070
4R0.860
G0.907
B0.907
H0.680
S0.887
V0.893
L0.907
a0.893
b0.673
5R0.923
G0.923
B0.830
H0.483
S0.803
V0.923
L0.930
a0.787
b0.550
6R0.748
G0.764
B0.808
H0.728
S0.560
V0.816
L0.804
a0.824
b0.628
7R0.590
G0.550
B0.610
H0.670
S0.610
V0.450
L0.400
a0.820
b0.350
8R0.404
G0.420
B0.213
H−0.032
S−0.312
V−0.204
L0.272
a0.700
b−0.004
9R0.255
G−0.025
B−0.095
H0.000
S0.021
V−0.335
L0.045
a0.775
b−0.468
10R0.535
G0.655
B0.735
H0.653
S0.940
V0.665
L0.705
a0.865
b0.050
11R0.585
G0.380
B0.490
H0.760
S0.475
V0.415
L0.380
a0.790
b0.670
12R0.815
G0.720
B0.755
H0.545
S0.760
V0.745
L0.755
a0.725
b0.210
Table 11. Kappa of FVC extraction using LMM.
No. | Kappa
1 | 0.690
2 | 0.615
3 | 0.760
4 | 0.773
5 | 0.780
6 | 0.856
7 | 0.890
8 | 0.784
9 | 0.800
10 | 0.780
11 | 0.825
12 | 0.760
Table 12. Kappa of FVC extraction using SVM.
No. | Kappa
1 | 0.750
2 | 0.790
3 | 0.967
4 | 0.867
5 | 0.523
6 | 0.876
7 | 0.910
8 | 0.688
9 | 0.790
10 | 0.955
11 | 0.905
12 | 0.800
Table 13. Kappa of FVC extraction using NN.
No. | Kappa
1 | 0.490
2 | 0.805
3 | 0.973
4 | −0.100
5 | 0.707
6 | 0.704
7 | 0.890
8 | 0.504
9 | 0.920
10 | 0.860
11 | 0.815
12 | 0.680
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
