Article

Combining GF-2 and Sentinel-2 Images to Detect Tree Mortality Caused by Red Turpentine Beetle during the Early Outbreak Stage in North China

Key Laboratory for Forest Pest Control, College for Forestry, Beijing Forestry University, Beijing 100083, China
* Author to whom correspondence should be addressed.
Forests 2020, 11(2), 172; https://doi.org/10.3390/f11020172
Submission received: 11 January 2020 / Revised: 2 February 2020 / Accepted: 3 February 2020 / Published: 5 February 2020

Abstract

In recent years, the red turpentine beetle (RTB) (Dendroctonus valens LeConte) has invaded northern China. Because the invasion is recent, tree mortality remains at a low damage level. Information about tree mortality at both the single-tree and forest stand scales, which remote sensing can provide, is needed for forest management in the early stages of an outbreak. To detect RTB-induced tree mortality at a single-tree scale, we evaluated the classification accuracies of Gaofen-2 (GF2) imagery at two spatial resolutions (1 and 4 m) using a pixel-based method, and also applied an object-based method to the 1 m pan-sharpened images. At the larger forest stand scale, we used Sentinel-2 (S2) imagery at two resolutions (10 and 20 m) to detect RTB-induced tree mortality and compared the classification accuracies. Three machine learning algorithms—the classification and regression tree (CART), the random forest (RF), and the support vector machine (SVM)—were applied and compared. The 1 m resolution GF2 images achieved the highest classification accuracy with the pixel-based method and the SVM algorithm (overall accuracy = 77.7%). At the forest stand scale, the three-class classification of damage percentage within an S2 pixel (0%, <15%, and 15%–50%) was not successful. However, the 10 m resolution S2 images supported effective binary classification (<15%: overall accuracy = 74.9%; 15%–50%: overall accuracy = 81.0%). Our results indicate that tree mortality caused by RTB can be identified at the single-tree and forest stand scales by combining GF2 and S2 images, which will be useful for exploring the spatial and temporal patterns of insect pest spread at different spatial scales.

1. Introduction

Since 1998, outbreaks of the red turpentine beetle (RTB) have caused serious tree mortality in several provinces of northern China [1,2,3]. By 2004, the area occupied by RTB was estimated at more than 5 × 10⁵ ha, with 6 million pine trees killed, indirectly causing economic losses of ¥8.1 billion [4,5]. In recent years, following the distribution of pine forest, RTB has spread northward into Inner Mongolia and Liaoning [6]. Because of the short time since introduction, the outbreak of Chinese pine (Pinus tabuliformis) mortality caused by RTB is considered to be in its early stage and has not yet reached a large scale at the landscape level. The invasion of RTB has brought great losses to the affected areas, including timber loss, diminished recreational value, and altered ecosystem function. The large forest area makes traditional field investigations time-consuming; by the time damage is discovered, the spread of RTB is often already serious, making prevention and control difficult. It is therefore very important to monitor damage to pine forests and implement control measures in a timely manner.
RTB can damage multiple Pinus species, including Chinese pine (Pinus tabuliformis Carr.), Mongolian pine (Pinus sylvestris var. mongolica Litv.), and lacebark pine (Pinus bungeana Zucc.) [7]. After a successful RTB attack, Chinese pines go through a series of damage stages similar to the infestation process of the mountain pine beetle and spruce beetle [8,9]. RTB attacks pines in late spring or late summer (twice, corresponding to its emergence and flight periods) and, in the following few months, the needles remain green, with the pines in a “green attack” stage [9]. Within 1–2 years after RTB infestation, the needles change from green to yellow-green, and then to red, in what is called the “red attack” stage [9,10]. After 3–4 years of infestation, all the needles fall off and the pines enter the “gray attack” stage [9]. The different spectral responses caused by these changes in canopy color make it possible to monitor RTB using remote sensing. However, remote sensing studies of bark beetles have concentrated on the mountain pine beetle and spruce beetle [8,11,12,13,14,15,16], and few have monitored tree mortality caused by RTB.
Considering the importance of cross-scale interactions in beetle outbreak dynamics [17], monitoring bark beetles at both single-tree and forest stand scales can provide important information on the spatiotemporal dynamics of infestation [8]. Mountain pine beetle and spruce beetle damage has been successfully detected by remote sensing platforms with fine to medium spatial resolution, such as aerial photography, QuickBird, GeoEye-1, and Landsat. Meddens et al. [18] used multispectral aerial data to detect mountain pine beetle infestation and found that the highest accuracy (overall accuracy = 90%) was obtained when the imagery was resampled to the average crown size (2.4 m) of lodgepole pine. Coops et al. [12] reported a good correlation between red attack crowns identified in field investigation and red attack crowns classified from QuickBird imagery. Dennison et al. [19] found that red and gray attack tree crowns extracted from GeoEye-1 images correlated strongly with the actual survey (red attack: R² = 0.77; gray attack: R² = 0.70). At the stand scale, Landsat imagery has often been employed to detect bark beetle outbreaks and has achieved high classification accuracies [8,10,20,21] because of its spectral resolution, broad spatial coverage and spectral band range, and free access.
A limitation of the above research is that most studies mapped tree mortality at high damage levels, which leaves the capability of different-resolution imagery to detect beetle-caused tree mortality at low damage levels poorly understood. Large, interconnected patches of red attack trees reduce spectral variability and increase classification accuracy, because homogeneous regions are easily classified and mapped [20]. Few published papers have evaluated the capability of remote sensing images for monitoring tree mortality at low damage levels. White et al. [11] used IKONOS images to monitor mountain pine beetle red attack at a low attack level (less than 5% of the forest stand damaged). Their results indicated that red attack detection accuracy was 71% when a 4 m buffer was created around the pixels at 4 m resolution; however, this buffer introduced inaccuracy in locating individual damaged trees. Meddens et al. [22] investigated the capacity of Landsat images to quantify tree mortality caused by mountain pine beetle at different damage levels and found that pixels with less than 40% of pines in the red stage could not be detected at an acceptable accuracy. At both the single-tree and forest stand scales, higher resolution imagery is therefore needed to improve classification accuracy at low levels of tree damage.
Monitoring RTB at a single-tree scale using high spatial resolution imagery poses a new methodological challenge, because the spectral response of an individual tree is affected by changes in canopy illumination and background effects [23], which often produce a noisy “salt-and-pepper” effect [24]. Object-oriented classification provides an alternative approach for high spatial resolution image classification. In Bavarian Forest National Park, object-orientated image analysis yielded a 91.5% classification accuracy for detecting dead spruce caused by the spruce bark beetle [25]. In western Canada, Coggins et al. [26] explored a mountain pine beetle invasion model using high spatial resolution aerial images processed with an object-based method, achieving an average accuracy of 80.2%. However, object-based and pixel-based methods have rarely been compared for extracting trees damaged by bark beetles.
To address these gaps, our study had two main objectives: (1) to investigate the efficacy of GF2 satellite imagery for detecting individual trees damaged by RTB at a low damage level (stands with around 5% damage), distinguishing green trees, red attack trees, and gray attack trees. We compared the classification accuracy of GF2 images at the pan-sharpened 1 m resolution and the 4 m multispectral resolution and, for the 1 m imagery, applied and compared object-based and pixel-based methods for mapping individual damaged trees. (2) To investigate the efficacy of S2 satellite imagery for detecting tree mortality caused by RTB at a forest stand scale. We evaluated the classification accuracy of different spatial resolutions (10 and 20 m) for detecting the percentage of tree mortality in the early invasion stage (less than 50% damage within pixels). In addition, feature selection strategies were examined, and three machine learning algorithms—the classification and regression tree (CART), the random forest (RF), and the support vector machine (SVM)—were applied and compared at both scales to improve classification accuracy.

2. Materials and Methods

2.1. Study Area

The study areas were located in Dahebei Town, Chaoyang City, Liaoning Province (Figure 1). The total area of the two study areas (red boxes in Figure 1c) was about 100 hectares. These areas were chosen because Chinese pine mortality caused by RTB there was at a low damage level. According to the four grading criteria of White et al. [11], infested susceptible stands with a low damage level have 1%–5% red attack trees, and the infestation intensities of the two stands (Figure 1c) were around 5%. The areas were dominated by pure Chinese pine forest, with larch (Larix principis-rupprechtii Mayr) and some broad-leaved species also present. The mean annual precipitation is 450.9 mm and the mean annual temperature is 8.6 °C; the average monthly maximum and minimum temperatures are 15.5 and 2.3 °C, respectively [27]. The elevation ranges from 428 m (Dahebei Town) to 1018 m (Southdawa Mountains).

2.2. Reference Data

The unmanned aerial vehicle (UAV) images were collected in August 2018. The platform was a DJI Inspire 2 (DJI, Shenzhen, China) equipped with a high-resolution RGB (red, green, and blue) camera, a Zenmuse X5S with the focal length set to 15 mm. The flight lines covered the entire study areas and generated three sets of images. Considering the height of the mountains, flights were performed at approximately 220 m above the ground, with 90% frontal overlap and 85% side overlap. Image mosaicking, texture and mesh generation, and orthophoto generation were completed in Agisoft PhotoScan Professional [28]. According to the Agisoft PhotoScan processing report, the UAV imagery resolution was 3.8 cm and the reprojection error was 2 pixels.
To assist with the visual interpretation of the UAV images, four plots (30 × 30 m) were set up in the study areas in 2018 (Figure 1c). In each plot, we recorded the diameter at breast height (DBH) and height of all trees with a DBH ≥ 7.5 cm. In addition, we measured the crown diameters of 34 randomly selected trees in the plots; the average crown diameter of Chinese pine was 3.4 m. Damage stages were assigned to trees based on a visual assessment of canopy color, the percentage of needles remaining, and the presence of beetle boreholes at the base of the trunk (Table 1, Figure 2). Finally, the canopies of individual trees at different attack stages were delineated on the UAV images (Figure 1d). Across the two study areas, a total of 538 crowns were delineated (green trees: 199, red trees: 199, gray trees: 73).

2.3. Satellite Image Preparation

A GF2 image (Table 2) was acquired for mapping individual trees at different damage stages caused by RTB. GF2 is a Chinese high-resolution Earth observation satellite launched in August 2014. Using the calibration parameters given on the official website [29], we carried out radiometric calibration and atmospheric correction (FLAASH) on the GF2 image (processing level: L1A) to convert the original digital number (DN) values to radiance and reflectance. The satellite scenes were orthorectified using 30 m resolution ASTER GDEM data. Pan-sharpened 1 m resolution GF2 images were created by fusing the 4 m multispectral bands with the 1 m panchromatic band using the nearest-neighbor diffusion (NNDiffuse) algorithm.
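For readers who want to prototype the fusion step outside ENVI (where NNDiffuse is implemented), the sketch below shows a simplified ratio-based (Brovey-style) pan-sharpening in R with the terra package. This is explicitly not the NNDiffuse algorithm used in this study, and the file names are hypothetical:

```r
library(terra)

ms  <- rast("gf2_ms_4m.tif")    # 4 m multispectral bands (hypothetical path)
pan <- rast("gf2_pan_1m.tif")   # 1 m panchromatic band (hypothetical path)

# Resample the multispectral bands onto the 1 m panchromatic grid, then
# scale each band by the ratio of the pan band to the multispectral
# intensity (the cell-wise mean across bands)
ms_1m     <- resample(ms, pan, method = "bilinear")
intensity <- mean(ms_1m)
sharp     <- ms_1m * (pan / intensity)

writeRaster(sharp, "gf2_brovey_1m.tif", overwrite = TRUE)
```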
For detecting tree mortality at a forest stand scale, an S2 scene was obtained from the European Space Agency [30]. S2 is a multispectral platform with 13 bands at resolutions of 10, 20, and 60 m; detailed band information is shown in Table 2. The S2 L1C image was processed to Level-2A (the 60 m resolution bands were excluded) through atmospheric and terrain correction using the Sen2Cor processor [31].
To accurately locate in the satellite images the damaged trees delineated from the UAV images, the GF2 and S2 images were geometrically registered to obvious ground control points selected from the UAV images, with an RMSE of less than 3. The GF2 and S2 images were acquired on 23 June 2018 and 17 September 2018, respectively, under cloud-free conditions, close to the UAV acquisition time.

2.4. Extraction of Tree Mortality at a Single-Tree Scale

We classified the GF2 images to compare tree mortality detection accuracy among 1 m resolution images processed with object-based and pixel-based methods and 4 m resolution images (Figure 3). Multiresolution segmentation was used to segment the 1 m resolution GF2 images into objects in eCognition Developer [32,33]. The segmentation parameters were assessed against the scale of tree crowns: we iteratively optimized the segmentation at multiple levels of detail and adjusted the shape and compactness parameters. The final scale parameter was 10, and the shape and compactness homogeneity criteria were kept at the software defaults.
For the 1 m resolution GF2 images, shadow and herbaceous image objects were removed so that only sunlit tree crowns entered the subsequent classification. We developed a stepwise masking system and used histograms to find the thresholds giving the highest agreement between the masked areas and the reference data. Objects with a normalized difference vegetation index (NDVI) value greater than 0.58 were masked as vegetation. Next, the sunlit parts of the vegetated objects were separated using the intensity values I[RGB] from a hue, saturation, and intensity (HSI) transformation of the RGB bands [34], with values smaller than 0.0061 assigned to shadow. Lastly, the masked sunlit areas were separated into tree canopy and non-canopy (herbaceous) areas with an excess green vegetation index [35] threshold of ≤291 for the pan-sharpened images. In the original multispectral GF2 images, tree crowns and shadows were mixed, so we masked only the herbaceous areas, using an excess green [35] threshold of ≤276. The mask thresholds varied only slightly between the two study areas.
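The following R sketch (terra package) illustrates the logic of this stepwise mask with the thresholds reported above. The file name and band order are hypothetical, and the intensity channel is approximated here as the mean of the RGB bands rather than the full HSI transform of [34]:

```r
library(terra)

gf2 <- rast("gf2_pansharp_1m.tif")             # hypothetical file name
names(gf2) <- c("blue", "green", "red", "nir") # assumed band order

# Step 1: vegetation mask from NDVI
ndvi <- (gf2$nir - gf2$red) / (gf2$nir + gf2$red)
veg  <- ndvi > 0.58

# Step 2: drop shadowed vegetation with an intensity threshold
# (approximating the I channel of the HSI transform as the RGB mean)
intensity <- (gf2$red + gf2$green + gf2$blue) / 3
sunlit    <- veg & (intensity >= 0.0061)

# Step 3: split sunlit vegetation into tree canopy vs. herbaceous cover
# with the excess green index (2G - R - B); values <= 291 were herbaceous
exg    <- 2 * gf2$green - gf2$red - gf2$blue
canopy <- sunlit & (exg > 291)

gf2_canopy <- mask(gf2, canopy, maskvalues = FALSE)  # keep canopy cells only
```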
A total of 20 features were extracted from the GF2 images (Table 3). The basic spectral information included the mean and ratio values of each band and the three channels of the HSI transform of the RGB bands. Indices such as the NDVI and the red–green index (RGI) were tested because they have been used successfully to identify pest disturbance in previous research [12,13,14,18]. Six kinds of textural information were obtained from the gray level co-occurrence matrix (GLCM) [36]. We did not use geometry information in the object-based method because Latifi et al. [37] showed that spatial metrics did not play a major role in characterizing infested stands. CART can be used to filter features according to their importance ranking [38,39]; among the three feature selection methods studied by Li et al. [40], the features selected by CART gave the highest tree species classification accuracy, and CART took the shortest time to select features. We therefore used CART to select subsets of features (contribution rate ≥ 10%) prior to classification, reducing redundancy and intercorrelation among the candidate variables listed in Table 3.
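As an illustration of this selection step, here is a minimal sketch in R using rpart (a standard CART implementation); the `samples` data frame is assumed (not from the paper) to hold the Table 3 features plus a class column, and importances are rescaled so that the top feature scores 1, as in Table 4:

```r
library(rpart)  # CART implementation

# 'samples': one row per training object/pixel, 20 candidate feature
# columns (Table 3) plus a 'class' factor (green/red/gray tree)
cart_fit <- rpart(class ~ ., data = samples, method = "class")

# Rescale variable importance so the top feature scores 1
imp <- cart_fit$variable.importance / max(cart_fit$variable.importance)

# Keep features with a contribution rate of at least 10%
selected <- names(imp)[imp >= 0.10]
print(selected)
```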

2.5. Extraction of the Percentage of Tree Mortality at a Forest Stand Scale

We compared the classification accuracy of the original 10 m resolution bands with that of the 20 m resolution bands of the S2 images for detecting the percentage of tree mortality caused by RTB (Figure 3). Before extracting tree mortality from the S2 images, we removed non-forest land and herbaceous areas using the same histogram-thresholding method as for the 4 m resolution GF2 images. Previous studies have estimated damage using the proportion of reference classification pixels (from aerial images) within 30 m superpixels and the point-counting method [46,47]. We randomly and evenly sampled 60 and 40 pixels in the 10 and 20 m resolution S2 images, respectively, and counted the damaged tree crowns within each pixel from the reference UAV images. On average, a 10 m S2 pixel contained 8 tree crowns and a 20 m pixel contained 25. Based on the number of tree crowns within the S2-sized pixels, we defined three damage percentage classes: 0%, <15%, and 15%–50%. Long et al. [47] found that Landsat images could not reliably distinguish 5% from 0% pixel damage. Therefore, to improve the classification accuracy of the 20 m resolution images, pixels containing only one damaged tree were relabeled as healthy (0%). As for the GF2 images, CART was used to select important feature variables for the S2 classification; the difference was that we additionally included the MSI and NDMI indices in the feature selection for the 20 m resolution images, because these indices have been reported to be sensitive to beetle infestation [22,48,49].
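A minimal sketch of the pixel-labeling rule described above; the function name and the crown-count inputs are illustrative, not from the paper:

```r
# Label an S2 pixel from crown counts taken from the reference UAV
# delineations. 'relabel_single' implements the 20 m rule above: a pixel
# with exactly one damaged crown is treated as healthy.
label_pixel <- function(n_total, n_damaged, relabel_single = FALSE) {
  if (relabel_single && n_damaged == 1) n_damaged <- 0
  pct <- 100 * n_damaged / n_total
  if (pct == 0) {
    "0%"
  } else if (pct < 15) {
    "<15%"
  } else {
    "15-50%"  # damage never exceeded 50% in these low-damage stands
  }
}

label_pixel(n_total = 8,  n_damaged = 1)                         # "<15%"
label_pixel(n_total = 25, n_damaged = 1, relabel_single = TRUE)  # "0%"
```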
We compared the ability of S2 superpixels (10 and 20 m resolution) to detect the three damage percentage classes with the three machine learning algorithms described in Section 2.6. We then used the best-performing algorithm to determine the classification accuracy of a binary (damaged/live) characterization at different damage percentages.

2.6. Classification

We applied three machine learning algorithms—CART, RF, and SVM—to map individually damaged trees and the percentage of tree mortality. We used the statistical computing environment R with the packages C50, e1071, and randomForest. RF and SVM have been widely used and perform well in land cover classification, while CART is easy to run and its rules are easy to interpret [50,51]. CART is a binary recursive partitioning algorithm that builds tree nodes from training data [38]. RF is an improvement on the single decision tree, consisting of a large number of decision trees; new data are classified by the majority vote of all constructed trees [52]. We kept the mtry and ntree parameters of RF at their default values: the square root of the number of features and 500 trees, respectively. The aim of the SVM algorithm is to find the optimal hyperplane as a decision function in a high-dimensional space and to classify input vectors into different classes [53]. In our study, a linear kernel was chosen and the cost parameter was set to 10² for the SVM.
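Assuming a `train` data frame holding the selected features and a `class` factor (an illustrative setup, not the authors' exact script), the three classifiers can be fit as follows with the packages named above, using the reported parameter values:

```r
library(C50)           # C5.0 decision tree (used for the CART classifier)
library(randomForest)  # random forest
library(e1071)         # support vector machine

fit_c50 <- C5.0(class ~ ., data = train)
fit_rf  <- randomForest(class ~ ., data = train,
                        ntree = 500)            # mtry defaults to sqrt(p)
fit_svm <- svm(class ~ ., data = train,
               kernel = "linear", cost = 100)   # cost = 10^2

pred <- predict(fit_svm, newdata = test)        # 'test' is the held-out 50%
```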
Objects or pixels of each class were randomly split into 50% training and 50% evaluation datasets. The accuracies of identifying individually damaged trees and the percentage of tree mortality were evaluated using the overall accuracy (OA), producer's accuracy (PA), user's accuracy (UA), and Kappa coefficient derived from confusion matrices built with cross-validation samples [54]. The Kappa coefficient is a strict estimate of classification accuracy because it corrects for agreement that may occur by chance; Kappa values are commonly interpreted as “poor” (<0.4), “moderate” (0.4–0.8), and “strong” (>0.8) agreement [18,55]. We repeated the random selection of training and validation data 10 times and averaged the metrics of the 10 confusion matrices. For the GF2 and S2 images, differences in classification performance (OA) among combinations of resolution and algorithm were analyzed using one-way analysis of variance (ANOVA), and pairwise differences were evaluated using t-tests.
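A sketch of one of the ten evaluation rounds, deriving OA, PA, UA, and the Kappa coefficient from a confusion matrix in base R; the SVM fit stands in for whichever classifier is being evaluated, and `dat` is an assumed data frame of features plus a `class` column:

```r
# Random 50/50 split of 'dat' (features plus a 'class' factor column)
idx   <- sample(nrow(dat), size = floor(nrow(dat) / 2))
train <- dat[idx, ]
test  <- dat[-idx, ]

fit <- e1071::svm(class ~ ., data = train, kernel = "linear", cost = 100)
cm  <- table(predicted = predict(fit, test), reference = test$class)

n  <- sum(cm)
oa <- sum(diag(cm)) / n        # overall accuracy
pa <- diag(cm) / colSums(cm)   # producer's accuracy (per reference class)
ua <- diag(cm) / rowSums(cm)   # user's accuracy (per predicted class)

# Kappa: observed agreement corrected for the chance agreement expected
# from the row/column marginals
pe    <- sum(rowSums(cm) * colSums(cm)) / n^2
kappa <- (oa - pe) / (1 - pe)
```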

3. Results

3.1. Comparisons of Classification on a Single-Tree Scale

According to the CART classifier, the feature variables contributing ≥ 10% to the GF2 classification are listed in Table 4. The selected features were applied in the subsequent classifications combining the different resolutions and algorithms.
There were highly significant differences among the combinations of resolutions and algorithms (p < 0.001). (1) For images of the same resolution and method, there was a significant difference between CART and the other two algorithms, and no significant difference between SVM and RF, except for the 1 m resolution images with the pixel-based method. Regardless of resolution, SVM always had the highest classification accuracy and CART the lowest (Figure 4). (2) For the same classification algorithm, there was no significant difference among the three image types with the CART and RF classifiers. With SVM, which had the highest classification accuracy, there was a significant difference between the pixel-based and object-based methods for the 1 m resolution images, but no significant difference between the 1 and 4 m resolution images with the pixel-based method. The 1 m resolution GF2 images with the pixel-based method had the highest classification accuracy of the three image types (Figure 4).
As a result, the combination of 1 m resolution GF2 images, the pixel-based method, and the SVM algorithm gave the highest classification accuracy, as shown in Figure 5. The overall accuracy and Kappa coefficient of this combination were 77.7% and 0.58, respectively (Table 5). The PA and UA for red trees were 80.3% and 74.8%, respectively. However, the PA and UA for gray trees were only 1.3% and 18.4%; most gray trees were misclassified as red or green trees.

3.2. Comparisons of Classification at a Forest Stand Scale

Prior to classification, 9 and 11 feature variables were selected for the 10 and 20 m resolution S2 images, respectively (Table 4).
There were highly significant differences among the combinations of resolutions and algorithms for the S2 images (p < 0.001). (1) For images of the same resolution, the SVM and RF classifiers performed much better than CART, with no significant difference between SVM and RF (Figure 6). (2) For the same classification algorithm, there was a significant difference between the 10 and 20 m resolution images, with the 10 m images classified more accurately (Figure 6). However, no matter which combination of resolution and algorithm was used, classification performance was poor at this low damage level of tree mortality, with OA below 60% (Figure 6).
We then carried out binary classification (live/damaged) on the 10 and 20 m resolution S2 images with the SVM algorithm, paying special attention to the accuracy of the damage class in addition to OA. The accuracy assessment is shown in Table 6. The >15% damage class of the 10 m resolution images had the highest classification accuracy, with an OA of 81.0% and a Kappa coefficient of 0.62; for the <15% damage class, the OA and Kappa were 74.9% and 0.49, respectively. The binary classification accuracies of the 10 m resolution images were substantially higher than those of the 20 m resolution images (Table 6).

4. Discussion

4.1. Extraction of Tree Mortality at a Single-Tree Scale

In the early outbreak stage of tree mortality, the 1 m resolution GF2 images produced the highest PA for red attack trees (80.3%) with the pixel-based method. This result was higher than that of White et al. [11], who reported a red attack detection accuracy of 70.1% in areas of low infestation. The reason might be related to our use of optimal variables and appropriate algorithms. In addition, following Meddens et al. [18], the classification accuracy might be higher if the GF2 images were resampled to a 3.4 m pixel size, matching the average tree crown diameter.
Compared with previous studies [8,18], the detection accuracy of the gray attack stage was very low in our research. Besides the damage level itself (detection accuracy is generally higher at high damage levels than at low ones [11]), the main reason might be the structural characteristics of the host tree species. Lodgepole pine and spruce have dense branches, as shown by Meddens et al. [18] and Hart et al. [8], and their crowns remain compact after the needles fall in the gray attack stage, making them easy to detect in remote sensing images. In contrast, the host species in this study, Chinese pine, has sparse branches and a loosened crown shape after the needles fall (Figure 2c), so gray attack trees were easily mixed with understory vegetation and confused with green trees and red attack trees during classification.
In our research, we could not fairly compare the object-based classification method with the pixel-based method. The spatial resolution of the GF2 images was not fine enough, so image segmentation could not follow tree crowns effectively (Figure 7a), and the reference objects were mixed with other classes, making them more heterogeneous. In contrast, finer pixels could fall entirely within a reference tree crown (Figure 7b), making the classified pixels more homogeneous. The higher accuracy of the pixel-based method was expected, because more homogeneous pixels generally result in better class separability [18,47]. Object-based classification may perform better with finer-resolution images, such as WorldView-2.
In general, although we could not reliably detect gray attack Chinese pines, our methods demonstrated the usefulness of GF2 imagery for detecting red attack trees, which were more prevalent in the study areas while stands were at a low damage level.

4.2. Extraction of Tree Mortality at a Forest Stand Scale

We explored the ability to identify the percentage of tree mortality within S2-sized pixels at a low level of tree mortality. The result, unsatisfactory for forest managers, was similar to that of Meddens et al. [22], who reported that classification accuracy fell below 50% when the pixel damage percentage was less than 40%. Those authors examined two classes (undisturbed and damaged stands), whereas we studied three classes, in which the <15% damage class was seriously confused with both the healthy class and the >15% damage class (Table 7). When we reduced the task to a binary classification (healthy vs. damaged), the accuracy increased significantly.
In Medden’s research [22], the red stage class accuracies of <15% damage percentage and >15% damage percentage (<50%) within Landsat-scale superpixels were about 5% and 30%, respectively. A comparison of the results with either 10 or 20 m resolution images of the S2 satellite (Table 6) supported the idea that, with the increase in resolution of medium resolution satellite images, the monitoring accuracy of tree mortality was increased at a stand scale. The authors also reported that multidate classification was greater than single-date classification at lower levels of tree mortality. However, multitemporal images are not always available due to the presence of clouds. It would be a good choice to use a single S2 10 m resolution image to monitor the early outbreak of tree mortality caused by RTB. The disadvantage was that, due to the lack of damage pixels, we could only divide three categories of damage percentage according to the distribution of the damage pixels, which could not be divided into multiple damage stages with 10% as an interval at a low damage level, as studied by Meddens et al. [22]

4.3. Feature Variables and Classification Algorithm

In previous studies, the RGI and NDVI have been used to extract bark-beetle-caused tree mortality [8,13,14,18,22]. For the GF2 images, we found that the hue channel of the HSI transform, the ratios of the visible bands, and the mean value of the red band played important roles in classification at a single-tree scale (Table 4). Compared with spectral information, texture information played a minor role.
In the S2 images, the mean values of the red and VEG1 bands and the transformed HSI channels ranked highly in feature importance, while textural information again played a secondary role. Fassnacht et al. [56], who monitored tree mortality caused by European bark beetles using aerial hyperspectral imagery, considered that their selected spectral regions agreed fairly well with the spectral bands of S2, and our findings are consistent with theirs. However, among their selected spectral regions, 2190 nm—corresponding to the SWIR2 band of S2—did not play an important role in our classification, and the spectral indices generated from the shortwave infrared bands of the 20 m resolution images were not selected (Table 4). The reason might be that the shortwave infrared bands are not sensitive at a low damage level. Overall, spectral information was more important than index and texture information at both the single-tree and forest stand scales.
Among the three machine learning algorithms, CART had the worst classification accuracy. RF and SVM performed similarly, but in most cases SVM achieved a higher accuracy than RF. SVM, widely recognized as particularly adept at handling small training samples [8,57], proved successful here. RF does have an advantage, though, when the data contain numerous weak explanatory variables, as previously established [52,58].

5. Conclusions

In this paper, we proposed an approach for detecting tree mortality caused by RTB at the single-tree and forest stand scales by combining GF2 and S2 imagery. This is the first study to combine GF2 and S2 images to map tree mortality caused by RTB at a low damage level. The main conclusions of our analysis are as follows:
(1) Different scales of RTB-caused tree mortality could be accurately detected in the early outbreak stage using GF2 and S2 imagery;
(2) SVM and RF performed well in the extraction of tree mortality; nevertheless, SVM achieved a relatively higher overall accuracy and was considered a useful algorithm for small training samples;
(3) In classification at the early stage of tree mortality, spectral information was more important than index and texture information.
Mapping RTB-caused tree mortality at different scales will be very useful for understanding the spatial and temporal patterns of RTB outbreaks. More importantly, our results can help forest managers choose satellite images according to their needs for monitoring areas of RTB occurrence and taking timely control measures in the early outbreak stage. Our study also demonstrated the usefulness of UAV imagery as a reference dataset for developing and evaluating satellite image classifications, and thus its ability to replace more time-consuming ground surveys and more expensive aerial imagery. In the future, we will further use the optical and thermal information of remote sensing images to obtain RTB damage information in the green attack stage [9,59].

Author Contributions

Conceptualization, Z.Z., L.Y., L.R. and Y.L.; Data curation, Z.Z.; Formal analysis, Z.Z., L.Y., Z.L. and B.G.; Investigation, Z.Z., B.G. and L.W.; Methodology, Z.Z.; Writing—original draft, Z.Z. and L.W.; Writing—review and editing, Z.Z., L.Y., L.R. and Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research & Development Program of China (2018YFD0600200) and Beijing’s Science and Technology Planning Project (Z191100008519004).

Acknowledgments

The Heilihe National Nature Reserve and Dahebei Forest Farm supported the field experiment.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yan, Z.; Sun, J.H.; Don, O.; Zhang, Z. The red turpentine beetle, Dendroctonus valens LeConte (Scolytidae): An exotic invasive pest of pine in China. Biodivers. Conserv. 2005, 14, 1735–1760.
  2. Pan, J.; Wang, T.; Wen, J.B.; Luo, Y.Q.; Zong, S.X. Changes in invasion characteristics of Dendroctonus valens after introduction into China. Acta Ecol. Sin. 2011, 31, 1970–1975.
  3. Pan, J.; Wang, T.; Zong, S.X.; Wen, J.B.; Luo, Y.Q. Geostatistical analysis and sampling technique on spatial distribution pattern of Dendroctonus valens population. Acta Ecol. Sin. 2011, 31, 195–202.
  4. Zhao, Z.Y.; Shen, F.Y.; Liu, J.L. Red turpentine beetle are threatening China's forestry production. Plant Quar. 2002, 16, 86–88.
  5. Xu, H.R.; Li, N.C.; Li, Z.Y. The analysis of outbreak reason and spread directions of Dendroctonus valens. Plant Quar. 2006, 20, 278–280.
  6. Tao, G.Z.; Cong, X.W.; Liu, S.S. Effective pre-endanger management and control mechanism and technology scheme for Dendroctonus valens. J. Hebei For. Sci. Technol. 2019, 1, 68–70.
  7. Yao, J. Study on the behavior trend of bark of four conifers by Dendroctonus valens. Plant Quar. 2011, 25, 1–5.
  8. Hart, S.J.; Veblen, T.T. Detection of spruce beetle-induced tree mortality using high- and medium-resolution remotely sensed imagery. Remote Sens. Environ. 2015, 168, 134.
  9. Wulder, M.; Dymond, C.; White, J.; Leckie, D.; Carroll, A. Surveying mountain pine beetle damage of forests: A review of remote sensing opportunities. For. Ecol. Manag. 2006, 221, 27–41.
  10. Wulder, M.A.; White, J.C.; Bentz, B.; Alvarez, M.F.; Coops, N.C. Estimating the probability of mountain pine beetle red-attack damage. Remote Sens. Environ. 2006, 101, 150–166.
  11. White, J.; Wulder, M.; Brooks, D.; Reich, R.; Wheate, R. Detection of red attack stage mountain pine beetle infestation with high spatial resolution satellite imagery. Remote Sens. Environ. 2005, 96, 340–351.
  12. Coops, N.C.; Johnson, M.; Wulder, M.A.; White, J.C. Assessment of QuickBird high spatial resolution imagery to detect red attack damage due to mountain pine beetle infestation. Remote Sens. Environ. 2006, 103, 67–80.
  13. Wulder, M.A.; White, J.C.; Coops, N.C.; Butson, C.R. Multi-temporal analysis of high spatial resolution imagery for disturbance monitoring. Remote Sens. Environ. 2008, 112, 2729–2740.
  14. Hicke, J.A.; Logan, J. Mapping whitebark pine mortality caused by a mountain pine beetle outbreak with high spatial resolution satellite imagery. Int. J. Remote Sens. 2009, 30, 4427–4441.
  15. DeRose, R.J.; Long, J.; Ramsey, R. Combining dendrochronological data and the disturbance index to assess Engelmann spruce mortality caused by a spruce beetle outbreak in southern Utah, USA. Remote Sens. Environ. 2011, 115, 2342–2349.
  16. Makoto, K.; Tani, H.; Kamata, N. High-resolution multispectral satellite image and a postfire ground survey reveal prefire beetle damage on snags in southern Alaska. Scand. J. For. Res. 2013, 28, 1–5.
  17. Raffa, K.F.; Aukema, B.H.; Bentz, B.J.; Carroll, A.L.; Hicke, J.A.; Turner, M.G.; Romme, W.H. Cross-scale drivers of natural disturbances prone to anthropogenic amplification: The dynamics of bark beetle eruptions. BioScience 2008, 58, 501–517.
  18. Meddens, A.; Hicke, J.; Vierling, L. Evaluating the potential of multispectral imagery to map multiple stages of tree mortality. Remote Sens. Environ. 2011, 115, 1632–1642.
  19. Dennison, P.E.; Brunelle, A.R.; Carter, V.A. Assessing canopy mortality during a mountain pine beetle outbreak using GeoEye-1 high spatial resolution satellite data. Remote Sens. Environ. 2010, 114, 2431–2435.
  20. Franklin, S.; Wulder, M.; Skakun, R.S.; Carroll, A.L. Mountain pine beetle red-attack forest damage classification using stratified Landsat TM data in British Columbia, Canada. Photogramm. Eng. Remote Sens. 2003, 69, 283–288.
  21. Skakun, R.; Wulder, M.; Franklin, S. Sensitivity of the thematic mapper enhanced wetness difference index to detect mountain pine beetle red-attack damage. Remote Sens. Environ. 2003, 86, 433–443.
  22. Meddens, A.; Hicke, J.; Vierling, L.; Hudak, A.T. Evaluating methods to detect bark beetle-caused tree mortality using single-date and multi-date Landsat imagery. Remote Sens. Environ. 2013, 132, 49–58.
  23. Pu, R.; Landry, S. A comparative analysis of high spatial resolution IKONOS and WorldView-2 imagery for mapping urban tree species. Remote Sens. Environ. 2012, 124, 516–533.
  24. Yu, Q.; Gong, P.; Clinton, N.; Biging, G.; Kelly, M. Object-based detailed vegetation classification with airborne high spatial resolution remote sensing imagery. Photogramm. Eng. Remote Sens. 2006, 72, 799–811.
  25. Heurich, M.; Ochs, T.; Andresen, T.; Schneider, T. Object-orientated image analysis for the semi-automatic detection of dead trees following a spruce bark beetle (Ips typographus) outbreak. Eur. J. For. Res. 2009, 129, 313–324.
  26. Coggins, S.; Coops, N.C.; Wulder, M.A. Initialization of an insect infestation spread model using tree structure and spatial characteristics derived from high spatial resolution digital aerial imagery. Can. J. Remote Sens. 2008, 34, 485–502.
  27. China Meteorological Data Service Center. Available online: http://data.cma.cn/ (accessed on 10 October 2018).
  28. Agisoft PhotoScan. Available online: http://www.agisoft.com/ (accessed on 11 September 2018).
  29. China Centre for Resources Satellite Data and Application. Available online: http://www.cresda.com/CN/ (accessed on 9 June 2019).
  30. Copernicus Open Access Hub. Available online: https://scihub.copernicus.eu/ (accessed on 9 August 2019).
  31. Müller-Wilm, U.; Louis, J.; Richter, R.; Gascon, F.; Niezette, M. Sentinel-2 Level-2A prototype processor: Architecture, algorithms and first results. In Proceedings of the ESA Living Planet Symposium, Edinburgh, UK, 9–13 September 2013.
  32. Multiresolution Segmentation—An Optimization Approach for High Quality Multi-Scale Image Segmentation. Available online: http://www.isprs.org/proceedings/xxxviii/4-c7/pdf/Happ_143.pdf (accessed on 20 August 2019).
  33. Benz, U.; Hofmann, P.; Willhauck, G.; Lingenfelder, I.; Heynen, M. Multi-resolution, object-oriented fuzzy analysis of remote sensing data for GIS-ready information. ISPRS J. Photogramm. Remote Sens. 2004, 58, 239–258.
  34. Waser, L.; Küchler, M.; Jütte, K.; Stampfer, T. Evaluating the potential of WorldView-2 data to classify tree species and different levels of ash mortality. Remote Sens. 2014, 6, 4515–4545.
  35. Torres-Sánchez, J.; Peña-Barragán, J.M.; De Castro, A.; López-Granados, F. Multi-temporal mapping of the vegetation fraction in early-season wheat fields using images from UAV. Comput. Electron. Agric. 2014, 103, 104–113.
  36. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural features for image classification. IEEE Trans. Syst. Man Cybern. 1973, 3, 610–621.
  37. Latifi, H.; Fassnacht, F.E.; Schumann, B.; Dech, S. Object-based extraction of bark beetle (Ips typographus L.) infestations using multi-date LANDSAT and SPOT satellite imagery. Prog. Phys. Geogr. 2014, 38, 755–785.
  38. Everitt, B.S. Classification and regression trees. In Encyclopedia of Statistics in Behavioral Science; Everitt, B.S., Howell, D.C., Eds.; John Wiley & Sons Ltd.: Chichester, UK, 2005; pp. 287–290. ISBN 978-0-470-86080-9.
  39. Bittencourt, H.R.; Clarke, R.T. Use of classification and regression trees (CART) to classify remotely-sensed digital images. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Toulouse, France, 21–25 July 2003; pp. 3751–3753.
  40. Li, Z.; Zhang, Q.Y.; Peng, D.L. The time phase and method selection of tree species classification based on GF-2 remote sensing images. Chin. J. Appl. Ecol. 2019, 30, 4059–4070.
  41. Reference Book of eCognition Developer. Available online: http://www.ecognition.com/ (accessed on 20 October 2018).
  42. Tucker, C. Red and photographic infrared linear combinations for monitoring vegetation. Remote Sens. Environ. 1979, 8, 127–150.
  43. Hilker, T.; Wulder, M.A.; Coops, N.C. A new data fusion model for high spatial- and temporal-resolution mapping of forest disturbance based on Landsat and MODIS. Remote Sens. Environ. 2009, 113, 1613–1627.
  44. Wilson, E.; Sader, S. Detection of forest harvest type using multiple dates of Landsat TM imagery. Remote Sens. Environ. 2002, 80, 385–396.
  45. Rock, B.N.; Vogelmann, J.E.; Williams, D.L. Field and airborne spectral characterization of suspected damage in red spruce (Picea rubens) from Vermont. In Proceedings of the 11th International Symposium on Machine Processing of Remotely Sensed Data, West Lafayette, IN, USA, 25–27 June 1985; pp. 71–81.
  46. Bellhouse, D.R. Area estimation by point-counting techniques. Biometrics 1981, 37, 303–312.
  47. Long, J.A.; Lawrence, R.L. Mapping percent tree mortality due to mountain pine beetle damage. For. Sci. 2016, 62, 392–402.
  48. Goodwin, N.; Magnussen, S.; Coops, N.; Wulder, M. Curve fitting of time-series Landsat imagery for characterizing a mountain pine beetle infestation. Int. J. Remote Sens. 2010, 31, 3263–3271.
  49. Yu, L.F.; Huang, J.X.; Zong, S.X.; Huang, H.G.; Luo, Y.Q. Detecting shoot beetle damage on Yunnan pine using Landsat time-series data. Forests 2018, 9, 39.
  50. Jing, W.; Yang, Y.; Yue, X.; Zhao, X. Mapping urban areas with integration of DMSP/OLS nighttime light and MODIS data using machine learning techniques. Remote Sens. 2015, 7, 12419–12439.
  51. Kaszta, Ż.; Van De Kerchove, R.; Ramoelo, A.; Cho, M.; Madonsela, S.; Mathieu, R.; Wolff, E. Seasonal separation of African savanna components using WorldView-2 imagery: A comparison of pixel- and object-based approaches and selected classification algorithms. Remote Sens. 2016, 8, 763.
  52. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
  53. Cortes, C.; Vapnik, V.N. Support vector networks. Mach. Learn. 1995, 20, 273–297.
  54. Congalton, R. A review of assessing the accuracy of classifications of remotely sensed data. Remote Sens. Environ. 1991, 37, 35–46.
  55. Landis, J.R.; Koch, G.G. The measurement of observer agreement for categorical data. Biometrics 1977, 33, 159–174.
  56. Fassnacht, F.E.; Latifi, H.; Ghosh, A.; Joshi, P.K.; Koch, B. Assessing the potential of hyperspectral imagery to map bark beetle-induced tree mortality. Remote Sens. Environ. 2014, 140, 533–548.
  57. Abdel-Rahman, E.M.; Mutanga, O.; Adam, E.; Ismail, R. Detecting Sirex noctilio grey-attacked and lightning-struck pine trees using airborne hyperspectral data, random forest and support vector machines classifiers. ISPRS J. Photogramm. Remote Sens. 2014, 88, 48–59.
  58. Lawrence, R.; Wood, S.; Sheley, R. Mapping invasive plants using hyperspectral imagery and Breiman Cutler classifications (RandomForest). Remote Sens. Environ. 2006, 100, 356–362.
  59. Abdullah, H.; Darvishzadeh, R.; Skidmore, A.K.; Heurich, M. Sensitivity of Landsat-8 OLI and TIRS data to foliar properties of early stage bark beetle (Ips typographus, L.) infestation. Remote Sens. 2019, 11, 398.
Figure 1. (a) Liaoning Province in China; (b) true-color Gaofen-2 (GF2) image from 2018; (c) the study areas and field plots in the right and left corners of the GF2 image; (d) unmanned aerial vehicle (UAV) images in one of the study areas.
Figure 2. Examples of green trees (a), red trees (b), and gray trees (c) in UAV images.
Figure 3. Flowchart of operations used to classify GF2 and S2 images.
Figure 4. Overall accuracy (OA) of classification for different combinations of resolutions and algorithms using GF2 images (significance level: 0.05). a: 1 m resolution images with the object-based method; b: 1 m resolution images with the pixel-based method; c: 4 m resolution images. Error bars indicate the standard deviations of the 10 classifications. A–D: different letters indicate significant pairwise differences among combinations of resolution and algorithm.
Figure 5. Detailed example of the image classification of the study areas. (a) 1 m resolution image, pixel-based method, SVM algorithm; (b) 1 m resolution image, pixel-based method, RF algorithm; (c) 1 m resolution image, pixel-based method, C50 algorithm; (d) 1 m resolution image, object-based method, SVM algorithm; (e) 4 m resolution image, SVM algorithm; (f) GF2 true-color image; (g) UAV image.
Figure 6. OA of classification for combinations of resolutions and algorithms using S2 images. a: 10 m resolution images; b: 20 m resolution images. Error bars indicate the standard deviations of the ten classifications. A–C: different letters indicate significant pairwise differences among combinations of resolution and algorithm.
Figure 7. Subset of selected samples from GF2 1 m resolution images: (a) object-based method; (b) pixel-based method. Blue polygons correspond to trees delineated from UAV images; dark green corresponds to green trees, purple to red trees, white to gray trees, olive green to tree canopy areas, cyan to herbaceous areas, and black to shadow.
Table 1. Three stages of individual trees according to the sample plot survey.

| Code | Class Name | Tree Status |
|------|------------|-------------|
| 1 | Green tree | Live or current beetle attack; needles green |
| 2 | Red tree | Beetle attack of about two years; needles orange or red |
| 3 | Gray tree | Beetle attack of more than three years; no needles |
Table 2. GF2 and Sentinel-2 (S2) satellite sensor characteristics.

| Satellite | Spatial Resolution (m) | Spectral Bands (μm) |
|-----------|------------------------|---------------------|
| GF-2 | 1 | Pan: 0.45–0.90 |
| GF-2 | 4 | Blue: 0.45–0.52; Green: 0.52–0.59; Red: 0.63–0.69; NIR: 0.77–0.89 |
| Sentinel-2 | 10 | Blue: 0.490; Green: 0.560; Red: 0.665; NIR: 0.842 |
| Sentinel-2 | 20 | VEG1: 0.705; VEG2: 0.740; VEG3: 0.783; VEG4: 0.865; SWIR1: 1.610; SWIR2: 2.190 |

Pan: panchromatic; NIR: near-infrared; VEG: vegetation red edge; SWIR: short wave infrared.
Table 3. All features extracted from GF2 and S2 images.

| Category | Feature | Description | Reference |
|----------|---------|-------------|-----------|
| Spectral information | Mean | Mean of values in objects/pixels of each band | [41] |
| Spectral information | Ratio | Band mean divided by sum of all bands | [41] |
| Spectral information | Transformed HSI | RGB bands color-transformed into the hue (H), saturation (S), and intensity (I) channels | [34] |
| Indices | NDVI | Normalized difference vegetation index: (NIR − RED)/(NIR + RED) | [42] |
| Indices | RVI | Ratio vegetation index: NIR/RED | [43] |
| Indices | RGI | Red–green index: RED/GREEN | [12] |
| Indices | NDMI * | Normalized difference moisture index: (NIR − MIR)/(NIR + MIR) | [44] |
| Indices | MSI * | Moisture stress index: MIR/NIR | [45] |
| Textural information | GLCM_H | GLCM homogeneity of all directions | [36] |
| Textural information | GLCM_Con | GLCM contrast of all directions | [36] |
| Textural information | GLCM_D | GLCM dissimilarity of all directions | [36] |
| Textural information | GLCM_E | GLCM entropy of all directions | [36] |
| Textural information | GLCM_S | GLCM standard deviation of all directions | [36] |
| Textural information | GLCM_Cor | GLCM correlation of all directions | [36] |

* Features only for S2 20 m resolution images.
Table 4. Summary of features selected from GF2 and S2 images.

| 1 m-Object | Import. | 1 m-Pixel | Import. | 4 m | Import. | 10 m | Import. | 20 m | Import. |
|---|---|---|---|---|---|---|---|---|---|
| HSI_H | 1 | HSI_H | 1 | Mean red | 1 | Mean red | 1 | Mean VEG1 | 1 |
| Ratio blue | 0.73 | Ratio blue | 0.69 | Ratio green | 0.51 | HSI_H | 0.53 | HSI_S | 0.98 |
| RGI | 0.61 | HSI_S | 0.57 | HSI_H | 0.36 | Ratio red | 0.51 | HSI_H | 0.70 |
| HSI_S | 0.39 | Mean NIR | 0.55 | Ratio red | 0.29 | Ratio green | 0.47 | GLCM_Cor | 0.48 |
| Mean NIR | 0.37 | Ratio red | 0.42 | GLCM_H | 0.23 | RGI | 0.45 | Ratio NIR | 0.41 |
| GLCM_Con | 0.24 | Ratio green | 0.42 | Ratio NIR | 0.21 | HSI_S | 0.44 | GLCM_E | 0.37 |
| Mean red | 0.22 | Mean red | 0.39 | Mean green | 0.16 | Ratio NIR | 0.29 | Ratio green | 0.31 |
| Mean green | 0.19 | RGI | 0.17 | GLCM_E | 0.16 | GLCM_E | 0.28 | Ratio SWIR1 | 0.26 |
| RVI | 0.18 | | | HSI_S | 0.14 | Ratio blue | 0.16 | Ratio red | 0.25 |
| GLCM_D | 0.11 | | | Mean blue | 0.14 | | | HSI_I | 0.24 |
| Ratio red | 0.10 | | | | | | | Ratio VEG1 | 0.18 |
| | | | | | | | | Mean green | 0.17 |
| | | | | | | | | Ratio VEG2 | 0.10 |

Import.: feature importance.
Table 5. Confusion matrix of GF2 1 m resolution images based on the pixel method and support vector machine (SVM) algorithm.

| | Green Tree | Red Tree | Gray Tree | Total | UA |
|---|---|---|---|---|---|
| Green tree | 875 | 130.4 | 86 | 1091.4 | 0.802 |
| Red tree | 106.5 | 545.8 | 76.9 | 729.2 | 0.748 |
| Gray tree | 5.5 | 3.8 | 2.1 | 11.4 | 0.184 |
| Total | 987 | 680 | 165 | 1832 | |
| PA | 0.887 | 0.803 | 0.013 | OA | 0.777 |
| | | | | Kappa | 0.58 |
Table 6. Accuracy assessment of binary classification for S2 images using the SVM algorithm. For each resolution, the >15% damage percentage class covers 15% < x < 50%.

| Resolution | Damage Percentage | OA | Kappa | PA for Damage | UA for Damage |
|---|---|---|---|---|---|
| 10 m | <15% | 0.749 | 0.49 | 0.684 | 0.722 |
| 10 m | >15% | 0.810 | 0.62 | 0.777 | 0.798 |
| 20 m | <15% | 0.676 | 0.31 | 0.543 | 0.599 |
| 20 m | >15% | 0.715 | 0.35 | 0.555 | 0.560 |
Table 7. Confusion matrix for the three-class classification with the SVM algorithm using S2 10 m resolution images; the >15% class covers 15% < x < 50%.

| | 0% | <15% | >15% | Total | UA |
|---|---|---|---|---|---|
| 0% | 48.9 | 12.5 | 8.3 | 69.7 | 0.702 |
| <15% | 9.3 | 17.4 | 12.2 | 38.9 | 0.447 |
| >15% | 5.8 | 19.1 | 32.5 | 57.4 | 0.566 |
| Total | 64 | 49 | 53 | 166 | |
| PA | 0.764 | 0.355 | 0.613 | OA | 0.595 |
| | | | | Kappa | 0.39 |
