Article
Peer-Review Record

Combining GF-2 and Sentinel-2 Images to Detect Tree Mortality Caused by Red Turpentine Beetle during the Early Outbreak Stage in North China

Forests 2020, 11(2), 172; https://doi.org/10.3390/f11020172
by Zhongyi Zhan, Linfeng Yu, Zhe Li, Lili Ren, Bingtao Gao, Lixia Wang and Youqing Luo *
Reviewer 1:
Reviewer 2: Anonymous
Submission received: 11 January 2020 / Revised: 2 February 2020 / Accepted: 3 February 2020 / Published: 5 February 2020

Round 1

Reviewer 1 Report

I have just a few comments:

Line 106: machine learning algorithms, not mechanical.

Line 125: is it "reference date" or "reference data"?

Line 220: do you mean spatially added MSI and NDMI?

Line 268: it would be good to explain what a kappa value means, especially for an audience with no remote sensing background.

Line 291: I don't think "fuse" is the right word here. There was no fusion done between those two satellite images. You assessed two satellite images and three different classification algorithms to identify RTB at different damage levels. This sentence needs to be rephrased.

Author Response

Response to Reviewer 1 Comments

Dear Reviewer,

Thank you for your comments on our manuscript. They are very helpful for revising and improving our paper and provide important guidance for our research. We have studied the comments carefully and made corrections that we hope meet with your approval. The main corrections are marked in the manuscript using track changes, and our responses to the comments are given below (the replies are highlighted in red).

 

Point 1: Line 106: machine learning algorithms, not mechanical.

 

Response 1: We are very sorry for our carelessness. We have changed "mechanical learning algorithms" to "machine learning algorithms" and revised this expression throughout the text; please see lines 30 and 108. Thank you very much.

 

Point 2: Line 125: is it "reference date" or "reference data"?

 

Response 2: We are sorry for our carelessness. It should be "reference data", as the UAV images were used as training and validation data. We have revised it in the manuscript. Thank you very much.

 

Point 3: Line 220: do you mean spatially added MSI and NDMI?

 

Response 3: Thank you for your question. This refers to adding the MSI and NDMI indices specifically for the classification of the Sentinel-2 (S2) 20 m resolution images, rather than adding them spatially, because only the S2 20 m resolution images include the SWIR1 band. According to the literature, the MSI and NDMI indices derived from the SWIR1 and NIR bands are sensitive to bark beetle damage and therefore useful for its classification. We hope this answer resolves your question.
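For context, both indices follow their standard definitions based on the NIR and SWIR1 bands; below is a minimal illustrative sketch (not part of the revision), assuming the Sentinel-2 20 m bands B8A (NIR) and B11 (SWIR1) are available as NumPy reflectance arrays:

```python
import numpy as np

def moisture_indices(nir, swir1):
    """Standard moisture-related indices from NIR (e.g., S2 B8A) and SWIR1 (S2 B11) reflectance.

    MSI  = SWIR1 / NIR                     (higher values indicate a drier, more stressed canopy)
    NDMI = (NIR - SWIR1) / (NIR + SWIR1)   (lower values indicate less canopy moisture)
    """
    nir = np.asarray(nir, dtype=np.float32)
    swir1 = np.asarray(swir1, dtype=np.float32)
    with np.errstate(divide="ignore", invalid="ignore"):
        msi = np.where(nir != 0, swir1 / nir, np.nan)
        ndmi = np.where((nir + swir1) != 0, (nir - swir1) / (nir + swir1), np.nan)
    return msi, ndmi
```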

 

Point 4: Line 268: it would be good to explain what a kappa value means, especially for an audience with no remote sensing background.

 

Response 4: Thank you for your advice; it is very valuable for improving our paper. We have added an explanation of the kappa coefficient and new references [18,55]. Please see lines 256-259.
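For readers without a remote sensing background, the kappa coefficient expresses how much the agreement between the classification and the reference data exceeds the agreement expected by chance. A minimal illustrative sketch, using a hypothetical confusion matrix with reference classes in rows and predicted classes in columns:

```python
import numpy as np

def cohens_kappa(confusion_matrix):
    """Cohen's kappa from a square confusion matrix (rows: reference, columns: predicted)."""
    cm = np.asarray(confusion_matrix, dtype=np.float64)
    n = cm.sum()
    p_observed = np.trace(cm) / n                                  # overall agreement
    p_expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # agreement expected by chance
    return (p_observed - p_expected) / (1.0 - p_expected)

# Hypothetical 2-class example: kappa is roughly 0.69,
# "substantial" agreement on the Landis and Koch [55] scale.
print(cohens_kappa([[50, 10], [5, 35]]))
```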

 

Point 5: Line 291: I don't think "fuse" is the right word here. There was no fusion done between those two satellite images. You assessed two satellite images and three different classification algorithms to identify RTB at different damage levels. This sentence needs to be rephrased.

 

Response 5: Thank you for your advice. We have changed the inappropriate word "fuse" to "combine". The revised sentences can be seen on lines 2, 27, and 413.

 

References

18.  Meddens, A.; Hicke, J.; Vierling, L. Evaluating the potential of multispectral imagery to map multiple stages of tree mortality. Remote Sens. Environ. 2011, 115, 1632-1642. doi:10.1016/j.rse.2011.02.018

55.  Landis, J.R.; Koch, G.G. The measurement of observer agreement for categorical data. Biometrics 1977, 33, 159-174. doi:10.2307/2529310

Author Response File: Author Response.docx

Reviewer 2 Report

General comments

The paper presents an objective take on the issue of automated identification of trees subjected to beetle infestation. The materials and methods used by the authors are up to date and rely on current trends in image processing and analysis. The results fall within the general consensus regarding the detection accuracy for trees with altered foliage color due to bark beetle infestation, with a degree of novelty given by the species investigated and the context of a low percentage of damaged trees at the stand level.

Some issues are presented in more detail below.

 

The title "Fusing GF-2 and Sentinel-2 Images to Detect Tree Mortality Caused by Red Turpentine Beetle during the Early Outbreak Stage in North China" seems to suggest a fusion of images with different resolutions to extract tree mortality. The actual methodology does not follow this line of research, as the two mentioned images have not been "fused" but used at two different scales (tree level and stand level). I suggest changing the title accordingly.

Line 34. The authors mention “ecological loss”, but I think the original papers mention the economic loss.

Line 47. The term “infection” is used for the successful attack of bark beetle, while in other parts the authors use “infestation”, as is most widely used in general.

Lines 57-86. The authors describe the outcomes and shortcomings of different efforts to identify and quantify beetle infestation, presented in scientific articles. Practically, the same articles are analyzed in the first paragraph for their outcomes and in the second paragraph for their shortcomings, which seems redundant. I suggest a more unified perspective, with each method or result fully analyzed upon its presentation.

In the study area section, only a description of the area and example images are presented. No scale is overlaid on the map of the study area, nor is there any mention of the total area covered by the study. As remote sensing methods should provide means of investigating broad-scale phenomena, it is important to mention the scale of the study, especially since no other map is presented in the results or discussion.

Chapter 2.2. is named reference date, which was probably meant to be “reference data”, as it describes the UAV images used as training areas and validation data.

Line 163-165. The phrase “The acquisition time between UAV images and satellite images should not be greater than 6 months apart, to avoid potential confusion using trees in different attack stages” appears unnecessary in the argumentation, since it is not referenced in the literature and the time lapse between UAV and satellite images is less than a month.

Table 2, columns 1 and 3: change “spectral resolution” to “spatial resolution”

Line 176-177. The authors mention the “homogeneity criteria” as default. It is unclear whether they used the default settings of the segmentation module of eCognition or these are the final values chosen as optimal for segmentation.

Line 217 mentions Landsat-scale pixels, which I presume is an error since the authors have used S2 images?

The results only compare the classification accuracies through confusion matrices. No full-scale or detailed example of the image classification is shown that would also allow the noise in the images to be compared, especially since the authors are using very high spatial resolution images. I think the impact of the paper would benefit from such images, as the authors also mention the importance of the results for forest managers.

There is an inconsistency in the use of the term "green tree". In the Methods section, it is described as live or under current beetle attack. In lines 270-271, it is mentioned that gray trees are mistaken for red trees or healthy trees. In Figure 6, there are green trees (dark green color) that stand out from the "canopy areas", which are olive green. This needs further explanation, since the article tackles the early stages of beetle infestation. Most of the results show that the main focus of the analysis is the red trees.

The explanation for Figure 6 mentions that red trees are delineated trees from UAV images. But there are also crown delineations that overlap white areas (gray trees) and dark green areas (green trees). Please clarify, preferably with a legend on the figure.

 

 

 

Author Response

Response to Reviewer 2 Comments

Dear Reviewer,

Thank you for your valuable comments and suggestions. They are very helpful for making our manuscript more rigorous and complete, and they provide important guidance for our research. We have studied the comments carefully and made corrections that we hope meet with your approval. The main corrections are marked in the manuscript using track changes, and our responses to the comments are given below (the replies are highlighted in red).

 

Point 1: The title "Fusing GF-2 and Sentinel-2 Images to Detect Tree Mortality Caused by Red Turpentine Beetle during the Early Outbreak Stage in North China" seems to suggest a fusion of images with different resolutions to extract tree mortality. The actual methodology does not follow this line of research, as the two mentioned images have not been "fused" but used at two different scales (tree level and stand level). I suggest changing the title accordingly.

 

Response 1: Thank you very much for your suggestion. We have changed the inappropriate word "fuse" to "combine". The revised sentences can be seen on lines 2, 27, and 414.

 

Point 2: Line 34. The authors mention “ecological loss”, but I think the original papers mention the economic loss.

 

Response 2: We are very sorry for the incorrect wording. We have changed "ecological" to "economic". Please see line 34. Thank you very much.

 

Point 3: Line 47. The term “infection” is used for the successful attack of bark beetle, while in other parts the authors use “infestation”, as is most widely used in general.

 

Response 3: We are sorry for not using a consistent term. We have changed "infection" to "infestation" and replaced this expression throughout the text. Please see lines 47, 50, 63, and 233.

 

Point 4: Lines 57-86. The authors describe the outcomes and shortcomings of different efforts to identify and quantify beetle infestation, presented in scientific articles. Practically, the same articles are analyzed in the first paragraph for their outcomes and in the second paragraph for their shortcomings, which seems redundant. I suggest a more unified perspective, with each method or result fully analyzed upon its presentation.

 

Response 4: Thank you very much for your suggestion. White et al. [11] and Meddens et al. [22] studied mountain pine beetle infestation at both high and low damage levels. It is true that discussing their findings in two separate thematic paragraphs is confusing and redundant for readers. After discussion, we decided to delete the outcomes of these two articles concerning the high damage level caused by mountain pine beetle and to focus mainly on their shortcomings in studying the low damage level. Please see lines 61-72.

 

Point 5: In the study area section, only a description of the area and example images are presented. No scale is overlaid on the map of the study area, nor is there any mention of the total area covered by the study. As remote sensing methods should provide means of investigating broad-scale phenomena, it is important to mention the scale of the study, especially since no other map is presented in the results or discussion.

 

Response 5: We are sorry for omitting the map scale, and we are grateful for your reminder. We have added a scale bar to Figure 1 and stated the total area of the two study areas in the article. Please see line 115.

 

Point 6: Chapter 2.2. is named reference date, which was probably meant to be “reference data”, as it describes the UAV images used as training areas and validation data.

 

Response 6: We are sorry for our carelessness. It should be "reference data", as the UAV images were used as training and validation data. We have revised it in the manuscript. Thank you very much.

 

Point 7: Line 163-165. The phrase “The acquisition time between UAV images and satellite images should not be greater than 6 months apart, to avoid potential confusion using trees in different attack stages” appears unnecessary in the argumentation, since it is not referenced in the literature and the time lapse between UAV and satellite images is less than a month.

 

Response 7: Thank you for your suggestion; we have deleted this sentence. Please see lines 169-171.

 

Point 8: Table 2, columns 1 and 3: change “spectral resolution” to “spatial resolution”

 

Response 8: Thank you for your careful work. We have changed “spectral resolution” to “spatial resolution”.

 

Point 9: Line 176-177. The authors mention the “homogeneity criteria” as default. It is unclear whether they used the default settings of the segmentation module of eCognition or these are the final values chosen as optimal for segmentation.

 

Response 9: We are sorry for the unclear expression. We tried several combinations of values for shape and compactness and found that the segmentation results did not change significantly, so we used the software's default values for shape and compactness to segment the images. We have rewritten this sentence. Please see lines 182-183.

 

Point 10: Line 217 mentions Landsat-scale pixels, which I presume is an error since the authors have used S2 images?

 

Response 10: Long et al. [47] found that Landsat images cannot reliably distinguish between 5% and 0% pixel damage percentages. In order to improve the classification accuracy of the S2 20 m resolution images, we labeled pixels containing only one damaged tree as healthy pixels before carrying out the training and validation. We have revised this sentence; please see lines 223-228. Thank you very much for your question.
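For illustration only, a minimal sketch of this relabeling rule under assumed inputs (the variable names and array layout are ours, not from the manuscript): each 20 m pixel has a count of damaged trees, and damaged pixels containing only a single damaged tree are relabeled as healthy before training and validation.

```python
import numpy as np

def relabel_sparse_damage(labels, damaged_tree_count, min_trees=2,
                          damaged_class=1, healthy_class=0):
    """Relabel damaged pixels that contain fewer than `min_trees` damaged trees as healthy,
    keeping only damage levels the 20 m pixels can plausibly resolve."""
    labels = np.asarray(labels).copy()
    counts = np.asarray(damaged_tree_count)
    sparse = (labels == damaged_class) & (counts < min_trees)
    labels[sparse] = healthy_class
    return labels
```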

 

Point 11: The results only compare the classification accuracies through confusion matrices. No full-scale or detailed example of the image classification is shown that would also allow the noise in the images to be compared, especially since the authors are using very high spatial resolution images. I think the impact of the paper would benefit from such images, as the authors also mention the importance of the results for forest managers.

 

Response 11: Thank you very much for your useful advice. For the GF-2 images, we have added classification maps of the different resolution images for the optimal algorithm and of the different algorithms for the optimal resolution image, so that the noise in the images can be compared. Please see Figure 5 (lines 293-298). We have also reorganized the description at lines 280-283. For the S2 images, it is difficult to compare noise in the form of a classification map because there are not enough validation pixels (total damaged validation pixels = 102), so we have not added a classification map for the S2 images. Thank you again for your suggestion.

 

Point 12: There is an inconsistency in the use of the term "green tree". In the Methods section, it is described as live or under current beetle attack. In lines 270-271, it is mentioned that gray trees are mistaken for red trees or healthy trees. In Figure 6, there are green trees (dark green color) that stand out from the "canopy areas", which are olive green. This needs further explanation, since the article tackles the early stages of beetle infestation. Most of the results show that the main focus of the analysis is the red trees.

 

Response 12: Thank you for your careful work. Green trees represent healthy trees and trees in the early stage of damage. Due to the limitations of the data sources, we focus on the identification of red trees rather than trees in the early stage of damage in this study. We have changed "health" to "green" in lines 285 and 343.

In addition, the green trees were randomly selected across the images and cover a wide area, so fewer of them appear in the sample selection map in Figure 7, which mainly contains red trees. These green trees may be healthy trees or damaged trees in the green-attack stage. We have changed "olive" to "olive green", which represents the "tree canopy areas", in Figure 7.

 

Point 13: The explanation for Figure 6 mentions that red trees are delineated trees from UAV images. But there are also crown delineations that overlap white areas (gray trees) and dark green areas (green trees). Please clarify, preferably with a legend on the figure.

 

Response 13: We are very sorry for our carelessness. These should be blue polygons instead of red polygons. When selecting samples, we delineated the tree crowns of green trees, red trees, and gray trees and then used them for training and validation, so the trees delineated from the UAV images include these three types. The total number of gray trees is small, and the green trees were randomly selected across the whole map, covering a wide area. Therefore, the numbers of green and gray trees in the sample selection map are small, while red trees make up most of Figure 7. We have changed "red polygons" to "blue polygons" in Figure 7. Please see line 360.

As for the legend, the focus of the sample selection map is on the different selections made by the object-based and pixel-based methods, and Figure 7 already contains arrows and text labels, so adding a legend to this figure would make it cluttered. We have instead added a legend to the classification map (Figure 6). In addition, white in the figure represents the gray trees; if we added a legend, the white color would blend with the background. If you still feel a legend is needed, we will rework Figure 7, replacing the white that represents the gray trees with the coral color that represents gray trees in Figure 6.

 

References:

11.  White, J.; Wulder, M.; Brooks, D.; Reich, R.; Wheate, R. Detection of red attack stage mountain pine beetle infestation with high spatial resolution satellite imagery. Remote Sens. Environ. 2005, 96, 340-351. doi:10.1016/j.rse.2005.03.007

12.  Meddens, A.; Hicke, J.; Vierling, L.; Hudak, A.T. Evaluating methods to detect bark beetle-caused tree mortality using single-date and multi-date Landsat imagery. Remote Sens. Environ. 2013, 132, 49–58. doi:10.1016/j.rse.2013.01.002

47.  Long, J.A.; Lawrence, R.L. Mapping percent tree mortality due to mountain pine beetle damage. For. Sci. 2016, 62, 392-402. doi:10.5849/forsci.15-046

Author Response File: Author Response.docx
