Postfire Forest Regrowth Algorithm Using Tasseled-Cap-Retrieved Indices
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
The manuscript proposes an algorithm for monitoring forest regrowth that uses the tasseled cap transformation to retrieve soil, vegetation, and moisture components, with the aim of improving monitoring accuracy by taking into account indices correlated with vegetation growth.
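For orientation, the tasseled cap transformation the manuscript relies on is a fixed linear recombination of the reflective bands into brightness, greenness, and wetness components (commonly associated with soil, vegetation, and moisture). A minimal numpy sketch is given below; the coefficients shown are the widely cited Landsat 8 OLI reflectance values (Baig et al., 2014) and are illustrative only, not the manuscript's own.

import numpy as np

# Tasseled cap transformation: each output component is a fixed linear
# combination of the six reflective bands (blue, green, red, NIR, SWIR1, SWIR2).
# Coefficients are illustrative (Landsat 8 OLI, Baig et al., 2014).
TC_COEFFS = np.array([
    [ 0.3029,  0.2786,  0.4733,  0.5599,  0.5080,  0.1872],  # brightness
    [-0.2941, -0.2430, -0.5424,  0.7276,  0.0713, -0.1608],  # greenness
    [ 0.1511,  0.1973,  0.3283,  0.3407, -0.7117, -0.4559],  # wetness
])

def tasseled_cap(bands):
    """bands: array of shape (6, rows, cols) with reflectance values.
    Returns an array of shape (3, rows, cols): brightness, greenness, wetness."""
    flat = bands.reshape(6, -1)          # (6, rows*cols)
    tc = TC_COEFFS @ flat                # (3, rows*cols)
    return tc.reshape(3, *bands.shape[1:])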
Regarding the content and results of the article, there are several shortcomings:
1. The format of Equation 5 differs from that of the other formulas;
2. Table 4 contains too much information; it is recommended to split it into two parts. In the comparison of results, the 'Bistritsa' and 'Perperek' experimental sites show average EO values greater than 40%; please explain this.
3. In the manuscript, soil, vegetation, and moisture are taken as key variable parameters; an ablation experiment should be added to analyze the relative importance of these parameters, and the proposed method should be compared with other PFIR methods.
Comments on the Quality of English Language
None.
Author Response
Please, see the attachment.
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
General comments:
In this paper, the authors seek to use the tasseled cap transformation to develop a novel approach to mapping post-fire regrowth. This is an interesting idea; however, the paper does not clearly explain the PFIR method, such as how it is calculated, its value range, and what low and high values mean, which is the crux of the paper. In addition, the paper needs to state clearly whether or not the objective is to compare the performance of the method across different sensors. Overall, this paper could be interesting if organized systematically and with a thorough explanation of the method, but it requires additional analysis. I also have the following comments.
Major comments:
1. The paper lacks a clear explanation of the research gap, i.e., why and how the proposed approach is unique and novel, given that many methods already exist to measure regrowth qualitatively and quantitatively after fire or any other disturbance.
2. The paper uses high-resolution imagery from a UAV and WorldView as well as moderate-resolution imagery from Landsat and Sentinel-2A. Each sensor has a different spatial resolution, so how do the authors address this mismatch? (One common harmonization approach is sketched after this list.)
3. The number of samples used for PFIR threshold determination and the independent test data set used for assessing classification accuracy are not clearly explained. Without this information, the accuracy reported in the paper cannot be considered validated.
4. How does classification accuracy vary across different sensors?
5. A comparison of the classification accuracies reported in the literature with that of this study would help emphasize its novelty.
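Regarding major comment 2, a typical way to address mixed spatial resolutions is to resample every scene onto one common reference grid before computing indices. The sketch below (assuming the rasterio library and bilinear resampling, purely as an illustration and not the authors' actual workflow) shows what such a harmonization step could look like.

import numpy as np
import rasterio
from rasterio.warp import reproject, Resampling

def resample_to_reference(src_path, ref_path):
    """Resample a raster onto the grid (CRS, transform, shape) of a reference
    raster so that bands from different sensors can be compared pixel by pixel."""
    with rasterio.open(ref_path) as ref, rasterio.open(src_path) as src:
        destination = np.empty((src.count, ref.height, ref.width), dtype=np.float32)
        reproject(
            source=src.read(),                # (count, rows, cols) array
            destination=destination,
            src_transform=src.transform,
            src_crs=src.crs,
            dst_transform=ref.transform,
            dst_crs=ref.crs,
            resampling=Resampling.bilinear,   # bilinear suits continuous reflectance
        )
    return destination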
Specific comments:
Lines 49-64: merge into one paragraph.
Line 67: the acronym should be defined here rather than at line 72. In addition, define PFIR before explaining its categories.
Line 70: I would suggest removing the AAP acronym; too many acronyms create confusion for the reader.
Lines 71-73: this reads like a conclusion, so remove it.
Line 78: the sentence needs to clarify whether the names are town or district names. I would suggest using prepositions to clarify the site locations.
Lines 111-146 (data used in the study): this section lacks information about the spatial resolution of each sensor and whether the data were atmospherically corrected.
Lines 154-162: Figure 2 is not necessary; it does not add value to the paper or improve readability. The many interconnecting arrows add unnecessary complexity, while the model itself is simple and easy to understand from lines 163 to 203.
Line 204, Section 2.3.1 (Classification): please state what kind of classification was used. There are many machine learning classification methods, such as random forest, support vector machine, neural network, etc.
Line 208, Section 2.3.2 (PFIR threshold values determination and AAP): there is no clear definition of how the PFIR values were derived or of their value range. Without a proper explanation, it is hard to understand how the paper applied a 0.5 incremental threshold to determine the actual thresholds for the three PFIR classes.
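To make the expectation concrete: a threshold-determination step of this kind usually sweeps candidate cut-offs across the index range in fixed increments and keeps the pair that best matches reference labels. The sketch below (plain numpy, with a hypothetical three-class split and an illustrative 0.5 step) shows the general idea only, not the authors' actual procedure.

import numpy as np
from itertools import combinations

def sweep_thresholds(pfir, labels, step=0.5):
    """Brute-force search for two cut-offs (t1 < t2) that split PFIR values into
    three classes (0: low, 1: moderate, 2: high) and best match reference labels.
    pfir and labels are 1-D arrays over the sample points."""
    candidates = np.arange(pfir.min(), pfir.max() + step, step)
    best = (None, None, -1.0)
    for t1, t2 in combinations(candidates, 2):
        pred = np.digitize(pfir, [t1, t2])    # assigns class 0, 1, or 2
        agreement = np.mean(pred == labels)   # overall match with reference
        if agreement > best[2]:
            best = (t1, t2, agreement)
    return best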
Lines 244-253: there is no clear explanation of how classification accuracy was estimated. The number of sample points used to determine the threshold values for the PFIR classes and the independent sample polygons/points used for classification testing should be stated.
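For reference, the accuracy reporting being requested is normally derived from a confusion matrix built over independent test samples. A minimal numpy sketch of such a report (overall, producer's, and user's accuracy; class labels and sample counts hypothetical) follows, purely as an illustration of what should be documented.

import numpy as np

def accuracy_report(reference, predicted, n_classes=3):
    """Build a confusion matrix from independent test samples and report
    overall, producer's (recall), and user's (precision) accuracy."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for r, p in zip(reference, predicted):
        cm[r, p] += 1
    overall = np.trace(cm) / cm.sum()
    producers = np.diag(cm) / cm.sum(axis=1)   # per reference class
    users = np.diag(cm) / cm.sum(axis=0)       # per predicted class
    return cm, overall, producers, users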
Line 367: it is unclear how the 81.5% confidence for the MRI class was estimated.
Lines 370-373: this paragraph reads like methods, so move it to the methodology section.
Lines 385-399: this reads like a discussion section rather than a conclusion.
Comments for author File: Comments.pdf
Author Response
Please, see the attachment.
Author Response File: Author Response.pdf