Predicting Canopy Nitrogen Content in Citrus-Trees Using Random Forest Algorithm Associated to Spectral Vegetation Indices from UAV-Imagery
Round 1
Reviewer 1 Report
The subject of the study is interesting and topical, with high scientific and practical importance.
The introduction is presented correctly, in accordance with the subject. Numerous scientific articles relevant to the topic of the study were consulted.
The methodology of the study was clearly presented and appropriate to the proposed objectives.
The obtained results are important and have been analyzed and interpreted correctly, in accordance with the methodology used.
The discussions are appropriate in the context of the results and were conducted in comparison with other studies in the field.
The cited scientific literature is recent and representative of the field.
It is recommended that the authors review the References chapter in accordance with the Reference List and Citations Style Guide for MDPI Journals.
For example:
";" instead of "," between authors
"and" is present before the last author in some bibliographic sources but absent in others.
Journal titles: some are abbreviated and others are not in the present References list.
Citation model, from Reference List and Citations Style Guide for MDPI Journals:
Díaz, D.D.; Converso, A.; Sharpless, K.B.; Finn, M.G. 2,6-Dichloro-9-thiabicyclo[3.3.1]nonane: Multigram Display of Azide and Cyanide Components on a Versatile Scaffold. Molecules 2006, 11, 212–218, doi:10.3390/11040212.
Author Response
Reply: We have made the needed modifications in the References section. We appreciate the review. Please see the revised manuscript.
Author Response File: Author Response.docx
Reviewer 2 Report
Some critical information is still needed; specifically, the package/library used to create the models and, if implemented from an unavailable source, the full source code. Additionally, the accuracy of the model using just the top 5-10 indices would be appreciated, and potentially, a plot of a random forest tree would be helpful to ascertain the relationship between indices. In the same vein of feature selection, lasso or ridge regression may prove to be just as powerful, and I would love to see these in comparison with the selected features.
Since the aim of this model was accuracy, I would also like to see a boosted model such as xgboost.
Also, the architecture of the neural net was not specified; it would not be surprising if the accuracy were low if the neural net was not large/deep enough or sufficiently tuned.
To this end, I recommend adding a portion to the methods accurately describing the machine learning approaches: What programming language was used? What hyperparameter tuning was done for all models? Was the same fold of data used to train all models, or was the data reshuffled for each model?
Author Response
Thank you for the feedback provided. We really appreciate your time and dedication in reviewing our research. We have implemented all of the suggestions made; your contribution was vital to improving the quality of the manuscript. All changes were added to the revised document. Please see the attachment below. A point-by-point response to the reviewer's comments follows.
Some critical information is still needed; specifically, the package/library used to create the models and, if implemented from an unavailable source, the full source code.
Reply: Thank you for the comment. We have made the necessary changes in Section 3.4. Thank you again for this recommendation.
Additionally, the accuracy of the model using just the top 5-10 indices would be appreciated, and potentially, a plot of a random forest tree would be helpful to ascertain the relationship between indices.
Reply: We appreciate the comment. We evaluated the performance of the Random Forest algorithm with the best five and ten spectral indices, and we also plotted a short tree visualization. Since our RF model consists of 200 trees, we selected one representative tree. Please see Section 4.
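The top-k evaluation and single-tree visualization described in this reply can be sketched as below. This is an illustrative outline only, assuming scikit-learn; the data, the count of 20 spectral indices, and the split are synthetic placeholders, not the study's dataset or exact configuration.

```python
# Hedged sketch: rank spectral indices by RF importance, retrain on top k,
# and print one representative tree from the 200-tree forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.tree import export_text

rng = np.random.default_rng(0)
X = rng.random((150, 20))                             # 20 hypothetical spectral indices
y = 2 * X[:, 0] + X[:, 1] + rng.normal(0, 0.1, 150)   # synthetic canopy N values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Rank features by impurity-based importance, then retrain on the top k only
ranked = np.argsort(rf.feature_importances_)[::-1]
for k in (5, 10):
    top = ranked[:k]
    rf_k = RandomForestRegressor(n_estimators=200, random_state=0)
    rf_k.fit(X_tr[:, top], y_tr)
    print(f"top-{k} R2: {r2_score(y_te, rf_k.predict(X_te[:, top])):.3f}")

# One representative tree from the forest, truncated for readability
print(export_text(rf.estimators_[0], max_depth=2))
```

In practice only one of the 200 trees is exported, since a full forest plot is unreadable; `max_depth` truncates the printout to the first few splits.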
In the same vein of feature selection, lasso or ridge regression may prove to be just as powerful, and I would love to see these in comparison with the selected features.
Reply: We appreciate the reviewer’s comment. We have implemented both Lasso and Ridge regression in our analysis. Please see Sections 3.4 and 4. Thank you for this recommendation.
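A comparison along these lines can be outlined as follows. This is a minimal sketch assuming scikit-learn, with synthetic placeholder data; the alpha grids and index count are illustrative assumptions, not the study's settings.

```python
# Hedged sketch: Lasso vs. Ridge on standardized spectral indices.
import numpy as np
from sklearn.linear_model import LassoCV, RidgeCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((150, 20))                             # 20 hypothetical spectral indices
y = 2 * X[:, 0] + X[:, 1] + rng.normal(0, 0.1, 150)

# Standardize first: both penalties are scale-sensitive
lasso = make_pipeline(StandardScaler(), LassoCV(cv=5, random_state=0)).fit(X, y)
ridge = make_pipeline(StandardScaler(),
                      RidgeCV(alphas=np.logspace(-3, 3, 13))).fit(X, y)

# Lasso drives uninformative coefficients to exactly zero (implicit feature
# selection); Ridge only shrinks them, so inspect which indices Lasso keeps.
kept = np.flatnonzero(lasso.named_steps["lassocv"].coef_)
print("indices kept by Lasso:", kept)
print("chosen ridge alpha:", ridge.named_steps["ridgecv"].alpha_)
```

The set of indices Lasso retains can then be compared directly against the RF importance ranking.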
Since the aim of this model was accuracy, I would also like to see a boosted model such as xgboost.
Reply: Very nice comment. We implemented the XGBoost model in our evaluation. It returned interesting results, especially when reducing the number of spectral indices used. Please see Sections 3.4 and 4. Thank you for improving this aspect of the paper.
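A boosted baseline of the kind requested can be sketched as below. To keep the example dependent only on scikit-learn, it uses `GradientBoostingRegressor` as a stand-in; `xgboost.XGBRegressor` exposes the same `fit`/`predict` interface, so the structure carries over. Data and hyperparameter values are illustrative assumptions.

```python
# Hedged sketch: gradient-boosted regression as a stand-in for XGBoost.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((150, 20))                             # hypothetical spectral indices
y = 2 * X[:, 0] + X[:, 1] + rng.normal(0, 0.1, 150)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Shallow trees built sequentially, each correcting the previous residuals
gbm = GradientBoostingRegressor(
    n_estimators=200, learning_rate=0.05, max_depth=3, random_state=0
).fit(X_tr, y_tr)
print(f"boosted R2: {r2_score(y_te, gbm.predict(X_te)):.3f}")
```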
Also, the architecture of the neural net was not specified; it would not be surprising if the accuracy were low if the neural net was not large/deep enough or sufficiently tuned.
Reply: Thank you for the comment. We have made the necessary changes in Section 3.4.
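One way to make the network architecture explicit, as the reviewer asks, is shown below. This assumes scikit-learn's `MLPRegressor` purely for illustration; the layer sizes, activation, and solver here are hypothetical choices, not the architecture reported in the paper.

```python
# Hedged sketch: an MLP whose architecture is fully specified and reportable.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((150, 20))                             # hypothetical spectral indices
y = 2 * X[:, 0] + X[:, 1] + rng.normal(0, 0.1, 150)

mlp = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 32),   # two hidden layers: 64 and 32 units
                 activation="relu", solver="adam",
                 max_iter=2000, random_state=0),
).fit(X, y)

net = mlp.named_steps["mlpregressor"]
print("layer widths (in/hidden/out):",
      [net.coefs_[0].shape[0], *net.hidden_layer_sizes, 1])
```

Reporting the layer widths, activation, solver, and iteration budget this way lets readers judge whether the network was large enough and tuned sufficiently.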
To this end, I recommend adding a portion to the methods accurately describing the machine learning approaches: What programming language was used? What hyperparameter tuning was done for all models? Was the same fold of data used to train all models, or was the data reshuffled for each model?
Reply: Implemented. We have made the necessary changes in Section 3.4.
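The reviewer's tuning-protocol questions can be answered concretely with a setup like the one below: a single fixed `KFold` object shared across all model grids guarantees every model sees the identical folds rather than reshuffled data. This is a hedged sketch assuming scikit-learn; the grids and values are illustrative, not the paper's actual search space.

```python
# Hedged sketch: one fixed CV split shared by every model's grid search.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, KFold

rng = np.random.default_rng(0)
X = rng.random((150, 20))                             # hypothetical spectral indices
y = 2 * X[:, 0] + X[:, 1] + rng.normal(0, 0.1, 150)

# Identical folds for all models: shuffle once, then reuse this object
cv = KFold(n_splits=5, shuffle=True, random_state=0)

grids = {
    "rf": GridSearchCV(RandomForestRegressor(random_state=0),
                       {"n_estimators": [100, 200], "max_depth": [None, 10]},
                       cv=cv, scoring="r2"),
    "ridge": GridSearchCV(Ridge(), {"alpha": [0.1, 1.0, 10.0]},
                          cv=cv, scoring="r2"),
}
for name, gs in grids.items():
    gs.fit(X, y)
    print(name, gs.best_params_, round(gs.best_score_, 3))
```

Passing the same `cv` object (rather than an integer) to every `GridSearchCV` is what makes the fold assignment identical across models, so their cross-validated scores are directly comparable.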