Article
Peer-Review Record

Improved Classification Models to Distinguish Natural from Anthropic Oil Slicks in the Gulf of Mexico: Seasonality and Radarsat-2 Beam Mode Effects under a Machine Learning Approach

Remote Sens. 2021, 13(22), 4568; https://doi.org/10.3390/rs13224568
by Ítalo de Oliveira Matias 1, Patrícia Carneiro Genovez 1,*, Sarah Barrón Torres 1, Francisco Fábio de Araújo Ponte 1, Anderson José Silva de Oliveira 1, Fernando Pellon de Miranda 2 and Gil Márcio Avellino 2
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 30 September 2021 / Revised: 8 November 2021 / Accepted: 10 November 2021 / Published: 13 November 2021
(This article belongs to the Special Issue Remote Sensing Observations for Oil Spill Monitoring)

Round 1

Reviewer 1 Report

The manuscript is interesting and would be beneficial to potential readers; however, there are some concerns that need to be addressed. The manuscript compares different machine learning models for the classification of oil spills. This has been widely done by previous researchers, but the authors have directly compared and summarized the methods in a single paper.

Firstly, when addressing the importance of machine learning methods for the legal aspects of oil spill cases, the authors should clearly state that these methods can serve as useful tools to aid evaluations and assessments. Most legislation would use such data merely as an indicator during oil spill fingerprinting, not as a result for prosecutions. I would suggest that the authors state this clearly to prevent misunderstanding among potential readers.

Secondly, several papers have recently been published on the potential of combining multiple satellite images prior to applying machine learning models, in order to distinguish phytoplankton and algal bloom patches from oil spills. The authors should refer to and acknowledge those studies and compare them with the method described here. An additional statement in the results and discussion, or in the further research needs, would make potential readers aware of current studies.

Apart from that, on what platform were the models implemented? MATLAB, Python, C++, Java? Since these implementations are not described, it is unclear whether the models used were based on toolboxes. A short description in the main body of the manuscript or in the supplementary information would be useful.

The optimization methods and conditions of each model were not described, so it is hard to tell whether the reported model accuracies are robust or appropriate. As most modelers would cross-check model performance, it would be useful if the authors briefly included the optimization methods in the supplementary material, to show that they applied methods that avoid model over-fitting.
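For concreteness, the kind of over-fitting safeguard the reviewer is asking about can be sketched as follows. This is a minimal illustration using synthetic data and scikit-learn, not the authors' actual pipeline; the feature matrix, grid values, and choice of classifier are assumptions for demonstration only.

```python
# Minimal sketch: k-fold cross-validated hyperparameter search, so the
# reported accuracy reflects held-out folds rather than training fit.
# Data and parameter grid are synthetic stand-ins, not the paper's setup.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for slick samples (natural vs. anthropogenic)
X, y = make_classification(n_samples=300, n_features=8, random_state=42)

# Limiting tree depth is one common guard against over-fitting
search = GridSearchCV(
    DecisionTreeClassifier(random_state=42),
    param_grid={"max_depth": [2, 4, 6, None]},
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=42),
    scoring="accuracy",
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

Reporting the cross-validated score alongside the grid searched is the sort of brief description that would let readers judge robustness.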

Author Response

November 4th, 2021.

The authors would like to thank the reviewer for the time dedicated to analyzing the document and for all the suggested recommendations. Please see the attachment for the point-by-point review.

Researchers, Pontifical Catholic University (PUC-Rio) and Petrobras Research and Development Center (CENPES)

Author Response File: Author Response.pdf

Reviewer 2 Report

The authors explored which ML method best distinguishes oil spills from seeps in the Gulf of Mexico using RADARSAT data. I believe it is interesting for RS readers. I have some comments:
1. I suggest specifying the 'satellite beam mode' in the title, e.g., the spaceborne SAR beam mode or the RADARSAT SAR beam mode.
2. I suggest explaining or describing natural and anthropic oil slicks at the beginning of the paper. I am confused about the difference between them.
3. Are Section 4, and part of Section 3, supposed to be in Section 1?
4. In Section 5, the spectral features are described; however, I think sigma, beta, and gamma are different forms of the same backscattering coefficient. Can these be used as spectral features? I think the different sensor band frequencies are what constitute spectra.
5. The study is based on RADARSAT-2 images. The RADARSAT sensors work at C band. Can these results be applied to other C-band sensors? Can they be applied to SAR sensors in other bands?
6. In Section 6.1, when you first select features, you divide them into five groups and then filter each group according to the correlation matrix, leaving only one group. This is not fair for the subsequent algorithm comparison: decision trees, for example, are structured according to information gain, but you may filter out the features that best fit them, leaving only the features best suited to LAR.
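The correlation-based filtering step the reviewer refers to can be sketched as follows. This is an assumed, minimal reconstruction of the general technique, not the authors' code; the feature names (`sigma0`, `beta0`, `area`), threshold, and data are illustrative only.

```python
# Minimal sketch of correlation-matrix feature filtering: for each pair of
# features whose absolute Pearson correlation exceeds a threshold, keep only
# the first one. Feature names and data are synthetic stand-ins.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"sigma0": rng.normal(size=100)})
df["beta0"] = df["sigma0"] * 0.99 + rng.normal(scale=0.01, size=100)  # near-duplicate
df["area"] = rng.normal(size=100)  # independent geometric feature

def drop_correlated(frame, threshold=0.9):
    """Keep the first feature of each highly correlated pair."""
    corr = frame.corr().abs()
    # Upper triangle only, so each pair is considered once
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return frame.drop(columns=to_drop)

kept = drop_correlated(df)
print(list(kept.columns))  # beta0 is dropped as redundant with sigma0
```

The reviewer's fairness concern is visible here: which member of a correlated pair survives depends on column order, not on which feature a given classifier would exploit best.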
7. The authors have five groups in the grouping, but why are there only four groups in Table 4?
8. In Table 10, through experimental comparison, the authors found that, in terms of seasonality, spring and summer had the highest standard deviation (Table 10(a)), indicating that the ML algorithms achieved the best performance through this difference. However, did seasonality affect the spectral features or the geometric features, and is there a coupling relationship between them? The text mentions this may be connected with strong winds. Have you checked their relationship? Is your sample size large enough? Is your conclusion reliable?
9. SCNB should be defined in the abstract.
10. Check the reference format in the text, such as ‘[6],[7]’ at Line 50, ‘[3],[5]’ at Line 53, ‘[3-5], [26-28]’ at Line 79, and so on.
11. Is it common in your research field that accuracy numbers are written as 80 rather than 80%?
12. At Line 334, 'splitting 70 of the samples for training and 30 for testing': are the numbers supposed to be 70% and 30%?
13. At Line 319, is the ‘Uni and multivariate’ used correctly? Please check it.
14. At Line 329, ‘Six (6) …’, does the ‘(6)’ have other meaning except the ‘Six’?
15. At Line 612, check the sentence, please.

Author Response

November 4th, 2021.

The authors would like to thank the reviewer for the time dedicated to analyzing the document and for all the suggested recommendations. Please see the attachment for the point-by-point review.

Researchers, Pontifical Catholic University (PUC-Rio) and Petrobras Research and Development Center (CENPES)

Author Response File: Author Response.pdf

Reviewer 3 Report

Abstract, line 61 and elsewhere: Terminology. Anthropogenic source vs. natural source. All sources are natural; all spilled oil, whether from seeps or drilled wells, comes from natural sources. A seep spills natural oil into the oceanic water column. Please correct this throughout the paper. It is the cause leading to a spill that can be natural or anthropogenic; the source is always natural unless we are discussing treated or processed oils, i.e., benzene, kerosene, and many others, which is not the case in this article.

Line 43: oil is not considered a mineral by some/many. There is no need to state "mineral oil" when it is clear from the beginning that the authors are referring to crude oil spilled from seeps and drilled wells.

Line 43: "man-made oil slicks" is not accurate. What is man-made is the release of oil into the ocean, and this can be accidental or intentional. Slicks form in the ocean after oil is spilled into it. What can be man-made is the spilling of the oil, not the formation of slicks.

Line 98: the line ends with a period where one expects the sentence to continue. The overall English style and grammar would also need improvement.

The sentence starting at line 480 does not reflect reality well. Storms/waves tend to damage pipelines more than rigs and can trigger gravity currents, e.g., turbidity currents, as happened in 2004 with Hurricane Ivan and the Taylor platform at MC20. Hurricanes don't directly damage rigs.

Line 660: change the title to "Conclusions and Outlook".

It is unpredictable where slicks will emerge. Satellites follow specific paths, and while some can be focused and re-focused to sense a particular spot/area, it could be faster and more accurate to observe slicks with aircraft-mounted sensors. Could machine learning techniques also be used to process data acquired from airplanes? A paragraph, or at least a few sentences, discussing this could be very valuable for many readers.

Author Response

November 4th, 2021.

The authors would like to thank the reviewer for the time dedicated to analyzing the document and for all the suggested recommendations. Please see the attachment for the point-by-point review.

Researchers, Pontifical Catholic University (PUC-Rio) and Petrobras Research and Development Center (CENPES)

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Thanks for the authors' efforts. I think the paper is much improved.
I suggest modifying the title "2. Data Base Description and Methodology" to "2. Materials and Methods".

Author Response

November 8th, 2021.

The authors would like to thank the reviewer for all the comments. As suggested, we replaced the title "2. Data Base Description and Methodology" with "2. Materials and Methods".
Best regards,
Dr Patrícia Genovez
