Article
Peer-Review Record

EffResUNet: Encoder Decoder Architecture for Cloud-Type Segmentation

Big Data Cogn. Comput. 2022, 6(4), 150; https://doi.org/10.3390/bdcc6040150
by Sunveg Nalwar 1,*, Kunal Shah 1, Ranjeet Vasant Bidwe 2,*, Bhushan Zope 2, Deepak Mane 3, Veena Jadhav 4 and Kailash Shaw 2
Reviewer 1:
Reviewer 3: Anonymous
Submission received: 8 October 2022 / Revised: 29 November 2022 / Accepted: 30 November 2022 / Published: 7 December 2022

Round 1

Reviewer 1 Report


Comments for author File: Comments.pdf

Author Response

Thank you for your suggestions; we have done our best to make all the necessary changes. Please check the attached Word document.

Author Response File: Author Response.docx

Reviewer 2 Report

The manuscript presents a new concept for cloud-type segmentation based on an encoder-decoder architecture. The solution is a variation of the well-known and reliable ResUNet framework, which achieves strong results on the task of semantic segmentation of monotemporal, very-high-resolution aerial images. The description of the new approach is clear and convincing. It is good that the authors avoid unnecessary threads that do not affect the results of the research. My comments are below:

1. There can be various reasons for choosing an encoder-decoder neural network architecture. The authors should explain why they consider this type of architecture better suited to the task than other architectures.
2. The CONVM layer, acting as the autoencoder latent layer, has a size of 10x15. Have the authors also verified other dimensions of this layer (larger or smaller)? (See the sketch after this list.)
3. What changes did the authors try in the configuration of the network hyperparameters and the learning process parameters (e.g., transfer functions, mini-batch size, learning algorithm, learning rate, conditions for stopping the learning process, ways to prevent overfitting)?
4. The graphical presentation of the results needs improvement: the resolution of Figures 5 and 6 is insufficient, and the line labels in Figure 6 are partially illegible.
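
For context on comment 2: in a U-Net-style encoder-decoder, the spatial size of the bottleneck ("latent") feature map follows from the input resolution and the number of downsampling steps. The minimal PyTorch sketch below is generic and is not the authors' EffResUNet; the channel counts and the 40x60 input (which pools down to a 10x15 bottleneck) are illustrative assumptions.

# Minimal, generic PyTorch encoder-decoder sketch -- NOT the authors'
# EffResUNet. It only illustrates where a bottleneck ("latent") layer such
# as the 10x15 CONVM layer sits; all sizes and names are assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the usual U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyEncoderDecoder(nn.Module):
    def __init__(self, in_ch=3, num_classes=4):
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        # Bottleneck: spatial size = input size / 2**(number of poolings),
        # so a 40x60 input pooled twice gives a 10x15 latent feature map.
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, num_classes, 1)  # per-pixel class scores

    def forward(self, x):
        e1 = self.enc1(x)                   # full resolution
        e2 = self.enc2(self.pool(e1))       # 1/2 resolution
        b = self.bottleneck(self.pool(e2))  # 1/4 resolution (latent)
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

model = TinyEncoderDecoder()
out = model(torch.randn(1, 3, 40, 60))  # 40x60 input -> 10x15 bottleneck
print(out.shape)                        # torch.Size([1, 4, 40, 60])

Verifying other latent dimensions, as the reviewer asks, would amount to changing the input crop size or the number of pooling steps.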

Author Response

Thank you for your suggestions; we have done our best to make all the necessary changes. Please check the attached Word document.

Author Response File: Author Response.docx

Reviewer 3 Report

In the introduction section, I suggest that the authors acknowledge that this kind of problem has lately also been tackled with swarm intelligence algorithms. The authors should cite the following comprehensive review: https://www.mdpi.com/2076-3417/8/9/1521. This should be mentioned in a short paragraph to demonstrate the authors' knowledge of the field.

The paper should not contain highly subjective sentences such as "We have implemented brilliant pre and post-processing techniques."

The authors must provide a better literature review and pseudocode of the proposed approach. Furthermore, a more in-depth explanation of the parameter settings used should be provided. Did the authors conduct any parameter tuning?
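
To make the parameter-tuning question concrete: reporting tuning typically amounts to a search over candidate settings scored by a validation metric. The sketch below is generic; the value grids and the train_and_validate() helper are hypothetical placeholders, not taken from the manuscript.

# Generic sketch of the kind of parameter tuning a reviewer might expect to
# see reported; the value grids and train_and_validate() are hypothetical
# placeholders, not taken from the manuscript.
from itertools import product

learning_rates = [1e-2, 1e-3, 1e-4]
batch_sizes = [8, 16, 32]

def train_and_validate(lr, batch_size):
    # Placeholder standing in for a full training run; returns a dummy
    # score so the sketch executes end to end.
    return 1.0 / (1.0 + abs(lr - 1e-3)) + 0.001 * batch_size

best = None
for lr, bs in product(learning_rates, batch_sizes):
    score = train_and_validate(lr, bs)
    if best is None or score > best[0]:
        best = (score, lr, bs)
print("best validation score %.3f at lr=%g, batch=%d" % best)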

Figure 6 should be corrected.

The discussion section should include a more detailed statistical analysis.

The references used are relevant.

Author Response

Thank you for your suggestions; we have done our best to make all the necessary changes. Please check the attached Word document.

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

My comments have been addressed by the authors. 

Author Response

Thank you for your suggestions.

Reviewer 3 Report

Now that my comments and those of the other reviewers have been addressed, the manuscript has improved over the previous version and, in my opinion, can be accepted.

Author Response

Thank you for your suggestions.

Author Response File: Author Response.docx
