Ground Target Detection and Damage Assessment by Patrol Missiles Based on YOLO-VGGNet
Round 1
Reviewer 1 Report
The manuscript proposes a novel method to evaluate the damage caused by strikes to mobile ground targets using UAVs. The authors also propose a way to assess the damage to individual parts, which is not present in earlier approaches. The authors present a rather detailed evaluation of their method, including comparison to other methods.
The paper is generally very well-written and understandable, the style and grammar are good. The figures and tables are high-quality and informative. I noticed a few mistakes, however:
1. "However, it is hard to be extended to various mobile ground targets." should be "...hard to extend to..."
1.a More importantly, why are these methods hard to extend to mobile targets? The authors provide no justification for this claim.
2. "better understandable" should be "more understandable"
3. Transposed convolution and deconvolution are not the same; rather, transposed convolution, which is often used in deep learning for upsampling, is incorrectly referred to by many as deconvolution.
Some generic comments/questions:
1. Also, I am not convinced that the method the authors present as 'deconvolution' fits either term. It looks like a form of guided backpropagation to me. Some clarification on this point would be recommended. If the authors do call this deconvolution, then some connection to the well-known form of deconvolution (division by the filter in the frequency domain) should be shown; a sketch of the distinction is given after this list.
2. What exactly do the authors mean by online processing? Processing on the UAV's own hardware? In the authors' opinion, is it more efficient to send the images to a remote station or to process them on board the UAV?
3. Providing accuracies as percentages as well in the tables would be nice.
4. Recent military reports claim that the Ukrainian army has successfully used HIMARS decoys built from wood to draw Russian strikes away from the actual weapons. This is basically a manually arranged adversarial attack. Is the proposed method robust to such attacks, and if not, what would the authors propose to make it more robust?
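To make the terminology point in comment 1 concrete, here is a minimal NumPy sketch (our own illustration, not code from the manuscript; the helper names `conv1d_strided` and `conv1d_transposed` are hypothetical). Classical deconvolution inverts a convolution by dividing by the filter in the frequency domain, whereas a transposed convolution only scatters values back through the same weights to upsample: it restores the shape of the input, but not its values.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- 1) Classical deconvolution: division by the filter in the frequency domain ---
signal = rng.normal(size=64)
kernel = np.array([0.6, 0.3, 0.1])                         # blur whose DFT has no zeros
K = np.fft.fft(kernel, signal.size)                        # zero-padded filter spectrum
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * K))     # circular convolution
recovered = np.real(np.fft.ifft(np.fft.fft(blurred) / K))  # true inverse filtering
print("deconvolution recovers the input:", np.allclose(recovered, signal))

# --- 2) Transposed convolution: an upsampling operator, not an inverse ---
def conv1d_strided(x, w, stride=2):
    """Valid-mode strided convolution (cross-correlation), as in a CNN layer."""
    out_len = (x.size - w.size) // stride + 1
    return np.array([np.dot(x[i * stride:i * stride + w.size], w)
                     for i in range(out_len)])

def conv1d_transposed(y, w, stride=2):
    """Transposed convolution: scatter each value back through the same weights,
    producing an upsampled (longer) sequence."""
    out = np.zeros((y.size - 1) * stride + w.size)
    for i, v in enumerate(y):
        out[i * stride:i * stride + w.size] += v * w
    return out

w = rng.normal(size=4)
down = conv1d_strided(signal, w, stride=2)                 # 64 -> 31 samples
up = conv1d_transposed(down, w, stride=2)                  # 31 -> 64 samples again
print("transposed conv restores the shape:", up.size == signal.size)
print("...but not the values:", not np.allclose(up, signal))
```

Whatever the authors' operation is, relating it to one of these two (or to guided backpropagation) would remove the ambiguity.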
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Reviewer 2 Report
I highly recommend following a traditional structure in the paper's writing. After the overall method is explained in general terms in the second section, the steps of the process should be presented one by one. Many findings are presented, but what is the novelty and contribution of the authors? Is it just the application of a known method to a dataset?
A synthetic dataset was used in the study. The limitations of not using real or realistic images have not been adequately examined. An assumptions-and-constraints heading should be added to the method section, and these should also be stated in the results section.
A comparison with various methods should be presented, and the advantages and disadvantages of the proposed or chosen method should be discussed.
It is known that there are many different versions of YOLO, and it is also possible to combine them with VGG and other backbones. Why has no benchmarking against these alternatives been attempted?
The deficiencies in the literature review are evident in the findings, discussion, and conclusion. In this respect, I strongly recommend that competing and similar studies be examined and added as benchmarks.
The publication in its current form contains serious flaws. It might be suitable as a conference paper, but it is not at the level of an article in an SCI-E journal.
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Round 2
Reviewer 2 Report
The paper can be accepted in its present form.