Article
Peer-Review Record

Comparison of Preprocessing Method Impact on the Detection of Soldering Splashes Using Different YOLOv8 Versions

Computation 2024, 12(11), 225; https://doi.org/10.3390/computation12110225
by Peter Klco, Dusan Koniar *, Libor Hargas and Marek Paskala
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 8 October 2024 / Revised: 30 October 2024 / Accepted: 7 November 2024 / Published: 12 November 2024
(This article belongs to the Section Computational Engineering)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The authors applied three YOLOv8 models and two additional image processing techniques to enhance the detection performance of soldering splashes. However, some points should be improved in the revised manuscript.

- The authors should compare the detection performance between the present study and previously published papers to show the novelty of the present study. 

- In particular, their previous study [8] is similar to the present study. The improvement points should be emphasized.

- Were the data split into training, validation, and test datasets? What was the size of each dataset?

- An example of an output image from the trained YOLOv8 model should be shown for better understanding.

- A YOLO model outputs both individual bounding boxes and classification results at the same time. However, the authors compared metrics (recall, precision) related to the classification results. It is necessary to compare whether the trained model detects the soldering splash region well according to the YOLOv8 type and image processing technique.

Comments on the Quality of English Language

- Please check the minor English errors (ungrammatical sentences, ',' usage).

Author Response

Respected reviewer,

Thank you for your comments and remarks. Below is our point-by-point response to your questions.

The authors should compare the detection performance between the present study and previously published papers to show the novelty of the present study. 

In the new version of our article, we created a new section, Related Works, which gives a systematic overview of works from the 2021-2024 time window (approximately 15 works). This overview covers state-of-the-art approaches to PCB quality inspection using CNNs, with a focus on YOLO models and on the preprocessing techniques used to increase the performance of detection algorithms. The reference list was extended with these new works. The novelty of our study is now stated in the Discussion section. Statistical analysis confirmed our assumption: applying a suitable preprocessing method can by itself further increase the performance of the selected object detection algorithm.

In particular, their previous study [8] is similar to the present study. The improvement points should be emphasized.

At first sight, our previous study might look very similar to the present study. However, in the previous study we focused on selecting a proper CNN model for detecting very small soldering splashes on PCBs (we compared different models: YOLOv5, YOLOv8, and Faster R-CNN), and in the experimental phase we chose YOLO. No statistical analysis was done (only a comparison of basic metrics was performed), and the impact of different preprocessing methods was not investigated.

In the current study, we mainly focus on how to further increase the detection performance of the best model candidate, YOLOv8 (selected in the previous study). In the Discussion section, we added information about the statistical analysis and assessment of our results. Such statistical assessment and hypothesis testing can provide valuable information to this field of research, as statistical analysis is not often presented in published studies.

Were the data split into training, validation, and test datasets? What was the size of each dataset?

Information about splitting the dataset into training and validation sets was added to the Materials and Methods section. The dataset was randomly split 80:20 (train:validation), as in several other studies, e.g.:

Mamidi, J. S. S. V., Sameer, S., & Bayana, J. (2022). A Light Weight Version of PCB Defect Detection System Using YOLO V4 Tiny. In 2022 International Mobile and Embedded Technology Conference (MECON). IEEE. https://doi.org/10.1109/mecon53876.2022.9752361
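For illustration, a minimal sketch of such a random 80:20 split in Python is shown below. The directory layout, the *.jpg/*.txt naming convention, and the fixed random seed are assumptions made for the example; they are not taken from the study's actual pipeline.

```python
# Minimal sketch: random 80:20 train/validation split of a YOLO-style dataset
# (images plus .txt label files). Paths, extensions, and the seed are
# illustrative assumptions, not the authors' actual setup.
import random
import shutil
from pathlib import Path

def split_dataset(image_dir: str, out_dir: str,
                  train_ratio: float = 0.8, seed: int = 42) -> None:
    images = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)              # reproducible shuffle
    n_train = int(len(images) * train_ratio)
    subsets = {"train": images[:n_train], "val": images[n_train:]}

    for name, files in subsets.items():
        img_out = Path(out_dir) / name / "images"
        lbl_out = Path(out_dir) / name / "labels"
        img_out.mkdir(parents=True, exist_ok=True)
        lbl_out.mkdir(parents=True, exist_ok=True)
        for img in files:
            shutil.copy(img, img_out / img.name)     # copy the image
            label = img.with_suffix(".txt")          # YOLO annotation next to the image
            if label.exists():
                shutil.copy(label, lbl_out / label.name)

# Example call with hypothetical paths:
# split_dataset("dataset/images", "dataset_split")
```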

An example of an output image from the trained YOLOv8 model should be shown for better understanding.

In the Experimental Results section, a new Figure 11 was added. It illustrates a simple example of how applying CLAHE (the best preprocessing candidate) turns a false negative object into a true positive.
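As an illustration of the preprocessing step itself, the sketch below applies CLAHE with OpenCV before passing the image to a YOLOv8 model through the Ultralytics API. The clip limit, tile grid size, file names, and weights path are hypothetical and may differ from the settings used in the paper.

```python
# Minimal sketch: CLAHE contrast enhancement prior to YOLOv8 inference.
# clip_limit, tile_grid_size, "pcb_sample.jpg", and "best.pt" are
# illustrative assumptions, not the paper's exact configuration.
import cv2
from ultralytics import YOLO

def apply_clahe(bgr_image, clip_limit: float = 2.0, tile_grid_size=(8, 8)):
    """Equalize local contrast on the L channel of the LAB color space."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid_size)
    l_eq = clahe.apply(l)                            # enhance only the lightness channel
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)

image = cv2.imread("pcb_sample.jpg")                 # hypothetical input image
enhanced = apply_clahe(image)
model = YOLO("best.pt")                              # hypothetical trained weights
results = model.predict(enhanced)                    # detect splashes on the enhanced image
```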

A YOLO model outputs both individual bounding boxes and classification results at the same time. However, the authors compared metrics (recall, precision) related to the classification results. It is necessary to compare whether the trained model detects the soldering splash region well according to the YOLOv8 type and image processing technique.

As we state in the new version of the article, the classification metrics (especially recall) are more important in the given industrial task. On the other hand, we assume that these two types of metrics are often conditionally correlated: an increase in the classification score is also driven by an increase in the localization score. We did not perform this kind of analysis in our study, but it could be the subject of further research.
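To make the relation between the two kinds of metrics concrete, the sketch below counts detection-level true positives by matching predicted boxes to ground-truth boxes with an IoU threshold, then derives precision and recall. The (x1, y1, x2, y2) box format, the greedy matching, and the 0.5 threshold are illustrative assumptions, not the evaluation protocol of the study.

```python
# Minimal sketch: precision/recall with an explicit localization criterion.
# A prediction counts as a true positive only if it overlaps an unmatched
# ground-truth box with IoU >= iou_thr. Box format and threshold are
# illustrative assumptions.
def iou(box_a, box_b) -> float:
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def precision_recall(predictions, ground_truths, iou_thr: float = 0.5):
    matched = set()
    tp = 0
    for pred in predictions:                         # predictions assumed sorted by confidence
        hit = next((i for i, gt in enumerate(ground_truths)
                    if i not in matched and iou(pred, gt) >= iou_thr), None)
        if hit is not None:
            matched.add(hit)
            tp += 1
    fp = len(predictions) - tp
    fn = len(ground_truths) - tp
    precision = tp / (tp + fp) if predictions else 0.0
    recall = tp / (tp + fn) if ground_truths else 0.0
    return precision, recall
```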

Please check the minor English errors (ungrammatical sentences, ',' usage).

Minor English errors were corrected throughout the entire article.

Added or edited parts of the article are highlighted in the text.

Author Response File: Author Response.docx

Reviewer 2 Report

Comments and Suggestions for Authors

It is suggested to include an explicit section dedicated to a systematic review of the literature, adding works reported in the literature that are related to the case study of this paper and fall within the time window 2020 to 2025.

Author Response

Respected reviewer,

Thank you for your comments and remarks. Below is our answer to your suggestion:

“It is suggested to include an explicit section dedicated to a systematic review of the literature, adding works reported in the literature that are related to the case study of this paper and fall within the time window 2020 to 2025.”

In our article, we added a new section called Related Works that summarizes approximately 15 works dedicated to deep learning and CNNs used for PCB quality inspection, or to preprocessing methods that improve detection efficiency. The time window of the presented works ranges from 2021 to 2024.

Also, based on the remarks of another reviewer, we added information to the following parts of the article:

  • We added information about the structure of the dataset (how many images were used for training and validation) to the Materials and Methods section.
  • We added a new figure to the Experimental Results section illustrating how CLAHE improves detection and avoids some false negatives.
  • We added key findings to the Discussion, with a focus on the statistical assessment of our results.

Minor English mistakes were corrected. All changes are highlighted in the new version of the article.

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

The authors revised the manuscript in response to my questions.
