Article

Comparison of Preprocessing Method Impact on the Detection of Soldering Splashes Using Different YOLOv8 Versions

Department of Mechatronics and Electronics, Faculty of Electrical Engineering and Information Technology, University of Zilina, Univerzitna 1, SK-01026 Zilina, Slovakia
* Author to whom correspondence should be addressed.
Computation 2024, 12(11), 225; https://doi.org/10.3390/computation12110225
Submission received: 8 October 2024 / Revised: 30 October 2024 / Accepted: 7 November 2024 / Published: 12 November 2024
(This article belongs to the Section Computational Engineering)

Abstract

Quality inspection of electronic boards during the manufacturing process is a crucial step, especially in the case of specific and expensive power electronic modules. The occurrence of soldering splashes decreases the reliability and degrades the electrical properties of the final products. This paper compares different YOLOv8 models (small, medium, and large) in combination with basic image preprocessing techniques to achieve the best possible performance of the designed algorithm. Contrast-limited adaptive histogram equalization (CLAHE) and image color channel manipulation are used as preprocessing methods. The results show that a suitable combination of YOLOv8 model and preprocessing method leads to an increase in the recall parameter. In our inspection task, recall can be considered the most important metric. The results are supported by a standard two-way ANOVA test.

1. Introduction

Automated optical inspection methods are very often used in industrial practice. It is therefore necessary to ensure constant imaging conditions for visual inspection. Therefore, in many applications, testing chambers are used. The advantage of such solutions is that they ensure the required conditions, especially the homogeneity of the lighting [1,2,3].
The visual inspection system used in our application consists of a testing chamber, which houses a high-resolution camera system and appropriate lighting. The examined objects (specific power electronic boards) move along the conveyor belt. To be able to synchronize the moving objects on the conveyor belt and the imaging of samples for automated inspection, it is necessary to control this process. Control is carried out using PLC (Programmable Logic Controller) devices, which control the movement of the conveyor belt, the operation of position sensors, and the safety of the equipment. The camera position is set so that the image captures the entire inspected object. After imaging, the analysis of the given object is performed. The block diagram of the automated inspection system is shown in Figure 1.
The task of the proposed visual inspection system is to search for soldering splashes. The analysis is performed on the service PC in an environment with an object detection algorithm based on a convolutional neural network [3,4,5,6,7]. The specific board is transported via a conveyor belt into the testing chamber and positioned at a precisely defined place. Immediately after analysis, the board moves further along the conveyor belt to the end of the testing line. At this point, the results of the analysis are accessible to the quality operator, who checks whether any defects are present on the board. Based on these results, the robotic arm either moves the board on for further standard processing or, if a problem is detected, moves it to the repair station.
Our paper focuses on the YOLOv8 object detection algorithm based on convolutional neural networks and on possible ways to increase its detection performance. In our previous research [8], we proposed an initial solution. To increase the object detection performance, we evaluate the impact of various preprocessing methods applied to original images of specific boards, as well as the impact of different YOLOv8 models. In addition, the variability of preprocessing techniques can serve as data augmentation for training our system. As presented in [9], a substantial number of samples must be collected to create a model with reliable results. Collecting large amounts of data is often time-consuming or expensive and can be substituted by data augmentation.

2. Related Works

A novel deep learning approach for automated visual inspection of PCBs using an enhanced Mask R-CNN architecture was presented in [10]. The objective of the proposed system was to enhance the detection and identification of surface-mounted devices (SMDs) on PCBs. The system was tested on a dataset of 748 images and reached a sensitivity of 92% and specificity of 94%. The authors concluded that the GeometryMask system showed notable improvements compared to current PCB inspection techniques, providing a potential automated quality control solution for PCB manufacturing. The authors consider a small number of training samples and a small variability of PCB images to be limiting factors in the research.
The authors of [5] introduced a new defect detection method for PCBs using deep learning technology, aiming to address the limitations of conventional Automated Optical Inspection (AOI) systems. Their model implemented image preprocessing methods such as color normalization and noise filtering to correct variations caused by various AOI machine configurations. They demonstrated an accuracy rate of 95%, a recall rate of 94%, and a fast processing rate (27 ms). The authors admitted that the suggested deep learning model enhances the precision of detecting defects on PCBs, decreases the need for manual re-inspections, and is easily compatible with various AOI machines.
The potential CNNs for identifying soldering defects in PCBs were examined in [11]. The study presented a custom CNN model optimized for fault detection and compared it with the YOLO model. The key advantages of the custom model included an improved average precision score and faster inference time. The authors evaluated six different models and concluded that adding residual blocks and bottleneck layers enhanced accuracy and computational efficiency. The authors also recommend increasing the variety of input data to potentially improve the model’s accuracy.
Ref. [12] introduces an advanced algorithm for detecting defects in PCBs based on the Faster R-CNN framework. The major enhancements included improved anchor creation and revised network structure. Their model has achieved 95.6% average accuracy and 125 ms inference time. Their dataset contained six typical PCB defect categories. The authors consider the upgraded algorithm as a notable progression in the automated detection of defects in PCBs, providing increased accuracy and improved performance for industrial applications.
Researchers in [13] created a unique dataset of 1741 images of PCBs. The image size was 640 × 640 pixels. The study aimed to improve PCB defect detection using YOLO models and evaluated various YOLO versions (v5–v8). YOLOv5n achieved the best performance metrics: a recall of 96% and an accuracy of 95%. The detection speed was 120 frames per second. The authors intend to increase the variability of defects in the dataset and decrease the number of model parameters to improve efficiency in future research.
Enhancing PCB defect detection through an optimized YOLOv8 model was proposed in [14]. The study utilizes defect-focused image creation and multiple sophisticated components to deal with the challenge of identifying small defects on large PCBs. The main improvements included creating images with a focus on defects and the creation of a convolutional block attention module (CBAM). In the evaluation of the YOLOv8 model, the mentioned improvements led to an increased mean average precision score (mAP) and recall across defect types. The tests showed that the model outperformed existing models in precision and recall scores, making it a valuable tool for detecting defects in industrial applications.
CC-YOLO, an enhanced version of YOLOv7 aimed at the identification of surface defects on PCBs was introduced in [15]. The model achieved an accuracy of 98.9%, surpassing the original YOLOv7 model by 2.3%. The key innovations included a custom up-sampling operator—content-aware ReAssembly of Features (CARAFE)—and an enhanced CSPDarknet53 backbone. The presented model outperformed many advanced models in comparative analysis, such as FastRCNN or YOLOv5. Further research could explore applications in practical environments and use in different fields, such as identifying pedestrians and small-part fault diagnosis.
The authors of [16] presented a simple system for identifying faults in PCBs utilizing the YOLOv4 Tiny neural network. The authors used 80% of the dataset for training and 20% for validation purposes. The classification included six kinds of defects: shorts, spurs, mouse bites, missing holes, open circuits, and spurious copper. Detection accuracy for missing holes reached 99%, but the accuracy for other defects ranged from 65% to 83%. In conclusion, the YOLOv4 Tiny model showed improved accuracy in defect detection compared to traditional methods, while operating in real time. The model’s lightweight design makes it ideal for embedded and mobile applications.
The YOLOv7 detection model’s performance in the detection of PCB defects was demonstrated in [17]. Their dataset included 1386 pictures of different types of PCB defects. The results of the study showed better performance in comparison to earlier models. YOLOv7 achieved 98.6% accuracy and 97.8% recall, outperforming the YOLOv5 model (97.7% accuracy and 96.6% recall). The authors recommended the presented model for the automated detection of PCB defects in real-time conditions.
The authors of [18] introduced YOLO-HMC, a model built upon the YOLOv5 framework, to improve PCB defect detection accuracy, specifically for small defects. Their model used a special HorNet structure for high-order spatial feature extraction and multiple convolutional attention modules (MCBAM) for the model’s focus on defect-prone areas. Tests conducted on a dataset containing 693 high-resolution PCB images revealed YOLO-HMC’s superiority. The model achieved a mean average precision (mAP) of 98.6%, outperforming models like Faster R-CNN and YOLOv8 in both detection accuracy and speed, making it suitable for real-time industrial applications.
Finally, works in [19,20] demonstrate that combining CNNs (especially YOLO models) with contrast-limited adaptive histogram equalization (CLAHE) as a preprocessing technique increases the efficiency of specific object detection.

3. Materials and Methods

3.1. YOLOv8

The YOLO model is regarded as the most advanced object detection system available today [21]. This model’s main advantages are detection speed and good accuracy. Having a powerful graphics card allows the detection system to operate in quasi-real time. The YOLO algorithm is a single-stage detection system that consists of a backbone section, a neck section, and a head section [22]. Figure 2 shows the architecture of the YOLOv8 detection algorithm, with an input image of a specific PCB board entered into the backbone section. The input data, such as images or patches, are fed to the backbone section. The backbone is a pre-existing network model that is used to extract valuable features from images. Among the most commonly used are ResNet-50, Darknet53, and VGG16. The neck portion extracts “feature pyramids” to allow for greater generalization across different image sizes and resolutions. Ultimately, the head section executes the last-stage tasks by utilizing anchor boxes and generating the final results: the name of the class, its bounding boxes, and its confidence score.
The algorithm has gone through multiple major revisions since its introduction in 2016. Ultralytics released YOLOv5 in 2020 and YOLOv8 in 2023. YOLOv5 is commonly used in automated object detection, whereas YOLOv8 includes extra features and improvements [23].
Stochastic gradient descent (SGD) is YOLOv8’s default optimization algorithm. However, the authors of the YOLOv8 environment also offer other optimizers: Adam, AdamW, and RMSprop. Although RMSprop is one of the lesser-known optimizers, respected researchers consider it one of the most popular optimization algorithms in the field of machine learning [24]. RMSprop, originally proposed by Geoff Hinton [25], belongs to the optimizers with adaptive learning rates. This optimization algorithm has become more popular in recent years [26], and, therefore, most machine learning frameworks include it.
Let wt be a neural network’s weight coefficient in a discrete-time step t and η a learning rate. Let C denote a differentiable error function with respect to parameters w. The purpose of optimization is the minimization of the expected value of the mentioned function relative to the neural network model’s parameters.
The learning step η is set to 0.001 by default, and the moving average hyperparameter β is set to 0.9. Equation (1) represents the algorithm’s update of the moving average of squared gradient E[g2] in time step t:
E[g^2]_t = \beta \, E[g^2]_{t-1} + (1 - \beta) \left( \frac{\partial C}{\partial w} \right)^2   (1)
Equation (2) describes the weight coefficient’s update in time step t. From (2), we can see that the learning rate value is divided by the square root of the squared gradient, and, therefore, the learning parameter is adapted with respect to the magnitude of the moving average squared gradient:
w_t = w_{t-1} - \frac{\eta}{\sqrt{E[g^2]_t}} \, \frac{\partial C}{\partial w}   (2)
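The two update rules above can be sketched in a few lines of NumPy. This is only a minimal illustration of the RMSprop update, not the YOLOv8 training loop; the scalar objective C(w) = w^2, the epsilon guard, and the step count are illustrative assumptions.

```python
import numpy as np

def rmsprop_step(w, grad, avg_sq, lr=0.001, beta=0.9, eps=1e-8):
    """One RMSprop update following Eqs. (1) and (2).

    avg_sq is the moving average E[g^2]; eps guards against division
    by zero (an implementation detail not shown in Eq. (2)).
    """
    avg_sq = beta * avg_sq + (1.0 - beta) * grad ** 2      # Eq. (1)
    w = w - lr * grad / (np.sqrt(avg_sq) + eps)            # Eq. (2)
    return w, avg_sq

# Minimize the toy objective C(w) = w^2 (gradient 2w) from w = 1.0
w, avg_sq = 1.0, 0.0
for _ in range(2000):
    grad = 2.0 * w
    w, avg_sq = rmsprop_step(w, grad, avg_sq)
print(abs(w))  # approaches the minimum at 0
```

With the default η = 0.001 and β = 0.9, the effective step size stays close to η regardless of the gradient magnitude, which is exactly the adaptive behavior described above.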

3.2. Image Preprocessing Methods

To analyze the efficiency and performance of the quality inspection algorithm, we decided to use contrast and color enhancements of existing image datasets simultaneously in combination with different YOLOv8 versions. The preprocessing method with the most positive effect will be used in the future for input image manipulation. The preprocessing methods are focused mainly on contrast enhancement and color channel manipulation to achieve better highlighting of features for soldering splashes. The authors of [27,28,29] successfully used contrast modification and color channel mixing applied to original image data to increase the efficiency of the classification of medical images with convolutional neural networks. Their image data also contained small objects of interest, qualifying the images into a defined category.

3.2.1. Modification 1: Images Without Preprocessing

Original color images without any preprocessing were captured with a high-resolution USB 3.0 camera, a Basler ace acA4600-10uc (Basler AG, Ahrensburg, Germany), with a resolution of 15 MPix (4608 × 3288 pixels), in a testing chamber with homogeneous light conditions. The color balance (white balance) was corrected using a calibration color table and by updating the R, G, and B gains in the camera. Due to the small area of the objects of interest, a high-resolution camera is needed. Images without any additional preprocessing were used as the reference dataset for assessing the influence of the other preprocessing methods. In the following text, data without preprocessing are denoted as “raw”.
The dataset contains 272 images, and this entire dataset was randomly split into two subsets: the training set (213 images) and the validation set (59 images). The resolution of all images is 640 × 640 pixels, and they contain the objects of interest (splash).

3.2.2. Modification 2: Color Channel Manipulation

Mixing the independent color channels of RGB images can enhance the features of specific objects for their localization. Mácsik et al. [30] constructed a custom 3-channel color image that highlights the features of objects of interest. In their study, the maximal value channel, the green channel, and the grayscale channel were used. Due to some key similarities between their medical images and our images, we decided to use their preprocessing method.
Soldering splashes occur in the image as areas with very high intensity, so the maximal channel contains the maximum of R, G, and B values for each color pixel.
In general, green (G) is the color the human eye is most sensitive to. The G channel also contains less noise than the red or blue channel. The G channel usually has the best contrast.
The grayscale version of the image defines geometrical relations in the image. Grayscale image is obtained using a luminosity color-to-grayscale image algorithm (3):
I_{Gsc} = 0.21R + 0.72G + 0.07B   (3)
Arranging the channels in order: maximal-green-grayscale (MaxGGsc), we can obtain the improved version of the original RGB image (their comparison is shown in Figure 3).
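Assuming an RGB channel order and 8-bit images, the MaxGGsc composition described above can be sketched with plain NumPy; the function name and the tiny test image are illustrative, not part of the original pipeline.

```python
import numpy as np

def max_g_gsc(rgb):
    """Build the MaxGGsc 3-channel image from an RGB uint8 array (H, W, 3).

    Channel 0: per-pixel maximum of R, G, B (splashes are high intensity)
    Channel 1: the green channel (best contrast, least noise)
    Channel 2: luminosity grayscale, Eq. (3)
    """
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    max_ch = np.maximum(np.maximum(r, g), b)
    gray = 0.21 * r + 0.72 * g + 0.07 * b          # Eq. (3)
    out = np.stack([max_ch, g, gray], axis=-1)
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (200, 50, 10)   # one bright reddish pixel
out = max_g_gsc(img)
print(out[0, 0])  # max = 200, green = 50, gray = 0.21*200 + 0.72*50 + 0.07*10
```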

3.2.3. Modification 3: Contrast Limited Adaptive Histogram Equalization

As mentioned above, soldering splashes appear as high-intensity areas in different regions of the printed circuit board (PCB). They most often occur in areas covered by copper. The main idea of the histogram equalization technique is to increase the contrast between the soldering splash and its surroundings (background).
Ordinary histogram equalization [31] can be applied directly when the image degradation is uniform across the entire image (e.g., an underexposed image), where the distribution of pixel values is similar throughout the image. Histogram equalization is a very fast method, implemented as a point-by-point look-up operation. In the case of color images, histogram equalization applied separately to the R, G, and B channels (which are then merged back into the final color image) leads to hue shifts. The second approach is therefore more correct: the color image is transformed into the HSI color model, and only the intensity (I) channel is equalized. A comparison of these two methods is shown in Figure 4. Note that increasing the contrast also increases the noise amplitude.
When the image contains regions that are significantly lighter or darker than most of the image, the contrast in those regions will not be sufficiently enhanced (Figure 5). Adaptive histogram equalization (AHE) is an image pre-processing technique used to improve contrast in images. It computes several histograms, each corresponding to a distinct section of the image, and uses them to redistribute the luminance values of the image. It is therefore suitable for improving the local contrast and enhancing the definitions of edges in each region of an image. However, AHE tends to overamplify noise in relatively homogeneous regions of an image.
A variant of adaptive histogram equalization called contrast-limited adaptive histogram equalization (CLAHE) prevents this effect by limiting the amplification [32]. The same problem with the usage of the RGB color model with CLAHE appears as in the use of ordinary histogram equalization: CLAHE is used just for the intensity channel (when using, e.g., the HSI color model). The comparison between the original RGB image and the CLAHE version is shown in Figure 6.

3.3. Statistical Methods

The listed statistical methods were used to assess the impact of different preprocessing methods on object detection performance. The Shapiro–Wilk test confirmed the Gaussian distribution of the assessed variables. Two-way ANOVA is a statistical tool for comparing the differences between groups where the data have been split by two independent variables (also called factors). In our research, the image preprocessing method and the YOLOv8 model version are the independent variables. The precision and recall scores were considered the dependent variables. A two-way ANOVA was performed to evaluate potential statistical differences among the recall and precision values of the Raw, CLAHE, and MaxGGsc groups. The sample size was 10 for all groups in the performed tests.
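For illustration, a balanced two-way ANOVA with replication can be computed directly from the sums of squares. The layout (3 preprocessing methods × 3 model versions × 10 seeded runs) mirrors the design above, but the data here are synthetic, not the measurements from this study.

```python
import numpy as np
from scipy import stats

def two_way_anova(data):
    """Balanced two-way ANOVA with replication.

    data: array of shape (a, b, n) -- a levels of factor A (e.g.
    preprocessing method), b levels of factor B (model version),
    n replicates per cell. Returns F and p for A, B, and A x B.
    """
    a, b, n = data.shape
    grand = data.mean()
    mean_a = data.mean(axis=(1, 2))      # marginal means of factor A
    mean_b = data.mean(axis=(0, 2))      # marginal means of factor B
    cell = data.mean(axis=2)             # per-cell means
    ss_a = b * n * np.sum((mean_a - grand) ** 2)
    ss_b = a * n * np.sum((mean_b - grand) ** 2)
    ss_ab = n * np.sum((cell - mean_a[:, None] - mean_b[None, :] + grand) ** 2)
    ss_err = np.sum((data - cell[..., None]) ** 2)
    df_a, df_b, df_ab, df_err = a - 1, b - 1, (a - 1) * (b - 1), a * b * (n - 1)
    out = {}
    for name, ss, df in (("A", ss_a, df_a), ("B", ss_b, df_b), ("AxB", ss_ab, df_ab)):
        f = (ss / df) / (ss_err / df_err)
        out[name] = (f, stats.f.sf(f, df, df_err))
    return out

# Synthetic 3 x 3 x 10 design with a strong factor-A effect
rng = np.random.default_rng(1)
recall = rng.normal(72, 2, size=(3, 3, 10))
recall[2] += 14                      # third preprocessing method lifts recall
res = two_way_anova(recall)
print(res["A"][1] < 0.001)           # factor A is clearly significant
```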

4. Results

The detection algorithm based on a convolutional neural network was implemented using the Python programming language (Python interpreter 3.9) and PyTorch machine learning framework (Pytorch 1.13). Statistical analysis was performed with TIBCO Statistica™ 14.0.0 software. All neural network training and testing were performed on the workstation with a 6-core AMD Ryzen 5600 CPU, dual NVIDIA GeForce RTX 3060 GPU, 96 GB RAM, and Windows 10 operating system (OS). All necessary machine learning libraries were installed and run via the Anaconda package management tool.
We have compared three different methods of data preprocessing for the task of automated object detection:
  • Raw method (reference method with no additional preprocessing)
  • MaxGGsc method
  • CLAHE method
For each method, we implemented three different versions of the YOLOv8 detection algorithm: small, medium, and large (YOLOv8s, YOLOv8m, YOLOv8l). These versions differ in the number of trainable parameters and computation complexity.
Table 1 shows the main parameters of the different YOLOv8 versions: the number of trainable parameters, the model’s computational complexity in terms of floating-point operations (FLOPs), and the detection latency on a CPU (for images with a 640 × 640 resolution).
For statistical comparison of methods and reproducibility, each combination of method and model version was run 10 times with a controlled seed parameter, which is the main initialization parameter of the software pseudo-random number generator (RNG). The seed value varied across the 10 runs within each preprocessing method and model version, while any two model versions or preprocessing methods were always simulated with the same set of seed values. Therefore, a total of 90 (3 × 3 × 10) software simulations were performed.
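The controlled-seed setup can be sketched as follows. In the actual PyTorch experiments, torch.manual_seed would be fixed as well; it is omitted here so the sketch needs only the standard library and NumPy.

```python
import random
import numpy as np

def seed_everything(seed):
    """Fix the RNGs so repeated simulations with the same seed
    produce identical pseudo-random draws."""
    random.seed(seed)
    np.random.seed(seed)

seed_everything(42)
a = np.random.rand(3)
seed_everything(42)
b = np.random.rand(3)
print(np.array_equal(a, b))  # True: same seed, same draws
```

Running two model versions with the same list of seeds makes their score differences attributable to the models rather than to initialization noise.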
Table 2 compares the main parameters of descriptive statistics for the Raw, CLAHE, and MaxGGsc processing methods mentioned above. For simplification, this table contains only the results of the large version of the YOLOv8 detection algorithm, since this model achieved the best overall score. The presented results for YOLOv8l were consistent with the results of small and medium detection models (YOLOv8s and YOLOv8m) in terms of recall score differences.
From Table 2, we can see differences among the groups in terms of recall score. In comparison with the Raw method, the mean recall of the MaxGGsc method was about 4 percentage points higher (76.5% vs. 72.3%), and the mean recall of the CLAHE method was about 14 percentage points higher (86.2% vs. 72.3%). The best model in the CLAHE group reached a 90% recall score, while the best model in the MaxGGsc group reached 80.5%.
Interestingly, when analyzing the precision score, there are only negligible differences between the individual groups. However, the recall score is more important than the precision score in our application of soldering splash detection, since a false negative (a missed splash that passes inspection) is more costly than a false positive.
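The trade-off discussed above follows directly from the definitions of the two metrics; the counts in this example are illustrative, not taken from Table 2.

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); recall = TP / (TP + FN).

    A missed splash is a false negative, so recall directly measures
    how many real defects escape the inspection.
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# 86 splashes found, 5 spurious boxes, 14 splashes missed
p, r = precision_recall(86, 5, 14)
print(round(p, 3), round(r, 3))  # 0.945 0.86
```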
Table 3 shows the main parameters obtained from ANOVA analysis applied to the prepared data. These results confirmed, at a level of significance 0.001, that both the image preprocessing method and variant of the YOLOv8 model have an impact on the recall score. Therefore, there are significant differences in mean recall scores between applied preprocessing methods and between explored variants of YOLOv8 models.
The YOLOv8 model version and image preprocessing method will be referred to as factors in the following paragraphs. All applied models and preprocessing methods were tested on the validation set, and the entire dataset contained only images of specific PCB boards with soldering splashes. We formulated the null hypotheses H0, H2, and H4 and the alternative hypotheses H1 and H3:
H0: 
There is no significant difference in mean recall score between groups of YOLOv8 model versions (YOLOv8s, YOLOv8m, YOLOv8l).
H1: 
There is a significant difference in mean recall score between groups of YOLOv8 model versions (YOLOv8s, YOLOv8m, YOLOv8l).
H2: 
There is no significant difference in mean recall score between groups of image preprocessing methods applied in the YOLOv8 model version (Raw, MaxGGsc, CLAHE).
H3: 
There is a significant difference in mean recall score between groups of image preprocessing methods applied in the YOLOv8 model version (Raw, MaxGGsc, CLAHE).
To find the interaction between the mentioned factors (model version and preprocessing method), we also formulated hypothesis H4.
H4: 
There is no interaction effect between the version of the YOLOv8 model and the type of image preprocessing method (in terms of recall and precision parameters).
In Figure 7 we can see a comparison of mean recall and precision scores between different YOLOv8 model versions. The YOLOv8l model achieved the highest recall and precision scores. Further, the type of YOLOv8 model affects recall more than precision. Considering the results of the two-way ANOVA analysis, we reject hypothesis H0 and accept the alternative hypothesis H1: there is a significant difference in mean recall score between groups of YOLOv8 detection models at a significance level of 0.001.
Figure 8 shows a comparison of the mean recall and precision scores when applying different image preprocessing methods in the application of automated soldering splash detection using the YOLOv8 algorithm. The application of the CLAHE preprocessing method helped to achieve the best recall score. However, there were minimal differences between different preprocessing methods in evaluating the precision score. Considering the results of the two-way ANOVA analysis, we reject hypothesis H2 and accept the alternative hypothesis H3: There is a significant difference in mean recall score between groups of image preprocessing methods applied in the YOLOv8 detection model at a significance level of 0.001.
Figure 9 shows the recall score across different image preprocessing methods when the type of YOLOv8 model represents the independent variable. With the CLAHE preprocessing method, the differences in recall scores between YOLOv8 models were the largest. These results indicate that the effect of the preprocessing method on the recall score depends on the type of YOLOv8 model, at a significance level of 0.001.
Figure 10 shows recall scores across different YOLOv8 model versions when the type of image preprocessing method represents the independent variable. For the YOLOv8l soldering splash detection results, the differences in recall scores between image preprocessing methods were the largest. These results indicate that the effect of the YOLOv8 model variant on the recall score depends on the image preprocessing method, at a significance level of 0.001. Considering the results of the two-way ANOVA analysis, we reject hypothesis H4 that there is no interaction effect between the type of YOLOv8 model and the type of image preprocessing method. Therefore, we conclude that there is an interaction effect between the type of YOLOv8 model and the type of image preprocessing method at a significance level of 0.001.
The example depicted in Figure 11 demonstrates how the application of CLAHE preprocessing prevents false negative detection so that the soldering splash is correctly detected.

5. Discussion

In this section, we discuss the results from several perspectives. The main goal of our research was to compare different image preprocessing methods in the application of automated soldering splash detection and to objectively quantify any potential differences between these methods. Two different image preprocessing methods were tested across three different variants of the YOLOv8 detection algorithm. All preprocessing methods were compared against the raw images on which no preprocessing was performed. Our results have shown that the application of both the CLAHE and MaxGGsc preprocessing methods can increase the detection algorithm’s recall score in all tested YOLOv8 models. Presented statistical analysis has confirmed that there are statistically significant differences in investigated parameters of object detection algorithms.
Motivated by a particular industrial task, our research focused on the identification of solder splashes over the accurate localization of single splashes. From this point of view, “classification metric parameters” were preferred over “localization metrics”.
From a computational point of view, the overall computer simulation and testing required more than 2.945 × 10^15 multiplication and addition operations. The high computational cost of this task was the main reason why we compared only three variants of the YOLOv8 detection algorithm. However, as the related work in Section 2 shows, the YOLO detection algorithm belongs to the state-of-the-art automated object detectors [33]. In terms of detection speed and accuracy, the YOLO model can be considered an optimal solution. Although newer YOLO versions appeared during the ongoing research, the large community and support from Ultralytics were the biggest advantages of the YOLOv8 detection algorithm.
Considering the presented results, we recommend replacing the input images of the training and validation sets with images processed by the CLAHE method when training a YOLOv8 model. The statistical analysis in our research proved that this modification improves the recall score in the application of automated soldering splash detection by the YOLOv8 model. The application of the MaxGGsc method also proved useful in comparison with raw images. In this case, the statistical analysis likewise confirmed a reduction in the number of false negative detections, although not to the same extent as with the CLAHE method.
In this article, all our conclusions were confirmed by statistical analysis which represents the added value of our research. We also used hypothesis testing to provide evidence for our claims.

6. Conclusions

In the presented research, we propose the application of the CLAHE and MaxGGsc image preprocessing methods in automated soldering splash detection algorithms for specific electronic boards. Although the mentioned methods are not new, the statistical analysis and comparison of these methods in the application of automated soldering splash detection is novel.
All performed software tests of preprocessing methods and object detection models assumed a specific type of electronic board and single object category—solder splash—as introduced in [34]. Original images of PCB boards used for neural network training and testing were created under standardized and controllable light conditions. These assumptions were based on requirements from industry-oriented research of automated object detection based on machine learning [8].
The image preprocessing methods presented in our research are relatively easy to implement with the help of publicly accessible software libraries such as OpenCV. The image preprocessing needs to be done only once, in the process of input data acquisition, and does not need to be repeated during neural network training or testing.
Based on the presented statistical analysis and performed software simulations, the implementation of the image preprocessing method CLAHE or MaxGGsc in the process of automated optical inspection (AOI) can be useful for increasing the recall score and therefore reducing the number of false negative detections. Therefore, the application of the mentioned preprocessing method can potentially lead to an increase in the metric parameters of object detection, such as recall and precision, not only in the presented field of specific electronic board defects detection but also in other areas of research.
In future research, we would like to verify the potential benefits of the tested image preprocessing methods in other object detection algorithms, such as YOLOv9 and Faster R-CNN. In addition to the already tested image preprocessing methods, the application of other preprocessing methods can be another subject of future research.

Author Contributions

Conceptualization, P.K., D.K., and L.H.; methodology, P.K.; validation, M.P.; formal analysis, D.K. and L.H.; investigation, P.K., D.K., and L.H.; data curation, P.K.; writing—original draft preparation, P.K., D.K., and L.H.; writing—review and editing, M.P. All authors have read and agreed to the published version of the manuscript.

Funding

The results in this project were supported by grant VEGA 1/0563/23: Research and development of visual inspection algorithms for manufacturing process quality increasing of power semiconductor modules and by Grant No. APVV-20-0500: Research of methodologies to increase the quality and lifetime of hybrid power semiconductor modules.

Data Availability Statement

The images and the necessary annotation files of the training and validation sets for each image preprocessing method, as well as the statistical analysis results, are available in the preprocessing methods repository, https://github.com/Peet62/Preprocessing_methods_ANOVA (accessed on 6 November 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Parakontan, T.; Sawangsri, W. Development of the Machine Vision System for Automated Inspection of Printed Circuit Board Assembly. In Proceedings of the 2019 3rd International Conference on Robotics and Automation Sciences (ICRAS), Wuhan, China, 1–3 June 2019; pp. 244–248.
  2. Wu, W.-Y.; Hung, C.-W.; Yu, W.-B. The development of automated solder bump inspection using machine vision techniques. Int. J. Adv. Manuf. Technol. 2013, 69, 509–523.
  3. Wang, L.; Hou, C.; Zheng, J.; Cao, P.; Wang, J. Automated coplanarity inspection of BGA solder balls by structured light. Microelectron. J. 2023, 137, 105802.
  4. Sha, J.; Wang, J.; Hu, H.; Ye, Y.; Xu, G. Development of an accurate and automated quality inspection system for solder joints on aviation plugs using fine-tuned YOLOv5 models. Appl. Sci. 2023, 13, 5290.
  5. Chen, I.C.; Hwang, R.C.; Huang, H.C. PCB defect detection based on deep learning algorithm. Processes 2023, 11, 775.
  6. Ling, Q.; Isa, N.A.M. Printed circuit board defect detection methods based on image processing, machine learning and deep learning: A survey. IEEE Access 2023, 11, 15921–15944.
  7. Liao, S.; Huang, C.; Liang, Y.; Zhang, H.; Liu, S. Solder joint defect inspection method based on ConvNeXt-YOLOX. IEEE Trans. Compon. Packag. Manuf. Technol. 2022, 12, 1890–1898.
  8. Klco, P.; Koniar, D.; Hargas, L.; Dimova, K.P.; Chnapko, M. Quality inspection of specific electronic boards by deep neural networks. Sci. Rep. 2023, 13, 20657.
  9. Guilhaumon, C.; Hascoët, N.; Chinesta, F.; Lavarde, M.; Daim, F. Data Augmentation for Regression Machine Learning Problems in High Dimensions. Computation 2024, 12, 24.
  10. Lian, J.; Wang, L.; Liu, T.; Ding, X.; Yu, Z. Automatic visual inspection for printed circuit board via novel Mask R-CNN in smart city applications. Sustain. Energy Technol. Assessments 2021, 44, 101032.
  11. Akhtar, M.B. The Use of a Convolutional Neural Network in Detecting Soldering Faults from a Printed Circuit Board Assembly. HighTech Innov. J. 2022, 3, 1–14.
  12. Niu, J.; Huang, J.; Cui, L.; Zhang, B.; Zhu, A. A PCB Defect Detection Algorithm with Improved Faster R-CNN. In Proceedings of the 3rd International Conference on Big Data & Artificial Intelligence & Software Engineering (ICBASE 2022), Guangzhou, China, 21–23 October 2022.
  13. Ancha, V.K.; Sibai, F.N.; Gonuguntla, V.; Vaddi, R. Utilizing YOLO Models for Real-World Scenarios: Assessing Novel Mixed Defect Detection Dataset in PCBs. IEEE Access 2024, 12, 100983–100990.
  14. Supong, T.; Kangkachit, T.; Jitkongchuen, D. PCB Surface Defect Detection Using Defect-Centered Image Generation and Optimized YOLOv8 Architecture. In Proceedings of the 2024 5th International Conference on Big Data Analytics and Practices (IBDAP), Bangkok, Thailand, 23–25 August 2024; pp. 44–49.
  15. Yi, X.; Song, X. CC-YOLO: An Improved PCB Surface Defect Detection Model for YOLOv7. In Proceedings of the 2024 5th International Conference on Computer Vision, Image and Deep Learning (CVIDL), Zhuhai, China, 19–21 April 2024; pp. 1311–1315.
  16. Mamidi, J.S.S.V.; Sameer, S.; Bayana, J. A Light Weight Version of PCB Defect Detection System Using YOLO V4 Tiny. In Proceedings of the 2022 International Mobile and Embedded Technology Conference (MECON), Noida, India, 10–11 March 2022.
  17. Monika, C.S. YOLO V7: Advancing Printed Circuit Board Defect Detection and the Quality Assurance. In Proceedings of the 2023 Global Conference on Information Technologies and Communications (GCITC), Bangalore, India, 1–3 December 2023; pp. 1–5.
  18. Yuan, M.; Zhou, Y.; Ren, X.; Zhi, H.; Zhang, J.; Chen, H. YOLO-HMC: An Improved Method for PCB Surface Defect Detection. IEEE Trans. Instrum. Meas. 2024, 73, 2001611.
  19. Chen, R.-C.; Dewi, C.; Zhuang, Y.-C.; Chen, J.-K. Contrast Limited Adaptive Histogram Equalization for Recognizing Road Marking at Night Based on YOLO Models. IEEE Access 2023, 11, 92926–92942.
  20. Hendrawan, A.; Gernowo, R.; Nurhayati, O.D. Contrast Stretching and Contrast Limited Adaptive Histogram Equalization for Recognizing Vehicles Based on YOLO Models. In Proceedings of the 2023 International Conference on Technology, Engineering, and Computing Applications (ICTECA), Semarang, Indonesia, 20–22 December 2023; pp. 1–6.
  21. Lou, H.; Duan, X.; Guo, J.; Liu, H.; Gu, J.; Bi, L.; Chen, H. DC-YOLOv8: Small-size object detection algorithm based on camera sensor. Electronics 2023, 12, 2323.
  22. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
  23. Jocher, G.; Changyu, L.; Hogan, A.; Yu, L.; Rai, P.; Sullivan, T. Ultralytics/YOLOv5: Initial Release. Zenodo, 2020. Available online: https://zenodo.org/records/3908560 (accessed on 6 November 2024).
  24. Karpathy, A. A Peek at Trends in Machine Learning. 2017. Available online: https://karpathy.medium.com/a-peek-at-trends-in-machine-learning-ab8a1085a106 (accessed on 6 November 2024).
  25. Hinton, G. Neural Networks for Machine Learning. Online course; retrieved September 2018.
  26. Wilson, A.C.; Roelofs, R.; Stern, M.; Srebro, N.; Recht, B. The marginal value of adaptive gradient methods in machine learning. In Proceedings of the 31st Annual Conference on Neural Information Processing Systems (NIPS), Long Beach, CA, USA, 4–9 December 2017.
  27. Pao, S.-I.; Lin, H.-Z.; Chien, K.-H.; Tai, M.-C.; Chen, J.-T.; Lin, G.-M. Detection of Diabetic Retinopathy Using Bichannel Convolutional Neural Network. J. Ophthalmol. 2020, 2020, 9139713.
  28. Ahmad, A.; Mansoor, A.B.; Mumtaz, R.; Khan, M.; Mirza, S.H. Image processing and classification in diabetic retinopathy: A review. In Proceedings of the 2014 5th European Workshop on Visual Information Processing (EUVIP), Paris, France, 10–12 December 2014.
  29. Kusunose, S.; Shinomiya, Y.; Hoshino, Y. Exploring Effective Channels in Fundus Images for Convolutional Neural Networks. In Proceedings of the 7th International Workshop on Advanced Computational Intelligence and Intelligent Informatics (IWACIII 2021), Beijing, China, 31 October–3 November 2021.
  30. Macsik, P.; Pavlovicova, J.; Kajan, S.; Goga, J.; Kurilova, V. Image preprocessing-based ensemble deep learning classification of diabetic retinopathy. IET Image Process. 2023, 18, 807–828.
  31. Shih, F.Y. Image Enhancement. In Image Processing and Pattern Recognition: Fundamentals and Techniques; IEEE: Piscataway, NJ, USA, 2010; pp. 40–62.
  32. MathWorks. Contrast Limited Adaptive Histogram Equalization. 2024. Available online: https://www.mathworks.com/help/visionhdl/ug/contrast-adaptive-histogram-equalization.html (accessed on 6 November 2024).
  33. Ebayyeh, A.A.R.M.A.; Mousavi, A. A Review and Analysis of Automatic Optical Inspection and Quality Monitoring Methods in Electronics Industry. IEEE Access 2020, 8, 183192–183271.
  34. Huang, L.; Yao, C.; Zhang, L.; Luo, S.; Ying, F.; Ying, W. Enhancing computer image recognition with improved image algorithms. Sci. Rep. 2024, 14, 13709.
Figure 1. Block diagram of the visual inspection system.
Figure 2. Architecture of the YOLOv8 neural network algorithm.
Figure 3. Comparison of original RGB image (left, soldering splashes are highlighted with red ovals) and MaxGGsc color version of the original image (right).
Figure 4. Comparison of original RGB image (left), ordinary histogram equalization provided in the RGB model (middle), and ordinary histogram equalization performed only on the intensity channel in the HSI model (right).
Figure 5. Comparison of ordinary histogram equalization in the HSI model (left) and CLAHE version of image (right): contrast limitation in CLAHE leads to a uniform look for the image (without over- or underexposed regions).
Figure 6. Comparison of original RGB image (left, soldering splashes are highlighted with red ovals) and CLAHE color version of the original image (right, only the I channel from the HSI version of the image is equalized, while the H and S channels remain unchanged).
Figure 7. Comparison of mean recall and precision scores between different YOLOv8 models.
Figure 8. Comparison of mean recall and precision scores across different image preprocessing methods.
Figure 9. Influence of the YOLOv8 model variant on recall scores when applying different image preprocessing methods.
Figure 10. Influence of the image preprocessing methods on recall scores across different YOLOv8 model versions.
Figure 11. Example of correct soldering splash detection after applying the CLAHE preprocessing method: the left image shows false negative detection (annotated object in the green box), while the right image shows both the annotated object (green box) and the detected object (red box).
Table 1. Parameters of different YOLOv8 versions.
Model      Number of Parameters   FLOPs          Detection Latency [ms]
YOLOv8s    11.2 × 10⁶             28.6 × 10⁹     128.4
YOLOv8m    25.9 × 10⁶             78.9 × 10⁹     234.7
YOLOv8l    43.7 × 10⁶             165.2 × 10⁹    375.2
Table 2. Descriptive statistics.
Metrics Score/Group        Mean    Median   Min     Max     Lower Quartile   Upper Quartile   Std. Dev.   Std. Error
Recall, RAW group          0.723   0.728    0.690   0.746   0.715            0.732            0.018       0.006
Recall, MaxGGsc group      0.765   0.763    0.709   0.805   0.743            0.793            0.032       0.010
Recall, CLAHE group        0.862   0.859    0.830   0.900   0.845            0.880            0.025       0.008
Precision, RAW group       0.946   0.951    0.885   0.981   0.933            0.962            0.030       0.010
Precision, MaxGGsc group   0.943   0.950    0.873   1.000   0.927            0.969            0.039       0.012
Precision, CLAHE group     0.933   0.925    0.890   0.981   0.919            0.951            0.027       0.008
Table 3. Multivariate tests of significance. Statistically significant p-values are marked in bold. df = degrees of freedom, F = F-distribution value, * denotes the interaction between model and method.
Effect                  Test    Wilks' Lambda Value   F           Effect df   Error df   p-Value
Intercept               Wilks   0.000501              79,772.39   2           80         p < 1 × 10⁻⁷
YOLOv8 model            Wilks   0.469243              18.39       4           160        p < 1 × 10⁻⁷
Preprocessing method    Wilks   0.242299              41.26       4           160        p < 1 × 10⁻⁷
Model * method          Wilks   0.722168              3.53        8           160        0.000856