Article

An Improved Product Defect Detection Method Combining Centroid Distance and Textural Information

1 School of Electronic Information and Electrical Engineering, Chengdu University, Chengdu 610106, China
2 School of Mechanical Engineering, Chengdu University, Chengdu 610106, China
3 School of Mechanical and Electrical Information, Chengdu Agricultural College, Chengdu 611130, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(19), 3798; https://doi.org/10.3390/electronics13193798
Submission received: 8 August 2024 / Revised: 17 September 2024 / Accepted: 23 September 2024 / Published: 25 September 2024

Abstract
To address the high mismatching rate of existing methods and their sensitivity to noise and grayscale transformation, this paper proposes an improved product defect detection method combining centroid distance and textural information. After image preprocessing, an improved fuzzy C-means clustering method is used to extract closed contour features. A contour centroid distance descriptor is then used for bidirectional matching to obtain robust coarse matching contour pairs. After the coarse matching pairs are screened, a refined matching result is obtained with an improved local binary pattern operator. Finally, good and defective industrial products are distinguished by checking whether the number of fine matching pairs equals the number of template contours. Four experiments were designed: closed contour extraction, anti-rotation matching, anti-gray difference matching, and defect detection on three different products. The results show that the improved method withstands rotation transformation and gray differences well, its detection accuracy exceeds 90%, and its detection time is at most 362.6 ms, which meets the requirements of real-time industrial inspection.

1. Introduction

Defects such as scratches, spots, or holes can adversely affect the appearance and comfort of a product [1,2], and defect detection is an effective way to improve product quality [3]. Traditional manual visual inspection still plays an important role, but in application scenarios such as hazardous environments and fast assembly lines it can no longer meet the requirements for detection efficiency, real-time performance, and long-term operation. Machine vision, as a non-contact and non-destructive inspection means, has been widely used in manufacturing [4]. Defect detection methods based on machine vision usually construct features and recognition algorithms for specific objects to complete defect classification. Common defect features include gray features, shape and size features, and textural features [5]. According to the type of defect feature, defect detection methods (also called image matching methods) are usually divided into grayscale-based methods and feature-based methods [6].
Grayscale-based image matching methods use the statistical characteristics of gray levels and gray-level deviations to distinguish defective parts from good parts. Common statistical features of a gray histogram include the maximum, minimum, median, mean, entropy, variance, L1 norm, and normalized correlation coefficient. These parameters are simple to calculate and remain constant under image translation and rotation. Li et al. [7] proposed fabric defect detection based on saliency histogram features to automatically detect defects in both non-patterned and patterned fabrics. However, these gray histogram statistics only reflect the probability of each gray level and cannot capture the spatial distribution of pixels. The Gray Level Co-occurrence Matrix (GLCM) describes texture based on the spatial relationship and distribution of pixel gray levels. A variety of textural features, such as contrast, entropy, uniformity, and energy, can be extracted by selecting different reference directions, quantizing gray levels, scanning the image, and normalizing the probability distribution, after which the arrangement structure of the image can be analyzed [8]. Zhang et al. [9] proposed an algorithm combining LBP and GLCM, used to extract the local and overall information of surface defects on fabric images, respectively. However, the LBP algorithm builds the histogram of the defect image from spatial neighborhood pixel coding, which may lose information needed for defect recognition.
Feature-based defect detection methods achieve matching by extracting robust common features from the two images. Commonly used point-feature-based image matching methods include the SIFT-OCT algorithm [10], the Speeded Up Robust Features (SURF) algorithm [11], and the Scale-Invariant Feature Transform (SIFT) matching algorithm [12]. The SIFT-OCT algorithm constructs a Gaussian scale space that is less affected by speckle noise to obtain feature points and adopts multidimensional feature vector descriptors. However, its matching computation is time-consuming, and the loss of detailed information causes positioning errors, resulting in a large final matching error. The SURF algorithm [11] obtains extreme points through the determinant of the Hessian matrix and computes approximate Haar wavelet responses at different scales. However, its search efficiency is relatively low, and mismatching is obvious when the image noise is strong or the gray-level difference between the matched objects is large. SIFT is a descriptor for image-based matching and recognition: by extracting distinctive invariant features from images, reliable feature matching across different viewing angles can be achieved, and the extracted features are invariant to image rotation, zoom, 3D transformation, illumination variation, and superimposed noise. Olvera et al. [12] proposed an SAR Scale-Invariant Feature Transform algorithm in which the gradient calculation in Harris corner detection is replaced by a ratio operation and combined with an improved SIFT descriptor to obtain SAR-SIFT features, but localization fails in areas with strong speckle noise, resulting in mismatching. Dunderdale et al. [13] used SIFT descriptors and random forest classifiers to identify defective PV modules. However, SIFT has high image quality requirements, which limits its application.
In recent years, artificial intelligence technology has greatly promoted the development of industrial production, and neural networks, as an important branch of artificial intelligence, have been widely applied [14]. CNNs are a very effective method for automatic defect detection and classification using machine vision [15,16] and are now used in many fields, such as industrial production and electronic components. Nguyen et al. [17] proposed a CNN-based detection system to classify defects in casting products. However, CNN models only perform well when large, high-quality datasets are available. Kim et al. [18] proposed a method for the surface defect classification of liquid crystal display panels and realized automatic defect classification based on a CNN in an industrial production process. However, as CNN architectures become deeper, training on large-scale defect datasets demands considerable computational power. YOLO is an object recognition and localization algorithm based on a deep neural network; it is fast, has a low background false detection rate, generalizes well, and can be applied to real-time defect detection systems [19]. Adibhatla et al. [20] used a YOLO/CNN model to detect PCB defects with an accuracy of 98.79%, although the types of defects this method can detect are limited and need to be extended. Lv et al. [21] proposed an active learning method for steel surface defect detection based on YOLOv2; the model is more efficient, but at the cost of precision. Jing et al. [22] proposed an improved YOLOv3 model that uses the K-means algorithm to cluster labeled data; the experimental results show good fabric defect detection performance, but the real-time performance needs improvement. YOLOv4, as a regression-based detection method, has good detection speed, but its detection accuracy for small targets needs to be improved. To detect cracks in iron materials, Deng et al. [23] proposed a cascaded YOLOv4 (C-YOLOv4) network, which shows good robustness and crack detection accuracy. Wang et al. successively proposed a strip surface defect detection model based on Yolo-SAGC [24] and a weld defect detection model based on Yolo-MSAPF [25]; building on YOLOv5, the SAGC and MSAPF strategies were designed for these two models, respectively, and the average accuracy was improved on self-built datasets. However, high detection efficiency often comes at the expense of accuracy and requires large computational resources. Especially for YOLOv4 and later versions, although the various optimization strategies and network structure improvements significantly raise detection speed and accuracy, the computing resource requirements also increase significantly.
To solve the above problems, this paper presents an improved product defect detection method combining centroid distance and textural information; the technical flow chart is shown in Figure 1. After image preprocessing, closed contour features are extracted with an improved fuzzy C-means clustering segmentation method, and robust coarse matched contour pairs are obtained by bidirectional matching with the contour centroid distance descriptor. After the coarse matching pairs are screened, a refined matching result is obtained with an improved local binary pattern operator. Finally, by comparing whether the number of fine matching pairs is consistent with the number of template contours, good and defective industrial products are distinguished. The proposed method uses the contour centroid distance descriptor and the improved local binary pattern operator as the salient features; these features are invariant to rotation and gray-level differences, fast to compute, and impose no special requirements on the image.

2. Contour Matching Algorithm Combining Centroid Distance and Textural Information

Textural information reflects the common intrinsic properties of surfaces and contains important information about their organization and their relationship with the surrounding environment. Texture analysis extracts textural feature parameters via image processing in order to obtain a quantitative or qualitative description of a texture, which is important for image classification, object detection, and recognition tasks [26]. To obtain feature matching that remains robust under image rotation and grayscale changes, a product image contour matching method based on centroid distance and textural information is proposed in this paper. Feature descriptors are constructed from stable contour features extracted from the images, and the Euclidean distance between the contour descriptors of the two images is calculated to obtain preliminary matching results. The textural information in the neighborhood of the points with the maximum and minimum centroid distance on each closed contour is then computed to obtain the final fine matching result, realizing closed contour matching that resists rotation changes and gray differences. The complete contour matching algorithm comprises image preprocessing, contour feature extraction, rough matching based on the contour centroid distance descriptor, and fine matching based on textural features (an improved local binary pattern).

2.1. Image Preprocessing

Based on the research of common image preprocessing methods, Block-Matching and 3D filtering (BM3D) [27] and Contrast-Limited Adaptive Histogram Equalization (CLAHE) [28] are used as image preprocessing methods in this paper.
(1) Block-Matching and 3D filtering (BM3D). This is an effective denoising method that combines the wavelet transform approach with local methods; it preserves image contours and other details while eliminating noise. The basic idea is to find similar blocks in the image and match them through a non-local operation to obtain a 3D block matrix, apply Wiener filtering to this matrix, and then inverse-transform the result to obtain a denoised image [27]. Its specific steps are (1) using a hard threshold to obtain relatively clean image blocks for statistical estimation; (2) using a Wiener filter to denoise all image signals in the transform domain; and (3) computing a weighted average of the estimates of overlapping image blocks.
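For reference, a minimal sketch of this denoising step in Python follows, assuming the third-party bm3d package and an illustrative noise level; the paper's own experiments were run in MATLAB, so this is not the authors' implementation.

import numpy as np
import bm3d  # assumed third-party package implementing BM3D

def denoise_bm3d(gray: np.ndarray, noise_sigma: float = 25.0) -> np.ndarray:
    """Denoise an 8-bit grayscale image; noise_sigma is assumed to be on the 0-255 scale."""
    denoised = bm3d.bm3d(gray.astype(np.float64), sigma_psd=noise_sigma)
    return np.clip(denoised, 0, 255).astype(np.uint8)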
(2) Contrast-Limited Adaptive Histogram Equalization (CLAHE). When the contrast and overall brightness of an image are low, some feature information becomes blurred after filtering. Therefore, to improve sharpness and make features visually clearer, the contrast of the filtered image is enhanced in this section. Two common image enhancement methods are Adaptive Histogram Equalization (AHE) and Contrast-Limited Adaptive Histogram Equalization (CLAHE). Both calculate the histogram of a local area of the image and change the distribution of image brightness to readjust the contrast within that local range. However, AHE amplifies the noise in each region when enhancing the image. As an improved histogram equalization method, CLAHE introduces a contrast limit to avoid excessive noise enhancement and effectively controls noise amplification [28]. Therefore, this paper selects the CLAHE method for image enhancement.
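A minimal CLAHE sketch using OpenCV follows; the clip limit and tile grid are illustrative assumptions, not the paper's settings.

import cv2

def enhance_clahe(gray, clip_limit=2.0, tile_grid=(8, 8)):
    # clipLimit bounds the local contrast amplification, which is what keeps
    # CLAHE from over-enhancing noise, unlike plain AHE.
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return clahe.apply(gray)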

2.2. Closed Contour Extraction Based on Improved Fuzzy Clustering

There are many methods for image contour extraction, including the gradient method, the template matching method, and the transform domain method [29]. However, these methods often struggle to extract the overall contour of the target and will generate false edges, causing the calculation of the algorithm to be too resource-intensive [30]. To solve this problem, an improved image clustering segmentation method is proposed. Firstly, the edge detection of segmented images is carried out by means of a Canny operator, and then the region contour tracking of the edge image is carried out to obtain the clear and complete closed contour features of the image. The specific steps are as follows:
(1) Improved fuzzy C-means clustering image segmentation. An image clustering method is generally adopted to achieve image segmentation, and clustering methods based on fuzzy theory are more natural, especially the FCM algorithm as a soft clustering method [31]. This algorithm starts with n points X = {x1, x2, …, xn} divided into c fuzzy categories, and the clustering center of each category is obtained to minimize the objective function, whose objective function J(U,V) is defined as [31]
$$ J(U, V) = \sum_{k=1}^{n} \sum_{i=1}^{c} (u_{ik})^m (d_{ik})^2 $$
where U = [u_ik] is the fuzzy classification matrix; V = [v_i] is the cluster center matrix, with v_i the center of class i; m is the weighting exponent; and d_ik = ||x_k − v_i|| is the Euclidean distance between the center of class i and the k-th sample point. However, the Euclidean distance measures only the absolute distance between points in multidimensional space and cannot truly reflect the distance between each sample point and the cluster center in the clustering algorithm. To enhance the applicability of FCM to image information and accurately reflect the true cluster centers of the image, the Mahalanobis distance is used in place of the Euclidean distance, which can be expressed as [31]
$$ d_{ij} = \sqrt{(X_i - X_j)^{\mathrm{T}} S^{-1} (X_i - X_j)} $$
where X_i and X_j are vectors composed of the m indexes of the i-th and j-th samples, respectively, and S is the covariance matrix of the sample population.
(2) Using the Canny operator to extract the contour edges. Edge detection is realized by detecting gray changes in the neighborhood of each pixel and using the behavior of the first- or second-order directional derivative near an edge, which introduces a gradient operation. The edge detection operator based on the optimization algorithm proposed by Canny has a good signal-to-noise ratio and detection accuracy, so it is used here to extract the contours of the image segmented by the improved fuzzy C-means method.
(3) Contour tracking. Contour tracking obtains the region contour by point-by-point tracking according to the connectivity of image boundary points. Region contour tracking is carried out on the binary image obtained in step (2) to collect all contour information in the image and remove open contours and contours that are too small. All profile information that meets the requirements is obtained through the preceding operations; a minimal sketch of the whole pipeline follows.
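The sketch below illustrates steps (1) to (3) under stated assumptions: grayscale pixel intensities are used as the clustering features, and the Canny thresholds and minimum contour area are illustrative values, not the authors' settings.

import cv2
import numpy as np

def fcm_mahalanobis(X, c=3, m=2.0, n_iter=50, eps=1e-10):
    """Fuzzy C-means on data X (n, d) with Mahalanobis distance; returns centers and memberships."""
    n, d = X.shape
    S = np.atleast_2d(np.cov(X, rowvar=False))              # covariance of the sample population
    S_inv = np.linalg.inv(S + eps * np.eye(d))
    U = np.random.dirichlet(np.ones(c), size=n)             # (n, c) fuzzy membership matrix
    for _ in range(n_iter):
        Um = U ** m
        V = (Um.T @ X) / (Um.sum(axis=0)[:, None] + eps)    # cluster centers (c, d)
        diff = X[:, None, :] - V[None, :, :]                # (n, c, d) sample-to-center offsets
        d2 = np.einsum('ncd,de,nce->nc', diff, S_inv, diff) + eps  # squared Mahalanobis distances
        U = 1.0 / d2 ** (1.0 / (m - 1))                     # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return V, U

def extract_closed_contours(gray, c=3, min_area=50.0):
    X = gray.reshape(-1, 1).astype(np.float64)              # step (1): cluster pixel intensities
    _, U = fcm_mahalanobis(X, c=c)
    labels = U.argmax(axis=1).reshape(gray.shape)           # hard labels from fuzzy memberships
    seg = (labels * (255 // max(c - 1, 1))).astype(np.uint8)
    edges = cv2.Canny(seg, 100, 200)                        # step (2): Canny edge detection
    # Step (3): border following (contour tracking); discard open or tiny outlines by area.
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    return [ct for ct in contours if cv2.contourArea(ct) >= min_area]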

2.3. Rough Matching Based on Contour Center Distance Feature Description Operator

The closed contour matching method extracts features based on all the closed contour information of two images, and then judges the similarity of contour curves according to the relationship between the features. The similarity between features can be understood as the degree of similarity between images themselves, meaning that the accuracy of contour matching is determined by contour descriptors. Common contour description methods include centroid distance, complex coordinates, curvature function, cumulative angle, etc., among which centroid distance can reflect both the local and global features of closed contours and has good performance in terms of robustness and information retention [32]. Therefore, the centroid distance is used to describe the closed contour in this section. In order to satisfy the scale invariance and rotation invariance, a feature descriptor of the contour center distance is designed in this section, and coarse matching contour pairs are obtained by using the nearest neighbor registration method.
The centroid distance of a contour point is its distance to the contour centroid and is invariant under translation and rotation. Suppose the total number of closed contours obtained after preprocessing and contour extraction is N, and X = (x_i, y_i) is any point on contour curve Q; then its centroid distance is defined as [33]
$$ r(i) = \sqrt{(x_i - x_c)^2 + (y_i - y_c)^2}, \quad i = 1, 2, \ldots, M $$
where (xc, yc) is the centroid coordinate of the contour curve, which can be expressed as [33]
$$ x_c = \frac{1}{M} \sum_{i=1}^{M} x_i, \qquad y_c = \frac{1}{M} \sum_{i=1}^{M} y_i $$
where M is the total number of contour points. The centroid distances of the contour points are sorted in descending order into a row vector, which can be expressed as [33]
$$ R = [\, r(1), \; r(2), \; \ldots, \; r(M) \,] $$
This paper divides the centroid distance vector into four blocks, with interval width = maximum centroid distance / number of feature blocks. The contour points falling into each feature block are then counted by interval, and the counts are processed to obtain a contour centroid distance feature descriptor that satisfies scale invariance. After this calculation, the feature descriptors A and B of images A and B are obtained, which can be expressed as [33]
$$ A = \begin{pmatrix} Q(1) \\ Q(2) \\ \vdots \\ Q(W) \end{pmatrix} = \begin{pmatrix} K_{11} & K_{12} & K_{13} & K_{14} \\ K_{21} & K_{22} & K_{23} & K_{24} \\ \vdots & \vdots & \vdots & \vdots \\ K_{W1} & K_{W2} & K_{W3} & K_{W4} \end{pmatrix}, \qquad B = \begin{pmatrix} P(1) \\ P(2) \\ \vdots \\ P(L) \end{pmatrix} = \begin{pmatrix} H_{11} & H_{12} & H_{13} & H_{14} \\ H_{21} & H_{22} & H_{23} & H_{24} \\ \vdots & \vdots & \vdots & \vdots \\ H_{L1} & H_{L2} & H_{L3} & H_{L4} \end{pmatrix} $$
where K_ij and H_ij are the counts of processed centroid distance points falling into each statistical interval. After the contour centroid distance feature descriptors A and B are computed, the Euclidean distance is calculated between the descriptors row by row, which can be expressed as [33]
$$ d_{ij} = \sqrt{(K_{i1} - H_{j1})^2 + (K_{i2} - H_{j2})^2 + (K_{i3} - H_{j3})^2 + (K_{i4} - H_{j4})^2} $$
$$ D_{AB} = \begin{pmatrix} d_{11} & d_{12} & \cdots & d_{1L} \\ d_{21} & d_{22} & \cdots & d_{2L} \\ \vdots & \vdots & \ddots & \vdots \\ d_{W1} & d_{W2} & \cdots & d_{WL} \end{pmatrix}, \qquad D_{BA} = \begin{pmatrix} d_{11} & d_{12} & \cdots & d_{1W} \\ d_{21} & d_{22} & \cdots & d_{2W} \\ \vdots & \vdots & \ddots & \vdots \\ d_{L1} & d_{L2} & \cdots & d_{LW} \end{pmatrix} $$
where i and j index the rows of A and B, respectively. After the descriptors are obtained, the point with the smallest Euclidean distance may still not be an accurate match because of distortion and noise in the actual image. The nearest neighbor distance ratio (NNDR) [34] of the contour centroid distance descriptor matrix is therefore calculated, which can be expressed as [34]
$$ \mathrm{NNDR} = \frac{d_{\min}}{d_{n\text{-}\min}} $$
where d_min is the nearest neighbor distance and d_n-min is the second nearest neighbor distance. If the NNDR meets the threshold requirement, the nearest neighbor points are taken as a correctly matched contour pair; otherwise, they are considered mismatched.
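A minimal NumPy sketch of this coarse matching stage follows; the contours are assumed to be (M, 2) point arrays, and the NNDR threshold of 0.8 is an illustrative assumption.

import numpy as np

def centroid_distance_descriptor(contour, n_bins=4):
    """(M, 2) boundary points -> counts of centroid distances per statistical interval."""
    centroid = contour.mean(axis=0)
    r = np.linalg.norm(contour - centroid, axis=1)
    # Interval width = maximum centroid distance / number of feature blocks.
    edges = np.linspace(0.0, r.max() + 1e-9, n_bins + 1)
    hist, _ = np.histogram(r, bins=edges)
    return hist.astype(float)

def bidirectional_nndr_match(desc_a, desc_b, nndr_thresh=0.8):
    """desc_a: (W, 4), desc_b: (L, 4) -> list of coarse-matched index pairs (i, j)."""
    def one_way(A, B):
        D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # Euclidean distance matrix
        matches = {}
        for i, row in enumerate(D):
            order = np.argsort(row)
            # Accept the nearest neighbor only if NNDR = d_min / d_n-min passes the threshold.
            if len(order) > 1 and row[order[0]] < nndr_thresh * row[order[1]]:
                matches[i] = int(order[0])
        return matches
    ab = one_way(desc_a, desc_b)
    ba = one_way(desc_b, desc_a)
    # Bidirectional matching: keep only pairs confirmed in both directions.
    return [(i, j) for i, j in ab.items() if ba.get(j) == i]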

2.4. Fine Matching Based on Improved Local Binary Pattern

To obtain accurate matching results, the two points corresponding to the maximum and minimum centroid distance of each coarsely matched contour are selected as feature points in this section. The LBP operator is computed at these two feature points, and this important low-level feature represents the local textural information of the image used to obtain the final matching result [35]. The traditional local binary pattern operator obtains the binary relationship between the center pixel and its neighborhood pixels by thresholding on gray value; the binarized neighborhood values are then encoded according to fixed coding rules, and the histogram of encoded values serves as the textural feature. The traditional LBP operator is expressed as [35]
$$ \mathrm{LBP} = \sum_{p=1}^{8} s(r_p - r_c)\, 2^{p-1} $$

$$ s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases} $$
where r_p is the gray value of the p-th neighborhood pixel, r_c is the gray value of the central pixel, and s(x) is the thresholding function.
In the calculation, an eight-pixel neighborhood of size 3 × 3 is selected first, and the gray value of the center pixel is compared with each neighborhood pixel. Each neighborhood position is assigned 0 or 1 according to the thresholding function: positions with gray values greater than or equal to the center pixel are assigned 1, positions with smaller values are assigned 0, and eight binary values are obtained. The encoded LBP value is then obtained according to the coding rules. The thresholding diagram of the original LBP operator is shown in Figure 2.
As shown in Figure 2, the biggest flaw of the original LBP operator is that it covers only a small region within a fixed radius. When there are textural features of different sizes and frequencies, the original LBP operator cannot meet the actual matching requirements. In order to adapt to different types of textural features and meet the requirement of rotation invariance, this paper uses a circular neighborhood instead of a square neighborhood. A series of initially defined LBP values are obtained by continuously rotating the circular neighborhood, so that the LBP feature has rotation invariance. Finally, the minimum value is taken as the LBP value of the circular domain. The improved LBP operator can be expressed as [35]
$$ \mathrm{LBP}_{P,R}^{\,r} = \min \{\, \mathrm{ROR}(\mathrm{LBP}_{P,R},\, k) \mid k = 0, 1, \ldots, P-1 \,\} $$
where LBP_{P,R}^{r} represents the locally rotation-invariant LBP feature and ROR(x, k) represents a cyclic right shift of the P-bit binary number x by k bits (k < P). The specific implementation process is shown in Figure 3, which shows a total of eight LBP modes; the number below each operator is its LBP value. After the rotation-invariant treatment, the rotation-invariant LBP value is finally obtained as 15; that is, the rotation-invariant LBP mode corresponding to the eight LBP modes is 00001111.
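The min-over-rotations operation can be implemented directly on the P-bit code; the short sketch below reproduces the Figure 3 example, where every rotation of the pattern reduces to 00001111 = 15.

def rotation_invariant_lbp(code: int, P: int = 8) -> int:
    """Minimum over all cyclic right shifts (ROR) of a P-bit LBP code."""
    best = code
    for _ in range(P - 1):
        code = ((code >> 1) | ((code & 1) << (P - 1))) & ((1 << P) - 1)  # ROR by one bit
        best = min(best, code)
    return best

assert rotation_invariant_lbp(0b11110000) == 0b00001111  # == 15, as in Figure 3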
Figure 4 shows the refined matching model of the improved LBP operator defined in this section. The centroid of the contour is taken as the center of the circular domain, the radius R of the circular domain is the distance from the feature point to the contour centroid, and P sampling points are taken at equal intervals on the circle. The position of each sampling point can be expressed as [35]
$$ x_p = x_c + R \cos\!\left(\frac{2\pi p}{P}\right), \qquad y_p = y_c + R \sin\!\left(\frac{2\pi p}{P}\right) $$
where (x_c, y_c) is the center point of the neighborhood and (x_p, y_p) is a sampling point, whose coordinates follow from the formula above. However, these coordinates are not always integers, and the gray value at a non-integer sampling point is obtained via bilinear interpolation:

$$ f(x, y) \approx \begin{bmatrix} 1-x & x \end{bmatrix} \begin{bmatrix} f(0,0) & f(0,1) \\ f(1,0) & f(1,1) \end{bmatrix} \begin{bmatrix} 1-y \\ y \end{bmatrix} $$

Then, the LBP values corresponding to the maximum and minimum centroid distance points of each coarsely matched contour pair are calculated. When the LBP value of a closed contour in template image A equals that of a contour in image B, a finely matched contour pair is obtained. Finally, the maximum and minimum centroid distance points of the precisely matched contour pair are selected as the final precise matching feature points.
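A sketch of this sampling step follows; image boundary handling is omitted, and the image is assumed to be indexed as img[y, x].

import numpy as np

def sample_circle(img, xc, yc, R, P=8):
    """Gray values at P points on a circle of radius R about (xc, yc), bilinearly interpolated."""
    values = []
    for p in range(P):
        x = xc + R * np.cos(2.0 * np.pi * p / P)
        y = yc + R * np.sin(2.0 * np.pi * p / P)
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        dx, dy = x - x0, y - y0
        # Bilinear interpolation over the 2 x 2 pixel neighborhood, as in f(x, y) above.
        v = ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x0 + 1]
             + (1 - dx) * dy * img[y0 + 1, x0] + dx * dy * img[y0 + 1, x0 + 1])
        values.append(v)
    return np.array(values)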
In summary, the overall process of the product defect detection method combining centroid distance and textural information is shown in Figure 5. Specific steps can be summarized as follows:
Step 1: Calculate the contour centroid distance feature descriptors of the closed contours in template image A and image B (sliding window size = template image size);
Step 2: Calculate the Euclidean distance matrix of image A relative to B, and obtain the optimal value according to NNDR for each row;
Step 3: Calculate the Euclidean distance matrix of image B relative to A, and obtain the optimal value according to NNDR for each row;
Step 4: Take the intersection of the results of step 2 and step 3 as the correct matching contour pair, and connect the two images with a robust centroid point;
Step 5: For the coarse matched contour pairs obtained in step 4, take the points with the maximum and minimum centroid distance as centers and construct the refined matching model of the improved LBP operator. If the LBP values of the feature points are the same, the coarse matching result provided by the centroid distance is confirmed to be correct; the contour pair is added to the correct matching set, yielding the final fine matching set of image contours.
Step 6: Compare the number of precisely matched contour pairs obtained in step 5 with the number of contours in template image A. If the two are equal, the image B content in the sliding window is the same as the template and the matching result is “Match Successfully”. Otherwise, move to the next sliding window and repeat from step 1; if the image to be matched has been fully traversed without a successful match, the matching result is “Match Failed”.
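Putting the steps together, a high-level sketch of the per-window decision is shown below. The helper fine_match_lbp, standing for the improved-LBP comparison at the maximum and minimum centroid distance points, is hypothetical; the other functions are the sketches given earlier, with contours assumed to be (M, 2) point arrays.

import numpy as np

def detect_window(template_contours, window_contours, fine_match_lbp):
    """Return True ("Match Successfully") when every template contour finds a fine match."""
    desc_a = np.array([centroid_distance_descriptor(ct) for ct in template_contours])  # step 1
    desc_b = np.array([centroid_distance_descriptor(ct) for ct in window_contours])
    coarse = bidirectional_nndr_match(desc_a, desc_b)          # steps 2-4
    fine = [(i, j) for i, j in coarse                          # step 5: improved-LBP check
            if fine_match_lbp(template_contours[i], window_contours[j])]
    return len(fine) == len(template_contours)                 # step 6: count comparison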

3. Experimental Design and Result Analysis

This section quantitatively evaluates the product defect detection method from three perspectives: the extraction results of the closed contour, the matching pairs, and the matching accuracy rate. The test objects include character models, digital models, and ratchet workpieces, and the image size is 640 × 480 pixels. Uniform lighting conditions were selected for the experiment [36], and the light intensity was selected as E = 50 lux, 75 lux, 100 lux, 125 lux, and 150 lux to simulate the light environment from dark to light. The experimental test included closed contour extraction, image matching with different rotation angles, image matching with gray difference, and defect detection experiments using different types of workpieces (including good product matching tests and bad product matching tests).

3.1. Closed Contour Extraction Experiment

The accurate extraction of closed contours directly affects the product matching results, so contour extraction experiments must be carried out on different workpieces and working conditions. This process checks whether the closed contour of the target can be accurately extracted and whether the influence of noise can be eliminated as much as possible when extracting edge contour points. In this section, a character model, a digital model, and a ratchet workpiece were selected as test objects. First, the images were preprocessed using BM3D and CLAHE, and improved fuzzy C-means was used to segment them. The closed contours were then obtained by edge extraction and contour tracing. The contour extraction results are shown in Table 1 and show that image preprocessing plus closed contour extraction based on improved fuzzy C-means clustering can effectively extract the closed contour features of the object under test.

3.2. Matching Experiment of Anti-Rotation Transformation

The test objects (a character model, a digital model, and a ratchet workpiece) were rotated in steps of 10° to generate a series of image sets with different angle differences to be registered. The rotation angle range was [0°, 360°], and each rotated image was matched against the template image. The matching results are shown in Table 2, in which only part of the fine matching results based on the improved LBP operator are given; the left image is the template image A, the right image is the rotated image B, and the green number indicates the rotation angle. Table 2 also gives the contour matching accuracy at each rotation angle, where correct rate = number of correct matching pairs / number of template contours. The anti-rotation experiments in Table 2 show that, for the character model, digital model, and ratchet workpiece, the correct contour matching rate reaches 100% at every rotation angle, indicating that the proposed method has good anti-rotation characteristics.

3.3. Matching Experiment against Gray Difference

The brightness of an image strongly influences the contour extraction results [37]. In product matching, images of the same product should match correctly under different brightness levels. To test the anti-gray difference matching performance of the proposed algorithm, four groups of comparison experiments were designed. An arbitrary image at E = 50/75/100/125/150 lux was selected as the template image, and mild Gaussian noise with a mean of 0 and a variance of 0.01 was added to the images at the other light intensities to generate image sets with different gray differences to be matched. The anti-gray difference capability of the algorithm was verified by comparing the fine matching accuracy based on the improved LBP operator. The experimental results are shown in Table 3, in which the character model/digital model/ratchet at E = 50 lux serves as the template image and the noise is added to the images to be matched at the other light intensities. Based on improved LBP matching, the fine matching accuracy of the three groups was 100%, showing that the proposed method matches well despite grayscale differences.

3.4. Defect Detection Experiment

The defect detection experiment was run on a Windows 10 system with an AMD Ryzen CPU (3.60 GHz) using MATLAB 2022b. The method was expected to match good products correctly, both at the template's light intensity and at other light intensities, when the product changes angle or position or there is background interference. In addition, the method should correctly detect defective products, such as those with incomplete boundaries or interfering occlusions. The defect detection results are shown in Table 4; the product matching experiment used templates with light intensities E = 50 lux/75 lux/100 lux/125 lux/150 lux, and part of the detection results are illustrated. The average matching times under different light intensities are shown in Table 5: the average detection times for the character model, digital model, and ratchet workpiece were 303 ms, 267 ms, and 362.6 ms, respectively, so the method can achieve real-time detection.
Five uniform illumination conditions with different light intensities were set in the defect detection experiment, verifying correct matching of good products and detection of defects under changing image brightness. The performance of the method was evaluated using the Accuracy, Precision, and Recall over one hundred matching runs [19]. These metrics are expressed as follows:
$$ \mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} $$

$$ \mathrm{Precision} = \frac{TP}{TP + FP} $$

$$ \mathrm{Recall} = \frac{TP}{TP + FN} $$
where TP and FN are the numbers of defects correctly identified and missed, respectively, FP is the number of non-defects incorrectly identified as defects, and TN is the number of non-defects correctly identified. The detection accuracy results for the character model, digital model, and ratchet workpiece are shown in Table 6.
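The three metrics follow directly from the four counts, as in the small sketch below.

def detection_metrics(tp, tn, fp, fn):
    """Accuracy, Precision, and Recall from defect/non-defect counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return accuracy, precision, recall

# Example: detection_metrics(90, 89, 10, 11) -> approximately (0.895, 0.900, 0.891)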
From the results of the defect detection experiments in Table 6, it is not difficult to see the following:
(1) The mean Accuracy for the character model, digital model, and ratchet workpiece under the different light intensity conditions was 90.18%, 92.30%, and 94.08%, respectively; the mean Precision was 91.28%, 91.10%, and 93.04%; and the mean Recall was 91.01%, 93.65%, and 94.54%. The average detection accuracy stays above 90%, which proves the validity of the proposed method.
(2) At light intensities E = 75 lux/100 lux/125 lux, the detection accuracy is higher than under bright (E = 150 lux) and dark (E = 50 lux) conditions. When the illumination is too dark or too bright, the contour features obtained by the same preprocessing and segmentation cannot be completely consistent, which biases both the coarse matching of the contour descriptor and the fine matching results. However, because the proposed method matches on contour features, changes in light intensity have limited influence on the matching accuracy, and high accuracy is still achieved.
(3) A statistical analysis was performed on the hundred-run matching accuracy of the character model, digital model, and ratchet workpiece. The standard deviations of Accuracy, Precision, and Recall were 0.0218, 0.0183, and 0.0230, and the 95% confidence intervals were [87.72%, 96.25%], [88.09%, 95.26%], and [88.40%, 97.42%], respectively. A confidence interval indicates the range within which the true value of a parameter falls with the stated probability and thus represents the reliability of the measured value. The mean Accuracy, Precision, and Recall all lie within the corresponding confidence intervals, indicating that the defect detection results are reliable.
(4) When there are multiple external interference sources, good products are detected accurately as long as the interference sources are far from the product under test. However, if an interference source is too close, its edge outline appears in the search box, causing good products to be misjudged as defective, as shown in Figure 6. For defects of the incomplete-edge or contour-occlusion type, the method detects them accurately because the differences are obvious, and the average Recall remains at 92%.

4. Conclusions

In this paper, a product defect detection method combining centroid distance and textural information is proposed. The algorithm's stages of image preprocessing, closed contour feature extraction based on improved fuzzy C-means clustering, bidirectional rough matching based on the contour centroid distance descriptor, fine matching based on the improved local binary pattern operator, and final matching by comparing the number of fine matches with the number of template contours are designed in detail, and the detection of industrial products such as character models, digital models, and ratchet workpieces is realized. The method was tested via closed contour extraction, anti-rotation matching, anti-gray difference matching, and defect matching on three different products. The results show that (1) the improved method has good closed contour extraction performance, anti-rotation transform properties, and anti-gray difference properties; (2) for an image size of 640 × 480, the detection time is at most 362.6 ms, so the model can realize real-time detection in industrial applications; (3) the detection accuracy exceeds 90%, ensuring relatively stable detection quality and a high defect detection rate; and (4) the proposed algorithm suits products with relatively simple shapes, and the search box should be kept as close as possible to the product under test, so as to avoid a search box so large that edge outlines of background interference enter it and cause good products to be misjudged as defective.
The proposed algorithm has shortcomings when dealing with complex production environments, complex parts, and more refined production requirements. The current work does not yet realize online classification of surface defects or measurement of defect size, which are important directions for future research. Defect classification based on digital image processing theory retains advantages in computational efficiency and detection accuracy, while deep learning methods such as CNNs and YOLO have obvious advantages in transfer learning; future work will therefore combine the proposed method with these approaches to build an online system for detecting, classifying, and measuring the surface defects of industrial products.

Author Contributions

Conceptualization, H.W. and L.H.; Formal analysis, X.L., L.H. and Y.B.; Investigation, F.S. and Q.L.; Methodology, H.W. and T.Y.; Software, Y.B. and Q.L.; Validation, H.W. and X.L.; Visualization, F.S. and T.Y.; Writing—original draft, H.W. and X.L.; Writing—review and editing, H.W. All authors have read and agreed to the published version of the manuscript.

Funding

The authors wish to thank the Sichuan Regional Innovation Cooperation Project (2024YFHZ0147) and the 2024 Sichuan National College Student Entrepreneurship Practice Project (202411079004S), which supported the research presented in this paper.

Data Availability Statement

The original contributions presented in this study are included in this article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Ren, Z.H.; Fang, F.Z.; Yan, N.; Wu, Y. State of the Art in Defect Detection Based on Machine Vision. Int. J. Precis. Eng. Manuf.-Green Technol. 2022, 9, 661–691.
2. Rao, X.S.; Zhang, F.H.; Lu, Y.J.; Luo, X.C.; Chen, F.M. Surface and subsurface damage of reaction-bonded silicon carbide induced by electrical discharge diamond grinding. Int. J. Mach. Tool. Manu 2020, 154, 103564.
3. Ravimal, D.; Kim, H.; Koh, D.; Hong, J.H.; Lee, S.K. Image-Based Inspection Technique of a Machined Metal Surface for an Unmanned Lapping Process. Int. J. Precis. Eng. Manuf.-Green Technol. 2020, 7, 547–557.
4. Ali, M.; Lun, A.K. A cascading fuzzy logic with image processing algorithm-based defect detection for automatic visual inspection of industrial cylindrical object’s surface. Int. J. Adv. Manuf. Tech. 2019, 102, 81–94.
5. Badmos, O.; Kopp, A.; Bernthaler, T.; Schneider, G. Image-based defect detection in lithium-ion battery electrode using convolutional neural networks. J. Intell. Manuf. 2020, 31, 885–897.
6. Jia, L.M.; Wang, Y. Research on Industrial Production Defect Detection Method Based on Machine Vision Technology in Industrial Internet of Things. Trait. Signal 2022, 39, 2061–2068.
7. Li, M.; Wan, S.H.; Deng, Z.M.; Wang, Y.J. Fabric defect detection based on saliency histogram features. Comput. Intell. 2019, 35, 517–534.
8. Pushpalatha, K.; Gowda, A.; Ramesh, D. Identification of Similar Looking Bulk Split Grams using GLCM and CGLCM Texture Features. Int. J. Comput. Appl. 2017, 167, 30–36.
9. Zhang, L.; Jing, J.; Zhang, H. Fabric Defect Classification Based on LBP and GLCM. J. Fiber Bioeng. Inform. 2015, 8, 81–89.
10. Schwind, P.; Suri, S.; Reinartz, P.; Siebert, A. Applicability of the SIFT operator to geometric SAR image registration. Int. J. Remote Sens. 2010, 31, 1959–1980.
11. Hsu, W.Y.; Lee, Y.C. Rat Brain Registration Using Improved Speeded Up Robust Features. J. Med. Biol. Eng. 2017, 37, 45–52.
12. Olvera, R.D.P.; Zeron, E.M.; Ortega, J.C.P.; Arreguin, J.M.R.; Hurtado, E.G. A Feature Extraction Using SIFT with a Preprocessing by Adding CLAHE Algorithm to Enhance Image Histograms. In Proceedings of the 2014 International Conference on Mechatronics, Electronics and Automotive Engineering, Cuernavaca, Mexico, 18–21 November 2014.
13. Dunderdale, C.; Brettenny, W.; Clohessy, C.; Dyk, E.E.V. Photovoltaic defect classification through thermal infrared imaging using a machine learning approach. Prog. Photovolt. Res. Appl. 2020, 28, 177–188.
14. Wang, G.Q.; Chen, M.S.; Lin, Y.; Tan, X.H.; Zhang, C.Z.; Yao, W.X.; Gao, B.H.; Li, K.; Li, Z.H.; Zeng, W.D. Efficient multi-branch dynamic fusion network for super-resolution of industrial component image. Displays 2024, 82, 102633.
15. Du, W.Z.; Shen, H.Y.; Fu, J.H.; Zhang, G.; Shi, X.K.; He, Q. Automated detection of defects with low semantic information in X-ray images based on deep learning. J. Intell. Manuf. 2021, 32, 141–156.
16. Zhang, Y.X.; You, D.Y.; Gao, X.D.; Wang, C.Y.; Li, Y.J.; Gao, P.P. Real-time monitoring of high-power disk laser welding statuses based on deep learning framework. J. Intell. Manuf. 2020, 31, 799–814.
17. Nguyen, T.P.; Choi, S.; Park, S.J.; Park, S.H.; Yoon, J. Inspecting Method for Defective Casting Products with Convolutional Neural Network (CNN). Int. J. Precis. Eng. Manuf.-Green Technol. 2021, 8, 583–594.
18. Kim, M.; Lee, M.; An, M.; Lee, H. Effective automatic defect classification process based on CNN with stacking ensemble model for TFT-LCD panel. J. Intell. Manuf. 2020, 31, 1165–1174.
19. Liu, R.Q.; Huang, M.; Gao, Z.M.; Cao, Z.Y.; Cao, P. MSC-DNet: An efficient detector with multi-scale context for defect detection on strip steel surface. Measurement 2023, 209, 112467.
20. Adibhatla, V.A.; Chih, H.C.; Hsu, C.C.; Cheng, J.; Abbod, M.F.; Shieh, J.S. Defect Detection in Printed Circuit Boards Using You-Only-Look-Once Convolutional Neural Networks. Electronics 2020, 9, 1547.
21. Lv, X.M.; Duan, F.J.; Jiang, J.J.; Fu, X.; Gan, L. Deep Active Learning for Surface Defect Detection. Sensors 2020, 20, 1650.
22. Jing, J.F.; Zhuo, D.; Zhang, H.H.; Liang, Y.; Zheng, M. Fabric defect detection using the improved YOLOv3 model. J. Eng. Fiber Fabr. 2020, 15, 1558925020908268.
23. Deng, H.; Cheng, J.; Liu, T.; Cheng, B.; Sun, Z. Research on Iron Surface Crack Detection Algorithm Based on Improved YOLOv4 Network. J. Phys. Conf. Ser. 2020, 1631, 012081.
24. Wang, G.Q.; Zhang, C.Z.; Chen, M.S.; Lin, Y.C.; Tan, X.H.; Kang, Y.X.; Wang, Q.; Zeng, W.D.; Zhao, W.W. A high-accuracy and lightweight detector based on a graph convolution network for strip surface defect detection. Adv. Eng. Inform. 2024, 59, 102280.
25. Wang, G.Q.; Zhang, C.Z.; Chen, M.S.; Lin, Y.C.; Tan, X.H.; Liang, P.; Kang, Y.X.; Zeng, W.D.; Wang, Q. Yolo-MSAPF: Multiscale Alignment Fusion with Parallel Feature Filtering Model for High Accuracy Weld Defect Detection. IEEE Trans. Instrum. Meas. 2023, 72, 1–14.
26. Li, C.; Huang, Y.; Li, H.; Zhang, X. A weak supervision machine vision detection method based on artificial defect simulation. Knowl.-Based Syst. 2020, 208, 106466.
27. Honzatko, D.; Krulis, M. Accelerating block-matching and 3D filtering method for image denoising on GPUs. J. Real-Time Image Process. 2019, 16, 2273–2287.
28. Khan, S.A.; Hussain, S.; Yang, S.K. Contrast Enhancement of Low-Contrast Medical Images Using Modified Contrast Limited Adaptive Histogram Equalization. J. Med. Imaging Health Inform. 2020, 10, 1795–1803.
29. Wu, Y.; Liu, J.W.; Zhu, C.Z.; Bai, Z.F.; Gong, M.G. Computational Intelligence in Remote Sensing Image Registration: A survey. Int. J. Autom. Comput. 2020, 18, 1–17.
30. Li, H.Z.; Wang, J. From Soft Clustering to Hard Clustering: A Collaborative Annealing Fuzzy c-Means Algorithm. IEEE Trans. Fuzzy Syst. 2024, 32, 1181–1194.
31. Rahman, T.; Islam, M.S. Image Segmentation Based on Fuzzy C Means Clustering Algorithm and Morphological Reconstruction. In Proceedings of the 2021 International Conference on Information and Communication Technology for Sustainable Development (ICICT4SD), Dhaka, Bangladesh, 27–28 February 2021.
32. Eshkevari, M.; Rezaee, M.J.; Zarinbal, M.; Izadbakhsh, H. Automatic dimensional defect detection for glass vials based on machine vision: A heuristic segmentation method. J. Manuf. Process. 2021, 68 Pt A, 973–989.
33. Ashok, S.K.; Ballav, S.; Billò, M.; Dell’Aquila, E.; Frau, M.; Gupta, V.; John, R.R.; Lerda, A. Surface operators, dual quivers and contours. Eur. Phys. J. C 2019, 79, 1–24.
34. Heylen, R.; Scheunders, P. Hyperspectral Intrinsic Dimensionality Estimation with Nearest-Neighbor Distance Ratios. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 570–579.
35. Kang, H.; Xuefei, L.; Wenhui, Z. An adaptive fusion panoramic image mosaic algorithm based on circular LBP feature and HSV color system. In Proceedings of the 2020 IEEE International Conference on Information Technology, Big Data and Artificial Intelligence (ICIBA), Chongqing, China, 6–8 November 2020.
36. Wang, Y.; Huang, Q.; Hu, J. Adaptive enhancement for nonuniform illumination images via nonlinear mapping. J. Electron. Imaging 2017, 26, 1.
37. Wu, H.R.; Luo, Z.Q.; Sun, F.C.; Li, X.X.; Zhao, Y.X. An Improvement Method for Improving the Surface Defect Detection of Industrial Products Based on Contour Matching Algorithms. Sensors 2024, 24, 3932.
Figure 1. An improved algorithmic flow diagram of a closed contour matching method for detecting defects in industrial products.
Figure 2. Schematic diagram of the original LBP thresholding process.
Figure 3. LBP rotation invariant mode.
Figure 4. Improved LBP operator in the fine matching model.
Figure 5. The matching process from “Rough” to “Fine” based on the combined centroid distance and textural information.
Figure 6. The misjudgment of good products because of distractors being too close.
Table 1. Results of closed contour extraction. (Extracted closed contour images for the character model, digital model, and ratchet workpiece under E = 50, 75, 100, 125, and 150 lux; images omitted here.)
Table 2. Matching experiment results of anti-rotation transformation (fine matching result images omitted).

Item                 Closed Contours in Template Image   Correct Matching Rate
Character models     7                                   100%
Digital models       3                                   100%
Ratchet workpieces   2                                   100%
Table 3. Experimental results of anti-gray difference matching (fine matching result images omitted).

Item                 Closed Contours in Template Image   Correct Matching Rate
Character models     7                                   100%
Digital models       3                                   100%
Ratchet workpieces   2                                   100%
Table 4. Experimental conditions and test results of defect detection matching. (Good product testing, “Match Successful”: character models, digital models, and ratchet workpieces at E = 50/75/100/125/150 lux. Defect detection, “Match failed”: arbitrary parameters and arbitrary templates. Result images omitted here.)
Table 5. Average matching time.

Item                 50 lux   75 lux   100 lux   125 lux   150 lux   Average Time
Character models     305 ms   296 ms   285 ms    301 ms    328 ms    303.00 ms
Digital models       263 ms   225 ms   298 ms    264 ms    285 ms    267.00 ms
Ratchet workpieces   333 ms   314 ms   398 ms    377 ms    391 ms    362.60 ms
Table 6. Experimental results of the defect detection experiment (100-times matching accuracy, %).

Item                 Light Intensity (lux)   Accuracy         Precision        Recall
Character models     E = 50                  88.89            90.15            89.47
                     E = 75                  90.12            91.11            88.09
                     E = 100                 92.25            93.52            94.85
                     E = 125                 91.98            92.08            91.87
                     E = 150                 87.65            89.56            90.75
                     Mean value (single)     90.18            91.28            91.01
Digital models       E = 50                  89.55            87.55            90.63
                     E = 75                  92.52            93.54            94.56
                     E = 100                 93.85            92.13            93.62
                     E = 125                 93.46            91.52            95.91
                     E = 150                 92.10            90.74            93.54
                     Mean value (single)     92.30            91.10            93.65
Ratchet workpieces   E = 50                  92.32            93.41            95.41
                     E = 75                  94.63            95.62            96.85
                     E = 100                 96.52            94.21            92.44
                     E = 125                 94.61            90.32            93.65
                     E = 150                 92.30            91.63            94.34
                     Mean value (single)     94.08            93.04            94.54
All items            Mean value (all)        91.98            91.68            92.91
                     Standard deviation      0.0218           0.0183           0.0230
                     Confidence interval     [87.72, 96.25]   [88.09, 95.26]   [88.40, 97.42]