Article

Detection of Defects in Warp Knitted Fabrics Based on Local Feature Scale Adaptive Comparison

by Yongchao Zhang 1,2, Weimin Shi 1,* and Jindou Zhang 1
1 College of Mechanical Engineering, Zhejiang Sci-Tech University, Hangzhou 310000, China
2 College of Automation, Zhejiang Polytechnic University of Mechanical and Electrical Engineering, Hangzhou 310000, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(22), 10754; https://doi.org/10.3390/app142210754
Submission received: 4 October 2024 / Revised: 8 November 2024 / Accepted: 11 November 2024 / Published: 20 November 2024

Abstract: To improve the accuracy and effectiveness of fabric defect detection, this paper proposes a fabric defect detection method based on local similarity comparison. The method first takes each pixel in the image as a central pixel and selects a specific window as the region size; it then uses the similarity between the central region and its surrounding neighborhoods to find the neighborhood most similar to the central region and estimate the central pixel. Finally, the target image is obtained by background difference, thereby detecting fabric defects. The results show that this method outperforms traditional detection methods: it can detect defect images against complex backgrounds and also achieves good results on fabric defect images under different weaves and lighting conditions. The detection accuracy under factory conditions reaches 98.45%, demonstrating high applicability, a high detection rate, and a degree of anti-interference performance.

1. Introduction

In the textile production process, defects such as jumping flower (float stitches), pilling, holes, stripped yarn, and stains arise from mechanical faults, misoperation, and other causes, degrading product quality and thus the production efficiency of the enterprise [1]. The surface texture of fabric is complex, and traditional fabric defect detection has often relied on manual visual inspection, which suffers from low efficiency, strong subjectivity, and susceptibility to visual fatigue. The accuracy of manual inspection is only about 70%, which cannot meet the requirements of large-scale production [2]. Traditional target detection algorithms rely on combinations of hand-crafted features and classifiers; they demand high image quality, involve complex processing, are highly sensitive to noise and interference, and have clear limitations in detecting fabric defects [3,4]. Therefore, it is necessary to develop an automated, efficient, and accurate fabric defect detection method with the help of computer vision.
Fabric defect detection mainly analyzes and processes collected fabric images to determine whether they contain defects. However, fabric defects are highly varied, and quickly and effectively detecting them in collected images is a difficult problem. Kang et al. [5] proposed two methods to detect printed fabric: one selects the parameters of the optimal Gabor filter with the help of a genetic algorithm, and the other determines the repeating unit of printed fabric with distance matching, achieving good results in irregular fabric defect detection. Junet et al. [6] predicted the existence of defects with the Inception-V1 model and then identified the defect types with LeNet, achieving good classification accuracy with a small model volume. Wei et al. [7] used the compressed sampling theorem to expand the dataset and classified the data with CS-CNN, achieving good fabric defect classification performance with small samples. However, problems remain, such as high computational complexity, limited detection of specific defect types, the need for large amounts of annotated data, and insufficient generalization to irregular defects.
In terms of target detection, Liu et al. [8] optimized the pooling operation in the YOLOv4 SPP structure to effectively improve mAP while also improving FPS. Lin et al. [9] introduced a Swin Transformer module into the backbone of YOLOv5 and added a small target detection layer to strengthen the detection of small targets. Zhen et al. [10] added five deconvolution layers as decoders and generated a segmentation mask via bilinear resizing; the proposed model achieved good detection results on a fabric defect dataset. Huang et al. [11] divided the network into two parts, segmentation and decision making; only a small number of defect samples combined with standard samples are needed to obtain high-precision defect positions, and a detection speed of 25 frames per second can be achieved. Although the above methods are effective, they are limited in adaptability and multi-scale information processing, resulting in unstable performance on complex fabric textures or under different lighting conditions.
Learning-based methods play an important role in image processing: a model is trained on extracted fabric image features and then used to detect defects in new images. Gong C et al. [12] proposed a visual saliency detection method based on a co-teaching model, which controls the propagation order of superpixels through teaching interaction, optimizes the propagation path, and improves detection accuracy. Liu S et al. [13] used a PSO-BP neural network to detect fabric defects, extracting fabric texture features by orthogonal wavelet transform and determining the thresholds and weights of the BP neural network with a particle swarm optimization algorithm; compared with the plain BP network, the PSO-BP algorithm achieved good results. When dealing with fabric defects, however, these methods have high computational complexity and insufficient multi-scale information fusion.
Similarity-based image recognition methods mainly use the feature structure of the image to match and estimate the similarity between different samples. Commonly used measures include Euclidean distance, Mahalanobis distance, the correlation coefficient, and information entropy [14]. P F Li et al. [15] proposed a fabric defect detection method based on local entropy, which converts the original gray image into entropy space and extracts the texture region of the defect. In their experiments, the fabric image is segmented into local windows of the same size, and the window with the minimum local entropy is selected to segment the defect; the results show that the method is simple and has high identification accuracy. Y H Wang et al. [16] proposed a small target detection method based on edge-preserving background estimation, which uses the structural features of the image to find the most similar area and obtains the target image by estimating the background of the original image.
Liu Z et al. [17] proposed a fabric defect detection method based on the main local binary pattern (MLBP), using MLBP features as the dictionary basis to reconstruct the image, computing the residual between the original and reconstructed images, and obtaining the final detection result by maximum entropy segmentation. Tajeripour et al. [18] used local binary patterns (LBPs) to detect fabric defects based on texture features. Shi et al. [19] proposed a local contrast deviation (LCD) method for fabric defect detection. Although these methods can accurately detect the existence of defects, their contour localization is not ideal and is easily affected by surrounding noise, leading to deviations in the results.
Therefore, to realize more comprehensive intelligent detection, this paper proposes a scale-adaptive local contrast method building on previous algorithm research and experiments. The method uses the region around each pixel and the degree of similarity between the pixel and its surroundings to obtain a corresponding scale; the region at that scale is taken as the pixel's similar-region size, and a local contrast algorithm is then used to detect defects.

2. Scale Adaptive Contrast Method

2.1. Adaptive Window

To determine whether an image pixel is a normal (background) pixel or a defective pixel, it is necessary to define a region centered on that pixel in which all pixels are locally similar to the central pixel. In existing small target detection methods, the similar region for every pixel is a rectangular region of fixed size, such as 3 × 3, 5 × 5, or 5 × 7, chosen manually in advance according to the sizes of the small targets (defects) to be detected. An image, however, contains both relatively flat areas, such as the background, and edge-detail areas, such as patterns or defects, so the size of the similar region centered on a pixel cannot be the same everywhere. For a background pixel, many surrounding pixels are similar to it, and the corresponding similar region should be large; for an edge-detail or defect pixel, few surrounding pixels are similar, and the similar region should be small. It is therefore unreasonable to assign a similar region of the same size to all pixels in an image. In this method, based on the idea of scale, an adaptive way to determine the size of the similar region is proposed: for each pixel, the corresponding scale is calculated from the similarity between the pixel and its surrounding pixels, and the region at that scale is taken as the pixel's similar region.
Suppose a digital image is defined as C = (c, f), where c is the rectangular array of pixels, called the domain of C, and f is the image gray intensity function with range [L, H], where L and H are the minimum and maximum pixel gray values, respectively. When the gray value is represented in 8-bit binary, L is 0 and H is 255.
For any element c ∈ C, the disc D_r(c) with c as the center and r as the radius can be expressed as:
D_r(c) = { d ∈ C : ‖c − d‖ ≤ r }, (1)
The disc D_r(c) with center c and radius r is illustrated in Figure 1a,b. When r = 1, the pixels of the disc are {c, d_i}, i = 1, 2, 3, 4, as shown in Figure 1a. When r = 2, the pixels of the disc are {c, d_i}, i = 1, 2, …, 12, as shown in Figure 1b.
To calculate the scale corresponding to each pixel in the image, let F O r ( c ) be:
FO_r(c) = [ Σ_{d ∈ D_r(c)\D_{r−1}(c)} W_h( f(c) − f(d) ) ] / | D_r(c)\D_{r−1}(c) |, (2)
where |D_r(c)\D_{r−1}(c)| is the number of pixels in the ring region D_r(c)\D_{r−1}(c), f(c) and f(d) are the gray values of pixels c and d, respectively, and the function W_h must satisfy: (I) its range is [0, 1]; (II) W_h(0) = 1; (III) it is monotone non-increasing. In fuzzy space, many functions meet these requirements; here we take the typical function W_h(x) = e^{−x²/(2kφ²)}.
This function can satisfy that when pixel c and pixel d have similar gray values, that is, when the gray difference tends to 0, the function value tends to 1; when the gray value difference between pixel c and pixel d is large, that is, when the gray value difference tends to 255, the function value tends to 0.
According to the FO_r(c) calculation, the scale corresponding to each pixel in the image can be obtained. For any pixel c, take c as the center and gradually increase the radius from r = 1, obtaining an FO_r(c) value for each r. When FO_r(c) falls below a threshold τ, the pixels in the ring D_r(c)\D_{r−1}(c) are considered no longer within the similar region of the central pixel c, and the scale r_s(c) of the central pixel c is r − 1.
The threshold τ can be taken as 0.85. Consider the eight pixels adjacent to the central pixel c in a 3 × 3 region: if only one of them differs significantly from c, the region is still considered to belong to the same region as c; if two or more differ significantly, it is not. This suggests τ = 7/8. Since values of τ in [0.8, 0.9] have little effect on the experimental results, τ is taken as 0.85.
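As an illustration, the adaptive scale of this subsection can be sketched in Python as below. The ring D_r(c)\D_{r−1}(c) is built from Euclidean offsets, W_h follows the exponential form above, and the combined constant 2kφ² (here 3200) and the maximum radius r_max are illustrative assumptions, since the paper does not fix their values:

```python
import numpy as np

def ring_offsets(r):
    """Offsets (dy, dx) of the ring D_r \\ D_{r-1}: Euclidean distance in (r-1, r]."""
    offs = []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            d = np.hypot(dy, dx)
            if r - 1 < d <= r:
                offs.append((dy, dx))
    return offs

def w_h(x, k_phi2=3200.0):
    """Similarity weight W_h(x) = exp(-x^2 / (2*k*phi^2)).
    The combined constant 2*k*phi^2 = 3200 is a tunable assumption."""
    return np.exp(-(x ** 2) / k_phi2)

def pixel_scale(img, y, x, tau=0.85, r_max=5):
    """Return r_s(c): the largest radius whose rings all stay similar to c."""
    c = float(img[y, x])
    h, w = img.shape
    for r in range(1, r_max + 1):
        vals = []
        for dy, dx in ring_offsets(r):
            yy, xx = y + dy, x + dx
            if 0 <= yy < h and 0 <= xx < w:
                vals.append(w_h(c - float(img[yy, xx])))
        if not vals or np.mean(vals) < tau:   # FO_r(c) < tau: stop growing
            return r - 1
    return r_max
```

On a flat background every ring averages W_h(0) = 1, so the scale grows to r_max; a pixel with strongly differing neighbors in a ring stops early, matching the intuition in the text.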

2.2. Most Similar Neighborhood

To accurately estimate the central pixel, the most similar neighborhood is found first. Each pixel in the image is taken as a center point, the area within a certain range around it is taken as the central region, and areas of the same size surrounding the central region are taken as adjacent neighborhoods. A protection area is set around the central region to avoid noise pollution. The scale value r_s(c) of each pixel is determined as in the previous subsection. If the scale of the central pixel (x_0, y_0) is r_0(c), the neighboring center pixels (x, y) outside this scale are given the same scale, and with the protection margin m = 1, the scale of the protection area E_0 is r_p(c) = r_0(c) + 1. Suppose the central region is V_0 and there are n adjacent regions V_n (n a positive integer) of the same scale around it. The principle of the algorithm is shown in Figure 2.
In Figure 2, the scale of pixel (x_0, y_0) is 2, and its central region V_0 contains 13 pixels in total. With the protection area set to r_p(c) = r_0(c) + 1 = 3, the protection region is E_0. Outside E_0, n neighborhoods V_k (k = 1, …, n) of the same size as V_0 can be found, each with central pixel (x_k, y_k), forming the black pixel diamonds.
Let Drf_i(x_0, y_0) (i = 1, …, 13) denote the gray values of the pixels in the central region, Drf_i(x_k, y_k) (i = 1, …, 13) denote the gray values of the corresponding pixels in the k-th surrounding neighborhood, and let S_k be the similarity between the central region and this neighborhood. Then S_k is calculated as:
S_k = 1 / ( Σ_{i=1}^{13} ( Drf_i(x_0, y_0) − Drf_i(x_k, y_k) )² + 1 ), (3)
In this formula, the value range of S_k is (0, 1]. When S_k tends to 0, the gray values of the pixels in the central region and the corresponding pixels in the k-th surrounding neighborhood differ greatly, i.e., the similarity between the two regions is weak. When S_k tends to 1, the gray values differ very little, i.e., the similarity between the two regions is high.
The similarity S_k is calculated from Equation (3), and the neighborhood with the greatest similarity to the central region, the most similar neighborhood V_s, is found:
i = argmax_{k = 1, 2, …, n} S_k, V_s = V_i, (4)
where i indicates that the i-th neighborhood has the highest similarity with the central region, and V_i is the i-th neighborhood image.
If the current pixel is a defective pixel, no region with high similarity to its central region can be found around it. The gray value of the current pixel estimated from the pixels in the most similar neighborhood will therefore differ greatly from its gray value in the original image, and the pixel is identified as defective by the subsequent steps.
Conversely, if the current pixel is a non-defective (background) pixel, a neighborhood with high similarity to its central region can be found nearby. The difference between the estimated gray value and the gray value in the original image will be very small, and the pixel is identified as non-defective by the subsequent steps.
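A minimal sketch of the most-similar-neighborhood search follows. Candidate neighborhood centers are sampled on a square ring just outside the protection area; the exact candidate layout (Figure 2 suggests a diamond) is an implementation choice not fixed by the paper:

```python
import numpy as np

def disc_offsets(r):
    """All (dy, dx) with Euclidean distance <= r (the disc D_r)."""
    return [(dy, dx) for dy in range(-r, r + 1) for dx in range(-r, r + 1)
            if np.hypot(dy, dx) <= r]

def most_similar_neighborhood(img, y0, x0, r0, m=1):
    """Find the neighborhood center (yk, xk) outside the protection region
    whose disc is most similar to the central disc (Eqs. (3) and (4))."""
    offs = disc_offsets(r0)
    h, w = img.shape
    center = np.array([img[y0 + dy, x0 + dx] for dy, dx in offs], float)
    r_p = r0 + m                 # protection scale r_p(c) = r_0(c) + m
    d = r_p + r0                 # candidate-center distance (assumption)
    best_s, best_c = -1.0, None
    for dy in range(-d, d + 1):
        for dx in range(-d, d + 1):
            if max(abs(dy), abs(dx)) != d:     # square ring of candidates only
                continue
            yk, xk = y0 + dy, x0 + dx
            if not (r0 <= yk < h - r0 and r0 <= xk < w - r0):
                continue
            neigh = np.array([img[yk + ddy, xk + ddx] for ddy, ddx in offs], float)
            s = 1.0 / (np.sum((center - neigh) ** 2) + 1.0)   # Eq. (3)
            if s > best_s:
                best_s, best_c = s, (yk, xk)   # Eq. (4): keep the argmax
    return best_c, best_s
```

On a uniform background the similarity reaches its maximum of 1, which is the case described above for non-defective pixels.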

2.3. Background Estimation

Assuming the central region V_0 has high similarity with its most similar neighborhood V_s, the gray value of the central pixel I(x_0, y_0) is not significantly different from those of the neighborhood pixels I(x_s, y_s). Gradient-reciprocal weighting can then be used to estimate the pixel value of the center of the central region:
Î(x_0, y_0) = (1/W) Σ_{(x_s, y_s) ∈ V_s} [ 1 / ( |I(x_0, y_0) − I(x_s, y_s)| + 1 ) ] · I(x_s, y_s), (5)
where Î(x_0, y_0) is the estimated gray value of the center of the central region, V_s is the most similar neighborhood, and W is the normalizing weight:
W = Σ_{(x_s, y_s) ∈ V_s} 1 / ( |I(x_0, y_0) − I(x_s, y_s)| + 1 ). (6)
In the subsequent steps, fabric defect images can be effectively judged using background subtraction and image binarization.
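Equations (5) and (6) and the subsequent background difference could be sketched as follows; the binarization threshold is an illustrative value not specified in the paper:

```python
import numpy as np

def estimate_center(img, y0, x0, ys, xs, r0):
    """Gradient-reciprocal-weighted estimate of the central pixel (x0, y0)
    from the most similar neighborhood centered at (ys, xs) (Eqs. (5)-(6))."""
    i0 = float(img[y0, x0])
    num = den = 0.0
    for dy in range(-r0, r0 + 1):
        for dx in range(-r0, r0 + 1):
            if np.hypot(dy, dx) <= r0:          # pixels of the disc V_s
                iv = float(img[ys + dy, xs + dx])
                wgt = 1.0 / (abs(i0 - iv) + 1.0)  # reciprocal-gradient weight
                num += wgt * iv
                den += wgt
    return num / den                             # Eq. (5), with W = den (Eq. (6))

def defect_mask(img, background, thresh=30):
    """Background difference plus binarization; thresh is illustrative."""
    return np.abs(img.astype(float) - background.astype(float)) > thresh
```

For a background pixel the estimate matches the original gray value, so the difference image is near zero; a defective pixel survives the threshold and appears in the mask.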

3. Experiment and Result Analysis

3.1. Image Acquisition and Preprocessing

Image quality is an important factor limiting the detection rate, so image acquisition is an important link in fabric defect detection. At present, researchers commonly use the Tilda database [20], but because of its high image quality, algorithms with high detection rates on Tilda may not achieve high detection rates on images collected under actual conditions, which can mislead judgments about the reliability of the research results. Therefore, in addition to testing on standard images, it is of great significance to verify the reliability of the results on fabric defect images collected in actual production.
The three kinds of standard fabric defect shown in Figure 3 are (a) knotting, (b) holes, and (c) oil stains. The images are clear and of high resolution; these are standard images often used by researchers.
Figure 4 shows plain warp knitted fabric images collected with laboratory equipment: a 6-mm focal length lens (Ricoh FL-HC0614-2M, Ricoh, Japan) and a linear scan CCD camera produced by the Beijing Microview company, acquired on an HKS4 EL high-speed warp knitting machine produced by the Fujian Jilong company (Quanzhou, China). Figure 4a–c shows fabric defect images of holes, oil stains, and broken yarn, respectively. The image size is 512 × 512, and the resolution is poor compared with the standard fabric images.
Figure 5 shows broken yarn images of some fabrics collected in the factory. Because of the variety of collected images, only some of the defect images are displayed. The image quality collected in the factory is poor, and some fabrics are difficult to image.
From the above acquisition results, the fabric defect images in the standard state are clear and little affected by external factors; the laboratory environment is controllable and better than the factory's, and the quality of the collected images is acceptable. In actual factory production, however, image quality is poor because environmental factors intrude during acquisition: at high machine speeds, mechanical vibration causes camera jitter and light source vibration, and for light, thin fabrics, reflection from mechanical parts is another important factor limiting image quality. If only good-quality images were selected during factory collection, the results would have no reference value; if the various collected defect images can all be effectively detected, the results provide a useful reference for the subsequent development of fabric defect detection equipment.
The standard samples contain little noise, but images collected in the experimental process are subject to many factors that degrade image quality. Uneven illumination is a typical one: instability of the light source, mechanical vibration, and reflection change the gray levels of the collected image and reduce its quality. To obtain higher-quality images, we preprocess them to remove uneven illumination and enhance image detail.
In actual production, the image becomes blurred and the defect is difficult to identify because of lighting. Homomorphic filtering can eliminate the influence of uneven illumination by adjusting the gray range of the image and enhancing texture details in dark areas without losing detail in bright areas. It is a frequency-domain preprocessing method that attenuates low frequencies and boosts high frequencies by compressing the image brightness and enhancing contrast, reducing the impact of illumination changes and sharpening edge details [21].
Take the original image function I(x, y), which can be expressed as the product of the illumination component i(x, y) and the reflection component r(x, y) [21]:
I(x, y) = i(x, y) · r(x, y), (7)
The algorithm flow of homomorphic filtering is shown in Figure 6:
The following operations can be performed according to the above flow chart:
(1) Homomorphic filtering first simplifies the multiplication in the original image function to addition by taking the logarithm:
Z(x, y) = ln i(x, y) + ln r(x, y), (8)
(2) To convert the image to the frequency domain, the Fourier transform is applied to the function after the logarithmic operation:
F{Z(x, y)} = F{ln i(x, y)} + F{ln r(x, y)}, (9)
(3) An appropriate transfer function H(u, v) is then selected to attenuate the illumination term I(u, v) and enhance the reflection term R(u, v), compressing the variation range of the illumination component i(x, y) while boosting the high-frequency components. Applying the homomorphic filter H(u, v) to the Fourier transform of the logarithm of the original image gives:
S(u, v) = H(u, v) I(u, v) + H(u, v) R(u, v), (10)
(4) Transforming back to the spatial domain:
s(x, y) = F⁻¹{ S(u, v) }, (11)
(5) Finally, taking the exponential yields the result:
g(x, y) = exp( s(x, y) ), (12)
The choice of transfer function is crucial to achieve the desired enhancement and compression of the gray range. Since the high-frequency information of the image should be enhanced while part of the low-frequency information is retained, a Butterworth high-pass filter is selected; by analogy with its transfer function, the transfer function of the homomorphic filter is:
H(u, v) = (r_h − r_l) / ( 1 + ( c·D_0 / D(u, v) )^{2n} ) + r_l, (13)
The transfer function of the homomorphic filter should be less than 1 in the low-frequency part and greater than 1 in the high-frequency part, so r_l < 1 and r_h > 1. Here c is a sharpening coefficient, and D_0 is related to the illumination and reflection components.
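Steps (1)–(5) with the transfer function of Equation (13) can be sketched as below. The parameter values r_l, r_h, c, D_0, and n are illustrative assumptions, and log1p/expm1 are used instead of bare log/exp to avoid log(0):

```python
import numpy as np

def homomorphic_filter(img, r_l=0.5, r_h=2.0, c=1.0, d0=30.0, n=2):
    """Homomorphic filtering per Eqs. (8)-(13): log -> FFT -> H(u,v)
    -> inverse FFT -> exp. Parameter values are illustrative."""
    z = np.log1p(img.astype(np.float64))            # step (1): logarithm
    Z = np.fft.fftshift(np.fft.fft2(z))             # step (2): centered FFT
    rows, cols = img.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)  # distance from DC term
    D[D == 0] = 1e-6                                # avoid division by zero
    H = (r_h - r_l) / (1.0 + (c * d0 / D) ** (2 * n)) + r_l   # Eq. (13)
    s = np.real(np.fft.ifft2(np.fft.ifftshift(H * Z)))  # steps (3)-(4)
    g = np.expm1(s)                                 # step (5): exponential
    # normalize back to the 8-bit display range
    g = (g - g.min()) / (g.max() - g.min() + 1e-12) * 255.0
    return g.astype(np.uint8)
```

Note how H behaves as the text requires: near the DC term D → 0 gives H → r_l < 1 (illumination suppressed), while for large D, H → r_h > 1 (reflection detail boosted).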
The parameter a can be set according to the principle of the filter. The images before and after homomorphic filtering are shown in Figure 7 and Figure 8.
Figure 7 shows images collected in the factory, which are seriously polluted by noise such as the light source. Homomorphic filtering preprocessing effectively removes the noise, suppresses the background, and highlights the target. Figure 8 shows the standard images, which are little disturbed by noise; after homomorphic filtering, the target defects become noticeably clearer, with a good effect.
These results show that homomorphic filtering can preprocess images under interference such as light-source noise and achieve the desired preprocessing effect. Comparing the images before and after filtering, homomorphic filtering makes the defect clearer, suppresses the background, and eliminates the influence of lighting, giving a good processing effect.

3.2. Different Types of Fabric Defect

To evaluate the adaptability of the algorithm, plain warp knitted fabric images with different types of defect were used for detection. Image acquisition was carried out on the HKS4 EL high-speed warp knitting machine produced by the Fujian Jilong company, using a 6-mm focal length lens (Ricoh FL-HC0614-2M, Kanagawa, Japan) and a linear scan CCD camera (Microview, Beijing, China); detection was run on an Intel(R) Core(TM) i3 computer. Three different kinds of defect were detected: holes, oil stains, and broken yarn. Considering the sensitivity of the algorithm, the images were compressed from 512 × 512 pixels to 32 × 32 pixels before detection. The results are shown in Figure 9.
In Figure 9, (a), (d), and (g) show fabric defect images with holes, oil stains, and broken warp, respectively; (b), (e), and (h) show the three corresponding scale images; and (c), (f), and (i) show the detection results for the different defect types. According to these results, the scale adaptive comparison method detects fabric defects well: it identifies the shape and contour of defects and handles different types of fabric defect.

3.3. Comparison of Different Defect Detection Methods

To demonstrate the superiority of the proposed scale-adaptive local contrast method, the local binary patterns (LBPs) of [18] and the local contrast deviation (LCD) method of [19] are compared experimentally with the proposed algorithm on the Tilda dataset. The results are shown in Figure 10.
In Figure 10, (a)–(c) show three different types of fabric defect image from the Tilda dataset; (d)–(f) show their scale images; (g)–(i) show the detection results of the proposed algorithm; (j)–(l) show the detection results of local contrast deviation (LCD); and (m)–(o) show the detection results of local binary patterns (LBPs).
As shown in Figure 10, comparison with the LCD and LBPs results shows that the proposed scale-adaptive local comparison method achieves better detection results and clearer target contours than the other two methods. In these experiments, homomorphic filtering was used for preprocessing, reducing the impact of noise on the detection results and making target recognition more accurate.
The network is trained with this set of parameters, and the training results are shown in Figure 11. Figure 11a shows that the validation accuracy of LBPs after the 75th training round is close to that of the proposed method, and the overall accuracy is higher than that of LCD. Figure 11b shows that the training loss curve keeps converging, the loss approaches that of the proposed method around the 10th round, and the LBPs training loss converges noticeably earlier than the LCD training loss. These results indicate that the proposed method is feasible.
The spatial complexity of a model is reflected by its number of parameters. When a model is deployed on an edge computing platform, inference speed is also an important indicator besides the parameter count. Timing utilities are usually used to measure inference time, but the GPU has a warm-up phase when it starts running, so naive timing is not very objective. In this paper, the TILDA dataset is tested with warm-up and synchronization, and the average time is reported in Table 1. The test accuracy of the proposed model is 99.89%, the accuracy of the LCD model is 97.06%, and the test accuracy of the LBPs model is very close to that of the proposed model, at 97.06%. Table 1 shows that the proposed method requires fewer parameters and less computation than the compared models, which indirectly improves the model's forward inference time.
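The warm-up-and-average timing protocol described here can be sketched as follows. The sync hook stands in for a device synchronization call (e.g., torch.cuda.synchronize on a GPU, an assumption) and defaults to a no-op, which is appropriate on CPU:

```python
import time

def average_inference_time(model_fn, inputs, warmup=10, runs=100,
                           sync=lambda: None):
    """Average per-sample forward time with warm-up and optional sync.
    Warm-up iterations are executed but excluded from the measurement."""
    for x in inputs[:warmup]:        # warm-up phase, not timed
        model_fn(x)
    sync()                           # ensure pending device work finished
    t0 = time.perf_counter()
    for _ in range(runs):
        for x in inputs:
            model_fn(x)
    sync()
    return (time.perf_counter() - t0) / (runs * len(inputs))
```

Averaging over many runs after warm-up smooths out the start-up jitter that makes a single naive measurement unreliable.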

3.4. Verification of Images Collected by the Factory

To further verify the rationality of the proposed method, factory experiments were carried out. On the cloth inspection machine, a linear CCD camera was used to collect warp knitted fabrics with different weaves (considering the actual production cost of the textile mill). The experimental device is shown in Figure 12. The camera was a Hikvision MV-CS060-10GM/GC second-generation industrial area scan camera; the light source was an LED lamp, model MV-LLDS-1002-38, with 6 rows of lamp beads, wavelengths B: 465 nm and R: 625 nm, a light-emitting surface of 990 × 32 mm, and color temperature W: 6500 K; the frame grabber was an NI PCIe-1433.
A total of 200 gray-scale images with a resolution of 2568 × 40 pixels were collected, including 115 flawless images and 85 defective images, and used for offline testing. The experimental results are shown in Figure 13, Figure 14 and Figure 15. The experiments use a central-region window of 11 × 39 pixels and were carried out in MATLAB 2023a. The pad numbers are GB3: 1-0 | 0-1 | GB4: 1-2 | 1-0 | for the plain weave in Figure 14; GB3: 1-0 | 1-2 | GB4: 1-2 | 1-0 | for the twill weave in Figure 13; and GB3: 1-0 | 1243-4 | for the variable warp weave in Figure 12. Let-off volume: 1200 mm/rack; pulling density: 20 rows/cm.
Figure 13, Figure 14 and Figure 15 show defect images and detection results for broken yarn at different positions under different weaves. The broken yarn defect on the left tilts to the right, the middle one is a vertical bar, and the right one tilts to the left. The left images show that when the yarn first breaks, the defect is not very obvious and the defective area is small, but the detection results show that the algorithm clearly identifies the defect, so long defects can be avoided through early detection. Since the defect range after yarn breakage differs between fabrics, the figures show that the method detects defects well on different fabrics, demonstrating that the algorithm adapts to defect detection on different fabrics with high robustness.
The images in Figure 15 were affected by lighting during acquisition; the detection results show that the algorithm can adapt to defect detection under the influence of external factors and has a certain anti-interference capability.

3.5. Evaluation Metrics

In fabric defect detection, an image is judged as either normal or defective; these two classes are represented by positive and negative samples. Each sample type can be judged in one of two ways, so four outcomes are possible. To describe the judgment results, a positive sample judged as positive is a true positive (TP), a positive sample judged as negative is a false negative (FN), a negative sample judged as positive is a false positive (FP), and a negative sample judged as negative is a true negative (TN).
To quantify the detection performance, the detection accuracy is defined as

Accuracy = (TP + TN) / (TP + TN + FP + FN)

where TP is the number of true positive samples, FP the number of false positive samples, FN the number of false negative samples, TN the number of true negative samples, and Accuracy is the detection accuracy.
In the tests in this paper, the effect of the improved algorithm is reflected by the detection accuracy. Image data collected in the factory were therefore used for testing: 500 positive samples and 80 negative samples, 580 samples in total, were selected, and the results are shown in Table 2.
As shown in the table, 494 of the 500 positive samples were detected as positive and 6 as negative, while 3 of the 80 negative samples were detected as positive and 77 as negative. The detection accuracy on the factory-collected samples was 98.45%.
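As a quick sanity check, the reported accuracy can be reproduced from the confusion-matrix counts in Table 2 (a short illustrative snippet, not part of the original experiments):

```python
# Reproduce the detection accuracy from the Table 2 counts:
# TP = 494, FN = 6, FP = 3, TN = 77.
tp, fn, fp, tn = 494, 6, 3, 77

accuracy = (tp + tn) / (tp + tn + fp + fn)  # (494 + 77) / 580
print(f"Accuracy: {accuracy:.2%}")          # Accuracy: 98.45%
```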

4. Conclusions

This paper presents a fabric defect detection method based on local similarity comparison. Using the idea of scale, the scale of each central pixel is determined from the correlation between that pixel and the pixels within the surrounding scale range, so the window size of each central pixel is determined adaptively. The gray value of each pixel is then estimated from the regional similarity of the fabric image: a central region is defined around each pixel, the most similar neighborhood is found among the surrounding regions, the gray value of the central pixel is estimated from that neighborhood, and background differencing determines whether the fabric image contains defects. Experiments on different types of fabric defects verify that the algorithm can identify defects of different types and shapes with a solid detection rate. Comparison with the similar LCD and LBP methods also demonstrates the superiority of the algorithm.
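The pipeline summarized above can be sketched as follows. This is a minimal illustration rather than the paper's implementation: the similarity measure (sum of squared differences), the fixed patch size, the four neighborhood offsets, and the threshold are all assumed values, and the adaptive scale selection is omitted.

```python
import numpy as np

def detect_defects(img, patch=5, offsets=((0, 5), (0, -5), (5, 0), (-5, 0)),
                   thresh=30.0):
    """For each pixel, find the neighboring patch most similar (lowest SSD)
    to the patch centered on that pixel, estimate the pixel's background
    gray value from that neighbor's center, and flag the pixel as defective
    when the background difference exceeds `thresh`."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    r = patch // 2
    margin = r + max(max(abs(dy), abs(dx)) for dy, dx in offsets)
    mask = np.zeros((h, w), dtype=bool)
    for y in range(margin, h - margin):
        for x in range(margin, w - margin):
            center = img[y - r:y + r + 1, x - r:x + r + 1]
            best_ssd, estimate = np.inf, img[y, x]
            for dy, dx in offsets:
                cand = img[y + dy - r:y + dy + r + 1,
                           x + dx - r:x + dx + r + 1]
                ssd = float(np.sum((center - cand) ** 2))
                if ssd < best_ssd:          # most similar neighborhood so far
                    best_ssd, estimate = ssd, cand[r, r]
            # Background difference: a large residual indicates a defect.
            if abs(img[y, x] - estimate) > thresh:
                mask[y, x] = True
    return mask
```

On a texture whose period matches the neighborhood offsets, every flawless pixel finds an exactly matching neighbor, so only defective pixels survive the thresholding; this is what allows the method to work without a stored reference image.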
During image acquisition, factors such as the experimental environment and camera resolution made some fabric images relatively blurry, and some fabric defects were difficult to capture; yarn thickness, fabric weave, and other factors also affect the acquisition results. Subsequent research will require a higher-resolution camera and a better experimental environment. The scale-adaptive local fabric defect detection algorithm proposed in this paper uses regional similarity to estimate the gray value of every pixel in the image, giving each pixel a more accurate estimate. However, because every pixel must be evaluated, the computational load is relatively large and the processing speed low, so it will be necessary both to optimize the code and to run the experiments on a high-speed GPU.

Author Contributions

Conceptualization, Y.Z. and J.Z.; methodology, Y.Z.; software, Y.Z.; validation, Y.Z. and W.S.; formal analysis, Y.Z.; investigation, J.Z.; resources, Y.Z.; data curation, Y.Z.; writing—original draft preparation, Y.Z.; writing—review and editing, W.S.; visualization, Y.Z.; supervision, W.S.; project administration, W.S.; funding acquisition, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the project "Research on magnet size measurement and surface defect detection method" (Grant No. Y202250269).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

The authors would like to thank Zhejiang Sci-Tech University for engaging in useful discussions on topics relevant to this work.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Pixels within the radius of plane D r ( c ) scale. (a) r = 1, (b) r = 2.
Figure 2. Schematic diagram of algorithm.
Figure 3. Partial defect diagram of standard fabric defects. (a) Knots, (b) holes, (c) oil stains.
Figure 4. Some fabric defects collected in the laboratory. (a) Knots, (b) holes, (c) crease.
Figure 5. Partial fabric defect images collected in the factory environment. (a) Warp breakage; (b) warp breakage; (c) warp breakage caused by interference from light sources and other factors.
Figure 6. Homomorphic filtering algorithm flow.
Figure 7. Comparison of experimental collected images before and after filtering. (a) Original image, (b) image after homomorphic filtering.
Figure 8. Image comparison before and after homomorphic filtering of standard image. (a) Original image, (b) image after homomorphic filtering.
Figure 9. Different types of fabric defect detection. (a) Holes, (b) scaled image, (c) test results, (d) greasy dirt, (e) scaled image, (f) test results, (g) warp breakage, (h) scaled image, (i) test results.
Figure 10. Detection results of different algorithms. (a) Holes, (b) yarn breakage, (c) fold; (d–f) the corresponding scale images; (g–i) detection results of the proposed algorithm; (j–l) LCD detection results; (m–o) LBP detection results.
Figure 11. Accuracy and training loss of validation set under knowledge distillation. (a) Verification accuracy; (b) training loss.
Figure 12. Fabric defect detection platform. (a) Defect detection system, (b) defect detection view.
Figure 13. Detection results of plain weave defects.
Figure 14. Detection results of twill weave defects.
Figure 15. Detection results of variable warp weave defects.
Table 1. Comparison of network model inference performance.

Model           | Acc (%) | Params (M) | Flops   | Mem (MB) | Time (CPU)/ms | Time (GPU)/ms
LBPs            | 97.06   | 21.18      | 3.56 G  | 37.61    | 63.99         | 7.46
LCD             | 96.00   | 0.25       | 34.44 M | 10.12    | 8.11          | 1.73
Proposed method | 99.89   | 0.18       | 34.44 M | 10.12    | 8.12          | 1.77
Table 2. Detection accuracy.

                 | Detected Positive | Detected Negative | Accuracy
Positive samples | 494 (TP)          | 6 (FN)            | 98.45%
Negative samples | 3 (FP)            | 77 (TN)           |