Review

Review on Computer Aided Weld Defect Detection from Radiography Images

Wenhui Hou, Dashan Zhang, Ye Wei, Jie Guo and Xiaolong Zhang
1 School of Engineering, Anhui Agriculture University, No. 130 West Changjiang Road, Hefei 230026, China
2 Intelligent Agricultural Machinery Laboratory of Anhui Province, Hefei 230026, China
3 Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei 230027, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(5), 1878; https://doi.org/10.3390/app10051878
Submission received: 31 December 2019 / Revised: 27 February 2020 / Accepted: 2 March 2020 / Published: 10 March 2020
(This article belongs to the Special Issue Intelligent Processing on Image and Optical Information)

Abstract

The inspection of weld defects from radiographic films is critical for assuring the serviceability and safety of weld joints. The limitations of human interpretation have made the development of innovative computer-aided techniques for automatic detection from radiographic images a focus of recent studies. The studies on automatic defect inspection are summarized from three aspects: pre-processing, defect segmentation and defect classification. The achievements and limitations of traditional defect classification methods based on feature extraction, selection and classifiers are summarized. Then the applications of novel models based on learning (especially deep learning) are introduced. Finally, the achievements of automated methods are discussed, and the challenges of current technology are presented as directions for future research for both weld quality management and computer science researchers.

1. Introduction

Welded structures have been widely used in many areas, such as construction, vehicles, aerospace, railways, petrochemical plants and electrical machinery. Weld defects are inevitable due to varying environmental conditions and welding technology during the welding process. It is critical to check the quality of welded joints to assure the reliability and safety of the structure, especially for critical applications where weld failure can be catastrophic. As the most commonly used methods to detect the quality of welding, nondestructive testing (NDT) techniques include radiographic, ultrasonic, magnetic particle and liquid penetrant testing. In this paper, we mainly focus on radiographic testing, which is commonly used to inspect the inner defects of welds. X-ray and Gamma-ray sources are usually used to produce radiographic weld images by penetrating the weld structure and exposing photographic films.
Weld flaws appear as variations of intensity in the radiographic films. These films must be checked by certified inspectors to evaluate and interpret the quality of welds, a process called human interpretation. However, the radiogram quality, the welding over-thickness, the poor contrast, the noise and the small size of defects make the task difficult [1]. Human interpretation has several drawbacks. Firstly, although inspectors are generally trained and have relevant expertise and experience, it is still difficult for a skilled inspector to recognize small flaws within a short time. Secondly, human interpretation usually lacks objectivity, consistency and intelligence. Finally, the labor intensity of human interpretation is high, because many films are produced each day due to the improved production efficiency of modern industry. In addition, human visual inspection is, at best, 80% effective, and this effectiveness can only be achieved if a rigidly structured set of inspection checks is implemented [2]. Thus, before the 1990s, many researchers began to build computer-based intelligent systems to help humans evaluate the quality of welds. Such computer-aided systems typically take digital images as the object, extracting the welds and detecting the flaws in the images with various algorithms. Thus, conventional films must first be digitized. Fortunately, digital radiography systems (digitizers) are currently available for digitizing radiographic films without losing the useful information of the original radiograph. Unlike conventional films, which can only be evaluated manually, digitized radiographic images not only make the storage, management and analysis of radiographic inspection data easier, but also make more intelligent inspection of welds possible.
The Advanced Quality Technology Group of Lockheed Martin Manned Space Systems supported three projects that contributed to building a computer-assisted X-ray film interpretation system, developing a weld flaw detector based on image processing, and using Geometric Arithmetic Parallel Processor (GAPP) chips [3,4,5]. Automatic detection methods for weld flaws have advanced rapidly in recent decades, benefiting from the development of technologies such as image processing, computer vision, pattern recognition and deep learning, which have improved the capability for image analysis.
In the initial studies, many researchers took the intensity plot of the line image as the object and processed the 2D image line by line. These methods are based on the observation that a weld defect destroys the bell shape of the line profile of a good weld; detection therefore amounts to finding abnormalities in the intensity plot. The features used for detection and classification are often defined on the intensity plot, and the defects can be discriminated accurately. However, these methods are often time-consuming due to their line-by-line processing style, and it is difficult to recognize diverse types of weld defects. Most subsequent detection systems based on 2D images relied on image processing, feature extraction and classification. Various image processing technologies were successfully applied to improve the quality of images and remove the background in order to highlight defect regions. Geometric features, texture features or a combination of both were applied to characterize the shape, size and texture of defects for further classification. MFCCs together with polynomial features were also used for defect identification due to their robustness to noise and time shifts in signals. Feature selection is usually applied between feature extraction and the classifier to reduce the number of features and save computational costs. Furthermore, developments in computer hardware and representation learning have provided the right conditions for automating weld defect inspection. In particular, with recent advances in deep learning theory, considerable effort has been made in the optical image recognition domain to design multistage architectures that learn hierarchical features from images automatically.
This paper aims to review the common practices for weld defect detection and classification based on digitized radiographic images. The radiation involved in these studies is X-ray (or sometimes Gamma-ray); the two radiation sources are used on different occasions and are not distinguished in this paper. The paper focuses on summarizing the analysis methods for digitized radiographic images. It gives a detailed and comprehensive summary of the literature on image pre-processing, defect segmentation and defect classification. It elaborates on four aspects: (1) the quality improvement of weld images; (2) traditional techniques for defect detection and classification; (3) the application of novel models based on learning; and (4) the achievements and challenges of current methods.

2. Data Collection

In order to review the relevant literature on weld defect detection, we searched accessible databases. The publications collected were papers published between 1982 and 2019 in peer-reviewed journals and conference proceedings. This review covers only automatic technologies for weld defect detection based on digital radiographic images.
The peer-reviewed journals come from a variety of fields, including Expert Systems with Applications, NDT&E International, Fuzzy Sets and Systems, Information Sciences, Journal of Manufacturing Systems and others. Weld defect detection automation has always been a topic of interest for NDT&E International. A comprehensive analysis of the literature shows that automatic detection systems for weld defects mainly involve several technologies: image pre-processing, defect segmentation, feature extraction and selection, and classification. Figure 1 shows the classical procedure of a welding defect detection system involving these aspects [6]. In this paper, the literature is summarized through a detailed analysis of each stage. A block diagram indicating the structure of this paper is shown in Figure 2. All abbreviations in Figure 2 are given in full in the text.

3. Image Preprocessing

Digital radiographic images often show low contrast, noise and an inconsistent distribution of gray levels. The quality of the images strongly influences the detection of weld defects, especially for small defects which can easily be drowned in noise. Many processing methods have been applied to eliminate or relieve these problems. It should be mentioned that the processing must be carried out carefully so that important information is not lost. For instance, the original shape and brightness distribution of a defect, which are important for discriminating between defect types, may be lost when image-enhancing methods such as normalization and histogram equalization are applied. The pre-processing methods mainly involve noise reduction and contrast enhancement. There are several key tasks in this phase: noise filtering, object highlighting and vision improvement.

3.1. Noise Removal

Noise pixels are usually distributed irregularly in the image, and their gray-level values differ from those of their surrounding pixels. Filtering methods are commonly used to remove noise pixels based on the fact that noise is characterized by high-frequency components. Zscherpel designed a one-dimensional FFT filter for detecting crack flaws. This filter, which includes a column-wise FFT high-pass Bessel operation, can distinguish between undercuts and cracks [7]. The noise can be easily eliminated and the cracks can be recognized clearly when a row-oriented low-pass filter is applied to the output of this filter. Strang used a wavelet filter to transform the image in order to restrain the noise with a simple threshold operation [8]. The median filter and the adaptive Wiener filter have been applied successfully for removing noise from images in many studies [9,10,11,12,13]. The median filter is a nonlinear, low-pass filter; it usually deploys a template and replaces the pixel value with the median of the neighboring pixels. Wang and Liao showed that the median filter is adequate for radiographs of continuous welds [9]. Zapata applied an adaptive Wiener filter and a Gaussian low-pass filter to eliminate noise. The Wiener filters carried out the smoothing to different degrees; this adaptive filter retained the edges and other high-frequency information. The Gaussian low-pass filter smooths the image in the frequency domain by attenuating a specified range of high-frequency components [6]. The performance of a filter depends on its size: the defect may not be filtered out when the size is too small, while the background is estimated inaccurately when the size is too large. Moreover, finding a filter that can be used on all radiographic images is difficult.
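A minimal sketch (not taken from any cited paper) of the three filters discussed above, assuming the digitized radiograph is already loaded as a 2D array named img; the window sizes and sigma are illustrative assumptions only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter
from scipy.signal import wiener

img_f = img.astype(np.float64)  # assumed 2D grayscale radiograph

# Median filter: replaces each pixel with the median of its 3x3 neighborhood,
# suppressing impulsive noise while largely preserving edges.
img_median = median_filter(img_f, size=3)

# Adaptive Wiener filter: smooths according to the local mean and variance
# estimated over a 5x5 window, so flat regions are smoothed more than edges.
img_wiener = wiener(img_median, mysize=5)

# Gaussian low-pass filter: attenuates high-frequency components globally;
# sigma trades noise suppression against loss of fine defect detail.
img_smooth = gaussian_filter(img_wiener, sigma=1.5)
```

The choice of window size reflects the trade-off noted above: small windows may leave noise, while large windows blur small defects into the background estimate.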
El-Tokhy used blind image separation (BIS) instead of filtering. The method first selected an appropriate method for source extraction, then identified a multiple-input multiple-output finite impulse response system, and lastly eliminated the ambiguities of convolutive blind source separation with a correlation method. Based on this procedure, the noise is separated from the gamma radiography image [14].

3.2. Contrast Enhancement

Radiographic images are usually low in contrast and short of detail due to the limited intensity range accommodated by the capture device. The objective of contrast enhancement is to adjust the contrast to highlight the important parts without losing other information [9,15]. El-Tokhy used a contrast stretch and normalization algorithm to improve the image. The method first normalizes the image with low and high threshold values, then finds the values closest to the minimum and maximum, and finally performs contrast stretching according to the determined range of contrast values [14]. Shafeek applied histogram stretching and histogram equalization to obtain the optimum image before the segmentation process. The histogram stretching algorithm aims to increase the contrast of images, while the objective of histogram equalization is to obtain an image with brightness levels distributed equally over the whole brightness scale [2]. Ye improved the contrast of both the welding seam area and the background area with a sine-function intensification method. After that, the gray levels of the background area and the welding seam area were concentrated at high and low gray levels, respectively, and the histogram showed two peaks [16].
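A minimal NumPy-only sketch of the two enhancement operations described above, assuming img is a 2D uint8 radiograph already in memory; the percentile limits are illustrative assumptions.

```python
import numpy as np

def contrast_stretch(img, low_pct=1, high_pct=99):
    """Linearly map the [low, high] percentile range onto the full 0-255 scale."""
    lo, hi = np.percentile(img, (low_pct, high_pct))
    stretched = np.clip((img - lo) / max(hi - lo, 1e-6), 0, 1)
    return (stretched * 255).astype(np.uint8)

def histogram_equalize(img):
    """Redistribute gray levels so the cumulative histogram is roughly linear."""
    hist, _ = np.histogram(img.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1)  # normalize to [0, 1]
    lut = (cdf * 255).astype(np.uint8)                       # gray-level mapping
    return lut[img]

enhanced = histogram_equalize(contrast_stretch(img))
```

As cautioned in Section 3, such global remapping can alter the brightness distribution of a defect, so it should be applied with care when the intensity profile itself is later used as a feature.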

4. Defect Segmentation

The weld image contains not only the weld but also background information. The background is defined as the regions in an image that are not important to the analyst. In welding defect detection, the regions with defects in the weld bead are the target of recognition. These regions are therefore the focus of the analyst, while background regions without defects should be removed. Background subtraction methods try to recognize the defects by subtracting the background from the original image.
The defects can be inspected by subtracting the background from the original image because they are superimposed on the other image structures. This inspection method was popular in early research on weld seam detection. Hyatt designed a multiscale method for removing the background structure in digital radiographs while preserving the defect details [17]. The gray level in a defect region usually changes with high spatial frequencies, while that in a normal region varies gradually, i.e., with low spatial frequencies [18]. Many methods are built on this basis. Liao pointed out that the line profile of a good weld has a bell shape, while the existence of defects destroys this bell. Based on this, they first scaled each line image so that each profile had approximately the same size, then chose a suitable threshold value by observing the histograms of the scaled images to remove the background, and finally detected whether there were anomalies in the profiles and generated a two-dimensional (2D) flaw map [19]. Furthermore, Wang and Liao simulated a 2D background model of a normal welding bead that can be subtracted from the original image; this method can be applied to detect all types of flaws [9]. These methods take the line profile as the object of detection and process each weld image line by line. Aoki constructed a background subtraction method for extracting the defect image; in the process of subtracting the background, a special point-connection method was proposed to preserve the background distribution of the defective parts [20]. Carrasco and Mery used a bottom-hat filter to separate the majority of defects from the background. This filter consists of two stages: first, a background image without flaws is produced by a morphological closing operator; secondly, the defective regions are identified by subtracting the background image from the original image [21].
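An illustrative sketch of background subtraction in the spirit of the bottom-hat approach just described (not the exact algorithm of [21]): a defect-free background is estimated with a large grayscale morphological closing and subtracted from the original image. The structuring-element size, the threshold rule and the assumption that defects appear darker than their surroundings are all illustrative.

```python
import numpy as np
from scipy.ndimage import grey_closing

def subtract_background(img, selem_size=25):
    # The closing removes dark structures smaller than the structuring element,
    # yielding a flaw-free background model of the weld bead.
    background = grey_closing(img.astype(np.float64), size=(selem_size, selem_size))
    # Defects darker than their surroundings appear as positive residuals.
    return background - img

residual = subtract_background(img)
defect_mask = residual > residual.mean() + 3 * residual.std()  # simple global threshold
```

The structuring element must be larger than the defects of interest but smaller than the slow background variation, mirroring the filter-size trade-off discussed in Section 3.1.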
Kazantsev took the weld detection problem as a problem of hypothesis testing. They applied the statistical techniques for weld defect detection in radiography images. The results showed that the defects are segmented from the original radiography images based on nonparametric statistics successfully [22].
Other image processing methods, involving thresholding [23], the watershed transform [24], morphological operations [25] and edge detection [26], are used to segment defects by searching for significant variations of pixels that may correspond to defects. Carrasco and Mery used a binary thresholding method to segment the defective regions after noise reduction and background subtraction. In order to eliminate over-segmentation, filters taken from mathematical morphology were then used, and the watershed transform was finally applied to separate internal defective regions. The watershed transform, which comes from the field of mathematical morphology, is a well-established tool for image segmentation. Generally, the watershed transform is computed on the gradient of the original image, so that the boundaries are located at high-gradient points; it produces a complete division of the image into separate regions. The resulting image of each step is shown in Figure 3. The result yielded an area of 0.9358 under the receiver operating characteristic (ROC) curve [21]. Anand applied morphological image processing to detect suspected defect regions. The approach first detected the important edges by applying the Canny operator (a computational approach to edge detection) [26] with an appropriate threshold value. In order to obtain closed contours, a morphological image processing approach was used which dilated some similar boundaries and eroded some irrelevant boundaries [27]. Nafaa applied Artificial Neural Networks (ANN) for edge detection in X-ray images containing welding defects, replacing the application of filtering techniques. The results show that the contours of welding defects in radiograms are delivered directly and successfully. It was also shown that the proposed neural segmentation technique is robust to noise and variable luminance. However, in this type of application, powerful and fast computers are needed due to the slow execution speed of the technique [1]. Shafeek implemented the segmentation process using a suitable threshold to convert the image in a specified window into a binary image, separating the defects from the surrounding areas. They then applied an eight-neighborhood boundary chain code algorithm to identify the contours of the defects. As a result, the coordinates of the boundary edges of all defects can be extracted and stored. The marked boundary edge codes can be used to calculate the area, perimeter, width and height of the defects. This useful information could later be used to assess the defects and to classify the welding defects [2].
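A hedged sketch of the general threshold/clean-up/watershed pipeline described above (not the exact implementation of [21]), using scikit-image; img and all parameter values are illustrative assumptions, as is the polarity (defects assumed darker than their surroundings).

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel, threshold_otsu
from skimage.morphology import remove_small_objects
from skimage.segmentation import watershed

binary = img < threshold_otsu(img)                  # candidate (dark) defect pixels
binary = remove_small_objects(binary, min_size=20)  # suppress over-segmentation

# Markers: one seed per candidate region, plus a single background seed far from them.
seeds, _ = ndi.label(binary)
markers = np.where(binary, seeds + 1, 0).astype(np.int32)
markers[ndi.distance_transform_edt(~binary) > 10] = 1  # sure background

# Watershed computed on the gradient image: region boundaries settle on high-gradient points.
labels = watershed(sobel(img.astype(float)), markers)
```

The marker-based variant shown here is one common way to keep the watershed from flooding the whole image into tiny catchment basins; the morphological clean-up step plays the same anti-over-segmentation role as in the cited work.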
Valavanis used graph-based segmentation algorithms [28] to extract segments that correspond to defects or false positives. The method considers global image characteristics rather than local variation, so it can identify distinct regions even when there is large variability in their interior [29].

5. Defect Classification

The classification of defects can be treated as two tasks: a binary (defect, non-defect) problem and a multi-class (gas cavity, lack of penetration, porosity, slag inclusion, crack, lack of fusion, wormhole, undercut, etc.) problem. Both involve feature extraction, feature selection and the choice of classifier.

5.1. Feature Extraction

The raw data of the defect image cannot be analyzed directly for classification. Some measures or descriptors must be used to extract the characteristics of the defects; this process is called feature extraction. Classification based on features of the defects is one of the most widely used techniques. The features, which have a lower dimension than the raw data, are usually human-defined according to expertise or experience. The choice of proper features, called feature engineering, is a key factor for recognizing defects in an intelligent system. Feature engineering is similar to human interpretation, in which a kind of weld defect is recognized according to visual information.
At the beginning of these studies, many scholars defined simple features on a one-dimensional grayscale curve. Later, more and more scholars began to focus on the shape, size, location and intensity of the corresponding pixels of the defects in 2D images, so geometric and texture features were widely applied. The two kinds of features defined by different scholars were also not identical. Furthermore, to synthesize the information obtained from the image, several scholars combined different kinds of features. After that, some researchers tried to apply features that had performed well in other fields to weld detection.
Liao et al. processed each image line by line. For each line, they extracted 25 features, such as the degree of symmetricity (DOS), the median DOS and the goodness of fit (GOF), as the inputs of classifiers [30]. Furthermore, they extracted three new features for each object in the line image for classification: the width, the mean square error (MSE) between the object and its Gaussian fit, and the peak intensity (gray level) [31,32]. Perner calculated various parameters of the profile plot as features for classification [33]. All the features mentioned above are based on the gray-level curve of the line profile.
Geometric features and texture features have been the most commonly used for weld defect classification in recent decades [6,34]. Geometric features usually describe the shape, size, location and intensity information of welding defects, while texture features provide very useful visual cues commonly exploited in image pattern recognition. Wang extracted 12 numeric features from the segmented binary defect image, such as the distance from the center, radius mean, standard deviation and circularity [35]. Four geometric features, including position, aspect ratio, ratio and roundness, were extracted to build the inputs of a nonlinear pattern classifier [36]. Three new geometric features were defined and added to the features for classification [37]. Shen defined four new features, roughness of the defect edge, roughness of the defect region, skewness and kurtosis, which are closely related to the defect types but cannot be detected by human eyes [38]. The expert vision system proposed by Shafeek was based on features estimating the shape, orientation and location of the defect [39]. Zhang selected eight parameters related to the weld center, symmetry, filling-degree index and relative gray scale as defect features, such as edge flatness and the ratio of perimeter to area [40]. Mery extracted 28 texture features based on the co-occurrence matrix for three distances, and 64 texture features based on Gabor functions, for classification [41]. Valavanis proposed a multimodal feature definition combining geometric and texture features to capture all visual attributes [29]. Kumar extracted sets of 8, 64 and 44 texture feature vectors based on the gray-level co-occurrence matrix for classifying various defects [15]. They further combined geometric and texture features for classification and compared the performance of classifiers with different feature sets; the results show that a classifier with the combined features performs better than one with only geometric or only texture features [12].
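An illustrative sketch (not any specific paper's feature set) of how a handful of geometric descriptors and gray-level co-occurrence matrix (GLCM) texture measures can be extracted from segmented defect regions with scikit-image; the particular features, distances and angles chosen are assumptions, and graycomatrix/graycoprops are spelled greycomatrix/greycoprops in older scikit-image versions.

```python
import numpy as np
from skimage.measure import label, regionprops
from skimage.feature import graycomatrix, graycoprops

def extract_features(img, defect_mask):
    feats = []
    for region in regionprops(label(defect_mask), intensity_image=img):
        # Geometric features: size, shape and position of the defect region.
        geometric = [
            region.area,
            region.perimeter,
            region.eccentricity,
            4 * np.pi * region.area / max(region.perimeter ** 2, 1e-6),  # circularity
            region.centroid[0], region.centroid[1],
            region.mean_intensity,
        ]
        # Texture features from the GLCM of the defect's bounding-box patch.
        minr, minc, maxr, maxc = region.bbox
        patch = img[minr:maxr, minc:maxc].astype(np.uint8)
        glcm = graycomatrix(patch, distances=[1, 2], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        texture = [graycoprops(glcm, prop).mean()
                   for prop in ("contrast", "homogeneity", "energy", "correlation")]
        feats.append(geometric + texture)
    return np.array(feats)
```

Each row of the returned matrix describes one candidate defect and can be fed directly to the feature selection and classification stages discussed below.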
Mel-Frequency Cepstral Coefficients (MFCCs), as low-dimensional, fixed-dimension feature vectors, are very effective in speech processing [42]. They have already been successfully tested as damage-sensitive features for mechanical systems [43] and applied to damage detection in structural health monitoring (SHM) [44]. Kasban used them together with polynomial features for weld defect identification. These features, extracted from the 1D lexicographically ordered signals or their power density spectra, are suitable for defect detection in the presence of noise [11,45].
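A rough sketch of MFCC-style features computed from the lexicographically ordered (row-by-row flattened) image signal, in the general spirit of the cepstral approach above rather than the exact procedure of [11,45]; librosa treats the 1D signal as audio, and the sampling rate, frame sizes and coefficient count are arbitrary assumptions.

```python
import numpy as np
import librosa

def mfcc_features(img, n_mfcc=13, sr=8000):
    signal = img.astype(np.float32).flatten()            # lexicographic ordering
    signal = (signal - signal.mean()) / (signal.std() + 1e-6)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc,
                                n_fft=512, hop_length=256)
    # Average over frames to obtain one fixed-length feature vector per image.
    return mfcc.mean(axis=1)
```

The fixed length of the resulting vector, regardless of image size, is what makes cepstral features convenient inputs for the classifiers discussed in Section 5.3.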
These features are defined on the gray-level curve of line images or on 2D images. In general, the appropriate feature is decided by the inspector or researcher, who judges according to his experience and knowledge. Thus, different inspectors are inclined toward different types of features, and it is difficult to say which kind of feature is best.

5.2. Feature Selection

Sometimes the number of extracted features is large, so the dimension of the feature vector is high, which leads to an excessive computational burden. Feature selection, one of the mainstream approaches for processing high-dimensional data, is essential to reduce the difficulty of the classification task and the computational cost. It tries to find an optimum subset of the original features which provides useful information. This work is too complicated to perform manually, and sometimes impossible. There are a variety of methods for evaluating the quality of the extracted features. Jain compared the performance of feature selection algorithms [46]. Many feature selection methods find the optimum subset with an evaluation function that assesses the quality of feature subsets. Liao used a simple feature selection approach based on the correlation coefficients between the independent variables and the dependent variable to meet the 7 ± 2 rule of the fuzzy expert system. The method reduced the dimension of the features from 12 to 7 or 9 without too much loss of accuracy [35]; however, to some extent, using the feature subset reduced the accuracy. García-Allende presented sequential forward floating selection (SFFS) as a substitute for principal component analysis (PCA) for weld detection. The algorithm performed dimensionality reduction better than PCA and improved the computational performance [47]. In the work of Mery, 148 texture features were extracted for each segmented region, which is too many to be the input of a classifier. In order to reduce the computational time, they used a sequential forward selection (SFS) method which requires an objective function, obtained from the Fisher discriminant, to evaluate the performance of the classifier with m features. The method began with one feature (m = 1), then added one feature in each iteration, searching for the features that maximize the objective function until the optimal n features were obtained [41]. Valavanis used a sequential backward selection (SBS) method with a classifier and compared the classification performance using different feature sets (43 features and a subset of seven features) [29]. The results showed that SBS saves about 80% of the feature computation. The accuracy of the artificial neural network (ANN) classifier using 7 features was almost as high as that using 43 features. However, the situation was not the same for the support vector machine (SVM): the feature selection procedure favored the ANN but not the SVM in classification performance.
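A hedged sketch of wrapper-style sequential forward selection as described above: at each step, the feature whose addition maximizes the cross-validated accuracy of a simple classifier is kept. The feature matrix X (n_samples x n_features), the labels y, and the choice of a k-NN wrapper are assumptions; scikit-learn also offers SequentialFeatureSelector for the same purpose.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def sequential_forward_selection(X, y, n_select, cv=3):
    selected, remaining = [], list(range(X.shape[1]))
    clf = KNeighborsClassifier(n_neighbors=3)
    while len(selected) < n_select:
        # Score every candidate subset obtained by adding one more feature.
        scores = {f: cross_val_score(clf, X[:, selected + [f]], y, cv=cv).mean()
                  for f in remaining}
        best = max(scores, key=scores.get)   # feature giving the best accuracy
        selected.append(best)
        remaining.remove(best)
    return selected

# Usage: keep e.g. 7 of the original features, as in several of the cited studies.
# best_features = sequential_forward_selection(X, y, n_select=7)
```

Sequential backward selection works the same way in reverse, starting from the full set and repeatedly dropping the least useful feature.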
Many selection methods use the classification performance (classification accuracy or error) of a classifier as the evaluation function. In this situation, the effectiveness of the feature selection method is related to the choice of classifier.

5.3. Classifier

The choice of classifier is another key factor influencing the performance of defect classification, and choosing an appropriate classifier is a complex problem. In [30,31], the classification task was performed with the fuzzy k-NN algorithm and the fuzzy c-means algorithm using the extracted features; it was concluded that the classification performance of fuzzy k-NN is superior to that of fuzzy c-means. Mery and Berti used three statistical classifiers which perform classification using the concept of similarity: a polynomial classifier, a Mahalanobis classifier and a nearest neighbor classifier [48]. They compared the performance of these classifiers with 7 selected features using the true positives, false positives, false negatives, true negatives, sensitivity and 1-specificity [41]. Liao used fuzzy expert systems for classification and compared them with the fuzzy k-nearest neighbor algorithm and multi-layer perceptron neural networks; evaluation with the bootstrap method showed that the proposed fuzzy system is more transparent and more easily understood by humans [35]. Zhang found that the multi-class SVM is almost unaffected by reducing the training samples, so it had higher accuracy than a fuzzy neural network under small-sample conditions [40]. Shen used a direct multiclass SVM (DMSVM) to classify the defects using features they defined themselves. DMSVM yields a direct method for training multiclass predictors instead of constructing the classifier according to the samples to be classified [38]. Silva implemented a study of nonlinear classifiers using ANN and showed that the quality of the extracted features is more important than their quantity [36]. In the last ten years, ANN has been widely used in the classification of welding defects [6,12,49]. Vilar applied ANN to classify the defect candidates. They used three different regularization methods (regularization with a modified performance function, automatic setting of the regularization parameters, and early stopping or bootstrap) to improve network generalization; the network with the best performance was obtained through several tests [49].
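An illustrative comparison of three classifier families that recur in the cited works (SVM, multi-layer perceptron and k-NN) on an extracted feature matrix X with defect-type labels y; the data, split and hyperparameters are assumptions, not values from any paper.

```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y)

classifiers = {
    "SVM (RBF kernel)": SVC(kernel="rbf", C=10, gamma="scale"),
    "MLP": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in classifiers.items():
    model = make_pipeline(StandardScaler(), clf)  # feature scaling matters for SVM/MLP/k-NN
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")
```

As the comparison studies summarized in this section show, accuracy alone rarely settles the choice; the size of the data set, the number of features and the need for interpretability all matter.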
Perner compared the performance of neural networks and decision trees. In this work, they pointed out that the error rate cannot be the only criterion for comparing different classifiers, and proposed additional criteria such as generalization, representation ability, classification quality and classification cost [33]. They noted that the RBF neural net and the back-propagation network perform better than the decision tree in terms of the error rate on the test data set, while the unpruned decision tree shows the best error rate on the design data set. The representation and generalization ability of the neural networks is more balanced, while the unpruned decision tree overfits the data. However, the neural nets need feature selection before learning; otherwise, they become more complex and time-consuming. Moreover, decision trees have strong explanation capability, since their rules can be understood and controlled by humans.
The selection of classifiers is a complex problem. Their accuracy is closely related not only to the extracted features but also to the characteristics of the original data. For example, the complexity of some classifiers depends on the number of input features, and some classifiers need an additional feature selection operation when the number of input features is large.
Table 1 shows the primary content of most of the cited papers as a guideline for readers. The table covers the traditional computer vision technologies discussed above. The results column reports only the best accuracy or false alarm rate obtained by the better-performing classifiers. The results of different papers were generated from different data using different evaluation criteria, so they are not appropriate for direct comparison; the values only give a sense of classifier performance. The word "complex" indicates that the performance of the classifier was evaluated with multiple criteria, as partly introduced in the preceding text.
From the summary of the above-mentioned literature, it can be concluded that research on the classification of welding defects based on radiographic images has focused on feature extraction, feature selection and classifier design. These three aspects are closely connected and all influence the classification performance, yet the contribution of most works is to optimize only one or two of them. Human-defined features cannot be updated online, and feature selection involves the prior selection of hyperparameters such as the latent dimension. Thus, a system comprising feature definition/extraction, feature selection and classifier training cannot be jointly optimized, which may hinder the final performance of the whole system [50].

5.4. New Methods

Considering that most defect features are designed manually and lack intelligence, many researchers have begun to focus on automatic feature extraction methods based on learning. As classical learning-based approaches, deep learning [51] and sparse representation [52] have supplied new ideas for recognizing objects automatically in optical images. Olshausen pointed out that images are sparse and can be compressed [53]. Deep learning can learn hierarchical features, replacing handcrafted features. The excellent performance of these approaches in learning features from images has encouraged people to use them in automatic defect detection [54,55,56]. For the automatic detection of welding defects, the application of these two methods has begun to rise, especially the convolutional neural network (CNN), a typical deep learning model [57,58,59].
Within the focus of this paper, Chen developed a system for recognizing diverse types of defects from images, inspired by the human visual inspection mechanism. The method, an unsupervised algorithm, learns a dictionary from many normal images, replacing experienced workers. The dictionary can be used to sparsely reconstruct the background and weld region of test images without the defective regions, so that the defects stand out in the difference image between the test image and the reconstructed image [60]. Yang proposed a CNN-based model to classify X-ray weld images by improving the convolution kernel and the activation function; the method does not need noise reduction, feature extraction or enhancement [61]. Based on the principle of visual perception, Li constructed a deep learning network with 10 layers to directly determine the type of a suspected defect. The network can judge whether the defect is linear or circular without extracted features [62]. Ye designed an online segmentation and recognition method for weld defects based on compressed sensing. The method first establishes an offline database of defect images, then segments the defect using a clustering method, and lastly determines the type of defect using an optimal dictionary whose atoms are characteristic values of defects. Hou developed a model based on a deep convolutional neural network (DCNN) to extract deep features directly from X-ray images without any preprocessing. The performance of the deep features was compared with that of traditional designed features, such as texture features and histogram-of-oriented-gradients features; the results showed that the separability of the deep features is better [63]. Figure 4 shows the difference between traditional computer vision technology based on the above-mentioned techniques and the deep learning approach.
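A minimal sketch of a small CNN classifier for weld-defect patches, in the general spirit of the CNN-based studies cited above rather than any specific paper's architecture; the 1-channel 64x64 input size, layer widths and number of classes are assumptions.

```python
import torch
import torch.nn as nn

class WeldDefectCNN(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        # Three conv/pool stages learn increasingly abstract features directly
        # from the pixels, replacing handcrafted geometric/texture descriptors.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, n_classes),   # one logit per defect type
        )

    def forward(self, x):                # x: (batch, 1, 64, 64)
        return self.classifier(self.features(x))

model = WeldDefectCNN()
logits = model(torch.randn(8, 1, 64, 64))  # smoke test: output shape (8, n_classes)
```

Trained end to end with a cross-entropy loss, such a network jointly optimizes feature extraction and classification, which is exactly the property the next paragraph contrasts with the traditional pipeline.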
Deep models can directly learn high-level structural features from the original images, avoiding segmentation of the weld region. Thus, unlike previous algorithms, the performance of these models does not depend on accurate weld region extraction. Moreover, deep models embracing feature extraction, selection and classification can be jointly optimized.

6. Discussions

6.1. Achievements

Inspection of weld defects from radiographic images has always been an important and challenging task. The limitations of human evaluation have persuaded researchers to develop new technologies for automatic inspection. Developments in image analysis and computer vision techniques have created the right conditions for proposing methods to automate weld defect inspection. Looking at the accomplishments introduced in the previous sections, developments in detection methods provide sufficient means for segmenting the weld, extracting features and classifying, in order to detect and classify nearly all defects.
Defect segmentation methods based on traditional image processing can separate defects from the original images, a process that can be regarded as recognition of the defect. The location of defects can be obtained from the segmented results. This is similar to the second level of the Rytter classification [64], which is often used in damage detection and assessment in structural health monitoring (SHM) [65]. Detection and localization of defects have therefore been automated to some extent. At the same time, important parameters of the defects can be calculated from the segmented defect region; these are often used later as features to classify the welding defects. Thus, the quality of defect segmentation affects the performance of the classification system. Most of the defect classification systems based on feature extraction, selection and a classifier introduced in Section 5 achieved high accuracy. However, the features used are defined or designed by humans, which hinders the automation of the system. Innovative technologies such as deep neural networks have made it possible to learn features automatically from the image, and they have shown remarkable performance in their recent application to weld defect detection. If the X-ray images come from thick steel pipes, the edges of the image will be blurry and the gray-scale distribution of the weld region will be uneven. Deep neural networks can deal with this situation well, avoiding noise reduction and feature extraction and enhancement. As end-to-end detection systems, the networks can directly determine whether a suspected defect image is a linear defect, a circular defect or noise.

6.2. Challenges

In contrast to the developments mentioned in the previous section, there are still challenges in the automation of weld defect inspection. The main contribution of the last decade has been the development of efficient and well-organized methods of feature extraction for constructing a robust classifier. Features providing enough discrimination should be selected; this is an important step in classifier design. Previous studies have mainly focused on geometric and texture features, while there are countless unknown patterns and shapes for each type of weld defect.
Although the deep neural networks introduced above are dominant in learning hierarchical features from weld images, previous research shows drawbacks in developing a generalized defect inspection model based on these networks. There are mainly two aspects: model training and the preparation of datasets. Due to the huge model complexity of deep learning methods, the training time of the models is usually long, which makes it difficult to realize real-time defect detection. However, we believe that the implementation of an online, real-time welding monitoring system to prevent possible defects from happening will be one of the key concerns in the future. Moreover, the performance of deep models depends heavily on the scale and quality of the datasets. For weld defect inspection, it is difficult to obtain a good dataset (a large number of weld images with defects labeled by humans). We believe that unsupervised representation learning provides good ideas for learning a layer of feature representations from unlabeled images. However, the literature on the application of unsupervised representation learning to welding defect detection is still very scarce, and its effectiveness has yet to be supported and proven by more work.
There is an intrinsic problem in the domain of weld inspection, as in many other domains such as fraud detection and oil spill detection: the class imbalance problem, namely that the number of examples of one class is much higher than that of the others. The classifier tends to obtain high accuracy on the majority class but poor accuracy on the minority class. Figure 5 shows that class imbalance leads to poorer performance when classifying the minority class, and weld defect detection is precisely a problem of classifying the minority class. Boaretto carried out the identification of defects successfully; however, they failed in their attempt to classify the defects due to the imbalanced data generated by the few samples of each defect type [34]. Liao studied the imbalanced data problem in the classification of different types of weld flaws. He used eight evaluation criteria to study the effectiveness of 22 data preprocessing methods for dealing with imbalanced data. The results indicated that some preprocessing methods do not improve any criterion and that their effects vary from one classifier to another [66]. These preprocessing methods solve the problem only from the perspective of the data; the literature on improved algorithms to deal with imbalanced learning for weld defect inspection is still scarce.
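A hedged sketch of two simple, generic countermeasures against class imbalance when training a defect classifier, cost-sensitive class weights and random oversampling of minority classes; X and y are an assumed feature matrix and label vector, and these are illustrative techniques, not the specific preprocessing methods evaluated in [66].

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.utils import resample

# (1) Cost-sensitive learning: weight each class inversely to its frequency
#     so errors on rare defect types are penalized more.
clf = SVC(kernel="rbf", class_weight="balanced").fit(X, y)

# (2) Random oversampling: replicate minority-class samples up to the size
#     of the largest class before training.
classes, counts = np.unique(y, return_counts=True)
target = counts.max()
X_parts, y_parts = [], []
for c in classes:
    Xc, yc = X[y == c], y[y == c]
    if len(yc) < target:
        Xc, yc = resample(Xc, yc, n_samples=target, replace=True, random_state=0)
    X_parts.append(Xc)
    y_parts.append(yc)
X_bal, y_bal = np.vstack(X_parts), np.concatenate(y_parts)
```

As the cited study suggests, such data-level remedies help unevenly across classifiers, which is why algorithm-level approaches to imbalanced learning remain an open direction for weld inspection.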

7. Conclusions

To assure the reliability of a weld structure, it is critical to inspect and assess the quality of the weld joints from the radiographic films produced by NDT. The evaluation of films performed by certified operators is time-consuming, subjective and error-prone, so many researchers have tried to automate the evaluation with computer assistance. This article introduced the available practices in weld defect inspection from digital radiographic images, presenting the technologies stage by stage. The review supplied a comprehensive survey of automatic models based on traditional computer vision techniques, falling into three categories: image pre-processing, defect segmentation and defect classification. The paper first presented image pre-processing tools, such as morphological and thresholding operations, which improve image quality and segment the defect region. It then summarized three aspects of classification, namely feature extraction, feature selection and classifier design, and pointed out the limitations of these traditional methods. Lastly, the advantages of sparse representation and deep models were analyzed and their applications in weld defect inspection were introduced. The review concluded with an analysis of the achievements of automated weld defect detection and challenges summarized as open questions for future research in the field.

Author Contributions

W.H., X.Z. and J.G. conceived and designed the field survey; W.H. and Y.W. processed and analyzed the data; D.Z. and X.Z. contributed to the survey and reviewed the submitted manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This project was funded by the National Natural Science Foundation of China (Nos. 51805006 and 51675005).

Acknowledgments

The authors would like to thank the anonymous reviewers for their valuable comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nafaa, N.; Redouane, D.; Amar, B. Weld defect extraction and classification in radiographic testing based artificial neural networks. In Proceedings of the 15th World Conference on Non Destructive Testing, Rome, Italy, 15–21 October 2000. [Google Scholar]
  2. Shafeek, H.I.; Gadelmawla, E.S.; Abdel-Shafy, A.A. Assessment of welding defects for gas pipeline radiographs using computer vision. NDT E Int. 2004, 37, 291–299. [Google Scholar]
  3. Yeh, P.S.; Le Moigne, J.; Fong, C.B. Computer-aided X-ray film interpretation. In MML TR 88–76; Martin Marietta Laboratories: Baltimore, MD, USA, 1988. [Google Scholar]
  4. Basart, J.P.; Xu, J. Automatic detection of flaws in welds. In Final Report to Martin Marietta MSS Contract# A71445; Center for NDE, Iowa State University: Ames, IA, USA, 1991. [Google Scholar]
  5. Cloud, E.; Fraser, K.; Krywick, S. Automated examination of X-ray welds. In Final Report to Martin Marietta MSS; Martin Marietta Electronics, Information & Missiles Group: Orlando, FL, USA, 1992. [Google Scholar]
  6. Zapata, J.; Vilar, R.; Ruiz, R. Performance evaluation of an automatic inspection system of weld defects in radiographic images based on neuro-classifiers. Expert Syst. Appl. 2011, 38, 8812–8824. [Google Scholar]
  7. Zscherpel, U.; Nockemann, C.; Mattis, A.; Heinrich, W. Neue Entwicklungen bei der Filmdigitalisierung. In Proceedings of the DGZfP-Jahrestagung in Aachen, Tagungsband, Aachen, Germany, 22–24 March 1995. [Google Scholar]
  8. Strang, G. Wavelets and dilation equations: A brief introduction. SIAM Rev. 1989, 31, 614–627. [Google Scholar]
  9. Wang, G.; Liao, T.W. Automatic identification of different types of welding defects in radiographic images. NDT E Int. 2002, 35, 519–528. [Google Scholar]
  10. Aoki, K.; Suga, Y. Application of artificial neural network to discrimination of defect type in automatic radiographic testing of welds. ISIJ Int. 1999, 39, 1081–1087. [Google Scholar]
  11. Zahran, O.; Kasban, H.; El-Kordy, M.; Abd El-Samie, F.E. Automatic weld defect identification from radiographic images. NDT E Int. 2013, 57, 26–35. [Google Scholar]
  12. Kumar, J.; Anand, R.S.; Srivastava, S.P. Flaws classification using ANN for radiographic weld images. In Proceedings of the IEEE International Conference on Signal Processing and Integrated Networks (SPIN), Noida, India, 20–21 February 2014; pp. 145–150. [Google Scholar]
  13. Li, L.; Xiao, L.; Liao, H. Welding quality monitoring of high frequency straight seam pipe based on image feature. J. Mater. Process. Technol. 2017, 246, 285–290. [Google Scholar]
  14. El-Tokhy, M.S.; Mahmoud, I.I. Classification of welding flaws in gamma radiography images based on multi-scale wavelet packet feature extraction using support vector machine. J. Nondestruct. Eval. 2015, 34. [Google Scholar] [CrossRef]
  15. Kumar, J.; Anand, R.S.; Srivastava, S.P. Multi-class welding flaws classification using texture feature for radiographic images. In Proceedings of the IEEE International Conference on Advances in Electrical Engineering (ICAEE), Vellore, India, 9–11 January 2014; pp. 1–4. [Google Scholar]
  16. Ye, H.; Juefei, L.; Huijun, L. Detection and recognition of defects in X-ray images of welding seams under compressed sensing. J. Phys. Conf. Ser. 2019, 1314, 012064. [Google Scholar]
  17. Hyatt, R.; Kechter, G.E.; Nagashima, S. A method for defect segmentation in digital radiographs of pipeline girth welds. Mater. Eval. 1996, 54, 379793. [Google Scholar]
  18. Daum, W.; Rose, P.; Heidt, H.; Builtjes, J.H. Automatic recognition of weld defects in x-ray inspection. Br. J. Nondestruct. Test. 1987, 29, 79–81. [Google Scholar]
  19. Liao, T.W.; Li, Y. An automated radiographic NDT system for weld inspection: Part II—Flaw detection. NDT E Int. 1998, 31, 183–192. [Google Scholar]
  20. Aoki, K.; Suga, Y. Intelligent image processing for abstraction and discrimination of defect image in radiographic film. In Proceedings of the Seventh International Offshore and Polar Engineering Conference, Honolulu, HI, USA, 25–30 May 1997. [Google Scholar]
  21. Carrasco, M.A.; Mery, D. Segmentation of welding defects using a robust algorithm. Mater. Eval. 2004, 62, 1142–1147. [Google Scholar]
  22. Kazantsev, I.; Lemahieu, I.; Salov, G.I.; Denys, R. Statistical detection of defects in radiographic images in nondestructive testing. Signal Process. 2002, 82, 791–801. [Google Scholar]
  23. Murakami, K. Image processing for non-destructive testing. Weld. Int. 1990, 4, 144–149. [Google Scholar]
  24. Grau, V.; Mewes, A.U.J.; Alcaniz, M.; Kikinis, R.; Warfield, S.K. Improved watershed transform for medical image segmentation using prior information. IEEE Trans. Med. Imaging 2004, 23, 447–458. [Google Scholar] [PubMed]
  25. Sofia, M.; Redouane, D. Shapes recognition system applied to the non destructive testing. In Proceedings of the 8th European Conference on Non-Destructive Testing, Barcelona, Spain, 17–21 June 2002. [Google Scholar]
  26. Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 8, 679–698. [Google Scholar]
  27. Anand, R.S.; Kumar, P. Flaw detection in radiographic weld images using morphological approach. NDT E Int. 2006, 39, 29–33. [Google Scholar]
  28. Felzenszwalb, P.F.; Huttenlocher, D.P. Efficient graph-based image segmentation. Int. J. Comput. Vis. 2004, 59, 167–181. [Google Scholar]
  29. Valavanis, I.; Kosmopoulos, D. Multiclass defect detection and classification in weld radiographic images using geometric and texture features. Expert Syst. Appl. 2010, 37, 7606–7614. [Google Scholar]
  30. Liao, T.W.; Li, D.M.; Li, Y.M. Detection of welding flaws from radiographic images with fuzzy clustering methods. Fuzzy Sets Syst. 1999, 108, 145–158. [Google Scholar]
  31. Liao, T.W.; Li, D.; Li, Y. Extraction of welds from radiographic images using fuzzy classifiers. Inf. Sci. 2000, 126, 21–40. [Google Scholar]
  32. Liao, T.W. Fuzzy reasoning based automatic inspection of radiographic welds: weld recognition. J. Intell. Manuf. 2004, 15, 69–85. [Google Scholar]
  33. Perner, P.; Zscherpel, U.; Jacobsen, C. A comparison between neural networks and decision trees based on data from industrial radiographic testing. Pattern Recognit. Lett. 2001, 22, 47–54. [Google Scholar]
  34. Boaretto, N.; Centeno, T.M. Automated detection of welding defects in pipelines from radiographic images DWDI. NDT E Int. 2017, 86, 7–13. [Google Scholar]
  35. Liao, T.W. Classification of welding flaw types with fuzzy expert systems. Expert Syst. Appl. 2003, 25, 101–111. [Google Scholar]
  36. da Silva, R.R.; Calôba, L.P.; Siqueira, M.H.S.; Rebello, J.M.A. Pattern recognition of weld defects detected by radiographic test. NDT E Int. 2004, 37, 461–470. [Google Scholar]
  37. Da Silva, R.R.; Siqueira, M.H.S.; de Souza, M.P.V.; Rebello, J.M.A. Estimated accuracy of classification of defects detected in welded joints by radiographic tests. NDT E Int. 2005, 38, 335–343. [Google Scholar]
  38. Shen, Q.; Gao, J.; Li, C. Automatic classification of weld defects in radiographic images. In Insight—Non-Destructive Testing and Condition Monitoring; The British Institute of Non-Destructive Testing: Northampton, UK, 2010; Volume 52, pp. 134–139. [Google Scholar]
  39. Shafeek, H.I.; Gadelmawla, E.S.; Abdel-Shafy, A.A.; Elewa, I.M. Automatic inspection of gas pipeline welding defects using an expert vision system. NDT E Int. 2004, 37, 301–307. [Google Scholar]
  40. Zhang, X.G.; Xu, J.J.; Ge, G.Y. Defects recognition on X-ray images for weld inspection using SVM. In Proceedings of the IEEE International Conference on Machine Learning and Cybernetics, Shanghai, China, 26–29 August 2004; Volume 6, pp. 3721–3725. [Google Scholar]
  41. Mery, D.; Berti, M.A. Automatic detection of welding defects using texture features. In Insight—Non-Destructive Testing and Condition Monitoring; The British Institute of Non-Destructive Testing: Northampton, UK, 2003; Volume 45, pp. 676–681. [Google Scholar]
  42. Civera, M.; Ferraris, M.; Ceravolo, R.; Surace, C. The Teager-Kaiser Energy Cepstral Coefficients as an Effective Structural Health Monitoring Tool. Appl. Sci. 2019, 9, 5064. [Google Scholar]
  43. Balsamo, L.; Betti, R.; Beigi, H. A structural health monitoring strategy using cepstral features. J. Sound Vib. 2014, 333, 4526–4542. [Google Scholar]
  44. Ferraris, M.; Civera, M.; Ceravolo, R.; Surace, C.; Betti, R. Using enhanced cepstral analysis for structural health monitoring. In Proceedings of the 13th International Conference on Damage Assessment of Structures, Porto, Portugal, 9–10 July 2019; Springer: Berlin, Germany, 2020; pp. 150–165. [Google Scholar]
  45. Kasban, H.; Zahran, O.; Arafa, H. Welding defect detection from radiography images with a cepstral approach. NDT E Int. 2011, 44, 226–231. [Google Scholar]
  46. Jain, A.; Zongker, D. Feature selection: Evaluation, application, and small sample performance. IEEE Trans. Pattern Anal. Mach. Intell. 1997, 19, 153–158. [Google Scholar]
  47. Garcia-Allende, P.B.; Mirapeix, J.; Conde, O.M.; Cobo, A.; Lopez-Higuera, J.M. Spectral processing technique based on feature selection and artificial neural networks for arc-welding quality monitoring. NDT E Int. 2009, 42, 56–63. [Google Scholar]
  48. Mery, D.; da Silva, R.R.; Calôba, L.P.; Rebello, J.M.A. Pattern recognition in the automatic inspection of aluminium castings. In Insight—Non-Destructive Testing and Condition Monitoring; The British Institute of Non-Destructive Testing: Northampton, UK, 2003; Volume 45, pp. 475–483. [Google Scholar]
  49. Vilar, R.; Zapata, J.; Ruiz, R. An automatic system of classification of weld defects in radiographic images. NDT E Int. 2009, 42, 467–476. [Google Scholar]
  50. Zhao, R.; Yan, R.; Chen, Z.; Mao, K. Deep learning and its applications to machine health monitoring. Mech. Syst. Signal Process. 2019, 115, 213–237. [Google Scholar]
  51. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar]
  52. Tosic, I.; Frossard, P. Dictionary learning: What is the right representation for my signal? IEEE Signal Process. Mag. 2011, 28, 27–38. [Google Scholar]
  53. Olshausen, B.A.; Field, D.J. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature 1996, 381, 607. [Google Scholar]
  54. Xiang, Y.; Zhang, C.; Guo, Q. A dictionary-based method for tire defect detection. In Proceedings of the IEEE International Conference on Information and Automation, Hailar, China, 28–30 July 2014; pp. 519–523. [Google Scholar]
  55. Nhat-Duc, H.; Nguyen, Q.L.; Tran, V.D. Automatic recognition of asphalt pavement cracks using metaheuristic optimized edge detection algorithms and convolution neural network. Autom. Constr. 2018, 94, 203–213. [Google Scholar]
  56. Cha, Y.J.; Choi, W.; Büyüköztürk, O. Deep learning-based crack damage detection using convolutional neural networks. Comput.-Aided Civ. Infrastruct. Eng. 2017, 32, 361–378. [Google Scholar]
  57. Sassi, P.; Tripicchio, P.; Avizzano, C.A. A smart monitoring system for automatic welding defect detection. IEEE Trans. Ind. Electron. 2019. [Google Scholar] [CrossRef]
  58. Zhang, Y.; You, D.; Gao, X.; Zhang, N.; Gao, P.P. Welding defects detection based on deep learning with multiple optical sensors during disk laser welding of thick plates. J. Manuf. Syst. 2019, 51, 87–94. [Google Scholar]
  59. Zhang, Z.; Wen, G.; Chen, S. Weld image deep learning-based on-line defects detection using convolutional neural networks for Al alloy in robotic arc welding. J. Manuf. Process. 2019, 45, 208–216. [Google Scholar]
  60. Chen, B.; Fang, Z.; Xia, Y. Accurate defect detection via sparsity reconstruction for weld radiographs. NDT E Int. 2018, 94, 62–69. [Google Scholar]
  61. Yang, N.; Niu, H.; Chen, L.; Mi, G. X-ray weld image classification using improved convolutional neural network. In Proceedings of the 2018 International Symposium on Mechanics, Structures and Materials Science, Tianjin, China, 9–10 June 2018; Volume 1995, p. 020035. [Google Scholar]
  62. Yaping, L.; Weixin, G. Research on X-ray welding image defect detection based on convolution neural network. J. Phys. Conf. Ser. 2019, 1237, 032005. [Google Scholar]
  63. Hou, W.; Wei, Y.; Jin, Y.; Zhu, C. Deep Features Based A DCNN Model Classifying Imbalanced Weld Flaw Types. Measurement 2019, 131, 482–489. [Google Scholar]
  64. Rytter, A. Vibrational Based Inspection of Civil Engineering Structures. Ph.D. Thesis, Aalborg University, Aalborg, Denmark, 1993. [Google Scholar]
  65. Civera, M.; Zanotti Fragonara, L.; Surace, C. An experimental study of the feasibility of phase-based video magnification for damage detection and localisation in operational deflection shapes. Strain 2020, 56, e12336. [Google Scholar]
  66. Liao, T.W. Classification of weld flaws with imbalanced class data. Expert Syst. Appl. 2008, 35, 1041–1052. [Google Scholar]
  67. He, H.; Ma, Y. Imbalanced Learning: Foundations, Algorithms, and Applications; John Wiley & Sons: Hoboken, NJ, USA, 2013. [Google Scholar]
Figure 1. Procedure of automatic welding defect detection system [6].
Figure 2. Flowchart of the review for studies on automatic welding defect detection.
Figure 3. Summary of the proposed segmentation process: (a) image after the application of the median filter. (b) Application of the Bottom-Hat filter. (c) Application of binary thresholding. (d) Application of the sectioning process. (e) Modification of minima. (f) Application of the Watershed transformation [21].
Figure 4. Traditional computer vision technology and deep learning approach comparison.
Figure 5. Impact of class imbalance on classification performance of minority class [67].
Table 1. Main techniques used in the cited papers as a guideline for readers.

| Ref | Base | Pre-Processing | Features (Number; Type) | Feature Selection | Classifier | Results | Evaluation |
|---|---|---|---|---|---|---|---|
| [30] | Line profile | - | 25 profile measurements | - | Fuzzy k-NN; Fuzzy C-means | 6.01%; 18.67% | Missing rate; False alarm |
| [31] | Line profile | - | 3 profile measurements | - | Fuzzy k-NN; Fuzzy C-means | - | False alarm |
| [33] | Line profile | - | 36 profile measurements | - | NN; Decision tree | complex | Generalization; Representation; Quality; Cost; etc. |
| [32] | Line profile | - | 3 profile measurements | - | Fuzzy reasoning | 100% | Accuracy |
| [9] | 2D image | Noise removal; Contrast improvement; Defect segmentation | 12 numeric | - | Fuzzy k-NN; MLP | 92.39% | Bootstrap accuracy |
| [41] | 2D image | Potential defect segmentation | 148 texture | SFS | Polynomial; Mahalanobis; Nearest neighbor | 90.91% | Area under the ROC |
| [35] | 2D image | - | 12 geometric | Filter methods | Fuzzy expert; Fuzzy k-NN; MLP | 0.9205 | Bootstrap accuracy |
| [36] | 2D image | Noise removal; Contrast improvement | 4 geometric | - | Nonlinear pattern classifiers using NN | complex | Classification performance; Relevance criterion; Principal components |
| [40] | 2D image | Noise removal; Enhancement; Segmentation | 8 geometric | - | SVM; Fuzzy NN | 83.3% | Accuracy rate |
| [37] | 2D image | - | 7 geometric | - | Nonlinear classifier | 92% | Bootstrap accuracy |
| [29] | 2D image | Defect segmentation | 43 geometric + texture | SBS | SVM; ANN; k-NN | 98.51% | 3-fold cross-validation accuracy |
| [15] | 2D image | Noise removal; Contrast improvement | 8, 64, 44 texture | - | ANN | 86.1% | Classification accuracy |
| [12] | 2D image | Noise removal; Contrast improvement; Image segmentation | 16 texture; 8 geometric; 72 geometric + texture | - | ANN | 87.34% | Classification accuracy |
| [45] | 1D signal | - | 13 MFCCs + 26 polynomial features | - | ANN | 100% | Recognition rates |
| [6] | 2D image | Noise removal; Contrast improvement; Defect segmentation | 12 geometric | - | ANN; ANFIS | 100% | Classification accuracy |
| [11] | Power density spectra | Image enhancement; Image segmentation | MFCCs + polynomial features | - | ANN | 100% | Probability of detection; False alarm rate |
| [14] | 2D image | Noise removal; Contrast improvement; Image segmentation | Energy of the wavelet coefficients | - | SVM | 99.5% | Classification rate |
| [34] | 2D image | Location of the weld bead region | 8 geometric | - | MLP | 88.6%; 87.5% | Accuracy; F-score |
