Article

Defect Identification of 316L Stainless Steel in Selective Laser Melting Process Based on Deep Learning

School of Mechanical Engineering, Beihua University, Jilin 132000, China
*
Author to whom correspondence should be addressed.
Processes 2024, 12(6), 1054; https://doi.org/10.3390/pr12061054
Submission received: 16 March 2024 / Revised: 21 April 2024 / Accepted: 24 April 2024 / Published: 22 May 2024
(This article belongs to the Special Issue Additive Manufacturing of Materials: Process and Applications)

Abstract

In additive manufacturing, such as Selective Laser Melting (SLM), identifying fabrication defects poses a significant challenge. Existing identification algorithms often struggle to meet the precision requirements for defect detection. To accurately identify small-scale defects in SLM, this paper proposes a deep learning model based on the original YOLOv5 network architecture for enhanced defect identification. Specifically, we integrate a small target identification layer into the network to improve the recognition of minute anomalies like keyholes. Additionally, a similarity attention module (SimAM) is introduced to enhance the model’s sensitivity to channel and spatial features, facilitating the identification of dense target regions. Furthermore, the SPD-Conv module is employed to reduce information loss within the network and enhance the model’s identification rate. During the testing phase, a set of sample images is randomly selected to evaluate the efficacy of the proposed model, utilizing training and test sets derived from a pre-existing defect database. The model’s performance in multi-category recognition is measured using the mean average precision (mAP) metric. Test results demonstrate that the improved YOLOv5 model achieves an mAP of 89.8%, surpassing the mAP of the original YOLOv5 network by 1.7% and outperforming other identification networks in terms of accuracy. Notably, the improved YOLOv5 model exhibits superior capability in identifying small-sized defects.

1. Introduction

Metal additive manufacturing technology has gained widespread application in the design and production of high-performance components across various industries, including aerospace, medical, and automotive, primarily owing to its significant advantages over conventional manufacturing methods for fabricating intricate components [1]. Nonetheless, the performance of Selective Laser Melting (SLM) components is influenced by both surface quality and microstructural characteristics [2]. Critical factors affecting surface quality and microstructure include material properties, design considerations, process parameters, and system settings. Improper process configurations may introduce defects into the final additively manufactured part, such as lack-of-fusion voids and cracks, which significantly compromise service life and pose substantial hazards [3]. Consequently, defect detection is a pivotal aspect in assuring the high quality of additively manufactured parts [4]. Deep learning methods for defect detection offer faster detection speed, lower detection cost, and less demanding detection environments, giving companies engaged in additive manufacturing a clear advantage in reducing personnel skill requirements, equipment costs, and floor space.
Numerous researchers have endeavored to utilize imaging detection methodologies for the identification of defects in additively manufactured components. These techniques encompass X-ray inspection, ultrasonic testing, scanning electron microscopy (SEM), and machine vision inspections, among others. Notably, ultrasonic testing, X-ray testing, and SEM technology impose stringent demands on operators and necessitate specialized training. Ultrasonic testing typically exhibits limited capability in detecting defects at the millimeter scale. In contrast, SEM showcases exceptional resolution and precision in discerning minute defects and microstructures. However, it is characterized by exorbitant equipment costs, intricate operational procedures, and specific sample handling requisites. Ultrasonic and X-ray detection methods are limited to detecting larger defects ranging from tens to hundreds of microns, whereas metallographic microscope technology provides a more practical, cost-effective, and convenient option. However, analyzing the defect type from metallographic microscope images is a more intricate process, and the manual identification of defects is prone to significant recognition errors. Consequently, there is an urgent need to address the challenge of utilizing artificial intelligence for the automated identification of microscopic defects in images.
Deep learning algorithms have found extensive applications in the domain of machine vision inspection, demonstrating commendable outcomes in various tasks. Leveraging vast quantities of annotated image data, these algorithms can be trained to autonomously acquire feature representations, leading to precise defect detection and classification.
Deep learning algorithms can be categorized into two-stage and one-stage approaches for object detection. One-stage algorithms directly perform object detection on the entire image without explicit candidate region generation steps. They employ convolutional neural networks (CNN) to simultaneously predict the classes and bounding boxes of all objects in the image. This method often utilizes dense anchor boxes or default boxes to generate candidate bounding boxes, which are then predicted and filtered through classifiers and regressors. Well-known one-stage algorithms include the YOLO (You Only Look Once) series and SSD (Single Shot MultiBox Detector). On the other hand, two-stage algorithms divide the object detection task into two stages. Initially, they employ region proposal methods like Selective Search and EdgeBoxes to generate a set of candidate regions that potentially contain objects. Then, for each candidate region, a CNN is employed for feature extraction and classification to determine the presence of an object and accurately localize it. Common two-stage algorithms encompass R-CNN, Fast R-CNN, and Faster R-CNN [5]. While two-stage algorithms offer enhanced accuracy, they require additional candidate region generation steps, resulting in slower computation. Conversely, one-stage algorithms are faster but may exhibit slight limitations in detecting small-sized or densely packed objects.
Xu et al. employed the Faster R-CNN algorithm to identify damage defects occurring on the inner surface of hydraulic cylinders within hydraulic brackets [6]. Meanwhile, Liu et al. utilized the Faster R-CNN ResNet50 neural network model to detect a range of defects present on aero-engine blades, including cracks, wrinkles, pockmarks, scratches, polishing traces, localized chromatic aberrations, and coating detachments. They also confirmed that the model achieved remarkable recognition accuracy [7]. The R-CNN family of algorithms offers significant advantages in terms of detecting accuracy; however, its limitation lies in the relatively slower detection speed.
The YOLO algorithm, as a representative of single-stage algorithms, directly feeds the entire image into the model network. This yields output results containing classification categories and bounding box positions, enabling the extraction of all features within the image and the prediction of all objects [8,9]. Among the YOLO family, the YOLOv5 algorithm demonstrates superior overall recognition capability compared to other YOLO models, owing to its enhanced accuracy and detection precision [10,11]. In the field of metal defect detection, numerous researchers have applied the YOLOv5 model or its variants [12,13]. For instance, Lu et al. employed the YOLOv5 model to detect surface defects on aluminum profiles and successfully identified various defect types [14]. Xiao et al., on the other hand, amalgamated the original YOLOv5 network model with transformer structures and a BiFPN (Bi-directional Feature Pyramid Network) to amplify the recognition capability for target defects, creating a defect recognition model applicable to galvanized steel [15]. Zhao et al. addressed issues observed in existing defect detection methods, such as substantial model parameters and low detection rates. Their proposed solution involved devising a shallow feature-enhancement module and implementing a coordinate attention mechanism within the bottleneck structure of MobileNetV2. This led to the development of a smaller Bi-directional Feature Pyramid Network (BiFPN-S). Compared to the original YOLOv5, this model demonstrated a 61.5% reduction in model parameters, a 28.7% increase in detection speed, and a 2.4% improvement in recognition accuracy [16].
Metal parts manufactured using Selective Laser Melting (SLM) may exhibit various internal defects of different types and sizes, necessitating a high defect recognition rate [17]. YOLOv5, as a single-stage object detection algorithm characterized by its fast detection speed, is well suited for real-time applications. YOLOv5 incorporates multiple optimization strategies, such as network structure improvements and data augmentation, to achieve accurate defect detection [18]. By employing feature fusion across different scales, YOLOv5 effectively handles defects of varying sizes and scales. However, compared to certain two-stage methods, YOLOv5 may face challenges in detecting small-sized defects, such as small-sized holes. This is primarily due to the employment of dense anchor boxes in single-stage methods for target detection, which may overlook or struggle to accurately detect smaller defects. To address this limitation and leverage the real-time nature of the YOLOv5 model, we introduce enhancements to the original network model. Our improvement approach involves adding an upsampling layer in the network model’s neck to extract features related to small-sized targets, thereby enhancing the model’s ability to perceive and detect such targets. Additionally, we introduce a similarity attention module (SimAM) to enrich the representation capability of small-sized target defect features. This module strengthens the model’s localization and detection ability for small target defects, thereby enhancing the robustness in complex scenes. Furthermore, we introduce the SPD-Conv module, which combines a space-to-depth transformation with non-strided convolution to retain fine-grained spatial information that strided convolution and pooling would otherwise discard. By propagating features from different scales, the SPD-Conv module achieves effective feature fusion, enabling the model to better utilize feature information from various levels and enhance its perception of small target defects. It is anticipated that the improved YOLOv5 model will enhance the detection capability of small target defects, reducing missed detections and false alarms, enhancing detail perception and multi-scale feature fusion, and ultimately improving the accuracy of detecting small-scale defects.

2. Materials and Methods

2.1. Samples Used in the Experiments

316L stainless steel samples were prepared using the Selective Laser Melting (SLM) process. The samples were produced on an ISLM150 printer manufactured by Zhongrui Technology (Suzhou, Jiangsu Province, China), which incorporates a 200 W fiber laser operating at a wavelength of 1064 nm with a spot diameter adjustable from 0.04 mm to 0.15 mm; the beam diameter was set to 90 μm during printing. The process parameters used to fabricate the parts are listed in Table 1. Energy density, expressed as E = laser power/(scanning speed × layer thickness) (J/mm³) [19], is recognized as a critical factor influencing quality. However, this measure does not account for the impact of parameters such as spot diameter on defect generation. Hence, this paper employs a comprehensive analytical model to ascertain the process parameters that result in defects [20]. The samples were subjected to quality inspection, and images were captured using an RX-200 metallurgical microscope (Yuanxing Optical Instrument Co., Ltd., Dongguan, Guangdong Province, China) at a magnification of 200× and a resolution of 640 pixels × 480 pixels, which provides sufficient information about the defects.
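For orientation only: the stated unit (J/mm³) corresponds to the volumetric form of energy density, which conventionally also divides by the hatch distance. Under that assumption (not stated explicitly above), the Table 1 parameters would give roughly

$$ E = \frac{P}{v \cdot h \cdot t} = \frac{200\ \mathrm{W}}{1000\ \mathrm{mm/s} \times 0.12\ \mathrm{mm} \times 0.05\ \mathrm{mm}} \approx 33.3\ \mathrm{J/mm^3}. $$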

2.2. Dataset Production

To enhance the effectiveness and diversity of the training samples, a pre-screening process was conducted on the collected image data prior to training. A total of 436 defective images were selected and stored in JPG format, with the image resolution maintained at 640 pixels × 480 pixels after processing. The dataset was then augmented using various methods, including adaptive contrast, rotation, translation, and cropping, resulting in an expanded dataset of 1451 images [21]. This dataset contains three labeled classes: lack of fusion (LOF), unmelted powder, and keyhole, as illustrated in Figure 1. The labeling software used to annotate the ground-truth bounding boxes and categories was LabelImg [22]. Subsequently, all augmented images were divided into training, validation, and testing sets in a ratio of 7:2:1: the training set contained 1061 images, the validation set 290 images, and the testing set 145 images.
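As an illustration of the split described above, the following is a minimal sketch of a 7:2:1 partition of the augmented image folder into YOLO-style file lists. The directory layout, file naming, and random seed are assumptions made for the example, not details taken from the paper.

```python
import random
from pathlib import Path

# Hypothetical layout: every augmented image in dataset/images has a matching
# LabelImg annotation of the same stem in dataset/labels.
random.seed(0)
images = sorted(Path("dataset/images").glob("*.jpg"))
random.shuffle(images)

# 7:2:1 split into training, validation, and testing sets.
n = len(images)
n_train, n_val = int(0.7 * n), int(0.2 * n)
splits = {
    "train": images[:n_train],
    "val": images[n_train:n_train + n_val],
    "test": images[n_train + n_val:],
}

# Write one image path per line, the list format YOLOv5 accepts in its data YAML.
for name, files in splits.items():
    Path(f"{name}.txt").write_text("\n".join(str(p) for p in files) + "\n")
```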

2.3. Experimental Equipment

Training of this model was conducted on a Windows 11 operating system using the PyTorch framework. The CPU of the testing device is an AMD Ryzen 7 5700X 8-core processor @ 3.40 GHz, and the GPU is a GeForce RTX 2060 S with 8 GB of memory. The software environment consists of CUDA 10.2, cuDNN 7.6.5, and Python 3.6. Detailed parameters are provided in Table 2.

2.4. SLM Defect Detection of 316L Material Based on YOLOv5

2.4.1. Small Target Detection Layer

Small-sized defects such as keyholes are present in the cross-sectional images of the printed samples, and the detection model must be able to identify these small targets. In the original YOLOv5 network structure, the deep feature maps are small and the downsampling ratio is large, so the deep feature maps struggle to learn the features of small targets, leading to the omission of small-sized defects. To solve this problem, this paper adds a small-target detection layer to the original YOLOv5 detection head.
As shown in Figure 2, the feature map of 80 × 80 size is upsampled at the neck to obtain the feature map of 160 × 160 size so that the feature map retains more information about small targets. In the 21st layer, the 160 × 160 size feature maps collected are spliced with the feature maps of the third layer in the backbone network, and the resulting feature maps are used for small-target detection.
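The extra branch can be pictured as an upsample–concatenate–predict path. The PyTorch sketch below is only illustrative: in practice, YOLOv5 expresses this in its model YAML and inserts C3 blocks after each concatenation, and the channel counts used here are assumptions rather than values from the paper.

```python
import torch
import torch.nn as nn

class SmallTargetBranch(nn.Module):
    """Illustrative extra detection branch: upsample the 80x80 neck feature to
    160x160, splice it with the layer-3 backbone feature, and predict on the
    fused 160x160 map (3 anchors x (4 box + 1 obj + 3 defect classes) = 24 ch)."""

    def __init__(self, c_neck=128, c_backbone=128, c_fused=128, num_classes=3):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="nearest")       # 80x80 -> 160x160
        self.fuse = nn.Conv2d(c_neck + c_backbone, c_fused, 1)      # stand-in for a C3 block
        self.head = nn.Conv2d(c_fused, 3 * (5 + num_classes), 1)    # extra prediction layer

    def forward(self, p3_neck, p2_backbone):
        x = self.up(p3_neck)                     # bring the neck feature to 160x160
        x = torch.cat([x, p2_backbone], dim=1)   # splice with the backbone layer-3 feature
        return self.head(self.fuse(x))           # 160x160 map dedicated to small targets
```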

2.4.2. Similarity Attention Module

To suppress the interference caused by extensive irrelevant information in defect images, this study incorporates a parameter-free similarity attention module (SimAM) that derives full three-dimensional attention weights, as shown in Figure 3. This mechanism enables better focus on small targets, increases the weight of informative image features, and reduces sensitivity to irrelevant information. This helps reduce errors and unnecessary computational costs caused by excessive invalid or redundant information, thereby improving the computational efficiency and generalization capability of the model. Unlike attention mechanisms that rely on additional learnable parameters, SimAM evaluates feature weights comprehensively and efficiently, enhancing defect features and optimizing defect target representations.
SimAM (similarity attention module) is an attention mechanism used to enhance a deep learning model’s attention and responsiveness to key regions. It adaptively adjusts the attention weight at each position in the feature map by computing similarity-based weights: for each position, a weight is derived that reflects how distinct that position is from the rest of the feature map, the weights are normalized, and they are then applied element-wise to the feature map to obtain the final attention-refined output. The weight at each position therefore determines its contribution to the final feature map. By adaptively re-weighting each position, SimAM can better capture important information, thereby improving the performance and accuracy of the model. Concretely, SimAM assigns a weight to each neuron in the defect feature map using an energy function that measures the linear separability between the target neuron and the other neurons in the same channel. The minimal energy of target neuron t is given by Equation (1).
$$ e_t^* = \frac{4(\hat{\sigma}^2 + \lambda)}{(t - \hat{\mu})^2 + 2\hat{\sigma}^2 + 2\lambda} \tag{1} $$
where $\hat{\mu} = \frac{1}{M}\sum_{i=1}^{M} x_i$ and $\hat{\sigma}^2 = \frac{1}{M}\sum_{i=1}^{M} (x_i - \hat{\mu})^2$; $\lambda$ is a regularization term, $i$ indexes the spatial dimension, $M = H \times W$ is the number of neurons on that channel, and $t$ and $x_i$ are the target neuron and the other neurons in a single channel of the input feature $X \in \mathbb{R}^{C \times H \times W}$.
The smaller the value of $e_t^*$, the more separable the target neuron is from the other neurons in the defect feature map and the more important that neuron is; the weight of each neuron on the feature map is therefore taken as $1/e_t^*$. The refined feature map $\tilde{X}$ is given by Equation (2), where $E$ groups all $e_t^*$ values across the channel and spatial dimensions of the defect feature map [1].
$$ \tilde{X} = \mathrm{sigmoid}\left(\frac{1}{E}\right) \odot X \tag{2} $$
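A minimal PyTorch sketch of a parameter-free SimAM block following Equations (1) and (2) is given below. The value of λ and the use of M − 1 in the variance estimate follow the commonly used reference implementation and are assumptions rather than details reported in this paper.

```python
import torch
import torch.nn as nn

class SimAM(nn.Module):
    """Parameter-free similarity attention: weight each neuron by the inverse of
    its energy e_t* (Eq. (1)) and gate the input with sigmoid(1/E) (Eq. (2))."""

    def __init__(self, lambda_=1e-4):
        super().__init__()
        self.lambda_ = lambda_  # regularization term lambda in Eq. (1)

    def forward(self, x):
        # x: (B, C, H, W); statistics are computed per channel over H x W
        _, _, h, w = x.shape
        m = h * w - 1
        # (t - mu)^2 for every neuron in the channel
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
        # channel variance estimate sigma^2
        v = d.sum(dim=(2, 3), keepdim=True) / m
        # inverse energy 1/e_t*: low energy -> distinctive neuron -> large weight
        e_inv = d / (4 * (v + self.lambda_)) + 0.5
        # Eq. (2): element-wise gating of the input feature map
        return x * torch.sigmoid(e_inv)
```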

2.4.3. SPD-Conv

When the depth of the YOLOv5 network increases, the feature maps are processed through multiple convolutional and pooling layers, with each layer performing feature extraction and abstraction on the input data. However, this process may result in information loss.
First, the convolutional kernels in deep networks are typically small, meaning that each convolutional layer can only perceive a limited local receptive field. While stacking multiple convolutional layers can expand the receptive field, this local receptive field processing may lead to the loss of some detailed information.
Second, deep networks perform downsampling (e.g., pooling operations) between each layer to reduce the size of the feature maps. This downsampling reduces the spatial resolution of the feature maps, resulting in the loss of detailed information in the defect regions. As the network depth increases, this loss of spatial resolution becomes more pronounced.
Furthermore, as the YOLOv5 network becomes deeper, the issues of vanishing or exploding gradients may become more severe. This can hinder convergence during training or lead to unstable training, affecting the model’s ability to learn defect regions.
To address these challenges, this paper introduces the SPD-Conv module to reduce information loss in the network and improve the detection accuracy for small-sized defects.
In the SPD-Conv module, spatially separable and depth-wise separable convolutions split the standard convolution into depth-wise and point-wise stages. The depth-wise convolution operates on each channel independently, capturing spatial relationships within the channel, while the point-wise convolution mixes information across channels at each spatial position. This significantly reduces the number of parameters, saves computational resources, and improves the efficiency of the network. Additionally, the SPD-Conv module utilizes 1 × 3 and 3 × 1 convolution kernels of different sizes to enhance the network’s perception capability for defects with different aspect ratios. This helps the network better understand the shape and structure of defects, thereby improving the detection accuracy for defects of different sizes. The structure of the introduced SPD-Conv is shown in Figure 4.
From Figure 4, it can be observed that the introduced SPD-Conv structure consists of a space-to-depth (SPD) layer followed by a non-strided convolution (Conv) layer. In the SPD step, a feature map X of size S × S × C1 is divided into sub-maps f(x,y), each obtained by sampling X at every second pixel starting from offset (x, y), so each sub-map is a proportionally downsampled copy of X. Choosing a scale factor of 2 helps extract higher-level information and effectively reduces the computational workload, improving the operational efficiency of the network. Consequently, each sub-map has a size of S/2 × S/2 × C1. The four sub-maps are then concatenated along the channel dimension to obtain a feature map X′ of size S/2 × S/2 × 4C1. In the Conv part, the SPD-Conv module uses a convolution layer with a stride of 1 to reshape X′ to S/2 × S/2 × C2, where C2 < 4C1, in order to retain as much crucial information as possible.
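The space-to-depth rearrangement and the stride-1 convolution described above can be sketched as follows. The 3 × 3 kernel, the SiLU activation, and the assumption that the spatial dimensions are even are illustrative choices, not specifics taken from the paper.

```python
import torch
import torch.nn as nn

class SPDConv(nn.Module):
    """Space-to-depth (scale 2) followed by a non-strided convolution (Figure 4 sketch)."""

    def __init__(self, c1, c2, k=3):
        super().__init__()
        # after space-to-depth the channel count becomes 4 * c1
        self.conv = nn.Conv2d(4 * c1, c2, k, stride=1, padding=k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = nn.SiLU()

    def forward(self, x):
        # split X (S x S x C1, S assumed even) into the four sub-maps f(0,0), f(1,0),
        # f(0,1), f(1,1), each S/2 x S/2 x C1, and concatenate along the channel axis
        x = torch.cat(
            [x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]],
            dim=1,
        )  # -> S/2 x S/2 x 4*C1
        # stride-1 convolution reshapes the channels to C2 without discarding pixels
        return self.act(self.bn(self.conv(x)))
```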
By incorporating the SPD-Conv module into the backbone and neck networks of an enhanced YOLOv5 network model, on top of adding a small object detection layer, a novel improved YOLOv5 network model is constructed. The overall framework of the entire model is illustrated in Figure 5.

3. Results and Discussion

3.1. Model Evaluation Index

In order to evaluate model performance, this paper uses standard evaluation metrics for target detection models, including precision, recall, average precision (AP), and mean average precision (mAP). Because there is a trade-off between precision and recall (high precision is typically accompanied by low recall, and vice versa), using precision or recall alone cannot accurately reflect the actual performance of the model. To address this issue, precision and recall are analyzed jointly through the concept of average precision (AP). The model’s predictions are sorted by confidence, the precision P and recall R are calculated at each confidence threshold, and the P-R curve is drawn; integrating the area under this curve yields the AP. The mean average precision (mAP), the average of the AP values over all categories, reflects the model’s detection performance across categories. In this paper, mAP is chosen as the main evaluation criterion to comprehensively assess the model’s performance on different categories [23].
$$ P = \frac{TP}{TP + FP} \tag{3} $$
$$ R = \frac{TP}{TP + FN} \tag{4} $$
$$ AP = \int_0^1 P(R)\,\mathrm{d}R \tag{5} $$
$$ mAP = \frac{1}{n}\sum_{k=1}^{n} AP_k \tag{6} $$
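As a simplified sketch of Equations (3)–(6), the snippet below computes AP from a list of scored detections by plain trapezoidal integration of the P–R curve (some benchmarks instead use an interpolated AP); the function signatures are hypothetical.

```python
import numpy as np

def average_precision(scores, is_tp, n_gt):
    """Sort detections by confidence, trace the P-R curve, and integrate it (Eqs. (3)-(5)).
    scores: detection confidences; is_tp: 1 for true positives, 0 for false positives;
    n_gt: number of ground-truth boxes for the class."""
    order = np.argsort(-np.asarray(scores))
    hits = np.asarray(is_tp, dtype=float)[order]
    tp = np.cumsum(hits)
    fp = np.cumsum(1.0 - hits)
    recall = tp / n_gt                  # Eq. (4)
    precision = tp / (tp + fp)          # Eq. (3)
    return np.trapz(precision, recall)  # area under P(R), Eq. (5)

def mean_average_precision(ap_per_class):
    """Eq. (6): mAP is the mean of the per-class AP values."""
    return float(np.mean(ap_per_class))
```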

3.2. Results and Analysis

The training and validation loss curves of the improved model, together with a comparison of mAP on the dataset between the improved and original models, are shown in Figure 6; both models converge within 150 steps. The curves support the following conclusions. First, during training, the improved YOLOv5 model converges faster: for the same number of training batches on the x-axis, its loss value is lower, indicating that the improved YOLOv5 model has higher learning efficiency and can reach better performance in a shorter training time. Second, the lower validation loss suggests that the improved model generalizes better. The highest precision of the original model reaches 92.1%, while the improved model attains 96.3%.
This article introduces an improved YOLOv5 model that includes three enhancements. First, a small-target detection layer is incorporated to enhance the detection capability for smaller targets. Second, the SimAM is employed to enhance the model’s resistance to interference. Last, SPD-Conv is used to reduce the loss of detailed information associated with deepening the network. Ablation experiments were conducted to further verify the impact of these improvements on the network. Starting from the original model, modules were added incrementally, yielding four experimental configurations: the original YOLOv5 model; YOLOv5 + small-target detection layer; YOLOv5 + small-target detection layer + SimAM; and YOLOv5 + small-target detection layer + SimAM + SPD-Conv. Table 3 displays the results of these ablation experiments.
Compared to the original network, the improved YOLOv5 model shows a 4.2% increase in precision, a 1.5% increase in recall, and a 1.7% increase in mAP, while the model size increases by 44.8 MB.
Table 4 below demonstrates a more significant improvement in the model’s recognition of the smallest-sized keyhole defect, as well as some improvement in the recognition of the other two defects.
Figure 7 compares the results of the original YOLOv5 and the improved YOLOv5 for detecting defects in SLM. The red boxes in the figure represent LOF defects, the pink boxes represent unmelted powder, and the yellow boxes represent keyholes. As can be seen from the figure, the improved YOLOv5 model has a much lower missed-detection rate for small-sized defects, and the target confidence increases accordingly. In backgrounds with interference, the original YOLOv5 is weak at extracting features and cannot accurately predict defect targets. The detection performance of the improved YOLOv5 model is significantly better than that of the original model: a large number of small targets are detected with high accuracy, and detection of small defects is notably better.

3.3. Performance Comparison of Different Models

To better validate the performance of the improved SLM defect detection model, this paper uses the same additive manufacturing defect dataset and parameter settings to test the original YOLOv5, Faster R-CNN, and SSD300 models and compare the results. Precision, recall, mAP, model size, and detection time are used as indicators to evaluate model performance.
As can be seen from the data in Table 5, the detection accuracy of the improved YOLOv5 model has increased by 4.2% compared to the original YOLOv5 model, and is higher than other detection models, with an increase of 1.7% in mAP. The experiments have proven that introducing a small target detection layer, incorporating the SimAM into the original YOLOv5 model, and adding the SPD-Conv module all contribute to enhancing the performance of SLM defect detection.

4. Conclusions and Future Research

4.1. Conclusions

In this paper, we propose a deep learning model based on the original YOLOv5 network architecture, which includes three structural improvements to enhance the identification of defects in stainless steel in additive manufacturing.
First, a dedicated small-target recognition layer was introduced in the prediction head of the YOLOv5 model to improve the recognition of small anomalies such as keyholes. Second, we integrated the SimAM for feature fusion to improve the expressiveness and robustness of the network. Finally, we adopted an SPD-Conv module in the model to reduce information loss within the network and enhance the model’s sensitivity to channel and spatial features. Data augmentation techniques such as rotation, flipping, scaling, and distortion were used to create additional training data, and the improved YOLOv5 model was then used to detect defects in the samples.
The experimental results confirm the significant progress of the improved YOLOv5 model in defect detection. Comparisons with the original YOLOv5 model and alternative methods also confirm that the improved model achieves a higher detection rate. Specifically, our improved model achieved a detection mAP of 89.8%, exceeding the other methods compared.

4.2. Future Research

Nevertheless, it is important to acknowledge certain limitations of the improved YOLOv5 model. In particular, the model encounters challenges in accurately detecting defects located at edge regions. Additionally, although our model demonstrates improved detection accuracy, there remains scope for further refinement. Future research could explore techniques to improve detection accuracy further, such as increasing the sample resolution or employing more lightweight network architectures.

Author Contributions

Conceptualization, X.G.; Formal analysis, W.Y.; Investigation, X.G. and J.H.; Data curation, J.H.; Writing—original draft, W.Y.; Project administration, X.G.; Funding acquisition, X.G. All authors have read and agreed to the published version of the manuscript.

Funding

This study received funding from the Jilin Provincial Department of Education Science and Technology Research project, China (Project Number: JJKH20220046KJ).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Frazier, W.E. Metal additive manufacturing: A review. J. Mater. Eng. Perform. 2014, 23, 1917–1928. [Google Scholar] [CrossRef]
  2. Lewandowski, J.J.; Seifi, M. Metal Additive Manufacturing: A Review of Mechanical Properties. Annu. Rev. Mater. Res. 2016, 46, 151–186. [Google Scholar] [CrossRef]
  3. Haribaskar, R.; Kumar, T.S. Defects in Metal Additive Manufacturing: Formation, Process Parameters, Postprocessing, Challenges, Economic Aspects, and Future Research Directions. 3D Print. Addit. Manuf. 2023, 9, 165–183. [Google Scholar] [CrossRef]
  4. Fu, Y.; Downey, A.R.; Yuan, L.; Zhang, T.; Pratt, A.; Balogun, Y. Machine learning algorithms for defect detection in metal laser-based additive manufacturing: A review. J. Manuf. Process. 2022, 75, 693–710. [Google Scholar] [CrossRef]
  5. Zhang, L.; Fu, Z.; Guo, H.; Sun, Y.; Li, X.; Xu, M. Multiscale Local and Global Feature Fusion for the Detection of Steel Surface Defects. Electronics 2023, 12, 3090. [Google Scholar] [CrossRef]
  6. Zhang, X.; Zhou, Q.; Wu, C.; Pan, J.; Qi, H. Improved faster-RCNN-based inspection of hydraulic cylinder internal surface defects. In Proceedings of the 3rd International Conference on Artificial Intelligence, Automation, and High-Performance Computing (AIAHPC 2023), Wuhan, China, 31 March–2 April 2023; SPIE: Bellingham, WA, USA, 2023; Volume 12717, pp. 201–208. [Google Scholar]
  7. Yixuan, L.; Dongbo, W.; Jiawei, L.; Hui, W. Aeroengine Blade Surface Defect Detection System Based on Improved Faster RCNN. Int. J. Intell. Syst. 2023, 2023, 1992415. [Google Scholar] [CrossRef]
  8. Khalfaoui, A.; Badri, A.; Mourabit, I.E.L. Comparative study of YOLOv3 and YOLOv5’s performances for real-time person detection. In Proceedings of the 2nd International Conference on Innovative Research in Applied Science, Engineering and Technology (IRASET), Meknes, Morocco, 3–4 March 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–5. [Google Scholar]
  9. Li, X.; Wang, C.; Ju, H.; Li, Z. Surface Defect Detection Model for Aero-Engine Components Based on Improved YOLOv5. Appl. Sci. 2022, 12, 7235. [Google Scholar] [CrossRef]
  10. Inbar, O.; Shahar, M.; Gidron, J.; Cohen, I.; Menashe, O.; Avisar, D. Analyzing the secondary wastewater-treatment process using Faster R-CNN and YOLOv5 object detection algorithms. J. Clean. Prod. 2023, 416, 137913. [Google Scholar] [CrossRef]
  11. Thuan, D. Evolution of Yolo Algorithm and Yolov5: The State-of-the-Art Object Detection Algorithm. Bachelor’s Thesis, Oulu University of Applied Sciences, Oulu, Finland, 2021. [Google Scholar]
  12. Cherkasov, N.; Ivanov, M.; Ulanov, A. Weld Surface Defect Detection Based on a Laser Scanning System and YOLOv5. In Proceedings of the 2023 International Conference on Industrial Engineering, Applications and Manufacturing (ICIEAM), Sochi, Russia, 15–19 May 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 851–855. [Google Scholar]
  13. Wang, K.; Teng, Z.; Zou, T. Metal defect detection based on YOLOv5. J. Phys. Conf. Ser. 2022, 2218, 012050. [Google Scholar] [CrossRef]
  14. Lu, D.; Muhetae, K. Research on algorithm of surface defect detection of aluminum profile based on YOLO. In Proceedings of the International Conference on Image, Signal Processing, and Pattern Recognition (ISPP 2023), Changsha, China, 24–26 February 2023; SPIE: Bellingham, WA, USA, 2023; Volume 12707, pp. 317–323. [Google Scholar]
  15. Xiao, D.; Xie, F.T.; Gao, Y.; Ni Li, Z.; Xie, H.F. A detection method of spangle defects on zinc-coated steel surfaces based on improved YOLO-v5. Int. J. Adv. Manuf. Technol. 2023, 128, 937–951. [Google Scholar] [CrossRef]
  16. Zhao, H.; Wan, F.; Lei, G.; Xiong, Y.; Xu, L.; Xu, C.; Zhou, W. LSD-YOLOv5: A Steel Strip Surface Defect Detection Algorithm Based on Lightweight Network and Enhanced Feature Fusion Mode. Sensors 2023, 23, 6558. [Google Scholar] [CrossRef] [PubMed]
  17. Brennan, M.C.; Keist, J.S.; Palmer, T.A. Defects in Metal Additive Manufacturing Processes. J. Mater. Eng. Perform. 2021, 30, 4808–4818. [Google Scholar] [CrossRef]
  18. Hossain, S.; Anzum, H.; Akhter, S. Comparison of YOLO (v3, v5) and MobileNet-SSD (v1, v2) for Person Identification Using Ear-Biometrics. Int. J. Comput. Digit. Syst. 2023, 15, 1259–1271. [Google Scholar] [CrossRef] [PubMed]
  19. Yusuf, S.M.; Gao, N. Influence of energy density on metallurgy and properties in metal additive manufacturing. Mater. Sci. Technol. 2017, 33, 1269–1289. [Google Scholar] [CrossRef]
  20. Sabzi, H.E.; Maeng, S.; Liang, X.; Simonelli, M.; Aboulkhair, N.T.; Rivera-Díaz-del-Castillo, P.E.J. Controlling crack formation and porosity in laser powder bed fusion: Alloy design and process optimization. Addit. Manuf. 2020, 34, 101360. [Google Scholar] [CrossRef]
  21. Shorten, C.; Khoshgoftaar, T.M. A survey on Image Data Augmentation for Deep Learning. J. Big Data 2019, 6, 60. [Google Scholar] [CrossRef]
  22. Vostrikov, A.; Chernyshev, S. Training sample generation software. In Intelligent Decision Technologies 2019, Proceedings of the 11th KES International Conference on Intelligent Decision Technologies (KES-IDT 2019), St. Julian’s, Malta, 17–19 June 2019; Springer: Singapore, 2019; Volume 2, pp. 145–151. [Google Scholar]
  23. Zaidi, S.S.A.; Ansari, M.S.; Aslam, A.; Kanwal, N.; Asghar, M.; Lee, B. A survey of modern deep learning based object detection models. Digit. Signal Process. 2022, 126, 103514. [Google Scholar] [CrossRef]
Figure 1. Defect-type labeling (The green box stands for LOF (Lack of Fusion), the yellow box stands for unmelted powder, the blue box stands for keyhole).
Figure 2. Model structure for adding a small-target detection layer.
Figure 3. Full 3-D weights for attention.
Figure 4. SPD-Conv structure.
Figure 5. SLM defect detection of 316L material based on YOLOv5.
Figure 6. Results of the improved YOLOv5: (a) the training and validation loss of the improved model; (b) the mAP@0.5 of the model before and after improvement.
Figure 7. Comparison of YOLOv5 model recognition effects. (a–d) Recognition effect of the original YOLOv5; (e–h) recognition effect of the improved YOLOv5.
Table 1. Process conditions for selective laser melting.

| Powder Size (μm) | Power (W) | Scanning Speed (mm/s) | Layer Thickness (μm) | Hatch Distance (μm) |
|---|---|---|---|---|
| 50–150 | 200 | 1000 | 50 | 120 |
Table 2. Test environment settings and parameters.

| Parameter | Configuration |
|---|---|
| Operating system | Windows 11 |
| Deep learning framework | PyTorch 1.10 |
| Programming language | Python 3.6 |
| GPU-accelerated environment | CUDA 10.2 |
| GPU | GeForce RTX 2060 S (8 GB) |
| CPU | AMD Ryzen 7 5700X 8-Core Processor @ 3.40 GHz |
Table 3. Results of ablation experiments.

| Experiment No. | Precision (%) | Recall (%) | mAP@0.5 (%) | Model Size (MB) |
|---|---|---|---|---|
| 1 | 92.1 | 93.1 | 88.1 | 65.9 |
| 2 | 94.3 | 93.7 | 88.9 | 92.2 |
| 3 | 95.6 | 94.1 | 89.4 | 92.2 |
| 4 | 96.3 | 94.6 | 89.8 | 110.7 |
Table 4. Comparison between the original model and the proposed model.

| Category | YOLOv5 | Proposed YOLOv5 |
|---|---|---|
| LOF | 90.4 | 91.1 |
| Unmelted powder | 89.6 | 90.2 |
| Keyhole | 83.2 | 86.3 |
Table 5. Comparison of the performance of the model proposed in this paper with other advanced models.

| Model Name | P (%) | R (%) | mAP@0.5 (%) | Model Size (MB) | Detect Time (ms) |
|---|---|---|---|---|---|
| Faster R-CNN | 86.4 | 88.7 | 80.9 | 108 | 119.7 |
| SSD300 | 68.6 | 75.4 | 70.4 | 93.1 | 74.3 |
| YOLOv5 | 92.1 | 93.1 | 88.1 | 65.9 | 27.1 |
| Proposed YOLOv5 | 96.3 | 94.6 | 89.8 | 110.7 | 34.7 |