An Enhanced U-Net Approach for Segmentation of Aeroengine Hollow Turbine Blade
Abstract
1. Introduction
2. Materials and Methods
2.1. Preprocessing of Training Data
2.1.1. Data Source
2.1.2. Data Augmentation
- Translation Transformation
- Mirror Transformation
- Rotation Transformation
- Scaling Transformation
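The four augmentation transforms listed above can be sketched on small nested-list "images" as follows. This is a minimal pure-Python illustration only: the paper's actual offsets, angles, and scale factors are not restated here, and rotation is restricted to 90° steps for simplicity (the actual pipeline may use arbitrary angles with interpolation).

```python
def translate(img, dx, dy, fill=0):
    """Shift an image by (dx, dy), padding vacated pixels with `fill`."""
    h, w = len(img), len(img[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                out[ny][nx] = img[y][x]
    return out

def mirror(img):
    """Horizontal mirror (left-right flip)."""
    return [row[::-1] for row in img]

def rotate90(img):
    """Rotate 90 degrees clockwise."""
    return [list(r) for r in zip(*img[::-1])]

def scale(img, factor):
    """Nearest-neighbor scaling by a positive factor."""
    h, w = len(img), len(img[0])
    nh, nw = int(h * factor), int(w * factor)
    return [[img[int(y / factor)][int(x / factor)] for x in range(nw)]
            for y in range(nh)]
```

When the mask and slice are augmented, the same transform must be applied to both so that pixel labels stay aligned.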
2.2. Architecture of the Enhanced U-Net
2.3. Training
2.3.1. Training Data Set
2.3.2. Training Parameters
2.4. Performance Evaluation
3. Results and Discussion
3.1. Segmentation Results
3.2. Analysis of the Proposed Architecture
3.2.1. Ablation Experiments
3.2.2. Processing Time
3.2.3. Robustness of the Proposed Architecture
3.3. Limitations
4. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
| Methods \ Samples | 1#102 | 1#111 | 1#113 | 2#740 | 2#741 | 2#743 |
|---|---|---|---|---|---|---|
| ARKFCM | winSize = 45; cNum = 2; opt = 'average' | winSize = 31; cNum = 2; opt = 'median' | winSize = 41; cNum = 2; opt = 'median' | winSize = 27; cNum = 2; opt = 'weighted' | winSize = 15; cNum = 2; opt = 'weighted' | winSize = 16; cNum = 2; opt = 'weighted' |
| DRLSE | sigma = 45; iter_outer = 60; iter_inner = 10; timestep = 0.1; c0 = 1; mu = 1 | sigma = 35; iter_outer = 40; iter_inner = 10; timestep = 0.1; c0 = 1; mu = 0.1 | sigma = 35; iter_outer = 50; iter_inner = 10; timestep = 0.1; c0 = 1; mu = 1 | sigma = 21; iter_outer = 26; iter_inner = 10; timestep = 0.1; c0 = 1; mu = 1 | sigma = 21; iter_outer = 26; iter_inner = 10; timestep = 0.1; c0 = 1; mu = 1 | sigma = 21; iter_outer = 26; iter_inner = 10; timestep = 0.1; c0 = 1; mu = 1 |
| EM/MPM | regions = 2; steps = 10; mpmSteps = 1; coolMax = 1.2; coolInc = 0.25 | regions = 2; steps = 16; mpmSteps = 10; coolMax = 1.2; coolInc = 0.025 | regions = 2; steps = 18; mpmSteps = 1; coolMax = 1.2; coolInc = 0.025 | regions = 2; steps = 1; mpmSteps = 1; coolMax = 1.2; coolInc = 0.025 | regions = 2; steps = 1; mpmSteps = 5; coolMax = 1.2; coolInc = 0.025 | regions = 2; steps = 10; mpmSteps = 8; coolMax = 1.2; coolInc = 0.025 |
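Unlike the baselines tuned per sample above, Otsu's method (the OTSU rows in the results tables) is parameter-free: it selects the threshold that maximizes the between-class variance of the grayscale histogram. A minimal pure-Python sketch, for illustration only (the experiments presumably used an optimized library implementation):

```python
def otsu_threshold(pixels, levels=256):
    """Return the gray level maximizing between-class variance.

    `pixels` is a flat iterable of integer gray values in [0, levels).
    Pixels <= the returned threshold are background, > are foreground.
    """
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))  # sum of all gray values
    w_bg, sum_bg = 0, 0          # running background weight and gray-value sum
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a cleanly bimodal histogram the maximizer separates the two modes, which is why Otsu performs reasonably on high-contrast CT slices but degrades where wall and cavity intensities overlap.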
| Evaluation Index | Methods | 1#102 | 1#111 | 1#113 | 2#740 | 2#741 | 2#743 |
|---|---|---|---|---|---|---|---|
| BF Score | ARKFCM | 0.9890 | 0.9889 | 0.9876 | 0.9801 | 0.9804 | 0.9863 |
| | CMF | 0.9403 | 0.9096 | 0.9092 | 0.8169 | 0.8127 | 0.8585 |
| | DRLSE | 0.9984 | 0.9818 | 0.9861 | 0.9713 | 0.9651 | 0.9793 |
| | EM/MPM | 0.9928 | 0.9876 | 0.9862 | 0.9769 | 0.9768 | 0.9824 |
| | MAXENTROPY | 0.9889 | 0.9872 | 0.9825 | 0.9291 | 0.9153 | 0.9177 |
| | OTSU | 0.9952 | 0.9899 | 0.9876 | 0.9799 | 0.9800 | 0.9858 |
| | Conventional U-net | 1.0000 | 1.0000 | 1.0000 | 0.9960 | 0.9964 | 1.0000 |
| | Proposed | 1.0000 | 1.0000 | 1.0000 | 0.9965 | 0.9969 | 0.9990 |
| | Dual-Frame U-net | 1.0000 | 1.0000 | 1.0000 | 0.9963 | 0.9967 | 0.9995 |
| | mU-net | 1.0000 | 1.0000 | 1.0000 | 0.9970 | 0.9967 | 0.9995 |
| Jaccard | ARKFCM | 0.8183 | 0.7622 | 0.7904 | 0.7964 | 0.7924 | 0.8121 |
| | CMF | 0.6197 | 0.5604 | 0.5865 | 0.5865 | 0.5862 | 0.6176 |
| | DRLSE | 0.7711 | 0.7013 | 0.7401 | 0.7514 | 0.7491 | 0.7815 |
| | EM/MPM | 0.8122 | 0.7425 | 0.7711 | 0.7720 | 0.7665 | 0.7916 |
| | MAXENTROPY | 0.7425 | 0.6973 | 0.7209 | 0.6853 | 0.6789 | 0.7012 |
| | OTSU | 0.8092 | 0.7388 | 0.7678 | 0.7677 | 0.7634 | 0.7879 |
| | Conventional U-net | 0.9497 | 0.9311 | 0.9313 | 0.9423 | 0.9418 | 0.9472 |
| | Proposed | 0.9540 | 0.9392 | 0.9423 | 0.9547 | 0.9554 | 0.9592 |
| | Dual-Frame U-net | 0.9485 | 0.9391 | 0.9404 | 0.9553 | 0.9550 | 0.9590 |
| | mU-net | 0.9474 | 0.9383 | 0.9377 | 0.9534 | 0.9518 | 0.9550 |
| DSC | ARKFCM | 0.9001 | 0.8651 | 0.8829 | 0.8867 | 0.8842 | 0.8963 |
| | CMF | 0.7652 | 0.7183 | 0.7394 | 0.7394 | 0.7391 | 0.7636 |
| | DRLSE | 0.8708 | 0.8244 | 0.8506 | 0.8581 | 0.8566 | 0.8774 |
| | EM/MPM | 0.8964 | 0.8522 | 0.8708 | 0.8713 | 0.8678 | 0.8837 |
| | MAXENTROPY | 0.8522 | 0.8217 | 0.8378 | 0.8133 | 0.8087 | 0.8244 |
| | OTSU | 0.8945 | 0.8498 | 0.8687 | 0.8686 | 0.8658 | 0.8814 |
| | Conventional U-net | 0.9742 | 0.9643 | 0.9644 | 0.9703 | 0.9700 | 0.9729 |
| | Proposed | 0.9765 | 0.9686 | 0.9703 | 0.9768 | 0.9772 | 0.9792 |
| | Dual-Frame U-net | 0.9736 | 0.9686 | 0.9693 | 0.9771 | 0.9770 | 0.9791 |
| | mU-net | 0.9730 | 0.9682 | 0.9678 | 0.9761 | 0.9753 | 0.9770 |
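Two of the three evaluation indices above have simple overlap definitions on binary masks A (prediction) and B (ground truth): Jaccard = |A∩B| / |A∪B| and DSC = 2|A∩B| / (|A| + |B|), so DSC = 2J / (1 + J); the BF score additionally matches boundary pixels within a distance tolerance and is omitted from this sketch. A minimal version on flat 0/1 masks:

```python
def jaccard_and_dsc(pred, truth):
    """Jaccard index and Dice similarity coefficient (DSC) for two
    binary masks given as flat sequences of 0/1 values."""
    inter = sum(p * t for p, t in zip(pred, truth))  # |A ∩ B|
    p_sum, t_sum = sum(pred), sum(truth)             # |A|, |B|
    union = p_sum + t_sum - inter                    # |A ∪ B|
    jaccard = inter / union if union else 1.0        # both masks empty -> 1
    dsc = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    return jaccard, dsc
```

For example, masks [1, 1, 0, 0] and [1, 0, 1, 0] overlap in one pixel out of three in the union, giving Jaccard 1/3 and DSC 0.5.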
| p-Value | ARKFCM | CMF | DRLSE | EM/MPM | MAXENTROPY | OTSU | Conventional U-net | Dual-Frame U-net | mU-net |
|---|---|---|---|---|---|---|---|---|---|
| p-BF | 2.27 × 10⁻⁵ | 0.000194 | 0.003279 | 0.000224 | 0.012090 | 0.000598 | 1.000000 | 0.986737 | 0.888246 |
| p-Jaccard | 6.34 × 10⁻⁹ | 5.19 × 10⁻¹² | 1.05 × 10⁻⁸ | 9.79 × 10⁻⁹ | 3.37 × 10⁻¹⁰ | 8.6 × 10⁻⁹ | 0.049792 | 0.796774 | 0.452846 |
| p-DSC | 1.21 × 10⁻⁸ | 2.79 × 10⁻¹¹ | 2.9 × 10⁻⁸ | 2.04 × 10⁻⁸ | 9.45 × 10⁻¹⁰ | 1.78 × 10⁻⁸ | 0.050454 | 0.799691 | 0.453197 |
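The p-values above are consistent with paired significance tests of the proposed method against each competitor over the six test samples (the exact test is not restated in this excerpt). As an illustration, the paired t statistic can be computed with the standard library alone; converting it to a p-value requires the t-distribution CDF, e.g. `scipy.stats.ttest_rel`:

```python
import math

def paired_t_statistic(a, b):
    """t statistic of a paired t-test comparing two per-sample score lists."""
    assert len(a) == len(b) and len(a) > 1
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # unbiased sample variance
    if var_d == 0.0:  # all differences identical
        return 0.0 if mean_d == 0 else math.copysign(math.inf, mean_d)
    return mean_d / math.sqrt(var_d / n)
```

Feeding in the per-sample DSC rows for "Proposed" and a weak baseline such as CMF yields a very large t (hence a tiny p-value), while the near-identical U-net variants give small t values, matching the pattern in the table.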
| Evaluation Index | Methods | 1#102 | 1#111 | 1#113 | 2#740 | 2#741 | 2#743 |
|---|---|---|---|---|---|---|---|
| BF Score | Dense Block + U-net | 1.0000 | 1.0000 | 1.0000 | 0.9961 | 0.9968 | 0.9991 |
| | Multi-Input + U-net | 1.0000 | 1.0000 | 1.0000 | 0.9964 | 0.9967 | 0.9991 |
| | Focal Loss + U-net | 1.0000 | 1.0000 | 0.9998 | 0.9967 | 0.9967 | 0.9998 |
| | Residual Path + U-net | 1.0000 | 1.0000 | 1.0000 | 0.9970 | 0.9967 | 0.9995 |
| | Conventional U-net | 1.0000 | 1.0000 | 1.0000 | 0.9960 | 0.9964 | 1.0000 |
| | Proposed | 1.0000 | 1.0000 | 1.0000 | 0.9965 | 0.9969 | 0.9990 |
| Jaccard | Dense Block + U-net | 0.9492 | 0.9397 | 0.9406 | 0.9535 | 0.9530 | 0.9578 |
| | Multi-Input + U-net | 0.9487 | 0.9397 | 0.9419 | 0.9557 | 0.9540 | 0.9600 |
| | Focal Loss + U-net | 0.9329 | 0.9337 | 0.9273 | 0.9533 | 0.9514 | 0.9502 |
| | Residual Path + U-net | 0.9474 | 0.9383 | 0.9377 | 0.9534 | 0.9518 | 0.9550 |
| | Conventional U-net | 0.9497 | 0.9311 | 0.9313 | 0.9423 | 0.9418 | 0.9472 |
| | Proposed | 0.9540 | 0.9392 | 0.9423 | 0.9547 | 0.9554 | 0.9592 |
| DSC | Dense Block + U-net | 0.9739 | 0.9689 | 0.9694 | 0.9762 | 0.9759 | 0.9784 |
| | Multi-Input + U-net | 0.9737 | 0.9689 | 0.9701 | 0.9773 | 0.9765 | 0.9796 |
| | Focal Loss + U-net | 0.9653 | 0.9657 | 0.9623 | 0.9761 | 0.9751 | 0.9745 |
| | Residual Path + U-net | 0.9730 | 0.9682 | 0.9678 | 0.9761 | 0.9753 | 0.9770 |
| | Conventional U-net | 0.9742 | 0.9643 | 0.9644 | 0.9703 | 0.9700 | 0.9729 |
| | Proposed | 0.9765 | 0.9686 | 0.9703 | 0.9768 | 0.9772 | 0.9792 |
| Samples \ Methods | ARKFCM | CMF | DRLSE | EM/MPM | MAXENTROPY | OTSU | U-net | Proposed | mU-net | Dual-Frame U-net |
|---|---|---|---|---|---|---|---|---|---|---|
| 1#102 | 6.003 s | 0.425 s | 134.397 s | 0.904 s | 0.367 s | 0.128 s | 0.0315 s | 0.0810 s | 0.0418 s | 0.0304 s |
| 1#111 | 5.927 s | 0.435 s | 59.553 s | 0.706 s | 0.360 s | 0.073 s | 0.0325 s | 0.0811 s | 0.0428 s | 0.0299 s |
| 1#113 | 6.697 s | 0.438 s | 76.094 s | 0.737 s | 0.327 s | 0.082 s | 0.0319 s | 0.0806 s | 0.0423 s | 0.0294 s |
| 2#740 | 6.117 s | 0.504 s | 25.223 s | 0.906 s | 0.320 s | 0.086 s | 0.0330 s | 0.0810 s | 0.0418 s | 0.0295 s |
| 2#741 | 5.902 s | 0.471 s | 22.218 s | 0.935 s | 0.297 s | 0.086 s | 0.0294 s | 0.0810 s | 0.0388 s | 0.0301 s |
| 2#743 | 5.899 s | 0.514 s | 22.111 s | 0.759 s | 0.424 s | 0.104 s | 0.0292 s | 0.0807 s | 0.0404 s | 0.0296 s |
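Per-sample processing times like those above are typically reported as wall-clock averages over repeated runs to smooth out scheduler noise. A small helper of this kind might look as follows (a hypothetical sketch; the paper's actual measurement protocol is not specified in this excerpt):

```python
import time

def time_inference(fn, *args, repeats=10):
    """Average wall-clock time per call of fn(*args), in seconds."""
    start = time.perf_counter()  # monotonic, high-resolution clock
    for _ in range(repeats):
        fn(*args)
    return (time.perf_counter() - start) / repeats
```

For GPU models, a warm-up call and an explicit device synchronization before reading the clock would also be needed so that queued kernels are not excluded from the measurement.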
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Zheng, J.; Tang, C.; Sun, Y.; Feng, M.; Wang, C. An Enhanced U-Net Approach for Segmentation of Aeroengine Hollow Turbine Blade. Mathematics 2022, 10, 4230. https://doi.org/10.3390/math10224230