Transfer Learning in Breast Cancer Diagnoses via Ultrasound Imaging
Simple Summary
Abstract
1. Introduction
2. Transfer Learning
2.1. Overview of Transfer Learning
2.2. Advantages of Transfer Learning
2.3. Transfer Learning Approaches
2.3.1. Feature Extracting
2.3.2. Fine-Tuning
2.3.3. Feature Extracting vs. Fine-Tuning
2.4. Pre-Training Model and Dataset
- ImageNet: ImageNet is a large image database designed for use in image recognition [77,78,79]. It comprises more than 14 million images that have been hand-annotated to indicate the pictured objects, organized into more than 20,000 categories, with a typical category containing several hundred images. ImageNet does not own the images; instead, its repository of annotations and third-party image URLs is freely accessible directly from ImageNet.
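In practice, ImageNet is most often reused not as raw images but through network weights pre-trained on it. Below is a minimal sketch of loading such weights via torchvision; the choice of VGG16 is an illustrative assumption, not a recommendation from the reviewed studies.

```python
# Minimal sketch: reusing ImageNet through pre-trained weights rather than
# by downloading its 14M+ images. VGG16 is chosen here only as an example.
import torchvision.models as models

model = models.vgg16(pretrained=True)  # weights learned on ImageNet
print(model.classifier[-1])            # final layer maps to the 1000 ImageNet classes
```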
2.5. Pre-Processing
2.6. Convolutional Neural Network
- AlexNet: the AlexNet architecture is composed of eight layers. The first five layers are convolutional, several of them followed by max-pooling layers for data dimension reduction [77,78,79], and the remaining three layers are fully connected. AlexNet uses the rectified linear unit (ReLU) activation function, which offers faster training than saturating activation functions (a layer-count sketch follows this list).
- VGGNet: VGG16 was the first CNN introduced by the Visual Geometry Group (VGG) and was followed by VGG19; both became excellent-performing architectures on ImageNet [85]. VGGNet models afford better performance than AlexNet by replacing large kernel-sized filters with stacks of small (3 × 3) kernel-sized filters; VGG16 and VGG19 comprise 13 and 16 convolution layers, respectively [84,85,86] (a parameter comparison sketch follows this list).
- Inception: this is a GoogLeNet model focused on improving the efficiency of VGGNet in terms of memory usage and runtime without reducing performance accuracy [86,87,88,89]. To achieve this, it prunes activations that are redundant or zero [86]: GoogLeNet introduced a module known as Inception, which approximates a sparse connection pattern between the activations using dense components [87]. Following InceptionV1, the architecture was improved in three subsequent versions [88,89]. InceptionV2 used batch normalization for training, and InceptionV3 introduced the factorization method to reduce the computational complexity of the convolution layers. InceptionV4 provided a streamlined variant of the InceptionV3 architecture with a larger number of Inception modules [89] (a minimal Inception-module sketch follows this list).
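As a concrete check on the AlexNet layout described above, the following sketch counts the convolutional and fully connected layers in torchvision's stock AlexNet; it is illustrative only and not code from any reviewed study.

```python
# AlexNet in torchvision: five convolutional layers plus three fully
# connected layers, matching the eight-layer description above.
import torch.nn as nn
import torchvision.models as models

alexnet = models.alexnet(pretrained=False)
n_conv = sum(isinstance(m, nn.Conv2d) for m in alexnet.features)
n_fc = sum(isinstance(m, nn.Linear) for m in alexnet.classifier)
print(n_conv, n_fc)  # -> 5 3
```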
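The parameter saving behind VGG's small-kernel design can be verified directly. The sketch below compares two stacked 3 × 3 convolutions against a single 5 × 5 convolution covering the same receptive field; the channel count is an arbitrary assumption.

```python
# Two stacked 3x3 convolutions span a 5x5 receptive field with fewer
# parameters than one 5x5 convolution (roughly 2*9*C*C vs. 25*C*C weights).
import torch.nn as nn

C = 64  # arbitrary channel count, for illustration only
stacked = nn.Sequential(
    nn.Conv2d(C, C, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(C, C, kernel_size=3, padding=1),
)
single = nn.Conv2d(C, C, kernel_size=5, padding=2)

params = lambda m: sum(p.numel() for p in m.parameters())
print(params(stacked), params(single))  # 73856 vs. 102464
```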
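Finally, to illustrate how an Inception module approximates sparse connections with parallel dense branches, here is a minimal, hypothetical Inception-style block; the branch widths are arbitrary assumptions, and the code is a simplified sketch, not the GoogLeNet implementation.

```python
# A minimal Inception-style block: parallel 1x1, 3x3, 5x5, and pooling
# branches (with 1x1 reductions) concatenated along the channel axis.
import torch
import torch.nn as nn

class MiniInception(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 16, kernel_size=1)
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, 16, kernel_size=1),
                                nn.Conv2d(16, 24, kernel_size=3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, 8, kernel_size=1),
                                nn.Conv2d(8, 12, kernel_size=5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, 12, kernel_size=1))

    def forward(self, x):
        # concatenate the four branch outputs along the channel dimension
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

y = MiniInception(32)(torch.randn(1, 32, 56, 56))
print(y.shape)  # -> torch.Size([1, 64, 56, 56]); 16 + 24 + 12 + 12 channels
```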
3. Results
4. Discussion
5. Outlook
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Conflicts of Interest
References
- Mutar, M.T.; Goyani, M.S.; Had, A.M.; Mahmood, A.S. Pattern of Presentation of Patients with Breast Cancer in Iraq in 2018: A Cross-Sectional Study. J. Glob. Oncol. 2019, 5, 1–6. [Google Scholar] [CrossRef] [PubMed]
- Coleman, C. Early Detection and Screening for Breast Cancer. Sem. Oncol. Nurs. 2017, 33, 141–155. [Google Scholar] [CrossRef] [PubMed]
- Smith, N.B.; Webb, A. Ultrasound Imaging. In Introduction to Medical Imaging: Physics, Engineering and Clinical Applications, 6th ed.; Saltzman, W.M., Chien, S., Eds.; Cambridge University Press: Cambridge, UK, 2010; Volume 1, pp. 145–197. [Google Scholar] [CrossRef]
- Gilbert, F.J.; Pinker-Domenig, K. Diagnosis and Staging of Breast Cancer: When and How to Use Mammography, Tomosynthesis, Ultrasound, Contrast-Enhanced Mammography, and Magnetic Resonance Imaging. Dis. Chest Breast Heart Vessels 2019, 2019–2022, 155–166. [Google Scholar] [CrossRef]
- Jesneck, J.L.; Lo, J.Y.; Baker, J.A. Breast Mass Lesions: Computer-aided Diagnosis Models with Mammographic and Sonographic Descriptors. Radiology 2007, 244, 390–398. [Google Scholar] [CrossRef]
- Feldman, M.K.; Katyal, S.; Blackwood, M.S. US artifacts. Radiographics 2009, 29, 1179–1189. [Google Scholar] [CrossRef] [PubMed]
- Barr, R.; Hindi, A.; Peterson, C. Artifacts in diagnostic ultrasound. Rep. Med. Imaging 2013, 6, 29–49. [Google Scholar] [CrossRef] [Green Version]
- Zhou, Y. Ultrasound Diagnosis of Breast Cancer. J. Med. Imag. Health Inform. 2013, 3, 157–170. [Google Scholar] [CrossRef]
- Liu, S.; Wang, Y.; Yang, X.; Li, S.; Wang, T.; Lei, B.; Ni, D.; Liu, L. Deep Learning in Medical Ultrasound Analysis: A Review. Engineering 2019, 5, 261–275. [Google Scholar] [CrossRef]
- Huang, Q.; Zhang, F.; Li, X. Machine Learning in Ultrasound Computer-Aided Diagnostic Systems: A Survey. BioMed Res. Int. 2018, 7, 1–10. [Google Scholar] [CrossRef]
- Brattain, L.J.; Telfer, B.A.; Dhyani, M.; Grajo, J.R.; Samir, A.E. Machine learning for medical ultrasound: Status, methods, and future opportunities. Abdom. Radiol. 2018, 43, 786–799. [Google Scholar] [CrossRef]
- Sloun, R.J.G.v.; Cohen, R.; Eldar, Y.C. Deep Learning in Ultrasound Imaging. Proc. IEEE 2020, 108, 11–29. [Google Scholar] [CrossRef] [Green Version]
- Pan, S.J.; Yang, Q. A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359. [Google Scholar] [CrossRef]
- Khoshdel, V.; Ashraf, A.; LoVetri, J. Enhancement of Multimodal Microwave-Ultrasound Breast Imaging Using a Deep-Learning Technique. Sensors 2019, 19, 4050. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Day, O.; Khoshgoftaar, T.M. A survey on heterogeneous transfer learning. J. Big Dat. 2017, 4, 29. [Google Scholar] [CrossRef]
- Weiss, K.; Khoshgoftaar, T.M.; Wang, D. A survey of transfer learning. J. Big Dat. 2016, 3, 1–9. [Google Scholar] [CrossRef] [Green Version]
- Gentle Introduction to Transfer Learning. Available online: https://bit.ly/2KuPVMA (accessed on 10 November 2020).
- Taylor, M.E.; Kuhlmann, G.; Stone, P. Transfer Learning and Intelligence: An Argument and Approach. In Proceedings of the 2008 Conference on Artificial General Intelligence, Amsterdam, The Netherlands, 18–19 June 2008; pp. 326–337. [Google Scholar]
- Parisi, G.I.; Kemker, R.; Part, J.L.; Kanan, C.; Wermter, S. Continual lifelong learning with neural networks: A review. Neural Netw. 2019, 113, 54–71. [Google Scholar] [CrossRef] [PubMed]
- Silver, D.; Yang, Q.; Li, L. Lifelong Machine Learning Systems: Beyond Learning Algorithms. In Proceedings of the AAAI Spring Symposium, Palo Alto, CA, USA, 25–27 March 2013; pp. 49–55. [Google Scholar]
- Chen, Z.; Liu, B. Lifelong Machine Learning. Synth. Lect. Artif. Intell. Mach. Learn. 2016, 10, 1–145. [Google Scholar] [CrossRef] [Green Version]
- Alom, M.Z.; Taha, T.; Yakopcic, C.; Westberg, S.; Hasan, M.; Esesn, B.; Awwal, A.; Asari, V. The History Began from AlexNet: A Comprehensive Survey on Deep Learning Approaches. arXiv 2018, arXiv:1803.01164. [Google Scholar]
- Huynh, B.; Drukker, K.; Giger, M. MO-DE-207B-06: Computer-Aided Diagnosis of Breast Ultrasound Images Using Transfer Learning From Deep Convolutional Neural Networks. Int. J. Med. Phys. Res. Prac. 2016, 43, 3705–3705. [Google Scholar] [CrossRef]
- Byra, M.; Galperin, M.; Ojeda-Fournier, H.; Olson, L.; O’Boyle, M.; Comstock, C.; Andre, M. Breast mass classification in sonography with transfer learning using a deep convolutional neural network and color conversion. Med. Phys. 2019, 46, 746–755. [Google Scholar] [CrossRef]
- Yap, M.H.; Pons, G.; Marti, J.; Ganau, S.; Sentis, M.; Zwiggelaar, R.; Davison, A.K.; Marti, R.; Moi Hoon, Y.; Pons, G.; et al. Automated Breast Ultrasound Lesions Detection Using Convolutional Neural Networks. IEEE J. Biomed. Health Inform. 2018, 22, 1218–1226. [Google Scholar] [CrossRef] [Green Version]
- Byra, M.; Sznajder, T.; Korzinek, D.; Piotrzkowska-Wroblewska, H.; Dobruch-Sobczak, K.; Nowicki, A.; Marasek, K. Impact of Ultrasound Image Reconstruction Method on Breast Lesion Classification with Deep Learning. arXiv 2018, arXiv:1804.02119. [Google Scholar]
- Hijab, A.; Rushdi, M.A.; Gomaa, M.M.; Eldeib, A. Breast Cancer Classification in Ultrasound Images using Transfer Learning. In Proceedings of the 2019 Fifth International Conference on Advances in Biomedical Engineering (ICABME), Tripoli, Lebanon, 17–19 October 2019; pp. 1–4. [Google Scholar] [CrossRef]
- Yap, M.H.; Goyal, M.; Osman, F.M.; Martí, R.; Denton, E.; Juette, A.; Zwiggelaar, R. Breast ultrasound lesions recognition: End-to-end deep learning approaches. J. Med. Imaging 2019, 6, 1–7. [Google Scholar] [CrossRef]
- Hadad, O.; Bakalo, R.; Ben-Ari, R.; Hashoul, S.; Amit, G. Classification of breast lesions using cross-modal deep learning. IEEE 14th Intl. Symp. Biomed. Imaging 2017, 1, 109–112. [Google Scholar] [CrossRef]
- Transfer Learning. Available online: http://www.isikdogan.com/blog/transfer-learning.html (accessed on 20 November 2020).
- Chu, B.; Madhavan, V.; Beijbom, O.; Hoffman, J.; Darrell, T. Best Practices for Fine-Tuning Visual Classifiers to New Domains. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2016; pp. 435–442. [Google Scholar]
- Transfer Learning. Available online: https://cs231n.github.io/transfer-learning (accessed on 19 November 2020).
- Yosinski, J.; Clune, J.; Bengio, Y.; Lipson, H. How transferable are features in deep neural networks? Adv. Neural Inf. Process. Syst. (NIPS) 2014, 27, 1–14. [Google Scholar]
- Huh, M.-Y.; Agrawal, P.; Efros, A.A. What makes ImageNet good for transfer learning? arXiv 2016, arXiv:1608.08614. [Google Scholar]
- Li, Z.; Hoiem, D. Learning without Forgetting. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 2935–2947. [Google Scholar] [CrossRef] [Green Version]
- Building Trustworthy and Ethical AI Systems. Available online: https://www.kdnuggets.com/2019/06/5-ways-lack-data-machine-learning.html (accessed on 15 November 2020).
- Overfit and Underfit. Available online: https://www.tensorflow.org/tutorials/keras/overfit_and_underfit (accessed on 10 November 2020).
- Handling Overfitting in Deep Learning Models. Available online: https://towardsdatascience.com/handling-overfitting-in-deep-learning-models-c760ee047c6e (accessed on 12 November 2020).
- Transfer Learning: The Dos and Don’ts. Available online: https://medium.com/starschema-blog/transfer-learning-the-dos-and-donts-165729d66625 (accessed on 20 November 2020).
- Transfer Learning & Fine-Tuning. Available online: https://keras.io/guides/transfer_learning/ (accessed on 2 November 2020).
- How the pytorch freeze network in some layers, only the rest of the training? Available online: https://bit.ly/2KrE2qK (accessed on 2 November 2020).
- Transfer Learning. Available online: https://colab.research.google.com/github/kylemath/ml4aguides/blob/master/notebooks/transferlearning.ipynb (accessed on 5 November 2020).
- A Comprehensive Hands-on Guide to Transfer Learning with Real-World Applications in Deep Learning. Available online: https://towardsdatascience.com/a-comprehensive-hands-on-guide-to-transfer-learning-with-real-world-applications-in-deep-learning-212bf3b2f27a (accessed on 3 November 2020).
- Transfer Learning with Convolutional Neural Networks in PyTorch. Available online: https://towardsdatascience.com/transfer-learning-with-convolutional-neural-networks-in-pytorch-dd09190245ce (accessed on 25 October 2020).
- Best, N.; Ott, J.; Linstead, E.J. Exploring the efficacy of transfer learning in mining image-based software artifacts. J. Big Dat. 2020, 7, 1–10. [Google Scholar] [CrossRef]
- He, K.; Girshick, R.; Dollar, P. Rethinking ImageNet Pre-Training. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019; pp. 4917–4926. [Google Scholar]
- Neyshabur, B.; Sedghi, H.; Zhang, C. What is being transferred in transfer learning? arXiv 2020, arXiv:2008.11687. [Google Scholar]
- Liu, L.; Chen, J.; Fieguth, P.; Zhao, G.; Chellappa, R.; Pietikäinen, M. From BoW to CNN: Two Decades of Texture Representation for Texture Classification. Int. J. Comp. Vis. 2019, 127, 74–109. [Google Scholar] [CrossRef] [Green Version]
- Çarkacioglu, A.; Yarman Vural, F. SASI: A Generic Texture Descriptor for Image Retrieval. Pattern Recogn. 2003, 36, 2615–2633. [Google Scholar] [CrossRef] [Green Version]
- Yan, Y.; Ren, W.; Cao, X. Recolored Image Detection via a Deep Discriminative Model. IEEE Trans. Inf. Forensics Sec. 2018, 7, 1–7. [Google Scholar] [CrossRef]
- Imai, S.; Kawai, S.; Nobuhara, H. Stepwise PathNet: A layer-by-layer knowledge-selection-based transfer learning algorithm. Sci. Rep. 2020, 10, 1–14. [Google Scholar] [CrossRef]
- Zhao, Z.; Zheng, P.; Xu, S.; Wu, X. Object Detection with Deep Learning: A Review. IEEE Trans. Neur. Net. Learn. Sys. 2019, 30, 3212–3232. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Transfer Learning (C3W2L07). Available online: https://www.youtube.com/watch?v=yofjFQddwHE&t=1s (accessed on 3 November 2020).
- Zhang, J.; Li, W.; Ogunbona, P.; Xu, D. Recent Advances in Transfer Learning for Cross-Dataset Visual Recognition: A Problem-Oriented Perspective. ACM Comput. Surv. 2019, 52, 1–38. [Google Scholar] [CrossRef]
- Nguyen, D.; Sridharan, S.; Denman, S.; Dean, D.; Fookes, C. Meta Transfer Learning for Emotion Recognition. arXiv 2020, arXiv:2006.13211. [Google Scholar]
- Schmidt, J.; Marques, M.R.G.; Botti, S.; Marques, M.A.L. Recent advances and applications of machine learning in solid-state materials science. NPJ Comput. Mater. 2019, 5, 1–36. [Google Scholar] [CrossRef]
- D’souza, R.N.; Huang, P.-Y.; Yeh, F.-C. Structural Analysis and Optimization of Convolutional Neural Networks with a Small Sample Size. Sci. Rep. 2020, 10, 1–13. [Google Scholar] [CrossRef]
- Rizwan I Haque, I.; Neubert, J. Deep learning approaches to biomedical image segmentation. Inform. Med. Unlocked 2020, 18, 1–12. [Google Scholar] [CrossRef]
- Azizi, S.; Mousavi, P.; Yan, P.; Tahmasebi, A.; Kwak, J.T.; Xu, S.; Turkbey, B.; Choyke, P.; Pinto, P.; Wood, B.; et al. Transfer learning from RF to B-mode temporal enhanced ultrasound features for prostate cancer detection. Int. J. Comp. Assist. Radiol. Surg. 2017, 12, 1111–1121. [Google Scholar] [CrossRef] [PubMed]
- Amit, G.; Ben-Ari, R.; Hadad, O.; Monovich, E.; Granot, N.; Hashoul, S. Classification of breast MRI lesions using small-size training sets: Comparison of deep learning approaches. Proc. SPIE 2017, 10134, 1–6. [Google Scholar]
- Tajbakhsh, N.; Jeyaseelan, L.; Li, Q.; Chiang, J.N.; Wu, Z.; Ding, X. Embracing imperfect datasets: A review of deep learning solutions for medical image segmentation. Med. Image Anal. 2020, 63, 1–30. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Yamashita, R.; Nishio, M.; Do, R.K.G.; Togashi, K. Convolutional neural networks: An overview and application in radiology. Insights Imaging 2018, 9, 611–629. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Calisto, F.M.; Nunes, N.; Nascimento, J. BreastScreening: On the Use of Multi-Modality in Medical Imaging Diagnosis. arXiv 2020, arXiv:2004.03500v2. [Google Scholar] [CrossRef]
- Evans, A.; Trimboli, R.M.; Athanasiou, A.; Balleyguier, C.; Baltzer, P.A.; Bick, U. Breast ultrasound: Recommendations for information to women and referring physicians by the European Society of Breast Imaging. Insights Imaging 2018, 9, 449–461. [Google Scholar] [CrossRef] [Green Version]
- Mammography in Breast Cancer. Available online: https://bit.ly/2Jyf8pl (accessed on 20 November 2020).
- Eggertson, L. MRIs more accurate than mammograms but expensive. CMAJ 2004, 171, 840. [Google Scholar] [CrossRef] [Green Version]
- Salem, D.S.; Kamal, R.M.; Mansour, S.M.; Salah, L.A.; Wessam, R. Breast imaging in the young: The role of magnetic resonance imaging in breast cancer screening, diagnosis and follow-up. J. Thorac. Dis. 2013, 5, 9–18. [Google Scholar] [CrossRef]
- A Literature Review of Emerging Technologies in Breast Cancer Screening. Available online: https://bit.ly/37Ccmas (accessed on 20 October 2020).
- Li, W.; Gu, S.; Zhang, X.; Chen, T. Transfer learning for process fault diagnosis: Knowledge transfer from simulation to physical processes. Comp. Chem. Eng. 2020, 139, 1–10. [Google Scholar] [CrossRef]
- Zhong, E.; Fan, W.; Yang, Q.; Verscheure, O.; Ren, J. Cross Validation Framework to Choose amongst Models and Datasets for Transfer Learning. In Proceedings of Machine Learning and Knowledge Discovery in Databases (ECML PKDD), Barcelona, Spain, 20–24 September 2010; pp. 547–562. [Google Scholar]
- Baykal, E.; Dogan, H.; Ercin, M.E.; Ersoz, S.; Ekinci, M. Transfer learning with pre-trained deep convolutional neural networks for serous cell classification. Multimed. Tools Appl. 2020, 79, 15593–15611. [Google Scholar] [CrossRef]
- Cheplygina, V.; de Bruijne, M.; Pluim, J.P.W. Not-so-supervised: A survey of semi-supervised, multi-instance, and transfer learning in medical image analysis. Med. Image Anal. 2019, 54, 280–296. [Google Scholar] [CrossRef] [Green Version]
- Kensert, A.; Harrison, P.J.; Spjuth, O. Transfer Learning with Deep Convolutional Neural Networks for Classifying Cellular Morphological Changes. SLAS Discov. Adv. Life Sci. 2019, 24, 466–475. [Google Scholar] [CrossRef] [Green Version]
- Morid, M.A.; Borjali, A.; Del Fiol, G. A scoping review of transfer learning research on medical image analysis using ImageNet. Comput. Biol. Med. 2021, 128, 10–15. [Google Scholar] [CrossRef]
- Hesamian, M.H.; Jia, W.; He, X.; Kennedy, P. Deep Learning Techniques for Medical Image Segmentation: Achievements and Challenges. J. Dig. Imaging 2019, 32, 582–596. [Google Scholar] [CrossRef] [Green Version]
- Liu, L.; Ouyang, W.; Wang, X.; Fieguth, P.; Chen, J.; Liu, X.; Pietikäinen, M. Deep Learning for Generic Object Detection: A Survey. Int. J. Comput. Vis. 2020, 128, 261–318. [Google Scholar] [CrossRef] [Green Version]
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef] [Green Version]
- Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar] [CrossRef] [Green Version]
- Khan, A.; Sohail, A.; Zahoora, U.; Qureshi, A.S. A survey of the recent architectures of deep convolutional neural networks. Artif. Intell. Rev. 2020, 53, 5455–5516. [Google Scholar] [CrossRef] [Green Version]
- Mikołajczyk, A.; Grochowski, M. Data augmentation for improving deep learning in image classification problem. In Proceedings of the 2018 International Interdisciplinary PhD Workshop (IIPhDW), Swinoujście, Poland, 9–12 May 2018; pp. 117–122. [Google Scholar] [CrossRef]
- Ma, B.; Wei, X.; Liu, C.; Ban, X.; Huang, H.; Wang, H.; Xue, W.; Wu, S.; Gao, M.; Shen, Q.; et al. Data augmentation in microscopic images for material data mining. NPJ Comput. Mat. 2020, 6, 1–14. [Google Scholar] [CrossRef]
- Kamycki, K.; Kapuscinski, T.; Oszust, M. Data Augmentation with Suboptimal Warping for Time-Series Classification. Sensors 2019, 20, 95. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Shorten, C.; Khoshgoftaar, T.M. A survey on Image Data Augmentation for Deep Learning. J. Big Dat. 2019, 6, 1–48. [Google Scholar] [CrossRef]
- Schmidhuber, J. Deep learning in neural networks: An overview. Neur. Net. 2015, 61, 85–117. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar] [CrossRef] [Green Version]
- Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 448–456. [Google Scholar]
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar] [CrossRef] [Green Version]
- Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A.A. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. arXiv 2017, arXiv:1602.07261. [Google Scholar]
- Boroujeni, F.Z.; Wirza, R.; Maskon, O.; Khosravi, A.; Khalilian, M. An Improved Seed Point Detection Algorithm for Centerline Tracing in Coronary Angiograms. In Proceedings of the 2010 Seventh International Conference on Information Technology: New Generations, Las Vegas, NV, USA, 12–14 April 2010; pp. 352–357. [Google Scholar] [CrossRef]
- Erode, C.G.R.; Ravindran, G. Automatic Seed Generation Using Discrete Cosine Transform for 2D Region Growing Segmentation of Computed Tomography Image Sequence—A New Hybrid Segmentation Technique. J. Appl. Sci. 2007, 7, 671–678. [Google Scholar] [CrossRef]
- Drukker, K.; Giger, M.L.; Horsch, K.; Kupinski, M.A.; Vyborny, C.J.; Mendelson, E.B. Computerized lesion detection on breast ultrasound. Med. Phys. 2002, 29, 1438–1446. [Google Scholar] [CrossRef]
- Yap, M.H.; Edirisinghe, E.A.; Bez, H.E. A novel algorithm for initial lesion detection in ultrasound breast images. J. Appl. Clin. Med. Phys. 2008, 9, 2741–2748. [Google Scholar] [CrossRef]
- Shan, J.; Cheng, H.D.; Wang, Y. Completely Automated Segmentation Approach for Breast Ultrasound Images Using Multiple-Domain Features. Ultras. Med. Biol. 2012, 38, 262–275. [Google Scholar] [CrossRef] [PubMed]
- Khan, R.; Stöttinger, J.; Kampel, M. An adaptive multiple model approach for fast content-based skin detection in on-line videos. In Proceedings of the 1st ACM workshop on Analysis and retrieval of events/actions and workflows in video streams, Vancouver, BC, Canada, 8–10 October 2008; pp. 89–96. [Google Scholar] [CrossRef]
- Hu, Q.; Whitney, H.M.; Giger, M.L. A deep learning methodology for improved breast cancer diagnosis using multiparametric MRI. Sci. Rep. 2020, 10, 1–11. [Google Scholar] [CrossRef]
- Hajian-Tilaki, K. Receiver Operating Characteristic (ROC) Curve Analysis for Medical Diagnostic Test Evaluation. Casp. J. Intern. Med. 2013, 4, 627–635. [Google Scholar]
Study | TL Approach Used | Pre-Training Model Used | Application | Image Dataset | Pre-Processing | Pre-Training Dataset
---|---|---|---|---|---|---
Byra et al. [26] | Fine-tuning | VGG19 and InceptionV3 | Classification | OASBUD | Compression and augmentation | ImageNet
Byra et al. [24] | Fine-tuning | VGG19 | Classification | 882 of their own US images plus the public UDIAT and OASBUD datasets | Matching layer | ImageNet
Hijab et al. [27] | Fine-tuning | VGG16 | Classification | 1300 US images | Augmentation | ImageNet
Yap et al. [25] | Fine-tuning | AlexNet | Detection | Datasets A and B | Splitting into patches | ImageNet
Yap et al. [28] | Fine-tuning | AlexNet | Detection | Datasets A and B | Ground-truth labeling | ImageNet
Huynh et al. [23] | Feature extractor | AlexNet | Classification | Breast mammogram dataset with 2393 regions of interest (ROIs) | Compression and augmentation | ImageNet
Hadad et al. [29] | Fine-tuning | VGG128 | Detection and classification | MRI data | Augmentation | Medical images (mammography)
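The "TL Approach Used" column distinguishes feature extracting from fine-tuning. The sketch below shows, under assumed layer choices, how the two differ when adapting an ImageNet-pretrained VGG16 to a two-class (benign vs. malignant) task; it is a generic illustration, not the setup of any study in the table.

```python
# Feature extracting freezes the pre-trained backbone and trains only a new
# classification head; fine-tuning leaves every weight trainable.
import torch.nn as nn
import torchvision.models as models

def build_model(fine_tune: bool, num_classes: int = 2) -> nn.Module:
    model = models.vgg16(pretrained=True)
    if not fine_tune:
        for p in model.parameters():  # feature extracting: freeze all weights
            p.requires_grad = False
    # Replace the 1000-class ImageNet head; the new layer is always trainable.
    model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, num_classes)
    return model

feature_extractor = build_model(fine_tune=False)  # trains only the new head
fine_tuned = build_model(fine_tune=True)          # updates the whole network
```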
Study | Performance Analysis Approach | Performance Metrics | Results
---|---|---|---
Byra et al. [26] | Classification performance of classifiers developed on the "train all" set and evaluated on the "test all" set. | AUC, sensitivity, accuracy, and specificity | InceptionV3: AUC = 0.857; VGG19: AUC = 0.822
Byra et al. [24] | Classification performance with and without the matching layer (ML) on two datasets; bootstrap was used to estimate parameter standard deviations. | AUC, sensitivity, accuracy, and specificity | The fine-tuning approach with the matching layer performed best, with AUC = 0.936 on test data of 150 cases
Hijab et al. [27] | Comparison of the accuracy of their model on ultrasound images against related work. | AUC and accuracy | AUC = 0.98; accuracy = 0.9739
Yap et al. [25] | Comparison of the capability of the proposed deep learning models on the combined dataset. | TPF, FPs/image, and F-measure | FCN-AlexNet (A + B): TPF = 0.99 for A and TPF = 0.93 for B
Yap et al. [28] | Dice similarity coefficient compared on the malignant lesions. | Mean Dice, sensitivity, precision, and Matthews correlation coefficient (MCC) | Mean Dice score of 0.7626 with FCN-16s
Huynh et al. [23] | Classifiers trained on pre-trained-model features were compared with classifiers trained on human-designed features. | AUC | SVM trained on human-designed features obtained AUC = 0.90; SVM trained on CNN-extracted features obtained AUC = 0.88
Hadad et al. [29] | Cross-modal and cross-domain transfer learning were compared. | Accuracy | Cross-modal = 0.93; cross-domain = 0.90
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).