Deep Neural Architectures for Contrast Enhanced Ultrasound (CEUS) Focal Liver Lesions Automated Diagnosis †
Abstract
1. Introduction
- (1) We introduce a new, more realistic evaluation procedure, referred to as leave-one-patient-out (LOPO). To the best of our knowledge, this is the only CAD for CEUS FLLs work in which the evaluation does not follow the standard training/validation/testing split applied with respect to images [11,12,13,14]. The main drawback of the latter approach is that images from the training and testing sets, although distinct, may still originate from the same patient, thus making the evaluation easier and unsuitable for claiming in-field CAD performance.
- (2) The above-mentioned procedure enabled us to define and implement different voting schemes for patient-oriented lesion diagnosis. For example, a hard voting scheme uses the predicted class labels for majority-rule voting, whereas soft voting predicts the class label from the argmax of the sums of the predicted probabilities.
- (3) Our early work was the first to use a custom-designed 2D-DCNN for implementing an automated CAD for CEUS FLLs. In the current work, we extend that study by employing modern DNN architectures available through Keras Applications: deep learning models distributed alongside pre-trained weights and used in this paper in various forms (transfer learning/feature extraction, fine-tuning, or training from scratch). A special emphasis is put on TinyML/small-memory-footprint models, as we intend to transfer the CAD onto a medical embedded system.
2. State of the Art
2.1. Deep Learning Based CEUS for Medical Investigations
2.2. Deep Learning Based CEUS for FLL Investigation
3. Materials and Methods
3.1. Materials
- Arterial (Beginning: 10–20 s, End: 25–35 s)
- Portal (Beginning: 30–45 s, End: 120 s)
- Late (Beginning: 120 s, End: until the disappearance of the bubbles)
3.2. Method
4. Results
- Hardware architecture: Intel® Core™ i7-6800K CPU @ 3.4 GHz, 64 GB RAM, 64-bit system; GPU: NVIDIA GeForce RTX 2080 SUPER, 1845 MHz, 8 GB RAM, 3072 CUDA cores
- Software framework: TensorFlow 2.4.1, Python 3.8.5
4.1. Custom CNN
4.1.1. Evaluation Procedure Influence
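To make the LOPO protocol concrete, the split can be sketched with scikit-learn's `LeaveOneGroupOut`, treating patient IDs as groups. The features, labels, and IDs below are toy placeholders, not study data; in the actual study each fold would hold out all images of one of the 91 patients:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

# Toy stand-in data: 6 images from 3 patients (2 images each).
X = np.arange(12, dtype=float).reshape(6, 2)  # placeholder image features
y = np.array([0, 0, 1, 1, 0, 1])              # lesion class per image
patient = np.array([1, 1, 2, 2, 3, 3])        # patient ID per image

logo = LeaveOneGroupOut()                      # one fold per patient
folds = list(logo.split(X, y, groups=patient))
for train_idx, test_idx in folds:
    # No patient ever contributes images to both training and test sets,
    # which is exactly the guarantee a per-image random split lacks.
    assert not set(patient[train_idx]) & set(patient[test_idx])
```

This grouping is what prevents the optimistic bias of the standard image-level split discussed in the Introduction.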
4.1.2. Voting Scheme
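The hard and soft voting schemes described in the contributions can be sketched in a few lines of NumPy. The per-image probability matrix below is a made-up two-class example chosen so the two schemes disagree:

```python
import numpy as np

def hard_vote(probs):
    """Majority vote over the per-image predicted class labels."""
    labels = np.argmax(probs, axis=1)    # one label per image
    return int(np.bincount(labels).argmax())

def soft_vote(probs):
    """Argmax of the summed per-image class probabilities."""
    return int(np.argmax(probs.sum(axis=0)))

# probs: rows = images of one patient, columns = lesion classes.
probs = np.array([[0.60, 0.40],
                  [0.55, 0.45],
                  [0.10, 0.90]])
hard_vote(probs)  # 2 of 3 images predict class 0 -> class 0
soft_vote(probs)  # summed probabilities [1.25, 1.75] -> class 1
```

The example shows why soft voting can differ: a single very confident image prediction can outweigh a majority of weakly confident ones.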
4.2. Modern DNN Architectures
- MobileNetV2, introduced in [37], has as its basic building block a bottleneck depthwise-separable convolution with residuals; it is faster than MobileNetV1 at the same accuracy and needs 30 percent fewer parameters. On ImageNet it improved the state of the art on several operating points, e.g., a running time of 75 ms, a top-1 accuracy of 72%, and roughly 300 M multiply-adds.
- The NASNet [38] research aimed to search for an optimal CNN architecture directly on the dataset of interest using reinforcement learning. NASNet Mobile, a simplified version of NASNet, achieves 74% top-1 accuracy, which is 3.1% better than equivalently sized state-of-the-art models for mobile platforms.
- EfficientNet [39] proposes an efficient scaling method that uses a simple yet highly effective compound coefficient. Its smallest version, EfficientNetB0, has an architecture similar to NASNet Mobile and includes squeeze-and-excitation optimization and the Swish activation function. The reported top-1 accuracy for EfficientNetB0 is 77.1%.
- DenseNet [40] offers several compelling advantages: it simplifies the connectivity pattern between layers and ensures maximum information flow by connecting every layer directly to every other layer. It also encourages feature reuse and decreases the number of parameters. It achieved a top-1 accuracy of 75%.
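As an illustration of the pre-trained usage mode with one of the above backbones, a feature-extraction setup might look as follows. The head layers, optimizer, and class count reflect the five-class FLL task, but the exact training configuration here is illustrative, not the one used in the study:

```python
import tensorflow as tf

NUM_CLASSES = 5  # FNH, HCC, HMG, hypervascular and hypovascular metastases

# ImageNet-pretrained backbone without its classification top.
base = tf.keras.applications.MobileNetV2(
    input_shape=(180, 180, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze for feature extraction; unfreeze to fine-tune

# New classification head for the FLL classes.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Setting `base.trainable = True` after an initial training pass (typically with a much lower learning rate) switches the same model from feature extraction to fine-tuning, while training with `weights=None` corresponds to the from-scratch regime evaluated in Section 4.2.2.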
4.2.1. Pre-Trained Modern DNNs
4.2.2. Modern DNNs Trained from Scratch
5. Discussion and Conclusions
- advanced DNN architectures, e.g., GhostNet;
- automatic DNN architecture search, e.g., AutoKeras;
- dataset extension, curation (as some cases constantly fail), and enhancement via advanced techniques for speckle noise removal and robustness improvement, e.g., AugMix.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Sampat, M.P.; Markey, M.K.; Bovik, A.C. Computer-aided detection and diagnosis in mammography. In Handbook of Image and Video Processing, 2nd ed.; Bovik, A.C., Ed.; Elsevier: Amsterdam, The Netherlands, 2005; pp. 1195–1217. [Google Scholar]
- Gomathi, M.; Thangaraj, P.A. Computer aided diagnosis system for lung cancer detection using machine learning technique. Eur. J. Sci. Res. 2011, 51, 260–275. [Google Scholar]
- Yoshida, H.; Nappi, J. Three-dimensional computer-aided diagnosis scheme for detection of colonic polyps. IEEE Trans. Med. Imaging 2002, 20, 12. [Google Scholar] [CrossRef] [PubMed]
- Bowsher, J.E.; Roper, J.R.; Yan, S.; Giles, W.M.; Yin, F.-F. Regional SPECT imaging using sampling principles and multiple pinholes. In Proceedings of the Nuclear Science Symposium Conference Record, Knoxville, TN, USA, 30 October–6 November 2010. [Google Scholar]
- Pizurica, A.; Wilfried, P.; Lemahieu, I.; Acheroy, M. A versatile wavelet domain noise filtration technique for medical imaging. IEEE Trans. Med. Imaging 2003, 22, 323–331. [Google Scholar] [CrossRef] [PubMed]
- Comaniciu, D.; Meer, P. Mean shift: A robust approach toward feature space analysis. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 603–619. [Google Scholar] [CrossRef] [Green Version]
- Rousson, M.; Paragios, N.; Deriche, R. Implicit active shape models for 3D segmentation in MRI imaging. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Saint-Malo, France, 26–29 September 2004; pp. 209–216. [Google Scholar]
- Boykov, Y.; Veksler, O.; Zabih, R. Fast approximate energy minimization via graph cuts. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 1222–1239. [Google Scholar] [CrossRef] [Green Version]
- Bishop, C.M. Pattern Recognition and Machine Learning; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
- Sîrbu, C.L.; Simion, G.; Căleanu, C.-D. Deep CNN for contrast-enhanced ultrasound focal liver lesions diagnosis. In Proceedings of the 2020 International Symposium on Electronics and Telecommunications (ISETC), Timișoara, Romania, 5–6 November 2020; pp. 1–4. [Google Scholar] [CrossRef]
- Pan, F.; Huang, Q.; Li, X. Classification of liver tumors with CEUS based on 3D-CNN. In Proceedings of the IEEE 4th International Conference on Advanced Robotics and Mechatronics (ICARM), Osaka, Japan, 3–5 July 2019. [Google Scholar]
- Vancea, F.; Mitrea, D.; Nedevschi, S. Hepatocellular Carcinoma Segmentation within Ultrasound Images using Convolutional Neural Networks. In Proceedings of the IEEE 15th International Conference on Intelligent Computer Communication and Processing (ICCP), Cluj-Napoca, Romania, 5–7 September 2019. [Google Scholar] [CrossRef]
- Streba, C.T.; Ionescu, M.; Gheonea, D.I.; Sandulescu, L.; Ciurea, T.; Saftoiu, A.; Vere, C.C.; Rogoveanu, I. Contrast-enhanced ultrasonography parameters in neural network diagnosis of liver tumors. World J. Gastroenterol. 2012, 18, 4427–4434. [Google Scholar] [CrossRef]
- Guo, L.; Wang, D.; Xu, H.; Qian, Y.; Wang, C.; Zheng, X.; Zhang, Q.; Shi, J. CEUS-based classification of liver tumors with deep canonical correlation analysis and multi-kernel learning. In Proceedings of the Annual International Conference IEEE Engineering Medicine and Biology Society, Jeju Island, Korea, 11–15 July 2017; pp. 1748–1751. [Google Scholar] [CrossRef]
- Bartolotta, T.; Taibbi, A.; Midiri, M.; Lagalla, R. Contrast-enhanced ultrasound of hepatocellular carcinoma: Where do we stand? Ultrasonography 2019, 38, 200–214. [Google Scholar] [CrossRef] [Green Version]
- Liu, S.; Wang, Y.; Yang, X.; Lei, B.; Liu, L.; Li, S.X.; Ni, D.; Tianfu, W. Deep Learning in Medical Ultrasound Analysis: A Review. Engineering 2019, 5, 261–275. [Google Scholar] [CrossRef]
- Song, K.D. Current status of deep learning applications in abdominal ultrasonography. Ultrasonography 2021, 40, 177–182. [Google Scholar] [CrossRef]
- Sporea, I.; Badea, R.; Martie, A.; Şirli, R.; Socaciu, M.P.A.; Dănilă, M. Contrast Enhanced Ultrasound for the characterization of focal liver lesions. Med. Ultrason. 2011, 13, 38–44. [Google Scholar]
- Liu, Q.; Cheng, J.; Li, J.; Gao, X.; Li, H. The diagnostic accuracy of contrast-enhanced ultrasound for the differentiation of benign and malignant thyroid nodules: A PRISMA compliant meta-analysis. Medicine 2018, 97, e13325. [Google Scholar] [CrossRef] [PubMed]
- Wan, P.; Chen, F.; Liu, C.; Kong, W.; Zhang, D. Hierarchical temporal attention network for thyroid nodule recognition using dynamic CEUS imaging. IEEE Trans. Med. Imaging 2021, 40, 1646–1660. [Google Scholar] [CrossRef]
- Postema, A.W.; Scheltema, M.J.; Mannaerts, C.K.; Van Sloun, R.J.; Idzenga, T.; Mischi, M.; Engelbrecht, M.R.; De la Rosette, J.J.; Wijkstra, H. The prostate cancer detection rates of CEUS-targeted versus MRI-targeted versus systematic TRUS-guided biopsies in biopsy-naïve men: A prospective, comparative clinical trial using the same patients. BMC Urol. 2017, 17, 27. [Google Scholar] [CrossRef] [Green Version]
- Feng, Y.; Yang, F.; Zhou, X.; Guo, Y.; Tang, F.; Ren, F.; Guo, J.; Ji, S. A Deep learning approach for targeted contrast-enhanced ultrasound based prostate cancer detection. IEEE/ACM Trans. Comput. Biol. Bioinform. 2019, 16, 1794–1801. [Google Scholar] [CrossRef] [PubMed]
- Wang, Y.; Li, L.; Wang, Y.X.J.; Cui, N.Y.; Zou, S.M.; Zhou, C.W.; Jiang, Y.X. Time-intensity curve parameters in rectal cancer measured using endorectal ultrasonography with sterile coupling gels filling the rectum: Correlations with tumor angiogenesis and clinicopathological features. BioMed Res. Int. 2014, 2014, 587806. [Google Scholar] [CrossRef] [PubMed]
- Qin, L.; Yin, H.; Zhuang, H. Classification for rectal CEUS images based on combining features by transfer learning. In Proceedings of the Third International Symposium on Image Computing and Digital Medicine—ISICDM, Xi’an, China, 24–26 August 2019; pp. 187–191. [Google Scholar] [CrossRef]
- Yang, Z.; Gong, X.; Guo, Y.; Liu, W. A Temporal sequence dual-branch network for classifying hybrid ultrasound data of breast cancer. IEEE Access 2020, 8, 82688–82699. [Google Scholar] [CrossRef]
- Zhang, F.; Jin, L.; Li, G.; Jia, C.; Shi, Q.; Du, L.; Wu, R. The role of contrast-enhanced ultrasound in the diagnosis of malignant non-mass breast lesions and exploration of diagnostic criteria. Br. J. Radiol. 2021, 94, 20200880. [Google Scholar] [CrossRef]
- Erlichman, D.B.; Weiss, A.; Koenigsberg, M.; Stein, M.W. Contrast enhanced ultrasound: A review of radiology applications. Clin. Imaging 2020, 60, 209–215. [Google Scholar] [CrossRef] [Green Version]
- Gasnier, A.; Ardon, R.; Ciofolo-Veit, C.; Leen, E.; Correas, J.M. Assessing tumor vascularity with 3D contrast-enhanced ultrasound: A new semi-automated segmentation framework. In Proceedings of the 2010 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Rotterdam, The Netherlands, 14–17 April 2010; pp. 300–303. [Google Scholar] [CrossRef] [Green Version]
- Wan, P.; Chen, F.; Zhu, X.; Liu, C.; Zhang, Y.; Kong, W.; Zhang, D. CEUSNet: Lesion segmentation in dynamic contrast-enhanced ultrasound with feature-reweighted attention mechanism. In Proceedings of the IEEE International Symposium on Biomedical Imaging ISBI, Iowa City, IA, USA, 4 April 2020; pp. 1816–1819. [Google Scholar]
- Xi, I.L.; Wu, J.; Guan, J.; Zhang, P.J.; Horii, S.C.; Soulen, M.C.; Zhang, Z.; Bai, H.X. Deep learning for differentiation of benign and malignant solid liver lesions on ultrasonography. Abdom. Radiol. 2021, 46, 534–543. [Google Scholar] [CrossRef]
- Hassan, T.M.; Elmogy, M.; Sallam, E.-S. Diagnosis of focal liver diseases based on deep learning technique for ultrasound images. Arab. J. Sci. Eng. 2017, 42, 3127–3140. [Google Scholar] [CrossRef]
- Wu, K.; Chenb, X.; Ding, M. Deep learning based classification of focal liver lesions with contrast-enhanced ultrasound. Optik 2014, 125, 4057–4063. [Google Scholar] [CrossRef]
- Moga, T.V.; Popescu, A.; Sporea, I.; Danila, M.; David, C.; Gui, V.; Iacob, N.; Miclaus, G.; Sirli, R. Is Contrast Enhanced Ultrasonography a useful tool in a beginner’s hand? How much can a Computer Assisted Diagnosis prototype help in characterizing the malignancy of focal liver lesions? Med. Ultrason. 2017, 19, 252–258. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Căleanu, C.-D.; Simion, G.; David, C.; Gui, V.; Moga, T.; Popescu, A.; Sirli, R.; Sporea, I. A study over the importance of arterial phase temporal parameters in focal liver lesions CEUS based diagnosis. In Proceedings of the 11th International Symposium on Electronics and Telecommunications (ISETC), Timisoara, Romania, 14–15 November 2014; pp. 1–4, ISBN 978-1-4799-7266-1. [Google Scholar] [CrossRef]
- Căleanu, C.-D.; Simion, G. A Bag of Features Approach for CEUS Liver Lesions Investigation. In Proceedings of the 42nd International Conference on Telecommunications and Signal Processing (TSP), Budapest, Hungary, 1–3 July 2019. [Google Scholar] [CrossRef]
- Keras Applications. Available online: https://keras.io/api/applications/ (accessed on 6 March 2021).
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar] [CrossRef] [Green Version]
- Zoph, B.; Vasudevan, V.; Shlens, J.; Le, Q.V. Learning transferable architectures for scalable image recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8697–8710. [Google Scholar] [CrossRef] [Green Version]
- Tan, M.; Le, Q.V. EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 10–15 June 2019. [Google Scholar]
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269. [Google Scholar] [CrossRef] [Green Version]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar] [CrossRef] [Green Version]
FLL | No. of Patients |
---|---|
FNH | 16 |
HCC | 30 |
HMG | 23 |
HYPERM | 11 |
HYPOM | 11 |
Total: | 91 |
Model | Batch Size | Training Epochs | Input Size | Acc. [%] (Adam) | Acc. [%] (SGD) | Acc. [%] (RMSprop) |
---|---|---|---|---|---|---|
Sequential M | 32 | 50 | 180 × 180 | 95.71 | 93.19 | 94.59 |
Model | Batch Size | Training Epochs | Input Size | Acc. [%] (Adam Optimizer) |
---|---|---|---|---|
Sequential M | 16 | 50 | 80 × 80 | 90.47 |
Sequential M | 16 | 50 | 120 × 120 | 93.93 |
Sequential M | 16 | 50 | 180 × 180 | 91.87 |
Sequential M | 32 | 50 | 80 × 80 | 93.18 |
Sequential M | 32 | 50 | 120 × 120 | 88.77 |
Sequential M | 32 | 50 | 180 × 180 | 95.71 |
Sequential M | 16 | 100 | 80 × 80 | 94.76 |
Sequential M | 16 | 100 | 120 × 120 | 93.15 |
Sequential M | 16 | 100 | 180 × 180 | 94.72 |
Sequential M | 32 | 100 | 80 × 80 | 92.48 |
Sequential M | 32 | 100 | 120 × 120 | 94.92 |
Sequential M | 32 | 100 | 180 × 180 | 94.43 |
Model/Patients | 11 FNH | 11 HCC | 11 HMG | 11 HYPERM | 11 HYPOM | 55 Patients |
---|---|---|---|---|---|---|
Sequential S | 71 ± 0.06 | 88 ± 0.02 | 62 ± 0.04 | 24 ± 0.05 | 32 ± 0.04 | 56 |
Sequential M | 58 ± 0.05 | 72 ± 0.07 | 63 ± 0.08 | 33 ± 0.04 | 43 ± 0.04 | 54 |
Sequential L | 45 ± 0.01 | 88 ± 0.05 | 60 ± 0.03 | 14 ± 0.02 | 36 ± 0.04 | 49 |
Model/Patients | 16 FNH | 30 HCC | 23 HMG | 11 HYPERM | 11 HYPOM | 91 Patients |
---|---|---|---|---|---|---|
Sequential S | 75 ± 0.03 | 89 ± 0.02 | 68 ± 0.00 | 20 ± 0.01 | 28 ± 0.03 | 56 |
Sequential M | 56 ± 0.02 | 74 ± 0.01 | 65 ± 0.02 | 31 ± 0.01 | 43 ± 0.04 | 54 |
Sequential L | 49 ± 0.02 | 84 ± 0.02 | 63 ± 0.01 | 15 ± 0.02 | 33 ± 0.03 | 49 |
Model/Patients | 11 FNH | 11 HCC | 11 HMG | 11 HYPERM | 11 HYPOM | 55 Patients |
---|---|---|---|---|---|---|
Sequential S | 86 ± 0.05 | 99 ± 0.02 | 77 ± 0.07 | 31 ± 0.05 | 46 ± 0.04 | 68 |
Sequential M | 80 ± 0.04 | 87 ± 0.05 | 84 ± 0.07 | 55 ± 0.1 | 63 ± 0.06 | 74 |
Sequential L | 68 ± 0.05 | 97 ± 0.02 | 82 ± 0.04 | 15 ± 0.04 | 56 ± 0.05 | 64 |
Model/Patients | 16 FNH | 30 HCC | 23 HMG | 11 HYPERM | 11 HYPOM | 91 Patients |
---|---|---|---|---|---|---|
Sequential S | 91 ± 0.04 | 98 ± 0.02 | 86 ± 0.03 | 27 ± 0.02 | 41 ± 0.03 | 69 |
Sequential M | 79 ± 0.01 | 93 ± 0.02 | 85 ± 0.01 | 54 ± 0.04 | 53 ± 0.04 | 75 |
Sequential L | 73 ± 0.03 | 96 ± 0.02 | 81 ± 0.02 | 18 ± 0.05 | 53 ± 0.05 | 64 |
Model | Size | Top-1 Accuracy | Top-5 Accuracy | Parameters | Depth |
---|---|---|---|---|---|
MobileNetV2 | 14 MB | 0.713 | 0.901 | 3,538,984 | 88 |
NASNetMobile | 23 MB | 0.744 | 0.919 | 5,326,716 | - |
EfficientNetB0 | 29 MB | - | - | 5,330,571 | - |
DenseNet121 | 33 MB | 0.750 | 0.923 | 8,062,504 | 121 |
ResNet50 | 98 MB | 0.749 | 0.921 | 25,636,712 | - |
Model/Patients | 11 FNH | 11 HCC | 11 HMG | 11 HYPERM | 11 HYPOM | 55 Patients |
---|---|---|---|---|---|---|
MobileNetV2 | 73 | 94 | 71 | 31 | 71 | 68 |
NASNetMobile | 54 | 100 | 76 | 44 | 58 | 66 |
EfficientNetB0 | 85 | 94 | 70 | 33 | 54 | 68 |
DenseNet121 | 72 | 100 | 88 | 31 | 62 | 71 |
ResNet50 | 69 | 95 | 72 | 43 | 62 | 68 |
Model/Patients | 16 FNH | 30 HCC | 23 HMG | 11 HYPERM | 11 HYPOM | 91 Patients |
---|---|---|---|---|---|---|
MobileNetV2 | 100 | 100 | 93 | 36 | 72 | 80 |
NASNetMobile | 100 | 88 | 82 | 68 | 83 | 84 |
EfficientNetB0 | 81 | 100 | 90 | 74 | 63 | 82 |
DenseNet121 | 92 | 100 | 94 | 78 | 70 | 87 |
ResNet50 | 100 | 100 | 100 | 72 | 67 | 88 |
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Căleanu, C.D.; Sîrbu, C.L.; Simion, G. Deep Neural Architectures for Contrast Enhanced Ultrasound (CEUS) Focal Liver Lesions Automated Diagnosis. Sensors 2021, 21, 4126. https://doi.org/10.3390/s21124126