Hepatocellular Carcinoma Automatic Diagnosis within CEUS and B-Mode Ultrasound Images Using Advanced Machine Learning Methods
Abstract
1. Introduction
1.1. Automatic Diagnosis Approaches within Contrast-Enhanced Medical Images
1.2. Automatic Recognition and Segmentation Using Multiple Image Modalities
1.3. Contributions
2. Materials and Methods
2.1. Background
2.1.1. Convolutional Neural Networks (CNNs)
2.1.2. Kernel Principal Component Analysis (KPCA)
2.2. The Proposed Solution
2.2.1. Data Preparation and Preprocessing
2.2.2. CNN-Based Methods for HCC Recognition within Combined B-Mode Ultrasound and CEUS Images
- (a) SqueezeNet, a small and efficient CNN [52];
- (b) VGGNet, a classical, sequential CNN, well known for its performance [53];
- (c) GoogLeNet, with a CNN architecture optimized over its predecessors through the inception modules [59];
- (d) ResNet, a deep CNN that implements the concept of residual connections [55];
- (e) An original variant of GoogLeNet, less complex than InceptionV3 and InceptionResNetV2, obtained by drawing residual connections that skip the last three inception modules, aiming to improve efficiency while reducing the danger of vanishing gradients. The residual connections were drawn from the output of each of these inception modules to the network output, where an addition operation was performed. To equalize the dimensions of the feature vectors entering the same addition unit, average pooling operations and 1 × 1 convolutions were applied, which also reduced the number of parameters;
- (f) DenseNet, an improved version of ResNet, which maximizes the information flow through the network while using fewer parameters.
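The shape-matching trick used in variant (e) — average pooling followed by a 1 × 1 convolution so that a skipped inception module's output can be added to the network output — can be sketched with plain arrays. The tensor sizes below are illustrative assumptions, not the exact dimensions of the modified GoogLeNet:

```python
import numpy as np

def avg_pool(x, k):
    """Non-overlapping k x k average pooling over an (H, W, C) feature map."""
    h, w, c = x.shape
    return x[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k, c).mean(axis=(1, 3))

def conv1x1(x, weights):
    """A 1 x 1 convolution is per-pixel channel mixing; weights has shape (C_in, C_out)."""
    return x @ weights

rng = np.random.default_rng(0)
inception_out = rng.standard_normal((14, 14, 480))  # output of a skipped inception module (illustrative)
network_out = rng.standard_normal((7, 7, 1024))     # main-branch output (illustrative)

w = rng.standard_normal((480, 1024)) * 0.01         # 1 x 1 conv kernel
shortcut = conv1x1(avg_pool(inception_out, 2), w)   # (7, 7, 1024): now shape-compatible
fused = network_out + shortcut                      # residual addition at the network output
print(fused.shape)                                  # (7, 7, 1024)
```

The pooling halves the spatial resolution and the 1 × 1 convolution aligns the channel count, so the addition unit receives equally sized operands, which is exactly why these two operations appear on the residual paths.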
- (1) Feature level fusion, the combination being performed directly on the data of the B-mode ultrasound and CEUS images;
- (2) Classifier level fusion, the combination being performed on the feature maps provided by two CNN structures, trained separately on B-mode ultrasound and on CEUS data;
- (3) Decision level fusion, computing the arithmetic or weighted mean of the probability values yielded by two completely separate CNNs, the first trained on B-mode ultrasound images and the second trained on CEUS images.

All these methods are detailed below:
- (1) Performing feature level fusion
- (2) Performing classifier level fusion
- (3) Performing decision level fusion
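The decision-level rules (arithmetic mean, weighted mean and, as also assessed in the results tables, element-wise multiplication of the class probabilities) reduce to a few lines; the function and variable names here are ours, not from the original implementation:

```python
import numpy as np

def fuse_decisions(p_bmode, p_ceus, method="arithmetic", w=0.5):
    """Combine per-class probability vectors from two separately trained CNNs.

    p_bmode, p_ceus: arrays of shape (n_samples, n_classes) holding softmax outputs
    of the B-mode ultrasound CNN and the CEUS CNN, respectively.
    """
    if method == "arithmetic":
        fused = (p_bmode + p_ceus) / 2
    elif method == "weighted":
        fused = w * p_bmode + (1 - w) * p_ceus
    elif method == "multiplication":
        fused = p_bmode * p_ceus
    else:
        raise ValueError(method)
    return fused / fused.sum(axis=1, keepdims=True)  # renormalize to probabilities

# Toy example: 2 samples, classes ordered as (HCC, PAR)
p_us = np.array([[0.7, 0.3], [0.4, 0.6]])
p_ceus = np.array([[0.9, 0.1], [0.3, 0.7]])
labels = fuse_decisions(p_us, p_ceus, "multiplication").argmax(axis=1)
print(labels)  # [0 1]
```

The multiplication rule rewards agreement between the two networks: a class receives a high fused score only if both modality-specific CNNs assign it substantial probability.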
2.2.3. Comparing the CNN Methods with Conventional Approaches
- (1) Texture analysis methods
- (2) Feature selection methods
- (a) The Correlation-based Feature Selection (CFS) method evaluates attribute subsets by computing a merit for each candidate subset with respect to the class parameter. Thus, the features in a subset were considered relevant if they were strongly correlated with the class parameter and weakly correlated with the other features [62]. CFS was employed together with the Best First Search algorithm, which generated an appropriate set of feature subsets to be assessed [63].
- (b) The Gain Ratio Attribute Evaluation method assessed the individual attributes a_i, i = 1, ..., m (m being the number of attributes), by associating each of them with a gain ratio. This ratio emphasized the decrease in the class entropy after observing the attribute a_i, relative to the entropy of a_i within the whole dataset [62]. This method was employed in conjunction with the Ranker method [63]. In the case of CFS, the feature subset with the highest merit was taken into account, while for the Gain Ratio Attribute Evaluation technique, the top-ranked attributes with a gain ratio above 0.15 were considered. The union of the relevant feature subsets provided by each individual method was finally retained, the corresponding values being provided as inputs to the conventional classification methods.
- (3) Conventional classification techniques
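The two selection criteria described in (2) admit standard closed forms (following their usual definitions, as in [62]); the notation below (subset S of k features, attribute a_i, class C) is introduced here for clarity:

```latex
\mathrm{Merit}_S \;=\; \frac{k\,\overline{r_{cf}}}{\sqrt{\,k + k(k-1)\,\overline{r_{ff}}\,}},
\qquad
\mathrm{GainRatio}(a_i) \;=\; \frac{H(C) - H(C \mid a_i)}{H(a_i)},
```

where \(\overline{r_{cf}}\) is the mean feature–class correlation within S, \(\overline{r_{ff}}\) is the mean feature–feature correlation within S, and \(H(\cdot)\) denotes entropy. The merit grows with class relevance (numerator) and shrinks with inter-feature redundancy (denominator); the gain ratio normalizes the information gain of a_i by its own entropy, so attributes with many distinct values are not unfairly favored.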
3. Experiments
4. Results
4.1. CNN Assessment on CEUS and B-Mode Ultrasound Images
4.2. CNN Assessment on Combined CEUS and B-Mode Ultrasound Images
4.2.1. Feature Level Fusion
4.2.2. Classifier Level Fusion
- (a) In the case of SqueezeNet, the following feature vectors were taken into account: the output of the last layer, “pool10”, of size 2; the feature vector obtained at the output of the previous layer, “relu_conv10”, of size 392; and the concatenation of these two feature vectors (size 394).
- (b) In the case of GoogLeNet, the feature vector of size 1024, obtained at the output of the “pool5-drop_7x7_s1” layer preceding the fully connected layer, was considered; the same held for the modified version of GoogLeNet, denoted GoogLeNetV1, whose corresponding output was of size 528.
- (c) In the case of ResNet18, the output of the “pool5” layer, of size 512, preceding the fully connected layer, was taken into account.
- (d) In the case of VGG16, the output of the “drop7” dropout layer, of size 4096, preceding the fully connected layer, was retained.
- (e) As for DenseNet201, the output of the “avg_pool” layer, of size 1920, preceding the fully connected layer, was considered.
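The classifier-level combinations assessed in the results (concatenation, arithmetic mean, weighted mean, multiplication) operate on these penultimate-layer activation vectors. A minimal sketch, with illustrative ResNet18-style dimensions (d = 512, as for “pool5”); the helper name is ours:

```python
import numpy as np

def fuse_features(f_bmode, f_ceus, method="concat", w=0.5):
    """Combine penultimate-layer activation vectors of two modality-specific CNNs.

    f_bmode, f_ceus: (n_samples, d) activations; the element-wise rules
    assume the two vectors have equal dimensions.
    """
    if method == "concat":
        return np.concatenate([f_bmode, f_ceus], axis=1)  # (n, 2d)
    if method == "mean":
        return (f_bmode + f_ceus) / 2                     # arithmetic mean
    if method == "weighted":
        return w * f_bmode + (1 - w) * f_ceus             # weighted mean
    if method == "multiplication":
        return f_bmode * f_ceus                           # element-wise product
    raise ValueError(method)

rng = np.random.default_rng(1)
f_us, f_ceus = rng.standard_normal((10, 512)), rng.standard_normal((10, 512))
print(fuse_features(f_us, f_ceus, "concat").shape)  # (10, 1024)
```

In the experiments, the fused vectors were in some configurations additionally projected with KPCA (linear or 3rd-degree polynomial kernel) before being passed to the final classifier; scikit-learn's KernelPCA would be a natural stand-in for that step.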
4.2.3. Decision Level Fusion
4.3. Comparisons with Conventional Approaches
4.3.1. Texture Analysis and Recognition on CEUS, and B-Mode Ultrasound Images
4.3.2. Texture Analysis and Recognition on Combined CEUS and B-Mode Ultrasound Images
5. Discussion
Comparisons with Other State-of-the-Art Results
6. Conclusions
Author Contributions
Funding
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
ANN | Artificial Neural Networks |
AUC | Area Under Curve |
BN | Batch Normalization |
CT | Computed Tomography |
MRI | Magnetic Resonance Imaging |
CFS | Correlation-based Feature Selection |
CNN | Convolutional Neural Network(s) |
DCCA | Deep Canonical Correlation Analysis |
DICOM | Digital Imaging and Communications in Medicine |
GCM | Generalized Co-occurrence Matrix |
GLCM | Gray Level Co-occurrence Matrix |
HCC | Hepatocellular Carcinoma |
KPCA | Kernel Principal Component Analysis |
LDA | Linear Discriminant Analysis |
MLP | Multi-Layer Perceptron |
PAR | Cirrhotic Parenchyma (on which HCC had evolved) |
PCA | Principal Component Analysis |
RF | Random Forest |
SMO | Sequential Minimal Optimization |
SVM | Support Vector Machines |
Appendix A
Appendix B
References
- Bosch, F.; Ribes, J.; Cleries, R.; Diaz, M. Epidemiology of hepatocellular carcinoma. Clin. Liver Dis. 2005, 9, 191–211. [Google Scholar] [CrossRef] [PubMed]
- Sherman, M. Approaches to the diagnosis of hepatocellular carcinoma. Curr. Gastroenterol. Rep. 2005, 7, 11–18. [Google Scholar] [CrossRef]
- Leoni, S.; Serio, I.; Pecorelli, A.; Marinelli, S.; Bolondi, L. Contrast-enhanced ultrasound in liver cancer. Hepatic Oncol. 2015, 2, 51–62. [Google Scholar] [CrossRef] [PubMed]
- Ciobanu, L.; Szora, A.T.; Badea, A.F.; Suciu, M.; Badea, R. Harmonic Contrast-Enhanced Ultrasound (CEUS) of Kidney Tumors; IntechOpen: Rijeka, Croatia, 2018. [Google Scholar]
- Bartolotta, T.; Taibbi, A.; Midiri, M.; Lagalla, R. Contrast-enhanced ultrasound of hepatocellular carcinoma: Where do we stand? Ultrasonography 2019, 38, 200–214. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Lelandais, B.; Gardin, I.; Mouchard, L.; Vera, P.; Ruan, S. Dealing with uncertainty and imprecision in image segmentation using belief function theory. Int. J. Approx. Reason. 2014, 55, 376–387. [Google Scholar] [CrossRef]
- Bar-Zion, A.D.; Tremblay-Darveau, C.; Yin, M.; Adam, D.; Foster, F.S. Denoising of Contrast-Enhanced Ultrasound Cine Sequences Based on a Multiplicative Model. IEEE Trans. Biomed. Eng. 2015, 62, 1969–1980. [Google Scholar] [CrossRef]
- Gong, P.; Song, P.; Chen, S. Improved Contrast-Enhanced Ultrasound Imaging With Multiplane-Wave Imaging. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2018, 65, 178–187. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Mitrea, D.; Mitrea, P.; Nedevschi, S.; Badea, R.; Lupsor, M.; Socaciu, M.; Golea, A.; Hagiu, C.; Ciobanu, L. Abdominal Tumor Characterization and Recognition Using Superior-Order Cooccurrence Matrices, Based on Ultrasound Images. Comput. Math. Methods Med. 2012. [Google Scholar] [CrossRef]
- Versaci, M.; Morabito, F.C.; Angiulli, G. Adaptive Image Contrast Enhancement by Computing Distances into a 4-Dimensional Fuzzy Unit Hypercube. IEEE Access 2017, 5, 26922–26931. [Google Scholar] [CrossRef]
- Shakeri, M.; Dezfoulian, M.; Khotanlou, H.; Barati, A.; Masoumi, Y. Image contrast enhancement using fuzzy clustering with adaptive cluster parameter and sub-histogram equalization. Digit. Signal Process. 2017, 62, 224–237. [Google Scholar] [CrossRef]
- Zheng, Q.; Yang, L.; Zheng, B.; Li, J.; Guo, K.; Liang, Y. Artificial intelligence performance in detecting tumor metastasis from medical radiology imaging: A systematic review and meta-analysis. EClinicalMedicine 2021, 31, 1–25. [Google Scholar] [CrossRef] [PubMed]
- Virmani, J.; Kumar, V.; Kalra, N.; Khandelwal, N. SVM-Based Characterization of Liver Ultrasound Images Using Wavelet Packet Texture Descriptors. J. Digit. Imaging Off. J. Soc. Comput. Appl. Radiol. 2012, 26. [Google Scholar] [CrossRef] [Green Version]
- Liu, X.; Song, J.; Wang, S.; Zhao, J.; Chen, Y. Learning to Diagnose Cirrhosis with Liver Capsule Guided Ultrasound Image Classification. Sensors 2017, 17, 149. [Google Scholar] [CrossRef] [Green Version]
- Chikui, T.; Tokumori, K.; Yoshiura, K.; Oobu, K.; Nakamura, S.; Nakamura, K. Sonographic texture characterization of salivary gland tumors by fractal analyses. Ultrasound Med. Biol. 2005, 31, 1297–1304. [Google Scholar] [CrossRef] [PubMed]
- Madabhushi, A.; Feldman, M.; Metaxas, D.; Tomaszeweski, J.; Chute, D. Automated Detection of Prostatic Adenocarcinoma from High-Resolution Ex Vivo MRI. IEEE Trans. Med. Imaging 2006, 24, 1611–1625. [Google Scholar] [CrossRef]
- Mohd Khuzi, A.; Besar, R.; Zaki, W.; Ahmad, N. Identification of masses in digital mammogram using gray level co-occurrence matrices. Biomed. Imaging Interv. J. 2009, 5, e17. [Google Scholar] [CrossRef] [PubMed]
- Yoshida, H.; Casalino, D.; Keserci, B.; Coskun, A.; Ozturk, O.; Savranlar, A. Wavelet-packet-based texture analysis for differentiation between benign and malignant liver tumours in ultrasound images. Phys. Med. Biol. 2003, 48, 3735–3753. [Google Scholar] [CrossRef] [PubMed]
- Sujana, H.; Swarnamani, S. Application of Artificial Neural Networks for the classification of liver lesions by texture parameters. Ultrasound Med. Biol. 1996, 22, 1177–1181. [Google Scholar] [CrossRef]
- Sharma, S. Stacked Autoencoders for Medical Image Search. In Proceedings of the International Symposium on Visual Computing, Las Vegas, NV, USA, 12–14 December 2016; pp. 45–54. [Google Scholar]
- Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.; van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef] [Green Version]
- Liu, S.; Wang, Y.; Yang, X.; Lei, B.; Liu, L.; Li, S.X.; Ni, D. Deep Learning in Medical Ultrasound Analysis: A Review. Engineering 2019, 5, 261–275. [Google Scholar] [CrossRef]
- Le, N.Q.K.; Yapp, E.K.Y.; Nagasundaram, N.; Yeh, H.Y. Classifying Promoters by Interpreting the Hidden Information of DNA Sequences via Deep Learning and Combination of Continuous FastText N-Grams. Front. Bioeng. Biotechnol. 2019, 7, 305. [Google Scholar] [CrossRef] [Green Version]
- Le, N.; Nguyen, B. Prediction of FMN Binding Sites in Electron Transport Chains based on 2-D CNN and PSSM Profiles. IEEE/ACM Trans. Comput. Biol. Bioinform. 2019, 1. [Google Scholar] [CrossRef]
- Jiao, L.; Zhang, F.; Liu, F.; Yang, S.; Li, L.; Feng, Z.; Qu, R. A Survey of Deep Learning-Based Object Detection. IEEE Access 2019, 7, 128837–128868. [Google Scholar] [CrossRef]
- Guo, Z.; Li, X.; Huang, H.; Guo, N.; Li, Q. Medical Image Segmentation Based on Multi-Modal Convolutional Neural Network: Study on Image Fusion Schemes. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging, Washington, DC, USA, 4–7 April 2018; pp. 903–907. [Google Scholar]
- Mishkin, D.; Sergievskiy, N.; Matas, J. Systematic evaluation of CNN advances on the ImageNet. arXiv 2016, arXiv:1606.02228v1. [Google Scholar]
- Hui, Z.; Liu, C.; Fankun, M. Deep learning Radiomics of shear wave elastography significantly improved diagnostic performance for assessing liver fibrosis in chronic hepatitis B: A prospective multicentre study. Gut 2019, 68, 729–741. [Google Scholar] [CrossRef]
- Zhantao, C.; Lixin, D.; Guowu, Y. Breast Tumor Detection in Ultrasound Images Using Deep Learning. In International Workshop on Patch-based Techniques in Medical Imaging, Lecture Notes in Computer Science; Springer: Berlin, Germany, 2017. [Google Scholar]
- Acharya, U.R.; Koh, J.E.W.; Hagiwara, Y.; Tan, J.H.; Gertych, A.; Vijayananthan, A.; Yaakup, N.A.; Abdullah, B.J.J.; Fabell, M.K.B.M.; Yeong, C.H. Automated Diagnosis of Focal Liver Lesions Using Bidirectional Empirical Mode Decomposition Features. Comput. Biol. Med. 2018, 94, 11–18. [Google Scholar] [CrossRef] [PubMed]
- Azer, S. Deep learning with convolutional neural networks for identification of liver masses and hepatocellular carcinoma: A systematic review. World J. Gastrointest. Oncol. 2019, 11, 1218–1230. [Google Scholar] [CrossRef]
- Vivanti, R.; Epbrat, A. Automatic liver tumor segmentation in follow-up CT studies using convolutional neural networks. In Proceedings of the Patch-Based Methods in Medical Image Processing Workshop, Munich, Germany, 9 October 2015; pp. 45–54. [Google Scholar]
- Li, W.; Cao, P. Pulmonary Nodule Classification with Deep Convolutional Neural Networks on Computed Tomography Images. Comput. Math. Methods Med. 2016, 2016, 7. [Google Scholar] [CrossRef]
- Guo, L.; Wang, D.; Xu, H.; Qian, Y.; Wang, K.; Zheng, X. CEUS-based classification of liver tumors with deep canonical correlation analysis and multi-kernel learning. In Proceedings of the 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Jeju, Korea, 11–15 July 2017; pp. 1748–1751. [Google Scholar]
- Paire, J.; Sauvage, V.; Albouy-Kissi, A.; Ladam Marcus, V. Fast CEUS Image Segmentation based on Self Organizing Maps. Prog. Biomed. Opt. Imaging Proc. SPIE 2014, 9034, 903412. [Google Scholar]
- Wan, P.; Chen, F.; Zhu, X.; Liu, C.; Zhang, Y.; Kong, W.; Zhang, D. CEUS-Net: Lesion Segmentation in Dynamic Contrast-Enhanced Ultrasound with Feature-Reweighted Attention Mechanism. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020. [Google Scholar]
- Zhou, B.; Yang, X.; Liu, T. Artificial Intelligence in Quantitative Ultrasound Imaging: A Review. arXiv 2020, arXiv:2003.11658. [Google Scholar]
- Duda, D.; Kretowski, M.; Bezy-Vendling, J. Computer aided diagnosis of liver tumors based on multi-image texture analysis of contrast-enhanced CT. Selection of the most appropriate texture features. Stud. Logic Gramm. Rhetor. 2013, 35, 49–70. [Google Scholar] [CrossRef] [Green Version]
- Kondo, S.; Takagi, K.; Nishida, M.; Iwai, T.; Kudo, Y.; Ogawa, K.; Kamiyama, T.; Shibuya, H.; Kahata, K.; Shimizu, C. Computer-Aided Diagnosis of Focal Liver Lesions Using Contrast-Enhanced Ultrasonography With Perflubutane Microbubbles. IEEE Trans. Med Imaging 2017, 36, 1427–1437. [Google Scholar] [CrossRef]
- Pan, F.; Huang, Q.; Li, X. Classification of liver tumors with CEUS based on 3D-CNN. In Proceedings of the 2019 IEEE 4th International Conference on Advanced Robotics and Mechatronics (ICARM), Toyonaka, Japan, 3–5 July 2019; pp. 495–506. [Google Scholar]
- Liu, F.; Liu, D.; Wang, K.; Xie, X.; Kuang, L.; Huang, G.; Peng, B.; Wang, Y.; Lin, M.; Tian, J.; et al. Deep Learning Radiomics Based on Contrast-Enhanced Ultrasound Might Optimize Curative Treatments for Very-Early or Early Stage Hepatocellular Carcinoma Patients. Liver Cancer 2020, 9, 397–413. [Google Scholar] [CrossRef] [PubMed]
- Cantisani, V.; Calliada, F. Liver metastases Contrast-enhanced ultrasound compared with computed tomography and magnetic resonance. World J. Gastroenterol. 2014, 20, 9998–10007. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Yang, Z.; Gong, X.; Guo, Y.; Liu, W. A Temporal Sequence Dual-Branch Network for Classifying Hybrid Ultrasound Data of Breast Cancer. IEEE Access 2020, 8, 82688–82699. [Google Scholar] [CrossRef]
- Pradhan, P.; Kohler, K.; Guo, S.; Rosin, O.; Popp, J.; Niendorf, A.; Bocklitz, T. Data Fusion of Histological and Immunohistochemical Image Data for Breast Cancer Diagnostics using Transfer Learning. In Proceedings of the 10th International Conference on Pattern Recognition Applications and Methods, Online Streaming, 4–6 February 2021; Scitepress DigitalLibrary: Setubal, Portugal, 2021; pp. 495–506. [Google Scholar]
- Chinmayi, P.; Agilandeeswari, L.; Prabukumar, M. Combining Multiple Sources of Knowledge in Deep CNNs for Action Recognition. In IEEE Winter Conference on Applications of Computer Vision (WACV 2016); Abraham, A., Cherukuri, A.K., Madureira, A.M., Muda, A.K., Eds.; IEEE: Piscataway, NJ, USA, 2016; pp. 1–8. [Google Scholar]
- Sun, S.; An, N.; Zhao, X.; Tan, M. A PCA–CCA network for RGB-D object recognitions. Int. J. Adv. Robot. Syst. 2018, 15, 1–12. [Google Scholar] [CrossRef]
- Liu, Y.; Durlofsky, L. 3D CNN-PCA: A Deep-Learning-Based Parameterization for Complex Geomodels. arXiv 2020, arXiv:2007.08478, 1–29. [Google Scholar]
- Mitrea, D.; Mendoiu, C.; Mitrea, P.; Nedevschi, S.; Lupsor-Platon, M.; Rotaru, M.; Badea, R. HCC Recognition within B-mode and CEUS Images using Traditional and Deep Learning Techniques. In Proceedings of the 7th International Conference on Advancements of Medicine and Health Care through Technology, Cluj-Napoca, Romania, 12–15 October 2020; IFMBE Proceedings Series; Springer: Berlin, Germany, 2020; pp. 1–6. [Google Scholar]
- Mitrea, D.; Nedevschi, S.; Badea, R. Automatic Recognition of the Hepatocellular Carcinoma from Ultrasound Images using Complex Textural Microstructure Co-Occurrence Matrices (CTMCM). In Proceedings of the 7th International Conference on Pattern Recognition Applications and Methods-Volume 1: ICPRAM. INSTICC, Funchal, Portugal, 16–18 January 2018; SciTePress: Setubal, Portugal, 2018; pp. 178–189. [Google Scholar] [CrossRef]
- Brehar, R.; Mitrea, D.A.; Vancea, F.; Marita, T.; Nedevschi, S.; Lupsor-Platon, M.; Rotaru, M.; Badea, R. Comparison of Deep-Learning and Conventional Machine-Learning Methods for the Automatic Recognition of the Hepatocellular Carcinoma Areas from Ultrasound Images. Sensors 2020, 20, 3085. [Google Scholar] [CrossRef]
- LISA Lab, University of Montreal. Deep Learning Tutorial; Release 0.1; University of Montreal: Montreal, QC, Canada, 2015. [Google Scholar]
- Iandola, F.N.; Moskewicz, M.W.; Ashraf, K.; Han, S.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50× fewer parameters and <1 MB model size. arXiv 2016, arXiv:1602.07360. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2015, arXiv:1409.1556. [Google Scholar]
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Huang, G.; Liu, Z.; Weinberger, K.Q. Densely Connected Convolutional Networks. arXiv 2016, arXiv:1608.06993. [Google Scholar]
- Van Der Maaten, L.; Postma, E.; Van den Herik, J. Dimensionality reduction: A comparative review. J. Mach. Learn. Res. 2009, 10, 66–71. [Google Scholar]
- Dutta, A.; Gupta, A.; Zisserman, A. VGG Image Annotator (VIA). Version 2.0.9. 2016. Available online: http://www.robots.ox.ac.uk/vgg/software/via/ (accessed on 15 May 2020).
- Chatterjee, H.S. Various Types of Convolutional Neural Network. 2019. Available online: https://towardsdatascience.com/various-types-of-convolutional-neural-network-8b00c9a08a1b (accessed on 5 March 2020).
- Materka, A.; Strzelecki, M. Texture Analysis Methods—A Review; Technical Report; Institute of Electronics, Technical University of Lodz: Lodz, Poland, 1998. [Google Scholar]
- Meyer-Base, A. Pattern Recognition for Medical Imaging; Elsevier: Amsterdam, The Netherlands, 2009. [Google Scholar]
- Hall, M. Benchmarking attribute selection techniques for discrete class data mining. IEEE Trans. Knowl. Data Eng. 2003, 15, 1–16. [Google Scholar] [CrossRef] [Green Version]
- Waikato Environment for Knowledge Analysis (Weka 3). 2020. Available online: http://www.cs.waikato.ac.nz/ml/weka/ (accessed on 20 December 2020).
- Deep Learning Toolbox for Matlab. 2020. Available online: https://it.mathworks.com/help/deeplearning/index.html (accessed on 22 December 2020).
- Kitayama, M. Matlab-Kernel-PCA Toolbox. 2017. Available online: https://it.mathworks.com/matlabcentral/fileexchange/71647-matlab-kernel-pca (accessed on 11 November 2020).
CNN | Acc (%) | Sens (%) | Spec (%) | AUC (%) |
---|---|---|---|---|
SqueezeNet | 85.7 | 80.5 | 91.4 | 86.38 |
GoogLeNet | 86.2 | 86.4 | 86.1 | 86.25 |
GoogLeNetV1 | 86.7 | 80.9 | 91.5 | 86.23 |
ResNet | 91.6 | 93.5 | 90.5 | 92.04 |
VGGNet | 87.4 | 85.8 | 88.9 | 87.39 |
DenseNet | 90.9 | 86.9 | 94.1 | 90.71 |
CNN | Acc (%) | Sens (%) | Spec (%) | AUC (%) |
---|---|---|---|---|
SqueezeNet | 82.9 | 91.2 | 80.7 | 86.35 |
GoogLeNet | 84.4 | 89.3 | 82.9 | 86.25 |
GoogLeNetV1 | 85.8 | 67.7 | 94.3 | 83.86 |
ResNet | 90.5 | 84.3 | 94 | 89.52 |
VGGNet | 87.2 | 95.2 | 84.7 | 90.4 |
DenseNet | 90.3 | 84.3 | 93.5 | 89.23 |
Fusion | CNN | Acc (%) | Sens (%) | Spec (%) | AUC (%) |
---|---|---|---|---|---|
Arithmetic mean | SqueezeNet | 87 | 88.7 | 85.2 | 87
Arithmetic mean | GoogLeNet | 87.6 | 93.6 | 80.9 | 87.86
Arithmetic mean | GoogLeNetV1 | 87.9 | 84.5 | 85.1 | 84.80
Arithmetic mean | ResNet | 92.6 | 91.2 | 94.2 | 92.74
Arithmetic mean | VGGNet | 91.6 | 89 | 95 | 92.15
Arithmetic mean | DenseNet | 93.2 | 93.4 | 93 | 93.2
Weighted mean | SqueezeNet | 87.5 | 88.1 | 87.1 | 87.6
Weighted mean | GoogLeNet | 88.9 | 88.1 | 89.7 | 88.9
Weighted mean | GoogLeNetV1 | 88.4 | 67.1 | 96.3 | 84.65
Weighted mean | ResNet | 90.9 | 96.1 | 86.6 | 91.73
Weighted mean | VGGNet | 92.3 | 89.1 | 94.2 | 91.76
Weighted mean | DenseNet | 92.6 | 95.9 | 89.8 | 93.01
Multiplication | SqueezeNet | 88.8 | 89.5 | 88.2 | 88.6
Multiplication | GoogLeNet | 88.9 | 91.2 | 87.1 | 89.22
Multiplication | GoogLeNetV1 | 88.9 | 70.4 | 95.3 | 85.02
Multiplication | ResNet | 94.5 | 92.5 | 96.1 | 94.36
Multiplication | VGGNet | 92.3 | 88.7 | 95.4 | 92.24
Multiplication | DenseNet | 94.7 | 96.4 | 93.3 | 94.89
CNN (Layer) | Fusion | Acc (%) | Sens (%) | Spec (%) | AUC (%) |
---|---|---|---|---|---|
SqueezeNet “pool10” | Concatenation | 93.77 | 91.34 | 95.81 | 93.66
SqueezeNet “pool10” | Arithm. mean | 92.57 | 90.55 | 94.27 | 92.47
SqueezeNet “pool10” | Weight. mean | 94.13 | 91.86 | 96.04 | 94.03
SqueezeNet “pool10” | Multiplication | 88.38 | 80.58 | 94.93 | 88.55
SqueezeNet “relu_conv10” | Concatenation | 88.02 | 87.93 | 88.1 | 88.02
SqueezeNet “relu_conv10” | Arithm. mean | 88.62 | 86.09 | 90.75 | 88.50
SqueezeNet “relu_conv10” | Weight. mean | 89.82 | 91.86 | 88.11 | 90.04
SqueezeNet “relu_conv10” | Multiplication | 83.71 | 82.41 | 84.80 | 83.63
SqueezeNet “relu_conv10” | KPCA (Poly 3rd dgr) | 87.78 | 80.84 | 93.61 | 87.84
SqueezeNet “relu_conv10” + “pool10” | Concatenation | 87.90 | 88.45 | 87.44 | 87.95
SqueezeNet “relu_conv10” + “pool10” | Arithm. mean | 90.06 | 89.50 | 90.53 | 90.02
SqueezeNet “relu_conv10” + “pool10” | Weight. mean | 89.46 | 87.66 | 90.97 | 89.36
SqueezeNet “relu_conv10” + “pool10” | Multiplication | 84.79 | 83.99 | 85.46 | 84.73
SqueezeNet “relu_conv10” + “pool10” | KPCA (Poly 3rd dgr) | 89.46 | 87.93 | 90.75 | 89.37
GoogLeNet “pool5_drop_7x7_s1” | Concatenation | 90.1 | 86.3 | 96 | 91.2
GoogLeNet “pool5_drop_7x7_s1” | Arithm. mean | 89.1 | 88.2 | 90.3 | 89.2
GoogLeNet “pool5_drop_7x7_s1” | Weight. mean | 91.6 | 95.6 | 86.9 | 82.3
GoogLeNet “pool5_drop_7x7_s1” | Multiplication | 84.6 | 91 | 76.9 | 84.4
GoogLeNet “pool5_drop_7x7_s1” | KPCA (Linear) | 83.7 | 81.1 | 85.9 | 83.2
GoogLeNetV1 “pool5_drop_7x7_s1” | Concatenation | 91.86 | 91.08 | 92.51 | 91.8
GoogLeNetV1 “pool5_drop_7x7_s1” | Arithm. mean | 92.1 | 91.34 | 92.73 | 93.75
GoogLeNetV1 “pool5_drop_7x7_s1” | Weight. mean | 90.06 | 90.03 | 90.09 | 90.06
GoogLeNetV1 “pool5_drop_7x7_s1” | Multiplication | 91.74 | 90.29 | 92.95 | 91.89
GoogLeNetV1 “pool5_drop_7x7_s1” | KPCA (Linear) | 75.3 | 80.3 | 71.0 | 65.79
ResNet “pool5” | Concatenation | 92.2 | 90.9 | 93.9 | 92.1
ResNet “pool5” | Arithm. mean | 87.3 | 89.9 | 84.5 | 87.3
ResNet “pool5” | Weight. mean | 81.9 | 83.3 | 80.3 | 82.3
ResNet “pool5” | Multiplication | 88.9 | 88.8 | 88.9 | 89.2
ResNet “pool5” | KPCA (Linear) | 76.9 | 80.1 | 74.2 | 77.1
VGGNet “drop7” | Concatenation | 92.8 | 96.9 | 87.9 | 92.75
VGGNet “drop7” | Arithm. mean | 93.9 | 96 | 91.3 | 93.75
VGGNet “drop7” | Weight. mean | 92 | 94.1 | 89.5 | 91.89
VGGNet “drop7” | Multiplication | 92.46 | 92.39 | 92.51 | 92.45
VGGNet “drop7” | KPCA (Linear) | 90.7 | 90.8 | 90.5 | 90.65
DenseNet “avg_pool” | Concatenation | 87.7 | 87.9 | 87.4 | 87.80
DenseNet “avg_pool” | Arithm. mean | 88.6 | 91 | 85.8 | 88.50
DenseNet “avg_pool” | Weight. mean | 80.23 | 60.12 | 83.45 | 73.04
DenseNet “avg_pool” | Multiplication | 81.4 | 80.19 | 82.5 | 81.36
DenseNet “avg_pool” | KPCA (Linear) | 75.9 | 72.3 | 72.8 | 72.55
DenseNet + ResNet “avg_pool” + “pool5” | Concatenation | 97.25 | 96.85 | 97.58 | 97.22
DenseNet + ResNet “avg_pool” + “pool5” | Arithm. mean | 85.12 | 84.25 | 80.65 | 95.75
DenseNet + ResNet “avg_pool” + “pool5” | Weight. mean | 80.23 | 60.12 | 83.45 | 73.04
DenseNet + ResNet “avg_pool” + “pool5” | Multiplication | 81.4 | 80.19 | 82.5 | 81.36
DenseNet + ResNet “avg_pool” + “pool5” | KPCA (Linear) | 75.9 | 72.3 | 72.8 | 72.55
ResNet + DenseNet “pool5” + “avg_pool” | Concatenation | 96.53 | 97.11 | 96.04 | 96.58
ResNet + DenseNet “pool5” + “avg_pool” | Arithm. mean | 80.26 | 84.78 | 63.91 | 93.75
ResNet + DenseNet “pool5” + “avg_pool” | Weight. mean | 80.11 | 78.47 | 81.12 | 79.82
ResNet + DenseNet “pool5” + “avg_pool” | Multiplication | 80.6 | 78.8 | 81.53 | 80.19
ResNet + DenseNet “pool5” + “avg_pool” | KPCA (Linear) | 73.1 | 74.2 | 73.1 | 73.65
Fusion | CNN | Acc (%) | Sens (%) | Spec (%) | AUC (%) |
---|---|---|---|---|---|
Arithmetic mean | SqueezeNet | 92.69 | 91.08 | 94.05 | 92.6
Arithmetic mean | GoogLeNet | 93.89 | 91.08 | 96.26 | 93.78
Arithmetic mean | GoogLeNetV1 | 86.19 | 65.35 | 98 | 85.45
Arithmetic mean | ResNet | 97.37 | 96.33 | 98.24 | 97.3
Arithmetic mean | VGGNet | 95.21 | 92.13 | 97.8 | 95.11
Arithmetic mean | DenseNet | 97.49 | 96.59 | 98.24 | 97.43
Arithmetic mean | DenseNet + ResNet | 98.20 | 98.16 | 98.24 | 98.20
Arithmetic mean | ResNet + DenseNet | 96.77 | 95.01 | 98.24 | 96.67
Weighted mean | SqueezeNet | 91.62 | 91.08 | 92.07 | 91.58
Weighted mean | GoogLeNet | 92.10 | 89.76 | 94.05 | 91.99
Weighted mean | GoogLeNetV1 | 85.45 | 56.73 | 99.54 | 82.10
Weighted mean | ResNet | 95.81 | 95.01 | 96.48 | 95.75
Weighted mean | VGGNet | 92.93 | 91.08 | 94.49 | 92.83
Weighted mean | DenseNet | 96.77 | 97.64 | 96.04 | 96.85
Weighted mean | DenseNet + ResNet | 92.81 | 95.8 | 90.31 | 93.18
Weighted mean | ResNet + DenseNet | 95.45 | 93.44 | 97.14 | 95.35
Classifier | Acc (%) | Sens (%) | Spec (%) | AUC (%) |
---|---|---|---|---|
RF | 79.25 | 90.7 | 66.7 | 88.1 |
SVM(poly 3rd dgr) | 79.3 | 89.1 | 69.8 | 79.3 |
MLP | 79.5 | 88.7 | 70.3 | 86.4 |
AdaBoost + J48 | 82.1 | 91.5 | 72.5 | 87 |
Classifier | Acc (%) | Sens (%) | Spec (%) | AUC (%) |
---|---|---|---|---|
RF | 75.1 | 90.1 | 60.2 | 83 |
SVM(poly 5th dgr) | 65.3 | 88.2 | 50.2 | 65.9 |
MLP | 73.1 | 85.4 | 60.3 | 75.2 |
AdaBoost + J48 | 73.4 | 87.2 | 59.7 | 77.8 |
Classifier | Acc (%) | Sens (%) | Spec (%) | AUC (%) |
---|---|---|---|---|
RF | 84.35 | 95.6 | 73.1 | 95.1 |
SVM(poly 1st dgr) | 83.11 | 94.0 | 72.1 | 83.2 |
MLP(a) | 81.5 | 87 | 76.2 | 90.2 |
AdaBoost + J48 | 87.1 | 98.2 | 74.1 | 93.3 |
Img. Modality | CNN Classifier | Acc (%) | Sens (%) | Spec (%) | AUC (%) |
---|---|---|---|---|---|
CEUS | ResNet | 92.02 | 89.52 | 94.12 | 91.91 |
B-mode US | ResNet | 90.95 | 88.07 | 92.43 | 90.33 |
Feature level fusion | DenseNet (multiplication) | 93.7 | 97.2 | 90.8 | 94.18 |
Classifier level fusion | DenseNet + ResNet (concatenation) | 97.21 | 95.20 | 98.90 | 97.11 |
Decision level fusion | DenseNet + ResNet (arithmetic mean) | 98 | 96.94 | 98.90 | 97.94 |
Multimodal Classifier | Acc (%) | Sens (%) | Spec (%) | AUC (%) |
---|---|---|---|---|
InceptionV3 + PCA-LDA [44] | 73.4 | 66.1 | 79.5 | 73.22 |
ResNet50 + PCA-LDA [44] | 78.7 | 82.2 | 75.8 | 79.12 |
VGG16 + PCA-LDA [44] | 91.7 | 91.9 | 91.6 | 91.75 |
Textural Features + SAE [48] | 90.08 | 85.1 | 94.2 | 89.9 |
DenseNet201, feature-level multiplication (current work) | 94.7 | 96.4 | 93.3 | 94.89
DenseNet201 + ResNet18, classifier-level concatenation (current work) | 97.25 | 96.85 | 97.58 | 97.22
DenseNet201 + ResNet18, decision-level arithmetic mean (current work) | 98.25 | 98.16 | 98.24 | 98.2
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Mitrea, D.; Badea, R.; Mitrea, P.; Brad, S.; Nedevschi, S. Hepatocellular Carcinoma Automatic Diagnosis within CEUS and B-Mode Ultrasound Images Using Advanced Machine Learning Methods. Sensors 2021, 21, 2202. https://doi.org/10.3390/s21062202