Early Recurrence Prediction of Hepatocellular Carcinoma Using Deep Learning Frameworks with Multi-Task Pre-Training
Abstract
1. Introduction
- (1) We propose a simple but effective self-supervised learning method, called Phase Shuffle Prediction, which focuses on feature representation within multi-phase CT images.
- (2) To further enhance pre-training performance, we propose a novel multi-task self-supervised feature learning approach that combines the newly proposed phase shuffle prediction with our previously proposed case discrimination [24], which focuses on feature representation across different CT images. Together, these two pretext tasks yield a representation that captures information both within and between images, allowing comprehensive features relevant to liver cancer to be extracted (a minimal code sketch of the combined pre-training is given after this list).
- (3) The effectiveness of the proposed method is demonstrated not only for the classification of FLLs but also for the prediction of ER of HCC. To the best of our knowledge, this is the first application of self-supervised learning to the prediction of ER of HCC from multi-phase CT images.
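As a rough illustration of how the two pretext tasks can be combined, the sketch below shows a PyTorch-style setup with a shared ResNet-18 encoder, a 6-way phase-shuffle-prediction head, and a case-discrimination head. The class and function names (`MultiTaskPretrainNet`, `shuffle_phases`, `pretrain_step`), the three-phase channel stacking, and the weighting factor `lam` are illustrative assumptions, not the paper's exact implementation; in particular, the case-discrimination branch is simplified here to a parametric classifier over case IDs, whereas [24] describes the actual formulation.

```python
# Minimal multi-task pre-training sketch (hypothetical names, assumed setup:
# three CT phases stacked as channels, ResNet-18 backbone, PyTorch).
import itertools
import random

import torch
import torch.nn as nn
from torchvision.models import resnet18

PHASE_PERMUTATIONS = list(itertools.permutations(range(3)))  # 3! = 6 orderings


def shuffle_phases(x):
    """Randomly permute the phase (channel) order of a 3-phase slice.

    x: tensor of shape (3, H, W). Returns the shuffled slice and the index
    of the applied permutation (the phase-shuffle-prediction label).
    """
    perm_idx = random.randrange(len(PHASE_PERMUTATIONS))
    perm = torch.tensor(PHASE_PERMUTATIONS[perm_idx])
    return x[perm], perm_idx


class MultiTaskPretrainNet(nn.Module):
    """Shared encoder with two pretext heads: (1) phase shuffle prediction
    (6-way) and (2) a simplified case-level discrimination head with one
    class per training case."""

    def __init__(self, num_cases):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()          # keep the 512-d feature vector
        self.encoder = backbone
        self.shuffle_head = nn.Linear(512, len(PHASE_PERMUTATIONS))
        self.case_head = nn.Linear(512, num_cases)

    def forward(self, x):
        feat = self.encoder(x)
        return self.shuffle_head(feat), self.case_head(feat)


def pretrain_step(model, optimizer, slices, case_ids, lam=1.0):
    """One multi-task pre-training step on a batch of 3-phase slices.

    slices: (B, 3, H, W) tensor; case_ids: (B,) tensor of case labels.
    lam weights the case-discrimination term (an assumed hyper-parameter).
    """
    shuffled, shuffle_labels = zip(*(shuffle_phases(s) for s in slices))
    shuffled = torch.stack(shuffled)
    shuffle_labels = torch.tensor(shuffle_labels)

    shuffle_logits, case_logits = model(shuffled)
    loss = (nn.functional.cross_entropy(shuffle_logits, shuffle_labels)
            + lam * nn.functional.cross_entropy(case_logits, case_ids))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the two cross-entropy terms are simply summed; the loss formulation actually used for pre-training is described in Section 3.2.3.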
2. Related Work
2.1. Pre-Trained ImageNet Model
2.2. Self-Supervised Learning
- (1) Pre-training a deep neural network on a pretext task using an unannotated target dataset.
- (2) Fine-tuning the pre-trained model on the main task using an annotated target dataset (a minimal fine-tuning sketch follows this list).
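The following is a minimal sketch of step (2), assuming the encoder weights produced by a pre-training stage like the one sketched in Section 1 have been saved to disk. The file name, helper name, class counts, and learning rate are illustrative assumptions, not the authors' configuration.

```python
# Fine-tuning sketch: load a self-supervised encoder and attach a new task
# head (e.g., 2 classes for ER vs. non-ER, or 4 classes for FLL types).
import torch
import torch.nn as nn
from torchvision.models import resnet18


def build_finetune_model(pretrained_encoder_path, num_classes=2):
    backbone = resnet18(weights=None)
    backbone.fc = nn.Identity()
    state = torch.load(pretrained_encoder_path, map_location="cpu")
    backbone.load_state_dict(state, strict=False)  # pretext heads are dropped
    return nn.Sequential(backbone, nn.Linear(512, num_classes))


# Usage sketch: all layers are updated with a small learning rate while
# training on the annotated target dataset.
model = build_finetune_model("pretrained_encoder.pth")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
```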
3. Methods
3.1. Overview of the Proposed Method
3.2. Multi-Task Pre-Training
3.2.1. Phase Shuffle Prediction Task
3.2.2. Case-Level Discrimination Task
3.2.3. Loss Function for Pre-Training
3.3. Target Task (Fine-Tuning)
4. Experiment
4.1. Task 1: Prediction of Early Recurrence
4.1.1. Data
4.1.2. Implementations
4.1.3. Results
4.2. Task 2: Classification of Focal Liver Lesions
4.2.1. Data
4.2.2. Implementations
4.2.3. Results
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Elsayes, K.M.; Kielar, A.Z.; Agrons, M.M.; Szklaruk, J.; Tang, A.; Bashir, M.R.; Mitchell, D.G.; Do, R.K.; Fowler, K.J.; Chernyak, V.; et al. Liver Imaging Reporting and Data System: An expert consensus statement. J. Hepatocell. Carcinoma 2017, 4, 29–39.
2. Zhu, R.X.; Seto, W.K.; Lai, C.L.; Yuen, M.F. Epidemiology of hepatocellular carcinoma in the Asia-Pacific region. Gut Liver 2016, 10, 332–339.
3. Thomas, M.B.; Zhu, A.X. Hepatocellular carcinoma: The need for progress. J. Clin. Oncol. 2005, 23, 2892–2899.
4. Yang, T.; Lin, C.; Zhai, J.; Shi, S.; Zhu, M.; Zhu, N.; Lu, J.-H.; Yang, G.-S.; Wu, M.-C. Surgical resection for advanced hepatocellular carcinoma according to Barcelona Clinic Liver Cancer (BCLC) staging. J. Cancer Res. Clin. Oncol. 2012, 138, 1121–1129.
5. Portolani, N.; Coniglio, A.; Ghidoni, S.; Giovanelli, M.; Benetti, A.; Tiberio, G.A.M.; Giulini, S.M. Early and late recurrence after liver resection for hepatocellular carcinoma: Prognostic and therapeutic implications. Ann. Surg. 2006, 243, 229–235.
6. Shah, S.A.; Cleary, S.P.; Wei, A.C.; Yang, I.; Taylor, B.R.; Hemming, A.W.; Langer, B.; Grant, D.R.; Greig, P.D.; Gallinger, S. Recurrence after liver resection for hepatocellular carcinoma: Risk factors, treatment, and outcomes. Surgery 2007, 141, 330–339.
7. Feng, J.; Chen, J.; Zhu, R.; Yu, L.; Zhang, Y.; Feng, D.; Kong, H.; Song, C.; Xia, H.; Wu, J.; et al. Prediction of early recurrence of hepatocellular carcinoma within the Milan criteria after radical resection. Oncotarget 2017, 8, 63299–63310.
8. Cheng, Z.; Yang, P.; Qu, S.; Zhou, J.; Yang, J.; Yang, X.; Xia, Y.; Li, J.; Wang, K.; Yan, Z.; et al. Risk factors and management for early and late intrahepatic recurrence of solitary hepatocellular carcinoma after curative resection. HPB 2015, 17, 422–427.
9. Hirokawa, F.; Hayashi, M.; Miyamoto, Y.; Asakuma, M.; Shimizu, T.; Komeda, K.; Inoue, Y.; Uchiyama, K. Outcomes and predictors of microvascular invasion of solitary hepatocellular carcinoma. Hepatol. Res. 2014, 44, 846–853.
10. Sterling, R.K.; Wright, E.C.; Morgan, T.R.; Seeff, L.B.; Hoefs, J.C.; Di Bisceglie, A.M.; Dienstag, J.L.; Lok, A.S. Frequency of elevated hepatocellular carcinoma (HCC) biomarkers in patients with advanced hepatitis C. Am. J. Gastroenterol. 2012, 107, 64.
11. Lambin, P.; Rios-Velazquez, E.; Leijenaar, R.; Carvalho, S.; van Stiphout, R.G.P.M.; Granton, P.; Zegers, C.M.L.; Gillies, R.; Boellard, R.; Dekker, A.; et al. Radiomics: Extracting more information from medical images using advanced feature analysis. Eur. J. Cancer 2012, 48, 441–446.
12. Scapicchio, C.; Gabelloni, M.; Barucci, A.; Cioni, D.; Saba, L.; Neri, E. A deep look into radiomics. La Radiol. Medica 2021, 126, 1296–1311.
13. Coppola, F.; Giannini, V.; Gabelloni, M.; Panic, J.; Defeudis, A.; Monaco, S.L.; Cattabriga, A.; Cocozza, M.A.; Pastore, L.V.; Polici, M.; et al. Radiomics and magnetic resonance imaging of rectal cancer: From engineering to clinical practice. Diagnostics 2021, 11, 756.
14. Gabelloni, M.; Faggioni, L.; Borgheresi, R.; Restante, G.; Shortrede, J.; Tumminello, L.; Scapicchio, C.; Coppola, F.; Cioni, D.; Gómez-Rico, I.; et al. Bridging gaps between images and data: A systematic update on imaging biobanks. Eur. Radiol. 2022, 32, 3173–3186.
15. Afshar, P.; Mohammadi, A.; Plataniotis, K.N.; Oikonomou, A.; Benali, H. From handcrafted to deep-learning-based cancer radiomics: Challenges and opportunities. IEEE Signal Process. Mag. 2019, 36, 132–160.
16. Chen, Y.W.; Jain, L.C. Deep Learning in Healthcare; Springer: Berlin/Heidelberg, Germany, 2020.
17. Yasaka, K.; Akai, H.; Abe, O.; Kiryu, S. Deep learning with convolutional neural network for differentiation of liver masses at dynamic contrast-enhanced CT: A preliminary study. Radiology 2017, 286, 887–896.
18. Liang, D.; Lin, L.; Hu, H.; Zhang, Q.; Chen, Q.; Iwamoto, Y.; Han, X.; Chen, Y.W. Combining Convolutional and Recurrent Neural Networks for Classification of Focal Liver Lesions in Multi-Phase CT Images. In Proceedings of the Medical Image Computing and Computer Assisted Intervention (MICCAI), Granada, Spain, 16–20 September 2018; pp. 666–675.
19. Wang, W.; Iwamoto, Y.; Han, X.; Chen, Y.W.; Chen, Q.; Liang, D.; Lin, D.; Hu, H.; Zhang, Q. Classification of Focal Liver Lesions Using Deep Learning with Fine-Tuning. In Proceedings of the Digital Medicine and Image Processing (DMIP2018), Okinawa, Japan, 12–14 November 2018; pp. 56–60.
20. Wang, W.; Chen, Q.; Iwamoto, Y.; Han, X.; Zhang, Q.; Hu, H.; Lin, L.; Chen, Y.-W. Deep Learning-Based Radiomics Models for Early Recurrence Prediction of Hepatocellular Carcinoma with Multi-phase CT Images and Clinical Data. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 4881–4884.
21. Wang, W.; Chen, Q.; Iwamoto, Y.; Aonpong, P.; Lin, L.; Hu, H.; Zhang, Q.; Chen, Y.-W. Deep Fusion Models of Multi-Phase CT and Selected Clinical Data for Preoperative Prediction of Early Recurrence in Hepatocellular Carcinoma. IEEE Access 2020, 8, 139212–139220.
22. Wu, Z.; Xiong, Y.; Yu, S.X.; Lin, D. Unsupervised feature learning via nonparametric instance discrimination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 3733–3742.
23. He, K.; Fan, H.; Wu, Y.; Xie, S.; Girshick, R. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 9729–9738.
24. Dong, H.; Iwamoto, Y.; Han, X.; Lin, L.; Hu, H.; Cai, X.; Chen, Y.W. Case Discrimination: Self-Supervised Feature Learning for the Classification of Focal Liver Lesions. In Innovation in Medicine and Healthcare, Smart Innovation, Systems and Technologies; Chen, Y.W., Tanaka, S., Eds.; Springer: Singapore, 2021; pp. 241–249.
25. Song, J.; Dong, H.; Chen, Y.; Lin, L.; Hu, H.; Chen, Y.W. Deep Neural Network-Based Classification of Focal Liver Lesions Using Phase-Shuffle Prediction Pre-Training. In Proceedings of the Innovation in Medicine and Healthcare, Smart Innovation, Systems and Technologies, Rome, Italy, 14–16 June 2023.
26. Noroozi, M.; Favaro, P. Unsupervised learning of visual representations by solving jigsaw puzzles. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; pp. 69–84.
27. Gidaris, S.; Singh, P.; Komodakis, N. Unsupervised representation learning by predicting image rotations. In Proceedings of the ICLR 2018, Vancouver, BC, Canada, 30 April–3 May 2018.
28. Chen, T.; Kornblith, S.; Norouzi, M.; Hinton, G. A Simple Framework for Contrastive Learning of Visual Representations. In Proceedings of the International Conference on Machine Learning, Virtual Event, 13–18 July 2020; pp. 1597–1607.
29. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
30. Xu, Y.; Cai, M.; Lin, L.; Zhang, Y.; Hu, H.; Peng, Z.; Zhang, Q.; Chen, Q.; Mao, X.W.; Iwamoto, Y.; et al. PA-ResSeg: A Phase Attention Residual Network for Liver Tumor Segmentation from Multi-phase CT Images. Med. Phys. 2021, 48, 3752–3766.
31. Oquab, M.; Darcet, T.; Moutakanni, T.; Vo, H.; Szafraniec, M.; Khalidov, V.; Fernandez, P.; Haziza, D.; Massa, F.; El-Nouby, A.; et al. DINOv2: Learning Robust Visual Features without Supervision. arXiv 2023, arXiv:2304.07193. Available online: https://arxiv.org/abs/2304.07193 (accessed on 2 February 2024).
Experiment | E1 | E2 | E3 | E4 | E5 | E6 | E7 | E8 | E9 | E10 |
---|---|---|---|---|---|---|---|---|---|---|
Training | 695 (150) | 681 (150) | 683 (150) | 691 (150) | 694 (150) | 676 (150) | 700 (150) | 694 (151) | 680 (151) | 691 (151) |
Testing | 70 (17) | 84 (17) | 82 (17) | 74 (17) | 71 (17) | 89 (17) | 65 (17) | 71 (16) | 85 (16) | 74 (16) |
Total | 765 (167) | 765 (167) | 765 (167) | 765 (167) | 765 (167) | 765 (167) | 765 (167) | 765 (167) | 765 (167) | 765 (167) |
| Item | Specification |
|---|---|
| GPU | NVIDIA GeForce RTX 3090 |
| CPU | Intel® Xeon® Platinum 8358P |
| OS | Ubuntu 20.04 |
| Deep learning framework | PyTorch 2.0 |
| Model | Case Discrimination | Phase Shuffle Prediction | ACC (%) | AUC |
|---|---|---|---|---|
| Model 1 | | | 67.44 ± 5.29 | 0.666 ± 0.06 |
| Model 2 | √ | | 71.98 ± 2.64 | 0.715 ± 0.03 |
| Model 3 | | √ | 70.15 ± 3.71 | 0.694 ± 0.04 |
| Proposed | √ | √ | 74.65 ± 3.30 | 0.739 ± 0.04 |
| Models | ER (%) | NER (%) | Average (%) | AUC |
|---|---|---|---|---|
| Fine-tuning (ImageNet) [21] | 57.45 ± 11.35 | 80.45 ± 10.96 | 69.34 ± 3.43 | 0.695 ± 0.04 |
| Self-supervised (rotation) [27] | 60.15 ± 13.60 | 76.02 ± 9.29 | 68.53 ± 3.75 | 0.684 ± 0.05 |
| Self-supervised (phase shuffle prediction) [25] | 55.22 ± 11.84 | 83.22 ± 13.98 | 70.15 ± 3.71 | 0.694 ± 0.04 |
| Self-supervised (instance-level) [22] | 56.83 ± 18.53 | 79.68 ± 11.33 | 69.17 ± 4.02 | 0.683 ± 0.05 |
| Self-supervised (case-level) [24] | 59.88 ± 11.98 | 82.56 ± 10.01 | 71.98 ± 2.64 | 0.715 ± 0.03 |
| Self-supervised (DINOv2) [31] | 58.68 ± 11.73 | 83.31 ± 10.30 | 71.79 ± 2.86 | 0.711 ± 0.04 |
| Self-supervised multi-task pre-training model (proposed) | 65.64 ± 11.18 | 82.17 ± 9.89 | 74.65 ± 3.30 | 0.739 ± 0.04 |
| Type | Cyst | FNH | HCC | HEM | Total |
|---|---|---|---|---|---|
| Group 1: cases (slices) | 5 (29) | 4 (15) | 4 (30) | 4 (21) | 17 (95) |
| Group 2: cases (slices) | 6 (31) | 3 (17) | 4 (29) | 4 (33) | 17 (110) |
| Group 3: cases (slices) | 6 (37) | 3 (7) | 4 (36) | 4 (17) | 17 (97) |
| Group 4: cases (slices) | 6 (24) | 3 (17) | 4 (35) | 4 (19) | 17 (95) |
| Group 5: cases (slices) | 7 (28) | 3 (20) | 3 (32) | 4 (12) | 17 (92) |
| Total: cases (slices) | 30 (149) | 16 (76) | 19 (162) | 20 (102) | 85 (489) |
| Model | Case Discrimination | Phase Shuffle Prediction | ACC (%) | AUC |
|---|---|---|---|---|
| Model 1 | | | 80.84 ± 2.91 | 0.709 ± 0.07 |
| Model 2 | √ | | 87.04 ± 2.27 | 0.760 ± 0.05 |
| Model 3 | | √ | 84.82 ± 1.99 | 0.746 ± 0.06 |
| Proposed | √ | √ | 88.06 ± 4.72 | 0.791 ± 0.04 |
| Models | Cyst (%) | FNH (%) | HCC (%) | HEM (%) | Average (%) | AUC |
|---|---|---|---|---|---|---|
| Fine-tuning (ImageNet) [21] | 95.56 ± 2.84 | 83.53 ± 13.62 | 80.99 ± 11.69 | 56.56 ± 32.61 | 81.26 ± 1.20 | 0.721 ± 0.06 |
| Self-supervised (rotation) [27] | 96.66 ± 2.89 | 88.44 ± 7.51 | 78.81 ± 13.81 | 60.30 ± 17.85 | 81.84 ± 1.72 | 0.713 ± 0.05 |
| Self-supervised (phase shuffle prediction) [25] | 98.27 ± 1.42 | 86.22 ± 12.27 | 82.90 ± 5.84 | 63.72 ± 12.68 | 84.82 ± 1.99 | 0.746 ± 0.06 |
| Self-supervised (instance-level) [22] | 93.75 ± 1.56 | 87.46 ± 5.69 | 85.04 ± 5.58 | 69.05 ± 11.99 | 82.82 ± 3.98 | 0.759 ± 0.05 |
| Self-supervised (case-level) [24] | 90.29 ± 1.43 | 88.82 ± 7.21 | 88.74 ± 14.85 | 80.15 ± 16.28 | 87.04 ± 2.27 | 0.760 ± 0.05 |
| Self-supervised multi-task pre-training model (proposed) | 97.21 ± 4.66 | 93.00 ± 7.80 | 90.55 ± 11.58 | 66.49 ± 8.21 | 88.06 ± 4.72 | 0.791 ± 0.04 |