Semi-Automatic Multiparametric MR Imaging Classification Using Novel Image Input Sequences and 3D Convolutional Neural Networks
Abstract
1. Introduction
- (a) We developed a novel 3D-CNN input method that retains the low training cost of a single-input model while preserving the multi-modal feature fusion of previous multi-input models, so that the network can fully fuse multi-modal features and make predictions from a single input.
- (b) We extended the class activation map (CAM) to 3D image sequences, obtaining a 3D Grad-CAM that lets us visualize the network's learning process.
- (c) We performed an extensive experimental evaluation, comparing different 3D-CNN models and different sampling methods for them; the AUC, sensitivity, and specificity of our method on the test dataset were 0.85, 0.88, and 0.88, respectively.
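The 3D Grad-CAM of contribution (b) follows the standard Grad-CAM recipe, with the gradient pooling and upsampling done over three spatial axes instead of two. A minimal PyTorch sketch, assuming a model whose last 3D convolutional layer is passed in by the caller (all names here are illustrative, not the authors' code):

```python
import torch
import torch.nn.functional as F

def grad_cam_3d(model, volume, target_class, conv_layer):
    """Compute a 3D Grad-CAM heatmap for one input volume.

    volume: tensor of shape (1, C, D, H, W).
    conv_layer: the last 3D conv layer of the model, used for hooks.
    Returns a (D, H, W) map normalized to [0, 1].
    """
    activations, gradients = [], []
    fwd = conv_layer.register_forward_hook(lambda m, i, o: activations.append(o))
    bwd = conv_layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))

    logits = model(volume)              # forward pass through the 3D-CNN
    model.zero_grad()
    logits[0, target_class].backward()  # backprop only the target class score
    fwd.remove(); bwd.remove()

    act, grad = activations[0], gradients[0]          # both (1, K, d, h, w)
    weights = grad.mean(dim=(2, 3, 4), keepdim=True)  # global-average-pool the gradients
    cam = F.relu((weights * act).sum(dim=1, keepdim=True))
    # upsample the low-resolution map back to the input volume size
    cam = F.interpolate(cam, size=volume.shape[2:], mode="trilinear", align_corners=False)
    cam = cam / (cam.max() + 1e-8)
    return cam[0, 0]
```

The only change from 2D Grad-CAM is that the channel weights are averaged over depth as well as height and width, and the upsampling is trilinear.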
2. Methods
2.1. Rigid Alignment of DWI and T2 Images
2.2. Prostate Area Cropping
2.3. New Sequence-Based 3D-CNN
- (1) Feature fusion: As the 3D convolution kernel slides along the image sequence, it convolves each image together with its z-axis neighbours, so the features of adjacent images are jointly extracted into high-dimensional vectors. This fuses the features of all neighbouring images and can replace the traditional multiple-input, multi-MRI-contrast approach. We therefore formed images with different MRI contrasts into a new 3D image sequence in which the neighbours of each image are images of other MRI contrasts, which is a very cost-effective way to fuse features across MRI contrasts.
- (2) Feature reinforcement: When building the new 3D image sequence, we arranged the images with several MRI contrasts in different orders, so adjacent images usually have different MRI contrasts, and the cancerous tissue appears differently under each contrast. The 3D convolution kernel therefore repeatedly encounters the features of cancerous tissue under different MRI contrasts, which reinforces the learning of cancerous features.
- (3) Influence weight: During 3D convolutional learning, the features of the image sequence are progressively mapped into higher-dimensional vectors that capture the full features of the cancerous tissue; when these are flattened into the fully connected layer, the proportion of vectors containing cancerous tissue features among all vectors increases, which improves the accuracy of the prediction output.

In the following, we provide detailed information on each step. In previous studies, the input of a 3D-CNN was usually a sequence of images of a patient with a single MRI modality; we instead fuse images with different MRI contrasts into one sequence and input it into the network. The 3D-CNN extracts features of the image sequence along the z-axis, where each image of the new sequence shows the cancerous tissue most evidently. In Section 3, we use the best available 3D-CNN models for comparative experiments.
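The interleaving in step (1) can be sketched with NumPy: slices from the different contrasts are woven along the z-axis so that every slice's immediate z-neighbours come from the other contrasts, and a 3D kernel of depth ≥ 3 therefore always mixes all three contrasts in one receptive field. The three-contrast setup (DWI, T2, and their overlap) follows the paper; the exact slice counts are illustrative:

```python
import numpy as np

def build_interleaved_sequence(t2_slices, dwi_slices, overlap_slices):
    """Interleave slices from different MRI contrasts along the z-axis.

    Each argument is an array of shape (N, H, W) with identical shapes.
    Returns an array of shape (3*N, H, W) in which the z-neighbours of
    every slice come from the other two contrasts.
    """
    assert t2_slices.shape == dwi_slices.shape == overlap_slices.shape
    n, h, w = t2_slices.shape
    seq = np.empty((3 * n, h, w), dtype=t2_slices.dtype)
    seq[0::3] = dwi_slices      # pattern: DWI, T2, overlap, DWI, T2, ...
    seq[1::3] = t2_slices
    seq[2::3] = overlap_slices
    return seq
```

The resulting volume is fed to the 3D-CNN as a single input, in place of one input branch per contrast.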
2.4. Implementations
3. Experiments
3.1. Setup
3.2. Comparison with the Classic 3D CNN
3.3. Comparison of Different Input Orders
- (1) Order one: DWI, T2, then overlap as a set, re-sampled six times;
- (2) Order two: T2 re-sampled six times, DWI re-sampled six times, then overlap re-sampled six times;
- (3) Order three: DWI re-sampled six times, T2 re-sampled six times, then overlap re-sampled six times;
- (4) Order four: overlap re-sampled six times, T2 re-sampled six times, then DWI re-sampled six times.
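The four orders above can be assembled as follows. This is our reading of the list: order one repeats the (DWI, T2, overlap) triplet as a set, while orders two to four stack six copies of each contrast block in the stated sequence; the slice shapes and the repeat count of six are taken from the list, everything else is illustrative:

```python
import numpy as np

def make_ordered_input(dwi, t2, overlap, order, repeats=6):
    """Assemble the network input volume under one of the four slice orders.

    dwi, t2, overlap: arrays of shape (H, W), one representative slice per
    contrast (real inputs use the cropped prostate slices).
    order: 1-4, matching the list above. Returns (3*repeats, H, W).
    """
    def rep(img):
        # re-sample one contrast `repeats` times into a contiguous block
        return np.repeat(img[np.newaxis], repeats, axis=0)

    if order == 1:
        # one (DWI, T2, overlap) triplet, repeated as a set six times
        triplet = np.stack([dwi, t2, overlap])
        return np.tile(triplet, (repeats, 1, 1))
    blocks = {2: (t2, dwi, overlap), 3: (dwi, t2, overlap), 4: (overlap, t2, dwi)}[order]
    return np.concatenate([rep(b) for b in blocks])
```

Order one is the only interleaved variant; the other three keep each contrast in a contiguous block, which is what the comparison in this section probes.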
3.4. Comparison Experiments with the Original 3D Sequence
3.5. Comparison with State-of-the-Art Methodologies
3.6. 3D-CNN Learning Process Visualization
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Mohler, J.; Bahnson, R.R.; Boston, B.; Busby, J.E.; D’Amico, A.; Eastham, J.A.; Enke, C.A.; George, D.; Horwitz, E.M.; Huben, R.P.; et al. Prostate cancer. J. Natl. Compr. Cancer Netw. 2010, 8, 162–200.
- Gillessen, S.; Attard, G.; Beer, T.M.; Beltran, H.; Bjartell, A.; Bossi, A.; Briganti, A.; Bristow, R.G.; Chi, K.N.; Clarke, N.; et al. Management of patients with advanced prostate cancer: Report of the advanced prostate cancer consensus conference 2019. Eur. Urol. 2020, 77, 508–547.
- Weinreb, J.C.; Barentsz, J.O.; Choyke, P.L.; Cornud, F.; Haider, M.A.; Macura, K.J.; Margolis, D.; Schnall, M.D.; Shtern, F.; Tempany, C.M.; et al. PI-RADS prostate imaging–reporting and data system: 2015, version 2. Eur. Urol. 2016, 69, 16–40.
- Schröder, F.H.; Hugosson, J.; Roobol, M.J.; Tammela, T.L.; Ciatto, S.; Nelen, V.; Kwiatkowski, M.; Lujan, M.; Lilja, H.; Zappa, M.; et al. Screening and prostate-cancer mortality in a randomized European study. N. Engl. J. Med. 2009, 360, 1320–1328.
- de Rooij, M.; Hamoen, E.H.; Fütterer, J.J.; Barentsz, J.O.; Rovers, M.M. Accuracy of multiparametric MRI for prostate cancer detection: A meta-analysis. Am. J. Roentgenol. 2014, 202, 343–351.
- Bai, X.; Jiang, Y.; Zhang, X.; Wang, M.; Tian, J.; Mu, L.; Du, Y. The Value of Prostate-Specific Antigen-Related Indexes and Imaging Screening in the Diagnosis of Prostate Cancer. Cancer Manag. Res. 2020, 12, 6821–6826.
- Fehr, D.; Veeraraghavan, H.; Wibmer, A.; Gondo, T.; Matsumoto, K.; Vargas, H.A.; Sala, E.; Hricak, H.; Deasy, J.O. Automatic classification of prostate cancer Gleason scores from multiparametric magnetic resonance images. Proc. Natl. Acad. Sci. USA 2015, 112, E6265–E6273.
- Turkbey, B.; Choyke, P.L. Multiparametric MRI and prostate cancer diagnosis and risk stratification. Curr. Opin. Urol. 2012, 22, 310.
- Peng, Y.; Jiang, Y.; Yang, C.; Brown, J.B.; Antic, T.; Sethi, I.; Schmid-Tannwald, C.; Giger, M.L.; Eggener, S.E.; Oto, A. Quantitative analysis of multiparametric prostate MR images: Differentiation between prostate cancer and normal tissue and correlation with Gleason score—a computer-aided diagnosis development study. Radiology 2013, 267, 787–796.
- Turkbey, B.; Xu, S.; Kruecker, J.; Locklin, J.; Pang, Y.; Bernardo, M.; Merino, M.J.; Wood, B.J.; Choyke, P.L.; Pinto, P.A. Documenting the location of prostate biopsies with image fusion. BJU Int. 2011, 107, 53.
- Valerio, M.; Donaldson, I.; Emberton, M.; Ehdaie, B.; Hadaschik, B.A.; Marks, L.S.; Mozer, P.; Rastinehad, A.R.; Ahmed, H.U. Detection of clinically significant prostate cancer using magnetic resonance imaging–ultrasound fusion targeted biopsy: A systematic review. Eur. Urol. 2015, 68, 8–19.
- Liu, P.; Wang, S.; Turkbey, B.; Grant, K.; Pinto, P.; Choyke, P.; Wood, B.J.; Summers, R.M. A prostate cancer computer-aided diagnosis system using multimodal magnetic resonance imaging and targeted biopsy labels. In Proceedings of Medical Imaging 2013: Computer-Aided Diagnosis, International Society for Optics and Photonics, Lake Buena Vista, FL, USA, 26 February 2013; Volume 8670, p. 86701G.
- Lemaitre, G. Computer-Aided Diagnosis for Prostate Cancer Using Multi-Parametric Magnetic Resonance Imaging. Ph.D. Thesis, Universitat de Girona, Escola Politècnica Superior, Girona, Spain, 2016.
- Litjens, G.J.; Vos, P.C.; Barentsz, J.O.; Karssemeijer, N.; Huisman, H.J. Automatic computer aided detection of abnormalities in multi-parametric prostate MRI. In Proceedings of Medical Imaging 2011: Computer-Aided Diagnosis, International Society for Optics and Photonics, Lake Buena Vista, FL, USA, 4 March 2011; Volume 7963, p. 79630T.
- Litjens, G.J.; Barentsz, J.O.; Karssemeijer, N.; Huisman, H.J. Automated computer-aided detection of prostate cancer in MR images: From a whole-organ to a zone-based approach. In Proceedings of Medical Imaging 2012: Computer-Aided Diagnosis, International Society for Optics and Photonics, San Diego, CA, USA, 23 February 2012; Volume 8315, p. 83150G.
- Artan, Y.; Haider, M.A.; Langer, D.L.; Van der Kwast, T.H.; Evans, A.J.; Yang, Y.; Wernick, M.N.; Trachtenberg, J.; Yetik, I.S. Prostate cancer localization with multispectral MRI using cost-sensitive support vector machines and conditional random fields. IEEE Trans. Image Process. 2010, 19, 2444–2455.
- Niaf, E.; Rouvière, O.; Mège-Lechevallier, F.; Bratan, F.; Lartizien, C. Computer-aided diagnosis of prostate cancer in the peripheral zone using multiparametric MRI. Phys. Med. Biol. 2012, 57, 3833.
- Tiwari, P.; Kurhanewicz, J.; Madabhushi, A. Multi-kernel graph embedding for detection, Gleason grading of prostate cancer via MRI/MRS. Med. Image Anal. 2013, 17, 219–235.
- Wang, S.; Burtt, K.; Turkbey, B.; Choyke, P.; Summers, R.M. Computer aided-diagnosis of prostate cancer on multiparametric MRI: A technical review of current research. BioMed Res. Int. 2014, 2014, 789561.
- Rundo, L.; Han, C.; Zhang, J.; Hataya, R.; Nagano, Y.; Militello, C.; Ferretti, C.; Nobile, M.S.; Tangherloni, A.; Gilardi, M.C.; et al. CNN-based Prostate Zonal Segmentation on T2-weighted MR Images: A Cross-dataset Study. In Neural Approaches to Dynamics of Signal Exchanges; Esposito, A., Faundez-Zanuy, M., Morabito, F., Pasero, E., Eds.; Springer: Singapore, 2020; Volume 151, pp. 269–280.
- Rundo, L.; Han, C.; Nagano, Y.; Zhang, J.; Hataya, R.; Militello, C.; Tangherloni, A.; Nobile, M.S.; Ferretti, C.; Besozzi, D.; et al. USE-Net: Incorporating Squeeze-and-Excitation blocks into U-Net for prostate zonal segmentation of multi-institutional MRI datasets. Neurocomputing 2019, 365, 31–43.
- Chan, I.; Wells, W., III; Mulkern, R.V.; Haker, S.; Zhang, J.; Zou, K.H.; Maier, S.E.; Tempany, C.M. Detection of prostate cancer by integration of line-scan diffusion, T2-mapping and T2-weighted magnetic resonance imaging; a multichannel statistical classifier. Med. Phys. 2003, 30, 2390–2398.
- Langer, D.L.; Van der Kwast, T.H.; Evans, A.J.; Trachtenberg, J.; Wilson, B.C.; Haider, M.A. Prostate cancer detection with multi-parametric MRI: Logistic regression analysis of quantitative T2, diffusion-weighted imaging, and dynamic contrast-enhanced MRI. J. Magn. Reson. Imaging Off. J. Int. Soc. Magn. Reson. Med. 2009, 30, 327–334.
- Tiwari, P.; Viswanath, S.; Kurhanewicz, J.; Sridhar, A.; Madabhushi, A. Multimodal wavelet embedding representation for data combination (MaWERiC): Integrating magnetic resonance imaging and spectroscopy for prostate cancer detection. NMR Biomed. 2012, 25, 607–619.
- Castelvecchi, D. Can we open the black box of AI? Nat. News 2016, 538, 20.
- Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; Torralba, A. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2921–2929.
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 618–626.
- Chattopadhay, A.; Sarkar, A.; Howlader, P.; Balasubramanian, V.N. Grad-CAM++: Generalized gradient-based visual explanations for deep convolutional networks. In Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, USA, 12–15 March 2018; pp. 839–847.
- Zitova, B.; Flusser, J. Image registration methods: A survey. Image Vis. Comput. 2003, 21, 977–1000.
- Hill, D.L.; Batchelor, P.G.; Holden, M.; Hawkes, D.J. Medical image registration. Phys. Med. Biol. 2001, 46, R1.
- Sankineni, S.; Osman, M.; Choyke, P.L. Functional MRI in prostate cancer detection. BioMed Res. Int. 2014, 2014, 590638.
- Gibbs, P.; Tozer, D.J.; Liney, G.P.; Turnbull, L.W. Comparison of quantitative T2 mapping and diffusion-weighted imaging in the normal and pathologic prostate. Magn. Reson. Med. Off. J. Int. Soc. Magn. Reson. Med. 2001, 46, 1054–1058.
- De Santi, B.; Salvi, M.; Giannini, V.; Meiburger, K.M.; Marzola, F.; Russo, F.; Bosco, M.; Molinari, F. Comparison of Histogram-based Textural Features between Cancerous and Normal Prostatic Tissue in Multiparametric Magnetic Resonance Images. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 1671–1674.
- Avants, B.B.; Tustison, N.J.; Song, G.; Cook, P.A.; Klein, A.; Gee, J.C. A reproducible evaluation of ANTs similarity metric performance in brain image registration. Neuroimage 2011, 54, 2033–2044.
- Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448.
- Yu, L.; Yang, X.; Chen, H.; Qin, J.; Heng, P.A. Volumetric ConvNets with mixed residual connections for automated prostate segmentation from 3D MR images. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017.
- Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An imperative style, high-performance deep learning library. Adv. Neural Inf. Process. Syst. 2019, 32, 8026–8037.
- De Boer, P.T.; Kroese, D.P.; Mannor, S.; Rubinstein, R.Y. A tutorial on the cross-entropy method. Ann. Oper. Res. 2005, 134, 19–67.
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
- Armato, S.G.; Huisman, H.; Drukker, K.; Hadjiiski, L.; Kirby, J.S.; Petrick, N.; Redmond, G.; Giger, M.L.; Cha, K.; Mamonov, A.; et al. PROSTATEx Challenges for computerized classification of prostate lesions from multiparametric magnetic resonance images. J. Med. Imaging 2018, 5, 044501.
- Karpathy, A.; Toderici, G.; Shetty, S.; Leung, T.; Sukthankar, R.; Fei-Fei, L. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1725–1732.
- Kopuklu, O.; Kose, N.; Gunduz, A.; Rigoll, G. Resource efficient 3D convolutional neural networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Seoul, Korea, 27–28 October 2019; pp. 1910–1919.
- Aldoj, N.; Lukas, S.; Dewey, M.; Penzkofer, T. Semi-automatic classification of prostate cancer on multi-parametric MR imaging using a multi-channel 3D convolutional neural network. Eur. Radiol. 2020, 30, 1243–1253.
- Zhong, X.; Cao, R.; Shakeri, S.; Scalzo, F.; Lee, Y.; Enzmann, D.R.; Wu, H.H.; Raman, S.S.; Sung, K. Deep transfer learning-based prostate cancer classification using 3 Tesla multi-parametric MRI. Abdom. Radiol. 2019, 44, 2030–2039.
- Chen, Q.; Xu, X.; Hu, S.; Li, X.; Zou, Q.; Li, Y. A transfer learning approach for classification of clinical significant prostate cancers from mpMRI scans. In Proceedings of Medical Imaging 2017: Computer-Aided Diagnosis, International Society for Optics and Photonics, Orlando, FL, USA, 16 March 2017; Volume 10134, p. 101344F.
Table (Section 3.2). Comparison with classic 3D-CNN models.

| Methods | Sensitivity | Specificity | AUC | CI 95% | Parameters |
|---|---|---|---|---|---|
| C3D | 0.83 | 0.79 | 0.81 | 0.80–0.83 | 78 M |
| 3DSqueezeNet | 0.73 | 0.68 | 0.70 | 0.72–0.78 | 2.15 M |
| 3DMobileNet | 0.74 | 0.67 | 0.69 | 0.73–0.75 | 8.22 M |
| 3DShuffleNet | 0.74 | 0.65 | 0.68 | 0.74–0.76 | 6.64 M |
| ResNeXt101 | 0.83 | 0.75 | 0.81 | 0.76–0.82 | 48.34 M |
| 3DResNet101 | 0.88 | 0.88 | 0.83 | 0.84–0.85 | 83.29 M |
| Our method + 3DResNet50 | 0.88 | 0.84 | 0.85 | 0.85–0.87 | 44.24 M |
Table (Section 3.3). Comparison of the different input orders.

| Methods | Sensitivity | Specificity | AUC | CI 95% | Parameters |
|---|---|---|---|---|---|
| Order 1 | 0.88 | 0.88 | 0.85 | 0.85–0.87 | 44.24 M |
| Order 2 | 0.84 | 0.84 | 0.82 | 0.80–0.83 | - |
| Order 3 | 0.84 | 0.84 | 0.81 | 0.82–0.84 | - |
| Order 4 | 0.88 | 0.84 | 0.84 | 0.79–0.84 | - |
Table (Section 3.4). Comparison with the original single-contrast 3D sequences.

| Methods | Sensitivity | Specificity | AUC | CI 95% | Parameters |
|---|---|---|---|---|---|
| T2(384) | 0.63 | 0.59 | 0.68 | 0.66–0.688 | - |
| T2(128) | 0.71 | 0.63 | 0.72 | 0.71–0.74 | - |
| DWI(384) | 0.61 | 0.54 | 0.71 | 0.69–0.72 | - |
| DWI(128) | 0.65 | 0.54 | 0.74 | 0.73–0.76 | - |
| Our method + 3DResNet50 | 0.88 | 0.88 | 0.85 | 0.85–0.87 | 44.24 M |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Li, B.; Oka, R.; Xuan, P.; Yoshimura, Y.; Nakaguchi, T. Semi-Automatic Multiparametric MR Imaging Classification Using Novel Image Input Sequences and 3D Convolutional Neural Networks. Algorithms 2022, 15, 248. https://doi.org/10.3390/a15070248