Review

Machine Learning in Prostate MRI for Prostate Cancer: Current Status and Future Opportunities

1 School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
2 Department of Diagnostic Radiology, Tan Tock Seng Hospital, Singapore 308433, Singapore
3 Department of Radiation Oncology, National University Cancer Institute (NUH), Singapore 119074, Singapore
4 Institute for Infocomm Research, A*Star, Singapore 138632, Singapore
5 Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 639798, Singapore
* Author to whom correspondence should be addressed.
Senior author.
Diagnostics 2022, 12(2), 289; https://doi.org/10.3390/diagnostics12020289
Submission received: 2 December 2021 / Revised: 31 December 2021 / Accepted: 14 January 2022 / Published: 24 January 2022
(This article belongs to the Special Issue AI as a Tool to Improve Hybrid Imaging in Cancer)

Abstract

Advances in our understanding of the role of magnetic resonance imaging (MRI) in the detection of prostate cancer have enabled its integration into clinical routines over the past two decades. The Prostate Imaging Reporting and Data System (PI-RADS) is an established imaging-based scoring system that grades the probability of clinically significant prostate cancer on MRI to guide management. Image fusion technology combines the superior soft tissue contrast resolution of MRI with real-time anatomical depiction using ultrasound or computed tomography, allowing the accurate mapping of prostate cancer for targeted biopsy and treatment. Machine learning provides vast opportunities for automated organ and lesion depiction that could increase the reproducibility of PI-RADS categorisation and improve co-registration across imaging modalities, enhancing diagnostic and treatment methods that can then be individualised based on the clinical risk of malignancy. In this article, we provide a comprehensive and contemporary review of these advancements, and share insights into new opportunities in this field.

1. Introduction

Prostate MRI has developed into an important tool for the management of prostate cancer (PCa) and is recommended as the first-line imaging investigation for patients with a clinical suspicion of prostate cancer [1]. The Prostate Imaging-Reporting and Data System (PI-RADS) is a comprehensive set of guidelines, standardized observations and lexicon, which aims to stratify the probability of clinically significant prostate cancer (csPCa) on MRI [2].
Prostate MRI and PI-RADS aim to guide biopsy by identifying the most suspicious areas within the prostate to be targeted. MRI-guided targeted biopsy pathways have been shown to improve the detection of clinically significant prostate cancer while reducing the number of biopsy cores, compared with conventional systematic biopsy [3,4]. Improved prostate cancer localization with MRI has also facilitated investigations into focal therapy, such as cryotherapy, high-intensity focused ultrasound (HIFU) and brachytherapy. Focal therapy is a promising option being investigated as a possible alternative to whole-gland treatment (prostatectomy or radiotherapy) in patients with low-volume disease, avoiding adverse effects such as incontinence, erectile dysfunction and radiation enteritis [5].
Despite vast improvements in MRI techniques, there are still limitations to prostate MRI interpretation in clinical practice, and experience is still accumulating. This is evident from the fact that PI-RADS has undergone two revisions since version 1 was introduced in 2012 (version 2 in 2015 and version 2.1 in 2019). Differentiating cancer from non-cancerous pathology such as benign prostatic hyperplasia or inflammation on MRI remains challenging [6,7]. Considerable inter-observer variability also remains, even with the latest PI-RADS version 2.1 [8,9]. Furthermore, variations in scanning parameters and image quality between scanners make it difficult to compare MRI studies accurately [10].
There have been efforts to address these limitations. Techniques such as quantitative MRI analysis and computer-aided diagnosis enable a more objective analysis of prostate MRI and have been shown to improve diagnostic accuracy and reproducibility [11,12,13]. In recent years, there has been increasing interest in the use of artificial intelligence (AI) and machine learning (ML) in radiology.
Recent studies suggest that applying ML in prostate MRI could improve diagnostic accuracy and reduce inter-reader variability by highlighting suspicious areas on MRI, allowing a more focused interpretation by the radiologist during conventional scan interpretation [14]. ML has also been shown to be able to predict lesion aggressiveness and treatment response [15,16]. Several studies have demonstrated comparable performances between ML and expert radiologists in head-to-head comparisons for MRI interpretation [17,18].
The value of ML in prostate MRI could go beyond imaging interpretation and diagnosis. For example, in MRI–US fusion targeted prostate biopsy, the precise gland and region-of-interest segmentation and image co-registration between MRI and US are important for optimal biopsy. Segmentation and image fusion are usually manually performed, which can be laborious, time-consuming, and subject to inter-operator variation. ML has shown potential in gland segmentation and MRI–US fusion in terms of accuracy and efficiency [19,20]. This can be further extrapolated to radiation therapy planning and possible focal therapy, where precise segmentation is necessary to optimize dose to region of interest and reduce injury to adjacent normal prostate tissue [21].
Prostate MRI interpretation is generally recognized to present a steep learning curve [22]. Through automation, ML potentially enables more consistent interpretation across readers of varying experience levels, improving inter-reader agreement and reducing the need for expert training in prostate MRI interpretation. This would particularly benefit surgeons and radiotherapists, who normally do not receive formal radiology training, in the management of prostate cancer patients.

2. Machine Learning Applications to Enhance Utility of Prostate MRI: Current Status

2.1. Related Reviews

A review in 2019 [16] demonstrated the capability of machine learning (ML) and deep learning (DL) to process prostate MRI for different tasks, including segmentation, cancer detection, cancer assessment, local staging, and biochemical recurrence. However, its coverage is limited to methods published up to 2019. Chaddad et al. [23] discussed existing clinical applications and machine learning-based studies, with greater weight on the prostate MRI radiomics pipeline and on methods for predicting cancer grade based on the Gleason score. Another review by Zeeshan et al. [24] surveyed the literature on using AI to support the treatment of urological diseases, of which only a small portion was based on prostate MRI.
In this section, we review the literature on ML and DL applications in prostate MRI segmentation, registration, lesion detection and scoring, and treatment decision support over a wider time range. Both traditional and recently published studies are covered, with a particular focus on clinical tasks, the methods utilized, data, and results.

2.2. Segmentation

The aim of segmentation is to define the boundary of the prostate gland, prostate zones (central, transition, peripheral zones), and any focal lesions. Gland and lesion segmentation is important when performing fusion-based targeted biopsy or focal therapy, as these clinical settings require the accurate delineation of prostate, zonal, and lesion contours. Segmentation can be performed either manually or by ML/DL methods. In practice, manual segmentation can be time-consuming and subjective, depending on the experts’ perception and level of experience. This can range from highly accurate when delineating the transition zone from the peripheral zone, to highly subjective and variable when delineating the prostate margins from periprostatic venous plexus in the mid-gland to apex regions (Figure 1).
Therefore, interest in developing accurate automatic segmentation tools has rapidly increased. A comprehensive review [25] summarized ML and DL applications for prostate MRI segmentation up to December 2020, showing that ML has reached a relatively mature status in automatic prostate MRI segmentation tasks. In this paper, we revisit the main methods by which segmentation can be performed with ML and DL, and review more recent work. The papers mentioned in this section are summarised in Table 1.
The evaluation metric for segmentation is usually the Dice similarity coefficient (DSC), which measures the degree of overlap between the predicted and ground-truth masks [26].
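As a concrete illustration, a minimal sketch of how the DSC can be computed between a predicted and a ground-truth binary mask is shown below (Python/NumPy); the toy masks and smoothing term are illustrative assumptions, not data from any cited study.

```python
# Minimal sketch of the Dice similarity coefficient (DSC) between two binary masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """DSC = 2 * |pred ∩ truth| / (|pred| + |truth|), in [0, 1]."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection) / (pred.sum() + truth.sum() + eps)

# Example with toy 2D masks (a real case would use full 3D segmentation volumes).
pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:3] = 1
truth = np.zeros((4, 4), dtype=int); truth[1:3, 1:4] = 1
print(round(dice_coefficient(pred, truth), 3))  # 0.8
```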
Table 1. Machine learning-based segmentation methods for prostate MRI. The abbreviations are shown below 1.
Year | Method | Prostate Zone | Input Dimension (Pixel/Voxel/mm) | Data Source | MRI Sequence(s) | Train/Val/Test | CV | Acc (%) | DSC (%) | Refs.
2008 | Nonrigid registration of prelabelled atlas images | WG | 512 × 512 × 90, 271 × 333 × 86 | Pv | T2w | 38/-/50 | No | - | 85 | [27]
2009 | Level set | WG | - | Pv | DWI | 10/-/10 | No | - | 91 | [28]
2012 | AAM | WG | 0.54 × 0.54 × 3 mm | Pv | T2w | 86/-/22 | 5-fold | - | 88 | [29]
2007 | Organ model-based, region-growing | WG | 3D | Pv | T1w, T2w | 15/-/24 | No | 94.75 | - | [30]
2014 | RF and graph cuts | WG | 512 × 512 or 320 × 320 | PRO12 | T2w | 50/-/30 | 10-fold | - | >91 (training), >81 (test) | [31]
2014 | Atlas-based AAM and SVM | WG | 512 × 512 | Pv | T2w | 100/-/40 | leave-one-out | 90 | 87 | [32]
2016 | Atlas and C-means classifier | WG, PZ, TZ | Varying sizes | PRO12, Pv | T2w | 30/-/35 | No | - | 81 (WG), 70 (TZ), 62 (PZ) | [33]
2016 | Volumetric CNN | WG | 128 × 128 × 64 | PRO12 | T2w | 50/-/31 | No | - | 86.9 | [34]
2017 | FCN | WG, TZ | 0.625 × 0.625 × 1.5 mm | PRO12 | T2w | 50/-/30 | 10-fold | - | 89.43 | [35]
2021 | V-Net using bicubic interpolation | WG | 1024 × 1024 × 3 × 16 | PRO12, Pv | T2w | 106/-/30 | Yes | - | 96.13 | [36]
2019 | Cascade dense-UNet | WG | 256 × 256 | PRO12 | T2w | 40/-/10 | 5-fold | - | 85.6 | [37]
2021 | 3D-2D UNet | WG | - | Pv | T2w | 299/-/- | 5-fold | - | 89.8 | [38]
2020 | convLSTM and GGNN | WG | 28 × 28 × 128 | PRO12, ISBI13, Pv | T2w | 140/-/30 | No | - | 91.78 | [39]
2020 | Transfer learning, data augmentation, fine-tuning | WG, TZ | - | Pv | T2w | 684/-/406 | 10-fold | - | 91.5 (WG), 89.7 (TZ) | [40]
2021 | Federated learning with AutoML | WG | 160 × 160 × 32 | MSD-Pro, PRO12, ISBI13, PROx | T2w | 344/46/96 | No | - | 89.06 | [41]
2020 | Anisotropic 3D multi-stream CNN | WG | 144 × 144 × 144 | PRO12, Pv | T2w | 87/30/19 | 4-fold | - | 90.6 (base), 90.1 (apex) | [42]
2020 | MS-Net | WG | 384 × 384 | Pv | T2w | 63/-/16 | No | - | 91.66 | [43]
2017 | FCN | WG, TZ | 144 × 144 × 26 | PRO12, Pv | DWI | 141/-/13 | 4-fold | 97 | 93 (WG), 88 (TZ) | [44]
2020 | Transfer learning | WG, TZ | 1.46 × 1.46 × 3 mm | Pv | DWI | 291/97/145 | No | - | 65 (WG), 51 (TZ) | [45]
2019 | Cascaded U-Net | WG, PZ | 192 × 192 | Pv | DWI | 76/36/51 | No | - | 92.7 (WG), 79.3 (PZ) | [46]
2021 | Three 3D/2D UNet pipeline | WG, PZ, TZ | 256 × 256 × (3 mm) | Pv | T2w | 145/48/48 | No | - | 94 (WG), 91.4 (TZ), 77.6 (PZ) | [47]
2021 | U-Net, ENet, ERFNet | WG, PZ, TZ | 512 × 512 | PROx | T2w | 99/-/105 | 5-fold | - | ENet (best): 91 (WG), 87 (TZ), 71 (PZ) | [48]
2021 | Transfer learning, aggregated learning, U-Net | WG, PZ, CG | 192 × 192, 192 × 192 × 192 | ISBI13 | T2w | 5–40/-/20 | 5-fold | - | 73 (PZ), 83 (CG), 88 (WG) | [49]
2018 | PSNet | WG | 320 × 320 × 512 × 512 | PRO12, ISBI13 | T2w | 112/-/28 | 5-fold | - | 85 | [50]
1 Val = validation, CV = cross-validation, Acc = accuracy, DSC = Dice similarity coefficient, Refs. = references, - = not reported. For datasets, Pv = private, PRO12 = PROMISE12 [51], ISBI13 = NCI-ISBI 2013 Challenge [52], MSD-Pro = MSD Prostate [53], PROx = PROSTATEx Challenge [15]. For prostate zones, WG = whole gland, TZ = transition zone, PZ = peripheral zone, CG = central gland.

2.2.1. Traditional Machine Learning Methods

Traditional segmentation methods can be summarized as atlas-based models, deformable models, and feature-based ML methods. An "atlas" is a collection of manually segmented structures, which serves as a reference to be registered with additional or new patient images on longitudinal follow-up. A study in 2008 [27] applied inter-subject registration of atlas images to pelvic MR images. The deformed images were fused using various voting techniques to generate a consensus label for the segmentation of new patients' images, resulting in a DSC of 0.85. Deformable models, such as level sets [28] and active appearance models (AAMs) [29], make use of prior geometric or statistical knowledge for segmentation. In 2007, Pasquier et al. [30] applied a statistical model for prostate gland segmentation and region-growing methods for rectum and bladder segmentation. In 2009, Liu et al. [28] used prior shape information with the level set method, yielding a mean DSC of 0.91. Another approach uses feature-based machine learning methods to cluster the image into prostate and background regions. ML classifiers such as random forests (RF) and support vector machines (SVM), widely adopted for binary or multi-class classification, are applied here to assign membership values to either the target or background class. For example, Mahapatra et al. [31] used two sets of RFs for the identification of super-voxels and the classification of prostate voxels using surrounding features; a graph cut method using the RF output as the cost function then achieved a Dice metric of more than 0.81 on the test set.
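To make the feature-based approach concrete, the sketch below illustrates per-voxel classification with a random forest on simple hand-crafted features; it is a simplified illustration on assumed synthetic data, not the pipeline of [31].

```python
# Minimal sketch of feature-based voxel classification: each voxel is described
# by simple hand-crafted features and classified as prostate vs. background.
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
image = rng.normal(size=(64, 64))           # stand-in for one T2w slice (synthetic)
mask = np.zeros((64, 64), dtype=int)        # expert ground-truth labels (synthetic)
mask[20:44, 20:44] = 1
image[mask == 1] += 2.0                     # make the "prostate" region brighter

# Per-voxel features: raw intensity, smoothed intensity, gradient magnitude.
features = np.stack([
    image,
    ndimage.gaussian_filter(image, sigma=2),
    ndimage.gaussian_gradient_magnitude(image, sigma=2),
], axis=-1).reshape(-1, 3)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(features, mask.ravel())
pred_mask = clf.predict(features).reshape(mask.shape)
print((pred_mask == mask).mean())           # training accuracy on this toy slice
```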
Hybrid methods, which combine different models, including some of the aforementioned ones, to achieve improved performance, are also common. For instance, in 2014, Cheng et al. [32] combined an atlas-based AAM for registration with an SVM for classification, achieving a relatively high segmentation accuracy of 90% for prostate delineation. Later, in 2016, Chilali et al. [33] developed a prostate and zonal segmentation model using atlas images and C-means clustering, achieving mean Dice values of 0.81, 0.70, and 0.62 for the prostate, the transition zone, and the peripheral zone, respectively. Regardless of their differences, all these methods ultimately require manual ground truth segmentation by experts in order to achieve reasonably accurate modelling results.

2.2.2. Deep Learning-Based Methods

Recently, deep convolutional neural networks (deep CNNs) for prostate segmentation have been developed and demonstrate better segmentation accuracy than conventional ML methods [25,54]. This is due to their ability to learn appropriate features from the input images, and to data augmentation methods that provide more data to train the network. The review in [25] classified DL algorithms into four groups according to the segmentation technique used: feature encoder, up-sampling, resolution increment of features, and regional proposal-based techniques. Another review published in 2020 [55] listed deep learning-based prostate MRI and CT segmentation methods, with fewer papers covered than in [25] (19 vs. 110) for MRI segmentation. The papers were classified based on the methods applied throughout the DL-based segmentation process (e.g., data pre-processing, loss function, optimizer, ground truth, post-processing).
The fully convolutional network (FCN) [56] is the most widely used method for semantic segmentation, employing skip connections, pooling, and up-sampling. U-Net [57] is a further extension of the FCN: a regular contracting CNN is followed by up-sampling layers that restore the size of the feature map, and it has demonstrated striking performance and been widely used in prostate and zonal segmentation. In 2016, Milletari et al. [34] built on the U-Net architecture to develop a volumetric FCN (V-Net) to segment prostate volumes from MRI. To date, many novel deep learning variations have been applied to improve segmentation performance by modifying FCN, U-Net, or V-Net. For example, Yu et al. [35] introduced long and short residual connections into a 3D FCN, Tian et al. [50] fine-tuned an FCN that had been pre-trained on a large dataset, and Li et al. [37] added dense blocks and transition layers to two cascaded U-Net models. Recently, Jin et al. [36] improved V-Net using bicubic interpolation and achieved DSCs of 0.975 and 0.983 on two public datasets. Ushinsky et al. [38] introduced a novel 3D-2D U-Net CNN that was trained on 3D images and performed object detection and segmentation on 2D slices, resulting in a DSC of 0.898.
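For readers unfamiliar with the architecture, the following is a minimal, illustrative sketch of a U-Net-style encoder–decoder with skip connections for 2D slices, assuming PyTorch; channel widths and depth are arbitrary and do not correspond to any cited model.

```python
# Minimal U-Net-style encoder-decoder with skip connections for 2D MRI slices.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, preserving spatial size (padding=1).
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=1):
        super().__init__()
        self.enc1 = conv_block(in_ch, 16)
        self.enc2 = conv_block(16, 32)
        self.bottleneck = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec2 = conv_block(64, 32)   # 32 (up-sampled) + 32 (skip connection)
        self.up1 = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec1 = conv_block(32, 16)   # 16 (up-sampled) + 16 (skip connection)
        self.head = nn.Conv2d(16, n_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                       # full-resolution features
        e2 = self.enc2(self.pool(e1))           # 1/2 resolution
        b = self.bottleneck(self.pool(e2))      # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                    # per-pixel logits

# Example: one hypothetical T2w slice, batch of 1, 128 x 128 pixels.
logits = TinyUNet()(torch.randn(1, 1, 128, 128))
print(logits.shape)  # torch.Size([1, 1, 128, 128])
```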
When performing segmentation, the multi-modal 3D MR images can either be used directly or split into 2D slices as input to the DL model. Neither approach is without limitations. Stitching 2D segmentations back into a 3D result may cause jaggedness and discontinuities, whereas the amount of 3D MRI data is usually insufficient to train a deep 3D neural network, making it difficult to increase segmentation accuracy.
To better learn the correlation between neighbouring image slices in 2D segmentation, Tian et al. [39] proposed a convolutional long short-term memory (convLSTM) model, which enables the model to capture the spatial relations between neighbouring vertices. To address the lack of MRI data for training deep networks, several methods have been proposed. Sanford et al. [40] showed that the combination of transfer learning, data augmentation, and test-time fine-tuning could benefit prostate segmentation. Roth et al. [41] combined federated learning (FL) and automated machine learning (AutoML) [58] to increase data diversity, achieving an average Dice score of 0.89. Meyer et al. [42] proposed an anisotropic multi-stream segmentation CNN that processes additional scan directions (dual-plane and triple-plane) rather than axial scans alone, which significantly improved (p < 0.05) over segmentation on a plain axial view (base DSC 0.906 vs. 0.898, apex DSC 0.901 vs. 0.888). Liu et al. [43] proposed MS-Net for segmentation from multiple sources of data using a knowledge transfer method, achieving an overall Dice score of 0.9166 across three sites.
Amongst the myriad of possible approaches, we anticipate that the FCN-based U-Net architecture will continue to be applied as a backbone in most segmentation tasks, while data augmentation and transfer learning can effectively improve segmentation performance. That said, head-to-head comparisons of the more robust and mature methods will become necessary in the future.

2.2.3. Zonal Segmentation

While the majority of current works have focused on whole prostate gland segmentation, some other studies have focused instead on the internal structure of the prostate, such as the delineation between the peripheral zone (PZ) and the transition zone (TZ). PZ is the origin of most carcinomas, while cancers in TZ, though less common, are more difficult to detect, due to concomitant benign prostatic hyperplasia (BPH). This has led to distinct clinical evaluation criteria for the different zones on the widely used PI-RADS. In order to train machines towards accurately segmenting tumours, it is necessary to segment different zones inside the prostate gland first.
Clark et al. [44] combined a VGG ConvNet [59] and a U-Net-based architecture for whole gland (WG) and TZ segmentation. Motamed et al. [45] applied transfer learning and fine-tuning to a modified U-Net architecture for WG and TZ segmentation. Zhu et al. [46] used K-means clustering for coarse segmentation and a cascaded U-Net model for WG and PZ segmentation; the algorithm achieved higher DSCs than U-Net (0.93 vs. 0.87 and 0.79 vs. 0.67 for WG and PZ, respectively). In 2021, Bardis et al. [47] used a pipeline of three 3D-2D U-Net models to first localize a square region containing the prostate, then segment the WG, and finally separate the TZ from the PZ, achieving DSCs of 0.94, 0.914 and 0.776 for WG, TZ and PZ, respectively.
Recently, several studies have compared different deep CNN models for zonal segmentation. Cuocolo et al. [48] compared three deep learning methods, U-Net, the efficient neural network (ENet) [60], and the efficient residual factorized ConvNet (ERFNet) [61], on the PROSTATEx public dataset. ENet (0.91, 0.87, 0.71) and U-Net (0.88, 0.86, 0.70) were more accurate than ERFNet (0.87, 0.84, 0.65) in terms of DSC (for WG, TZ and PZ, respectively), while ENet outperformed the other two methods with faster convergence and fewer parameters. Saunders et al. [49] compared the performance of independent training, transfer learning, and aggregated learning based on 3D and 2D U-Net models under the constraint of limited training data. In addition, 3D U-Net was found to be more robust to a small sample size (five training cases) than 2D U-Net, by an average DSC margin of 0.18, while transfer learning and aggregated learning (similar DSCs: 0.73, 0.83, 0.88 for PZ, CG, WG, respectively) both outperformed independent training (DSC 0.65, 0.77, 0.83) when using five internal training cases. Predictably, automated segmentation between PZ and TZ can become challenging in cases where tumours span both zones, since false positives such as prostatitis in the PZ reduce its normally high T2 signal to become isointense to the TZ, whereas severe benign prostatic hypertrophy in the TZ compresses the PZ, reducing the ability to discriminate between the two zones (Figure 2, Figure 3 and Figure 4).

2.3. Image Registration

The aim of image registration is to transform different types of images into the same coordinates to ensure spatial correspondence. For interventions such as transrectal biopsy, focal therapy and high dose rate (HDR) brachytherapy procedures, the fusion of MR images with real-time ultrasound images facilitates accurate localisation for needle placement [62]. For patients who have undergone radical prostatectomy, correlating pre-surgical MRI with whole-mount histopathology images could provide high-resolution information on the extension of cancer [63]. Moreover, dominant intraprostatic lesion (DIL) delineation from MRI (mpMRI) to CT images during radiation therapy also needs accurate MRI–CT image registration.
For feature-based registration, relatively sparse features need to be consistently matched between the two imaging modalities. Manually obtaining landmarks from both modalities is strenuous and often impossible during an intervention. Cognitive fusion is possible, but it involves a steep learning curve for procedurists. Meanwhile, further research is required to determine whether software-based fusion is superior to cognitive fusion [64].
To date, ML- and DL-based approaches have been applied to the task of registration between mpMRI and other types of images. We will discuss the three most popular areas, MRI–ultrasound (MRI–US), MRI–histopathology, and MRI–CT registration. The studies mentioned in this section are summarised in Table 2. For quantitative validation, target registration error (TRE) is the most frequently used measurement, calculated as the root-mean-square distance over all pairs of corresponding landmarks in the registration image pairs for each patient. There are three types of image registration (rigid, affine, and deformable registration) depending on the transformations applied [65].
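A minimal sketch of how the TRE can be computed from corresponding landmark pairs is given below; the landmark coordinates are illustrative values in millimetres, not measurements from any cited study.

```python
# Minimal sketch of the target registration error (TRE): the root-mean-square
# distance over corresponding landmark pairs after registration.
import numpy as np

def target_registration_error(fixed_pts: np.ndarray, warped_pts: np.ndarray) -> float:
    """fixed_pts, warped_pts: (N, 3) arrays of corresponding landmarks (mm)."""
    squared_dist = np.sum((fixed_pts - warped_pts) ** 2, axis=1)
    return float(np.sqrt(squared_dist.mean()))

# Hypothetical landmark positions before/after warping the moving image.
fixed = np.array([[10.0, 12.0, 30.0], [25.0, 18.0, 32.0], [15.0, 40.0, 28.0]])
warped = fixed + np.array([[1.0, -0.5, 0.3], [0.8, 0.2, -0.4], [-0.6, 1.1, 0.5]])
print(round(target_registration_error(fixed, warped), 2))  # ~1.15 mm
```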

2.3.1. MRI–US

Generally, transrectal (TR) or transperineal (TP) ultrasound images serve as the main guidance for prostate biopsies. However, they display poor contrast between healthy and cancerous regions. On the other hand, MRI, or mpMRI, is currently the mainstay for the detection and localization of PCa, and MRI-guided targeted biopsy pathways have been shown to increase the detection of clinically significant prostate cancer [4]. However, in-bore MRI-based interventions are cumbersome and consume valuable magnet scan time. As a result, MRI–US fusion techniques are increasingly favoured (Figure 5).
Traditional ML approaches for MRI–US registration have mainly been based on biomechanical models and statistical deformation models. In 2002, Mohamed et al. [66] first proposed a finite-element (FE) analysis to simulate the prostate deformation induced by the transrectal ultrasound probe. Principal component analysis (PCA) and linear least squares fitting were applied to construct deformed prostate shapes from CT or MRI. Thereafter, Hu et al. [67,68] incorporated biomechanical parameters with random sampling and PCA. However, the randomly sampled biomechanical parameters may differ greatly from the real values of the patient, and hence may not be appropriate for general use. Subsequently, Wang et al. [69] proposed a patient-specific deformable registration model using a hybrid surface point matching method; their model achieved a TRE of around 1.44 mm. Nevertheless, all the above-mentioned methods relied on rigid surface registration and required prostate surface segmentation for initialization, while the biomechanical models are not readily available for practical clinical application.
Recently, deep learning models have been introduced for image registration. The principle is using the fixed and moving images to train the deep learning network to predict the appropriate transformation matrix between them. However, supervised deep learning is currently difficult to apply here, due to the unavailability of ground truth deformation.
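The sketch below illustrates this principle for the simplest (affine) case: a small CNN takes the fixed and moving images, predicts a 2D affine transformation, and warps the moving image with a spatial transformer, trained here with an illustrative unsupervised similarity loss. It assumes PyTorch and is not the method of any specific cited study; all sizes and the loss are assumptions.

```python
# Minimal sketch: a CNN predicts an affine transform between fixed and moving
# images and applies it with a spatial transformer (affine_grid + grid_sample).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineRegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(16, 6)              # flattened 2x3 affine matrix
        # Initialise to the identity transform so training starts from "no warp".
        nn.init.zeros_(self.fc.weight)
        self.fc.bias.data = torch.tensor([1., 0., 0., 0., 1., 0.])

    def forward(self, fixed, moving):
        theta = self.fc(self.features(torch.cat([fixed, moving], dim=1)).flatten(1))
        theta = theta.view(-1, 2, 3)
        grid = F.affine_grid(theta, moving.size(), align_corners=False)
        warped = F.grid_sample(moving, grid, align_corners=False)
        return warped, theta

fixed = torch.randn(1, 1, 64, 64)               # e.g. an MRI slice (synthetic here)
moving = torch.randn(1, 1, 64, 64)              # e.g. a TRUS slice (synthetic here)
warped, theta = AffineRegNet()(fixed, moving)
loss = F.mse_loss(warped, fixed)                # illustrative unsupervised similarity loss
print(warped.shape, loss.item())
```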
To tackle this issue, ongoing research focuses on weakly supervised or unsupervised deep learning models. Hu et al. [70,71] proposed a weakly supervised learning method with a CNN using manually labelled anatomical structures. Yan et al. [72] proposed a multi-modality generative adversarial network (GAN); an unsupervised DL network enabled the simultaneous training of CNNs for transformation parameter estimation and registration quality evaluation. Zeng et al. [73] proposed a weakly supervised MRI–transrectal ultrasound (MRI–TRUS) registration method that first segmented the prostate on MRI and TRUS, and then applied affine and non-rigid registration using an FCN, a 2D CNN and a 3D U-Net; the method produced a mean TRE of 2.53 mm. Similarly, Chen et al. [74] in 2021 also employed a segmentation model before registering MR images to the treatment-planning US images acquired after needle insertion for HDR brachytherapy.
To overcome the problem of rigid alignment error and organ deformation in real-time intervention, Bhardwaj et al. [75] proposed a 2D/3D DL-based constrained registration for real-time rigid and deformable error correction, achieving a TRE of 2.988 mm on clinical data. Biomechanical properties combined with DL networks have also demonstrated effectiveness in several studies. Hu et al. [76] applied biomechanical FE simulations to regularize the registration in an adversarial learning approach, resulting in a registration error of 6.3 mm and a DSC of 0.82. Yang et al. [77] proposed a framework that first segmented the MRI and TRUS images, and then applied a point-cloud-based network with a built-in biomechanical constraint for registration; the model achieved a 1.57 mm TRE and a 0.94 DSC, approaching what could be implemented in clinical routine. It is predicted that these techniques will significantly advance our ability to perform accurate needle placement for tissue sampling or local ablation (such as cryotherapy), where MR images can be used as the ground truth and fused with real-time TRUS, on which considerable deformation of the gland occurs during the procedure.

2.3.2. MRI–Histopathology

For patients who have undergone radical prostatectomy (RP), the whole-mount histopathology images can be correlated with pre-surgical MRI, such that cancers (and their attendant Gleason grades) are accurately mapped. Developing such a mapping between histopathology images and MRI could improve existing MRI interpretation, as well as facilitate machine learning methods for identifying prostate cancer on MRI by providing accurate cancer labels, as introduced in the next section. Rusu et al. [79] developed RAPSODI for the registration of radiology and pathology images; the framework created a digital representation of tissue from the histopathology specimen to provide cancer labels for MRI. In 2020, Shao et al. [81] applied mono-modal MR and histopathology image pairs to a DL network for the estimation of affine and deformable transformations, which were then applied to map the cancer labels onto MRI; the method performed similarly to RAPSODI. However, these strategies assumed slice-to-slice correspondence between histopathology and MR images, which would require a significant alteration to the clinical workflow and is typically not practiced in most centres performing RP. Sood et al. [82] introduced a novel GAN-based framework without the need for slice-to-slice correspondence; the learned information from 3D MRI and histopathology slices was applied to map the extent of cancer onto MRI, and the model achieved a 3D DSC of 0.95 for the prostate gland and 0.68 for cancer. Advancements in this domain will invariably allow for a better understanding of treatment response to local ablative therapies, where the intent is to concentrate the treatment dose on the tumour site and spare the non-tumorous areas.

2.3.3. MRI–CT

Co-registration of MRI with computed tomography (CT) is promising for radiation therapy planning and delivery in patients with prostate cancer. This approach combines the superior soft tissue delineation of MRI with the lower-cost CT-based linear accelerator (Linac) units for optimal dose delivery in localized disease. In 2019, Ghazal et al. [78] used an RF based on an auto-context model to create synthetic CT images from MR images for dose calculations, resulting in a pass rate of more than 99% in gamma analysis. Recently, Fu et al. [80] used a 3D point cloud matching network after segmentation of the MR and CBCT images for registration; their TRE was 2.68 mm, and the DSC was 0.93. In practice, there is a relatively urgent need for the clinical implementation of this technique, given the substantially higher cost of MRI-based Linacs and the increasingly advanced capability of current radiotherapy machines in delivering modulated treatment doses to reduce the detrimental side effects of regional non-tumorous tissue damage. ML-/DL-based techniques in these clinical settings would also be particularly useful to facilitate treatment planning and diagnosis for surgeons, pathologists and radiotherapists, who do not usually receive formal radiology training and may not be familiar with MRI interpretation.

2.4. Lesion Detection and Characterization

Ultimately, the optimal clinical treatment of prostate cancer requires the accurate grading and staging of disease. Prostate cancers can be divided into clinically significant and clinically insignificant cancers based on grade (Gleason score) and aggressiveness [83]. Clinically significant cancers (csPCa) usually require definitive treatment, such as surgery or radiotherapy, while clinically insignificant cancers can be managed with active surveillance. Currently, the Prostate Imaging-Reporting and Data System (PI-RADS) is the main scoring system indicating the probability of csPCa on MRI. However, even with the current version, PI-RADS v2.1, there remains considerable inter-/intra-reader disagreement [84,85]. Hence, many ML and DL methods have been developed to refine the differentiation between csPCa and non-csPCa, improve PI-RADS categorization, and possibly predict the Gleason score (GS). A recently published review [86] covered prostate lesion classification and detection studies proposed between 2018 and February 2021, showing that most ML-/DL-based approaches address PCa lesion classification (either two-class or multi-class according to lesion aggressiveness), followed by lesion detection (the detection and localization of lesions).
In this section, we summarize studies using ML and DL for prostate lesion detection and lesion scoring on MRI. Lesion detection is treated as differentiating csPCa from non-csPCa, as well as detecting the lesion region, while lesion scoring predicts the PI-RADS category or Gleason score/grade. The studies covered in this section are summarized in Table 3.

2.4.1. Lesion Detection

Radiomic features are defined as quantified, high-dimensional medical imaging features extracted using ML [109]. Recent studies mainly follow a procedure that first extracts predictive radiomic features highly correlated with the presence of csPCa, then applies the extracted features to ML-based classifiers to distinguish clinically significant from non-significant PCa. For example, Algohary et al. [87] and Min et al. [88] applied the minimum redundancy maximum relevance (MRMR) method for feature extraction, while the latter further used the least absolute shrinkage and selection operator (LASSO) algorithm [110], which selects strongly correlated features by shrinking the regression coefficients of the remaining features towards zero. Wu et al. [89] applied mpMRI and radiomic features to logistic regression (LR) and SVM models for diagnosing PCa in the TZ (AUC 0.989 and 0.949 for LR and SVM, respectively) and for differentiating TZ PCa from stromal BPH (AUC 0.976). Texture analysis has the advantage of reproducibility and the ability to detect image features that may be beyond the limits of visual analysis by the human eye, and hence can also be a useful tool to quantitatively describe tumour heterogeneity [111]. Textured-DL [90], consisting of a 3D grey-level co-occurrence matrix extractor and a CNN to differentiate csPCa from non-csPCa, showed a significantly higher AUC (0.85) than PI-RADS-based classification (0.73).
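As an illustration of this radiomics workflow, the sketch below applies LASSO-based feature selection followed by a cross-validated classifier on a synthetic feature matrix; the data, labels and hyperparameters are assumptions and do not reproduce the cited pipelines [87,88,89].

```python
# Minimal sketch: LASSO shrinks weakly informative radiomic features to zero,
# and the surviving features feed a downstream classifier.
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 200))                 # 120 lesions x 200 synthetic radiomic features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=120) > 0).astype(int)

# 1) LASSO-based selection: features with non-zero coefficients are retained.
lasso = LassoCV(cv=5, random_state=0).fit(StandardScaler().fit_transform(X), y)
selected = np.flatnonzero(lasso.coef_)
print("selected features:", selected[:10])

# 2) Cross-validated classification (csPCa vs. non-csPCa) on the selected features.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
auc = cross_val_score(clf, X[:, selected], y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", auc.mean().round(3))
```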
Very recently, Aldoj et al. [91] tested different combinations of distinct 3D MRI sequences as input to a 3D CNN, to find the best combination for PCa classification; the lesion location was required as an input. The combination of ADC, DWI, and K-trans inputs performed best, with an AUC of 0.897. This technique has the advantage of accurately combining image inputs from different MRI pulse sequences and meta-data, something which is virtually impossible for a human reader.
Unfortunately, DL networks in medical imaging suffer from the small number of labelled datasets, even more so when developing algorithms to delineate foci of clinically significant disease on MRI. Transfer learning is one means of overcoming this problem, by enabling knowledge gained from large datasets of other images to be transferred to smaller, relevant datasets.
Chen et al. [92] applied transfer learning to two ImageNet pre-trained models for feature extraction and lesion classification, with the earlier feature-extraction layers frozen during training, and obtained AUCs of 0.81 and 0.83 for the two pre-trained models. Yuan et al. [93] established three pre-trained architectures to compute features from three sequences of mpMRI; their model achieved an accuracy of 86.92% for prostate cancer classification. Separately, Zhong et al. [94] applied a deep transfer learning (DTL)-based model using two ResNets; the AUC of the DTL model (0.726) was higher than that of the same architecture trained from scratch (0.687) and that of PI-RADS v2 classification (0.711).
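The common pattern behind these studies can be sketched as follows: an ImageNet pre-trained backbone (torchvision's ResNet-18 is used here purely as an example, and downloading its weights is assumed) is frozen, and only a new classification head is trained on the smaller prostate dataset. This is an illustrative sketch, not the exact models of [92,93,94].

```python
# Minimal transfer-learning sketch: freeze a pre-trained backbone, train a new head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pre-trained feature extractor so it is not updated during training.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head; only this layer will be trained.
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3
)
criterion = nn.CrossEntropyLoss()

# Toy forward/backward pass; a real pipeline would feed lesion patches
# replicated to 3 channels (or use an adapted first convolution).
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 0, 1])
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(loss.item())
```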
Beyond classifying pre-defined ROIs on prostate MRI, several studies have gone further to automatically detect and segment suspicious PCa areas. In 2015, Giannini et al. [95] used an SVM to create a malignancy probability map for all voxels of the prostate, yielding a voxel-wise segmentation AUC of 0.91. Another study [96] employed a least squares regression model to generate segmentations of epithelium and lumen maps, resulting in an AUC of 0.79. To detect csPCa in low-risk patients who opt for active surveillance, Arif et al. [98] used data from low-risk PCa patients to train a U-shaped 3D CNN for lesion segmentation, achieving AUCs from 0.65 to 0.89 for lesion volumes ranging from >0.03 to >0.5 cc. Recently, Zhang et al. [97] combined GrowCut techniques to segment prostate cancer from MR images, Zernike feature selection to extract image features, and ensemble learning techniques including KNN, SVM, and MLP to determine and diagnose the lesions, improving accuracy by 20% compared with other methods using similar approaches (accuracy 80.97%). Seetharaman et al. [99] introduced the Stanford Prostate Cancer Network (SPCNet) to learn features specific to each MRI sequence mapped with histopathology images, achieving an AUC of 0.86–0.89 for detecting aggressive cancers and 0.75–0.85 for detecting clinically significant lesions.

2.4.2. Lesion Scoring (PI-RADS and Gleason Score)

Due to the overlap between PCa lesions and benign mimics such as BPH and prostatitis, a key tenet of PI-RADS has been to stratify lesions based on the probability of malignancy, rather than taking a binary approach to image-based diagnosis. DL approaches encode ground truth scores and MRI features for multi-class prediction. Sanford et al. [17] used a ResNet34 CNN to predict the PI-RADS scores of manually segmented lesions; the predicted PI-RADS scores agreed with the expert radiologist's, with a kappa score of 0.40. de Vente et al. [103] and Cao et al. [104] applied ordinal encoding [112], which utilizes the ordinal nature of the Gleason score (GS), in CNN models for predicting the GS of PCa. A novel study in 2018 [107] determined the Gleason grade (GG) of PCa using stacked sparse autoencoders (SSAE) and a SoftMax classifier; the algorithm won first place in the PROSTATEx-2 2017 challenge, with a kappa score of 0.2326.
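For clarity, the sketch below shows one common form of ordinal encoding for an ordered target such as the Gleason grade group; it follows the general idea referenced above [112], not the exact implementations of [103,104], and the grade range is an illustrative assumption.

```python
# Minimal sketch of ordinal encoding: an ordered grade is represented by
# cumulative binary targets ("grade > 1", "grade > 2", ...), so model outputs
# respect the ordering of the classes.
import numpy as np

def ordinal_encode(grade: int, n_classes: int = 5) -> np.ndarray:
    """Grade group g -> binary vector [g > 1, g > 2, ..., g > n_classes - 1]."""
    return (grade > np.arange(1, n_classes)).astype(float)

def ordinal_decode(probs: np.ndarray, threshold: float = 0.5) -> int:
    """Predicted probabilities -> grade: 1 + number of thresholds exceeded."""
    return 1 + int((probs > threshold).sum())

print(ordinal_encode(3))                                 # [1. 1. 0. 0.]
print(ordinal_decode(np.array([0.9, 0.8, 0.3, 0.1])))    # 3
```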
As mentioned earlier, lesions have different characteristics in different prostate zones. Several methods have fed zone-specific imaging features to different ML/DL classifiers for PCa scoring. For example, Jensen et al. [106] and de Vente et al. [105] applied a k-nearest neighbour classifier and a U-Net-based model, respectively, for GG prediction, and Wang et al. [102] employed an SVM-RFE model for PI-RADS score prediction. Their results showed that zone-specific information and radiomic features could significantly improve the prediction of aggressiveness scores for prostate lesions.

2.5. Treatment Decision Support

For patients with newly diagnosed prostate cancer, the role of MRI is to help determine the best treatment option. For example, the presence of locally advanced disease with extra-prostatic extension (EPE) and invasion of the neurovascular bundles increases the complexity of surgical resection, but remains amenable to radiation therapy. In metastatic bone or nodal disease, systemic hormonal therapy is preferred. To date, machine learning-based methods for improving treatment prediction have been limited to determining the presence of EPE and estimating biochemical recurrence risk. The studies mentioned in this section are summarized in Table 4.

2.5.1. EPE Prediction

Multiparametric MRI plays an important role in the local staging of prostate cancer. However, for the diagnosis of EPE, MRI has variable sensitivity, reportedly as low as 0.57 [130]. The accuracy of diagnosis strongly correlates with the experience level of the radiologists interpreting the scans [131].
Recently, radiomics-based EPE prediction has emerged. Stanzione et al. [113] predicted EPE using texture features extracted from manually segmented mpMRI maps of index lesions; comparing different feature selection models and ML classifiers, the Bayesian network performed best, with 82% of instances correctly classified and an AUC of 0.88. Similarly, Ma et al. [114] and Xu et al. [115] extracted radiomic features from mpMRI and used LASSO logistic regression to predict extracapsular extension (ECE) and EPE, yielding AUCs of 0.83 and 0.865, respectively. Losnegård et al. [116] combined RF analysis with radiology interpretation and the MSKCC nomogram, resulting in an AUC of 0.79 for EPE prediction. Cuocolo et al. [117] applied an SVM to training and test data from three different institutions, achieving an overall accuracy of 83%. Because the real-world implementation of ML algorithms often suffers from a drop in diagnostic performance, more extensive validation and testing of the aforementioned models are required.

2.5.2. Biochemical Recurrence Prediction

Biochemical recurrence (BCR) is used to assess the outcome of radical prostatectomy. It indicates an increase in the PSA level of patients who have undergone radiotherapy or surgery for PCa [132]. Pre-operative BCR prediction could help urologists and patients decide on an acceptable treatment option, while post-operative BCR prediction enables optimal surveillance.
Currently, GS, preoperative PSA and pathological stage are the main parameters used to predict the risk of BCR. For higher accuracy and specificity, multivariate LR models such as nomograms [133] have been used to improve prediction. Although several nomograms for predicting BCR have been internationally validated, these tools have been developed with limited (often single-institution) datasets, and therefore have limited accuracy [134]. MRI has been increasingly employed as an adjunct tool for improving the performance of BCR prediction. However, the clinical application of MRI can be limited by inter-reader variability and the occasional degradation of image quality from magnetic field-related artefacts. Despite these challenges, several traditional ML models have been designed to identify the salient features in MRI for BCR prediction.
Incorporating MRI findings into traditional regression methods has been found useful for BCR prediction in some studies. Fuchsjäger et al. [118] converted MRI findings into a scoring system using Cox proportional hazards regression, and added the scores to published preoperative nomograms for BCR prediction; the model resulted in a C-index of 0.776 for 5-year BCR prediction and 0.788 for 10-year prediction. A later study [119] applied univariate and multivariate analyses using the Cox proportional hazards model to find the correlation between the PI-RADS v2 score and BCR. It is worth noting from this study that patients with PI-RADS < 4 did not suffer from BCR, indicating that PI-RADS v2 may be useful to predict BCR after RP for PCa. More recently, Capogrosso et al. [120] used data from 372 patients in a Cox regression to assess the association between the pre-biopsy mpMRI score and the risk of postoperative BCR. However, the authors did not demonstrate a significant improvement after adding the pre-biopsy mpMRI score to the existing predictive models, probably due to the insufficient data size. A better understanding of what constitutes important MRI features would likely improve the yield that MRI brings to predictive models.
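To make the statistical approach concrete, the sketch below fits a Cox proportional hazards model for BCR with the `lifelines` package; the dataframe, its column names (psa, pirads, months_to_bcr, bcr_event) and all values are hypothetical and purely illustrative, not data from any cited study.

```python
# Minimal sketch of a Cox proportional hazards model for BCR prediction.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "psa":           [6.2, 11.5, 4.8, 22.0, 12.6, 15.3, 7.7, 13.2, 10.4, 18.9],
    "pirads":        [3,   4,    2,   5,    4,    5,    3,   4,    5,    4],
    "months_to_bcr": [60,  24,   72,  10,   40,   18,   55,  30,   48,   14],  # follow-up time
    "bcr_event":     [0,   1,    0,   1,    0,    1,    0,   1,    0,    1],   # 1 = BCR observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_to_bcr", event_col="bcr_event")
cph.print_summary()                       # hazard ratios per covariate
print(cph.concordance_index_)             # C-index on the fitted data
```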
Rather than simply relying on mpMRI scoring systems, which aggregate rather than provide sufficient emphasis on specific important imaging features, radiomic features have been employed with some promise of success. Using Cox regression analysis, Park et al. [121] analysed all clinical variables and tumour ADC data, and found that tumour ADC was the only independent predictive factor for BCR, with an AUC of 0.755. This finding was later corroborated by Bournonne et al. [122], who extracted IBSI-compliant radiomic features [135] from mpMRI and found that one feature (SZEGLSZM) from ADC maps was predictive of BCR and bRFS after prostatectomy, with an AUC of 0.76. Zhang et al. [123] found that an imaging-based approach using SVM was superior to LR analysis in predicting PCa outcome (accuracy 92.2% vs. 79.0%). Another study [124] employed SVM and linear discriminant analysis on radiomic features extracted from T2-weighted and ADC images, with a similar AUC of 0.73.
There are fewer studies using DL to analyse MRI for BCR prediction. As early as 2004, Poulakis et al. [128] compared an artificial neural network (ANN) with classical regression and the Kattan nomogram using pelvic coil MRI; the ANN was significantly more accurate than the other two models (AUC 0.897 vs. 0.738 and 0.728) in predicting BCR. Recently, Yan et al. [125] proposed a DL-based algorithm that consecutively extracted quantitative features from MRI and predicted BCR risk; their model was validated in two independent cohorts, achieving a C-index of 0.802 in both the primary and validating cohorts and showing significant potential for DL-based BCR prediction. Importantly, when comparing various approaches incorporating MRI information (PI-RADS grade vs. specific imaging features, use of DL) into these predictive models, diagnostic performance (AUC) cannot be interpreted at face value, and head-to-head comparisons of the various approaches would better deepen our understanding of which approach to employ for clinical use.

2.5.3. Histological and Outcome Predictions

Often, MRI is not able to accurately depict the extent of disease burden in PCa, notably the presence of lymph node metastases, an important predictor of disease-free and overall survival. Kang et al. [126] compared the performance of RF and the Kattan pre-operative nomogram (KN) in the prediction of organ-confined disease (OCD), EPE and lymph node metastasis using data from 1560 patients, and found that RF may slightly outperform KN (AUC 0.75 vs. 0.69) when the positive and negative outcomes are balanced. Abdollahi et al. [127] utilized pre- and post-operative MRI radiomic features and ML classification methods to predict intensity-modulated radiation therapy (IMRT) response, GS and PCa stage, showing that non-invasive radiomic features from MR images and ML approaches are accessible methods for guiding PCa diagnosis and therapy. Detailed information pertaining to tumour stage, treatment regime and subsequent treatment response can be prospectively amassed for the purpose of developing sophisticated predictive models that could allow for better individualized treatment and surveillance. This will require foresight by healthcare providers in setting up the necessary IT infrastructure to enable relevant big (structured and unstructured) data to be mined and analysed in an informative manner.

3. Machine Learning Applications to Enhance Utility of Prostate MRI: Limitations

Multiple studies have shown potential in the application of ML-/DL-based methods in prostate MRI. There is good reproducibility in prostate gland and zonal segmentation, satisfactory registration between MRI and ultrasound or CT, and comparable performance with expert reads in the detection of clinically significant prostate cancer. However, applicability may be limited by relatively small validation and test datasets, as well as considerable heterogeneity across many of these studies. Furthermore, there is often difficulty obtaining ground truth validation for segmentation and image registration techniques. While the automation of these processes improves time management and allows procedurists to focus on more urgent tasks, more head-to-head comparisons, for example comparing automated vs. expert-read lesion detection and lesion scoring, are needed to compare the diagnostic performance of ML/DL-based methods with conventional interpretation, or demonstrate the added advantage of ML-/DL-based methods over manual interventions.
There remain two potential clinical questions that could be addressed by DL/ML in the future. First, patients with low-risk prostate cancer can undergo active surveillance (AS), where upfront definitive treatment such as surgery or radiotherapy is deferred until there is evidence of disease progression. Prostate MRI is increasingly being integrated into active surveillance protocols for this group of patients, where the aim is to detect radiologic change between interval surveillance MRI scans that would suggest disease progression [136]. ML has been utilised to predict the probability of disease progression for patients on AS based on clinical criteria such as age, family history, serum PSA, and tumour volume [137]. However, to our knowledge, ML/DL methods in prostate MRI have yet to be incorporated into active surveillance strategies. The detection of a significant change between surveillance MRI scans can be challenging for even the most experienced radiologist, and ML/DL methods would be very helpful in this setting. Second, patients diagnosed with BCR often undergo prostate MRI to evaluate for local recurrence. The post-treatment MRI appearance in this group of patients, who have undergone surgery, radiotherapy or focal therapy, differs considerably from pre-treatment MRI. To our knowledge, there are no published studies evaluating ML/DL on post-treatment MRI scans for the detection of local recurrence. The assessment of local recurrence can be challenging for radiologists, given the variable morphological features on MRI, and ML/DL applications would also be very helpful in this setting.

4. Future Opportunities

With the above advances in ML techniques, it is foreseeable that a combination of various approaches could ultimately reproduce a radiologist's role in prostate MRI interpretation. Alkadi et al. [101] developed a DL encoder–decoder architecture that could concurrently segment the prostate, define its anatomical structure, and mark out suspicious lesions; by employing 3D spatial information from the MR series, their accuracy for cancer detection was 0.894. Recently, Mehralivand et al. [100] proposed a cascaded DL model for lesion detection and scoring on bpMRI; the model contained a 3D UNet-based network that automatically detected and segmented prostate MRI lesions, and two 3D residual networks that performed 4-class classification to predict PI-RADS categories, with a mean DSC for lesion segmentation of 0.359. Hoar et al. [54] compared several ML and comprehensive DL models, and found that the combination of transfer learning and test-time augmentation resulted in a significant improvement (DSC 0.59, AUC 0.93) in CNN lesion segmentation for a small set of mpMRI data comprising 154 patients. Whereas image segmentation and registration are relatively simpler tasks, much validation work remains in order to develop these lesion detection and characterization prototypes into diagnostic tools that can meet regulatory standards and gain adoption into mainstream clinical practice.

Federated Learning

Data sharing, privacy and security policies may limit ML model training that requires large (centralized) datasets. In the past few years, federated learning (FL) has attracted much attention in areas where data sharing is a concern. FL is a technology that enables collaborative learning with de-centralized data [138]. However, it is still at an early stage, and challenges remain for medical applications [139], mainly due to policy and constraints on access to learning infrastructure; the efficacy of FL models has to be validated with more real-world studies. A recent survey on FL [140] summarises the related areas of data distribution, privacy protection, machine learning model aggregation, and communication. In prostate imaging applications, recent works focus only on prostate segmentation [42,141], showing better results than models using single data sources. A study using FL for COVID-19 outcome prediction based on EMR data across over 20 countries [142] shows the potential to collectively use multiple data sources for medical applications. Commercial and open platforms are available for researchers; examples include NVIDIA Clara [143], TensorFlow Federated [144], IBM FL [145], OpenFL [146], FATE [147], XayNet [148], and Baidu FL [149], among others. We anticipate that federated learning could accelerate the development of DL algorithms in domains where large datasets are not feasible from single institutions alone, such as prostate MRI.
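To illustrate the core mechanism, the sketch below implements federated averaging (FedAvg) over three simulated sites in PyTorch: each site trains a local copy of the model on its own data, and only the model weights are shared and averaged centrally, never the images. The model, data and hyperparameters are illustrative assumptions, not the configuration of any platform listed above.

```python
# Minimal sketch of federated averaging (FedAvg) across simulated sites.
import copy
import torch
import torch.nn as nn

def local_update(model, data, target, epochs=1, lr=0.01):
    """Train a local copy of the global model on one site's private data."""
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.binary_cross_entropy_with_logits(local(data), target).backward()
        opt.step()
    return local.state_dict()

def federated_average(state_dicts):
    """Element-wise average of the sites' model weights (equal-sized sites assumed)."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

global_model = nn.Linear(10, 1)   # stand-in for a segmentation/classification model
sites = [(torch.randn(32, 10), torch.randint(0, 2, (32, 1)).float()) for _ in range(3)]

for communication_round in range(5):
    local_states = [local_update(global_model, x, y) for x, y in sites]
    global_model.load_state_dict(federated_average(local_states))
print("finished", communication_round + 1, "rounds of FedAvg")
```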

5. Conclusions

Rapid advancements in ML and DL have led to a recent surge in interest in computer vision for medical imaging. Prostate MRI is an ideal imaging modality for DL applications, given that it is the mainstay of lesion detection and in line with the trend towards targeted biopsy and local ablative therapies to improve clinical outcomes. DL has remarkable potential to increase the productivity of radiologists and radiation oncologists over manual segmentation of the organ, reduce interobserver variability in lesion detection and cross-modality image co-registration, and advance the accuracy of PI-RADS through predictive analytics and federated learning, for risk stratification and the individualized care of patients at risk of and with prostate cancer.

Author Contributions

Conceptualization, D.C., C.H.T. and W.H.; methodology, C.H.L., W.H., H.L. and C.H.T.; writing—original draft preparation, H.L., C.H.L., W.H. and C.H.T.; writing—review and editing, H.L., C.H.L., D.C., Z.L., W.H. and C.H.T.; visualization, C.H.L.; supervision, W.H., Z.L. and C.H.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Open search internet, A*Star digital library.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Mottet, N.; van den Bergh, R.C.N.; Briers, E.; Van den Broeck, T.; Cumberbatch, M.G.; De Santis, M.; Fanti, S.; Fossati, N.; Gandaglia, G.; Gillessen, S.; et al. EAU-EANM-ESTRO-ESUR-SIOG Guidelines on Prostate Cancer-2020 Update. Part 1: Screening, Diagnosis, and Local Treatment with Curative Intent. Eur. Urol. 2021, 79, 243–262.
2. Barentsz, J.O.; Richenberg, J.; Clements, R.; Choyke, P.; Verma, S.; Villeirs, G.; Rouviere, O.; Logager, V.; Fütterer, J.J. ESUR prostate MR guidelines 2012. Eur. Radiol. 2012, 22, 746–757.
3. Kasivisvanathan, V.; Rannikko, A.S.; Borghi, M.; Panebianco, V.; Mynderse, L.A.; Vaarala, M.H.; Briganti, A.; Budäus, L.; Hellawell, G.; Hindley, R.G.; et al. MRI-Targeted or Standard Biopsy for Prostate-Cancer Diagnosis. N. Engl. J. Med. 2018, 378, 1767–1777.
4. Ahmed, H.U.; Bosaily, A.E.-S.; Brown, L.C.; Gabe, R.; Kaplan, R.; Parmar, M.K.; Collaco-Moraes, Y.; Ward, K.; Hindley, R.G.; Freeman, A.; et al. Diagnostic accuracy of multi-parametric MRI and TRUS biopsy in prostate cancer (PROMIS): A paired validating confirmatory study. Lancet 2017, 389, 815–822.
5. Connor, M.J.; Gorin, M.A.; Ahmed, H.U.; Nigam, R. Focal therapy for localized prostate cancer in the era of routine multi-parametric MRI. Prostate Cancer Prostatic Dis. 2020, 23, 232–243.
6. De Visschere, P.J.L.; Vral, A.; Perletti, G.; Pattyn, E.; Praet, M.; Magri, V.; Villeirs, G.M. Multiparametric magnetic resonance imaging characteristics of normal, benign and malignant conditions in the prostate. Eur. Radiol. 2017, 27, 2095–2109.
7. Chesnais, A.L.; Niaf, E.; Bratan, F.; Mège-Lechevallier, F.; Roche, S.; Rabilloud, M.; Colombel, M.; Rouvière, O. Differentiation of transitional zone prostate cancer from benign hyperplasia nodules: Evaluation of discriminant criteria at multiparametric MRI. Clin. Radiol. 2013, 68, e323–e330.
8. Brembilla, G.; Dell'Oglio, P.; Stabile, A.; Damascelli, A.; Brunetti, L.; Ravelli, S.; Cristel, G.; Schiani, E.; Venturini, E.; Grippaldi, D.; et al. Interreader variability in prostate MRI reporting using Prostate Imaging Reporting and Data System version 2.1. Eur. Radiol. 2020, 30, 3383–3392.
9. Park, K.J.; Choi, S.H.; Lee, J.S.; Kim, J.K.; Kim, M.-H. Interreader Agreement with Prostate Imaging Reporting and Data System Version 2 for Prostate Cancer Detection: A Systematic Review and Meta-Analysis. J. Urol. 2020, 204, 661–670.
10. Leake, J.L.; Hardman, R.; Ojili, V.; Thompson, I.; Shanbhogue, A.; Hernandez, J.; Barentsz, J. Prostate MRI: Access to and current practice of prostate MRI in the United States. J. Am. Coll. Radiol. 2014, 11, 156–160.
11. Shinmoto, H.; Tamura, C.; Soga, S.; Shiomi, E.; Yoshihara, N.; Kaji, T.; Mulkern, R.V. An intravoxel incoherent motion diffusion-weighted imaging study of prostate cancer. Am. J. Roentgenol. 2012, 199, W496–W500.
12. Tamura, C.; Shinmoto, H.; Soga, S.; Okamura, T.; Sato, H.; Okuaki, T.; Pang, Y.; Kosuda, S.; Kaji, T. Diffusion kurtosis imaging study of prostate cancer: Preliminary findings. J. Magn. Reson. Imaging 2014, 40, 723–729.
13. Fei, B. Computer-aided diagnosis of prostate cancer with MRI. Curr. Opin. Biomed. Eng. 2017, 3, 20–27.
14. Greer, M.D.; Lay, N.; Shih, J.H.; Barrett, T.; Bittencourt, L.K.; Borofsky, S.; Kabakus, I.; Law, Y.M.; Marko, J.; Shebel, H.; et al. Computer-aided diagnosis prior to conventional interpretation of prostate mpMRI: An international multi-reader study. Eur. Radiol. 2018, 28, 4407–4417.
15. Armato, S.G.; Huisman, H.; Drukker, K.; Hadjiiski, L.; Kirby, J.S.; Petrick, N.; Redmond, G.; Giger, M.L.; Cha, K.; Mamonov, A.; et al. PROSTATEx Challenges for computerized classification of prostate lesions from multiparametric magnetic resonance images. J. Med. Imaging 2018, 5, 44501.
16. Cuocolo, R.; Cipullo, M.B.; Stanzione, A.; Ugga, L.; Romeo, V.; Radice, L.; Brunetti, A.; Imbriaco, M. Machine learning applications in prostate cancer magnetic resonance imaging. Eur. Radiol. Exp. 2019, 3, 1–8.
17. Sanford, T.; Harmon, S.A.; Turkbey, E.B.; Kesani, D.; Tuncer, S.; Madariaga, M.; Yang, C.; Sackett, J.; Mehralivand, S.; Yan, P.; et al. Deep-Learning-Based Artificial Intelligence for PI-RADS Classification to Assist Multiparametric Prostate MRI Interpretation: A Development Study. J. Magn. Reson. Imaging 2020, 52, 1499–1507.
18. Schelb, P.; Kohl, S.; Radtke, J.P.; Wiesenfarth, M.; Kickingereder, P.; Bickelhaupt, S.; Kuder, T.A.; Stenzinger, A.; Hohenfellner, M.; Schlemmer, H.P.; et al. Classification of cancer at prostate MRI: Deep Learning versus Clinical PI-RADS Assessment. Radiology 2019, 293, 607–617.
19. Goldenberg, S.L.; Nir, G.; Salcudean, S.E. A new era: Artificial intelligence and machine learning in prostate cancer. Nat. Rev. Urol. 2019, 16, 391–403.
20. van Sloun, R.J.G.; Wildeboer, R.R.; Mannaerts, C.K.; Postema, A.W.; Gayet, M.; Beerlage, H.P.; Salomon, G.; Wijkstra, H.; Mischi, M. Deep Learning for Real-time, Automatic, and Scanner-adapted Prostate (Zone) Segmentation of Transrectal Ultrasound, for Example, Magnetic Resonance Imaging-transrectal Ultrasound Fusion Prostate Biopsy. Eur. Urol. Focus 2021, 7, 78–85.
21. Padhani, A.R.; Turkbey, B. Detecting Prostate Cancer with Deep Learning for MRI: A Small Step Forward. Radiology 2019, 293, 618–619.
22. Gaziev, G.; Wadhwa, K.; Barrett, T.; Koo, B.C.; Gallagher, F.A.; Serrao, E.; Frey, J.; Seidenader, J.; Carmona, L.; Warren, A.; et al. Defining the learning curve for multiparametric magnetic resonance imaging (MRI) of the prostate using MRI-transrectal ultrasonography (TRUS) fusion-guided transperineal prostate biopsies as a validation tool. BJU Int. 2016, 117, 80–86.
23. Chaddad, A.; Kucharczyk, M.J.; Cheddad, A.; Clarke, S.E.; Hassan, L.; Ding, S.; Rathore, S.; Zhang, M.; Katib, Y.; Bahoric, B.; et al. Magnetic resonance imaging based radiomic models of prostate cancer: A narrative review. Cancers 2021, 13, 552.
  24. Zeeshan Hameed, B.M.; Aiswarya Dhavileswarapu, V.L.S.; Raza, S.Z.; Karimi, H.; Khanuja, H.S.; Shetty, D.K.; Ibrahim, S.; Shah, M.J.; Naik, N.; Paul, R.; et al. Artificial intelligence and its impact on urological diseases and management: A comprehensive review of the literature. J. Clin. Med. 2021, 10, 1864. [Google Scholar] [CrossRef]
  25. Khan, Z.; Yahya, N.; Isam Al-Hiyali, M.; Meriaudeau, F. Recent Automatic Segmentation Algorithms of MRI Prostate Regions: A Review. IEEE Access 2021, 9, 97878–97905. [Google Scholar] [CrossRef]
  26. Zou, K.H.; Warfield, S.K.; Bharatha, A.; Tempany, C.M.C.; Kaus, M.R.; Haker, S.J.; Wells, W.M., 3rd; Jolesz, F.A.; Kikinis, R. Statistical validation of image segmentation quality based on a spatial overlap index. Acad. Radiol. 2004, 11, 178–189. [Google Scholar] [CrossRef] [Green Version]
  27. Klein, S.; Van Der Heide, U.A.; Lips, I.M.; Van Vulpen, M.; Staring, M.; Pluim, J.P.W. Automatic segmentation of the prostate in 3D MR images by atlas matching using localized mutual information. Med. Phys. 2008, 35, 1407–1417. [Google Scholar] [CrossRef]
  28. Liu, X.; Langer, D.L.; Haider, M.A.; Van Der Kwast, T.H.; Evans, A.J.; Wernick, M.N.; Yetik, I.S. Unsupervised segmentation of the prostate using MR images based on level set with a shape prior. In Proceedings of the 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Minneapolis, MN, USA, 2–6 September 2009; pp. 3613–3616. [Google Scholar] [CrossRef]
  29. Toth, R.; Madabhushi, A. Multifeature landmark-free active appearance models: Application to prostate MRI segmentation. IEEE Trans. Med. Imaging 2012, 31, 1638–1650. [Google Scholar] [CrossRef]
  30. Pasquier, D.; Lacornerie, T.; Vermandel, M.; Rousseau, J.; Lartigau, E.; Betrouni, N. Automatic Segmentation of Pelvic Structures From Magnetic Resonance Images for Prostate Cancer Radiotherapy. Int. J. Radiat. Oncol. Biol. Phys. 2007, 68, 592–600. [Google Scholar] [CrossRef]
  31. Mahapatra, D.; Buhmann, J.M. Prostate MRI segmentation using learned semantic knowledge and graph cuts. IEEE Trans. Biomed. Eng. 2014, 61, 756–764. [Google Scholar] [CrossRef] [PubMed]
  32. Cheng, R.; Turkbey, B.; Gandler, W.; Agarwal, H.K.; Shah, V.P.; Bokinsky, A.; McCreedy, E.; Wang, S.; Sankineni, S.; Bernardo, M.; et al. Atlas based AAM and SVM model for fully automatic MRI prostate segmentation. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. 2014, 2014, 2881–2885. [Google Scholar] [CrossRef]
  33. Chilali, O.; Puech, P.; Lakroum, S.; Diaf, M.; Mordon, S.; Betrouni, N. Gland and Zonal Segmentation of Prostate on T2W MR Images. J. Digit. Imaging 2016, 29, 730–736. [Google Scholar] [CrossRef] [Green Version]
  34. Milletari, F.; Navab, N.; Ahmadi, S.A. V-Net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571. [Google Scholar] [CrossRef] [Green Version]
  35. Yu, L.; Yang, X.; Chen, H.; Qin, J.; Heng, P.A. Volumetric convnets with mixed residual connections for automated prostate segmentation from 3d MR images. In Proceedings of the AAAI’17: Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; pp. 66–72. [Google Scholar]
  36. Jin, Y.; Yang, G.; Fang, Y.; Li, R.; Xu, X.; Liu, Y.; Lai, X. 3D PBV-Net: An automated prostate MRI data segmentation method. Comput. Biol. Med. 2021, 128, 104160. [Google Scholar] [CrossRef]
  37. Li, S.; Chen, Y.; Yang, S.; Luo, W. Cascade Dense-Unet for Prostate Segmentation in MR Images. In Intelligent Computing Theories and Application; Springer: Cham, Switzerland, 2019; pp. 481–490. [Google Scholar] [CrossRef]
  38. Ushinsky, A.; Bardis, M.; Glavis-Bloom, J.; Uchio, E.; Chantaduly, C.; Nguyentat, M.; Chow, D.; Chang, P.D.; Houshyar, R. A 3d-2d hybrid u-net convolutional neural network approach to prostate organ segmentation of multiparametric MRI. Am. J. Roentgenol. 2021, 216, 111–116. [Google Scholar] [CrossRef]
  39. Tian, Z.; Li, X.; Chen, Z.; Zheng, Y.; Fan, H.; Li, Z.; Li, C.; Du, S. Interactive prostate MR image segmentation based on ConvLSTMs and GGNN. Neurocomputing 2021, 438, 84–93. [Google Scholar] [CrossRef]
  40. Sanford, T.H.; Harmon, S.A.; Sackett, J.; Barrett, T.; Wood, B.J.; Choyke, P.L.; Th, S.; Zhang, L.; Sa, H. Data Augmentation and Transfer Learning to Improve Generalizability of an Automated Prostate Segmentation Model. Am. J. Roentgenol. 2020, 215, 1403–1410. [Google Scholar] [CrossRef]
  41. Roth, H.R.; Yang, D.; Li, W.; Myronenko, A.; Zhu, W.; Xu, Z.; Wang, X.; Xu, D. Federated Whole Prostate Segmentation in MRI with Personalized Neural Architectures. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2021; pp. 357–366. [Google Scholar] [CrossRef]
  42. Meyer, A.; Chlebus, G.; Rak, M.; Schindele, D.; Schostak, M.; van Ginneken, B.; Schenk, A.; Meine, H.; Hahn, H.K.; Schreiber, A.; et al. Anisotropic 3D Multi-Stream CNN for Accurate Prostate Segmentation from Multi-Planar MRI. Comput. Methods Programs Biomed. 2021, 200, 105821. [Google Scholar] [CrossRef]
  43. Liu, Q.; Dou, Q.; Yu, L.; Heng, P.A. MS-Net: Multi-Site Network for Improving Prostate Segmentation with Heterogeneous MRI Data. IEEE Trans. Med. Imaging 2020, 39, 2713–2724. [Google Scholar] [CrossRef] [Green Version]
  44. Clark, T.; Zhang, J.; Baig, S.; Wong, A.; Haider, M.A.; Khalvati, F. Fully automated segmentation of prostate whole gland and transition zone in diffusion-weighted MRI using convolutional neural networks. J. Med. Imaging 2017, 4, 1. [Google Scholar] [CrossRef]
  45. Motamed, S.; Gujrathi, I.; Deniffel, D.; Oentoro, A.; Haider, M.A.; Khalvati, F. Transfer Learning for Automated Segmentation of Prostate Whole Gland and Transition Zone in Diffusion Weighted MRI. arXiv 2020, arXiv:1909.09541. [Google Scholar]
  46. Zhu, Y.; Wei, R.; Gao, G.; Ding, L.; Zhang, X.; Wang, X.; Zhang, J. Fully automatic segmentation on prostate MR images based on cascaded fully convolution network. J. Magn. Reson. Imaging 2019, 49, 1149–1156. [Google Scholar] [CrossRef]
  47. Bardis, M.; Houshyar, R.; Chantaduly, C.; Tran-Harding, K.; Ushinsky, A.; Chahine, C.; Rupasinghe, M.; Chow, D.; Chang, P. Segmentation of the Prostate Transition Zone and Peripheral Zone on MR Images with Deep Learning. Radiol. Imaging Cancer 2021, 3, e200024. [Google Scholar] [CrossRef]
  48. Cuocolo, R.; Comelli, A.; Stefano, A.; Benfante, V.; Dahiya, N.; Stanzione, A.; Castaldo, A.; De Lucia, D.R.; Yezzi, A.; Imbriaco, M. Deep Learning Whole-Gland and Zonal Prostate Segmentation on a Public MRI Dataset. J. Magn. Reson. Imaging 2021, 54, 452–459. [Google Scholar] [CrossRef]
  49. Saunders, S.L.; Leng, E.; Spilseth, B.; Wasserman, N.; Metzger, G.J.; Bolan, P.J. Training Convolutional Networks for Prostate Segmentation with Limited Data. IEEE Access 2021, 9, 109214–109223. [Google Scholar] [CrossRef]
  50. Tian, Z.; Liu, L.; Zhang, Z.; Fei, B. PSNet: Prostate segmentation on MRI based on a convolutional neural network. J. Med. Imaging 2018, 5, 1. [Google Scholar] [CrossRef]
  51. Litjens, G.; Toth, R.; van de Ven, W.; Hoeks, C.; Kerkstra, S.; van Ginneken, B.; Vincent, G.; Guillard, G.; Birbeck, N.; Zhang, J.; et al. Evaluation of prostate segmentation algorithms for MRI: The PROMISE12 challenge. Med. Image Anal. 2014, 18, 359–373. [Google Scholar] [CrossRef] [Green Version]
  52. NCI-ISBI 2013 Challenge—Automated Segmentation of Prostate Structures. Available online: https://wiki.cancerimagingarchive.net/display/Public/NCI-ISBI+2013+Challenge+-+Automated+Segmentation+of+Prostate+Structures (accessed on 23 November 2021).
  53. Simpson, A.L.; Antonelli, M.; Bakas, S.; Bilello, M.; Farahani, K.; Van Ginneken, B.; Kopp-Schneider, A.; Landman, B.A.; Litjens, G.; Menze, B.; et al. A large annotated medical image dataset for the development and evaluation of segmentation algorithms. arXiv 2019, arXiv:1902.09063. [Google Scholar]
  54. Hoar, D.; Lee, P.Q.; Guida, A.; Patterson, S.; Bowen, C.V.; Merrimen, J.; Wang, C.; Rendon, R.; Beyea, S.D.; Clarke, S.E. Combined Transfer Learning and Test-Time Augmentation Improves Convolutional Neural Network-Based Semantic Segmentation of Prostate Cancer from Multi-Parametric MR Images. Comput. Methods Programs Biomed. 2021, 210, 106375. [Google Scholar] [CrossRef]
  55. Almeida, G.; Tavares, J.M.R.S. Deep Learning in Radiation Oncology Treatment Planning for Prostate Cancer: A Systematic Review. J. Med. Syst. 2020, 44, 1–15. [Google Scholar] [CrossRef]
  56. Long, J.; Shelhamer, E.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651. [Google Scholar] [CrossRef]
  57. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. Available online: http://lmb.informatik.uni-freiburg.de/ (accessed on 27 October 2021).
  58. He, X.; Zhao, K.; Chu, X. AutoML: A survey of the state-of-the-art. Knowl.-Based Syst. 2021, 212, 106622. [Google Scholar] [CrossRef]
  59. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  60. Paszke, A.; Chaurasia, A.; Kim, S.; Culurciello, E. ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation. arXiv 2016, arXiv:1606.02147. [Google Scholar]
  61. Romera, E.; Álvarez, J.M.; Bergasa, L.M.; Arroyo, R. ERFNet: Efficient Residual Factorized ConvNet for Real-Time Semantic Segmentation. IEEE Trans. Intell. Transp. Syst. 2018, 19, 263–272. [Google Scholar] [CrossRef]
  62. Nicolae, A.M.; Venugopal, N.; Ravi, A. Trends in targeted prostate brachytherapy: From multiparametric MRI to nanomolecular radiosensitizers. Cancer Nanotechnol. 2016, 7, 6. [Google Scholar] [CrossRef] [Green Version]
  63. Humphrey, P.A. Histopathology of Prostate Cancer. Cold Spring Harb. Perspect. Med. 2017, 7, a030411. [Google Scholar] [CrossRef] [Green Version]
  64. Cool, D.W.; Zhang, X.; Romagnoli, C.; Izawa, J.I.; Romano, W.M.; Fenster, A. Evaluation of MRI-TRUS Fusion Versus Cognitive Registration Accuracy for MRI-Targeted, TRUS-Guided Prostate Biopsy. Am. J. Roentgenol. 2015, 204, 83–91. [Google Scholar] [CrossRef]
  65. Sun, Y.; Reynolds, H.M.; Parameswaran, B.; Wraith, D.; Finnegan, M.E.; Williams, S.; Haworth, A. Multiparametric MRI and radiomics in prostate cancer: A review. Australas Phys. Eng. Sci. Med. 2019, 42, 3–25. [Google Scholar] [CrossRef]
  66. Mohamed, A.; Davatzikos, C.; Taylor, R. A combined statistical and biomechanical model for estimation of intra-operative prostate deformation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin/Heidelberg, Germany, 2002; Volume 2489, pp. 452–460. [Google Scholar] [CrossRef] [Green Version]
  67. Hu, Y.; Carter, T.J.; Ahmed, H.U.; Emberton, M.; Allen, C.; Hawkes, D.J.; Barratt, D.C. Modelling prostate motion for data fusion during image-guided interventions. IEEE Trans. Med. Imaging 2011, 30, 1887–1900. [Google Scholar] [CrossRef]
  68. Hu, Y.; Ahmed, H.U.; Taylor, Z.; Allen, C.; Emberton, M.; Hawkes, D.; Barratt, D. MR to ultrasound registration for image-guided prostate interventions. Med. Image Anal. 2012, 16, 687–703. [Google Scholar] [CrossRef]
  69. Wang, Y.; Cheng, J.Z.; Ni, D.; Lin, M.; Qin, J.; Luo, X.; Xu, M.; Xie, X.; Heng, P.A. Towards personalized statistical deformable model and hybrid point matching for robust MR-TRUS registration. IEEE Trans. Med. Imaging 2016, 35, 589–604. [Google Scholar] [CrossRef]
  70. Hu, Y.; Modat, M.; Gibson, E.; Ghavami, N.; Bonmati, E.; Moore, C.M.; Emberton, M.; Noble, J.A.; Barratt, D.C.; Vercauteren, T. Label-driven weakly-supervised learning for multimodal deformable image registration. Proc.—Int. Symp. Biomed. Imaging 2018, 2018, 1070–1074. [Google Scholar] [CrossRef] [Green Version]
  71. Hu, Y.; Modat, M.; Gibson, E.; Li, W.; Ghavami, N.; Bonmati, E.; Wang, G.; Bandula, S.; Moore, C.M.; Emberton, M.; et al. Weakly-supervised convolutional neural networks for multimodal image registration. Med. Image Anal. 2018, 49, 1–13. [Google Scholar] [CrossRef]
  72. Yan, P.; Xu, S.; Rastinehad, A.R.; Wood, B.J. Adversarial Image registration with application for MR and TRUS image fusion. In Machine Learning in Medical Imaging; Springer: Cham, Switzerland, 2018. [Google Scholar] [CrossRef] [Green Version]
  73. Zeng, Q.; Fu, Y.; Tian, Z.; Lei, Y.; Zhang, Y.; Wang, T.; Mao, H.; Liu, T.; Curran, W.J.; Jani, A.B.; et al. Label-driven magnetic resonance imaging (MRI)-transrectal ultrasound (TRUS) registration using weakly supervised learning for MRI-guided prostate radiotherapy. Phys. Med. Biol. 2020, 65, 135002. [Google Scholar] [CrossRef]
  74. Chen, Y.; Xing, L.; Yu, L.; Liu, W.; Fahimian, B.P.; Niedermayr, T.; Bagshaw, H.P.; Buyyounouski, M.; Han, B. MR to ultrasound image registration with segmentation-based learning for HDR prostate brachytherapy. Med. Phys. 2021, 48, 3074–3083. [Google Scholar] [CrossRef]
  75. Bhardwaj, A.; Park, J.-S.; Mukhopadhyay, S.; Sharda, S.; Son, Y.; Ajani, B.N.; Kudavelly, S.R. Rigid and deformable corrections in real-time using deep learning for prostate fusion biopsy. In Proceedings of the Medical Imaging 2020: Image-Guided Procedures, Robotic Interventions, and Modeling, Houston, TX, USA, 15–20 February 2020. [Google Scholar] [CrossRef]
  76. Hu, Y.; Gibson, E.; Ghavami, N.; Bonmati, E.; Moore, C.M.; Emberton, M.; Vercauteren, T.; Noble, J.A.; Barratt, D.C. Adversarial Deformation Regularization for Training Image Registration Neural Networks; Springer International Publishing: New York, NY, USA, 2018; Volume 11070, ISBN 9783030009274. [Google Scholar]
  77. Yang, X.; Fu, Y.; Lei, Y.; Tian, S.; Wang, T.; Shelton, J.W.; Jani, A.; Curran, W.J.; Patel, P.R.; Liu, T. Deformable MRI-TRUS Registration Using Biomechanically Constrained Deep Learning Model for Tumor-Targeted Prostate Brachytherapy. Int. J. Radiat. Oncol. 2020, 108, e339. [Google Scholar] [CrossRef]
  78. Shafai-Erfani, G.; Wang, T.; Lei, Y.; Tian, S.; Patel, P.; Jani, A.B.; Curran, W.J.; Liu, T.; Yang, X. Dose evaluation of MRI-based synthetic CT generated using a machine learning method for prostate cancer radiotherapy. Med. Dosim. 2019, 44, e64–e70. [Google Scholar] [CrossRef]
  79. Rusu, M.; Shao, W.; Kunder, C.A.; Wang, J.B.; Soerensen, S.J.C.; Teslovich, N.C.; Sood, R.R.; Chen, L.C.; Fan, R.E.; Ghanouni, P.; et al. Registration of presurgical MRI and histopathology images from radical prostatectomy via RAPSODI. Med. Phys. 2020, 47, 4177–4188. [Google Scholar] [CrossRef] [PubMed]
  80. Fu, Y.; Wang, T.; Lei, Y.; Patel, P.; Jani, A.B.; Curran, W.J.; Liu, T.; Yang, X. Deformable MR-CBCT prostate registration using biomechanically constrained deep learning networks. Med. Phys. 2021, 48, 253–263. [Google Scholar] [CrossRef]
  81. Shao, W.; Banh, L.; Kunder, C.A.; Fan, R.E.; Soerensen, S.J.C.; Wang, J.B.; Teslovich, N.C.; Madhuripan, N.; Jawahar, A.; Ghanouni, P.; et al. ProsRegNet: A deep learning framework for registration of MRI and histopathology images of the prostate. Med. Image Anal. 2021, 68, 101919. [Google Scholar] [CrossRef]
  82. Sood, R.R.; Shao, W.; Kunder, C.; Teslovich, N.C.; Wang, J.B.; Soerensen, S.J.C.; Madhuripan, N.; Jawahar, A.; Brooks, J.D.; Ghanouni, P.; et al. 3D Registration of pre-surgical prostate MRI and histopathology images via super-resolution volume reconstruction. Med. Image Anal. 2021, 69, 101957. [Google Scholar] [CrossRef] [PubMed]
  83. Steiger, P.; Thoeny, H.C. Prostate MRI based on PI-RADS version 2: How we review and report. Cancer Imaging 2016, 16, 1–9. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  84. Woo, S.; Suh, C.H.; Kim, S.Y.; Cho, J.Y.; Kim, S.H. Diagnostic Performance of Prostate Imaging Reporting and Data System Version 2 for Detection of Prostate Cancer: A Systematic Review and Diagnostic Meta-analysis. Eur. Urol. 2017, 72, 177–188. [Google Scholar] [CrossRef]
  85. Smith, C.P.; Harmon, S.A.; Barrett, T.; Bittencourt, L.K.; Law, Y.M.; Shebel, H.; An, J.Y.; Czarniecki, M.; Mehralivand, S.; Coskun, M.; et al. Intra- and interreader reproducibility of PI-RADSv2: A multireader study. J. Magn. Reson. Imaging 2019, 49, 1694–1703. [Google Scholar] [CrossRef]
  86. Twilt, J.J.; van Leeuwen, K.G.; Huisman, H.J.; Fütterer, J.J.; de Rooij, M. Artificial intelligence based algorithms for prostate cancer classification and detection on magnetic resonance imaging: A narrative review. Diagnostics 2021, 11, 959. [Google Scholar] [CrossRef]
  87. Algohary, A.; Viswanath, S.; Shiradkar, R.; Ghose, S.; Pahwa, S.; Moses, D.; Jambor, I.; Shnier, R.; Böhm, M.; Haynes, A.M.; et al. Radiomic features on MRI enable risk categorization of prostate cancer patients on active surveillance: Preliminary findings. J. Magn. Reson. Imaging 2018, 48, 818–828. [Google Scholar] [CrossRef]
  88. Min, X.; Li, M.; Dong, D.; Feng, Z.; Zhang, P.; Ke, Z.; You, H.; Han, F.; Ma, H.; Tian, J.; et al. Multi-parametric MRI-based radiomics signature for discriminating between clinically significant and insignificant prostate cancer: Cross-validation of a machine learning method. Eur. J. Radiol. 2019, 115, 16–21. [Google Scholar] [CrossRef] [Green Version]
  89. Wu, M.; Krishna, S.; Thornhill, R.E.; Flood, T.A.; McInnes, M.D.F.; Schieda, N. Transition zone prostate cancer: Logistic regression and machine-learning models of quantitative ADC, shape and texture features are highly accurate for diagnosis. J. Magn. Reson. Imaging 2019, 50, 940–950. [Google Scholar] [CrossRef]
  90. Liu, Y.; Zheng, H.; Liang, Z.; Qi, M.; Brisbane, W.; Marks, L.; Raman, S.; Reiter, R.; Yang, G.; Sung, K. Textured-Based Deep Learning in Prostate Cancer Classification with 3T Multiparametric MRI: Comparison with PI-RADS-Based Classification. Diagnostics 2021, 11, 1785. [Google Scholar] [CrossRef]
  91. Aldoj, N.; Lukas, S.; Dewey, M.; Penzkofer, T. Semi-automatic classification of prostate cancer on multi-parametric MR imaging using a multi-channel 3D convolutional neural network. Eur. Radiol. 2020, 30, 1243–1253. [Google Scholar] [CrossRef]
  92. Chen, Q.; Xu, X.; Hu, S.; Li, X.; Zou, Q.; Li, Y. A transfer learning approach for classification of clinical significant prostate cancers from mpMRI scans. Proc. SPIE 2017, 10134, 1154–1157. [Google Scholar] [CrossRef]
  93. Yuan, Y.; Qin, W.; Buyyounouski, M.; Ibragimov, B.; Hancock, S.; Han, B.; Xing, L. Prostate cancer classification with multiparametric MRI transfer learning model. Med. Phys. 2019, 46, 756–765. [Google Scholar] [CrossRef]
  94. Zhong, X.; Cao, R.; Shakeri, S.; Scalzo, F.; Lee, Y.; Enzmann, D.R.; Wu, H.H.; Raman, S.S.; Sung, K. Deep transfer learning-based prostate cancer classification using 3 Tesla multi-parametric MRI. Abdom. Radiol. 2019, 44, 2030–2039. [Google Scholar] [CrossRef]
  95. Giannini, V.; Mazzetti, S.; Vignati, A.; Russo, F.; Bollito, E.; Porpiglia, F.; Stasi, M.; Regge, D. A fully automatic computer aided diagnosis system for peripheral zone prostate cancer detection using multi-parametric magnetic resonance imaging. Comput. Med. Imaging Graph. 2015, 46, 219–226. [Google Scholar] [CrossRef]
  96. Mcgarry, S.D.; Hurrell, S.L.; Iczkowski, K.A.; Hall, W.; Kaczmarowski, A.L.; Banerjee, A.; Keuter, T.; Jacobsohn, K.; Bukowy, J.D.; Nevalainen, M.T.; et al. Radio-pathomic Maps of Epithelium and Lumen Density Predict the Location of High-Grade Prostate Cancer. Int. J. Radiat. Oncol. Biol. Phys. 2018, 101, 1179–1187. [Google Scholar] [CrossRef] [Green Version]
  97. Zhang, L.; Li, L.; Tang, M.; Huan, Y.; Zhang, X.; Zhe, X. A new approach to diagnosing prostate cancer through magnetic resonance imaging. Alex. Eng. J. 2021, 60, 897–904. [Google Scholar] [CrossRef]
  98. Arif, M.; Schoots, I.G.; Castillo Tovar, J.; Bangma, C.H.; Krestin, G.P.; Roobol, M.J.; Niessen, W.; Veenland, J.F. Clinically significant prostate cancer detection and segmentation in low-risk patients using a convolutional neural network on multi-parametric MRI. Eur. Radiol. 2020, 30, 6582–6592. [Google Scholar] [CrossRef]
  99. Seetharaman, A.; Bhattacharya, I.; Chen, L.C.; Kunder, C.A.; Shao, W.; Soerensen, S.J.C.; Wang, J.B.; Teslovich, N.C.; Fan, R.E.; Ghanouni, P.; et al. Automated detection of aggressive and indolent prostate cancer on magnetic resonance imaging. Med. Phys. 2021, 48, 2960–2972. [Google Scholar] [CrossRef]
  100. Mehralivand, S.; Yang, D.; Harmon, S.A.; Xu, D.; Xu, Z.; Roth, H.; Masoudi, S.; Sanford, T.H.; Kesani, D.; Lay, N.S.; et al. A Cascaded Deep Learning–Based Artificial Intelligence Algorithm for Automated Lesion Detection and Classification on Biparametric Prostate Magnetic Resonance Imaging. Acad. Radiol. 2021. [Google Scholar] [CrossRef]
  101. Alkadi, R.; Taher, F.; El-baz, A.; Werghi, N. A Deep Learning-Based Approach for the Detection and Localization of Prostate Cancer in T2 Magnetic Resonance Images. J. Digit. Imaging 2018, 32, 793–807. [Google Scholar] [CrossRef]
  102. Wang, J.; Wu, C.J.; Bao, M.L.; Zhang, J.; Wang, X.N.; Zhang, Y.D. Machine learning-based analysis of MR radiomics can help to improve the diagnostic performance of PI-RADS v2 in clinically relevant prostate cancer. Eur. Radiol. 2017, 27, 4082–4090. [Google Scholar] [CrossRef] [PubMed]
  103. De Vente, C.; Vos, P.; Pluim, J.; Veta, M. Simultaneous Detection and Grading of Prostate Cancer in Multi-Parametric MRI. Med. Imaging Deep. Learn. 2019, 2019, 1–5. [Google Scholar]
  104. Cao, R.; Mohammadian Bajgiran, A.; Afshari Mirak, S.; Shakeri, S.; Zhong, X.; Enzmann, D.; Raman, S.; Sung, K. Joint Prostate Cancer Detection and Gleason Score Prediction in mp-MRI via FocalNet. IEEE Trans. Med. Imaging 2019, 38, 2496–2506. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  105. de Vente, C.; Vos, P.; Hosseinzadeh, M.; Pluim, J.; Veta, M. Deep Learning Regression for Prostate Cancer Detection and Grading in Bi-Parametric MRI. IEEE Trans. Biomed. Eng. 2021, 68, 374–383. [Google Scholar] [CrossRef] [PubMed]
  106. Jensen, C.; Carl, J.; Boesen, L.; Langkilde, N.C.; Østergaard, L.R. Assessment of prostate cancer prognostic Gleason grade group using zonal-specific features extracted from biparametric MRI using a KNN classifier. J. Appl. Clin. Med. Phys. 2019, 20, 146–153. [Google Scholar] [CrossRef]
  107. Abraham, B.; Nair, M.S. Computer-aided classification of prostate cancer grade groups from MRI images using texture features and stacked sparse autoencoder. Comput. Med. Imaging Graph. 2018, 69, 60–68. [Google Scholar] [CrossRef] [PubMed]
  108. Initiative for Collaborative Computer Vision Benchmarking. Available online: https://i2cvb.github.io/ (accessed on 24 November 2021).
  109. Gillies, R.J.; Kinahan, P.E.; Hricak, H. Radiomics: Images Are More than Pictures, They Are Data. Radiology 2015, 278, 563–577. [Google Scholar] [CrossRef] [Green Version]
  110. Tibshirani, R. Regression Shrinkage and Selection via the Lasso. J. R. Stat. Soc. Ser. B 1996, 58, 267–288. [Google Scholar] [CrossRef]
  111. Gatenby, R.A.; Grove, O.; Gillies, R.J. Quantitative imaging in cancer evolution and ecology. Radiology 2013, 269, 8–15. [Google Scholar] [CrossRef] [Green Version]
  112. Gutiérrez, P.A.; Pérez-Ortiz, M.; Sánchez-Monedero, J.; Fernández-Navarro, F.; Hervás-Martínez, C. Ordinal Regression Methods: Survey and Experimental Study. IEEE Trans. Knowl. Data Eng. 2016, 28, 127–146. [Google Scholar] [CrossRef] [Green Version]
  113. Stanzione, A.; Cuocolo, R.; Cocozza, S.; Romeo, V.; Persico, F.; Fusco, F.; Longo, N.; Brunetti, A.; Imbriaco, M. Detection of Extraprostatic Extension of Cancer on Biparametric MRI Combining Texture Analysis and Machine Learning: Preliminary Results. Acad. Radiol. 2019, 26, 1338–1344. [Google Scholar] [CrossRef] [PubMed]
  114. Ma, S.; Xie, H.; Wang, H.; Yang, J.; Han, C.; Wang, X.; Zhang, X. Preoperative Prediction of Extracapsular Extension: Radiomics Signature Based on Magnetic Resonance Imaging to Stage Prostate Cancer. Mol. Imaging Biol. 2020, 22, 711–721. [Google Scholar] [CrossRef] [PubMed]
  115. Xu, L.; Zhang, G.; Zhao, L.; Mao, L.; Li, X.; Yan, W.; Xiao, Y.; Lei, J.; Sun, H.; Jin, Z. Radiomics Based on Multiparametric Magnetic Resonance Imaging to Predict Extraprostatic Extension of Prostate Cancer. Front. Oncol. 2020, 10, 940. [Google Scholar] [CrossRef]
  116. Losnegård, A.; Reisæter, L.A.R.; Halvorsen, O.J.; Jurek, J.; Assmus, J.; Arnes, J.B.; Honoré, A.; Monssen, J.A.; Andersen, E.; Haldorsen, I.S.; et al. Magnetic resonance radiomics for prediction of extraprostatic extension in non-favorable intermediate- and high-risk prostate cancer patients. Acta Radiol. 2020, 61, 1570–1579. [Google Scholar] [CrossRef]
  117. Cuocolo, R.; Stanzione, A.; Faletti, R.; Gatti, M.; Calleris, G.; Fornari, A.; Gentile, F.; Motta, A.; Dell’Aversana, S.; Creta, M.; et al. MRI index lesion radiomics and machine learning for detection of extraprostatic extension of disease: A multicenter study. Eur. Radiol. 2021, 31, 7575–7583. [Google Scholar] [CrossRef]
  118. Fuchsjäger, M.H.; Shukla-Dave, A.; Hricak, H.; Wang, L.; Touijer, K.; Donohue, J.F.; Eastham, J.A.; Kattan, M.W. Magnetic resonance imaging in the prediction of biochemical recurrence of prostate cancer after radical prostatectomy. BJU Int. 2009, 104, 315–320. [Google Scholar] [CrossRef]
  119. Park, S.Y.; Oh, Y.T.; Jung, D.C.; Cho, N.H.; Choi, Y.D.; Rha, K.H.; Hong, S.J. Prediction of biochemical recurrence after radical prostatectomy with PI-RADS version 2 in prostate cancers: Initial results. Eur. Radiol. 2015, 26, 2502–2509. [Google Scholar] [CrossRef]
  120. Capogrosso, P.; Vertosick, E.A.; Benfante, N.E.; Sjoberg, D.D.; Vickers, A.J.; Eastham, J.A. Can We Improve the Preoperative Prediction of Prostate Cancer Recurrence With Multiparametric MRI? Clin. Genitourin. Cancer 2019, 17, e745–e750. [Google Scholar] [CrossRef]
  121. Park, S.Y.; Kim, C.K.; Park, B.K.; Lee, H.M.; Lee, K.S. Prediction of biochemical recurrence following radical prostatectomy in men with prostate cancer by diffusion-weighted magnetic resonance imaging: Initial results. Eur. Radiol. 2010, 21, 1111–1118. [Google Scholar] [CrossRef]
  122. Bourbonne, V.; Vallières, M.; Lucia, F.; Doucet, L.; Visvikis, D.; Tissot, V.; Pradier, O.; Hatt, M.; Schick, U. MRI-Derived Radiomics to Guide Post-operative Management for High-Risk Prostate Cancer. Front. Oncol. 2019, 9, 807. [Google Scholar] [CrossRef]
  123. Zhang, Y.D.; Wang, J.; Wu, C.J.; Bao, M.L.; Li, H.; Wang, X.N.; Tao, J.; Shi, H.-B. An imaging-based approach predicts clinical outcomes in prostate cancer through a novel support vector machine classification. Oncotarget 2016, 7, 78140–78151. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  124. Shiradkar, R.; Ghose, S.; Jambor, I.; Taimen, P.; Ettala, O.; Purysko, A.S.; Madabhushi, A. Radiomic features from pretreatment biparametric MRI predict prostate cancer biochemical recurrence: Preliminary findings. J. Magn. Reson. Imaging 2018, 48, 1626–1636. [Google Scholar] [CrossRef] [PubMed]
  125. Yan, Y.; Shao, L.; Liu, Z.; He, W.; Yang, G.; Liu, J.; Xia, H.; Zhang, Y.; Chen, H.; Liu, C.; et al. Deep learning with quantitative features of magnetic resonance images to predict biochemical recurrence of radical prostatectomy: A multi-center study. Cancers 2021, 13, 3098. [Google Scholar] [CrossRef]
  126. Kang, J.; Doucette, C.W.; El Naqa, I.; Zhang, H. Comparing the Kattan Nomogram to a Random Forest Model to Predict Post-Prostatectomy Pathology. Int. J. Radiat. Oncol. 2018, 102, S61–S62. [Google Scholar] [CrossRef]
  127. Abdollahi, H.; Mofid, B.; Shiri, I.; Razzaghdoust, A.; Saadipoor, A.; Mahdavi, A.; Galandooz, H.M.; Mahdavi, S.R. Machine learning-based radiomic models to predict intensity-modulated radiation therapy response, Gleason score and stage in prostate cancer. La Radiol. Med. 2019, 124, 555–567. [Google Scholar] [CrossRef] [PubMed]
  128. Poulakis, V.; Witzsch, U.; de Vries, R.; Emmerlich, V.; Meves, B.; Altmannsberger, H.-M.; Becht, E. Preoperative neural network using combined magnetic resonance imaging variables, prostate specific antigen, and Gleason score to predict prostate cancer recurrence after radical prostatectomy. Eur. Urol. 2004, 46, 571–578. [Google Scholar] [CrossRef] [PubMed]
  129. Harrell, F.E.J.; Califf, R.M.; Pryor, D.B.; Lee, K.L.; Rosati, R.A. Evaluating the yield of medical tests. JAMA 1982, 247, 2543–2546. [Google Scholar] [CrossRef] [PubMed]
  130. de Rooij, M.; Hamoen, E.H.J.; Witjes, J.A.; Barentsz, J.O.; Rovers, M.M. Accuracy of Magnetic Resonance Imaging for Local Staging of Prostate Cancer: A Diagnostic Meta-analysis. Eur. Urol. 2016, 70, 233–245. [Google Scholar] [CrossRef]
  131. Heidenreich, A. Consensus Criteria for the Use of Magnetic Resonance Imaging in the Diagnosis and Staging of Prostate Cancer: Not Ready for Routine Use. Eur. Urol. 2011, 59, 495–497. [Google Scholar] [CrossRef]
  132. Stephenson, A.J.; Kattan, M.W.; Eastham, J.A.; Dotan, Z.A.; Bianco, F.J., Jr.; Lilja, H.; Scardino, P.T. Defining biochemical recurrence of prostate cancer after radical prostatectomy: A proposal for a standardized definition. J. Clin. Oncol. 2006, 24, 3973–3978. [Google Scholar] [CrossRef] [PubMed]
  133. Kattan, M.W.; Stapleton, A.M.; Wheeler, T.M.; Scardino, P.T. Evaluation of a nomogram used to predict the pathologic stage of clinically localized prostate carcinoma. Cancer 1997, 79, 528–537. [Google Scholar] [CrossRef]
  134. Shariat, S.F.; Karakiewicz, P.I.; Roehrborn, C.G.; Kattan, M.W. An updated catalog of prostate cancer predictive tools. Cancer 2008, 113, 3075–3099. [Google Scholar] [CrossRef] [PubMed]
  135. Zwanenburg, A.; Leger, S.; Vallières, M.; Löck, S. Image biomarker standardisation initiative. arXiv 2016, arXiv:1612.07003. [Google Scholar]
  136. Moore, C.M.; Giganti, F.; Albertsen, P.; Allen, C.; Bangma, C.; Briganti, A.; Carroll, P.; Haider, M.; Kasivisvanathan, V.; Kirkham, A.; et al. Reporting Magnetic Resonance Imaging in Men on Active Surveillance for Prostate Cancer: The PRECISE Recommendations-A Report of a European School of Oncology Task Force. Eur. Urol. 2017, 71, 648–655. [Google Scholar] [CrossRef] [Green Version]
  137. Nayan, M.; Salari, K.; Bozzo, A.; Ganglberger, W.; Lu, G.; Carvalho, F.; Gusev, A.; Schneider, A.; Westover, B.M.; Feldman, A.S. A machine learning approach to predict progression on active surveillance for prostate cancer. Urol. Oncol. 2021. [Google Scholar] [CrossRef] [PubMed]
  138. McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; Arcas, B.A. Communication-Efficient Learning of Deep Networks from Decentralized Data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, 20–22 April 2017; Volume 54, pp. 1273–1282. [Google Scholar]
  139. Xu, J.; Glicksberg, B.S.; Su, C.; Walker, P.; Bian, J.; Wang, F. Federated Learning for Healthcare Informatics. J. Healthc. Inform. Res. 2021, 5, 1–19. [Google Scholar] [CrossRef] [PubMed]
  140. Zhang, C.; Xie, Y.; Bai, H.; Yu, B.; Li, W.; Gao, Y. A survey on federated learning. Knowl.-Based Syst. 2021, 216, 106775. [Google Scholar] [CrossRef]
  141. Sarma, K.V.; Harmon, S.; Sanford, T.; Roth, H.R.; Xu, Z.; Tetreault, J.; Xu, D.; Flores, M.G.; Raman, A.G.; Kulkarni, R.; et al. Federated learning improves site performance in multicenter deep learning without data sharing. J. Am. Med. Inform. Assoc. 2021, 28, 1259–1264. [Google Scholar] [CrossRef]
  142. Dayan, I.; Roth, H.R.; Zhong, A.; Harouni, A.; Gentili, A.; Abidin, A.Z.; Liu, A.; Costa, A.B.; Wood, B.J.; Tsai, C.-S.; et al. Federated learning for predicting clinical outcomes in patients with COVID-19. Nat. Med. 2021, 27, 1735–1743. [Google Scholar] [CrossRef] [PubMed]
  143. NVIDIA Clara|NVIDIA Developer. Available online: https://developer.nvidia.com/clara (accessed on 10 November 2021).
  144. TensorFlow Federated: Machine Learning on Decentralized Data. Available online: https://www.tensorflow.org/federated (accessed on 10 November 2021).
  145. IBM Federated Learning. Available online: https://ibmfl.mybluemix.net/ (accessed on 10 November 2021).
  146. GitHub—Intel/Openfl: An Open Framework for Federated Learning. Available online: https://github.com/intel/openfl (accessed on 10 November 2021).
  147. An Industrial Grade Federated Learning Framework. Available online: https://fate.fedai.org/ (accessed on 10 November 2021).
  148. XayNet|Open Source Federated Learning Framework for Edge AI. Available online: https://www.xaynet.dev/ (accessed on 10 November 2021).
  149. GitHub—PaddlePaddle/PaddleFL: Federated Deep Learning in PaddlePaddle. Available online: https://github.com/PaddlePaddle/PaddleFL (accessed on 10 November 2021).
Figure 1. On MRI, the periprostatic venous plexus appears as serpiginous hyperintense structures with foci of signal voids adjacent to the prostate (green outline), and can be closely related to the prostate capsule (red outline). It may have a heterogeneous appearance similar to that of the peripheral zone. Therefore, during manual segmentation, it can be mistaken for part of the prostate by less experienced operators.
Figure 2. Multifocal prostate cancer seen as hypointense lesions on T2-weighted imaging (star), obscuring the boundaries between the peripheral and transition zones and making zonal segmentation challenging.
Figure 3. Prostatitis typically appears as diffuse hypointensity in the peripheral zone on T2-weighted imaging (star), resulting in a signal almost identical to that of stromal nodules related to benign prostatic hyperplasia in the transition zone (arrowhead). This may make differentiation between the peripheral and transition zones difficult, and zonal segmentation challenging.
Figure 4. Severe hypertrophy of the transition zone (segmented in red) compressing the peripheral zone, which appears as a thin sliver (white arrows). Reduced visualisation of the peripheral zone in this case can make zonal segmentation challenging.
Figure 5. MRI–US fusion technique for targeted prostate biopsy requires precise registration between pre-operative prostate MRI (bottom image) and real-time ultrasound (top image).
Table 2. Machine learning-based MR image registration methods. Abbreviations are defined in footnote 2 below the table.
Publication Year | Approach | Registration Type | Registration Modalities | ML/DL Method | Auto-Seg | Sample Size | CV | TRE (mm) | DSC (%) | MSD (mm) | HD (mm) | Error (%) | Refs.
2002 | Knowledge-based | Deformable | MRI–TRUS | Homogeneous Mooney–Rivlin model, linear least squares fitting | N | 25 simulations of TRUS | No | - | - | - | - | 26.7 | [66]
2011 | Knowledge-based | Non-rigid, deformable | MRI–TRUS | PCA | N | 5 patients | Leave-one-out | 5.8 | - | - | - | - | [67]
2012 | Knowledge-based | Non-rigid, deformable | MRI–TRUS | PCA | N | 8 patients | Leave-one-out | 2.4 | - | - | - | - | [68]
2016 | Knowledge-based | Non-rigid, deformable | MRI–TRUS | PCA, surface point matching | N | 1 MRI dataset and 60 TRUS datasets | Leave-one-out | 1.44 | - | - | - | - | [69]
2018 | Weakly supervised | Deformable | MRI–TRUS | CNN | N | 111 pairs | 10-fold | 9.4 | 73 | - | - | - | [70]
2018 | Weakly supervised | Non-rigid, deformable | MRI–TRUS | CNN | N | 76 patients | 12-fold | 3.6 | 87 | - | - | - | [71]
2018 | Unsupervised | Affine | MRI–TRUS | GAN, CNN | N | 763 pairs | No | 3.48 | - | - | - | - | [72]
2020 | Weakly supervised | Affine and non-rigid, deformable | MRI–TRUS | FCN, 3D UNet | Y | 36 pairs | Leave-one-out | 2.53 | 91 | 0.88 | 4.41 | - | [73]
2021 | Weakly supervised | Deformable | MRI–TRUS | 3D UNet | Y | 288 patients | No | - | 87 | - | 7.21 | - | [74]
2020 | Supervised | Rigid, deformable | MRI–TRUS | UNet, CNN | Y | 12 patients | No | 2.99 | - | - | - | - | [75]
2018 | Knowledge-based and DL | Non-rigid, deformable | MRI–TRUS | 3D encoder–decoder | N | 108 pairs | 12-fold | 6.3 | 82 | - | - | - | [76]
2020 | Knowledge-based and DL | Non-rigid, deformable | MRI–TRUS | CNN, 3D point cloud | Y | 50 patients | Leave-one-out | 1.57 | 94 | 0.90 | 2.96 | - | [77]
2019 | Supervised | Rigid, deformable | MRI–CT | RF based on an auto-context model | N | 17 treatment plans from 10 patients | No | - | - | - | - | <1 | [78]
2020 | Knowledge-based | Rigid, affine, and deformable | MRI–histology images | - | N | 157 patients | No | - | 97 | - | 1.99 | - | [79]
2021 | Knowledge-based | Rigid, deformable | MRI–CBCT | CNN, 3D point cloud | Y | 50 patients | 5-fold | 2.68 | 93 | 1.66 | - | - | [80]
2021 | Unsupervised | Affine, deformable | MRI–histology images | CNN | N | 99 patients (training), 53 patients (test) | No | - | 97.5, 96.1, 96.7 | - | 1.72, 1.98, 1.96 | - | [81]
2017 | Unsupervised | Rigid, affine, deformable | MRI–histology images | Multi-image super-resolution GAN | N | 533 patients | 5-fold | - | 95 (prostate), 68 (cancer) | - | - | - | [59]
2 Auto-Seg = auto-segmentation, CV = cross-validation, TRE = target registration error, DSC = Dice similarity coefficient, MSD = mean surface distance, HD = Hausdorff distance, Refs. = references, MRI–TRUS = MRI–transrectal ultrasound, - = not reported.
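To make the overlap and distance metrics in Table 2 concrete, the following minimal Python sketch (an illustration only, assuming NumPy and SciPy are installed; the mask arrays and function names are hypothetical) shows one common way to compute the Dice similarity coefficient and the symmetric Hausdorff distance between two binary segmentation masks.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff_distance(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the voxel coordinates of two masks.
    The result is in voxel units; multiply by the voxel spacing to obtain mm."""
    pts_a, pts_b = np.argwhere(mask_a), np.argwhere(mask_b)
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

# Toy 2D example: two overlapping square "segmentations".
ref = np.zeros((64, 64), dtype=bool)
ref[16:48, 16:48] = True
pred = np.zeros((64, 64), dtype=bool)
pred[20:52, 20:52] = True
print(dice_coefficient(ref, pred), hausdorff_distance(ref, pred))
```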
Table 3. Machine learning methods for lesion detection and characterization. Abbreviations are defined in footnote 3 below the table.
Publication Year | Application | Method | Serum PSA (ng/mL) | Prostate Zone | Data Source | MRI Sequence(s) | Sample Size (Train/Val/Test) | CV | Ground Truth | Non-MRI Data Features (If Any) | Acc, AUC (%) | Ssv, Spc (%) | Kap, DSC (%) | Refs.
2018 | Detecting csPCa in AS patients | MRMR, QDA, RF, SVM | 6.96 ± 5.8 | WG | Pv | T2w, ADC | 31/-/25 | 3-fold (training) | PI-RADS score and biopsy | - | 72, - | - | - | [87]
2019 | Differentiating csPCa and non-csPCa | MRMR and LASSO algorithm | >10 | WG | Pv | T1w, T2w, DWI, ADC | 187/-/93 | 10-fold | Gleason score | - | -, 82.3 | 84.1, 72.7 | - | [88]
2019 | Differentiating TZ PCa from BPH | Logistic regression and SVM | - | TZ | Pv | T2w, ADC | 105/-/- | No | - | - | -, 98.9 | 93.2, 98.4 | 84 (tumour), 87 (BPH) | [89]
2021 | Prediction of csPCa (PI-RADS ≥ 4) | Texture-based DL and CNN | 4.7–8.7 | WG | Pv | T2w, ADC | 239/42/121 | No | PI-RADS score | - | -, 85 | -, 70 | - | [90]
2020 | Differentiating csPCa and non-csPCa | 3D CNN | - | WG | PROx | ADC, DWI, K-trans (from DCE) | 175/-/25 | 8-fold | PI-RADS score | Location of lesion centre | -, 89.7 | 81.9, 86.1 | - | [91]
2017 | Differentiating csPCa and non-csPCa | Transfer learning, ImageNet | - | WG | PROx | T2w, DWI, ADC, DCE | 330/-/208 | k-fold | PI-RADS score | - | -, 83 | - | - | [92]
2019 | Classifying low-grade and high-grade PCa | Transfer learning, AlexNet NN | - | WG | Pv, PROx-2 | T2w, ADC | 110/66/44 | No | Gleason score | - | 86.92, - | - | - | [93]
2019 | Prediction of csPCa (PI-RADS ≥ 4) | Transfer learning | 7.9 ± 2.5 | WG | Pv | T2w, ADC | 169/-/47 | No | PI-RADS score | Zonal information | 72.3, 72.6 | 63.6, 80 | - | [94]
2015 | PZ cancer detection | Regression, SVM | 4.9–8.6 | PZ | Pv | T2w, ADC | 56/-/56 | Yes | Prostatectomy | - | -, 91 | 97 | - | [95]
2018 | Predictive maps of epithelium and lumen density | Least squares regression | - | WG | Pv | T2w, ADC, ECE | 20/-/19 | No | Prostatectomy | - | -, 77 (epithelium); 84 (lumen) | - | - | [96]
2021 | PCa detection and segmentation | GrowCut, Zernike, KNN, SVM, MLP | - | PZ, TZ | Pv | T2w | 217/-/54 | No | Prostatectomy | Clinical and histopathological variables | 80.97, - | - | 79 | [97]
2020 | PCa detection and segmentation | 3D CNN | - | WG | Pv | T2w, DWI, ADC | 116/-/155 | 3-fold | Biopsy | Location of lesion | -, 0.65–0.89 | 82–92, 43–76 | - | [98]
2021 | PCa differentiation and segmentation | SPCNet | 6.8–7.1 | WG | Pv | T2w, ADC | 102/-/332 | 5-fold | Prostatectomy | - | -, 0.75–0.85 | - | - | [99]
2021 | PCa detection and classification | Cascaded DL | 4.7–9.9 | WG | Pv, PROx | T2w, ADC | 1290/-/150 | 5-fold | PI-RADS score | - | 30.8, - | 56.1, - | -, 35.9 | [100]
2021 | PCa segmentation | Transfer learning, CNN, test-time augmentation | 2.1–18 | WG | Pv, PROx | T2w, DWI and DCE | 16, 16/-/16 | Leave-one-out | Prostatectomy | - | - | - | 59 | [54]
2018 | PCa segmentation | Encoder–decoder CNN | - | WG, PZ, CG | I2CVB | T2w | 1413/236/707 | 10-fold | Radiologist-segmented results | - | 89.4, - | - | - | [101]
2017 | Improve PI-RADS v2 | RBF-SVM, SVM-RFE | 12.5–56.1 | WG, TZ, PZ | Pv | T2w, DWI, DCE | 97/-/- | Leave-one-out | PI-RADS scores | - | -, 98.3 (PZ); 96.8 (TZ) | 94.4 (PZ); 91.6 (TZ), 97.7 (PZ); 95.5 (TZ) | - | [102]
2020 | Prediction of PI-RADS v2 score | ResNet34 CNN | - | WG | Pv, PROx | T2w, DWI, ADC, DCE | 482/137/68 | No | PI-RADS score | - | - | - | 40, - | [17]
2019 | PCa detection, prediction of GGG score | UNet, batch normalization, ordinal regression | - | WG | PROx-2 | T2w, ADC | 99/-/63 | 5-fold | Gleason score | - | - | - | 32.1 | [103]
2019 | PCa segmentation, prediction of GS score | Multi-class CNN (DeepLab) | - | WG | Pv | T2w, DWI | 417/-/- | 5-fold | Gleason score | - | -, 80.9 | 88.8, - | - | [104]
2021 | Prediction of GGG score | UNet, ordinal regression | - | WG, TZ, PZ | PROx-2 | T2w, DWI, ADC | 112/-/70 | 5-fold | GGG | Zonal information | - | - | 13, 37 | [105]
2019 | Prediction of GGG score | KNN | - | TZ, PZ | Pv | T2w, DCE, DWI, ADC | 112/-/70 | 3-fold | GGG | Texture features, zonal information | -, 92 (PZ); 87 (TZ) | - | - | [106]
2018 | Prediction of GGG score | Stacked sparse autoencoders | - | WG | PROx-2 | T2w, DWI, ADC | 112/-/70 | 3-fold | GGG | Hand-crafted texture features | 47.3, - | - | 27.72, - | [107]
2021 | Lesion detection and classification | Cascaded DL | 4.7–9.9 | WG | Pv, PROx | T2w, ADC | 1290/-/150 | 5-fold | PI-RADS score | - | 30.8, - | 56.1, - | -, 35.9 | [100]
3 Val = validation, CV = cross-validation, Acc = accuracy, AUC = area under ROC curve, Ssv = sensitivity, Spc = specificity, Kap = Kappa score, DSC = Dice similarity coefficient, Refs. = references, - = not reported. csPCa = clinically significant prostate cancer, GGG = Gleason grade group. For prostate zones, WG = whole gland, PZ = peripheral zone, TZ = transition zone. For data sources, Pv = private, PROx = PROSTATEx challenge [15], PROx-2 = PROSTATEx-2 challenge [15], I2CVB = I2CVB benchmark dataset [108].
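For readers less familiar with the classification metrics summarised in Table 3, the following is a minimal sketch (assuming scikit-learn is available; the label and score arrays below are purely illustrative) of how accuracy, AUC, sensitivity, specificity and Cohen's kappa are typically derived from per-lesion predictions.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             confusion_matrix, roc_auc_score)

# Illustrative ground truth (1 = clinically significant PCa) and model outputs.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.3, 0.1, 0.8, 0.6])  # predicted probabilities
y_pred = (y_score >= 0.5).astype(int)                          # thresholded labels

acc = accuracy_score(y_true, y_pred)
auc = roc_auc_score(y_true, y_score)              # AUC uses the continuous scores
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                      # Ssv
specificity = tn / (tn + fp)                      # Spc
kappa = cohen_kappa_score(y_true, y_pred)         # Kap
print(f"Acc={acc:.2f}, AUC={auc:.2f}, Ssv={sensitivity:.2f}, "
      f"Spc={specificity:.2f}, Kap={kappa:.2f}")
```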
Table 4. Machine learning methods for aiding treatment. Abbreviations are defined in footnote 4 below the table.
Publication Year | Application | Method | Input Feature | Sample Size | Ground Truth | MRI Sequence(s) | CV | Acc (%) | AUC (%) | C-Index | Refs.
2019 | EPE detection | Bayesian network, texture analysis | Index lesions from biparametric MRI | 39 | Prostatectomy | T2w, ADC | No | 82 | 88 | - | [113]
2020 | ECE prediction | LASSO regression | ROIs of T2w images | 119 | Prostatectomy | T2w, DWI, DCE | 10-fold | - | 82.1 | - | [114]
2020 | EPE detection | LASSO regression | Radiomic features, patients’ clinical and pathological variables | 115 | Prostatectomy | T2w, ADC, DWI, DCE | No | 81.8 | 86.5 | - | [115]
2020 | EPE prediction | Combination of RF model, radiology interpretation and clinical nomogram | MR radiomic features | 228 | Prostatectomy | T1w, T2w, DWI, DCE | 10-fold | - | 79 | - | [116]
2021 | EPE detection | SVM | Radiomic features from MRI index lesions | 193 | Prostatectomy | T2w, ADC | 10-fold | 79 | - | - | [117]
2009 | BCR prediction | Cox regression | GS and clinical variables | 610 | BCR defined by NCCN guideline | T2w, DWI, ADC, DCE | No | - | - | 0.776 (5-year), 0.788 (10-year) | [118]
2015 | BCR prediction | Univariate and multivariate analyses using Cox’s proportional hazards model | PI-RADS v2 score, surgical parameters | 158 | Two consecutive PSA ≥ 0.2 ng/mL | T2w, DWI, DCE | No | - | - | - | [119]
2019 | Pre-biopsy mpMRI to improve preoperative risk model | Cox regression | Pre-biopsy mpMRI score | 372 | Two consecutive PSA ≥ 0.1 ng/mL | T1w, T2w | No | - | - | - | [120]
2010 | BCR prediction | Univariate and multivariate analyses | Clinical variables and tumour ADC data | 158 | PSA ≥ 0.2 ng/mL | ADC, DWI | No | - | 75.5 | - | [121]
2019 | BCR and bRFS prediction | Univariate and multivariate Cox regression | IBSI-compliant radiomic features | 107 | Two consecutive PSA ≥ 0.2 ng/mL | T2w, ADC | No | - | 76 | - | [122]
2016 | BCR prediction | SVM | Clinicopathologic and bpMRI variables | 205 | PSA ≥ 0.2 ng/mL | T2w, DWI, DCE | 5-fold | 92.2 | - | - | [123]
2018 | Identify predictive radiomic features for BCR | SVM, linear discriminant analysis and RF | Radiomic features from pretreatment bpMRI | 120 | PSA > 0.2 ng/mL (post-RP) and PSA > 2 ng/mL (post-RT) | T2w, ADC | 3-fold | - | 73 | - | [124]
2021 | BCR prediction | Radiomics-based DL | Quantitative features of MRI | 485 | PSA ≥ 0.2 ng/mL | T1w, T2w, DWI, ADC | No | - | - | 0.802 | [125]
2018 | Post-prostatectomy pathology prediction | RF | Demographics, PSA trends, and location-specific biopsy findings | 1560 | Prostatectomy | - | - | - | 75 (OCD), 73 (ECE), 64 (pN+) | - | [126]
2019 | IMRT response prediction | Univariate radiomic analysis, ML classification models | Pre-/post-IMRT mpMRI radiomic features | 33 | Change of ADC values before and after IMRT | T2w, ADC | 10-fold | - | 63.2 | - | [127]
2004 | BCR prediction | ANN | MRI findings, PSA, biopsy Gleason score | 210 | PSA level ≥ 0.1 ng/mL | T2w, DWI, ADC, DCE | 5-fold | - | 89.7 | - | [128]
4 CV = cross-validation, Acc = accuracy, AUC = area under ROC curve, C-Index = concordance index [129], Refs. = references, - = not reported. EPE = extraprostatic extension, ECE = extracapsular extension, BCR = biochemical recurrence, bRFS = biochemical recurrence-free survival, RP = radical prostatectomy, RT = radiotherapy, IMRT = intensity-modulated radiation therapy.
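The concordance index (C-index) reported for several biochemical recurrence models in Table 4 can be illustrated with a short sketch; this simplified version (ignoring tied event times, with hypothetical variable names and toy data) counts the fraction of comparable patient pairs whose predicted risk ordering agrees with their observed time-to-event ordering.

```python
import numpy as np

def concordance_index(times, events, risk_scores):
    """Harrell-style C-index: among pairs where the patient with the shorter
    observed time had an event, count pairs whose higher predicted risk matches
    the earlier event (ties in risk count as 0.5)."""
    concordant, comparable = 0.0, 0
    for i in range(len(times)):
        if not events[i]:                      # censored patients cannot anchor a pair
            continue
        for j in range(len(times)):
            if times[j] > times[i]:            # patient j outlasted patient i
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy data: months to BCR (or censoring), event indicator, predicted risk.
t = np.array([12, 30, 24, 60, 8])
e = np.array([1, 0, 1, 0, 1])
r = np.array([0.8, 0.3, 0.6, 0.1, 0.9])
print(concordance_index(t, e, r))
```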
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
