Review

Artificial Intelligence in Brain Tumor Imaging: A Step toward Personalized Medicine

1 Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
2 Postgraduation School in Radiodiagnostics, University of Rome Tor Vergata, Viale Oxford 81, 00133 Rome, Italy
3 Radiology Department, San Raffaele Hospital, Via Olgettina 60, 20132 Milan, Italy
4 Radiology Department, Fatebenefratelli Hospital, ASST Fatebenefratelli Sacco, Piazza Principessa Clotilde 3, 20121 Milan, Italy
* Author to whom correspondence should be addressed.
Curr. Oncol. 2023, 30(3), 2673-2701; https://doi.org/10.3390/curroncol30030203
Submission received: 18 January 2023 / Revised: 10 February 2023 / Accepted: 20 February 2023 / Published: 22 February 2023
(This article belongs to the Section Neuro-Oncology)

Abstract

The application of artificial intelligence (AI) is accelerating the paradigm shift towards patient-tailored brain tumor management, achieving the optimal onco-functional balance for each individual. AI-based models can positively impact different stages of the diagnostic and therapeutic process. Although the histological investigation will remain difficult to replace, in the near future the radiomic approach will allow a complementary, repeatable and non-invasive characterization of the lesion, assisting oncologists and neurosurgeons in selecting the best therapeutic option and the correct molecular target in chemotherapy. AI-driven tools are already playing an important role in surgical planning, delimiting the extent of the lesion (segmentation) and its relationships with the surrounding brain structures, thereby allowing precision brain surgery that is as radical as reasonably acceptable while preserving quality of life. Finally, AI-assisted models allow the prediction of complications, recurrences and therapeutic response, suggesting the most appropriate follow-up. Looking to the future, AI-powered models promise to integrate biochemical and clinical data to stratify risk and direct patients to personalized screening protocols.

1. Introduction

Even though artificial intelligence (AI) is far from being used routinely in the current workflow of radiologists, the number of clinical studies using radiomics and radiogenomics approaches in neuroradiology is increasing day by day. In this article, we describe some examples of AI applications in the main activities related to brain tumor imaging, with a special focus on gliomas. These applications include lesion detection, differential diagnosis, non-invasive molecular characterization, the definition of lesion boundaries and spatial relationships (segmentation), and an assessment of response to therapy and prognosis. It is likely that in each of these areas, AI models will soon play a central role in assisting the radiologist in his daily work [1].
Gliomas are the most common type of central nervous system (CNS) neoplasm and arise from glial cells [2]. They represent a clinically and biologically heterogeneous disease, with several recognized histotypes and molecular subtypes, and a clinical course ranging from slowly growing tumors with a predominantly benign prognosis, such as pilocytic astrocytoma, to particularly aggressive histological subtypes, such as glioblastoma multiforme (GBM), which is associated with rapid progression and poor prognosis [3,4]. Therefore, timely and accurate diagnosis is essential to ensure adequate patient treatment and long-term survival.
Historically, brain tumor classification was based solely on histopathological features [5], whereas the latest editions of the WHO classification incorporate genetic and epigenetic information, such as molecular markers (e.g., IDH mutation, 1p/19q codeletion, etc.) and DNA methylation profiles [6,7]. The genetic and epigenetic makeup defines the molecular signature, a “barcode” of the tumor, whose recognition is essential for clinical decision-making in the era of targeted therapies [8]. Therefore, tissue sampling remains the gold standard for decoding the molecular landscape of most CNS tumors, especially gliomas [9]. Nevertheless, growing evidence has highlighted the powerful role of artificial intelligence in oncological neuroimaging through the extraction of quantitative information from routine radiological examinations [10]. Alongside the molecular signature is the imaging signature, which offers complementary and ideally additional information for the characterization of the brain tumor, with a potential role in guiding the choice of the most appropriate therapy and clinical management [11]. In this landscape, AI-assisted tools represent the bridge from precision diagnostics to precision therapeutics [12].
In Figure 1, the flowchart shows the possible applications of AI in brain tumor imaging to provide customized patient management.

2. An Introduction to Artificial Intelligence and Related Concepts

In this section, we provide some basic definitions and theoretical frameworks of the most important AI-related concepts in biomedical imaging.

2.1. Artificial Intelligence (AI)

AI can be defined as technology that mimics human cognitive processes, such as learning, reasoning, and problem-solving. Developed as a branch of computer science, present-day AI is a broad field of knowledge that welcomes contributions from different disciplines, such as statistics, informatics, and physics.

2.2. Radiomics

Radiomics was first described by Lambin in 2012 as the high-throughput extraction of numerous quantitative image features from radiographic images for diagnostic purposes [13]. At the basis of this new approach is the awareness that radiological images must be considered as numerical data rather than simple images, providing much more information than can be perceived by the radiologist through a qualitative evaluation [14,15,16]. The radiomic paradigm seeks to extract quantitative and ideally repeatable information from diagnostic images, including complex patterns that are challenging for the human eye to detect or quantify. There are high expectations that the contribution of artificial intelligence to biomedical imaging will help close the gap towards personalized medicine [17,18].

2.2.1. Radiogenomics

Radiogenomics may be considered a subset of radiomic applications that aims to link imaging and biology by correlating the lesion imaging phenotype (“radio”) with the genotype (“genomics”). It is based on the assumption that the phenotype is the expression of the genotype, so that genomic and proteomic patterns can be expressed in terms of macroscopic image-based features [19].

2.2.2. Radiomics Workflow

Radiomics workflow is a complex process leading to the development and validation of AI-based tools aiming to extract diagnostic and/or predictive information from biomedical images [20,21]. It includes some well-established steps for image acquisition (or data collection), post-processing/reconstruction, segmentation (definition of the area/volume of interest), feature extraction and harmonization, and data mining/model building. The last phase consists of the effective extraction of valuable information from imaging data, essentially through machine learning (ML) methods that need to be trained, validated, and tested to ensure reliability and clinical applicability [22,23].
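As an illustrative sketch of the feature-extraction step, the snippet below uses the open-source PyRadiomics package; this is only one possible implementation under assumed settings, and the file names are placeholders rather than part of any specific published workflow.

```python
# Hypothetical sketch of the radiomics feature-extraction step using the
# open-source PyRadiomics package; paths and settings are placeholders.
from radiomics import featureextractor

# Configure the extractor: isotropic resampling and a fixed bin width.
settings = {"resampledPixelSpacing": [1, 1, 1], "binWidth": 25}
extractor = featureextractor.RadiomicsFeatureExtractor(**settings)
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("firstorder")  # intensity statistics
extractor.enableFeatureClassByName("glcm")        # co-occurrence texture features

# image.nii.gz: co-registered MRI volume; mask.nii.gz: segmented tumor VOI.
result = extractor.execute("image.nii.gz", "mask.nii.gz")

# Keep only the numeric radiomic features (diagnostic metadata is also returned).
feature_vector = {k: v for k, v in result.items() if k.startswith("original_")}
print(len(feature_vector), "features extracted")
```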

2.2.3. Machine Learning

ML is a branch of AI aimed at the automated detection of meaningful patterns in data and is at the basis of data mining [24,25,26,27]. In radiology, ML can be used to extract information from imaging data [28]. First, ML algorithms are trained to perform a certain task related to medical images starting from some initial data, which must be provided either with or without expert labels, depending on whether the model is supervised or unsupervised. In the second phase, the computational power of modern computers is exploited to perform these tasks automatically or semi-automatically in order to replace or improve the performance of the human decision-maker. There are essentially four main types of ML (supervised learning, unsupervised learning, semi-supervised learning, and self-supervised learning). Supervised learning (SL) and unsupervised learning (UL) are the two most used in radiomics applications [29].

2.2.4. Supervised Learning

Supervised learning (SL) is the most basic mechanism of ML and, as the name suggests, needs some degree of human supervision to be trained [25,26,27,30]. SL algorithms simulate the human cognitive process of “learning by examples”. This kind of ML is appropriate for very general classification tasks where new elements must be labeled in accordance with a set of predefined categories. These techniques require a training dataset consisting of input-output (x, y) pairs, in which human experts annotate (“label”) each input data point x with its corresponding label value y (for example, classifying a brain mass as “tumor” or “nontumor”). While normally a computer is programmed to perform a known operation on the input to obtain the output, in this case the computer is asked to find the operation that links the given inputs and outputs. The algorithm is structured to test a series of theoretical hypotheses mapping possible relationships (x → y) among the data, mimicking the human annotator who learns through experience to infer the category from the image characteristics.
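A minimal sketch of the supervised paradigm is given below, assuming scikit-learn and a toy set of labeled radiomic feature vectors; the data and the “tumor”/“nontumor” labels are synthetic illustrations, not from any cited study.

```python
# Minimal supervised-learning sketch: a classifier is trained on expert-labeled
# feature vectors and evaluated on unseen cases. Data are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))           # 200 lesions, 20 radiomic features each
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # expert labels: 1 = "tumor", 0 = "nontumor"

# Learn the (x -> y) mapping on labeled examples, then test it on held-out cases.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```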

2.2.5. Unsupervised Learning

Unsupervised learning (UL) algorithms rely solely on the inherent structure of data that have not been labeled, classified, or categorized by an expert, and they exhibit self-organization to capture hidden patterns in the data [26,30]. In this type of ML, the learning algorithm is given an unlabeled dataset and instructed to extract knowledge from it. UL algorithms are utilized in tasks such as clustering, where the goal is to divide the dataset into groups based on particular feature characteristics, and association, where the goal is to identify rules that link data points together.
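The sketch below illustrates the unsupervised paradigm with a simple clustering task, assuming scikit-learn; the two hidden groups are synthetic and stand in for, e.g., lesions with different feature profiles.

```python
# Minimal unsupervised-learning sketch: no labels are provided, and the
# algorithm self-organizes the data into clusters. Data are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 10)),   # one hidden group of feature vectors
               rng.normal(3, 1, (100, 10))])  # a second hidden group

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(labels))
```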

Semi-Supervised Learning

Semi-supervised learning refers to a machine learning paradigm intermediate between unsupervised and supervised learning that works primarily on unlabeled data with a small amount of labeled data [26,30].

Self-Supervised Learning

Self-supervised learning refers to a machine learning paradigm in which the basic idea is to automatically generate some kind of supervisory signal to solve a task [26,30]. Self-supervised learning is very similar to unsupervised learning, since no labels are given, but it aims to tackle tasks that are traditionally addressed by supervised learning. Since it is not possible to provide adequate supervision for all the data a large project must accumulate, this type of ML addresses the problem of limited labeled data by taking advantage of the abundant amount of accessible but unlabeled data in order to train precise classifiers. Some neural networks, for example autoencoders, are sometimes called self-supervised learning tools.
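As a hedged illustration, the sketch below trains a small autoencoder, where the supervisory signal is generated automatically from the unlabeled data itself (reconstruction of the input); it assumes TensorFlow/Keras and uses random placeholder data.

```python
# Self-supervised sketch: an autoencoder learns to reconstruct its own input,
# so no human-provided labels are needed. Assumes TensorFlow/Keras; data are random.
import numpy as np
from tensorflow import keras

X = np.random.rand(500, 64).astype("float32")  # 500 unlabeled feature vectors

autoencoder = keras.Sequential([
    keras.Input(shape=(64,)),
    keras.layers.Dense(8, activation="relu"),    # compressed internal representation
    keras.layers.Dense(64, activation="sigmoid"),
])
autoencoder.compile(optimizer="adam", loss="mse")

# The "label" for each sample is the sample itself (reconstruction task).
autoencoder.fit(X, X, epochs=5, batch_size=32, verbose=0)
```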

2.2.6. ML Models

At the heart of every ML approach are models that can extract insights from unseen datasets to find patterns or make decisions [26,30]. ML models are the protagonists of ML processes: they are what learns during the training phase and what returns this knowledge when put to the test in a new context. ML models can be defined as a set of rules for manipulating data according to theoretical hypotheses that map the possible relationships within the dataset. We generally distinguish models based on statistics from models based on artificial neural networks. Statistical assumptions have the characteristic of being computationally feasible, which means that they can be easily translated into programming code to perform automated data analysis. Through the automation of machines, it is possible to test the ability of a model to infer the relationships within a given dataset and to compare the performance of different models. Of course, different assumptions or mapping models can be used to infer predictions of a quantity of interest that satisfies a predefined requirement, which in the simplest case of ML is the relationship between the native image and the label that the radiologist assigned. The validation and test phases, which follow the training phase, are necessary to evaluate the effectiveness of the mapping prescribed by the model (if any) on new data.

2.2.7. Features

While radiologists primarily evaluate qualitative features, such as an increase or decrease in signal intensity, by comparison with a subjective reference standard, radiomics features are quantifiable image properties, or metrics, that can be easily calculated or measured by a machine. In general, a distinction is made between hand-crafted features (manually defined by an expert) and automatically extracted features (usually obtained through deep learning [DL] algorithms, see below). Some features, called first-order features, include basic characteristics such as shape, volume, and signal intensity metrics, whose qualitative meaning is easy to grasp. On the other hand, second-order features are obtained from the combination of first-order features or are automatically extracted using DL networks, and they may appear clinically uninformative to the radiologist [31]. Yet, it is especially from these high-level features that the potential of AI-based tools derives, as they can capture what the radiologist cannot. The feature extraction phase is fundamental in the radiomics workflow because features represent the raw material on which artificial intelligence models are trained and validated. Therefore, the choice of reproducible and robust features is essential in the radiomic workflow [32].
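To make the distinction concrete, the sketch below computes a few hand-crafted first-order statistics and one gray-level co-occurrence texture feature, assuming NumPy, SciPy, and scikit-image; the region of interest is a random placeholder array.

```python
# Illustrative hand-crafted features: first-order intensity statistics plus one
# second-order (texture) feature from the gray-level co-occurrence matrix (GLCM).
import numpy as np
from scipy.stats import skew, kurtosis
from skimage.feature import graycomatrix, graycoprops

roi = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # placeholder ROI

first_order = {
    "mean": roi.mean(),
    "std": roi.std(),
    "skewness": skew(roi.ravel()),
    "kurtosis": kurtosis(roi.ravel()),
}

glcm = graycomatrix(roi, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
texture = {"glcm_contrast": graycoprops(glcm, "contrast")[0, 0]}

print(first_order, texture)
```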

2.2.8. Artificial Neural Network

Artificial neural networks (ANNs) are a particular learning paradigm inspired by the biological network of the human brain [33,34]. ANNs consist of layers of nodes, each representing a computational unit that processes the input according to a specific function and transfers the output, through a series of interconnections, to one or more successive nodes. Although ANN nodes differ markedly from biological cells in terms of complexity and number of connections, the network as a whole exhibits emergent properties, as does the biological network. These properties are generated by the coordinated activity of many smaller units, each performing an elementary computational operation, essentially summing the variously weighted inputs and transferring the information onward when a certain threshold value is reached. The learning capabilities of ANNs are not based on testing statistical hypotheses to map possible relationships between data, but on the flexibility of the computational properties of the neural ensemble.
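As a toy illustration of this elementary operation, the sketch below implements a single node that sums its weighted inputs and fires when a threshold is exceeded (NumPy only, made-up values).

```python
# A single artificial "neuron": weighted sum of the inputs followed by a
# step-like transfer function. Values are arbitrary illustrations.
import numpy as np

def neuron(inputs, weights, threshold=1.0):
    activation = np.dot(inputs, weights)            # weighted sum of incoming signals
    return 1.0 if activation >= threshold else 0.0  # fire only above the threshold

x = np.array([0.5, 0.8, 0.2])   # signals arriving from upstream nodes
w = np.array([0.9, 1.2, -0.4])  # connection weights, adjusted during learning
print(neuron(x, w))             # output passed on to the next layer
```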

2.2.9. Deep Learning

Deep learning is a domain of AI that leverages sophisticated ANNs, such as convolutional neural networks (CNNs), to identify complex patterns in data, which is critical for image analysis [35,36,37]. CNNs are a key DL technique for automated radiomics feature extraction, as they are capable of automatically extracting deep image features, discriminating very fine details, and handling large amounts of data [38]. DL networks are composed of a huge number of intermediate layers representing increasing degrees of sophistication. These models are inspired by the organization of the visual cortex of the mammalian brain, where hierarchically organized layers process increasingly complex intermediate visual features, such as lines, edges, and shapes, until the meaning of the entire visual object is recovered. It is not intuitively decipherable how the processing of the middle layers adds up to the final result; this is known as the “black box phenomenon” and contributes to the challenge of interpreting the results of AI tools [39].
DL models are designed to capture the relation between local features and the entire image context, thus resulting in higher performance in image-recognition tasks. Some of the very popular CNNs, such as AlexNet [40], VGG [41] and GoogLeNet [42], are currently being used in medical image-classification tasks. Various network architectures or stacks of linear and nonlinear functions, such as CNNs or auto-encoders, are used in DL-based radiomics to find the most important/critical characteristics of radiological images. In 2019, the simplest form of a CNN was proposed to classify brain images into three classes (glioma, meningioma, and pituitary), and a classification accuracy of 84.19% was reported [43].
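For orientation, the sketch below shows a minimal CNN for a three-class brain MRI classification task (glioma, meningioma, pituitary), assuming TensorFlow/Keras; it is an illustrative architecture with an assumed input size, not the network published in the cited study.

```python
# Minimal CNN sketch for three-class brain MRI classification; illustrative only,
# not the architecture of the cited study. Assumes TensorFlow/Keras.
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(128, 128, 1)),                 # single-channel MRI slice (assumed size)
    keras.layers.Conv2D(16, 3, activation="relu"),    # low-level local features
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu"),    # deeper, more abstract features
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(3, activation="softmax"),      # glioma / meningioma / pituitary
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_split=0.2, epochs=20)
```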

3. Lesion Detection and Differential Diagnosis

AI-powered tools can aid neuroradiologists in lesion detection and differential diagnosis.
Since gliomas are often diagnosed when they are large and symptomatic, the detection of glioma-like lesions on MRI may seem relatively trivial to an experienced neuroradiologist. Conversely, the early diagnosis of small brain metastases (BM) in oncological patients during follow-up is challenging, because sensitivity on MRI is variable, and many details of MRI acquisition can impact the performance [44]. However, since stereotactic radiosurgery protocols and other therapeutic decisions are based on the number and location of even small metastases, early diagnosis is a real concern for neuroradiologists, given the high impact on the patient’s prognosis. For this reason, most of the computer-aided detection (CAD) tools available in the field of neuro-oncology focus primarily on the automated detection of brain metastases.
The proper tuning of CAD tools is essential to ensure diagnostic accuracy, lowering the risk of overdiagnosis, overtreatment, and unreasonable concern in patients [23]. Generally speaking, if the decision threshold is set too low, the model can be affected by a high false-positive rate, for example, flagging vascular structures instead of small metastases; on the other hand, when the threshold is too high, the model can fail to detect small (in particular, <3 mm) lesions [45].
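The trade-off can be made explicit with a toy computation on synthetic candidate scores, as in the sketch below (NumPy only; the score distributions are invented for illustration).

```python
# Toy illustration of the detection-threshold trade-off: a lower threshold raises
# sensitivity but also the false-positive count. Scores are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)
scores_true = rng.beta(5, 2, 300)    # model scores for true metastases
scores_false = rng.beta(2, 5, 3000)  # model scores for vessels and other mimics

for threshold in (0.3, 0.5, 0.7):
    sensitivity = np.mean(scores_true >= threshold)
    false_positives = int(np.sum(scores_false >= threshold))
    print(f"threshold {threshold}: sensitivity {sensitivity:.2f}, "
          f"false positives {false_positives}")
```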
Park et al. have recently demonstrated how DL-based models significantly increase the diagnostic accuracy in the detection of small lesions by exploiting the integration of large amounts of MRI data: in particular, a DL model that combines 3D Black Blood and 3D GRE MRI sequences outperformed a DL model using only 3D GRE sequences in the detection of brain metastases (p < 0.001), yielding a sensitivity of 93.1% versus 76.8% [46].
Solitary BM and GBM can exhibit quite similar MRI features, such as post-contrast ring enhancement, necrotic core, and large peritumoral edema presenting with high signal on T2-weighted and FLAIR images [47]. Differentiating these two entities is essential, considering they are the most common brain tumors in the adult population and have quite different treatments [47]. Thus, several researchers have focused on this topic, showing the advantages of multiparametric MRI [48,49] and, more recently, evaluating the performances of different AI-based classifiers compared to expert neuroradiologists.
For example, Swinburne et al. investigated whether an ML algorithm incorporating advanced MRI (advMRI) data from 26 patients could reliably differentiate between GBMs (n = 9), BM (n = 9), and primary central nervous system lymphoma (PCNSL) (n = 8). Their multilayer perceptron model performed well in discriminating between the three pathological classes. Using a leave-one-out cross-validation strategy, the model achieved a maximum accuracy of 69.2%, intermediate to that of two human readers (65.4% and 80.8%). However, applying the same model to cases where the human reviewers disagreed on the diagnosis yielded a 19.2% increase in incorrect diagnoses. No evaluation with an independent test cohort was carried out, which represents the main limitation of this study [50].
Since the contrast enhancement and local infiltration of white matter bundles are key features of high grade-gliomas (HGGs) [51], most ML and DL algorithms exploit radiomic features extracted on post-contrast T1-weighted 3D images or diffusion-weighted images (DWI) and related techniques, such as diffusion tensor imaging (DTI).
For example, a recent study based on DTI metrics, especially fractional anisotropy (FA) and ADC values, demonstrated that the peritumoral alteration differs between these two entities, with GBM showing greater heterogeneity due to its infiltrative and aggressive nature [1,52].
The combination of radiomic and non-radiomic features (clinical and qualitative imaging) has in some cases been shown to be better than using radiomic features alone. For example, a study by Han et al. established the importance of adding clinically relevant data (e.g., age and sex) and routine radiological indices (tumor size, edema ratio, and location) when building an AI-driven model to differentiate between GBM and BM from the lungs and other sites using a logistic regression model; the integrated model was superior to the radiomics-only model [53].
BM can be the first manifestation of a still unknown extracerebral malignancy; therefore, ML tools have been applied in the clinical scenario in which patients are found with brain metastases without a known primary site of cancer [54]. Metastases coming from different primary cancers show differences in the local environments and consequently exhibit different radiomic features [12]. Ortiz-Ramón et al. provided good results in differentiating metastases from lung cancers, melanoma, and breast cancers when they implemented an AI-driven model with two- and three-dimensional texture analyses of T1-weighted post-contrast sequences within a nested cross-validation structure after quantizing the images with multiple numbers of gray-levels to evaluate the influence of quantization [55].
Another challenging differential diagnosis is between GBM and PCNSL, since these entities may show similar appearances on conventional MRI, especially when GBMs do not present a necrotic core and the enhancement is not confined to the peripheral area but is more homogeneous [56,57]. Generally, PCNSLs are treated with chemotherapy and whole-brain radiotherapy, while GBM commonly undergoes surgical resection before chemo-radiotherapy [58]; therefore, a proper diagnosis is mandatory.
A recent study by Stadlbauer et al. [59] analyzed the effectiveness of a multiclass ML algorithm that integrates several radiomic features extracted from advanced MRI (including axial diffusion-weighted imaging sequences and gradient echo dynamic susceptibility contrast (GE-DSC) perfusion) and from a physiological MRI protocol including vascular architecture mapping (VAM) and quantitative blood-oxygenation-level-dependent (qBOLD) imaging, to classify the most common enhancing brain tumors (GBM, anaplastic glioma, meningioma, PCNSL, and brain metastasis). When compared to the human reader, the AI-driven algorithms achieved a better performance, with superior accuracy (0.875 vs. 0.850), precision (0.862 vs. 0.798), F-score (0.774 vs. 0.740), and AUROC (0.886 vs. 0.813); however, the radiologists demonstrated higher sensitivity (0.767 vs. 0.750) and specificity (0.925 vs. 0.902).
The DL paradigm has evolved in recent years as a big data grinding machine and has replaced many conventional algorithms in the field of image analysis as well. Furthermore, the development of open-source web platforms for programming DL models has expanded the frontiers of collaborative research in the development and validation of new DL-based tools. A good example is provided by Ucuzal et al., who developed web-based DL software aimed at the differential diagnosis of brain tumors using the popular Python programming language and the dedicated Keras library. Their software accepts multiple formats of the images, such as .jpeg, .jpg, and .png, and can be used to classify the input MRI image datasets into three diagnostic classes: meningioma, glioma, and pituitary tumors [60].
CNNs have a significant drawback in that they underutilize spatial relationships between the tumor and its surroundings, which is especially detrimental for classifying tumors. K. Adu and Y. Yu recently proposed a dilated capsule network model (CapsNet model), which is an extension of the traditional CNN, to address this issue [61]. In this model, the “routing by agreement” layer in the dilated CapsNet architecture takes the place of the pooling layer in the current CNN architecture [61]. Afshar et al. proposed a modified CapsNet architecture for classifying brain tumors that incorporates additional inputs into its pipeline from tissues surrounding the tumor, without detracting from the primary target, yielding satisfactory results [62].
Most AI-based classification algorithms target supratentorial tumors. In the posterior fossa, on the other hand, the two most common lesions in the adult population are hemangioblastoma, a benign tumor of vascular origin with a good survival rate, and brain metastases [63,64]. Obviously, discrimination between these entities is crucial for patient management, as once again the therapeutic approach differs. In this field, the role of AI-based tools is not yet well defined; however, a recent study attempted the differential diagnosis of intra-axial lesions of the posterior fossa using different radiomic algorithms (CNN, SVM, etc.), with promising results [1,65].
In some cases, even the differentiation between tumoral and non-tumoral processes is not simple. Tumefactive multiple sclerosis lesions, infection, inflammatory disease (paraneoplastic syndromes and autoimmune diseases), cortical dysplasia, and even stroke may be confused with tumoral processes, and an accurate differential diagnosis based only on the radiological appearance is impossible due to the overlapping radiological features [66].
For example, tumefactive multiple sclerosis is a great mimicker of HGG on conventional MRI. The use of an AI-assisted tool can help the neuroradiologist improve the differential diagnosis [1]: a recent study by Verma et al. achieved good results in differentiating among GBM, PCNSL, and tumefactive multiple sclerosis lesions using an in-house software tool called dynamic texture parameter analysis (DTPA), which incorporates the analysis of quantitative texture parameters extracted from dynamic susceptibility contrast-enhanced (DSCE) sequences [67]. A more recent study by Han et al. evaluated the performance of different radiomic signature models in differentiating between low-grade glioma (LGG) and inflammation using radiomic features extracted from T1-weighted (T1WI) and T2-weighted (T2WI) MRI images. The features were selected using a t-test followed by statistical regression (LASSO algorithm) to develop three radiomic models based on T1WI, T2WI, and their combination (T1WI + T2WI), using four, eight, and five radiomic features, respectively. The T2WI and combination models achieved better diagnostic efficacy in both the primary cohort and the validation cohort, significantly outperforming radiologist assessments [68].
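A rough sketch of this two-step selection strategy (univariate t-test followed by LASSO shrinkage) is shown below, assuming scikit-learn and SciPy; the feature matrix and labels are synthetic placeholders, not the data of the cited study.

```python
# Sketch of t-test + LASSO radiomic feature selection; data are synthetic.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 200))   # 120 patients, 200 candidate radiomic features
y = rng.integers(0, 2, 120)       # 1 = low-grade glioma, 0 = inflammation

# Step 1: keep features that differ between the groups on a univariate t-test.
_, p = ttest_ind(X[y == 1], X[y == 0], axis=0)
X_kept = X[:, p < 0.05]

# Step 2: LASSO shrinks uninformative coefficients to zero, leaving a small
# subset of features for the radiomic signature.
lasso = LassoCV(cv=5).fit(X_kept, y)
print("features retained by LASSO:", int(np.sum(lasso.coef_ != 0)))
```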
The main results of the above studies are listed in Table 1.

4. Tumor Characterization

In the era of molecular therapies, diagnostic neuroimaging should guide the diagnosis and treatment planning of brain tumors through a non-invasive characterization of the lesion, sometimes also called “virtual biopsy”, based on radiomic and radiogenomic approaches [11].
To date, most studies have challenged ML models to address very general classification tasks for brain tumors, such as differentiating between GBM and brain metastases [69,70]. More recently, however, researchers have focused on the development of AI-driven tools aiming to recognize the radiological signature of the tumor and to provide a comprehensive analysis of the grading, genomic and epigenomic landscape of cerebral gliomas, which is extremely useful for decision-making from a personalized medicine perspective. Therefore, several studies have been published in recent years in which AI algorithms are challenged with increasingly specific classification tasks, such as differentiation between subgroups of gliomas, for example, low-grade gliomas (LGGs) compared to high-grade gliomas (HGGs) [71,72]; isocitrate dehydrogenase (IDH) wild-type (IDH(−)) vs. IDH-mutated (IDH(+)) [73]; 1p/19q chromosomal arm deletion [74]; and others.
Several studies have focused on glioma grading. For example, Cho et al. used a radiomics approach to test the performance of various ML classifiers in determining the grading of 285 glioma cases (210 HGG, 75 LGG) obtained from the Brain Tumor Segmentation 2017 Challenge. The researchers extracted a large set of radiomic features from routine brain MRI sequences, including T1-weighted, T1-weighted contrast-enhanced, T2-weighted, and FLAIR. Three supervised ML classifiers showed an average AUC of 0.9400 for training cohorts and 0.9030 (logistic regression 0.9010, support vector machine 0.8866, and random forest 0.9213) for test cohorts [75].
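The comparison of such classifiers is commonly done with cross-validated AUC; the sketch below illustrates the idea with scikit-learn on synthetic placeholders (not the data or results of the cited study).

```python
# Sketch of comparing logistic regression, SVM, and random forest classifiers
# with cross-validated AUC; feature matrix and grade labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(285, 50))   # radiomic features from multiparametric MRI
y = rng.integers(0, 2, 285)      # 1 = high-grade glioma, 0 = low-grade glioma

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "support vector machine": SVC(probability=True),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: cross-validated AUC = {auc:.3f}")
```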
In another study, Tian et al. investigated the role of radiomics in differentiating grade II gliomas from grades III and IV; they extracted radiomic features from conventional, diffusion, and perfusion arterial spin labeling (ASL) MRI. After multiparametric MRI preprocessing, high-throughput texture and histogram features were derived from the patients’ volumes of interest (VOIs). The support vector machine (SVM) classifier then showed good accuracy/AUC for classifying LGGs from HGGs (96.8%/0.987) and for classifying grade III from grade IV (98.1%/0.992). Furthermore, they proved that texture features were more effective for non-invasively grading gliomas than histogram parameters [76].
Mzoughi et al. proposed a fully automatic, deep multi-scale 3D CNN architecture for classifying gliomas into low-grade and high-grade using the whole volumetric contrast-enhanced T1-weighted MRI sequence. For effective training, they used a data augmentation technique. After data augmentation and proper validation, the proposed approach achieved 96.49% accuracy, confirming that adequate MRI pre-processing and data augmentation can lead to an accurate classification model when exploiting CNN-based approaches [77].
Chang et al. used CNNs for the differential diagnosis between IDH-mutant and IDH wild-type gliomas on conventional MRI imaging, achieving 92% accuracy; these results were in line with prior hypotheses based on visual assessment and underlying pathophysiology, as IDH wild-type lesions are characterized by more infiltrative and ill-defined borders. Furthermore, the authors found that nodular and heterogeneous contrast enhancement and “mass-like FLAIR edema” could aid in the prediction of MGMT methylation status, with up to 83% accuracy [78].
In another study, Kim et al. aimed to evaluate the added value of radiomic features extracted from MRI DWI and perfusion sequences in the prediction of IDH mutation and tumor grading in LGGs. For the IDH mutation, the model trained with multiparametric features showed similar performance to the model based on conventional sequences, but in tumor grading, it showed higher performance. This trend was confirmed in the independent validation set, demonstrating that DWI features and especially the apparent diffusion coefficient (ADC) map play a significant role in tumor grading [73].
In one of the first studies in the field, Akkus et al. presented a non-invasive method to predict 1p/19q chromosomal arm deletion from post-contrast T1- and T2-weighted MR images using a multi-scale CNN. They found that increased enhancement, infiltrative margins, and left frontal lobe predilection are associated with 1p19q codeletion with up to 93% accuracy [74].
In a larger, recent retrospective study, Meng et al. specifically targeted ATRX status in 123 patients diagnosed with gliomas (World Health Organization grades II–IV) using radiomics analysis, showing that radiomic features derived from preoperative MRI facilitate the efficient prediction of ATRX status in gliomas, achieving an AUC for ATRX mutation (ATRX(−)) of 0.84 (95% CI: 0.63–0.91) on the validation set, with a sensitivity, specificity, and accuracy of 0.73, 0.86, and 0.79, respectively [79].
In another retrospective study by Ren et al., researchers focused on the non-invasive prediction of molecular status for both IDH1 mutation and ATRX expression loss in LGGs, exploiting a radiomic approach based on high-throughput multiparametric MRI features. An optimal feature subset was selected using a support vector machine (SVM) algorithm, and ROC curve analysis was employed to assess the efficiency of identifying IDH1(+) and ATRX(−) status. Using 28 optimal texture features extracted from multiple MRI sequences, the SVM predictive model achieved excellent performance in terms of accuracy/AUC/sensitivity/specificity/PPV/NPV in the prediction of IDH1(+) (94.74%/0.931/100%/85.71%/92.31%/100%, respectively) and ATRX(−) within LGGs (91.67%/0.926/94.74%/88.24%/90.00%/93.75%) [80].
Recently, some more ambitious studies have investigated the diagnostic accuracy of a radiomic approach in evaluating both the grading and the complete molecular profile of cerebral gliomas [81]. For instance, Haubold et al. integrated clinical and laboratory data into a completely automated segmentation-based radiomics tool for the prediction of molecular status (ATRX, IDH1/2, MGMT, and 1p19q co-deletion), also distinguishing low-grade from high-grade gliomas. The system provided an AUC (validation/test) of 0.981 ± 0.015/0.885 ± 0.02 for the grading task. The prediction of ATRX(−) status had the best results, with an AUC of 0.979 ± 0.028/0.923 ± 0.045, followed by the prediction of IDH1/2(+), with an AUC of 0.929 ± 0.042/0.861 ± 0.023, while the results for the prediction of 1p19q and MGMT status were only moderate [82].
In a similar study, Shboul et al. performed a non-invasive analysis of 108 pre-operative LGGs using imaging features to predict the status of MGMT methylation, IDH mutations, 1p/19q co-deletion, ATRX mutation, and TERT mutations, achieving good accuracy with AUCs of 0.83 ± 0.04, 0.84 ± 0.03, 0.80 ± 0.04, 0.70 ± 0.09, and 0.82 ± 0.04, respectively [83].
A recent study focused on the detailed analysis of the tumor landscape within HGGs, highlighting the outstanding potential of DL algorithms in the extraction of new imaging markers, otherwise impossible to evaluate visually or with traditional radiomics approaches. Calabrese et al. retrospectively analyzed preoperative MRI data from 400 patients with WHO grade 4 glioblastoma or astrocytoma, who underwent resection and genetic testing to assess the status of nine key biomarkers: hotspot mutations of IDH1 or TERT promoter, pathogenic mutations of TP53, PTEN, ATRX, or CDKN2A/B, MGMT promoter methylation, EGFR amplification, and combined aneuploidy of chromosomes 7 and 10. An AI-driven model was tested in the prediction of biomarker status from MRI data using radiomics features, DL-based CNN features, and a combination of both. The results showed that the combination of radiomics and CNN features from preoperative MRI yields improved non-invasive genetic biomarker prediction performance in patients with WHO grade 4 diffuse astrocytic gliomas [84].
The main results of these studies are listed in Table 2.
The performance of the prediction models presented above indicates the potential to correlate computational imaging features with specific molecular mutations in gliomas, demonstrating how the radiomics approach can complement histological assessment.

5. Segmentation

Tumor segmentation consists of image analysis and delimitation of the regions of interest (ROI) comprising the tumor, from a 2D or 3D acquisition [85].
Segmentation is a critical step in several applications, including brain cancer detection and diagnosis and the accurate, reliable quantification of disease burden, providing an objective volumetric assessment useful for follow-up.
Segmentation is also an essential step in the radiomics workflow since lesion delimitation is preliminary to the extraction of radiomics features [23].
Furthermore, an accurate definition of the tumor boundaries is essential for the treatment planning of brain tumors, since both radiotherapy and surgical approaches must be strictly limited to the pathological tissue, preserving the surrounding critical structures (functional cortical epicenters, white matter bundles) as much as possible in order to achieve the best onco-functional balance for each patient: while an overly aggressive resection can reduce the patient’s quality of life, an overly cautious resection increases the risk of recurrence after surgery or radiotherapy [86].
Therefore, after detecting a brain lesion and defining whether it is neoplastic or non-neoplastic, the pre-treatment work-up is usually completed by tumor segmentation. Although CT can also be used to detect and segment a brain lesion, MRI is the modality of choice thanks to its superior tissue contrast resolution and multiparametric nature. Both conventional MRI and advanced MRI play a role in this phase [87].
Segmentation can be manual, semi-automated, or fully automated. To perform an accurate manual segmentation of a brain tumor, the neuroradiologist subjectively evaluates qualitative features such as the solid, contrast-enhancing part of the tumor, the presence of necrotic foci, the non-enhancing part of the tumor, and the perifocal edema [1]. However, this process suffers from a high degree of inter-reader variability, owing to limitations such as the difficulty of unambiguously separating infiltration from edema, especially in lesions with poor contrast enhancement and an infiltrative pattern. In this scenario, AI-assisted semi-automated and automated segmentation tools based on DL algorithms can reduce segmentation time and significantly increase reproducibility and efficiency, with a consequently better outcome for the patient.
ML-based brain tumor segmentation techniques are typically based on voxel-based features extracted from the volumes of interest (VOIs) of the image [87]. Several segmentation approaches have been tested, showing a wide range of performances [1,87]. Many ML algorithms have been developed and tested for automatic tumor segmentation; however, their efficacy must be evaluated in a real-world scenario before they are introduced into clinical practice [1]. Many fast and trustworthy ML-based segmentation methods classify each image voxel to determine whether it belongs to normal brain tissue, tumor lesion, or other pathological tissue changes such as edema. At present, the most reliable segmentation methods are based on DL, a subgroup of ML based on neural networks that allows more complex classification, particularly using convolutional neural networks. CNNs perform well, with about 90% accuracy in voxel labeling [1].
The infiltrative growth pattern of certain gliomas represents a diagnostic challenge to both neuroradiologists and automatic segmentation tools. However, differentiating between neoplastic infiltration and perifocal edema is essential for pre-surgical or radiotherapy planning. This task is hardly achieved using conventional qualitative approaches but there are expectations that ML methods may help to better identify infiltrative tissue margins on preoperative MR images, thereby allowing for more targeted, extensive surgical resections, localized biopsies, and tailored treatment planning. Two recent studies respectively developed and refined a multivariate support vector machine approach, incorporating features from conventional and advanced MRI modalities to predict infiltrated peritumoral tissue with approximately 90% cross-validated accuracy [88,89]. Chang et al. developed a fully automated system to generate a non-invasive map of cell density useful for the identification of infiltrative margins of gliomas [90]. Considering that current surgical resection largely relies on the enhancing tumor alone, these promising methods may guide a more aggressive and extensive treatment.
The infiltration and extent of brain tumors can be estimated by extracting features from FLAIR and ADC maps and applying a voxel-wise logistic regression model, which also provides a good prediction of potential future recurrence [1,12,91].
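A simplified sketch of such a voxel-wise approach is given below, assuming scikit-learn: each voxel is described by its FLAIR and ADC values and classified as infiltrated or not; the volumes and training labels are synthetic placeholders.

```python
# Sketch of a voxel-wise logistic regression for infiltration mapping; each voxel
# contributes one (FLAIR, ADC) feature pair. All data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
flair = rng.normal(size=(64, 64, 32))    # co-registered FLAIR volume
adc = rng.normal(size=(64, 64, 32))      # co-registered ADC map
labels = rng.integers(0, 2, flair.size)  # per-voxel labels (e.g., from confirmed regions)

X = np.column_stack([flair.ravel(), adc.ravel()])
model = LogisticRegression().fit(X, labels)

# Predicted probabilities can be reshaped back into a spatial infiltration map.
infiltration_map = model.predict_proba(X)[:, 1].reshape(flair.shape)
print(infiltration_map.shape)
```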
Several ML and DL-based segmentation methods have been developed and tested. In 2022, Akinyelu et al. published a survey in which they compare the most recently developed segmentation techniques based on ML, CNN, Capsule Networks (CapsNet), and Vision Transformers (ViT). Most of these methods are used for segmentation or classification tasks, which are strictly related since they both contribute to identifying the grade of a brain tumor and planning its best treatment [92].
At present, DL-based models have a greater impact on brain tumor segmentation and classification tasks than ML-based models. The most widely used DL technique is the CNN, in which the images are fed directly into the network, generating translation-invariant and deformation-resistant features used for a more accurate segmentation process. CNN algorithms also have drawbacks, such as the need for a large training dataset and difficulty in correctly handling inputs under different rotations and transformations.
Most CNN networks can extract information only from 2D MRI images. However, some recent studies have aimed to extract volumetric information from 3D MRI images using CNN models [77,92]. ViT-based models, for example, can be used for 2D and 3D image segmentation and classification. In some studies, they are combined with CNN models to capture both local contextual features and global semantic features.
CapsNet-based tumor segmentation techniques have been proposed in the literature to address the downsides of CNN methods. As previously mentioned, CapsNets require smaller training datasets than CNNs and take the tissues surrounding the tumor into account [92,93]. Even though most of the CapsNet-based techniques proposed in the literature are used for brain tumor classification, CapsNet-based models are also very useful for segmentation tasks, since they can be trained on small-scale datasets and require lower computational complexity than CNN-based techniques.
Most brain tumor segmentation techniques found in the literature are based on pure ML-based or DL-based algorithms. Only a few studies have used hybrid techniques, albeit with promising results [92,94].
Segmentation of brain tumors remains a challenging task, especially when dealing with the infiltrative growth pattern of gliomas; future developments of AI systems may allow a more precise tumor definition and, hopefully, a progressive replacement of manual segmentation.

6. Prognosis

AI-assisted tools represent a novel frontier for the prediction of complications, recurrence, and therapeutic response in neuro-oncology, helping to outline the most appropriate follow-up and long-term treatment [12]. Looking ahead, AI-powered models promise to integrate clinical and laboratory data to stratify risk and build personalized screening protocols, such as what has been proposed for breast cancer [95].
Identifying the uses of ML algorithms in clinical practice and the areas of clinical care that can be enhanced by such algorithms is thus the next step in neuro-oncology imaging [96].

6.1. Prediction of Complications

It is well known that post-surgical complications depend on numerous variables, both fixed and dynamic, and some AI integration models have already been applied in fields other than neurosurgery [97,98,99]. For instance, Campillo-Gimenez et al. developed an ML program able to predict the occurrence of surgical site infection through the analysis of patient medical records [100]; similar algorithms were also used to predict complications such as venous thromboembolism and surgical site infection in patients undergoing anterior lumbar fusion, exhibiting an accuracy of 95% and significantly outperforming traditional statistical means [101]. Hopkins et al. predicted the development of infection in patients undergoing posterior spinal fusion, with a positive predictive value of 92.3% [102].
A recent review by Williams et al. [103] reported a few studies regarding the potential of AI integration in predicting the development of several typical post-operative complications in brain tumor patients, usually preventable, including venous thromboembolism [104], falls [105], hypoglycemia [74,106], adverse drug events [107], and pressure ulcers [108].
Prognostic value: currently, the overall prognostication of tumors is based on independent risk factors such as histological grade and clinical data; in addition, molecular subtypes play an important role in the response to treatment and overall survival of brain tumors [12,109]. For instance, MGMT promoter methylation in GBM can improve treatment response [110], and IDH mutation is an important prognostic factor, associated with improved survival rates compared with IDH wild-type glioblastoma [111,112]. Conventional survival prediction based on statistical models is valid at the population level but does not consider individual patient peculiarities and therefore may be inaccurate. Radiomic analysis provides a wide variety of additional imaging information which, together with clinical, biochemical, and histological data, can be used to develop more accurate predictive models in order to plan more personalized treatment and surveillance.
However, radiomics metrics are not yet widely adopted in current predictive models, despite their potential to capture underlying tumor biology and outcomes, and to date only a few studies have included artificial intelligence algorithms. One of these studies extracted about 60 radiomics features from traditional and advanced MRI metrics of glioma patients, including tumor volume, angiogenesis, peritumoral infiltration, and cell density, to predict the overall survival group (low, medium, and high) and the molecular subtype; the predictors achieved accuracies of about 80% and 76%, respectively, with the most predictive features being tumor volume, angiogenesis, peritumoral infiltration, cell density, and distance to the ventricles [113]. Another study analyzed the performance of a two-stage, multimodal, multi-channel 3D DL network in predicting overall survival, yielding an accuracy of up to 0.91 for high-grade glioma patients. The first stage used 3D technology to automatically extract imaging features from multimodal preoperative MRI, DTI, and resting-state functional MRI, while the second stage added demographic and tumor-related features [1,114].
Recent results of several studies aiming to predict overall survival through AI-driven applications were reported in a thorough review published by Zhu et al. [115]. One of these studies, conducted by Sanghani and coworkers [116], extracted texture, shape, and volumetric features from multimodal MRI data to validate an ML-based model for overall survival prediction in 173 patients with GBM, for both 2-class (short and long) and 3-class (short, medium, and long) survival grouping, with prediction accuracies of 97.5% and 87.1%, respectively. The peritumoral environment, when combined across multiparametric sequences, may play a key role in predicting long-term vs. short-term survival for GBM patients, according to research by Prasanna and colleagues [117]. They examined the role of radiomic features extracted from preoperative conventional MR images of the peritumoral brain zone in predicting long-term (>18 months) vs. short-term (<7 months) survival in GBM patients.
In another study, Park and colleagues aimed to include diffusion- and perfusion-weighted MRI sequences, together with conventional MRI and clinical data, to develop an integrative AI-based model for the prognostication of patients with newly diagnosed GBM; they showed that a multiparametric MRI prognostic model including radiomic information and clinical predictors exhibited good discrimination and performed better than the conventional MRI radiomics model or clinical predictors alone [118].
In another study, Grist et al. used various unsupervised and supervised ML models to determine new patient subgroups in relation to survival, based on MRI data, in particular perfusion, DWI, and ADC values. These models successfully determined two new subgroups of brain tumors with different survival characteristics (p <  0.01), which were subsequently classified with high accuracy (98%) by a neural network [119].
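A compact sketch of this two-step strategy (unsupervised subgroup discovery followed by supervised classification of the discovered subgroups) is shown below, assuming scikit-learn; the per-patient features are synthetic placeholders and the model choices are illustrative, not those of the cited study.

```python
# Sketch of unsupervised survival-subgroup discovery followed by supervised
# classification of the subgroups; data and model choices are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.normal(size=(80, 12))  # perfusion, DWI, and ADC summary features per patient

# Step 1: cluster patients into candidate subgroups without using outcome labels.
subgroups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Step 2: train a small neural network to assign new patients to those subgroups.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
accuracy = cross_val_score(clf, X, subgroups, cv=5).mean()
print(f"subgroup classification accuracy: {accuracy:.2f}")
```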
Tumor hypoxia is also known as a factor decreasing survival in GBM patients, and a study on GBM hypoxia-associated radiomics by Beig et al. [120] revealed that, when combining clinical features with radiomic features related to hypoxia, the concordance index for survival prediction rises in comparison to when using “generic” radiomic features alone.
Radiomic features can also be used to generate novel subgroups that may more closely align with the biology of gliomas [121,122]; although these studies are still preliminary, and conducted on relatively small sample sizes, they seem to be even more accurate than clinical models and molecular markers currently used in WHO classification. Prognosis can also be stratified by measuring the proliferative index of a tumor, such as Ki-67, linked to a worse outcome, and several studies are beginning to cover this aspect by using radiomic features [123].

6.2. Prediction of Recurrence and Follow-Up

The Response Assessment in Neuro-Oncology (RANO) criteria have been proposed for evaluating treatment response, taking into account clinical status as well as abnormalities in T2/FLAIR signal intensity and enhancing tissue [124]. However, the RANO criteria are still a limited tool for assessing treatment response, especially considering that they rely on two-dimensional subjective measurements and exclude advanced imaging modalities, such as MR perfusion. In light of this, Kickingereder et al. showed that an ANN model is more reliable than the current RANO-based assessment for determining the time to progression [125].
The risk of recurrence is strongly linked to the radicality of the resection, which in turn depends on the correct evaluation of the lesion margins and on the ability to distinguish between perilesional edema and tumor infiltration: these aspects have already been discussed in the preceding section on segmentation.
Differentiating between tumor recurrence and post-treatment alterations is a frequent challenge when planning glioma treatment. Radiation necrosis is frequently encountered up to three years after receiving the standard combined radiotherapy and chemotherapy regimen for glioma [126]. The ability of qualitative MRI analysis to distinguish between radiation necrosis and tumor recurrence is currently limited [127], and the use of artificial intelligence has not yet been able to fully characterize tumor heterogeneity [128,129].
Only a few studies have investigated this issue so far [130].
Together, available evidence proves that AI and especially DL-based volumetric assessment of tumor response is both feasible and clinically important in the prediction of a neuroimaging endpoint [45]. Regarding the alteration of the tumor immune microenvironment, immunotherapies in GBM also lack trustworthy radiological imaging evaluation techniques. In their groundbreaking study, Narang et al. extracted six imaging features that are connected to intra-GBM CD3 activity using T1-weighted post-contrast and T2-FLAIR images as well as T-cell surface marker CD3D/E/G mRNA expression data from GBM patients [131].

6.3. Tailored Therapeutics

The current standard treatment for glioblastoma consists of maximal safe resection followed by radiation and chemotherapy with temozolomide, whilst lower-grade gliomas may be treated with surgery and/or chemo-radiation. A few clinical trials are starting to assess the role of immunotherapy in the treatment of patients with glioblastoma, including some targeting specific molecular pathways such as EGFR [132]. Adjuvant therapy in the post-surgical phase may achieve maximal efficacy with the help of AI-driven evaluations. Although there are still no examples of AI-driven brain tumor chemotherapy protocols in routine clinical practice, some studies have focused on the potential role of AI in optimizing chemotherapeutic protocols at other primary tumor sites, achieving promising results [133]. Only recently, Yauney et al. described an ML program that could iteratively optimize the chemotherapeutic dose in a simulated trial of GBM patients [134]. In the future, AI platforms may also predict the response to immunotherapy, as well as optimize the dose and treatment protocol [135].

6.4. Progression vs. Pseudo-Progression

Pseudoprogression is defined as an increase in enhancement and/or T2/FLAIR signal abnormality on MRI within 12 weeks of radiotherapy or combined radiotherapy-chemotherapy, with spontaneous resolution or stabilization without a change in management, occurring in 15–50% of patients with gliomas (especially MGMT-methylated and IDH-mutant tumors) [136]. Antiangiogenic medications, on the other hand, may cause pseudo-response, a sharp decline in enhancement caused by alteration of the blood-brain barrier with little or no change in the progression of the infiltrating portion or in overall survival [1]. Additionally, it has been demonstrated that new immunotherapy drugs trigger complex inflammatory reactions, which makes evaluating responses even more challenging [132]. Differentiating between pseudo-progression and true progression is thus very difficult on MRI, and artificial intelligence is just beginning to address this diagnostic dilemma, as several studies successfully confirm [137,138]. On the other hand, a systematic review by Kim et al. [132] recently analyzed seven studies that suggest otherwise, possibly due to the inadequate size of the training data, an inappropriate AI algorithm, or the substantial heterogeneity across the studies. Radiation necrosis is another effect that can take place at any time after radiation therapy, usually around 1–2 years later, and the key radiological tool in differentiating pseudo-progression or radiation necrosis from true progression is dynamic susceptibility contrast MR perfusion-weighted imaging [139]; however, PWI is unreliable in patients treated with immunotherapy [140]. Although many studies [115,137,141,142,143] have succeeded in showing that AI models can use advanced MRI data to distinguish pseudo-progression from true tumor progression, further research is needed before AI-based models can be included in everyday medical practice. Moreover, to date, there is no objective histological definition of pseudo-progression [144], indicating that even histology might not be the gold standard in differentiating pseudo-progression from true tumor progression.
The main results of these studies are listed in Table 3.

7. Limitations

This is a narrative rather than a systematic review; we therefore included the articles that we considered most relevant, but we cannot exclude that other relevant articles on this topic have been omitted.

8. Conclusions

This review provides an overview of the AI applications in brain oncological imaging.
The development of CAD tools can increase diagnostic accuracy in the detection of small metastatic brain lesions, to enable early and correct treatment planning, especially stereotactic radiosurgery.
The AI-driven extraction of imaging features unavailable to the human eye is changing the approach to radiological image analysis and reporting, transforming it from a qualitative interpretation to an objective, quantifiable and reproducible task.
Segmentation is an essential step in planning surgery or radiation therapy, monitoring lesions, and even developing radiomics-based tools, as it is preliminary to the extraction of radiomic features. However, manual segmentation is extremely time-consuming, and therefore researchers have worked on semi-automated or fully automated AI-based tools to assist radiologists in their daily practice, providing objective measurements of the tumor burden as well as a characterization of its growth patterns.
The differential diagnosis of primary brain neoplasms can be challenging, particularly when dealing with PCNSL and HGG; in addition, tumefactive multiple sclerosis and other benign inflammatory and infectious disorders can mimic neoplastic conditions. Non-invasive, AI-based techniques for accurate diagnosis can revolutionize the approach to brain disorders, avoiding invasive biopsies and allowing the most appropriate treatment to start promptly.
The so-called “virtual biopsy” is providing promising results, not only in differential diagnosis but also in the non-invasive characterization of tumor histotypes, to obtain increasingly personalized therapeutic plans.
The better the characterization of a lesion, the better the chances clinicians have of identifying the most effective therapies and predicting complications, recurrences and progression.
All of these AI applications aim to achieve personalized medicine, improved patient outcomes, and increased survival.
The future development and progressive diffusion of these tools will benefit both clinicians and patients, consolidating a personalized approach to medicine.

Author Contributions

Conceptualization, M.C. (Michaela Cellina), M.C. (Maurizio Cè) and G.O.; Methodology, M.C. (Maurizio Cè) and G.I.; Literature research, C.F., G.O., G.I., M.C. (Maurizio Cè) and C.M.; Data Curation, G.M.D., C.M., G.O., M.L.S. and A.F.; Writing—Original Draft Preparation, M.C. (Michaela Cellina), M.C. (Maurizio Cè), G.I., G.M.D., C.M., G.O., M.L.S., A.F. and M.L.S.; Writing—Review and Editing, M.C. (Michaela Cellina), M.C. (Maurizio Cè), C.F., A.F., M.L.S., L.V.F. and G.I.; Supervision, M.C. (Michaela Cellina), G.O., M.C. (Maurizio Cè) and C.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Abdel Razek, A.A.K.; Alksas, A.; Shehata, M.; AbdelKhalek, A.; Abdel Baky, K.; El-Baz, A.; Helmy, E. Clinical Applications of Artificial Intelligence and Radiomics in Neuro-Oncology Imaging. Insights Imaging 2021, 12, 152. [Google Scholar] [CrossRef]
  2. Wesseling, P.; Capper, D. WHO 2016 Classification of Gliomas. Neuropathol. Appl. Neurobiol. 2018, 44, 139–150. [Google Scholar] [CrossRef]
  3. Jiang, H.; Cui, Y.; Wang, J.; Lin, S. Impact of Epidemiological Characteristics of Supratentorial Gliomas in Adults Brought about by the 2016 World Health Organization Classification of Tumors of the Central Nervous System. Oncotarget 2017, 8, 20354–20361. [Google Scholar] [CrossRef] [Green Version]
  4. Ceravolo, I.; Barchetti, G.; Biraschi, F.; Gerace, C.; Pampana, E.; Pingi, A.; Stasolla, A. Early Stage Glioblastoma: Retrospective Multicentric Analysis of Clinical and Radiological Features. Radiol. Med. 2021, 126, 1468–1476. [Google Scholar] [CrossRef]
  5. Louis, D.N.; Ohgaki, H.; Wiestler, O.D.; Cavenee, W.K.; Burger, P.C.; Jouvet, A.; Scheithauer, B.W.; Kleihues, P. The 2007 WHO Classification of Tumours of the Central Nervous System. Acta Neuropathol. 2007, 114, 97–109. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Louis, D.N.; Perry, A.; Reifenberger, G.; von Deimling, A.; Figarella-Branger, D.; Cavenee, W.K.; Ohgaki, H.; Wiestler, O.D.; Kleihues, P.; Ellison, D.W. The 2016 World Health Organization Classification of Tumors of the Central Nervous System: A Summary. Acta Neuropathol. 2016, 131, 803–820. [Google Scholar] [CrossRef] [Green Version]
  7. Capper, D.; Jones, D.T.W.; Sill, M.; Hovestadt, V.; Schrimpf, D.; Sturm, D.; Koelsche, C.; Sahm, F.; Chavez, L.; Reuss, D.E.; et al. DNA Methylation-Based Classification of Central Nervous System Tumours. Nature 2018, 555, 469–474. [Google Scholar]
  8. Luger, A.-L.; König, S.; Samp, P.F.; Urban, H.; Divé, I.; Burger, M.C.; Voss, M.; Franz, K.; Fokas, E.; Filipski, K.; et al. Molecular Matched Targeted Therapies for Primary Brain Tumors—A Single Center Retrospective Analysis. J. Neurooncol. 2022, 159, 243–259. [Google Scholar] [CrossRef] [PubMed]
  9. Di Bonaventura, R.; Montano, N.; Giordano, M.; Gessi, M.; Gaudino, S.; Izzo, A.; Mattogno, P.P.; Stumpo, V.; Caccavella, V.M.; Giordano, C.; et al. Reassessing the Role of Brain Tumor Biopsy in the Era of Advanced Surgical, Molecular, and Imaging Techniques—A Single-Center Experience with Long-Term Follow-Up. J. Pers. Med. 2021, 11, 909. [Google Scholar] [CrossRef] [PubMed]
  10. Singh, G.; Manjila, S.; Sakla, N.; True, A.; Wardeh, A.H.; Beig, N.; Vaysberg, A.; Matthews, J.; Prasanna, P.; Spektor, V. Radiomics and Radiogenomics in Gliomas: A Contemporary Update. Br. J. Cancer 2021, 125, 641–657. [Google Scholar] [CrossRef] [PubMed]
  11. Vagvala, S.; Guenette, J.P.; Jaimes, C.; Huang, R.Y. Imaging Diagnosis and Treatment Selection for Brain Tumors in the Era of Molecular Therapeutics. Cancer Imaging 2022, 22, 19. [Google Scholar] [CrossRef] [PubMed]
  12. Rudie, J.D.; Rauschecker, A.M.; Bryan, R.N.; Davatzikos, C.; Mohan, S. Emerging Applications of Artificial Intelligence in Neuro-Oncology. Radiology 2019, 290, 607–618. [Google Scholar] [CrossRef]
  13. Lambin, P.; Rios-Velazquez, E.; Leijenaar, R.; Carvalho, S.; van Stiphout, R.G.P.M.; Granton, P.; Zegers, C.M.L.; Gillies, R.; Boellard, R.; Dekker, A.; et al. Radiomics: Extracting More Information from Medical Images Using Advanced Feature Analysis. Eur. J. Cancer 2012, 48, 441–446. [Google Scholar] [CrossRef] [Green Version]
  14. Hosny, A.; Parmar, C.; Quackenbush, J.; Schwartz, L.H.; Aerts, H.J.W.L. Artificial Intelligence in Radiology. Nat. Rev. Cancer 2018, 18, 500–510. [Google Scholar] [CrossRef]
  15. Scapicchio, C.; Gabelloni, M.; Barucci, A.; Cioni, D.; Saba, L.; Neri, E. A Deep Look into Radiomics. Radiol. Med. 2021, 126, 1296–1311. [Google Scholar] [CrossRef] [PubMed]
  16. Mayerhoefer, M.E.; Materka, A.; Langs, G.; Häggström, I.; Szczypiński, P.; Gibbs, P.; Cook, G. Introduction to Radiomics. J. Nucl. Med. 2020, 61, 488–495. [Google Scholar] [CrossRef] [PubMed]
  17. Irmici, G.; Cè, M.; Caloro, E.; Khenkina, N.; Della Pepa, G.; Ascenti, V.; Martinenghi, C.; Papa, S.; Oliva, G.; Cellina, M. Chest X-Ray in Emergency Radiology: What Artificial Intelligence Applications Are Available? Diagnostics 2023, 13, 216. [Google Scholar] [CrossRef]
  18. Soda, P.; D’Amico, N.C.; Tessadori, J.; Valbusa, G.; Guarrasi, V.; Bortolotto, C.; Akbar, M.U.; Sicilia, R.; Cordelli, E.; Fazzini, D.; et al. AIforCOVID: Predicting the Clinical Outcomes in Patients with COVID-19 Applying AI to Chest-X-Rays. An Italian Multicentre Study. Med. Image Anal. 2021, 74, 102216. [Google Scholar] [CrossRef]
  19. Shui, L.; Ren, H.; Yang, X.; Li, J.; Chen, Z.; Yi, C.; Zhu, H.; Shui, P. The Era of Radiogenomics in Precision Medicine: An Emerging Approach to Support Diagnosis, Treatment Decisions, and Prognostication in Oncology. Front. Oncol. 2021, 10, 570465. [Google Scholar] [CrossRef]
  20. Avery, E.; Sanelli, P.C.; Aboian, M.; Payabvash, S. Radiomics: A Primer on Processing Workflow and Analysis. Semin. Ultrasound CT MRI 2022, 43, 142–146. [Google Scholar] [CrossRef]
  21. Lohmann, P.; Galldiks, N.; Kocher, M.; Heinzel, A.; Filss, C.P.; Stegmayr, C.; Mottaghy, F.M.; Fink, G.R.; Jon Shah, N.; Langen, K.-J. Radiomics in Neuro-Oncology: Basics, Workflow, and Applications. Methods 2021, 188, 112–121. [Google Scholar] [CrossRef]
  22. Cellina, M.; Pirovano, M.; Ciocca, M.; Gibelli, D.; Floridi, C.; Oliva, G. Radiomic Analysis of the Optic Nerve at the First Episode of Acute Optic Neuritis: An Indicator of Optic Nerve Pathology and a Predictor of Visual Recovery? Radiol. Med. 2021, 126, 698–706. [Google Scholar] [CrossRef] [PubMed]
  23. Cellina, M.; Cè, M.; Irmici, G.; Ascenti, V.; Khenkina, N.; Toto-Brocchi, M.; Martinenghi, C.; Papa, S.; Carrafiello, G. Artificial Intelligence in Lung Cancer Imaging: Unfolding the Future. Diagnostics 2022, 12, 2644. [Google Scholar] [CrossRef]
  24. Tan, P.-N.; Steinbach, M.; Karpatne, A.; Kumar, V. Introduction to Data Mining, 2nd ed.; What’s New in Computer Science; Pearson: London, UK, 2018. [Google Scholar]
  25. Shalev-Shwartz, S.; Ben-David, S. Understanding Machine Learning: From Theory to Algorithms; Cambridge University Press: Cambridge, UK, 2014. [Google Scholar]
  26. Jung, A. Machine Learning: The Basics; Springer: Singapore, 2022. [Google Scholar]
  27. Sidey-Gibbons, J.A.M.; Sidey-Gibbons, C.J. Machine Learning in Medicine: A Practical Introduction. BMC Med. Res. Methodol. 2019, 19, 64. [Google Scholar] [CrossRef] [Green Version]
  28. Vicini, S.; Bortolotto, C.; Rengo, M.; Ballerini, D.; Bellini, D.; Carbone, I.; Preda, L.; Laghi, A.; Coppola, F.; Faggioni, L. A Narrative Review on Current Imaging Applications of Artificial Intelligence and Radiomics in Oncology: Focus on the Three Most Common Cancers. Radiol. Med. 2022, 127, 819–836. [Google Scholar] [CrossRef] [PubMed]
  29. Cellina, M.; Cè, M.; Irmici, G.; Ascenti, V.; Caloro, E.; Bianchi, L.; Pellegrino, G.; D’Amico, N.; Papa, S.; Carrafiello, G. Artificial Intelligence in Emergency Radiology: Where Are We Going? Diagnostics 2022, 12, 3223. [Google Scholar] [CrossRef]
  30. Guido, S.; Muller, A. Introduction to Machine Learning with Python a Guide for Data Scientists; O’Reilly Media: Sebastopol, CA, USA, 2018. [Google Scholar]
  31. Sansone, M.; Fusco, R.; Grassi, F.; Gatta, G.; Belfiore, M.P.; Angelone, F.; Ricciardi, C.; Ponsiglione, A.M.; Amato, F.; Galdiero, R.; et al. Machine Learning Approaches with Textural Features to Calculate Breast Density on Mammography. Curr. Oncol. 2023, 30, 839–853. [Google Scholar] [CrossRef] [PubMed]
  32. Fusco, R.; Di Bernardo, E.; Piccirillo, A.; Rubulotta, M.R.; Petrosino, T.; Barretta, M.L.; Mattace Raso, M.; Vallone, P.; Raiano, C.; Di Giacomo, R.; et al. Radiomic and Artificial Intelligence Analysis with Textural Metrics Extracted by Contrast-Enhanced Mammography and Dynamic Contrast Magnetic Resonance Imaging to Detect Breast Malignant Lesions. Curr. Oncol. 2022, 29, 1947–1966. [Google Scholar] [CrossRef]
  33. Zhang, Z. A Gentle Introduction to Artificial Neural Networks. Ann. Transl. Med. 2016, 4, 370. [Google Scholar] [CrossRef] [Green Version]
  34. Agatonovic-Kustrin, S.; Beresford, R. Basic Concepts of Artificial Neural Network (ANN) Modeling and Its Application in Pharmaceutical Research. J. Pharm. Biomed. Anal. 2000, 22, 717–727. [Google Scholar] [CrossRef]
  35. Santosh, K.C.; Das, N.; Ghosh, S. Deep Learning Models for Medical Imaging. In Deep Learning Models for Medical Imaging; Elsevier: Amsterdam, The Netherlands, 2021; pp. i–iii. [Google Scholar]
  36. LeCun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  37. Nielsen, M.A. Neural Networks and Deep Learning; Determination Press: San Francisco, CA, USA, 2015. [Google Scholar]
  38. Abd-Ellah, M.K.; Awad, A.I.; Khalaf, A.A.M.; Hamed, H.F.A. A Review on Brain Tumor Diagnosis from MRI Images: Practical Implications, Key Achievements, and Lessons Learned. Magn. Reson. Imaging 2019, 61, 300–318. [Google Scholar] [CrossRef]
  39. Parekh, V.S.; Jacobs, M.A. Deep Learning and Radiomics in Precision Medicine. Expert Rev. Precis. Med. Drug Dev. 2019, 4, 59–72. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  40. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet Classification with Deep Convolutional Neural Networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems, Red Hook, NY, USA, 3–6 December 2012; pp. 1097–1105. [Google Scholar]
  41. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  42. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015. [Google Scholar]
  43. Abiwinanda, N.; Hanif, M.; Hesaputra, S.T.; Handayani, A.; Mengko, T.R. Brain Tumor Classification Using Convolutional Neural Network; Springe: Singapore, 2019. [Google Scholar]
  44. Pope, W.B. Brain Metastases: Neuroimaging. Handb. Clin. Neurol. 2018, 149, 89–112. [Google Scholar]
  45. Park, J.E. Artificial Intelligence in Neuro-Oncologic Imaging: A Brief Review for Clinical Use Cases and Future Perspectives. Brain Tumor Res. Treat. 2022, 10, 69. [Google Scholar] [CrossRef] [PubMed]
  46. Park, Y.W.; Jun, Y.; Lee, Y.; Han, K.; An, C.; Ahn, S.S.; Hwang, D.; Lee, S.-K. Robust Performance of Deep Learning for Automatic Detection and Segmentation of Brain Metastases Using Three-Dimensional Black-Blood and Three-Dimensional Gradient Echo Imaging. Eur. Radiol. 2021, 31, 6686–6695. [Google Scholar] [CrossRef]
  47. Voicu, I.P.; Pravatà, E.; Panara, V.; Navarra, R.; Mattei, P.A.; Caulo, M. Differentiating Solitary Brain Metastases from High-Grade Gliomas with MR: Comparing Qualitative versus Quantitative Diagnostic Strategies. Radiol. Med. 2022, 127, 891–898. [Google Scholar] [CrossRef]
  48. Bauer, A.H.; Erly, W.; Moser, F.G.; Maya, M.; Nael, K. Differentiation of Solitary Brain Metastasis from Glioblastoma Multiforme: A Predictive Multiparametric Approach Using Combined MR Diffusion and Perfusion. Neuroradiology 2015, 57, 697–703. [Google Scholar] [CrossRef] [PubMed]
  49. Romano, A.; Moltoni, G.; Guarnera, A.; Pasquini, L.; Di Napoli, A.; Napolitano, A.; Espagnet, M.C.R.; Bozzao, A. Single Brain Metastasis versus Glioblastoma Multiforme: A VOI-Based Multiparametric Analysis for Differential Diagnosis. Radiol. Med. 2022, 127, 490–497. [Google Scholar] [CrossRef]
  50. Swinburne, N.C.; Schefflein, J.; Sakai, Y.; Oermann, E.K.; Titano, J.J.; Chen, I.; Tadayon, S.; Aggarwal, A.; Doshi, A.; Nael, K. Machine Learning for Semi-automated Classification of Glioblastoma, Brain Metastasis and Central Nervous System Lymphoma Using Magnetic Resonance Advanced Imaging. Ann. Transl. Med. 2019, 7, 232. [Google Scholar] [CrossRef]
  51. Upadhyay, N.; Waldman, A.D. Conventional MRI Evaluation of Gliomas. Br. J. Radiol. 2011, 84, S107–S111. [Google Scholar] [CrossRef] [Green Version]
  52. Skogen, K.; Schulz, A.; Helseth, E.; Ganeshan, B.; Dormagen, J.B.; Server, A. Texture Analysis on Diffusion Tensor Imaging: Discriminating Glioblastoma from Single Brain Metastasis. Acta Radiol. 2019, 60, 356–366. [Google Scholar] [CrossRef]
  53. Han, Y.; Zhang, L.; Niu, S.; Chen, S.; Yang, B.; Chen, H.; Zheng, F.; Zang, Y.; Zhang, H.; Xin, Y.; et al. Differentiation between Glioblastoma Multiforme and Metastasis from the Lungs and Other Sites Using Combined Clinical/Routine MRI Radiomics. Front. Cell Dev. Biol. 2021, 9, 710461. [Google Scholar] [CrossRef]
  54. Nayak, L.; Lee, E.Q.; Wen, P.Y. Epidemiology of Brain Metastases. Curr. Oncol. Rep. 2012, 14, 48–54. [Google Scholar] [CrossRef] [PubMed]
  55. Ortiz-Ramón, R.; Larroza, A.; Ruiz-España, S.; Arana, E.; Moratal, D. Classifying Brain Metastases by Their Primary Site of Origin Using a Radiomics Approach Based on Texture Analysis: A Feasibility Study. Eur. Radiol. 2018, 28, 4514–4523. [Google Scholar] [CrossRef]
  56. Barajas, R.F.; Politi, L.S.; Anzalone, N.; Schöder, H.; Fox, C.P.; Boxerman, J.L.; Kaufmann, T.J.; Quarles, C.C.; Ellingson, B.M.; Auer, D.; et al. Consensus Recommendations for MRI and PET Imaging of Primary Central Nervous System Lymphoma: Guideline Statement from the International Primary CNS Lymphoma Collaborative Group (IPCG). Neuro Oncol. 2021, 23, 1056–1071. [Google Scholar] [CrossRef]
  57. Tang, Y.Z.; Booth, T.C.; Bhogal, P.; Malhotra, A.; Wilhelm, T. Imaging of Primary Central Nervous System Lymphoma. Clin. Radiol. 2011, 66, 768–777. [Google Scholar] [CrossRef]
  58. Cai, Q.; Fang, Y.; Young, K.H. Primary Central Nervous System Lymphoma: Molecular Pathogenesis and Advances in Treatment. Transl. Oncol. 2019, 12, 523–538. [Google Scholar] [CrossRef] [PubMed]
  59. Stadlbauer, A.; Marhold, F.; Oberndorfer, S.; Heinz, G.; Buchfelder, M.; Kinfe, T.M.; Meyer-Bäse, A. Radiophysiomics: Brain Tumors Classification by Machine Learning and Physiological MRI Data. Cancers 2022, 14, 2363. [Google Scholar] [CrossRef]
  60. Ucuzal, H.; Yasar, S.; Colak, C. Classification of Brain Tumor Types by Deep Learning with Convolutional Neural Network on Magnetic Resonance Images Using a Developed Web-Based Interface. In Proceedings of the 2019 3rd International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT), Ankara, Turkey, 11–13 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–5. [Google Scholar]
  61. Adu, K.; Yu, Y.; Cai, J.; Tashi, N. Dilated Capsule Network for Brain Tumor Type Classification via MRI Segmented Tumor Region. In Proceedings of the 2019 IEEE International Conference on Robotics and Biomimetics (ROBIO), Dali, China, 6–8 December 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 942–947. [Google Scholar]
  62. Afshar, P.; Plataniotis, K.N.; Mohammadi, A. Capsule Networks for Brain Tumor Classification Based on MRI Images and Coarse Tumor Boundaries. In Proceedings of the ICASSP 2019–2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1368–1372. [Google Scholar]
  63. Sunderland, G.J.; Jenkinson, M.D.; Zakaria, R. Surgical Management of Posterior Fossa Metastases. J. Neurooncol. 2016, 130, 535–542. [Google Scholar] [CrossRef] [PubMed]
  64. She, D.; Yang, X.; Xing, Z.; Cao, D. Differentiating Hemangioblastomas from Brain Metastases Using Diffusion-Weighted Imaging and Dynamic Susceptibility Contrast-Enhanced Perfusion-Weighted MR Imaging. Am. J. Neuroradiol. 2016, 37, 1844–1850. [Google Scholar] [CrossRef] [Green Version]
  65. Payabvash, S.; Aboian, M.; Tihan, T.; Cha, S. Machine Learning Decision Tree Models for Differentiation of Posterior Fossa Tumors Using Diffusion Histogram Analysis and Structural MRI Findings. Front. Oncol. 2020, 10, 71. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  66. Lin, X.; Yu, W.-Y.; Liauw, L.; Chander, R.J.; Soon, W.E.; Lee, H.Y.; Tan, K. Clinicoradiologic Features Distinguish Tumefactive Multiple Sclerosis from CNS Neoplasms. Neurol. Clin. Pract. 2017, 7, 53–64. [Google Scholar] [CrossRef] [Green Version]
  67. Verma, R.K.; Wiest, R.; Locher, C.; Heldner, M.R.; Schucht, P.; Raabe, A.; Gralla, J.; Kamm, C.P.; Slotboom, J.; Kellner-Weldon, F. Differentiating Enhancing Multiple Sclerosis Lesions, Glioblastoma, and Lymphoma with Dynamic Texture Parameters Analysis: A Feasibility Study. Med. Phys. 2017, 44, 4000–4008. [Google Scholar] [CrossRef] [PubMed]
  68. Han, Y.; Yang, Y.; Shi, Z.; Zhang, A.; Yan, L.; Hu, Y.; Feng, L.; Ma, J.; Wang, W.; Cui, G. Distinguishing Brain Inflammation from Grade II Glioma in Population without Contrast Enhancement: A Radiomics Analysis Based on Conventional MRI. Eur. J. Radiol. 2021, 134, 109467. [Google Scholar] [CrossRef]
  69. Qian, Z.; Li, Y.; Wang, Y.; Li, L.; Li, R.; Wang, K.; Li, S.; Tang, K.; Zhang, C.; Fan, X.; et al. Differentiation of Glioblastoma from Solitary Brain Metastases Using Radiomic Machine-Learning Classifiers. Cancer Lett. 2019, 451, 128–135. [Google Scholar] [CrossRef]
  70. Bae, S.; An, C.; Ahn, S.S.; Kim, H.; Han, K.; Kim, S.W.; Park, J.E.; Kim, H.S.; Lee, S.-K. Robust Performance of Deep Learning for Distinguishing Glioblastoma from Single Brain Metastasis Using Radiomic Features: Model Development and Validation. Sci. Rep. 2020, 10, 12110. [Google Scholar] [CrossRef]
  71. Wiestler, B.; Kluge, A.; Lukas, M.; Gempt, J.; Ringel, F.; Schlegel, J.; Meyer, B.; Zimmer, C.; Förster, S.; Pyka, T.; et al. Multiparametric MRI-Based Differentiation of WHO Grade II/III Glioma and WHO Grade IV Glioblastoma. Sci. Rep. 2016, 6, 35142. [Google Scholar] [CrossRef]
  72. Zhang, X.; Yan, L.-F.; Hu, Y.-C.; Li, G.; Yang, Y.; Han, Y.; Sun, Y.-Z.; Liu, Z.-C.; Tian, Q.; Han, Z.-Y.; et al. Optimizing a Machine Learning Based Glioma Grading System Using Multi-Parametric MRI Histogram and Texture Features. Oncotarget 2017, 8, 47816–47830. [Google Scholar] [CrossRef]
  73. Kim, M.; Jung, S.Y.; Park, J.E.; Jo, Y.; Park, S.Y.; Nam, S.J.; Kim, J.H.; Kim, H.S. Diffusion- and Perfusion-Weighted MRI Radiomics Model May Predict Isocitrate Dehydrogenase (IDH) Mutation and Tumor Aggressiveness in Diffuse Lower Grade Glioma. Eur. Radiol. 2020, 30, 2142–2151. [Google Scholar] [CrossRef] [PubMed]
  74. Akkus, Z.; Ali, I.; Sedlář, J.; Agrawal, J.P.; Parney, I.F.; Giannini, C.; Erickson, B.J. Predicting Deletion of Chromosomal Arms 1p/19q in Low-Grade Gliomas from MR Images Using Machine Intelligence. J. Digit. Imaging 2017, 30, 469–476. [Google Scholar] [CrossRef] [Green Version]
  75. Cho, H.; Lee, S.; Kim, J.; Park, H. Classification of the Glioma Grading Using Radiomics Analysis. PeerJ 2018, 6, e5982. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  76. Tian, Q.; Yan, L.-F.; Zhang, X.; Zhang, X.; Hu, Y.-C.; Han, Y.; Liu, Z.-C.; Nan, H.-Y.; Sun, Q.; Sun, Y.-Z.; et al. Radiomics Strategy for Glioma Grading Using Texture Features from Multiparametric MRI. J. Magn. Reson. Imaging 2018, 48, 1518–1528. [Google Scholar] [CrossRef] [PubMed]
  77. Mzoughi, H.; Njeh, I.; Wali, A.; Slima, M.B.; BenHamida, A.; Mhiri, C.; Mahfoudhe, K.B. Deep Multi-Scale 3D Convolutional Neural Network (CNN) for MRI Gliomas Brain Tumor Classification. J. Digit. Imaging 2020, 33, 903–915. [Google Scholar] [CrossRef]
  78. Chang, P.; Grinband, J.; Weinberg, B.D.; Bardis, M.; Khy, M.; Cadena, G.; Su, M.-Y.; Cha, S.; Filippi, C.G.; Bota, D.; et al. Deep-Learning Convolutional Neural Networks Accurately Classify Genetic Mutations in Gliomas. Am. J. Neuroradiol. 2018, 39, 1201–1207. [Google Scholar] [CrossRef] [Green Version]
  79. Meng, L.; Zhang, R.; Fa, L.; Zhang, L.; Wang, L.; Shao, G. ATRX Status in Patients with Gliomas: Radiomics Analysis. Medicine 2022, 101, e30189. [Google Scholar] [CrossRef]
  80. Ren, Y.; Zhang, X.; Rui, W.; Pang, H.; Qiu, T.; Wang, J.; Xie, Q.; Jin, T.; Zhang, H.; Chen, H.; et al. Noninvasive Prediction of IDH1 Mutation and ATRX Expression Loss in Low-Grade Gliomas Using Multiparametric MR Radiomic Features. J. Magn. Reson. Imaging 2019, 49, 808–817. [Google Scholar] [CrossRef] [PubMed]
  81. Alentorn, A.; Duran-Peña, A.; Pingle, S.C.; Piccioni, D.E.; Idbaih, A.; Kesari, S. Molecular profiling of gliomas: Potential therapeutic implications. Expert Rev. Anticancer Ther. 2015, 15, 955–962. [Google Scholar] [CrossRef]
  82. Haubold, J.; Hosch, R.; Parmar, V.; Glas, M.; Guberina, N.; Catalano, O.A.; Pierscianek, D.; Wrede, K.; Deuschl, C.; Forsting, M.; et al. Fully Automated MR Based Virtual Biopsy of Cerebral Gliomas. Cancers 2021, 13, 6186. [Google Scholar] [CrossRef]
  83. Shboul, Z.A.; Chen, J.; Iftekharuddin, K.M. Prediction of Molecular Mutations in Diffuse Low-Grade Gliomas Using MR Imaging Features. Sci. Rep. 2020, 10, 3711. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  84. Calabrese, E.; Rudie, J.D.; Rauschecker, A.M.; Villanueva-Meyer, J.E.; Clarke, J.L.; Solomon, D.A.; Cha, S. Combining Radiomics and Deep Convolutional Neural Network Features from Preoperative MRI for Predicting Clinically Relevant Genetic Biomarkers in Glioblastoma. Neuro-Oncol. Adv. 2022, 4, vdac060. [Google Scholar] [CrossRef] [PubMed]
  85. Khastavaneh, H.; Ebrahimpour-komleh, H. Automated Segmentation of Abnormal Tissues in Medical Images. J. Biomed. Phys. Eng. 2019. [Google Scholar] [CrossRef]
  86. Barone, F.; Alberio, N.; Iacopino, D.; Giammalva, G.; D’Arrigo, C.; Tagnese, W.; Graziano, F.; Cicero, S.; Maugeri, R. Brain Mapping as Helpful Tool in Brain Glioma Surgical Treatment—Toward the “Perfect Surgery”? Brain Sci. 2018, 8, 192. [Google Scholar] [CrossRef] [Green Version]
  87. Kumar, A. Study and Analysis of Different Segmentation Methods for Brain Tumor MRI Application. Multimed. Tools Appl. 2023, 82, 7117–7139. [Google Scholar] [CrossRef]
  88. Akbari, H.; Macyszyn, L.; Da, X.; Bilello, M.; Wolf, R.L.; Martinez-Lage, M.; Biros, G.; Alonso-Basanta, M.; O’Rourke, D.M.; Davatzikos, C. Imaging Surrogates of Infiltration Obtained Via Multiparametric Imaging Pattern Analysis Predict Subsequent Location of Recurrence of Glioblastoma. Neurosurgery 2016, 78, 572–580. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  89. Rathore, S.; Akbari, H.; Doshi, J. Radiomic Signature of Infiltration in Peritumoral Edema Predicts Subsequent Recurrence in Glioblastoma: Implications for Personalized Radiotherapy Planning. J. Med. Imaging 2018, 5, 1. [Google Scholar] [CrossRef]
  90. Chang, P.D.; Malone, H.R.; Bowden, S.G.; Chow, D.S.; Gill, B.J.A.; Ung, T.H.; Samanamud, J.; Englander, Z.K.; Sonabend, A.M.; Sheth, S.A.; et al. A Multiparametric Model for Mapping Cellularity in Glioblastoma Using Radiographically Localized Biopsies. Am. J. Neuroradiol. 2017, 38, 890–898. [Google Scholar] [CrossRef] [Green Version]
  91. Chang, P.D.; Chow, D.S.; Yang, P.H.; Filippi, C.G.; Lignelli, A. Predicting Glioblastoma Recurrence by Early Changes in the Apparent Diffusion Coefficient Value and Signal Intensity on FLAIR Images. Am. J. Roentgenol. 2017, 208, 57–65. [Google Scholar] [CrossRef] [Green Version]
  92. Akinyelu, A.A.; Zaccagna, F.; Grist, J.T.; Castelli, M.; Rundo, L. Brain Tumor Diagnosis Using Machine Learning, Convolutional Neural Networks, Capsule Neural Networks and Vision Transformers, Applied to MRI: A Survey. J. Imaging 2022, 8, 205. [Google Scholar] [CrossRef]
  93. Afshar, P.; Mohammadi, A.; Plataniotis, K.N. BayesCap: A Bayesian Approach to Brain Tumor Classification Using Capsule Networks. IEEE Signal Process. Lett. 2020, 27, 2024–2028. [Google Scholar] [CrossRef]
  94. Thaha, M.M.; Kumar, K.P.M.; Murugan, B.S.; Dhanasekeran, S.; Vijayakarthick, P.; Selvi, A.S. Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images. J. Med. Syst. 2019, 43, 294. [Google Scholar] [CrossRef]
  95. Cè, M.; Caloro, E.; Pellegrino, M.E.; Basile, M.; Sorce, A.; Fazzini, D.; Oliva, G.; Cellina, M. Artificial Intelligence in Breast Cancer Imaging: Risk Stratification, Lesion Detection and Classification, Treatment Planning and Prognosis—A Narrative Review. Explor. Target. Anti-Tumor Ther. 2022, 3, 795–816. [Google Scholar] [CrossRef]
  96. Aboian, M.; Bousabarah, K.; Kazarian, E.; Zeevi, T.; Holler, W.; Merkaj, S.; Cassinelli Petersen, G.; Bahar, R.; Subramanian, H.; Sunku, P.; et al. Clinical Implementation of Artificial Intelligence in Neuroradiology with Development of a Novel Workflow-Efficient Picture Archiving and Communication System-Based Automated Brain Tumor Segmentation and Radiomic Feature Extraction. Front. Neurosci. 2022, 16, 860208. [Google Scholar] [CrossRef] [PubMed]
  97. Lu, S.; Yan, M.; Li, C.; Yan, C.; Zhu, Z.; Lu, W. Machine-Learning-Assisted Prediction of Surgical Outcomes in Patients Undergoing Gastrectomy. Chin. J. Cancer Res. 2019, 31, 797–805. [Google Scholar] [CrossRef]
  98. Harris, A.H.S.; Kuo, A.C.; Weng, Y.; Trickey, A.W.; Bowe, T.; Giori, N.J. Can Machine Learning Methods Produce Accurate and Easy-to-Use Prediction Models of 30-Day Complications and Mortality After Knee or Hip Arthroplasty? Clin. Orthop. Relat. Res. 2019, 477, 452–460. [Google Scholar] [CrossRef]
  99. Merath, K.; Hyer, J.M.; Mehta, R.; Farooq, A.; Bagante, F.; Sahara, K.; Tsilimigras, D.I.; Beal, E.; Paredes, A.Z.; Wu, L.; et al. Use of Machine Learning for Prediction of Patient Risk of Postoperative Complications After Liver, Pancreatic, and Colorectal Surgery. J. Gastrointest. Surg. 2020, 24, 1843–1851. [Google Scholar] [CrossRef]
  100. Campillo-Gimenez, B.; Garcelon, N.; Jarno, P.; Chapplain, J.M.; Cuggia, M. Full-Text Automated Detection of Surgical Site Infections Secondary to Neurosurgery in Rennes, France. Stud. Health Technol. Inform. 2013, 192, 572–575. [Google Scholar] [PubMed]
  101. Arvind, V.; Kaji, D.; Kim, J.; Caridi, J.M.; Cho, S.K. Artificial Intelligence (AI) Can Predict Postoperative Complications Better than Traditional Statistical Testing Following Anterior Cervical Discectomy and Fusion (ACDF). Spine J. 2017, 17, S145–S146. [Google Scholar] [CrossRef]
  102. Hopkins, B.S.; Mazmudar, A.; Driscoll, C.; Svet, M.; Goergen, J.; Kelsten, M.; Shlobin, N.A.; Kesavabhotla, K.; Smith, Z.A.; Dahdaleh, N.S. Using Artificial Intelligence (AI) to Predict Postoperative Surgical Site Infection: A Retrospective Cohort of 4046 Posterior Spinal Fusions. Clin. Neurol. Neurosurg. 2020, 192, 105718. [Google Scholar] [CrossRef] [PubMed]
  103. Williams, S.; Layard Horsfall, H.; Funnell, J.P.; Hanrahan, J.G.; Khan, D.Z.; Muirhead, W.; Stoyanov, D.; Marcus, H.J. Artificial Intelligence in Brain Tumour Surgery—An Emerging Paradigm. Cancers 2021, 13, 5010. [Google Scholar] [CrossRef] [PubMed]
  104. Ferroni, P.; Zanzotto, F.M.; Scarpato, N.; Riondino, S.; Nanni, U.; Roselli, M.; Guadagni, F. Risk Assessment for Venous Thromboembolism in Chemotherapy-Treated Ambulatory Cancer Patients. Med. Decis. Mak. 2017, 37, 234–242. [Google Scholar] [CrossRef]
  105. Howcroft, J.; Kofman, J.; Lemaire, E.D. Prospective Fall-Risk Prediction Models for Older Adults Based on Wearable Sensors. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 1812–1820. [Google Scholar] [CrossRef] [PubMed]
  106. Bates, D.W.; Levine, D.; Syrowatka, A.; Kuznetsova, M.; Craig, K.J.T.; Rui, A.; Jackson, G.P.; Rhee, K. The Potential of Artificial Intelligence to Improve Patient Safety: A Scoping Review. NPJ Digit. Med. 2021, 4, 54. [Google Scholar] [CrossRef] [PubMed]
  107. Zitnik, M.; Agrawal, M.; Leskovec, J. Modeling Polypharmacy Side Effects with Graph Convolutional Networks. Bioinformatics 2018, 34, i457–i466. [Google Scholar] [CrossRef] [Green Version]
  108. Hsiao, R.-S.; Mi, Z.; Yang, B.-R.; Kau, L.-J.; Bitew, M.A.; Li, T.-Y. Body Posture Recognition and Turning Recording System for the Care of Bed Bound Patients. Technol. Health Care 2015, 24, S307–S312. [Google Scholar] [CrossRef] [PubMed]
  109. Zlochower, A.; Chow, D.S.; Chang, P.; Khatri, D.; Boockvar, J.A.; Filippi, C.G. Deep Learning AI Applications in the Imaging of Glioma. Top. Magn. Reson. Imaging 2020, 29, 115-0. [Google Scholar] [CrossRef] [PubMed]
  110. Hegi, M.E.; Diserens, A.-C.; Gorlia, T.; Hamou, M.-F.; de Tribolet, N.; Weller, M.; Kros, J.M.; Hainfellner, J.A.; Mason, W.; Mariani, L.; et al. MGMT Gene Silencing and Benefit from Temozolomide in Glioblastoma. N. Engl. J. Med. 2005, 352, 997–1003. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  111. Xia, L.; Wu, B.; Fu, Z.; Feng, F.; Qiao, E.; Li, Q.; Sun, C.; Ge, M. Prognostic Role of IDH Mutations in Gliomas: A Meta-Analysis of 55 Observational Studies. Oncotarget 2015, 6, 17354–17365. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  112. Chen, J.-R.; Yao, Y.; Xu, H.-Z.; Qin, Z.-Y. Isocitrate Dehydrogenase (IDH)1/2 Mutations as Prognostic Markers in Patients with Glioblastomas. Medicine 2016, 95, e2583. [Google Scholar] [CrossRef] [PubMed]
  113. Macyszyn, L.; Akbari, H.; Pisapia, J.M.; Da, X.; Attiah, M.; Pigrish, V.; Bi, Y.; Pal, S.; Davuluri, R.V.; Roccograndi, L.; et al. Imaging Patterns Predict Patient Survival and Molecular Subtype in Glioblastoma via Machine Learning Techniques. Neuro Oncol. 2016, 18, 417–425. [Google Scholar] [CrossRef] [Green Version]
  114. Nie, D.; Lu, J.; Zhang, H.; Adeli, E.; Wang, J.; Yu, Z.; Liu, L.; Wang, Q.; Wu, J.; Shen, D. Multi-Channel 3D Deep Feature Learning for Survival Time Prediction of Brain Tumor Patients Using Multi-Modal Neuroimages. Sci. Rep. 2019, 9, 1103. [Google Scholar] [CrossRef] [Green Version]
  115. Zhu, M.; Li, S.; Kuang, Y.; Hill, V.B.; Heimberger, A.B.; Zhai, L.; Zhai, S. Artificial Intelligence in the Radiomic Analysis of Glioblastomas: A Review, Taxonomy, and Perspective. Front. Oncol. 2022, 12, 3793. [Google Scholar] [CrossRef]
  116. Sanghani, P.; Ang, B.T.; King, N.K.K.; Ren, H. Overall Survival Prediction in Glioblastoma Multiforme Patients from Volumetric, Shape and Texture Features Using Machine Learning. Surg. Oncol. 2018, 27, 709–714. [Google Scholar] [CrossRef] [PubMed]
  117. Prasanna, P.; Patel, J.; Partovi, S.; Madabhushi, A.; Tiwari, P. Radiomic Features from the Peritumoral Brain Parenchyma on Treatment-Naïve Multi-Parametric MR Imaging Predict Long versus Short-Term Survival in Glioblastoma Multiforme: Preliminary Findings. Eur. Radiol. 2017, 27, 4188–4197. [Google Scholar] [CrossRef]
  118. Park, J.E.; Kim, H.S.; Jo, Y.; Yoo, R.-E.; Choi, S.H.; Nam, S.J.; Kim, J.H. Radiomics Prognostication Model in Glioblastoma Using Diffusion- and Perfusion-Weighted MRI. Sci. Rep. 2020, 10, 4250. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  119. Grist, J.T.; Withey, S.; Bennett, C.; Rose, H.E.L.; MacPherson, L.; Oates, A.; Powell, S.; Novak, J.; Abernethy, L.; Pizer, B.; et al. Combining Multi-Site Magnetic Resonance Imaging with Machine Learning Predicts Survival in Pediatric Brain Tumors. Sci. Rep. 2021, 11, 18897. [Google Scholar] [CrossRef]
  120. Beig, N.; Patel, J.; Prasanna, P.; Hill, V.; Gupta, A.; Correa, R.; Bera, K.; Singh, S.; Partovi, S.; Varadan, V.; et al. Radiogenomic Analysis of Hypoxia Pathway Is Predictive of Overall Survival in Glioblastoma. Sci. Rep. 2018, 8, 7. [Google Scholar] [CrossRef] [Green Version]
  121. Itakura, H.; Achrol, A.S.; Mitchell, L.A.; Loya, J.J.; Liu, T.; Westbroek, E.M.; Feroze, A.H.; Rodriguez, S.; Echegaray, S.; Azad, T.D.; et al. Magnetic Resonance Image Features Identify Glioblastoma Phenotypic Subtypes with Distinct Molecular Pathway Activities. Sci. Transl. Med. 2015, 7, 303ra138. [Google Scholar] [CrossRef] [Green Version]
  122. Rathore, S.; Akbari, H.; Rozycki, M.; Abdullah, K.G.; Nasrallah, M.P.; Binder, Z.A.; Davuluri, R.V.; Lustig, R.A.; Dahmane, N.; Bilello, M.; et al. Radiomic MRI Signature Reveals Three Distinct Subtypes of Glioblastoma with Different Clinical and Molecular Characteristics, Offering Prognostic Value beyond IDH1. Sci. Rep. 2018, 8, 5087. [Google Scholar] [CrossRef] [Green Version]
  123. Li, Y.; Qian, Z.; Xu, K.; Wang, K.; Fan, X.; Li, S.; Liu, X.; Wang, Y.; Jiang, T. Radiomic Features Predict Ki-67 Expression Level and Survival in Lower Grade Gliomas. J. Neurooncol. 2017, 135, 317–324. [Google Scholar] [CrossRef]
  124. Chukwueke, U.N.; Wen, P.Y. Use of the Response Assessment in Neuro-Oncology (RANO) Criteria in Clinical Trials and Clinical Practice. CNS Oncol. 2019, 8, CNS28. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  125. Kickingereder, P.; Isensee, F.; Tursunova, I.; Petersen, J.; Neuberger, U.; Bonekamp, D.; Brugnara, G.; Schell, M.; Kessler, T.; Foltyn, M.; et al. Automated Quantitative Tumour Response Assessment of MRI in Neuro-Oncology with Artificial Neural Networks: A Multicentre, Retrospective Study. Lancet Oncol. 2019, 20, 728–740. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  126. Park, Y.W.; Choi, D.; Park, J.E.; Ahn, S.S.; Kim, H.; Chang, J.H.; Kim, S.H.; Kim, H.S.; Lee, S.-K. Differentiation of Recurrent Glioblastoma from Radiation Necrosis Using Diffusion Radiomics with Machine Learning Model Development and External Validation. Sci. Rep. 2021, 11, 2913. [Google Scholar] [CrossRef] [PubMed]
  127. Razek, A.A.K.A.; El-Serougy, L.; Abdelsalam, M.; Gaballa, G.; Talaat, M. Differentiation of Residual/Recurrent Gliomas from Postradiation Necrosis with Arterial Spin Labeling and Diffusion Tensor Magnetic Resonance Imaging-Derived Metrics. Neuroradiology 2018, 60, 169–177. [Google Scholar] [CrossRef] [PubMed]
  128. Lao, J.; Chen, Y.; Li, Z.-C.; Li, Q.; Zhang, J.; Liu, J.; Zhai, G. A Deep Learning-Based Radiomics Model for Prediction of Survival in Glioblastoma Multiforme. Sci. Rep. 2017, 7, 10353. [Google Scholar] [CrossRef] [Green Version]
  129. Antropova, N.; Huynh, B.Q.; Giger, M.L. A Deep Feature Fusion Methodology for Breast Cancer Diagnosis Demonstrated on Three Imaging Modality Datasets. Med. Phys. 2017, 44, 5162–5171. [Google Scholar] [CrossRef] [Green Version]
  130. Zhang, Q.; Cao, J.; Zhang, J.; Bu, J.; Yu, Y.; Tan, Y.; Feng, Q.; Huang, M. Differentiation of Recurrence from Radiation Necrosis in Gliomas Based on the Radiomics of Combinational Features and Multimodality MRI Images. Comput. Math. Methods Med. 2019, 2019, 2893043. [Google Scholar] [CrossRef]
  131. Narang, S.; Kim, D.; Aithala, S.; Heimberger, A.B.; Ahmed, S.; Rao, D.; Rao, G.; Rao, A. Tumor Image-Derived Texture Features Are Associated with CD3 T-Cell Infiltration Status in Glioblastoma. Oncotarget 2017, 8, 101244–101254. [Google Scholar] [CrossRef] [Green Version]
  132. Kim, H.Y.; Cho, S.J.; Sunwoo, L.; Baik, S.H.; Bae, Y.J.; Choi, B.S.; Jung, C.; Kim, J.H. Classification of True Progression after Radiotherapy of Brain Metastasis on MRI Using Artificial Intelligence: A Systematic Review and Meta-Analysis. Neuro-Oncol. Adv. 2021, 3, vdab080. [Google Scholar] [CrossRef]
  133. Blasiak, A.; Khong, J.; Kee, T. CURATE.AI: Optimizing Personalized Medicine with Artificial Intelligence. SLAS Technol. 2020, 25, 95–105. [Google Scholar] [CrossRef] [PubMed]
  134. Yauney, G.; Shah, P. Reinforcement Learning with Action-Derived Rewards for Chemotherapy and Clinical Trial Dosing Regimen Selection. In Proceedings of the 3rd Machine Learning for Healthcare Conference, Palo Alto, CA, USA, 17–18 August 2018; Volume 85, pp. 161–226. [Google Scholar]
  135. Jabbari, P.; Rezaei, N. Artificial Intelligence and Immunotherapy. Expert Rev. Clin. Immunol. 2019, 15, 689–691. [Google Scholar] [CrossRef]
  136. Thust, S.C.; van den Bent, M.J.; Smits, M. Pseudoprogression of Brain Tumors. J. Magn. Reson. Imaging 2018, 48, 571–589. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  137. Kim, J.Y.; Park, J.E.; Jo, Y.; Shim, W.H.; Nam, S.J.; Kim, J.H.; Yoo, R.-E.; Choi, S.H.; Kim, H.S. Incorporating Diffusion- and Perfusion-Weighted MRI into a Radiomics Model Improves Diagnostic Performance for Pseudoprogression in Glioblastoma Patients. Neuro Oncol. 2019, 21, 404–414. [Google Scholar] [CrossRef] [PubMed]
  138. Jang, B.-S.; Jeon, S.H.; Kim, I.H.; Kim, I.A. Prediction of Pseudoprogression versus Progression Using Machine Learning Algorithm in Glioblastoma. Sci. Rep. 2018, 8, 12516. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  139. Le Fèvre, C.; Constans, J.-M.; Chambrelant, I.; Antoni, D.; Bund, C.; Leroy-Freschini, B.; Schott, R.; Cebula, H.; Noël, G. Pseudoprogression versus True Progression in Glioblastoma Patients: A Multiapproach Literature Review. Part 2—Radiological Features and Metric Markers. Crit. Rev. Oncol. Hematol. 2021, 159, 103230. [Google Scholar] [CrossRef] [PubMed]
  140. Chawla, S.; Shehu, V.; Gupta, P.K.; Nath, K.; Poptani, H. Physiological Imaging Methods for Evaluating Response to Immunotherapies in Glioblastomas. Int. J. Mol. Sci. 2021, 22, 3867. [Google Scholar] [CrossRef]
  141. Booth, T.C.; Larkin, T.J.; Yuan, Y.; Kettunen, M.I.; Dawson, S.N.; Scoffings, D.; Canuto, H.C.; Vowler, S.L.; Kirschenlohr, H.; Hobson, M.P.; et al. Analysis of Heterogeneity in T2-Weighted MR Images Can Differentiate Pseudoprogression from Progression in Glioblastoma. PLoS ONE 2017, 12, e0176528. [Google Scholar] [CrossRef] [Green Version]
  142. Hu, X.; Wong, K.K.; Young, G.S.; Guo, L.; Wong, S.T. Support Vector Machine Multiparametric MRI Identification of Pseudoprogression from Tumor Recurrence in Patients with Resected Glioblastoma. J. Magn. Reson. Imaging 2011, 33, 296–305. [Google Scholar] [CrossRef] [Green Version]
  143. Afridi, M.; Jain, A.; Aboian, M.; Payabvash, S. Brain Tumor Imaging: Applications of Artificial Intelligence. Semin. Ultrasound CT MRI 2022, 43, 153–169. [Google Scholar] [CrossRef]
  144. Melguizo-Gavilanes, I.; Bruner, J.M.; Guha-Thakurta, N.; Hess, K.R.; Puduvalli, V.K. Characterization of Pseudoprogression in Patients with Glioblastoma: Is Histology the Gold Standard? J. Neurooncol. 2015, 123, 141–150. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. Flowchart of the AI tools developed for brain tumor imaging and their aims. The final purpose is to provide customized therapy and follow-up for each patient in order to achieve a good outcome.
Table 1. The table reports the main characteristics and findings of the studies focused on lesion detection and the differential diagnosis of brain tumors.
| Author and Year | Country | N. Patients | Database | MRI Sequences and Clinical Data | AI Model | Task | Main Results | Main Limitations |
|---|---|---|---|---|---|---|---|---|
| Park et al. [46] | South Korea | 188 (917 lesions) | Institutional brain MRI database | 3D-GRE, 3D-BB | DL model based on 3D U-net | BM detection (3D-BB + 3D-GRE vs. 3D-GRE) | 3D-BB + 3D-GRE model sensitivity = 93.1%; 3D-GRE model sensitivity = 76.8% (p < 0.001) | Single-center, retrospective study; small data size; 3D-BB sequences may have limited availability on MRI scanners; model mostly trained on patients with metastases |
| Swinburne et al. [50] | USA | 26 | Institutional brain MRI database | DWI, DSC, DCE | MLP (multilayer perceptron) model using VpNET2 | GBM vs. BM vs. PCNSL | Increase of 19.2% in correct diagnoses in cases where neuroradiologists disagreed | Manual tumor segmentation; sample size; no evaluation with an independent test cohort |
| Skogen et al. [52] | Norway | 43 | Institutional brain MRI database | DTI (FA and ADC) | Commercially available texture analysis research software (TexRAD) | GBM vs. BM | The heterogeneity of the peritumoral edema was significantly higher in GBMs (sensitivity 80% and specificity 90%) | Retrospective study; analysis of a single slice; manual drawing of the ROI |
| Han et al. [53] | China | 350 | Institutional brain MRI database (two centers) | T1C; clinical data (age, sex); routine radiological indices (tumor size, edema ratio, location) | AI-driven model using logistic regression | GBM vs. BM (lung cancer and other sites) | Combination models superior to clinical or radiological models (AUC: 0.764 for differentiation and 0.759 for differentiation between MET-lung and MET-other in internal validation cohorts) | Radiomics based only on T1-enhanced images; retrospective study; many small groups of metastases from sites other than the lungs |
| Ortiz-Ramón et al. [55] | Spain | 67 | Institutional brain MRI database | IR-T1 | RF model | Differentiate the primary site of origin of brain metastases | Images quantized with 32 gray levels (AUC = 0.873 ± 0.064); differentiating lung cancer from breast cancer (AUC = 0.963 ± 0.054) and melanoma (AUC = 0.936 ± 0.070) | Small set of BM; single-center study |
| Stadlbauer et al. [59] | Austria | 167 | Institutional brain MRI database | Standard MRI (FLAIR, T1C), advanced MRI (DWI, DSC), physiological MRI (VAM = vascular architecture mapping) | Nine commonly used ML algorithms (SVM, DT, kNN, MLP, AdaBoost, RF, bagging) | GBM vs. HGG (anaplastic glioma) vs. meningioma vs. PCNSL vs. BM | Adaptive boosting and random forest with advanced and physiological MRI data were superior to human reading in accuracy (0.875 vs. 0.850), precision (0.862 vs. 0.798), F-score (0.774 vs. 0.740), AUROC (0.886 vs. 0.813), and classification error (5 vs. 6) | Small sample size; single MRI scanner; traditional ML |
| Ucuzal et al. [60] | Turkey | 233 | Open-source dataset from https://figshare.com (accessed on 1 January 2022) | T1C | CNN (DL algorithm), developed as web-based software (Python with the TensorFlow, Keras, Scikit-learn, OpenCV, Pandas, NumPy, MatPlotLib, and Flask libraries) | Glioma vs. meningioma vs. pituitary lesions | All calculated performance metrics higher than 98% for classifying the types of brain tumors on the training dataset | Small size; no healthy individuals; the selection and creation of these algorithms may require a lot of time and experience |
| Payabvash et al. [65] | USA | 256 | Institutional brain MRI database | T1, DWI, T2, FLAIR, SWI, DSC, T1C | Naïve Bayes, RF, SVM, CNN | Differentiation of posterior fossa lesions (hemangioblastoma, pilocytic astrocytoma, ependymoma, medulloblastoma) | The decision tree model achieved greater AUC for differentiation of pilocytic astrocytoma (p = 0.020) and ATRT (p = 0.001) from other types of neoplasms | Small number of rare tumor types; lack of molecular subtyping in medulloblastoma and ependymoma; manual segmentation; acquisition at different field strengths |
| Verma et al. [67] | Switzerland | 32 | Institutional brain MRI database | DSC, T1C | IDTPA method with different texture parameters | GBM vs. PCNSL vs. tumefactive multiple sclerosis | The texture parameters of the original DSCE image for mean, standard deviation and variance showed the most significant differences (p-value between <0.00 and 0.05) between pathologies | Small size; smaller TOI in MS; manual segmentation |
| Han et al. [68] | China | 57 | Institutional brain MRI database | T1, T2 | t-test and statistical regression (LASSO algorithm) to develop three radiomic models based on T1WI, T2WI and their combination | LGG vs. multiple sclerosis | T2WI and combination models achieved better diagnostic efficacy, with AUCs of 0.980 and 0.988 in the primary cohort and 0.950 and 0.925 in the validation cohort | Retrospective study; small size; single scanner; unknown etiology of inflammation |
| Qian et al. [69] | China | 412 | The Cancer Genome Atlas (TCGA); retrospective dataset from Beijing Tiantan Hospital | T1C | Radiomic feature extraction, ML | GBM vs. single BM | SVM + LASSO classifiers had the highest prediction efficacy (AUC, 0.90) | Retrospective study; imaging data from multiple MRI systems; only CE sequences were used |
| Bae et al. [70] | Korea | 166 (training) + 82 (validation) | Retrospective institutional brain MRI database | T2, T1C | DL using radiomic features | GBM vs. single BM | DNN showed high diagnostic performance, with AUC, sensitivity, specificity and accuracy of 0.956, 90.6%, 88.0% and 89.0% | Automated tumor segmentation; advanced sequences not included; heterogeneous MR scanner types |
| Adu et al. [61] | China | | Brain Tumor Dataset, Figshare (3064 images) | T1C | CapsNets (dilated CapsNet) | Detection + classification | Accuracy: 95% | Not enough comparisons and experiments with confusion matrix |
| Abiwinanda et al. [43] | Indonesia | | Brain Tumor Dataset, Figshare (3064 images) | T1C | CNN | Classification into three types | Accuracy: 98% | Complexity of pre-processing |
Table 2. The main characteristics and findings of the studies focused on the characterization of brain tumors.
| Author and Year | Country | N. Patients | Database | MRI Sequences and Clinical Data | AI Model | Task | Main Results | Limitations |
|---|---|---|---|---|---|---|---|---|
| Chang et al. [78] | USA | 259 | The Cancer Imaging Archive | T1, T1C, FLAIR | CNN (DL) | IDH1, 1p/19q co-deletion, MGMT | Accuracy, respectively: 94%, 92%, 83% | Small sample size; retrospective study; lack of an independent dataset |
| Mzoughi et al. [77] | Tunisia | | BraTS 2018 dataset | T1C | 3D CNN | Grade classification (LGG and HGG) | Classification accuracy: 96.49% | |
| Wiestler et al. [71] | Germany | 37 | Institutional brain MRI database | T1C, FLAIR, T2, rOEF, CBV | ML (RF) | WHO grade II/III vs. WHO grade IV | Accuracy: 91.8% | Lack of an independent validation cohort; small sample size; retrospective study |
| Zhang et al. (2017) [72] | China | 120 | Institutional brain MRI database | T1, T1C, FLAIR, ASL, DWI, DCE | ML | Comprehensive automated glioma grading scheme (LGG and HGG) | SVM superior to the other classifiers, with the best performance when combined with the RFE attribute selection strategy | High classification accuracy on current data but poor performance on new datasets |
| Kim et al. [73] | South Korea | 127 | Retrospective institutional brain MRI database | T1, T2, FLAIR, T1C, DWI, DSC | Radiomic feature extraction, ML | Glioma grading and IDH prediction | Higher performance (AUC 0.932) of the multiparametric model with ADC features in tumor grading | Retrospective design; small number of patients in the validation set; data from a single institution |
| Cho et al. [75] | Korea | 285 | BraTS 2017 | T1, T1C, T2, FLAIR | Radiomic feature extraction, ML | Glioma grading | RF classifier showed the best performance, with AUC 0.9213 for the test cohort | Molecular information not considered |
| Tian et al. [76] | China | 153 | Retrospective brain MRI database | T1C, T2, DWI, ASL | Radiomic feature extraction | Glioma grading (LGG vs. HGG; grade III vs. IV) | SVM model more promising than single-sequence MRI for classifying LGGs from HGGs and grade III from grade IV | |
| Akkus et al. [74] | USA | 159 | Brain tumor patient database of the Mayo Clinic | T1C, T2 | Multi-scale CNN | 1p/19q prediction | Increased enhancement, infiltrative margins, and left frontal lobe predilection are associated with 1p/19q codeletion with up to 93% accuracy | Limited original data size (addressed by data augmentation) |
| Meng et al. [79] | China | 123 | Institutional brain MRI database | T1, T2, FLAIR, T1C, ADC | SVM model and 5-fold cross-validation | ATRX status | AUC for ATRX mutation (ATRX(−)) of 0.93 (95% CI: 0.87–1.0) on the training set and 0.84 on the validation set | Small dataset; lack of multiparametric MRI; only one imaging biomarker |
| Ren et al. [80] | China | 57 | Institutional brain MRI database | 3D-ASL, T2, FLAIR, DWI | SVM | IDH1(+) and ATRX(−) | Accuracy/AUC/sensitivity/specificity/PPV/NPV for predicting IDH1(+) in LGG: 94.74%/0.931/100%/85.71%/92.31%/100%; for ATRX(−) in LGG with IDH1(+): 91.67%/0.926/94.74%/88.24%/90.00%/93.75% | Relatively small qualified patient population; molecular sequencing (e.g., IDH2 codons) not performed; hard to be fully understood by treating physicians and applied to routine clinical practice |
| Haubold et al. [82] | Germany | 217 | Institutional brain MRI database | T1, T1C, FLAIR | DeepMedic (CNN-based algorithm), XGBoost (SL) model for parameter optimization | ATRX, IDH1/2, MGMT, 1p/19q co-deletion, LGG vs. HGG | AUC (validation/test) of 0.981 ± 0.015/0.885 ± 0.02 for LGG vs. HGG and 0.979 ± 0.028/0.923 ± 0.045 for ATRX(−), followed by 0.929 ± 0.042/0.861 ± 0.023 for IDH1/2; 1p/19q and MGMT achieved moderate results | Small sample size; different MRI manufacturers; retrospective study |
| Shboul et al. [83] | USA | 108 | Institutional brain MRI database | T1, T1C, FLAIR, T2 | XGBoost (SL model) | MGMT methylation, IDH mutations, 1p/19q co-deletion, ATRX and TERT mutations | The prediction models for MGMT, IDH, 1p/19q, ATRX, and TERT achieved test AUCs of 0.83 ± 0.04, 0.84 ± 0.03, 0.80 ± 0.04, 0.70 ± 0.09, and 0.82 ± 0.04, respectively | Small sample size |
| Calabrese et al. [84] | USA | 400 | Institutional brain MRI database | T1, T1C, T2, FLAIR, SWI, DWI, ASL, MD, AD, RD, and FA | CNN, random forest model | Mutations of IDH, TERT, TP53, PTEN, ATRX, or CDKN2A/B; MGMT methylation; EGFR amplification; combined aneuploidy of chromosomes 7 and 10 | Good performance; ROC AUC highest for ATRX (0.97) and IDH1 (0.96) mutations | Lack of external validation |
Table 3. Main characteristics and results of the studies aimed at prognostication.
| Author and Year | Country | N. Patients | Database | MRI Sequences and Clinical Data | AI Model | Task | Main Results | Limitations |
|---|---|---|---|---|---|---|---|---|
| Macyszyn et al. [113] | USA | 105 (retrospective) + 26 (prospective) | Hospital case series of GB at the University of Pennsylvania from 2006 to 2013 | Structural, diffusion, and perfusion scans; >18 years old; histopathological diagnosis of GBM | Machine learning algorithm | Prediction of overall survival and molecular subtype | High prediction accuracy (survival 80%, molecular subtype 76%) | Only MRI at the time of diagnosis was used in creating the predictive model; data from a single institution |
| Nie et al. (2019) [114] | China | 68 (training dataset) + 25 (validation dataset) | Hospital case series (Huashan Hospital, Shanghai, China) | T1 MRI, rs-fMRI and DTI; HGG patients with evidence of enhancement on T1WI, no previous treatment | 3D convolutional neural networks (CNNs) + support vector machine (SVM) | Prediction of overall survival | Accuracy of 90.66% | Limited clinical information (e.g., tumor genetics) |
| Sanghani et al. [115] | Singapore | 163 GBM patients | BraTS 2017 dataset | T1, T2, FLAIR, T1CE | Support vector machine (SVM) classification-based recursive feature elimination method for tumor feature selection | Prediction of overall survival | High accuracy for both 2-class and 3-class OS group predictions (89–99%) | - |
| Prasanna et al. (2017) [117] | USA | 65 GBM patients | Cancer Imaging Archive | T1C, T2, FLAIR | 402 radiomic features from the enhancing lesion, PBZ and tumor necrosis | Radiomic features from the peritumoral brain zone to predict long- versus short-term survival | Features suggestive of intensity heterogeneity and textural patterns were found to be predictive of survival (p = 1.47 × 10−5), as compared to features from the enhancing tumor, necrotic regions and known clinical factors | Preliminary study |
| Park et al. (2020) [118] | Korea | 216 patients with newly diagnosed glioblastoma: training (n = 158) and external validation (n = 58) sets | Two tertiary medical centers | DWI, perfusion | Radiomic feature selection using LASSO regression + multiparametric MR prognostic model (radiomics score + clinical predictors) | Multiparametric MR prognostic model (radiomics score + clinical predictors) vs. conventional MR radiomics model discrimination | Better discrimination (C-index, 0.74) and performance of multiparametric MRI than a conventional MR radiomics model (C-index, 0.65; p < 0.0001) or clinical predictors (C-index, 0.66; p < 0.0001); good external validation (C-index, 0.70) | Small number of patients; molecular changes not considered in this analysis; only scans at 3.0 T |
| Grist et al. (2021) [119] | UK | 69 participants with suspected brain tumors: medulloblastoma (N = 17), pilocytic astrocytoma (N = 22), ependymoma (considered high grade, N = 10), other tumors (N = 20) | Four clinical sites in the UK (Birmingham Children's Hospital, Newcastle Royal Victoria Infirmary, Queen's Medical Centre, Alder Hey Children's Hospital, Liverpool), 2009–2017 | T1, T1C, T2, DWI; many different tumor types, stages and patient ages | Unsupervised and supervised machine learning models | Perfusion, DWI, and ADC values determined two new subgroups of brain tumors with different survival characteristics (p < 0.01) | High accuracy (98%) with a neural network; non-invasive risk assessment tool; multi-site and multi-scanner data | Small heterogeneous cohort treated in a diverse manner; variations in scanner protocol; a number of children with high-risk tumors were alive at study end, with currently limited follow-up |
| Zhang et al. (2019) [130] | China | 51 glioma patients who underwent radiation treatment after surgery | Hospital case series | T1, T1C, T2, FLAIR; necrosis or recurrence in different glioma subtypes, stages, locations and patient ages | Deep features extracted from multimodality MRI images by two CNNs (AlexNet and Inception v3) | Distinguish glioma necrosis from recurrence in glioma patients using a radiomics model based on combinational features and multimodality MRI images | Accuracy of AlexNet and Inception v3 features higher than that of handcrafted features (paired t-test, p < 0.0003) | Correlations among features were ignored; tens of thousands of features were used; the dataset used in this study was relatively small |
| Narang et al. (2017) [131] | USA | 79 GB patients | TCGA database | Presurgical T1C, T2, FLAIR; T-cell surface marker CD3D/E/G mRNA expression level data | Image-derived features extracted from the T1 post-contrast and FLAIR images, selected with the Boruta package | Develop an imaging-derived predictive model for assessing the extent of intra-tumoral CD3 T-cell infiltration | Prediction model for CD3 infiltration achieved an accuracy of 97.1% and an area under the curve (AUC) of 0.993 | Texture features derived only from T1 post-contrast and T2-FLAIR sequences; variation in scanning and acquisition protocols; adjustment for molecular status |
| Kim et al. [137] | Korea | 238 patients pathologically confirmed as having GB who subsequently received standard concurrent chemo-radiation therapy | Database of the local Department of Radiology between March 2011 and March 2017 | T1C, FLAIR, DWI, and DSC imaging performed within 6 months after surgery or biopsy; de novo GB diagnosis according to WHO criteria, treated with chemo-radiation therapy | Multiparametric radiomics selection with the ANTsR and WhiteStripe packages | Distinguish progression vs. pseudoprogression | Multiparametric radiomics model (AUC, 0.90) showed better performance than any single ADC or CBV parameter; robust (high internal and external validation) | Retrospective nature; small cohort size; relatively high fraction of pseudoprogression; need for validation with a 1.5 T scanner; time cost and complicated analytical process |
