Systematic Review

Applications of Deep Learning to Neurodevelopment in Pediatric Imaging: Achievements and Challenges

1 Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR), Singapore 138632, Singapore
2 Department of Experimental and Clinical Biomedical Sciences, University of Florence—Azienda Ospedaliero-Universitaria Careggi, 50134 Florence, Italy
3 School of Computer Science and Engineering, Nanyang Technological University, Singapore 639798, Singapore
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(4), 2302; https://doi.org/10.3390/app13042302
Submission received: 15 December 2022 / Revised: 3 February 2023 / Accepted: 8 February 2023 / Published: 10 February 2023

Abstract: Deep learning has achieved remarkable progress, particularly in neuroimaging analysis. Deep learning applications have also been extended from adult to pediatric medical images, and thus, this paper aims to present a systematic review of this recent research. We first introduce the commonly used deep learning methods and architectures in neuroimaging, such as convolutional neural networks, auto-encoders, and generative adversarial networks. A non-exhaustive list of commonly used publicly available pediatric neuroimaging datasets and repositories is included, followed by a categorical review of recent works in pediatric MRI-based deep learning studies in the past five years. These works are categorized into recognizing neurodevelopmental disorders, identifying brain and tissue structures, estimating brain age/maturity, predicting neurodevelopment outcomes, and optimizing MRI brain imaging and analysis. Finally, we also discuss the recent achievements and challenges of these applications of deep learning to pediatric neuroimaging.

1. Introduction

Machine learning has made extraordinary advances during the past decades. Conventional machine learning algorithms such as support vector machines and logistic regression have been widely applied to image analysis for pattern recognition and identification [1]. Yet applications of such approaches are limited by their reliance on a feature extraction procedure and by restrictions imposed by the high dimensionality of the data. Feature extraction requires deep domain expertise to transform raw data into a different representation, and further dimension reduction techniques are required to fit high-dimensional features to the machine learning algorithms [2]. The evolution of deep learning algorithms such as convolutional neural networks has advanced machine learning to another triumph. The end-to-end framework of deep learning allows automatic feature learning of complicated data patterns, which mitigates the subjectivity of the feature extraction procedure. The deep architecture and nonlinear processing units empower deep learning algorithms to deal with vast amounts of data [3,4]. Successful applications of conventional machine learning and deep learning to medical imaging have been widely reported [5,6]. Specifically, neuroimaging studies based on magnetic resonance imaging (MRI) have applied machine learning to the study of the brain in many aspects [7,8].
MRI has become a crucial diagnostic imaging technique for the study of the brain owing to its non-ionizing nature and high-contrast resolution [9]. MRI relies on the nuclear magnetic resonance phenomenon, in which atomic nuclei re-emit radio signals when placed in a magnetic field and stimulated by oscillating radio waves. The human body contains abundant hydrogen nuclei, and these nuclei align with the magnetic field generated by the MRI scanner. An oscillating radio frequency pulse then deviates the magnetic momentum of the nuclei from the field. When the oscillating radio pulse is removed, signals generated by the realignment of the hydrogen nuclei can be detected by a receiver coil [10,11]. The most common MRI modality is structural MRI (sMRI), which provides morphostructural information based on the concentration of hydrogen protons. sMRI measures the signals produced by aligned hydrogen protons in water molecules in the body and creates excellent contrast among different tissues. Functional MRI (fMRI) quantifies blood oxygenation level-dependent (BOLD) signals based on blood flow and blood oxygen changes around cells and reflects brain activity [12]. Resting-state fMRI (rs-fMRI) is measured while the subject is at rest, whereas task fMRI monitors brain function during an assigned task. Diffusion tensor imaging (DTI) estimates the motion of water molecules in the brain. The diffusion speed and direction of water molecules are restricted by tissue types and fiber architectures; DTI therefore provides information based on quantitative anisotropy and orientation [13]. Deep learning methods have been widely applied to neuroimaging studies in adults for neuropsychiatric disorder recognition, brain tissue and structure segmentation, and clinical outcome prediction [8,14,15]. In comparison, relatively few deep learning studies have been conducted on pediatric MRI.
Most previous reviews of pediatric MRI covered a large number of studies using conventional machine learning approaches instead of deep learning algorithms, and some reviews focused on specific topics such as autism [7,16,17]. To illustrate the most recent achievements of deep learning in pediatric MRI, this systematic review summarizes the advanced deep learning approaches applied to multiple neurodevelopmental topics in MRI-based research over the past five years. Section 2 introduces the most commonly utilized deep learning algorithms as well as a list of available public datasets for neurodevelopment. Section 3 categorizes the recent studies into five main topics: recognizing neurodevelopmental disorders, identifying brain and tissue structures, estimating brain age/maturity, predicting neurodevelopment outcomes, and optimizing MRI brain imaging and analysis. The challenges and insights of applying deep learning to pediatric MRI are discussed in Section 4. We conclude in Section 5.

2. Methods

2.1. Deep Learning Model Architectures

The multi-layer perceptron (MLP) has the most basic architecture of deep neural networks, composed of a stack of processing layers: an input layer, several hidden layers, and an output layer (Figure 1) [18]. The neurons in the processing layers perform nonlinear computations and empower the model to learn different representations of the training data at multiple levels of abstraction [3].
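As a toy illustration of this layered structure, the following NumPy sketch (not drawn from any reviewed study; the layer sizes and random weights are arbitrary) runs a forward pass through a two-layer MLP with ReLU hidden units. In practice, the reviewed models are trained with frameworks such as PyTorch or TensorFlow.

```python
import numpy as np

def relu(x):
    # nonlinear activation applied element-wise in the hidden layers
    return np.maximum(0.0, x)

def mlp_forward(x, weights, biases):
    """Forward pass of a simple MLP: hidden layers use ReLU,
    the output layer is left linear (e.g., for regression)."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)
    return h @ weights[-1] + biases[-1]

rng = np.random.default_rng(0)
# input layer (4 features) -> hidden layer (8 units) -> output layer (1 unit)
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 1))]
biases = [np.zeros(8), np.zeros(1)]
y = mlp_forward(rng.normal(size=(2, 4)), weights, biases)
print(y.shape)  # (2, 1): one output per input sample
```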
The convolutional neural network (CNN) is the most widely applied deep learning algorithm in medical imaging studies. A typical CNN consists of convolutional layers with activation functions, pooling layers, and fully connected layers (Figure 2) [2]. Convolutional layers convolve an image with different types of kernels to extract image features. The kernels are applied across the entire image, which greatly reduces the number of weights to be trained compared to fully connected neural networks. Activation functions such as the sigmoid and ReLU (Rectified Linear Unit) serve as nonlinear feature detectors that introduce nonlinearities into the CNN. Pooling layers reduce feature map resolution while providing translational invariance. The combination of convolutional and pooling layers enables a CNN to learn spatial hierarchies among feature patterns. Fully connected layers function as a classifier or regressor to predict the desired outcomes [2]. The weight sharing and translational invariance properties give CNNs efficient and precise performance on image processing tasks. Depending on the input data dimensionality, 1D, 2D, and 3D convolutional kernels can be employed. Beyond the basic stacking of convolutional, pooling, and fully connected layers, models with more complex architectures have been developed to further improve CNN performance. AlexNet was the first large CNN model to demonstrate the great potential of CNNs on image recognition tasks [19]. Inception blocks apply convolution kernels of different sizes at the same level to optimize the accuracy and computation time of the model [20]. Residual connections, which link an earlier layer to a later layer without extra parameters, alleviate the vanishing gradient problem and thereby allow CNN models with many layers to be trained [21]. Dense blocks, in which the input and output of each convolution are connected and followed by a final pooling operation, have been proposed to train even deeper models [22].
Many other CNN models with different architectures have been proposed. A detailed summary can be found in the review paper by Celard et al. [2].
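To make the convolution and pooling operations concrete, here is a minimal NumPy sketch (illustrative only; the image and hand-picked kernel are arbitrary, and real CNNs learn their kernels during training) of a single convolution, ReLU activation, and max pooling step:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2D convolution (cross-correlation): the same kernel slides
    over the whole image, so its weights are shared spatially."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling: reduces resolution and adds
    a degree of translational invariance."""
    H, W = x.shape
    x = x[:H - H % size, :W - W % size]
    return x.reshape(H // size, size, W // size, size).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])  # crude vertical-edge detector
feat = np.maximum(0.0, conv2d(img, edge_kernel))    # ReLU activation
pooled = max_pool(feat)
print(feat.shape, pooled.shape)  # (5, 5) (2, 2)
```

A fully connected layer (as in the MLP above) would then map the flattened pooled features to class scores.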
U-net was proposed for semantic segmentation in 2015 and remains one of the most widely used CNN architectures for medical image segmentation. The typical U-net is composed of symmetrical encoder and decoder paths connected by skip connections (Figure 3) [23]. The model first performs a set of convolutions on the encoder side to extract features from the input data and then reconstructs the input image, incorporating new information, through transposed convolutions on the decoder side. Skip connections link the encoder and decoder at each level. Complex architectural elements have also been applied to U-net to further improve its performance, for example, in Res-U-net and U-net with attention mechanisms [24,25].
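The encoder-decoder-with-skip data flow can be sketched with plain array operations (a conceptual toy: a real U-net applies learned convolutions at every level rather than simple pooling and upsampling):

```python
import numpy as np

def down(x):
    # encoder step: 2x average pooling halves the resolution
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

def up(x):
    # decoder step: nearest-neighbour upsampling doubles the resolution
    return x.repeat(2, axis=0).repeat(2, axis=1)

x = np.random.default_rng(1).normal(size=(8, 8))
e1 = down(x)    # encoder level 1: 8x8 -> 4x4
e2 = down(e1)   # bottleneck:      4x4 -> 2x2
d1 = up(e2)     # decoder level 1: 2x2 -> 4x4
# skip connection: stack the encoder feature map with the decoder map,
# giving the decoder access to fine-grained spatial detail lost in pooling
merged = np.stack([e1, d1])
print(merged.shape)  # (2, 4, 4): two channels at the 4x4 level
```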
The auto-encoder plays a pivotal role in unsupervised deep learning. It follows an encoder-decoder architecture (Figure 4). The encoder learns a low-dimensional latent representation that retains only the significant information while discarding noise. The decoder uses the latent representation to reconstruct the input data. The auto-encoder provides an effective approach to feature learning in recognition tasks with unlabeled data. Variational auto-encoders are applied as generative models that randomly generate new data similar to the input data [2].
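A linear toy version of the encoder-decoder idea is shown below (illustrative only; a real auto-encoder uses nonlinear layers and learns its weights by minimizing the reconstruction loss, whereas here the weights are random):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))      # 100 samples, 16 features each

# a linear auto-encoder: a trained model would learn these weights by
# minimizing the reconstruction error (random initialization shown here)
W_enc = rng.normal(size=(16, 4))    # encoder: 16-d input -> 4-d latent code
W_dec = rng.normal(size=(4, 16))    # decoder: latent code -> reconstruction

z = X @ W_enc                        # latent representation (bottleneck)
X_hat = z @ W_dec                    # reconstruction of the input
loss = np.mean((X - X_hat) ** 2)     # reconstruction (MSE) loss to minimize
print(z.shape, X_hat.shape)          # (100, 4) (100, 16)
```

The latent codes `z` are the learned features that can then feed a classifier when labels are scarce.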
The generative adversarial network (GAN) has attracted attention for its ability to model data distributions and generate realistic data since it was proposed in 2014 [28]. A GAN consists of a generator network, which captures the data distribution of real images and generates fake images, and a discriminator network, which distinguishes the generated fake images from real images (Figure 5). The two networks are trained alternately in a competitive manner. A large number of GAN variants have been proposed and applied to object detection, localization, segmentation, data augmentation, and image quality improvement tasks [29]. A review paper [30] introduced various GAN architectures and their applications in medical imaging.
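The adversarial objective can be illustrated with a toy one-parameter discriminator on 1D data (a conceptual sketch of the two loss terms only, not a trainable GAN; the data distributions and the discriminator form are our arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x, w):
    # a toy one-parameter discriminator: logistic score in (0, 1),
    # interpreted as the probability that x is a real sample
    return 1.0 / (1.0 + np.exp(-x * w))

real = rng.normal(loc=2.0, size=32)   # samples from the "real" distribution
fake = rng.normal(loc=0.0, size=32)   # samples from an untrained generator
w = 1.0

# discriminator loss: push scores for real data toward 1, fake toward 0
d_loss = -np.mean(np.log(discriminator(real, w)) +
                  np.log(1.0 - discriminator(fake, w)))
# generator loss: fool the discriminator into scoring fakes as real
g_loss = -np.mean(np.log(discriminator(fake, w)))
print(d_loss > 0 and g_loss > 0)  # True
```

Training alternates gradient steps on `d_loss` (updating the discriminator) and `g_loss` (updating the generator) until neither can improve.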

2.2. Public Datasets and Repositories

Sample size is one of the most critical issues in training a deep learning algorithm, as the number of trainable parameters grows rapidly with deep architectures. However, data collection is expensive and time-consuming for medical images. Fortunately, more and more data repositories and data-sharing platforms have become available in recent years, making it possible to conduct medical imaging studies on a large scale. Table 1 lists the public datasets and repositories involved in the studies reviewed in this manuscript. Some repositories collect data from multiple independent sites and provide a large number of subjects. The Autism Brain Imaging Data Exchange (ABIDE) dataset and the IMaging-PsychiAtry Challenge (IMPAC) dataset focus on autism spectrum disorder (ASD) recognition and provide data from subjects with ASD and healthy controls. The ADHD-200 consortium collects data for attention deficit hyperactivity disorder (ADHD) patients and healthy controls. The Healthy Brain Network (HBN) dataset and the Developing Human Connectome Project (dHCP) are data collections for typically developing individuals. The UNC/UMN Baby Connectome Project (BCP) collects data from infants and pre-school-age children. Other datasets including a large number of participants, such as the UK Biobank and the International Consortium for Brain Mapping (ICBM), involve healthy controls as well as patients with various neurodevelopmental disorders at all ages.

2.3. Review Parameters

The paper selection and review procedure in this study follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [46,47]. The search terms employed were <deep learning brain MRI neurodevelopment>, <deep learning pediatric brain MRI>, <deep learning child brain MRI>, or <deep learning adolescent brain MRI>, so as to include MRI-based deep learning studies of pediatric neurodevelopment. The initial search was performed on the PubMed and Web of Science databases on 26 October 2022. The ScienceDirect and Google Scholar search engines were excluded due to the large number of search results returned (thousands of results).
The initial search yielded 412 papers from PubMed and 252 papers from Web of Science. Following the PRISMA protocols, we performed the selection and review steps in Figure 6. A total of 304 duplicate records were removed in the first step. Secondly, we examined the keywords, titles, and abstracts of the remaining 360 papers and excluded review papers, case reports, papers in a foreign language (French), and animal studies. Furthermore, we identified studies on adult populations, genetics, maternity, and non-deep learning approaches as irrelevant and excluded them. We retrieved the full paper for 184 of the remaining 185 studies. The full papers were further examined for eligibility, and 67 studies with non-pediatric populations, non-MRI modalities, or non-deep learning methods were removed. Then, 120 studies were carefully reviewed, and 113 of them are categorized and reported in the next chapter. The remaining 7 studies, on gender prediction, functional connectivity estimation, and fascicle detection, are not reported.
Three researchers independently examined the eligibility of the studies, and conflicting decisions were resolved by discussion. Data extracted from the selected studies include, but are not limited to, the year of the study, clinical questions, study population, imaging techniques, preprocessing protocols and tools, deep learning approach, training and validation settings, results, results interpretation, and limitations. The extracted information is presented and discussed in the following chapters. Specifically, risk of bias analysis was performed following the Risk Of Bias In Non-randomized Studies of Interventions (ROBINS-I) tool [48] for (1) risk of bias due to confounding; (2) risk of bias in selection of participants into the study; (3) risk of bias in classification of interventions; (4) risk of bias due to deviations from intended interventions; (5) risk of bias due to missing data; (6) risk of bias arising from measurement of outcomes; and (7) risk of bias in selection of reported results. The risk of bias analysis is presented in Appendix A (Table A1).

3. Results

3.1. Recognizing Neurodevelopmental Disorders

Neurodevelopmental disorders are common brain disorders in children, bringing a variety of challenges to the affected patients and causing great burdens to their families. Various genetic and environmental factors may perturb the developmental process and result in neurodevelopmental disorders [49].
Autism spectrum disorder (ASD) is one of the most common neurodevelopmental disorders [50]. ASD is characterized by early deficits in social interactions and communication accompanied by restricted and repetitive behaviors [49]. Review papers [7,17] summarized selected studies using artificial intelligence approaches to classify ASD patients and healthy controls, including both conventional machine learning and deep learning methods. This review lists the recent deep learning advances using MLP, CNN, RNN, and auto-encoder models (Table 2). Rs-fMRI is widely utilized for ASD recognition. Connectomes derived from fMRI were used as inputs to MLP, CNN, and RNN models for classification [51,52,53,54]. A multimodal study [55] combined sMRI, rs-fMRI, and task fMRI.
Attention deficit hyperactivity disorder (ADHD) is another common neurodevelopmental disorder [50]. ADHD patients often suffer from hyperactivity, impulsivity, and inattention, and ADHD often persists into adulthood [56]. Previous ADHD recognition studies were summarized in the review paper [7] under both the conventional machine learning and deep learning categories. This review focuses on more recent studies utilizing deep learning approaches for ADHD detection (Table 2). Both rs-fMRI and sMRI are employed as inputs to deep learning networks.
Less common neurodevelopmental disorders, such as cerebellar dysplasia [57], dyslexia [58], epilepsy [59,60], conduct disorder [61], disruptive behavior disorder [62], and post-traumatic stress disorder [63], are also reviewed in this study. We also include three studies on the detection of posterior fossa tumors and tubers in tuberous sclerosis complex [64,65,66] and two studies on white matter pathway classification [67,68]. This review aims to investigate the deep learning methods utilized across pediatric topics in an overall manner and therefore includes multiple disorders. Structural imaging techniques such as sMRI and DTI are more commonly utilized in these studies.
Overall, the selected studies are summarized in Table 2. Most studies conducted baseline comparisons with conventional machine learning approaches and reported the superior performance of deep learning approaches [53,69]. CNN dominates the image recognition tasks: a total of 41 out of 48 neurodevelopmental disorder classification studies in this review utilized CNN approaches. Advanced CNN architectures such as inception and residual modules were employed in 2D CNN models [70,71,72]. Several studies trained 3D CNNs with limited sample sizes [61,69,73,74], raising concerns about overfitting. Large-scale studies involving thousands of training samples were conducted using public datasets and repositories [55,75,76,77,78]. Multimodal studies combining features from multiple MRI modalities showed better performance than single-modality studies [62,76].
Table 2. Recognizing neurodevelopmental disorders.
Study | Year | Disorder | Population | Technique | Preprocessing | Method | Results
[79] | 2017 | Autism | ABIDE I dataset: 55 ASD (age 14.2 ± 3.2 years), 55 HC (age 12.7 ± 2.4 years) | rs-fMRI | Preprocessed Connectomes Project | MLP | Accuracy 86.36%
[80] | 2018 | Autism | 62 ASD, 48 HC | task fMRI | FSL | MLP | Accuracy 87.1%
[51] | 2018 | Autism | ABIDE I dataset: 529 ASD, 571 HC | rs-fMRI | In-house pipeline | RNN | Accuracy 70.1%
[81] | 2018 | Autism | ABIDE I & II datasets: 116 ASD, 69 HC (age 5–10 years) | sMRI, rs-fMRI | SPM8 | Deep Belief Network | Accuracy 65.56%
[53] | 2019 | Autism | ABIDE I & II datasets: 210 ASD, 249 HC (age 5–10 years) | rs-fMRI | SPM8 | CNN | Accuracy 72.73%
[52] | 2019 | Autism | ABIDE II dataset: 117 ASD, 81 HC (age 5–12 years) | rs-fMRI | FSL | Auto-encoder | Accuracy 96.26%
[55] | 2020 | Autism | Multiple datasets (ABCD, ABIDE I & II, BioBank, NDAR, ICBM, Open fMRI, 1000 Functional Connectomes): 43,838 total connectomes, 1711 ASD (age 0.42–78 years) | rs-fMRI, task fMRI | SPT, AFNI, SpeddyPP | CNN | AUROC 0.6774
[82] | 2020 | Autism | YUM dataset: 40 ASD (age 29.4 ± 11.6 years), 33 HC (age 30.1 ± 5.3 years); ABIDE I dataset: 521 ASD (age 29.4 ± 11.6 years), 593 HC (age 30.1 ± 5.3 years) | sMRI | SPM8 | 3D CNN | Accuracy 88% (YUM), 64% (ABIDE)
[69] | 2021 | Autism | ABIDE I dataset: 55 ASD (age 14.52 ± 6.97 years), 55 HC (age 15.81 ± 6.25 years) | rs-fMRI | Configurable Pipeline for the Analysis of Connectomes | 3D CNN | Accuracy 77.74%
[74] | 2021 | Autism | 50 ASD, 50 HC (age 12–40 months) | task fMRI | FSL, FEAT | 3D CNN | Accuracy 80%
[83] | 2021 | Autism | ABIDE I & II datasets: 1060 ASD, 1146 HC (age 5–64 years) | rs-fMRI | In-house pipeline | CNN | Accuracy 89.5%
[84] | 2021 | Autism | ABIDE I dataset: 506 ASD, 532 HC (age 10–28 years) | rs-fMRI | DPABI | MLP | Accuracy 78.07 ± 4.38%
[85] | 2021 | Autism | 52 ASD, 195 HC infants (age 24 months) | MRI | iBEAT | CNN | Accuracy 92%
[76] | 2021 | Autism | Multiple datasets (ABCD, ABIDE I & II, BioBank, NDAR, Open fMRI): 29,288 total connectomes, 1555 ASD (age 0.42–78 years) | sMRI, rs-fMRI, task fMRI | AFNI, SpeddyPP | CNN | AUROC 0.7354
[54] | 2022 | Autism | ABIDE & UM datasets: 411 HC for offline learning; 48 ASD, 65 HC for testing (age 13.8 ± 2 years) | rs-fMRI | Connectome Computation System | Auto-encoder | Accuracy 67.2%
[73] | 2022 | Autism | Preschool dataset: 110 subjects; ABIDE I dataset: 1099 subjects | sMRI | SPM8 | CNN | AUROC 0.787 (preschool), 0.856 (ABIDE)
[86] | 2022 | Autism | 151 ASD, 151 HC (age 1–6 years) | sMRI | In-house pipeline | 3D CNN | Accuracy 84.4%
[75] | 2022 | Autism | IMPAC dataset: 418 ASD, 497 HC (age 17 ± 9.6 years) | sMRI, rs-fMRI | In-house pipeline | MLP | AUROC 0.79 ± 0.01
[87] | 2019 | ADHD | ADHD-200 consortium: 776 subjects | rs-fMRI | In-house pipeline | 3D CNN | Accuracy 69.01%
[88] | 2020 | ADHD | ADHD-200 consortium: 262 subjects | rs-fMRI | AFNI, FSL | CNN | Accuracy 73.1%
[78] | 2021 | ADHD | ENIGMA-ADHD Working Group: 2192 ADHD, 1850 HC (age 4–63 years) | sMRI | FreeSurfer | MLP | Testing AUROC 0.60
[89] | 2022 | ADHD | ADHD-200 consortium; NI site: 25 ADHD, 23 HC (age 11–22 years); NYU site: 118 ADHD, 98 HC (age 7–18 years); KKI site: 22 ADHD, 61 HC (age 8–13 years); PU site: 78 ADHD, 116 HC (age 8–17 years); PU-1 site: 24 ADHD, 62 HC (age 8–17 years) | rs-fMRI | Preprocessed Connectomes Project | Auto-encoder | Accuracy >99%
[90] | 2022 | ADHD | ADHD-200 consortium; NI site: 28 ADHD-I, 37 HC; NYU site: 72 ADHD-I, 42 ADHD-C, 96 HC; OHSU site: 27 ADHD-I, 13 ADHD-C, 70 HC; KKI site: 16 ADHD-I, 5 ADHD-C, 60 HC; PU-1 site: 16 ADHD-I, 26 ADHD-C, 88 HC; PU-2 site: 15 ADHD-I, 20 ADHD-C, 31 HC; PU-3 site: 7 ADHD-I, 12 ADHD-C, 23 HC | rs-fMRI | DPABI | CNN | Accuracy >99%
[91] | 2022 | ADHD | ADHD-200 consortium; training: 69 ADHD, 99 HC; testing: 24 ADHD, 27 HC (age 7–21 years) | rs-fMRI | Athena pipeline | CNN | Testing accuracy 67%
[77] | 2022 | ADHD | ADHD-200 consortium: 325 ADHD, 547 HC (age 12 ± 3.0 years) | rs-fMRI | Athena pipeline | CNN | Accuracy 78.7 ± 4.3%
[92] | 2022 | ADHD | 19 ADHD (age 10.25 ± 1.94 years), 20 HC (age 10.15 ± 2.13 years) | sMRI | SPM | CNN | Accuracy 93.45 ± 1.18%
[93] | 2022 | ADHD | ABCD dataset: 127 ADHD, 127 HC (age 9–10 years) | sMRI | ANTs | CNN | Accuracy 71.1%
[57] | 2018 | Cerebellar Dysplasia | 90 patients, 40 HC | sMRI | FSL, ANTs | 3D CNN | Accuracy 98.5 ± 2.41%
[61] | 2020 | Conduct Disorder | 60 patients (age 15.3 ± 1.0 years), 60 HC (age 15.5 ± 0.7 years) | sMRI | - | 3D CNN | Accuracy 85%
[62] | 2021 | Disruptive Behavior Disorder | ABCD Study: 550 patients, 550 HC (age 9–11 years) | sMRI, rs-fMRI, DTI | FSL | 3D CNN | Accuracy 72%
[58] | 2020 | Dyslexia | 36 patients, 19 HC (age 9–12 years) | task fMRI | SPM | 3D CNN | Accuracy 72.73%
[94] | 2020 | Embryonic Neurodevelopmental Disorders | 114 patients, 113 HC (age 16–39 weeks) | sMRI | - | CNN | Accuracy 87.7%
[59] | 2020 | Epilepsy | 30 patients, 13 HC | sMRI | BET | CNN | Accuracy 66–73%
[60] | 2020 | Epilepsy | 59 patients, 70 HC (age 7–18 years) | DTI | SPM | CNN | Accuracy 90.75%
[70] | 2021 | Neonatal Hyperbilirubinemia | 47 patients, 32 HC (age 1–18 days) | sMRI | - | CNN | Accuracy 72.15%
[63] | 2021 | PTSD | 33 patients (age 14.3 ± 3.3 years), 53 HC (age 15.0 ± 2.3 years) | rs-fMRI | SPM12 | MLP | Accuracy 72%
[64] | 2020 | Tuber | 260 patients, 260 HC | sMRI | FSL | 3D CNN | Accuracy 97.1%
[65] | 2022 | Tuber | 296 patients, 245 HC (age 0–8 years) | sMRI | - | 3D CNN | Accuracy 86%
[71] | 2020 | Tuber | 114 patients (age 5–15.3 years), 114 HC (age 6.9–15.7 years) | sMRI | In-house pipeline | CNN | Accuracy 95%
[95] | 2021 | Tumor | 136 patients, 22 HC (age 0–11 years) | sMRI | SPM | CNN | Accuracy 87 ± 2%
[72] | 2020 | Tumor | 617 patients with tumor (age 0.2–34 years) | sMRI | Pydicom | CNN | Accuracy 72%
[66] | 2018 | Tumor | 233 subjects | sMRI | - | Capsule Network | Accuracy 86.56%
[96] | 2020 | Tumor | 39 pediatric patients | sMRI | - | CNN | Accuracy 87.8%
[67] | 2020 | White Matter Pathways | 89 patients with focal epilepsy (age 9.95 ± 5.41 years) | DTI | FreeSurfer | CNN | Accuracy 98%
[68] | 2019 | White Matter Pathways | 70 HC (age 12.01 ± 4.80 years), 70 patients with focal epilepsy (age 11.60 ± 4.80 years) | DTI | FreeSurfer, FSL, NIH TORTOISE | CNN | F1 score 0.9525 ± 0.0053
Abbreviations: ASD—Autism spectrum disorder, HC—healthy control, ADHD—Attention deficit hyperactivity disorder, sMRI—structural MRI, rs-fMRI—resting-state functional MRI, DTI—Diffusion Tensor Imaging, MLP—Multi-layer perceptron, CNN—Convolutional neural network.

3.2. Identifying Brain and Tissue Structures

Identifying brain and tissue structures is of great importance in facilitating studies that investigate changes in a specific region of interest. Accurate segmentation of brain tissues and structures lays the foundation for volumetric and morphologic analysis. Volumetric analysis of gray matter, white matter, cerebrospinal fluid, and specific brain structures such as the amygdala assists in the computer-aided diagnosis of neurodevelopmental disorders. Localization and segmentation of brain tumors are essential for assessment of the tumor burden as well as treatment response and tumor progression [97]. Brain masking isolates the brain from surrounding tissues across non-stationary 3D brain volumes in fMRI, which is important and challenging, especially for fetal imaging [98]. Specific challenges for pediatric brain segmentation arise from the variations in head size and shape in children compared to adults. Rapid changes in tissue contrast and the low contrast-to-noise ratio in fetal and newborn MRI demand further specialized techniques [99]. This study reviews the segmentation of pediatric brain tissues, structures, and tumors, and the masking of the fetal brain (Table 3).
Most of the studies employed U-net for segmentation, and Dice scores vary across studies. 3D U-net models were implemented for brain tissue and volume segmentation [25,100,101,102]. Transfer learning and active learning greatly reduced the number of samples that needed to be labeled for training a high-quality patch-wise segmentation method [99]. FetalGAN was proposed to segment fetal functional brain MRI using a segmentor as the generator in a GAN architecture and achieved better performance than 3D U-net [98]. Adversarial domain adaptation was used to adapt a pre-trained U-net to another segmentation task in an unsupervised manner [103]. Transfer learning and GANs offer the opportunity to train segmentation algorithms with weakly labeled or unlabeled data, which may greatly reduce the tedious and time-consuming process of creating ground truth for segmentation tasks.
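Since most segmentation studies below report the Dice score, a minimal implementation of this metric for binary masks may be helpful (assuming the common convention that two empty masks count as perfect agreement):

```python
import numpy as np

def dice_score(pred, target):
    """Dice coefficient between two binary masks:
    2*|A intersect B| / (|A| + |B|), ranging from 0 to 1."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * np.logical_and(pred, target).sum() / denom

# toy 2x3 segmentation masks (1 = foreground voxel)
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_score(a, b), 3))  # 0.667
```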
Table 3. Identifying brain and tissue structures.
Study | Year | Structure | Population | Technique | Preprocessing | Method | Results
[104] | 2020 | Amygdala | 171 infants (age 6 months), 204 infants (age 12 months), 201 infants (age 24 months) | sMRI | - | U-net | Dice score 0.882 (6-month), 0.882 (12-month), 0.903 (24-month)
[105] | 2020 | Anterior Visual Pathway | 18 subjects | sMRI | - | GAN | Dice score 0.602 ± 0.201
[106] | 2018 | Brain Mask | 10 adolescent subjects (age 10–15 years), 25 newborn subjects from dHCP dataset | sMRI | - | CNN | F1 score 95.21 ± 0.94 (adolescents), 90.24 ± 1.84 (newborns)
[99] | 2019 | Brain Mask | 10 adolescent subjects, 26 newborn subjects from dHCP dataset, 25 other subjects (age 0.2–2.5 years) | sMRI | - | CNN | Improved Dice score after labeling a very small portion (<0.25%) of the target dataset
[107] | 2020 | Brain Mask | 197 fetuses (gestation age 24–39 weeks) | rs-fMRI | FSL | U-net | Dice score 0.94
[98] | 2020 | Brain Mask | 71 scans of fetuses | rs-fMRI | AFNI | GAN | Dice score 0.973 ± 0.013
[108] | 2020 | Brain Mask | 37 healthy fetuses (gestation age 27.3 ± 4.11 weeks), 32 fetuses with spina bifida pre-surgery (gestation age 23.06 ± 1.64 weeks), 16 fetuses post-surgery (gestation age 25.69 ± 1.21 weeks) | sMRI | N4ITK | U-net | Dice score 0.9321 (healthy), 0.9387 (pre-surgery), 0.9294 (post-surgery)
[101] | 2021 | Brain Mask | 214 fetuses (gestation age 22–38 weeks) | sMRI | - | 3D U-net | Testing Dice score 0.944
[109] | 2021 | Brain Mask | 30 subjects (age 2.34–4.31 years) | sMRI | - | CNN | Dice score 0.90 ± 0.14
[110] | 2019 | Brain Tissue | 29 subjects (age 9.96 ± 7.16 years) | sMRI | - | 3D CNN | Dice score 0.888 (gray matter), 0.863 (white matter), 0.937 (CSF)
[111] | 2019 | Brain Tissue | 12 fetuses (gestation age 22.9–34.6 weeks) | sMRI | - | CNN | Dice score 0.88
[112] | 2019 | Brain Tissue | 95 very pre-term infants (gestation age 28.5 ± 2.5 weeks, scanned at term age), 28 very pre-term infants (gestation age 26.8 ± 2.1 weeks, scanned at term age) | sMRI | - | CNN | Dice score 0.895 ± 0.098, testing Dice score 0.845 ± 0.079
[113] | 2020 | Brain Tissue | 47 patients with pediatric hydrocephalus (age 5.8 ± 5.4 years) | sMRI | - | CNN | Dice score 0.86
[114] | 2021 | Brain Tissue | 35 subjects (age 4.2 ± 0.7 years) | sMRI | - | 3D CNN | JS = 0.83 (gray matter), JS = 0.92 (white matter)
[25] | 2021 | Brain Tissue | 98 preterm infants (gestation age ≤ 32 weeks) | DTI | In-house pipeline | 3D U-net | Dice score 0.907 ± 0.041
[102] | 2022 | Brain Tissue | 106 fetuses (gestation age 23–39 weeks) | sMRI | FSL | 3D U-net | Dice score 0.897
[115] | 2022 | Brain Tissue | dHCP dataset: 150 term (gestation age 37–44 weeks), 50 preterm (gestation age ≤ 32 weeks, scanned at term-equivalent age) | sMRI | - | CNN | Dice score 0.88
[116] | 2022 | Brain Tissue | 23 infants (age 6 ± 0.5 months) | sMRI | In-house pipeline | U-net | Dice score 0.92 (gray matter), 0.901 (white matter), 0.955 (CSF)
[117] | 2020 | Cerebral Arteries | 48 subjects (age 0.8–22 years) | sMRI | In-house pipeline | U-net | Testing Dice score 0.75
[118] | 2021 | Cerebral Ventricle | 200 patients with obstructive hydrocephalus (age 0–22 years), 199 HC (age 0–19 years) | sMRI | In-house pipeline | U-net | Dice score 0.901
[103] | 2021 | Cortical Parcellation Network | dHCP dataset: 403 infants; ePRIME dataset: 486 infants (gestation age 23–42 weeks, scanned at term-equivalent age) | sMRI | MRITK | GAN | Dice score 0.96–0.99
[119] | 2020 | Cortical Plate | 52 fetuses (gestation age 22.9–31.4 weeks) | sMRI | In-house pipeline | CNN | Testing Dice score 0.907 ± 0.027
[120] | 2021 | Cortical Plate | 12 fetuses (gestation age 16–39 weeks) | sMRI | AutoNet, ITK-SNAP | CNN | Dice score 0.87
[121] | 2019 | Intracranial Volume | 80 scans of fetuses (gestation age 22.9–34.6 weeks), 101 scans of infants (age 30–44 weeks) | sMRI | - | U-net | Dice score 0.976
[122] | 2022 | Limbic Structure | dHCP dataset: 473 subjects (40.65 ± 2.19) | sMRI | - | CNN | Dice score 0.87
[123] | 2022 | Posterior Limb of Internal Capsule | 450 preterm infants (gestation age ≤ 32 weeks, scanned at term-equivalent age) | sMRI | In-house pipeline | U-net | Dice score 0.690
[124] | 2022 | Tuber | 29 subjects (age 9.96 ± 7.16 years) | sMRI | - | U-net | Testing Dice score 0.59 ± 0.23
[125] | 2022 | Tumor | 311 pediatric subjects | sMRI | - | U-net | Dice score 0.773
[126] | 2022 | Tumor | 177 patients (age 0.27–17.87 years) | sMRI | CaPTk software | CNN | Dice score 0.910
[100] | 2022 | Tumor | 122 patients (age 0.2–17.9 years) | sMRI | ANTs | 3D U-net | Dice score 0.724
[97] | 2022 | Tumor | BraTS 2020 dataset: 369 patients; local dataset: 22 patients (average age 7.5–9 years) | sMRI | In-house pipeline | U-net | Dice score 0.896
Abbreviations: sMRI—structural MRI, rs-fMRI—resting-state functional MRI, DTI—Diffusion Tensor Imaging, CNN—Convolutional neural network, GAN—Generative adversarial network.

3.3. Predicting Brain Age

The brains of children develop through a rapid and complex stage, especially in children younger than two years. Early brain development is critical for cognitive, sensory, and motor abilities. Delayed brain development can lead to many neurodevelopmental disorders in children and affect their quality of life [127]. Accurate evaluation of brain development via neuroimaging-based brain age estimation is of clinical importance for understanding healthy brain development and studying brain maturity deviations caused by neurodevelopmental disorders [128].
We summarized age prediction studies involving both infants and young children (Table 4). Structural MRI techniques are commonly utilized in 2D and 3D CNN models. A study [128] using a 2D CNN on DTI achieved results comparable to human experts. Another study [127] demonstrated the superior performance of a 3D CNN compared to conventional machine learning approaches and a 2D CNN. A multimodal study [129] combined sMRI, rs-fMRI, and DTI features and yielded a mean absolute error of 0.381 years for children and adolescents aged 8–21 years. The age range of the study populations varies, and thus reporting the relative error rate is necessary for comparing methods across studies.
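As an illustration of these metrics, mean absolute error (MAE) and one possible relative error rate can be computed as below. Normalizing the MAE by the cohort's age span is our illustrative choice here, not a standard taken from the reviewed studies; the toy ages and predictions are likewise arbitrary.

```python
import numpy as np

def mae(y_true, y_pred):
    # mean absolute error between chronological and predicted ages
    return np.mean(np.abs(y_true - y_pred))

def relative_error(y_true, y_pred):
    """MAE divided by the span of ages in the cohort, so that models
    evaluated on cohorts with different age ranges can be compared."""
    return mae(y_true, y_pred) / (y_true.max() - y_true.min())

ages = np.array([8.0, 10.0, 14.0, 21.0])   # chronological ages (years)
preds = np.array([8.5, 9.0, 14.5, 20.0])   # predicted brain ages (years)
print(round(mae(ages, preds), 3))            # 0.75
print(round(relative_error(ages, preds), 3)) # 0.058
```

The same 0.75-year MAE would correspond to a much larger relative error in a narrow cohort (e.g., infants spanning one year), which is why the normalization matters.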

3.4. Predicting Neurodevelopment Outcomes

The relationship between brain structure and cognitive function is complex. Research on brain activity and connectivity builds network theory to capture brain developmental trajectories, yet relating basic structural properties of the brain to complex cognitive functions remains a challenge in neuroscience [138]. This study reviewed research correlating brain structure with measurable neurodevelopment outcomes such as fluid intelligence, language function, and motor function (Table 5).
The ABCD dataset provides neuroimaging data, including sMRI, rs-fMRI, and DTI, as well as cognitive assessments such as fluid intelligence and oral reading scores. Large-scale studies based on the ABCD dataset involve thousands of subjects and a variety of modalities to predict neurodevelopment outcomes [138,139,140,141,142]. CNN models have also been employed to predict motor function and cognitive deficits in very preterm infants [143,144].
Table 5. Predicting neurodevelopment outcomes.
Study | Year | Score | Population | Technique | Preprocessing | Method | Results
[143] | 2021 | Cognitive Deficits | 261 very preterm infants (gestation age ≤32 weeks, scan at 39–44 weeks postmenstrual age) | DTI, rs-fMRI | FSL | CNN | Accuracy 88.4%
[145] | 2020 | Fluid Intelligence | ABCD Study: 8333 subjects (age 9–10 years) | sMRI | - | 3D CNN | MSE 0.75626
[141] | 2021 | Fluid Intelligence | ABCD Dataset: 7709 subjects (age 9–10 years) | sMRI | FSL, AFNI, FreeSurfer | CNN | Pearson's correlation coefficient r = 0.18
[138] | 2022 | Fluid Intelligence | ABCD Dataset: 8070 subjects (age 9–11 years); HCP Dataset: 1079 subjects (age 22–35 years) | sMRI | FreeSurfer | CNN | MSE 0.919 (ABCD Dataset), 0.834 (HCP Dataset)
[140] | 2022 | Fluid Intelligence | ABCD Dataset: 7693 subjects (age 9–11 years) | rs-fMRI | FreeSurfer | CNN | MAE 5.582 ± 0.012
[142] | 2022 | Fluid Intelligence | ABCD Dataset: training 3739 subjects, validation 415 subjects, testing 4515 subjects (age 9–11 years) | sMRI | FSL, AFNI, FreeSurfer | CNN | MSE 82.56 (testing)
[146] | 2021 | Language Scores | 31 subjects with persistent language concerns (age 4.25 ± 2.38 years) | DTI | In-house pipeline | CNN | MAE 0.28
[147] | 2021 | Language Scores | 37 subjects with epilepsy (age 11.8 ± 3.1 years) | DTI | FSL | CNN | MAE 7.77
[144] | 2020 | Motor | 77 very preterm infants (gestation age <31 weeks) | DTI | ANTs | CNN | Accuracy 73%
[139] | 2021 | Oral Reading | ABCD Study: 5252 subjects (age 9–10 years) | sMRI, DTI | - | Auto-encoder | MSE 206.5
Abbreviations: sMRI—structural MRI, rs-fMRI—resting-state functional MRI, DTI—Diffusion Tensor Imaging, CNN—Convolutional neural network, MAE—mean absolute error, MSE—mean squared error.

3.5. Optimizing MRI Brain Imaging and Analysis

Assessing image quality and optimizing image acquisition are critical for medical imaging analysis. Reconstruction techniques adjust the scanning parameters to maximize image quality while controlling scanning time, which is of great benefit in pediatric imaging, where many subjects cannot stay still for long [148]. Furthermore, some scans may be missing or of low quality due to inadequate scanning time or failure of participants to complete the scan. Image generation algorithms synthesize pseudo-images from low-resolution images or latent space, providing a solution for recovering missing data or rectifying low-quality scans [149]. Here, we review deep learning algorithms for image quality assessment, reconstruction, and synthesis (Table 6).
Image quality assessment tools have been built with 2D CNNs for structural MRI and DTI [150,151,152]. Study [153] utilized a two-stage transfer learning strategy that showed near-perfect accuracy in evaluating image quality and is capable of real-time, large-scale assessment. GANs are widely applied in image generation tasks [149,154,155,156,157] and have shown great capability in generating synthetic images to impute missing data or improve the signal-to-noise ratio of poor-quality images [24,149]. Study [148] proposed CNN models for reconstruction that reduced scan time by 42% while maintaining image quality and lesion detectability. A CNN combined with an RNN also showed superior performance in improving the signal-to-noise ratio [24].
Table 6. Optimizing MRI brain imaging and analysis.
Study | Year | Task | Population | Technique | Preprocessing | Method | Results
[158] | 2020 | Image Enhancement | 131 neuro-oncology patients (age 0.4–17.1 years) | ASL | - | Auto-encoder | SNR gain 62%
[159] | 2018 | Image Generation | 28 infants (scan at birth, 3 months, and 6 months) | DTI | FSL | CNN | MAE 44.4 ± 17.5 (3-month-old from neonates), 40.1 ± 10.6 (6-month-old from 3-month-old)
[154] | 2019 | Image Generation | 16 subjects (age 1.1–21.3 years) | sMRI | - | GAN | MAE 52.4 ± 17.6
[155] | 2020 | Image Generation | 60 subjects (age 2.6–19 years) | sMRI | In-house pipeline | GAN | MAE 61.0 ± 14.1
[156] | 2022 | Image Generation | ABCD Dataset: 1517 subjects (age 9–10 years) | sMRI | - | GAN | PSNR 31.371 ± 1.813
[149] | 2022 | Image Generation | 127 neonates (postmenstrual age 41.1 ± 1.5 weeks) | sMRI | ANTs | 3D GAN | RMAE 5.6 ± 1.1%
[157] | 2022 | Image Generation | 125 subjects (age 1–20 years) | sMRI | FSL | GAN | PSNR 28.5 ± 2.2
[150] | 2019 | Image Quality Evaluation | ABIDE Dataset: 1112 subjects (age 7–64 years) | sMRI | SPM12 | CNN | Accuracy 84%
[153] | 2020 | Image Quality Evaluation | BCP Dataset: 534 images (age 0–6 years) | sMRI | - | CNN | Real-time, large-scale assessment with near-perfect accuracy
[151] | 2021 | Image Quality Evaluation | 211 fetuses (gestation age 30.9 ± 5.5 weeks) | sMRI | In-house pipeline | CNN | Accuracy 85 ± 1%
[152] | 2022 | Image Quality Evaluation | ABCD Dataset: 2494 subjects (age 9–10 years); HBN Dataset: 4226 subjects (age 5–21 years) | DTI | MATRIX, FSL | CNN | Accuracy 96.61% (ABCD Dataset), 97.52% (HBN Dataset)
[160] | 2021 | Image Reconstruction | 20 fetuses (gestation age 23.4–38 weeks) | DTI | SVR pipeline | CNN | RMSE 0.0379 ± 0.0030
[24] | 2021 | Image Reconstruction | 305 subjects (age 0–15 years) | sMRI | In-house pipeline | CNN+RNN | PSNR 27.85 ± 2.12
[161] | 2022 | Image Reconstruction | 107 subjects (age 0.2–18 years) | sMRI | - | CNN | Image quality improved significantly by qualitative assessment
[148] | 2022 | Image Reconstruction | 47 subjects (age 2.3–14.7 years) | sMRI | - | CNN | Reduced scan time by 42%
Abbreviations: sMRI—structural MRI, ASL—Arterial spin labeling, DTI—Diffusion Tensor Imaging, CNN—Convolutional neural network, GAN—Generative adversarial network, MAE—mean absolute error, PSNR—Peak signal-to-noise ratio.
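Several entries in Table 6 report PSNR, the standard fidelity metric for reconstruction and generation. A minimal NumPy sketch of the computation (the synthetic "image" and noise level are illustrative only):

```python
import numpy as np

def psnr(reference, reconstructed, max_val=1.0):
    """Peak signal-to-noise ratio in dB: 20*log10(MAX) - 10*log10(MSE)."""
    mse = np.mean((reference - reconstructed) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 20 * np.log10(max_val) - 10 * np.log10(mse)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))  # stand-in for an image slice scaled to [0, 1]
noisy = np.clip(ref + rng.normal(0, 0.05, ref.shape), 0, 1)
print(round(psnr(ref, noisy), 1))  # roughly 26 dB for noise std 0.05
```

Higher is better: the PSNR values around 28–31 dB in Table 6 indicate noticeably smaller reconstruction error than this noisy toy example.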

4. Discussion

4.1. Advancements in Deep Learning Applied to Pediatric MRI

This study reviews pediatric MRI studies for recognition, segmentation, and prediction tasks in neurodevelopment. Throughout the review, the CNN is the most commonly utilized model, and variations and advancements on the basic architecture have been proposed to improve performance across tasks. Multi-view 2D CNNs and 3D CNNs have been proposed to handle the 3D volumes in neuroimaging [57,82,84]: the multi-view 2D CNN processes a 3D volume as slices taken from the sagittal, axial, and coronal sections, while the 3D CNN uses 3D kernels in the network. Multi-branch CNN models also exploit multimodal imaging to study the brain from different perspectives. Structural and functional connectomes were combined for age prediction in study [129] and cognitive function prediction in study [139]. Multimodal studies classified children with ASD from healthy controls using combinations of sMRI and rs-fMRI [75,76,81]. sMRI provides structural information, fMRI provides information based on brain activity, and DTI provides information regarding quantitative anisotropy and orientation. Multimodal neuroimaging thus allows researchers to understand the brain from different perspectives and plays an essential role in investigating functional and structural brain changes in pediatric neurodevelopment. Variations of the U-net dominate segmentation tasks: the Dilated-Dense U-Net and U-nets with attention mechanisms achieved strong performance in brain structure segmentation [104,120], while semi-supervised learning and transfer learning enabled studies with small amounts of training data [103,122]. GANs show their superiority in image generation tasks, and variations of GANs have been proposed to synthesize pseudo-images from low-resolution images or latent space [149,155,156]. Overall, the growth of computational power has enabled deep learning models with more complex structures and greater ability to process 3D volumes for a variety of tasks.
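The multi-view strategy described above can be sketched in a few lines: the three 2D inputs a multi-view 2D CNN would process are slices taken through the center of the volume along each anatomical axis (the axis convention and grid size below are assumptions; real pipelines depend on the image orientation header):

```python
import numpy as np

def central_views(volume):
    """Extract the central sagittal, coronal, and axial slices of a 3D
    volume. Axis order (x=sagittal, y=coronal, z=axial) is an assumption."""
    x, y, z = volume.shape
    return {
        "sagittal": volume[x // 2, :, :],
        "coronal":  volume[:, y // 2, :],
        "axial":    volume[:, :, z // 2],
    }

volume = np.zeros((91, 109, 91))  # MNI-like grid, sizes illustrative
views = central_views(volume)
print({name: view.shape for name, view in views.items()})
```

In practice several slices per orientation are usually fed to the network, but the contrast with a 3D CNN is the same: the 2D branches never see the full volume at once.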

4.2. Challenges and Future Directions

4.2.1. Overfitting Caused by Small Sample Size

Overfitting remains a major concern for deep learning models with deep and complex architectures, especially models with 3D structures, since the number of trainable parameters grows sharply with the extra dimension [2]. The sample size must increase accordingly to train models with many parameters; otherwise, the model may overfit the training data and fail to predict new data accurately. However, neuroimaging acquisition via MRI is expensive and time-consuming, and many studies are limited to small amounts of training data, thus risking overfitting [162]. In our review, some studies report results using cross-validation, while others also report results on an independent testing dataset; the testing results are important indicators of the trained model's ability to generalize to unseen new data.
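The parameter growth from 2D to 3D convolutions can be made concrete with a quick count for a single layer (kernel and channel sizes here are illustrative, not taken from any reviewed model):

```python
def conv_params(kernel, in_ch, out_ch, bias=True):
    """Trainable parameters in one convolutional layer:
    out_ch * in_ch * prod(kernel) weights, plus out_ch biases."""
    weights = out_ch * in_ch
    for k in kernel:
        weights *= k
    return weights + (out_ch if bias else 0)

# A 3x3 2D layer vs. a 3x3x3 3D layer, both mapping 64 -> 64 channels
p2d = conv_params((3, 3), 64, 64)     # 64*64*9  + 64 = 36,928
p3d = conv_params((3, 3, 3), 64, 64)  # 64*64*27 + 64 = 110,656
print(p2d, p3d, round(p3d / p2d, 2))  # ~3x more parameters per layer
```

Roughly tripling the parameters of every layer, while fully 3D training sets remain small, is what makes 3D models especially prone to overfitting.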
Data-sharing projects and platforms provide a vast amount of neuroimaging data, facilitating large-scale studies that can train deep and complex models; we share a non-exhaustive list of available public datasets and repositories in Section 2. In common practice, supervised learning, in which the deep learning model is trained with labeled data, is the most widely applied learning paradigm [15,163]. Open datasets and repositories provide data and labels in pairs, where the labels can be disease diagnoses, clinical outcomes, or semantic segmentation ground truth. Beyond labeled data, there is a vast amount of neuroimaging data without labels or with only a limited number of labels, and unsupervised and semi-supervised learning show great potential in dealing with such data. Unsupervised learning uses training data without any labels, separating the data into categories based on patterns learned automatically during training [15,163]. Semi-supervised learning uses the unlabeled data to learn feature patterns and the labeled data to update model weights, which has yielded superior performance with a limited number of training samples in both classification and segmentation tasks [70,110]. Transfer learning offers another route to developing deep learning algorithms with limited training data: it takes a model pre-trained on a large dataset and fine-tunes it with a small amount of data, providing an applicable solution for neuroimaging studies with small sample sizes [60,94,97].

4.2.2. Inconsistent Preprocessing Pipelines

Preprocessing is another challenge in pediatric neuroimaging studies. In many tasks it is necessary to remove non-brain tissue and noise, especially for neuroimaging data of children with significant motion artifacts. However, replication and validation of results are often challenged by variations in data inclusion criteria and preprocessing pipelines. Common preprocessing steps for sMRI include brain extraction, normalization to standard templates, brain tissue segmentation, and brain surface reconstruction [93]. fMRI preprocessing steps include brain extraction, motion correction, slice-time correction, distortion correction, alignment to structural images, and confound regression [52,90]. DTI preprocessing steps include distortion correction, eddy current correction, brain extraction, alignment to structural images, and tensor fitting [60]. These steps may involve multiple preprocessing software packages, and different studies apply adjustments to different pipelines; we list the specified software and pipelines in our results. Common preprocessing software includes SPM [164], AFNI [165], ANTs [166], FSL [167], DPABI [168], and FreeSurfer [169]. Some studies use in-house preprocessing pipelines or do not specify the preprocessing steps. Preprocessing within a single research project can be time- and effort-consuming, while variation across preprocessing pipelines restricts the replication of research results.
Standardization in data preparation and preprocessing is urgently needed for conducting large-scale neuroimaging studies. Fortunately, various organizations have contributed efforts toward standardization. Many data-sharing platforms employ the Brain Imaging Data Structure (BIDS) format, a standardized way of organizing neuroimaging and behavioral data [170]. Furthermore, the ABIDE dataset and the ADHD-200 consortium release both raw and preprocessed data with shared preprocessing pipelines [31,34]. Standardization of preprocessing pipelines will greatly improve the efficacy of future neuroimaging studies.
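BIDS achieves this by prescribing a predictable directory and filename layout, so tools can locate data without study-specific configuration. A sketch of assembling a BIDS-style path for an anatomical scan (the subject and session labels are hypothetical):

```python
from pathlib import Path

def bids_anat_path(root, subject, session, suffix="T1w"):
    """Build a BIDS-style path for an anatomical image:
    <root>/sub-XX/ses-YY/anat/sub-XX_ses-YY_<suffix>.nii.gz"""
    sub, ses = f"sub-{subject}", f"ses-{session}"
    return Path(root) / sub / ses / "anat" / f"{sub}_{ses}_{suffix}.nii.gz"

path = bids_anat_path("/data/study", "01", "baseline")
print(path.as_posix())
# /data/study/sub-01/ses-baseline/anat/sub-01_ses-baseline_T1w.nii.gz
```

Because every BIDS dataset follows this pattern, the same loader code works across the data-sharing platforms mentioned above without per-dataset adjustments.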

4.2.3. Difficulty in Interpreting Deep Learning Results

Deep learning has been criticized for its "black-box" nature, which poses challenges for the interpretability and explainability of trained models and thus raises concerns for medical decision-making. A deep learning system must provide the rationale behind its decision-making process to make trustworthy predictions [171]. Various approaches have been proposed to interpret deep learning algorithms. One common method is the graph-based visualization approach, which identifies the regions critical for the predicted results based on activation maps derived from model weights [172,173]. Study [92] applied such an approach to identify the brain regions in which children with ADHD differed from controls. The attention mechanism, which focuses selectively on information of interest, also plays a vital role in the interpretability of deep learning [174]; functional connectivity differences between ADHD patients and healthy controls were identified using deep self-attention factorization in study [90]. Other interpretation techniques exist, such as feature importance and the analysis of trends and outliers in predictions, but the studies in this review have not utilized them. Deep model interpretation provides crucial information for understanding brain function and neurodevelopment, which is of great importance for pediatric neuroimaging studies, and interpretability should be one of the focuses of future neuroimaging research.
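The idea behind saliency-style interpretation can be illustrated with occlusion sensitivity, a simple relative of the activation-map methods mentioned above: patches of the input are masked in turn, and the drop in the model's score is recorded as a heatmap. The "model" below is a toy scoring function standing in for a trained network, so the example shows only the mechanism, not any result from the reviewed studies:

```python
import numpy as np

def occlusion_map(image, score_fn, patch=4):
    """Occlusion sensitivity: zero out each patch in turn and record the
    drop in the model's score. Large drops mark regions the prediction
    depends on. score_fn stands in for a trained network's output."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

image = np.ones((8, 8))
score_fn = lambda img: img[0:4, 0:4].sum()  # toy model: only top-left matters
heat = occlusion_map(image, score_fn)
print(heat)  # only the top-left patch shows a score drop
```

Applied to a real classifier and a brain slice, the resulting heatmap highlights candidate regions driving the prediction, which can then be checked against known neuroanatomy.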

4.3. Limitations

Although some studies did not specify their limitations, several common limitations are shared across individual studies. Firstly, many studies were trained with a limited number of training samples, risking overfitting, and the lack of independent testing results greatly restrains the generalizability of the trained models to unseen data. Secondly, in many studies the architectures of the deep neural networks were explored in a non-exhaustive manner, restricted by computational power. Thirdly, interpretation of the results is lacking in many studies, which inhibits the interpretability and explainability of the trained models. Lastly, for multi-site data acquired with different scanning protocols, confounding factors may introduce a risk of bias in the results.
This review systematically organized the most recent research on deep learning applied to pediatric MRI. However, we were unable to include the thousands of results returned by the Google Scholar and ScienceDirect databases, which remains a limitation of this study; further investigation of unlisted studies may employ automatic review tools for paper selection. In addition, the keywords selected for the review are not disorder-specific and hence may miss some studies that fit the inclusion criteria but were not captured by the initial search. Future reviews on specific disorders may address these limitations.

5. Conclusions

Deep learning plays an essential role in recent neuroimaging studies, and this review has illustrated advancements in its applications to pediatric neuroimaging. Complex deep learning models such as CNNs and GANs have shown superior performance in neuroimaging recognition, prediction, segmentation, and generation tasks, and semi-supervised learning has demonstrated great potential for utilizing weakly labeled or unlabeled data. Challenges such as overfitting, preprocessing variation, and interpretation issues remain in many neuroimaging studies, but data-sharing platforms, standardized preprocessing protocols, and advanced interpretation approaches have been proposed to tackle these difficulties. Future large-scale neuroimaging research will not only achieve high accuracy but also benefit the understanding of brain function and neurodevelopment.

Author Contributions

Writing—original draft preparation, M.H.; writing—review and editing, K.-K.A., H.Z. and C.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR), Singapore, and by the A*STAR Strategic Programme Funds Project No. C211817001 (Brain Body Initiative).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ABCD—The Adolescent Brain Cognitive Development (Study)
ABIDE—Autism Brain Imaging Data Exchange
ADHD—Attention deficit hyperactivity disorder
ASD—Autism spectrum disorder
ASL—Arterial spin labeling
CNN—Convolutional neural network
dHCP—Developing Human Connectome Project
DTI—Diffusion tensor imaging
fMRI—functional MRI
GAN—Generative adversarial network
HBN—Healthy Brain Network
HC—Healthy control
ICBM—International Consortium for Brain Mapping
IMPAC—Imaging-Psychiatry Challenge
MAE—mean absolute error
MLP—Multi-layer perceptron
MRI—Magnetic resonance imaging
MSE—mean squared error
NDAR—National Database for Autism Research
PNC—Philadelphia Neurodevelopmental Cohort
PRISMA—Preferred Reporting Items for Systematic Reviews and Meta-Analyses
PSNR—Peak signal-to-noise ratio
rs-fMRI—resting-state fMRI
sMRI—structural MRI

Appendix A. Risk of Bias Analysis

Risk of bias analysis was performed following the Risk Of Bias In Non-randomized Studies of Interventions (ROBINS-I) tool [48] for: (1) risk of bias due to confounding (age, gender, scanning parameters); (2) risk of bias in the selection of participants into the study (population, sample size); (3) risk of bias in the classification of interventions; (4) risk of bias due to deviations from intended interventions (unexpected results); (5) risk of bias due to missing data; (6) risk of bias arising from measurement of outcomes (assessment parameters, validation protocol, independent testing protocols); (7) risk of bias in the selection of reported results.
Each risk of bias is rated "N"—No, "PN"—Probably No, "PY"—Probably Yes, or "Y"—Yes. Most studies are well designed and have low risk on most criteria, while some studies with small sample sizes carry a risk of bias due to confounding, selection of participants, or measurement of outcomes. Studies with at least two "PY" ratings are rated "Moderate" in the summary. Ratings of individual studies are listed in Table A1.
Table A1. Risk of bias analysis.
Study | Confounding | Selection of Participants | Classification of Interventions | Deviations from Intended Interventions | Missing Data | Measurement of Outcomes | Selection of Reported Results | Summary
[79] | PN | PY | N | N | N | PY | N | Moderate
[80] | N | PY | N | N | N | PY | N | Moderate
[51] | PN | N | N | N | N | PY | N | Low
[81] | PN | PY | N | N | N | PY | N | Moderate
[53] | PN | PN | N | N | N | PY | N | Low
[52] | PN | PY | N | N | N | PY | N | Moderate
[55] | PN | N | N | N | N | PY | N | Low
[82] | PN | N | N | N | N | PY | N | Low
[69] | PN | PY | N | N | N | PY | N | Moderate
[74] | N | PY | N | N | N | PY | N | Moderate
[83] | PN | N | N | N | N | PY | N | Low
[84] | PN | N | N | N | N | PY | N | Low
[85] | N | PY | N | N | N | PY | N | Moderate
[76] | PN | N | N | N | N | PY | N | Low
[54] | PN | N | N | N | N | N | N | Low
[73] | PN | N | N | N | N | PY | N | Low
[86] | N | PN | N | N | N | PY | N | Low
[75] | PN | PN | N | N | N | PY | N | Low
[87] | PN | N | N | PY | N | PY | N | Moderate
[88] | PN | PN | N | N | N | PY | N | Low
[78] | PN | N | N | N | N | N | N | Low
[89] | PN | N | N | N | N | PY | N | Low
[90] | PN | N | N | N | N | PY | N | Low
[91] | PN | PY | N | N | N | N | N | Low
[77] | PN | N | N | N | N | PY | N | Low
[92] | N | PY | N | N | N | PY | N | Moderate
[93] | N | PN | N | N | N | PY | N | Low
[57] | N | PY | N | N | N | PY | N | Moderate
[61] | N | PY | N | N | N | PY | N | Moderate
[62] | PN | N | N | N | N | PY | N | Low
[58] | N | PY | N | N | N | PY | N | Moderate
[70] | N | PN | N | N | N | PY | N | Low
[59] | N | PY | N | N | N | PY | N | Moderate
[60] | N | PY | N | N | N | PY | N | Moderate
[94] | N | PY | N | N | N | PY | N | Moderate
[63] | N | PY | N | N | N | PY | N | Moderate
[64] | N | PN | N | N | N | PY | N | Low
[65] | N | PN | N | N | N | PY | N | Low
[71] | N | PN | N | N | N | PY | N | Low
[95] | PY | PY | N | N | N | PY | N | Moderate
[72] | N | N | N | N | N | PY | N | Low
[66] | N | PN | N | N | N | PY | N | Low
[96] | N | PY | N | N | N | PY | N | Moderate
[67] | N | PY | N | N | N | PY | N | Moderate
[68] | N | PY | N | N | N | PY | N | Moderate
[104] | N | PN | N | N | N | PY | N | Low
[105] | N | PY | N | N | N | PY | N | Moderate
[106] | N | PY | N | N | N | PY | N | Moderate
[99] | N | PY | N | N | N | PY | N | Moderate
[107] | N | PN | N | N | N | PY | N | Low
[98] | N | PY | N | N | N | PY | N | Moderate
[108] | N | PY | N | N | N | PY | N | Moderate
[101] | N | PN | N | N | N | PY | N | Low
[109] | N | PY | N | N | N | PY | N | Moderate
[110] | N | PY | N | N | N | PY | N | Moderate
[111] | N | PY | N | N | N | PY | N | Moderate
[112] | N | PY | N | N | N | PN | N | Low
[113] | N | PY | N | N | N | PY | N | Moderate
[114] | N | PY | N | N | N | PY | N | Moderate
[25] | N | PY | N | N | N | PY | N | Moderate
[102] | N | PN | N | N | N | PY | N | Low
[115] | PN | PN | N | N | N | PY | N | Low
[116] | N | PY | N | N | N | PY | N | Moderate
[117] | N | PY | N | N | N | PN | N | Low
[118] | N | PN | N | N | N | PY | N | Low
[103] | PN | PN | N | N | N | PY | N | Low
[119] | N | PY | N | N | N | PN | N | Low
[120] | N | PY | N | N | N | PY | N | Moderate
[121] | N | PY | N | N | N | PY | N | Moderate
[122] | PN | PN | N | N | N | PY | N | Low
[123] | N | PN | N | N | N | PY | N | Low
[124] | N | PY | N | N | N | PN | N | Low
[125] | N | PN | N | N | N | PY | N | Low
[126] | N | PN | N | N | N | PY | N | Low
[100] | N | PN | N | N | N | PY | N | Low
[97] | N | PN | N | N | N | PY | N | Low
[84] | N | PN | N | N | N | PY | N | Low
[130] | N | N | N | N | N | PY | N | Low
[131] | N | N | N | N | N | PY | N | Low
[132] | PN | N | N | N | N | N | N | Low
[127] | N | PN | N | N | N | PY | N | Low
[129] | N | N | N | N | N | PY | N | Low
[128] | N | PY | N | N | N | PY | N | Moderate
[133] | N | PY | N | N | N | PY | N | Moderate
[134] | N | PN | N | N | N | PY | N | Low
[135] | N | PN | N | N | N | PY | N | Low
[136] | N | PN | N | N | N | PY | N | Low
[137] | N | N | N | N | N | PY | N | Low
[143] | N | PN | N | N | N | PY | N | Low
[145] | PN | N | N | N | N | PY | N | Low
[141] | PN | N | N | N | N | PY | N | Low
[138] | PN | N | N | N | N | PY | N | Low
[140] | PN | N | N | N | N | PY | N | Low
[142] | PN | N | N | N | N | N | N | Low
[146] | N | PY | N | N | N | PY | N | Moderate
[147] | N | PY | N | N | N | PY | N | Moderate
[144] | N | PY | N | N | N | PY | N | Moderate
[139] | PN | N | N | N | N | PY | N | Low
[158] | N | PN | N | N | N | PY | N | Low
[159] | N | PY | N | N | N | PY | N | Moderate
[154] | N | PY | N | N | N | PY | N | Moderate
[155] | N | PY | N | N | N | PY | N | Moderate
[156] | N | N | N | N | N | PY | N | Low
[149] | N | PN | N | N | N | PY | N | Low
[157] | N | PN | N | N | N | PY | N | Low
[150] | PN | N | N | N | N | PY | N | Low
[153] | PN | N | N | N | N | PY | N | Low
[151] | N | PN | N | N | N | PY | N | Low
[152] | PN | N | N | N | N | PY | N | Low
[160] | N | PY | N | N | N | PY | N | Moderate
[24] | N | PN | N | N | N | PY | N | Low
[161] | N | PN | N | N | N | PN | N | Low
[148] | N | PN | N | N | N | PY | N | Low
Abbreviations: N—No, PN—Probably No, PY—Probably Yes.

References

  1. Jordan, M.I.; Mitchell, T.M. Machine learning: Trends, perspectives, and prospects. Science 2015, 349, 255–260. [Google Scholar] [CrossRef] [PubMed]
  2. Celard, P.; Iglesias, E.; Sorribes-Fdez, J.; Romero, R.; Vieira, A.S.; Borrajo, L. A survey on deep learning applied to medical images: From simple artificial neural networks to generative models. Neural Comput. Appl. 2022, 35, 2291–2323. [Google Scholar] [CrossRef] [PubMed]
  3. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  4. Gu, J.; Wang, Z.; Kuen, J.; Ma, L.; Shahroudy, A.; Shuai, B.; Liu, T.; Wang, X.; Wang, G.; Cai, J.; et al. Recent advances in convolutional neural networks. Pattern Recognit. 2018, 77, 354–377. [Google Scholar] [CrossRef]
  5. Hosny, A.; Parmar, C.; Quackenbush, J.; Schwartz, L.H.; Aerts, H.J.W.L. Artificial intelligence in radiology. Nat. Reviews. Cancer 2018, 18, 500–510. [Google Scholar] [CrossRef]
  6. Reig, B.; Heacock, L.; Geras, K.J.; Moy, L. Machine learning in breast MRI. J. Magn. Reson. Imaging JMRI 2020, 52, 998–1018. [Google Scholar] [CrossRef]
  7. Eslami, T.; Almuqhim, F.; Raiker, J.S.; Saeed, F. Machine learning methods for diagnosing autism spectrum disorder and attention-deficit/hyperactivity disorder using functional and structural MRI: A survey. Front. Neuroinformatics 2021, 14, 575999. [Google Scholar] [CrossRef]
  8. Zhang, Z.; Li, G.; Xu, Y.; Tang, X. Application of artificial intelligence in the MRI classification task of human brain neurological and psychiatric diseases: A scoping review. Diagnostics 2021, 11, 1402. [Google Scholar] [CrossRef]
  9. Yousaf, T.; Dervenoulas, G.; Politis, M. Advances in MRI methodology. Int. Rev. Neurobiol. 2018, 141, 31–76. [Google Scholar] [CrossRef]
  10. Pykett, I.L.; Newhouse, J.H.; Buonanno, F.S.; Brady, T.J.; Goldman, M.R.; Kistler, J.P.; Pohost, G.M. Principles of nuclear magnetic resonance imaging. Radiology 1982, 143, 157–168. [Google Scholar] [CrossRef]
  11. Van Geuns, R.J.M.; Wielopolski, P.A.; de Bruin, H.G.; Rensing, B.J.; van Ooijen, P.M.; Hulshoff, M.; Oudkerk, M.; de Feyter, P.J. Basic principles of magnetic resonance imaging. Prog. Cardiovasc. Dis. 1999, 42, 149–156. [Google Scholar] [CrossRef]
  12. Huettel, S.A.; Song, A.W.; McCarthy, G. Functional Magnetic Resonance Imaging; Sinauer Associates Sunderland: Sunderland, MA, USA, 2004; Volume 1. [Google Scholar]
  13. Mori, S.; Zhang, J. Principles of diffusion tensor imaging and its applications to basic neuroscience research. Neuron 2006, 51, 527–539. [Google Scholar] [CrossRef] [PubMed]
  14. Colombo, E.; Fick, T.; Esposito, G.; Germans, M.; Regli, L.; van Doormaal, T. Segmentation techniques of brain arteriovenous malformations for 3D visualization: A systematic review. Radiol. Medica 2022, 127, 1333–1341. [Google Scholar] [CrossRef] [PubMed]
  15. Castiglioni, I.; Rundo, L.; Codari, M.; Di Leo, G.; Salvatore, C.; Interlenghi, M.; Gallivanone, F.; Cozzi, A.; D’Amico, N.C.; Sardanelli, F. AI applications to medical images: From machine learning to deep learning. Phys. Medica 2021, 83, 9–24. [Google Scholar] [CrossRef] [PubMed]
  16. Khodatars, M.; Shoeibi, A.; Sadeghi, D.; Ghaasemi, N.; Jafari, M.; Moridian, P.; Khadem, A.; Alizadehsani, R.; Zare, A.; Kong, Y.; et al. Deep learning for neuroimaging-based diagnosis and rehabilitation of autism spectrum disorder: A review. Comput. Biol. Med. 2021, 139, 104949. [Google Scholar] [CrossRef] [PubMed]
  17. Bahathiq, R.A.; Banjar, H.; Bamaga, A.K.; Jarraya, S.K. Machine learning for autism spectrum disorder diagnosis using structural magnetic resonance imaging: Promising but challenging. Front. Neuroinform. 2022, 16, 949926. [Google Scholar] [CrossRef] [PubMed]
  18. Wang, S.; Di, J.; Wang, D.; Dai, X.; Hua, Y.; Gao, X.; Zheng, A.; Gao, J. State-of-the-Art Review of Artificial Neural Networks to Predict, Characterize and Optimize Pharmaceutical Formulation. Pharmaceutics 2022, 14, 183. [Google Scholar] [CrossRef] [PubMed]
  19. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  20. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar] [CrossRef]
  21. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef]
  22. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar] [CrossRef]
  23. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241. [Google Scholar] [CrossRef]
  24. Li, Z.; Yu, J.; Wang, Y.; Zhou, H.; Yang, H.; Qiao, Z. Deepvolume: Brain structure and spatial connection-aware network for brain mri super-resolution. IEEE Trans. Cybern. 2019, 51, 3441–3454. [Google Scholar] [CrossRef]
  25. Li, H.; Chen, M.; Wang, J.; Illapani, V.S.P.; Parikh, N.A.; He, L. Automatic Segmentation of Diffuse White Matter Abnormality on T2-weighted Brain MR Images Using Deep Learning in Very Preterm Infants. Radiol. Artif. Intell. 2021, 3, e200166. [Google Scholar] [CrossRef]
  26. Yuan, J.; Ran, X.; Liu, K.; Yao, C.; Yao, Y.; Wu, H.; Liu, Q. Machine learning applications on neuroimaging for diagnosis and prognosis of epilepsy: A review. J. Neurosci. Methods 2021, 368, 109441. [Google Scholar] [CrossRef] [PubMed]
  27. Elbattah, M.; Loughnane, C.; Guérin, J.L.; Carette, R.; Cilia, F.; Dequen, G. Variational Autoencoder for Image-Based Augmentation of Eye-Tracking Data. J. Imaging 2021, 7, 83. [Google Scholar] [CrossRef] [PubMed]
  28. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. In Proceedings of the Advances in Neural Information Processing Systems; Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N., Weinberger, K.Q., Eds.; Curran Associates, Inc.: New York, NY, USA, 2014; Volume 27. [Google Scholar]
  29. Pan, Z.; Yu, W.; Yi, X.; Khan, A.; Yuan, F.; Zheng, Y. Recent progress on generative adversarial networks (GANs): A survey. IEEE Access 2019, 7, 36322–36333. [Google Scholar] [CrossRef]
  30. Yi, X.; Walia, E.; Babyn, P. Generative adversarial network in medical imaging: A review. Med. Image Anal. 2019, 58, 101552. [Google Scholar] [CrossRef]
  31. Di Martino, A.; Yan, C.G.; Li, Q.; Denio, E.; Castellanos, F.X.; Alaerts, K.; Anderson, J.S.; Assaf, M.; Bookheimer, S.Y.; Dapretto, M.; et al. The autism brain imaging data exchange: Towards a large-scale evaluation of the intrinsic brain architecture in autism. Mol. Psychiatry 2014, 19, 659–667. [Google Scholar] [CrossRef]
  32. Di Martino, A.; O’Connor, D.; Chen, B.; Alaerts, K.; Anderson, J.S.; Assaf, M.; Balsters, J.H.; Baxter, L.; Beggiato, A.; Bernaerts, S.; et al. Enhancing studies of the connectome in autism using the autism brain imaging data exchange II. Sci. Data 2017, 4, 170010. [Google Scholar] [CrossRef]
  33. IMPAC—Imaging-Psychiatry Challenge: Predicting Autism. Available online: https://paris-saclay-cds.github.io/autism_challenge/ (accessed on 15 December 2022).
  34. Consortium, T.A. The ADHD-200 consortium: A model to advance the translational potential of neuroimaging in clinical neuroscience. Front. Syst. Neurosci. 2012, 6, 62. [Google Scholar] [CrossRef]
  35. Sudlow, C.; Gallacher, J.; Allen, N.; Beral, V.; Burton, P.; Danesh, J.; Downey, P.; Elliott, P.; Green, J.; Landray, M.; et al. UK biobank: An open access resource for identifying the causes of a wide range of complex diseases of middle and old age. PLoS Med. 2015, 12, e1001779. [Google Scholar] [CrossRef]
  36. Payakachat, N.; Tilford, J.M.; Ungar, W.J. National Database for Autism Research (NDAR): Big Data Opportunities for Health Services Research and Health Technology Assessment. PharmacoEconomics 2016, 34, 127–138. [Google Scholar] [CrossRef]
  37. Poldrack, R.A.; Barch, D.M.; Mitchell, J.P.; Wager, T.D.; Wagner, A.D.; Devlin, J.T.; Cumba, C.; Koyejo, O.; Milham, M.P. Toward open sharing of task-based fMRI data: The OpenfMRI project. Front. Neuroinform. 2013, 7, 12. [Google Scholar] [CrossRef]
  38. Mazziotta, J.; Toga, A.; Evans, A.; Fox, P.; Lancaster, J.; Zilles, K.; Woods, R.; Paus, T.; Simpson, G.; Pike, B.; et al. A probabilistic atlas and reference system for the human brain: International Consortium for Brain Mapping (ICBM). Philos. Trans. R. Soc. London. Ser. B Biol. Sci. 2001, 356, 1293–1322. [Google Scholar] [CrossRef] [PubMed]
  39. Yan, C.G.; Craddock, R.C.; Zuo, X.N.; Zang, Y.F.; Milham, M.P. Standardizing the intrinsic brain: Towards robust measurement of inter-individual variation in 1000 functional connectomes. Neuroimage 2013, 80, 246–262. [Google Scholar] [CrossRef] [PubMed]
  40. Casey, B.J.; Cannonier, T.; Conley, M.I.; Cohen, A.O.; Barch, D.M.; Heitzeg, M.M.; Soules, M.E.; Teslovich, T.; Dellarco, D.V.; Garavan, H. The adolescent brain cognitive development (ABCD) study: Imaging acquisition across 21 sites. Dev. Cogn. Neurosci. 2018, 32, 43–54. [Google Scholar] [CrossRef]
  41. Thompson, P.M.; Stein, J.L.; Medland, S.E.; Hibar, D.P.; Vasquez, A.A.; Renteria, M.E.; Toro, R.; Jahanshad, N.; Schumann, G.; Franke, B. The ENIGMA Consortium: Large-scale collaborative analyses of neuroimaging and genetic data. Brain Imaging Behav. 2014, 8, 153–182. [Google Scholar] [CrossRef]
  42. Satterthwaite, T.D.; Elliott, M.A.; Ruparel, K.; Loughead, J.; Prabhakaran, K.; Calkins, M.E.; Hopson, R.; Jackson, C.; Keefe, J.; Riley, M. Neuroimaging of the Philadelphia neurodevelopmental cohort. Neuroimage 2014, 86, 544–553. [Google Scholar] [CrossRef]
  43. Alexander, L.M.; Escalera, J.; Ai, L.; Andreotti, C.; Febre, K.; Mangone, A.; Vega-Potler, N.; Langer, N.; Alexander, A.; Kovacs, M. An open resource for transdiagnostic research in pediatric mental health and learning disorders. Sci. Data 2017, 4, 1–26. [Google Scholar] [CrossRef]
  44. Van Essen, D.C.; Ugurbil, K.; Auerbach, E.; Barch, D.; Behrens, T.E.J.; Bucholz, R.; Chang, A.; Chen, L.; Corbetta, M.; Curtiss, S.W. The Human Connectome Project: A data acquisition perspective. Neuroimage 2012, 62, 2222–2231. [Google Scholar] [CrossRef]
  45. Howell, B.R.; Styner, M.A.; Gao, W.; Yap, P.T.; Wang, L.; Baluyot, K.; Yacoub, E.; Chen, G.; Potts, T.; Salzwedel, A.; et al. The UNC/UMN Baby Connectome Project (BCP): An overview of the study design and protocol development. NeuroImage 2019, 185, 891–905. [Google Scholar] [CrossRef]
46. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Ann. Intern. Med. 2009, 151, 264–269. [Google Scholar] [CrossRef]
  47. Tricco, A.C.; Lillie, E.; Zarin, W.; O’Brien, K.K.; Colquhoun, H.; Levac, D.; Moher, D.; Peters, M.D.J.; Horsley, T.; Weeks, L. PRISMA extension for scoping reviews (PRISMA-ScR): Checklist and explanation. Ann. Intern. Med. 2018, 169, 467–473. [Google Scholar] [CrossRef] [PubMed]
48. Sterne, J.A.; Hernán, M.A.; Reeves, B.C.; Savović, J.; Berkman, N.D.; Viswanathan, M.; Henry, D.; Altman, D.G.; Ansari, M.T.; Boutron, I.; et al. ROBINS-I: A tool for assessing risk of bias in non-randomised studies of interventions. BMJ 2016, 355, i4919. [Google Scholar] [CrossRef]
49. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders, 5th ed.; American Psychiatric Association: Washington, DC, USA, 2013. [Google Scholar]
  50. Morris-Rosendahl, D.J.; Crocq, M.A. Neurodevelopmental disorders-the history and future of a diagnostic concept. Dialogues Clin. Neurosci. 2020, 22, 65–72. [Google Scholar] [CrossRef] [PubMed]
  51. Dvornek, N.C.; Ventola, P.; Duncan, J.S. Combining phenotypic and resting-state fMRI data for autism classification with recurrent neural networks. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 725–728. [Google Scholar]
  52. Xiao, Z.; Wu, J.; Wang, C.; Jia, N.; Yang, X. Computer-aided diagnosis of school-aged children with ASD using full frequency bands and enhanced SAE: A multi-institution study. Exp. Ther. Med. 2019, 17, 4055–4063. [Google Scholar] [CrossRef] [PubMed]
  53. Aghdam, M.A.; Sharifi, A.; Pedram, M.M. Diagnosis of autism spectrum disorders in young children based on resting-state functional magnetic resonance imaging data using convolutional neural networks. J. Digit. Imaging 2019, 32, 899–918. [Google Scholar] [CrossRef]
  54. Li, H.; Parikh, N.A.; He, L. A novel transfer learning approach to enhance deep neural network classification of brain functional connectomes. Front. Neurosci. 2018, 12, 491. [Google Scholar] [CrossRef]
  55. Leming, M.; Górriz, J.M.; Suckling, J. Ensemble deep learning on large, mixed-site fMRI datasets in autism and other tasks. Int. J. Neural Syst. 2020, 30, 2050012. [Google Scholar] [CrossRef] [PubMed]
  56. Sibley, M.H.; Swanson, J.M.; Arnold, L.E.; Hechtman, L.T.; Owens, E.B.; Stehli, A.; Abikoff, H.; Hinshaw, S.P.; Molina, B.S.; Mitchell, J.T.; et al. Defining ADHD symptom persistence in adulthood: Optimizing sensitivity and specificity. J. Child Psychol. Psychiatry 2017, 58, 655–662. [Google Scholar] [CrossRef]
  57. Ceschin, R.; Zahner, A.; Reynolds, W.; Gaesser, J.; Zuccoli, G.; Lo, C.W.; Gopalakrishnan, V.; Panigrahy, A. A computational framework for the detection of subcortical brain dysmaturation in neonatal MRI using 3D Convolutional Neural Networks. NeuroImage 2018, 178, 183–197. [Google Scholar] [CrossRef]
  58. Zahia, S.; Garcia-Zapirain, B.; Saralegui, I.; Fernandez-Ruanova, B. Dyslexia detection using 3D convolutional neural networks and functional magnetic resonance imaging. Comput. Methods Programs Biomed. 2020, 197, 105726. [Google Scholar] [CrossRef]
  59. Aminpour, A.; Ebrahimi, M.; Widjaja, E. Deep learning-based lesion segmentation in paediatric epilepsy. In Proceedings of the Medical Imaging 2021: Computer-Aided Diagnosis, Online, 15–19 February 2021; SPIE: Washington, DC, USA, 2021; Volume 11597, pp. 635–641. [Google Scholar] [CrossRef]
  60. Huang, J.; Xu, J.; Kang, L.; Zhang, T. Identifying epilepsy based on deep learning using DKI images. Front. Hum. Neurosci. 2020, 14, 590815. [Google Scholar] [CrossRef] [PubMed]
  61. Zhang, J.; Li, X.; Li, Y.; Wang, M.; Huang, B.; Yao, S.; Shen, L. Three dimensional convolutional neural network-based classification of conduct disorder with structural MRI. Brain Imaging Behav. 2020, 14, 2333–2340. [Google Scholar] [CrossRef] [PubMed]
  62. Menon, S.S.; Krishnamurthy, K. Multimodal Ensemble Deep Learning to Predict Disruptive Behavior Disorders in Children. Front. Neuroinform. 2021, 15, 742807. [Google Scholar] [CrossRef] [PubMed]
  63. Yang, J.; Lei, D.; Qin, K.; Pinaya, W.H.; Suo, X.; Li, W.; Li, L.; Kemp, G.J.; Gong, Q. Using deep learning to classify pediatric posttraumatic stress disorder at the individual level. BMC Psychiatry 2021, 21, 535. [Google Scholar] [CrossRef] [PubMed]
64. Jiang, D.; Hu, Z.; Zhao, C.; Zhao, X.; Yang, J.; Zhu, Y.; Liao, J.; Liang, D.; Wang, H. Identification of Children’s Tuberous Sclerosis Complex with Multiple-contrast MRI and 3D Convolutional Network. In Proceedings of the 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Glasgow, Scotland, UK, 11–15 July 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 2924–2927. [Google Scholar]
  65. Shabanian, M.; Imran, A.A.Z.; Siddiqui, A.; Davis, R.L.; Bissler, J.J. 3D deep neural network to automatically identify TSC structural brain pathology based on MRI. In Proceedings of the Medical Imaging 2022: Image Processing, San Diego, CA, USA, 20–24 February 2022; SPIE: Washington, DC, USA, 2022; Volume 12032, pp. 613–619. [Google Scholar]
  66. Afshar, P.; Mohammadi, A.; Plataniotis, K.N. Brain tumor type classification via capsule networks. In Proceedings of the 2018 25th IEEE international conference on image processing (ICIP), Athens, Greece, 7–10 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 3129–3133. [Google Scholar] [CrossRef]
  67. Lee, M.H.; O’Hara, N.; Sonoda, M.; Kuroda, N.; Juhasz, C.; Asano, E.; Dong, M.; Jeong, J.W. Novel deep learning network analysis of electrical stimulation mapping-driven diffusion MRI tractography to improve preoperative evaluation of pediatric epilepsy. IEEE Trans. Biomed. Eng. 2020, 67, 3151–3162. [Google Scholar] [CrossRef] [PubMed]
  68. Xu, H.; Dong, M.; Lee, M.H.; O’Hara, N.; Asano, E.; Jeong, J.W. Objective detection of eloquent axonal pathways to minimize postoperative deficits in pediatric epilepsy surgery using diffusion tractography and convolutional neural networks. IEEE Trans. Med. Imaging 2019, 38, 1910–1922. [Google Scholar] [CrossRef]
  69. Yang, M.; Cao, M.; Chen, Y.; Chen, Y.; Fan, G.; Li, C.; Wang, J.; Liu, T. Large-scale brain functional network integration for discrimination of autism using a 3-D deep learning model. Front. Hum. Neurosci. 2021, 15, 277. [Google Scholar] [CrossRef]
  70. Wu, M.; Shen, X.; Lai, C.; Zheng, W.; Li, Y.; Shangguan, Z.; Yan, C.; Liu, T.; Wu, D. Detecting neonatal acute bilirubin encephalopathy based on T1-weighted MRI images and learning-based approaches. BMC Med. Imaging 2021, 21, 103. [Google Scholar] [CrossRef]
  71. Sánchez Fernández, I.; Yang, E.; Calvachi, P.; Amengual-Gual, M.; Wu, J.Y.; Krueger, D.; Northrup, H.; Bebin, M.E.; Sahin, M.; Yu, K.H.; et al. Deep learning in rare disease. Detection of tubers in tuberous sclerosis complex. PLoS ONE 2020, 15, e0232376. [Google Scholar] [CrossRef]
  72. Quon, J.; Bala, W.; Chen, L.; Wright, J.; Kim, L.; Han, M.; Shpanskaya, K.; Lee, E.; Tong, E.; Iv, M.; et al. Deep learning for pediatric posterior fossa tumor detection and classification: A multi-institutional study. Am. J. Neuroradiol. 2020, 41, 1718–1725. [Google Scholar] [CrossRef]
  73. Li, S.; Tang, Z.; Jin, N.; Yang, Q.; Liu, G.; Liu, T.; Hu, J.; Liu, S.; Wang, P.; Hao, J.; et al. Uncovering Brain Differences in Preschoolers and Young Adolescents with Autism Spectrum Disorder Using Deep Learning. Int. J. Neural Syst. 2022, 32, 2250044. [Google Scholar] [CrossRef] [PubMed]
  74. Haweel, R.; Shalaby, A.; Mahmoud, A.; Seada, N.; Ghoniemy, S.; Ghazal, M.; Casanova, M.F.; Barnes, G.N.; El-Baz, A. A robust DWT–CNN-based CAD system for early diagnosis of autism using task-based fMRI. Med. Phys. 2021, 48, 2315–2326. [Google Scholar] [CrossRef] [PubMed]
  75. Mellema, C.J.; Nguyen, K.P.; Treacher, A.; Montillo, A. Reproducible neuroimaging features for diagnosis of autism spectrum disorder with machine learning. Sci. Rep. 2022, 12, 3057. [Google Scholar] [CrossRef] [PubMed]
  76. Leming, M.J.; Baron-Cohen, S.; Suckling, J. Single-participant structural similarity matrices lead to greater accuracy in classification of participants than function in autism in MRI. Mol. Autism 2021, 12, 34. [Google Scholar] [CrossRef] [PubMed]
  77. Chen, M.; Li, H.; Fan, H.; Dillman, J.R.; Wang, H.; Altaye, M.; Zhang, B.; Parikh, N.A.; He, L. ConCeptCNN: A novel multi-filter convolutional neural network for the prediction of neurodevelopmental disorders using brain connectome. Med. Phys. 2022, 49, 3171–3184. [Google Scholar] [CrossRef]
  78. Zhang-James, Y.; Helminen, E.C.; Liu, J.; Franke, B.; Hoogman, M.; Faraone, S.V. Evidence for similar structural brain anomalies in youth and adult attention-deficit/hyperactivity disorder: A machine learning analysis. Transl. Psychiatry 2021, 11, 82. [Google Scholar] [CrossRef]
  79. Guo, X.; Dominick, K.C.; Minai, A.A.; Li, H.; Erickson, C.A.; Lu, L.J. Diagnosing autism spectrum disorder from brain resting-state functional connectivity patterns using a deep neural network with a novel feature selection method. Front. Neurosci. 2017, 11, 460. [Google Scholar] [CrossRef]
  80. Li, X.; Dvornek, N.C.; Zhuang, J.; Ventola, P.; Duncan, J.S. Brain biomarker interpretation in ASD using deep learning and fMRI. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Granada, Spain, 16–20 September 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 206–214. [Google Scholar]
  81. Akhavan Aghdam, M.; Sharifi, A.; Pedram, M.M. Combination of rs-fMRI and sMRI data to discriminate autism spectrum disorders in young children using deep belief network. J. Digit. Imaging 2018, 31, 895–903. [Google Scholar] [CrossRef]
  82. Ke, F.; Choi, S.; Kang, Y.H.; Cheon, K.A.; Lee, S.W. Exploring the structural and strategic bases of autism spectrum disorders with deep learning. IEEE Access 2020, 8, 153341–153352. [Google Scholar] [CrossRef]
  83. Husna, R.N.S.; Syafeeza, A.; Hamid, N.A.; Wong, Y.; Raihan, R.A. Functional magnetic resonance imaging for autism spectrum disorder detection using deep learning. J. Teknol. 2021, 83, 45–52. [Google Scholar] [CrossRef]
  84. Kawahara, J.; Brown, C.J.; Miller, S.P.; Booth, B.G.; Chau, V.; Grunau, R.E.; Zwicker, J.G.; Hamarneh, G. BrainNetCNN: Convolutional neural networks for brain networks; towards predicting neurodevelopment. NeuroImage 2017, 146, 1038–1049. [Google Scholar] [CrossRef] [PubMed]
  85. Gao, K.; Sun, Y.; Niu, S.; Wang, L. Unified framework for early stage status prediction of autism based on infant structural magnetic resonance imaging. Autism Res. 2021, 14, 2512–2523. [Google Scholar] [CrossRef] [PubMed]
  86. Guo, X.; Wang, J.; Wang, X.; Liu, W.; Yu, H.; Xu, L.; Li, H.; Wu, J.; Dong, M.; Tan, W.; et al. Diagnosing autism spectrum disorder in children using conventional MRI and apparent diffusion coefficient based deep learning algorithms. Eur. Radiol. 2022, 32, 761–770. [Google Scholar] [CrossRef]
87. Wang, T.; Kamata, S.I. Classification of structural MRI images in ADHD using 3D fractal dimension complexity map. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 215–219. [Google Scholar]
  88. Riaz, A.; Asad, M.; Alonso, E.; Slabaugh, G. DeepFMRI: End-to-end deep learning for functional connectivity and classification of ADHD using fMRI. J. Neurosci. Methods 2020, 335, 108506. [Google Scholar] [CrossRef]
  89. Tang, Y.; Sun, J.; Wang, C.; Zhong, Y.; Jiang, A.; Liu, G.; Liu, X. ADHD classification using auto-encoding neural network and binary hypothesis testing. Artif. Intell. Med. 2022, 123, 102209. [Google Scholar] [CrossRef]
  90. Ke, H.; Wang, F.; Ma, H.; He, Z. ADHD identification and its interpretation of functional connectivity using deep self-attention factorization. Knowl.-Based Syst. 2022, 250, 109082. [Google Scholar] [CrossRef]
91. Wang, D.; Hong, D.; Wu, Q. Attention Deficit Hyperactivity Disorder Classification Based on Deep Learning. IEEE/ACM Trans. Comput. Biol. Bioinform. 2022, 1. [Google Scholar] [CrossRef]
  92. Uyulan, C.; Erguzel, T.T.; Turk, O.; Farhad, S.; Metin, B.; Tarhan, N. A Class Activation Map-Based Interpretable Transfer Learning Model for Automated Detection of ADHD from fMRI Data. Clin. EEG Neurosci. 2022, 54, 15500594221122699. [Google Scholar] [CrossRef]
93. Stanley, E.A.M.; Rajashekar, D.; Mouches, P.; Wilms, M.; Plettl, K.; Forkert, N.D. A fully convolutional neural network for explainable classification of attention deficit hyperactivity disorder. In Proceedings of the Medical Imaging 2022: Computer-Aided Diagnosis, San Diego, CA, USA, 20–24 February 2022; SPIE: Washington, DC, USA, 2022; Volume 12033, pp. 296–301. [Google Scholar]
  94. Attallah, O.; Sharkas, M.A.; Gadelkarim, H. Deep learning techniques for automatic detection of embryonic neurodevelopmental disorders. Diagnostics 2020, 10, 27. [Google Scholar] [CrossRef]
  95. Artzi, M.; Redmard, E.; Tzemach, O.; Zeltser, J.; Gropper, O.; Roth, J.; Shofty, B.; Kozyrev, D.A.; Constantini, S.; Ben-Sira, L. Classification of pediatric posterior fossa tumors using convolutional neural network and tabular data. IEEE Access 2021, 9, 91966–91973. [Google Scholar] [CrossRef]
  96. Prince, E.W.; Whelan, R.; Mirsky, D.M.; Stence, N.; Staulcup, S.; Klimo, P.; Anderson, R.C.; Niazi, T.N.; Grant, G.; Souweidane, M.; et al. Robust deep learning classification of adamantinomatous craniopharyngioma from limited preoperative radiographic images. Sci. Rep. 2020, 10, 1–13. [Google Scholar] [CrossRef] [PubMed]
  97. Nalepa, J.; Adamski, S.; Kotowski, K.; Chelstowska, S.; Machnikowska-Sokolowska, M.; Bozek, O.; Wisz, A.; Jurkiewicz, E. Segmenting pediatric optic pathway gliomas from MRI using deep learning. Comput. Biol. Med. 2022, 142, 105237. [Google Scholar] [CrossRef] [PubMed]
  98. Asis-Cruz, D.; Krishnamurthy, D.; Jose, C.; Cook, K.M.; Limperopoulos, C. FetalGAN: Automated Segmentation of Fetal Functional Brain MRI Using Deep Generative Adversarial Learning and Multi-Scale 3D U-Net. Front. Neurosci. 2022, 16, 852. [Google Scholar] [CrossRef]
  99. Sourati, J.; Gholipour, A.; Dy, J.G.; Tomas-Fernandez, X.; Kurugol, S.; Warfield, S.K. Intelligent labeling based on fisher information for medical image segmentation using deep learning. IEEE Trans. Med. Imaging 2019, 38, 2642–2653. [Google Scholar] [CrossRef] [PubMed]
  100. Peng, J.; Kim, D.D.; Patel, J.B.; Zeng, X.; Huang, J.; Chang, K.; Xun, X.; Zhang, C.; Sollee, J.; Wu, J.; et al. Deep learning-based automatic tumor burden assessment of pediatric high-grade gliomas, medulloblastomas, and other leptomeningeal seeding tumors. Neuro-oncology 2022, 24, 289–299. [Google Scholar] [CrossRef] [PubMed]
  101. Avisdris, N.; Yehuda, B.; Ben-Zvi, O.; Link-Sourani, D.; Ben-Sira, L.; Miller, E.; Zharkov, E.; Ben Bashat, D.; Joskowicz, L. Automatic linear measurements of the fetal brain on MRI with deep neural networks. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 1481–1492. [Google Scholar] [CrossRef]
  102. Zhao, L.; Asis-Cruz, J.; Feng, X.; Wu, Y.; Kapse, K.; Largent, A.; Quistorff, J.; Lopez, C.; Wu, D.; Qing, K.; et al. Automated 3D Fetal Brain Segmentation Using an Optimized Deep Learning Approach. Am. J. Neuroradiol. 2022, 43, 448–454. [Google Scholar] [CrossRef]
  103. Grigorescu, I.; Vanes, L.; Uus, A.; Batalle, D.; Cordero-Grande, L.; Nosarti, C.; Edwards, A.D.; Hajnal, J.V.; Modat, M.; Deprez, M. Harmonized segmentation of neonatal brain MRI. Front. Neurosci. 2021, 15, 662005. [Google Scholar] [CrossRef]
  104. Li, G.; Chen, M.H.; Li, G.; Wu, D.; Lian, C.; Sun, Q.; Rushmore, R.J.; Wang, L. Volumetric Analysis of Amygdala and Hippocampal Subfields for Infants with Autism. J. Autism Dev. Disord. 2022, 1–15. [Google Scholar] [CrossRef]
  105. Tor-Diez, C.; Porras, A.R.; Packer, R.J.; Avery, R.A.; Linguraru, M.G. Unsupervised MRI homogenization: Application to pediatric anterior visual pathway segmentation. In Proceedings of the International Workshop on Machine Learning in Medical Imaging 2020, Lima, Peru, 4 October 2020; pp. 180–188. [Google Scholar] [CrossRef]
  106. Sourati, J.; Gholipour, A.; Dy, J.G.; Kurugol, S.; Warfield, S.K. Active deep learning with fisher information for patch-wise semantic segmentation. In Proceedings of the Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, 20 September 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 83–91. [Google Scholar] [CrossRef]
  107. Rutherford, S.; Sturmfels, P.; Angstadt, M.; Hect, J.; Wiens, J.; van den Heuvel, M.I.; Scheinost, D.; Sripada, C.; Thomason, M. Automated brain masking of fetal functional MRI with open data. Neuroinformatics 2022, 20, 173–185. [Google Scholar] [CrossRef]
  108. Ebner, M.; Wang, G.; Li, W.; Aertsen, M.; Patel, P.A.; Aughwane, R.; Melbourne, A.; Doel, T.; Dymarkowski, S.; De Coppi, P.; et al. An automated framework for localization, segmentation and super-resolution reconstruction of fetal brain MRI. NeuroImage 2020, 206, 116324. [Google Scholar] [CrossRef]
  109. Bermudez, C.; Blaber, J.; Remedios, S.W.; Reynolds, J.E.; Lebel, C.; McHugo, M.; Heckers, S.; Huo, Y.; Landman, B.A. Generalizing deep whole brain segmentation for pediatric and post-contrast MRI with augmented transfer learning. In Proceedings of the Medical Imaging 2020: Image Processing, Houston, TX, USA, 17–20 February 2020; SPIE: Washington, DC, USA, 2020; Volume 11313, pp. 111–118. [Google Scholar]
  110. Enguehard, J.; O’Halloran, P.; Gholipour, A. Semi-supervised learning with deep embedded clustering for image classification and segmentation. IEEE Access 2019, 7, 11093–11104. [Google Scholar] [CrossRef] [PubMed]
  111. Khalili, N.; Lessmann, N.; Turk, E.; Claessens, N.; de Heus, R.; Kolk, T.; Viergever, M.A.; Benders, M.J.; Išgum, I. Automatic brain tissue segmentation in fetal MRI using convolutional neural networks. Magn. Reson. Imaging 2019, 64, 77–89. [Google Scholar] [CrossRef] [PubMed]
  112. Li, H.; Parikh, N.A.; Wang, J.; Merhar, S.; Chen, M.; Parikh, M.; Holland, S.; He, L. Objective and automated detection of diffuse white matter abnormality in preterm infants using deep convolutional neural networks. Front. Neurosci. 2019, 13, 610. [Google Scholar] [CrossRef]
  113. Grimm, F.; Edl, F.; Kerscher, S.R.; Nieselt, K.; Gugel, I.; Schuhmann, M.U. Semantic segmentation of cerebrospinal fluid and brain volume with a convolutional neural network in pediatric hydrocephalus—transfer learning from existing algorithms. Acta Neurochir. 2020, 162, 2463–2474. [Google Scholar] [CrossRef] [PubMed]
  114. Yang, R.; Zuo, H.; Han, S.; Zhang, X.; Zhang, Q. Computer-Aided Diagnosis of Children with Cerebral Palsy under Deep Learning Convolutional Neural Network Image Segmentation Model Combined with Three-Dimensional Cranial Magnetic Resonance Imaging. J. Healthc. Eng. 2021, 2021, 1822776. [Google Scholar] [CrossRef]
  115. Uus, A.U.; Ayub, M.U.; Gartner, A.; Kyriakopoulou, V.; Pietsch, M.; Grigorescu, I.; Christiaens, D.; Hutter, J.; Grande, L.C.; Price, A. Segmentation of Periventricular White Matter in Neonatal Brain MRI: Analysis of Brain Maturation in Term and Preterm Cohorts. In Proceedings of the International Workshop on Preterm, Perinatal and Paediatric Image Analysis, Messina, Italy, 13–15 July 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 94–104. [Google Scholar]
  116. Luan, X.; Li, W.; Liu, L.; Shu, Y.; Guo, Y. Rubik-Net: Learning Spatial Information via Rotation-Driven Convolutions for Brain Segmentation. IEEE J. Biomed. Health Inform. 2021, 26, 289–300. [Google Scholar] [CrossRef]
  117. Quon, J.L.; Chen, L.C.; Kim, L.; Grant, G.A.; Edwards, M.S.; Cheshier, S.H.; Yeom, K.W. Deep learning for automated delineation of pediatric cerebral arteries on pre-operative brain magnetic resonance imaging. Front. Surg. 2020, 7, 89. [Google Scholar] [CrossRef]
  118. Quon, J.L.; Han, M.; Kim, L.H.; Koran, M.E.; Chen, L.C.; Lee, E.H.; Wright, J.; Ramaswamy, V.; Lober, R.M.; Taylor, M.D.; et al. Artificial intelligence for automatic cerebral ventricle segmentation and volume calculation: A clinical tool for the evaluation of pediatric hydrocephalus. J. Neurosurg. Pediatr. 2020, 27, 131–138. [Google Scholar] [CrossRef]
  119. Hong, J.; Yun, H.J.; Park, G.; Kim, S.; Laurentys, C.T.; Siqueira, L.C.; Tarui, T.; Rollins, C.K.; Ortinau, C.M.; Grant, P.E.; et al. Fetal cortical plate segmentation using fully convolutional networks with multiple plane aggregation. Front. Neurosci. 2020, 14, 591683. [Google Scholar] [CrossRef]
  120. Dou, H.; Karimi, D.; Rollins, C.K.; Ortinau, C.M.; Vasung, L.; Velasco-Annis, C.; Ouaalam, A.; Yang, X.; Ni, D.; Gholipour, A. A deep attentive convolutional neural network for automatic cortical plate segmentation in fetal MRI. IEEE Trans. Med. Imaging 2020, 40, 1123–1133. [Google Scholar] [CrossRef] [PubMed]
  121. Khalili, N.; Turk, E.; Benders, M.; Moeskops, P.; Claessens, N.; de Heus, R.; Franx, A.; Wagenaar, N.; Breur, J.; Viergever, M.; et al. Automatic extraction of the intracranial volume in fetal and neonatal MR scans using convolutional neural networks. Neuroimage Clin. 2019, 24, 102061. [Google Scholar] [CrossRef] [PubMed]
  122. Wang, Y.; Haghpanah, F.S.; Zhang, X.; Santamaria, K.; da Costa Aguiar Alves, G.K.; Bruno, E.; Aw, N.; Maddocks, A.; Duarte, C.S.; Monk, C.; et al. ID-Seg: An infant deep learning-based segmentation framework to improve limbic structure estimates. Brain Inform. 2022, 9, 12. [Google Scholar] [CrossRef]
  123. Gruber, N.; Galijasevic, M.; Regodic, M.; Grams, A.E.; Siedentopf, C.; Steiger, R.; Hammerl, M.; Haltmeier, M.; Gizewski, E.R.; Janjic, T. A deep learning pipeline for the automated segmentation of posterior limb of internal capsule in preterm neonates. Artif. Intell. Med. 2022, 132, 102384. [Google Scholar] [CrossRef] [PubMed]
  124. Park, D.K.; Kim, W.; Thornburg, O.S.; McBrian, D.K.; McKhann, G.M.; Feldstein, N.A.; Maddocks, A.B.; Gonzalez, E.; Shen, M.Y.; Akman, C.; et al. Convolutional neural network-aided tuber segmentation in tuberous sclerosis complex patients correlates with electroencephalogram. Epilepsia 2022, 63, 1530–1541. [Google Scholar] [CrossRef] [PubMed]
125. Vafaeikia, P.; Wagner, M.W.; Hawkins, C.; Tabori, U.; Ertl-Wagner, B.B.; Khalvati, F. Improving the segmentation of pediatric low-grade gliomas through multitask learning. In Proceedings of the 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Glasgow, Scotland, UK, 11–15 July 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 2119–2122. [Google Scholar]
126. Madhogarhia, R.; Kazerooni, A.F.; Arif, S.; Ware, J.B.; Familiar, A.M.; Vidal, L.; Bagheri, S.; Anderson, H.; Haldar, D.; Yagoda, S. Automated segmentation of pediatric brain tumors based on multi-parametric MRI and deep learning. In Proceedings of the Medical Imaging 2022: Computer-Aided Diagnosis, San Diego, CA, USA, 20–24 February 2022; SPIE: Washington, DC, USA, 2022; Volume 12033, pp. 723–731. [Google Scholar]
  127. Hong, J.; Feng, Z.; Wang, S.H.; Peet, A.; Zhang, Y.D.; Sun, Y.; Yang, M. Brain age prediction of children using routine brain MR images via deep learning. Front. Neurol. 2020, 11, 584682. [Google Scholar] [CrossRef]
  128. Kawaguchi, M.; Kidokoro, H.; Ito, R.; Shiraki, A.; Suzuki, T.; Maki, Y.; Tanaka, M.; Sakaguchi, Y.; Yamamoto, H.; Takahashi, Y.; et al. Age estimates from brain magnetic resonance images of children younger than two years of age using deep learning. Magn. Reson. Imaging 2021, 79, 38–44. [Google Scholar] [CrossRef]
  129. Niu, X.; Zhang, F.; Kounios, J.; Liang, H. Improved prediction of brain age using multimodal neuroimaging data. Hum. Brain Mapp. 2020, 41, 1626–1643. [Google Scholar] [CrossRef]
  130. Shabanian, M.; Eckstein, E.C.; Chen, H.; DeVincenzo, J.P. Classification of neurodevelopmental age in normal infants using 3D-CNN based on brain MRI. In Proceedings of the 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), San Diego, CA, USA, 18–21 November 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 2373–2378. [Google Scholar]
  131. Hu, W.; Cai, B.; Zhang, A.; Calhoun, V.D.; Wang, Y.P. Deep collaborative learning with application to the study of multimodal brain development. IEEE Trans. Biomed. Eng. 2019, 66, 3346–3359. [Google Scholar] [CrossRef]
  132. Qu, T.; Yue, Y.; Zhang, Q.; Wang, C.; Zhang, Z.; Lu, G.; Du, W.; Li, X. Baenet: A brain age estimation network with 3d skipping and outlier constraint loss. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 399–403. [Google Scholar]
  133. Shabanian, M.; Wenzel, M.; DeVincenzo, J.P. Infant brain age classification: 2D CNN outperforms 3D CNN in small dataset. In Proceedings of the Medical Imaging 2022: Image Processing, San Diego, CA, USA, 20–24 February 2022; SPIE: Washington, DC, USA, 2022; Volume 12032, pp. 626–633. [Google Scholar]
  134. Wada, A.; Saito, Y.; Fujita, S.; Irie, R.; Akashi, T.; Sano, K.; Kato, S.; Ikenouchi, Y.; Hagiwara, A.; Sato, K.; et al. Automation of a Rule-based Workflow to Estimate Age from Brain MR Imaging of Infants and Children Up to 2 Years Old Using Stacked Deep Learning. Magn. Reson. Med. Sci. 2023, 22, 57–66. [Google Scholar] [CrossRef]
  135. Hong, J.; Yun, H.J.; Park, G.; Kim, S.; Ou, Y.; Vasung, L.; Rollins, C.K.; Ortinau, C.M.; Takeoka, E.; Akiyama, S.; et al. Optimal method for fetal brain age prediction using multiplanar slices from structural magnetic resonance imaging. Front. Neurosci. 2021, 15, 1284. [Google Scholar] [CrossRef]
  136. Zhang, Q.; He, Y.; Qu, T.; Yang, F.; Lin, Y.; Hu, Z.; Li, X.; Xu, Q.; Xing, W.; Gumenyuk, V.; et al. Delayed brain development of Rolandic epilepsy profiled by deep learning–based neuroanatomic imaging. Eur. Radiol. 2021, 31, 9628–9637. [Google Scholar] [CrossRef]
  137. Taoudi-Benchekroun, Y.; Christiaens, D.; Grigorescu, I.; Gale-Grant, O.; Schuh, A.; Pietsch, M.; Chew, A.; Harper, N.; Falconer, S.; Poppe, T.; et al. Predicting age and clinical risk from the neonatal connectome. NeuroImage 2022, 257, 119319. [Google Scholar] [CrossRef]
  138. Wu, Y.; Besson, P.; Azcona, E.A.; Bandt, S.K.; Parrish, T.B.; Breiter, H.C.; Katsaggelos, A.K. A multicohort geometric deep learning study of age dependent cortical and subcortical morphologic interactions for fluid intelligence prediction. Sci. Rep. 2022, 12, 17760. [Google Scholar] [CrossRef]
  139. Liu, M.; Zhang, Z.; Dunson, D.B. Graph auto-encoding brain networks with applications to analyzing large-scale brain imaging datasets. Neuroimage 2021, 245, 118750. [Google Scholar] [CrossRef]
  140. Huang, S.G.; Xia, J.; Xu, L.; Qiu, A. Spatio-temporal directed acyclic graph learning with attention mechanisms on brain functional time series and connectivity. Med. Image Anal. 2022, 77, 102370. [Google Scholar] [CrossRef]
  141. Saha, S.; Pagnozzi, A.; Bradford, D.; Fripp, J. Predicting fluid intelligence in adolescence from structural MRI with deep learning methods. Intelligence 2021, 88, 101568. [Google Scholar] [CrossRef]
  142. Li, M.; Jiang, M.; Zhang, G.; Liu, Y.; Zhou, X. Prediction of fluid intelligence from T1-w MRI images: A precise two-step deep learning framework. PLoS ONE 2022, 17, e0268707. [Google Scholar] [CrossRef] [PubMed]
  143. He, L.; Li, H.; Chen, M.; Wang, J.; Altaye, M.; Dillman, J.R.; Parikh, N.A. Deep multimodal learning from MRI and clinical data for early prediction of neurodevelopmental deficits in very preterm infants. Front. Neurosci. 2021, 15, 753033. [Google Scholar] [CrossRef] [PubMed]
  144. Saha, S.; Pagnozzi, A.; Bourgeat, P.; George, J.M.; Bradford, D.; Colditz, P.B.; Boyd, R.N.; Rose, S.E.; Fripp, J.; Pannek, K. Predicting motor outcome in preterm infants from very early brain diffusion MRI using a deep learning convolutional neural network (CNN) model. Neuroimage 2020, 215, 116807. [Google Scholar] [CrossRef] [PubMed]
  145. Han, S.; Zhang, Y.; Ren, Y.; Posner, J.; Yoo, S.; Cha, J. 3D distributed deep learning framework for prediction of human intelligence from brain MRI. In Proceedings of the Medical Imaging 2020: Biomedical Applications in Molecular, Structural, and Functional Imaging, Houston, TX, USA, 18–20 February 2020; SPIE: Washington, DC, USA, 2020; Volume 11317, pp. 484–490. [Google Scholar]
  146. Jeong, J.W.; Banerjee, S.; Lee, M.H.; O’Hara, N.; Behen, M.; Juhasz, C.; Dong, M. Deep reasoning neural network analysis to predict language deficits from psychometry-driven DWI connectome of young children with persistent language concerns. Hum. Brain Mapp. 2021, 42, 3326–3338. [Google Scholar] [CrossRef] [PubMed]
  147. Jeong, J.W.; Lee, M.H.; O’Hara, N.; Juhász, C.; Asano, E. Prediction of baseline expressive and receptive language function in children with focal epilepsy using diffusion tractography-based deep learning network. Epilepsy Behav. 2021, 117, 107909. [Google Scholar] [CrossRef] [PubMed]
  148. Kim, E.; Cho, H.H.; Cho, S.; Park, B.; Hong, J.; Shin, K.; Hwang, M.; You, S.; Lee, S. Accelerated Synthetic MRI with Deep Learning–Based Reconstruction for Pediatric Neuroimaging. Am. J. Neuroradiol. 2022, 43, 1653–1659. [Google Scholar] [CrossRef] [PubMed]
  149. Kaplan, S.; Perrone, A.; Alexopoulos, D.; Kenley, J.K.; Barch, D.M.; Buss, C.; Elison, J.T.; Graham, A.M.; Neil, J.J.; O’Connor, T.G.; et al. Synthesizing pseudo-T2w images to recapture missing data in neonatal neuroimaging with applications in rs-fMRI. NeuroImage 2022, 253, 119091. [Google Scholar] [CrossRef]
  150. Sujit, S.J.; Coronado, I.; Kamali, A.; Narayana, P.A.; Gabr, R.E. Automated image quality evaluation of structural brain MRI using an ensemble of deep learning networks. J. Magn. Reson. Imaging 2019, 50, 1260–1267. [Google Scholar] [CrossRef]
  151. Largent, A.; Kapse, K.; Barnett, S.D.; De Asis-Cruz, J.; Whitehead, M.; Murnick, J.; Zhao, L.; Andersen, N.; Quistorff, J.; Lopez, C.; et al. Image quality assessment of fetal brain MRI using multi-instance deep learning methods. J. Magn. Reson. Imaging 2021, 54, 818–829. [Google Scholar] [CrossRef]
  152. Ettehadi, N.; Kashyap, P.; Zhang, X.; Wang, Y.; Semanek, D.; Desai, K.; Guo, J.; Posner, J.; Laine, A.F. Automated Multiclass Artifact Detection in Diffusion MRI Volumes via 3D Residual Squeeze-and-Excitation Convolutional Neural Networks. Front. Hum. Neurosci. 2022, 16, 877326. [Google Scholar] [CrossRef]
  153. Liu, S.; Thung, K.H.; Lin, W.; Yap, P.T.; Shen, D. Real-time quality assessment of pediatric MRI via semi-supervised deep nonlocal residual neural networks. IEEE Trans. Image Process. 2020, 29, 7697–7706. [Google Scholar] [CrossRef]
154. Wang, C.; Uh, J.; He, X.; Hua, C.-H.; Acharya, S. Transfer learning-based synthetic CT generation for MR-only proton therapy planning in children with pelvic sarcomas. In Proceedings of the Medical Imaging 2021: Physics of Medical Imaging, Online, 5–19 February 2021; SPIE: Washington, DC, USA, 2021; Volume 11595, pp. 1112–1118. [Google Scholar]
  155. Maspero, M.; Bentvelzen, L.G.; Savenije, M.H.; Guerreiro, F.; Seravalli, E.; Janssens, G.O.; van den Berg, C.A.; Philippens, M.E. Deep learning-based synthetic CT generation for paediatric brain MR-only photon and proton radiotherapy. Radiother. Oncol. 2020, 153, 197–204. [Google Scholar] [CrossRef]
  156. Zhang, H.; Li, H.; Dillman, J.R.; Parikh, N.A.; He, L. Multi-Contrast MRI Image Synthesis Using Switchable Cycle-Consistent Generative Adversarial Networks. Diagnostics 2022, 12, 816. [Google Scholar] [CrossRef]
157. Wang, C.; Uh, J.; Merchant, T.E.; Hua, C.-H.; Acharya, S. Facilitating MR-Guided Adaptive Proton Therapy in Children Using Deep Learning-Based Synthetic CT. Int. J. Part. Ther. 2022, 8, 11–20. [Google Scholar] [CrossRef] [PubMed]
158. Hales, P.W.; Pfeuffer, J.; Clark, C.A. Combined denoising and suppression of transient artifacts in arterial spin labeling MRI using deep learning. J. Magn. Reson. Imaging 2020, 52, 1413–1426. [Google Scholar] [CrossRef]
  159. Kim, J.; Hong, Y.; Chen, G.; Lin, W.; Yap, P.T.; Shen, D. Graph-based deep learning for prediction of longitudinal infant diffusion MRI data. In Proceedings of the Computational Diffusion MRI: International MICCAI Workshop, Granada, Spain, 22 September 2018; Springer: Berlin/Heidelberg, Germany, 2019; pp. 133–141. [Google Scholar] [CrossRef]
  160. Karimi, D.; Jaimes, C.; Machado-Rivas, F.; Vasung, L.; Khan, S.; Warfield, S.K.; Gholipour, A. Deep learning-based parameter estimation in fetal diffusion-weighted MRI. Neuroimage 2021, 243, 118482. [Google Scholar] [CrossRef] [PubMed]
  161. Kim, S.H.; Choi, Y.H.; Lee, J.S.; Lee, S.B.; Cho, Y.J.; Lee, S.H.; Shin, S.M.; Cheon, J.E. Deep learning reconstruction in pediatric brain MRI: Comparison of image quality with conventional T2-weighted MRI. Neuroradiology 2022, 65, 1–8. [Google Scholar] [CrossRef] [PubMed]
  162. Winterburn, J.L.; Voineskos, A.N.; Devenyi, G.A.; Plitman, E.; de la Fuente-Sandoval, C.; Bhagwat, N.; Graff-Guerrero, A.; Knight, J.; Chakravarty, M.M. Can we accurately classify schizophrenia patients from healthy controls using magnetic resonance imaging and machine learning? A multi-method and multi-dataset study. Schizophr. Res. 2019, 214, 3–10. [Google Scholar] [CrossRef]
  163. Bishop, C.M.; Nasrabadi, N.M. Pattern Recognition and Machine Learning; Springer: Berlin/Heidelberg, Germany, 2006; Volume 4. [Google Scholar]
  164. Tzourio-Mazoyer, N.; Landeau, B.; Papathanassiou, D.; Crivello, F.; Etard, O.; Delcroix, N.; Mazoyer, B.; Joliot, M. Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. Neuroimage 2002, 15, 273–289. [Google Scholar] [CrossRef]
  165. Cox, R.W. AFNI: Software for analysis and visualization of functional magnetic resonance neuroimages. Comput. Biomed. Res. 1996, 29, 162–173. [Google Scholar] [CrossRef]
  166. Avants, B.B.; Tustison, N.; Song, G. Advanced normalization tools (ANTS). Insight J. 2009, 2, 1–35. [Google Scholar]
  167. Smith, S.M.; Jenkinson, M.; Woolrich, M.W.; Beckmann, C.F.; Behrens, T.E.J.; Johansen-Berg, H.; Bannister, P.R.; De Luca, M.; Drobnjak, I.; Flitney, D.E. Advances in functional and structural MR image analysis and implementation as FSL. Neuroimage 2004, 23, S208–S219. [Google Scholar] [CrossRef]
  168. Yan, C.G.; Wang, X.D.; Zuo, X.N.; Zang, Y.F. DPABI: Data processing & analysis for (resting-state) brain imaging. Neuroinformatics 2016, 14, 339–351. [Google Scholar] [CrossRef]
  169. Fischl, B. FreeSurfer. Neuroimage 2012, 62, 774–781. [Google Scholar] [CrossRef] [PubMed]
  170. Gorgolewski, K.J.; Auer, T.; Calhoun, V.D.; Craddock, R.C.; Das, S.; Duff, E.P.; Flandin, G.; Ghosh, S.S.; Glatard, T.; Halchenko, Y.O.; et al. The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments. Sci. Data 2016, 3, 160044. [Google Scholar] [CrossRef] [PubMed]
  171. Yang, G.; Ye, Q.; Xia, J. Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond. Inf. Fusion 2022, 77, 29–52. [Google Scholar] [CrossRef] [PubMed]
  172. Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; Torralba, A. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2921–2929. [Google Scholar] [CrossRef]
  173. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar] [CrossRef]
  174. Guidotti, R.; Monreale, A.; Ruggieri, S.; Turini, F.; Giannotti, F.; Pedreschi, D. A survey of methods for explaining black box models. ACM Comput. Surv. (CSUR) 2018, 51, 1–42. [Google Scholar] [CrossRef]
Figure 1. Architecture of multi-layer perceptron (MLP) [18].
Figure 2. Architecture of convolutional neural networks [2].
Figure 3. Architecture of U-net [26].
Figure 4. Architecture of auto-encoder [27].
Figure 5. Architecture of generative adversarial networks (GAN) [29].
Figure 6. Study selection procedure.
Table 1. Public datasets.

| Dataset | No. of Sites/Projects | Population | Technique | Citation |
|---|---|---|---|---|
| Autism Brain Imaging Data Exchange I (ABIDE I) | 17 independent imaging sites | 539 subjects with ASD and 573 healthy controls (age 7–64 years) | sMRI, rs-fMRI | [31] |
| Autism Brain Imaging Data Exchange II (ABIDE II) | 19 independent imaging sites | 521 subjects with ASD and 593 healthy controls (age 5–64 years) | sMRI, rs-fMRI, DTI | [32] |
| IMaging-PsychiAtry Challenge (IMPAC) | - | 549 subjects with ASD and 601 healthy controls (age 0–80 years) | sMRI, rs-fMRI | [33] |
| ADHD-200 Consortium | 8 independent imaging sites | 285 subjects with ADHD and 491 healthy controls (age 7–21 years) | sMRI, rs-fMRI | [34] |
| UK Biobank | - | 500,000 subjects (age 40–69 years) | sMRI, rs-fMRI, DTI | [35] |
| National Database for Autism Research (NDAR) | hundreds of research projects | 117,573 subjects (57,510 affected subjects and 59,763 control subjects) | sMRI, rs-fMRI, DTI | [36] |
| OpenfMRI | 95 datasets | 3375 subjects across all datasets | sMRI, rs-fMRI, task fMRI | [37] |
| International Consortium for Brain Mapping (ICBM) | - | 853 subjects (age 18–89 years) | sMRI, rs-fMRI, DTI | [38] |
| 1000 Functional Connectomes | 33 independent imaging sites | 1355 subjects (age 13–80 years) | rs-fMRI | [39] |
| The Adolescent Brain Cognitive Development (ABCD) Study | - | 12,000 subjects (age 9–10 years) | sMRI, rs-fMRI, task fMRI | [40] |
| ENIGMA ADHD working group | 34 cohorts | over 4000 subjects | sMRI, rs-fMRI, DTI | [41] |
| Philadelphia Neurodevelopmental Cohort (PNC) | - | 9500 subjects (age 8–21 years) | sMRI, rs-fMRI, task fMRI, DTI | [42] |
| Healthy Brain Network (HBN) | - | 10,000 subjects (age 5–21 years) | sMRI, rs-fMRI, task fMRI, DTI | [43] |
| Human Connectome Project Development (HCP-D) | - | 1350 subjects (age 5–21 years) | sMRI, rs-fMRI, task fMRI | [44] |
| The UNC/UMN Baby Connectome Project (BCP) | 2 sites | 500 subjects (age 0–5 years) | sMRI, rs-fMRI, DTI | [45] |

Abbreviations: sMRI—structural MRI, rs-fMRI—resting-state functional MRI, DTI—Diffusion Tensor Imaging.
Table 4. Predicting brain age.

| Study | Year | Population | Technique | Preprocessing | Method | Results |
|---|---|---|---|---|---|---|
| [84] | 2017 | 115 infants (gestational age 24–32 weeks) | DTI | In-house pipeline | CNN | MAE 2.17 weeks |
| [130] | 2019 | 317 MRI images of 112 infants at age 2 weeks (8–35 days), 12 months (±2 weeks), and 3 years (±4 weeks) | sMRI | In-house pipeline | 3D CNN | Accuracy 98.4% classifying three age groups |
| [131] | 2019 | PNC dataset: 857 subjects (age 8–22 years), 20% children, 20% young adults | rs-fMRI | SPM12 | MLP | Accuracy 96.64% distinguishing children from young adults |
| [132] | 2020 | ABIDE II dataset: 382 subjects; ADHD-200 consortium: 378 subjects | sMRI | SPM12 | 3D CNN | MAE 1.11 years (ABIDE II), 1.16 years (ADHD-200) |
| [127] | 2020 | 220 subjects (age 0–5 years) | sMRI | In-house pipeline | CNN | MAE 2.26 months |
| [129] | 2020 | PNC dataset: 839 subjects (age 8–21 years) | sMRI, rs-fMRI, DTI | SPM12, DPARSF, PANDA | MLP | MAE 0.381 ± 0.119 years |
| [128] | 2021 | 161 subjects (age 0–2 years) | sMRI | In-house pipeline | CNN | MAE 8.2 weeks |
| [133] | 2021 | 84 infants (age 8 days–3 years) | sMRI | In-house pipeline | CNN | Accuracy 90% |
| [134] | 2021 | 119 subjects (age 0–2 years) | sMRI | In-house pipeline | CNN | MAE 0.98 months |
| [135] | 2021 | 220 fetuses (gestational age 15.9–38.7 weeks) | sMRI | In-house pipeline | CNN | MAE 0.125 weeks |
| [136] | 2021 | 167 patients with Rolandic epilepsy (age 9.81 ± 2.55 years), 107 HC (age 9.43 ± 2.57 years) | sMRI | CAT12, SPM12 | CNN | MAE 1.05 years (HC), 1.21 years (patients) |
| [137] | 2022 | 524 infants (gestational age 23–42 weeks) | sMRI, DTI | Neonatal-specific segmentation pipeline | CNN | MAE 0.72 weeks (term-born), 2.21 weeks (preterm) |

Abbreviations: sMRI—structural MRI, rs-fMRI—resting-state functional MRI, DTI—Diffusion Tensor Imaging, CNN—convolutional neural network, MLP—multi-layer perceptron, MAE—mean absolute error, HC—healthy controls.
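Most regression studies in Table 4 report performance as the mean absolute error (MAE) between predicted and chronological (or gestational) age. As a minimal illustration of the metric itself — using hypothetical age values, not data from any cited study — MAE can be computed as:

```python
def mean_absolute_error(y_true, y_pred):
    """Mean absolute deviation between true and predicted ages."""
    if len(y_true) != len(y_pred):
        raise ValueError("y_true and y_pred must have the same length")
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical chronological vs. model-predicted gestational ages (weeks)
chronological = [28.0, 31.5, 36.0, 40.0]
predicted = [27.2, 32.0, 35.0, 41.2]
print(mean_absolute_error(chronological, predicted))  # MAE in weeks
```

A lower MAE means predictions track chronological age more closely; the per-subject residual (predicted minus chronological age) is also the quantity behind the "brain age gap" often interpreted clinically.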

Hu, M.; Nardi, C.; Zhang, H.; Ang, K.-K. Applications of Deep Learning to Neurodevelopment in Pediatric Imaging: Achievements and Challenges. Appl. Sci. 2023, 13, 2302. https://doi.org/10.3390/app13042302