Systematic Review

Deep Learning for Nasopharyngeal Carcinoma Segmentation in Magnetic Resonance Imaging: A Systematic Review and Meta-Analysis

1 School of Medicine, College of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
2 Department of Otolaryngology-Head and Neck Surgery, Taichung Veterans General Hospital, Taichung 407219, Taiwan
3 Institute of Biophotonics, National Yang-Ming Chiao Tung University, 155, Sec. 2, Li-Nong St., Beitou Dist., Taipei 112304, Taiwan
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Bioengineering 2024, 11(5), 504; https://doi.org/10.3390/bioengineering11050504
Submission received: 2 April 2024 / Revised: 11 May 2024 / Accepted: 15 May 2024 / Published: 17 May 2024

Abstract

Nasopharyngeal carcinoma (NPC) is a significant health challenge that is particularly prevalent in Southeast Asia and North Africa. Magnetic resonance imaging (MRI) is the preferred diagnostic tool for NPC due to its superior soft tissue contrast. The accurate segmentation of NPC in MRI is crucial for effective treatment planning and prognosis. We conducted a search across PubMed, Embase, and Web of Science from inception up to 20 March 2024, adhering to the PRISMA 2020 guidelines. Eligibility criteria focused on studies utilizing deep learning (DL) for NPC segmentation in adults via MRI. Data extraction and meta-analysis were conducted to evaluate the performance of DL models, primarily measured by Dice scores. We assessed methodological quality using the CLAIM and QUADAS-2 tools, and statistical analysis was performed using random effects models. The analysis incorporated 17 studies, demonstrating a pooled Dice score of 78% for DL models (95% confidence interval: 74% to 83%), indicating moderate to high segmentation accuracy. Significant heterogeneity and publication bias were observed among the included studies. Our findings reveal that DL models, particularly convolutional neural networks, offer moderately accurate NPC segmentation in MRI. This advancement holds potential for enhancing NPC management, necessitating further research toward integration into clinical practice.

1. Introduction

Nasopharyngeal carcinoma (NPC) is a distinct head and neck cancer subtype originating in the nasopharynx, the upper region of the throat posterior to the nasal cavity [1]. Despite its rarity on a global scale, NPC exhibits a higher incidence in specific geographic regions, such as Southeast Asia and North Africa, likely attributable to a combination of genetic, environmental, and Epstein–Barr virus-related factors [2,3]. The early detection and accurate diagnosis of NPC are paramount for optimal treatment planning and improving patient prognosis [4]. However, the complex anatomy of the nasopharynx and the variability in clinical presentation make early detection and accurate diagnosis of NPC challenging.
In this context, magnetic resonance imaging (MRI) is the preferred imaging modality for the diagnosis, staging, and treatment planning of NPC due to its superior soft tissue contrast resolution compared to other imaging techniques, such as computed tomography (CT). This excellent contrast resolution allows for the accurate delineation of the primary tumor, assessment of local invasion, and detection of lymph node involvement, significantly enhancing NPC staging and treatment planning [5]. The capability of MRI to accurately delineate the extent of tumor invasion is crucial for precise staging, which is a critical determinant of therapeutic strategies [6]. Moreover, MRI does not expose patients to ionizing radiation, making it a safer choice for repeated imaging during follow-up and treatment response evaluation. Its multiplanar imaging capabilities further enhance the assessment of tumor spread in various planes, providing a comprehensive understanding of the disease extent.
However, despite the advantages of MRI, the manual segmentation of NPC from MRI, a prerequisite for accurate tumor delineation, is a time-consuming and subjective process and is prone to inter- and intra-observer variability [7,8]. This variability can lead to inconsistencies in tumor volume estimation, staging, and treatment planning, potentially impacting patient outcomes. Therefore, there is a pressing need for automated, reliable, and efficient segmentation methods to overcome these limitations and improve the clinical management of NPC.
The advent of deep learning (DL) technologies has revolutionized the field of medical imaging, offering novel paradigms for automated image analysis. Deep learning models, particularly convolutional neural networks (CNNs), have demonstrated remarkable performance in image recognition, segmentation, and analysis tasks, surpassing traditional image processing methods in terms of accuracy and efficiency [9,10]. In the context of NPC segmentation, DL models have the potential to overcome the limitations of manual segmentation by providing rapid, accurate, and reproducible results [11,12].
Previous reviews have focused either on the segmentation of head and neck cancers in general [13] or on NPC segmentation using both CT and MRI [14]. This systematic review and meta-analysis aims to synthesize the current evidence on the application of deep learning for NPC segmentation in MRI. Our study offers several unique contributions to the existing literature on DL applications in NPC segmentation. First, we focus exclusively on MRI, providing a comprehensive analysis of DL performance for this specific modality. Second, our review includes the most recent studies from 2023 and 2024, ensuring up-to-date evidence. Third, we employ a novel three-level meta-analytic approach that accounts for all reported results across validation sets, revealing significant heterogeneity among independent datasets. This finding underscores the need for further research to explore the sources of variability and to standardize DL model development and validation.

2. Materials and Methods

2.1. General Guidelines

This study maintained high methodological quality during its planning and dissemination phases in line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 standards [15]. We followed the PRISMA guidelines closely, and the complete checklists are available in the Supplementary Materials (Tables S1 and S2). This research is registered with the International Platform of Registered Systematic Review and Meta-analysis Protocols (INPLASY), under registration identifier INPLASY202430120 [16]. As this systematic review and meta-analysis did not involve human participants, ethical approval and informed consent were not required, consistent with the guidelines for such studies.

2.2. Search of Databases and Selection of Eligible Studies

To select studies on DL applications for segmenting NPC in MRI scans, two reviewers (T-WW and C-KW) conducted a detailed literature review across PubMed, Embase, and Web of Science, covering all records up to 20 March 2024 (see Supplementary Table S2 for search details). The search strings were ((Nasopharyngeal Neoplasms OR Nasopharyngeal Cancer OR Nasopharyngeal Carcinoma OR Nasopharyngeal Tumors) AND (MRI OR magnetic resonance imaging OR MR) AND (segmentation OR contouring OR delineation) AND (deep learning OR convolutional neural networks OR CNN)) and are further detailed in Table S3. The process included title and abstract screening supplemented by manual searches to capture pertinent studies comprehensively. Any disagreements in study selection were resolved by consulting a third expert. We only included studies that applied DL for NPC segmentation in adult patients using MRI scans. Exclusions were made for non-MRI studies; retracted papers; reviews, supplements, and conference abstracts; studies not addressing the research question directly; and studies lacking the data necessary for meta-analysis (e.g., missing standard deviations of Dice scores).

2.3. Data Extraction and Management

T-WW and C-KW collected key data from the chosen studies, including the study design, patient counts, and the number of series in the training and testing sets. They also reviewed the sources of data, the validation techniques used for the models, and the standards for establishing reference values and indicators for ground truths. The documentation included MRI specifics such as magnetic field strength, sequences, and the manufacturer and model of the MRI equipment. The evaluation of the algorithms focused on their input dimensionality (2D vs. 3D) and architecture type. This was accompanied by a detailed review of preprocessing methods, covering intensity normalization, resolution resampling, data augmentation, and image cropping. Finally, the Sørensen–Dice coefficient was extracted from each study, given its central role in assessing segmentation accuracy.
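For reference, the Sørensen–Dice coefficient between a predicted segmentation mask $P$ and a ground-truth mask $G$ is defined as

$$\mathrm{Dice}(P, G) = \frac{2\,|P \cap G|}{|P| + |G|},$$

ranging from 0 (no overlap) to 1 (perfect agreement); equivalently, a score of 0.78 means that the overlap region equals 78% of the average size of the two masks.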

2.4. Methodological Quality Appraisal

Two established tools were used to evaluate the methodological quality of the studies: the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) and the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) [17,18]. T-WW and C-KW conducted these assessments independently to minimize bias, and disagreements were resolved by consulting senior researchers, ensuring a consensus-based, rigorous quality assessment.

2.5. Statistical Analysis

Two meta-analyses assessed the Dice scores reported by the studies. The first analysis selected the highest-performing algorithm when multiple outcomes were reported per study or when different studies used the same validation dataset. Medians and interquartile ranges were converted to means and standard deviations using established formulas [19,20]. A random effects model with restricted maximum likelihood was applied to accommodate study population heterogeneity [21], visualized through forest plots and assessed via sensitivity analysis (leave-one-out method) and subgroup analyses on variables such as publication status [22]. The Q test quantified heterogeneity across studies, with statistical significance set at a p-value of <0.05. Heterogeneity levels were categorized by I² values as trivial (0–25%), minimal (26–50%), moderate (51–75%), and pronounced (76–100%) [23]. To assess publication bias, Egger's method for funnel plot asymmetry was employed, using Stata/SE 18.0 for Mac [24].
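To make this pipeline concrete, the following R sketch illustrates the conversion and model fitting. The data frame `dat` and its columns (`dice_mean`, `dice_sd`, `n`) are hypothetical, and since the published Egger's test was run in Stata, metafor's regtest() is shown only as an equivalent.

```r
# Minimal sketch of the first meta-analysis, assuming a hypothetical data
# frame `dat` with one row per study: mean Dice (dice_mean), its standard
# deviation (dice_sd), and sample size (n).
library(metafor)

# Median/IQR to mean/SD conversion (approximations from Wan et al. [19]):
#   mean ~ (q1 + median + q3) / 3,  sd ~ (q3 - q1) / 1.35
iqr_to_mean_sd <- function(q1, med, q3) {
  c(mean = (q1 + med + q3) / 3, sd = (q3 - q1) / 1.35)
}

# Standard error of each study's mean Dice score.
dat$sei <- dat$dice_sd / sqrt(dat$n)

# Random effects model fitted with restricted maximum likelihood (REML).
res <- rma(yi = dice_mean, sei = sei, data = dat, method = "REML")
summary(res)   # pooled Dice, 95% CI, Q test, I^2

# Leave-one-out sensitivity analysis.
leave1out(res)

# Egger-style regression test for funnel plot asymmetry.
regtest(res, model = "lin")
```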
The second meta-analysis explored the variability of DL algorithm performance across validation sets, addressing dataset reuse by comparing bi-level and tri-level random effects models, the latter clustering by dataset to mitigate the mixed effects of validation set reuse. Variance was assessed across three levels (datasets, repeated analyses, and study samples) using analysis of variance and Cheung's formula [25]. Meta-regression [26] incorporated moderators such as dataset splitting (train/test vs. cross-validation), validation method (internal vs. external), MRI sequence (single vs. multiple), algorithm type (U-Net and U-Net variants vs. other CNNs), training size, and preprocessing techniques (intensity normalization, resolution adjustment, image augmentation, and image cropping). Statistical analysis was conducted with the metafor package in R, considering p < 0.05 significant.
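A corresponding sketch of the three-level analysis follows, under the same hypothetical column names plus `dataset` and `result_id` identifiers and an illustrative `validation_method` moderator:

```r
library(metafor)

# One row per reported Dice result; vi is its sampling variance.
dat$vi <- (dat$dice_sd / sqrt(dat$n))^2

# Two-level model: all reported results treated as independent.
res2 <- rma.mv(yi = dice_mean, V = vi, random = ~ 1 | result_id, data = dat)

# Three-level model: results (level 2) nested within datasets (level 3),
# which absorbs the dependence introduced by validation set reuse.
res3 <- rma.mv(yi = dice_mean, V = vi,
               random = ~ 1 | dataset / result_id, data = dat)

# Compare model fit; lower AIC/BIC would favor the three-level structure.
fitstats(res2, res3)

# sigma2 holds the estimated between-dataset (level 3) and
# within-dataset (level 2) variance components.
res3$sigma2

# Meta-regression with a moderator (here, the validation method).
rma.mv(yi = dice_mean, V = vi, mods = ~ validation_method,
       random = ~ 1 | dataset / result_id, data = dat)
```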

3. Results

3.1. Study Identification and Selection

The PRISMA diagram (Figure 1) illustrates the search and selection methodology adopted in the present investigation. Initially, a comprehensive search was conducted across the databases from inception to 20 March 2024, yielding 176 studies: 36 from PubMed, 72 from Embase, and 68 from Web of Science. After 66 duplicates were removed, 110 articles were further assessed using EndNote software. An initial review of titles and abstracts led to the exclusion of 36 articles owing to their irrelevance or lack of comprehensive detail. Further evaluation of the 74 full-text articles resulted in the exclusion of 57 articles [8,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82] for various reasons, including the nature of the content (reviews, supplements, or conference abstracts), the absence of MRI application, retraction status, irrelevance to the scope of the current meta-analysis, or the inadequacy of reported outcomes for quantitative synthesis (refer to Table S4). This process culminated in the selection of 17 studies [11,12,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97] for detailed examination within the scope of this analysis.

3.2. Basic Characteristics of Included Studies

The seventeen investigations [11,12,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97] implemented a retrospective approach, encompassing a cumulative patient population of 7830 individuals. The sizes of the patient cohorts varied considerably, ranging from a minimum of 29 [11] to a maximum of 4100 [95] patients. A fundamental aspect of these investigations was the use of manual annotation, underscoring the indispensable role of human expertise within the research framework. The validation methodologies adopted across these studies were either a train/test split [83,84,85,86,87,88,89,95,96] or cross-validation [11,12,90,91,92,93,94,97], with annotation criteria differing and encompassing evaluations by professionals such as experienced clinicians, radiologists, radiation oncologists, and oncologists (Table 1).

3.3. Characteristics of MRI

Magnetic field strengths span from 1.5 Tesla (T) [12,94] to 3T [11,84,91,92,95,97], with some studies encompassing both [86,90]. This diversity underscores the extensive array of MRI technologies utilized in both the clinical and research domains. The MRI sequences investigated across these studies encompass T1 weighted (T1w), contrast-enhanced T1 weighted (T1c), T2 weighted (T2w), dynamic contrast enhanced (DCE), and a variety of specialized sequences, including Ktrans, T1 water, T2 water, fat-saturated T2 weighted (fs-T2W), and contrast-enhanced T1 weighted with fat saturation (fs-ce-T1W). With respect to hardware, the studies used scanners from major manufacturers such as GE, Siemens, and Philips, including models like the GE Discovery MR 750w, Siemens Magnetom Skyra, Philips Achieva TX, and Siemens Aera (Table 2).

3.4. Characteristics and Performance of Preprocessing Techniques and DL Algorithms

Intensity normalization is applied in a significant majority of investigations [11,12,83,84,85,86,87,89,90,91,92,93,96,97], underscoring its critical role in harmonizing the intensity scale across images to enhance algorithmic precision. Resolution adjustment, implemented selectively [83,85,86,97], reflects a customized strategy to refine image resolution to meet specific analytical requirements. Image augmentation, which increases the diversity of training data, is also reported [94,95], reflecting its adoption to bolster model resilience. Finally, image cropping is utilized in various studies [11,83,84,85,86,87,88,89,91,93] to focus model analysis on pertinent image regions (Table 3).
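As a concrete illustration, the operations above can be expressed in R as follows. This is a minimal sketch only, using a synthetic volume with arbitrary dimensions, intensities, and crop window rather than any pipeline from the included studies.

```r
# Synthetic 3D MRI volume (illustrative dimensions and intensities).
set.seed(1)
volume <- array(rnorm(64 * 64 * 32, mean = 300, sd = 80),
                dim = c(64, 64, 32))

# Intensity normalization (z-score): harmonizes intensity scales across
# scans so the model sees comparable inputs.
z_volume <- (volume - mean(volume)) / sd(volume)

# Image cropping: restrict the input to a region of interest around the
# nasopharynx (window chosen arbitrarily here).
cropped <- z_volume[17:48, 17:48, 9:24]

# Simple augmentation: left-right flip of every axial slice, increasing the
# effective diversity of the training data.
flipped <- cropped[dim(cropped)[1]:1, , ]
```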
Research incorporating 3D input dimensions, such as studies employing attention-guided VNet [85], nnU-Net [86], MMFNet [93], SC-DenseNet [95], and VoxResNet [96], illustrates a growing reliance on volumetric data to enhance segmentation accuracy, with nnU-Net [86] achieving a notable Dice score of 0.88 on a dataset of 600 samples. Conversely, 2D analyses remain prevalent, with CDDSA [87] attaining a Dice score of 0.92 on 114 samples, demonstrating the effectiveness of specialized 2D convolutional frameworks in distilling relevant features from intricate image datasets. The variation in training dataset sizes, ranging from 28 in a study using CNNs [11] to 3285 for SC-DenseNet [95], highlights the adaptability of deep learning frameworks to datasets of divergent scopes. Moreover, the application of distinct algorithms such as SICNet [83], ResU-Net [84], and T-U-Net [94] across studies, with Dice scores ranging from 0.66 to 0.92, reflects the diversity of methodological approaches in the domain (Table 3).

3.5. Quality Assessment

Figure S1 illustrates the quality assessments of the included studies conducted with the QUADAS-2 tool. Supplementary Table S6 details an analysis focusing on bias-related risks and applicability concerns, identifying unclear risks in 10 (58.8%) of the studies [12,83,84,85,86,87,88,89,93,97] because the derivation interval of their datasets was not reported, which may impact data interpretation. This criterion could influence the applicability and generalizability of the results from these studies.
Supplementary Table S7 reports a detailed evaluation of the 17 studies using the CLAIM criteria, revealing an average CLAIM score of 27.35 (approximately 65.13%), with a standard deviation of 3.86 and scores ranging from 23.00 to 33.00 out of a maximum of 42. The breakdown of average scores for the CLAIM subsections indicates the quality of these studies as follows: title/abstract, 1.64/2 (82%); introduction, 2.00/2 (100%); methods, 18.18/28 (64.9%); results, 2.7/5 (54%); discussion, 1.94/2 (97%); and other information, 0.88/3 (29.4%). These results highlight the strengths and potential improvement areas in NPC segmentation research utilizing DL methodologies.

3.6. Efficacy of DL Model Segmentation of NPC on MRI

The investigation synthesized findings from 11 studies, each utilizing distinct datasets and DL models for segmentation tasks, and uncovered notable variation in Dice scores, which spanned from 66% to 84%. The consolidated outcomes produced a pooled Dice score of 78%, with a 95% confidence interval (CI) of 74% to 83% (Figure 2). The Q test indicated substantial heterogeneity across the studies (Q = 588.81, p < 0.01). This heterogeneity was further confirmed by the Higgins I² statistic, which indicated a remarkably high degree of variability (I² = 99.02%). Sensitivity analysis reinforced the reliability of these findings, with the summary effect sizes remaining statistically significant upon the sequential exclusion of individual studies (Figure S2). Additionally, the funnel plot assessment of the 11 studies, coupled with Egger's regression test, yielded a p-value of 0.037, suggesting the presence of publication bias within the examined studies (Figure S3). Nevertheless, subsequent subgroup analysis based on publication status did not reveal any significant discrepancies (Figure S4).
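For reference, the moment-based definition of the Higgins statistic [23] relates I² to the Q statistic and its degrees of freedom as

$$I^2 = \max\!\left(0,\; \frac{Q - \mathrm{df}}{Q}\right) \times 100\%,$$

while REML-based software estimates I² as $\hat\tau^2/(\hat\tau^2 + \tilde v)$, with $\tilde v$ the typical within-study sampling variance, which is why the reported value can differ slightly from the value implied by Q alone.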
A three-level meta-analysis was then undertaken to scrutinize potential moderating factors associated with the DL models used in segmentation tasks. This examination assessed outcomes across all validation sets, clustering by dataset to mitigate the impact of repeated dataset use. From 68 reported effects spanning the 17 studies, the mean Dice coefficient was 76.4%, with a 95% CI of 71.1% to 81.6%. The Q statistic revealed no significant heterogeneity (Q = 55.4, p = 0.821). Comparative evaluation using the Akaike and Bayesian information criteria favored the three-level model over the conventional two-level approach, indicating its superior fit to the data structure. Variance analysis showed that 58.61% of the total variance was attributable to level 1 (sampling variance), with the residual variance split between within-dataset differences (4.6 × 10⁻⁸%) at level 2 and inter-dataset differences (41.39%) at level 3. This distribution underscored significant inter-dataset variation, in contrast to negligible within-dataset discrepancies (Supplementary Table S5), reinforcing the significant heterogeneity observed in the meta-analysis of independent datasets. Meta-regression probing factors such as dataset splitting, validation methodology, MRI sequence, algorithm type, training size, and preprocessing approaches did not yield significant associations with the segmentation efficacy of the DL models.
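The variance shares quoted above follow the three-level decomposition of Cheung [25]: with $\hat\sigma_2^2$ and $\hat\sigma_3^2$ the level 2 and level 3 variance components and $\tilde v$ the typical sampling variance,

$$I^2_{(k)} = \frac{\hat\sigma_k^2}{\hat\sigma_2^2 + \hat\sigma_3^2 + \tilde v}, \qquad k \in \{2, 3\},$$

and the level 1 (sampling) share is $\tilde v$ divided by the same total.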

4. Discussion

The primary objective of this systematic review and meta-analysis was to assess the efficacy and accuracy of DL models in the segmentation of NPC in MRI. In the landscape of medical imaging, especially for conditions like NPC where precision in diagnosis and treatment planning is critical, DL technologies hold transformative potential. By focusing on MRI, this review targets an area where DL models can leverage high-resolution images for better disease characterization.

4.1. Summary of Findings

Our comprehensive analysis revealed that DL models, particularly convolutional neural networks (CNNs), enhance the accuracy of NPC segmentation in MRI scans. The pooled analysis of Dice scores, a key metric for evaluating segmentation accuracy, included 11 studies with independent datasets (the 17 included studies together comprised 7830 patients). Using a random effects model, we calculated a pooled mean Dice score of 78% (95% confidence interval: 74% to 83%) across the included studies (Figure 2). The Dice score ranges from 0 to 1, with higher values indicating better segmentation accuracy. Heterogeneity among the studies was assessed using the Q test and the I² statistic: the Q test indicated substantial heterogeneity (Q = 588.81, p < 0.01), and the I² statistic revealed a high degree of variability (I² = 99.02%). To explore potential sources of heterogeneity, we conducted subgroup analyses and meta-regressions on variables such as publication status, MRI sequence, algorithm type, and preprocessing techniques (see Section 3.6 for details). The funnel plot assessment and Egger's regression test (p = 0.037) suggested the presence of publication bias within the examined studies (Figure S3). However, further subgroup analysis based on publication status did not reveal any significant discrepancies (Figure S4).
These findings underscore the effectiveness of DL models in improving NPC segmentation accuracy in MRI scans compared to traditional methods. The pooled mean Dice score of 78% indicates a moderate to high level of segmentation accuracy, highlighting the potential of DL models to enhance clinical decision making and treatment planning in NPC management. However, it is important to acknowledge the substantial heterogeneity observed among the included studies, which may stem from differences in patient populations, MRI acquisition protocols, and DL model architectures.

4.2. Comparison with the Existing Literature

Previous reviews have extensively covered various applications of deep learning and machine learning for nasopharyngeal carcinoma (NPC) [98,99,100]. In the review by Li et al. [98], the authors briefly outline articles related to auto-segmentation using deep learning techniques. Ng et al. [99] presented a descriptive box plot in their study of auto-targeting, showing a median Dice score of 0.7530, which illustrates the current performance level in this field. Wang et al. [100] discussed the advantages and disadvantages of different imaging modalities. They noted that while CT images often lack sufficient soft tissue contrast, PET images provide excellent tumor visualization but fail to deliver accurate boundary information due to their low spatial resolution. Dual-modality PET-CT images, however, offer more valuable information for delineating tumor boundaries and assessing the extent of tumor invasion [101]. Owing to its superior soft tissue contrast, MRI is considered the gold standard for staging and measuring target volume contours in NPC. However, identifying tumor margins on MRI can be challenging due to factors such as high variability, low contrast, and discontinuous soft tissue margins. While discussions on auto-segmentation using deep learning methods exist, there is a notable lack of comprehensive and quantitative analysis in the existing literature.
Compared to previous systematic reviews and meta-analyses on CT and MRI segmentation of nasopharyngeal cancer, our focused investigation into NPC segmentation exclusively using MRI technology represents a more specialized inquiry into this domain [14]. Our review not only corroborates the effectiveness of deep learning models in NPC segmentation, demonstrating a pooled Dice score of 78%, closely aligning with prior findings of 76% [14], but it also introduces several key differentiators that enhance the robustness and relevance of our conclusions. Notably, our review incorporates five additional studies from 2023 and 2024, broadening the evidence base. Our emphasis on MRI scans allowed for more nuanced data extraction and analysis, ensuring a deeper understanding of this specific imaging modality’s challenges and opportunities in NPC segmentation. Furthermore, we employed a two-pronged meta-analysis approach: a traditional two-level random effects model that addressed independent datasets and a novel three-level random effects model that accounted for all reported results across validation sets, effectively clustering by dataset. This methodology revealed significant heterogeneity among independent datasets, indicating the necessity for further research to explore the sources of this variability. Future studies are encouraged to expand the dataset to illuminate these findings further and comprehensively address the identified heterogeneity.

4.3. Strengths of Deep Learning Models

DL models handle complex, high-dimensional data and are therefore well suited to medical imaging tasks. Their strengths lie in rapid processing, high accuracy, and reproducibility, as demonstrated by models like nnU-Net [86] and CDDSA [87], which exhibited exemplary performance in our review. The nnU-Net (no-new-Net) [102] represents a significant stride in the application of deep learning for medical image segmentation, highlighted in our review by its exceptional performance in NPC segmentation on MRI scans. Achieving a Dice score of 0.88, the nnU-Net not only demonstrates robustness in precisely delineating tumor boundaries in NPC but also underscores the model's capability in handling the inherent complexities of medical imaging data. This performance is particularly noteworthy given the challenging nature of NPC, a cancer type characterized by its intricate anatomical location and potentially subtle imaging signatures.
The nnU-Net's architecture automatically adapts to the specifics of each segmentation task, optimizing its configuration to match the input data dimensions, preprocessing routines, and network architecture parameters. This adaptability is key to its success, enabling the nnU-Net to efficiently process the high-dimensional data typical of MRI scans and ensuring high accuracy and reproducibility across different datasets and segmentation tasks. The model's ability to capture the nuanced details of NPC tumors from MRI without extensive manual tuning represents a paradigm shift from traditional segmentation approaches, which are often time-consuming and prone to inter- and intra-observer variability. By automating the segmentation process while maintaining, if not exceeding, the accuracy of manual methods, the nnU-Net not only enhances diagnostic workflows but also paves the way for more personalized and timely treatment planning, leveraging the full potential of deep learning to improve patient care outcomes in oncology.
Comparing the three models, the nnU-Net [86], CDDSA [87], and a CNN [11], the studies using CDDSA and the CNN demonstrated higher performance than the one using the nnU-Net. All three studies utilized extensive preprocessing techniques such as intensity normalization, image augmentation, and image cropping; the nnU-Net study [86] additionally employed resolution adjustment. It is important to note that the CNN study [11] from 2018 had a limited sample size of only 29 patients, which may affect the robustness and generalizability of its model's performance. In contrast, the nnU-Net study [86] included 1057 patients and performed external validation, the most robust validation among the three, while the CDDSA study [87] used 189 patients with internal validation, which can be considered decent. Moreover, the disentangle-based style augmentation technique utilized in the CDDSA study may have contributed to its high performance.

4.4. Limitations and Challenges

Despite the promising outcomes, our review faced limitations, including evident publication bias and significant study heterogeneity, which could influence the interpretability of our results. Moreover, while being the standard for comparison, the manual segmentation process introduces subjectivity and variability in outcomes. DL models, though superior, are not without challenges, including the need for extensive training data and the complexity of model tuning to achieve optimal performance.

4.5. Implications for Clinical Practice

Integrating DL models into clinical settings for NPC segmentation from MRI scans could revolutionize treatment planning and prognosis evaluation. The precision of DL-enhanced segmentation can lead to more accurate staging, targeted therapy, and monitoring strategies. However, for such integration to be successful and globally applicable, there is a critical need for standardization in DL model development, validation, and implementation across different healthcare contexts.

4.6. Future Research Directions

Future research should aim at developing more advanced DL models capable of accommodating the variability inherent in MRI data, including differences in imaging parameters and tumor presentation. Moreover, exploring DL applications beyond segmentation, such as in treatment response assessment and recurrence detection in NPC, could provide comprehensive tools for holistic disease management. This direction promises improvements in clinical outcomes and paves the way for personalized treatment approaches based on predictive analytics.

5. Conclusions

Our systematic review and meta-analysis have highlighted the effectiveness of deep learning (DL) models in improving the accuracy of nasopharyngeal carcinoma (NPC) segmentation in MRI scans, with a pooled mean Dice score of 78% (95% confidence interval: 74% to 83%), indicating moderate to high segmentation accuracy. DL's role in medical imaging, particularly for NPC, marks a significant advancement that matches the growing need for precision in medical diagnostics. However, the substantial heterogeneity and publication bias observed necessitate a careful interpretation of these results and emphasize the need for further validation and standardization of DL models across varied clinical environments to confirm their effectiveness and consistency. While current deep learning models achieve moderate to high segmentation accuracy, further optimization and improvement of DL architectures are warranted. Looking forward, integrating DL into clinical practice is set to transform NPC management by equipping clinicians with more accurate tools, potentially enhancing personalized treatment and patient outcomes. Future research should extend the use of DL to other areas, such as treatment response monitoring and intraoperative imaging, maximizing the benefits of this technology in cancer care.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/bioengineering11050504/s1, Figure S1: The results of QUADAS-2 quality assessment for the included studies. Figure S2: The results of a sensitivity analysis of deep learning algorithms in independent datasets using the one-study removal method. Figure S3: Funnel plot of Dice scores for deep learning algorithms in independent datasets. Figure S4: Forest plot of the subgroup analysis of deep learning algorithms in independent datasets using publication status as moderator. Table S1: PRISMA-DTA Abstract Checklist. Table S2: PRISMA-DTA Checklist. Table S3: Keywords and search results in different databases. Table S4: Excluded articles and reasons. Table S5: Comparison of multilevel meta-analysis model clusters with datasets of segmentation Dice scores across all validation sets. Table S6: Quality assessment according to the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) criteria. Table S7: The Checklist for Artificial Intelligence in Medical Imaging scores.

Author Contributions

Conceptualization, C.-K.W., T.-W.W. and Y.-T.W.; methodology, C.-K.W. and T.-W.W.; software, T.-W.W.; validation, C.-K.W. and T.-W.W.; formal analysis, C.-K.W. and T.-W.W.; investigation, C.-K.W. and T.-W.W.; resources, Y.-T.W.; data curation, C.-K.W. and T.-W.W.; writing—original draft preparation, C.-K.W. and T.-W.W.; writing—review and editing, C.-K.W., T.-W.W., Y.-X.Y. and Y.-T.W.; visualization, T.-W.W.; supervision, Y.-T.W.; project administration, Y.-T.W.; funding acquisition, C.-K.W. and Y.-T.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Gen. & Mrs. M.C. Peng Fellowship from the School of Medicine, National Yang Ming Chiao Tung University, MD-SY-A3-309-01, MD-SY-A3-112-C3-08, MD-SY-112-C3-09, MD-SY-112-C3-010, MD-SY-112-C3-011; Taichung Veterans General Hospital, TCVGH-YMCT1109111 and TCVGH-YMCT1119105; the National Science and Technology Council in Taiwan, MOST 110-2634-F-006-022 and MOST 110-2221-E-A49A-504-MY3; the National Yang Ming Chiao Tung University Featured Areas Research Center Program within the framework of the Higher Education Sprout Project by the Ministry of Education (MOE) in Taiwan, 111W10159; the BRC Plan 112W32101; Towards the 2030 Cross-Domain Project for Smart Healthcare and Wellness (2/2)—SMART Plan; NSTC 111-2634-F-A49-014-; NSTC 111-2634-F-006-012; the Innovative Translational Center Project 112W94002; and the Veterans' Taiwan Joint University Program, 113W084700.

Institutional Review Board Statement

This meta-analysis did not intervene or interact with humans or collect identifiable private information and thus does not require institutional review board approval.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article and Supplementary Files.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chen, Y.P.; Chan, A.T.; Le, Q.T.; Blanchard, P.; Sun, Y.; Ma, J. Nasopharyngeal carcinoma. Lancet 2019, 394, 64–80. [Google Scholar] [CrossRef] [PubMed]
  2. Tang, X.; Zhou, Y.; Li, W.; Tang, Q.; Chen, R.; Zhu, J.; Feng, Z. T cells expressing a lmp1-specific chimeric antigen receptor mediate antitumor effects against lmp1-positive nasopharyngeal carcinoma cells in vitro and in vivo. J. Biomed. Res. 2014, 28, 468. [Google Scholar]
  3. Lam, W.K.J.; Chan, J.Y.K. Recent advances in the management of nasopharyngeal carcinoma. F1000Research 2018, 7, 1829. [Google Scholar] [CrossRef]
  4. Zhou, H.; Shen, G.; Zhang, W.; Cai, H.; Zhou, Y.; Li, L. 18f-fdg pet/ct for the diagnosis of residual or recurrent nasopharyngeal carcinoma after radiotherapy: A metaanalysis. J. Nucl. Med. 2015, 57, 342–347. [Google Scholar] [CrossRef] [PubMed]
  5. Li, C. Nasopharyngeal carcinoma: Imaging diagnosis and recent progress. J. Nasopharyngeal Carcinoma 2014, 1, e1. [Google Scholar]
  6. King, A.D.; Bhatia, K.S.S. Magnetic resonance imaging staging of nasopharyngeal carcinoma in the head and neck. World J. Radiol. 2010, 2, 159–165. [Google Scholar] [CrossRef] [PubMed]
  7. Zhang, B.; Tian, J.; Dong, D.; Gu, D.; Dong, Y.; Zhang, L.; Lian, Z.; Liu, J.; Luo, X.; Pei, S.; et al. Radiomics features of multiparametric MRI as novel prognostic factors in advanced nasopharyngeal carcinoma. Clin. Cancer Res. 2017, 23, 4259–4269. [Google Scholar] [CrossRef]
  8. Tang, P.; Zu, C.; Hong, M.; Yan, R.; Peng, X.; Xiao, J.; Wu, X.; Zhou, J.; Zhou, L.; Wang, Y. DA-DSUnet: Dual Attention-based Dense SU-net for automatic head-and-neck tumor segmentation in MRI images. Neurocomputing 2021, 435, 103–113. [Google Scholar] [CrossRef]
  9. Erickson, B.J.; Korfiatis, P.; Akkus, Z.; Kline, T.; Philbrick, K. Toolkits and Libraries for Deep Learning. J. Digit. Imaging 2017, 30, 400–405. [Google Scholar] [CrossRef]
  10. Huang, Z.; Li, Q.; Lu, J.; Feng, J.; Hu, J.; Chen, P. Recent Advances in Medical Image Processing. Acta Cytol. 2020, 65, 310–323. [Google Scholar] [CrossRef] [PubMed]
  11. Li, Q.; Xu, Y.; Chen, Z.; Liu, D.; Feng, S.-T.; Law, M.; Ye, Y.; Huang, B. Tumor Segmentation in Contrast-Enhanced Magnetic Resonance Imaging for Nasopharyngeal Carcinoma: Deep Learning with Convolutional Neural Network. Biomed. Res. Int. 2018, 2018, 9128527. [Google Scholar] [CrossRef] [PubMed]
  12. Ye, Y.; Cai, Z.; Huang, B.; He, Y.; Zeng, P.; Zou, G.; Deng, W.; Chen, H.; Huang, B. Fully-Automated Segmentation of Nasopharyngeal Carcinoma on Dual-Sequence MRI Using Convolutional Neural Networks. Front. Oncol. 2020, 10, 166. [Google Scholar] [CrossRef] [PubMed]
  13. Badrigilan, S.; Nabavi, S.; Abin, A.A.; Rostampour, N.; Abedi, I.; Shirvani, A.; Ebrahimi Moghaddam, M. Deep learning approaches for automated classification and segmentation of head and neck cancers and brain tumors in magnetic resonance images: A meta-analysis study. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 529–542. [Google Scholar] [CrossRef] [PubMed]
  14. Zamanian, M.; Abedi, I. Convolutional neural networks in auto-segmentation of nasopharyngeal carcinoma tumor—A systematic review and meta-analysis. Oncol. Clin. Pract. 2024, 20, 27–39. [Google Scholar] [CrossRef]
  15. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef]
  16. Wang, C.K.; Wang, T.W.; Yang, Y.X.; Wu, Y.T. Deep Learning for Nasopharyngeal Carcinoma Segmentation in Magnetic Resonance Imaging: A Systematic Review and Meta-analysis. INPLASY 2024. [Google Scholar] [CrossRef]
  17. Mongan, J.; Moy, L.; Kahn, C.E. Checklist for Artificial Intelligence in Medical Imaging (CLAIM): A Guide for Authors and Reviewers. Radiol. Artif. Intell. 2020, 2, e200029. [Google Scholar] [CrossRef] [PubMed]
  18. Whiting, P.F.; Rutjes, A.W.S.; Westwood, M.E.; Mallett, S.; Deeks, J.J.; Reitsma, J.B.; Leeflang, M.M.G.; Sterne, J.A.C.; Bossuyt, P.M.M.; QUADAS-2 Group. QUADAS-2: A Revised Tool for the Quality Assessment of Diagnostic Accuracy Studies. Ann. Intern. Med. 2011, 155, 529–536. [Google Scholar] [CrossRef]
  19. Wan, X.; Wang, W.; Liu, J.; Tong, T. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range. BMC Med. Res. Methodol. 2014, 14, 135. [Google Scholar] [CrossRef] [PubMed]
  20. Luo, D.; Wan, X.; Liu, J.; Tong, T. Optimally estimating the sample mean from the sample size, median, mid-range, and/or mid-quartile range. Stat. Methods Med. Res. 2018, 27, 1785–1805. [Google Scholar] [CrossRef] [PubMed]
  21. Borenstein, M.; Hedges, L.V.; Rothstein, H.R. Fixed-Effect versus Random-Effects Models. In Introduction to Meta-Analysis; Borenstein, M., Ed.; Wiley: Hoboken, NJ, USA, 2009; pp. 77–86. [Google Scholar]
  22. Borenstein, M.; Higgins, J.P. Meta-analysis and subgroups. Prev. Sci. 2013, 14, 134–143. [Google Scholar] [CrossRef]
  23. Higgins, J.P.T.; Thompson, S.G.; Deeks, J.J.; Altman, D.G. Measuring Inconsistency in Meta-Analyses. BMJ 2003, 327, 557–560. [Google Scholar] [CrossRef] [PubMed]
  24. Egger, M.; Davey Smith, G.; Schneider, M.; Minder, C. Bias in Meta-Analysis Detected by a Simple, Graphical Test. BMJ 1997, 315, 629–634. [Google Scholar] [CrossRef] [PubMed]
  25. Cheung, M.W.L. Modeling dependent effect sizes with three-level meta-analyses: A structural equation modeling approach. Psychol. Methods 2014, 19, 211–229. [Google Scholar] [CrossRef] [PubMed]
  26. Morton, S.C.; Adams, J.L.; Suttorp, M.J.; Shekelle, P.G. Meta-regression Approaches: What, Why, When, and How? (Technical Reviews, No. 8.) 1, Introduction; Agency for Healthcare Research and Quality (US): Rockville, MD, USA, 2004. Available online: https://www.ncbi.nlm.nih.gov/books/NBK43897/ (accessed on 20 March 2024).
  27. McDonald, B.A.; Cardenas, C.E.; O’Connell, N.; Ahmed, S.; Naser, M.A.; Wahid, K.A.; Xu, J.; Thill, D.; Zuhour, R.J.; Mesko, S.; et al. Investigation of autosegmentation techniques on T2-weighted MRI for off-line dose reconstruction in MR-linac workflow for head and neck cancers. Med. Phys. 2024, 51, 278–291. [Google Scholar] [CrossRef] [PubMed]
  28. Zhong, Z.; He, L.; Chen, C.; Yang, X.; Lin, L.; Yan, Z.; Tian, M.; Sun, Y.; Zhan, Y. Full-scale attention network for automated organ segmentation on head and neck CT and MR images. IET Image Process. 2023, 17, 660–673. [Google Scholar] [CrossRef]
  29. Zhang, Y.; Ye, X.; Ge, J.; Guo, D.; Zheng, D.; Yu, H.; Chen, Y.; Yao, G.; Lu, Z.; Yuille, A.; et al. Deep Learning-Based Multi-Modality Segmentation of Primary Gross Tumor Volume in CT and MRI for Nasopharyngeal Carcinoma. Int. J. Radiat. Oncol. Biol. Phys. 2023, 117, e498. [Google Scholar] [CrossRef]
  30. Zeng, Y.; Zeng, P.; Shen, S.; Liang, W.; Li, J.; Zhao, Z.; Zhang, K.; Shen, C. DCTR U-Net: Automatic segmentation algorithm for medical images of nasopharyngeal cancer in the context of deep learning. Front. Oncol. 2023, 13, 1190075. [Google Scholar] [CrossRef] [PubMed]
  31. Yang, P.; Peng, X.; Xiao, J.; Wu, X.; Zhou, J.; Wang, Y. Automatic Head-and-Neck Tumor Segmentation in MRI via an End-to-End Adversarial Network. Neural Process. Lett. 2023, 55, 9931–9948. [Google Scholar] [CrossRef]
  32. Wang, Y.; Chen, H.; Lin, J.; Dong, S.; Zhang, W. Automatic detection and recognition of nasopharynx gross tumour volume (GTVnx) by deep learning for nasopharyngeal cancer radiotherapy through magnetic resonance imaging. Radiat. Oncol. 2023, 18, 76. [Google Scholar] [CrossRef] [PubMed]
  33. Wang, H.; Zhang, S.; Luo, X.; Liao, W.; Zhu, L. Advancing Delineation of Gross Tumor Volume Based on Magnetic Resonance Imaging by Performing Source-Free Domain Adaptation in Nasopharyngeal Carcinoma. In Computational Mathematics Modeling in Cancer Analysis; Qin, W., Zaki, N., Zhang, F., Wu, J., Yang, F., Li, C., Eds.; CMMCA 2023. Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2023; Volume 14243. [Google Scholar]
  34. Song, Y.; Hu, J.; Wang, Q.; Yu, C.; Su, J.; Chen, L.; Jiang, X.; Chen, B.; Zhang, L.; Yu, Q.; et al. Young oncologists benefit more than experts from deep learning-based organs-at-risk contouring modeling in nasopharyngeal carcinoma radiotherapy: A multi-institution clinical study exploring working experience and institute group style factor. Clin. Transl. Radiat. Oncol. 2023, 41, 100635. [Google Scholar] [CrossRef] [PubMed]
  35. Liu, X.; Li, Z.; Qi, X.; Zhou, Q. Objective Boundary Generation for Gross Target Volume and Organs at Risk Using 3D Multi-Modal Medical Images. Int. J. Radiat. Oncol. Biol. Phys. 2023, 117, e476. [Google Scholar] [CrossRef]
  36. Lin, L.; Peng, P.; Zhou, G.; Huang, S.; Hu, J.; Liu, Y.; He, S.; Sun, Y.; Zhang, W. Deep Learning-Based Synthesis of Contrast-Enhanced MRI for Automated Delineation of Primary Gross Tumor Volume in Radiotherapy of Nasopharyngeal Carcinoma. Int. J. Radiat. Oncol. Biol. Phys. 2023, 117, e475. [Google Scholar] [CrossRef]
  37. Huang, Y.; Zhu, Y.; Yang, Q.; Luo, Y.; Zhang, P.; Yang, X.; Ren, J.; Ren, Y.; Lang, J.; Xu, G. Automatic tumor segmentation and metachronous single-organ metastasis prediction of nasopharyngeal carcinoma patients based on multi-sequence magnetic resonance imaging. Front. Oncol. 2023, 13, 953893. [Google Scholar] [CrossRef] [PubMed]
  38. Hao, Y.; Jiang, H.; Diao, Z.; Shi, T.; Liu, L.; Li, H.; Zhang, W. MSU-Net: Multi-scale Sensitive U-Net based on pixel-edge-region level collaborative loss for nasopharyngeal MRI segmentation. Comput. Biol. Med. 2023, 159, 106956. [Google Scholar] [CrossRef]
  39. Fei, X.; Li, X.; Shi, C.; Ren, H.; Mumtaz, I.; Guo, J.; Wu, Y.; Luo, Y.; Lv, J.; Wu, X. Dual-feature Fusion Attention Network for Small Object Segmentation. Comput. Biol. Med. 2023, 160, 106985. [Google Scholar] [CrossRef]
  40. Cai, Z.; Ye, Y.; Zhong, Z.; Lin, H.; Xu, Z.; Huang, B.; Deng, W.; Wu, Q.; Lei, K.; Lyu, J.; et al. Automated Segmentation of Nasopharyngeal Carcinoma Based on Dual-Sequence Magnetic Resonance Imaging Using Self-supervised Learning. In Computational Mathematics Modeling in Cancer Analysis; Qin, W., Zaki, N., Zhang, F., Wu, J., Yang, F., Li, C., Eds.; CMMCA 2023. Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2023; Volume 14243. [Google Scholar]
  41. Zhao, W.; Zhang, D.; Mao, X. Application of Artificial Intelligence in Radiotherapy of Nasopharyngeal Carcinoma with Magnetic Resonance Imaging. J. Healthc. Eng. 2022, 2022, 4132989, Erratum in J. Healthc. Eng. 2023, 2023, 9825710. [Google Scholar] [CrossRef] [PubMed]
  42. Zhang, W.; Li, Z.; Peng, Y.; Yin, Y.; Zhou, Q. Patient-Specific Daily Updated Deep Learning Auto-Segmentation for MRI-Guided Adaptive Radiotherapy. Int. J. Radiat. Oncol. Biol. Phys. 2022, 114, e108–e109. [Google Scholar] [CrossRef]
  43. Yue, M.; Dai, Z.; He, J.; Xie, Y.; Zaki, N.; Qin, W. MRI-guided Automated Delineation of Gross Tumor Volume for Nasopharyngeal Carcinoma using Deep Learning. In Proceedings of the 2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS), Shenzhen, China, 21–22 July 2022; pp. 292–296. [Google Scholar]
  44. Yang, G.; Dai, Z.; Zhang, Y.; Zhu, L.; Tan, J.; Chen, Z.; Zhang, B.; Cai, C.; He, Q.; Li, F.; et al. Multiscale Local Enhancement Deep Convolutional Networks for the Automated 3D Segmentation of Gross Tumor Volumes in Nasopharyngeal Carcinoma: A Multi-Institutional Dataset Study. Front. Oncol. 2022, 12, 827991. [Google Scholar] [CrossRef]
  45. Tao, G.; Li, H.; Huang, J.; Han, C.; Chen, J.; Ruan, G.; Huang, W.; Hu, Y.; Dan, T.; Zhang, B.; et al. SeqSeg: A sequential method to achieve nasopharyngeal carcinoma segmentation free from background dominance. Med. Image Anal. 2022, 78, 102381. [Google Scholar] [CrossRef] [PubMed]
  46. Hai-Feng, Q.; Fang, Y. Convolutional neural network in evaluation of radiotherapy effect for nasopharyngeal carcinoma. Sci. Program. 2022, 2022, 1509490. [Google Scholar]
  47. Martin, R.J.; Sharma, U.; Kaur, K.; Kadhim, N.M.; Lamin, M.; Ayipeh, C.S. Multidimensional CNN-Based Deep Segmentation Method for Tumor Identification. Biomed. Res. Int. 2022, 2022, 5061112, Erratum in Biomed. Res. Int. 2024, 2024, 9836130. [Google Scholar] [CrossRef] [PubMed]
  48. Ling, Z.; Tao, G.; Li, Y.; Cai, H. NPCFORMER: Automatic Nasopharyngeal Carcinoma Segmentation Based on Boundary Attention and Global Position Context Attention. In Proceedings of the 2022 IEEE International Conference on Image Processing (ICIP), Bordeaux, France, 16–19 October 2022; pp. 1981–1985. [Google Scholar]
  49. Liang, S.; Dong, X.; Yang, K.; Chu, Z.; Tang, F.; Ye, F.; Chen, B.; Guan, J.; Zhang, Y. A multi-perspective information aggregation network for automatedT-staging detection of nasopharyngeal carcinoma. Phys. Med. Biol. 2022, 67, 245007. [Google Scholar] [CrossRef] [PubMed]
  50. Li, Z.; Zhang, W.; Li, B.; Zhu, J.; Peng, Y.; Li, C.; Zhu, J.; Zhou, Q.; Yin, Y. Patient-specific daily updated deep learning auto-segmentation for MRI-guided adaptive radiotherapy. Radiother. Oncol. 2022, 177, 222–230. [Google Scholar] [CrossRef] [PubMed]
  51. Li, W.; Xiao, H.; Li, T.; Ren, G.; Lam, S.; Teng, X.; Liu, C.; Zhang, J.; Kar-Ho Lee, F.; Au, K.H.; et al. Virtual Contrast-Enhanced Magnetic Resonance Images Synthesis for Patients with Nasopharyngeal Carcinoma Using Multimodality-Guided Synergistic Neural Network. Int. J. Radiat. Oncol. Biol. Phys. 2022, 112, 1033–1044. [Google Scholar] [CrossRef] [PubMed]
  52. Li, S.; Hua, H.L.; Li, F.; Kong, Y.G.; Zhu, Z.L.; Li, S.L.; Chen, X.X.; Deng, Y.Q.; Tao, Z.Z. Anatomical Partition-Based Deep Learning: An Automatic Nasopharyngeal MRI Recognition Scheme. J. Magn. Reason. Imaging 2022, 56, 1220–1229. [Google Scholar] [CrossRef]
  53. He, Y.; Zhang, S.; Luo, Y.; Yu, H.; Fu, Y.; Wu, Z.; Jiang, X.; Li, P. Quantitative Comparisons of Deep-learning-based and Atlas-based Auto- segmentation of the Intermediate Risk Clinical Target Volume for Nasopharyngeal Carcinoma. Curr. Med. Imaging 2022, 18, 335–345. [Google Scholar] [CrossRef]
  54. Deng, Y.; Li, C.; Lv, X.; Xia, W.; Shen, L.; Jing, B.; Li, B.; Guo, X.; Sun, Y.; Xie, C.; et al. The contrast-enhanced MRI can be substituted by unenhanced MRI in identifying and automatically segmenting primary nasopharyngeal carcinoma with the aid of deep learning models: An exploratory study in large-scale population of endemic area. Comput. Methods Programs Biomed. 2022, 217, 106702. [Google Scholar] [CrossRef] [PubMed]
  55. Deng, Y.; Hou, D.; Li, B.; Lv, X.; Ke, L.; Qiang, M.; Li, T.; Jing, B.; Li, C. A Novel Fully Automated MRI-Based Deep-Learning Method for Segmentation of Nasopharyngeal Carcinoma Lymph Nodes. J. Med. Biol. Eng. 2022, 42, 604–612. [Google Scholar] [CrossRef]
  56. Zhong, Y.; Yang, Y.; Fang, Y.; Wang, J.; Hu, W. A Preliminary Experience of Implementing Deep-Learning Based Auto-Segmentation in Head and Neck Cancer: A Study on Real-World Clinical Cases. Front. Oncol. 2021, 11, 638197. [Google Scholar] [CrossRef]
  57. Zhang, W.; Chen, Z.; Liang, Z.; Hu, Y.; Zhou, Q. AccuLearning: A User-Friendly Deep Learning Auto-Segmentation Platform for Radiotherapy. Int. J. Radiat. Oncol. Biol. Phys. 2021, 111, e122. [Google Scholar] [CrossRef]
  58. Wang, D.; Gong, Z.; Zhang, Y.; Wang, S. Convolutional Neural Network Intelligent Segmentation Algorithm-Based Magnetic Resonance Imaging in Diagnosis of Nasopharyngeal Carcinoma Foci. Contrast Media Mol. Imaging 2021, 2021, 2033806. [Google Scholar] [CrossRef] [PubMed]
  59. Song, L.; Li, Y.; Dong, G.; Lambo, R.; Qin, W.; Wang, Y.; Zhang, G.; Liu, J.; Xie, Y. Artificial intelligence-based bone-enhanced magnetic resonance image-a computed tomography/magnetic resonance image composite image modality in nasopharyngeal carcinoma radiotherapy. Quant. Imaging Med. Surg. 2021, 11, 4709–4720. [Google Scholar] [CrossRef] [PubMed]
  60. Ma, X.; Chen, X.; Li, J.; Wang, Y.; Men, K.; Dai, J. MRI-Only Radiotherapy Planning for Nasopharyngeal Carcinoma Using Deep Learning. Front. Oncol. 2021, 11, 713617. [Google Scholar] [CrossRef] [PubMed]
  61. Luo, X.; Liao, W.; Chen, J.; Song, T.; Chen, Y.; Zhang, S.; Chen, N.; Wang, G.; Zhang, S. Efficient Semi-supervised Gross Target Volume of Nasopharyngeal Carcinoma Segmentation via Uncertainty Rectified Pyramid Consistency. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2021; de Bruijne, M., Cattin, P.C., Cotin, S., Padoy, N., Speidel, S., Zheng, Y., Essert, C., Eds.; MICCAI 2021. Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2021; Volume 12902. [Google Scholar]
  62. Lo Faso, E.A.; Gambino, O.; Pirrone, R. Head–Neck Cancer Delineation. Appl. Sci. 2021, 11, 2721. [Google Scholar] [CrossRef]
  63. Li, Y.; Han, G.; Liu, X. DCNet: Densely Connected Deep Convolutional Encoder-Decoder Network for Nasopharyngeal Carcinoma Segmentation. Sensors 2021, 21, 7877. [Google Scholar] [CrossRef] [PubMed]
  64. Bai, X.; Hu, Y.; Gong, G.; Yin, Y.; Xia, Y. A deep learning approach to segmentation of nasopharyngeal carcinoma using computed tomography. Biomed. Signal Process. Control. 2021, 64, 102246. [Google Scholar] [CrossRef]
65. Xue, X.; Qin, N.; Hao, X.; Shi, J.; Wu, A.; An, H.; Zhang, H.; Wu, A.; Yang, Y. Sequential and Iterative Auto-Segmentation of High-Risk Clinical Target Volume for Radiotherapy of Nasopharyngeal Carcinoma in Planning CT Images. Front. Oncol. 2020, 10, 1134.
66. Vrtovec, T.; Močnik, D.; Strojan, P.; Pernuš, F.; Ibragimov, B. Auto-segmentation of organs at risk for head and neck radiotherapy planning: From atlas-based to deep learning methods. Med. Phys. 2020, 47, e929–e950.
67. Li, Y.; Peng, H.; Dan, T.; Hu, Y.; Tao, G.; Cai, H. Coarse-to-fine Nasopharyngeal Carcinoma Segmentation in MRI via Multi-stage Rendering. In Proceedings of the 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Seoul, Republic of Korea, 16–19 December 2020; pp. 623–628.
68. Guo, Y.; Yang, Q.; Hu, W.; Zhang, Z.; Wang, J.; Hu, C. Automatic Segmentation of Nasopharyngeal Carcinoma on MR Images: A Single-Institution Experience. Int. J. Radiat. Oncol. Biol. Phys. 2020, 108, e776.
69. Guo, Y.; Qing, Y. PO-1743: Automatic segmentation of nasopharyngeal carcinoma: A solution for single institution. Radiother. Oncol. 2020, 152, S967–S968.
70. Guo, F.; Shi, C.; Li, X.; Wu, X.; Zhou, J.; Lv, J. Image segmentation of nasopharyngeal carcinoma using 3D CNN with long-range skip connection and multi-scale feature pyramid. Soft Comput. 2020, 24, 12671–12680.
71. Cai, M.; Yang, Q.; Guo, Y.; Zhang, Z.; Wang, J.; Hu, W.; Hu, C. Combining images and clinical diagnostic information to improve automatic segmentation of nasopharyngeal carcinoma tumors on MR images. Int. J. Radiat. Oncol. Biol. Phys. 2020, 108, e308–e309.
72. Xiangyu, E.; Hongmei, Y.; Weigang, H.; Jiazhou, W. PO-1003: A deep learning based auto-segmentation for GTVs on NPC MR images. Radiother. Oncol. 2019, 133, S553–S554.
73. Wong, L.M.; Ai, Q.; Shi, L.; King, A.D. The Proceedings of the 19th International Cancer Imaging Society Meeting and Annual Teaching Course. Cancer Imaging 2019, 19 (Suppl. S1), 62.
74. Ma, Z.; Zhou, S.; Wu, X.; Zhang, H.; Yan, W.; Sun, S.; Zhou, J. Nasopharyngeal carcinoma segmentation based on enhanced convolutional neural networks using multi-modal metric learning. Phys. Med. Biol. 2019, 64, 025005.
75. Huang, J.B.; Zhuo, E.; Li, H.; Liu, L.; Cai, H.; Ou, Y. Achieving Accurate Segmentation of Nasopharyngeal Carcinoma in MR Images Through Recurrent Attention. Lect. Notes Comput. Sci. 2019, 11768, 494–502.
76. Wang, Y.; Zu, C.; Hu, G.; Luo, Y.; Ma, Z.; He, K.; Wu, X.; Zhou, J. Automatic Tumor Segmentation with Deep Convolutional Neural Networks for Radiotherapy Applications. Neural Process. Lett. 2018, 48, 1323–1334.
77. Sun, Y.; Lin, L.; Dou, Q.; Chen, H.; Jin, Y.; Zhou, G.Q.; Tang, Y.; Chen, W.; Su, B.; Liu, F.; et al. Development and Validation of a Deep Learning Algorithm for Automated Delineation of Primary Tumor for Nasopharyngeal Carcinoma from Multimodal Magnetic Resonance Images. Int. J. Radiat. Oncol. Biol. Phys. 2018, 102, e330–e331.
78. Ma, Z.; Wu, X.; Sun, S.; Xia, C.; Yang, Z.; Li, S.; Zhou, J. A discriminative learning based approach for automated nasopharyngeal carcinoma segmentation leveraging multi-modality similarity metric learning. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 813–816.
79. Hu, K.; Liu, C.; Yu, X.; Zhang, J.; He, Y.; Zhu, H. A 2.5D Cancer Segmentation for MRI Images Based on U-Net. In Proceedings of the 2018 5th International Conference on Information Science and Control Engineering (ICISCE), Zhengzhou, China, 20–22 July 2018; pp. 6–10.
80. He, Y.; Yu, X.; Liu, C.; Zhang, J.; Hu, K.; Zhu, H.C. A 3D Dual Path U-Net of Cancer Segmentation Based on MRI. In Proceedings of the 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC), Chongqing, China, 27–29 June 2018; pp. 268–272.
81. Men, K.; Chen, X.; Zhang, Y.; Zhang, T.; Dai, J.; Yi, J.; Li, Y. Deep Deconvolutional Neural Network for Target Segmentation of Nasopharyngeal Cancer in Planning Computed Tomography Images. Front. Oncol. 2017, 7, 315.
82. Ma, Z.; Wu, X.; Zhou, J. Automatic nasopharyngeal carcinoma segmentation in MR images with convolutional neural networks. In Proceedings of the 2017 International Conference on the Frontiers and Advances in Data Science (FADS), Xi’an, China, 23–25 October 2017; pp. 147–150.
83. Zhang, J.; Li, B.; Qiu, Q.; Mo, H.; Tian, L. SICNet: Learning selective inter-slice context via Mask-Guided Self-knowledge distillation for NPC segmentation. J. Vis. Commun. Image Represent. 2024, 98, 104053.
84. Huang, J.; Yang, S.; Zou, L.; Chen, Y.; Yang, L.; Yao, B.; Huang, Z.; Zhong, Y.; Liu, Z.; Zhang, N. Quantitative pharmacokinetic parameter Ktrans map assists in regional segmentation of nasopharyngeal carcinoma in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). Biomed. Signal Process. Control 2023, 87, 105433.
85. Meng, D.; Li, S.; Sheng, B.; Wu, H.; Tian, S.; Ma, W.; Wang, G.; Yan, W. 3D reconstruction-oriented fully automatic multi-modal tumor segmentation by dual attention-guided VNet. Vis. Comput. 2023, 39, 3183–3196.
86. Luo, X.; Liao, W.; He, Y.; Tang, F.; Wu, M.; Shen, Y.; Huang, H.; Song, T.; Li, K.; Zhang, S.; et al. Deep learning-based accurate delineation of primary gross tumor volume of nasopharyngeal carcinoma on heterogeneous magnetic resonance imaging: A large-scale and multi-center study. Radiother. Oncol. 2023, 180, 109480.
87. Gu, R.; Wang, G.; Lu, J.; Zhang, J.; Lei, W.; Chen, Y.; Liao, W.; Zhang, S.; Li, K.; Metaxas, D.N.; et al. CDDSA: Contrastive domain disentanglement and style augmentation for generalizable medical image segmentation. Med. Image Anal. 2023, 89, 102904.
88. Zhang, J.; Gu, L.; Han, G.; Liu, X. AttR2U-Net: A Fully Automated Model for MRI Nasopharyngeal Carcinoma Segmentation Based on Spatial Attention and Residual Recurrent Convolution. Front. Oncol. 2022, 11, 816672.
89. Liu, Y.; Han, G.; Liu, X. Lightweight Compound Scaling Network for Nasopharyngeal Carcinoma Segmentation from MR Images. Sensors 2022, 22, 5875.
90. Li, Y.; Dan, T.; Li, H.; Chen, J.; Peng, H.; Liu, L.; Cai, H. NPCNet: Jointly Segment Primary Nasopharyngeal Carcinoma Tumors and Metastatic Lymph Nodes in MR Images. IEEE Trans. Med. Imaging 2022, 41, 1639–1650.
91. Wong, L.M.; Ai, Q.Y.H.; Poon, D.M.C.; Tong, M.; Ma, B.B.Y.; Hui, E.P.; Shi, L.; King, A.D. A convolutional neural network combined with positional and textural attention for the fully automatic delineation of primary nasopharyngeal carcinoma on non-contrast-enhanced MRI. Quant. Imaging Med. Surg. 2021, 11, 3932–3944.
92. Wong, L.M.; Ai, Q.Y.H.; Mo, F.K.F.; Poon, D.M.C.; King, A.D. Convolutional neural network in nasopharyngeal carcinoma: How good is automatic delineation for primary tumor on a non-contrast-enhanced fat-suppressed T2-weighted MRI? Jpn. J. Radiol. 2021, 39, 571–579.
93. Qi, Y.; Li, J.; Chen, H.; Guo, Y.; Yin, Y.; Gong, G.; Wang, L. Computer-aided diagnosis and regional segmentation of nasopharyngeal carcinoma based on multi-modality medical images. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 871–882.
94. Cai, M.; Wang, J.; Yang, Q.; Guo, Y.; Zhang, Z.; Ying, H.; Hu, W.; Hu, C. Combining Images and T-Staging Information to Improve the Automatic Segmentation of Nasopharyngeal Carcinoma Tumors in MR Images. IEEE Access 2021, 9, 21323–21331.
95. Ke, L.; Deng, Y.; Xia, W.; Qiang, M.; Chen, X.; Liu, K.; Jing, B.; He, C.; Xie, C.; Guo, X.; et al. Development of a self-constrained 3D DenseNet model in automatic detection and segmentation of nasopharyngeal carcinoma using magnetic resonance images. Oral Oncol. 2020, 110, 104862.
96. Lin, L.; Dou, Q.; Jin, Y.M.; Zhou, G.Q.; Tang, Y.Q.; Chen, W.L.; Su, B.A.; Liu, F.; Tao, C.J.; Jiang, N.; et al. Deep Learning for Automated Contouring of Primary Tumor Volumes by MRI for Nasopharyngeal Carcinoma. Radiology 2019, 291, 677–686.
97. Ma, Z.; Wu, X.; Song, Q.; Luo, Y.; Wang, Y.; Zhou, J. Automated nasopharyngeal carcinoma segmentation in magnetic resonance images by combination of convolutional neural networks and graph cut. Exp. Ther. Med. 2018, 16, 2511–2521.
98. Li, S.; Deng, Y.; Zhu, Z.; Hua, H.; Tao, Z. A Comprehensive Review on Radiomics and Deep Learning for Nasopharyngeal Carcinoma Imaging. Diagnostics 2021, 11, 1523.
99. Ng, W.T.; But, B.; Choi, H.C.W.; de Bree, R.; Lee, A.W.M.; Lee, V.H.F.; López, F.; Mäkitie, A.A.; Rodrigo, J.P.; Saba, N.F.; et al. Application of Artificial Intelligence for Nasopharyngeal Carcinoma Management—A Systematic Review. Cancer Manag. Res. 2022, 14, 339–366.
100. Wang, Z.; Fang, M.; Zhang, J.; Tang, L.; Zhong, L.; Li, H.; Cao, R.; Zhao, X.; Liu, S.; Zhang, R.; et al. Radiomics and Deep Learning in Nasopharyngeal Carcinoma: A Review. IEEE Rev. Biomed. Eng. 2024, 17, 118–135.
101. Song, Q.; Bai, J.; Han, D.; Bhatia, S.; Sun, W.; Rockey, W.; Bayouth, J.E.; Buatti, J.M.; Wu, X. Optimal co-segmentation of tumor in PET-CT images with context information. IEEE Trans. Med. Imaging 2013, 32, 1685–1697.
102. Isensee, F.; Jaeger, P.F.; Kohl, S.A.A.; Petersen, J.; Maier-Hein, K.H. nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 2021, 18, 203–211.
Figure 1. PRISMA flowchart for study selection.
Figure 2. Forest plot of Dice scores for deep learning algorithms in independent datasets [12,83,84,85,86,87,88,91,92,93,94,97].
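For readers who wish to see how the pooled estimate behind such a forest plot is derived, the sketch below applies one common random-effects approach, the DerSimonian–Laird estimator, to the study-level Dice scores reported in Table 3 for the twelve datasets cited above. The standard errors are hypothetical placeholders (per-study variances are not reproduced in this section), so the sketch illustrates the pooling arithmetic rather than the exact numbers behind Figure 2.

```python
# Illustrative random-effects pooling (DerSimonian-Laird).
# Dice means are taken from Table 3 for studies [83,84,85,86,87,88,91,92,93,94,12,97];
# the standard errors below are HYPOTHETICAL placeholders.
import numpy as np

dice = np.array([0.74, 0.66, 0.72, 0.88, 0.92, 0.82,
                 0.79, 0.73, 0.68, 0.85, 0.72, 0.85])
se = np.full_like(dice, 0.03)  # hypothetical per-study standard errors

w_fixed = 1.0 / se**2                                  # inverse-variance weights
mu_fixed = np.sum(w_fixed * dice) / np.sum(w_fixed)    # fixed-effect mean

# Cochran's Q and the between-study variance tau^2
q = np.sum(w_fixed * (dice - mu_fixed) ** 2)
df = len(dice) - 1
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights incorporate tau^2
w_random = 1.0 / (se**2 + tau2)
mu = np.sum(w_random * dice) / np.sum(w_random)
se_mu = np.sqrt(1.0 / np.sum(w_random))
ci = (mu - 1.96 * se_mu, mu + 1.96 * se_mu)

i2 = max(0.0, (q - df) / q) * 100  # I^2: share of variability from heterogeneity
print(f"pooled Dice = {mu:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), I^2 = {i2:.0f}%")
```

Here, τ² captures between-study variance and I² expresses the proportion of total variability attributable to heterogeneity rather than sampling error.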
Table 1. Basic characteristics.

| First Author | Study Design | Patients | Series (Train/Valid/Test) | Reference | Validation | Data Source | Indicator Standard |
|---|---|---|---|---|---|---|---|
| Zhang et al. (2024) [83] | Retrospective | 130 | 130 (90/15/25) | Manual | Train/Test | Guangdong Provincial People’s Hospital | Experienced clinician |
| Huang et al. (2024) [84] | Retrospective | 96 | 96 (76/10/10) | Manual | Train/Test | Cancer Hospital Chinese Academy of Medical Sciences, Shenzhen Hospital | Two radiologists |
| Meng et al. (2023) [85] | Retrospective | 161 | 161 (129/0/32) | Manual | Train/Test | Cancer Hospital | Radiation oncologists |
| Luo et al. (2023) [86] | Retrospective | 1057 | 1057 (600/259/198) | Manual | Train/Test | Southern Medical University, West China Hospital, Sichuan Provincial People’s Hospital, Anhui Provincial Hospital, Sichuan Cancer Hospital | Two oncologists |
| Gu et al. (2023) [87] | Retrospective | 189 | 189 (114/0/75) | Manual | Train/Test | Sichuan Provincial People’s Hospital, West China Hospital | Radiation oncologists |
| Zhang et al. (2022) [88] | Retrospective | 93 | 93 (75/9/9) | Manual | Train/Test | Sun Yat-sen University | NR |
| Liu et al. (2022) [89] | Retrospective | 92 | 92 (74/9/9) | Manual | Train/Test | Sun Yat-sen University | NR |
| Li et al. (2022) [90] | Retrospective | 754 | 754 (604/150/0) | Manual | Cross-validation | Sun Yat-sen University | Three radiologists |
| Wong et al. (2021) I [91] | Retrospective | 404 | 404 (303/101/0) | Manual | Cross-validation | Joint Chinese University of Hong Kong | Expert |
| Wong et al. (2021) II [92] | Retrospective | 201 | 201 (130/6/65) | Manual | Cross-validation | Joint Chinese University of Hong Kong | Expert |
| Qi et al. (2021) [93] | Retrospective | 149 | 149 (119/30/0) | Manual | Cross-validation | Shandong Cancer Hospital Affiliated to Shandong University | Experienced radiologists |
| Cai et al. (2021) [94] | Retrospective | 251 | 251 (226/25/0) | Manual | Cross-validation | Fudan University Shanghai Cancer Center | Radiation oncologist |
| Ye et al. (2020) [12] | Retrospective | 44 | 44 (40/4/0) | Manual | Cross-validation | Panyu Central Hospital | Radiologist |
| Ke et al. (2020) [95] | Retrospective | 4100 | 4100 (3285/411/404) | Manual | Train/Test | Sun Yat-sen University | Radiation oncologist |
| Lin et al. (2019) [96] | Retrospective | 1021 | 1021 (715/103/203) | Manual | Train/Test | Sun Yat-sen University | Radiation oncologist |
| Ma et al. (2018) [97] | Retrospective | 30 | 30 (29/1/0) | Manual | Cross-validation | West China Hospital | Radiation oncologist |
| Li et al. (2018) [11] | Retrospective | 29 | 29 (28/1/0) | Manual | Cross-validation | Sun Yat-sen University | Radiologists |

Abbreviations: NR, not recorded.
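The Validation column distinguishes two evaluation schemes: a held-out train/test split and cross-validation. The following minimal sketch, assuming a hypothetical 130-patient cohort and an illustrative 80/20 ratio, shows how both are typically constructed at the patient level, so that no patient's image slices leak between training and test sets:

```python
# Sketch of the two validation schemes in Table 1 (cohort size and
# split ratio are hypothetical, for illustration only).
import numpy as np
from sklearn.model_selection import KFold, train_test_split

patients = np.arange(130)  # hypothetical patient IDs

# Scheme 1: a fixed hold-out train/test split
train_ids, test_ids = train_test_split(patients, test_size=0.2, random_state=0)

# Scheme 2: 5-fold cross-validation; every patient is tested exactly once
for fold, (tr, te) in enumerate(
        KFold(n_splits=5, shuffle=True, random_state=0).split(patients)):
    print(f"fold {fold}: {len(tr)} train / {len(te)} test patients")
```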
Table 2. Characteristics of MRI.

| First Author | Tesla | Sequence | Hardware |
|---|---|---|---|
| Zhang et al. (2024) [83] | NR | T1c | NR |
| Huang et al. (2024) [84] | 3T | DCE, Ktrans | GE Discovery MR750w |
| Meng et al. (2023) [85] | NR | T2w | Siemens Magnetom Skyra |
| Luo et al. (2023) [86] | 1.5T/3T | T1c | GE, Siemens, Philips |
| Gu et al. (2023) [87] | NR | T1w, T1c, T1 water, T2 water | NR |
| Zhang et al. (2022) [88] | NR | T1c | Siemens Aera |
| Liu et al. (2022) [89] | NR | T1c | Siemens Aera |
| Li et al. (2022) [90] | 1.5T/3T | T1, T2, T1c | NR |
| Wong et al. (2021) I [91] | 3T | T2w | Philips Achieva TX |
| Wong et al. (2021) II [92] | 3T | T1w, fs-T2w, T1c, fs-ce-T1w | Philips Achieva TX |
| Qi et al. (2021) [93] | NR | T1, T2, T1c | NR |
| Cai et al. (2021) [94] | 1.5T | T1, T2, T1c | GE (Milwaukee) |
| Ye et al. (2020) [12] | 1.5T | T1w, T2w | Siemens Avanto |
| Ke et al. (2020) [95] | 3T | T1c | Siemens Trio Tim; Philips Achieva; GE Discovery MR750; GE Discovery MR750w |
| Lin et al. (2019) [96] | NR | T1, T2, T1c, T1w-fs | NR |
| Ma et al. (2018) [97] | 3T | T1w | Philips Achieva |
| Li et al. (2018) [11] | 3T | DCE | Siemens Magnetom Trio |

Abbreviations: NR, not recorded; DCE, dynamic contrast-enhanced; fs, fat-suppressed; GE, General Electric.
Table 3. Characteristics and performance of preprocessing techniques and deep learning algorithms.

| First Author | Intensity Normalization | Resolution Adjustment | Image Augmentation | Image Cropping | Training Size | Input Dimension | Algorithms | Dice Score |
|---|---|---|---|---|---|---|---|---|
| Zhang et al. (2024) [83] | Yes | Yes | Yes | Yes | 90 | 2D/3D | SICNet | 0.74 |
| Huang et al. (2024) [84] | Yes | No | Yes | Yes | 76 | 2D | ResU-Net | 0.66 |
| Meng et al. (2023) [85] | Yes | Yes | Yes | Yes | 129 | 3D | Attention-guided VNet | 0.72 |
| Luo et al. (2023) [86] | Yes | Yes | Yes | Yes | 600 | 3D | nnU-Net | 0.88 |
| Gu et al. (2023) [87] | Yes | No | Yes | Yes | 114 | 2D | CDDSA | 0.92 |
| Zhang et al. (2022) [88] | No | No | Yes | Yes | 75 | 2D | AttR2U-Net | 0.82 |
| Liu et al. (2022) [89] | Yes | No | Yes | Yes | 74 | 2D | LW-UNet-3 | 0.81 |
| Li et al. (2022) [90] | Yes | No | Yes | No | 604 | 2D | NPCNet | 0.73 |
| Wong et al. (2021) I [91] | Yes | No | Yes | Yes | 303 | 2D | CNN | 0.79 |
| Wong et al. (2021) II [92] | No | No | Yes | No | 130 | 2D | U-Net | 0.73 |
| Qi et al. (2021) [93] | Yes | No | Yes | Yes | 149 | 3D | MMFNet | 0.68 |
| Cai et al. (2021) [94] | No | No | No | No | 226 | 2D | T-U-Net | 0.85 |
| Ye et al. (2020) [12] | Yes | No | Yes | Yes | 40 | 2D | DEU | 0.72 |
| Ke et al. (2020) [95] | No | No | No | No | 3285 | 3D | SC-DenseNet | 0.77 |
| Lin et al. (2019) [96] | Yes | No | Yes | No | 715 | 3D | VoxResNet | 0.79 |
| Ma et al. (2018) [97] | Yes | Yes | No | No | 29 | 2D/3D | CNN | 0.85 |
| Li et al. (2018) [11] | Yes | No | Yes | Yes | 28 | 2D | CNN | 0.89 |
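The Dice score in the final column is the overlap metric Dice(A, B) = 2|A ∩ B| / (|A| + |B|) between a predicted mask A and a reference mask B. The following minimal example, using toy 4 × 4 masks rather than data from any included study, shows the computation:

```python
# Worked example of the Dice similarity coefficient reported in Table 3.
# The 4x4 masks below are toy data, for illustration only.
import numpy as np

def dice_score(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom else 1.0

ref = np.zeros((4, 4), dtype=int)
ref[1:3, 1:3] = 1           # reference tumor: a 2x2 block (4 pixels)
pred = np.zeros((4, 4), dtype=int)
pred[1:3, 1:4] = 1          # prediction overshoots by one column (6 pixels)

print(dice_score(pred, ref))  # 2*4 / (6 + 4) = 0.8
```

A score of 1 indicates perfect overlap with the manual delineation and 0 indicates no overlap.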
