AI as a Tool to Improve Hybrid Imaging in Cancer—2nd Edition

A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Machine Learning and Artificial Intelligence in Diagnostics".

Deadline for manuscript submissions: closed (31 October 2024) | Viewed by 9898

Special Issue Editor


Dr. Barbara Malene Fischer
Guest Editor
1. School of Biomedical Engineering & Imaging Sciences, King's College London, London WC2R 2LS, UK
2. Rigshospitalet, Department of Clinical Physiology and Nuclear Medicine, Blegdamsvej 9, 2100 Copenhagen, Denmark
3. Department of Clinical Medicine, University of Copenhagen, 2100 Copenhagen, Denmark
Interests: hybrid imaging; PET/CT; PET/MR; oncology

Special Issue Information

Dear Colleagues, 

Imaging plays a pivotal role in the treatment of patients with cancer, with hybrid imaging providing a key representation of the phenotypic presentation of the disease, its stage, and prognosis, as well as the characterisation of the tumour. A prerequisite for continued improvement in the treatment of cancer patients is the ability to stratify patients into ever smaller subpopulations, enabling interventions to be tailored to the individual patient while balancing the potential benefit against the risk and severity of side effects. Recent developments in AI towards deep learning algorithms that learn from examples rather than from rule-based logic have enabled studies demonstrating the potential predictive power of data-driven stratification that takes hundreds of variables into account. These new analytical methods allow us to harvest information that was previously inaccessible or poorly understood, and they offer new ways to improve image acquisition, reconstruction, and clinical workflows. This Special Issue presents up-to-date knowledge and examples of the use of AI in a wide range of applications within hybrid imaging, including tumour classification, segmentation, and multimodal data analysis, as well as the pre-processing and reconstruction of images.

Dr. Barbara Malene Fischer
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • deep learning
  • machine learning
  • hybrid imaging
  • PET/CT
  • PET/MR
  • SPECT/CT
  • multimodal imaging
  • cancer
  • oncology
  • tumour segmentation
  • tumour characterisation
  • prediction
  • multimodal data analysis
  • image reconstruction

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

15 pages, 3886 KiB  
Article
Artificial Intelligence-Enhanced Quantitative Ultrasound for Breast Cancer: Pilot Study on Quantitative Parameters and Biopsy Outcomes
by Hyuksool Kwon, Seokhwan Oh, Myeong-Gee Kim, Youngmin Kim, Guil Jung, Hyeon-Jik Lee, Sang-Yun Kim and Hyeon-Min Bae
Diagnostics 2024, 14(4), 419; https://doi.org/10.3390/diagnostics14040419 - 14 Feb 2024
Cited by 1 | Viewed by 1652
Abstract
Traditional B-mode ultrasound has difficulties distinguishing benign from malignant breast lesions, and Quantitative Ultrasound (QUS) may offer advantages. We examined the potential of a QUS imaging system, using parameters such as the Attenuation Coefficient (AC), Speed of Sound (SoS), Effective Scatterer Diameter (ESD), and Effective Scatterer Concentration (ESC) to enhance diagnostic accuracy. B-mode images and radiofrequency signals were gathered from breast lesions. These parameters were processed and analyzed by a QUS system trained on a simulated acoustic dataset and equipped with an encoder-decoder structure. Fifty-seven patients were enrolled over six months, with biopsies serving as the diagnostic ground truth. AC, SoS, and ESD showed significant differences between benign and malignant lesions (p < 0.05), but ESC did not. A logistic regression model was developed, demonstrating an area under the receiver operating characteristic curve of 0.90 (95% CI: 0.78, 0.96) for distinguishing between benign and malignant lesions. In conclusion, the QUS system shows promise in enhancing diagnostic accuracy by leveraging AC, SoS, and ESD. Further studies are needed to validate these findings and optimize the system for clinical use.
(This article belongs to the Special Issue AI as a Tool to Improve Hybrid Imaging in Cancer—2nd Edition)
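As a rough illustration of the classification step described in this abstract, the sketch below fits a logistic regression to the three discriminative QUS parameters (AC, SoS, and ESD) and reports the ROC AUC. The feature values, labels, and train/test split are placeholders for illustration only; this is not the authors' implementation or data.

```python
# Illustrative sketch (not the authors' code): logistic regression on
# QUS parameters (AC, SoS, ESD) with ROC-AUC evaluation, as in the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Placeholder feature matrix: one row per lesion, columns = [AC, SoS, ESD].
X = rng.normal(size=(57, 3))
# Placeholder labels standing in for biopsy ground truth: 1 = malignant, 0 = benign.
y = rng.integers(0, 2, size=57)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

scaler = StandardScaler().fit(X_train)
model = LogisticRegression().fit(scaler.transform(X_train), y_train)

probs = model.predict_proba(scaler.transform(X_test))[:, 1]
print("ROC AUC:", roc_auc_score(y_test, probs))
```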

16 pages, 6798 KiB  
Article
Brain Tumor Class Detection in Flair/T2 Modality MRI Slices Using Elephant-Herd Algorithm Optimized Features
by Venkatesan Rajinikanth, P. M. Durai Raj Vincent, C. N. Gnanaprakasam, Kathiravan Srinivasan and Chuan-Yu Chang
Diagnostics 2023, 13(11), 1832; https://doi.org/10.3390/diagnostics13111832 - 23 May 2023
Cited by 1 | Viewed by 2019
Abstract
Advances in science and technology have driven several improvements in computing facilities, including the implementation of automation in multi-specialty hospitals. This research aims to develop an efficient deep-learning-based scheme for detecting brain tumors (BT) in FLAIR- and T2-modality magnetic-resonance-imaging (MRI) slices. Axial-plane brain MRI slices are used to test and verify the scheme, and its reliability is also verified on clinically collected MRI slices. The proposed scheme involves the following stages: (i) pre-processing the raw MRI image, (ii) deep-feature extraction using pretrained models, (iii) watershed-algorithm-based BT segmentation and mining of shape features, (iv) feature optimization using the elephant-herding algorithm (EHA), and (v) binary classification and verification using three-fold cross-validation. The BT-classification task is performed using (a) individual features, (b) dual deep features, and (c) integrated features, with each experiment conducted separately on the chosen BRATS and TCIA benchmark MRI slices. The results indicate that the integrated-feature scheme achieves a classification accuracy of 99.6667% when a support-vector-machine (SVM) classifier is considered. Further, the performance of this scheme is verified using noise-attacked MRI slices, and better classification results are achieved.
(This article belongs to the Special Issue AI as a Tool to Improve Hybrid Imaging in Cancer—2nd Edition)
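The final stage of the pipeline above (step v) can be approximated by the brief sketch below: an SVM evaluated with three-fold cross-validation on a subset of features. The fused feature matrix, the labels, and the selection mask standing in for the elephant-herding optimization are placeholder assumptions; the EHA itself and the earlier stages are not reproduced here.

```python
# Rough sketch of the final stage only: SVM classification with three-fold
# cross-validation on a feature subset. The feature matrix, labels, and the
# EHA-derived selection mask are placeholders, not the paper's data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(1)

features = rng.normal(size=(200, 512))   # fused deep + shape features per slice
labels = rng.integers(0, 2, size=200)    # 1 = tumour, 0 = normal (placeholder)

# Binary mask standing in for the subset chosen by elephant-herding optimisation.
selected = rng.random(512) > 0.5
X = features[:, selected]

svm = SVC(kernel="rbf", C=1.0, gamma="scale")
cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=1)
scores = cross_val_score(svm, X, labels, cv=cv, scoring="accuracy")
print("3-fold accuracy: %.4f +/- %.4f" % (scores.mean(), scores.std()))
```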

16 pages, 3559 KiB  
Article
Applying Deep Transfer Learning to Assess the Impact of Imaging Modalities on Colon Cancer Detection
by Wael Alhazmi and Turki Turki
Diagnostics 2023, 13(10), 1721; https://doi.org/10.3390/diagnostics13101721 - 12 May 2023
Cited by 4 | Viewed by 2130
Abstract
The use of medical images for colon cancer detection is an important problem. Because the performance of data-driven methods relies heavily on the images produced by a given imaging method, research organizations need to know which imaging modalities are most effective for detecting colon cancer when coupled with deep learning (DL). Unlike previous studies, this study aims to comprehensively report the performance obtained when various imaging modalities are coupled with different DL models in the transfer learning (TL) setting, in order to identify the best overall imaging modality and DL model for detecting colon cancer. We therefore utilized three imaging modalities, namely computed tomography, colonoscopy, and histology, with five DL architectures: VGG16, VGG19, ResNet152V2, MobileNetV2, and DenseNet201. We assessed the DL models on an NVIDIA GeForce RTX 3080 Laptop GPU (16 GB GDDR6 VRAM) using 5400 processed images per modality, divided equally between normal colons and colons with cancer. Comparing the imaging modalities across the five DL models presented in this study and twenty-six ensemble DL models, the experimental results show that the colonoscopy modality coupled with the DenseNet201 model under the TL setting outperforms all other combinations, achieving the highest average accuracy of 99.1%, with an AUC, precision, and F1 of 99.1%, 99.8%, and 99.1%, respectively.
(This article belongs to the Special Issue AI as a Tool to Improve Hybrid Imaging in Cancer—2nd Edition)
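To illustrate the transfer-learning setting compared in this study, the sketch below assembles a DenseNet201 backbone pretrained on ImageNet with a frozen base and a small binary head in Keras. The input size, head layout, optimizer, and the hypothetical train_ds/val_ds datasets are assumptions made for illustration, not the authors' configuration.

```python
# Minimal transfer-learning sketch (assumed configuration, not the paper's):
# DenseNet201 pretrained on ImageNet, frozen, with a new binary head for
# normal-vs-cancer classification of colonoscopy images.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet201

base = DenseNet201(include_top=False, weights="imagenet",
                   input_shape=(224, 224, 3))
base.trainable = False               # freeze pretrained convolutional features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),   # normal vs. cancer
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])

# `train_ds` and `val_ds` are hypothetical tf.data.Dataset objects of
# preprocessed 224x224 RGB images with binary labels.
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```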

23 pages, 6415 KiB  
Article
DeepTumor: Framework for Brain MR Image Classification, Segmentation and Tumor Detection
by Ghazanfar Latif
Diagnostics 2022, 12(11), 2888; https://doi.org/10.3390/diagnostics12112888 - 21 Nov 2022
Cited by 18 | Viewed by 3008
Abstract
Proper segmentation of a brain tumor from the image is important for both patients and medical personnel due to the sensitivity of the human brain; surgical intervention requires doctors to be extremely cautious and precise in targeting the required portion of the brain. Furthermore, the segmentation process is also important for multi-class tumor classification. This work contributes to three main areas of brain MR image processing for classification and segmentation: brain MR image classification, tumor region segmentation, and tumor classification. A framework named DeepTumor is presented for multistage, multiclass glioma tumor classification into four classes: edema, necrosis, enhancing, and non-enhancing. For binary brain MR image classification (tumorous vs. non-tumorous), two deep Convolutional Neural Network (CNN) models are proposed: a 9-layer model with a total of 217,954 trainable parameters and an improved 10-layer model with a total of 80,243 trainable parameters. In the second stage, an enhanced Fuzzy C-means (FCM)-based technique is proposed for tumor segmentation in brain MR images. In the final stage, an enhanced CNN model with 11 hidden layers and a total of 241,624 trainable parameters is proposed for classifying the segmented tumor region into the four glioma tumor classes. The experiments are performed using the BraTS MRI dataset, and the results of the proposed CNN models for binary and multiclass tumor classification are compared with existing CNN models such as LeNet, AlexNet, and GoogLeNet, as well as with the latest literature.
(This article belongs to the Special Issue AI as a Tool to Improve Hybrid Imaging in Cancer—2nd Edition)
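The second stage of the framework, tumor segmentation with an enhanced Fuzzy C-means technique, can be approximated by the plain FCM sketch below. The cluster count, the random placeholder slice, and the rule of taking the brightest cluster as the tumor candidate are illustrative assumptions; the paper's enhancements and CNN stages are not reproduced.

```python
# Plain fuzzy c-means on voxel intensities as a rough stand-in for the
# enhanced FCM segmentation stage described in the abstract.
import numpy as np

def fuzzy_cmeans(x, n_clusters=4, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Cluster a 1-D intensity vector; returns cluster centres and memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, x.size))
    u /= u.sum(axis=0, keepdims=True)          # memberships sum to 1 per voxel
    for _ in range(max_iter):
        um = u ** m
        centres = (um @ x) / um.sum(axis=1)    # fuzzily weighted cluster centres
        dist = np.abs(x[None, :] - centres[:, None]) + 1e-9
        new_u = dist ** (-2.0 / (m - 1.0))
        new_u /= new_u.sum(axis=0, keepdims=True)
        if np.abs(new_u - u).max() < tol:
            u = new_u
            break
        u = new_u
    return centres, u

# Placeholder slice; a real FLAIR/T2 slice would be loaded and normalised instead.
slice_2d = np.random.default_rng(1).random((240, 240))
centres, u = fuzzy_cmeans(slice_2d.ravel())
labels = u.argmax(axis=0).reshape(slice_2d.shape)
tumour_mask = labels == centres.argmax()       # brightest cluster as candidate
print("candidate tumour voxels:", int(tumour_mask.sum()))
```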
