
Deep Neural Networks in Medical Imaging

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (20 February 2023) | Viewed by 48444

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Guest Editor
Department of Automation and Information Technology, Transilvania University of Brașov, 500174 Brașov, Romania
Interests: cardiovascular imaging; multi-task learning; privacy preserving learning; uncertainty quantification; robustness and out-of-distribution detection

Guest Editor
Department of Automation and Information Technology, Transilvania University of Brașov, 500174 Brașov, Romania
Interests: privacy preserving learning; uncertainty quantification; robustness and out-of-distribution detection

Guest Editor
Department of Automation and Information Technology, Transilvania University of Brașov, 500174 Brașov, Romania
Interests: AI for the early detection, prediction, and diagnosis of diseases; image reconstruction; privacy-preserving AI approaches

Special Issue Information

Dear Colleagues,

Medical Imaging plays a key role in disease management, from baseline risk assessment through diagnosis, staging, therapy planning, therapy delivery, and follow-up. Each type of disease has driven the development of more advanced imaging methods and modalities that help clinicians address the specific challenges of analyzing the underlying disease mechanisms. Imaging data is one of the most important sources of evidence for clinical analysis and medical intervention, as it accounts for about 90% of all healthcare data. Researchers have been actively pursuing the development of advanced image analysis algorithms, some of which are routinely used in clinical practice. These developments were driven by the need for a comprehensive quantification of structure and function across several imaging modalities, such as Computed Tomography (CT), X-ray Radiography, Magnetic Resonance Imaging (MRI), Ultrasound, Nuclear Medicine Imaging, and Digital Pathology.

Given the availability of unprecedented data storage capacity and computational power, deep learning has become the state-of-the-art machine learning technique, delivering exceptional performance at learning patterns in medical images and showing great promise for supporting physicians in clinical decision-making. Previously reported deep learning studies cover various types of problems (e.g., classification, detection, and segmentation) for different types of structures (e.g., landmarks, lesions, organs) in diverse anatomical application areas.

The aim of this Special Issue of Applied Sciences is to present and highlight novel methods, architectures, techniques, and applications of deep learning in medical imaging.

Prof. Dr. Lucian Mihai Itu
Prof. Dr. Constantin Suciu
Dr. Anamaria Vizitiu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image reconstruction
  • image enhancement
  • segmentation
  • registration
  • computer aided detection
  • landmark detection
  • image or view recognition
  • automated report generation
  • multi-task learning
  • transfer learning
  • generative learning
  • self-supervised learning
  • semi-supervised learning
  • weakly supervised learning
  • unsupervised learning
  • federated learning
  • privacy preserving learning
  • explainability and interpretability
  • robustness and out-of-distribution detection
  • uncertainty quantification

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (15 papers)


Editorial


6 pages, 204 KiB  
Editorial
Deep Neural Networks in Medical Imaging: Privacy Preservation, Image Generation and Applications
by Diana Ioana Stoian, Horia Andrei Leonte, Anamaria Vizitiu, Constantin Suciu and Lucian Mihai Itu
Appl. Sci. 2023, 13(21), 11668; https://doi.org/10.3390/app132111668 - 25 Oct 2023
Viewed by 1062
Abstract
Medical Imaging plays a key role in disease management, starting from baseline risk assessment, diagnosis, staging, therapy planning, therapy delivery, and follow-up [...] Full article
(This article belongs to the Special Issue Deep Neural Networks in Medical Imaging)

Research


17 pages, 2838 KiB  
Article
Hybrid Classifier-Based Federated Learning in Health Service Providers for Cardiovascular Disease Prediction
by Muhammad Mateen Yaqoob, Muhammad Nazir, Muhammad Amir Khan, Sajida Qureshi and Amal Al-Rasheed
Appl. Sci. 2023, 13(3), 1911; https://doi.org/10.3390/app13031911 - 1 Feb 2023
Cited by 31 | Viewed by 4142
Abstract
One of the deadliest diseases, heart disease claims millions of lives every year worldwide. The biomedical data collected by health service providers (HSPs) contain private information about patients and are subject to general privacy concerns, and data sharing is restricted under global privacy laws. Furthermore, the sharing and collection of biomedical data incur a significant network communication cost and lead to delayed heart disease prediction. To address the training latency, communication cost, and single point of failure, we propose a hybrid framework at the HSP client end, consisting of modified artificial bee colony optimization with a support vector machine (MABC-SVM) for optimal feature selection and classification of heart disease. For the HSP server, we propose federated matched averaging to overcome privacy issues. We tested and evaluated the proposed technique and compared it with standard federated learning techniques on the combined cardiovascular disease dataset. Our experimental results show that the proposed hybrid technique improves prediction accuracy by 1.5%, achieves a 1.6% lower classification error, and requires 17.7% fewer rounds to reach maximum accuracy. Full article
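The server-side aggregation described in this abstract can be illustrated with a plain FedAvg-style weighted average of client parameters. Note that this is a minimal sketch, not the paper's method: federated matched averaging additionally matches neurons across clients before averaging, which is omitted here, and all names and shapes are illustrative.

```python
def federated_average(client_weights, client_sizes):
    """Aggregate client model parameters into a global model by a
    sample-size-weighted average (FedAvg-style sketch).

    client_weights: list of dicts, parameter name -> flat list of floats
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    averaged = {}
    for name in client_weights[0]:
        dim = len(client_weights[0][name])
        averaged[name] = [
            sum(w[name][i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)
        ]
    return averaged
```

Clients with more local data pull the global parameters toward their own values, which is why the average is weighted by `client_sizes` rather than uniform.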
(This article belongs to the Special Issue Deep Neural Networks in Medical Imaging)

16 pages, 1828 KiB  
Article
Modified Artificial Bee Colony Based Feature Optimized Federated Learning for Heart Disease Diagnosis in Healthcare
by Muhammad Mateen Yaqoob, Muhammad Nazir, Abdullah Yousafzai, Muhammad Amir Khan, Asad Ali Shaikh, Abeer D. Algarni and Hela Elmannai
Appl. Sci. 2022, 12(23), 12080; https://doi.org/10.3390/app122312080 - 25 Nov 2022
Cited by 25 | Viewed by 2947
Abstract
Heart disease is one of the lethal diseases causing millions of fatalities every year. Internet of Medical Things (IoMT)-based healthcare can effectively reduce the death rate through the early diagnosis and detection of disease. However, the biomedical data collected using IoMT contain personalized information about the patient and raise serious privacy concerns. To address these concerns, several data protection laws have been adopted internationally, and these laws pose a major challenge for traditional machine learning techniques. In this paper, we propose a framework based on federated matched averaging with a modified Artificial Bee Colony (M-ABC) optimization algorithm to overcome privacy issues and to improve the diagnosis method for the prediction of heart disease. The proposed technique improves prediction accuracy, classification error, and communication efficiency compared to state-of-the-art federated learning algorithms on a real-world heart disease dataset. Full article
(This article belongs to the Special Issue Deep Neural Networks in Medical Imaging)

15 pages, 2545 KiB  
Article
Improving Medical X-ray Report Generation by Using Knowledge Graph
by Dehai Zhang, Anquan Ren, Jiashu Liang, Qing Liu, Haoxing Wang and Yu Ma
Appl. Sci. 2022, 12(21), 11111; https://doi.org/10.3390/app122111111 - 2 Nov 2022
Cited by 8 | Viewed by 4049
Abstract
In clinical diagnosis, radiological reports are essential to guide the patient's treatment, yet writing them is a critical and time-consuming task for radiologists. Existing deep learning methods often ignore the interplay between medical findings, which may be a bottleneck limiting the quality of generated radiology reports. Our paper focuses on the automatic generation of medical reports from input chest X-ray images. In this work, we mine the associations between medical findings in the given texts and construct a knowledge graph based on these associations. The patient's chest X-ray image and clinical history file are used as input to extract image–text hybrid features. These features are then combined with the adjacency matrix of the knowledge graph, and a graph neural network aggregates and transfers information between nodes to generate situational representations of diseases enriched with prior knowledge. These representations are fed into the generator for self-supervised learning to generate radiology reports. We evaluate the performance of the proposed method using natural language generation and clinical efficacy metrics on two public datasets. Our experiments show that our method outperforms state-of-the-art methods with the help of a knowledge graph built from the patient's prior knowledge. Full article
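The graph-aggregation step this abstract describes can be sketched as a single graph-convolution layer over a dense adjacency matrix: each node averages its neighbors' features, then a shared linear map with a ReLU is applied. This is a generic minimal sketch in plain Python; the paper's actual network, feature dimensions, and normalization scheme are not reproduced here.

```python
def gcn_layer(adjacency, features, weight):
    """One graph-convolution step on a dense graph.

    adjacency: n x n list of lists (0/1 edges)
    features:  n x d_in node feature matrix
    weight:    d_in x d_out shared linear map
    Each node averages its neighbors' features, then ReLU(linear map).
    """
    n = len(adjacency)
    in_dim = len(features[0])
    out_dim = len(weight[0])
    result = []
    for i in range(n):
        degree = sum(adjacency[i]) or 1  # avoid division by zero
        aggregated = [
            sum(adjacency[i][j] * features[j][k] for j in range(n)) / degree
            for k in range(in_dim)
        ]
        result.append([
            max(0.0, sum(aggregated[k] * weight[k][c] for k in range(in_dim)))
            for c in range(out_dim)
        ])
    return result
```

Stacking several such layers lets information from a disease node reach related findings a few hops away, which is how prior knowledge in the graph can enrich each node's representation.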
(This article belongs to the Special Issue Deep Neural Networks in Medical Imaging)

11 pages, 2958 KiB  
Article
One-Step Enhancer: Deblurring and Denoising of OCT Images
by Shunlei Li, Muhammad Adeel Azam, Ajay Gunalan and Leonardo S. Mattos
Appl. Sci. 2022, 12(19), 10092; https://doi.org/10.3390/app121910092 - 7 Oct 2022
Cited by 2 | Viewed by 2083
Abstract
Optical coherence tomography (OCT) is a rapidly evolving imaging technology that combines a broadband, low-coherence light source with interferometry and signal processing to produce high-resolution images of living tissues. However, the speckle noise introduced by low-coherence interferometry and the blur caused by device motion significantly degrade the quality of OCT images. Convolutional neural networks (CNNs) are a potential solution to these issues and can enhance OCT image quality. However, training such networks with traditional supervised learning methods is impractical due to the lack of clean ground truth images. Consequently, this research proposes an unsupervised learning method for OCT image enhancement, termed the one-step enhancer (OSE), which performs denoising and deblurring in a single step using a generative adversarial network (GAN). Encoders disentangle the raw images into a content domain, a blur domain, and a noise domain to extract features, from which the generator produces clean images. A KL divergence loss is employed to regularize the distribution of the retrieved blur characteristics, while noise patches are enforced to promote more accurate disentanglement. Used jointly, these strategies considerably increase the effectiveness of GAN training for OCT image enhancement. Both quantitative and qualitative visual findings demonstrate that the proposed method is effective for OCT image denoising and deblurring. These results are significant not only for providing an enhanced visual experience for clinicians but also for supplying good-quality data for OCT-guided operations, e.g., the development of robust, reliable, and accurate autonomous OCT-guided surgical robotic systems. Full article
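The KL divergence regularizer mentioned in the abstract, when applied to a diagonal-Gaussian latent code against a standard normal prior, has the usual closed form. A minimal sketch follows; the latent dimensionality and the (mean, log-variance) parameterization are illustrative, not taken from the paper.

```python
import math

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, 1) ) summed over latent dimensions,
    the closed-form term commonly used to keep encoded features close
    to a standard normal distribution.

    mu:      per-dimension means
    log_var: per-dimension log-variances (log sigma^2)
    """
    return sum(
        0.5 * (math.exp(lv) + m * m - 1.0 - lv)
        for m, lv in zip(mu, log_var)
    )
```

The term is zero exactly when the encoded distribution matches the prior (mu = 0, sigma = 1) and grows as the code drifts away, which is what constrains the range of the retrieved blur features during training.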
(This article belongs to the Special Issue Deep Neural Networks in Medical Imaging)

20 pages, 4919 KiB  
Article
GAN-TL: Generative Adversarial Networks with Transfer Learning for MRI Reconstruction
by Muhammad Yaqub, Feng Jinchao, Shahzad Ahmed, Kaleem Arshid, Muhammad Atif Bilal, Muhammad Pervez Akhter and Muhammad Sultan Zia
Appl. Sci. 2022, 12(17), 8841; https://doi.org/10.3390/app12178841 - 2 Sep 2022
Cited by 15 | Viewed by 3913
Abstract
Generative adversarial networks (GANs), which are fueled by deep learning, are an efficient technique for image reconstruction from under-sampled MR data. In most cases, the reconstruction performance of a particular model can only be improved by using a substantial proportion of the training data. However, gathering tens of thousands of raw patient datasets for training a model in actual clinical applications is difficult, because retaining k-space data is not customary in the clinical process. Therefore, it is imperative to increase, as quickly as possible, the generalizability of a network that was trained on a small number of samples. This research explored two unique applications based on deep learning-based GANs and transfer learning. For MRI reconstruction of brain and knee imaging, the proposed method outperforms current techniques in terms of peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). Transfer learning with a smaller number of training cases produced superior results for both brain and knee, at acceleration factor (AF) 2 (brain: PSNR 39.33, SSIM 0.97; knee: PSNR 35.48, SSIM 0.90) and AF 4 (brain: PSNR 38.13, SSIM 0.95; knee: PSNR 33.95, SSIM 0.86). The described approach would make it easier to apply future models for MRI reconstruction without necessitating the acquisition of vast imaging datasets. Full article
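PSNR, one of the two metrics reported above, is computed directly from the mean squared error between a reference image and its reconstruction. A minimal sketch over flat pixel lists; the image representation and the `max_value` normalization are illustrative choices, not the paper's evaluation code.

```python
import math

def psnr(reference, reconstruction, max_value=1.0):
    """Peak signal-to-noise ratio (in dB) between two equally sized
    images given as flat lists of pixel intensities in [0, max_value]."""
    mse = sum((r - x) ** 2 for r, x in zip(reference, reconstruction)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_value ** 2 / mse)
```

Higher values indicate a closer reconstruction; a gap of roughly 1 dB between the AF 2 and AF 4 brain results above corresponds to a noticeably larger mean squared error at the higher acceleration factor.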
(This article belongs to the Special Issue Deep Neural Networks in Medical Imaging)

12 pages, 2651 KiB  
Article
CoroNet: Deep Neural Network-Based End-to-End Training for Breast Cancer Diagnosis
by Nada Mobark, Safwat Hamad and S. Z. Rida
Appl. Sci. 2022, 12(14), 7080; https://doi.org/10.3390/app12147080 - 13 Jul 2022
Cited by 20 | Viewed by 2789
Abstract
In 2020, according to publications of both the Global Cancer Observatory (GCO) and the World Health Organization (WHO), breast cancer (BC) was one of the most prevalent cancers in women worldwide, being diagnosed in almost 47 of every 100,000 females. Moreover, BC accounts for 38.8% of cancers among Egyptian women. Recent deep learning developments have shown the widespread use of deep convolutional neural networks (CNNs) for analyzing medical images. Unlike randomly initialized models, CNN models pre-trained on a natural image database (ImageNet) can be successfully fine-tuned to obtain improved results. To conduct the automatic detection of BC on the CBIS-DDSM dataset, a CNN model, namely CoroNet, is proposed. It relies on the Xception architecture, which has been pre-trained on the ImageNet dataset, and is fully trained on whole-image BC mammograms. The convolutional design method is used in this paper, since it performs better than other methods. On the prepared dataset, CoroNet was trained and tested. Experiments show that it attains an overall accuracy of 94.92% in the four-class classification (benign mass vs. malignant mass, and benign calcification vs. malignant calcification), and a classification accuracy of 88.67% for the two-class case (calcifications vs. masses). The paper concludes that these promising outcomes could be further improved as more training data become available. Full article
(This article belongs to the Special Issue Deep Neural Networks in Medical Imaging)

23 pages, 2065 KiB  
Article
Towards a Deep-Learning Approach for Prediction of Fractional Flow Reserve from Optical Coherence Tomography
by Cosmin-Andrei Hatfaludi, Irina-Andra Tache, Costin Florian Ciușdel, Andrei Puiu, Diana Stoian, Lucian Mihai Itu, Lucian Calmac, Nicoleta-Monica Popa-Fotea, Vlad Bataila and Alexandru Scafa-Udriste
Appl. Sci. 2022, 12(14), 6964; https://doi.org/10.3390/app12146964 - 9 Jul 2022
Cited by 5 | Viewed by 2494
Abstract
Cardiovascular disease (CVD) is the number one cause of death worldwide, and coronary artery disease (CAD) is the most prevalent CVD, accounting for 42% of these deaths. In view of the limitations of the anatomical evaluation of CAD, Fractional Flow Reserve (FFR) has been introduced as a functional diagnostic index. Herein, we evaluate the feasibility of using deep neural networks (DNN) in an ensemble approach to predict the invasively measured FFR from raw anatomical information that is extracted from optical coherence tomography (OCT). We evaluate the performance of various DNN architectures under three formulations, namely regression, standard classification, and few-shot learning (FSL), on a dataset containing 102 intermediate lesions from 80 patients. The FSL approach based on a convolutional neural network leads to slightly better results than standard classification: the per-lesion accuracy, sensitivity, and specificity were 77.5%, 72.9%, and 81.5%, respectively. However, since the 95% confidence intervals overlap, the differences are not statistically significant. The main findings of this study can be summarized as follows: (1) deep learning (DL)-based FFR prediction from reduced-order raw anatomical data is feasible in intermediate coronary artery lesions; (2) DL-based FFR prediction provides superior diagnostic performance compared to baseline approaches based on minimal lumen diameter and percentage diameter stenosis; and (3) the FFR prediction performance increases quasi-linearly with the dataset size, indicating that a larger training dataset will likely lead to superior diagnostic performance. Full article
(This article belongs to the Special Issue Deep Neural Networks in Medical Imaging)

17 pages, 4275 KiB  
Article
Balancing Data through Data Augmentation Improves the Generality of Transfer Learning for Diabetic Retinopathy Classification
by Zahra Mungloo-Dilmohamud, Maleika Heenaye-Mamode Khan, Khadiime Jhumka, Balkrish N. Beedassy, Noorshad Z. Mungloo and Carlos Peña-Reyes
Appl. Sci. 2022, 12(11), 5363; https://doi.org/10.3390/app12115363 - 25 May 2022
Cited by 20 | Viewed by 3457
Abstract
The incidence of diabetes in Mauritius is among the highest in the world. Diabetic retinopathy (DR), a complication resulting from the disease, can lead to blindness if not detected early. The aim of this work was to investigate the use of transfer learning and data augmentation for the classification of fundus images into five stages of diabetic retinopathy: No DR, Mild nonproliferative DR, Moderate nonproliferative DR, Severe nonproliferative DR, and Proliferative DR. To this end, deep transfer learning and three pre-trained models, VGG16, ResNet50 and DenseNet169, were used to classify the APTOS dataset. The preliminary experiments resulted in low training and validation accuracies; hence, the APTOS dataset was augmented while ensuring a balance between the five classes. This balanced dataset was then used to train the three models, and the best resulting models were used to classify a blind Mauritian test dataset. We found that the ResNet50 model produced the best results of the three and achieved very good accuracies across the five classes. The classification of class-4 (severe) Mauritian fundus images produced some unexpected results, with some images classified as mild, and therefore needs further investigation. Full article
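The class-balancing step described in this abstract can be sketched as oversampling each minority class with augmented copies until it matches the largest class. This is a generic sketch: the `augment` callable and the data layout are hypothetical placeholders, not the paper's actual augmentation pipeline.

```python
import random

def balance_by_augmentation(images_by_class, augment, seed=0):
    """Oversample every minority class with augmented copies of randomly
    chosen members until all classes reach the size of the largest class.

    images_by_class: dict label -> list of images
    augment:         callable producing a perturbed copy of one image
    """
    rng = random.Random(seed)
    target = max(len(images) for images in images_by_class.values())
    balanced = {}
    for label, images in images_by_class.items():
        extra = [augment(rng.choice(images)) for _ in range(target - len(images))]
        balanced[label] = list(images) + extra
    return balanced
```

Balancing before training prevents the model from defaulting to the majority class ("No DR") and is one plausible reason the augmented dataset improved the reported validation accuracies.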
(This article belongs to the Special Issue Deep Neural Networks in Medical Imaging)

26 pages, 2483 KiB  
Article
Obfuscation Algorithm for Privacy-Preserving Deep Learning-Based Medical Image Analysis
by Andreea Bianca Popescu, Ioana Antonia Taca, Anamaria Vizitiu, Cosmin Ioan Nita, Constantin Suciu, Lucian Mihai Itu and Alexandru Scafa-Udriste
Appl. Sci. 2022, 12(8), 3997; https://doi.org/10.3390/app12083997 - 14 Apr 2022
Cited by 11 | Viewed by 3703
Abstract
Deep learning (DL)-based algorithms have demonstrated remarkable results in potentially improving the performance and efficiency of healthcare applications. Since the data typically need to leave the healthcare facility for model training and inference, e.g., in a cloud-based solution, privacy concerns have been raised. As a result, the demand for privacy-preserving techniques that enable DL model training and inference on secured data has grown significantly. We propose an image obfuscation algorithm that combines a variational autoencoder (VAE) with a random non-bijective pixel intensity mapping to protect the content of medical images, which are subsequently employed in the development of DL-based solutions. A binary classifier is trained on secured coronary angiographic frames to evaluate the utility of obfuscated images in the context of model training. Two possible attack configurations are considered to assess the security level against artificial intelligence (AI)-based reconstruction attempts. Similarity metrics (the structural similarity index measure and the peak signal-to-noise ratio) are employed to quantify the security against human perception. Furthermore, expert readers performed a visual assessment to determine to what extent the reconstructed images are protected against human perception. The proposed algorithm successfully enables DL model training on obfuscated images with no significant computational overhead, while ensuring protection against human eye perception and AI-based reconstruction attacks. Regardless of the threat actor's prior knowledge of the target content, the coronary vessels cannot be entirely recovered through an AI-based attack. Although a drop in accuracy can be observed when the classifier is trained on obfuscated images, the performance is deemed satisfactory in the context of a privacy–accuracy trade-off. Full article
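The "random non-bijective pixel intensity mapping" component can be sketched as mapping the 256 possible 8-bit intensities onto a smaller set of random output values, so that several distinct inputs collide and the map cannot be exactly inverted. This is a simplified stand-in for one stage of the paper's algorithm (which additionally uses a VAE); all parameters are illustrative.

```python
import random

def make_non_bijective_map(levels=256, target_levels=64, seed=0):
    """Draw a random intensity lookup table in which distinct input
    intensities can collapse onto the same output value, so exact
    inversion of the mapping is impossible."""
    rng = random.Random(seed)
    return [rng.randrange(target_levels) for _ in range(levels)]

def obfuscate(pixels, mapping):
    """Apply the lookup table to a flat list of 8-bit pixel intensities."""
    return [mapping[p] for p in pixels]
```

Because 256 inputs are drawn from only 64 output values, collisions are guaranteed by the pigeonhole principle; an attacker recovering the output value still cannot tell which of the colliding input intensities produced it.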
(This article belongs to the Special Issue Deep Neural Networks in Medical Imaging)

21 pages, 1150 KiB  
Article
Normalizing Flows for Out-of-Distribution Detection: Application to Coronary Artery Segmentation
by Costin Florian Ciușdel, Lucian Mihai Itu, Serkan Cimen, Michael Wels, Chris Schwemmer, Philipp Fortner, Sebastian Seitz, Florian Andre, Sebastian Johannes Buß, Puneet Sharma and Saikiran Rapaka
Appl. Sci. 2022, 12(8), 3839; https://doi.org/10.3390/app12083839 - 11 Apr 2022
Cited by 2 | Viewed by 3246
Abstract
Coronary computed tomography angiography (CCTA) is an effective imaging modality, increasingly accepted as a first-line test for diagnosing coronary artery disease (CAD). The accurate segmentation of the coronary artery lumen on CCTA is important for the anatomical, morphological, and non-invasive functional assessment of stenoses; since fully automatic segmentation is not yet sufficiently reliable, semi-automated approaches are currently still being employed. The processing time for semi-automated lumen segmentation can be reduced by pre-selecting vessel locations likely to require manual inspection and submitting only those to the radiologist for review. The detection of faulty lumen segmentation masks can be formulated as an Out-of-Distribution (OoD) detection problem. Two Normalizing Flow architectures are investigated and benchmarked herein: a Glow-like baseline and a proposed architecture employing a novel coupling layer. Synthetic mask perturbations are used for evaluating and fine-tuning the learnt probability densities. Expert annotations on a separate test set are employed to measure detection performance relative to inter-user variability. Regular coupling layers tend to focus on local pixel correlations and disregard semantic content. Experiments and analyses show that, in contrast, the proposed architecture is capable of capturing semantic content and is therefore better suited for the OoD detection of faulty lumen segmentations. When compared against the expert consensus, the proposed model achieves an accuracy of 78.6% and a sensitivity of 76%, close to the inter-user means of 80.9% and 79%, respectively, while the baseline model achieves an accuracy of 64.3% and a sensitivity of 48%. Full article
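OoD scoring with a normalizing flow reduces to thresholding the model's log-likelihood, which for a flow is the base-density log-probability of the transformed sample plus the log-determinant of the transformation's Jacobian. A deliberately tiny one-dimensional affine-flow sketch is shown below; the paper's models are multi-layer coupling flows over mask images, and every value here is illustrative.

```python
import math

def affine_flow_loglik(x, scale, shift):
    """log p(x) for a 1-D affine flow: z = (x - shift) / scale with a
    standard normal base density, plus log|det| of the inverse map."""
    z = (x - shift) / scale
    return -0.5 * (z * z + math.log(2.0 * math.pi)) - math.log(abs(scale))

def is_out_of_distribution(x, scale, shift, threshold):
    """Flag samples whose likelihood under the flow falls below a
    threshold chosen on in-distribution validation data."""
    return affine_flow_loglik(x, scale, shift) < threshold
```

In the paper's setting, in-distribution samples are correct lumen segmentation masks; perturbed or faulty masks receive low likelihood and are routed to the radiologist for review.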
(This article belongs to the Special Issue Deep Neural Networks in Medical Imaging)

20 pages, 5497 KiB  
Article
Real-Time Multi-Label Upper Gastrointestinal Anatomy Recognition from Gastroscope Videos
by Tao Yu, Huiyi Hu, Xinsen Zhang, Honglin Lei, Jiquan Liu, Weiling Hu, Huilong Duan and Jianmin Si
Appl. Sci. 2022, 12(7), 3306; https://doi.org/10.3390/app12073306 - 24 Mar 2022
Cited by 3 | Viewed by 2630
Abstract
Esophagogastroduodenoscopy (EGD) is a critical step in the diagnosis of upper gastrointestinal disorders. However, due to inexperience or high workload, there is wide variation in EGD performance among endoscopists. Variations in performance may result in exams that do not completely cover all anatomical locations of the stomach, leading to a potential risk of missed diagnosis of gastric diseases. Numerous guidelines and expert consensus statements have been proposed to assess and optimize the quality of endoscopy. However, mature and robust methods that apply accurately to real-time clinical video environments are lacking. In this paper, we innovatively define the problem of recognizing anatomical locations in videos as a multi-label recognition task, which is more consistent with the model learning an image-to-label mapping. We propose a combined deep learning model (GL-Net) that couples a graph convolutional network (GCN) with long short-term memory (LSTM) networks to both extract label features and capture temporal dependencies for accurate real-time identification of anatomical locations in gastroscopy videos. Our evaluation dataset is based on complete videos of real clinical examinations: a total of 29,269 images from 49 videos were collected for model training and validation, and another 1736 clinical videos were retrospectively analyzed to evaluate the application of the proposed model. Our method achieves 97.1% mean average precision (mAP), 95.5% mean per-class accuracy, and 93.7% average overall accuracy in the multi-label classification task, and is able to process these videos in real time at 29.9 FPS. In addition, based on our approach, we designed a system that monitors routine EGD videos in detail and performs statistical analysis of endoscopists' operating habits, which can be a useful tool to improve the quality of clinical endoscopy. Full article
(This article belongs to the Special Issue Deep Neural Networks in Medical Imaging)
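As a rough illustration of the multi-label formulation described in the abstract (not of GL-Net itself — the GCN and LSTM components are omitted), each frame can receive an independent yes/no decision per anatomical location. The location names and the 0.5 threshold below are hypothetical placeholders:

```python
import numpy as np

# Illustrative anatomical-location labels only; the paper's label set differs.
LOCATIONS = ["antrum", "body", "fundus", "cardia"]

def multilabel_predict(scores, threshold=0.5):
    """Threshold per-label sigmoid scores into a binary label vector per frame."""
    scores = np.asarray(scores)
    return (scores >= threshold).astype(int)

def per_class_accuracy(y_true, y_pred):
    """Fraction of frames predicted correctly, computed separately per label."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return (y_true == y_pred).mean(axis=0)

# Two frames, four candidate locations each; a frame may show several at once.
scores = [[0.9, 0.2, 0.7, 0.1],
          [0.3, 0.8, 0.6, 0.4]]
preds = multilabel_predict(scores)
```

In contrast to single-label classification, no softmax competition forces the locations to be mutually exclusive, which matches the observation that one gastroscopy frame can cover multiple anatomical regions.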

13 pages, 23641 KiB  
Article
Generative Adversarial CT Volume Extrapolation for Robust Small-to-Large Field of View Registration
by Andrei Puiu, Sureerat Reaungamornrat, Thomas Pheiffer, Lucian Mihai Itu, Constantin Suciu, Florin Cristian Ghesu and Tommaso Mansi
Appl. Sci. 2022, 12(6), 2944; https://doi.org/10.3390/app12062944 - 14 Mar 2022
Cited by 2 | Viewed by 2186
Abstract
Intraoperative Computed Tomography (iCT) provides near real-time visualizations that can be registered with high-quality preoperative images to improve the confidence of surgical instrument navigation. However, intraoperative images have a small field of view, making the registration process error-prone due to the reduced amount of mutual information. We propose a method that extrapolates thin acquisitions prior to registration, increasing the field of view of the intraoperative images and hence the robustness of the guiding system. The method is based on a deep neural network trained adversarially, using self-supervision, to extrapolate slices from the existing ones. Median landmark detection errors are reduced by approximately 40%, yielding a better initial alignment. The intensity-based registration also improves: surface distance errors are reduced by an order of magnitude, from 5.66 mm to 0.57 mm (p-value = 4.18×10⁻⁶). The proposed extrapolation method increases registration robustness, which plays a key role in confidently guiding the surgical intervention.
(This article belongs to the Special Issue Deep Neural Networks in Medical Imaging)
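The extrapolation idea can be sketched with a crude, non-learned baseline: extend a thin axial stack by linearly continuing the intensity trend beyond its first and last slices. The paper instead learns this mapping with an adversarially trained network; the function below is a hypothetical illustration, not the authors' method:

```python
import numpy as np

def extrapolate_slices(vol, n):
    """Naively extend a (z, y, x) stack by n slices on each side via linear
    extrapolation of the boundary slices. The paper replaces this step with
    a self-supervised, adversarially trained network."""
    top_grad = vol[0] - vol[1]        # per-voxel trend at the top boundary
    bot_grad = vol[-1] - vol[-2]      # per-voxel trend at the bottom boundary
    top = [vol[0] + (i + 1) * top_grad for i in range(n)][::-1]
    bot = [vol[-1] + (i + 1) * bot_grad for i in range(n)]
    return np.concatenate([np.stack(top), vol, np.stack(bot)], axis=0)

# A toy volume whose intensity ramps linearly along z is extended exactly:
vol = np.arange(4, dtype=float)[:, None, None] * np.ones((4, 3, 3))
ext = extrapolate_slices(vol, 2)      # shape grows from (4, 3, 3) to (8, 3, 3)
```

The enlarged stack overlaps more of the preoperative volume, which is why registration becomes more robust: the mutual-information objective is estimated over more shared anatomy.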

10 pages, 940 KiB  
Article
Deep Learning of Retinal Imaging: A Useful Tool for Coronary Artery Calcium Score Prediction in Diabetic Patients
by Rubén G. Barriada, Olga Simó-Servat, Alejandra Planas, Cristina Hernández, Rafael Simó and David Masip
Appl. Sci. 2022, 12(3), 1401; https://doi.org/10.3390/app12031401 - 28 Jan 2022
Cited by 11 | Viewed by 3331
Abstract
Cardiovascular diseases (CVD) are among the leading causes of death in developed countries. Previous studies suggest that retinal blood vessels carry relevant information on cardiovascular risk. Retinal fundus imaging (RFI) is an inexpensive medical imaging test that is already performed regularly in the diabetic population to screen for diabetic retinopathy (DR). Since diabetes is a major cause of CVD, we explore the use of deep learning architectures on RFI as a tool for predicting cardiovascular risk in this population. In particular, we use the coronary artery calcium (CAC) score as a marker and train a convolutional neural network (CNN) to predict whether it surpasses a threshold defined by experts. Preliminary experiments on a small set of clinically verified patients show promising accuracies. In addition, we observed that elementary clinical data are positively correlated with cardiovascular risk. The results from both informational cues are complementary, and we propose two applications that can benefit from combining image analysis with clinical data.
(This article belongs to the Special Issue Deep Neural Networks in Medical Imaging)
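The prediction target and the image–clinical combination described above can be sketched as follows. The 400 Agatston-unit cut-off and the equal-weight late fusion are assumptions for illustration, not necessarily the paper's exact choices:

```python
import numpy as np

# Hypothetical expert cut-off in Agatston units; the paper's threshold may differ.
CAC_THRESHOLD = 400

def binarize_cac(cac_scores, threshold=CAC_THRESHOLD):
    """Turn continuous CAC scores into the binary target the CNN predicts."""
    return (np.asarray(cac_scores) > threshold).astype(int)

def late_fusion(p_image, p_clinical, w=0.5):
    """Combine image-based and clinical-data risk probabilities by a weighted
    average -- one simple way to exploit their complementarity."""
    return w * np.asarray(p_image) + (1 - w) * np.asarray(p_clinical)

labels = binarize_cac([0, 120, 650, 1800])    # -> high-risk flags per patient
fused = late_fusion([0.2, 0.9], [0.4, 0.7])   # fused probabilities, two patients
```

Binarizing the CAC score sidesteps the harder regression problem and matches how the score is used clinically, as a risk-stratification cut-off.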

15 pages, 7724 KiB  
Article
End-to-End Deep Learning CT Image Reconstruction for Metal Artifact Reduction
by Dominik F. Bauer, Constantin Ulrich, Tom Russ, Alena-Kathrin Golla, Lothar R. Schad and Frank G. Zöllner
Appl. Sci. 2022, 12(1), 404; https://doi.org/10.3390/app12010404 - 31 Dec 2021
Cited by 9 | Viewed by 4242
Abstract
Metal artifacts are common in CT-guided interventions due to the presence of metallic instruments. These artifacts often obscure clinically relevant structures, which can complicate the intervention. In this work, we present a deep learning CT reconstruction network, iCTU-Net, for the reduction of metal artifacts. The network emulates the filtering and back projection steps of classical filtered back projection (FBP), and a U-Net refines the back-projected image as a post-processing step. The reconstruction is trained end-to-end: the inputs of the iCTU-Net are sinograms and the outputs are reconstructed images. The network requires neither a predefined back projection operator nor the exact X-ray beam geometry. Supervised training is performed on simulated interventional data of the abdomen. For projection data exhibiting severe artifacts, the iCTU-Net achieved reconstructions with SSIM = 0.970±0.009 and PSNR = 40.7±1.6, while the best reference method, an image-based post-processing network, achieved only SSIM = 0.944±0.024 and PSNR = 39.8±1.9. Because the whole reconstruction process is learned, the network can fully exploit the raw data, which benefits metal artifact removal; it was the only studied method that eliminated the metal streak artifacts.
(This article belongs to the Special Issue Deep Neural Networks in Medical Imaging)
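For context, the classical filtering step that iCTU-Net learns to emulate can be sketched as a frequency-domain ramp filter applied to each sinogram row. This is a minimal sketch of textbook FBP filtering (no apodization window), not the network itself:

```python
import numpy as np

def ramp_filter(sinogram):
    """Apply the FBP ramp filter |f| to each projection row in the frequency
    domain -- the step iCTU-Net replaces with learned weights."""
    n = sinogram.shape[-1]
    ramp = np.abs(np.fft.fftfreq(n))             # |f| filter, zero at DC
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=-1) * ramp, axis=-1))

# Toy sinogram: 180 views of 64 detector bins. A constant sinogram contains
# only a DC component, which the ramp filter zeroes out entirely.
sino = np.ones((180, 64))
filtered = ramp_filter(sino)
```

In classical FBP the filtered rows are then smeared back along their projection angles; iCTU-Net learns both stages jointly, which lets it suppress metal-corrupted projection values before they produce streaks in image space.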
