
Deep Learning in Medical Imaging and Sensing

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 March 2024) | Viewed by 7506

Special Issue Editors


Guest Editor
AMIDS Solutions, 69100 Villeurbanne, France
Interests: medical imaging; inverse problems; image reconstruction; deep learning

Guest Editor
Biomedical Imaging Research Laboratory CREATIS, University of Lyon, 69621 Villeurbanne CEDEX, France
Interests: medical imaging; inverse problems; unrolled neural networks; single-pixel imaging; spectral computed tomography

Special Issue Information

Dear Colleagues,

Advancements in medical imaging are essential for accurate diagnosis, monitoring, and outcome prediction, as well as for investigating the underlying mechanisms of disease. The recent success of deep learning (DL) is pushing the limits of what is possible with imaging, and DL has already become the state of the art for several tasks: segmentation, automatic diagnosis, computer-aided detection (CAD), image restoration, and image reconstruction, among others. At the same time, DL is being rapidly adopted by many medical imaging providers.

Despite this success, many challenges remain. This Special Issue will address some of these challenges, such as explainability and interpretability, the data-hungry nature of DL models, and the need for robust methods that can be translated to the clinic. One of the main challenges is the lack of annotated data in most medical applications; here, domain adaptation, self-supervised learning, and synthetic data generation play a critical role.

However, more data are not the only solution. Leveraging a priori information, in the form of architecture design (CNNs, ViTs), knowledge of the underlying physics, or other prior constraints, can reduce the problem complexity and the amount of data needed. This is particularly relevant for inverse problems arising in medical imaging: denoising, deblurring, super-resolution, image reconstruction under limited conditions, and inherently ill-posed problems (EIT, ECT, MIT). The current trend in inverse problems is to move towards so-called model-based deep learning (MBDL) approaches, which combine model-based methods with DL. MBDL methods (plug-and-play and unrolled algorithms) include a data-consistency term that ensures robustness, making them well suited to medical imaging. However, these methods are still under development.
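
As a purely illustrative sketch of the structure such MBDL methods share, the snippet below unrolls a few iterations that alternate a gradient step on the data-consistency term ||Ax − y||² with a learned denoising step. The forward operator A, its adjoint AT, and the small LearnedDenoiser CNN are hypothetical placeholders standing in for an actual imaging model and learned prior; this is not a reference implementation of any specific method.

```python
import torch
import torch.nn as nn

class LearnedDenoiser(nn.Module):
    """Small residual CNN used as the learned prior (placeholder architecture)."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x):
        return x - self.net(x)  # residual denoising

class UnrolledRecon(nn.Module):
    """K unrolled iterations: gradient step on ||A x - y||^2, then a learned denoiser."""
    def __init__(self, A, AT, n_iter=5, step=0.1):
        super().__init__()
        self.A, self.AT = A, AT                       # forward operator and its adjoint
        self.n_iter = n_iter
        self.step = nn.Parameter(torch.tensor(step))  # learnable step size
        self.denoisers = nn.ModuleList([LearnedDenoiser() for _ in range(n_iter)])

    def forward(self, y, x0):
        x = x0
        for k in range(self.n_iter):
            # data-consistency step: x <- x - step * A^T (A x - y)
            x = x - self.step * self.AT(self.A(x) - y)
            # learned prior step (plug-and-play style denoiser)
            x = self.denoisers[k](x)
        return x
```

In practice, A and AT would implement the acquisition physics (for example, a system matrix or a Radon transform and its adjoint), and the unrolled network would be trained end-to-end on paired measurements and reference images.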

DL can also play an important role in sensor imaging, for optimal sensor design and for optimization of the transmitting and receiving processes. Optimal design can benefit ultrasound, X-ray imaging, low-resolution imaging systems such as EIT, and others. Digital twin models could further unlock the potential of low-resolution imaging systems with the aid of information from more established imaging modalities as well as physiological modelling.

The aim of this Special Issue of Sensors is to propose and highlight novel methods, architectures, and applications of deep learning in medical imaging and sensing. We welcome submissions related, but not limited, to the following topics:

  • Deep learning for medical imaging and imaging sensors (ultrasound, CT, MRI, spectral CT, optical imaging, PET, EIT, etc.);
  • Deep learning for image reconstruction and image restoration; 
  • Deep learning for segmentation; 
  • Deep learning for computer-aided detection and diagnosis;
  • Semi-, weakly, self-, and unsupervised learning.

Dr. Juan Felipe Pérez-Juste Abascal
Dr. Nicolas Ducros
Prof. Dr. Manuchehr Soleimani 
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • medical imaging
  • imaging sensor
  • image reconstruction
  • segmentation
  • diagnosis

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)

Research

17 pages, 3653 KiB  
Article
Classification of Benign–Malignant Thyroid Nodules Based on Hyperspectral Technology
by Junjie Wang, Jian Du, Chenglong Tao, Meijie Qi, Jiayue Yan, Bingliang Hu and Zhoufeng Zhang
Sensors 2024, 24(10), 3197; https://doi.org/10.3390/s24103197 - 17 May 2024
Viewed by 981
Abstract
In recent years, the incidence of thyroid cancer has rapidly increased. To address the inefficient diagnosis of thyroid cancer during surgery, we propose a rapid method for the diagnosis of benign and malignant thyroid nodules based on hyperspectral technology. Firstly, using our self-developed thyroid nodule hyperspectral acquisition system, data for a large number of diverse thyroid nodule samples were obtained, providing a foundation for subsequent diagnosis. Secondly, to better meet practical clinical needs, and given that medical hyperspectral image classification research has mainly focused on pixel-based region segmentation, we propose a method for classifying nodules as benign or malignant based on thyroid nodule hyperspectral data blocks. Using 3D CNN and VGG16 networks as a basis, we designed a neural network algorithm (V3Dnet) for classification based on three-dimensional hyperspectral data blocks. For a dataset with a block size of 50 × 50 × 196, the classification accuracy for benign and malignant samples reaches 84.63%. We also investigated the impact of data block size on classification performance and constructed a classification pipeline that includes thyroid nodule sample acquisition, hyperspectral data preprocessing, and an algorithm for classifying thyroid nodules as benign or malignant based on hyperspectral data blocks. The proposed model for thyroid nodule classification is expected to be applied in thyroid surgery, thereby improving surgical accuracy and providing strong support for scientific research in related fields.
(This article belongs to the Special Issue Deep Learning in Medical Imaging and Sensing)
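
The authors' V3Dnet architecture is not reproduced here, but the following minimal sketch shows the general shape of a 3D CNN classifier operating on hyperspectral data blocks of size 50 × 50 × 196 (spatial × spatial × spectral), as used in the study. All layer widths and the toy input batch are illustrative assumptions, not the published configuration.

```python
import torch
import torch.nn as nn

class HyperspectralBlockClassifier(nn.Module):
    """Illustrative 3D CNN for benign/malignant classification of
    hyperspectral blocks shaped (bands=196, height=50, width=50)."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                 # -> (8, 98, 25, 25)
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                 # -> (16, 49, 12, 12)
            nn.AdaptiveAvgPool3d(1),         # -> (16, 1, 1, 1)
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):                    # x: (N, 1, 196, 50, 50)
        x = self.features(x).flatten(1)
        return self.classifier(x)

model = HyperspectralBlockClassifier()
logits = model(torch.randn(4, 1, 196, 50, 50))  # toy batch of 4 blocks
print(logits.shape)                             # torch.Size([4, 2])
```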

13 pages, 1937 KiB  
Communication
Analysis of Connectome Graphs Based on Boundary Scale
by María José Moron-Fernández, Ludovica Maria Amedeo, Alberto Monterroso Muñoz, Helena Molina-Abril, Fernando Díaz-del-Río, Fabiano Bini, Franco Marinozzi and Pedro Real
Sensors 2023, 23(20), 8607; https://doi.org/10.3390/s23208607 - 20 Oct 2023
Cited by 1 | Viewed by 1354
Abstract
The purpose of this work is to advance the computational study of connectome graphs from a topological point of view. Specifically, starting from a sequence of hypergraphs associated with a brain graph (obtained using the Boundary Scale model, BS2), we analyze the resulting scale-space representation using classical topological features, such as Betti numbers and average node and edge degrees. In this way, the topological information that can be extracted from the original graph is substantially enriched, providing an insightful description of the graph from a clinical perspective. To assess the qualitative and quantitative topological information gain of the BS2 model, we carried out an empirical analysis of neuroimaging data using a dataset that contains the connectomes of 96 healthy subjects, 52 women and 44 men, generated from MRI scans in the Human Connectome Project. The results obtained shed light on the differences between these two classes of subjects in terms of neural connectivity.
(This article belongs to the Special Issue Deep Learning in Medical Imaging and Sensing)
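
As a concrete aside on the classical topological features mentioned in the abstract: for a plain graph viewed as a one-dimensional complex, the Betti numbers reduce to β0 = number of connected components and β1 = |E| − |V| + β0 (the cycle rank), and the average node degree is 2|E|/|V|. The sketch below computes these quantities on a toy graph with NetworkX; it does not reproduce the hypergraph-based Boundary Scale (BS2) pipeline of the paper.

```python
import networkx as nx

def graph_betti_numbers(G: nx.Graph):
    """Betti numbers of an undirected graph seen as a 1-dimensional complex:
    beta0 = number of connected components, beta1 = |E| - |V| + beta0 (cycle rank)."""
    beta0 = nx.number_connected_components(G)
    beta1 = G.number_of_edges() - G.number_of_nodes() + beta0
    return beta0, beta1

# Toy "connectome": two triangles joined by a bridge edge
G = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)])
b0, b1 = graph_betti_numbers(G)
avg_degree = 2 * G.number_of_edges() / G.number_of_nodes()
print(b0, b1, avg_degree)   # 1 connected component, 2 independent cycles
```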

13 pages, 3756 KiB  
Article
Estimation of Left and Right Ventricular Ejection Fractions from cine-MRI Using 3D-CNN
by Soichiro Inomata, Takaaki Yoshimura, Minghui Tang, Shota Ichikawa and Hiroyuki Sugimori
Sensors 2023, 23(14), 6580; https://doi.org/10.3390/s23146580 - 21 Jul 2023
Cited by 4 | Viewed by 1702
Abstract
Cardiac function indices must be calculated by tracing from short-axis images in cine-MRI. A 3D-CNN (convolutional neural network) that adds time-series information to images can estimate cardiac function indices without tracing, using images with known values and cardiac cycles as the input. Since the short-axis image depicts both the left and right ventricles, it is unclear which motion feature is captured. This study aims to estimate the indices by learning the short-axis images together with the known left and right ventricular ejection fractions, and to confirm the accuracy and whether each index is captured as a feature. A total of 100 patients with publicly available short-axis cine images were used. The dataset was divided into training:test = 8:2, and a regression model was built by training the 3D-ResNet50. Accuracy was assessed using five-fold cross-validation. The correlation coefficient, MAE (mean absolute error), and RMSE (root mean squared error) were used as accuracy evaluation indices. The mean correlation coefficient of the left ventricular ejection fraction was 0.80, the MAE was 9.41, and the RMSE was 12.26. The mean correlation coefficient of the right ventricular ejection fraction was 0.56, the MAE was 11.35, and the RMSE was 14.95. The correlation coefficient was considerably higher for the left ventricular ejection fraction. Regression modeling using the 3D-CNN indicated that the left ventricular ejection fraction was estimated more accurately, and that left ventricular systolic function was captured as a feature.
(This article belongs to the Special Issue Deep Learning in Medical Imaging and Sensing)
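
For reference, the accuracy metrics reported in this study (Pearson correlation coefficient, MAE, and RMSE) can be computed as in the short sketch below. The ejection-fraction arrays are made-up placeholder values for illustration only, not data from the paper.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Pearson correlation coefficient, MAE, and RMSE between
    reference and estimated ejection fractions (in %)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    r = np.corrcoef(y_true, y_pred)[0, 1]
    mae = np.mean(np.abs(y_true - y_pred))
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return r, mae, rmse

# Placeholder values for illustration only
ef_reference = [55.0, 62.0, 40.0, 35.0, 58.0]
ef_estimated = [52.0, 60.5, 47.0, 38.0, 61.0]
print(regression_metrics(ef_reference, ef_estimated))
```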

29 pages, 9546 KiB  
Article
A Real Time Method for Distinguishing COVID-19 Utilizing 2D-CNN and Transfer Learning
by Abida Sultana, Md. Nahiduzzaman, Sagor Chandro Bakchy, Saleh Mohammed Shahriar, Hasibul Islam Peyal, Muhammad E. H. Chowdhury, Amith Khandakar, Mohamed Arselene Ayari, Mominul Ahsan and Julfikar Haider
Sensors 2023, 23(9), 4458; https://doi.org/10.3390/s23094458 - 3 May 2023
Cited by 5 | Viewed by 2503
Abstract
Rapid identification of COVID-19 can assist in making decisions for effective treatment and epidemic prevention. The PCR-based test is expert-dependent, time-consuming, and has limited sensitivity. By inspecting chest X-ray (CXR) images, COVID-19, pneumonia, and other lung infections can be detected in real time. The current state-of-the-art literature suggests that deep learning (DL) is highly advantageous for automatic disease classification using CXR images. The goal of this study is to develop DL models for identifying COVID-19 and other lung disorders more efficiently. For this study, a dataset of 18,564 CXR images covering seven disease categories was created from multiple publicly available sources. Four DL architectures, including the proposed CNN model and pretrained VGG-16, VGG-19, and Inception-v3 models, were applied to identify healthy subjects and six lung diseases (fibrosis, lung opacity, viral pneumonia, bacterial pneumonia, COVID-19, and tuberculosis). Accuracy, precision, recall, F1-score, area under the curve (AUC), and testing time were used to evaluate the performance of these four models. The results demonstrated that the proposed CNN model outperformed all other DL models for the seven-class classification, with an accuracy of 93.15% and average precision, recall, F1-score, and AUC of 0.9343, 0.9443, 0.9386, and 0.9939, respectively. The CNN model performed equally well when other multiclass classifications including normal and COVID-19 as the common classes were considered, yielding accuracy values of 98%, 97.49%, 97.81%, 96%, and 96.75% for two, three, four, five, and six classes, respectively. The proposed model can also identify COVID-19 with shorter training and testing times compared to other transfer learning models.
(This article belongs to the Special Issue Deep Learning in Medical Imaging and Sensing)
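
For readers less familiar with the transfer-learning baselines compared against in this paper, the sketch below shows a typical way to adapt an ImageNet-pretrained VGG-16 to a seven-class chest X-ray problem by freezing the convolutional features and replacing the final classifier layer. The hyperparameters and data pipeline are assumptions; this is not the authors' code.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_vgg16_cxr(n_classes=7, freeze_features=True):
    """Pretrained VGG-16 with a new head for 7-class chest X-ray classification."""
    # Downloads ImageNet weights on first use
    model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    if freeze_features:
        for p in model.features.parameters():
            p.requires_grad = False                   # keep ImageNet features fixed
    model.classifier[6] = nn.Linear(4096, n_classes)  # replace the final layer
    return model

model = build_vgg16_cxr()
logits = model(torch.randn(2, 3, 224, 224))  # two dummy RGB-coded CXR images
print(logits.shape)                          # torch.Size([2, 7])
```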
