Advances in Medical Image Analysis and Deep Learning

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Biomedical Information and Health".

Deadline for manuscript submissions: closed (15 November 2022) | Viewed by 6990

Special Issue Editors

Guest Editor: Dr. Jiquan Ma
Department of Computer Science & Technology, Heilongjiang University, Harbin, China
Interests: medical image analysis; deep learning; diffusion MRI

Guest Editor: Dr. Geng Chen
School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an, China
Interests: medical image analysis; geometric deep learning; diffusion MRI

Guest Editor: Dr. Hui Cui
Department of Computer Science and Information Technology, La Trobe University, Melbourne, Australia
Interests: medical image analysis; graph learning; multimodal fusion

Guest Editor: Dr. Desi Shang
The First Affiliated Hospital, Hengyang Medical School, University of South China, Hengyang, China
Interests: construction and analysis of biological networks

Special Issue Information

Dear Colleagues,

With advances in medical imaging and computer technology, medical image analysis has become an indispensable tool in medical research and in the clinical diagnosis and treatment of disease. In recent years, deep learning (DL), and in particular deep convolutional neural networks (CNNs), has rapidly become a research hotspot in medical image analysis, as these networks can automatically extract diagnostically relevant features hidden in medical image data. Despite recent advances in DL research, questions remain on how best to learn representations of medical imaging data. Deep learning requires large amounts of data, yet the patient data available for a given disease are often insufficient, which makes models difficult to generalize and optimize. Sample counts across disease categories are frequently unbalanced, and severely skewed labels lead to poor model generalizability. Moreover, medical image data alone are often not enough to predict health or disease; exploring and fusing other sources, types, or categories of data (signals, diagnostic reports, and other clinical parameters) is also very important, yet poses challenges for constructing DL models.
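As a minimal illustration of the imbalance and multimodal-fusion challenges mentioned above, the PyTorch sketch below fuses an imaging branch with non-imaging clinical features and applies a class-weighted loss. The module names, feature sizes, and class weights are assumptions for the sketch, not a prescribed method.

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Toy late-fusion model: CNN image features + clinical features -> diagnosis."""
    def __init__(self, n_clinical: int = 8, n_classes: int = 2):
        super().__init__()
        # Imaging branch: a tiny CNN stands in for any backbone (ResNet, etc.).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),            # -> (B, 16)
        )
        # Non-imaging branch: clinical parameters or report-derived features.
        self.tabular = nn.Sequential(nn.Linear(n_clinical, 16), nn.ReLU())
        self.head = nn.Linear(16 + 16, n_classes)             # joint representation

    def forward(self, image, clinical):
        return self.head(torch.cat([self.cnn(image), self.tabular(clinical)], dim=1))

model = LateFusionClassifier()
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 8))  # 4 patients
# Class-weighted loss: up-weight the under-represented class to counter label skew.
criterion = nn.CrossEntropyLoss(weight=torch.tensor([0.3, 0.7]))
loss = criterion(logits, torch.tensor([0, 1, 0, 0]))
```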

This Special Issue focuses on state-of-the-art DL techniques and their applications in medical imaging. We welcome contributions on topics that include, but are not limited to:

  • Theoretical underpinnings of DL problems arising in medical imaging;
  • Novel applications of DL in medical image acquisition, reconstruction, and analysis;
  • Unsupervised, semi-supervised, and weakly supervised learning for DL;
  • Annotation-efficient approaches to DL;
  • Domain adaptation, transfer learning, and adversarial learning in medical imaging with DL;
  • Multi-modal medical imaging data fusion and integration with DL;
  • Joint latent space learning with DL for medical imaging and non-imaging data integration;
  • Spatiotemporal medical imaging and image analysis using DL;
  • DL approaches for medical image registration, super-resolution, and resampling;
  • Accelerated medical imaging acquisition/reconstruction with non-Cartesian sampling using DL;
  • Novel datasets, challenges, and benchmarks for application and evaluation of DL.

Authors must submit papers via https://susy.mdpi.com/ according to the instructions here. Three or four reviewers will typically be recruited according to the standard MDPI review protocol. Authors are encouraged to contact one of the Guest Editors to discuss the suitability of their work for this Special Issue.

Dr. Jiquan Ma
Dr. Geng Chen
Dr. Hui Cui
Dr. Desi Shang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • medical image analysis
  • deep learning
  • AI healthcare
  • deep neural networks
  • convolutional neural networks

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

11 pages, 2922 KiB  
Article
FuseLGNet: Fusion of Local and Global Information for Detection of Parkinson’s Disease
by Ming Chen, Tao Ren, Pihai Sun, Jianfei Wu, Jinfeng Zhang and Aite Zhao
Information 2023, 14(2), 119; https://doi.org/10.3390/info14020119 - 13 Feb 2023
Cited by 3 | Viewed by 1725
Abstract
In the past few years, the assessment of Parkinson's disease (PD) has mainly been based on the clinician's examination, the patient's medical history, and self-report. PD may therefore be misdiagnosed due to a lack of clinical experience; moreover, such assessment is highly subjective and does not necessarily reflect the true condition. Given the high and rising incidence of PD, it is important to use objective monitoring and diagnostic tools for accurate and timely diagnosis. In this paper, we designed a low-level feature extractor that uses convolutional layers to extract local information from an image and a high-level feature extractor that extracts global information through the autofocus mechanism. PD is detected by fusing the local and global information. The model is trained and evaluated on two publicly available datasets. Experiments show that our model has a strong advantage in diagnosing PD, and that gait-based analysis and recognition can provide effective evidence for its early diagnosis.
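The local/global fusion idea described in this abstract can be sketched as follows. This is a hypothetical PyTorch illustration of fusing a convolutional (local) branch with an attention-based (global) branch, not the authors' FuseLGNet implementation; all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class LocalGlobalFusion(nn.Module):
    def __init__(self, n_classes: int = 2, dim: int = 32):
        super().__init__()
        # Local branch: strided convolutions capture fine-grained detail.
        self.local = nn.Sequential(
            nn.Conv2d(3, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Global branch: self-attention over the feature map captures context.
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):
        f = self.local(x)                          # (B, dim, H/4, W/4)
        tokens = f.flatten(2).transpose(1, 2)      # (B, N, dim) patch tokens
        g, _ = self.attn(tokens, tokens, tokens)   # global context
        fused = (tokens + g).mean(dim=1)           # fuse local + global, then pool
        return self.head(fused)

logits = LocalGlobalFusion()(torch.randn(2, 3, 64, 64))  # -> (2, 2) class scores
```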

16 pages, 10594 KiB  
Article
Deep-Learning Image Stabilization for Adaptive Optics Ophthalmoscopy
by Shudong Liu, Zhenghao Ji, Yi He, Jing Lu, Gongpu Lan, Jia Cong, Xiaoyu Xu and Boyu Gu
Information 2022, 13(11), 531; https://doi.org/10.3390/info13110531 - 8 Nov 2022
Cited by 2 | Viewed by 2563
Abstract
An adaptive optics scanning laser ophthalmoscope (AOSLO) offers high resolution over a small field of view (FOV) and is therefore strongly affected by eye motion. Continual eye motion causes distortions both within a frame (intra-frame) and between frames (inter-frame), so overcoming eye motion and achieving image stabilization is the essential first step in image analysis. Cross-correlation-based methods can achieve image registration, but images with saccades must be identified and separated manually; manual registration has high accuracy but is time-consuming and complicated. Some imaging systems compensate for eye motion during acquisition, but special hardware must be integrated into the system. In this paper, we propose a deep-learning-based algorithm for automatic image stabilization. The algorithm uses the VGG-16 network to extract convolutional features and a correlation filter to detect the position of the reference in the next frame, and finally compensates for the displacement to achieve registration. The mean differences in the vertical and horizontal displacement between the algorithm and manual registration were 0.07 and 0.16 pixels, respectively, with 95% confidence intervals of (−3.26 px, 3.40 px) and (−4.99 px, 5.30 px); the Pearson correlation coefficients for the vertical and horizontal displacements between the two methods were both 0.99. Compared with cross-correlation-based methods, the algorithm has higher accuracy, automatically removes images with blinks, and corrects images with saccades. Compared with manual registration, it achieves the same accuracy without manual intervention.
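As a rough illustration of the pipeline summarized above, the sketch below extracts VGG-16 convolutional features from a reference patch and a new frame, locates the patch by cross-correlating the feature maps, and returns the displacement on the feature grid. It is a simplified stand-in for the paper's correlation-filter tracker, not the authors' code; `weights=None` keeps the sketch self-contained, whereas pretrained weights would be used in practice.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

features = vgg16(weights=None).features[:16].eval()   # conv1_1 ... conv3_3 + ReLU

@torch.no_grad()
def estimate_shift(reference, frame):
    """Locate `reference` (1x3xhxw) inside `frame` (1x3xHxW); returns (dy, dx)
    on the feature grid (stride 4 for this truncation of VGG-16)."""
    f_ref = features(reference)                        # template features
    f_frm = features(frame)                            # search-area features
    score = F.conv2d(f_frm, f_ref)                     # dense cross-correlation
    idx = int(torch.argmax(score.flatten()))
    return divmod(idx, score.shape[-1])                # row, column of the peak

frame = torch.randn(1, 3, 128, 128)
reference = frame[:, :, 32:96, 32:96]                  # 64x64 reference patch
print(estimate_shift(reference, frame))                # peak expected near (8, 8), i.e. offset 32 / stride 4
```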

15 pages, 2861 KiB  
Article
Segmentation Method of Magnetoelectric Brain Image Based on the Transformer and the CNN
by Xiaoli Liu and Xiaorong Cheng
Information 2022, 13(10), 445; https://doi.org/10.3390/info13100445 - 23 Sep 2022
Cited by 2 | Viewed by 1825
Abstract
To address the problems of low accuracy and blurred boundaries when segmenting multimodal brain tumor images with the TransBTS network, a 3D BCS_T model incorporating a channel-spatial attention mechanism is proposed. Firstly, the depth of the TransBTS model hierarchy is increased to obtain more local feature information, and residual basic blocks are added to reduce feature loss. Secondly, downsampling is incorporated into the hybrid attention mechanism to enhance the extraction of information from critical regions. Finally, weighted cross-entropy loss and generalized Dice loss are employed to address the class imbalance among tumor sample categories. Experimental results show that, compared with the TransBTS network, the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) regions improve by an average of 2.53% in the Dice similarity coefficient, while the 95% Hausdorff distance (HD95) is reduced by an average of 3.14. The 3D BCS_T model therefore effectively improves segmentation accuracy and boundary clarity, particularly for the small tumor core and enhancing tumor regions.
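The combined objective mentioned in this abstract, weighted cross-entropy plus a generalized Dice term, can be written compactly as below. This is a generic PyTorch sketch of that loss family with illustrative shapes and class weights, not the 3D BCS_T code itself.

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, target, class_weights, eps=1e-6):
    """logits: (B, C, D, H, W); target: (B, D, H, W) integer labels in [0, C)."""
    ce = F.cross_entropy(logits, target, weight=class_weights)   # weighted CE

    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, probs.shape[1]).permute(0, 4, 1, 2, 3).float()
    # Generalized Dice: class weights inversely proportional to squared volume,
    # so small regions (e.g., enhancing tumor) count as much as large ones.
    w = 1.0 / (onehot.sum(dim=(0, 2, 3, 4)) ** 2 + eps)
    inter = (w * (probs * onehot).sum(dim=(0, 2, 3, 4))).sum()
    union = (w * (probs + onehot).sum(dim=(0, 2, 3, 4))).sum()
    gdl = 1.0 - 2.0 * inter / (union + eps)
    return ce + gdl

logits = torch.randn(1, 4, 16, 32, 32, requires_grad=True)       # 4 classes, 3D patch
target = torch.randint(0, 4, (1, 16, 32, 32))
combined_loss(logits, target, torch.tensor([0.1, 0.3, 0.3, 0.3])).backward()
```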
