Stitching, Alignment and Segmentation Applications in Biomedical Images

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Biomedical Information and Health".

Deadline for manuscript submissions: 31 March 2025

Special Issue Editors


Dr. Shuohong Wang
Guest Editor
Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA
Interests: computer vision; biomedical image analysis; machine learning

Dr. Francesco Fontanella
Guest Editor
Department of Electrical and Information Engineering, University of Cassino and Southern Lazio, 03043 Cassino, FR, Italy
Interests: evolutionary computation; machine learning; feature selection; pattern recognition; Bayesian networks; cultural heritage

Special Issue Information

Dear Colleagues,

Image stitching, alignment, and segmentation are common processing steps in biomedical imaging tasks and are attracting growing attention from computer vision researchers. Although many conventional and machine-learning-based methods have been proposed to solve these problems, challenges remain: (1) varied image quality (especially in some medical images); (2) large data sizes (which may reach the terabyte to petabyte level); and (3) the high-precision requirements of biomedical scenarios. Thus, more advanced algorithms for image stitching, alignment, and segmentation are urgently needed.

Dr. Shuohong Wang
Dr. Francesco Fontanella
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • computer vision
  • biomedical image analysis
  • image stitching
  • image alignment
  • image segmentation
  • machine learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (6)


Research


15 pages, 6456 KiB  
Article
Image Stitching of Low-Resolution Retinography Using Fundus Blur Filter and Homography Convolutional Neural Network
by Levi Santos, Maurício Almeida, João Almeida, Geraldo Braz, José Camara and António Cunha
Information 2024, 15(10), 652; https://doi.org/10.3390/info15100652 - 17 Oct 2024
Abstract
Great advances in stitching high-quality retinal images have been made in recent years. On the other hand, very few studies have been carried out on low-resolution retinal imaging. This work investigates the challenges of low-resolution retinal images obtained by the D-EYE smartphone-based fundus camera. The proposed method uses homography estimation to register and stitch low-quality retinal images into a cohesive mosaic. First, a Siamese neural network extracts features from a pair of images, after which the correlation of their feature maps is computed. This correlation map is fed through four independent CNNs to estimate the homography parameters, each specializing in different corner coordinates. The model was trained on a synthetic dataset generated from the Microsoft Common Objects in Context (MSCOCO) dataset, with an added data augmentation phase to improve the quality of the model. It was then evaluated on the FIRE retina and D-EYE datasets using the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM). The results are promising: the average PSNR was 26.14 dB, with an SSIM of 0.96 on the D-EYE dataset. Compared to a method that uses a single neural network for homography calculation, this approach improves the PSNR by 7.96 dB and achieves a 7.86% higher SSIM score.
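For readers unfamiliar with the evaluation metrics mentioned above, the following is a minimal NumPy sketch of PSNR and a simplified global SSIM (computed over the whole image rather than the windowed form used in practice); it is illustrative only and not the implementation used in the paper.

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB between two same-shaped images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def global_ssim(ref: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Simplified SSIM computed globally instead of over local windows."""
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2  # standard stabilizers
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2))
```

Library implementations (e.g., scikit-image's `structural_similarity`) use a sliding window and averaging, which is what a published comparison would normally report.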

15 pages, 3305 KiB  
Article
Bend-Net: Bending Loss Regularized Multitask Learning Network for Nuclei Segmentation in Histopathology Images
by Haotian Wang, Aleksandar Vakanski, Changfa Shi and Min Xian
Information 2024, 15(7), 417; https://doi.org/10.3390/info15070417 - 18 Jul 2024
Abstract
Separating overlapped nuclei is a significant challenge in histopathology image analysis. Recently published approaches have achieved promising overall performance on nuclei segmentation; however, their performance on separating overlapped nuclei is limited. To address this issue, we propose a novel multitask learning network with a bending loss regularizer to separate overlapped nuclei accurately. The proposed multitask learning architecture enhances generalization by learning a shared representation from three tasks: instance segmentation, nuclei distance map prediction, and overlapped nuclei distance map prediction. The proposed bending loss assigns high penalties to concave contour points with large curvature and small penalties to convex contour points with small curvature; minimizing it avoids generating contours that encompass multiple nuclei. In addition, two new quantitative metrics, the Aggregated Jaccard Index of overlapped nuclei (AJIO) and the accuracy of overlapped nuclei (ACCO), have been designed to evaluate overlapped nuclei segmentation. We validate the proposed approach on the CoNSeP and MoNuSegv1 datasets using seven quantitative metrics: Aggregated Jaccard Index, Dice, Segmentation Quality, Recognition Quality, Panoptic Quality, AJIO, and ACCO. Extensive experiments demonstrate that the proposed Bend-Net outperforms eight state-of-the-art approaches.
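The idea of penalizing concave contour points more heavily than convex ones can be illustrated with a toy discrete-curvature penalty on a closed 2D contour. This is a sketch of the general concept, not the authors' actual bending loss; the weights and the turning-angle curvature proxy are assumptions for illustration.

```python
import numpy as np

def bending_penalty(contour: np.ndarray,
                    w_concave: float = 5.0,
                    w_convex: float = 1.0) -> float:
    """Toy curvature-weighted penalty for a closed 2D contour.

    contour: (N, 2) array of vertices in counter-clockwise order.
    Concave vertices (negative turn for a CCW contour) are penalized
    w_concave times more heavily than convex ones.
    """
    prev_edge = contour - np.roll(contour, 1, axis=0)   # edge into each vertex
    next_edge = np.roll(contour, -1, axis=0) - contour  # edge out of each vertex
    # z-component of the 2D cross product: its sign gives the turn direction
    cross = prev_edge[:, 0] * next_edge[:, 1] - prev_edge[:, 1] * next_edge[:, 0]
    dot = (prev_edge * next_edge).sum(axis=1)
    angle = np.abs(np.arctan2(cross, dot))              # turning-angle magnitude
    weights = np.where(cross < 0, w_concave, w_convex)  # concave turns cost more
    return float((weights * angle).sum())
```

A contour enclosing two touching nuclei has pronounced concave "pinch" points between them, so such a penalty pushes the optimizer toward splitting the contour into separate convex-ish shapes.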

12 pages, 3334 KiB  
Article
Generation of Synthetic Images of Trabecular Bone Based on Micro-CT Scans
by Jonas Grande-Barreto, Eduardo Polanco-Castro, Hayde Peregrina-Barreto, Eduardo Rosas-Mialma and Carmina Puig-Mar
Information 2023, 14(7), 375; https://doi.org/10.3390/info14070375 - 1 Jul 2023
Abstract
Creating synthetic images of trabecular tissue provides an alternative for researchers to validate algorithms designed to study trabecular bone. Developing synthetic images requires baseline data, such as datasets of digital biological samples or templates, which are often unavailable due to privacy restrictions. Even when such a baseline is available, the standard procedure combines the information into a single template as a starting point, reducing the variability of the generated synthetic images. This work proposes a methodology for building synthetic images of trabecular bone structure by creating a 3D network that simulates it. Next, the technical characteristics of the micro-CT scanner, the biomechanical properties of trabecular bone, and the physics of the imaging process are simulated to produce a synthetic image. The proposed methodology requires no biological samples, datasets, or templates. Since each synthetic image is unique, the methodology can generate a vast number of synthetic images, which is useful for comparing the performance of algorithms under different imaging conditions. The synthetic images were assessed using reference microarchitecture parameters, and experimental results provided evidence that the obtained values match those of approaches requiring initial data. The scope of this methodology covers the use of synthetic images in further biomedical research and the development of educational training tools for understanding medical images.

18 pages, 4639 KiB  
Article
The Detection of COVID-19 in Chest X-rays Using Ensemble CNN Techniques
by Domantas Kuzinkovas and Sandhya Clement
Information 2023, 14(7), 370; https://doi.org/10.3390/info14070370 - 29 Jun 2023
Abstract
Advances in the field of image classification using convolutional neural networks (CNNs) have greatly improved the accuracy of medical image diagnosis by radiologists. Numerous research groups have applied CNN methods to diagnose respiratory illnesses from chest X-rays and have extended this work to prove the feasibility of rapidly diagnosing COVID-19 with high degrees of accuracy. One issue in previous research has been the use of datasets containing only a few hundred chest X-ray images with COVID-19, causing CNNs to overfit the image data. This leads to lower accuracy when the model attempts to classify new images, as would be expected in clinical use. In this work, we present a model trained on the COVID-QU-Ex dataset containing 33,920 chest X-ray images, with an equal share of COVID-19, Non-COVID pneumonia, and Normal images. The model is an ensemble of pre-trained CNNs (ResNet50, VGG19, and VGG16) and GLCM textural features. The model achieved a 98.34% binary classification accuracy (COVID-19/no COVID-19) on a test dataset of 6581 chest X-rays and 94.68% for distinguishing between COVID-19, Non-COVID pneumonia, and Normal chest X-rays. The results also demonstrate that a higher 98.82% three-class test accuracy can be achieved using the model if the training dataset contains only a few thousand images; however, the generalizability of the model suffers due to the smaller dataset size. This study highlights the benefits of both ensemble CNN techniques and larger dataset sizes for medical image classification performance.
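A common way to combine several classifiers, as in the ensemble above, is soft voting: average the per-class probabilities from each model and take the argmax. The sketch below shows that generic mechanism only, with plain NumPy arrays standing in for model outputs; it makes no claim about how this particular paper fuses its CNN and GLCM branches.

```python
import numpy as np

def soft_vote(prob_list: list[np.ndarray]) -> np.ndarray:
    """Soft-voting ensemble: average class probabilities, then argmax.

    prob_list: list of (n_samples, n_classes) softmax outputs,
    one array per model in the ensemble.
    Returns the predicted class index for each sample.
    """
    stacked = np.stack(prob_list)      # (n_models, n_samples, n_classes)
    mean_probs = stacked.mean(axis=0)  # average over models
    return mean_probs.argmax(axis=1)   # predicted class per sample
```

Soft voting tends to outperform hard (majority) voting when the individual models are well calibrated, because confident models can outweigh uncertain ones.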

15 pages, 5113 KiB  
Article
Virtual CT Myelography: A Patch-Based Machine Learning Model to Improve Intraspinal Soft Tissue Visualization on Unenhanced Dual-Energy Lumbar Spine CT
by Xuan V. Nguyen, Devi D. Nelakurti, Engin Dikici, Sema Candemir, Daniel J. Boulter and Luciano M. Prevedello
Information 2022, 13(9), 412; https://doi.org/10.3390/info13090412 - 31 Aug 2022
Abstract
Background: Distinguishing between the spinal cord and cerebrospinal fluid (CSF) non-invasively on CT is challenging due to their similar mass densities. We hypothesize that patch-based machine learning applied to dual-energy CT can accurately distinguish CSF from neural or other tissues based on the center voxel and neighboring voxels. Methods: 88 regions of interest (ROIs) from 12 patients' dual-energy (100 and 140 kVp) lumbar spine CT exams were manually labeled by a neuroradiologist as one of four major tissue types (water, fat, bone, and nonspecific soft tissue). Four-class classifier convolutional neural networks were trained, validated, and tested on thousands of nonoverlapping patches extracted from 82 ROIs among 11 CT exams, with each patch representing pixel values (at low and high energies) of small, rectangular, 3D CT volumes. Different patch sizes were evaluated, ranging from 3 × 3 × 3 × 2 to 7 × 7 × 7 × 2. A final ensemble model incorporating all patch sizes was tested on patches extracted from six ROIs in a holdout patient. Results: Individual models showed overall test accuracies ranging from 99.8% for 3 × 3 × 3 × 2 patches (N = 19,423) to 98.1% for 7 × 7 × 7 × 2 patches (N = 1298). The final ensemble model showed 99.4% test classification accuracy, with sensitivities and specificities of 90% and 99.6%, respectively, for the water class and 98.6% and 100% for the soft tissue class. Conclusions: Convolutional neural networks utilizing local low-level features on dual-energy spine CT can yield accurate tissue classification and enhance the visualization of intraspinal neural tissue.
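The nonoverlapping k × k × k × 2 patch layout described above can be sketched with a NumPy reshape/transpose; the (Z, Y, X, 2) volume layout and the discarding of edge voxels that do not fill a whole patch are assumptions of this illustration, not details confirmed by the paper.

```python
import numpy as np

def extract_patches(volume: np.ndarray, k: int = 3) -> np.ndarray:
    """Split a dual-energy CT volume into nonoverlapping k x k x k x 2 patches.

    volume: (Z, Y, X, 2) array, last axis = low/high kVp channels.
    Trailing voxels that do not fill a whole patch are discarded.
    Returns an (n_patches, k, k, k, 2) array.
    """
    z, y, x, c = volume.shape
    zk, yk, xk = z // k, y // k, x // k
    v = volume[: zk * k, : yk * k, : xk * k]
    v = v.reshape(zk, k, yk, k, xk, k, c)  # split each spatial axis into blocks
    v = v.transpose(0, 2, 4, 1, 3, 5, 6)   # gather the block indices up front
    return v.reshape(-1, k, k, k, c)
```

Each patch's label would come from the tissue class of its center voxel, mirroring the center-voxel classification described in the abstract.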

Review


28 pages, 1919 KiB  
Review
A Review of Imaging Methods and Recent Nanoparticles for Breast Cancer Diagnosis
by Fahimeh Aminolroayaei, Saghar Shahbazi-Gahrouei, Amir Khorasani and Daryoush Shahbazi-Gahrouei
Information 2024, 15(1), 10; https://doi.org/10.3390/info15010010 - 22 Dec 2023
Abstract
Breast cancer is the most common cause of cancer death in women, and its early diagnosis helps treatment and increases patients' survival. This review article examines studies on recent findings of standard imaging techniques and their characteristics for breast cancer diagnosis, as well as the recent role of nanoparticles (NPs) used for breast cancer detection. A search was performed in the literature through scientific citation websites, including Google Scholar, PubMed, Scopus, and Web of Science, up to May 2023. A comprehensive review of different imaging modalities and NPs for breast cancer diagnosis is given, and the successes, challenges, and limitations of these methods are discussed.
