Application of Convolutional Neural Networks in Bioimaging and Biosignal Processes

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biosignal Processing".

Deadline for manuscript submissions: 31 March 2025 | Viewed by 4649

Special Issue Editors


Dr. Sebelan Danishvar
Guest Editor
College of Engineering, Design and Physical Sciences, Brunel University London, Uxbridge UB8 3PH, UK
Interests: image processing; signal processing; deep learning

Dr. Miguel Valencia
Guest Editor
Neurosciences Division, CIMA, University of Navarra, 31008 Pamplona, Navarra, Spain
Interests: bioengineering; brain diseases; neurophysiology; neurotechnology; signal analysis; ML/AI; embedded systems; complex systems

Special Issue Information

Dear Colleagues,

As is widely known, the advancement of artificial intelligence (AI) has driven a rapid rise in the use of computer-aided diagnosis (CAD) systems in medicine in recent years. Convolutional neural network (CNN) models have demonstrated high performance in identification, segmentation, and classification tasks, making them useful and effective in disease diagnosis and treatment. However, the majority of studies have employed standard deep convolutional architectures with little modification. For this Special Issue, submissions in the following areas are of particular interest: new architectures based on convolutional neural networks; new methods for feature selection/extraction and learning; and approaches to the scarcity and imbalance of medical data (including transfer learning, artificial data generation, and data augmentation). Applications may include the automatic detection of sleep stages, the detection and classification of epilepsy stages, the automatic detection of driver fatigue, and the detection and classification of emotions from physiological signals. In addition, research on the segmentation of medical images based on deep convolutional networks, including the diagnosis and classification of tumors of the brain, liver, bone, etc., is also welcome.

Dr. Sebelan Danishvar
Dr. Miguel Valencia
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • biomedical image/signal analysis
  • biomedical image/signal processing
  • new AI architectures
  • detection and recognition in biomedical image/signals
  • automated/computer-aided diagnosis using convolutional neural networks

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

25 pages, 6761 KiB  
Article
Detection of Rat Pain-Related Grooming Behaviors Using Multistream Recurrent Convolutional Networks on Day-Long Video Recordings
by Chien-Cheng Lee, Ping-Wing Lui, Wei-Wei Gao and Zhongjian Gao
Bioengineering 2024, 11(12), 1180; https://doi.org/10.3390/bioengineering11121180 - 21 Nov 2024
Viewed by 528
Abstract
In experimental pain studies involving animals, subjective pain reports are not feasible. Current methods for detecting pain-related behaviors rely on human observation, which is time-consuming and labor-intensive, particularly for lengthy video recordings. Automating the quantification of these behaviors poses substantial challenges. In this study, we developed and evaluated a deep learning, multistream algorithm to detect pain-related grooming behaviors in rats. Pain-related grooming behaviors were induced by injecting small amounts of pain-inducing chemicals into the rats’ hind limbs. Day-long video recordings were then analyzed with our algorithm, which initially filtered out non-grooming segments. The remaining segments, referred to as likely grooming clips, were used for model training and testing. Our model, a multistream recurrent convolutional network, learned to differentiate grooming from non-grooming behaviors within these clips through deep learning. The average validation accuracy across three evaluation methods was 88.5%. We further analyzed grooming statistics by comparing the duration of grooming episodes between experimental and control groups. Results demonstrated statistically significant changes in grooming behavior consistent with pain expression. Full article
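
For readers who want to experiment with the general approach, the following is a minimal PyTorch sketch of a multistream recurrent convolutional classifier: each stream applies a frame-level CNN followed by a recurrent layer, and the per-stream clip embeddings are fused for clip-level classification. The two-stream setup (e.g., RGB and optical-flow clips), the ResNet-18 backbone, and all sizes are illustrative assumptions, not the authors' exact configuration.

# Minimal sketch of a multistream recurrent convolutional classifier for short
# video clips (grooming vs. non-grooming). Stream choice, backbone, and sizes
# are assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision import models


class StreamEncoder(nn.Module):
    """Per-frame CNN features followed by a GRU over the clip."""

    def __init__(self, feat_dim: int = 512, hidden: int = 256):
        super().__init__()
        backbone = models.resnet18(weights=None)  # any frame-level CNN works here
        backbone.fc = nn.Identity()               # keep the 512-d pooled features
        self.cnn = backbone
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, 3, H, W)
        b, t, c, h, w = clip.shape
        feats = self.cnn(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, last_hidden = self.rnn(feats)          # (1, batch, hidden)
        return last_hidden.squeeze(0)


class MultiStreamRCN(nn.Module):
    """Fuses per-stream clip embeddings and classifies the clip."""

    def __init__(self, n_streams: int = 2, hidden: int = 256, n_classes: int = 2):
        super().__init__()
        self.streams = nn.ModuleList([StreamEncoder(hidden=hidden) for _ in range(n_streams)])
        self.head = nn.Linear(n_streams * hidden, n_classes)

    def forward(self, clips):
        fused = torch.cat([enc(x) for enc, x in zip(self.streams, clips)], dim=1)
        return self.head(fused)


if __name__ == "__main__":
    model = MultiStreamRCN()
    rgb = torch.randn(2, 16, 3, 112, 112)   # 16-frame RGB clips
    flow = torch.randn(2, 16, 3, 112, 112)  # e.g., optical-flow frames as a second stream
    print(model([rgb, flow]).shape)          # torch.Size([2, 2])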

15 pages, 1082 KiB  
Article
Deep Attention Fusion Hashing (DAFH) Model for Medical Image Retrieval
by Gangao Wu, Enhui Jin, Yanling Sun, Bixia Tang and Wenming Zhao
Bioengineering 2024, 11(7), 673; https://doi.org/10.3390/bioengineering11070673 - 2 Jul 2024
Viewed by 1338
Abstract
In medical image retrieval, accurately retrieving relevant images significantly impacts clinical decision making and diagnostics. Traditional image-retrieval systems primarily rely on single-dimensional image data, while current deep-hashing methods are capable of learning complex feature representations. However, retrieval accuracy and efficiency are hindered by diverse modalities and limited sample sizes. Objective: To address this, we propose a novel deep learning-based hashing model, the Deep Attention Fusion Hashing (DAFH) model, which integrates advanced attention mechanisms with medical imaging data. Methods: The DAFH model enhances retrieval performance by integrating multi-modality medical imaging data and employing attention mechanisms to optimize the feature extraction process. Utilizing multimodal medical image data from the Cancer Imaging Archive (TCIA), this study constructed and trained a deep hashing network that achieves high-precision classification of various cancer types. Results: At hash code lengths of 16, 32, and 48 bits, the model respectively attained Mean Average Precision (MAP@10) values of 0.711, 0.754, and 0.762, highlighting the potential and advantage of the DAFH model in medical image retrieval. Conclusions: The DAFH model demonstrates significant improvements in the efficiency and accuracy of medical image retrieval, proving to be a valuable tool in clinical settings. Full article
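
As an illustration of the general idea, the sketch below combines two modality feature vectors with a learned attention weighting, maps the fused feature to a relaxed binary code through a tanh hash layer, and ranks database entries by Hamming distance. The use of pre-pooled features instead of full CNN backbones, the attention form, and the 32-bit code length are assumptions for brevity, not the published DAFH architecture.

# Minimal sketch of deep hashing with attention-based fusion of two imaging
# modalities. Feature extractors, attention form, and code length are assumptions.
import torch
import torch.nn as nn


class AttentionFusionHasher(nn.Module):
    def __init__(self, feat_dim: int = 512, n_bits: int = 32, n_classes: int = 4):
        super().__init__()
        # Per-modality feature extractors would be CNNs in practice; represented
        # here by linear projections over pre-pooled features for brevity.
        self.proj_a = nn.Linear(feat_dim, feat_dim)
        self.proj_b = nn.Linear(feat_dim, feat_dim)
        self.attn = nn.Sequential(nn.Linear(2 * feat_dim, 2), nn.Softmax(dim=1))
        self.hash_layer = nn.Sequential(nn.Linear(feat_dim, n_bits), nn.Tanh())
        self.classifier = nn.Linear(n_bits, n_classes)  # auxiliary classification head

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor):
        a, b = self.proj_a(feat_a), self.proj_b(feat_b)
        w = self.attn(torch.cat([a, b], dim=1))          # per-sample modality weights
        fused = w[:, :1] * a + w[:, 1:] * b              # attention-weighted fusion
        codes = self.hash_layer(fused)                   # relaxed codes in (-1, 1)
        return codes, self.classifier(codes)


def hamming_rank(query: torch.Tensor, database: torch.Tensor) -> torch.Tensor:
    """Rank database items by Hamming distance to the query's binarized code."""
    q, db = torch.sign(query), torch.sign(database)
    dist = (q.shape[-1] - q @ db.T) / 2
    return dist.argsort(dim=-1)


if __name__ == "__main__":
    model = AttentionFusionHasher()
    codes, logits = model(torch.randn(8, 512), torch.randn(8, 512))
    print(codes.shape, hamming_rank(codes[:1], codes).shape)  # (8, 32), (1, 8)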

14 pages, 4131 KiB  
Article
Concurrent Learning Approach for Estimation of Pelvic Tilt from Anterior–Posterior Radiograph
by Ata Jodeiri, Hadi Seyedarabi, Sebelan Danishvar, Seyyed Hossein Shafiei, Jafar Ganjpour Sales, Moein Khoori, Shakiba Rahimi and Seyed Mohammad Javad Mortazavi
Bioengineering 2024, 11(2), 194; https://doi.org/10.3390/bioengineering11020194 - 17 Feb 2024
Viewed by 1789
Abstract
Accurate and reliable estimation of the pelvic tilt is one of the essential pre-planning factors for total hip arthroplasty to prevent common post-operative complications such as implant impingement and dislocation. Inspired by the latest advances in deep learning-based systems, our focus in this paper has been to present an innovative and accurate method for estimating the functional pelvic tilt (PT) from a standing anterior–posterior (AP) radiography image. We introduce an encoder–decoder-style network based on a concurrent learning approach called VGG-UNET (VGG embedded in U-NET), where a deep fully convolutional network known as VGG is embedded at the encoder part of an image segmentation network, i.e., U-NET. In the bottleneck of the VGG-UNET, in addition to the decoder path, we use another path utilizing lightweight convolutional and fully connected layers to combine all extracted feature maps from the final convolution layer of VGG and thus regress PT. In the test phase, we exclude the decoder path and consider only a single target task, i.e., PT estimation. The absolute errors obtained using VGG-UNET, VGG, and Mask R-CNN are 3.04 ± 2.49, 3.92 ± 2.92, and 4.97 ± 3.87, respectively. It is observed that the VGG-UNET leads to a more accurate prediction with a lower standard deviation (STD). Our experimental results demonstrate that the proposed multi-task network leads to a significantly improved performance compared to the best-reported results based on cascaded networks. Full article
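
As a rough illustration of the concurrent-learning idea, the sketch below shares a VGG-16 encoder between a lightweight decoder (segmentation, used only during training) and a small regression head at the bottleneck that outputs the pelvic tilt; at test time only the regression path is evaluated. The decoder here is a stand-in without U-Net skip connections, and all layer sizes are assumptions rather than the paper's exact VGG-UNET configuration.

# Minimal sketch of a shared VGG encoder with a segmentation decoder and a
# bottleneck regression head for pelvic tilt. Depths and sizes are assumptions.
import torch
import torch.nn as nn
from torchvision import models


class VGGUNetPT(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = models.vgg16(weights=None).features  # output: (B, 512, H/32, W/32)
        self.decoder = nn.Sequential(                        # lightweight stand-in decoder
            nn.ConvTranspose2d(512, 128, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 32, 2, stride=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 1),                             # segmentation logits
        )
        self.regressor = nn.Sequential(                      # bottleneck regression path
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(512, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 1),                                # pelvic tilt (scalar)
        )

    def forward(self, x: torch.Tensor, with_segmentation: bool = True):
        z = self.encoder(x)
        pt = self.regressor(z)
        if with_segmentation:                                # decoder used only during training
            return pt, self.decoder(z)
        return pt                                            # test phase: PT estimation only


if __name__ == "__main__":
    net = VGGUNetPT()
    x = torch.randn(1, 3, 256, 256)
    pt, seg = net(x)
    print(pt.shape, seg.shape)  # torch.Size([1, 1]) torch.Size([1, 1, 32, 32])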
