

Machine Learning for Biomedical Application

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Applied Biosciences and Bioengineering".

Deadline for manuscript submissions: closed (31 January 2021) | Viewed by 67191

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Guest Editor
Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800 Zabrze, Poland
Interests: image and signal processing; artificial intelligence; deep learning

Special Issue Information

Dear Colleagues,

Biomedicine is a multidisciplinary branch of medical science that draws on many scientific disciplines, e.g., biology, biotechnology, bioinformatics, and genetics, and covers various medical specialties. In recent years, the field has developed rapidly. This development has produced not only achievements that allow a better understanding of how the human body functions at the cellular, anatomical, and physiological levels, but also large amounts of data, generated, among other sources, by analyses of the human genome and by the processing, analysis, and recognition of a wide class of biomedical signals and images acquired with increasingly advanced medical imaging devices. Analyzing these data requires advanced computational methods, in particular those based on artificial intelligence and, especially, machine learning.

The Special Issue will include applications of machine learning in processing, analysis, and recognition of biomedical data. Specific attention will be given to recently developed deep learning techniques and their application in extracting essential information from large biomedical databases. Hence, proposed topics include but are not limited to the following applications of machine learning:

  • Genomic sequence determinations and analysis of gene expression patterns;
  • Processing and analysis of biomedical signals and images;
  • Modifying living organisms according to human purposes;
  • Improving cell and tissue culture technologies;
  • Development of deep learning architectures in analysis of biomedical data.

Prof. Dr. Michał Strzelecki
Prof. Pawel Badura
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • biotechnology
  • signal and image analysis
  • pattern recognition
  • genomics

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (11 papers)


Editorial

Jump to: Research, Review

5 pages, 177 KiB  
Editorial
Machine Learning for Biomedical Application
by Michał Strzelecki and Pawel Badura
Appl. Sci. 2022, 12(4), 2022; https://doi.org/10.3390/app12042022 - 15 Feb 2022
Cited by 22 | Viewed by 5118
Abstract
The tremendous development of technology also affects medical science, including imaging diagnostics [...] Full article
(This article belongs to the Special Issue Machine Learning for Biomedical Application)

Research

Jump to: Editorial, Review

24 pages, 6722 KiB  
Article
Computer Tools to Analyze Lung CT Changes after Radiotherapy
by Marek Konkol, Konrad Śniatała, Paweł Śniatała, Szymon Wilk, Beata Baczyńska and Piotr Milecki
Appl. Sci. 2021, 11(4), 1582; https://doi.org/10.3390/app11041582 - 10 Feb 2021
Cited by 7 | Viewed by 2475
Abstract
The paper describes a computer tool dedicated to the comprehensive analysis of lung changes in computed tomography (CT) images. The correlation between the dose delivered during radiotherapy and pulmonary fibrosis is offered as an example analysis. The input data, in DICOM (Digital Imaging and Communications in Medicine) format, come from CT images and dose distribution models of patients. The CT images are processed using convolutional neural networks; next, the selected slices go through segmentation and registration algorithms. The results of the analysis are visualized graphically and reported as numerical parameters calculated from the image analysis. Full article
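The example analysis above correlates delivered dose with post-radiotherapy fibrosis. A minimal sketch of that kind of dose–response check, using hypothetical per-region values (not data from the paper), is a plain Pearson correlation:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical per-region values: mean delivered dose (Gy) and a
# fibrosis score derived from post-treatment CT density change.
doses = [5.0, 12.0, 20.0, 35.0, 50.0]
fibrosis = [0.1, 0.3, 0.5, 0.9, 1.2]
r = pearson_r(doses, fibrosis)
```

A strong positive `r` on such data would support the dose–fibrosis relationship the tool is designed to visualize; the paper's actual pipeline works on registered image regions rather than scalar summaries.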

17 pages, 920 KiB  
Article
Comorbidity Pattern Analysis for Predicting Amyotrophic Lateral Sclerosis
by Chia-Hui Huang, Bak-Sau Yip, David Taniar, Chi-Shin Hwang and Tun-Wen Pai
Appl. Sci. 2021, 11(3), 1289; https://doi.org/10.3390/app11031289 - 31 Jan 2021
Cited by 10 | Viewed by 2514
Abstract
Electronic Medical Records (EMRs) can be used to create alerts for clinicians to identify patients at risk and to provide useful information for clinical decision-making support. In this study, we proposed a novel approach for predicting Amyotrophic Lateral Sclerosis (ALS) based on comorbidities and associated indicators using EMRs. The medical histories of ALS patients were analyzed and compared with those of subjects without ALS, and the associated comorbidities were selected as features for constructing the machine learning and prediction model. We proposed a novel weighted Jaccard index (WJI) that incorporates four different machine learning techniques to construct prediction systems. Alternative prediction models were constructed based on two different levels of comorbidity: single disease codes and clustered disease codes. With an accuracy of 83.7%, sensitivity of 78.8%, specificity of 85.7%, and area under the receiver operating characteristic curve (AUC) value of 0.907 for the single disease code level, the proposed WJI outperformed the traditional Jaccard index (JI) and scoring methods. Incorporating the proposed WJI into EMRs enabled the construction of a prediction system for analyzing the risk of suffering a specific disease based on comorbidity combinatorial patterns, which could provide a fast, low-cost, and noninvasive evaluation approach for early diagnosis of a specific disease. Full article
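The abstract does not spell out the paper's exact weighted Jaccard index (WJI), but a common weighted-Jaccard form — element-wise minima over maxima of per-code weights — illustrates the idea of comparing a patient's comorbidity pattern against a disease-associated profile. The codes and weights below are hypothetical:

```python
def weighted_jaccard(a, b):
    """Weighted Jaccard similarity between two {code: weight} maps.

    Plain Jaccard treats all disease codes equally; here each code carries
    a weight (e.g., its association strength with the target disease), and
    similarity is the sum of element-wise minima over the sum of maxima.
    """
    codes = set(a) | set(b)
    num = sum(min(a.get(c, 0.0), b.get(c, 0.0)) for c in codes)
    den = sum(max(a.get(c, 0.0), b.get(c, 0.0)) for c in codes)
    return num / den if den else 0.0

# Hypothetical comorbidity codes and association weights
profile = {"G12": 0.9, "R13": 0.6, "M62": 0.4}   # ALS-associated pattern
patient = {"G12": 0.9, "M62": 0.4, "J45": 0.2}
score = weighted_jaccard(profile, patient)
```

In the paper, such similarity scores feed into four machine learning techniques; here the score alone shows how shared high-weight comorbidities raise the similarity.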

12 pages, 2855 KiB  
Article
Intracranial Hemorrhage Detection in Head CT Using Double-Branch Convolutional Neural Network, Support Vector Machine, and Random Forest
by Agata Sage and Pawel Badura
Appl. Sci. 2020, 10(21), 7577; https://doi.org/10.3390/app10217577 - 27 Oct 2020
Cited by 60 | Viewed by 14897
Abstract
Brain hemorrhage is a severe threat to human life, and its timely and correct diagnosis and treatment are of great importance. Multiple types of brain hemorrhage are distinguished depending on the location and character of bleeding. The main division covers five subtypes: subdural, epidural, intraventricular, intraparenchymal, and subarachnoid hemorrhage. This paper presents an approach to detect these intracranial hemorrhage types in computed tomography images of the head. The model trained for each hemorrhage subtype is based on a double-branch convolutional neural network of ResNet-50 architecture. It extracts features from two chromatic representations of the input data: a concatenation of the image normalized in different intensity windows and a stack of three consecutive slices creating a 3D spatial context. The joint feature vector is passed to the classifier to produce the final decision. We tested two tools: the support vector machine and the random forest. The experiments involved 372,556 images from 11,454 CT series of 9997 patients, with each image annotated with labels related to the hemorrhage subtypes. We validated deep networks from both branches of our framework and the model with either of two classifiers under consideration. The obtained results justify the use of a combination of double-source features with the random forest classifier. The system outperforms state-of-the-art methods in terms of F1 score. The highest detection accuracy was obtained in intraventricular (96.7%) and intraparenchymal hemorrhages (93.3%). Full article
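One branch of the model above concatenates the same CT image normalized in different intensity windows. A minimal sketch of that windowing step is below; the window centers and widths are common head-CT presets (brain, subdural, bone), not values taken from the paper:

```python
def window(hu, center, width):
    """Map Hounsfield units into [0, 1] within a given intensity window."""
    lo, hi = center - width / 2, center + width / 2
    return [min(max((v - lo) / (hi - lo), 0.0), 1.0) for v in hu]

# One row of Hounsfield values: air, water, gray matter, acute blood, bone-ish, dense bone
hu_row = [-1000, 0, 40, 80, 300, 1000]
channels = [window(hu_row, 40, 80),     # brain window (assumed preset)
            window(hu_row, 75, 215),    # subdural window (assumed preset)
            window(hu_row, 600, 2800)]  # bone window (assumed preset)
```

Stacking such channels gives the network a pseudo-chromatic input in which hemorrhage-relevant contrasts survive at several scales, which is the motivation for the paper's double-source features.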

18 pages, 9670 KiB  
Article
MRU-NET: A U-Shaped Network for Retinal Vessel Segmentation
by Hongwei Ding, Xiaohui Cui, Leiyang Chen and Kun Zhao
Appl. Sci. 2020, 10(19), 6823; https://doi.org/10.3390/app10196823 - 29 Sep 2020
Cited by 11 | Viewed by 3526
Abstract
Fundus blood vessel image segmentation plays an important role in the diagnosis and treatment of diseases and is the basis of computer-aided diagnosis. Feature information in retinal blood vessel images is relatively complicated, and existing algorithms sometimes struggle to segment them effectively. To address the low accuracy and low sensitivity of existing segmentation methods, an improved U-shaped neural network (MRU-NET) for retinal vessel segmentation is proposed. Firstly, an image enhancement algorithm and a random segmentation method are used to overcome the low contrast and limited quantity of the original images; the smaller image blocks produced by random segmentation also help reduce the complexity of the U-shaped network. Secondly, residual learning is introduced into the encoder and decoder to improve the efficiency of feature use and reduce information loss, and a feature fusion module is introduced between the encoder and decoder to extract image features at different granularities. Finally, a feature balancing module is added to the skip connections to resolve the semantic gap between low-dimensional features in the encoder and high-dimensional features in the decoder. Experimental results show that our method achieves better accuracy and sensitivity than some state-of-the-art methods on the DRIVE and STARE datasets (DRIVE: accuracy (ACC) = 0.9611, sensitivity (SE) = 0.8613; STARE: ACC = 0.9662, SE = 0.7887). Full article
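The random-segmentation step described above can be sketched as sampling small square patches from a fundus image, which both augments scarce training data and keeps the network input small. This is a generic illustration, not the paper's exact sampling scheme:

```python
import random

def random_patches(image, patch, count, seed=0):
    """Sample square patches from a 2-D image given as a list of rows.

    Smaller blocks reduce the memory footprint of a U-shaped network
    and multiply the number of training examples.
    """
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    out = []
    for _ in range(count):
        r = rng.randrange(h - patch + 1)
        c = rng.randrange(w - patch + 1)
        out.append([row[c:c + patch] for row in image[r:r + patch]])
    return out

# A tiny synthetic 16x16 "image" with distinct pixel values
img = [[(r * 16 + c) % 256 for c in range(16)] for r in range(16)]
patches = random_patches(img, patch=8, count=4)
```

At inference time the per-patch predictions would be stitched back into a full-size vessel map; the seed is fixed here only to make the sketch reproducible.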

16 pages, 974 KiB  
Article
Deep Instance Segmentation of Laboratory Animals in Thermal Images
by Magdalena Mazur-Milecka, Tomasz Kocejko and Jacek Ruminski
Appl. Sci. 2020, 10(17), 5979; https://doi.org/10.3390/app10175979 - 28 Aug 2020
Cited by 7 | Viewed by 2674
Abstract
In this paper we focus on deep instance segmentation of laboratory rodents in thermal images. Thermal imaging is well suited to observing the behaviour of laboratory animals, especially in low-light conditions. It is a non-intrusive method that allows the activity of animals to be monitored and, potentially, physiological changes expressed in dynamic thermal patterns to be observed. Analyzing the recorded sequences of thermal images requires smart algorithms for automatic processing of millions of thermal frames. Instance segmentation extracts each animal from a frame so that its activity and thermal patterns can be tracked. In this work, we adopted two instance segmentation algorithms, i.e., Mask R-CNN and TensorMask. Both methods, in different configurations, were applied to a set of thermal sequences, and both achieved high accuracy. The best results were obtained with the TensorMask model, initially pre-trained on visible-light images and finally trained on thermal images of rodents. The achieved mean average precision was above 90 percent, which shows that model pre-training on visible images can improve the results of thermal image segmentation. Full article
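The mean-average-precision figure quoted above rests on matching predicted instance masks to ground-truth masks by intersection-over-union (IoU). A stripped-down sketch of that matching, on toy flat binary masks rather than real thermal frames, looks like this:

```python
def mask_iou(a, b):
    """Intersection-over-union of two binary masks given as flat 0/1 lists."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    return inter / union if union else 0.0

def match_precision(preds, gts, thr=0.5):
    """Fraction of predicted masks that match some ground-truth mask
    at IoU >= thr (a simplification of COCO-style average precision)."""
    return sum(any(mask_iou(p, g) >= thr for g in gts) for p in preds) / len(preds)

# Toy 4-pixel masks for two "animals"
gt = [[1, 1, 0, 0], [0, 0, 1, 1]]
pred = [[1, 1, 0, 0], [0, 1, 1, 1]]
prec = match_precision(pred, gt)
```

Full mAP additionally sweeps confidence and IoU thresholds and averages precision over recall levels; this sketch only shows the single-threshold core of the metric.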

22 pages, 7592 KiB  
Article
A Multi-Layer Perceptron Network for Perfusion Parameter Estimation in DCE-MRI Studies of the Healthy Kidney
by Artur Klepaczko, Michał Strzelecki, Marcin Kociołek, Eli Eikefjord and Arvid Lundervold
Appl. Sci. 2020, 10(16), 5525; https://doi.org/10.3390/app10165525 - 10 Aug 2020
Cited by 7 | Viewed by 2956
Abstract
Background: Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is an imaging technique which helps in visualizing and quantifying perfusion—one of the most important indicators of an organ’s state. This paper focuses on perfusion and filtration in the kidney, whose performance directly influences versatile functions of the body. In clinical practice, kidney function is assessed by measuring glomerular filtration rate (GFR). Estimating GFR based on DCE-MRI data requires the application of an organ-specific pharmacokinetic (PK) model. However, determination of the model parameters, and thus the characterization of GFR, is sensitive to determination of the arterial input function (AIF) and the initial choice of parameter values. Methods: This paper proposes a multi-layer perceptron network for PK model parameter determination, in order to overcome the limitations of traditional model optimization techniques based on non-linear least-squares curve-fitting. As a reference method, we applied the trust-region reflective algorithm to numerically optimize the model. The effectiveness of the proposed approach was tested on 20 data sets, collected from 10 healthy volunteers whose image-derived GFR scores were compared with ground-truth blood test values. Results: The achieved mean difference between the image-derived and ground-truth GFR values was 2.35 mL/min/1.73 m2, which is comparable to the result obtained for the reference estimation method (−5.80 mL/min/1.73 m2). Conclusions: Neural networks are a feasible alternative to the least-squares curve-fitting algorithm, ensuring agreement with ground-truth measurements at a comparable level. The advantages of using a neural network are twofold. Firstly, it can estimate a GFR value without the need to determine the AIF for each individual patient. Secondly, a reliable estimate can be obtained without the need to manually set up either the initial parameter values or the constraints thereof. Full article
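To make the reference curve-fitting idea concrete, here is a toy stand-in: a one-parameter exponential uptake model fitted by minimizing squared error over a parameter grid. The paper's kidney PK model has more parameters and uses a trust-region reflective optimizer (and the proposed MLP replaces the fit entirely); the model form and values below are illustrative assumptions only:

```python
import math

def uptake(t, k, cp=1.0):
    """Toy one-parameter uptake curve: C(t) = cp * (1 - exp(-k * t)).

    A simplified stand-in for an organ-specific PK model; k plays the
    role of a transfer-rate parameter.
    """
    return cp * (1.0 - math.exp(-k * t))

def fit_k(times, curve, grid):
    """Reference-style fit: pick the k on the grid minimizing squared error."""
    def sse(k):
        return sum((uptake(t, k) - c) ** 2 for t, c in zip(times, curve))
    return min(grid, key=sse)

times = [0.5 * i for i in range(10)]          # acquisition times (a.u.)
true_k = 0.8
curve = [uptake(t, true_k) for t in times]    # noise-free synthetic curve
grid = [0.1 * i for i in range(1, 21)]
k_hat = fit_k(times, curve, grid)
```

The sensitivity of such fits to the initial guess and to the arterial input function is exactly what motivates the paper's learned estimator, which maps the measured curve directly to parameter values.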

16 pages, 3613 KiB  
Article
Person Independent Recognition of Head Gestures from Parametrised and Raw Signals Recorded from Inertial Measurement Unit
by Anna Borowska-Terka and Pawel Strumillo
Appl. Sci. 2020, 10(12), 4213; https://doi.org/10.3390/app10124213 - 19 Jun 2020
Cited by 10 | Viewed by 2808
Abstract
Numerous applications of human–machine interfaces, e.g., those dedicated to persons with disabilities, require contactless handling of devices or systems. The purpose of this research is to develop a hands-free head-gesture-controlled interface that can help persons with disabilities communicate with other people and devices, e.g., allow the paralyzed to signal messages or the visually impaired to handle travel aids. The hardware of the interface consists of a small stereovision rig with a built-in inertial measurement unit (IMU), positioned on the user’s forehead. Two approaches to recognizing head movements were considered. In the first approach, statistical parameters such as the average, minimum and maximum amplitude, standard deviation, kurtosis, correlation coefficient, and signal energy were calculated for various time-window sizes of the signals recorded from a three-axis accelerometer and a three-axis gyroscope. In the second approach, the focus was on direct analysis of the signal samples recorded from the IMU. In both approaches, the accuracies of 16 different classifiers in distinguishing the head movements pitch, roll, yaw, and immobility were evaluated. Recordings of head gestures were collected from 65 individuals. The best results on the testing data were obtained with the non-parametric approach, i.e., direct classification of unprocessed samples of IMU signals, using a Support Vector Machine (SVM) classifier (95% correct recognitions). Slightly worse results in this approach were obtained with the random forest classifier (93%). The achieved high recognition rates of the head gestures suggest that a person with a physical or sensory disability could efficiently communicate with other people or manage applications using simple head gesture sequences. Full article
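The first (parametrised) approach above reduces each signal window to a handful of statistics. A minimal sketch of that feature extraction for one IMU channel follows; kurtosis and the cross-axis correlation coefficient from the paper are omitted for brevity, and the sample window is invented:

```python
def window_features(samples):
    """Statistical features of one IMU signal window: mean, min, max,
    standard deviation, and signal energy."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return {
        "mean": mean,
        "min": min(samples),
        "max": max(samples),
        "std": var ** 0.5,
        "energy": sum(s * s for s in samples),
    }

gyro_yaw = [0.0, 0.5, 1.0, 0.5, 0.0]   # hypothetical gyroscope window (rad/s)
feats = window_features(gyro_yaw)
```

Concatenating such feature vectors across all six IMU channels yields the classifier input; the paper's second approach skips this step and feeds raw samples to the classifier directly.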

16 pages, 11209 KiB  
Article
Automatic Cephalometric Landmark Detection on X-ray Images Using a Deep-Learning Method
by Yu Song, Xu Qiao, Yutaro Iwamoto and Yen-wei Chen
Appl. Sci. 2020, 10(7), 2547; https://doi.org/10.3390/app10072547 - 7 Apr 2020
Cited by 77 | Viewed by 12001
Abstract
Accurate automatic quantitative cephalometry is essential for orthodontics. However, manual labeling of cephalometric landmarks is tedious and subjective, and it must be performed by professional doctors. In recent years, deep learning has gained attention for its success in the computer vision field, achieving great progress on problems such as image classification and image segmentation. In this paper, we propose a two-step method that automatically detects cephalometric landmarks on skeletal X-ray images. First, we roughly extract a region of interest (ROI) patch for each landmark by registering the test image to training images with annotated landmarks. Then, we utilize pre-trained networks with a ResNet50 backbone, a state-of-the-art convolutional neural network, to detect each landmark in its ROI patch; the network directly outputs the coordinates of the landmarks. We evaluate our method on two datasets: the ISBI 2015 Grand Challenge in Dental X-ray Image Analysis and our own dataset provided by Shandong University. The experiments demonstrate that the proposed method achieves satisfactory results in both SDR (Successful Detection Rate) and SCR (Successful Classification Rate). However, the computational time remains to be improved in future work. Full article
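The first step above crops an ROI patch around each landmark position estimated by registration. A minimal sketch of such a crop, with clamping so the patch never leaves the image, is below; the patch size and coordinates are illustrative, not the paper's settings:

```python
def crop_roi(image, center, size):
    """Crop a size x size patch centred on an estimated landmark,
    clamped to stay fully inside the image (given as a list of rows)."""
    h, w = len(image), len(image[0])
    r0 = min(max(center[0] - size // 2, 0), h - size)
    c0 = min(max(center[1] - size // 2, 0), w - size)
    return [row[c0:c0 + size] for row in image[r0:r0 + size]]

# Synthetic 10x10 "radiograph" with value r*10+c at pixel (r, c)
img = [[r * 10 + c for c in range(10)] for r in range(10)]
patch = crop_roi(img, center=(1, 8), size=4)   # landmark near the top-right corner
```

Each patch is then passed to the landmark-specific regression network, which outputs coordinates relative to the patch; clamping keeps the input shape fixed even for landmarks near the image border.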

16 pages, 3043 KiB  
Article
An Efficient Algorithm for Cardiac Arrhythmia Classification Using Ensemble of Depthwise Separable Convolutional Neural Networks
by Eko Ihsanto, Kalamullah Ramli, Dodi Sudiana and Teddy Surya Gunawan
Appl. Sci. 2020, 10(2), 483; https://doi.org/10.3390/app10020483 - 9 Jan 2020
Cited by 44 | Viewed by 6744
Abstract
Many algorithms have been developed for automated electrocardiogram (ECG) classification. Due to the non-stationary nature of the ECG signal, it is rather challenging to use traditional handcrafted methods, such as time-based feature extraction and classification, which paves the way for machine learning implementations. This paper proposes a novel method, i.e., an ensemble of depthwise separable convolutional (DSC) neural networks, for the classification of cardiac arrhythmia ECG beats. Using the proposed method, the four stages of ECG classification, i.e., QRS detection, preprocessing, feature extraction, and classification, are reduced to two steps only: QRS detection and classification. No preprocessing is required, as feature extraction is combined with classification. Moreover, to reduce the computational cost while maintaining accuracy, several techniques were implemented, including the All Convolutional Network (ACN), Batch Normalization (BN), and ensembles of convolutional neural networks. The performance of the proposed ensemble CNNs was evaluated on the MIT-BIH arrhythmia database. In the training phase, around 22% of the 110,057 beats extracted from 48 records were utilized. Using only these 22% labeled training data, the proposed algorithm was able to classify the remaining 78% of the database into 16 classes. Furthermore, the sensitivity (Sn), specificity (Sp), positive predictivity (Pp), and accuracy (Acc) were 99.03%, 99.94%, 99.03%, and 99.88%, respectively. The proposed algorithm requires around 180 μs, which is suitable for real-time application. These results show that the proposed method outperforms other state-of-the-art methods. Full article
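The computational saving of depthwise separable convolution comes from factoring a standard convolution into a per-channel (depthwise) filter followed by a 1x1 (pointwise) mixing step. The parameter counts below make the saving concrete; the channel and kernel sizes are illustrative, not the paper's layer configuration:

```python
def conv1d_params(c_in, c_out, k):
    """Weight count of a standard 1-D convolution (bias omitted)."""
    return c_in * c_out * k

def dsc1d_params(c_in, c_out, k):
    """Depthwise separable version: one k-tap depthwise filter per input
    channel, followed by a 1x1 pointwise convolution mixing channels."""
    return c_in * k + c_in * c_out

std = conv1d_params(64, 128, 5)   # standard layer
dsc = dsc1d_params(64, 128, 5)    # depthwise separable equivalent
saving = std / dsc
```

For these sizes the DSC layer uses roughly a fifth of the weights of the standard layer, which is why an ensemble of DSC networks can stay fast enough for the sub-millisecond per-beat inference reported above.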

Review

Jump to: Editorial, Research

24 pages, 5716 KiB  
Review
Automated Detection of Sleep Stages Using Deep Learning Techniques: A Systematic Review of the Last Decade (2010–2020)
by Hui Wen Loh, Chui Ping Ooi, Jahmunah Vicnesh, Shu Lih Oh, Oliver Faust, Arkadiusz Gertych and U. Rajendra Acharya
Appl. Sci. 2020, 10(24), 8963; https://doi.org/10.3390/app10248963 - 15 Dec 2020
Cited by 79 | Viewed by 9994
Abstract
Sleep is vital for one’s general well-being, but it is often neglected, which has led to an increase in sleep disorders worldwide. Indicators of sleep disorders, such as sleep interruptions, extreme daytime drowsiness, or snoring, can be detected with sleep analysis. However, sleep analysis relies on visual scoring conducted by experts and is susceptible to inter- and intra-observer variability. One way to overcome these limitations is to support experts with a programmed diagnostic tool (PDT) based on artificial intelligence for timely detection of sleep disturbances. Artificial intelligence technology, such as deep learning (DL), ensures that data are fully utilized with little to no information loss during training. This paper provides a comprehensive review of 36 studies, published between March 2013 and August 2020, which employed DL models to analyze overnight polysomnogram (PSG) recordings for the classification of sleep stages. Our analysis shows that more than half of the studies employed convolutional neural networks (CNNs) on electroencephalography (EEG) recordings for sleep stage classification and achieved high performance. Our study also underscores that CNN models, particularly one-dimensional CNN models, are advantageous in yielding higher classification accuracies. More importantly, we noticed that EEG alone is not sufficient to achieve robust classification results. Future automated detection systems should consider other PSG recordings, such as electrooculogram (EOG) and electromyogram (EMG) signals, along with input from human experts, to achieve the required sleep stage classification robustness. Hence, for DL methods to be fully realized as a practical PDT for sleep stage scoring in clinical applications, the inclusion of other PSG recordings besides EEG is necessary. In this respect, our report covers methods published in the last decade, underscoring the use of DL models with other PSG recordings for the scoring of sleep stages. Full article
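The DL models surveyed above classify sleep stages from overnight PSG recordings, which are conventionally scored in 30-second epochs. The preprocessing step shared by essentially all such pipelines is epoch segmentation; a minimal sketch, with an assumed sampling rate and dummy signal, is:

```python
def epochs(signal, fs, epoch_s=30):
    """Split one PSG channel into non-overlapping scoring epochs.

    Each 30-second epoch is the unit a sleep-stage classifier labels
    (W, N1, N2, N3, or REM under the AASM scheme).
    """
    n = fs * epoch_s
    return [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]

fs = 100                          # hypothetical sampling rate, Hz
eeg = [0.0] * (fs * 90)           # 90 s of dummy EEG samples
segs = epochs(eeg, fs)
```

A multi-channel system of the kind the review recommends would segment the EOG and EMG channels on the same epoch grid and feed the aligned epochs to the model together.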
