
Advances in Artificial Intelligence for Biomedical Signal and Image Analysis

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: closed (30 November 2022) | Viewed by 39528

Special Issue Editors


Guest Editor
Australian Institute of Health Innovation, Macquarie University, Sydney, Australia
Interests: audio signal processing; biomedical signal processing; machine learning

Guest Editor
Centre for Health Informatics, Macquarie University, Sydney, NSW 2019, Australia
Interests: medical informatics; medical image computing; machine learning

Guest Editor
Australian Institute of Health Innovation, Macquarie University, Sydney, Australia
Interests: machine learning; computer vision; health data science

Special Issue Information

Dear Colleagues,

Biomedical signal and image processing find application in the detection, management, and treatment of various diseases and disorders. Recent advances in artificial intelligence, and especially in deep learning, for biomedical signal and image analysis have improved predictive performance. However, various challenges are encountered in the use of artificial intelligence on biomedical data, such as small datasets, heterogeneous data sources, noise in signals and annotations, and computational limitations, amongst others.

In this Special Issue on “Advances in Artificial Intelligence for Biomedical Signal and Image Analysis”, we invite scientists and researchers to contribute original research papers on their use of novel artificial intelligence techniques for the analysis of biomedical signals and images or comprehensive review papers on the advances in artificial intelligence for biomedical signal and image analysis.

Dr. Roneel V. Sharan
Dr. Sidong Liu
Dr. Hao Xiong
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • biomedical signal processing
  • biomedical imaging and image processing
  • biomedical sensors and wearable systems
  • deep learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (14 papers)


Research

16 pages, 2115 KiB  
Article
Camera- and Viewpoint-Agnostic Evaluation of Axial Postural Abnormalities in People with Parkinson’s Disease through Augmented Human Pose Estimation
by Stefano Aldegheri, Carlo Alberto Artusi, Serena Camozzi, Roberto Di Marco, Christian Geroin, Gabriele Imbalzano, Leonardo Lopiano, Michele Tinazzi and Nicola Bombieri
Sensors 2023, 23(6), 3193; https://doi.org/10.3390/s23063193 - 16 Mar 2023
Cited by 3 | Viewed by 2435
Abstract
Axial postural abnormalities (aPA) are common features of Parkinson’s disease (PD) and manifest in over 20% of patients during the course of the disease. aPA form a spectrum of functional trunk misalignment, ranging from a typical Parkinsonian stooped posture to progressively greater degrees of spine deviation. Current research has not yet led to a sufficient understanding of the pathophysiology and management of aPA in PD, partially due to a lack of agreement on validated, user-friendly, automatic tools for measuring and analysing the differences in the degree of aPA according to patients’ therapeutic conditions and tasks. In this context, human pose estimation (HPE) software based on deep learning could be a valid support, as it automatically extrapolates the spatial coordinates of human skeleton keypoints from images or videos. Nevertheless, standard HPE platforms have two limitations that prevent their adoption in this clinical practice. First, standard HPE keypoints are inconsistent with the keypoints needed to assess aPA (degrees and fulcrum). Second, aPA assessment either requires advanced RGB-D sensors or, when based on the processing of RGB images, is most likely sensitive to the adopted camera and to the scene (e.g., sensor–subject distance, lighting, background–subject clothing contrast). This article presents software that augments the human skeleton extrapolated by state-of-the-art HPE software from RGB pictures with exact bone points for posture evaluation through computer-vision post-processing primitives. The article demonstrates the software's robustness and accuracy in processing 76 RGB images, with different resolutions and sensor–subject distances, from 55 PD patients with different degrees of anterior and lateral trunk flexion.
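
At its core, the described post-processing reduces to trigonometry on extracted keypoints. A minimal sketch of how a trunk flexion angle could be derived from two 2D keypoints is given below; the keypoint choice and angle convention are illustrative assumptions, not the authors' exact definitions.

```python
import numpy as np

def trunk_flexion_angle(hip_xy, shoulder_xy):
    """Angle (degrees) between the hip-to-shoulder segment and the vertical axis.

    hip_xy, shoulder_xy: (x, y) pixel coordinates with y increasing downwards
    (the usual image convention). Keypoint choice and angle convention are
    illustrative, not the exact definitions used in the paper.
    """
    trunk = np.asarray(shoulder_xy, float) - np.asarray(hip_xy, float)
    vertical = np.array([0.0, -1.0])            # "up" in image coordinates
    cos_a = trunk @ vertical / (np.linalg.norm(trunk) * np.linalg.norm(vertical))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Example: shoulder displaced forward relative to the hip -> anterior flexion
print(trunk_flexion_angle(hip_xy=(320, 400), shoulder_xy=(360, 250)))  # ~15 degrees
```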

14 pages, 1811 KiB  
Article
Compressed Sensing Data with Performing Audio Signal Reconstruction for the Intelligent Classification of Chronic Respiratory Diseases
by Timothy Albiges, Zoheir Sabeur and Banafshe Arbab-Zavar
Sensors 2023, 23(3), 1439; https://doi.org/10.3390/s23031439 - 28 Jan 2023
Cited by 2 | Viewed by 2106
Abstract
Chronic obstructive pulmonary disease (COPD) involves a serious decline in human lung function and has emerged as one of the most concerning health conditions worldwide over the last two decades, after cancer. The early diagnosis of COPD, particularly of lung function degradation, together with monitoring the condition by physicians and predicting the likelihood of exacerbation events in individual patients, remains an important challenge to overcome. Scalable deployments of data-driven artificial intelligence methods for meeting this challenge in modern COPD healthcare have become critically important. In this study, we established the experimental foundations for acquiring, and indeed generating, biomedical observation data for high-performance signal analysis and machine learning, leading towards the intelligent diagnosis and monitoring of COPD conditions in individual patients. Further, we investigated the multi-resolution analysis and compression of lung audio signals and performed machine classification in two distinct experiments, involving (1) “Healthy” or “COPD” and (2) “Healthy”, “COPD”, or “Pneumonia” classes. Signal reconstruction from the extracted features was also performed to secure the integrity of the original audio recordings. The selected machine learning-based classifiers achieved high performance across diverse metrics, with promising levels of accuracy in classifying Healthy and COPD, as well as Healthy, COPD, and Pneumonia conditions. This work will shortly be extended to new experiments using multi-modal sensing hardware and data-fusion techniques for the development of the next generation of diagnosis systems for COPD healthcare.
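
As a generic illustration of the compressed-sensing idea underlying this work (not the authors' actual audio pipeline), the sketch below recovers a synthetic sparse signal from a reduced set of random linear measurements using orthogonal matching pursuit from scikit-learn.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 512, 128, 10                # signal length, measurements, sparsity

x = np.zeros(n)                        # synthetic k-sparse "signal"
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = Phi @ x                                      # compressed measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(Phi, y)
x_hat = omp.coef_
print("relative reconstruction error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```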

12 pages, 3442 KiB  
Article
Medical Image Classification Based on Semi-Supervised Generative Adversarial Network and Pseudo-Labelling
by Kun Liu, Xiaolin Ning and Sidong Liu
Sensors 2022, 22(24), 9967; https://doi.org/10.3390/s22249967 - 17 Dec 2022
Cited by 1 | Viewed by 3224
Abstract
Deep learning has substantially improved the state-of-the-art in object detection and image classification. Deep learning usually requires large-scale labelled datasets to train the models; however, due to the restrictions in medical data sharing and accessibility and the expensive labelling cost, the application of deep learning in medical image classification has been dramatically hindered. In this study, we propose a novel method that leverages semi-supervised adversarial learning and pseudo-labelling to incorporate the unlabelled images in model learning. We validate the proposed method on two public databases, including ChestX-ray14 for lung disease classification and BreakHis for breast cancer histopathological image diagnosis. The results show that our method achieved highly effective performance with an accuracy of 93.15% while using only 30% of the labelled samples, which is comparable to the state-of-the-art accuracy for chest X-ray classification; it also outperformed the current methods in multi-class breast cancer histopathological image classification with a high accuracy of 96.87%.
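
The pseudo-labelling idea, recycling high-confidence predictions on unlabelled data as training labels, can be sketched as follows with a generic scikit-learn classifier; the adversarial (GAN) component of the paper is omitted and the confidence threshold is an assumed value.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label_training(X_lab, y_lab, X_unlab, threshold=0.95, rounds=3):
    """Iteratively add high-confidence pseudo-labelled samples to the training set."""
    X_train, y_train = X_lab.copy(), y_lab.copy()
    for _ in range(rounds):
        clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        proba = clf.predict_proba(X_unlab)
        conf = proba.max(axis=1)
        pred = clf.classes_[proba.argmax(axis=1)]
        keep = conf >= threshold                      # confident predictions only
        if not keep.any():
            break
        X_train = np.vstack([X_train, X_unlab[keep]])
        y_train = np.concatenate([y_train, pred[keep]])
        X_unlab = X_unlab[~keep]                      # remaining unlabelled pool
    return clf
```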

14 pages, 13677 KiB  
Article
Retinal OCTA Image Segmentation Based on Global Contrastive Learning
by Ziping Ma, Dongxiu Feng, Jingyu Wang and Hu Ma
Sensors 2022, 22(24), 9847; https://doi.org/10.3390/s22249847 - 14 Dec 2022
Cited by 4 | Viewed by 2664
Abstract
The automatic segmentation of retinal vessels is of great significance for the analysis and diagnosis of retina-related diseases. However, the imbalanced data in retinal vascular images remain a great challenge. Current image segmentation methods based on deep learning almost always focus on local information in a single image while ignoring the global information of the entire dataset. To solve the problem of data imbalance in optical coherence tomography angiography (OCTA) datasets, this paper proposes a medical image segmentation method (contrastive OCTA segmentation net, COSNet) based on global contrastive learning. First, the feature extraction module extracts the features of the OCTA image input and maps them to the segmentation head and the multilayer perceptron (MLP) head, respectively. Second, a contrastive learning module saves the pixel queue and pixel embedding of each category in the feature map into the memory bank, generates sample pairs through a mixed sampling strategy to construct a new contrastive loss function, and forces the network to learn local and global information simultaneously. Finally, the segmented image is fine-tuned to restore the positional information of deep vessels. The experimental results show that the proposed method improves the accuracy (ACC), the area under the curve (AUC), and other evaluation indexes of image segmentation compared with existing methods. The method can accomplish segmentation on imbalanced data and extends to other segmentation tasks.
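
A simplified pixel-level InfoNCE contrastive loss, of the kind such global contrastive methods build on, is sketched below; the memory bank, pixel queues, and mixed sampling strategy of COSNet are not reproduced.

```python
import torch
import torch.nn.functional as F

def pixel_info_nce(anchor, positives, negatives, temperature=0.1):
    """Simplified InfoNCE for one anchor pixel embedding.

    anchor: (d,), positives: (p, d), negatives: (n, d); all L2-normalised inside.
    The memory bank and mixed sampling strategy of COSNet are omitted.
    """
    anchor = F.normalize(anchor, dim=0)
    pos = F.normalize(positives, dim=1)
    neg = F.normalize(negatives, dim=1)
    pos_sim = pos @ anchor / temperature        # (p,)
    neg_sim = neg @ anchor / temperature        # (n,)
    # For each positive: -log( exp(pos) / (exp(pos) + sum exp(neg)) )
    logits = torch.cat([pos_sim.unsqueeze(1), neg_sim.expand(len(pos_sim), -1)], dim=1)
    return (torch.logsumexp(logits, dim=1) - pos_sim).mean()

loss = pixel_info_nce(torch.randn(32), torch.randn(5, 32), torch.randn(50, 32))
print(float(loss))
```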

11 pages, 2278 KiB  
Article
Effect of Face Blurring on Human Pose Estimation: Ensuring Subject Privacy for Medical and Occupational Health Applications
by Jindong Jiang, Wafa Skalli, Ali Siadat and Laurent Gajny
Sensors 2022, 22(23), 9376; https://doi.org/10.3390/s22239376 - 1 Dec 2022
Cited by 6 | Viewed by 2501
Abstract
The face blurring of images plays a key role in protecting privacy. However, in computer vision, especially for the human pose estimation task, machine-learning models are currently trained, validated, and tested on original datasets without face blurring. Additionally, the accuracy of human pose estimation is of great importance for kinematic analysis. This analysis is relevant in areas such as occupational safety and clinical gait analysis where privacy is crucial. Therefore, in this study, we explore the impact of face blurring on human pose estimation and the subsequent kinematic analysis. Firstly, we blurred the subjects’ heads in the image dataset. Then we trained our neural networks using the face-blurred and the original unblurred datasets. Subsequently, the performances of the different models, in terms of landmark localization and joint angles, were estimated on blurred and unblurred testing data. Finally, we examined the statistical significance of the effect of face blurring on the kinematic analysis, along with the strength of the effect. Our results reveal that the strength of the effect of face blurring was low and within acceptable limits (<1°). We have thus shown that, for human pose estimation, face blurring guarantees subject privacy while not degrading the prediction performance of a deep learning model.
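
The blurring step itself is straightforward; a minimal sketch using OpenCV is shown below, assuming head bounding boxes are available from the dataset's annotations (the file name and box coordinates are hypothetical).

```python
import cv2

def blur_head(image, box, ksize=(51, 51)):
    """Gaussian-blur a head bounding box in place and return the image.

    box = (x, y, w, h) in pixels; in practice the box would come from the
    dataset's head/face annotations (hypothetical here).
    """
    x, y, w, h = box
    roi = image[y:y + h, x:x + w]
    image[y:y + h, x:x + w] = cv2.GaussianBlur(roi, ksize, 0)
    return image

img = cv2.imread("subject_0001.png")                     # hypothetical file name
img_blurred = blur_head(img, box=(200, 40, 120, 120))    # hypothetical head box
cv2.imwrite("subject_0001_blurred.png", img_blurred)
```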

17 pages, 479 KiB  
Article
Classification Strategies for P300-Based BCI-Spellers Adopting the Row Column Paradigm
by Sofien Gannouni, Kais Belwafi, Nourah Alangari, Hatim AboAlsamh and Abdelfettah Belghith
Sensors 2022, 22(23), 9159; https://doi.org/10.3390/s22239159 - 25 Nov 2022
Cited by 1 | Viewed by 1736
Abstract
Acknowledging the importance of the ability to communicate with other people, the research community has developed a series of BCI-spellers with the goal of restoring communication and interaction capabilities with the environment for people with disabilities. To bridge the digital divide between disabled and non-disabled people, we believe that the development of efficient signal processing algorithms and strategies will go a long way towards achieving novel assistive technologies based on new human–computer interfaces. In this paper, we present various classification strategies that can be adopted by P300 spellers using the row/column paradigm. The presented strategies obtained high accuracy rates compared with similar existing work.
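
For readers unfamiliar with the row/column paradigm, the classic decision rule, averaging the post-stimulus epochs for each row and column code and picking the pair with the strongest P300 response, can be sketched as follows; the plain mean-amplitude score in an assumed P300 window is a placeholder for the trained classifiers the paper's strategies use.

```python
import numpy as np

def rc_target(epochs, stim_codes, fs=256, window=(0.25, 0.45)):
    """Pick the attended row and column in a 6x6 P300 speller matrix.

    epochs: (n_trials, n_samples) single-channel post-stimulus epochs
    stim_codes: (n_trials,) integers 0-5 for rows, 6-11 for columns
    The score is the mean amplitude in an assumed P300 window; real spellers
    would replace this with a trained classifier's output.
    """
    lo, hi = int(window[0] * fs), int(window[1] * fs)
    scores = np.array([epochs[stim_codes == c, lo:hi].mean() for c in range(12)])
    row, col = scores[:6].argmax(), scores[6:].argmax()
    return row, col           # indices into the 6x6 character matrix
```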

21 pages, 2158 KiB  
Article
Deep Learning-Based Computer-Aided Diagnosis (CAD): Applications for Medical Image Datasets
by Yezi Ali Kadhim, Muhammad Umer Khan and Alok Mishra
Sensors 2022, 22(22), 8999; https://doi.org/10.3390/s22228999 - 21 Nov 2022
Cited by 17 | Viewed by 5616
Abstract
Computer-aided diagnosis (CAD) has proved to be an effective and accurate method for diagnostic prediction over the years. This article focuses on the development of an automated CAD system with the intent to perform diagnosis as accurately as possible. Deep learning methods have been able to produce impressive results on medical image datasets. This study employs deep learning methods in conjunction with meta-heuristic algorithms and supervised machine-learning algorithms to perform an accurate diagnosis. Pre-trained convolutional neural networks (CNNs) or auto-encoders are used for feature extraction, whereas feature selection is performed using an ant colony optimization (ACO) algorithm. Ant colony optimization searches for the optimal features while reducing the amount of data. Lastly, diagnosis prediction (classification) is achieved using learnable classifiers. The novel framework for the extraction and selection of features is based on deep learning, auto-encoders, and ACO. The performance of the proposed approach is evaluated using two medical image datasets: chest X-ray (CXR) and magnetic resonance imaging (MRI) for the prediction of the existence of COVID-19 and brain tumors. Accuracy is used as the main measure to compare the performance of the proposed approach with existing state-of-the-art methods. The proposed system achieves an average accuracy of 99.61% and 99.18%, outperforming all other methods in diagnosing the presence of COVID-19 and brain tumors, respectively. Based on the achieved results, it can be claimed that physicians or radiologists can confidently utilize the proposed approach for diagnosing COVID-19 patients and patients with specific brain tumors.
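
In a wrapper approach of this kind, each candidate feature subset proposed by the optimizer is scored by a learnable classifier. A minimal sketch of such a fitness function is given below; the ACO search and pheromone update themselves are omitted, and the features are random stand-ins for CNN/auto-encoder features.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def score_subset(X, y, mask, cv=5):
    """Fitness of one candidate feature subset (boolean mask over columns).

    In an ACO wrapper, each ant proposes such a mask; the cross-validated
    accuracy below would serve as its fitness. The pheromone update itself
    is omitted here.
    """
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(), X[:, mask], y, cv=cv).mean()

# Example with random features as stand-ins for CNN/auto-encoder features
rng = np.random.default_rng(0)
X, y = rng.standard_normal((200, 64)), rng.integers(0, 2, 200)
mask = rng.random(64) < 0.3
print("subset size:", mask.sum(), "CV accuracy:", round(score_subset(X, y, mask), 3))
```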

46 pages, 19717 KiB  
Article
SoftMatch: Comparing Scanpaths Using Combinatorial Spatio-Temporal Sequences with Fractal Curves
by Robert Ahadizad Newport, Carlo Russo, Sidong Liu, Abdulla Al Suman and Antonio Di Ieva
Sensors 2022, 22(19), 7438; https://doi.org/10.3390/s22197438 - 30 Sep 2022
Cited by 2 | Viewed by 2271
Abstract
Recent studies matching eye gaze patterns with those of others contain research that is heavily reliant on string editing methods borrowed from early work in bioinformatics. Previous studies have shown string editing methods to be susceptible to false negative results when matching mutated genes or unordered regions of interest in scanpaths. Even as new methods have emerged for matching amino acids using novel combinatorial techniques, scanpath matching is still limited by a traditional collinear approach. This approach reduces the ability to discriminate between free viewing scanpaths of two people looking at the same stimulus due to the heavy weight placed on linearity. To overcome this limitation, we here introduce a new method called SoftMatch to compare pairs of scanpaths. SoftMatch diverges from traditional scanpath matching in two different ways: firstly, by preserving locality using fractal curves to reduce dimensionality from 2D Cartesian (x,y) coordinates into 1D (h) Hilbert distances, and secondly by taking a combinatorial approach to fixation matching using discrete Fréchet distance measurements between segments of scanpath fixation sequences. These matching “sequences of fixations over time” are a loose acronym for SoftMatch. Results indicate high degrees of statistical and substantive significance when scoring matches between scanpaths made during free-form viewing of unfamiliar stimuli. Applications of this method can be used to better understand bottom-up perceptual processes extending to scanpath outlier detection, expertise analysis, pathological screening, and salience prediction.
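
The discrete Fréchet distance at the heart of the fixation-matching step follows the standard dynamic-programming formulation (Eiter and Mannila), sketched below; the Hilbert-curve mapping of fixations is omitted.

```python
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between two polylines P (n, 2) and Q (m, 2).

    Standard dynamic-programming formulation; in SoftMatch it would be applied
    to segments of Hilbert-mapped fixation sequences (mapping omitted here).
    """
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    n, m = len(P), len(Q)
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)   # pairwise distances
    ca = np.full((n, m), -1.0)
    ca[0, 0] = d[0, 0]
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]), d[i, j])
    return ca[-1, -1]

print(discrete_frechet([(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)]))  # 1.0
```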

10 pages, 2903 KiB  
Article
Simple and Robust Deep Learning Approach for Fast Fluorescence Lifetime Imaging
by Quan Wang, Yahui Li, Dong Xiao, Zhenya Zang, Zi’ao Jiao, Yu Chen and David Day Uei Li
Sensors 2022, 22(19), 7293; https://doi.org/10.3390/s22197293 - 26 Sep 2022
Cited by 3 | Viewed by 2559
Abstract
Fluorescence lifetime imaging (FLIM) is a powerful tool that provides unique quantitative information for biomedical research. In this study, we propose a multi-layer-perceptron-based mixer (MLP-Mixer) deep learning (DL) algorithm named FLIM-MLP-Mixer for fast and robust FLIM analysis. The FLIM-MLP-Mixer has a simple network architecture yet a powerful learning ability from data. Compared with the traditional fitting and previously reported DL methods, the FLIM-MLP-Mixer shows superior performance in terms of accuracy and calculation speed, which has been validated using both synthetic and experimental data. All results indicate that our proposed method is well suited for accurately estimating lifetime parameters from measured fluorescence histograms, and it has great potential in various real-time FLIM applications.
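
A generic MLP-Mixer layer, a token-mixing MLP followed by a channel-mixing MLP, each with layer normalization and a residual connection, is sketched below in PyTorch; the exact token/channel configuration of FLIM-MLP-Mixer is not reproduced, and the input is assumed to be segments of per-pixel fluorescence decay histograms.

```python
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    """One generic MLP-Mixer layer: token-mixing MLP then channel-mixing MLP.

    Input is assumed to be (batch, tokens, channels); sizes are illustrative,
    not the FLIM-MLP-Mixer configuration.
    """
    def __init__(self, tokens, channels, token_hidden=64, channel_hidden=256):
        super().__init__()
        self.norm1 = nn.LayerNorm(channels)
        self.token_mlp = nn.Sequential(
            nn.Linear(tokens, token_hidden), nn.GELU(), nn.Linear(token_hidden, tokens))
        self.norm2 = nn.LayerNorm(channels)
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channel_hidden), nn.GELU(), nn.Linear(channel_hidden, channels))

    def forward(self, x):                       # x: (batch, tokens, channels)
        y = self.norm1(x).transpose(1, 2)       # mix across tokens
        x = x + self.token_mlp(y).transpose(1, 2)
        x = x + self.channel_mlp(self.norm2(x)) # mix across channels
        return x

x = torch.randn(8, 16, 32)                      # 8 histograms, 16 tokens, 32 channels
print(MixerBlock(tokens=16, channels=32)(x).shape)   # torch.Size([8, 16, 32])
```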

21 pages, 2914 KiB  
Article
EEG-Based Person Identification during Escalating Cognitive Load
by Ivana Kralikova, Branko Babusiak and Maros Smondrk
Sensors 2022, 22(19), 7154; https://doi.org/10.3390/s22197154 - 21 Sep 2022
Cited by 2 | Viewed by 2742
Abstract
With the development of human society, reliable person identification and authentication are increasingly important for protecting a person’s material and intellectual property. Person identification based on brain signals has captured substantial attention in recent years. These signals are characterized by patterns distinctive to a specific person and are capable of providing the security and privacy of an individual in biometric identification. This study presents a biometric identification method based on a novel paradigm with escalating cognitive load, from relaxing with eyes closed to the end of a serious game comprising three levels of increasing difficulty. The database used contains EEG data from 21 different subjects. Specific patterns of the EEG signals are recognized in the time domain and classified using a 1D convolutional neural network implemented in the MATLAB environment. Person identification based on individual tasks corresponding to a given degree of load, and on their fusion, is examined by 5-fold cross-validation. Final accuracies of more than 99% and 98% were achieved for individual tasks and task fusion, respectively. The reduction of EEG channels is also investigated. The results imply that this approach is suitable for real applications.
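
A rough PyTorch analogue of a 1D-CNN person identifier of this kind is sketched below (the paper's network is implemented in MATLAB); the channel count, window length, and filter sizes are assumed values.

```python
import torch
import torch.nn as nn

class EEG1DCNN(nn.Module):
    """Rough PyTorch analogue of a 1D-CNN person identifier (the paper uses MATLAB).

    Channel count, window length, and filter sizes are assumed values, not the
    paper's exact configuration; 21 output classes match the 21 subjects.
    """
    def __init__(self, n_channels=16, n_subjects=21):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(1))
        self.classifier = nn.Linear(64, n_subjects)

    def forward(self, x):                 # x: (batch, channels, samples)
        return self.classifier(self.features(x).squeeze(-1))

x = torch.randn(4, 16, 1024)              # 4 windows of 16-channel EEG
print(EEG1DCNN()(x).shape)                 # torch.Size([4, 21])
```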

30 pages, 9022 KiB  
Article
Atrioventricular Synchronization for Detection of Atrial Fibrillation and Flutter in One to Twelve ECG Leads Using a Dense Neural Network Classifier
by Irena Jekova, Ivaylo Christov and Vessela Krasteva
Sensors 2022, 22(16), 6071; https://doi.org/10.3390/s22166071 - 14 Aug 2022
Cited by 16 | Viewed by 2810
Abstract
This study investigates the use of atrioventricular (AV) synchronization as an important diagnostic criterion for atrial fibrillation and flutter (AF) using one to twelve ECG leads. Heart rate, lead-specific AV conduction time, and P-/f-wave amplitude were evaluated via the mean value and standard deviation of three representative ECG metrics, namely the RR-interval (RRi-mean, RRi-std), PQ-interval (PQi-mean, PQi-std), and PQ-amplitude (PQa-mean, PQa-std), in 71,545 standard 12-lead ECG records from the six largest PhysioNet CinC Challenge 2021 databases. Two rhythm classes were considered (AF, non-AF), randomly assigning records into training (70%), validation (20%), and test (10%) datasets. In a grid search of 19, 55, and 83 dense neural network (DenseNet) architectures and five independent training runs, we optimized models for one-lead, six-lead (chest or limb), and twelve-lead input features. Lead-set performance and SHapley Additive exPlanations (SHAP) input feature importance were evaluated on the test set. Optimal DenseNet architectures, denoted by the number of neurons in sequential [1st, 2nd, 3rd] hidden layers, achieved the following sensitivity and specificity: DenseNet [16,16,0] with primary leads (I or II), 87.9–88.3% and 90.5–91.5%; DenseNet [32,32,32] with six limb leads, 90.7% and 94.2%; DenseNet [32,32,4] with six chest leads, 92.1% and 93.2%; and DenseNet [128,8,8] with all 12 leads, 91.8% and 95.8%. Mean SHAP values on the entire test set highlighted the importance of RRi-mean (100%), RRi-std (84%), and atrial synchronization features (40–60%), namely PQa-mean (aVR, I), PQi-std (V2, aVF, II), and PQi-mean (aVL, aVR). Our focus on finding the strongest AV synchronization predictors of AF in 12-lead ECGs should lead to a comprehensive understanding of the decision-making process in advanced neural network classifiers. DenseNet self-learned to rely on a few ECG behavioral characteristics: first, characteristics usually associated with AF conduction such as rapid heart rate, enhanced heart rate variability, and large PQ-interval deviation in V2 and inferior leads (aVF, II); second, characteristics related to a typical P-wave pattern in sinus rhythm, which is best distinguished from AF by the earliest negative P-peak deflection of the right atrium in lead aVR and the late positive left atrial deflection in lateral leads (I, aVL). Our results on lead-selection and feature-selection practices for AF detection should be considered for one- to twelve-lead ECG signal processing settings, particularly those measuring heart rate, AV conduction times, and P-/f-wave amplitudes. Performances are limited to the AF diagnostic potential of these three metrics. SHAP value importance can be used in combination with a human expert’s ECG interpretation to shift the focus from a broad observation of 12-lead ECG morphology to the few AV synchronization findings strongly predictive of AF or non-AF arrhythmias. Our results are representative of AV synchronization findings across a broad taxonomy of cardiac arrhythmias in large 12-lead ECG databases.
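
The six per-lead input features described above are simple statistics of fiducial-point measurements. A minimal sketch of their computation is given below, assuming R-peak and P-wave fiducial points have already been detected upstream (the example values are hypothetical).

```python
import numpy as np

def av_sync_features(r_times, p_times, p_amps):
    """Six per-lead features of the kind used as DenseNet inputs.

    r_times: R-peak times (s); p_times, p_amps: P-/f-wave onset times (s) and
    amplitudes, one per beat. Fiducial-point detection itself is assumed to
    have been performed upstream.
    """
    rr = np.diff(r_times)                  # RR intervals
    pq = r_times[1:] - p_times[1:]         # per-beat PQ (AV conduction) intervals
    pa = p_amps[1:]                        # P-/f-wave amplitudes
    return {
        "RRi-mean": rr.mean(), "RRi-std": rr.std(),
        "PQi-mean": pq.mean(), "PQi-std": pq.std(),
        "PQa-mean": pa.mean(), "PQa-std": pa.std(),
    }

r = np.array([0.0, 0.8, 1.6, 2.4])                    # hypothetical R-peak times
p_t, p_a = r - 0.16, np.array([0.10, 0.12, 0.11, 0.10])
print(av_sync_features(r, p_t, p_a))
```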

18 pages, 1589 KiB  
Article
Combination of Feature Selection and Resampling Methods to Predict Preterm Birth Based on Electrohysterographic Signals from Imbalance Data
by Félix Nieto-del-Amor, Gema Prats-Boluda, Javier Garcia-Casado, Alba Diaz-Martinez, Vicente Jose Diago-Almela, Rogelio Monfort-Ortiz, Dongmei Hao and Yiyao Ye-Lin
Sensors 2022, 22(14), 5098; https://doi.org/10.3390/s22145098 - 7 Jul 2022
Cited by 14 | Viewed by 2374
Abstract
Due to its high sensitivity, electrohysterography (EHG) has emerged as an alternative technique for predicting preterm labor. The main obstacle in designing preterm labor prediction models is the inherent preterm/term imbalance ratio, which can give rise to relatively low performance. Numerous studies obtained promising preterm labor prediction results using the synthetic minority oversampling technique. However, these studies generally overestimate mathematical models’ real generalization capacity by generating synthetic data before splitting the dataset, leaking information between the training and testing partitions and thus reducing the complexity of the classification task. In this work, we analyzed the effect of combining feature selection and resampling methods to overcome the class imbalance problem for predicting preterm labor by EHG. We assessed undersampling, oversampling, and hybrid methods applied to the training and validation dataset during feature selection by genetic algorithm, and analyzed the resampling effect on training data after obtaining the optimized feature subset. The best strategy consisted of undersampling the majority class of the validation dataset to 1:1 during feature selection, without subsequent resampling of the training data, achieving an AUC of 94.5 ± 4.6%, average precision of 84.5 ± 11.7%, maximum F1-score of 79.6 ± 13.8%, and recall of 89.8 ± 12.1%. Our results outperformed the techniques currently used in clinical practice, suggesting that EHG could be used to predict preterm labor in clinical settings.
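
The leakage issue highlighted above comes from resampling before the train/test split. A minimal sketch of leak-free evaluation, undersampling the majority class only inside each training fold, is shown below with generic scikit-learn components; the classifier and metric are placeholders, not the paper's models.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def undersample(X, y, rng):
    """Randomly undersample the majority class to a 1:1 ratio."""
    minority = min(np.bincount(y))
    idx = np.concatenate([rng.choice(np.flatnonzero(y == c), minority, replace=False)
                          for c in np.unique(y)])
    return X[idx], y[idx]

def leak_free_cv_auc(X, y, seed=0):
    """Resample only inside each training fold, never before the split."""
    rng, aucs = np.random.default_rng(seed), []
    for tr, te in StratifiedKFold(5, shuffle=True, random_state=seed).split(X, y):
        X_tr, y_tr = undersample(X[tr], y[tr], rng)      # test fold left untouched
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        aucs.append(roc_auc_score(y[te], clf.predict_proba(X[te])[:, 1]))
    return float(np.mean(aucs))

rng = np.random.default_rng(1)
X, y = rng.standard_normal((300, 10)), (rng.random(300) < 0.15).astype(int)  # imbalanced toy data
print(round(leak_free_cv_auc(X, y), 3))
```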

17 pages, 2681 KiB  
Article
A Principal Neighborhood Aggregation-Based Graph Convolutional Network for Pneumonia Detection
by Akram Ali Ali Guail, Gui Jinsong, Babatounde Moctard Oloulade and Raeed Al-Sabri
Sensors 2022, 22(8), 3049; https://doi.org/10.3390/s22083049 - 15 Apr 2022
Cited by 5 | Viewed by 2103
Abstract
Pneumonia is one of the main causes of child mortality in the world and has been reported by the World Health Organization (WHO) to be the cause of one-third of child deaths in India. Designing an automated classification system to detect pneumonia has become a worthwhile research topic. Numerous deep learning models have attempted to detect pneumonia by applying convolutional neural networks (CNNs) to X-ray radiographs, which are essentially images, and have achieved great performance. However, they fail to capture higher-order feature information of all objects in the X-ray images because the image topology does not always exhibit spatially regular locality properties, which makes defining a spatial kernel filter for X-ray images non-trivial. This paper proposes a principal neighborhood aggregation-based graph convolutional network (PNA-GCN) for pneumonia detection. In PNA-GCN, we propose a new graph-based feature construction that uses transfer learning to extract features and then constructs the graph from images. Then, we propose a graph convolutional network with principal neighborhood aggregation. We integrate multiple aggregation functions with degree-scalers in a single layer to capture more effective information and exploit the underlying properties of the graph structure. The experimental results show that PNA-GCN outperforms state-of-the-art baseline methods in the pneumonia detection task on a real-world dataset.
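
Principal neighborhood aggregation combines several aggregators with degree-dependent scalers. A minimal NumPy sketch for a single node is given below; the learned linear transformations of the full PNA layer are omitted.

```python
import numpy as np

def pna_aggregate(neighbor_feats, avg_degree):
    """Aggregate one node's neighbour features PNA-style.

    neighbor_feats: (deg, d) array. Four aggregators (mean, std, max, min) are
    each modulated by three degree scalers (identity, amplification,
    attenuation); the learned linear layers of the full PNA layer are omitted.
    """
    deg = len(neighbor_feats)
    aggs = [neighbor_feats.mean(0), neighbor_feats.std(0),
            neighbor_feats.max(0), neighbor_feats.min(0)]
    log_ratio = np.log(deg + 1) / np.log(avg_degree + 1)
    scalers = [1.0, log_ratio, 1.0 / log_ratio]          # identity / amplify / attenuate
    return np.concatenate([s * a for s in scalers for a in aggs])   # (12*d,)

rng = np.random.default_rng(0)
print(pna_aggregate(rng.standard_normal((5, 8)), avg_degree=4).shape)  # (96,)
```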

7 pages, 1067 KiB  
Communication
Interictal Spike and Loss of Hippocampal Theta Rhythm Recorded by Deep Brain Electrodes during Epileptogenesis
by Xiaoxuan Fu, Youhua Wang, Abdelkader Nasreddine Belkacem, Yingxin Cao, Hao Cheng, Xiaohu Zhao, Shenghua Chen and Chao Chen
Sensors 2022, 22(3), 1114; https://doi.org/10.3390/s22031114 - 1 Feb 2022
Cited by 2 | Viewed by 2263
Abstract
Epileptogenesis is the gradual dynamic process that progressively leads to epilepsy, passing through the latent stage to the chronic stage. During epileptogenesis, how abnormal discharges cause loss of theta rhythm in the deep brain remains unclear. In this paper, loss of theta rhythm was estimated based on time–frequency power using longitudinal electroencephalography (EEG) recorded by deep brain electrodes (e.g., intracortical microelectrodes such as stereo-EEG electrodes), with monitored epileptic spikes, in a rat from the first region of the hippocampal circuit. Deep-brain EEG was collected from the period between adjacent sporadic interictal spikes (lasting 3.56–35.38 s) to the recovery period without spikes, with video monitoring while the rats performed exploration. We found that loss of theta rhythm became more serious during the period between adjacent interictal spikes than during the recovery period without spikes, and, during epileptogenesis, more loss was observed at the acute stage than at the chronic stage. We concluded that the emergence of the interictal spike was the direct cause of the loss of theta rhythm, and that the inhibitory effect of the interictal spike on ongoing theta rhythm was persistent as well as time-dependent during epileptogenesis. With the help of intracortical microelectrodes, this study provides preliminary evidence that interictal spikes produce ongoing theta rhythm loss, suggesting that interictal spikes correlate with the epileptogenesis process, display a time-dependent feature, and might be a potential biomarker for evaluating deficits in theta-related memory in the brain.
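
Tracking theta-band power over time is the basic measurement behind the reported loss of theta rhythm. A generic sketch with SciPy is given below; the sampling rate and theta band edges are assumed values, not those of the recordings in the paper.

```python
import numpy as np
from scipy.signal import spectrogram

def theta_power(eeg, fs=1000.0, band=(4.0, 12.0)):
    """Time course of theta-band power from a single deep-brain EEG channel.

    fs and the theta band edges are assumed values; a drop in this trace
    around interictal spikes would correspond to the loss of theta rhythm
    described in the paper.
    """
    f, t, Sxx = spectrogram(eeg, fs=fs, nperseg=int(2 * fs), noverlap=int(fs))
    mask = (f >= band[0]) & (f <= band[1])
    return t, Sxx[mask].mean(axis=0)       # mean power in the theta band per window

# Synthetic example: 8 Hz theta that disappears half-way through the recording
fs, dur = 1000.0, 20.0
tt = np.arange(0, dur, 1 / fs)
eeg = np.sin(2 * np.pi * 8 * tt) * (tt < dur / 2) + 0.2 * np.random.randn(len(tt))
t, p = theta_power(eeg, fs)
print(p[: len(p) // 2].mean(), p[len(p) // 2:].mean())   # theta power drops in the second half
```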
