BioMedInformatics, Volume 5, Issue 1 (March 2025) – 7 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF, click the "PDF Full-text" link and open it with the free Adobe Reader.
28 pages, 2569 KiB  
Article
Time–Frequency Transformations for Enhanced Biomedical Signal Classification with Convolutional Neural Networks
by Georgios Lekkas, Eleni Vrochidou and George A. Papakostas
BioMedInformatics 2025, 5(1), 7; https://doi.org/10.3390/biomedinformatics5010007 - 27 Jan 2025
Viewed by 465
Abstract
Background: Transforming one-dimensional (1D) biomedical signals into two-dimensional (2D) images enables the application of convolutional neural networks (CNNs) for classification tasks. In this study, we investigated the effectiveness of different 1D-to-2D transformation methods for classifying electrocardiogram (ECG) and electroencephalogram (EEG) signals. Methods: We selected five transformation methods: Continuous Wavelet Transform (CWT), Fast Fourier Transform (FFT), Short-Time Fourier Transform (STFT), Signal Reshaping (SR), and Recurrence Plots (RPs). We used the MIT-BIH Arrhythmia Database for ECG signals and the Epilepsy EEG Dataset from the University of Bonn for EEG signals. After converting the signals from 1D to 2D using the aforementioned methods, we employed two types of 2D CNNs: a minimal CNN and the LeNet-5 model. Results: RPs, CWT, and STFT achieved the highest accuracy across both CNN architectures. These top-performing methods reached accuracies of 99%, 98%, and 95%, respectively, on the minimal 2D CNN, and 99%, 99%, and 99%, respectively, on the LeNet-5 model for the ECG signals. For the EEG signals, all three methods achieved 100% accuracy on the minimal 2D CNN, and 100%, 99%, and 99%, respectively, on the LeNet-5 model. Conclusions: This superior performance is most likely related to the methods' capacity to capture time–frequency information and the nonlinear dynamics inherent in time-dependent signals such as ECGs and EEGs. These findings underline the significance of choosing appropriate transformation methods, suggesting that incorporating time–frequency analysis and nonlinear feature extraction in the transformation process improves the effectiveness of CNN-based classification for biological data.
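The 1D-to-2D transformations discussed in this abstract can be illustrated in a few lines. Below is a minimal sketch, using a synthetic signal rather than MIT-BIH data, of two of the top-performing methods: an STFT magnitude spectrogram (via `scipy.signal.stft`) and a recurrence plot built from a thresholded pairwise-distance matrix. The threshold (10% of the maximum distance) is an illustrative choice, not necessarily the one used in the paper.

```python
import numpy as np
from scipy.signal import stft

# Synthetic 1D signal standing in for an ECG segment: 1 s at 360 Hz
# (360 Hz matches the MIT-BIH sampling rate; the waveform is illustrative).
fs = 360
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.default_rng(0).standard_normal(fs)

# STFT -> 2D magnitude spectrogram (frequency x time), ready for a 2D CNN
f, seg_t, Zxx = stft(x, fs=fs, nperseg=64)
spectrogram = np.abs(Zxx)

# Recurrence plot: binary image from the thresholded pairwise-distance matrix
dist = np.abs(x[:, None] - x[None, :])            # |x_i - x_j| for all pairs
rp = (dist < 0.1 * dist.max()).astype(np.uint8)   # 2D recurrence image

print(spectrogram.shape, rp.shape)
```

Either 2D array can then be resized and fed to a small CNN such as LeNet-5 exactly as one would feed an image.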

17 pages, 3294 KiB  
Article
Hybrid Neural Network Models to Estimate Vital Signs from Facial Videos
by Yufeng Zheng
BioMedInformatics 2025, 5(1), 6; https://doi.org/10.3390/biomedinformatics5010006 - 22 Jan 2025
Viewed by 471
Abstract
Introduction: Remote health monitoring plays a crucial role in telehealth services and the effective management of patients, and it can be enhanced by vital sign prediction from facial videos. Facial videos are easily captured through various imaging devices such as phone cameras, webcams, or surveillance systems. Methods: This study introduces a hybrid deep learning model aimed at estimating heart rate (HR), blood oxygen saturation level (SpO2), and blood pressure (BP) from facial videos. The hybrid model integrates convolutional neural network (CNN), convolutional long short-term memory (convLSTM), and video vision transformer (ViViT) architectures to ensure comprehensive analysis. Given the temporal variability of HR and BP, emphasis is placed on temporal resolution during feature extraction. The CNN processes video frames one by one, while convLSTM and ViViT handle sequences of frames. These high-resolution temporal features are fused to predict HR, BP, and SpO2, capturing their dynamic variations effectively. Results: The dataset encompasses 891 subjects of diverse races and ages, and preprocessing includes facial detection and data normalization. Experimental results demonstrate high accuracies in predicting HR, SpO2, and BP using the proposed hybrid models. Discussion: Facial images can be easily captured using smartphones, offering an economical and convenient solution for vital sign monitoring that is particularly beneficial for elderly individuals or during outbreaks of contagious diseases such as COVID-19. The proposed models were validated on only one dataset; however, dataset characteristics (size, representation, diversity, balance, and processing) play an important role in any data-driven model, including ours. Conclusions: Through experiments, we observed the hybrid model's efficacy in predicting vital signs such as HR, SpO2, SBP, and DBP, along with demographic variables like sex and age. There is potential for extending the hybrid model to estimate additional vital signs such as body temperature and respiration rate.
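The core signal behind video-based HR estimation can be sketched without any deep learning: the classical remote photoplethysmography (rPPG) idea averages a facial region's green channel per frame and reads the heart rate off the dominant frequency in the cardiac band. This is a minimal stand-in for the paper's hybrid model, with fully synthetic frames and illustrative parameters.

```python
import numpy as np

# Hypothetical frame stack: 300 frames at 30 fps, 8x8 "face ROI" green channel
# modulated at 1.2 Hz (72 bpm) to mimic a pulse (all values illustrative).
fps, n_frames = 30, 300
rng = np.random.default_rng(1)
t = np.arange(n_frames) / fps
pulse = 0.5 * np.sin(2 * np.pi * 1.2 * t)
frames = 100 + pulse[:, None, None] + rng.standard_normal((n_frames, 8, 8))

# Classical rPPG: spatially average the ROI per frame to get a 1D signal ...
signal = frames.mean(axis=(1, 2))
signal = signal - signal.mean()

# ... then take the dominant frequency in the cardiac band (0.7-4 Hz) as HR.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(n_frames, d=1 / fps)
band = (freqs >= 0.7) & (freqs <= 4.0)
hr_bpm = 60 * freqs[band][np.argmax(spectrum[band])]
print(round(hr_bpm))  # dominant frequency ~1.2 Hz -> ~72 bpm
```

The hybrid CNN/convLSTM/ViViT model in the paper learns richer spatiotemporal features, but this frequency-domain baseline shows why temporal resolution matters for HR and BP estimation.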
(This article belongs to the Section Applied Biomedical Data Science)

25 pages, 18134 KiB  
Article
Advancing Emotion Recognition: EEG Analysis and Machine Learning for Biomedical Human–Machine Interaction
by Sara Reis, Luís Pinto-Coelho, Maria Sousa, Mariana Neto and Marta Silva
BioMedInformatics 2025, 5(1), 5; https://doi.org/10.3390/biomedinformatics5010005 - 10 Jan 2025
Viewed by 728
Abstract
Background: Human emotions are subjective psychophysiological processes that play an important role in the daily interactions of human life. Emotions often do not manifest themselves in isolation; people can experience a mixture of them and may not express them in a visible or perceptible way. Methods: This study seeks to uncover EEG patterns linked to emotions, to examine brain activity across emotional states, and to optimise machine learning techniques for accurate emotion classification. For these purposes, the DEAP dataset was used to comprehensively analyse electroencephalogram (EEG) data and understand how emotional patterns can be observed. Machine learning algorithms, such as SVM, MLP, and RF, were implemented to predict valence and arousal classifications for different combinations of frequency bands and brain regions. Results: The analysis reaffirms the value of EEG as a tool for objective emotion detection, demonstrating its potential in both clinical and technological contexts. By highlighting the benefits of using fewer electrodes, this study emphasises the feasibility of creating more accessible and user-friendly emotion recognition systems. Conclusions: Further improvements in feature extraction and model generalisation are necessary for clinical applications. This study highlights the potential of emotion classification not only to develop biomedical applications but also to enhance human–machine interaction systems.
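The band-power-plus-classifier pipeline described here can be sketched compactly. The example below uses synthetic epochs rather than DEAP, with an artificial beta-band boost marking one class; the band definitions are the conventional EEG ranges, and the SVM is one of the three classifiers the abstract names.

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for DEAP-style data: 2 s single-channel epochs at 128 Hz.
# "High-arousal" epochs carry extra beta-band (13-30 Hz) power (illustrative).
fs, n_epochs = 128, 80
rng = np.random.default_rng(2)
t = np.arange(2 * fs) / fs
X_raw, y = [], []
for i in range(n_epochs):
    label = i % 2
    epoch = rng.standard_normal(2 * fs)
    if label:
        epoch += 1.5 * np.sin(2 * np.pi * 20 * t + rng.uniform(0, 2 * np.pi))
    X_raw.append(epoch)
    y.append(label)

# Feature extraction: mean power in the classical EEG bands via Welch PSD.
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
def band_powers(epoch):
    f, psd = welch(epoch, fs=fs, nperseg=fs)
    return [psd[(f >= lo) & (f < hi)].mean() for lo, hi in bands.values()]

X = np.array([band_powers(e) for e in X_raw])
scores = cross_val_score(SVC(), X, y, cv=5)
print(scores.mean())
```

In practice the same feature extraction is repeated per electrode and band combination, which is exactly the search over frequency bands and brain regions the study performs.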

15 pages, 2245 KiB  
Article
Validation of an Upgraded Virtual Reality Platform Designed for Real-Time Dialogical Psychotherapies
by Taylor Simoes-Gomes, Stéphane Potvin, Sabrina Giguère, Mélissa Beaudoin, Kingsada Phraxayavong and Alexandre Dumais
BioMedInformatics 2025, 5(1), 4; https://doi.org/10.3390/biomedinformatics5010004 - 9 Jan 2025
Viewed by 411
Abstract
Background: The advent of virtual reality (VR) in psychiatry presents a wealth of opportunities for a variety of psychopathologies. Avatar interventions are dialogic and experiential treatments integrating personalized medicine with VR, and they have shown promising results by enhancing the emotional regulation of their participants. Notably, Avatar Therapy for the treatment of auditory hallucinations (i.e., voices) allows patients to engage in dialogue with an avatar representing their most persecutory voice. Similarly, Avatar Intervention for cannabis use disorder involves an avatar representing a person significant to the patient's consumption. In both cases, the main goal is to modify the problematic relationship and allow patients to regain control over their symptoms. While results are promising, the approach's potential application to other psychopathologies, such as major depression, is an exciting area for further exploration. In an era where VR interventions are gaining popularity, the present study investigates whether technological advancements could overcome current limitations, such as avatar realism, and foster deeper immersion in virtual environments, thereby enhancing participants' sense of presence within the virtual world. A newly developed VR platform was compared to the current platform used by our research team in past and ongoing studies. Methods: This study involved 43 subjects: 20 healthy subjects and 23 subjects diagnosed with severe mental disorders. Each participant interacted with an avatar using both platforms. After each immersive session, questionnaires were administered by a graduate student in a double-blind manner to evaluate technological advancements and user experiences. Results: The findings indicate that the technological improvements allow the new platform to significantly surpass the current platform on multiple subjective parameters. Notably, the new platform was associated with superior realism of the avatar (d = 0.574; p < 0.001) and the voice (d = 1.035; p < 0.001), as well as enhanced lip synchronization (d = 0.693; p < 0.001). Participants reported a significantly heightened sense of presence (d = 0.520; p = 0.002) and an overall better immersive experience (d = 0.756; p < 0.001) with the new VR platform. These observations held in both healthy subjects and participants with severe mental disorders. Conclusions: The technological improvements generated a heightened sense of presence among participants, thus improving their immersive experience. These two parameters could be associated with the effectiveness of VR interventions, and future studies should evaluate their impact on outcomes.
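The abstract reports its platform comparisons as Cohen's d effect sizes. For readers unfamiliar with the metric, here is the standard pooled-standard-deviation formula for two independent groups; the rating values below are purely illustrative, and since the study used a within-subject design (each participant tried both platforms), a paired variant of d may be what was actually computed.

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d with pooled standard deviation (two independent groups)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Hypothetical presence ratings (new vs. current platform; numbers illustrative)
new = [6, 7, 6, 5, 7, 6, 6, 7]
cur = [5, 6, 5, 5, 6, 5, 4, 6]
print(round(cohens_d(new, cur), 2))  # -> 1.41
```

By the usual rule of thumb, the reported d values between 0.5 and 1.0 correspond to medium-to-large effects.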

11 pages, 333 KiB  
Article
Machine-Learning-Based Biomechanical Feature Analysis for Orthopedic Patient Classification with Disc Hernia and Spondylolisthesis
by Daniel Nasef, Demarcus Nasef, Viola Sawiris, Peter Girgis and Milan Toma
BioMedInformatics 2025, 5(1), 3; https://doi.org/10.3390/biomedinformatics5010003 - 7 Jan 2025
Viewed by 822
Abstract
(1) Background: This study explores various machine learning (ML) algorithms for classifying the state of lumbar intervertebral discs (IVDs) in orthopedic patients. The classification is based on six key biomechanical features of the pelvis and lumbar spine. Although previous research has demonstrated the effectiveness of ML models in diagnosing IVD pathology using imaging modalities, studies using biomechanical features are scarce. (2) Methods: The study utilizes a dataset that encompasses two classification tasks. The first task classifies patients into Normal and Abnormal based on their IVDs (2C). The second task further classifies patients into three groups: Normal, Disc Hernia, and Spondylolisthesis (3C). The performance of various ML models, including decision trees, support vector machines, and neural networks, is evaluated using metrics such as accuracy, AUC, recall, precision, F1, Kappa, and MCC. These models are trained on two open-source datasets using the PyCaret library in Python. (3) Results: The findings suggest that an ensemble of Random Forest and Logistic Regression models performs best for the 2C classification, while the Extra Trees classifier performs best for the 3C classification. The models demonstrate an accuracy of up to 90.83% and a precision of up to 91.86%, highlighting the effectiveness of ML models in diagnosing IVD pathology. Analyzing the weight of different biomechanical features in the models' decision-making provides insights into the biomechanical changes involved in the pathogenesis of lumbar IVD abnormalities. (4) Conclusions: This research contributes to ongoing efforts to leverage data-driven ML models to improve patient outcomes in orthopedic care. The models' effectiveness both for diagnosis and for furthering understanding of lumbar IVD herniation and spondylolisthesis is outlined. The limitations of AI use in clinical settings are discussed, and areas for future improvement toward more accurate and informative models are suggested.
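The best-performing 2C configuration, a Random Forest plus Logistic Regression ensemble, maps directly onto scikit-learn's soft-voting classifier. The sketch below uses synthetic six-feature data as a stand-in for the biomechanical features (pelvic incidence, sacral slope, and so on); the study itself trained its models through PyCaret.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the six pelvic/lumbar biomechanical features;
# labels play the role of the 2C Normal vs. Abnormal task.
X, y = make_classification(n_samples=310, n_features=6, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Soft-voting ensemble of Random Forest and Logistic Regression,
# mirroring the best 2C configuration reported in the abstract.
ens = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft")
ens.fit(X_tr, y_tr)
acc = accuracy_score(y_te, ens.predict(X_te))
print(acc)
```

Soft voting averages the two models' predicted probabilities, which tends to help when one model is nonlinear (the forest) and the other contributes a well-calibrated linear boundary.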

23 pages, 3601 KiB  
Article
A Data-Driven Approach to Revolutionize Children’s Vaccination with the Use of VR and a Novel Vaccination Protocol
by Stavros Antonopoulos, Manolis Wallace and Vassilis Poulopoulos
BioMedInformatics 2025, 5(1), 2; https://doi.org/10.3390/biomedinformatics5010002 - 30 Dec 2024
Viewed by 632
Abstract
Background: This study aims to revolutionize traditional pediatric vaccination protocols by integrating virtual reality (VR) technology. The purpose is to minimize discomfort in children aged 2–12 during vaccinations: the child dons a headset during the procedure and engages with a specially designed VR short story whose scenes correspond to the stages of a typical clinical vaccination process. Methods: A two-phase clinical trial was conducted to evaluate the effectiveness of the VR intervention. The first phase included 242 children vaccinated without VR, serving as a control group, while the second phase involved 97 children who experienced VR during vaccination. Discomfort levels were measured using the VACS (VAccination disComfort Scale) tool. Statistical analyses were performed to compare discomfort levels based on age, phases of vaccination, and overall experience. Results: The findings revealed significant reductions in discomfort among children who experienced VR compared to those in the control group. The VR intervention demonstrated superiority across multiple dimensions, including age stratification and different stages of the vaccination process. Conclusions: The proposed VR framework significantly reduces vaccination-related discomfort in children. Its cost-effectiveness, utilizing standard or low-cost headsets like Cardboard devices, makes it a feasible and innovative solution for pediatric practices. This approach introduces a novel, child-centric enhancement to vaccination protocols, improving the overall experience for young patients.
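The two-phase comparison described above is a standard independent-groups test. As a minimal sketch, the snippet below compares simulated discomfort scores for groups sized like the trial's (242 control vs. 97 VR) with a Mann-Whitney U test, a common choice for ordinal scale data; the score distributions and the test choice are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical VACS discomfort scores (0-10): group sizes mirror the trial
# (242 control vs. 97 VR); the score values are purely illustrative.
rng = np.random.default_rng(3)
control = np.clip(rng.normal(5.0, 2.0, 242), 0, 10)
vr_group = np.clip(rng.normal(3.5, 2.0, 97), 0, 10)

# One-sided test: is discomfort higher in the control group than with VR?
stat, p = mannwhitneyu(control, vr_group, alternative="greater")
print(p < 0.05)
```

The same comparison would be repeated within each age stratum and vaccination stage to reproduce the subgroup analyses the abstract mentions.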
(This article belongs to the Section Clinical Informatics)

13 pages, 1930 KiB  
Article
Explainable Machine Learning-Based Approach to Identify People at Risk of Diabetes Using Physical Activity Monitoring
by Simon Lebech Cichosz, Clara Bender and Ole Hejlesen
BioMedInformatics 2025, 5(1), 1; https://doi.org/10.3390/biomedinformatics5010001 - 24 Dec 2024
Viewed by 614
Abstract
Objective: This study aimed to investigate the utilization of patterns derived from physical activity monitoring (PAM) for the identification of individuals at risk of type 2 diabetes mellitus (T2DM) through an at-home screening approach employing machine learning techniques. Methods: Data from the 2011–2014 National Health and Nutrition Examination Survey (NHANES) were scrutinized, focusing on the PAM component. The primary objective involved the identification of diabetes, characterized by an HbA1c ≥ 6.5% (48 mmol/mol), while the secondary objective included individuals with prediabetes, defined by an HbA1c ≥ 5.7% (39 mmol/mol). Features derived from PAM, along with age, were utilized as inputs for an XGBoost classification model. SHapley Additive exPlanations (SHAP) was employed to enhance the interpretability of the models. Results: The study included 7532 subjects with both PAM and HbA1c data. The model, which solely included PAM features, had a test dataset ROC-AUC of 0.74 (95% CI = 0.72–0.76). When integrating the PAM features with age, the model's ROC-AUC increased to 0.79 (95% CI = 0.78–0.80) in the test dataset. When addressing the secondary target of prediabetes, the XGBoost model exhibited a test dataset ROC-AUC of 0.80 (95% CI = 0.79–0.81). Conclusions: The objective quantification of physical activity through PAM yields valuable information that can be employed in the identification of individuals with undiagnosed diabetes and prediabetes.
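The pipeline here, tabular features into a gradient-boosted classifier evaluated by ROC-AUC with per-feature attributions, can be sketched with scikit-learn alone. The example below uses synthetic data as a stand-in for the NHANES PAM features, `GradientBoostingClassifier` as a stand-in for XGBoost, and permutation importance as a model-agnostic stand-in for SHAP attributions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for PAM-derived features plus age (names illustrative:
# mean activity counts, sedentary minutes, bout counts, ...). The 90/10
# class weighting mimics the low prevalence of undiagnosed diabetes.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

# Gradient boosting classifier; evaluate by ROC-AUC as in the study.
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

# Model-agnostic attribution: which features drive the predictions?
imp = permutation_importance(clf, X_te, y_te, n_repeats=5, random_state=0)
top3 = np.argsort(imp.importances_mean)[::-1][:3]
print(round(auc, 2), top3)
```

ROC-AUC is a sensible headline metric here because the classes are imbalanced; accuracy on a 90/10 split can look high even for a useless model.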
