Wearable Sensors and Artificial Intelligence for Measuring Human Vital Signs: 2nd Edition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Wearables".

Deadline for manuscript submissions: 30 November 2025

Special Issue Editors


Prof. Dr. Eros Pasero
Guest Editor
Department of Electronics and Telecommunications (DET), Politecnico di Torino, 10129 Turin, Italy
Interests: artificial neural networks; smart sensors; wearable medical devices; IoT; artificial intelligence

Dr. Vincenzo Randazzo
Guest Editor
Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino, Italy
Interests: neural networks; artificial intelligence; mobile health; telemedicine; IoT; ECG; topology analysis

Special Issue Information

Dear Colleagues,

Wearable sensors can provide accurate and reliable information about people’s activities and behaviors. In recent years, their use has surged, especially in the medical sciences, where they have many applications in monitoring physiological activity: patients’ body temperature, heart rate, brain activity, muscle motion, and other critical data can all be tracked. Simple sensors that can be worn on the body to perform standard medical monitoring are therefore highly desirable. The extraction of relevant features is the most challenging part of the mobile- and wearable-sensor-based human activity recognition pipeline: feature extraction influences the algorithm’s performance and reduces computation time and complexity. The complexity and variety of body activities make it difficult to recognize them quickly, accurately, and automatically. To address this problem, artificial intelligence is becoming increasingly important. Following the emergence of deep learning and increased computational power, these methods have been adopted for automatic feature learning in several areas, such as health and image classification, and, more recently, for feature extraction and the classification of simple and complex human activities from mobile and wearable sensors. Human activity recognition technology that analyzes data acquired from various types of sensing devices, including vision sensors and embedded sensors, has motivated the development of context-aware applications in emerging domains such as the Internet of Things (IoT) and healthcare.
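As an illustration of the feature-extraction step described above, the following minimal Python sketch slices a 3-axis accelerometer stream into overlapping windows and computes simple per-window statistics before classification. The window length, overlap, and feature set are illustrative assumptions, not a prescribed pipeline.

```python
# Minimal sketch of sliding-window feature extraction from a 3-axis
# accelerometer stream for wearable-sensor HAR; window length, overlap,
# and the feature set are illustrative assumptions.
import numpy as np

def extract_windows(signal: np.ndarray, win: int = 128, step: int = 64) -> np.ndarray:
    """Slice a (n_samples, 3) accelerometer array into overlapping windows."""
    windows = [signal[i:i + win] for i in range(0, len(signal) - win + 1, step)]
    return np.stack(windows)                              # (n_windows, win, 3)

def handcrafted_features(windows: np.ndarray) -> np.ndarray:
    """Per-window, per-axis statistics commonly fed to a classifier."""
    feats = [windows.mean(axis=1),                        # mean of each axis
             windows.std(axis=1),                         # standard deviation
             np.abs(np.diff(windows, axis=1)).mean(axis=1),  # mean absolute jerk
             windows.max(axis=1) - windows.min(axis=1)]   # signal range
    return np.concatenate(feats, axis=1)                  # (n_windows, 12)

if __name__ == "__main__":
    acc = np.random.randn(10_000, 3)                      # stand-in for recorded data
    X = handcrafted_features(extract_windows(acc))
    print(X.shape)                                        # (155, 12)
```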

The objective of this Special Issue is to collect state-of-the-art research contributions, tutorials, and position papers that address the broad challenges that have been faced in the development of wearable-sensor-based solutions in the field of human health. Original papers describing completed and unpublished work that are not currently under review by any other journal, magazine, or conference are solicited.

Prof. Dr. Eros Pasero
Dr. Vincenzo Randazzo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • wearable sensors
  • electronic health
  • telemedicine
  • artificial intelligence
  • machine learning
  • deep neural networks
  • human health
  • vital signs

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (12 papers)


Research


18 pages, 1622 KiB  
Article
A Vision Transformer Model for the Prediction of Fatal Arrhythmic Events in Patients with Brugada Syndrome
by Vincenzo Randazzo, Silvia Caligari, Eros Pasero, Carla Giustetto, Andrea Saglietto, William Bertarello, Amir Averbuch, Mira Marcus-Kalish, Valery Zheludev and Fiorenzo Gaita
Sensors 2025, 25(3), 824; https://doi.org/10.3390/s25030824 - 30 Jan 2025
Abstract
Brugada syndrome (BrS) is an inherited electrical cardiac disorder that is associated with a higher risk of ventricular fibrillation (VF) and sudden cardiac death (SCD) in patients without structural heart disease. The diagnosis is based on the documentation of the typical pattern in the electrocardiogram (ECG), characterized by a J-point elevation of ≥2 mm, coved-type ST-segment elevation, and a negative T wave in one or more right precordial leads, called type 1 Brugada ECG. Risk stratification is particularly difficult in asymptomatic cases. Patients who have experienced documented VF are generally recommended to receive an implantable cardioverter defibrillator to lower the likelihood of sudden death due to recurrent episodes; however, for asymptomatic individuals, the most appropriate course of action remains uncertain. Accurate risk prediction is critical to avoiding premature deaths and unnecessary treatments. Due to the challenges associated with experimental research on human cardiac tissue, alternative techniques such as computational modeling and deep learning-based artificial intelligence (AI) are becoming increasingly important. This study introduces a vision transformer (ViT) model that leverages 12-lead ECG images to predict potentially fatal arrhythmic events in BrS patients. The dataset includes a total of 278 ECGs belonging to 210 patients diagnosed with Brugada syndrome and is split into two classes: event and no event. The event class contains 94 ECGs of patients with documented ventricular tachycardia, ventricular fibrillation, or sudden cardiac death, while the no event class is composed of 184 ECGs used as the control group. First, the ViT is trained on a balanced dataset, achieving satisfactory results (89% accuracy, 94% specificity, 84% sensitivity, and 89% F1-score). Then, the discarded no event ECGs are combined with an additional 30 event ECGs, extracted from a 24 h recording of a single individual, to compose a new test set. Finally, the use of an optimized classification threshold improves the predictions on this unbalanced set (74% accuracy, 95% negative predictive value, and 90% sensitivity), suggesting that the ECG signal can reveal key information for the risk stratification of patients with Brugada syndrome.
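A hedged sketch of the threshold-tuning idea mentioned in the abstract: given validation probabilities from a binary image classifier (here an off-the-shelf ViT from the timm library stands in for the paper's model), pick the cut-off that maximizes F1 before applying it to an unbalanced test set. The data handling, model choice, and selection criterion are assumptions, not the authors' exact recipe.

```python
# Illustrative threshold optimization for a binary ViT classifier on ECG images.
import numpy as np
import timm
import torch
from sklearn.metrics import f1_score

model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=2)
model.eval()

def event_probabilities(images: torch.Tensor) -> np.ndarray:
    """Softmax probability of the 'event' class for a batch of ECG images."""
    with torch.no_grad():
        logits = model(images)                    # (N, 2)
    return torch.softmax(logits, dim=1)[:, 1].numpy()

def best_threshold(p_val: np.ndarray, y_val: np.ndarray) -> float:
    """Sweep candidate cut-offs on validation data; keep the one with the best F1."""
    grid = np.linspace(0.05, 0.95, 91)
    return max(grid, key=lambda t: f1_score(y_val, p_val >= t))

# Usage with hypothetical tensors/labels:
# p_val = event_probabilities(val_images)
# t_star = best_threshold(p_val, val_labels)
# y_test_pred = event_probabilities(test_images) >= t_star
```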

17 pages, 4059 KiB  
Article
A Deep Learning-Based Framework Oriented to Pathological Gait Recognition with Inertial Sensors
by Lucia Palazzo, Vladimiro Suglia, Sabrina Grieco, Domenico Buongiorno, Antonio Brunetti, Leonarda Carnimeo, Federica Amitrano, Armando Coccia, Gaetano Pagano, Giovanni D’Addio and Vitoantonio Bevilacqua
Sensors 2025, 25(1), 260; https://doi.org/10.3390/s25010260 - 5 Jan 2025
Abstract
Abnormal locomotor patterns may occur in the case of either motor damage or neurological conditions, thus potentially jeopardizing an individual’s safety. Pathological gait recognition (PGR) is a research field that aims to discriminate among different walking patterns. A PGR-oriented system may benefit from the simulation of gait disorders by healthy subjects, since the acquisition of actual pathological gaits would require either a longer experimental time or a larger sample size. Only a few works have exploited abnormal walking patterns, emulated by unimpaired individuals, to perform PGR with deep learning-based models. In this article, the authors present a workflow based on convolutional neural networks to recognize normal and pathological locomotor behaviors by means of inertial data related to nineteen healthy subjects. Although this is a preliminary feasibility study, its promising performance in terms of accuracy and computational time paves the way for a more realistic validation on actual pathological data. In light of this, classification outcomes could support clinicians in the early detection of gait disorders and the tracking of rehabilitation advances in real time.
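An illustrative PyTorch sketch of a convolutional classifier over windows of inertial data, in the spirit of the CNN-based workflow described above; the layer sizes, window length, channel count, and number of gait classes are assumptions.

```python
# Hedged sketch: 1D CNN over fixed-length windows of IMU data for gait classification.
import torch
import torch.nn as nn

class GaitCNN(nn.Module):
    def __init__(self, n_channels: int = 6, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, win), e.g. 3-axis accelerometer + 3-axis gyroscope
        return self.classifier(self.features(x).squeeze(-1))

# Usage with a hypothetical batch of 2 s windows sampled at 128 Hz:
# logits = GaitCNN()(torch.randn(8, 6, 256))   # -> (8, 2)
```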

17 pages, 2088 KiB  
Article
Personalized Clustering for Emotion Recognition Improvement
by Laura Gutiérrez-Martín, Celia López-Ongil, Jose M. Lanza-Gutiérrez and Jose A. Miranda Calero
Sensors 2024, 24(24), 8110; https://doi.org/10.3390/s24248110 - 19 Dec 2024
Abstract
Emotion recognition through artificial intelligence and smart sensing of physical and physiological signals (affective computing) is achieving very interesting results in terms of accuracy, inference time, and user-independent models. There are applications related to the safety and well-being of people (sexual assaults, gender-based violence, child and elderly abuse, mental health, etc.) that require even further improvements. Emotion detection should be performed by fast, discreet, and inexpensive systems working in real time and in real life (wearable devices, wireless communications, battery-powered). Furthermore, emotional reactions to violence are not the same in all people, so large general models cannot be applied to a multi-user system for people protection, and health and social workers and law enforcement agents would welcome customized, lightweight AI models. These semi-personalized models are applicable to clusters of subjects sharing similarities in their emotional reactions to external stimuli. This customization requires several steps: creating clusters of subjects with similar behaviors, creating AI models for every cluster, continually updating these models with new data, and enrolling new subjects in clusters when required. An initial approach for clustering the compiled labeled data (physiological data together with emotional labels) is presented in this work, as well as a method to ensure the enrollment of new users with unlabeled data once the AI models have been generated. The complete methodology should be exportable to any other expert system where unlabeled data are added during in-field operation and different profiles exist in the data. Experimental results demonstrate an improvement of 5% in accuracy and 4% in F1 score with respect to our baseline general model, along with a 32% to 58% reduction in variability.
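A minimal sketch of the semi-personalized pipeline outlined above: cluster subjects by a summary of their physiological reactions, keep one classifier per cluster, and enroll a new (unlabeled) user via the nearest centroid. The features, number of clusters, and classifier are illustrative assumptions rather than the authors' configuration.

```python
# Hedged sketch: per-cluster emotion models plus centroid-based enrollment.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

def build_cluster_models(subject_profiles, per_subject_data, n_clusters=3, seed=0):
    """subject_profiles: (n_subjects, d) summaries; per_subject_data: list of (X, y)."""
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit(subject_profiles)
    models = {}
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        X = np.vstack([per_subject_data[i][0] for i in members])
        y = np.concatenate([per_subject_data[i][1] for i in members])
        models[c] = RandomForestClassifier(random_state=seed).fit(X, y)
    return km, models

def enrol_new_user(km, models, new_profile: np.ndarray):
    """Assign an unlabeled user to the closest cluster and return that cluster's model."""
    cluster = int(km.predict(new_profile.reshape(1, -1))[0])
    return models[cluster]
```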

28 pages, 7390 KiB  
Article
Arrhythmia Detection by Data Fusion of ECG Scalograms and Phasograms
by Michele Scarpiniti
Sensors 2024, 24(24), 8043; https://doi.org/10.3390/s24248043 - 17 Dec 2024
Abstract
The automatic detection of arrhythmia is of primary importance due to the huge number of victims caused worldwide by cardiovascular diseases. To this aim, several deep learning approaches have recently been proposed to automatically classify heartbeats into a small number of classes. Most of these approaches use convolutional neural networks (CNNs), exploiting some bi-dimensional representation of the ECG signal, such as spectrograms, scalograms, or similar. However, by adopting such representations, state-of-the-art approaches usually rely on the magnitude information alone, while the important phase information is often neglected. Motivated by these considerations, this paper investigates the effect of fusing the magnitude and phase of the continuous wavelet transform (CWT), known as the scalogram and phasogram, respectively. Scalograms and phasograms are fused in a simple CNN-based architecture using several fusion strategies, which combine the information in the input layer, in some intermediate layers, or in the output layer. Numerical results evaluated on the PhysioNet MIT-BIH Arrhythmia database show the effectiveness of the proposed ideas: although a simple architecture is used, it is highly competitive with other state-of-the-art approaches, obtaining an overall accuracy of about 98.5% and sensitivity and specificity of 98.5% and 95.6%, respectively.
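A hedged sketch of input-level fusion of a scalogram (CWT magnitude) and phasogram (CWT phase) computed with PyWavelets and stacked as a two-channel image for a CNN; the wavelet choice, scale range, and beat length are assumptions.

```python
# Illustrative scalogram/phasogram computation for a single heartbeat segment.
import numpy as np
import pywt

def scalogram_phasogram(beat: np.ndarray,
                        scales=np.arange(1, 65),
                        wavelet: str = "cmor1.5-1.0") -> np.ndarray:
    """Return a (2, n_scales, n_samples) array: |CWT| and angle(CWT)."""
    coeffs, _ = pywt.cwt(beat, scales, wavelet)   # complex CWT coefficients
    return np.stack([np.abs(coeffs), np.angle(coeffs)])

# Usage with a hypothetical single-lead heartbeat segment:
# beat = np.random.randn(360)                     # ~1 s at 360 Hz (MIT-BIH rate)
# x = scalogram_phasogram(beat)                   # feed x[None] to a 2-channel CNN
```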

13 pages, 6856 KiB  
Article
Mind the Step: An Artificial Intelligence-Based Monitoring Platform for Animal Welfare
by Andrea Michielon, Paolo Litta, Francesca Bonelli, Gregorio Don, Stefano Farisè, Diana Giannuzzi, Marco Milanesi, Daniele Pietrucci, Angelica Vezzoli, Alessio Cecchinato, Giovanni Chillemi, Luigi Gallo, Marcello Mele and Cesare Furlanello
Sensors 2024, 24(24), 8042; https://doi.org/10.3390/s24248042 - 17 Dec 2024
Abstract
We present an artificial intelligence (AI)-enhanced monitoring framework designed to assist personnel in evaluating and maintaining animal welfare using a modular architecture. This framework integrates multiple deep learning models to automatically compute metrics relevant to assessing animal well-being. Using deep learning for AI-based vision adapted from industrial applications and human behavioral analysis, the framework includes modules for markerless animal identification and health status assessment (e.g., locomotion score and body condition score). Methods for behavioral analysis are also included to evaluate how nutritional and rearing conditions impact behaviors. These models are initially trained on public datasets and then fine-tuned on original data. We demonstrate the approach through two use cases: a health monitoring system for dairy cattle and a piglet behavior analysis system. The results indicate that scalable deep learning and edge computing solutions can support precision livestock farming by automating welfare assessments and enabling timely, data-driven interventions.

16 pages, 2423 KiB  
Article
Enhancing Autism Detection Through Gaze Analysis Using Eye Tracking Sensors and Data Attribution with Distillation in Deep Neural Networks
by Federica Colonnese, Francesco Di Luzio, Antonello Rosato and Massimo Panella
Sensors 2024, 24(23), 7792; https://doi.org/10.3390/s24237792 - 5 Dec 2024
Abstract
Autism Spectrum Disorder (ASD) is a neurodevelopmental condition characterized by differences in social communication and repetitive behaviors, often associated with atypical visual attention patterns. In this paper, the Gaze-Based Autism Classifier (GBAC) is proposed, which is a Deep Neural Network model that leverages both data distillation and data attribution techniques to enhance ASD classification accuracy and explainability. Using data sampled by eye tracking sensors, the model identifies unique gaze behaviors linked to ASD and applies an explainability technique called TracIn for data attribution by computing self-influence scores to filter out noisy or anomalous training samples. This refinement process significantly improves both accuracy and computational efficiency, achieving a test accuracy of 94.35% while using only 77% of the dataset, showing that the proposed GBAC outperforms the same model trained on the full dataset and random sample reductions, as well as the benchmarks. Additionally, the data attribution analysis provides insights into the most influential training examples, offering a deeper understanding of how gaze patterns correlate with ASD-specific characteristics. These results underscore the potential of integrating explainable artificial intelligence into neurodevelopmental disorder diagnostics, advancing clinical research by providing deeper insights into the visual attention patterns associated with ASD.
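A hedged sketch of TracIn-style self-influence scoring used to filter noisy training samples, as outlined above: sum squared per-sample gradient norms over saved checkpoints, then drop the highest-scoring fraction. The model, loss, checkpoint list, and the 23% removal rate (implied by the 77% dataset retention reported in the abstract) are illustrative assumptions.

```python
# Illustrative TracIn self-influence scoring and sample filtering in PyTorch.
import torch
import torch.nn.functional as F

def self_influence(model, checkpoints, xs, ys, lr: float = 1e-3) -> torch.Tensor:
    """Return one score per sample; higher = more likely noisy or anomalous."""
    scores = torch.zeros(len(xs))
    for state in checkpoints:                       # list of saved state_dicts
        model.load_state_dict(state)
        for i, (x, y) in enumerate(zip(xs, ys)):
            loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
            grads = torch.autograd.grad(
                loss, [p for p in model.parameters() if p.requires_grad])
            scores[i] += lr * sum(float((g ** 2).sum()) for g in grads)
    return scores

def filtered_indices(scores: torch.Tensor, drop_fraction: float = 0.23) -> torch.Tensor:
    """Keep the samples with the lowest self-influence."""
    keep = int(len(scores) * (1 - drop_fraction))
    return torch.argsort(scores)[:keep]
```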

16 pages, 952 KiB  
Article
SiCRNN: A Siamese Approach for Sleep Apnea Identification via Tracheal Microphone Signals
by Davide Lillini, Carlo Aironi, Lucia Migliorelli, Leonardo Gabrielli and Stefano Squartini
Sensors 2024, 24(23), 7782; https://doi.org/10.3390/s24237782 - 5 Dec 2024
Abstract
Sleep apnea syndrome (SAS) affects about 3–7% of the global population, but is often undiagnosed. It involves pauses in breathing during sleep, for at least 10 s, due to partial or total airway blockage. The current gold standard for diagnosing SAS is polysomnography (PSG), an intrusive procedure that depends on subjective assessment by expert clinicians. To address the limitations of PSG, we propose a decision support system, which uses a tracheal microphone for data collection and a deep learning (DL) approach—namely SiCRNN—to detect apnea events during overnight sleep recordings. Our proposed SiCRNN processes Mel spectrograms using a Siamese approach, integrating a convolutional neural network (CNN) backbone and a bidirectional gated recurrent unit (GRU). The final detection of apnea events is performed using an unsupervised clustering algorithm, specifically k-means. Multiple experimental runs were carried out to determine the optimal network configuration and the most suitable type and frequency range for the input data. Tests with data from eight patients showed that our method can achieve a recall score of up to 95% for apnea events. We also compared the proposed approach to a fully convolutional baseline, recently introduced in the literature, highlighting the effectiveness of the Siamese training paradigm in improving the identification of SAS.
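An illustrative sketch of the kind of embedding network described above: a small CNN over Mel-spectrogram patches followed by a bidirectional GRU, with k-means (k = 2) run on the resulting embeddings to separate apnea from non-apnea segments. Layer sizes, Mel settings, and clustering details are assumptions, not the SiCRNN configuration.

```python
# Hedged sketch: CNN + bidirectional GRU embedder with k-means on the embeddings.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class SpectrogramEmbedder(nn.Module):
    def __init__(self, n_mels: int = 64, emb_dim: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.gru = nn.GRU(input_size=32 * (n_mels // 4), hidden_size=emb_dim // 2,
                          batch_first=True, bidirectional=True)

    def forward(self, mel: torch.Tensor) -> torch.Tensor:
        # mel: (batch, 1, n_mels, n_frames)
        z = self.cnn(mel)                           # (batch, 32, n_mels/4, n_frames/4)
        z = z.permute(0, 3, 1, 2).flatten(2)        # (batch, time, features)
        _, h = self.gru(z)                          # h: (2, batch, emb_dim/2)
        return torch.cat([h[0], h[1]], dim=1)       # (batch, emb_dim)

# Usage on hypothetical 10 s tracheal-audio segments:
# emb = SpectrogramEmbedder()(torch.randn(32, 1, 64, 256)).detach().numpy()
# labels = KMeans(n_clusters=2, n_init=10).fit_predict(emb)  # apnea vs. non-apnea
```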

17 pages, 5344 KiB  
Article
The Effects of Competition on Exercise Intensity and the User Experience of Exercise during Virtual Reality Bicycling for Young Adults
by John L. Palmieri and Judith E. Deutsch
Sensors 2024, 24(21), 6873; https://doi.org/10.3390/s24216873 - 26 Oct 2024
Abstract
Background: Regular moderate–vigorous intensity exercise is recommended for adults as it can improve longevity and reduce the health risks associated with a sedentary lifestyle. However, there are barriers to achieving intense exercise that may be addressed using virtual reality (VR) as a tool to promote exercise intensity and adherence, particularly through visual feedback and competition. The purpose of this work is to compare visual feedback and competition within fully immersive VR as means to enhance exercise intensity and the user experience of exercise for young adults, and to describe and compare visual attention during each of the conditions. Methods: Young adults (21–34 years old) bicycled in three 5 min VR conditions (visual feedback, self-competition, and competition against others). Exercise intensity (cycling cadence and % of maximum heart rate) and visual attention (derived from a wearable eye tracking sensor) were measured continuously. User experience was measured by an intrinsic motivation questionnaire, perceived effort, and participant preference. A repeated-measures ANOVA with paired t-test post hoc tests was conducted to detect differences between conditions. Results: Participants exercised at a higher intensity and had higher intrinsic motivation in the two competitive conditions compared to visual feedback. Further, participants preferred the competitive conditions and only reached a vigorous exercise intensity during self-competition. Visual exploration was higher in visual feedback compared to self-competition. Conclusions: For young adults bicycling in VR, competition promoted higher exercise intensity and motivation compared to visual feedback.
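A hedged sketch of the analysis pipeline mentioned in the abstract: a repeated-measures ANOVA across the three VR conditions followed by paired t-tests as post hoc comparisons. The DataFrame layout and column names are assumptions.

```python
# Illustrative repeated-measures ANOVA with paired t-test post hoc comparisons.
import pandas as pd
from itertools import combinations
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM

def compare_conditions(df: pd.DataFrame, depvar: str = "pct_max_hr") -> None:
    """df: long format with columns 'subject', 'condition', and the outcome."""
    anova = AnovaRM(df, depvar=depvar, subject="subject", within=["condition"]).fit()
    print(anova)                                     # omnibus repeated-measures test
    wide = df.pivot(index="subject", columns="condition", values=depvar)
    for a, b in combinations(wide.columns, 2):       # post hoc paired t-tests
        t, p = ttest_rel(wide[a], wide[b])
        print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4f}")
```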

10 pages, 509 KiB  
Communication
Computer Vision-Driven Movement Annotations to Advance fNIRS Pre-Processing Algorithms
by Andrea Bizzego, Alessandro Carollo, Burak Senay, Seraphina Fong, Cesare Furlanello and Gianluca Esposito
Sensors 2024, 24(21), 6821; https://doi.org/10.3390/s24216821 - 24 Oct 2024
Abstract
Functional near-infrared spectroscopy (fNIRS) is beneficial for studying brain activity in naturalistic settings due to its tolerance for movement. However, residual motion artifacts still compromise fNIRS data quality and might lead to spurious results. Although some motion artifact correction algorithms have been proposed in the literature, their development and accurate evaluation have been challenged by the lack of ground truth information. This is because ground truth information is time- and labor-intensive to manually annotate. This work investigates the feasibility and reliability of a deep learning computer vision (CV) approach for automated detection and annotation of head movements from video recordings. Fifteen participants performed controlled head movements across three main rotational axes (head up/down, head left/right, bend left/right) at two speeds (fast and slow), and in different ways (half, complete, repeated movement). Sessions were video recorded and head movement information was obtained using a CV approach. A 1-dimensional UNet model (1D-UNet) that detects head movements from head orientation signals extracted via a pre-trained model (SynergyNet) was implemented. Movements were manually annotated as a ground truth for model evaluation. The model’s performance was evaluated using the Jaccard index. The model showed comparable performance between the training and test sets (J train = 0.954; J test = 0.865). Moreover, it demonstrated good and consistent performance at annotating movement across movement axes and speeds. However, performance varied by movement type, with the best results being obtained for repeated (J test = 0.941), followed by complete (J test = 0.872), and then half movements (J test = 0.826). This study suggests that the proposed CV approach provides accurate ground truth movement information. Future research can rely on this CV approach to evaluate and improve fNIRS motion artifact correction algorithms.
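A minimal sketch of the evaluation metric reported above: the Jaccard index between frame-level binary movement masks (model output vs. manual annotation). The construction of masks from time intervals is an assumption.

```python
# Illustrative Jaccard-index evaluation of movement annotations.
import numpy as np
from sklearn.metrics import jaccard_score

def intervals_to_mask(intervals, n_frames: int) -> np.ndarray:
    """Turn (start, end) frame intervals into a binary per-frame mask."""
    mask = np.zeros(n_frames, dtype=int)
    for start, end in intervals:
        mask[start:end] = 1
    return mask

# Usage with hypothetical annotations for a 1000-frame video:
# y_true = intervals_to_mask([(100, 180), (400, 470)], 1000)
# y_pred = intervals_to_mask([(95, 185), (410, 460)], 1000)
# print(jaccard_score(y_true, y_pred))   # ~0.81 for this example
```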

13 pages, 2020 KiB  
Article
Multilayer Perceptron-Based Wearable Exercise-Related Heart Rate Variability Predicts Anxiety and Depression in College Students
by Xiongfeng Li, Limin Zou and Haojie Li
Sensors 2024, 24(13), 4203; https://doi.org/10.3390/s24134203 - 28 Jun 2024
Cited by 2
Abstract
(1) Background: This study aims to investigate the correlation between heart rate variability (HRV) during exercise and recovery periods and the levels of anxiety and depression among college students. Additionally, the study assesses the accuracy of a multilayer perceptron-based HRV analysis in predicting these emotional states. (2) Methods: A total of 845 healthy college students, aged between 18 and 22, participated in the study. Participants completed self-assessment scales for anxiety and depression (SAS and PHQ-9). HRV data were collected during exercise and for a 5-min period post-exercise. The multilayer perceptron neural network model, which included several branches with identical configurations, was employed for data processing. (3) Results: Through a 5-fold cross-validation approach, the average accuracy of HRV in predicting anxiety levels was 89.3% for no anxiety, 83.6% for mild anxiety, and 74.9% for moderate to severe anxiety. For depression levels, the average accuracy was 90.1% for no depression, 84.2% for mild depression, and 82.1% for moderate to severe depression. The predictive R-squared values for anxiety and depression scores were 0.62 and 0.41, respectively. (4) Conclusions: The study demonstrated that HRV during exercise and recovery in college students can effectively predict levels of anxiety and depression. However, the accuracy of score prediction requires further improvement. HRV related to exercise can serve as a non-invasive biomarker for assessing psychological health.
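A hedged sketch of the evaluation scheme described above: a multilayer perceptron predicting anxiety or depression categories from exercise-related HRV features, scored with 5-fold cross-validation. The feature layout, network size, and preprocessing are assumptions, not the authors' configuration.

```python
# Illustrative MLP + 5-fold cross-validation on HRV features.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def evaluate_hrv_classifier(X: np.ndarray, y: np.ndarray) -> float:
    """X: (n_subjects, n_hrv_features), y: anxiety/depression category (0, 1, 2)."""
    clf = make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
    )
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    return cross_val_score(clf, X, y, cv=cv, scoring="accuracy").mean()
```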

10 pages, 1087 KiB  
Article
Temporal Convolutional Neural Network-Based Prediction of Vascular Health in Elderly Women Using Photoplethysmography-Derived Pulse Wave during Exercise
by Yue Xiao, Guixian Wang and Haojie Li
Sensors 2024, 24(13), 4198; https://doi.org/10.3390/s24134198 - 28 Jun 2024
Abstract
(1) Background: The objective of this study was to predict the vascular health status of elderly women during exercise using pulse wave data and a Temporal Convolutional Neural Network (TCN); (2) Methods: A total of 492 healthy elderly women aged 60–75 years were recruited for the study, which used a cross-sectional design. Vascular endothelial function was assessed non-invasively using flow-mediated dilation (FMD). Pulse wave characteristics were quantified using photoplethysmography (PPG) sensors, and motion-induced noise in the PPG signals was mitigated through the application of a recursive least squares (RLS) adaptive filtering algorithm. A fixed-load cycling exercise protocol was employed. A TCN was constructed to classify FMD into “optimal”, “impaired”, and “at risk” levels; (3) Results: The TCN achieved an average accuracy of 79.3%, 84.8%, and 83.2% in predicting FMD at the “optimal”, “impaired”, and “at risk” levels, respectively. An analysis of variance (ANOVA) comparison demonstrated that the accuracy of the TCN in predicting FMD at the impaired and at-risk levels was significantly higher than that of Long Short-Term Memory (LSTM) networks and Random Forest algorithms; (4) Conclusions: The use of pulse wave data during exercise combined with a TCN to predict the vascular health status of elderly women demonstrated high accuracy, particularly for impaired and at-risk FMD levels. This indicates that the integration of exercise pulse wave data with a TCN can serve as an effective tool for the assessment and monitoring of the vascular health of elderly women.
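An illustrative PyTorch sketch of a temporal convolutional network with dilated causal convolutions for three-class FMD prediction from PPG pulse waves; the depth, channel widths, and input length are assumptions rather than the paper's architecture.

```python
# Hedged sketch: dilated causal 1D convolutions (TCN-style) over PPG segments.
import torch
import torch.nn as nn

class TCNBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, dilation: int, k: int = 3):
        super().__init__()
        self.pad = (k - 1) * dilation               # left-pad to keep causality
        self.conv = nn.Conv1d(in_ch, out_ch, k, dilation=dilation)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.conv(nn.functional.pad(x, (self.pad, 0))))

class PulseWaveTCN(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.blocks = nn.Sequential(
            TCNBlock(1, 32, dilation=1),
            TCNBlock(32, 32, dilation=2),
            TCNBlock(32, 64, dilation=4),
            TCNBlock(64, 64, dilation=8),
        )
        self.head = nn.Linear(64, n_classes)        # "optimal" / "impaired" / "at risk"

    def forward(self, ppg: torch.Tensor) -> torch.Tensor:
        # ppg: (batch, 1, n_samples), a filtered exercise pulse-wave segment
        return self.head(self.blocks(ppg).mean(dim=-1))

# logits = PulseWaveTCN()(torch.randn(4, 1, 1000))   # -> (4, 3)
```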

Other


19 pages, 429 KiB  
Systematic Review
Non-Invasive Continuous Glucose Monitoring in Patients Without Diabetes: Use in Cardiovascular Prevention—A Systematic Review
by Filip Wilczek, Jan Gerrit van der Stouwe, Gloria Petrasch and David Niederseer
Sensors 2025, 25(1), 187; https://doi.org/10.3390/s25010187 - 1 Jan 2025
Abstract
Continuous glucose monitoring (CGM) might provide immediate feedback regarding lifestyle choices such as diet and physical activity (PA). The impact of dietary habits and physical activity can be demonstrated in real time by providing continuous data on glucose levels, enhancing patient engagement and adherence to lifestyle modifications. Originally developed for diabetic patients, CGM has recently been extended to non-diabetic populations to improve cardiovascular health. However, since data in this population are scarce, the effect on cardiovascular outcomes is unclear. CGM may offer potential benefits for cardiovascular prevention in healthy individuals without diabetes. The aim of this systematic review is to evaluate the use of CGM in healthy non-diabetic individuals, focusing on its potential to guide lifestyle interventions in the context of cardiovascular prevention, which may ultimately reduce cardiovascular risk.
