Article

A Video-Based Technique for Heart Rate and Eye Blinks Rate Estimation: A Potential Solution for Telemonitoring and Remote Healthcare

1 Department of Anatomical, Histological, Forensic and Orthopaedic Sciences, Sapienza University, 00185 Rome, Italy
2 BrainSigns srl, 00185 Rome, Italy
3 Department of Business and Management, LUISS University, 00197 Rome, Italy
4 Department of Molecular Medicine, Sapienza University of Rome, 00185 Rome, Italy
5 IRCCS Fondazione Santa Lucia, 00179 Rome, Italy
6 People Advisory Services Department, Ernst & Young, 00187 Rome, Italy
* Authors to whom correspondence should be addressed.
Sensors 2021, 21(5), 1607; https://doi.org/10.3390/s21051607
Submission received: 25 January 2021 / Revised: 12 February 2021 / Accepted: 20 February 2021 / Published: 25 February 2021
(This article belongs to the Collection Deep Learning in Biomedical Informatics and Healthcare)

Abstract

Current telemedicine and remote healthcare applications envisage different doctor-patient interactions relying on commercial and medical wearable sensors and internet-based video-conferencing platforms. Nevertheless, the existing applications necessarily require contact between the patient and sensors for an objective evaluation of the patient's state. The proposed study explored an innovative video-based solution for monitoring neurophysiological parameters of potential patients and assessing their mental state. In particular, we investigated the possibility of estimating the heart rate (HR) and eye blink rate (EBR) of participants performing laboratory tasks by means of facial video analysis. The objectives of the study were: (i) to assess the effectiveness of the proposed technique in estimating the HR and EBR by comparing them with laboratory sensor-based measures, and (ii) to assess the capability of the video-based technique to discriminate between the participants' resting state (Nominal condition) and their active state (Non-nominal condition). The results demonstrated that the HR and EBR estimated through the facial-video technique and through the laboratory equipment did not statistically differ (p > 0.1), and that these neurophysiological parameters allowed discrimination between the Nominal and Non-nominal states (p < 0.02).

1. Introduction

Nowadays, telemedicine platforms are employed in a wide range of medical and clinical applications, such as diabetes management [1], asthma monitoring [2,3], chronic diseases [4,5] and age-related diseases [6,7]. According to Armaignac and colleagues [8], telemedicine is also applied in critical care [9,10] to overcome the increasing patient demands and shortage of intensivists, issues that may occur in different contexts, first and foremost during the COVID-19 pandemic [11]. Telemedicine can be defined as the use of technological equipment to provide clinical and medical assistance when a physical distance separates patients and providers. Telemedicine also includes managing patients through monitoring devices controlled by physicians and nurses in remote locations [12], internet-based video-conferencing platforms for communicating with patients remotely [13], and asynchronous and synchronous systems for providing clinical care through the use of wearable devices [14,15]. Besides its therapeutic applications, telemedicine is also employed for remote monitoring of patients. The objective of this passive branch of telemedicine is to warn clinicians or doctors when the neurophysiological and physiological data collected from the patients indicate an adverse clinical event [16,17]. Several studies have already demonstrated the effectiveness of telemedicine in improving patients' outcomes [18,19], while other studies have shown its benefits in terms of hospitalization reduction, a crucial aspect especially during a severe pandemic such as the COVID-19 one [11].
However, all the above-mentioned telemedicine and remote healthcare concepts require physical contact between the patient and the sensors, as well as highly qualified personnel to set up the equipment and provide technical assistance to the patients. Internet-based video-conferencing telemedicine platforms do not require physical contact between patients and providers, although they suffer a major limitation due to the lack of sensors for evaluating the neurophysiological parameters of the patients.
The present study explored an innovative approach to telemedicine and telemonitoring that aims at estimating the heart rate (HR) and the eye blink rate (EBR) through the analysis of the patient's face video recorded by means of a webcam. This video-based technique does not require any technical support to perform the measurements, as it does not require physical contact between the user and the sensors. Furthermore, this kind of methodology is less expensive than the technologies currently employed in telemedicine and remote healthcare, i.e., medical and wearable devices. Besides the clinical implications of HR monitoring, previous work demonstrated how this neurophysiological parameter is involved in the assessment of human mental states such as mental workload [20,21,22]. Similarly, the EBR is associated with specific mental states such as visual attention [23]. In fact, it was demonstrated that a decrease in EBR corresponds to greater processing of information [24]. These two aspects indicate the suitability of the HR and EBR parameters for characterizing the patient's mental state in terms of attention and mental workload. Video-based techniques imply the recording of the patient's facial video; consequently, they cannot be applied to patients who are not in front of a video camera. The proposed technique for HR evaluation was already explored in prior works with promising results [22,25,26], and it is based on the modulation of the ambient light reflected from the skin by the absorption spectrum of hemoglobin in the patient's blood [25]. In other words, the analysis is based on the extraction and processing of the Red component of the patient's facial video: the minute color variations on the skin created by blood circulation modulate the Red component of the video signal over time. The remote EBR monitoring by means of facial video analysis was explored in recent works too.
Zhang and colleagues [26] demonstrated the reliability of multi-channel ICA to detect eye blinks from smartphone facial videos, while Tsujikawa in 2018 [27] evaluated the reliability of EBR estimation from 30 frames per second (fps) facial video cameras. In this regard, the first objective of the present study was to investigate the reliability of the video-based technique for simultaneous HR and EBR estimation. These neurophysiological parameters were compared with the corresponding ones computed from the electrocardiographic (ECG) and electrooculographic (EOG) signals gathered through laboratory equipment. Secondly, the experimental protocol was designed to represent the situation in which the patient's state deviates from a resting condition. The deviation from the resting condition (nominal condition) could play a crucial role in several telemedicine applications, such as remote monitoring of sleep apnea [28,29,30] and of cardiovascular diseases during sleep [31], but also in operative applications involving narcoleptic patients [32] and in emotional state discrimination, since several previous works demonstrated how the HR and EBR parameters are involved in emotional state modulation [33,34]. The video-based technique has great potential in this latter application, especially in isolation and health-emergency situations, a relevant risk factor for pathologies such as depression, anxiety and stress [35,36]. Therefore, the present study explored and validated the video-based method in terms of neurophysiological parameter estimation, i.e., the HR and EBR, with respect to conventional sensors. In summary, the present work aimed at addressing the following two experimental questions:
  • Is the considered video-based technique reliable in terms of HR and EBR estimation?
  • Is the considered video-based technique capable of discriminating between a nominal and a non-nominal state of the patient?

2. Materials and Methods

2.1. Participants

Informed consent for study participation, publication of images, and use of the video material was obtained from a group of 15 students, eight males and seven females (30.6 ± 3.7 years old) from the Sapienza University of Rome (Italy) after the explanation of the study. The experiments were conducted following the principles outlined in the Declaration of Helsinki of 1975, as revised in 2000. The study protocol received the favorable opinion of and was approved by the Ethical Committee of the Sapienza University of Rome (protocol n. 2507/2020, approved on 04/08/2020). The study involved only healthy participants, recruited on a voluntary basis. Furthermore, the students were free to decide whether to take part in the experimental protocol, and all of them agreed to participate in the study. Only aggregated information was released, while no individual information was or will be disclosed in any form.

2.2. Experimental Protocol

To simulate the switch between a nominal and a non-nominal state in this experimental protocol, three tasks were designed:
  • The n-Back (NB) task. A well-known computer-based psychological test used to manipulate workload, or more specifically working memory load [37]. Within this task, a sequence of stimuli is presented to the user. The goal is to indicate when the current stimulus matches the stimulus that occurred n steps earlier in the series. The factor n can be adjusted to make the task more difficult or easier. A baseline and three conditions (0-back, 2-back, and 2-back stressful) of this task were tested in the proposed study, each with a different level of difficulty. In all conditions, 21 uppercase letters were used, displayed for 500 ms with an inter-stimulus interval randomized between 500 and 3000 ms; 33% of the displayed letters were targets. During the baseline (1 min duration), the same 21 uppercase letters were presented to the participants with no interaction required.
  • The Doctor Game (DG). The aim of the game was to remove small objects from the board without touching the edges. Here, a baseline and three difficulty levels were tested too.
  • Two interactive web calls (WEB) were performed. Three conditions of such task were performed: (i) Baseline condition, in which the participants looked at the web platform interface without reacting; (ii) Positive condition, in which the test persons were asked to report the happiest memory of their life; (iii) Negative condition, in which the test persons were asked to report the saddest memory of their life.
The participants underwent training phases before performing each task in order to avoid habituation bias. Considering the two main objectives of the present work, the different difficulty levels of the experimental tasks were not considered in the analysis. In particular, the neurophysiological parameters evaluated during the resting state, in which the participant rested in front of the PC screen, were assigned to the Nominal condition, while the neurophysiological parameters evaluated during the remaining experimental conditions, averaged across those conditions within each task, were assigned to the Non-nominal condition.

2.3. Questionnaires

To validate the neurophysiological results, two kinds of questionnaires were used, which were filled in after each experimental condition. The questionnaires were explained at the beginning of the experiment, and the participants were trained to fill them in before starting the experiments. The following questionnaires were selected:
  • Self-Assessment Manikin (SAM), a picture-oriented questionnaire [38] developed to measure the valence/pleasure of the response (from positive to negative), perceived arousal (from high to low levels), and perception of dominance/control (from low to high levels) associated with a person's affective reaction to a wide variety of stimuli. After each experimental condition, the participants were asked to provide three simple judgments along each affective dimension (on a scale from 1 to 9) that best described how they felt during the condition just executed. This questionnaire was selected to obtain a subjective indication of the current state of the participants in terms of pleasure, arousal and control with respect to each experimental condition of the WEB task.
  • NASA Task Load Index (NASA-TLX), consisting of six sub-scales representing independent groups of variables: mental, physical and temporal demands, frustration, effort and performance. The participants were initially asked to rate each of the six dimensions during the task on a scale from "low" to "high" (from 0 to 100). Afterwards, they had to choose the most important factor through pairwise comparisons [39]. The NASA-TLX was selected to subjectively quantify the mental demand perceived by the participants with respect to the experimental conditions of the DG and NB tasks.

2.4. Eye Blinks Signal Recording and Analysis

The EBR information was obtained by estimating the vertical electrooculographic (EOG) activity from a traditional electroencephalography (EEG) channel [40]: the activity was recorded between a gel-based Ag/AgCl electrode placed on the participant's Fpz scalp location (Figure 1) and reference electrodes placed on the earlobes, connected to the BEMicro system (EBNeuro, Firenze, Italy) with a sampling frequency of 256 Hz. Firstly, the signal was band-pass filtered using a 5th-order Butterworth filter within the frequency range of 2–10 Hz. In this way, the recorded signal can be considered an estimate of the vertical EOG. The eye blink detection method was performed in two main steps:
(i) Threshold calculation;
(ii) Pattern matching.
In (i), the Eyes Open condition was used to identify a threshold that, when exceeded, flagged a potential blink. The threshold was calculated as follows, according to the BLINKER algorithm [41]:

Threshold = mean(EOG Eyes Open) + 3 · robustStdDev

where robustStdDev is the mean absolute deviation of the corresponding EOG channel. In (ii), every time the EOG signal exceeded the computed threshold, the Pearson correlation between a common blink template and the EOG signal was computed within each experimental condition. If this value was higher than 0.9, the potential blink was classified as a "real blink". The EBR feature estimated for each participant in each condition was calculated as the mean number of blinks per minute in that condition.
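The two steps above can be sketched as follows. This is a minimal SciPy illustration, not the authors' exact implementation: the candidate-window length, the 0.5 s minimum inter-blink distance, and the use of the mean candidate waveform in place of the "common blink template" are assumptions of this sketch.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_blinks(eog, eog_eyes_open, fs=256, win_s=0.25, r_min=0.9):
    """Two-step blink detection sketched from the described pipeline:
    (i) threshold = mean(EOG Eyes Open) + 3 * robustStdDev, where
        robustStdDev is the mean absolute deviation of that channel;
    (ii) candidates are kept only if their Pearson correlation with a
        blink template exceeds r_min (0.9 in the text). Here the mean
        candidate waveform stands in for the template (an assumption)."""
    # 5th-order Butterworth band-pass, 2-10 Hz, as described above
    b, a = butter(5, [2 / (fs / 2), 10 / (fs / 2)], btype="band")
    sig = filtfilt(b, a, np.asarray(eog, dtype=float))

    # Step (i): threshold from the Eyes Open recording
    robust_std = np.mean(np.abs(eog_eyes_open - np.mean(eog_eyes_open)))
    threshold = np.mean(eog_eyes_open) + 3 * robust_std

    # candidate peaks above threshold, at least 0.5 s apart (assumption)
    cand, _ = find_peaks(sig, height=threshold, distance=int(0.5 * fs))
    half = int(win_s * fs)
    wins = [sig[p - half:p + half] for p in cand
            if p - half >= 0 and p + half <= len(sig)]
    if not wins:
        return 0

    # Step (ii): Pearson correlation of each candidate with the template
    template = np.mean(wins, axis=0)
    return int(sum(np.corrcoef(w, template)[0, 1] > r_min for w in wins))
```

The EBR then follows as the number of accepted blinks divided by the condition duration in minutes.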
Regarding the EBR estimation through the video-based technique, a PC webcam (Microsoft, Albuquerque, New Mexico, USA) was used for facial video recording during the experimental protocol (Figure 1).
The RGB camera was set to a resolution of 640 × 480 pixels at a frame rate of 30 fps. The camera was placed in front of the participant. Subsequently, the recorded video was analyzed offline. The participant's face was automatically identified using a specific Python library named Dlib [42] coupled with an AdaBoost classifier [43]. In particular, this library allowed us to select 68 facial features. Subsequently, the positions of the participant's eyelids were identified frame by frame. The distance between the inferior and superior eyelids was computed for both eyes [44]. This discrete signal was then filtered between 1 and 3 Hz for noise removal, and a threshold was computed as the quadratic mean of the signal along each specific experimental condition [45]. Each event exceeding this threshold was finally classified as an eye blink. Here too, the EBR parameter was computed as the mean number of blinks per minute in each condition. The required processing time for computing one EBR value was 0.174 s. The main steps of the described video signal processing for EBR estimation are presented in Figure 2.
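The eyelid-distance processing just described can be sketched as follows. This is a minimal illustration that assumes the per-frame eyelid distance has already been extracted from the 68 Dlib landmarks; the 1–3 Hz band-pass and the quadratic-mean (RMS) threshold follow the text, while treating blinks as negative deflections and imposing a 1 s minimum distance between detected blinks are assumptions of this sketch.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def ebr_from_eyelid_distance(dist, fps=30):
    """Estimate the eye blink rate (blinks/min) from the per-frame
    inter-eyelid distance: band-pass 1-3 Hz, threshold at the quadratic
    mean (RMS) of the filtered signal, count threshold-exceeding events.
    Blink polarity and the 1 s refractory distance are assumptions."""
    b, a = butter(4, [1 / (fps / 2), 3 / (fps / 2)], btype="band")
    sig = filtfilt(b, a, np.asarray(dist, dtype=float))
    thr = np.sqrt(np.mean(sig ** 2))          # quadratic mean of the signal
    # a blink briefly reduces the eyelid distance -> negative deflection
    peaks, _ = find_peaks(-sig, height=thr, distance=int(1.0 * fps))
    minutes = len(dist) / fps / 60.0
    return len(peaks) / minutes               # blinks per minute (EBR)
```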

2.5. ECG Signal Recording and Analysis

The ECG signal was gathered by means of a gel-based Ag/AgCl electrode fixed on the participant's chest (Figure 1), connected to the BEMicro system and referenced to the potential recorded at both earlobes, with a sampling frequency of 256 Hz. First, the ECG signal was filtered using a 5th-order Butterworth band-pass filter (1–4 Hz) in order to reject the continuous component and high-frequency interferences, such as the one related to the mains power source. At the same time, the purpose of this filtering was to emphasize the QRS complex of the ECG signal [46,47,48]. The following step consisted of raising the ECG signal to the power of 3 to emphasize the heartbeat peaks, as they generally have the highest amplitude, and at the same time to reduce spurious artefact peaks. Finally, we measured the distance between consecutive peaks (i.e., each R peak corresponds to a heartbeat) in order to estimate the heart rate (HR) values every 60 s.
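The ECG pipeline above can be sketched as follows. This is a minimal SciPy illustration under stated assumptions: the relative peak-height criterion and the 0.3 s minimum R-R distance are choices of this sketch, not parameters reported in the text.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def hr_from_ecg(ecg, fs=256):
    """Estimate heart rate (bpm) from a raw ECG trace, following the
    described steps: 5th-order Butterworth band-pass (1-4 Hz) to reject
    the DC component and mains interference, cubing to emphasize the R
    peaks (an odd power preserves their polarity), then HR from the
    inter-peak distances."""
    b, a = butter(5, [1 / (fs / 2), 4 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, np.asarray(ecg, dtype=float))
    emphasized = filtered ** 3
    peaks, _ = find_peaks(emphasized,
                          height=0.4 * np.max(emphasized),  # assumption
                          distance=int(0.3 * fs))           # assumption
    rr = np.diff(peaks) / fs                  # R-R intervals (s)
    return 60.0 / np.mean(rr)                 # heart rate (bpm)
```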
Regarding the HR estimation by means of the video-based technique, the same participants' facial videos were analyzed. As described for the EBR estimation, the 68 visual features required for facial recognition were selected using the Dlib Python library [42] in conjunction with the AdaBoost classifier [43]. This classifier was employed for automatic face detection and was based on the YCbCr color model [49,50], in order to perform the face detection according to the luminance and chrominance variations of the video. First, the Red (R) component was selected and extracted from the raw signal through the application of the fast Fourier transform (FFT) and principal component analysis (PCA). The PCA algorithm, implemented as sklearn.decomposition.PCA in the Scikit-Learn Python library [51], was also applied to remove fluctuations from the R component. The considered signal was gathered from the participant's cheeks in each image frame, referenced to the participant's eyes and nose [52]. Then, the clean R component was detrended to compensate for illumination variations, by means of the method proposed by Tarvainen and colleagues [53] based on the smoothness priors technique, employing a smoothing parameter λ = 10 and a cut-off frequency of 0.060 Hz. Subsequently, a Hamming filter (128-point, 0.6–2.2 Hz) was applied to the detrended R component. Finally, the filtered signal was normalized using the z-score [54] by the formula provided below:
X_i = (Y_i(t) − μ_i(t)) / δ_i
The HR values were computed with a 60 s time resolution for each experimental condition, considering a sliding time window of 100 image frames for each HR value. The processing time for computing one HR value was 0.041 s. The main steps of the described video signal processing for HR estimation are presented in Figure 3.
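The smoothness-priors detrending step can be sketched directly from the formulation by Tarvainen and colleagues, with λ = 10 as reported above. This is a minimal dense-matrix illustration (an efficient implementation would use sparse matrices), and the effective cut-off frequency depends on both λ and the frame rate.

```python
import numpy as np

def smoothness_priors_detrend(z, lam=10.0):
    """Detrend a signal with the smoothness-priors method of Tarvainen
    et al.: z_stationary = (I - (I + lam^2 * D2' D2)^-1) z, where D2 is
    the second-order difference operator. Slow trends, such as gradual
    illumination drifts in the R component, are removed."""
    z = np.asarray(z, dtype=float)
    T = len(z)
    I = np.eye(T)
    D2 = np.diff(I, n=2, axis=0)              # (T-2, T) second differences
    H = np.linalg.inv(I + lam ** 2 * (D2.T @ D2))
    return (I - H) @ z
```

Constant offsets and linear drifts lie in the null space of D2 and are removed exactly, while fast pulsatile components pass through largely unchanged.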

2.6. Statistical Analysis

All the considered neurophysiological parameters, i.e., the EBR and HR, were normalized to obtain comparable distributions for each sensor technology employed in the study. The normalization consisted of subtracting the baselines from the respective values estimated during each experimental condition. The statistical analysis was performed on the normalized parameters. The Shapiro–Wilk test was performed to determine the normality of each distribution involved in the analyses. The Student's t-test was used to compare normal pairs, while the Wilcoxon signed-rank test was performed if normality was not confirmed. For all tests, the statistical significance was set at α = 0.05.
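The test-selection scheme above can be sketched with SciPy as follows; this is a minimal illustration of the described logic, assuming the inputs are the baseline-normalized paired samples.

```python
import numpy as np
from scipy import stats

def compare_paired(a, b, alpha=0.05):
    """Compare two paired, baseline-normalized samples: Shapiro-Wilk
    checks the normality of each distribution; normal pairs go to the
    paired Student's t-test, otherwise the Wilcoxon signed-rank test
    is used. Returns the test name and the p-value."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    _, p_a = stats.shapiro(a)
    _, p_b = stats.shapiro(b)
    if p_a > alpha and p_b > alpha:           # normality not rejected
        _, p = stats.ttest_rel(a, b)
        return "paired t-test", p
    _, p = stats.wilcoxon(a, b)
    return "Wilcoxon signed-rank", p
```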

3. Results

The results related to the DG task will not be reported because almost all the participants leaned too close to the game board while extracting the objects, causing loss of the face-video signal and therefore making it impossible to acquire and consequently analyze their facial videos.

3.1. Methodology Comparison

Regarding the EBR estimation, the paired Wilcoxon signed-rank test performed on the normalized EBR (EBR') evaluated during the NB and WEB tasks did not show any significant difference (Figure 4) between the video-based technique and the laboratory technology (NB: p = 0.7; WEB: p = 0.5). The percentage difference between the EBR estimated through the video-based technique and the laboratory equipment was 4.5% during the NB task and 4.8% during the WEB task.
The same result (Figure 5) was observed for the paired Wilcoxon signed-rank test performed on the normalized HR (HR') estimated during the NB and WEB tasks (NB: p = 0.2; WEB: p = 0.4). The percentage difference between the HR estimated through the video-based technique and the laboratory equipment was 9.3% within the NB task and 3.3% within the WEB task.
Furthermore, to investigate the reliability of the video-based technique with respect to the laboratory technology, a repeated measures correlation (rmcorr) analysis was performed. As reported in Figure 6, the rmcorr analysis [55] performed between the EBR estimated by the laboratory and video-based techniques every 60 s showed a positive (R = 0.73) and significant (p < 10^−27) correlation, demonstrating that the two technologies provided similar EBR estimations. Similarly, Figure 6 shows a positive (R = 0.64) and significant (p < 10^−18) correlation between the HR values estimated every 60 s by means of the laboratory and the video-based techniques.

3.2. Mental States Discrimination

The results of the Wilcoxon signed-rank test performed on the NASA-TLX showed significant (p = 0.0005) differences between the nominal and the non-nominal conditions of the NB task in terms of perceived mental demand (Figure 7). Similarly, the Wilcoxon signed-rank test performed on the SAM questionnaire demonstrated a significant (p = 0.02) increase in the perceived arousal and control between the Nominal and the Non-nominal conditions of the WEB task (Figure 7).
As mentioned in the Introduction, the second objective of the present study consisted of assessing the capability of the video-based technique to discriminate the participants' state while they were in a resting state (nominal) or in an active state (non-nominal). Regarding the NB task, the paired Wilcoxon signed-rank test performed on the normalized EBR and HR estimations provided by the video-based technique (Figure 8) showed a significant difference between the nominal and non-nominal conditions (EBR: p = 0.0002; HR: p = 0.03).
Similarly, the paired Wilcoxon signed-rank test performed on the normalized EBR and HR evaluated by the video-based technique during the WEB task (Figure 9) showed a significant difference between the nominal and non-nominal conditions (EBR: p = 0.0003; HR: p = 0.02).

4. Discussion

The present study aimed at investigating the reliability of an innovative video-based technique in estimating neurophysiological parameters (i.e., EBR and HR) during different activities, to find out whether it could be a potential solution for healthcare telemonitoring of patients. Regarding the NB and WEB tasks, the results demonstrated the reliability of the video-based technique compared with the laboratory technology, generally considered the gold standard in the scientific literature [56]. Moreover, the repeated measures correlation analysis revealed that the video-based technique was able to capture the dynamics of the considered neurophysiological parameters with the same capability exhibited by the laboratory device. More importantly for future applications, the statistical analyses demonstrated the capability of the explored video-based technique to discriminate between the nominal and non-nominal mental states of the participants. In particular, the normalized EBR (EBR') estimated within the NB and WEB tasks significantly decreased during the non-nominal condition, while the normalized HR (HR') significantly increased during the non-nominal condition within both tasks. This evidence is consistent with prior related works. In fact, Aricò and colleagues demonstrated the link between an EBR decrease and a visual attention increase [57], while we already observed, in a previous study, the relationship between an HR increase and a mental workload increase [22]. With respect to the two experimental tasks, subjective measures, i.e., the NASA-TLX and SAM, demonstrated that the nominal and non-nominal conditions actually differed in terms of mental demand, therefore validating the experimental hypothesis at the basis of the presented analysis. This evidence opens the path to applying video-based techniques for healthcare monitoring of patients in remote locations.
In fact, such a technique requires neither physical contact between the patient and the sensor, nor the presence of a doctor or a facilitator for setting up the sensors. In addition, the video-based technique implies very limited costs, since it needs only a commercial webcam, compared to the existing telemedicine platforms, which include commercial and medical wearable devices. Beyond telemedicine and remote healthcare applications, the explored video-based technique could provide a valuable contribution in operative and industrial applications. In this regard, different works [58,59] have already investigated possible algorithms to automatically discriminate between the condition in which the operator is active and learning and the one in which the operator is resting, a crucial aspect for triggering the activation of artificial intelligence (AI) or support-system platforms. Moreover, the presented results demonstrated the sensitivity of the video-based EBR and HR estimations to increases in visual attention and mental demand. Therefore, such a technique could offer relevant performance in operative applications requiring minimal interference between the subjects and the sensors [60], in evaluations of air traffic controllers' (ATCOs) mental workload and attention [61], and in car driver monitoring [62].

Limitations

Despite the promising results, some limitations should be highlighted. The proposed video-based technique implies direct visual contact between the subject and the video recorder, a condition that may not be easy to achieve in specific contexts such as telemedicine, where the patient may not stand in front of a camera for long time periods. In fact, during the execution of the DG task, the posture of almost all the participants did not allow acquisition of the participant's face, and hence neither estimation of the considered neurophysiological parameters nor assessment of the participants' mental states. Therefore, this aspect should be carefully considered when a video-based solution is to be employed. Moreover, the investigated video-based technique is likely sensitive to illumination variations [63], a parameter that is not always controllable. Such a limitation could be solved, or at least mitigated, by using a camera featuring automatic brightness regulation for the facial video recording. Finally, it has to be noted that the video-based technique requires specific sensing and processing times, depending on the sliding time window chosen to perform the measurements among the image frames and on the PC used for the analysis.

5. Conclusions

The proposed study demonstrated the reliability of the innovative video-based technique for computing the EBR and HR neurophysiological parameters. Both parameters evaluated through the video-based technique did not differ by more than 5% from the measurements provided by the laboratory equipment, except for the HR evaluated during the NB task, which differed by 9.3%. In addition, the results revealed its capability to discriminate between the participants' resting state (nominal) and active state (non-nominal). This evidence positively answers the two initial experimental questions, and it paves the way for applying video-based approaches for estimating neurophysiological parameters not only in telemedicine and remote healthcare, where they would provide a valuable monitoring tool for the early detection of adverse clinical events [64], especially in pandemic conditions, but also in the industrial automation field and in future safety-oriented [62] and operative applications [65]. In this regard, further studies will aim at better investigating the sensitivity of the video-based technique in mental workload and attention discrimination, and at determining the optimal application conditions, e.g., the distance between the webcam and the subject's face, in terms of reliability. Moreover, the combination of both HR and EBR for estimating the above-mentioned mental states will be explored, since the fusion of these neurophysiological parameters could lead to a more accurate mental workload and attention evaluation [66,67].

Author Contributions

Conceptualization, G.B. and V.R.; methodology, G.B. and V.R.; software, G.B., A.D.F., D.R., G.D.F., N.S. and V.R.; formal analysis, G.B., V.R., L.T., I.S. and A.G.; investigation, G.B. and V.R.; resources, G.B., P.A., A.G., A.V. and V.R.; data curation, G.B., V.R. and A.G.; writing—original draft preparation, V.R.; writing—review and editing, G.B., P.A., G.D.F., N.S., A.V.; visualization, G.B. and V.R.; supervision, G.B.; funding acquisition, G.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the European Commission through the Horizon 2020 projects "WORKINGAGE: Smart Working environments for all Ages" (GA n. 826232); "SIMUSAFE: Simulator Of Behavioral Aspects For Safer Transport" (GA n. 723386); "SAFEMODE: Strengthening synergies between Aviation and maritime in the area of human Factors towards achieving more Efficient and resilient MODE of transportation" (GA n. 814961); "MINDTOOTH: Wearable device to decode human mind by neurometrics for a new concept of smart interaction with the surrounding environment" (GA n. 950998); and the H2020-SESAR-2019-2 project "ARTIMATION: Transparent artificial intelligence and automation to air traffic management systems" (GA n. 894238); and by "BRAINSAFEDRIVE: A Technology to detect Mental States during Drive for improving the Safety of the road" (Italy-Sweden collaboration), with a grant from the Ministero dell'Istruzione, dell'Università e della Ricerca della Repubblica Italiana.

Institutional Review Board Statement

The study was conducted following the principles outlined in the Declaration of Helsinki of 1975, as revised in 2000. The study protocol received the favorable opinion from and has been approved by the Ethical Committee of the Sapienza University of Rome (protocol n. 2507/2020 approved on the 04/08/2020).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The aggregated data presented in this study might be available on request from the corresponding author. The data are not publicly available because they were collected within the EU Project “WORKINGAGE: Smart Working environments for all Ages” (GA n. 826232) and they are property of the Consortium.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Overview of the experimental setting. Laboratory and video-based equipment was employed to address the objectives of the study. Other acquisition devices were present, although they were not used for the purposes of this study.
Figure 2. Main steps of the video-signal processing for eye blink rate (EBR) estimation. The distance between the eyelids is computed frame by frame; then, filtering and a quadratic-mean threshold are applied to obtain the number of eye blinks from the raw data.
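As a rough illustration of the pipeline described in the Figure 2 caption, the following sketch detects blinks from a per-frame eyelid-distance trace by low-pass filtering it and thresholding at its quadratic mean (RMS). This is not the authors' implementation: the function name, the filter order and the 5 Hz cutoff are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_ebr(eyelid_distance, fps):
    """Estimate the eye blink rate (blinks/min) from a per-frame
    eyelid-distance signal: baseline removal, low-pass filtering,
    then a quadratic-mean (RMS) threshold on the closures."""
    x = np.asarray(eyelid_distance, dtype=float)
    x = x - x.mean()                        # remove the baseline eyelid opening
    b, a = butter(2, 5.0 / (fps / 2), btype="low")
    x = filtfilt(b, a, x)                   # smooth out frame-level jitter
    rms = np.sqrt(np.mean(x ** 2))          # quadratic-mean threshold
    below = x < -rms                        # frames where the eyelids close beyond it
    # count the downward threshold crossings (one per blink)
    blinks = np.count_nonzero(below[1:] & ~below[:-1])
    return blinks * 60.0 * fps / len(x)     # blinks per minute

# synthetic check: a 60 s trace at 30 fps containing five smooth closures
fps = 30
sig = np.full(1800, 10.0)
for c in range(150, 1800, 360):
    for i in range(9):
        sig[c + i] -= 8 * 0.5 * (1 - np.cos(2 * np.pi * i / 9))
print(estimate_ebr(sig, fps))
```

The RMS threshold adapts automatically to the depth of each participant's blinks, which is why no per-subject calibration constant appears in the caption's description.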
Figure 3. Main steps of the video-signal processing for heart rate (HR) estimation. Starting from the bottom left, the facial video is recorded by means of a PC webcam and the regions of interest (ROIs) are selected. The R, G and B components are then processed by means of a Principal Component Analysis (PCA) algorithm. The HR frequency is extracted after detrending, filtering and fast Fourier transformation. Finally, the HR values in the time domain are obtained after z-score normalization.
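A minimal sketch of the HR pipeline summarized in the Figure 3 caption could look as follows: PCA over the z-scored mean R, G, B traces of the ROI, detrending, band-pass filtering around plausible pulse frequencies, and an FFT peak search. Again, this is not the authors' implementation; the function name, filter order and the 0.75–3 Hz pass band (45–180 bpm) are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, detrend, filtfilt

def estimate_hr(rgb_means, fps):
    """Estimate the heart rate (bpm) from the per-frame mean R, G, B values
    of a facial ROI: PCA over the colour channels, detrending, band-pass
    filtering, and a dominant-frequency search on the FFT spectrum."""
    x = np.asarray(rgb_means, dtype=float)           # shape (n_frames, 3)
    x = (x - x.mean(axis=0)) / x.std(axis=0)         # z-score each channel
    # PCA via SVD: the first principal component carries the shared pulse signal
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    pulse = detrend(x @ vt[0])                       # remove slow illumination drift
    b, a = butter(2, [0.75 / (fps / 2), 3.0 / (fps / 2)], btype="band")
    pulse = filtfilt(b, a, pulse)                    # keep the 45-180 bpm band
    spectrum = np.abs(np.fft.rfft(pulse))
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fps)
    return freqs[np.argmax(spectrum)] * 60.0         # dominant frequency in bpm

# synthetic check: 30 s of RGB traces at 30 fps with a 1.2 Hz (72 bpm) pulse,
# slow linear drifts, and a little sensor noise
rng = np.random.default_rng(0)
fps = 30
t = np.arange(900) / fps
pulse = np.sin(2 * np.pi * 1.2 * t)
rgb = np.stack([
    5.0 + 0.02 * t + 0.30 * pulse + 0.05 * rng.standard_normal(900),
    4.0 + 0.01 * t + 0.50 * pulse + 0.05 * rng.standard_normal(900),
    3.0 - 0.01 * t + 0.20 * pulse + 0.05 * rng.standard_normal(900),
], axis=1)
print(estimate_hr(rgb, fps))
```

With a 60 s analysis window, as used in the repeated-measures correlation of Figure 6, the FFT bin spacing is 1/60 Hz, i.e. a 1 bpm resolution on the estimated rate.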
Figure 4. The normalized EBR (EBR′) values estimated through the video-based technique and the laboratory sensor during the n-Back (NB) task (left) and the Webcall (WEB) task (right) did not statistically differ (all p &gt; 0.05).
Figure 5. The normalized HR (HR′) values estimated through the video-based technique and the laboratory sensor during the NB task (left) and the WEB task (right) did not statistically differ (all p &gt; 0.05).
Figure 6. Results of the repeated-measures correlation analysis on the EBR (left) and HR (right) values estimated by the laboratory and video-based techniques every 60 s.
Figure 7. The average NASA-TLX scores during the nominal (blue bar) and non-nominal (red bar) conditions (left), and the average Self-Assessment Manikin (SAM) arousal scores during the nominal and non-nominal conditions (right). * indicates a statistically significant difference between the represented parameters.
Figure 8. The normalized EBR (left) and HR (right) values during the nominal (blue bar) and non-nominal (red bar) conditions of the NB task. * indicates a statistically significant difference between the represented parameters.
Figure 9. The normalized EBR (left) and HR (right) values during the nominal (blue bar) and non-nominal (red bar) conditions of the WEB task. * indicates a statistically significant difference between the represented parameters.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Ronca, V.; Giorgi, A.; Rossi, D.; Di Florio, A.; Di Flumeri, G.; Aricò, P.; Sciaraffa, N.; Vozzi, A.; Tamborra, L.; Simonetti, I.; et al. A Video-Based Technique for Heart Rate and Eye Blinks Rate Estimation: A Potential Solution for Telemonitoring and Remote Healthcare. Sensors 2021, 21, 1607. https://doi.org/10.3390/s21051607

