Article

Respiration Monitoring for Premature Neonates in NICU

1 Department of Electrical Engineering, Eindhoven University of Technology, 5612 WH Eindhoven, The Netherlands
2 Philips Research, High Tech Campus 34, 5656 AE Eindhoven, The Netherlands
* Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(23), 5246; https://doi.org/10.3390/app9235246
Submission received: 18 October 2019 / Revised: 25 November 2019 / Accepted: 28 November 2019 / Published: 2 December 2019
(This article belongs to the Section Applied Physics General)

Abstract

In this paper, we investigate an automated pipeline to estimate respiration signals from videos of premature infants in neonatal intensive care units (NICUs). Two flow estimation methods, namely conventional optical flow- and deep learning-based flow estimation, were employed and compared to estimate pixel motion vectors between adjacent video frames. The respiratory signal is further extracted via motion factorization. The proposed methods were evaluated by comparing, on videos of five premature infants, our automatically extracted respiration signals to those extracted from chest impedance. The overall average cross-correlation coefficients are 0.70 for the optical flow-based method and 0.74 for the deep flow-based method. The average root mean-squared errors are 6.10 and 4.55 for the optical flow- and the deep flow-based methods, respectively. The experimental results are promising for further investigation and clinical application of the video-based respiration monitoring method for infants in NICUs.

1. Introduction

Vital signs, such as heart rate, blood pressure, respiratory rate, and body temperature, are physical parameters that can be measured and used to assess physiological state and functioning. Monitoring of vital parameters is a crucial topic in neonatal daily care. Premature infants have an immature respiratory control that predisposes them to apnea/periodic breathing, haemoglobin oxygen desaturation, and bradycardia [1,2]. Apnea is defined as the status of cessation of respiratory airflow, whereas periodic breathing is characterized by groups of respiratory movements interrupted by small intervals of apnea [3]. Continuous monitoring of respiration for premature infants is critical to detect abnormalities in breathing and help develop early treatments to prevent significant hypoxia and central depression from apnea. A long-term continuous monitoring approach of respiration may also be used to assess sleep stage, which varies with different clinical conditions [4,5].
The existing methods for monitoring respiration include nasal thermocouples, spirometers, transthoracic inductance, respiratory effort belt transducers, piezoelectric transducers, optical sensors (pulse oximetry), strain gauges, impedance plethysmography, and electrocardiography (ECG). Currently, the respiration of premature infants in a neonatal intensive care unit (NICU) is monitored by bedside monitors. ECG is considered the standard reference measurement for respiration, since it provides stable and robust monitoring in a NICU. However, the pressure of a contact sensor may change the local skin perfusion, so the measurement does not reflect the true value that a non-contact sensor would observe. The electrodes may exert pressure on the skin, leading to tissue compression and vascular insufficiency. As a consequence, applying ECG electrodes to infant skin for a long time increases the risk of trauma and infections. Removing the adhesives from the immature skin of preterm infants, as part of regular care, can damage the skin as well as cause stress and pain [6,7]. This technique is also impractical for home care, since the sensors have to be placed by skilled caregivers, and wearing the sensors causes inconvenience in everyday life [8,9].
A non-contact monitoring system is a good alternative that improves infant comfort and safety, and it also has potential for monitoring at home. Few works to date have investigated video-based contactless methods for monitoring respiration. In this study, we developed and evaluated a contactless respiration measurement method based on video monitoring. The contributions of this work are twofold: (1) we propose an approach for non-contact respiration monitoring; and (2) we validate the method on clinical data acquired in a NICU.

2. Related Work

For adult applications, Deng et al. [10] presented the design and implementation of a novel sleep monitoring system that simultaneously analyzes respiration, head posture, and body posture. For respiration, the region of breathing movement was automatically determined and its intensity estimated, yielding a waveform indicating respiratory rhythms with an accuracy of 96% in recognizing abnormal breathing; however, the accuracy of the respiration rate was not reported. De Chazal et al. [11] applied a contactless Doppler radar biomotion sensor for respiration monitoring. Respiratory frequency and magnitude were used to classify sleep/wake states, achieving an accuracy of 69% for the wake state and 88% for the sleep state. Gupta et al. [12] accurately estimated heart rate (HR) from face videos of adults acquired with a low-cost camera. The face video, consisting of frontal, profile, or multiple faces, was divided into multiple overlapping fragments to determine HR estimates, which were then fused using quality-based fusion to minimize the effects of illumination and face deformations. Prathosh et al. [13] proposed a general framework for estimating a periodic signal, which was applied to derive a computationally inexpensive method for estimating respiratory patterns using two-dimensional cameras that does not critically depend on the region of interest. Specifically, the patterns were estimated by imaging changes in the reflected light caused by respiration-induced motion. Estimation of the pattern was cast as a blind deconvolution problem and solved through a method comprising subspace projection and statistical aggregation.
For infant applications, Werth et al. reviewed unobtrusive measurements for indicating sleep state in preterm infants [14]. Abbas et al. [15] attempted to detect the respiration rate of neonates in real time using infrared thermography, analyzing the anterior naris (nostril) temperature profile associated with the inspiration and expiration phases. However, the region of interest (ROI) was assumed to be fixed after initialization. Moreover, the method is not practical for NICU infants, since the faces of premature infants in a NICU are often occluded by feeding tubes and/or breathing masks. In practice, the temperature inside an incubator is continuously monitored and adjusted by caregivers to help infants maintain their body temperature in a normal range, and this controlled environmental temperature might affect the accuracy of the thermography-based respiration monitoring method. Koolen et al. [16] extracted the respiration rate from video data included in polysomnography. They used Eulerian video magnification (EVM) to amplify the respiration movements, followed by optical flow to estimate the respiration motion and thereby obtain a respiration signal. Independent component analysis and principal component analysis were applied to improve signal quality; the results showed a detection accuracy of 94.12% for sleeping-stage patients. Antognoli et al. [17] applied a digital webcam (WeC) and an EVM algorithm to measure HR and respiration rate (RR). The accumulated RGB values of a manually selected ROI were calculated as a single signal, from which the power spectral density was estimated and used for peak extraction. The evaluation, based on data of seven patients, yielded a root mean-squared error (RMSE) of 12.2 for the HR and 7.6 for the RR. However, the limitation of pulse-based respiration extraction is that it captures the respiratory modulation of blood volume changes, which is both subject dependent (different respiratory efforts) and measurement-location dependent, since the modulation effect varies across body parts. Moreover, the peak selection from spectrograms was based on common knowledge of normal HR and RR frequency ranges, which is not suitable for infants under clinical conditions with different diseases.
Methods for remote sensing of respiration and other contactless physiological measures have thus been developed for both adults and infants, whereas our application focuses on premature infants in the NICU.

3. Methods

3.1. Material

Our study was conducted with videos recorded at the Máxima Medical Center in Veldhoven, The Netherlands, by a fixed-position high-definition camera (IDS uEye monochrome) filming the infant’s entire body, oriented from the feet toward the head. Figure 1 shows an example of a captured video frame. We chose the recording position of the camera by considering: (1) little or no interruption to daily routine care; and (2) a good viewpoint for observing vertical movement of the infant chest, where the respiratory motion energy is assumed to be maximal in the vertical direction. For all infant recordings, written consent was obtained from the parents. The resolution of each video frame is 736 × 480 pixels, and the frame rate is 8 fps. The videos were recorded under uncontrolled, regular hospital lighting conditions. Five infants with an average gestational age of 29.6 ± 2.8 weeks (range 27+0–33+6 weeks), an average postnatal age of 1.2 ± 0.6 weeks (range 0+4–2+1 weeks), and an average weight of 1555 ± 682.4 g (range 755–2410 g) were filmed. In parallel with the video capturing, standard chest impedance (CI) signals were recorded as the reference standard.

3.2. Motion Matrix Calculation

The flowchart of the proposed system is shown in Figure 2, where motion matrix estimation is an essential step in the pipeline. We estimate the pattern of apparent motion, including the respiration of infants, in videos. Motion matrix estimation can be defined as estimating the distribution of apparent velocities of movement in successive images.
Two flow estimation methods were employed to estimate pixel motion vectors between adjacent video frames. First, the conventional optical flow method [18] was utilized and evaluated. However, optical flow is most sensitive to high-gradient texture, whereas in our case the infant chest is either bare or covered by a blanket; in both cases, the chest area, which mostly carries the respiratory motion, lacks gradient texture. Deep flow [19] is sensitive to entire moving objects with less reliance on texture information. Therefore, better performance is expected from a flow estimation method based on deep learning.
For both methods, the derived motion vectors contain respiration information, induced by the motion of the abdominal and chest walls. We only considered the vertical motion vector, since the respiration-related motion is mainly in the vertical direction in the captured videos. For infants under one year of age, the American Academy of Pediatrics (AAP) also recommends that they be placed on their backs every time they are laid down to sleep, which lowers the risk of sudden infant death syndrome (SIDS).

3.2.1. Conventional Optical Flow

The optical flow should satisfy Equation (1):
$I_x V_x + I_y V_y + I_t = 0$, (1)
where $I$ is the intensity matrix of a video frame, $I_x$, $I_y$, and $I_t$ are its partial derivatives along the horizontal, vertical, and temporal dimensions, and $V_x$ and $V_y$ are the optical flow components. We deployed the classic dense optical flow algorithm proposed by Barron et al. [18], where second-order differential equations based on the Hessian matrix are used to constrain the two-dimensional (2D) velocity. The Barron method creates flow fields with 100% density. However, the conventional optical flow method is limited in estimating motion in poorly textured areas, which lack gradient variation.
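To make this step concrete, the following is a minimal sketch of dense optical flow between two grayscale frames. OpenCV does not ship the exact Barron et al. estimator used here, so the Farneback dense flow serves as a stand-in; the parameter values are illustrative, not from the paper.

```python
# Sketch of dense optical flow between two grayscale frames; the Farneback
# algorithm stands in for the Barron et al. estimator used in the paper.
import cv2
import numpy as np

def vertical_flow(prev_gray: np.ndarray, next_gray: np.ndarray) -> np.ndarray:
    """Return the vertical component V_y of the dense flow field."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    return flow[..., 1]  # channel 0 is V_x, channel 1 is V_y
```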

3.2.2. Deep Flow

The conventional optical flow approach above only computes image matching at a single scale. However, for complicated situations with complex motions, the traditional approach may not effectively capture the interaction or dependency relationships. Brox and Malik [20] proposed adding a descriptor matching term to the variational approach, which allows better handling of large displacements. The matching provides guidance using correspondences from sparse descriptor matching.
In our study, we applied the method from Weinzaepfel et al. [19], which incorporates a descriptor matching algorithm based on a deep six-layer architecture, interleaving convolutions and max-pooling. In this framework, dense sampling is applied to efficiently retrieve quasi-dense correspondences, while incorporating a smoothing effect on the descriptor matches. Figure 3 shows the framework for deep flow estimation.
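As a hedged sketch of this step: the contrib build of OpenCV (opencv-contrib-python) exposes a DeepFlow implementation that can stand in for the original authors' code; inputs are assumed to be 8-bit grayscale frames, which matches the monochrome recordings.

```python
# Sketch using OpenCV's contrib DeepFlow as an approximation of this step;
# not the original implementation from Weinzaepfel et al.
import cv2

deep_flow = cv2.optflow.createOptFlow_DeepFlow()

def deep_vertical_flow(prev_gray, next_gray):
    """Dense DeepFlow between adjacent frames; return the vertical component."""
    flow = deep_flow.calc(prev_gray, next_gray, None)
    return flow[..., 1]
```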

3.3. Respiratory Description

The results of flow calculation were captured in the rows of the derived motion matrix M (of size N × W , where N denotes the number of pixels in a video frame and W is the total number of frames in a video), which contain the motion derivatives that represent the velocity magnitudes of the pixel trajectories in the vertical direction.
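As an illustration of this data structure, the sketch below assembles M from per-frame vertical flow fields; the helper name and the flow-function parameter are our assumptions, not the authors' code.

```python
# Illustrative sketch of assembling the motion matrix M described above:
# one column of flattened vertical pixel velocities per pair of adjacent
# frames, for either flow estimator.
import numpy as np

def build_motion_matrix(frames, flow_fn):
    """frames: sequence of grayscale frames; flow_fn(prev, next) -> (H, W, 2) flow.
    Returns M with one flattened vertical-flow column per frame transition."""
    columns = [flow_fn(p, n)[..., 1].ravel()   # keep only the vertical component
               for p, n in zip(frames[:-1], frames[1:])]
    return np.stack(columns, axis=1)           # shape (N, number of transitions)
```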
The spatial statistics of the flow matrix were further analyzed by applying robust principal component analysis (PCA) [21]. PCA comprises the eigendecomposition of a data covariance matrix or the singular value decomposition of a data matrix. The decomposition projects the original data onto an orthogonal subspace, where the directions are mutually de-correlated and the most informative content is concentrated in the first few principal components. In our study, instead of considering the originally obtained matrix, the first eigenvalue of the covariance matrix of the flows was analyzed, since the first principal component represents the major motion component. In addition, this is beneficial for reducing residual motion noise that has lower energy than the respiratory motion in each video.
The motion matrix M is masked by sub-spatiotemporal regions, m i , where each m i is a local mask that stores W consecutive squared blocks. For each m i , we can generate the eigenvectors that satisfy the following general condition, specified by
$m_i \cdot D_i = \lambda_i \cdot D_i \quad \mathrm{s.t.} \quad \det(m_i - \lambda_i \cdot I) = 0$,
where $\det(\cdot)$ denotes the matrix determinant, $I$ represents the identity matrix, and $D_i$ and $\lambda_i$ are the eigenvectors and eigenvalues, respectively.
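A minimal sketch of this per-block analysis follows, using plain (non-robust) PCA with NumPy for simplicity; the block layout and function name are assumptions, with variable names mirroring the notation above.

```python
# Sketch of extracting the dominant motion component of one local mask m_i
# via eigendecomposition of its pixel covariance (plain PCA as a stand-in
# for the robust PCA cited in the paper).
import numpy as np

def first_component_signal(block: np.ndarray) -> np.ndarray:
    """block: (n_pixels, W) vertical-flow values of one local mask m_i.
    Returns the W-sample time course of the dominant motion component."""
    centered = block - block.mean(axis=1, keepdims=True)
    cov = centered @ centered.T / (block.shape[1] - 1)  # pixel covariance
    eigvals, eigvecs = np.linalg.eigh(cov)              # ascending eigenvalues
    d_1 = eigvecs[:, -1]                                # first eigenvector D_i
    return d_1 @ centered                               # projection onto D_i
```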

3.4. Evaluation

In this study, we employed a sliding window of 120 s with a step size of 1 s to estimate the respiration rate. The respiration rate was calculated by first averaging the time intervals between breathing peaks, and then converting this average interval to a frequency value, expressed in breaths per minute (bpm).
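A sketch of this rate computation under the stated settings (8 fps video, 120 s window, 1 s step); the minimum peak distance of 0.5 s is an illustrative choice, not a value from the paper.

```python
# Sketch: average the intervals between detected breathing peaks within
# each 120 s window and convert to breaths per minute.
import numpy as np
from scipy.signal import find_peaks

FS = 8  # video frame rate (Hz)

def respiration_rate_bpm(window: np.ndarray, fs: int = FS) -> float:
    peaks, _ = find_peaks(window, distance=int(0.5 * fs))
    if len(peaks) < 2:
        return float("nan")                    # too few breaths detected
    mean_interval_s = np.mean(np.diff(peaks)) / fs
    return 60.0 / mean_interval_s              # breaths per minute

def sliding_rates(signal: np.ndarray, fs: int = FS, win_s: int = 120,
                  step_s: int = 1) -> np.ndarray:
    win, step = win_s * fs, step_s * fs
    return np.array([respiration_rate_bpm(signal[i:i + win], fs)
                     for i in range(0, len(signal) - win + 1, step)])
```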
It is well known that the respiration signal can be estimated from the CI signals. To evaluate the performance of respiration estimation using our video-based algorithms, we computed the cross-correlation coefficient and the RMSE between the respiration rates derived from the reference CI breathing signal and from our extracted respiration signals. The cross-correlation coefficient is computed on zero-mean signals; thus, it only compares the respiration rate variations of our estimation and the reference standard. In addition, correlation plots and Bland–Altman plots [22,23] were created.
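For completeness, a minimal sketch of the two agreement measures as described above: Pearson cross-correlation, which is inherently computed on zero-mean series, and RMSE between the video-derived and CI-derived rate series.

```python
# Sketch of the evaluation metrics used to compare rate series.
import numpy as np

def cc_coefficient(est: np.ndarray, ref: np.ndarray) -> float:
    return float(np.corrcoef(est, ref)[0, 1])  # Pearson, zero-mean by definition

def rmse(est: np.ndarray, ref: np.ndarray) -> float:
    return float(np.sqrt(np.mean((est - ref) ** 2)))
```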

4. Experimental Results and Discussion

Figure 4 shows a visual comparison of the reference breathing signal from the CI and two extracted respiration signals by the proposed optical flow- and deep flow-based methods. The three re-scaled 1D signals from the CI, optical flow- and deep flow-based motion estimations are depicted in Figure 5.
Table 1 summarizes the estimated respiration rates using the two different flow methods. Table 2 shows the root mean-squared errors (RMSE) and cross-correlation (CC) coefficients of the reference breathing signal from the CI, compared to our optical flow- and deep flow-based results. The results obtained with deep flow are more accurate than those of the conventional optical flow approach. The average cross-correlation coefficient over all videos is 0.74 and the average RMSE is 4.55 for the proposed deep learning-based method. Despite the limited number of measurements, these preliminary results are promising for further investigation of the neonatal video-based respiration monitoring method. The overall RMSE of our deep learning-based method (4.55) shows the feasibility of applying our automated respiration extraction in clinical practice.
From both the correlation and Bland–Altman analyses in Figure 6a,b, the deep flow-based method produces a lower error than the optical flow-based method, especially when breathing rates are below 50 bpm. The absolute value of the overall mean error reduces from 4.8 bpm for the optical flow case to 2.7 bpm for the deep flow case.
A residual error remains in our automated processing pipeline, which underestimates the respiration rate. This occurs because our band-pass filter fails to remove motion caused by interruptions in the video, for example movement resulting from nurse care-handling. In the future, we will investigate a supervised approach to replace the unsupervised filters for separating noise from the respiration signals.
We compared the effect of the two flow estimation methods on the final respiration calculation. The results show that the deep flow approach is sensitive to both homogeneous regions and boundary areas, whereas the conventional approach is mainly sensitive to boundary areas. This advantage of the deep flow-based approach improves the whole processing pipeline by increasing both accuracy and robustness.
We consider that the accuracy of our algorithm for extracting the respiration signal may be affected by the captured image resolution and the image sensor noise. If the resolution is too low, movement information from different regions is blended within one pixel, which hampers accurate motion extraction. If the image sensor noise is so high that the noise components in pixel values dominate the pixel changes induced by respiratory motion, the measurement will be polluted. A quantitative analysis of these effects is a complicated matter and is left as future work.
Currently, the work is carried out as a feasibility study, focusing on the installation, adaptation, and validation of camera-based monitoring technology in the NICU setting. We have not investigated the performance on preterm infants with specific diseases; in the future, we will further validate our algorithm on infants with different health conditions.
Our recordings were taken from real clinical practice without interfering with the clinical workflow. Our algorithm works whenever the respiratory motion can be observed by the camera, even as subtle movement; i.e., infants can be either naked or covered by a blanket (as long as the infant body is in contact with the blanket, so that movement information from the thorax and abdomen can still be derived).
Our algorithm relies on the intensity of video frames for motion extraction (i.e., no color or chromaticity information is used). Therefore, for nighttime conditions, it is possible to measure the respiration signal with the same software algorithms by simply switching to an infrared camera with an infrared lighting source. During low-light conditions, the performance of our system may be affected by the noise induced by the camera sensor.
The highlight of a video-based method for monitoring respiration is its contact-free operation. Both CI and polysomnography need electrodes attached to the patient’s skin, which increases the risk of skin irritation; our method can therefore improve comfort and convenience. A further benefit of using a camera is that it enables more measurements than contact-based bio-sensors, including physiological signals (e.g., breathing rate, heart rate, and blood oxygen saturation) [24,25] and contextual signals (e.g., body motion, activities, and facial expressions) [26,27,28,29]. This will enrich the functionality of a health monitoring system. Our system can be constructed with a generic webcam and an embedded computing platform, forming a cost-effective solution. In principle, one camera can monitor multiple subjects/infants simultaneously, as long as they are within the camera view, whereas each contact-based bio-sensor can only monitor a single subject/infant.

5. Conclusions

In this study, we applied an automated pipeline to estimate respiration signals from videos of premature infants in NICUs. We compared our automatically extracted respiration signals to those extracted from the CI. The preliminary results are promising for further investigation of the video-based respiration monitoring method and for applying our automated respiration extraction to infants in NICUs. Experiments showed that the deep learning-based method outperforms the optical flow-based method in accuracy (lower RMSE) and robustness. In the future, we will investigate the possibility of directly applying a deep learning framework to estimate the respiration rate; for example, an LSTM-based system can effectively incorporate temporal information for a regression task (see the sketch below) and is expected to further enhance the obtained results.
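As an illustration of this future direction only, the following is a hypothetical PyTorch sketch of such an LSTM regressor; the architecture and all hyperparameters are our assumptions, not results from this work.

```python
# Hypothetical sketch of the LSTM regression idea mentioned above;
# architecture and hyperparameters are illustrative, not from the paper.
import torch
import torch.nn as nn

class RespirationLSTM(nn.Module):
    def __init__(self, hidden_size: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, 1)    # regress one rate value

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, 1) windows of the extracted motion signal
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])          # last time step -> rate

# e.g., rates = RespirationLSTM()(torch.randn(4, 960, 1))  # 120 s at 8 fps
```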

Author Contributions

Conceptualization, Y.S., W.W., X.L., and P.H.N.d.W.; methodology, Y.S., W.W., and T.T.; software, Y.S., W.W., X.L., and T.T.; validation, Y.S., and X.L.; formal analysis, Y.S.; investigation, Y.S.; resources, M.M.; data curation, X.L.; writing—original draft preparation, Y.S.; writing—review and editing, Y.S., W.W., X.L., C.S., R.M.A., and P.H.N.d.W.; visualization, Y.S.; and supervision, P.H.N.d.W.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sale, S.M. Neonatal apnoea. Best Pract. Res. Clin. Anaesthesiol. 2010, 24, 323–336. [Google Scholar] [CrossRef] [PubMed]
  2. Chernick, V.; Heldrich, F.; Avery, M.E. Periodic breathing of premature infants. J. Pediatr. 1964, 64, 330–340. [Google Scholar] [CrossRef]
  3. Poets, C.F.; Stebbens, V.A.; Samuels, M.P.; Southall, D.P. The relationship between bradycardia, apnea, and hypoxemia in preterm infants. Pediatr. Res. 1993, 34, 144. [Google Scholar] [CrossRef] [PubMed]
  4. Prechtl, H.; Theorell, K.; Blair, A. Behavioural state cycles in abnormal infants. Dev. Med. Child Neurol. 1973, 15, 606–615. [Google Scholar] [CrossRef] [PubMed]
  5. Prechtl, H.F. The behavioural states of the newborn infant (a review). Brain Res. 1974, 76, 185–212. [Google Scholar] [CrossRef]
  6. Lund, C.H.; Nonato, L.B.; Kuller, J.M.; Franck, L.S.; Cullander, C.; Durand, D.K. Disruption of barrier function in neonatal skin associated with adhesive removal. J. Pediatr. 1997, 131, 367–372. [Google Scholar] [CrossRef]
  7. Afsar, F. Skin care for preterm and term neonates. Clin. Exp. Dermatol. Clin. Dermatol. 2009, 34, 855–858. [Google Scholar] [CrossRef] [PubMed]
  8. Baker, G.; Norman, M.; Karunanithi, M.; Sullivan, C. Contactless monitoring for sleep disordered-breathing, respiratory and cardiac co-morbidity in an elderly independent living cohort. Eur. Respir. J. 2015, 46, PA3379. [Google Scholar] [CrossRef]
  9. Matthews, G.; Sudduth, B.; Burrow, M. A non-contact vital signs monitor. Crit. Rev. Biomed. Eng. 2000, 28, 173–178. [Google Scholar] [CrossRef] [PubMed]
  10. Deng, F.; Dong, J.; Wang, X.; Fang, Y.; Liu, Y.; Yu, Z.; Liu, J.; Chen, F. Design and Implementation of a Noncontact Sleep Monitoring System Using Infrared Cameras and Motion Sensor. IEEE Trans. Instrum. Meas. 2018, 67, 1555–1563. [Google Scholar] [CrossRef]
  11. De Chazal, P.; O’Hare, E.; Fox, N.; Heneghan, C. Assessment of sleep/wake patterns using a non-contact biomotion sensor. In Proceedings of the 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vancouver, BC, Canada, 20–25 August 2008; pp. 514–517. [Google Scholar]
  12. Gupta, P.; Bhowmick, B.; Pal, A. Accurate heart-rate estimation from face videos using quality-based fusion. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 4132–4136. [Google Scholar] [CrossRef]
  13. Prathosh, A.P.; Praveena, P.; Mestha, L.K.; Bharadwaj, S. Estimation of Respiratory Pattern From Video Using Selective Ensemble Aggregation. IEEE Trans. Signal Process. 2017, 65, 2902–2916. [Google Scholar] [CrossRef]
  14. Werth, J.; Atallah, L.; Andriessen, P.; Long, X.; Zwartkruis-Pelgrim, E.; Aarts, R.M. Unobtrusive sleep state measurements in preterm infants—A review. Sleep Med. Rev. 2017, 32, 109–122. [Google Scholar] [CrossRef] [PubMed]
  15. Abbas, A.K.; Heimann, K.; Jergus, K.; Orlikowsky, T.; Leonhardt, S. Neonatal non-contact respiratory monitoring based on real-time infrared thermography. Biomed. Eng. Online 2011, 10, 93. [Google Scholar] [CrossRef] [PubMed]
  16. Koolen, N.; Decroupet, O.; Dereymaeker, A.; Jansen, K.; Vervisch, J.; Matic, V.; Vanrumste, B.; Naulaers, G.; Van Huffel, S.; De Vos, M. Automated Respiration Detection from Neonatal Video Data. In Proceedings of the International Conference on Pattern Recognition Applications and Methods ICPRAM, Lisbon, Portugal, 10–12 January 2015; pp. 164–169. [Google Scholar]
  17. Antognoli, L.; Marchionni, P.; Nobile, S.; Carnielli, V.; Scalise, L. Assessment of cardio-respiratory rates by non-invasive measurement methods in hospitalized preterm neonates. In Proceedings of the 2018 IEEE International Symposium on Medical Measurements and Applications (MeMeA), Rome, Italy, 11–13 June 2018; pp. 1–5. [Google Scholar]
  18. Barron, J.L.; Fleet, D.J.; Beauchemin, S.S.; Burkitt, T. Performance of optical flow techniques. In Proceedings of the 1992 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Champaign, IL, USA, 15–18 June 1992; pp. 236–242. [Google Scholar]
  19. Weinzaepfel, P.; Revaud, J.; Harchaoui, Z.; Schmid, C. DeepFlow: Large Displacement Optical Flow with Deep Matching. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 1385–1392. [Google Scholar] [CrossRef]
  20. Brox, T.; Malik, J. Large Displacement Optical Flow: Descriptor Matching in Variational Motion Estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 500–513. [Google Scholar] [CrossRef] [PubMed]
  21. Pearson, K. LIII. On lines and planes of closest fit to systems of points in space. Lond. Edinb. Dublin Philos. Mag. J. Sci. 1901, 2, 559–572. [Google Scholar] [CrossRef]
  22. Bland, J.M.; Altman, D. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet 1986, 327, 307–310. [Google Scholar] [CrossRef]
  23. Klein, R. Bland-Altman and Correlation Plot. Mathworks File Exch. 2014. Available online: http://www.mathworks.com/matlabcentral/fileexchange/45049-bland-altman-and-correlation-plot (accessed on 11 September 2015).
  24. Van Luijtelaar, R.; Wang, W.; Stuijk, S.; de Haan, G. Automatic roi detection for camera-based pulse-rate measurement. In Proceedings of the Asian Conference on Computer Vision, Singapore, 1–2 November 2014; pp. 360–374. [Google Scholar]
  25. Wang, W.; den Brinker, A.C.; Stuijk, S.; de Haan, G. Robust heart rate from fitness videos. Physiol. Meas. 2017, 38, 1023. [Google Scholar] [CrossRef] [PubMed]
  26. Sun, Y.; Kommers, D.; Wang, W.; Joshi, R.; Shan, C.; Tan, T.; Aarts, R.M.; van Pul, C.; Andriessen, P.; de With, P.H. Automatic and continuous discomfort detection for premature infants in a NICU using video-based motion analysis. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 5995–5999. [Google Scholar]
  27. Sun, Y.; Shan, C.; Tan, T.; Long, X.; Pourtaherian, A.; Zinger, S.; de With, P.H. Video-based discomfort detection for infants. Mach. Vis. Appl. 2019, 30, 933–944. [Google Scholar] [CrossRef]
  28. Sun, Y.; Shan, C.; Tan, T.; Tong, T.; Wang, W.; Pourtaherian, A.; de With, P.H.N. Detecting discomfort in infants through facial expressions. Physiol. Meas. 2019. [Google Scholar] [CrossRef] [PubMed]
  29. Wu, Y.; Huang, T.S. Vision-based gesture recognition: A review. In Proceedings of the International Gesture Workshop, Gif-sur-Yvette, France, 17–19 March 1999; pp. 103–115. [Google Scholar]
Figure 1. Example of an acquired video frame.
Figure 2. Flowchart of the proposed video-based respiration monitoring system.
Figure 3. Framework of deep flow.
Figure 4. Monitored respiration from the CI, and corresponding extracted respiration signals from a segment, based on optical flow and deep flow.
Figure 5. Data patterns for chest impedance and video-based observation: (a) re-scaled 1D signals; and (b) zoom in of each interval indicated in (a).
Figure 6. Correlation and Bland–Altman plots comparing respiration rate measurements derived from: (a) the CI and optical flow-based method; and (b) the CI and deep flow-based method. The correlation plots contain the linear regression equation, correlation coefficient ( r 2 ), sum of squared error (SSE), and number of points. The Bland–Altman plots contain the reproducibility coefficient (RPC = 1.96 σ ), and also show the coefficient of variation (CV = the standard deviation ( σ ) as a percentage of the mean), limits of agreement (LOA = ± 1.96 σ ), and the bias offset of the measures.
Table 1. Duration of each video, and measured mean and standard deviation (bpm) of the reference and our optical flow- and deep flow-based methods.

Patient ID | Duration (h:m:s) | Reference (CI) | Optical Flow | Deep Flow
1 | 00:50:13 | 51.12 ± 6.00 | 47.13 ± 4.96 | 48.67 ± 4.69
2 | 00:22:00 | 55.41 ± 6.73 | 51.18 ± 7.35 | 53.23 ± 7.07
3 | 00:26:54 | 48.78 ± 3.58 | 44.12 ± 2.91 | 46.75 ± 2.80
4 | 00:09:22 | 52.36 ± 5.25 | 48.75 ± 3.17 | 50.05 ± 3.72
5 | 00:06:15 | 61.61 ± 3.80 | 51.56 ± 4.29 | 55.58 ± 3.28
Table 2. Root mean-squared errors (RMSE) and cross-correlation (CC) coefficients of the reference breathing signal from the CI compared to our optical flow- and deep flow-based results.

Patient ID | RMSE (Optical Flow) | RMSE (Deep Flow) | CC (Optical Flow) | CC (Deep Flow)
1 | 5.09 | 4.32 | 0.85 | 0.82
2 | 4.94 | 3.10 | 0.94 | 0.95
3 | 3.86 | 3.51 | 0.73 | 0.75
4 | 5.75 | 5.11 | 0.47 | 0.52
5 | 10.85 | 6.71 | 0.49 | 0.66
Average | 6.10 | 4.55 | 0.70 | 0.74

Citation

Sun, Y.; Wang, W.; Long, X.; Meftah, M.; Tan, T.; Shan, C.; Aarts, R.M.; de With, P.H.N. Respiration Monitoring for Premature Neonates in NICU. Appl. Sci. 2019, 9, 5246. https://doi.org/10.3390/app9235246
