Computer Vision-Driven Movement Annotations to Advance fNIRS Pre-Processing Algorithms
Abstract
1. Introduction
2. Methods
2.1. Study Design
2.2. Participants
2.3. Experimental Procedure
2.4. fNIRS Data Acquisition
2.5. Deep Learning Approach
2.5.1. SynergyNet
2.5.2. UNet
1. Input Layer: The network accepts a 1D input signal.
2. Contracting Path (Encoder): The encoder is composed of five blocks. Each block contains the following: (a) two 1D convolutional layers (kernel size = 9, stride = 1, padding = 4), each followed by batch normalization and ReLU activation, and (b) a max-pooling layer (pool size = 2) for downsampling.
3. Bottleneck: The bottleneck consists of two 1D convolutional layers, each followed by batch normalization and ReLU activation.
4. Expanding Path (Decoder): The decoder mirrors the encoder and also consists of five blocks. Each block contains the following: (a) upsampling of the feature maps using transposed convolution or nn.Upsample with scale factor = 2; (b) concatenation with the corresponding feature maps from the encoder path; and (c) two 1D convolutional layers with batch normalization and ReLU activation.
5. Output Layer: The output is produced with a 1D convolutional layer (kernel size = 1), which reduces the number of channels to 1 for binary classification. A sigmoid activation function is applied to the output to obtain the binary classification result.
6. Weight Initialization: The network weights are initialized with the Xavier and Kaiming methods.
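The architecture enumerated above can be sketched in PyTorch as follows. The channel widths (a hypothetical `base = 16`, doubled at each depth) and the choice of transposed convolution for upsampling are assumptions for illustration; the text permits either transposed convolution or nn.Upsample, and does not specify channel counts.

```python
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Two 1D convolutions (kernel size 9, stride 1, padding 4),
    # each followed by batch normalization and ReLU, as in the text.
    return nn.Sequential(
        nn.Conv1d(in_ch, out_ch, kernel_size=9, stride=1, padding=4),
        nn.BatchNorm1d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv1d(out_ch, out_ch, kernel_size=9, stride=1, padding=4),
        nn.BatchNorm1d(out_ch),
        nn.ReLU(inplace=True),
    )


class UNet1D(nn.Module):
    def __init__(self, in_ch=1, base=16):
        super().__init__()
        chans = [base * 2 ** i for i in range(5)]  # assumed widths, e.g. 16..256
        # Contracting path: five blocks, each followed by max-pooling (pool size 2).
        self.encoders = nn.ModuleList()
        prev = in_ch
        for c in chans:
            self.encoders.append(conv_block(prev, c))
            prev = c
        self.pool = nn.MaxPool1d(2)
        # Bottleneck: two convolutional layers with BN and ReLU.
        self.bottleneck = conv_block(chans[-1], chans[-1] * 2)
        # Expanding path: upsample (here via transposed convolution),
        # concatenate the encoder skip connection, then two convolutions.
        self.ups = nn.ModuleList()
        self.decoders = nn.ModuleList()
        prev = chans[-1] * 2
        for c in reversed(chans):
            self.ups.append(nn.ConvTranspose1d(prev, c, kernel_size=2, stride=2))
            self.decoders.append(conv_block(c * 2, c))  # c*2 after skip concat
            prev = c
        # Output layer: 1x1 convolution down to a single channel.
        self.head = nn.Conv1d(chans[0], 1, kernel_size=1)

    def forward(self, x):
        skips = []
        for enc in self.encoders:
            x = enc(x)
            skips.append(x)
            x = self.pool(x)
        x = self.bottleneck(x)
        for up, dec, skip in zip(self.ups, self.decoders, reversed(skips)):
            x = up(x)
            x = dec(torch.cat([x, skip], dim=1))
        return torch.sigmoid(self.head(x))  # per-sample binary output in (0, 1)


# The input length must be divisible by 2**5 = 32 for the five pooling steps.
model = UNet1D()
y = model(torch.randn(2, 1, 256))
print(y.shape)  # torch.Size([2, 1, 256])
```

Because the five max-pooling steps each halve the temporal dimension, signals would need to be padded or cropped to a multiple of 32 samples before being fed to such a network.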
2.6. Model Evaluation
3. Results
4. Discussion
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
1D-UNet | one-dimensional U-Net
AI | artificial intelligence
CV | computer vision
ECG | electrocardiogram
EEG | electroencephalography
fNIRS | functional near-infrared spectroscopy
fMRI | functional magnetic resonance imaging
Share and Cite
Bizzego, A.; Carollo, A.; Senay, B.; Fong, S.; Furlanello, C.; Esposito, G. Computer Vision-Driven Movement Annotations to Advance fNIRS Pre-Processing Algorithms. Sensors 2024, 24, 6821. https://doi.org/10.3390/s24216821