In-Home Older Adults’ Activity Pattern Monitoring Using Depth Sensors: A Review
Abstract
1. Introduction
Contributions of This Article
- A discussion on why in-home care monitoring systems using depth sensors are relevant;
- A systematic review on state-of-the-art computing techniques for in-home monitoring systems for seniors based on depth data;
- A survey of benchmark depth-data datasets related to in-home seniors’ activities;
- A discussion of future directions and potential research ideas.
2. Terminology and Background
2.1. In-Home Monitoring Systems for Seniors
2.1.1. Human Fall
2.1.2. Other Elderly Activities
2.2. Computing
2.2.1. Machine Learning
2.2.2. Deep Learning
2.2.3. Edge Computing
2.3. Depth Sensor and Imagery
2.4. Gait Analysis
3. Survey of the State of the Art
3.1. Fall Detection
3.1.1. Fall Detection without Gait Parameter
3.1.2. Fall Detection with Gait Parameter
3.2. Activity Analysis
3.2.1. Activity Analysis without Gait Parameter
3.2.2. Activity Analysis with Gait Parameter
4. Survey of Benchmark Datasets
5. Discussion and Future Scope
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Khan, H.T. Population ageing in a globalized world: Risks and dilemmas? J. Eval. Clin. Pract. 2019, 25, 754–760. [Google Scholar] [CrossRef]
- Mitchell, E.; Walker, R. Global ageing: Successes, challenges and opportunities. Br. J. Hosp. Med. 2020, 81, 1–9. [Google Scholar] [CrossRef]
- Busch, I.M.; Moretti, F.; Mazzi, M.; Wu, A.W.; Rimondini, M. What we have learned from two decades of epidemics and pandemics: A systematic review and meta-analysis of the psychological burden of frontline healthcare workers. Psychother. Psychosom. 2021, 90, 1–13. [Google Scholar] [CrossRef]
- Florence, C.S.; Bergen, G.; Atherly, A.; Burns, E.; Stevens, J.; Drake, C. Medical costs of fatal and nonfatal falls in older adults. J. Am. Geriatr. Soc. 2018, 66, 693–698. [Google Scholar] [CrossRef] [Green Version]
- Petersen, N.; König, H.H.; Hajek, A. The link between falls, social isolation and loneliness: A systematic review. Arch. Gerontol. Geriatr. 2020, 88, 104020. [Google Scholar] [CrossRef] [PubMed]
- Alam, E.; Sufian, A.; Dutta, P.; Leo, M. Vision-based human fall detection systems using deep learning: A review. Comput. Biol. Med. 2022, 146, 105626. [Google Scholar] [CrossRef] [PubMed]
- Sabo, K.; Chin, E. Self-care needs and practices for the older adult caregiver: An integrative review. Geriatr. Nurs. 2021, 42, 570–581. [Google Scholar] [CrossRef] [PubMed]
- Maresova, P.; Rezny, L.; Bauer, P.; Fadeyia, O.; Eniayewu, O.; Barakovic, S.; Husic, J. An Effectiveness and Cost-Estimation Model for Deploying Assistive Technology Solutions in Elderly Care. Int. J. Healthc. Manag. 2022. [Google Scholar] [CrossRef]
- Abou Allaban, A.; Wang, M.; Padır, T. A systematic review of robotics research in support of in-home care for older adults. Information 2020, 11, 75. [Google Scholar] [CrossRef] [Green Version]
- Ho, A. Are we ready for artificial intelligence health monitoring in elder care? BMC Geriatr. 2020, 20, 358. [Google Scholar] [CrossRef]
- Qian, K.; Zhang, Z.; Yamamoto, Y.; Schuller, B.W. Artificial Intelligence Internet of Things for the Elderly: From Assisted Living to Health-Care Monitoring. IEEE Signal Process. Mag. 2021, 38, 78–88. [Google Scholar] [CrossRef]
- Szermer, M.; Zając, P.; Amrozik, P.; Maj, C.; Jankowski, M.; Jabłoński, G.; Kiełbik, R.; Nazdrowicz, J.; Napieralska, M.; Sakowicz, B. A capacitive 3-Axis MEMS accelerometer for medipost: A portable system dedicated to monitoring imbalance disorders. Sensors 2021, 21, 3564. [Google Scholar] [CrossRef] [PubMed]
- Liaqat, S.; Dashtipour, K.; Shah, S.A.; Rizwan, A.; Alotaibi, A.A.; Althobaiti, T.; Arshad, K.; Assaleh, K.; Ramzan, N. Novel Ensemble Algorithm for Multiple Activity Recognition in Elderly People Exploiting Ubiquitous Sensing Devices. IEEE Sens. J. 2021, 21, 18214–18221. [Google Scholar] [CrossRef]
- Philip, N.Y.; Rodrigues, J.J.; Wang, H.; Fong, S.J.; Chen, J. Internet of Things for in-home health monitoring systems: Current advances, challenges and future directions. IEEE J. Sel. Areas Commun. 2021, 39, 300–310. [Google Scholar] [CrossRef]
- Wang, J.; Spicher, N.; Warnecke, J.M.; Haghi, M.; Schwartze, J.; Deserno, T.M. Unobtrusive health monitoring in private spaces: The smart home. Sensors 2021, 21, 864. [Google Scholar] [CrossRef] [PubMed]
- Sufian, A.; You, C.; Dong, M. A Deep Transfer Learning-based Edge Computing Method for Home Health Monitoring. In Proceedings of the 2021 55th Annual Conference on Information Sciences and Systems (CISS), Baltimore, MD, USA, 24–26 March 2021; pp. 1–6. [Google Scholar]
- Cippitelli, E.; Fioranelli, F.; Gambi, E.; Spinsante, S. Radar and RGB-depth sensors for fall detection: A review. IEEE Sens. J. 2017, 17, 3585–3604. [Google Scholar] [CrossRef] [Green Version]
- Eick, S.; Antón, A.I. Enhancing privacy in robotics via judicious sensor selection. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 7156–7165. [Google Scholar]
- Xia, T.; Han, J.; Mascolo, C. Exploring machine learning for audio-based respiratory condition screening: A concise review of databases, methods, and open issues. Exp. Biol. Med. 2022. [Google Scholar] [CrossRef]
- Gokturk, S.B.; Yalcin, H.; Bamji, C. A time-of-flight depth sensor-system description, issues and solutions. In Proceedings of the 2004 Conference on Computer Vision and Pattern Recognition Workshop, Washington, DC, USA, 27 June–2 July 2004. [Google Scholar]
- Monteiro, K.; Rocha, E.; Silva, E.; Santos, G.L.; Santos, W.; Endo, P.T. Developing an e-health system based on IoT, fog and cloud computing. In Proceedings of the 2018 IEEE/ACM International Conference on Utility and Cloud Computing Companion (UCC Companion), Zurich, Switzerland, 17–20 December 2018; pp. 17–18. [Google Scholar]
- Jurado Pérez, L.; Salvachúa, J. An Approach to Build e-Health IoT Reactive Multi-Services Based on Technologies around Cloud Computing for Elderly Care in Smart City Homes. Appl. Sci. 2021, 11, 5172. [Google Scholar] [CrossRef]
- Hartmann, M.; Hashmi, U.S.; Imran, A. Edge computing in smart health care systems: Review, challenges, and research directions. Trans. Emerg. Telecommun. Technol. 2019, 33, e3710. [Google Scholar] [CrossRef]
- Bloom, D.E.; Canning, D.; Lubet, A. Global population aging: Facts, challenges, solutions & perspectives. Daedalus 2015, 144, 80–92. [Google Scholar]
- Chang, A.Y.; Skirbekk, V.F.; Tyrovolas, S.; Kassebaum, N.J.; Dieleman, J.L. Measuring population ageing: An analysis of the Global Burden of Disease Study 2017. Lancet Public Health 2019, 4, e159–e167. [Google Scholar] [CrossRef] [PubMed]
- Aceto, G.; Persico, V.; Pescapé, A. The role of Information and Communication Technologies in healthcare: Taxonomies, perspectives, and challenges. J. Netw. Comput. Appl. 2018, 107, 125–154. [Google Scholar] [CrossRef]
- Malwade, S.; Abdul, S.S.; Uddin, M.; Nursetyo, A.A.; Fernandez-Luque, L.; Zhu, X.K.; Cilliers, L.; Wong, C.P.; Bamidis, P.; Li, Y.C.J. Mobile and wearable technologies in healthcare for the ageing population. Comput. Methods Programs Biomed. 2018, 161, 233–237. [Google Scholar] [CrossRef]
- Senbekov, M.; Saliev, T.; Bukeyeva, Z.; Almabayeva, A.; Zhanaliyeva, M.; Aitenova, N.; Toishibekov, Y.; Fakhradiyev, I. The recent progress and applications of digital technologies in healthcare: A review. Int. J. Telemed. Appl. 2020, 2020, 8830200. [Google Scholar] [CrossRef]
- Wang, Z.; Ramamoorthy, V.; Gal, U.; Guez, A. Possible life saver: A review on human fall detection technology. Robotics 2020, 9, 55. [Google Scholar] [CrossRef]
- Lu, N.; Wu, Y.; Feng, L.; Song, J. Deep learning for fall detection: Three-dimensional CNN combined with LSTM on video kinematic data. IEEE J. Biomed. Health Inform. 2018, 23, 314–323. [Google Scholar] [CrossRef] [PubMed]
- Singh, A.; Rehman, S.U.; Yongchareon, S.; Chong, P.H.J. Sensor technologies for fall detection systems: A review. IEEE Sens. J. 2020, 20, 6889–6919. [Google Scholar] [CrossRef]
- Lentzas, A.; Vrakas, D. Non-intrusive human activity recognition and abnormal behavior detection on elderly people: A review. Artif. Intell. Rev. 2019, 53, 1975–2021. [Google Scholar] [CrossRef]
- Sapci, A.H.; Sapci, H.A. Innovative assisted living tools, remote monitoring technologies, artificial intelligence-driven solutions, and robotic systems for aging societies: Systematic review. JMIR Aging 2019, 2, e15429. [Google Scholar] [CrossRef]
- Grossi, G.; Lanzarotti, R.; Napoletano, P.; Noceti, N.; Odone, F. Positive technology for elderly well-being: A review. Pattern Recognit. Lett. 2020, 137, 61–70. [Google Scholar] [CrossRef]
- Jordan, M.I.; Mitchell, T.M. Machine learning: Trends, perspectives, and prospects. Science 2015, 349, 255–260. [Google Scholar] [CrossRef] [PubMed]
- Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
- Shi, W.; Cao, J.; Zhang, Q.; Li, Y.; Xu, L. Edge computing: Vision and challenges. IEEE Internet Things J. 2016, 3, 637–646. [Google Scholar] [CrossRef]
- Virvou, M.; Alepis, E.; Tsihrintzis, G.A.; Jain, L.C. Machine learning paradigms. In Machine Learning Paradigms; Springer: Berlin/Heidelberg, Germany, 2020; pp. 1–5. [Google Scholar]
- Maulud, D.; Abdulazeez, A.M. A Review on Linear Regression Comprehensive in Machine Learning. J. Appl. Sci. Technol. Trends 2020, 1, 140–147. [Google Scholar] [CrossRef]
- Safavian, S.R.; Landgrebe, D. A survey of decision tree classifier methodology. IEEE Trans. Syst. Man Cybern. 1991, 21, 660–674. [Google Scholar] [CrossRef] [Green Version]
- Noble, W.S. What is a support vector machine? Nat. Biotechnol. 2006, 24, 1565–1567. [Google Scholar] [CrossRef]
- Shinde, P.P.; Shah, S. A review of machine learning and deep learning applications. In Proceedings of the 2018 Fourth International Conference On Computing Communication Control and Automation (ICCUBEA), Pune, India, 16–18 August 2018; pp. 1–6. [Google Scholar]
- Qayyum, A.; Qadir, J.; Bilal, M.; Al-Fuqaha, A. Secure and robust machine learning for healthcare: A survey. IEEE Rev. Biomed. Eng. 2020, 14, 156–180. [Google Scholar] [CrossRef]
- Sufian, A.; Ghosh, A.; Sadiq, A.S.; Smarandache, F. A survey on deep transfer learning to edge computing for mitigating the COVID-19 pandemic. J. Syst. Archit. 2020, 108, 101830. [Google Scholar] [CrossRef]
- Ghosh, A.; Sufian, A.; Sultana, F.; Chakrabarti, A.; De, D. Fundamental concepts of convolutional neural network. In Recent Trends and Advances in Artificial Intelligence and Internet of Things; Springer: Berlin/Heidelberg, Germany, 2020; pp. 519–567. [Google Scholar]
- Sherstinsky, A. Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network. Phys. D Nonlinear Phenom. 2020, 404, 132306. [Google Scholar] [CrossRef] [Green Version]
- Sejnowski, T.J. The Deep Learning Revolution; MIT Press: Cambridge, MA, USA, 2018. [Google Scholar]
- Esteva, A.; Robicquet, A.; Ramsundar, B.; Kuleshov, V.; DePristo, M.; Chou, K.; Cui, C.; Corrado, G.; Thrun, S.; Dean, J. A guide to deep learning in healthcare. Nat. Med. 2019, 25, 24–29. [Google Scholar] [CrossRef] [PubMed]
- Grigorescu, S.; Trasnea, B.; Cocias, T.; Macesanu, G. A survey of deep learning techniques for autonomous driving. J. Field Robot. 2020, 37, 362–386. [Google Scholar] [CrossRef]
- Sengupta, S.; Basak, S.; Saikia, P.; Paul, S.; Tsalavoutis, V.; Atiah, F.; Ravi, V.; Peters, A. A review of deep learning with special emphasis on architectures, applications and recent trends. Knowl.-Based Syst. 2020, 194, 105596. [Google Scholar] [CrossRef] [Green Version]
- Ashton, K. That ‘internet of things’ thing. RFID J. 2009, 22, 97–114. [Google Scholar]
- De Donno, M.; Tange, K.; Dragoni, N. Foundations and evolution of modern computing paradigms: Cloud, iot, edge, and fog. IEEE Access 2019, 7, 150936–150948. [Google Scholar] [CrossRef]
- Sadeeq, M.M.; Abdulkareem, N.M.; Zeebaree, S.R.; Ahmed, D.M.; Sami, A.S.; Zebari, R.R. IoT and Cloud computing issues, challenges and opportunities: A review. Qubahan Acad. J. 2021, 1, 1–7. [Google Scholar] [CrossRef]
- Yousefpour, A.; Fung, C.; Nguyen, T.; Kadiyala, K.; Jalali, F.; Niakanlahiji, A.; Kong, J.; Jue, J.P. All one needs to know about fog computing and related edge computing paradigms: A complete survey. J. Syst. Archit. 2019, 98, 289–330. [Google Scholar] [CrossRef]
- Qiu, T.; Chi, J.; Zhou, X.; Ning, Z.; Atiquzzaman, M.; Wu, D.O. Edge computing in industrial internet of things: Architecture, advances and challenges. IEEE Commun. Surv. Tutorials 2020, 22, 2462–2488. [Google Scholar] [CrossRef]
- Dawar, N.; Kehtarnavaz, N. A convolutional neural network-based sensor fusion system for monitoring transition movements in healthcare applications. In Proceedings of the 2018 IEEE 14th International Conference on Control and Automation (ICCA), Anchorage, AK, USA, 12–15 June 2018; pp. 482–485. [Google Scholar]
- Oyedotun, O.K.; Demisse, G.; El Rahman Shabayek, A.; Aouada, D.; Ottersten, B. Facial Expression Recognition via Joint Deep Learning of RGB-Depth Map Latent Representations. In Proceedings of the Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, Venice, Italy, 22–29 October 2017. [Google Scholar]
- Kim, K.; Jalal, A.; Mahmood, M. Vision-based Human Activity recognition system using depth silhouettes: A Smart home system for monitoring the residents. J. Electr. Eng. Technol. 2019, 14, 2567–2573. [Google Scholar] [CrossRef]
- Chen, L.; Wei, H.; Ferryman, J. A survey of human motion analysis using depth imagery. Pattern Recognit. Lett. 2013, 34, 1995–2006. [Google Scholar] [CrossRef]
- Galna, B.; Barry, G.; Jackson, D.; Mhiripiri, D.; Olivier, P.; Rochester, L. Accuracy of the Microsoft Kinect sensor for measuring movement in people with Parkinson’s disease. Gait Posture 2014, 39, 1062–1068. [Google Scholar] [CrossRef] [Green Version]
- Guzsvinecz, T.; Szucs, V.; Sik-Lanyi, C. Suitability of the Kinect sensor and Leap Motion controller—A literature review. Sensors 2019, 19, 1072. [Google Scholar] [CrossRef] [Green Version]
- Kadambi, A.; Bhandari, A.; Raskar, R. 3d depth cameras in vision: Benefits and limitations of the hardware. In Computer Vision and Machine Learning with RGB-D Sensors; Springer: Berlin/Heidelberg, Germany, 2014; pp. 3–26. [Google Scholar]
- Caldas, R.; Mundt, M.; Potthast, W.; de Lima Neto, F.B.; Markert, B. A systematic review of gait analysis methods based on inertial sensors and adaptive algorithms. Gait Posture 2017, 57, 204–210. [Google Scholar] [CrossRef] [PubMed]
- Jarchi, D.; Pope, J.; Lee, T.K.; Tamjidi, L.; Mirzaei, A.; Sanei, S. A review on accelerometry-based gait analysis and emerging clinical applications. IEEE Rev. Biomed. Eng. 2018, 11, 177–194. [Google Scholar] [CrossRef]
- Gabel, M.; Gilad-Bachrach, R.; Renshaw, E.; Schuster, A. Full body gait analysis with Kinect. In Proceedings of the 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Diego, CA, USA, 28 August–1 September 2012; pp. 1964–1967. [Google Scholar]
- Xefteris, V.R.; Tsanousa, A.; Meditskos, G.; Vrochidis, S.; Kompatsiaris, I. Performance, challenges, and limitations in multimodal fall detection systems: A review. IEEE Sensors J. 2021, 21, 18398–18409. [Google Scholar] [CrossRef]
- Chen, Z.; Wang, Y.; Yang, W. Video Based Fall Detection Using Human Poses. In Proceedings of the CCF Conference on Big Data, Guangzhou, China, 8–10 January 2022; pp. 283–296. [Google Scholar]
- Khraief, C.; Benzarti, F.; Amiri, H. Elderly fall detection based on multi-stream deep convolutional networks. Multimed. Tools Appl. 2020, 79, 19537–19560. [Google Scholar] [CrossRef]
- Abobakr, A.; Hossny, M.; Abdelkader, H.; Nahavandi, S. RGB-D fall detection via deep residual convolutional lstm networks. In Proceedings of the 2018 Digital Image Computing: Techniques and Applications (DICTA), Canberra, ACT, Australia, 10–13 December 2018; pp. 1–7. [Google Scholar]
- Xu, T.; Zhou, Y. Elders’ fall detection based on biomechanical features using depth camera. Int. J. Wavelets, Multiresolution Inf. Process. 2018, 16, 1840005. [Google Scholar] [CrossRef]
- Biswas, A.; Dey, B.; Poudyel, B.; Sarkar, N.; Olariu, T. Automatic fall detection using Orbbec Astra 3D pro depth images. J. Intell. Fuzzy Syst. 2022, 43, 1707–1715. [Google Scholar] [CrossRef]
- Mazurek, P.; Wagner, J.; Morawski, R.Z. Use of kinematic and mel-cepstrum-related features for fall detection based on data from infrared depth sensors. Biomed. Signal Process. Control. 2018, 40, 102–110. [Google Scholar] [CrossRef]
- Akagündüz, E.; Aslan, M.; Şengür, A.; Wang, H.; Ince, M.C. Silhouette orientation volumes for efficient fall detection in depth videos. IEEE J. Biomed. Health Inform. 2016, 21, 756–763. [Google Scholar] [CrossRef]
- Aslan, M.; Sengur, A.; Xiao, Y.; Wang, H.; Ince, M.C.; Ma, X. Shape feature encoding via fisher vector for efficient fall detection in depth-videos. Appl. Soft Comput. 2015, 37, 1023–1028. [Google Scholar] [CrossRef]
- Ma, X.; Wang, H.; Xue, B.; Zhou, M.; Ji, B.; Li, Y. Depth-based human fall detection via shape features and improved extreme learning machine. IEEE J. Biomed. Health Inform. 2014, 18, 1915–1922. [Google Scholar] [CrossRef] [PubMed]
- Bian, Z.P.; Hou, J.; Chau, L.P.; Magnenat-Thalmann, N. Fall detection based on body part tracking using a depth camera. IEEE J. Biomed. Health Inform. 2014, 19, 430–439. [Google Scholar] [CrossRef] [PubMed]
- Kepski, M.; Kwolek, B. Fall detection using ceiling-mounted 3d depth camera. In Proceedings of the 2014 International Conference on Computer Vision Theory and Applications (VISAPP), Lisbon, Portugal, 5–8 January 2014; Volume 2, pp. 640–647. [Google Scholar]
- Rougier, C.; Auvinet, E.; Rousseau, J.; Mignotte, M.; Meunier, J. Fall detection from depth map video sequences. In Proceedings of the International Conference on Smart Homes And Health Telematics, Montreal, QC, Canada, 20–22 June 2011; pp. 121–128. [Google Scholar]
- Nghiem, A.T.; Auvinet, E.; Meunier, J. Head detection using kinect camera and its application to fall detection. In Proceedings of the 2012 11th International Conference on Information Science, Signal Processing and Their Applications (ISSPA), Montreal, QC, Canada, 2–5 July 2012; pp. 164–169. [Google Scholar]
- Zhang, Z.; Liu, W.; Metsis, V.; Athitsos, V. Athitsos, V. A viewpoint-independent statistical method for fall detection. In Proceedings of the Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012), Tsukuba, Japan, 11–15 November 2012; pp. 3626–3630.
- Kepski, M.; Kwolek, B. Human fall detection using Kinect sensor. In Proceedings of the 8th International Conference on Computer Recognition Systems CORES 2013, Milkow, Poland, 27–29 May 2013; pp. 743–752. [Google Scholar]
- Gasparrini, S.; Cippitelli, E.; Spinsante, S.; Gambi, E. A depth-based fall detection system using a Kinect® sensor. Sensors 2014, 14, 2756–2775. [Google Scholar] [CrossRef] [PubMed]
- Yang, L.; Ren, Y.; Hu, H.; Tian, B. New fast fall detection method based on spatio-temporal context tracking of head by using depth images. Sensors 2015, 15, 23004–23019. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Yang, L.; Ren, Y.; Zhang, W. 3D depth image analysis for indoor fall detection of elderly people. Digit. Commun. Netw. 2016, 2, 24–34. [Google Scholar] [CrossRef] [Green Version]
- Cao, Z.; Hidalgo, G.; Simon, T.; Wei, S.E.; Sheikh, Y. OpenPose: Realtime multi-person 2D pose estimation using Part Affinity Fields. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 43, 172–186. [Google Scholar] [CrossRef] [Green Version]
- Chen, W.; Jiang, Z.; Guo, H.; Ni, X. Fall detection based on key points of human-skeleton using openpose. Symmetry 2020, 12, 744. [Google Scholar] [CrossRef]
- Sampath Dakshina Murthy, A.; Karthikeyan, T.; Vinoth Kanna, R. Gait-based person fall prediction using deep learning approach. Soft Comput. 2021, 26, 12933–12941. [Google Scholar] [CrossRef]
- Amsaprabhaa, M.; Jane, Y.N.; Nehemiah, H.K. Multimodal Spatiotemporal Skeletal Kinematic Gait Feature Fusion for Vision-based Fall Detection. Expert Syst. Appl. 2022, 212, 118681. [Google Scholar]
- Xu, Y.; Chen, J.; Yang, Q.; Guo, Q. Human posture recognition and fall detection using Kinect V2 camera. In Proceedings of the 2019 Chinese Control Conference (CCC), Guangzhou, China, 27–30 July 2019; pp. 8488–8493. [Google Scholar]
- Dubois, A.; Charpillet, F. A gait analysis method based on a depth camera for fall prevention. In Proceedings of the 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 26–30 August 2014; pp. 4515–4518. [Google Scholar]
- Parajuli, M.; Tran, D.; Ma, W.; Sharma, D. Senior health monitoring using Kinect. In Proceedings of the 2012 Fourth International Conference on Communications and Electronics (ICCE), Hue, Vietnam, 1–3 August 2012; pp. 309–312. [Google Scholar]
- Stone, E.E.; Skubic, M. Evaluation of an inexpensive depth camera for passive in-home fall risk assessment. In Proceedings of the 2011 5th International Conference on Pervasive Computing Technologies for Healthcare (PervasiveHealth) and Workshops, Dublin, Ireland, 23–26 May 2011; pp. 71–77. [Google Scholar]
- Stone, E.E.; Skubic, M. Passive in-home measurement of stride-to-stride gait variability comparing vision and Kinect sensing. In Proceedings of the 2011 Annual International Conference of The IEEE Engineering in Medicine and Biology Society, Boston, MA, USA, 30 August–3 September 2011; pp. 6491–6494. [Google Scholar]
- Baldewijns, G.; Verheyden, G.; Vanrumste, B.; Croonenborghs, T. Validation of the kinect for gait analysis using the GAITRite walkway. In Proceedings of the 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 26–30 August 2014; pp. 5920–5923. [Google Scholar]
- Jaouedi, N.; Perales, F.J.; Buades, J.M.; Boujnah, N.; Bouhlel, M.S. Prediction of human activities based on a new structure of skeleton features and deep learning model. Sensors 2020, 20, 4944. [Google Scholar] [CrossRef]
- Phyo, C.N.; Zin, T.T.; Tin, P. Deep learning for recognizing human activities using motions of skeletal joints. IEEE Trans. Consum. Electron. 2019, 65, 243–252. [Google Scholar] [CrossRef]
- Bagate, A.; Shah, M. Human activity recognition using rgb-d sensors. In Proceedings of the 2019 International Conference on Intelligent Computing and Control Systems (ICCS), Madurai, India, 15–17 May 2019; pp. 902–905. [Google Scholar]
- Gu, Y.; Ye, X.; Sheng, W. Depth MHI Based Deep Learning Model for Human Action Recognition. In Proceedings of the 2018 13th World Congress on Intelligent Control and Automation (WCICA), Changsha, China, 4–8 July 2018; pp. 395–400. [Google Scholar]
- Uddin, M.Z.; Hassan, M.M.; Almogren, A.; Alamri, A.; Alrubaian, M.; Fortino, G. Facial expression recognition utilizing local direction-based robust features and deep belief network. IEEE Access 2017, 5, 4525–4536. [Google Scholar] [CrossRef]
- Ji, X.; Zhao, Q.; Cheng, J.; Ma, C. Exploiting spatio-temporal representation for 3D human action recognition from depth map sequences. Knowl. -Based Syst. 2021, 227, 107040. [Google Scholar] [CrossRef]
- Yadav, S.K.; Tiwari, K.; Pandey, H.M.; Akbar, S.A. Skeleton-based human activity recognition using ConvLSTM and guided feature learning. Soft Comput. 2022, 26, 877–890. [Google Scholar] [CrossRef]
- Jalal, A.; Kamal, S.; Kim, D. A Depth Video-based Human Detection and Activity Recognition using Multi-features and Embedded Hidden Markov Models for Health Care Monitoring Systems. Int. J. Interact. Multimed. Artif. Intell. 2017, 4, 54–62. [Google Scholar] [CrossRef] [Green Version]
- Kamal, S.; Jalal, A.; Kim, D. Depth images-based human detection, tracking and activity recognition using spatiotemporal features and modified HMM. J. Electr. Eng. Technol. 2016, 11, 1857–1862. [Google Scholar] [CrossRef] [Green Version]
- Farooq, A.; Jalal, A.; Kamal, S. Dense RGB-D map-based human tracking and activity recognition using skin joints features and self-organizing map. KSII Trans. Internet Inf. Syst. TIIS 2015, 9, 1856–1869. [Google Scholar]
- Chen, C.; Jafari, R.; Kehtarnavaz, N. Action recognition from depth sequences using depth motion maps-based local binary patterns. In Proceedings of the 2015 IEEE Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 5–9 January 2015; pp. 1092–1099. [Google Scholar]
- Jalal, A.; Kamal, S.; Kim, D. A depth video sensor-based life-logging human activity recognition system for elderly care in smart indoor environments. Sensors 2014, 14, 11735–11759. [Google Scholar] [CrossRef]
- Wang, J.; Liu, Z.; Wu, Y.; Yuan, J. Mining actionlet ensemble for action recognition with depth cameras. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 1290–1297. [Google Scholar]
- Jalal, A.; Kamal, S. Real-time life logging via a depth silhouette-based human activity recognition system for smart home services. In Proceedings of the 2014 11th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Seoul, Korea, 26–29 August 2014; pp. 74–80. [Google Scholar]
- Kosmopoulos, D.I.; Doliotis, P.; Athitsos, V.; Maglogiannis, I. Fusion of color and depth video for human behavior recognition in an assistive environment. In Proceedings of the International Conference on Distributed, Ambient, and Pervasive Interactions, Toronto, ON, Canada, 17–22 July 2013; pp. 42–51. [Google Scholar]
- Bulbul, M.F.; Ali, H. Gradient local auto-correlation features for depth human action recognition. SN Appl. Sci. 2021, 3, 535. [Google Scholar] [CrossRef]
- Srivastav, V.; Gangi, A.; Padoy, N. Human pose estimation on privacy-preserving low-resolution depth images. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Shenzhen, China, 13–17 October 2019; pp. 583–591. [Google Scholar]
- Cao, Z.; Simon, T.; Wei, S.E.; Sheikh, Y. Realtime multi-person 2d pose estimation using part affinity fields. In Proceedings of the Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7291–7299.
- Uddin, M.Z.; Kim, M.R. A deep learning-based gait posture recognition from depth information for smart home applications. In Advances in Computer Science and Ubiquitous Computing; Springer: Berlin/Heidelberg, Germany, 2016; pp. 407–413. [Google Scholar]
- Bari, A.H.; Gavrilova, M.L. Artificial neural network based gait recognition using kinect sensor. IEEE Access 2019, 7, 162708–162722. [Google Scholar] [CrossRef]
- Wang, X.; Zhang, J.; Yan, W.Q. Gait recognition using multichannel convolution neural networks. Neural Comput. Appl. 2020, 32, 14275–14285. [Google Scholar] [CrossRef]
- Zia Uddin, M.; Kim, T.S.; Kim, J.T. Video-based indoor human gait recognition using depth imaging and hidden Markov model: A smart system for smart home. Indoor Built Environ. 2011, 20, 120–128. [Google Scholar] [CrossRef]
- Nandy, A.; Chakraborty, P. A new paradigm of human gait analysis with Kinect. In Proceedings of the 2015 Eighth International Conference on Contemporary Computing (IC3), Noida, India, 20–22 August 2015; pp. 443–448. [Google Scholar]
- Mondal, S.; Nandy, A.; Chakrabarti, A.; Chakraborty, P.; Nandi, G.C. A framework for synthesis of human gait oscillation using intelligent gait oscillation detector (IGOD). In Proceedings of the International Conference on Contemporary Computing, Noida, India, 9–11 August 2010; pp. 340–349. [Google Scholar]
- Chaaraoui, A.A.; Padilla-López, J.R.; Flórez-Revuelta, F. Abnormal gait detection with RGB-D devices using joint motion history features. In Proceedings of the 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), Ljubljana, Slovenia, 4–8 May 2015; Volume 7, pp. 1–6. [Google Scholar]
- Twomey, N.; Diethe, T.; Kull, M.; Song, H.; Camplani, M.; Hannuna, S.; Fafoutis, X.; Zhu, N.; Woznowski, P.; Flach, P.; et al. The SPHERE challenge: Activity recognition with multimodal sensor data. arXiv 2016, arXiv:1603.00797. [Google Scholar]
- Dao, N.L.; Zhang, Y.; Zheng, J.; Cai, J. Kinect-based non-intrusive human gait analysis and visualization. In Proceedings of the 2015 IEEE 17th International Workshop on Multimedia Signal Processing (MMSP), Xiamen, China, 19–21 October 2015; pp. 1–6. [Google Scholar]
- Dubois, A.; Charpillet, F. Measuring frailty and detecting falls for elderly home care using depth camera. J. Ambient. Intell. Smart Environ. 2017, 9, 469–481. [Google Scholar] [CrossRef] [Green Version]
- Bei, S.; Zhen, Z.; Xing, Z.; Taocheng, L.; Qin, L. Movement disorder detection via adaptively fused gait analysis based on kinect sensors. IEEE Sens. J. 2018, 18, 7305–7314. [Google Scholar] [CrossRef]
- Cheng, Z.; Qin, L.; Ye, Y.; Huang, Q.; Tian, Q. Human daily action analysis with multi-view and color-depth data. In Proceedings of the European Conference on Computer Vision, Florence, Italy, 7–13 October 2012; pp. 52–61. [Google Scholar]
- Leightley, D.; Yap, M.H.; Coulson, J.; Barnouin, Y.; McPhee, J.S. Benchmarking human motion analysis using kinect one: An open source dataset. In Proceedings of the 2015 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), Hong Kong, China, 16–19 December 2015; pp. 1–7. [Google Scholar]
- Shahroudy, A.; Liu, J.; Ng, T.T.; Wang, G. Ntu rgb+ d: A large scale dataset for 3d human activity analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1010–1019. [Google Scholar]
- Liu, C.; Hu, Y.; Li, Y.; Song, S.; Liu, J. PKU-MMD: A large scale benchmark for skeleton-based human action understanding. In Proceedings of the Proceedings of the Workshop on Visual Analysis in Smart and Connected Communities, Mountain View, CA, USA, 23 October 2017; pp. 1–8.
- Aloba, A.; Flores, G.; Woodward, J.; Shaw, A.; Castonguay, A.; Cuba, I.; Dong, Y.; Jain, E.; Anthony, L. Kinder-Gator: The UF Kinect Database of Child and Adult Motion. In Proceedings of the Eurographics (Short Papers), Delft, The Netherlands, 16–20 April 2018; pp. 13–16. [Google Scholar]
- Jang, J.; Kim, D.; Park, C.; Jang, M.; Lee, J.; Kim, J. ETRI-Activity3D: A Large-Scale RGB-D Dataset for Robots to Recognize Daily Activities of the Elderly. arXiv 2020, arXiv:2003.01920. [Google Scholar]
- Fiorini, L.; Cornacchia Loizzo, F.G.; Sorrentino, A.; Rovini, E.; Di Nuovo, A.; Cavallo, F. The VISTA datasets, a combination of inertial sensors and depth cameras data for activity recognition. Sci. Data 2022, 9, 218. [Google Scholar] [CrossRef]
- Byeon, Y.H.; Kim, D.; Lee, J.; Kwak, K.C. Body and hand–object ROI-based behavior recognition using deep learning. Sensors 2021, 21, 1838. [Google Scholar] [CrossRef]
- Byeon, Y.H.; Kim, D.; Lee, J.; Kwak, K.C. Ensemble Three-Stream RGB-S Deep Neural Network for Human Behavior Recognition Under Intelligent Home Service Robot Environments. IEEE Access 2021, 9, 73240–73250. [Google Scholar] [CrossRef]
- Hwang, H.; Jang, C.; Park, G.; Cho, J.; Kim, I.J. Eldersim: A synthetic data generation platform for human action recognition in eldercare applications. arXiv 2020, arXiv:2010.14742. [Google Scholar] [CrossRef]
- Dong, Y.; Aloba, A.; Anthony, L.; Jain, E. Style Translation to Create Child-like Motion. In Proceedings of the Eurographics (Posters), Delft, The Netherlands, 16–20 April 2018; pp. 31–32. [Google Scholar]
- Vatavu, R.D. The dissimilarity-consensus approach to agreement analysis in gesture elicitation studies. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, UK, 4–9 May 2019; pp. 1–13. [Google Scholar]
- Aloba, A.; Luc, A.; Woodward, J.; Dong, Y.; Zhang, R.; Jain, E.; Anthony, L. Quantifying differences between child and adult motion based on gait features. In Proceedings of the International Conference on Human-Computer Interaction, Paphos, Cyprus, 2–6 September 2019; pp. 385–402. [Google Scholar]
- Duan, L.; Liu, J.; Yang, W.; Huang, T.; Gao, W. Video coding for machines: A paradigm of collaborative compression and intelligent analytics. IEEE Trans. Image Process. 2020, 29, 8680–8695. [Google Scholar] [CrossRef] [PubMed]
- Karanam, S.; Li, R.; Yang, F.; Hu, W.; Chen, T.; Wu, Z. Towards contactless patient positioning. IEEE Trans. Med. Imaging 2020, 39, 2701–2710. [Google Scholar] [CrossRef] [PubMed]
- Mathe, E.; Maniatis, A.; Spyrou, E.; Mylonas, P. A deep learning approach for human action recognition using skeletal information. In GeNeDis 2018; Springer: Berlin/Heidelberg, Germany, 2020; pp. 105–114. [Google Scholar]
- Bai, Y.; Tao, Z.; Wang, L.; Li, S.; Yin, Y.; Fu, Y. Collaborative attention mechanism for multi-view action recognition. arXiv 2020, arXiv:2009.06599. [Google Scholar]
- Bai, Y.; Wang, L.; Tao, Z.; Li, S.; Fu, Y. Correlative Channel-Aware Fusion for Multi-View Time Series Classification. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 2–9 February 2021; Volume 35, pp. 6714–6722. [Google Scholar]
- Peng, W.; Hong, X.; Chen, H.; Zhao, G. Learning graph convolutional network for skeleton-based human action recognition by neural searching. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 2669–2676. [Google Scholar]
- Leightley, D.; McPhee, J.S.; Yap, M.H. Automated analysis and quantification of human mobility using a depth sensor. IEEE J. Biomed. Health Inform. 2016, 21, 939–948. [Google Scholar] [CrossRef] [Green Version]
- Maudsley-Barton, S.; McPhee, J.; Bukowski, A.; Leightley, D.; Yap, M.H. A comparative study of the clinical use of motion analysis from kinect skeleton data. In Proceedings of the 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Banff, AB, Canada, 5–8 October 2017; pp. 2808–2813. [Google Scholar]
- Leightley, D.; Yap, M.H. Digital analysis of sit-to-stand in masters athletes, healthy old people, and young adults using a depth sensor. Healthcare 2018, 6, 21. [Google Scholar] [CrossRef] [Green Version]
- Li, W.; Chen, L.; Xu, D.; Van Gool, L. Visual recognition in RGB images and videos by learning from RGB-D data. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 2030–2036. [Google Scholar] [CrossRef] [PubMed]
- Sun, B.; Kong, D.; Wang, S.; Wang, L.; Yin, B. Joint Transferable Dictionary Learning and View Adaptation for Multi-view Human Action Recognition. ACM Trans. Knowl. Discov. Data TKDD 2021, 15, 1–23. [Google Scholar] [CrossRef]
- Wang, Y.; Xiao, Y.; Lu, J.; Tan, B.; Cao, Z.; Zhang, Z.; Zhou, J.T. Discriminative Multi-View Dynamic Image Fusion for Cross-View 3-D Action Recognition. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 5332–5345. [Google Scholar] [CrossRef]
- Wang, Y.; Yao, Q.; Kwok, J.T.; Ni, L.M. Generalizing from a few examples: A survey on few-shot learning. ACM Comput. Surv. CSUR 2020, 53, 63. [Google Scholar] [CrossRef]
Study with Year | Key Points & Features | Computing Technique Used |
---|---|---|
Amrita et al. [72], 2022 | Used the subject’s height-to-width ratio and fall velocity. | CNN
Chen et al. [68], 2022 | Used 2D and 3D poses from depth video sequences. | CNN |
Z. Chen et al. [87], 2020 | Used the symmetry principle and calculated speed, angles and width-to-height ratio. | OpenPose algorithm
Khraief et al. [69], 2020 | Combines motion, shape, color and depth information. Used transfer learning and data augmentation to cope with limited training data. | CNN and transfer learning
Abobakr et al. [70], 2018 | Deep hierarchical visual representation and complex temporal dynamics using residual ConvNet. | Recurrent LSTM |
T. Xu and Y. Zhou [71], 2018 | Acceleration of the centre-of-mass (COM) and 3D skeleton data | LSTM network
Mazurek et al. [73], 2018 | Kinematic features and mel-frequency-cepstrum-related features | SVM, ANN and Naïve Bayes classifier (NBC)
Akagunduz et al. [74], 2016 | Silhouette Orientation Volume (SOV) feature, bag-of-words approach for characterization, K-medoids clustering for constructing codebook. | Naïve Bayes classifier |
Yang et al. [85], 2016 | Calculated the floor plane, shape information and the fall threshold. Depth images were preprocessed with a median filter. | V-disparity map and least square method
Aslan et al. [75], 2015 | Curvature Scale Space (CSS) features and Fisher Vector (FV) encoding | SVM classifier |
Yang et al. [84], 2015 | Extracted silhouettes with SGM and calculated thresholds for the distances from the head and centroid to the floor plane. | STC algorithm
Gasparrini et al. [83], 2014 | Uses head–ground and head–shoulder distance gap and head dimension features and calculates threshold for fall. | Ad-Hoc segmentation algorithm |
Bian et al. [77], 2014 | 3D human body joints extraction and tracking using RDT algorithm. | SVM classifier |
M. Kepski & B. Kwolek [78], 2014 | Accelerometer and features such as head–floor distance, person area and shape’s major length to width were used. | KNN classifier |
M. Kepski & B. Kwolek [82], 2013 | Extracts ground plane distance and uses segmented depth reference images. | v-disparity, Hough transform and the RANSAC algorithm |
Zhang et al. [81], 2012 | Combines viewpoint invariance, simple system setup, and statistical decision making. Uses features such as distance from the floor and acceleration and computed threshold. | Background Subtraction algorithm |
Nghiem et al. [80], 2012 | Uses centroid speed and position as the main features, and incorporates the head detection algorithm | Modified HOG Algorithm |
Rougier et al. [79], 2011 | Uses features such as human centroid height relative to the ground and body velocity. Ground plane detection and segmentation were performed. | V-disparity approach
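Several of the threshold-based detectors in the table above reduce to the same two cues: the silhouette becoming wider than tall, and a sudden downward motion of its centroid. The sketch below illustrates that shared idea only; the `BoundingBox` helper and both threshold values are hypothetical, not taken from any single study.

```python
# Illustrative fall-cue sketch: width-to-height ratio + downward centroid
# velocity over a window of person bounding boxes from segmented depth frames.
# All thresholds are placeholder values, not a specific study's parameters.
from dataclasses import dataclass

@dataclass
class BoundingBox:
    x: float        # top-left corner, image coordinates (+y points down)
    y: float
    width: float
    height: float

    @property
    def aspect_ratio(self) -> float:
        return self.width / self.height

    @property
    def centroid_y(self) -> float:
        return self.y + self.height / 2.0

def detect_fall(boxes, fps=30.0, ratio_thresh=1.0, velocity_thresh=100.0):
    """Flag a fall when the final silhouette is wider than tall AND the
    centroid dropped faster than velocity_thresh (pixels/second) at some
    point in the window."""
    if len(boxes) < 2:
        return False
    max_down_velocity = max(
        (curr.centroid_y - prev.centroid_y) * fps   # +y is downward
        for prev, curr in zip(boxes, boxes[1:])
    )
    return boxes[-1].aspect_ratio > ratio_thresh and max_down_velocity > velocity_thresh
```

A real system would smooth the box sequence first; raw per-frame boxes from depth segmentation are noisy, which is why several surveyed methods pair such cues with a learned classifier.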
Study with Year | Key Points & Features | Computing Technique Used |
---|---|---|
M. Amsaprabhaa et al. [89], 2022 | Used spatiotemporal kinematic gait features. | CNN
Murthy et al. [88], 2021 | Uses gait energy images | Deep convolutional neural network (DCNN) |
Xu et al. [90], 2019 | Skeleton tracking technology of Microsoft Kinect v2 sensor, Body tracker (NITE) | Optimized BP neural network |
Baldewijns et al. [95], 2014 | Calculates step length and time, centre of mass (COM), mean position, etc. Used connected component analysis to remove noisy pixels | Player detection algorithm |
A. Dubois & F. Charpillet [91], 2014 | Extracted length and duration of steps and speed of the gait, tracks centre-of-mass. | Hidden Markov Model (HMM) |
Parajuli et al. [92], 2012 | Measures gait and change in posture from sitting to standing or vice versa. Data transformation, cleaning and reduction were performed. | SVM classifier |
E.E. Stone & M. Skubic [94], 2011 | Measures stride-to-stride gait variability and assesses the ability of the two vision-based monitoring systems. | Background subtraction technique |
E.E. Stone & M. Skubic [93], 2011 | Measures temporal and spatial gait parameters, also measures walking speed, stride length, stride time, etc. | Background subtraction algorithm |
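The gait-based detectors above recover the same core spatiotemporal parameters: step length and step time from skeleton or silhouette trajectories. A minimal sketch of that extraction, assuming per-frame forward ankle positions are already available from a skeleton tracker (the data layout is illustrative):

```python
# Illustrative gait-parameter extraction: a step boundary is taken where the
# leading ankle swaps (sign change of the left-right position difference);
# step length is the largest inter-ankle separation within that step.
def gait_parameters(left_ankle_x, right_ankle_x, fps=30.0):
    """left/right_ankle_x: per-frame forward positions (metres) of each ankle.
    Returns (mean step length in metres, mean step time in seconds)."""
    diffs = [l - r for l, r in zip(left_ankle_x, right_ankle_x)]
    step_lengths, step_frames = [], []
    peak = abs(diffs[0])
    for i in range(1, len(diffs)):
        if diffs[i - 1] * diffs[i] < 0:     # leading foot swapped: step done
            step_frames.append(i)
            step_lengths.append(peak)
            peak = abs(diffs[i])
        else:
            peak = max(peak, abs(diffs[i]))
    step_times = [(b - a) / fps for a, b in zip(step_frames, step_frames[1:])]
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return mean(step_lengths), mean(step_times)
```

Stride-level parameters (as in [93,94]) follow by pairing alternate steps; variability measures are then simple statistics over the per-step values.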
Study with Year | Key Points & Features | Computing Technique Used |
---|---|---|
S.K. Yadav et al. [102], 2022 | Used geometrical and kinematic features. | CNN, LSTM, fully connected layer
X. Ji et al. [101], 2021 | Used a frame-level feature termed depth-oriented gradient vector (DOGV) that captures human appearance and motion. | 3D ResNet-based CNN
M.F. Bulbul and H. Ali [111], 2021 | Motion and static history images were used. LBP algorithm and GLAC descriptor were also used. | KELM classifier. |
Jaouedi et al. [96], 2020 | Uses visual, temporal and 2D human skeleton features and a Kalman filter. A hybrid combination of different models was used. | RNN, CNN, transfer learning
Srivastav et al. [112], 2019 | Integration of a super-resolution image estimator and a 2D multi-person pose estimator in a joint architecture | Modified RTPose network |
Phyo et al. [97], 2019 | Motion history images extracted using Color Skl-MHI and relative distance using RJI. Used image processing. | DCNN |
A. Bagate & M. Shah [98], 2019 | Uses spatial, i.e., skeletal joints and temporal features and reduces the convolution layer. | Convolution Neural Network |
Gu et al. [99], 2018 | MHI and evaluated on both 3D human action datasets RGBD-HuDaAct and NTU RGB+D. | ResNet-101 |
Uddin et al. [100], 2017 | Local directional strengths features were extracted by PCA, GDA and LDPP | Deep Belief network (DBN) |
Jalal et al. [103], 2017 | Extracts 3D human silhouettes and spatiotemporal joints, fused with several additional features. | Hidden Markov Model (HMM)
Chen et al. [106], 2015 | Depth motion maps (DMMs) and local binary patterns (LBPs) were used to capture motion cues and to achieve compact feature representation. | KELM classifier |
Jalal et al. [107], 2014 | Skeletal model and joint position were collected and life logs that contains human daily activities were generated. | Hidden Markov Model (HMM) |
Jalal et al. [109], 2014 | Human skeletal images with joint information were produced that generate life logs and also utilize magnitude and directional angular features from the joint points. | Hidden Markov Model (HMM) |
A. Jalal & S. Kamal [110], 2013 | Fused color and depth video, extracted forward and backward feature vectors and calculated several other features that describe human body information. | Hidden Markov Model (HMM) and fused time-series classifier
Kamal et al. [104], 2016 | Spatial depth shape and temporal joints features were fused. Human silhouettes extracted using noisy background subtraction and floor removal techniques. | Modified Hidden Markov model (M-HMM) |
Farooq et al. [105], 2015 | Extracts depth silhouettes and body skin joint features using distance position and centroid distance. | K-means clustering
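A pattern shared by many of the skeleton-based methods above is: turn a frame's 3D joints into a descriptor that is insensitive to camera placement, then hand it to a classifier. The sketch below illustrates the simplest version of that pipeline (pairwise joint distances plus nearest-centroid matching); the joint layout, poses, and classifier are illustrative stand-ins, not any surveyed study's exact method.

```python
# Illustrative skeleton-feature pipeline: pairwise joint distances are
# invariant to camera rotation/translation, a common trick in depth-based
# activity recognition; the nearest-centroid rule stands in for the surveyed
# classifiers (HMM, SVM, DBN, etc.).
import math
from itertools import combinations

def pose_descriptor(joints):
    """joints: list of (x, y, z) tuples for one skeleton frame."""
    return [math.dist(a, b) for a, b in combinations(joints, 2)]

def nearest_centroid(descriptor, centroids):
    """centroids: {label: reference descriptor}; returns the closest label."""
    return min(centroids, key=lambda lbl: math.dist(descriptor, centroids[lbl]))
```

Temporal models such as the HMMs in [103,107,109] extend this by treating the per-frame descriptors (or quantized versions of them) as an observation sequence rather than classifying frames independently.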
Study with Year | Key Points & Features | Computing Technique Used |
---|---|---|
Wang et al. [116], 2020 | Trituple gait silhouettes (TTGS) feature | Multichannel CNN
A.H. Bari & M.L. Gavrilova [115], 2019 | Two features of joint relative triangle area (JRTA) and joint relative cosine dissimilarity (JRCD) | DL model |
Bei et al. [124], 2018 | Step length and gait cycle extracted using the zero-crossing detection method, combining gait symmetry and spatiotemporal parameters. | K-means and Bayesian method |
A. Dubois & M. Charpillet [123], 2017 | Centre of mass and vertical distribution silhouette features were extracted, measuring the degree of frailty. | Hidden Markov model (HMM) |
M.Z. Uddin & M.R. Kim [114], 2016 | Local directional features and a Restricted Boltzmann Machine (RBM) | Deep Belief Network (DBN)
Dao et al. [122], 2015 | Generates BVH file, uses motion analysis, motion visualization and integrates data capturing, data filtering, body reconstruction, and animation. | SVM classifier |
Chaaraoui et al. [120], 2015 | Joint motion history feature (JMH) encodes spatial and temporal information. | BagOfKeyPoses algorithm |
A. Nandy & P. Chakraborty [118], 2015 | Knee and hip angular movement, using IGOD biometric suit. Features were measured by Fisher’s discriminant analysis. | Naïve Bayes’ rule and k-Nearest Neighbor |
M. Gabel et al. [66], 2012 | Measures arm kinematics, stride duration, used 3D virtual skeleton to extract body gaits | Supervised learning approach, MART algorithm, and regression trees |
Uddin et al. [117], 2011 | Spatiotemporal features were extracted and feature space was generated using ICA and PCA, with the background removed by Gaussian probability distribution function. | Hidden Markov Model (HMM) |
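The joint relative triangle area (JRTA) feature cited above [115] summarizes the relative configuration of a joint triplet as the area of the triangle they span. A minimal sketch of that geometric computation (the choice of triplet in practice is a design decision of the method, not shown here):

```python
# JRTA-style feature: area of the 3D triangle spanned by three joints,
# computed via the cross-product magnitude. Which triplets to use is up to
# the method; this shows only the per-triplet computation.
import math

def triangle_area(p, q, r):
    """p, q, r: (x, y, z) joint positions. Returns the triangle area."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    cross = (u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0])
    return 0.5 * math.sqrt(sum(c * c for c in cross))
```

Because the area depends only on inter-joint geometry, it is unchanged by camera rotation and translation, which is what makes such features attractive for view-independent gait description.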
Dataset | Year | Activity | Brief Description | Recently Used in |
---|---|---|---|---|
VISTA dataset [131] | 2022 | Basic gestures and daily activities | Contains 7682 action instances for the training phase and 3361 action instances for the testing phase. | New dataset (no published work available) |
ETRI-Activity3D [130] | 2020 | Daily seniors’ Activity | It contains 112,620 samples including RGB videos, depth maps, and skeleton sequences; 100 subjects performed 55 daily activities. | [132,133,134]
Kinder-Gator [129] | 2018 | Human Motion Recognition | The dataset contains joint positions for 58 motions, such as wave, walk, kick, etc., from ten children (ages 5 to 9) and ten adults (ages 19 to 32). It also contains 19 RGB videos and 1159 motion trials. | [135,136,137] |
PKU-MMD [128] | 2017 | Human Action Analysis | Collection of 1076 long action sequences covering 51 action classes. It also contains around 20,000 action instances and 5.4 million frames. | [138,139,140]
NTU RGB+D [127] | 2016 | Human Activity Analysis | Consists of 60 different classes and 56,880 video samples captured from 40 distinct human subjects using 80 camera viewpoints. | [141,142,143] |
K3Da [126] | 2015 | Human Motion Analysis | Includes motions collected from fifty-four participants, young and older men and women aged 18–81 years, covering balancing, walking, sitting, and standing. | [144,145,146]
ACT4 [125] | 2012 | Human Daily Action | The dataset contains 6844 action clips with both color and depth information, collected from 4 viewpoints. | [147,148,149] |
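Most of the datasets above ship raw depth maps, and a common first step in the surveyed pipelines is back-projecting them to 3D points with the pinhole camera model. A sketch of that conversion, with illustrative intrinsics rather than any specific sensor's calibration:

```python
# Back-project a depth map to 3D points using the pinhole model:
#   x = (u - cx) * z / fx,  y = (v - cy) * z / fy
# fx, fy, cx, cy are the depth camera's intrinsics (placeholder values in the
# usage below, not a real sensor's calibration).
def depth_to_points(depth, fx, fy, cx, cy):
    """depth: 2D list of metres, 0 = invalid pixel. Returns (x, y, z) tuples."""
    pts = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:
                pts.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return pts
```

Skeleton-based datasets (NTU RGB+D, PKU-MMD, ETRI-Activity3D) sidestep this step by shipping joint coordinates directly, which is one reason skeleton sequences dominate the methods surveyed in Section 3.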
Study with Year | Methods | Dataset with Accuracy | Used Resources | Running Time | Activities | Conditions | Drawbacks |
---|---|---|---|---|---|---|---|
In [102], 2022 | ConvLSTM | KinectHAR (98.89%) | NVIDIA TITAN X GPU | Not mentioned | Standing, walking slow, walking fast, sitting, bending, fall, and lying down activities. | Independent of the pose, position of the camera, individuals, clothing, etc. | Provides very high accuracy but is costly due to its complex model structure.
In [111], 2021 | KELM Classifier | MSRAction3D (97.44%), DHA (99.13%) and UTD-MHAD (88.37%) | Desktop with Intel i5-7500 quad-core processor and 16 GB RAM | 731.4 ± 48.8 ms/40 frames | Sport actions, daily activities, and training exercises. | Consistent real-time operation: processes 40 depth images in under a second. | This method did not remove noise to improve performance; thus, some misclassifications were observed in activities such as waving, clapping, skipping, etc.
In [150], 2020 | Multichannel CNN (MCNN) | CASIA gait B and OU-ISIR | Not mentioned | Not mentioned | Dynamic gait recognition | Handles pauses in the walking cycle, agile leg motion, walking wearing coats and walking carrying bags. | Performance drops because only silhouette images were used, even though the original gait videos were available.
In [97], 2019 | Image Processing and Deep Learning | UTKinect (97%) and CAD-60 (96.15%) | Not mentioned | 0.0081 s (UTKinect Action-3D) | Daily activities such as drinking water, answering the phone, and cooking. | In real-time embedded systems | Complex actions related to health problems, such as headaches and vomiting, cannot be detected with this approach.
In [124], 2018 | K-means and Bayesian | Own dataset of 120 walking sequences | Lenovo Y700-15ISK with an i7-6700HQ CPU and 16 GB RAM | Not mentioned | Kinematic leg swing characteristics combined with spatiotemporal parameters such as step length and gait cycle. | Focused on gait analysis using frontal walking sequences. | Variation in the subject’s clothing decreases the accuracy.
In [105], 2015 | K-means Clustering | Own dataset with 9 different activities (89.72%) | Intel Pentium IV 2.63 GHz PC with 2 GB RAM | Not mentioned | Walking, sitting down, exercising, preparing food, standing up, cleaning, watching TV, eating a meal and lying down. | In complex situations such as self-occlusion, overlapping among people, hidden body parts, etc. | Comparatively low accuracy rate as it handles complex activities.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Momin, M.S.; Sufian, A.; Barman, D.; Dutta, P.; Dong, M.; Leo, M. In-Home Older Adults’ Activity Pattern Monitoring Using Depth Sensors: A Review. Sensors 2022, 22, 9067. https://doi.org/10.3390/s22239067