Human Activity Recognition Based on Embedded Sensor Data Fusion for the Internet of Healthcare Things
Abstract
1. Introduction
1.1. Motivation
1.2. Related Work
1.3. Contribution of Current Work
- Presenting a lightweight human activity recognition system based on wearable sensors.
- Implementing a robust real-time model based on the Random Forest algorithm that outperforms other well-known classifiers and deep learning models.
- Handling a complex dataset of thirteen human activities and gestures, improving on state-of-the-art results under both subject-dependent and stratified k-fold cross-validation measures, and verifying model performance on a second dataset, WISDM v1.
- Conducting a sensitivity analysis of the applied model parameters, namely the Random Forest size and depth (a minimal illustrative sweep follows this list).
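As an illustration of the last point, the following is a minimal sketch, not the code used in this work, of a sensitivity sweep over the Random Forest size and depth using scikit-learn; the feature matrix X and labels y are random stand-ins for the 37 features described in Section 2.2.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Random stand-ins for the real feature matrix (n_windows x 37) and labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 37))
y = rng.integers(0, 13, size=500)

# Sweep forest size and maximum depth; cross_val_score uses stratified
# 5-fold splitting by default for classifiers.
for n_estimators in (50, 100, 200):
    for max_depth in (5, 15, 25):
        clf = RandomForestClassifier(n_estimators=n_estimators,
                                     max_depth=max_depth, random_state=0)
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"trees={n_estimators:3d} depth={max_depth:2d} "
              f"acc={scores.mean():.4f} +/- {scores.std():.4f}")
```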
1.4. Paper Organization
2. The Applied Approach
2.1. Dataset
2.2. Sensory Data Processing
- (F1–12) Mean and standard deviation (STD) of each axis of the acceleration signal (AS) and of its jerk signal (JS)
- (F13–24) Autoregressive (AR) model coefficients for AS
- (F25) Signal magnitude area (SMA)
- (F26) Tilt angle (TA)
- (F27–30) Mean, STD, entropy, and power of the roll angle (RA), computed by Equation (1)
- (F31) Angle of the x-component of AS, Equation (2)
- (F32–34) Entropy of JS
- (F35–37) Power of AS
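For illustration, the sketch below shows how a subset of these features (F1–12, F25, F26, and F35–37) could be computed over one window of tri-axial acceleration; the helper name and the tilt-angle formula are assumptions for illustration, not the authors' exact definitions.

```python
import numpy as np

def basic_features(acc, fs=50.0):
    """Compute a subset of the listed features for one window.

    acc: array of shape (n_samples, 3), one tri-axial acceleration window.
    fs:  sampling frequency in Hz (50 Hz for the dataset used here).
    """
    jerk = np.diff(acc, axis=0) * fs             # jerk signal (JS)

    feats = {}
    # (F1-12) mean and STD per axis of AS and JS
    feats["as_mean"], feats["as_std"] = acc.mean(axis=0), acc.std(axis=0)
    feats["js_mean"], feats["js_std"] = jerk.mean(axis=0), jerk.std(axis=0)

    # (F25) signal magnitude area: mean of the summed absolute axis values
    feats["sma"] = np.abs(acc).sum(axis=1).mean()

    # (F26) tilt angle: assumed here as the angle between the mean
    # (gravity-dominated) acceleration vector and the vertical z-axis
    g = acc.mean(axis=0)
    feats["tilt"] = np.arccos(g[2] / np.linalg.norm(g))

    # (F35-37) power of AS per axis (mean squared amplitude)
    feats["power"] = (acc ** 2).mean(axis=0)
    return feats
```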
3. Experimental Results and Analysis
3.1. Setup
3.2. Subject-Dependent Evaluation
3.3. Stratified k-Fold Cross-Validation
4. Sensitivity Analysis for Model Parameters
5. Comparison with Previous Studies
6. Applied Model Performance for WISDM Dataset
7. Discussion
8. Conclusions and Future Trends
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Baker, S.B.; Xiang, W.; Atkinson, I. Internet of Things for Smart Healthcare: Technologies, Challenges, and Opportunities. IEEE Access 2017, 5, 26521–26544.
- Qadri, Y.A.; Nauman, A.; Zikria, Y.B.; Vasilakos, A.V.; Kim, S.W. The Future of Healthcare Internet of Things: A Survey of Emerging Technologies. IEEE Commun. Surv. Tutor. 2020, 22, 1121–1167.
- Kamruzzaman, M.M.; Alrashdi, I.; Alqazzaz, A. New Opportunities, Challenges, and Applications of Edge-AI for Connected Healthcare in Internet of Medical Things for Smart Cities. J. Healthc. Eng. 2022, 2022, 2950699.
- Dimitrov, D.V. Medical internet of things and big data in healthcare. Healthc. Inform. Res. 2016, 22, 156–163.
- Wang, Y.; Cang, S.; Yu, H. A survey on wearable sensor modality centred human activity recognition in health care. Expert Syst. Appl. 2019, 137, 167–190.
- Pillai, S.G.; Haldorai, K.; Seo, W.S.; Kim, W.G. COVID-19 and hospitality 5.0: Redefining hospitality operations. Int. J. Hosp. Manag. 2021, 94, 102869.
- Demir, K.A.; Döven, G.; Sezen, B. Industry 5.0 and human–robot co-working. Procedia Comput. Sci. 2019, 158, 688–695.
- Vanagas, G.; Engelbrecht, R.; Damaševičius, R.; Suomi, R.; Solanas, A. eHealth Solutions for the Integrated Healthcare. J. Healthc. Eng. 2018, 2018, 3846892.
- Hernández-Chan, G.S.; Ceh-Varela, E.E.; Sanchez-Cervantes, J.L.; Villanueva-Escalante, M.; Rodríguez-González, A.; Pérez-Gallardo, Y. Collective intelligence in medical diagnosis systems: A case study. Comput. Biol. Med. 2016, 74, 45–53.
- Rodríguez-González, A.; Torres-Niño, J.; Mayer, M.A.; Alor-Hernandez, G.; Wilkinson, M.D. Analysis of a multilevel diagnosis decision support system and its implications: A case study. Comput. Math. Methods Med. 2012, 2012, e0148991.
- Barsocchi, P.; Calabrò, A.; Crivello, A.; Daoudagh, S.; Furfari, F.; Girolami, M.; Marchetti, E. COVID-19 & privacy: Enhancing of indoor localization architectures towards effective social distancing. Array 2021, 9, 100051.
- Kiran, S.; Khan, M.A.; Javed, M.Y.; Alhaisoni, M.; Tariq, U.; Nam, Y.; Damaševičius, R.; Sharif, M. Multi-Layered Deep Learning Features Fusion for Human Action Recognition. Comput. Mater. Contin. 2021, 69, 4061–4075.
- Şengül, G.; Ozcelik, E.; Misra, S.; Damaševičius, R.; Maskeliūnas, R. Fusion of smartphone sensor data for classification of daily user activities. Multimed. Tools Appl. 2021, 80, 33527–33546.
- Şengül, G.; Karakaya, M.; Misra, S.; Abayomi-Alli, O.O.; Damaševičius, R. Deep learning based fall detection using smartwatches for healthcare applications. Biomed. Signal Process. Control 2022, 71, 103242.
- Khan, M.F.; Ghazal, T.M.; Said, R.A.; Fatima, A.; Abbas, S.; Khan, M.; Issa, G.F.; Ahmad, M.; Khan, M.A. An IoMT-Enabled Smart Healthcare Model to Monitor Elderly People Using Machine Learning Technique. Comput. Intell. Neurosci. 2021, 2021, 2487759.
- Farsi, M. Application of ensemble RNN deep neural network to the fall detection through IoT environment. Alex. Eng. J. 2021, 60, 199–211.
- Moualla, S.; Khorzom, K.; Jafar, A. Improving the performance of machine learning-based network intrusion detection systems on the UNSW-NB15 dataset. Comput. Intell. Neurosci. 2021, 2021, 5557577.
- Maskeliūnas, R.; Damaševičius, R.; Segal, S. A review of internet of things technologies for ambient assisted living environments. Future Internet 2019, 11, 259.
- Agac, S.; Shoaib, M.; Incel, O.D. Context-aware and dynamically adaptable activity recognition with smart watches: A case study on smoking. Comput. Electr. Eng. 2021, 90, 106949.
- Patalas-Maliszewska, J.; Halikowski, D.; Damaševičius, R. An automated recognition of work activity in industrial manufacturing using convolutional neural networks. Electronics 2021, 10, 2946.
- Bharti, R.; Khamparia, A.; Shabaz, M.; Dhiman, G.; Pande, S.; Singh, P. Prediction of heart disease using a combination of machine learning and deep learning. Comput. Intell. Neurosci. 2021, 2021, 8387680.
- Girčys, R.; Kazanavičius, E.; Maskeliūnas, R.; Damaševičius, R.; Woźniak, M. Wearable system for real-time monitoring of hemodynamic parameters: Implementation and evaluation. Biomed. Signal Process. Control 2020, 59, 101873.
- Olmedo-Aguirre, J.O.; Reyes-Campos, J.; Alor-Hernández, G.; Machorro-Cano, I.; Rodríguez-Mazahua, L.; Sánchez-Cervantes, J.L. Remote Healthcare for Elderly People Using Wearables: A Review. Biosensors 2022, 12, 73.
- Cruz-Ramos, N.A.; Alor-Hernández, G.; Colombo-Mendoza, L.O.; Sánchez-Cervantes, J.L.; Rodríguez-Mazahua, L.; Guarneros-Nolasco, L.R. mHealth Apps for Self-Management of Cardiovascular Diseases: A Scoping Review. Healthcare 2022, 10, 322.
- Yang, Z. An Efficient Automatic Gait Anomaly Detection Method Based on Semisupervised Clustering. Comput. Intell. Neurosci. 2021, 2021, 8840156.
- Priya, S.J.; Rani, A.J.; Subathra, M.S.P.; Mohammed, M.A.; Damaševičius, R.; Ubendran, N. Local pattern transformation based feature extraction for recognition of Parkinson's disease based on gait signals. Diagnostics 2021, 11, 1395.
- Bokhari, S.M.; Sohaib, S.; Khan, A.R.; Shafi, M. DGRU based human activity recognition using channel state information. Measurement 2021, 167, 108245.
- Cui, W.; Li, B.; Zhang, L.; Chen, Z. Device-free single-user activity recognition using diversified deep ensemble learning. Appl. Soft Comput. 2021, 102, 107066.
- Al-Qaness, M.A. Device-free human micro-activity recognition method using WiFi signals. Geo-Spat. Inf. Sci. 2019, 22, 128–137.
- Al-Qaness, M.A.; Abd Elaziz, M.; Kim, S.; Ewees, A.A.; Abbasi, A.A.; Alhaj, Y.A.; Hawbani, A. Channel state information from pure communication to sense and track human motion: A survey. Sensors 2019, 19, 3329.
- Ryselis, K.; Petkus, T.; Blažauskas, T.; Maskeliūnas, R.; Damaševičius, R. Multiple Kinect based system to monitor and analyze key performance indicators of physical training. Hum.-Centric Comput. Inf. Sci. 2020, 10, 51.
- Al-Qaness, M.A.; Dahou, A.; Abd Elaziz, M.; Helmi, A. Multi-ResAtt: Multilevel Residual Network with Attention for Human Activity Recognition Using Wearable Sensors. IEEE Trans. Ind. Inform. 2022.
- Siam, A.I.; Almaiah, M.A.; Al-Zahrani, A.; Elazm, A.A.; El Banby, G.M.; El-Shafai, W.; El-Samie, F.E.A.; El-Bahnasawy, N.A. Secure Health Monitoring Communication Systems Based on IoT and Cloud Computing for Medical Emergency Applications. Comput. Intell. Neurosci. 2021, 2021, 8016525.
- Sousa Lima, W.; Souto, E.; El-Khatib, K.; Jalali, R.; Gama, J. Human Activity Recognition Using Inertial Sensors in a Smartphone: An Overview. Sensors 2019, 19, 3213.
- Svetnik, V.; Liaw, A.; Tong, C.; Culberson, J.C.; Sheridan, R.P.; Feuston, B.P. Random forest: A classification and regression tool for compound classification and QSAR modeling. J. Chem. Inf. Comput. Sci. 2003, 43, 1947–1958.
- Priddy, K.L.; Keller, P.E. Artificial Neural Networks: An Introduction; SPIE Press: Bellingham, WA, USA, 2005; Volume 68.
- Wang, L. Support Vector Machines: Theory and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2005; Volume 177.
- Murphy, K.P. Naive Bayes classifiers. Univ. Br. Columbia 2006, 18, 1–8.
- Khan, R.; Kumar, S.; Srivastava, A.K.; Dhingra, N.; Gupta, M.; Bhati, N.; Kumari, P. Machine Learning and IoT-Based Waste Management Model. Comput. Intell. Neurosci. 2021, 2021, 5942574.
- Alsamhi, S.H.; Almalki, F.A.; Al-Dois, H.; Ben Othman, S.; Hassan, J.; Hawbani, A.; Sahal, R.; Lee, B.; Saleh, H. Machine learning for smart environments in B5G networks: Connectivity and QoS. Comput. Intell. Neurosci. 2021, 2021, 6805151.
- Atalaa, B.A.; Ziedan, I.; Alenany, A.; Helmi, A. Feature Engineering for Human Activity Recognition. Int. J. Adv. Comput. Sci. Appl. 2021, 12, 160–167.
- Shoaib, M.; Bosch, S.; Incel, O.D.; Scholten, H.; Havinga, P.J. Complex human activity recognition using smartphone and wrist-worn motion sensors. Sensors 2016, 16, 426.
- Kwapisz, J.R.; Weiss, G.M.; Moore, S.A. Activity recognition using cell phone accelerometers. ACM SIGKDD Explor. Newsl. 2011, 12, 74–82.
- Demrozi, F.; Pravadelli, G.; Bihorac, A.; Rashidi, P. Human activity recognition using inertial, physiological and environmental sensors: A comprehensive survey. IEEE Access 2020, 8, 210816–210836.
- Baldominos, A.; Cervantes, A.; Saez, Y.; Isasi, P. A comparison of machine learning and deep learning techniques for activity recognition using mobile devices. Sensors 2019, 19, 521.
- Alo, U.R.; Nweke, H.F.; Teh, Y.W.; Murtaza, G. Smartphone Motion Sensor-Based Complex Human Activity Identification Using Deep Stacked Autoencoder Algorithm for Enhanced Smart Healthcare System. Sensors 2020, 20, 6300.
- Xia, K.; Huang, J.; Wang, H. LSTM-CNN architecture for human activity recognition. IEEE Access 2020, 8, 56855–56866.
- Ordóñez, F.J.; Roggen, D. Deep convolutional and LSTM recurrent neural networks for multimodal wearable activity recognition. Sensors 2016, 16, 115.
- Ronald, M.; Poulose, A.; Han, D.S. iSPLInception: An Inception-ResNet deep learning architecture for human activity recognition. IEEE Access 2021, 9, 68985–69001.
- Mekruksavanich, S.; Jitpattanakul, A. Deep convolutional neural network with RNNs for complex activity recognition using wrist-worn wearable sensor data. Electronics 2021, 10, 1685.
- Anguita, D.; Ghio, A.; Oneto, L.; Parra, X.; Reyes-Ortiz, J.L. A public domain dataset for human activity recognition using smartphones. ESANN 2013, 3, 437–442.
- Hämäläinen, W.; Järvinen, M.; Martiskainen, P.; Mononen, J. Jerk-based feature extraction for robust activity recognition from acceleration data. In Proceedings of the 2011 11th International Conference on Intelligent Systems Design and Applications, Córdoba, Spain, 22–24 November 2011; pp. 831–836.
- Quiroz, J.C.; Banerjee, A.; Dascalu, S.M.; Lau, S.L. Feature selection for activity recognition from smartphone accelerometer data. Intell. Autom. Soft Comput. 2017, 1–9.
- Dehghani, A.; Sarbishei, O.; Glatard, T.; Shihab, E. A quantitative comparison of overlapping and non-overlapping sliding windows for human activity recognition using inertial sensors. Sensors 2019, 19, 5026.
- Garcia-Ceja, E.; Uddin, M.Z.; Torresen, J. Classification of recurrence plots' distance matrices with a convolutional neural network for activity recognition. Procedia Comput. Sci. 2018, 130, 157–163.
- Gao, W.; Zhang, L.; Teng, Q.; He, J.; Wu, H. DanHAR: Dual attention network for multimodal human activity recognition using wearable sensors. Appl. Soft Comput. 2021, 111, 107728.
- Catal, C.; Tufekci, S.; Pirmit, E.; Kocabag, G. On the use of ensemble of classifiers for accelerometer-based activity recognition. Appl. Soft Comput. 2015, 37, 1018–1022.
- Ignatov, A. Real-time human activity recognition from accelerometer data using Convolutional Neural Networks. Appl. Soft Comput. 2018, 62, 915–922.
- Suwannarat, K.; Kurdthongmee, W. Optimization of Deep Neural Network-based Human Activity Recognition for a Wearable Device. Heliyon 2021, 7, e07797.
- Abdel-Basset, M.; Hawash, H.; Chakrabortty, R.K.; Ryan, M.; Elhoseny, M.; Song, H. ST-DeepHAR: Deep learning model for human activity recognition in IoHT applications. IEEE Internet Things J. 2020, 8, 4969–4979.
- Zhang, Y.; Zhang, Z.; Zhang, Y.; Bao, J.; Zhang, Y.; Deng, H. Human activity recognition based on motion sensor using U-Net. IEEE Access 2019, 7, 75213–75226.
- Zhang, H.; Xiao, Z.; Wang, J.; Li, F.; Szczerbicki, E. A novel IoT-perceptive human activity recognition (HAR) approach using multihead convolutional attention. IEEE Internet Things J. 2019, 7, 1072–1080.
- Fan, C.; Gao, F. Enhanced human activity recognition using wearable sensors via a hybrid feature selection method. Sensors 2021, 21, 6434.
Parameter | Information
---|---
# Subjects | 10
# Activities | 13
Total # Observations | 1,170,000
Missing values | No
Device | Two Samsung Galaxy S2 smartphones
Position on body | Right pocket and right wrist
Sensors | Accelerometer, linear accelerometer, gyroscope, and magnetometer
Sampling frequency | 50 Hz
Activity | Abbreviation | Duration (min) |
---|---|---|
Biking | BK | 3 |
Having Coffee | CO | 5 |
Walking Downstairs | DS | 3 |
Eating | ET | 5 |
Jogging | JO | 3 |
Sitting | ST | 3 |
Smoking | SM | 5 |
Standing | SN | 3 |
Giving a Talk | TK | 5 |
Typing | TP | 5 |
Walking Upstairs | UP | 3 |
Walking | WK | 3 |
Writing | WR | 5 |
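Before feature extraction, the continuous 50 Hz recordings are segmented into fixed-length windows (the current model uses 2.56 s segments, i.e., 128 samples; see Section 5). The snippet below is a minimal sketch of such segmentation; the 50% overlap is an illustrative assumption, not a detail stated here.

```python
import numpy as np

def sliding_windows(signal, win_len=128, step=64):
    """Segment an (n_samples, n_channels) recording into fixed windows.

    win_len = 128 corresponds to 2.56 s at 50 Hz; step = 64 assumes a
    50% overlap, an illustrative choice rather than a stated detail.
    """
    n = (len(signal) - win_len) // step + 1
    return np.stack([signal[i * step : i * step + win_len] for i in range(n)])

# Example: 60 s of tri-axial accelerometer data at 50 Hz (3000 samples).
rec = np.zeros((3000, 3))
windows = sliding_windows(rec)   # shape: (45, 128, 3)
```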
Classifier | Function Call | Settings and Parameters
---|---|---
RF | RandomForestClassifier() | # estimators = 200; max. depth = 25; min. samples split = 2
MLP | MLPClassifier() | solver: quasi-Newton method; # hidden neurons = 75; activation function: tanh; max. # iterations = 1000; momentum = 0.9; initial learning rate = 0.01; validation ratio = 15%
SVM | svm.SVC() | kernel: radial basis function; polynomial degree = 3
NB | GaussianNB() | Parameters of the Gaussian distribution are estimated via maximum likelihood
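For concreteness, these settings map onto scikit-learn constructor calls roughly as follows; this is a plausible sketch rather than the authors' exact script. Note that momentum and the initial learning rate are only used by the 'sgd' solver, and the degree parameter is ignored by the RBF kernel.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn import svm

classifiers = {
    "RF": RandomForestClassifier(n_estimators=200, max_depth=25,
                                 min_samples_split=2),
    # 'lbfgs' is scikit-learn's quasi-Newton solver; momentum and
    # learning_rate_init are accepted but only take effect with solver='sgd'.
    "MLP": MLPClassifier(solver="lbfgs", hidden_layer_sizes=(75,),
                         activation="tanh", max_iter=1000,
                         validation_fraction=0.15),
    # degree=3 is the scikit-learn default and is ignored by the RBF kernel.
    "SVM": svm.SVC(kernel="rbf", degree=3),
    # Per-class Gaussian mean/variance are fitted by maximum likelihood.
    "NB": GaussianNB(),
}
```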
Classifier | Accuracy (Mean ± Std) | F-Measure (Mean ± Std) | Training Time (s) | Size on Disk (MB)
---|---|---|---|---
RF | 98.72 ± 0.1015 | 98.72 ± 0.1015 | 29.3 | 22.683
SVM | 97.43 ± 0.2279 | 97.42 ± 0.2398 | 19.69 | 13.593
MLP | 97.47 ± 0.3837 | 97.49 ± 0.3736 | 90.41 | 0.143
NB | 88.82 ± 0.3693 | 88.87 ± 0.3677 | 1 | 0.046
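The fold-wise means and standard deviations reported above could be produced with a loop of the following shape; a minimal sketch assuming scikit-learn's StratifiedKFold and a weighted F-measure, with X and y as before.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import StratifiedKFold

def evaluate(clf, X, y, n_splits=5):
    """Return (mean, std) of accuracy and weighted F-measure over folds."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    accs, f1s = [], []
    for train_idx, test_idx in skf.split(X, y):
        clf.fit(X[train_idx], y[train_idx])
        pred = clf.predict(X[test_idx])
        accs.append(accuracy_score(y[test_idx], pred))
        f1s.append(f1_score(y[test_idx], pred, average="weighted"))
    return (np.mean(accs), np.std(accs)), (np.mean(f1s), np.std(f1s))
```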
Reference | Input Signals | Segment Length (s) | Feature Extraction | Classifier | Accuracy (%) |
---|---|---|---|---|---|
Baldominos et al. [45] | Raw signals | 60 | Handcrafted | ET | 95.3 |
Baldominos et al. [45] | Raw signals | 60 | CNN hidden layers | FC layer | 85 |
Alo et al. [46] | Raw signals, magnitude vector, pitch and roll vectors | 2 | Sparse AE layers | FC layer | 97.13 |
Alo et al. [46] | Raw signals, magnitude vector, pitch and roll vectors | 2 | DBN hidden layers | DBN output layer | 91.57 |
Current model | Raw signals | 2.56 | Handcrafted | RF | 98.7 |
Ref. | BK | CO | DS | ET | JO | ST | SM | SN | TK | TP | UP | WK | WR | Accuracy (%) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Shoaib et al. [42] | 0.99 | 0.83 | 0.98 | 0.89 | 1 | 0.90 | 0.82 | 0.92 | 0.86 | 0.95 | 0.96 | 0.85 | 0.89 | 91.2 |
Current model | 0.99 | 0.92 | 0.91 | 0.99 | 0.99 | 0.76 | 0.95 | 0.94 | 0.97 | 0.98 | 0.83 | 0.80 | 0.97 | 92.54 |
Evaluation | Reference | Segment Length (s) | Feature Extraction | Classifier | Accuracy (%)
---|---|---|---|---|---
10-fold cross-validation | Kwapisz et al. [43] | 10 | Handcrafted | MLP | 91.7
 | Garcia-Ceja et al. [55] | 5 | CNN | FC layer | 94.2
 | Catal et al. [57] | 10 | Handcrafted | Ensemble of (LR, MLP, J48) | 91.62
 | Ignatov [58] | 10 | CNN + Stat. Feat. | FC layer | 93.32
 | Current model | 10 | Handcrafted | RF | 94
70%/30% split | Gao et al. [56] | 10 | CNN + Att. M. | FC layer | 98.85
 | Suwannarat et al. [59] | 8 | CNN | FC layer | 95
 | Abdel-Basset et al. [60] | 10 | CNN + R. B. + LSTM + Att. M. | MLP | 98.90
 | Zhang et al. [61] | 11.2 | CNN | FC layers | 96.4
 | Zhang et al. [62] | 10 | CNN + Att. | FC layer | 96.4
 | Current model | 10 | Handcrafted | RF | 98.56