An Unsupervised Method to Recognise Human Activity at Home Using Non-Intrusive Sensors
Abstract
1. Introduction
2. Related Work
2.1. Summary of the Main HAR Supervised Learning Methods
2.2. Summary of the Main HAR Unsupervised Learning Methods
3. The Proposed HAR Approach
3.1. SDHAR-HOME Dataset: Analysis and Description
- Non-intrusive sensor network: The database contains real-time measurements of the events captured by a sensor network deployed in the house. This network is composed of the following sensors: 8 motion sensors, 8 door contact sensors, 2 temperature and humidity sensors, 11 vibration sensors, 2 power consumption sensors and 2 lighting sensors. The sensors transmit low-power Zigbee signals to a central hub, which stores the information and manages the devices.
- Bluetooth beacon triangulation: Each resident wears a smart band, and a network of beacons deployed in the house (one beacon in each room) continuously measures the strength of the Bluetooth signal received from each smart band in order to locate the users within the home. This information helps to differentiate which user is performing a specific activity.
- Wearable devices: As mentioned in the previous point, each resident wears a smart band so that he or she can be located in the house. This smart band also provides information linked to the physical activity of each user (e.g., heart rate, calories, steps or data from the device’s gyroscopes). A sketch of how these three data streams can be merged into a single time-ordered record follows this list.
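To make the data layout concrete, the following minimal Python sketch shows one plausible way to merge the three streams into a single time-ordered event record. The field names, source names and values are illustrative assumptions, not the actual SDHAR-HOME schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reading:
    timestamp: float      # UNIX time of the measurement
    source: str           # e.g., "motion_kitchen", "beacon_kitchen", "band_user1"
    kind: str             # "event", "environment", "position" or "wearable"
    value: float          # sensor value (1.0 for binary activations)
    user: Optional[str]   # user attribution, when the source allows it

# A merged, time-ordered stream as a recogniser would consume it.
stream = sorted([
    Reading(1.0, "motion_kitchen", "event",        1.0, None),
    Reading(2.5, "beacon_kitchen", "position",   -54.0, "user1"),  # RSSI (dBm)
    Reading(3.2, "power_tv",       "environment", 87.0, None),     # watts
    Reading(4.0, "band_user1",     "wearable",    72.0, "user1"),  # heart rate
], key=lambda r: r.timestamp)
```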
3.2. Mathematical Principles and Application
- Hidden states $S$: These correspond to the system variables that are unknown and that are to be recognised. In the case being addressed, they correspond to the activities carried out by the users within the household. These hidden states can be represented as follows:

$$S = \{s_1, s_2, \ldots, s_N\} \quad (1)$$

In (1), $N$ corresponds to the total number of hidden states of the system and each $s_i$ refers, in this case, to one of the activities analysed.
- Observations $O$: The sequence of observations corresponds to the observable (and measurable) facts from which information can be extracted from the environment in which the system is located. In the case under consideration, this sequence corresponds to the information provided by the technology with which the household is equipped (e.g., sensors or imagery). The sequence of observations can be represented as follows:

$$O = \{o_1, o_2, \ldots, o_T\} \quad (2)$$

In (2), each observation $o_t$ is related to the time $t$ at which it occurs, and $T$ corresponds to the total number of observations to be analysed. The sequence of observations $O$ takes its values from the set of all possible measurements obtained from the environment where the model is located. Thus, the total set of possible observations $V$ can be represented as follows:

$$V = \{v_1, v_2, \ldots, v_M\} \quad (3)$$

In (3), $M$ represents the total number of different signals entering the system and each $v_k$ corresponds to one of the sensors.
- State transition matrix $A$: This corresponds to the matrix of transition probabilities between the different hidden states of the previously defined Markov network. For this reason, it is an $N \times N$ matrix. Denoting by $q_t$ the hidden state at time $t$:

$$A = [a_{ij}], \quad a_{ij} = P(q_{t+1} = s_j \mid q_t = s_i) \quad (4)$$

Moreover, the sum of all transition probabilities in the same row is equal to one:

$$\sum_{j=1}^{N} a_{ij} = 1, \quad i = 1, \ldots, N \quad (5)$$

The probabilities of remaining in the same state correspond to the values on the diagonal, while the probabilities of moving from one hidden state to another correspond to the remaining entries.
- Emission matrix $B$: This corresponds to the matrix of probabilities of each observation being emitted while the system is in a given hidden state. For this reason, it is a matrix whose dimensions are $N \times M$:

$$B = [b_i(k)], \quad b_i(k) = P(o_t = v_k \mid q_t = s_i) \quad (6)$$
- Initial probability vector $\pi$: At the beginning of the execution of a Markov network, the different hidden states must be assigned a starting probability $\pi_i$. This probability is given by (7):

$$\pi = \{\pi_1, \ldots, \pi_N\}, \quad \pi_i = P(q_1 = s_i), \quad \sum_{i=1}^{N} \pi_i = 1 \quad (7)$$
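To see how the five elements above fit together, the following minimal Python sketch encodes a toy HMM with NumPy and checks the stochastic constraints in (5) and (7). The states, sensor symbols and probability values are illustrative assumptions, not the parameterisation used in the paper.

```python
import numpy as np

# Hidden states S (1): a toy subset of household activities (N = 4).
states = ["Sleep", "Cook", "Eat", "Watch TV"]
# Possible observations V (3): one symbol per sensor event (M = 4).
symbols = ["bed_vibration", "kitchen_motion", "chair_vibration", "sofa_vibration"]

# State transition matrix A (4): a_ij = P(next state s_j | current state s_i).
A = np.array([
    [0.90, 0.05, 0.03, 0.02],
    [0.02, 0.80, 0.15, 0.03],
    [0.05, 0.10, 0.70, 0.15],
    [0.10, 0.05, 0.05, 0.80],
])

# Emission matrix B (6): b_i(k) = P(observing symbol v_k | state s_i), N x M.
B = np.array([
    [0.85, 0.05, 0.05, 0.05],
    [0.05, 0.80, 0.10, 0.05],
    [0.05, 0.15, 0.75, 0.05],
    [0.05, 0.05, 0.05, 0.85],
])

# Initial probability vector pi (7).
pi = np.array([0.70, 0.10, 0.10, 0.10])

assert np.allclose(A.sum(axis=1), 1.0)  # rows of A sum to one, as in (5)
assert np.allclose(B.sum(axis=1), 1.0)  # each state's emissions sum to one
assert np.isclose(pi.sum(), 1.0)        # pi is a probability vector, as in (7)
```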
- Event sensors: These sensors provide information about the different events that occur in the house. For example, an event could be the opening of a cupboard, the vibration of a chair or the presence of a user in a specific room. For this reason, the sensors that belong to this set are the following: presence, contact and vibration sensors.
- Environmental sensors: These sensors provide information about the conditions of the home over time. Their activation alone is therefore not enough to establish a transition between activities unless it is accompanied by an action registered by an event sensor. For example, recognising that the TV is powered on from its energy consumption does not imply that any user is actually watching it at that moment, as he or she may be wandering around the room or be in another room. For this reason, the energy consumption must be complemented by an event sensor, such as vibration on the sofa, to establish that the user is actually watching TV. The sensors that belong to this group are the following: temperature and humidity, consumption and luminosity sensors. A sketch of this gating rule is given below.
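The distinction can be expressed as a simple gating rule: an event sensor may trigger an activity transition on its own, whereas an environmental sensor counts only when a concurrent event reading backs it up. The following Python sketch illustrates that rule; the sensor-type names are assumptions introduced here, not identifiers from the dataset.

```python
EVENT_SENSORS = {"presence", "contact", "vibration"}
ENVIRONMENTAL_SENSORS = {"temperature_humidity", "consumption", "luminosity"}

def can_drive_transition(sensor_type: str, concurrent_event: bool) -> bool:
    """Return True when this reading may trigger an activity transition.

    An event sensor is sufficient on its own; an environmental sensor
    counts only alongside a concurrent event activation (e.g., TV power
    draw plus sofa vibration implies someone is actually watching TV).
    """
    if sensor_type in EVENT_SENSORS:
        return True
    return sensor_type in ENVIRONMENTAL_SENSORS and concurrent_event

# TV consumption alone does not establish "Watch TV"...
assert not can_drive_transition("consumption", concurrent_event=False)
# ...but together with a sofa-vibration event it does.
assert can_drive_transition("consumption", concurrent_event=True)
```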
- Forward algorithm: The forward algorithm is in charge of successively calculating the probabilities $\alpha_t(i)$ of the different activities and rooms each time a new event sensor record or beacon position arrives. Therefore, it progresses along the timeline. Hidden state probabilities are calculated by combining propagation stages (applying the $A$ matrix) and update stages (applying the $B$ matrix), in a similar way to the behaviour of Bayesian filters [104]. The algorithm for the user’s location system is the following (see Algorithm 1):
Algorithm 1 Location system: forward algorithm
Input: HMM $\lambda = (A, B, \pi)$ and observation sequence $O = \{o_1, \ldots, o_t\}$
Output: $\alpha_t(i)$
1: if $t = 1$ then
2:  $\alpha_1(i) = \pi_i \, b_i(o_1)$
3: else
4:  for $\tau = 2$ to $t$ do
5:   for $i = 1$ to $n$ do
6:    $\alpha_\tau(i) = b_i(o_\tau) \sum_{j=1}^{n} \alpha_{\tau-1}(j) \, a_{ji}$
7:   end for
8:  end for
9: end if
10: return $\alpha_t(i)$

In this and the following algorithms, the parameter $n$ corresponds to the total number of different hidden states, and the parameter $t$ refers to the instant at which the algorithm is applied. This algorithm is the one used for the Markov network that is responsible for locating users within the home. In contrast, for the Markov network that performs activity recognition, a variant has been created to incorporate the environmental sensors. The variant is as follows (see Algorithm 2):

Algorithm 2 HAR: forward algorithm with environmental sensors
Input: HMM $\lambda = (A, B, \pi)$, observation sequence $O$ and environment sequence $E$
Output: $\alpha_t(i)$
1: if $t = 1$ then
2:  $\alpha_1(i) = \pi_i \, b_i(o_1)$
3: else
4:  for $\tau = 2$ to $t$ do
5:   for $i = 1$ to $n$ do
6:    $\alpha_\tau(i) = b_i(o_\tau) \sum_{j=1}^{n} \alpha_{\tau-1}(j) \, a_{ji}$
7:    if an environmental reading $e_\tau$ is active then
8:     $\alpha_\tau(i) \leftarrow \alpha_\tau(i) \, b_i(e_\tau)$
9:    end if
10:   end for
11:  end for
12: end if
13: return $\alpha_t(i)$
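Both forward variants can be rendered compactly in NumPy. The sketch below is an interpretation under two assumptions: environmental readings enter as an extra multiplicative emission term with their own matrix `B_env` (a name introduced here for illustration), and each $\alpha_t$ is normalised to avoid numerical underflow.

```python
import numpy as np

def forward(A, B, pi, obs, env=None, B_env=None):
    """Forward pass (Algorithms 1 and 2).

    obs: observation indices into B's columns, one per time step.
    env: optional environmental indices into B_env's columns
         (None at steps with no active environmental reading).
    Returns the matrix of normalised alpha_t(i) values.
    """
    n, T = len(pi), len(obs)
    alpha = np.zeros((T, n))
    alpha[0] = pi * B[:, obs[0]]                      # line 2: initialisation
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]  # line 6: propagate + update
        if env is not None and env[t] is not None:    # lines 7-9 of Algorithm 2:
            alpha[t] *= B_env[:, env[t]]              # environmental reinforcement
        alpha[t] /= alpha[t].sum()                    # rescale against underflow
    return alpha
```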
- Backward algorithm: The backward algorithm is in charge of calculating the different $\beta_t(i)$ as successive sensor events arrive at the system. For this reason, the flow of this algorithm goes against the timeline, reinforcing the value provided by the forward algorithm with information from subsequent events. A window of $k$ subsequent events has been chosen so that the system does not become too slow. This part of the proposed system is shown in Algorithm 3.

Algorithm 3 Backward algorithm
Input: HMM $\lambda = (A, B, \pi)$ and observation sequence $O = \{o_t, \ldots, o_{t+k}\}$
Output: $\beta_t(i)$
1: if $\tau = t + k$ then
2:  $\beta_{t+k}(i) = 1$
3: else
4:  for $\tau = t + k - 1$ down to $t$ do
5:   for $i = 1$ to $n$ do
6:    $\beta_\tau(i) = \sum_{j=1}^{n} a_{ij} \, b_j(o_{\tau+1}) \, \beta_{\tau+1}(j)$
7:   end for
8:  end for
9: end if
10: return $\beta_t(i)$
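A matching sketch of the backward window, under the same NumPy conventions as the forward pass above; the smoothed posterior then combines both passes as $\gamma_t(i) \propto \alpha_t(i)\,\beta_t(i)$. The window length `k` is kept as a free parameter here.

```python
import numpy as np

def backward_window(A, B, obs, t, k):
    """Backward pass over a window of k subsequent events (Algorithm 3).

    Returns beta_t(i), computed against the timeline from t + k back to t,
    so that later events can reinforce the forward estimate at time t.
    """
    end = min(t + k, len(obs) - 1)               # clip the window at the last event
    beta = np.ones(A.shape[0])                   # line 2: boundary condition
    for tau in range(end - 1, t - 1, -1):        # line 4: move against the timeline
        beta = A @ (B[:, obs[tau + 1]] * beta)   # line 6: backward recursion
        beta /= beta.sum()                       # rescale against underflow
    return beta

# Smoothed activity posterior at time t (up to normalisation):
# gamma_t = forward(A, B, pi, obs)[t] * backward_window(A, B, obs, t, k)
```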
4. Experiments
5. Discussion
- Unsupervised models based on HMMs can be used to process time series data provided by discrete event and environmental sensors together with indoor positioning signals.
- The results obtained with our HMM-based model are close to those obtained using supervised methods such as RNNs, LSTMs or GRUs [72].
- The system obtained in the present paper is highly general, since the internal parameters that determine the model can be freely modified to analyse another household, with other residents and under other circumstances than those used in the present experimentation.
- False positives obtained during the testing phase are mainly due to confusions between activities of a similar nature or between activities carried out in the same room.
- Installing beacons for indoor positioning is unavoidable when two or more people cohabit in the same house, as the system needs to know where the residents are located in order to allocate probabilities. When there is only one person, the position could be determined from the signals of the PIR motion sensors.
- The results obtained using an unsupervised method such as HMMs compare favourably with the previous work discussed in Section 2.2 that uses similar unsupervised methods.
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Meaning |
|---|---|
| DL | Deep Learning |
| HAR | Human Activity Recognition |
| IoT | Internet of Things |
| ML | Machine Learning |
| CNN | Convolutional Neural Network |
| RNN | Recurrent Neural Network |
| HMM | Hidden Markov Model |
| BLS | Broad Learning System |
| LSTM | Long Short-Term Memory |
| GRU | Gated Recurrent Unit |
| SVM | Support Vector Machine |
| SGN | Semantics-Guided Neural Network |
| GCN | Graph Convolutional Network |
| FFT | Fast Fourier Transform |
| VAE | Variational Autoencoder |
| DRB | Deep Rule-Based |
| ADL | Activity of Daily Living |
| ROC | Receiver Operating Characteristic |
| AUC | Area Under the Curve |
References
- Li, Q.; Gravina, R.; Li, Y.; Alsamhi, S.H.; Sun, F.; Fortino, G. Multi-user activity recognition: Challenges and opportunities. Inf. Fusion 2020, 63, 121–135. [Google Scholar] [CrossRef]
- Dhiman, C.; Vishwakarma, D.K. A review of state-of-the-art techniques for abnormal human activity recognition. Eng. Appl. Artif. Intell. 2019, 77, 21–45. [Google Scholar] [CrossRef]
- Jobanputra, C.; Bavishi, J.; Doshi, N. Human Activity Recognition: A Survey. Procedia Comput. Sci. 2019, 155, 698–703. [Google Scholar] [CrossRef]
- Wan, S.; Qi, L.; Xu, X.; Tong, C.; Gu, Z. Deep learning models for real-time human activity recognition with smartphones. Mob. Netw. Appl. 2020, 25, 743–755. [Google Scholar] [CrossRef]
- Kulsoom, F.; Narejo, S.; Mehmood, Z.; Chaudhry, H.N.; Butt, A.; Bashir, A.K. A review of machine learning-based human activity recognition for diverse applications. In Neural Computing and Applications; Springer: Berlin/Heidelberg, Germany, 2022; pp. 1–36. [Google Scholar]
- Xia, K.; Huang, J.; Wang, H. LSTM-CNN architecture for human activity recognition. IEEE Access 2020, 8, 56855–56866. [Google Scholar] [CrossRef]
- Tun, S.Y.Y.; Madanian, S.; Mirza, F. Internet of things (IoT) applications for elderly care: A reflective review. Aging Clin. Exp. Res. 2021, 33, 855–867. [Google Scholar] [CrossRef] [PubMed]
- Lentzas, A.; Vrakas, D. Non-intrusive human activity recognition and abnormal behavior detection on elderly people: A review. Artif. Intell. Rev. 2020, 53, 1975–2021. [Google Scholar] [CrossRef]
- Erickson, S.R.; Williams, B.C.; Gruppen, L.D. Relationship between symptoms and health-related quality of life in patients treated for hypertension. Pharmacother. J. Hum. Pharmacol. Drug Ther. 2004, 24, 344–350. [Google Scholar] [CrossRef]
- Bhattacharya, D.; Sharma, D.; Kim, W.; Ijaz, M.F.; Singh, P.K. Ensem-HAR: An ensemble deep learning model for smartphone sensor-based human activity recognition for measurement of elderly health monitoring. Biosensors 2022, 12, 393. [Google Scholar] [CrossRef]
- Sun, H.; Chen, Y. Real-Time Elderly Monitoring for Senior Safety by Lightweight Human Action Recognition. In Proceedings of the 2022 IEEE 16th International Symposium on Medical Information and Communication Technology (ISMICT), Lincoln, NE, USA, 2–4 May 2022; pp. 1–6. [Google Scholar]
- Gudur, G.K.; Sundaramoorthy, P.; Umaashankar, V. ActiveHARNet: Towards on-device deep Bayesian active learning for human activity recognition. In Proceedings of the 3rd International Workshop on Deep Learning for Mobile Systems and Applications, Seoul, Korea, 19 June 2019; pp. 7–12. [Google Scholar]
- Shalaby, E.; ElShennawy, N.; Sarhan, A. Utilizing deep learning models in CSI-based human activity recognition. In Neural Computing and Applications; Springer: Berlin/Heidelberg, Germany, 2022; pp. 1–18. [Google Scholar]
- Zimmermann, L.C. Elderly Activity Recognition Using Smartphones and Wearable Devices. Ph.D. Thesis, Universidade de São Paulo, São Paulo, Brazil, 2019. [Google Scholar]
- Subasi, A.; Fllatah, A.; Alzobidi, K.; Brahimi, T.; Sarirete, A. Smartphone-based human activity recognition using bagging and boosting. Procedia Comput. Sci. 2019, 163, 54–61. [Google Scholar] [CrossRef]
- Demrozi, F.; Turetta, C.; Pravadelli, G. B-HAR: An open-source baseline framework for in depth study of human activity recognition datasets and workflows. arXiv 2021, arXiv:2101.10870. [Google Scholar]
- Bibbò, L.; Carotenuto, R.; Della Corte, F. An Overview of Indoor Localization System for Human Activity Recognition (HAR) in Healthcare. Sensors 2022, 22, 8119. [Google Scholar] [CrossRef] [PubMed]
- Muangprathub, J.; Sriwichian, A.; Wanichsombat, A.; Kajornkasirat, S.; Nillaor, P.; Boonjing, V. A novel elderly tracking system using machine learning to classify signals from mobile and wearable sensors. Int. J. Environ. Res. Public Health 2021, 18, 12652. [Google Scholar] [CrossRef] [PubMed]
- Li, X.; He, Y.; Fioranelli, F.; Jing, X. Semisupervised human activity recognition with radar micro-Doppler signatures. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–12. [Google Scholar] [CrossRef]
- Popescu, A.C.; Mocanu, I.; Cramariuc, B. Fusion mechanisms for human activity recognition using automated machine learning. IEEE Access 2020, 8, 143996–144014. [Google Scholar] [CrossRef]
- Zhou, X.; Liang, W.; Kevin, I.; Wang, K.; Wang, H.; Yang, L.T.; Jin, Q. Deep-learning-enhanced human activity recognition for Internet of healthcare things. IEEE Internet Things J. 2020, 7, 6429–6438. [Google Scholar] [CrossRef]
- Franco, A.; Magnani, A.; Maio, D. A multimodal approach for human activity recognition based on skeleton and RGB data. Pattern Recognit. Lett. 2020, 131, 293–299. [Google Scholar] [CrossRef]
- Ke, S.R.; Thuc, H.L.U.; Lee, Y.J.; Hwang, J.N.; Yoo, J.H.; Choi, K.H. A review on video-based human activity recognition. Computers 2013, 2, 88–131. [Google Scholar] [CrossRef]
- Aggarwal, J.; Xia, L. Human activity recognition from 3D data: A review. Pattern Recognit. Lett. 2014, 48, 70–80. [Google Scholar] [CrossRef]
- Dang, L.M.; Min, K.; Wang, H.; Piran, M.J.; Lee, C.H.; Moon, H. Sensor-based and vision-based human activity recognition: A comprehensive survey. Pattern Recognit. 2020, 108, 107561. [Google Scholar] [CrossRef]
- San-Segundo, R.; Blunck, H.; Moreno-Pimentel, J.; Stisen, A.; Gil-Martín, M. Robust Human Activity Recognition using smartwatches and smartphones. Eng. Appl. Artif. Intell. 2018, 72, 190–202. [Google Scholar] [CrossRef]
- Janidarmian, M.; Roshan Fekr, A.; Radecka, K.; Zilic, Z. A comprehensive analysis on wearable acceleration sensors in human activity recognition. Sensors 2017, 17, 529. [Google Scholar] [CrossRef]
- De-La-Hoz-Franco, E.; Ariza-Colpas, P.; Quero, J.M.; Espinilla, M. Sensor-based datasets for human activity recognition–a systematic review of literature. IEEE Access 2018, 6, 59192–59210. [Google Scholar] [CrossRef]
- Wang, A.; Chen, G.; Yang, J.; Zhao, S.; Chang, C.Y. A comparative study on human activity recognition using inertial sensors in a smartphone. IEEE Sens. J. 2016, 16, 4566–4578. [Google Scholar] [CrossRef]
- Bi, S.; Hu, Z.; Zhao, M.; Zhang, H.; Di, J.; Sun, Z. Continuous frame motion sensitive self-supervised collaborative network for video representation learning. Adv. Eng. Inform. 2023, 56, 101941. [Google Scholar] [CrossRef]
- Ann, O.C.; Theng, L.B. Human activity recognition: A review. In Proceedings of the 2014 IEEE International Conference on Control System, Computing and Engineering (ICCSCE 2014), Penang, Malaysia, 28–30 November 2014; pp. 389–393. [Google Scholar]
- Singh, Y.; Bhatia, P.K.; Sangwan, O. A review of studies on machine learning techniques. Int. J. Comput. Sci. Secur. 2007, 1, 70–84. [Google Scholar]
- Pramanik, R.; Sikdar, R.; Sarkar, R. Transformer-based deep reverse attention network for multi-sensory human activity recognition. Eng. Appl. Artif. Intell. 2023, 122, 106150. [Google Scholar] [CrossRef]
- Xu, C.; Chai, D.; He, J.; Zhang, X.; Duan, S. InnoHAR: A deep neural network for complex human activity recognition. IEEE Access 2019, 7, 9893–9902. [Google Scholar] [CrossRef]
- Liu, T.; Zheng, H.; Zheng, P.; Bao, J.; Wang, J.; Liu, X.; Yang, C. An expert knowledge-empowered CNN approach for welding radiographic image recognition. Adv. Eng. Inform. 2023, 56, 101963. [Google Scholar] [CrossRef]
- Hibat-Allah, M.; Ganahl, M.; Hayward, L.E.; Melko, R.G.; Carrasquilla, J. Recurrent neural network wave functions. Phys. Rev. Res. 2020, 2, 023358. [Google Scholar] [CrossRef]
- Li, M.; Zhang, W.; Hu, B.; Kang, J.; Wang, Y.; Lu, S. Automatic assessment of depression and anxiety through encoding pupil-wave from HCI in VR scenes. ACM Trans. Multimed. Comput. Commun. Appl. 2023, 20, 1–22. [Google Scholar] [CrossRef]
- Zhang, H.; Fritts, J.E.; Goldman, S.A. Image segmentation evaluation: A survey of unsupervised methods. Comput. Vis. Image Underst. 2008, 110, 260–280. [Google Scholar] [CrossRef]
- Manouchehri, N.; Bouguila, N. Human Activity Recognition with an HMM-Based Generative Model. Sensors 2023, 23, 1390. [Google Scholar] [CrossRef] [PubMed]
- Bouchabou, D.; Nguyen, S.M.; Lohr, C.; LeDuc, B.; Kanellos, I. Using language model to bootstrap human activity recognition ambient sensors based in smart homes. Electronics 2021, 10, 2498. [Google Scholar] [CrossRef]
- Zhao, H.; Zheng, J.; Deng, W.; Song, Y. Semi-supervised broad learning system based on manifold regularization and broad network. IEEE Trans. Circuits Syst. I Regul. Pap. 2020, 67, 983–994. [Google Scholar] [CrossRef]
- Chen, K.; Yao, L.; Zhang, D.; Wang, X.; Chang, X.; Nie, F. A semisupervised recurrent convolutional attention model for human activity recognition. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 1747–1756. [Google Scholar] [CrossRef] [PubMed]
- Ahmim, A.; Maglaras, L.; Ferrag, M.A.; Derdour, M.; Janicke, H. A novel hierarchical intrusion detection system based on decision tree and rules-based models. In Proceedings of the 2019 15th International Conference on Distributed Computing in Sensor Systems (DCOSS), Santorini, Greece, 29–31 May 2019; pp. 228–233. [Google Scholar]
- Daghero, F.; Pagliari, D.J.; Poncino, M. Two-stage Human Activity Recognition on Microcontrollers with Decision Trees and CNNs. In Proceedings of the 2022 17th Conference on Ph.D Research in Microelectronics and Electronics (PRIME), Villasimius, Italy, 12–15 June 2022; pp. 173–176. [Google Scholar]
- Kelly, P.; Marshall, S.J.; Badland, H.; Kerr, J.; Oliver, M.; Doherty, A.R.; Foster, C. An ethical framework for automated, wearable cameras in health behavior research. Am. J. Prev. Med. 2013, 44, 314–319. [Google Scholar] [CrossRef]
- Basak, H.; Kundu, R.; Singh, P.K.; Ijaz, M.F.; Woźniak, M.; Sarkar, R. A union of deep learning and swarm-based optimization for 3D human action recognition. Sci. Rep. 2022, 12, 5494. [Google Scholar] [CrossRef]
- Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A. Inception-v4, inception-resnet and the impact of residual connections on learning. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; Volume 31. [Google Scholar]
- Chen, C.; Jafari, R.; Kehtarnavaz, N. UTD-MHAD: A multimodal dataset for human action recognition utilizing a depth camera and a wearable inertial sensor. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 168–172. [Google Scholar]
- Müller, M.; Röder, T.; Clausen, M.; Eberhardt, B.; Krüger, B.; Weber, A. Mocap Database HDM05; Technical Report, No. CG-2007-2; Universität Bonn: Bonn, Germany, 2007; ISSN 1610-8892. [Google Scholar]
- Shahroudy, A.; Liu, J.; Ng, T.T.; Wang, G. NTU RGB+D: A large scale dataset for 3D human activity analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1010–1019. [Google Scholar]
- Domingo, J.D.; Gómez-García-Bermejo, J.; Zalama, E. Visual recognition of gymnastic exercise sequences. Application to supervision and robot learning by demonstration. Robot. Auton. Syst. 2021, 143, 103830. [Google Scholar] [CrossRef]
- Taud, H.; Mas, J. Multilayer perceptron (MLP). In Geomatic Approaches for Modeling Land Change Scenarios; Springer: Berlin/Heidelberg, Germany, 2018; pp. 451–455. [Google Scholar]
- Li, Y.; Wang, L. Human activity recognition based on residual network and BiLSTM. Sensors 2022, 22, 635. [Google Scholar] [CrossRef]
- Su, T.; Sun, H.; Ma, C.; Jiang, L.; Xu, T. HDL: Hierarchical deep learning model based human activity recognition using smartphone sensors. In Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 14–19 July 2019; pp. 1–8. [Google Scholar]
- Kwapisz, J.R.; Weiss, G.M.; Moore, S.A. Activity recognition using cell phone accelerometers. ACM SigKDD Explor. Newsl. 2011, 12, 74–82. [Google Scholar] [CrossRef]
- Reiss, A.; Stricker, D. Introducing a new benchmarked dataset for activity monitoring. In Proceedings of the 2012 16th International Symposium on Wearable Computers, Newcastle, UK, 18–22 June 2012; pp. 108–109. [Google Scholar]
- Challa, S.K.; Kumar, A.; Semwal, V.B. A multibranch CNN-BiLSTM model for human activity recognition using wearable sensor data. Vis. Comput. 2022, 38, 4095–4109. [Google Scholar] [CrossRef]
- Anguita, D.; Ghio, A.; Oneto, L.; Parra, X.; Reyes-Ortiz, J.L. A public domain dataset for human activity recognition using smartphones. In Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, Bruges, Belgium, 24–26 April 2013; Volume 3, p. 3. [Google Scholar]
- Dua, N.; Singh, S.N.; Semwal, V.B. Multi-input CNN-GRU based human activity recognition using wearable sensors. Computing 2021, 103, 1461–1478. [Google Scholar] [CrossRef]
- Cho, K.; Van Merriënboer, B.; Bahdanau, D.; Bengio, Y. On the properties of neural machine translation: Encoder-decoder approaches. arXiv 2014, arXiv:1409.1259. [Google Scholar]
- Ramos, R.G.; Domingo, J.D.; Zalama, E.; Gómez-García-Bermejo, J. Daily human activity recognition using non-intrusive sensors. Sensors 2021, 21, 5270. [Google Scholar] [CrossRef]
- Liciotti, D.; Bernardini, M.; Romeo, L.; Frontoni, E. A sequential deep learning application for recognising human activities in smart homes. Neurocomputing 2020, 396, 501–513. [Google Scholar] [CrossRef]
- Cook, D.J.; Crandall, A.S.; Thomas, B.L.; Krishnan, N.C. CASAS: A smart home in a box. Computer 2012, 46, 62–69. [Google Scholar] [CrossRef] [PubMed]
- Sazonov, E.; Hegde, N.; Browning, R.C.; Melanson, E.L.; Sazonova, N.A. Posture and activity recognition and energy expenditure estimation in a wearable platform. IEEE J. Biomed. Health Inform. 2015, 19, 1339–1346. [Google Scholar] [CrossRef] [PubMed]
- D’Arco, L.; Wang, H.; Zheng, H. Assessing impact of sensors and feature selection in smart-insole-based human activity recognition. Methods Protoc. 2022, 5, 45. [Google Scholar] [CrossRef]
- Zhang, P.; Lan, C.; Zeng, W.; Xing, J.; Xue, J.; Zheng, N. Semantics-guided neural networks for efficient skeleton-based human action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1112–1121. [Google Scholar]
- Yan, S.; Xiong, Y.; Lin, D. Spatial temporal graph convolutional networks for skeleton-based action recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32. [Google Scholar]
- Liu, J.; Shahroudy, A.; Perez, M.; Wang, G.; Duan, L.Y.; Kot, A.C. NTU RGB+D 120: A large-scale benchmark for 3D human activity understanding. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 42, 2684–2701. [Google Scholar] [CrossRef]
- Hu, J.F.; Zheng, W.S.; Lai, J.; Zhang, J. Jointly learning heterogeneous features for RGB-D activity recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 5344–5352. [Google Scholar]
- Li, Y.; Yang, G.; Su, Z.; Li, S.; Wang, Y. Human activity recognition based on multienvironment sensor data. Inf. Fusion 2023, 91, 47–63. [Google Scholar] [CrossRef]
- Ruan, D.; Wang, J.; Yan, J.; Gühmann, C. CNN parameter design based on fault signal analysis and its application in bearing fault diagnosis. Adv. Eng. Inform. 2023, 55, 101877. [Google Scholar] [CrossRef]
- Ramos, R.G.; Domingo, J.D.; Zalama, E.; Gómez-García-Bermejo, J.; López, J. SDHAR-HOME: A sensor dataset for human activity recognition at home. Sensors 2022, 22, 8109. [Google Scholar] [CrossRef]
- Medsker, L.R.; Jain, L. Recurrent neural networks. Des. Appl. 2001, 5, 64–67. [Google Scholar]
- Ahn, D.; Kim, S.; Hong, H.; Ko, B.C. STAR-Transformer: A Spatio-temporal Cross Attention Transformer for Human Action Recognition. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 2–7 January 2023; pp. 3330–3339. [Google Scholar]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
- Zhang, W.; Zhu, M.; Derpanis, K.G. From actemes to action: A strongly-supervised representation for detailed action understanding. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, NSW, Australia, 1–8 December 2013; pp. 2248–2255. [Google Scholar]
- Jiang, W.; Zhou, K.; Xiong, C.; Du, G.; Ou, C.; Zhang, J. KSCB: A novel unsupervised method for text sentiment analysis. Appl. Intell. 2023, 53, 301–311. [Google Scholar] [CrossRef]
- Kwon, Y.; Kang, K.; Bae, C. Unsupervised learning for human activity recognition using smartphone sensors. Expert Syst. Appl. 2014, 41, 6067–6074. [Google Scholar] [CrossRef]
- Hall, M.; Frank, E.; Holmes, G.; Pfahringer, B.; Reutemann, P.; Witten, I.H. The WEKA data mining software: An update. ACM SIGKDD Explor. Newsl. 2009, 11, 10–18. [Google Scholar] [CrossRef]
- Cooley, J.W.; Tukey, J.W. An algorithm for the machine calculation of complex Fourier series. Math. Comput. 1965, 19, 297–301. [Google Scholar] [CrossRef]
- Lin, J.F.S.; Kulic, D. Automatic human motion segmentation and identification using feature guided hmm for physical rehabilitation exercises. In Proceedings of the Robotics for Neurology and Rehabilitation, Workshop at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), San Francisco, CA, USA, 25–30 September 2011. [Google Scholar]
- Trabelsi, D.; Mohammed, S.; Chamroukhi, F.; Oukhellou, L.; Amirat, Y. An unsupervised approach for automatic activity recognition based on hidden Markov model regression. IEEE Trans. Autom. Sci. Eng. 2013, 10, 829–835. [Google Scholar] [CrossRef]
- Li, W.; Xu, Y.; Tan, B.; Piechocki, R.J. Passive wireless sensing for unsupervised human activity recognition in healthcare. In Proceedings of the 2017 13th International Wireless Communications and Mobile Computing Conference (IWCMC), Valencia, Spain, 26–30 June 2017; pp. 1528–1533. [Google Scholar]
- Kim, Y.; Ling, H. Human activity classification based on micro-Doppler signatures using a support vector machine. IEEE Trans. Geosci. Remote Sens. 2009, 47, 1328–1337. [Google Scholar]
- Bai, L.; Yeung, C.; Efstratiou, C.; Chikomo, M. Motion2Vector: Unsupervised learning in human activity recognition using wrist-sensing data. In Proceedings of the Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers, London, UK, 9–13 September 2019; pp. 537–542. [Google Scholar]
- Kingma, D.P.; Welling, M. An introduction to variational autoencoders. Found. Trends® Mach. Learn. 2019, 12, 307–392. [Google Scholar] [CrossRef]
- Valarezo, A.E.; Rivera, L.P.; Park, H.; Park, N.; Kim, T.S. Human activities recognition with a single wrist IMU via a Variational Autoencoder and android deep recurrent neural nets. Comput. Sci. Inf. Syst. 2020, 17, 581–597. [Google Scholar] [CrossRef]
- Stisen, A.; Blunck, H.; Bhattacharya, S.; Prentow, T.S.; Kjærgaard, M.B.; Dey, A.; Sonne, T.; Jensen, M.M. Smart devices are different: Assessing and mitigating mobile sensing heterogeneities for activity recognition. In Proceedings of the 13th ACM Conference on Embedded Networked Sensor Systems, Seoul, Korea, 1–4 November 2015; pp. 127–140. [Google Scholar]
- Attal, F.; Mohammed, S.; Dedabrishvili, M.; Chamroukhi, F.; Oukhellou, L.; Amirat, Y. Physical human activity recognition using wearable sensors. Sensors 2015, 15, 31314–31338. [Google Scholar] [CrossRef] [PubMed]
- Sinaga, K.P.; Yang, M.S. Unsupervised K-means clustering algorithm. IEEE Access 2020, 8, 80716–80727. [Google Scholar] [CrossRef]
- Zong, B.; Song, Q.; Min, M.R.; Cheng, W.; Lumezanu, C.; Cho, D.; Chen, H. Deep autoencoding gaussian mixture model for unsupervised anomaly detection. In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
- Yang, X.; Song, Z.; King, I.; Xu, Z. A survey on deep semi-supervised learning. IEEE Trans. Knowl. Data Eng. 2022, 35, 8934–8954. [Google Scholar] [CrossRef]
- Janarthanan, R.; Doss, S.; Baskar, S. Optimized unsupervised deep learning assisted reconstructed coder in the on-nodule wearable sensor for human activity recognition. Measurement 2020, 164, 108050. [Google Scholar] [CrossRef]
- Gu, T.; Chen, S.; Tao, X.; Lu, J. An unsupervised approach to activity recognition and segmentation based on object-use fingerprints. Data Knowl. Eng. 2010, 69, 533–544. [Google Scholar] [CrossRef]
- Ezeiza, N.; Alegria, I.; Arriola, J.M.; Urizar, R.; Aduriz, I. Combining stochastic and rule-based methods for disambiguation in agglutinative languages. In COLING 1998 Volume 1: The 17th International Conference on Computational Linguistics; Association for Computational Linguistics: Stroudsburg, PA, USA, 1998. [Google Scholar]
- Sargano, A.B.; Gu, X.; Angelov, P.; Habib, Z. Human action recognition using deep rule-based classifier. Multimed. Tools Appl. 2020, 79, 30653–30667. [Google Scholar] [CrossRef]
- Nurwulan, N.; Selamaj, G. Human daily activities recognition using decision tree. Proc. J. Phys. Conf. Ser. 2021, 1833, 012039. [Google Scholar] [CrossRef]
- Sánchez, V.G.; Skeie, N.O. Decision Trees for Human Activity Recognition in Smart House Environments. Linköping Electron. Conf. Proc. 2018, 153, 222–229. [Google Scholar] [CrossRef]
- Ordónez, F.J.; De Toledo, P.; Sanchis, A. Activity recognition using hybrid generative/discriminative models on home environments using binary sensors. Sensors 2013, 13, 5460–5477. [Google Scholar] [CrossRef] [PubMed]
- Zeng, Y. Evaluation of physical education teaching quality in colleges based on the hybrid technology of data mining and Hidden Markov Model. Int. J. Emerg. Technol. Learn. (IJET) 2020, 15, 4–15. [Google Scholar] [CrossRef]
- Wang, X.; Liu, J.; Moore, S.J.; Nugent, C.D.; Xu, Y. A behavioural hierarchical analysis framework in a smart home: Integrating HMM and probabilistic model checking. Inf. Fusion 2023, 95, 275–292. [Google Scholar] [CrossRef]
- Chadza, T.; Kyriakopoulos, K.G.; Lambotharan, S. Analysis of hidden Markov model learning algorithms for the detection and prediction of multi-stage network attacks. Future Gener. Comput. Syst. 2020, 108, 636–649. [Google Scholar] [CrossRef]
- Yu, S.Z.; Kobayashi, H. An efficient forward-backward algorithm for an explicit-duration hidden Markov model. IEEE Signal Process. Lett. 2003, 10, 11–14. [Google Scholar]
- Valdiviezo-Diaz, P.; Ortega, F.; Cobos, E.; Lara-Cabrera, R. A collaborative filtering approach based on Naïve Bayes classifier. IEEE Access 2019, 7, 108581–108592. [Google Scholar] [CrossRef]
- Nica, I.; Alexandru, D.B.; Craciunescu, S.L.P.; Ionescu, S. Automated Valuation Modelling: Analysing Mortgage Behavioural Life Profile Models Using Machine Learning Techniques. Sustainability 2021, 13, 5162. [Google Scholar] [CrossRef]
- Ekström, J.; Åkerrén Ögren, J.; Sjöblom, T. Exact Probability Distribution for the ROC Area under Curve. Cancers 2023, 15, 1788. [Google Scholar] [CrossRef]
- Mingote, V.; Miguel, A.; Ortega, A.; Lleida, E. Optimization of the area under the ROC curve using neural network supervectors for text-dependent speaker verification. Comput. Speech Lang. 2020, 63, 101078. [Google Scholar] [CrossRef]
- Khosravani Pour, L.; Farrokhi, A. Language recognition by convolutional neural networks. Sci. Iran. 2023, 30, 116–123. [Google Scholar]
| Development | DL Method | Database | Activities | Accuracy |
|---|---|---|---|---|
| [46] | Inception-ResNet | UTD_MHAD | 27 | 98.13% |
| | | HDM05 | 130 | 90.67% |
| | | NTU RGB+D 60 | 60 | 85.45% |
| [51] | MLP + HMM | Own development | 19 | 98.05% |
| [53] | LSTM | WISDM | 6 | 97.32% |
| | | PAMAP2 | 18 | 97.15% |
| [57] | CNN + BiLSTM | UCI-HAR | 6 | 96.37% |
| | | WISDM | 6 | 96.05% |
| | | PAMAP2 | 18 | 94.29% |
| [59] | CNN + GRU | UCI-HAR | 6 | 96.20% |
| | | WISDM | 6 | 97.21% |
| | | PAMAP2 | 18 | 95.27% |
| [61] | BiLSTM | Milan | 16 | 95.42% |
| [65] | SVM | Own development | 6 | 94.66% |
| [66] | SGN -> GCN + CNN | NTU RGB+D 60 | 60 | 89% |
| | | NTU RGB+D 120 | 120 | 79.20% |
| | | SYSU | 12 | 90.60% |
| [70] | CNN | Cairo | 13 | 91.99% |
| | | Milan | 15 | 95.35% |
| | | Kyoto7 | 13 | 86.68% |
| | | Kyoto8 | 12 | 97.08% |
| | | Kyoto11 | 25 | 90.27% |
| [72] | RNN, LSTM and GRU | SDHAR-HOME | 18 | 90.91% |
| [74] | Transformer | Penn-Action | 15 | 98.7% |
| | | NTU RGB+D 60 | 60 | 92% |
| | | NTU RGB+D 120 | 120 | 90.3% |
| Development | Method | Database | Activities | Accuracy |
|---|---|---|---|---|
| [78] | FFT | WEKA | 5 | 79.98% |
| [82] | HMM | Own development | 12 | 89% |
| [83] | HMM | Own development | 5 | 69% |
| [85] | VAE | HHAR | 9 | 87% |
| [89] | k-Means | Own development | 12 | 72.95% |
| | GMM | | | 75.60% |
| | HMM | | | 83.89% |
| [93] | UDR-RC | WISDM | 6 | 97.28% |
| [94] | MaxGap | Own development | 17 | 91.40% |
| | HMM | | | 93.50% |
| [96] | DRB | UCF50 | 50 | 82.00% |
| [98] | Decision trees | ADLs | 8 | 88.02% |
| Proposed | HMM | SDHAR-HOME | 18 | 91.68% |
| | Bath. Act. | Chores | Cook | Dish | Dress | Eat | Laundry | Out of Home |
|---|---|---|---|---|---|---|---|---|
| User 1 | 505 | 1594 | 630 | 723 | 268 | 1649 | 224 | 24,003 |
| User 2 | 914 | 1887 | 604 | 631 | 607 | 2213 | 832 | 27,261 |

| | Pet | Read | Relax | Shower | Sleep | Take Meds | Watch TV | Work |
|---|---|---|---|---|---|---|---|---|
| User 1 | 146 | 1997 | 1960 | 634 | 25,268 | 103 | 4319 | 5202 |
| User 2 | 304 | 8924 | 3788 | 1602 | 28,907 | 60 | 4262 | 4244 |
| Activity | Precision (User 1–User 2) | Recall (User 1–User 2) | F1-Score (User 1–User 2) |
|---|---|---|---|
| Bathroom Activity | 0.81–0.63 | 0.77–0.60 | 0.79–0.61 |
| Chores | 0.28–0.25 | 0.65–0.57 | 0.39–0.35 |
| Cook | 0.68–0.37 | 0.83–0.60 | 0.75–0.46 |
| Dishwashing | 0.24–0.15 | 0.71–0.79 | 0.36–0.25 |
| Dress | 0.12–0.19 | 0.78–0.61 | 0.21–0.29 |
| Eat | 0.91–0.65 | 0.82–0.64 | 0.86–0.64 |
| Laundry | 0.85–0.11 | 1.00–0.54 | 0.92–0.18 |
| Out of Home | 0.99–0.99 | 0.97–0.91 | 0.98–0.95 |
| Pet | 0.82–0.11 | 0.86–0.55 | 0.84–0.18 |
| Read | 0.64–0.58 | 0.54–0.67 | 0.59–0.62 |
| Relax | 0.42–0.76 | 0.69–0.63 | 0.52–0.69 |
| Shower | 0.87–0.47 | 0.88–0.73 | 0.87–0.57 |
| Sleep | 0.96–0.94 | 0.92–0.94 | 0.94–0.94 |
| Take Meds | 0.09–0.09 | 0.83–0.96 | 0.16–0.16 |
| Watch TV | 0.89–0.79 | 0.82–0.80 | 0.85–0.79 |
| Work | 0.99–0.57 | 0.94–0.77 | 0.96–0.66 |
| Accuracy | 0.92–0.87 | | |
| Macro avg. | 0.66–0.48 | 0.81–0.71 | 0.69–0.52 |
| Weighted avg. | 0.94–0.90 | 0.92–0.87 | 0.93–0.88 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).