Study on Human Activity Recognition Using Semi-Supervised Active Transfer Learning
Abstract
1. Introduction
2. Related Research
2.1. Human Activity Recognition
2.2. Deep Learning for Human Activity Recognition
2.3. Labeling Reduction Technologies
3. Basic Theory for Labeling Reduction
3.1. Active Transfer Learning (ATL)
3.2. Semi-Supervised Learning
4. Proposed Methods
4.1. Human Activity Recognition Dataset Description
4.2. Proposed Process
Algorithm 1. Semi-Supervised Active Transfer Learning Algorithm

Input: HAR dataset
BEGIN
Step 1: Train the basic model with the training set.
Step 2: Create the correct classifier by transferring the learned basic model.
Step 3: Input the validation dataset into the learned basic model.
Step 4: Create a correct dataset by comparing the basic model's predictions with the actual values.
Step 5: Train the correct classifier model with the correct dataset (validation set).
Step 6: Input the unlabeled dataset into the correct classifier to obtain correctness probabilities.
Step 7: Sample data with a high probability of being correct for semi-supervised learning.
Step 8: Sample data with a high probability of being incorrect for active learning.
Step 9: Add the sampled data to the training set and retrain the basic model.
Step 10: Repeat Steps 1–9 to label the unlabeled dataset efficiently.
END
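The loop of Algorithm 1 can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's implementation: logistic regression stands in for the DNN models, the held-back pool labels simulate the human oracle, and the split sizes, per-round sampling counts, and number of rounds are arbitrary assumptions.

```python
# Minimal sketch of Algorithm 1 on synthetic data (see hedges in the lead-in).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, classes = 8, 3
w_true = rng.normal(size=(d, classes))

def make_split(n):
    """Synthetic stand-in for a HAR data split, with 10% label noise."""
    X = rng.normal(size=(n, d))
    y = (X @ w_true).argmax(axis=1)
    flip = rng.random(n) < 0.1
    y[flip] = rng.integers(0, classes, flip.sum())
    return X, y

X_train, y_train = make_split(200)   # small labelled training set
X_val, y_val = make_split(300)       # validation set for the correct classifier
X_pool, y_pool = make_split(2000)    # unlabeled pool (y_pool = simulated oracle)

for _ in range(3):
    # Step 1: train the basic activity classifier on the labelled set
    basic = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Steps 2-5: train a "correct classifier" to predict whether the basic
    # model's prediction on a sample is correct (binary target from the val set)
    correct_target = (basic.predict(X_val) == y_val).astype(int)
    correct_clf = LogisticRegression(max_iter=1000).fit(X_val, correct_target)

    # Step 6: score the unlabeled pool by probability of being classified correctly
    p_correct = correct_clf.predict_proba(X_pool)[:, 1]

    # Step 7: confidently-correct samples keep the basic model's pseudo-label
    pseudo_idx = np.argsort(p_correct)[-50:]
    pseudo_y = basic.predict(X_pool[pseudo_idx])

    # Step 8: confidently-incorrect samples are sent to the oracle (active query)
    query_idx = np.argsort(p_correct)[:20]
    query_y = y_pool[query_idx]

    # Step 9: grow the training set and shrink the pool; Step 10: repeat
    X_train = np.vstack([X_train, X_pool[pseudo_idx], X_pool[query_idx]])
    y_train = np.concatenate([y_train, pseudo_y, query_y])
    keep = np.setdiff1d(np.arange(len(X_pool)),
                        np.concatenate([pseudo_idx, query_idx]))
    X_pool, y_pool = X_pool[keep], y_pool[keep]
```

The key design point is that high-confidence *correct* predictions are cheap pseudo-labels, while high-confidence *incorrect* predictions are the most informative samples to spend human labeling effort on.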
5. The Performance According to the Number of Labeling
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Ramamurthy, S.R.; Roy, N. Recent trends in machine learning for human activity recognition—A survey. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2018, 8, e1254.
- Kim, E.; Helal, S.; Cook, D. Human Activity Recognition and Pattern Discovery. IEEE Pervasive Comput. 2010, 9, 48–53.
- Vrigkas, M.; Nikou, C.; Kakadiaris, I.A. A review of human activity recognition methods. Front. Robot. AI 2015, 2, 28.
- Lara, O.D.; Labrador, M.A. A Survey on Human Activity Recognition using Wearable Sensors. IEEE Commun. Surv. Tutor. 2013, 15, 1192–1209.
- Ke, S.-R.; Thuc, H.L.U.; Lee, Y.-J.; Hwang, J.-N.; Yoo, J.-H.; Choi, K.-H. A Review on Video-Based Human Activity Recognition. Computers 2013, 2, 88–131.
- Robertson, N.; Reid, I. A general method for human activity recognition in video. Comput. Vis. Image Underst. 2006, 104, 232–248.
- Anguita, D.; Ghio, A.; Oneto, L.; Parra, X.; Reyes-Ortiz, J.L. A public domain dataset for human activity recognition using smartphones. In Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, Bruges, Belgium, 24–26 April 2013; pp. 437–442.
- Bayat, A.; Pomplun, M.; Tran, D.A. A Study on Human Activity Recognition Using Accelerometer Data from Smartphones. Procedia Comput. Sci. 2014, 34, 450–457.
- San-Segundo, R.; Echeverry-Correa, J.; Salamea, C.; Pardo, J.M. Human activity monitoring based on hidden Markov models using a smartphone. IEEE Instrum. Meas. Mag. 2016, 19, 27–31.
- Murad, A.; Pyun, J.-Y. Deep Recurrent Neural Networks for Human Activity Recognition. Sensors 2017, 17, 2556.
- Yang, J.B.; Nguyen, M.N.; San, P.P.; Li, X.L.; Krishnaswamy, S. Deep convolutional neural networks on multichannel time series for human activity recognition. In Proceedings of the 24th International Conference on Artificial Intelligence (IJCAI 15), Buenos Aires, Argentina, 25–31 July 2015.
- Ronao, C.A.; Cho, S.-B. Human activity recognition with smartphone sensors using deep learning neural networks. Expert Syst. Appl. 2016, 59, 235–244.
- Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780.
- Al-Saffar, A.A.M.; Tao, H.; Talab, M.A. Review of deep convolution neural network in image classification. In Proceedings of the 2017 International Conference on Radar, Antenna, Microwave, Electronics, and Telecommunications (ICRAMET), Jakarta, Indonesia, 23–24 October 2017; pp. 26–31.
- Zhang, L.; Wu, X.; Luo, D. Human activity recognition with HMM-DNN model. In Proceedings of the 2015 IEEE 14th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC), Beijing, China, 6–8 July 2015; pp. 192–197.
- Hassan, M.M.; Uddin, Z.; Mohamed, A.; Almogren, A. A robust human activity recognition system using smartphone sensors and deep learning. Future Gener. Comput. Syst. 2018, 81, 307–313.
- Wan, S.; Qi, L.; Xu, X.; Tong, C.; Gu, Z. Deep Learning Models for Real-time Human Activity Recognition with Smartphones. Mob. Netw. Appl. 2019, 25, 743–755.
- Ullah, S.; Kim, D.-H. Sparse Feature Learning for Human Activity Recognition. In Proceedings of the 2021 IEEE International Conference on Big Data and Smart Computing (BigComp), Jeju, Korea, 17–20 January 2021; pp. 309–312.
- Chang, J.C.; Amershi, S.; Kamar, E. Revolt: Collaborative crowdsourcing for labeling machine learning datasets. In Proceedings of the Conference on Human Factors in Computing Systems, New York, NY, USA, 8–13 May 2017; pp. 2334–2346.
- Fu, Y.; Zhu, X.; Li, B. A survey on instance selection for active learning. Knowl. Inf. Syst. 2012, 35, 249–283.
- Tomanek, K.; Hahn, U. Semi-supervised active learning for sequence labeling. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, Suntec, Singapore, 2–7 August 2009.
- Liu, R.; Chen, T.; Huang, L. Research on human activity recognition based on active learning. In Proceedings of the 2010 International Conference on Machine Learning and Cybernetics, Qingdao, China, 11–14 July 2010; Volume 1, pp. 285–290.
- Bota, P.; Silva, J.; Folgado, D.; Gamboa, H. A Semi-Automatic Annotation Approach for Human Activity Recognition. Sensors 2019, 19, 501.
- Stikic, M.; Van Laerhoven, K.; Schiele, B. Exploring semi-supervised and active learning for activity recognition. In Proceedings of the 2008 12th IEEE International Symposium on Wearable Computers, Pittsburgh, PA, USA, 28 September–1 October 2008; Volume 1, pp. 81–88.
- Gudur, G.K.; Sundaramoorthy, P.; Umaashankar, V. ActiveHARNet: Towards On-Device Deep Bayesian Active Learning for Human Activity Recognition; Association for Computing Machinery: New York, NY, USA, 2019; pp. 7–12. ISBN 978-145-036-771-4.
- Monarch, R. Human-in-the-Loop Machine Learning: Active Learning and Annotation for Human-Centered AI; Manning Publications: New York, NY, USA, 2021; pp. 1–456. ISBN 978-161-729-674-1.
- Asuncion, A.; Newman, D.J. UCI Machine Learning Repository; University of California, School of Information and Computer Science: Irvine, CA, USA, 2007; Available online: http://archive.ics.uci.edu/ml/index.php (accessed on 2 October 2018).
- Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016.
- Banos, O.; Garcia, R.; Holgado, J.A.; Damas, M.; Pomares, H.; Rojas, I.; Saez, A.; Villalonga, C. mHealthDroid: A novel framework for agile development of mobile health applications. In Proceedings of the 6th International Work-Conference on Ambient Assisted Living and Active Ageing, Belfast, UK, 2–5 December 2014; pp. 91–98.
Parameter | Value
---|---
booster | gbtree
scale_pos_weight | 1
learning_rate | 0.01
colsample_bytree | 0.4
subsample | 0.8
n_estimators | 200
max_depth | 4
gamma | 10
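For reference, the hyperparameters in the table above map onto XGBoost's canonical parameter names as follows. This is only a configuration sketch; the surrounding training data and any unstated defaults are not specified by the table.

```python
# XGBoost hyperparameters from the table, using the library's canonical keys
# (n_estimators is the scikit-learn wrapper's name for the boosting-round count).
params = {
    "booster": "gbtree",
    "scale_pos_weight": 1,
    "learning_rate": 0.01,
    "colsample_bytree": 0.4,
    "subsample": 0.8,
    "n_estimators": 200,
    "max_depth": 4,
    "gamma": 10,
}

# Usage (requires the xgboost package):
#   from xgboost import XGBClassifier
#   model = XGBClassifier(**params).fit(X_train, y_train)
```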
Extracted Features | |||
---|---|---|---|
fBodyAcc-skewness()-X | tBodyGyro-sma() | tGravityAccMag-std() | fBodyAccJerk-bandsEnergy()-9,16.1 |
tGravityAcc-min()-X | fBodyAcc-std()-Z | angle(X,gravityMean) | fBodyAccJerk-bandsEnergy()-1,24.1 |
fBodyAccJerk-std()-Y | fBodyAcc-max()-X | fBodyAccMag-std() | fBodyAcc-bandsEnergy()-17,24.2 |
tGravityAcc-energy()-X | fBodyAcc-mad()-X | tGravityAcc-max()-X | fBodyAcc-bandsEnergy()-1,8.1 |
fBodyAcc-max()-Z | angle(Y,gravityMean) | tBodyAccMag-std() | fBodyAcc-bandsEnergy()-1,24 |
tBodyAcc-iqr()-X | tBodyAcc-mad()-X | tBodyGyroMag-sma() | fBodyAcc-bandsEnergy()-1,16.2 |
tBodyAcc-max()-X | tGravityAcc-mean()-Y | tBodyAccJerk-std()-X | fBodyAcc-bandsEnergy()-9,16.2 |
fBodyAcc-kurtosis()-X | tBodyAccJerk-mad()-Y | tBodyGyro-energy()-Z | fBodyAccJerk-bandsEnergy()-1,16.2 |
fBodyAccMag-mad() | tGravityAcc-max()-Y | tBodyGyroJerk-std()-X | fBodyAccJerk-bandsEnergy()-17,24.2 |
tGravityAcc-mean()-X | tBodyGyroJerk-sma() | tBodyAccJerk-max()-Y | fBodyAccJerk-bandsEnergy()-33,48.2 |
tGravityAcc-arCoeff()-Z,1 | tBodyGyroJerkMag-mean() | tBodyGyroJerkMag-entropy() | fBodyAcc-bandsEnergy()-1,8.2 |
fBodyAccMag-energy() | tBodyGyroMag-mean() | fBodyBodyAccJerkMag-max() | fBodyGyro-bandsEnergy()-17,24 |
tBodyGyroJerk-mad()-X | tBodyGyroJerk-energy()-X | |
Total Data | Training Data | Validation Data | Testing Data | Unlabeled Data |
---|---|---|---|---|
10,299 | 500 | 1000 | 1000 | 7799 |
DNN Based Basic Model | | | Transferred Correct Classifier | |
---|---|---|---|---|---
Layers | Output Shape | Weight Freeze | Layers | Output Shape | Weight Freeze
FC Layer (Linear) | 50, 256 | False | FC Layer (Linear) | 50, 256 | True
ReLU | 50, 256 | False | ReLU | 50, 256 | True
FC Layer (Linear) | 256, 128 | False | FC Layer (Linear) | 256, 128 | True
ReLU | 256, 128 | False | ReLU | 256, 128 | True
Dropout | 0.2 | False | Dropout | 0.2 | False
FC Layer (Linear) | 128, 128 | False | FC Layer (Linear) | 128, 128 | False
ReLU | 128, 128 | False | ReLU | 128, 128 | False
Dropout | 0.2 | False | Dropout | 0.2 | False
FC Layer (Linear) | 128, 6 | False | FC Layer (Linear) | 128, 6 | False
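The table above can be sketched in PyTorch as follows; this is an illustrative reconstruction from the layer rows, not the authors' released code. The 6-way head classifies activities, the 2-way head classifies correct/incorrect, and the first two FC blocks are transferred and frozen per the "Weight Freeze = True" rows.

```python
import torch
import torch.nn as nn

def make_basic_model(n_features=50, n_classes=6):
    # Layer stack from the table: 50 -> 256 -> 128 -> 128 -> n_classes,
    # with ReLU activations and dropout of 0.2 between the later blocks
    return nn.Sequential(
        nn.Linear(n_features, 256), nn.ReLU(),
        nn.Linear(256, 128), nn.ReLU(),
        nn.Dropout(0.2),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Dropout(0.2),
        nn.Linear(128, n_classes),
    )

def make_correct_classifier(basic):
    # Same topology, but a 2-way (correct / incorrect) output head
    clf = make_basic_model(n_classes=2)
    # Transfer: copy the first two FC layers from the basic model and freeze
    # them, matching the "Weight Freeze = True" rows of the table
    clf[0].load_state_dict(basic[0].state_dict())
    clf[2].load_state_dict(basic[2].state_dict())
    for layer in (clf[0], clf[2]):
        for p in layer.parameters():
            p.requires_grad = False
    return clf
```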
Method | Number of Queries | Accuracy
---|---|---
Random Sampling | 1000 | 92.9%
Active Transfer Learning | 224 | 95.8%
Proposed Method | 198 | 95.5%
Total Data | Training Data | Validation Data | Testing Data | Unlabeled Data |
---|---|---|---|---|
10,239 | 100 | 1000 | 1000 | 8139 |
DNN Based Basic Model | | | Transferred Correct Classifier | |
---|---|---|---|---|---
Layers | Output Shape | Weight Freeze | Layers | Output Shape | Weight Freeze
1D CNN | 5, 8, kernel_size = 5 | False | 1D CNN | 5, 8, kernel_size = 5 | True |
ReLU | 5, 8, kernel_size = 5 | False | ReLU | 5, 8, kernel_size = 5 | True |
1D CNN | 8, 16, kernel_size = 5 | False | 1D CNN | 8, 16, kernel_size = 5 | True |
ReLU | 8, 16, kernel_size = 5 | False | ReLU | 8, 16, kernel_size = 5 | True |
1D CNN | 16, 8, kernel_size = 5 | False | 1D CNN | 16, 8, kernel_size = 5 | True |
ReLU | 16, 8, kernel_size = 5 | False | ReLU | 16, 8, kernel_size = 5 | True |
Dropout | 0.5 | False | Dropout | 0.5 | False |
MaxPooling1D | kernel_size = 5 | False | MaxPooling1D | kernel_size = 5 | False
FC Layer (Linear) | 872, 100 | False | FC Layer (Linear) | 872, 100 | False |
ReLU | 872, 100 | False | ReLU | 872, 100 | False |
FC Layer (Linear) | 100, 6 | False | FC Layer (Linear) | 100, 2 | False |
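The 1D CNN table above can likewise be sketched in PyTorch. The channel counts and kernel sizes come from the table; the input sequence length of 557 is an assumption chosen so that the flattened feature size works out to the 872 units shown (8 channels × 109 time steps after pooling). As with the FC model, transfer would copy and freeze the convolutional blocks for the 2-way correct classifier.

```python
import torch
import torch.nn as nn

SEQ_LEN = 557  # assumed input length; yields the 872-unit flatten in the table

class BasicCNN(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        # Conv channel progression 5 -> 8 -> 16 -> 8, kernel size 5 throughout
        self.features = nn.Sequential(
            nn.Conv1d(5, 8, kernel_size=5), nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=5), nn.ReLU(),
            nn.Conv1d(16, 8, kernel_size=5), nn.ReLU(),
            nn.Dropout(0.5),
            nn.MaxPool1d(kernel_size=5),
        )
        # 8 channels x 109 pooled steps = 872 inputs to the FC head
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(872, 100), nn.ReLU(),
            nn.Linear(100, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```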
Total Data | Training Data | Validation Data | Testing Data | Unlabeled Data |
---|---|---|---|---|
16,384 | 1000 | 2000 | 2000 | 11,384 |
Method | Number of Queries | Accuracy
---|---|---
Active Transfer Learning | 766 | 94.9%
Proposed Method | 693 | 95.9%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Oh, S.; Ashiquzzaman, A.; Lee, D.; Kim, Y.; Kim, J. Study on Human Activity Recognition Using Semi-Supervised Active Transfer Learning. Sensors 2021, 21, 2760. https://doi.org/10.3390/s21082760