Balancing Timing and Accuracy Requirements in Human Activity Recognition Mobile Applications
Abstract
1. Introduction
2. Related Work
3. Approach
3.1. Approach Fundamentals
3.2. Framework Description
- A statistical-based classification stage that acts as the conventional prediction step in human activity recognition, using supervised statistical learning methods. In this approach, activity patterns previously stored as a training model are exploited to recognize user states through statistical inference.
- A semantic-based classification stage that uses a human activity recognition ontology and knowledge-driven mechanisms to carry out recognition even with little training data. This method analyses and detects the activity the user performs according to contextual knowledge, by applying a set of candidate semantic inferences.
- Frequency-based reasoning, which manages the heterogeneous outputs produced by the semantic-based classification method. It reasons over these activity outputs to select the most appropriate inferred semantic activity by counting the occurrences of each semantic activity.
- Transition-probability reasoning, the final stage, matches the statistical activity label inferred by the supervised learning method against the semantic activity label in order to refine the prediction of the current activity. This refinement adjusts the prediction through transition-probability weights, which capture the probability of moving from the previous state to the current one based on a transition probability graph (TPG; see Figure 3). A minimal sketch of the two reasoning stages follows this list.
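As an illustration of the two reasoning stages, the following Python sketch combines frequency-based selection of a semantic label with a transition-probability lookup. The TPG weights, function names and fallback policy here are our assumptions for illustration, not the values or exact procedure used by the framework:

```python
from collections import Counter

# Hypothetical transition-probability graph (TPG): weights are purely
# illustrative, not values learned or reported in the paper.
TPG = {
    ("Sitting", "Sitting"): 0.70, ("Sitting", "Standing"): 0.25,
    ("Sitting", "Lying"): 0.05,
    ("Standing", "Walking"): 0.40, ("Standing", "Sitting"): 0.35,
    ("Standing", "Standing"): 0.25,
}

def frequency_based_reasoning(semantic_outputs):
    """Pick the semantic activity inferred most often in the current window."""
    return Counter(semantic_outputs).most_common(1)[0][0]

def transition_probability_reasoning(previous_activity, statistical_label,
                                     semantic_label):
    """Refine the prediction: keep the statistical label unless the TPG gives
    the semantic label a higher transition weight from the previous state."""
    w_stat = TPG.get((previous_activity, statistical_label), 0.0)
    w_sem = TPG.get((previous_activity, semantic_label), 0.0)
    return semantic_label if w_sem > w_stat else statistical_label

# Example: several semantic inferences were produced for the current window.
semantic_label = frequency_based_reasoning(
    ["Standing", "Sitting", "Standing", "Standing"])       # -> "Standing"
refined = transition_probability_reasoning("Sitting", "Lying", semantic_label)
print(refined)  # "Standing": Sitting -> Standing outweighs Sitting -> Lying
```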
3.3. Inference Rules
4. Case Study
4.1. ARApp Description
4.2. Training Model and Activity Recognition Scenarios
5. Results
5.1. Assessment of Activity Prediction
5.2. Improvement of Transition Delays
6. Conclusions
| Name | Description | Label |
|---|---|---|
| Sitting | The body rests, supported by some kind of furniture | St |
| Standing | Upright position on the feet | Sd |
| Walking | Moving along on foot | Wl |
| Lying | Horizontal or flat position, as on a bed or the ground | Ly |
| Running | Moving steadily by springing steps so that both feet leave the ground in each step | Rn |
| Upstairs | Moving to a higher floor | Us |
| Downstairs | Moving to a lower floor | Ds |
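For illustration, the activity labels above can be kept in a simple mapping (the constant name is ours):

```python
# Mapping from activity names to the short labels in the table above.
ACTIVITY_LABELS = {
    "Sitting": "St", "Standing": "Sd", "Walking": "Wl", "Lying": "Ly",
    "Running": "Rn", "Upstairs": "Us", "Downstairs": "Ds",
}
```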
| Module | Description |
|---|---|
| Sensor | Considered the backbone of all other modules; it is designed to provide semantic activity inferences and to interact with the rest of the modules. |
| Activity | Identifies the physical activity carried out at a given time. |
| User | Represents information concerning users, such as their preferences, needs or personal interests. |
| Location | Determines where users are performing activities. Each location can be either indoor or outdoor. This ontology allows geographic coordinates (e.g., latitude and longitude) to be recorded. |
| Time | Points out the interval of an activity; it also records activity start-time and end-time information. |
| Object | Covers physical objects such as smartphones, tablets and smart watches; essentially, devices with embedded sensors, which are exploited to gather and send sensor information for detecting physical activity changes. |
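A minimal sketch of how an activity individual might be asserted against these modules, using rdflib in Python. The `uni:` class and property names are taken from the transition-aware rule shown below; the namespace IRI, individual names and timestamp literal are placeholders:

```python
from rdflib import Graph, Literal, Namespace, RDF

UNI = Namespace("http://example.org/uni#")  # placeholder namespace IRI

g = Graph()
act, t = UNI.currentActivity, UNI.currentInstant  # illustrative individuals
g.add((act, RDF.type, UNI.HumanActivity))
g.add((act, UNI.ActivityName, Literal("Sitting")))
g.add((t, RDF.type, UNI.TimeInstant))
g.add((t, UNI.Instant, Literal("10:42:05")))      # placeholder timestamp
g.add((act, UNI.takePlaceAt, t))

print(g.serialize(format="turtle"))
```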
| Condition | Action |
|---|---|
| [Transition-aware-rule-for-sitting-activity: (?previousAct rdf:type uni:HumanActivity) (?previousAct uni:ActivityName 'Sitting') (?previousTime rdf:type uni:TimeInstant) (?previousAct uni:takePlaceAt ?previousTime) (?previousTime uni:Instant '"+ ftime +"') (?currentAct rdf:type uni:SemanticOutput) (?currentTime rdf:type uni:TimeInstant) … | → (?currentAct uni:FirstSemanticArgument 'Sitting') (?currentAct uni:SecondSemanticArgument 'Standing') (?currentAct uni:ThirdSemanticArgument 'Lying')] |
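Restated outside the rule engine, the rule's effect is: when the previous activity at the injected time `ftime` was Sitting, the plausible semantic arguments for the current output are Sitting, Standing and Lying. A Python sketch of that lookup; only the sitting rule appears in the paper, so only that entry is filled:

```python
# Candidate semantic arguments asserted by each transition-aware rule;
# entries for other activities would come from the ontology's rule set.
TRANSITION_CANDIDATES = {
    "Sitting": ("Sitting", "Standing", "Lying"),
}

def apply_transition_rule(previous_activity):
    """Return the ordered semantic arguments the matching rule would assert."""
    return TRANSITION_CANDIDATES.get(previous_activity, ())

print(apply_transition_rule("Sitting"))  # ('Sitting', 'Standing', 'Lying')
```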
| Classifier Algorithm | Transition Delay (s) | New Transition Delay (s) |
|---|---|---|
| Random Forest (RF) | 0.379 | 0.145 |
| Decision Tree (DT) | 0.508 | 0.344 |
| K-Nearest Neighbor (KNN) | 0.519 | 0.340 |
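For context, the relative improvement implied by these figures (our arithmetic, derived from the table above):

```python
# Relative reduction in transition delay, computed from the table above.
delays = {"RF": (0.379, 0.145), "DT": (0.508, 0.344), "KNN": (0.519, 0.340)}
for name, (before, after) in delays.items():
    print(f"{name}: {100 * (before - after) / before:.1f}% shorter delay")
# RF: 61.7%, DT: 32.3%, KNN: 34.5%
```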