A Smartphone-Based Cursor Position System in Cross-Device Interaction Using Machine Learning Techniques
Abstract
1. Introduction
- Self-contained. Our system uses only the built-in accelerometer to detect the movement of the mobile device; no extra hardware is required.
- Intuitive. Our system maps a natural movement gesture directly onto content manipulation on a large display. Because users already know cursor movement from desktop interaction, the technique requires no extra training.
- Physically unconstrained. The cross-device application does not require the user to be physically close to the large display. With a remote-control mechanism, multi-user participation becomes viable because users no longer need to stand side by side in front of the display.
2. Related Work
2.1. Interaction-Sensing Techniques
- Direct touch on a large display
- Using pointing devices
- Using built-in sensors
- Using built-in cameras
2.2. Sensor-Based Motion Detection
2.3. Cross-Platform Applications
3. System Overview
4. Cursor Initialization
5. Detecting Cursor Movements
5.1. Data Collection
5.2. Data Preprocessing
5.3. Feature Extraction
6. Classification and Results
6.1. Basic Cross-Validation
6.2. Ten-Fold Cross-Validation
7. Feature Selection
7.1. Algorithm-Based Methods
7.1.1. Linear Correlation Analysis
7.1.2. Select K Best
7.1.3. Recursive Feature Elimination
7.1.4. Algorithm-Based Results
7.2. Manual Feature Selection
7.2.1. Vector-Based Feature Selection
7.2.2. Feature-Category-Based Feature Selection
7.2.3. Combined Analysis
8. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Pew Research Center. Mobile Fact Sheet. Available online: http://www.pewinternet.org/fact-sheet/mobile/ (accessed on 25 December 2020).
- Sarabadani Tafreshi, A.E.; Soro, A.; Tröster, G. Automatic, Gestural, Voice, Positional, or Cross-Device Interaction? Comparing Interaction Methods to Indicate Topics of Interest to Public Displays. Front. ICT 2018, 5, 1–13.
- Ikematsu, K.; Siio, I. Memory Stones: An Intuitive Information Transfer Technique between Multi-Touch Computers. In HotMobile ’15, Proceedings of the 16th International Workshop on Mobile Computing Systems and Applications, Santa Fe, NM, USA, 12–13 February 2015; ACM Press: New York, NY, USA, 2015; pp. 3–8.
- Marquardt, N.; Ballendat, T.; Boring, S.; Greenberg, S.; Hinckley, K. Gradual Engagement: Facilitating Information Exchange between Digital Devices as a Function of Proximity. In ITS ’12, Proceedings of the 2012 ACM International Conference on Interactive Tabletops and Surfaces, Cambridge, MA, USA, 11–14 November 2012; ACM Press: New York, NY, USA, 2012; pp. 31–40.
- Paay, J.; Raptis, D.; Kjeldskov, J.; Skov, M.B.; Ruder, E.V.; Lauridsen, B.M. Investigating Cross-Device Interaction between a Handheld Device and a Large Display. In CHI ’17, Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; ACM Press: New York, NY, USA, 2017; pp. 6608–6619.
- Boring, S.; Baur, D.; Butz, A.; Gustafson, S.; Baudisch, P. Touch Projector: Mobile Interaction through Video. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Atlanta, GA, USA, 10–15 April 2010; Association for Computing Machinery: New York, NY, USA, 2010; pp. 2287–2296.
- Khan, A.M.; Lee, Y.K.; Lee, S.Y.; Kim, T.S. Human Activity Recognition via an Accelerometer-Enabled-Smartphone Using Kernel Discriminant Analysis. In Proceedings of the 5th International Conference on Future Information Technology, Busan, Korea, 20–24 May 2010; pp. 1–6.
- Kwapisz, J.R.; Weiss, G.M.; Moore, A.M. Activity Recognition Using Cell Phone Accelerometers. ACM SIGKDD Explor. Newsl. 2011, 12, 74–82.
- Strohmeier, P. DisplayPointers: Seamless Cross-Device Interactions. In ACE ’15, Proceedings of the 12th International Conference on Advances in Computer Entertainment Technology, Iskandar, Malaysia, 16–19 November 2015; ACM Press: New York, NY, USA, 2015; pp. 1–8.
- Schmidt, D.; Seifert, J.; Rukzio, E.; Gellersen, H. A Cross-Device Interaction Style for Mobiles and Surfaces. In DIS ’12, Proceedings of the Designing Interactive Systems Conference, Newcastle upon Tyne, UK, 11–15 June 2012; ACM Press: New York, NY, USA, 2012; pp. 318–327.
- Von Zadow, U.; Büschel, W.; Langner, R.; Dachselt, R. SleeD: Using a Sleeve Display to Interact with Touch-Sensitive Display Walls. In ITS ’14, Proceedings of the Ninth ACM International Conference on Interactive Tabletops and Surfaces, Dresden, Germany, 16–19 November 2014; ACM Press: New York, NY, USA, 2014; pp. 129–138.
- Seifert, J.; Bayer, A.; Rukzio, E. PointerPhone: Using Mobile Phones for Direct Pointing Interactions with Remote Displays. In Human-Computer Interaction—INTERACT; Kotzé, P., Marsden, G., Lindgaard, G., Wesson, J., Winckler, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8119.
- Nancel, M.; Chapuis, O.; Pietriga, E.; Yang, X.-D.; Irani, P.P.; Beaudouin-Lafon, M. High-Precision Pointing on Large Wall Displays Using Small Handheld Devices. In CHI ’13, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Paris, France, 27 April–2 May 2013; ACM Press: New York, NY, USA, 2013; pp. 831–840.
- Boring, S.; Jurmu, M.; Butz, A. Scroll, Tilt or Move It: Using Mobile Phones to Continuously Control Pointers on Large Public Displays. In OZCHI ’09, Proceedings of the 21st Annual Conference of the Australian Computer-Human Interaction Special Interest Group on Design: Open 24/7, Melbourne, Australia, 23–27 November 2009; ACM Press: New York, NY, USA, 2009; pp. 161–168.
- Rekimoto, J. SyncTap: Synchronous User Operation for Spontaneous Network Connection. Pers. Ubiquitous Comput. 2004, 8, 126–134.
- Peng, C.; Shen, G.; Zhang, Y.; Lu, S. Point&Connect: Intention-Based Device Pairing for Mobile Phone Users. In Proceedings of the 7th International Conference on Mobile Systems, Applications, and Services (MobiSys 2009), Kraków, Poland, 22–25 June 2009; ACM Press: New York, NY, USA, 2009; pp. 137–150.
- Yuan, H.; Maple, C.; Chen, C.; Watson, T. Cross-Device Tracking through Identification of User Typing Behaviours. Electron. Lett. 2018, 54, 957–959.
- Baur, D.; Boring, S.; Feiner, S. Virtual Projection: Exploring Optical Projection as a Metaphor for Multi-device Interaction. In CHI ’12, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Austin, TX, USA, 5–10 May 2012; Association for Computing Machinery: New York, NY, USA, 2012; pp. 1693–1702.
- Hagiwara, T.; Takashima, K.; Fjeld, M.; Kitamura, Y. CamCutter: Impromptu Vision-Based Cross-Device Application Sharing. Interact. Comput. 2019, 31, 539–554.
- Chen, K.-H.; Yang, J.-J.; Jaw, F.-S. Accelerometer-Based Fall Detection Using Feature Extraction and Support Vector Machine Algorithms. Instrum. Sci. Technol. 2016, 44, 333–342.
- Rakhman, A.Z.; Nugroho, L.E.; Widyawan; Kurnianingsih. Fall Detection System Using Accelerometer and Gyroscope Based on Smartphone. In Proceedings of the 2014 1st International Conference on Information Technology, Computer, and Electrical Engineering, Semarang, Indonesia, 7–8 November 2014; pp. 99–104.
- Ferrero, R.; Gandino, F.; Montrucchio, B.; Rebaudengo, M.; Velasco, A.; Benkhelifa, I. On Gait Recognition with Smartphone Accelerometer. In Proceedings of the 2015 4th Mediterranean Conference on Embedded Computing (MECO), Budva, Montenegro, 14–18 June 2015; pp. 368–373.
- Biørn-Hansen, A.; Grønli, T.M.; Ghinea, G. A Survey and Taxonomy of Core Concepts and Research Challenges in Cross-Platform Mobile Development. ACM Comput. Surv. 2019, 51, 108:1–108:34.
- Rieger, C.; Majchrzak, T.A. Towards the Definitive Evaluation Framework for Cross-platform App Development Approaches. J. Syst. Softw. 2019, 153, 175–199.
- Bohan, M.; Thompson, S.; Samuelson, P.J. Kinematic Analysis of Mouse Cursor Positioning as a Function of Movement Scale and Joint Set. In Proceedings of the 8th Annual International Conference on Industrial Engineering—Theory, Applications and Practice, Las Vegas, NV, USA, 10–12 November 2003; pp. 442–447.
- Calmes, L. Binaural Sound Source Localization—Software. Available online: http://www.laurentcalmes.lu/soundloc_software.html (accessed on 28 July 2019).
- Experiment: How Fast Your Brain Reacts to Stimuli. Available online: https://backyardbrains.com/experiments/reactiontime (accessed on 24 January 2019).
- Welch, P. The Use of Fast Fourier Transform for the Estimation of Power Spectra: A Method Based on Time Averaging over Short, Modified Periodograms. IEEE Trans. Audio Electroacoust. 1967, 15, 70–73.
- Stein, J.Y. Digital Signal Processing: A Computer Science Perspective; John Wiley & Sons: Hoboken, NJ, USA, 2000; p. 115.
- Zhang, T.; Wang, J.; Xu, L.; Liu, P. Fall Detection by Wearable Sensor and One-Class SVM Algorithm. Intell. Comput. Signal Process. Pattern Recognit. 2006, 345, 858–863.
- Preece, S.J.; Goulermas, J.Y.; Kenney, L.P.J.; Howard, D. A Comparison of Feature Extraction Methods for the Classification of Dynamic Activities from Accelerometer Data. IEEE Trans. Biomed. Eng. 2009, 56, 871–879.
- Bao, L.; Intille, S.S. Activity Recognition from User-Annotated Acceleration Data. In Pervasive Computing; Ferscha, A., Mattern, F., Eds.; Springer: Berlin/Heidelberg, Germany, 2004; Volume 3001.
- Lee, K.; Kwan, M.-P. Physical Activity Classification in Free-Living Conditions Using Smartphone Accelerometer Data and Exploration of Predicted Results. Comput. Environ. Urban Syst. 2018, 67, 124–131.
- Liu, Z.-T.; Wu, M.; Cao, W.-H.; Mao, J.-W.; Xu, J.-P.; Tan, G.-Z. Speech Emotion Recognition Based on Feature Selection and Extreme Learning Machine Decision Tree. Neurocomputing 2018, 273, 271–280.
- Ali, M.; Aittokallio, T. Machine Learning and Feature Selection for Drug Response Prediction in Precision Oncology Applications. Biophys. Rev. 2019, 11, 31–39.
- How to Interpret a Correlation Coefficient r. Dummies. Available online: https://www.dummies.com/education/math/statistics/how-to-interpret-a-correlation-coefficient-r/ (accessed on 28 March 2019).
- Yan, K.; Zhang, D. Feature Selection and Analysis on Correlated Gas Sensor Data with Recursive Feature Elimination. Sens. Actuators B Chem. 2015, 212, 353–363.
- Li, F.; Shui, Y.; Chen, W. Up and down Buses Activity Recognition Using Smartphone Accelerometer. In Proceedings of the 2016 IEEE Information Technology, Networking, Electronic and Automation Control Conference, Chongqing, China, 20–22 May 2016; pp. 761–765.
Session | Actions | Timestamp |
---|---|---|
Horizontal (x-axis) | Stand | 0–2 s |
Horizontal (x-axis) | Move right | 2–5 s |
Horizontal (x-axis) | Move left | 5–7 s |
Vertical (y-axis) | Stand | 0–2 s |
Vertical (y-axis) | Move up | 2–5 s |
Vertical (y-axis) | Move down | 5–7 s |
Labels | Timestamp |
---|---|
0 (stand_on_x) | 0–1 s |
1 (move_right) | 3–4 s |
2 (move_left) | 6–7 s |
3 (stand_on_y) | 0–1 s |
4 (move_up) | 3–4 s |
5 (move_down) | 6–7 s |
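Taken together, the two tables above mean each 7 s session yields three labeled 1 s windows. A minimal slicing sketch in Python, assuming a hypothetical 50 Hz sampling rate and array-shaped sensor logs (neither is specified in this excerpt):

```python
import numpy as np

FS = 50  # assumed sampling rate in Hz; the paper's actual rate is not given here

# Label windows for one horizontal (x-axis) session, per the tables above.
X_SESSION_SPANS = [(0.0, 1.0, 0),  # 0 (stand_on_x)
                   (3.0, 4.0, 1),  # 1 (move_right)
                   (6.0, 7.0, 2)]  # 2 (move_left)

def slice_labeled_windows(session, spans, fs=FS):
    """Cut labeled 1 s windows out of a (n_samples, n_channels) session array."""
    windows, labels = [], []
    for start, end, label in spans:
        windows.append(session[int(start * fs):int(end * fs)])
        labels.append(label)
    return windows, labels

# Usage with placeholder data: a 7 s, 3-axis accelerometer recording.
session = np.random.randn(7 * FS, 3)
windows, labels = slice_labeled_windows(session, X_SESSION_SPANS)
```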
Domains | Feature Categories |
---|---|
Time domain | Mean, standard deviation, minimum-maximum difference, median, energy |
Frequency domain | Dominant frequency, spectral energy |
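A sketch of how these seven categories might be computed for one windowed signal. Welch's method (cited in the references) estimates the power spectrum; the exact definitions of energy and spectral energy below are our assumptions, not the paper's formulas:

```python
import numpy as np
from scipy.signal import welch

def extract_features(x, fs=50):
    """Seven feature categories for a 1-D window; fs = 50 Hz is assumed."""
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), 64))
    return {
        "mean": float(np.mean(x)),
        "std": float(np.std(x)),
        "min_max_gap": float(np.max(x) - np.min(x)),
        "median": float(np.median(x)),
        "energy": float(np.mean(x ** 2)),           # assumed: mean squared amplitude
        "main_freq": float(freqs[np.argmax(psd)]),  # dominant frequency via Welch
        "spectral_energy": float(np.sum(psd)),      # assumed: summed periodogram
    }
```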
Vectors | Axes | Mean | Standard Deviation | Minimum-Maximum Difference | Median | Energy | Dominant Frequency | Spectral Energy |
---|---|---|---|---|---|---|---|---|
Acceleration | X | mean_ax | std_ax | min_max_gap_ax | median_ax | energy_ax | main_freq_ax | spectral_energy_ax |
Acceleration | Y | mean_ay | std_ay | min_max_gap_ay | median_ay | energy_ay | main_freq_ay | spectral_energy_ay |
Acceleration | Z | mean_az | std_az | min_max_gap_az | median_az | energy_az | main_freq_az | spectral_energy_az |
Rotation | X | mean_gx | std_gx | min_max_gap_gx | median_gx | energy_gx | main_freq_gx | spectral_energy_gx |
Rotation | Y | mean_gy | std_gy | min_max_gap_gy | median_gy | energy_gy | main_freq_gy | spectral_energy_gy |
Rotation | Z | mean_gz | std_gz | min_max_gap_gz | median_gz | energy_gz | main_freq_gz | spectral_energy_gz |
Speed | X | mean_vx | std_vx | min_max_gap_vx | median_vx | energy_vx | main_freq_vx | spectral_energy_vx |
Speed | Y | mean_vy | std_vy | min_max_gap_vy | median_vy | energy_vy | main_freq_vy | spectral_energy_vy |
Speed | Z | mean_vz | std_vz | min_max_gap_vz | median_vz | energy_vz | main_freq_vz | spectral_energy_vz |
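The 63 names in this table are just the cross product of the seven categories and the nine per-axis signals. A small helper that assembles the vector, reusing `extract_features` from the sketch above (the underscore names mirror the table):

```python
import numpy as np

# a = acceleration, g = rotation (angular velocity), v = speed, per the table.
SIGNALS = ["ax", "ay", "az", "gx", "gy", "gz", "vx", "vy", "vz"]

def feature_vector(window_by_signal):
    """Build the 63-entry vector; window_by_signal maps e.g. 'ax' -> 1-D array."""
    names, values = [], []
    for sig in SIGNALS:
        for category, value in extract_features(window_by_signal[sig]).items():
            names.append(f"{category}_{sig}")  # e.g. 'mean_ax'
            values.append(value)
    return names, np.array(values)
```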
Classifiers | Accuracy Mean | Accuracy Std |
---|---|---|
Gradient boosting | 83.65% | 11.18% |
LDA | 83.43% | 7.43% |
Naïve Bayes | 79.42% | 6.68% |
Decision tree | 76.91% | 11.66% |
Linear SVM | 76.24% | 5.60% |
Random forest | 72.70% | 6.22% |
Neural net | 72.47% | 7.06% |
Nearest neighbors | 66.51% | 11.05% |
AdaBoost | 48.24% | 4.03% |
RBF SVM | 44.27% | 8.66% |
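A sketch of how such a comparison might be run with scikit-learn's ten-fold cross-validation. All hyperparameters are library defaults here, which is an assumption; the table's exact figures would not be reproduced without the authors' settings:

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

CLASSIFIERS = {
    "Gradient boosting": GradientBoostingClassifier(),
    "LDA": LinearDiscriminantAnalysis(),
    "Naïve Bayes": GaussianNB(),
    "Decision tree": DecisionTreeClassifier(),
    "Linear SVM": SVC(kernel="linear"),
    "Random forest": RandomForestClassifier(),
    "Neural net": MLPClassifier(max_iter=1000),
    "Nearest neighbors": KNeighborsClassifier(),
    "AdaBoost": AdaBoostClassifier(),
    "RBF SVM": SVC(kernel="rbf"),
}

def compare_classifiers(X, y):
    """Ten-fold CV accuracy (mean and std) for each candidate classifier."""
    for name, clf in CLASSIFIERS.items():
        scores = cross_val_score(clf, X, y, cv=10)
        print(f"{name}: mean={scores.mean():.2%}, std={scores.std():.2%}")
```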
Gradient Boosting
stand_on_x | move_right | move_left | stand_on_y | move_up | move_down | Classified as |
---|---|---|---|---|---|---|
208 | 1 | 0 | 116 | 0 | 0 | stand_on_x |
0 | 277 | 14 | 0 | 38 | 0 | move_right |
0 | 1 | 311 | 0 | 5 | 19 | move_left |
70 | 1 | 0 | 261 | 0 | 0 | stand_on_y |
0 | 26 | 3 | 0 | 289 | 0 | move_up |
0 | 0 | 22 | 0 | 0 | 307 | move_down |

LDA
stand_on_x | move_right | move_left | stand_on_y | move_up | move_down | Classified as |
---|---|---|---|---|---|---|
218 | 0 | 0 | 107 | 0 | 0 | stand_on_x |
0 | 307 | 0 | 0 | 22 | 0 | move_right |
0 | 6 | 307 | 0 | 18 | 5 | move_left |
127 | 1 | 0 | 204 | 0 | 0 | stand_on_y |
0 | 29 | 9 | 1 | 279 | 0 | move_up |
0 | 0 | 0 | 0 | 1 | 328 | move_down |

Naïve Bayes
stand_on_x | move_right | move_left | stand_on_y | move_up | move_down | Classified as |
---|---|---|---|---|---|---|
206 | 2 | 2 | 115 | 0 | 0 | stand_on_x |
0 | 242 | 23 | 0 | 64 | 0 | move_right |
0 | 16 | 252 | 0 | 52 | 16 | move_left |
73 | 3 | 0 | 255 | 1 | 0 | stand_on_y |
0 | 3 | 19 | 0 | 296 | 0 | move_up |
0 | 2 | 14 | 0 | 0 | 313 | move_down |
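Per-class tables like these can be produced from out-of-fold predictions; a sketch (the ten-fold scheme and default hyperparameters are assumptions):

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict

LABELS = ["stand_on_x", "move_right", "move_left",
          "stand_on_y", "move_up", "move_down"]

def cv_confusion(X, y):
    """Rows are true classes, columns are predictions (sklearn convention)."""
    y_pred = cross_val_predict(GradientBoostingClassifier(), X, y, cv=10)
    return confusion_matrix(y, y_pred, labels=list(range(len(LABELS))))
```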
Feature Selection Method | Selected Features | Classifiers | Accuracy Mean | Accuracy Std |
---|---|---|---|---|
Linear correlation | mean_vz, median_vz, energy_vz, spectral_energy_vz | Naïve Bayes | 73.20% | 12.07% |
 | | RBF SVM | 72.57% | 11.50% |
 | | Neural net | 72.13% | 13.20% |
Select K Best (ANOVA) | median_vz, mean_vz, spectral_energy_vz, energy_vz, spectral_energy_vy, energy_vy, spectral_energy_vx, energy_vx, median_vy, mean_vy | LDA | 78.75% | 8.13% |
 | | Random forest | 76.26% | 9.59% |
 | | Gradient boosting | 76.12% | 10.81% |
Recursive feature elimination | mean_ax, mean_ay, mean_az, mean_vx, mean_vy, mean_vz, mean_gx, mean_gz, min_max_gap_ax, min_max_gap_ay, min_max_gap_az, min_max_gap_vy, min_max_gap_gy, min_max_gap_gz, median_ax, median_ay, median_az, median_vx, median_vy, median_vz, median_gz, main_freq_gy, main_freq_gz, energy_ax, energy_az, energy_vx, energy_vz, energy_gy, energy_gz, spectral_energy_vy, spectral_energy_vz | Gradient boosting | 84.07% | 10.74% |
 | | LDA | 82.82% | 8.14% |
 | | Naïve Bayes | 80.40% | 4.26% |
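Sketches of the three algorithm-based selectors named in this table. The correlation threshold and the RFE base estimator are our assumptions; the excerpt does not document the authors' choices:

```python
import numpy as np
from sklearn.feature_selection import RFE, SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

def correlation_mask(X, y, threshold=0.5):
    """Keep features whose |Pearson r| with the label exceeds an assumed cutoff."""
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return np.abs(r) >= threshold

def select_k_best_mask(X, y, k=10):
    """ANOVA F-test scoring, matching the 10-feature row above."""
    return SelectKBest(f_classif, k=k).fit(X, y).get_support()

def rfe_mask(X, y, n=31):
    """RFE needs an estimator with coefficients; logistic regression is assumed."""
    rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=n)
    return rfe.fit(X, y).get_support()
```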
In the table below, "all seven feature categories" means mean, std, min_max_gap, median, energy, main_freq, and spectral_energy for each listed axis.

Vector Selection | Selected Features | Number of Features | Classifiers | Accuracy Mean | Accuracy Std |
---|---|---|---|---|---|
Acceleration | All seven feature categories over ax, ay, az | 21 | Gradient boosting | 63.87% | 5.27% |
 | | | Random forest | 61.07% | 5.26% |
 | | | Decision tree | 57.12% | 4.28% |
Angular velocity | All seven feature categories over gx, gy, gz | 21 | Decision tree | 51.96% | 5.31% |
 | | | Gradient boosting | 51.96% | 3.97% |
 | | | Random forest | 50.67% | 7.45% |
Speed | All seven feature categories over vx, vy, vz | 21 | LDA | 80.03% | 9.84% |
 | | | Random forest | 79.98% | 8.92% |
 | | | Gradient boosting | 79.06% | 6.92% |
Acceleration and angular velocity | All seven feature categories over ax, ay, az, gx, gy, gz | 42 | Gradient boosting | 65.08% | 5.33% |
 | | | Random forest | 62.20% | 3.51% |
 | | | Decision tree | 59.59% | 5.95% |
Speed and acceleration | All seven feature categories over ax, ay, az, vx, vy, vz | 42 | Gradient boosting | 83.25% | 10.90% |
 | | | Naïve Bayes | 83.01% | 7.36% |
 | | | LDA | 81.71% | 7.05% |
Speed and angular velocity | All seven feature categories over vx, vy, vz, gx, gy, gz | 42 | LDA | 82.31% | 8.94% |
 | | | Gradient boosting | 81.75% | 7.76% |
 | | | Linear SVM | 75.71% | 6.55% |
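Vector-based selection reduces to slicing columns of the 63-feature matrix by the vector letter embedded in each name; a sketch assuming the underscore names and the `compare_classifiers` helper from earlier sketches:

```python
def vector_columns(names, vectors=("v",)):
    """Column indices whose axis tag (e.g. 'vx' in 'mean_vx') starts with
    one of the given vector letters: 'a', 'g', or 'v'."""
    return [i for i, n in enumerate(names) if n.rsplit("_", 1)[-1][0] in vectors]

# Usage: keep the 21 speed features, then rerun the same ten-fold comparison.
# speed_idx = vector_columns(names, vectors=("v",))
# compare_classifiers(X[:, speed_idx], y)
```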
Feature Category | Selected Features | Number of Features | Classifiers | Accuracy Mean | Accuracy Std |
---|---|---|---|---|---|
Mean | mean_ax, mean_ay, mean_az, mean_gx, mean_gy, mean_gz, mean_vx, mean_vy, mean_vz | 9 | Naïve Bayes | 85.25% | 6.53% |
 | | | Gradient boosting | 84.00% | 12.42% |
 | | | LDA | 77.98% | 11.57% |
Std | std_ax, std_ay, std_az, std_gx, std_gy, std_gz, std_vx, std_vy, std_vz | 9 | Gradient boosting | 56.51% | 4.01% |
 | | | Decision tree | 48.54% | 6.84% |
 | | | Random forest | 46.51% | 6.40% |
Min_max_gap | min_max_gap_ax, min_max_gap_ay, min_max_gap_az, min_max_gap_gx, min_max_gap_gy, min_max_gap_gz, min_max_gap_vx, min_max_gap_vy, min_max_gap_vz | 9 | Gradient boosting | 58.05% | 4.26% |
 | | | Decision tree | 50.29% | 5.56% |
 | | | Random forest | 50.27% | 4.88% |
Median | median_ax, median_ay, median_az, median_gx, median_gy, median_gz, median_vx, median_vy, median_vz | 9 | Naïve Bayes | 85.36% | 7.27% |
 | | | Gradient boosting | 82.54% | 10.69% |
 | | | Random forest | 78.55% | 7.82% |
Energy | energy_ax, energy_ay, energy_az, energy_gx, energy_gy, energy_gz, energy_vx, energy_vy, energy_vz | 9 | Gradient boosting | 83.00% | 11.06% |
 | | | Naïve Bayes | 78.50% | 6.23% |
 | | | Decision tree | 77.93% | 11.29% |
Main_freq | main_freq_ax, main_freq_ay, main_freq_az, main_freq_gx, main_freq_gy, main_freq_gz, main_freq_vx, main_freq_vy, main_freq_vz | 9 | LDA | 25.95% | 1.60% |
 | | | Linear SVM | 25.85% | 0.72% |
 | | | Decision tree | 25.49% | 1.06% |
Spectral_energy | spectral_energy_ax, spectral_energy_ay, spectral_energy_az, spectral_energy_gx, spectral_energy_gy, spectral_energy_gz, spectral_energy_vx, spectral_energy_vy, spectral_energy_vz | 9 | Gradient boosting | 83.10% | 11.42% |
 | | | Decision tree | 78.42% | 11.84% |
 | | | Naïve Bayes | 78.29% | 7.19% |
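Category-based selection is the complementary slice, keyed on the name prefix instead of the axis tag; same assumptions as the previous sketch:

```python
def category_columns(names, category="mean"):
    """Column indices for one feature category, e.g. the nine 'mean_*' features."""
    return [i for i, n in enumerate(names) if n.rsplit("_", 1)[0] == category]

# Usage: the nine-feature 'mean' subset from the table above.
# mean_idx = category_columns(names, "mean")
# compare_classifiers(X[:, mean_idx], y)
```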
Description | Feature Set | Number of Features | Gradient Boosting | LDA | Naïve Bayes |
---|---|---|---|---|---|
Speed and mean | mean_vx, mean_vy, mean_vz | 3 | 79.10% | 72.34% | 74.38% |
All vectors and mean | mean_vx, mean_vy, mean_vz, mean_ax, mean_ay, mean_az, mean_gx, mean_gy, mean_gz | 9 | 84.00% | 77.98% | 85.25% |
Speed and median | median_vx, median_vy, median_vz | 3 | 78.32% | 72.34% | 74.38% |
All vectors and median | median_vx, median_vy, median_vz, median_ax, median_ay, median_az, median_gx, median_gy, median_gz | 9 | 82.54% | 77.88% | 85.36% |
Speed and all feature categories | All seven feature categories over vx, vy, vz | 21 | 79.06% | 80.03% | 71.43% |
All 63 features | All 63 features | 63 | 83.65% | 83.43% | 79.42% |