MLGaze: Machine Learning-Based Analysis of Gaze Error Patterns in Consumer Eye Tracking Systems
Abstract
1. Introduction
1.1. Research Questions and Motivation
1.2. Background and Scope
1.3. Organization of Contents
2. Experimental Methodology and Data Exploration
2.1. Eye Tracking Data Collection
2.2. Eye Tracking UI and Device
2.3. Setup and Experiment Details
2.3.1. Setup Description
2.3.2. Participants and Demography Information
2.4. Data Preparation
2.5. Exploratory Data Analysis and Visualizations
2.5.1. Studying Gaze Data Statistics for Desktop and Tablets
2.5.2. Studying Gaze Error Distributions for Desktop and Tablet Data
2.5.3. Studying Spatial Error Distribution Properties
2.5.4. Studying Data Correlations
2.5.5. Discussions
3. Identification of Error Patterns in Eye Gaze Data
3.1. Objectives and Task Definition for Classification of Desktop and Tablet Data
3.2. Data Augmentation Strategies
3.3. Feature Engineering, Exploration, and Selection
3.4. Classification Models: k-NN, SVM, and ANN
3.5. k-NN, SVM, and MLP-Based Classification Results on Desktop Data
3.5.1. Results, Task I: Classification of Desktop Data for Different User Distances
3.5.2. Results, Task II: Classification of Desktop Data for Different Head Poses
3.5.3. Results, Task III: Classification on Merged User Distance and Head Pose Datasets from Desktop
3.6. k-NN, SVM, and MLP-Based Classification Results on Tablet Data
3.6.1. Results, Task IV: Classification of Tablet Data for Different User Distances
3.6.2. Results, Task V: Classification of Tablet Data for Different Tablet Poses
3.6.3. Results, Task VI: Classification of Tablet Data for Mixed User Distance and Tablet Pose Datasets
3.7. Discussion
4. Modelling and Prediction of Gaze Errors
4.1. Head Pose Error Model
4.2. Platform Pose Error Model
4.3. Establishment of the Error Models
5. MLGaze: An Open Machine Learning Repository for Gaze Error Pattern Recognition
6. Conclusions and Future Work
Supplementary Materials
Funding
Conflicts of Interest
Appendix A
Appendix B
References
Desktop | UD50 | UD60 | UD70 | UD80 | Roll 20 | Yaw 20 | Pitch 20 |
---|---|---|---|---|---|---|---|
Mean | 3.37 | 2.04 | 1.21 | 1.02 | 3.7 | 8.51 | 3.15 |
MAD | 3.49 | 1.77 | 0.82 | 0.66 | 3.63 | 10.0 | 1.90 |
IQR | 1.13 | 0.77 | 0.76 | 0.79 | 1.21 | 1.49 | 1.59 |
95% interval | 3.15–3.59 | 1.90–2.18 | 1.15–1.26 | 1.16–1.24 | 3.30–4.09 | 7.60–9.43 | 2.83–3.47 |
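The robust statistics reported in these tables (mean, MAD, IQR, 95% interval) can be reproduced from raw angular-error samples. The sketch below uses NumPy on synthetic data; the bootstrap interval is one plausible way to obtain the 95% bounds and is an assumption, not necessarily the paper's exact procedure.

```python
import numpy as np

def gaze_error_stats(errors, n_boot=1000, seed=0):
    """Summarize a 1-D array of gaze angular errors (degrees):
    mean, median absolute deviation (MAD), interquartile range (IQR),
    and a bootstrap 95% interval for the mean."""
    errors = np.asarray(errors, dtype=float)
    rng = np.random.default_rng(seed)
    mad = np.median(np.abs(errors - np.median(errors)))
    q75, q25 = np.percentile(errors, [75, 25])
    boot_means = [rng.choice(errors, size=errors.size, replace=True).mean()
                  for _ in range(n_boot)]
    lo, hi = np.percentile(boot_means, [2.5, 97.5])
    return {"mean": errors.mean(), "MAD": mad, "IQR": q75 - q25,
            "ci95": (lo, hi)}

# Synthetic example: errors clustered around 1.2 degrees (cf. the UD70 column)
sample = np.random.default_rng(1).normal(1.2, 0.4, size=500)
print(gaze_error_stats(sample))
```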
Tablet | UD50 | UD60 | UD70 | UD80 | Roll 20 | Yaw 20 | Pitch 20 |
---|---|---|---|---|---|---|---|
Mean | 2.68 | 2.46 | 0.59 | 1.55 | 7.74 | 4.25 | 2.45 |
MAD | 0.38 | 0.42 | 0.29 | 0.24 | 0.77 | 0.60 | 0.46 |
IQR | 0.39 | 0.54 | 0.33 | 0.22 | 0.75 | 0.53 | 0.23 |
95% interval | 2.65–2.71 | 2.43–2.48 | 0.57–0.61 | 1.53–1.57 | 7.69–7.80 | 4.22–4.29 | 2.41–2.49 |
Samples for Train, Test | Original Feature Set (20 Features) | Augmentation Strategies | Samples Per Person | Samples and Classes
---|---|---|---|---
Total participants: 20; 12–16 participants for training, 8–5 for test; data labelled and randomly shuffled | Mean, SD, IQR, 0.95 interval bounds for each sample | | 10 × gaze error, 10 × yaw error, 10 × pitch error; merged dataset: 30 samples | |
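A common family of augmentation strategies for small signal datasets is controlled noise addition. The sketch below is a generic illustration of that idea; the function name, noise level, and number of copies are assumptions, not the paper's exact method.

```python
import numpy as np

def augment_with_noise(samples, n_copies=5, noise_sd=0.05, seed=0):
    """Expand a set of 1-D error signals by adding zero-mean Gaussian
    noise scaled to a fraction of each signal's standard deviation.
    `samples` is an (n_samples, n_points) array; returns the originals
    stacked with n_copies noisy variants of each."""
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples, dtype=float)
    out = [samples]
    for _ in range(n_copies):
        noise = rng.normal(0.0, noise_sd, size=samples.shape)
        out.append(samples + noise * samples.std(axis=1, keepdims=True))
    return np.vstack(out)

base = np.random.default_rng(2).normal(1.0, 0.3, size=(30, 100))  # 30 signals
augmented = augment_with_noise(base)
print(augmented.shape)  # (180, 100): 30 originals + 5 x 30 noisy copies
```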
Dataset | k-NN (UD) | SVM (UD) | MLP (UD) | k-NN (HP) | SVM (HP) | MLP (HP) | k-NN (Mixed) | SVM (Mixed) | MLP (Mixed)
---|---|---|---|---|---|---|---|---|---
Only gaze angle features | 89% | 69% | 86% | 91% | 83% | 90% | 87% | 74% | 83%
Only gaze yaw features | 83% | 64% | 84% | 88% | 78% | 86% | 81% | 66% | 80%
Only gaze pitch features | 96% | 89% | 73% | 95% | 96% | 83% | 92% | 91% | 79%
Merged (gaze, yaw, pitch) feature dataset | 84% | 67% | 81% | 89% | 80% | 84% | 85% | 70% | 77%

UD: user distance datasets; HP: head pose datasets; Mixed: merged user distance and head pose datasets.
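The three classifier families compared in the table can be set up with scikit-learn. The sketch below runs them on synthetic 20-feature data as a stand-in for the gaze-error feature matrix; the hyperparameters are illustrative defaults, not the tuned values behind the results above.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in: 4 user-distance classes, 20 statistical features each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 1.0, size=(100, 20)) for c in range(4)])
y = np.repeat(np.arange(4), 100)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

models = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf", C=1.0),
    "MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                         random_state=0),
}
scores = {}
for name, model in models.items():
    # Standardize features before each classifier, as is usual for SVM/MLP.
    clf = make_pipeline(StandardScaler(), model)
    clf.fit(X_tr, y_tr)
    scores[name] = clf.score(X_te, y_te)
    print(f"{name}: {scores[name]:.2f}")
```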
Metric | k-NN (UD) | SVM (UD) | MLP (UD) | k-NN (HP) | SVM (HP) | MLP (HP) | k-NN (Mixed) | SVM (Mixed) | MLP (Mixed)
---|---|---|---|---|---|---|---|---|---
TPR | 0.96 | 0.95 | 0.85 | 0.97 | 0.91 | 0.95 | 0.83 | 0.93 | 0.83
FPR | 0.01 | 0.01 | 0.04 | 0.01 | 0.02 | 0.01 | 0.02 | 0.01 | 0.02
TNR | 0.98 | 0.98 | 0.95 | 0.99 | 0.97 | 0.98 | 0.97 | 0.98 | 0.97
FNR | 0.04 | 0.04 | 0.14 | 0.03 | 0.08 | 0.04 | 0.16 | 0.06 | 0.16
Precision | 0.98 | 0.93 | 0.85 | 0.97 | 0.97 | 0.95 | 0.85 | 0.93 | 0.85

UD: user distance dataset; HP: head pose dataset; Mixed: merged dataset.
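The per-class metrics in this table follow directly from a confusion matrix. A minimal sketch, assuming macro-averaged one-vs-rest definitions (the averaging scheme is an assumption):

```python
import numpy as np

def per_class_rates(conf):
    """Given a multiclass confusion matrix (rows = true, cols = predicted),
    return macro-averaged TPR, FPR, TNR, FNR and precision computed
    one-vs-rest per class."""
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)
    fn = conf.sum(axis=1) - tp   # missed samples of each class
    fp = conf.sum(axis=0) - tp   # samples wrongly assigned to each class
    tn = conf.sum() - tp - fn - fp
    return {
        "TPR": np.mean(tp / (tp + fn)),
        "FPR": np.mean(fp / (fp + tn)),
        "TNR": np.mean(tn / (tn + fp)),
        "FNR": np.mean(fn / (fn + tp)),
        "Precision": np.mean(tp / (tp + fp)),
    }

# Toy 3-class confusion matrix
cm = [[48, 1, 1],
      [2, 45, 3],
      [0, 2, 48]]
print(per_class_rates(cm))
```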
Dataset | k-NN (UD) | SVM (UD) | MLP (UD) | k-NN (Pose) | SVM (Pose) | MLP (Pose) | k-NN (Mixed) | SVM (Mixed) | MLP (Mixed)
---|---|---|---|---|---|---|---|---|---
Only gaze angle features | 81% | 82% | 81% | 86% | 88% | 75% | 80% | 80% | 78%
Only gaze yaw features | 84% | 86% | 78% | 93% | 92% | 85% | 84% | 84% | 70%
Only gaze pitch features | 90% | 81% | 71% | 94% | 91% | 89% | 84% | 84% | 79%
Merged (gaze, yaw, pitch) feature dataset | 85% | 84% | 76% | 88% | 85% | 82% | 83% | 83% | 75%

UD: user-tablet distance datasets; Pose: tablet pose datasets; Mixed: merged datasets.
Metric | k-NN (UD) | SVM (UD) | MLP (UD) | k-NN (Pose) | SVM (Pose) | MLP (Pose) | k-NN (Mixed) | SVM (Mixed) | MLP (Mixed)
---|---|---|---|---|---|---|---|---|---
TPR | 0.87 | 0.85 | 0.71 | 0.90 | 0.83 | 0.79 | 0.80 | 0.83 | 0.70
FPR | 0.04 | 0.04 | 0.09 | 0.03 | 0.27 | 0.06 | 0.03 | 0.02 | 0.04
TNR | 0.95 | 0.95 | 0.90 | 0.96 | 0.97 | 0.93 | 0.96 | 0.97 | 0.95
FNR | 0.12 | 0.14 | 0.28 | 0.09 | 0.16 | 0.20 | 0.19 | 0.16 | 0.29
Precision | 0.87 | 0.88 | 0.72 | 0.91 | 0.86 | 0.79 | 0.82 | 0.88 | 0.72

UD: user-tablet distance dataset; Pose: tablet pose dataset; Mixed: merged dataset.
Model | Neutral (Desktop) | R20 (Desktop) | P20 (Desktop) | Y20 (Desktop) | Neutral (Tablet) | R20 (Tablet) | P20 (Tablet) | Y20 (Tablet)
---|---|---|---|---|---|---|---|---
Linear | 2.24 | 1.36 | 1.45 | 2.71 | 0.73 | 1.79 | 0.58 | 0.72
Polynomial | 7.48 | 1.82 | 7.88 | 2.88 | 2.61 | 1.99 | 5.24 | 0.81
Ridge | 2.24 | 1.36 | 1.44 | 2.70 | 0.73 | 1.79 | 0.58 | 0.72
Lasso | 2.24 | 1.15 | 1.31 | 2.70 | 0.73 | 1.73 | 0.58 | 0.71
ElasticNet | 2.25 | 1.07 | 1.29 | 2.69 | 0.73 | 1.71 | 0.50 | 0.70
Neural network | 4.01 | 1.21 | 2.09 | 2.75 | 1.75 | 1.78 | 2.39 | 1.18
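The regression families compared above can all be fitted with scikit-learn. The sketch below runs them on synthetic data as a stand-in for the gaze-error regression task; the regularization strengths (`alpha`, `l1_ratio`) and polynomial degree are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in: error as a noisy linear function of three predictors.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 3))
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] - 0.2 * X[:, 2] + rng.normal(0, 0.1, 400)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "Linear": LinearRegression(),
    "Polynomial": make_pipeline(PolynomialFeatures(3), LinearRegression()),
    "Ridge": Ridge(alpha=1.0),
    "Lasso": Lasso(alpha=0.01),
    "ElasticNet": ElasticNet(alpha=0.01, l1_ratio=0.5),
}
errors = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    errors[name] = mean_absolute_error(y_te, model.predict(X_te))
    print(f"{name:10s} MAE = {errors[name]:.3f}")
```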
Platform/Condition | ElasticNet Coefficients B1, B2 and B3 | Intercept B0
---|---|---
Desktop: Head pose neutral | [0.09336917, 0.19406989, −0.00279198] | −1.99912371 × 10⁻¹⁶
Desktop: Head Roll 20 | [0.0, 0.65189252, 0.07303053] | 3.23942926 × 10⁻¹⁶
Desktop: Head Pitch 20 | [0.22606558, 0.11028886, 0.05731872] | −9.55229176 × 10⁻¹⁷
Desktop: Head Yaw 20 | [0.0, 0.51352565, 0.08149052] | 8.94037155 × 10⁻¹⁷
Tablet: Platform pose neutral | [0.07333954, 0.0, −0.17956056] | 1.76076146 × 10⁻¹⁶
Tablet: Platform Roll 20 | [0.0, −0.31460996, −0.23620848] | −2.87637414 × 10⁻¹⁶
Tablet: Platform Pitch 20 | [0.0, −0.05682588, −0.20804325] | 2.34007877 × 10⁻¹⁶
Tablet: Platform Yaw 20 | [0.0, −0.01596391, −0.06346607] | −2.41027682 × 10⁻¹⁷
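A fitted row of this coefficient table can be applied as a plain linear model, predicted error = B0 + B1·x1 + B2·x2 + B3·x3. Which physical quantities map to x1–x3 is not restated here, so the input values below are purely illustrative; only two rows of the table are copied into the sketch.

```python
# Two rows of the coefficient table above: ([B1, B2, B3], B0).
coeffs = {
    "desktop_neutral": ([0.09336917, 0.19406989, -0.00279198],
                        -1.99912371e-16),
    "desktop_roll20":  ([0.0, 0.65189252, 0.07303053],
                        3.23942926e-16),
}

def predict_error(condition, x):
    """Evaluate the ElasticNet error model B0 + sum(Bi * xi) for the
    given condition; x is a 3-element feature vector (illustrative)."""
    b, b0 = coeffs[condition]
    return b0 + sum(bi * xi for bi, xi in zip(b, x))

print(predict_error("desktop_neutral", [0.5, 0.5, 0.5]))
```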
© 2020 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Kar, A. MLGaze: Machine Learning-Based Analysis of Gaze Error Patterns in Consumer Eye Tracking Systems. Vision 2020, 4, 25. https://doi.org/10.3390/vision4020025