Saliency-Based Gaze Visualization for Eye Movement Analysis
Abstract
1. Introduction
- We propose a novel gaze data visualization technique that embeds the saliency features of a visual stimulus as visual clues for analyzing an observer's visual attention.
- We evaluate whether our gaze visualization aids in understanding the visual attention of an observer.
2. Related Work
2.1. Gaze Data Visualization
2.2. Visual Saliency
3. Fixation Identification and Saliency Feature Extraction
3.1. Environment
3.2. Fixation Identification
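Since this outline does not reproduce the identification algorithm or its parameters, the following is a minimal dispersion-threshold (I-DT) sketch in the spirit of Salvucci and Goldberg (cited in the references); `max_dispersion` and `min_duration` are illustrative assumptions, not the authors' settings.

```python
# Minimal I-DT (dispersion-threshold) fixation identification sketch.
# Thresholds are illustrative, not the values used in the paper.
def identify_fixations(samples, max_dispersion=25.0, min_duration=100.0):
    """samples: list of (x, y, t_ms) gaze points sorted by time.
    Returns fixations as (center_x, center_y, start_ms, duration_ms)."""
    fixations = []
    i = 0
    while i < len(samples):
        j = i
        # Grow the window while its dispersion stays under the threshold.
        while j + 1 < len(samples):
            window = samples[i:j + 2]
            xs = [p[0] for p in window]
            ys = [p[1] for p in window]
            dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
            if dispersion > max_dispersion:
                break
            j += 1
        duration = samples[j][2] - samples[i][2]
        if duration >= min_duration:
            window = samples[i:j + 1]
            cx = sum(p[0] for p in window) / len(window)
            cy = sum(p[1] for p in window) / len(window)
            fixations.append((cx, cy, samples[i][2], duration))
            i = j + 1  # continue after the fixation window
        else:
            i += 1  # too short to be a fixation; slide forward
    return fixations
```

A velocity-threshold (I-VT) filter would be the other common choice; the dispersion variant is shown only because its duration threshold maps directly onto the "staying" behavior analyzed later in the paper.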
3.3. Saliency Feature Extraction
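As a rough illustration of how intensity, color, and orientation conspicuity maps can be computed in the style of the Itti–Koch model (cited in the references), the sketch below uses OpenCV with a simplified single center-surround difference; the pyramid depth and Gabor parameters are illustrative assumptions, not the paper's configuration.

```python
import cv2
import numpy as np

def saliency_features(image_bgr):
    """Return intensity, color, and orientation conspicuity maps in [0, 1]
    (simplified Itti-Koch-style center-surround; one scale pair only)."""
    img = image_bgr.astype(np.float32) / 255.0
    b, g, r = cv2.split(img)
    intensity = (r + g + b) / 3.0

    def center_surround(chan):
        # Fine scale minus a coarse, upsampled version of the same channel.
        coarse = cv2.pyrUp(cv2.pyrDown(cv2.pyrDown(chan)))
        coarse = cv2.resize(coarse, (chan.shape[1], chan.shape[0]))
        return np.abs(chan - coarse)

    f_int = center_surround(intensity)

    # Color opponency channels (red-green, blue-yellow).
    rg = r - g
    by = b - (r + g) / 2.0
    f_col = center_surround(rg) + center_surround(by)

    # Orientation energy from Gabor filters at four orientations.
    f_ori = np.zeros_like(intensity)
    for theta in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        kernel = cv2.getGaborKernel((9, 9), 2.0, theta, 5.0, 0.5)
        f_ori += np.abs(cv2.filter2D(intensity, -1, kernel))

    def norm(m):  # rescale each map so the features are comparable
        return (m - m.min()) / (m.max() - m.min() + 1e-8)

    return norm(f_int), norm(f_col), norm(f_ori)
```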
4. Gaze Data Visualization
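The glyph design itself is not reproduced in this outline, so the following sketch shows one plausible way to embed saliency features into a scanpath: fixation circles sized by duration, connected in time order, and colored by the dominant saliency feature under each fixation. The encoding (the `colors` mapping and marker sizing) is hypothetical, not the authors' design.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_saliency_scanpath(stimulus_rgb, fixations, f_int, f_col, f_ori):
    """Scanpath over the stimulus; each fixation is sized by duration and
    colored by its dominant saliency feature (illustrative encoding)."""
    colors = {'intensity': 'tab:blue', 'color': 'tab:red',
              'orientation': 'tab:green'}
    fig, ax = plt.subplots()
    ax.imshow(stimulus_rgb)
    xs = [f[0] for f in fixations]
    ys = [f[1] for f in fixations]
    ax.plot(xs, ys, '-', color='white', linewidth=1, alpha=0.7)  # gaze flow
    for (x, y, t, dur) in fixations:
        # Clamp to the image grid before sampling the feature maps.
        ix = int(np.clip(round(x), 0, f_int.shape[1] - 1))
        iy = int(np.clip(round(y), 0, f_int.shape[0] - 1))
        feats = {'intensity': f_int[iy, ix], 'color': f_col[iy, ix],
                 'orientation': f_ori[iy, ix]}
        dominant = max(feats, key=feats.get)
        ax.scatter([x], [y], s=dur, color=colors[dominant],
                   alpha=0.6, edgecolors='black')
    ax.set_axis_off()
    plt.show()
```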
5. Gaze Analysis
5.1. Gaze Analysis with Saliency Features
5.2. Visual Search Analysis with Visual Attention
5.3. Influence of the Saliency Features on Gaze Movement
5.4. User Study
- Q1: Can you use the gaze distribution in the gaze analysis?
- Q2: Can you follow the gaze flow in time order?
- Q3: Can you find the area in which the participant is interested?
- Q4: Can you spot an area where the gaze stayed for a long time?
- Q5: Can you infer the reason for the gaze concentration by viewing only the visual stimulus?
- Q6: Can you analyze how the participant viewed the visual stimulus in order to understand it?
- Q7: Can you discover observer characteristics in the given visualization?
- Q8: Can you analyze how the eye movement relates to the visual stimulus in the given visualization?
- Point-based: We can see the gaze distribution easily, but it is difficult to analyze the meaning of the gaze from the points alone (p1, p2). This visualization also interferes with the analysis since it blocks the visual stimulus (p1, p2). However, it is easy to understand because of its simplicity (p5).
- Heatmap: This is a visualization we see often (p5). It seems helpful for discovering what the gaze focuses on (p1, p2, p3, p4, p5). However, the scope of analysis seems limited (p1, p3).
- Scanpath: This is also a visualization we see often (p5). The gaze movement can be analyzed easily over time, and shifts of interest can be extracted (p1, p2, p5). However, the analysis is limited, and when the gaze pattern becomes complicated, it seems difficult to analyze (p1, p3, p4, p5).
- AoI-based: It seems to be a useful visualization for analyzing unfamiliar information (p1, p2, p3). However, it does not seem very helpful for information that is already known (p1, p2). It would be helpful if we could see the gaze flow together with this visualization (p1, p3).
- Saliency-based: This visualization seems confusing at first because it shows information that the existing visualizations do not, but after learning the meaning of the information provided, it was easy to analyze (p1, p4, p5). It was possible to determine the fixation range using the field of view, but it was difficult to use it actively in the analysis (p5). This visualization is efficient because saliency features can be identified intuitively without additional data (p1, p5). In addition, by using saliency features, gaze behavior can be analyzed from more diverse viewpoints (p1, p5).
6. Limitations and Discussion
6.1. Fixation Clustering
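The clustering references listed below include k-means, mean shift, and DBSCAN; as a hedged example of the density-based option, a DBSCAN grouping of fixation centers might look like the following, with `eps` and `min_samples` as illustrative values rather than the paper's settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_fixations(fixations, eps=40.0, min_samples=3):
    """Group fixation centers into spatial clusters.
    Returns one label per fixation; -1 marks noise (unclustered) points."""
    pts = np.array([(x, y) for (x, y, t, dur) in fixations])
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
```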
6.2. Field of View
6.3. Fixation Overlaps
6.4. Saliency Features
7. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Serences, J.T.; Yantis, S. Selective visual attention and perceptual coherence. Trends Cogn. Sci. 2006, 10, 38–45.
- Borji, A.; Itti, L. State-of-the-art in visual attention modeling. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 185–207.
- Henderson, J.M.; Brockmole, J.R.; Castelhano, M.S.; Mack, M. Visual saliency does not account for eye movements during visual search in real-world scenes. In Eye Movements; Elsevier: Amsterdam, The Netherlands, 2007; pp. 537–562.
- Veale, R.; Hafed, Z.M.; Yoshida, M. How is visual salience computed in the brain? Insights from behaviour, neurobiology and modelling. Philos. Trans. R. Soc. B Biol. Sci. 2017, 372, 20160113.
- Treisman, A.M.; Gelade, G. A feature-integration theory of attention. Cogn. Psychol. 1980, 12, 97–136.
- Itti, L.; Koch, C.; Niebur, E. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 1254–1259.
- Hou, X.; Zhang, L. Saliency detection: A spectral residual approach. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–8.
- Wang, W.; Wang, Y.; Huang, Q.; Gao, W. Measuring visual saliency by site entropy rate. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 2368–2375.
- Wolfe, J.M.; Yee, A.; Friedman-Hill, S.R. Curvature is a basic feature for visual search tasks. Perception 1992, 21, 465–480.
- Treisman, A.; Gormican, S. Feature analysis in early vision: Evidence from search asymmetries. Psychol. Rev. 1988, 95, 15.
- Oliva, A.; Torralba, A.; Castelhano, M.S.; Henderson, J.M. Top-down control of visual attention in object detection. In Proceedings of the 2003 International Conference on Image Processing (Cat. No. 03CH37429), Barcelona, Spain, 14–17 September 2003; Volume 1, pp. 1–253.
- Ehinger, K.A.; Hidalgo-Sotelo, B.; Torralba, A.; Oliva, A. Modelling search for people in 900 scenes: A combined source model of eye guidance. Vis. Cogn. 2009, 17, 945–978.
- Hwang, A.D.; Wang, H.C.; Pomplun, M. Semantic guidance of eye movements in real-world scenes. Vis. Res. 2011, 51, 1192–1205.
- Tatler, B.W. The central fixation bias in scene viewing: Selecting an optimal viewing position independently of motor biases and image feature distributions. J. Vis. 2007, 7, 4.
- Jänicke, H.; Chen, M. A Salience-based Quality Metric for Visualization. Comput. Graph. Forum 2010, 29, 1183–1192.
- Liu, H.; Heynderickx, I. Visual attention in objective image quality assessment: Based on eye-tracking data. IEEE Trans. Circuits Syst. Video Technol. 2011, 21, 971–982.
- Wolfe, J.M. Guided Search 2.0: A revised model of visual search. Psychon. Bull. Rev. 1994, 1, 202–238.
- Gottlieb, J.P.; Kusunoki, M.; Goldberg, M.E. The representation of visual salience in monkey parietal cortex. Nature 1998, 391, 481.
- Button, C.; Dicks, M.; Haines, R.; Barker, R.; Davids, K. Statistical modelling of gaze behaviour as categorical time series: What you should watch to save soccer penalties. Cogn. Process. 2011, 12, 235–244.
- Mazumdar, D.; Meethal, N.S.K.; George, R.; Pel, J.J. Saccadic reaction time in mirror image sectors across horizontal meridian in eye movement perimetry. Sci. Rep. 2021, 11, 2630.
- Krejtz, K.; Szmidt, T.; Duchowski, A.T.; Krejtz, I. Entropy-based statistical analysis of eye movement transitions. In Proceedings of the Symposium on Eye Tracking Research and Applications, Safety Harbor, FL, USA, 26–28 March 2014; pp. 159–166.
- Caldara, R.; Miellet, S. iMap: A novel method for statistical fixation mapping of eye movement data. Behav. Res. Methods 2011, 43, 864–878.
- Dink, J.W.; Ferguson, B. eyetrackingR: An R Library for Eye-Tracking Data Analysis. 2015. Available online: www.eyetracking-r.com (accessed on 21 May 2021).
- Blascheck, T.; Schweizer, M.; Beck, F.; Ertl, T. Visual Comparison of Eye Movement Patterns. Comput. Graph. Forum 2017, 36, 87–97.
- Hansen, D.W.; Ji, Q. In the eye of the beholder: A survey of models for eyes and gaze. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 478–500.
- Kurzhals, K.; Hlawatsch, M.; Heimerl, F.; Burch, M.; Ertl, T.; Weiskopf, D. Gaze stripes: Image-based visualization of eye tracking data. IEEE Trans. Vis. Comput. Graph. 2016, 22, 1005–1014.
- Bal, E.; Harden, E.; Lamb, D.; Van Hecke, A.V.; Denver, J.W.; Porges, S.W. Emotion recognition in children with autism spectrum disorders: Relations to eye gaze and autonomic state. J. Autism Dev. Disord. 2010, 40, 358–370.
- Murias, M.; Major, S.; Davlantis, K.; Franz, L.; Harris, A.; Rardin, B.; Sabatos-DeVito, M.; Dawson, G. Validation of eye-tracking measures of social attention as a potential biomarker for autism clinical trials. Autism Res. 2018, 11, 166–174.
- Traver, V.J.; Zorío, J.; Leiva, L.A. Glimpse: A Gaze-Based Measure of Temporal Salience. Sensors 2021, 21, 3099.
- Parkhurst, D.; Niebur, E. Scene content selected by active vision. Spat. Vis. 2003, 16, 125–154.
- Krieger, G.; Rentschler, I.; Hauske, G.; Schill, K.; Zetzsche, C. Object and scene analysis by saccadic eye-movements: An investigation with higher-order statistics. Spat. Vis. 2000, 13, 201–214.
- Liang, H.; Liang, R.; Sun, G. Looking into saliency model via space-time visualization. IEEE Trans. Multimed. 2016, 18, 2271–2281.
- Yoo, S.; Kim, S.; Jeong, D.; Kim, Y.; Jang, Y. Gaze Visualization Embedding Saliency Features. In Proceedings of the IEEE Pacific Visualization Symposium (PacificVis), Tianjin, China, 3–5 June 2020.
- Blascheck, T.; Kurzhals, K.; Raschke, M.; Burch, M.; Weiskopf, D.; Ertl, T. State-of-the-art of visualization for eye tracking data. In Proceedings of EuroVis, Swansea, UK, 9–13 June 2014; Volume 2014.
- Song, H.; Lee, J.; Kim, T.J.; Lee, K.H.; Kim, B.; Seo, J. GazeDx: Interactive Visual Analytics Framework for Comparative Gaze Analysis with Volumetric Medical Images. IEEE Trans. Vis. Comput. Graph. 2017, 23, 311–320.
- Burch, M.; Kumar, A.; Mueller, K.; Weiskopf, D. Color bands: Visualizing dynamic eye movement patterns. In Proceedings of the IEEE Second Workshop on Eye Tracking and Visualization (ETVIS), Baltimore, MD, USA, 23 October 2016; pp. 40–44.
- Fuhl, W.; Kuebler, T.; Brinkmann, H.; Rosenberg, R.; Rosenstiel, W.; Kasneci, E. Region of Interest Generation Algorithms for Eye Tracking Data. In Proceedings of the 3rd Workshop on Eye Tracking and Visualization (ETVIS '18), Warsaw, Poland, 14–17 June 2018.
- Zhou, X.; Xue, C.; Zhou, L.; Niu, Y. An Evaluation Method of Visualization Using Visual Momentum Based on Eye-Tracking Data. Int. J. Pattern Recognit. Artif. Intell. 2018, 32, 1850016.
- Steichen, B.; Carenini, G.; Conati, C. User-adaptive information visualization: Using eye gaze data to infer visualization tasks and user cognitive abilities. In Proceedings of the 2013 International Conference on Intelligent User Interfaces, Santa Monica, CA, USA, 19–22 March 2013; pp. 317–328.
- Goldberg, J.; Helfman, J. Eye tracking for visualization evaluation: Reading values on linear versus radial graphs. Inf. Vis. 2011, 10, 182–195.
- Matzen, L.E.; Haass, M.J.; Divis, K.M.; Wang, Z.; Wilson, A.T. Data Visualization Saliency Model: A Tool for Evaluating Abstract Data Visualizations. IEEE Trans. Vis. Comput. Graph. 2018, 24, 563–573.
- Ho, H.Y.; Yeh, I.; Lai, Y.C.; Lin, W.C.; Cherng, F.Y. Evaluating 2D flow visualization using eye tracking. Comput. Graph. Forum 2015, 34, 501–510.
- Fuhl, W.; Kuebler, T.; Santini, T.; Kasneci, E. Automatic Generation of Saliency-based Areas of Interest for the Visualization and Analysis of Eye-tracking Data. In Proceedings of Vision, Modeling and Visualization, Stuttgart, Germany, 10–12 October 2018.
- Judd, T.; Ehinger, K.; Durand, F.; Torralba, A. Learning to predict where humans look. In Proceedings of the IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 2106–2113.
- Geisler, D.; Weber, D.; Castner, N.; Kasneci, E. Exploiting the GBVS for Saliency Aware Gaze Heatmaps. In Proceedings of the ACM Symposium on Eye Tracking Research and Applications, Stuttgart, Germany, 2–5 June 2020; pp. 1–5.
- Kümmerer, M.; Theis, L.; Bethge, M. DeepGaze I: Boosting saliency prediction with feature maps trained on ImageNet. arXiv 2014, arXiv:1411.1045.
- Harel, J.; Koch, C.; Perona, P. Graph-Based Visual Saliency. In Proceedings of the 19th International Conference on Neural Information Processing Systems (NIPS '06), Vancouver, BC, Canada, 4–7 December 2006; pp. 545–552.
- Malik, J.; Perona, P. Preattentive texture discrimination with early vision mechanisms. JOSA A 1990, 7, 923–932.
- Pekkanen, J.; Lappi, O. A new and general approach to signal denoising and eye movement classification based on segmented linear regression. Sci. Rep. 2017, 7, 17726.
- Špakov, O. Comparison of eye movement filters used in HCI. In Proceedings of the Symposium on Eye Tracking Research and Applications, Santa Barbara, CA, USA, 28–30 March 2012; pp. 281–284.
- Salvucci, D.D.; Goldberg, J.H. Identifying fixations and saccades in eye-tracking protocols. In Proceedings of the 2000 Symposium on Eye Tracking Research & Applications, Palm Beach Gardens, FL, USA, 6–8 November 2000; pp. 71–78.
- Wan, X.; Wang, W.; Liu, J.; Tong, T. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range. BMC Med. Res. Methodol. 2014, 14, 135.
- Greenspan, H.; Belongie, S.; Goodman, R.; Perona, P.; Rakshit, S.; Anderson, C.H. Overcomplete steerable pyramid filters and rotation invariance. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 21–23 June 1994; pp. 222–228.
- Ma, X.; Xie, X.; Lam, K.M.; Zhong, Y. Efficient saliency analysis based on wavelet transform and entropy theory. J. Vis. Commun. Image Represent. 2015, 30, 201–207.
- Engel, S.; Zhang, X.; Wandell, B. Colour Tuning in Human Visual Cortex Measured With Functional Magnetic Resonance Imaging. Nature 1997, 388, 68–71.
- Bergstrom, J.R.; Schall, A. Eye Tracking in User Experience Design; Elsevier: Amsterdam, The Netherlands, 2014.
- Collins, C.; Penn, G.; Carpendale, S. Bubble sets: Revealing set relations with isocontours over existing visualizations. IEEE Trans. Vis. Comput. Graph. 2009, 15, 1009–1016.
- Hartigan, J.A.; Wong, M.A. Algorithm AS 136: A k-means clustering algorithm. J. R. Stat. Soc. Ser. C (Appl. Stat.) 1979, 28, 100–108.
- Comaniciu, D.; Meer, P. Mean shift: A robust approach toward feature space analysis. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 603–619.
- Santella, A.; DeCarlo, D. Robust clustering of eye movement recordings for quantification of visual interest. In Proceedings of the 2004 Symposium on Eye Tracking Research & Applications, San Antonio, TX, USA, 22–24 March 2004; pp. 27–34.
- Ester, M.; Kriegel, H.P.; Sander, J.; Xu, X. A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise. In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD '96), Portland, OR, USA, 2–4 August 1996; pp. 226–231.
- Yoo, S.; Jeong, S.; Kim, S.; Jang, Y. Gaze Attention and Flow Visualization using the Smudge Effect. In Proceedings of Pacific Graphics (Short Papers), Eurographics Association, Seoul, Korea, 14–17 October 2019; pp. 21–26.
- Sugano, Y.; Matsushita, Y.; Sato, Y. Graph-based joint clustering of fixations and visual entities. ACM Trans. Appl. Percept. (TAP) 2013, 10, 10.
- Špakov, O.; Miniotas, D. Application of clustering algorithms in eye gaze visualizations. Inf. Technol. Control 2015, 36, 213–216.
- Urruty, T.; Lew, S.; Djeraba, C.; Simovici, D.A. Detecting eye fixations by projection clustering. In Proceedings of the 14th International Conference of Image Analysis and Processing—Workshops (ICIAPW 2007), Modena, Italy, 10–13 September 2007; pp. 45–50.
- Alfano, P.L.; Michel, G.F. Restricting the Field of View: Perceptual and Performance Effects. Percept. Mot. Ski. 1990, 70, 35–45.
Short Biography of Authors
Sangbong Yoo received the bachelor's degree in computer engineering from Sejong University, Seoul, South Korea, in 2015. He is currently pursuing a Ph.D. at Sejong University. His research interests include gaze analysis, eye-tracking techniques, mobile security, and data visualization.
Seongmin Jeong received the B.S. degree in computer engineering from Sejong University, South Korea, in 2016. He is currently working toward the Ph.D. degree at Sejong University. His research interests include flow map visualization and visual analytics.
Seokyeon Kim received the B.S. and doctoral degrees in computer engineering from Sejong University, South Korea, in 2014 and 2020, respectively. He is a postdoctoral researcher at Sejong University. His research interests include computer graphics, data visualization, and volume rendering.
Yun Jang received the bachelor's degree in electrical engineering from Seoul National University, South Korea, in 2000, and the master's and doctoral degrees in electrical and computer engineering from Purdue University in 2002 and 2007, respectively. He is an associate professor of computer engineering at Sejong University, Seoul, South Korea. He was a postdoctoral researcher at CSCS and ETH Zürich, Switzerland, from 2007 to 2011. His research interests include interactive visualization, volume rendering, HCI, machine learning, and visual analytics.
| Category | Study | Eye Movement Visualization | Eye Movement Measures | Saliency Visualization | Saliency Feature | Style |
|---|---|---|---|---|---|---|
| Gaze visualization | [26] | point-based, scanpath, AoI-based, space-time cube | x, y, time, stimulus | - | - | - |
| | [35] | point-based | x, y, stimulus | - | - | - |
| | [36] | new style (AoI-based) | x, y, duration | - | - | - |
| | [24] | new style (AoI-based) | transitions, duration | - | - | - |
| | [22] | heatmap | x, y, duration, RoI, stimulus | - | - | - |
| | [19] | - | fixation number, duration | - | - | - |
| | [20] | - | EMP (eye movement perimetry), SRT (saccadic reaction time) | - | - | - |
| | [21] | scanpath, AoI-based | x, y, AoI, duration | - | - | - |
| Visual saliency | [6] | scanpath | x, y, duration, stimulus | saliency map | intensity, color, orientation | separate |
| | [15] | - | - | saliency map, contribution map | intensity, color, orientation | - |
| | [41] | - | x, y, duration, stimulus | saliency map | color, text-specific | - |
| | [44] | point-based, heatmap | x, y, RoI | saliency map | intensity, color, orientation, center, horizontal line, face, person | overlap |
| | [46] | point-based | x, y | saliency map | data-driven | separate |
| | [8] | heatmap | x, y | saliency map | entropy | separate |
| | [9] | - | reaction time | - | curvature | - |
| | [7] | - | - | saliency map, object map | log spectrum | - |
| Both | [45] | scanpath, heatmap | x, y, duration, stimulus | heatmap | texture (GBVS [47,48]) | separate |
| | our proposal | new style (scanpath) | x, y, duration, stimulus | - | intensity, color, orientation (not fixed) | combined |
Number of gaze records per visual stimulus across Case Studies 1–3:

| | Visual Stimulus 1 | Visual Stimulus 2 | Visual Stimulus 3 | Visual Stimulus 4 |
|---|---|---|---|---|
| Total number of records | 8855 | 12,995 | 7981 | 6256 |
| Average number of records | 385 | 565 | 347 | 272 |
| Question | Statistic | Point-Based | Heatmap | AoI-Based | Scanpath | Saliency-Based |
|---|---|---|---|---|---|---|
| Q1 | mean | 7.74 | 7.67 | 3.09 | 5.87 | 7.67 |
| | SD | 2.24 | 2.70 | 3.36 | 2.93 | 2.80 |
| Q2 | mean | 3.39 | 2.96 | 1.44 | 4.92 | 8.35 |
| | SD | 3.45 | 3.36 | 2.31 | 3.30 | 2.89 |
| Q3 | mean | 7.40 | 7.13 | 2.57 | 4.26 | 7.78 |
| | SD | 2.87 | 2.65 | 3.40 | 3.57 | 3.22 |
| Q4 | mean | 6.44 | 6.57 | 7.14 | 2.78 | 7.70 |
| | SD | 2.84 | 3.09 | 2.65 | 3.23 | 3.10 |
| Q5 | mean | 6.83 | 6.29 | 2.48 | 4.46 | 7.13 |
| | SD | 2.96 | 2.73 | 3.25 | 3.62 | 3.31 |
| Q6 | mean | 6.13 | 5.87 | 3.83 | 5.00 | 6.91 |
| | SD | 3.05 | 2.67 | 3.66 | 2.83 | 3.19 |
| Q7 | mean | 3.52 | 3.30 | 1.48 | 4.00 | 5.52 |
| | SD | 3.10 | 2.91 | 2.40 | 3.26 | 3.67 |
| Q8 | mean | 5.91 | 5.30 | 2.17 | 2.22 | 6.48 |
| | SD | 3.01 | 3.48 | 3.05 | 2.86 | 3.65 |