Evaluating the Usability of a Gaze-Adaptive Approach for Identifying and Comparing Raster Values between Multilayers
Abstract
1. Introduction
2. Background and Related Work
2.1. Eye Tracking for Human–Computer Interaction
2.2. Gaze-Driven Adaptive (Geo)visualization
3. Gaze-Adaptive Approach: Design
3.1. Design Considerations
- Gaze dynamic adaptation (GD). In this method, the grid information viewed by the user is displayed in a dynamic window that always follows the gaze focus (Figure 2a). Based on the results of [39], we placed the window at the bottom-right of the user’s gaze point, offset by approximately 2.7° (≈2.36 cm, 180 px). Displaying the information window beside the user’s gaze point is considered intuitive because it shortens the visual search distance between the current gaze position and the legend. Note that the window is always visible so that users can obtain the grid information as quickly as possible. The items in the dynamic information window, including the year, color block and label at the gaze position, were automatically extracted from the layers and their symbology, so the information in the window is consistent with the layer panel (shown on the left). Gaze dynamic adaptation can handle discrete, stratified and continuous raster maps (see Figure 3). A sketch of the window-placement logic follows this list.
- Gaze fixed adaptation (GF). Unlike gaze dynamic adaptation, in this method the information window is fixed at the top-left corner of the screen, while its content still adapts to gaze (Figure 2b). The other settings are the same as in GD. Göbel et al. [39] found that participants preferred fixed adaptation over dynamic adaptation; in this study, we therefore tested whether a fixed information window is also preferred in gaze-based raster map reading.
- Traditional identification (TR). This method served as the baseline in the comparison experiment. No adaptation was provided, and participants had to use the identify tool to obtain the raster values. Clicking on the visible layer displays the information of the clicked grid (including the layer name, class and colormap) in a pop-up window (Figure 2c). Users can then interpret the raster maps by combining the information in the layer panel and the pop-up window. To view the information of another layer, however, they have to switch the visible layer and repeat the operation.
- Mouse dynamic adaptation (MD). In mouse dynamic adaptation, the mouse pointer replaces gaze while the other settings remain the same as in GD. The grid information pointed to by the mouse is displayed in the dynamic window (Figure 2d). Since a mouse pinpoints targets more accurately than gaze and is not affected by the Midas touch problem, this method was expected to perform best. We included this high-precision input as a benchmark to better understand the limitations of eye-tracking technology in practical scenarios. Additionally, the comparison with MD may help identify the unique advantages of gaze-based interaction, such as its potential for hands-free and more intuitive interaction in certain contexts.
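To make the GD placement concrete, the following is a minimal Python sketch of how an information window could be offset from the gaze point by a fixed visual angle and clamped to the screen edges. The viewing distance, pixel density, window size and function names are illustrative assumptions chosen so that 2.7° works out to roughly 180 px; they are not values or code from the study.

```python
import math

def visual_angle_to_pixels(angle_deg: float,
                           viewing_distance_cm: float = 50.0,   # assumed viewing distance
                           pixels_per_cm: float = 76.0) -> int:  # assumed pixel density
    """On-screen size (px) subtended by a visual angle at the given distance."""
    size_cm = 2.0 * viewing_distance_cm * math.tan(math.radians(angle_deg) / 2.0)
    return round(size_cm * pixels_per_cm)

def window_position(gaze_x: int, gaze_y: int,
                    screen_w: int, screen_h: int,
                    win_w: int, win_h: int,
                    offset_px: int) -> tuple[int, int]:
    """Place the info window at the bottom-right of the gaze point,
    clamped so it never leaves the screen."""
    x = min(gaze_x + offset_px, screen_w - win_w)
    y = min(gaze_y + offset_px, screen_h - win_h)
    return x, y

offset = visual_angle_to_pixels(2.7)   # ~179 px under the assumed setup (paper reports ~180 px)
print(window_position(960, 540, 1920, 1080, 320, 160, offset))
```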
3.2. Technical Framework
- The backend and database connect to the Tobii Eye Tracker 5 and are responsible for real-time gaze data collection and map storage. The Tobii API provides two kinds of gaze data: the raw gaze data stream and the fixation data stream. We adopted the fixation data stream, which is computed from the raw gaze data in real time by the built-in Tobii I-VT algorithm [43].
- The middle end manages the delivery of the adaptation based on the fixations generated by the backend. While the system is running, the user’s current fixation is indicated by a black crosshair on the screen as feedback. Fixations are first converted from screen coordinates to map (i.e., georeferenced) coordinates. The system then obtains the raster grid values of the different layers at the current fixation position and sends them to the client side (a coordinate-conversion sketch follows this list).
- The client side presents the data and adaptation. It first displays maps from the database. After receiving the raster grid values from the middle end, the client side then renders the legend to present the layer information to users. A “+” marker is displayed on the screen to show the user’s current gaze position.
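As a rough illustration of the middle-end step described above, the sketch below converts a fixation from screen coordinates to map coordinates using a simple north-up linear mapping and reads the corresponding grid value from each layer. The viewport, map extent, layer names and function names are hypothetical stand-ins; the authors’ actual implementation is not shown here.

```python
import numpy as np

def screen_to_map(px, py, viewport, extent):
    """Convert a fixation in screen pixels to map (georeferenced) coordinates."""
    vx, vy, vw, vh = viewport                   # viewport origin and size (px)
    xmin, ymin, xmax, ymax = extent             # map extent shown in the viewport
    mx = xmin + (px - vx) / vw * (xmax - xmin)
    my = ymax - (py - vy) / vh * (ymax - ymin)  # screen y grows downward
    return mx, my

def lookup_values(mx, my, layers, extent, shape):
    """Read the grid value at (mx, my) from every raster layer."""
    xmin, ymin, xmax, ymax = extent
    rows, cols = shape
    col = int((mx - xmin) / (xmax - xmin) * cols)
    row = int((ymax - my) / (ymax - ymin) * rows)
    return {name: grid[row, col] for name, grid in layers.items()}

# Illustrative multilayer raster stack (e.g., three GDP years) and a fixation at (800, 450) px.
layers = {"GDP 2000": np.random.rand(100, 100),
          "GDP 2010": np.random.rand(100, 100),
          "GDP 2020": np.random.rand(100, 100)}
extent = (100.0, 20.0, 110.0, 30.0)
mx, my = screen_to_map(800, 450, (0, 0, 1920, 1080), extent)
print(lookup_values(mx, my, layers, extent, (100, 100)))
```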
4. Evaluation
4.1. Experiment
4.1.1. Participants
4.1.2. Apparatus and Software
4.1.3. Materials and Tasks
- Question-reading phase. For each task, the question and its four possible answers were first displayed in the center of the screen. Based on the question descriptions, the tasks were divided into two types: single-layer identification tasks (IDE) and multilayer comparison tasks (COM), for example, “What was the land use type of Block B in 2010?” (identification task) and “How did the GDP of Block A change from 2000 to 2020?” (comparison task). In this phase, participants had as much time as they needed to read the question and the choices; they then pressed the space bar to switch to the map-reading phase.
- Map-reading phase. In this phase, participants read the map associated with the task and collected the grid information using the assigned identification method to complete the task. As soon as participants felt they had found the answer, they were required to press the space bar to switch to the question-answer phase.
- Question-answer phase. In the question-answer phase, the task question and the four possible answers were displayed on the screen again. Participants had as much time as they needed to consider their choice and then submitted it by pressing the space bar. In addition, participants were asked to say their answer aloud before submitting it; this ensured that the submitted choice matched their intention and helped avoid misoperations during this phase. Note that participants could also press the enter key to skip the task if they forgot the answer or for any other reason. Whether participants pressed the space bar or the enter key, the next task was then presented (a sketch of this trial flow follows the list).
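The three-phase trial procedure can be summarized as a small keyboard-driven loop. The sketch below only illustrates the phase transitions and key handling described above; the display and input functions are hypothetical stand-ins, not the actual experiment software.

```python
import time

def run_trial(task, show_question, show_map, wait_for_key):
    show_question(task)                        # question-reading phase
    wait_for_key({"space"})
    show_map(task)                             # map-reading phase
    t0 = time.perf_counter()
    wait_for_key({"space"})
    map_reading_time = time.perf_counter() - t0
    show_question(task)                        # question-answer phase
    key = wait_for_key({"space", "enter"})     # enter = skip the trial
    return {"map_reading_time_s": map_reading_time, "skipped": key == "enter"}

# Dummy stand-ins so the sketch runs end to end:
result = run_trial("Task 1",
                   show_question=lambda t: print("question:", t),
                   show_map=lambda t: print("map for:", t),
                   wait_for_key=lambda keys: "space")
print(result)
```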
4.1.4. Procedure
4.2. Data Quality Check
4.3. Metrics
4.3.1. Efficiency
4.3.2. Effectiveness
4.3.3. Visual Behavior
- Mean fixation duration. A fixation occurs when the gaze focuses on a target and remains relatively still for a period. The fixation duration (milliseconds, ms, for single fixations) indicates how long a fixation lasts. According to Goldberg and Kotval [46], fixation duration is closely associated with the interpretation of visual information. In this study, a longer fixation duration is interpreted as greater difficulty in comprehending visual information. Fixations were obtained using the Tobii Interactor APIs (see Section 3.2 for more details).
- Proportion of fixation duration on the layer panel. The layer panel shows the most basic information (e.g., colormap and value labels) of the different layers. Since gaze adaptations were used in GF, GD and MD, participants could obtain the grid information without paying attention to the layer panel. Therefore, we first created an area of interest (AOI) on the layer panel and identified the fixations that fell within it. We then calculated the proportion of fixation duration on the layer panel to investigate how participants’ attention to the layer panel changed across methods [49].
- Minimum gaze bounding area. The minimum gaze bounding area is the area of the smallest convex polygon enclosing all of a participant’s gaze points in a task. It is an extensiveness measure that denotes the on-screen search breadth. Combined with the saccade amplitude metric, it indicates whether a visual search covered a broad area or was limited to a smaller region [46]. We first computed the convex hull of the gaze points and then calculated its area, using the Python SciPy ConvexHull function (https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.ConvexHull.html, accessed on 5 September 2023) and Shapely (https://shapely.readthedocs.io/en/stable/reference/shapely.area.html, accessed on 5 September 2023); see the sketch after this list.
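The minimum gaze bounding area maps directly onto the cited libraries. The following is a brief sketch assuming gaze points are given as screen-pixel coordinates; the synthetic points are for illustration only.

```python
import numpy as np
from scipy.spatial import ConvexHull
from shapely.geometry import Polygon

# Synthetic gaze points (x, y) in screen pixels, one row per gaze sample.
gaze_points = np.random.rand(200, 2) * [1920, 1080]

hull = ConvexHull(gaze_points)                      # smallest enclosing convex polygon
hull_polygon = Polygon(gaze_points[hull.vertices])  # hull vertices (counterclockwise in 2-D)
# Note: for 2-D hulls, hull.volume also gives the enclosed area (hull.area is the perimeter).
print(f"Minimum gaze bounding area: {hull_polygon.area:.0f} px²")
```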
4.3.4. Questionnaire
5. Results
5.1. Efficiency and Effectiveness
5.2. Visual Behavior
5.3. NASA-TLX and UEQ
5.4. User Feedback
6. Discussion
6.1. Performance and Visual Behavior
6.2. Comparison between Identification and Comparison Tasks
6.3. Design Issues
6.4. Limitation
7. Conclusions and Future Work
- Compared to the traditional method, both the gaze- and mouse-based adaptations significantly improved user efficiency in the identification and comparison tasks. However, the gaze-based adaptations (GF and GD) were less efficient and less effective than mouse dynamic adaptation in both task types. In the identification tasks, the gaze-based methods were even less effective than the traditional method, probably because the gaze-adaptive legends contained all three layers (i.e., redundant information) and may have confused participants who intended to focus on only one layer.
- Despite incorporating both content and placement adaptation, the gaze dynamic method was less efficient than the mouse dynamic method. This is primarily due to the lower spatial tracking precision of the low-cost eye tracker, which led to longer average fixation durations and visual fatigue; this was the most commonly mentioned issue in the user feedback.
- Different adaptation methods resulted in different visual behavior characteristics. First, participants switched their visual focus to the layer content panel considerably less under the adaptive methods (GF, GD and MD) than under the traditional method, as we predicted. Second, when using methods with placement adaptation (GD and MD), participants’ visual searches covered smaller regions than those without placement adaptation (TR and GF). Third, when using methods based on gaze interaction (GF and GD), participants had longer fixation durations than those using a mouse (TR and MD).
- The gaze-adaptive methods (GF and GD) were generally well received by the participants, but they were also perceived as somewhat distracting and insensitive. These perceptions did not appear to hinder performance or the user experience in this study, but they leave room for further improvement to reduce such negative impressions.
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Robinson, A.H.; Morrison, J.L.; Muehrcke, P.C.; Kimmerling, A.J.; Guptil, S.C. Elements of cartography. Geod. List 1995, 50, 408–409. [Google Scholar]
- Kubicek, P.; Sasinka, C.; Stachon, Z.; Sterba, Z.; Apeltauer, J.; Urbanek, T. Cartographic Design and Usability of Visual Variables for Linear Features. Cartogr. J. 2017, 54, 91–102. [Google Scholar] [CrossRef]
- Bednarik, R.; Vrzakova, H.; Hradis, M. What do you want to do next: A novel approach for intent prediction in gaze-based interaction. In Proceedings of the Symposium on Eye Tracking Research and Applications, Santa Barbara, CA, USA, 28–30 March 2012; pp. 83–90. [Google Scholar]
- Liao, H.; Dong, W.H.; Huang, H.S.; Gartner, G.; Liu, H.P. Inferring user tasks in pedestrian navigation from eye movement data in real-world environments. Int. J. Geogr. Inf. Sci. 2019, 33, 739–763. [Google Scholar] [CrossRef]
- David-John, B.; Peacock, C.; Zhang, T.; Murdison, T.S.; Benko, H.; Jonker, T.R. Towards gaze-based prediction of the intent to interact in virtual reality. In Proceedings of the ACM Symposium on Eye Tracking Research and Applications, Virtual Event, 24–27 May 2021; pp. 1–7, Article 2. [Google Scholar]
- Chen, X.; Hou, W. Gaze-Based Interaction Intention Recognition in Virtual Reality. Electronics 2022, 11, 1647. [Google Scholar] [CrossRef]
- Stachoň, Z.; Šašinka, Č.; Čeněk, J.; Angsüsser, S.; Kubíček, P.; Štěrba, Z.; Bilíková, M. Effect of Size, Shape and Map Background in Cartographic Visualization: Experimental Study on Czech and Chinese Populations. ISPRS Int. J. Geo-Inf. 2018, 7, 427. [Google Scholar] [CrossRef]
- Cybulski, P.; Krassanakis, V. The effect of map label language on the visual search of cartographic point symbols. Cartogr. Geogr. Inf. Sci. 2022, 49, 189–204. [Google Scholar] [CrossRef]
- Keskin, M.; Ooms, K.; Dogru, A.O.; De Maeyer, P. Exploring the Cognitive Load of Expert and Novice Map Users Using EEG and Eye Tracking. ISPRS Int. J. Geo-Inf. 2020, 9, 429. [Google Scholar] [CrossRef]
- Popelka, S.; Herman, L.; Reznik, T.; Parilova, M.; Jedlicka, K.; Bouchal, J.; Kepka, M.; Charvat, K. User Evaluation of Map-Based Visual Analytic Tools. ISPRS Int. J. Geo-Inf. 2019, 8, 363. [Google Scholar] [CrossRef]
- Edler, D.; Keil, J.; Tuller, M.C.; Bestgen, A.K.; Dickmann, F. Searching for the ‘Right’ Legend: The Impact of Legend Position on Legend Decoding in a Cartographic Memory Task. Cartogr. J. 2020, 57, 6–17. [Google Scholar] [CrossRef]
- Duchowski, A.T. Gaze-based interaction: A 30 year retrospective. Comput. Graph. 2018, 73, 59–69. [Google Scholar] [CrossRef]
- Kasprowski, P.; Harezlak, K.; Niezabitowski, M. Eye movement tracking as a new promising modality for human computer interaction. In Proceedings of the 17th International Carpathian Control Conference (ICCC), High Tatras, Slovakia, 29 May–1 June 2016; pp. 314–318. [Google Scholar]
- Singh, R.; Miller, T.; Newn, J.; Velloso, E.; Vetere, F.; Sonenberg, L. Combining gaze and AI planning for online human intention recognition. Artif. Intell. 2020, 284, 103275. [Google Scholar] [CrossRef]
- Fairbairn, D.; Hepburn, J. Eye-tracking in map use, map user and map usability research: What are we looking for? Int. J. Cartogr. 2023, 9, 1–24. [Google Scholar] [CrossRef]
- Ooms, K.; Krassanakis, V. Measuring the Spatial Noise of a Low-Cost Eye Tracker to Enhance Fixation Detection. J. Imaging 2018, 4, 96. [Google Scholar] [CrossRef]
- Jacob, R.J.K. The use of eye movements in human-computer interaction techniques: What you look at is what you get. ACM Trans. Inf. Syst. 1991, 9, 152–169. [Google Scholar] [CrossRef]
- Ware, C.; Mikaelian, H.H. An evaluation of an eye tracker as a device for computer input. In Proceedings of the SIGCHI/GI Conference on Human Factors in Computing Systems and Graphics Interface, Toronto, ON, Canada, 1 May 1986; pp. 183–188. [Google Scholar]
- Zhang, H.; Hu, Y.; Zhu, J.; Fu, L.; Xu, B.; Li, W. A gaze-based interaction method for large-scale and large-space disaster scenes within mobile virtual reality. Trans. GIS 2022, 26, 1280–1298. [Google Scholar] [CrossRef]
- Piumsomboon, T.; Lee, G.; Lindeman, R.W.; Billinghurst, M. Exploring natural eye-gaze-based interaction for immersive virtual reality. In Proceedings of the 2017 IEEE Symposium on 3D User Interfaces (3DUI), Los Angeles, CA, USA, 18–19 March 2017; pp. 36–39. [Google Scholar]
- Isomoto, T.; Yamanaka, S.; Shizuki, B. Interaction Design of Dwell Selection Toward Gaze-based AR/VR Interaction. In Proceedings of the 2022 Symposium on Eye Tracking Research and Applications, Seattle, WA, USA, 8 June 2022; pp. 31–32, Article 39. [Google Scholar]
- Deng, C.; Tian, C.; Kuai, S. A combination of eye-gaze and head-gaze interactions improves efficiency and user experience in an object positioning task in virtual environments. Appl. Ergon. 2022, 103, 103785. [Google Scholar] [CrossRef] [PubMed]
- Hirzle, T.; Gugenheimer, J.; Geiselhart, F.; Bulling, A.; Rukzio, E. A Design Space for Gaze Interaction on Head-mounted Displays. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 2 May 2019; p. 625. [Google Scholar]
- Majaranta, P.; Ahola, U.-K.; Špakov, O. Fast gaze typing with an adjustable dwell time. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Boston, MA, USA, 4 April 2009; pp. 357–360. [Google Scholar]
- Paulus, Y.T.; Remijn, G.B. Usability of various dwell times for eye-gaze-based object selection with eye tracking. Displays 2021, 67, 101997. [Google Scholar] [CrossRef]
- Hansen, J.P.; Johansen, A.S.; Hansen, D.W.; Itoh, K.; Mashino, S. Command without a Click: Dwell Time Typing by Mouse and Gaze Selections. In Human-Computer Interaction INTERACT’03; Rauterberg, M., Ed.; IOS Press: Amsterdam, The Netherlands, 2003; pp. 121–128. [Google Scholar]
- Dunphy, P.; Fitch, A.; Olivier, P. Gaze-contingent passwords at the ATM. In Proceedings of the 4th Conference on Communication by Gaze Interaction—Communication, Environment and Mobility Control by Gaze COGAIN 2008, Prague, Czech Republic, 2–3 September 2008. [Google Scholar]
- Feit, A.M.; Williams, S.; Toledo, A.; Paradiso, A.; Kulkarni, H.; Kane, S.; Morris, M.R. Toward Everyday Gaze Input: Accuracy and Precision of Eye Tracking and Implications for Design. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, New York, NY, USA, 2 May 2017; pp. 1118–1130. [Google Scholar]
- Drewes, H.; Schmidt, A. Interacting with the computer using gaze gestures. In Proceedings of the IFIP Conference on Human-Computer Interaction, Rio de Janeiro, Brazil, 10 September 2007; pp. 475–488. [Google Scholar]
- Hyrskykari, A.; Istance, H.; Vickers, S. Gaze gestures or dwell-based interaction? In Proceedings of the Symposium on Eye Tracking Research and Applications, Santa Barbara, CA, USA, 28–30 March 2012. [Google Scholar]
- Kytö, M.; Ens, B.; Piumsomboon, T.; Lee, G.; Billinghurst, M. Pinpointing: Precise Head- and Eye-Based Target Selection for Augmented Reality. In Proceedings of the Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018. [Google Scholar]
- Barz, M.; Kapp, S.; Kuhn, J.; Sonntag, D. Automatic Recognition and Augmentation of Attended Objects in Real-time using Eye Tracking and a Head-mounted Display. In Proceedings of the ACM Symposium on Eye Tracking Research and Applications, Virtual Event, 24–27 May 2021; pp. 1–4, Article 3. [Google Scholar]
- Göbel, F.; Bakogioannis, N.; Henggeler, K.; Tschümperlin, R.; Xu, Y.; Kiefer, P.; Raubal, M. A Public Gaze-Controlled Campus Map. In Proceedings of the Eye Tracking for Spatial Research, Proceedings of the 3rd International Workshop, ETH, Zurich, Switzerland, 14 January 2018. [Google Scholar]
- Zhu, L.; Wang, S.; Yuan, W.; Dong, W.; Liu, J. An Interactive Map Based on Gaze Control. Geomat. Inf. Sci. Wuhan Univ. 2020, 45, 736–743. (In Chinese) [Google Scholar] [CrossRef]
- Liao, H.; Zhang, C.B.; Zhao, W.D.; Dong, W.H. Toward Gaze-Based Map Interactions: Determining the Dwell Time and Buffer Size for the Gaze-Based Selection of Map Features. ISPRS Int. J. Geo-Inf. 2022, 11, 127. [Google Scholar] [CrossRef]
- Bektaş, K.; Çöltekin, A.; Krüger, J.; Duchowski, A.T.; Fabrikant, S.I. GeoGCD: Improved visual search via gaze-contingent display. In Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications, Denver, CO, USA, 25–28 June 2019; pp. 1–10, Article 84. [Google Scholar]
- Giannopoulos, I.; Kiefer, P.; Raubal, M. GeoGazemarks: Providing gaze history for the orientation on small display maps. In Proceedings of the 14th ACM international conference on Multimodal interaction, Santa Monica, CA, USA, 22 October 2012; pp. 165–172. [Google Scholar]
- Tateosian, L.G.; Glatz, M.; Shukunobe, M.; Chopra, P. GazeGIS: A Gaze-Based Reading and Dynamic Geographic Information System. In Proceedings of the ETVIS 2015: Eye Tracking and Visualization, Chicago, IL, USA, 25 October 2015; pp. 129–147. [Google Scholar]
- Göbel, F.; Kiefer, P.; Giannopoulos, I.; Duchowski, A.T.; Raubal, M. Improving map reading with gaze-adaptive legends. In Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications, Warsaw, Poland, 14 June 2018; pp. 1–9. [Google Scholar]
- Lallé, S.; Toker, D.; Conati, C. Gaze-Driven Adaptive Interventions for Magazine-Style Narrative Visualizations. IEEE Trans. Vis. Comput. Graph. 2021, 27, 2941–2952. [Google Scholar] [CrossRef]
- Barral, O.; Lallé, S.; Iranpour, A.; Conati, C. Effect of Adaptive Guidance and Visualization Literacy on Gaze Attentive Behaviors and Sequential Patterns on Magazine-Style Narrative Visualizations. ACM Trans. Interact. Intell. Syst. 2021, 11, 1–46. [Google Scholar] [CrossRef]
- Keskin, M.; Kettunen, P. Potential of eye-tracking for interactive geovisual exploration aided by machine learning. Int. J. Cartogr. 2023, 9, 1–23. [Google Scholar] [CrossRef]
- Olsen, A. The Tobii I-VT Fixation Filter Algorithm Description. Available online: http://www.vinis.co.kr/ivt_filter.pdf (accessed on 18 April 2023).
- Hart, S.G.; Staveland, L.E. Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research. In Advances in Psychology; Hancock, P.A., Meshkati, N., Eds.; North Holland: Amsterdam, The Netherlands, 1988; Volume 52, pp. 139–183. [Google Scholar]
- Laugwitz, B.; Held, T.; Schrepp, M. Construction and Evaluation of a User Experience Questionnaire. In HCI and Usability for Education and Work; Lecture Notes in Computer Science; Holzinger, A., Ed.; Springer: Berlin/Heidelberg, Germany, 2008; pp. 63–76. [Google Scholar]
- Goldberg, J.H.; Kotval, X.P. Computer interface evaluation using eye movements: Methods and constructs. Int. J. Ind. Ergon. 1999, 24, 631–645. [Google Scholar] [CrossRef]
- Just, M.A.; Carpenter, P.A. Eye fixations and cognitive processes. Cogn. Psychol. 1976, 8, 441–480. [Google Scholar] [CrossRef]
- Yang, N.; Wu, G.; MacEachren, A.M.; Pang, X.; Fang, H. Comparison of font size and background color strategies for tag weights on tag maps. Cartogr. Geogr. Inf. Sci. 2023, 50, 162–177. [Google Scholar] [CrossRef]
- Jia, F.; Wang, W.; Yang, J.; Li, T.; Song, G.; Xu, Y. Effectiveness of Rectangular Cartogram for Conveying Quantitative Information: An Eye Tracking-Based Evaluation. ISPRS Int. J. Geo-Inf. 2023, 12, 39. [Google Scholar] [CrossRef]
- Cohen, J. Statistical Power Analysis for the Behavioral Sciences; Academic Press: New York, NY, USA, 2013. [Google Scholar]
- Çöltekin, A.; Heil, B.; Garlandini, S.; Fabrikant, S.I. Evaluating the effectiveness of interactive map interface designs: A case study integrating usability metrics with eye-movement analysis. Cartogr. Geogr. Inf. Sci. 2009, 36, 5–17. [Google Scholar] [CrossRef]
- Çöltekin, A.; Fabrikant, S.I.; Lacayo, M. Exploring the efficiency of users’ visual analytics strategies based on sequence analysis of eye movement recordings. Int. J. Geogr. Inf. Sci. 2010, 24, 1559–1575. [Google Scholar] [CrossRef]
- Ooms, K.; De Maeyer, P.; Fack, V. Study of the attentive behavior of novice and expert map users using eye tracking. Cartogr. Geogr. Inf. Sci. 2014, 41, 37–54. [Google Scholar] [CrossRef]
- Kiefer, P.; Giannopoulos, I.; Raubal, M. Where Am I? Investigating Map Matching During Self-Localization with Mobile Eye Tracking in an Urban Environment. Trans. GIS 2014, 18, 660–686. [Google Scholar] [CrossRef]
- Grossman, T.; Balakrishnan, R. The bubble cursor: Enhancing target acquisition by dynamic resizing of the cursor’s activation area. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Portland, OR, USA, 2–7 April 2005; pp. 281–290. [Google Scholar]
- Niu, Y.F.; Gao, Y.; Zhang, Y.T.; Xue, C.Q.; Yang, L.X. Improving Eye-Computer Interaction Interface Design: Ergonomic Investigations of the Optimum Target Size and Gaze-triggering Dwell Time. J. Eye Mov. Res. 2019, 12, 8. [Google Scholar] [CrossRef] [PubMed]
- Demšar, U.; Çöltekin, A. Quantifying gaze and mouse interactions on spatial visual interfaces with a new movement analytics methodology. PLoS ONE 2017, 12, e0181818. [Google Scholar] [CrossRef]
Task trials per method:

| Method | Valid Trials | Invalid (Skipped) Trials |
| --- | --- | --- |
| Traditional identification (TR) | 555 | 3 |
| Gaze fixed adaptation (GF) | 542 | 16 |
| Gaze dynamic adaptation (GD) | 548 | 10 |
| Mouse dynamic adaptation (MD) | 558 | 0 |