
Eye Tracking Techniques, Applications, and Challenges

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 October 2021) | Viewed by 56566

Special Issue Editors

Prof. Dr. Marco Porta
Guest Editor
Department of Electrical, Computer and Biomedical Engineering, Faculty of Engineering, University of Pavia, Via A. Ferrata 5, 27100 Pavia, Italy
Interests: vision-based perceptive interfaces; eye tracking; human-computer interaction

Dr. Pawel Kasprowski
Guest Editor
Department of Applied Informatics, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
Interests: eye tracking; data mining; human-computer interaction; machine learning; signal processing

Dr. Luca Lombardi
Guest Editor
Department of Electrical, Computer and Biomedical Engineering, University of Pavia, Pavia, Italy

Dr. Piercarlo Dondi
Guest Editor
Department of Electrical, Computer and Biomedical Engineering, University of Pavia, Via Ferrata 5, 27100 Pavia, Italy
Interests: computer vision; human-computer interaction; 3D modelling; digital humanities

Special Issue Information

Dear Colleagues,

Eye tracking technology is becoming more widespread nowadays, thanks to the recent availability of cheap commercial devices such as gaze detection sensors. At the same time, novel techniques are continuously being pursued to improve gaze detection precision, and new ways to fully exploit the potential of eye data are being explored.

The number of potential applications of eye tracking is also growing constantly, including health care (both medical diagnosis and treatment), human-computer interaction, user behavior understanding, psychology, biometric identification, education, and many more.

The purpose of this Special Issue is to present recent advancements in eye tracking research, including advancements in eye signal acquisition using different optical and nonoptical sensors and different approaches to eye movement signal processing. Any research that exploits eye tracking sensors for various purposes, be it human-computer interaction, user behavior understanding, biometrics, or others, is also welcome.

The Special Issue will include extended versions of selected papers from the ETTAC 2020 Workshop, the first workshop on eye tracking techniques, applications, and challenges, organized during the 25th International Conference on Pattern Recognition (ICPR 2020). We also welcome other contributions from researchers involved and interested in this research area.

Prof. Dr. Marco Porta
Dr. Pawel Kasprowski
Dr. Luca Lombardi
Dr. Piercarlo Dondi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, you can go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Gaze detection
  • Human-computer interaction
  • Usability evaluation
  • Virtual and augmented reality
  • User behavior understanding
  • Biometrics, security and privacy
  • Medicine and health care
  • Gaze data visualization

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (14 papers)


Research

19 pages, 6409 KiB  
Article
Combining Implicit and Explicit Feature Extraction for Eye Tracking: Attention Classification Using a Heterogeneous Input
by Lisa-Marie Vortmann and Felix Putze
Sensors 2021, 21(24), 8205; https://doi.org/10.3390/s21248205 - 8 Dec 2021
Cited by 9 | Viewed by 3433
Abstract
Statistical measurements of eye movement-specific properties, such as fixations, saccades, blinks, or pupil dilation, are frequently utilized as input features for machine learning algorithms applied to eye tracking recordings. These characteristics are intended to be interpretable aspects of eye gaze behavior. However, prior research has demonstrated that when trained on implicit representations of raw eye tracking data, neural networks outperform these traditional techniques. To leverage the strengths and information of both feature sets, we integrated implicit and explicit eye tracking features in one classification approach in this work. A neural network was adapted to process the heterogeneous input and predict the internally and externally directed attention of 154 participants. We compared the accuracies reached by the implicit and combined features for different window lengths and evaluated the approaches in terms of person- and task-independence. The results indicate that combining implicit and explicit feature extraction techniques for eye tracking data significantly improves classification results for attentional state detection. The attentional state was correctly classified during new tasks with an accuracy better than chance, and person-independent classification even outperformed person-dependently trained classifiers for some settings. For future experiments and applications that require eye tracking data classification, we suggest considering implicit data representations in addition to interpretable explicit features.
(This article belongs to the Special Issue Eye Tracking Techniques, Applications, and Challenges)
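
As a rough illustration of the heterogeneous-input idea described above, the sketch below feeds raw gaze samples to a small 1D-CNN branch and explicit statistics to a dense branch, then concatenates both before classification. All layer sizes, the window length, and the feature count are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of a heterogeneous-input attention classifier (PyTorch).
# Layer sizes and input shapes are assumptions for illustration only.
import torch
import torch.nn as nn

class HeterogeneousGazeNet(nn.Module):
    def __init__(self, n_explicit: int = 12):
        super().__init__()
        # Implicit branch: raw (x, y, pupil) samples as a 3-channel sequence.
        self.implicit = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),   # -> (batch, 32)
        )
        # Explicit branch: fixation/saccade/blink/pupil statistics.
        self.explicit = nn.Sequential(nn.Linear(n_explicit, 32), nn.ReLU())
        # Fused head: internally vs. externally directed attention.
        self.head = nn.Linear(32 + 32, 2)

    def forward(self, raw, feats):
        z = torch.cat([self.implicit(raw), self.explicit(feats)], dim=1)
        return self.head(z)

model = HeterogeneousGazeNet()
logits = model(torch.randn(8, 3, 250), torch.randn(8, 12))  # -> (8, 2)
```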

14 pages, 4725 KiB  
Communication
Accurate Pupil Center Detection in Off-the-Shelf Eye Tracking Systems Using Convolutional Neural Networks
by Andoni Larumbe-Bergera, Gonzalo Garde, Sonia Porta, Rafael Cabeza and Arantxa Villanueva
Sensors 2021, 21(20), 6847; https://doi.org/10.3390/s21206847 - 15 Oct 2021
Cited by 17 | Viewed by 4239
Abstract
Remote eye tracking technology has grown rapidly in recent years due to its applicability in many research areas. In this paper, a video-oculography method based on convolutional neural networks (CNNs) for pupil center detection over webcam images is proposed. As the first contribution of this work, and in order to train the model, a pupil center manual labeling procedure of a facial landmark dataset has been performed. The model has been tested over both real and synthetic databases and outperforms state-of-the-art methods, achieving pupil center estimation errors below the size of a constricted pupil in more than 95% of the images, while reducing computing time by a factor of 8. The results show the importance of using high-quality training data and well-known architectures to achieve outstanding performance.
(This article belongs to the Special Issue Eye Tracking Techniques, Applications, and Challenges)
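
To make the regression setup concrete, here is a minimal sketch (not the authors' network) of a CNN that maps a grayscale eye crop to normalized pupil-center coordinates; the architecture and input size are assumptions.

```python
# Hedged sketch: a small CNN regressing the (x, y) pupil centre from a
# grayscale eye crop. Architecture is illustrative, not the paper's model.
import torch
import torch.nn as nn

class PupilCenterNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.regressor = nn.Linear(64, 2)  # normalised (x, y) in [0, 1]

    def forward(self, x):
        return torch.sigmoid(self.regressor(self.features(x)))

net = PupilCenterNet()
xy = net(torch.randn(4, 1, 64, 96))  # 4 eye crops -> 4 pupil centres
```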

22 pages, 1879 KiB  
Article
IEyeGASE: An Intelligent Eye Gaze-Based Assessment System for Deeper Insights into Learner Performance
by Chandrika Kamath Ramachandra and Amudha Joseph
Sensors 2021, 21(20), 6783; https://doi.org/10.3390/s21206783 - 13 Oct 2021
Cited by 14 | Viewed by 3697
Abstract
In the current education environment, learning takes place outside the physical classroom, and tutors need to determine whether learners are absorbing the content delivered to them. Online assessment has become a viable option for tutors to establish the achievement of course learning outcomes by learners. It provides real-time progress and immediate results; however, it faces challenges in quantifying learner aspects such as wavering behavior, confidence level, knowledge acquired, quickness in completing the task, task engagement, and inattentional blindness to critical information. An intelligent eye gaze-based assessment system called IEyeGASE was developed to measure insights into these behavioral aspects of learners. The system can be integrated into existing online assessment systems and helps tutors re-calibrate learning goals and provide necessary corrective actions.
(This article belongs to the Special Issue Eye Tracking Techniques, Applications, and Challenges)

40 pages, 48332 KiB  
Article
Assessment of the Effect of Cleanliness on the Visual Inspection of Aircraft Engine Blades: An Eye Tracking Study
by Jonas Aust, Antonija Mitrovic and Dirk Pons
Sensors 2021, 21(18), 6135; https://doi.org/10.3390/s21186135 - 13 Sep 2021
Cited by 17 | Viewed by 5358
Abstract
Background—The visual inspection of aircraft parts such as engine blades is crucial to ensure safe aircraft operation. There is a need to understand the reliability of such inspections and the factors that affect the results. In this study, the factor ‘cleanliness’ was analysed among other factors. Method—Fifty industry practitioners of three expertise levels inspected 24 images of parts with a variety of defects in clean and dirty conditions, resulting in a total of N = 1200 observations. The data were analysed statistically to evaluate the relationships between cleanliness and inspection performance. Eye tracking was applied to understand the search strategies of different levels of expertise for various part conditions. Results—The results show an inspection accuracy of 86.8% and 66.8% for clean and dirty blades, respectively. The statistical analysis showed that cleanliness and defect type influenced the inspection accuracy, while expertise was surprisingly not a significant factor. In contrast, inspection time was affected by expertise along with other factors, including cleanliness, defect type and visual acuity. Eye tracking revealed that inspectors (experts) apply a more structured and systematic search with fewer fixations and revisits compared to other groups. Conclusions—Cleaning prior to inspection leads to better results. Eye tracking revealed that inspectors used an underlying search strategy characterised by edge detection and differentiation between surface deposits and other types of damage, which contributed to better performance.
(This article belongs to the Special Issue Eye Tracking Techniques, Applications, and Challenges)
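
Fixation counts and AOI revisits, the kind of search-strategy measures this study compares across expertise levels, can be computed along these lines; the AOI boxes and fixation data below are made-up illustrations.

```python
# Hedged sketch: counting re-entries (revisits) into areas of interest from
# a fixation sequence. AOI boxes and fixations are synthetic examples.
def aoi_of(fix, aois):
    """Return the name of the AOI containing fixation (x, y), or None."""
    x, y = fix
    for name, (x0, y0, x1, y1) in aois.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def revisit_counts(fixations, aois):
    """Count re-entries into each AOI (visits after the first)."""
    visits, last = {}, None
    for fix in fixations:
        aoi = aoi_of(fix, aois)
        if aoi is not None and aoi != last:
            visits[aoi] = visits.get(aoi, 0) + 1
        last = aoi
    return {a: max(0, v - 1) for a, v in visits.items()}

aois = {"leading_edge": (0, 0, 100, 400), "root": (100, 300, 300, 400)}
fixs = [(50, 50), (150, 350), (60, 200), (40, 390)]
print(revisit_counts(fixs, aois))  # {'leading_edge': 1, 'root': 0}
```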

18 pages, 876 KiB  
Article
Biometric Identification Based on Eye Movement Dynamic Features
by Katarzyna Harezlak, Michal Blasiak and Pawel Kasprowski
Sensors 2021, 21(18), 6020; https://doi.org/10.3390/s21186020 - 8 Sep 2021
Cited by 6 | Viewed by 2583
Abstract
The paper presents studies on biometric identification methods based on the eye movement signal. New signal features were investigated for this purpose. They included its representation in the frequency domain and the largest Lyapunov exponent, which characterizes the dynamics of the eye movement signal seen as a nonlinear time series. These features, along with the velocities and accelerations used in previously conducted works, were determined for 100 ms eye movement segments. Twenty-four participants took part in the experiment, which consisted of two sessions. The users’ task was to observe a point appearing on the screen in 29 locations. The eye movement recordings for each point were used to create a feature vector in two variants: one vector for one point, and one vector including the signal for three consecutive locations. Two approaches to defining the training and test sets were applied. In the first one, 75% of randomly selected vectors were used as the training set, under the conditions of equal proportions for each participant in both sets and the disjointness of the training and test sets. Among four classifiers (kNN with k = 5, decision tree, naïve Bayes, and random forest), good classification performance was obtained for the decision tree and random forest; the efficiency of the latter reached 100%. The outcomes were much worse in the second scenario, in which the training and test sets were defined based on recordings from different sessions; the possible reasons are discussed in the paper.
(This article belongs to the Special Issue Eye Tracking Techniques, Applications, and Challenges)
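
A hedged sketch of the feature-vector idea: per-segment velocity/acceleration statistics plus low-frequency FFT magnitudes, classified with a random forest. Segment length, sampling rate, and feature choices are assumptions, and the largest-Lyapunov-exponent feature is omitted for brevity.

```python
# Hedged sketch: per-segment eye movement features and identity
# classification with a random forest. Data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def segment_features(xy, fs=250):
    """Features for one 100 ms eye movement segment; xy is (n, 2)."""
    vel = np.linalg.norm(np.diff(xy, axis=0), axis=1) * fs   # sample velocities
    acc = np.diff(vel) * fs                                  # accelerations
    spectrum = np.abs(np.fft.rfft(xy[:, 0]))[:8]             # low-freq magnitudes
    return np.concatenate([[vel.mean(), vel.max(), acc.mean(), acc.std()],
                           spectrum])

rng = np.random.default_rng(0)
X = np.stack([segment_features(rng.normal(size=(25, 2))) for _ in range(200)])
y = rng.integers(0, 24, size=200)  # identity labels for 24 participants
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```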

18 pages, 20589 KiB  
Article
Saliency-Based Gaze Visualization for Eye Movement Analysis
by Sangbong Yoo, Seongmin Jeong, Seokyeon Kim and Yun Jang
Sensors 2021, 21(15), 5178; https://doi.org/10.3390/s21155178 - 30 Jul 2021
Cited by 6 | Viewed by 3363
Abstract
Gaze movement and visual stimuli have been utilized to analyze human visual attention intuitively. Gaze behavior studies mainly show statistical analyses of eye movements and human visual attention. During these analyses, eye movement data and the saliency map are presented to the analysts as separate views or merged views. However, the analysts become frustrated when they need to memorize all of the separate views or when the eye movements obscure the saliency map in the merged views. It is therefore not easy to analyze how visual stimuli affect gaze movements, since existing techniques focus excessively on the eye movement data. In this paper, we propose a novel visualization technique for analyzing gaze behavior that uses saliency features as visual clues to express the visual attention of an observer. The visual clues that represent visual attention are analyzed to reveal which saliency features are prominent for the visual stimulus analysis. We visualize the gaze data with the saliency features to interpret the visual attention. We analyze gaze behavior with the proposed visualization to evaluate whether embedding saliency features within the visualization helps us understand the visual attention of an observer.
(This article belongs to the Special Issue Eye Tracking Techniques, Applications, and Challenges)
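
One simple way to pair gaze with saliency features, in the spirit of this approach: sample a saliency map at each gaze position and color the scanpath by the sampled value. The map and gaze data below are synthetic stand-ins, not the paper's technique.

```python
# Hedged sketch: colouring a scanpath by the saliency value under each gaze
# sample. Saliency map and gaze positions are synthetic.
import numpy as np
import matplotlib.pyplot as plt

h, w = 120, 160
yy, xx = np.mgrid[0:h, 0:w]
saliency = np.exp(-(((xx - 80) ** 2 + (yy - 60) ** 2) / (2 * 20.0 ** 2)))

gaze = np.column_stack([np.random.uniform(0, w, 50),
                        np.random.uniform(0, h, 50)])   # (x, y) samples
vals = saliency[gaze[:, 1].astype(int), gaze[:, 0].astype(int)]

plt.imshow(saliency, cmap="gray")
plt.plot(gaze[:, 0], gaze[:, 1], lw=0.5, alpha=0.5)       # scanpath
plt.scatter(gaze[:, 0], gaze[:, 1], c=vals, cmap="viridis", s=15)
plt.colorbar(label="saliency at gaze")
plt.show()
```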

21 pages, 1819 KiB  
Article
Low-Cost Eye Tracking Calibration: A Knowledge-Based Study
by Gonzalo Garde, Andoni Larumbe-Bergera, Benoît Bossavit, Sonia Porta, Rafael Cabeza and Arantxa Villanueva
Sensors 2021, 21(15), 5109; https://doi.org/10.3390/s21155109 - 28 Jul 2021
Cited by 6 | Viewed by 3427
Abstract
Subject calibration has been demonstrated to improve the accuracy of high-performance eye trackers. However, the true weight of calibration in off-the-shelf eye tracking solutions has not yet been addressed. In this work, a theoretical framework to measure the effects of calibration in deep learning-based gaze estimation is proposed for low-resolution systems. To this end, features extracted from the synthetic U2Eyes dataset are used in a fully connected network in order to isolate the effect of specific user features, such as kappa angles. Then, the impact of system calibration in a real setup employing I2Head dataset images is studied. The obtained results show accuracy improvements of over 50%, proving that calibration is a key process also in low-resolution gaze estimation scenarios. Furthermore, we show that, after calibration, accuracy values close to those obtained by high-resolution systems (in the range of 0.7°) could theoretically be obtained if a careful selection of image features were performed, demonstrating significant room for improvement for off-the-shelf eye tracking systems.
(This article belongs to the Special Issue Eye Tracking Techniques, Applications, and Challenges)
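
To see why a short calibration can matter so much, the sketch below fits a per-subject affine correction between estimated and true gaze by least squares, on synthetic data; the paper's feature-based framework is considerably more elaborate.

```python
# Hedged sketch: per-subject calibration as an affine correction fitted by
# least squares. Data are synthetic; offsets and gains are assumptions.
import numpy as np

rng = np.random.default_rng(1)
true = rng.uniform(-15, 15, size=(9, 2))             # 9 calibration targets (deg)
est = 0.9 * true + np.array([2.0, -1.5]) + rng.normal(0, 0.3, (9, 2))

A = np.hstack([est, np.ones((9, 1))])                # affine model: est -> true
W, *_ = np.linalg.lstsq(A, true, rcond=None)

def calibrate(g):
    return np.hstack([g, np.ones((len(g), 1))]) @ W

err_before = np.linalg.norm(est - true, axis=1).mean()
err_after = np.linalg.norm(calibrate(est) - true, axis=1).mean()
print(f"mean error: {err_before:.2f} -> {err_after:.2f} deg")
```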

16 pages, 8702 KiB  
Article
OpenEDS2020 Challenge on Gaze Tracking for VR: Dataset and Results
by Cristina Palmero, Abhishek Sharma, Karsten Behrendt, Kapil Krishnakumar, Oleg V. Komogortsev and Sachin S. Talathi
Sensors 2021, 21(14), 4769; https://doi.org/10.3390/s21144769 - 13 Jul 2021
Cited by 7 | Viewed by 4002
Abstract
This paper summarizes the OpenEDS 2020 Challenge dataset, the proposed baselines, and the results obtained by the top three winners of each competition: (1) the Gaze Prediction Challenge, with the goal of predicting the gaze vector 1 to 5 frames into the future based on a sequence of previous eye images, and (2) the Sparse Temporal Semantic Segmentation Challenge, with the goal of using temporal information to propagate semantic eye labels to contiguous eye image frames. Both competitions were based on the OpenEDS2020 dataset, a novel dataset of eye-image sequences captured at a frame rate of 100 Hz under controlled illumination, using a virtual-reality head-mounted display with two synchronized eye-facing cameras. The dataset, which we make publicly available for the research community, consists of 87 subjects performing several gaze-elicited tasks, and is divided into 2 subsets, one for each competition task. The proposed baselines, based on deep learning approaches, obtained an average angular error of 5.37 degrees for gaze prediction, and a mean intersection over union (mIoU) score of 84.1% for semantic segmentation. The winning solutions were able to outperform the baselines, achieving an angular error as low as 3.17 degrees for the former task and up to 95.2% mIoU for the latter.
(This article belongs to the Special Issue Eye Tracking Techniques, Applications, and Challenges)
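
For reference, the two challenge metrics can be computed along the following lines; array shapes and the class count are illustrative assumptions.

```python
# Hedged sketch of the two metrics: angular error between 3D gaze vectors
# (gaze prediction) and mean IoU over classes (semantic segmentation).
import numpy as np

def angular_error_deg(pred, gt):
    """Mean angle between unit-normalised 3D gaze vectors, in degrees."""
    p = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    g = gt / np.linalg.norm(gt, axis=1, keepdims=True)
    cos = np.clip(np.sum(p * g, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos)).mean()

def mean_iou(pred, gt, n_classes=4):
    """mIoU over classes present in either prediction or ground truth."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union:
            ious.append(inter / union)
    return float(np.mean(ious))
```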

21 pages, 3692 KiB  
Article
Gaze Behavior Effect on Gaze Data Visualization at Different Abstraction Levels
by Sangbong Yoo, Seongmin Jeong and Yun Jang
Sensors 2021, 21(14), 4686; https://doi.org/10.3390/s21144686 - 8 Jul 2021
Cited by 4 | Viewed by 3841
Abstract
Many gaze data visualization techniques intuitively show eye movement together with visual stimuli. The eye tracker records a large number of eye movements within a short period. Therefore, visualizing raw gaze data with the visual stimulus appears complicated and obscured, making it difficult to gain insight through visualization. To avoid this complication, we often employ fixation identification algorithms for more abstract visualizations. In the past, many scientists have focused on gaze data abstraction with the attention map and analyzed detailed gaze movement patterns with the scanpath visualization. Abstract eye movement patterns change dramatically depending on the fixation identification algorithms used in preprocessing. However, it is difficult to find out how fixation identification algorithms affect gaze movement pattern visualizations. Additionally, scientists often spend much time adjusting parameters manually in the fixation identification algorithms. In this paper, we propose a gaze behavior-based data processing method for abstract gaze data visualization. The proposed method classifies raw gaze data using machine learning models for image classification, such as CNN, AlexNet, and LeNet. Additionally, we compare velocity-based identification (I-VT), dispersion-based identification (I-DT), density-based fixation identification, velocity- and dispersion-based identification (I-VDT), and machine learning-based and behavior-based models on various visualizations at each abstraction level, such as the attention map, scanpath, and abstract gaze movement visualization.
(This article belongs to the Special Issue Eye Tracking Techniques, Applications, and Challenges)
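
As a concrete reference for the simplest of the compared algorithms, here is a minimal I-VT sketch; the 30 deg/s threshold and 250 Hz sampling rate are common defaults, not the paper's settings.

```python
# Hedged sketch of velocity-threshold identification (I-VT): samples below a
# velocity threshold are fixations; runs of them collapse to centroids.
import numpy as np

def ivt(gaze_deg, fs=250.0, threshold=30.0):
    """Label each sample as fixation (True) or saccade (False).

    gaze_deg: (n, 2) gaze positions in degrees; fs: sampling rate in Hz.
    """
    vel = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1) * fs  # deg/s
    vel = np.concatenate([[vel[0]], vel])   # pad so labels match samples
    return vel < threshold

def fixation_centroids(gaze_deg, labels):
    """Collapse runs of fixation samples into centroid points."""
    centroids, run = [], []
    for pt, is_fix in zip(gaze_deg, labels):
        if is_fix:
            run.append(pt)
        elif run:
            centroids.append(np.mean(run, axis=0))
            run = []
    if run:
        centroids.append(np.mean(run, axis=0))
    return np.array(centroids)
```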

13 pages, 1125 KiB  
Article
On the Improvement of Eye Tracking-Based Cognitive Workload Estimation Using Aggregation Functions
by Monika Kaczorowska, Paweł Karczmarek, Małgorzata Plechawska-Wójcik and Mikhail Tokovarov
Sensors 2021, 21(13), 4542; https://doi.org/10.3390/s21134542 - 2 Jul 2021
Cited by 4 | Viewed by 3143
Abstract
Cognitive workload, being a quantitative measure of mental effort, draws significant interest from researchers, as it allows the state of mental fatigue to be monitored. Estimation of cognitive workload becomes especially important for job positions requiring outstanding engagement and responsibility, e.g., air-traffic dispatchers, pilots, and car or train drivers. Cognitive workload estimation also finds applications in the preparation of educational materials: monitoring the degree of difficulty of specific tasks makes it possible to adjust the level of the materials to the typical abilities of students. In this study, we present the results of research conducted with the goal of examining the influence of various fuzzy and non-fuzzy aggregation functions on the quality of cognitive workload estimation. Various classic machine learning models were successfully applied to the problem. The results of extensive in-depth experiments with over 2000 aggregation operators show the applicability of the approach based on aggregation functions. Moreover, the aggregation-based approach allows for further improvement of classification results. A wide range of aggregation functions is considered, and the results suggest that the combination of classical machine learning models and aggregation methods makes it possible to achieve high-quality cognitive workload recognition while preserving low computational cost.
(This article belongs to the Special Issue Eye Tracking Techniques, Applications, and Challenges)
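
As one example of the aggregation idea, the sketch below fuses the class probabilities of several classifiers with an ordered weighted averaging (OWA) operator; the models, data, and weights are illustrative assumptions, not those examined in the study.

```python
# Hedged sketch: OWA aggregation of classifier probabilities, one family of
# aggregation functions. Models, weights, and data are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

def owa(scores, weights):
    """OWA: sort each row in descending order, then take the weighted sum."""
    ordered = np.sort(scores, axis=1)[:, ::-1]
    return ordered @ weights

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
models = [RandomForestClassifier(random_state=0),
          LogisticRegression(max_iter=1000),
          KNeighborsClassifier()]
probs = np.column_stack([m.fit(X, y).predict_proba(X)[:, 1] for m in models])

weights = np.array([0.5, 0.3, 0.2])   # optimistic OWA weights (assumption)
fused = owa(probs, weights)           # aggregated workload score per sample
pred = (fused > 0.5).astype(int)
```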

13 pages, 28408 KiB  
Communication
Ultrasound for Gaze Estimation—A Modeling and Empirical Study
by Andre Golard and Sachin S. Talathi
Sensors 2021, 21(13), 4502; https://doi.org/10.3390/s21134502 - 30 Jun 2021
Cited by 5 | Viewed by 4177
Abstract
Most eye tracking methods are light-based. As such, they can suffer from ambient light changes when used outdoors, especially in use cases where eye trackers are embedded in Augmented Reality glasses. It has recently been suggested that ultrasound could provide a low-power, fast, light-insensitive alternative to camera-based sensors for eye tracking. Here, we report on our work on modeling ultrasound sensor integration into a glasses form factor AR device to evaluate the feasibility of estimating eye gaze in various configurations. Next, we designed a benchtop experimental setup to collect empirical data on time-of-flight and amplitude signals for reflected ultrasound waves for a range of gaze angles of a model eye. We used these data as input for a low-complexity gradient-boosted tree machine learning regression model and demonstrate that we can effectively estimate gaze (gaze RMSE error of 0.965 ± 0.178 degrees with an adjusted R² score of 90.2 ± 4.6).
(This article belongs to the Special Issue Eye Tracking Techniques, Applications, and Challenges)
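
A hedged sketch of that modelling step: gradient-boosted trees regressing gaze angle from per-transducer time-of-flight and amplitude features. The synthetic data below stand in for the benchtop signals; the sensor count and noise levels are assumptions.

```python
# Hedged sketch: gaze-angle regression from ultrasound time-of-flight and
# amplitude features with gradient-boosted trees. Data are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n, n_sensors = 500, 4
tof = rng.normal(50, 5, (n, n_sensors))      # time of flight per transducer
amp = rng.normal(1.0, 0.1, (n, n_sensors))   # echo amplitude per transducer
X = np.hstack([tof, amp])
gaze = tof @ rng.normal(0, 0.5, n_sensors) + rng.normal(0, 0.2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, gaze, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
print(f"gaze RMSE: {rmse:.3f}")
```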

14 pages, 4411 KiB  
Article
Convolutional Neural Networks Cascade for Automatic Pupil and Iris Detection in Ocular Proton Therapy
by Luca Antonioli, Andrea Pella, Rosalinda Ricotti, Matteo Rossi, Maria Rosaria Fiore, Gabriele Belotti, Giuseppe Magro, Chiara Paganelli, Ester Orlandi, Mario Ciocca and Guido Baroni
Sensors 2021, 21(13), 4400; https://doi.org/10.3390/s21134400 - 27 Jun 2021
Cited by 8 | Viewed by 3105
Abstract
Eye tracking techniques based on deep learning are rapidly spreading in a wide variety of application fields. With this study, we want to exploit the potential of eye tracking techniques in ocular proton therapy (OPT) applications. We implemented a fully automatic approach based on two-stage convolutional neural networks (CNNs): the first stage roughly identifies the eye position, and the second one performs fine iris and pupil detection. We selected 707 video frames recorded during clinical OPT treatments performed at our institute; 650 frames were used for training and 57 for a blind test. The estimations of the iris and pupil were evaluated against manually labelled contours delineated by a clinical operator. For the iris and pupil predictions, the Dice coefficient (median = 0.94 and 0.97), Szymkiewicz–Simpson coefficient (median = 0.97 and 0.98), Intersection over Union coefficient (median = 0.88 and 0.94), and Hausdorff distance (median = 11.6 and 5.0 pixels) were quantified. Iris and pupil regions were found to be comparable to the manually labelled ground truths. Our proposed framework could provide an automatic approach to quantitatively evaluating pupil and iris misalignments, and it could be used as an additional support tool for clinical activity, without in any way impacting the consolidated clinical routine.
(This article belongs to the Special Issue Eye Tracking Techniques, Applications, and Challenges)
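
The reported mask metrics can be reproduced along these lines; note that, for brevity, the Hausdorff distance here is computed on full mask point sets rather than on contours, which is a simplification.

```python
# Hedged sketch of segmentation metrics between binary masks: Dice, IoU, and
# (simplified) Hausdorff distance. Masks below are synthetic rectangles.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def iou(a, b):
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def hausdorff(a, b):
    # Full mask point sets; contour points are often used instead.
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

pred = np.zeros((64, 64), bool); pred[20:40, 20:40] = True
gt = np.zeros((64, 64), bool); gt[22:42, 21:41] = True
print(dice(pred, gt), iou(pred, gt), hausdorff(pred, gt))
```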

18 pages, 4997 KiB  
Article
Glimpse: A Gaze-Based Measure of Temporal Salience
by V. Javier Traver, Judith Zorío and Luis A. Leiva
Sensors 2021, 21(9), 3099; https://doi.org/10.3390/s21093099 - 29 Apr 2021
Cited by 3 | Viewed by 3405
Abstract
Temporal salience considers how visual attention varies over time. Although visual salience has been widely studied from a spatial perspective, its temporal dimension has been mostly ignored, despite arguably being of utmost importance for understanding the temporal evolution of attention on dynamic contents. To address this gap, we propose Glimpse, a novel measure to compute temporal salience based on the observer-spatio-temporal consistency of raw gaze data. The measure is conceptually simple, training-free, and provides a semantically meaningful quantification of visual attention over time. As an extension, we explored scoring algorithms to estimate temporal salience from spatial salience maps predicted with existing computational models. However, these approaches generally fall short when compared with our proposed gaze-based measure. Glimpse could serve as the basis for several downstream tasks such as segmentation or summarization of videos. Glimpse's software and data are publicly available.
(This article belongs to the Special Issue Eye Tracking Techniques, Applications, and Challenges)
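
As a crude stand-in for the consistency idea (not the Glimpse measure itself), the sketch below scores each frame by the inverse spatial dispersion of several observers' gaze points: frames where observers agree score high.

```python
# Hedged sketch: a dispersion-based simplification of scoring temporal
# salience from multi-observer gaze consistency. Data are synthetic.
import numpy as np

def temporal_salience(gaze, eps=1e-6):
    """gaze: (n_frames, n_observers, 2). Returns one score per frame."""
    dispersion = gaze.std(axis=1).mean(axis=1)   # spread across observers
    return 1.0 / (dispersion + eps)              # tight cluster -> salient

rng = np.random.default_rng(3)
g = rng.uniform(0, 1, (100, 12, 2))                # 12 observers, 100 frames
g[40:50] = 0.5 + rng.normal(0, 0.02, (10, 12, 2))  # consistent interval
scores = temporal_salience(g)
print(scores.argmax())  # should fall within frames 40-49
```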

16 pages, 2866 KiB  
Article
Industrial Energy Assessment Training Effectiveness Evaluation: An Eye-Tracking Study
by Laleh Ghanbari, Chao Wang and Hyun Woo Jeon
Sensors 2021, 21(5), 1584; https://doi.org/10.3390/s21051584 - 24 Feb 2021
Cited by 4 | Viewed by 3118
Abstract
It is essential to understand the effectiveness of any training program so it can be improved accordingly. Various studies have applied standard metrics for the evaluation of visual behavior to recognize the areas of interest that attract individuals’ attention, as there is a high correlation between attentional behavior and where one focuses. However, in reviewing the literature, we found that studies applying eye-tracking technologies for training purposes are still limited, especially in the industrial energy assessment training field. In this paper, the effectiveness of industrial energy assessment training was quantitatively evaluated by measuring the attentional allocation of trainees using eye-tracking technology. Moreover, this study identifies the areas that require more focus based on evaluating the performance of subjects after receiving the training. Additionally, this research was conducted in a controlled environment to remove distractions that may be caused by environmental factors and to concentrate only on variables that influence the learning behavior of subjects. The experiment results showed that after receiving the training, the subjects’ performance in energy assessment was significantly improved in two areas: production, and recycling and waste management; moreover, the designed training program enhanced the knowledge of participants in identifying energy-saving opportunities to the knowledge level of experienced participants.
(This article belongs to the Special Issue Eye Tracking Techniques, Applications, and Challenges)
