Evaluating Sensitivity Parameters in Smartphone-Based Gaze Estimation: A Comparative Study of Appearance-Based and Infrared Eye Trackers

📅 2025-06-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates the feasibility and robustness of smartphone-based appearance-based eye-tracking in real-world mobile scenarios. We propose a lightweight MobileNet-V3-LSTM model that takes grayscale facial images as input and evaluates gaze estimation accuracy using Euclidean distance error. To systematically assess performance degradation, we introduce the first mobile-oriented evaluation framework quantifying the impact of multiple real-world factors—including age, gender, visual correction (e.g., glasses), ambient illumination, device type, and head pose—on contactless eye-tracking. Experimental results show a mean gaze error of 17.76 mm, comparable to the Tobii Pro Nano (16.53 mm). Critical performance degradation is observed under low-light conditions, when users wear eyeglasses, and among older adults. Our work establishes an empirical benchmark and a reproducible evaluation paradigm for deploying eye-tracking technology on mobile platforms.

Technology Category

Application Category

📝 Abstract
This study evaluates a smartphone-based, deep-learning eye-tracking algorithm by comparing its performance against a commercial infrared-based eye tracker, the Tobii Pro Nano. The aim is to investigate the feasibility of appearance-based gaze estimation under realistic mobile usage conditions. Key sensitivity factors, including age, gender, vision correction, lighting conditions, device type, and head position, were systematically analysed. The appearance-based algorithm integrates a lightweight convolutional neural network (MobileNet-V3) with a recurrent structure (Long Short-Term Memory) to predict gaze coordinates from grayscale facial images. Gaze data were collected from 51 participants using dynamic visual stimuli, and accuracy was measured using Euclidean distance. The deep learning model produced a mean error of 17.76 mm, compared to 16.53 mm for the Tobii Pro Nano. While overall accuracy differences were small, the deep learning-based method was more sensitive to factors such as lighting, vision correction, and age, with higher failure rates observed under low-light conditions among participants using glasses and in older age groups. Device-specific and positional factors also influenced tracking performance. These results highlight the potential of appearance-based approaches for mobile eye tracking and offer a reference framework for evaluating gaze estimation systems across varied usage conditions.
Problem

Research questions and friction points this paper is trying to address.

Compare smartphone deep-learning gaze estimation with infrared eye tracker
Analyze sensitivity factors like lighting, age, and device type
Evaluate feasibility of appearance-based gaze tracking in mobile conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight CNN with LSTM for gaze estimation
Smartphone-based appearance gaze tracking algorithm
Comparative analysis with infrared eye tracker
🔎 Similar Papers
No similar papers found.
N
N. Gunawardena
School of Computer, Data and Mathematical Sciences, Western Sydney University, Penrith, New South Wales, Australia
G
G. Lui
The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, NSW, Australia
J
J. A. Ginige
School of Computer, Data and Mathematical Sciences, Western Sydney University, Penrith, New South Wales, Australia
Bahman Javadi
Bahman Javadi
Full Professor, Western Sydney University
Distributed ComputingEdge ComputingReliabilityInternet of ThingsSmart Computing