Focal Depth Estimation: A Calibration-Free, Subject- and Daytime Invariant Approach

📅 2024-08-07
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Existing gaze-based vergence depth estimation methods heavily rely on user-specific calibration, severely limiting their scalability and practicality in extended reality applications. This paper proposes a calibration-free, cross-subject, and illumination-robust focal depth estimation method that models the temporal dynamics of short-duration eye movement sequences. By integrating domain-informed oculomotor feature engineering with an LSTM network, our approach eliminates the need for subject-specific parameters or lighting priors. It overcomes the conventional dependency of eye-tracking systems on frequent recalibration. Evaluated under real-world conditions, the method achieves a mean absolute error of less than 10 cm. This advancement significantly enhances the real-time performance, generalizability across users, and deployment efficiency of autofocus eyewear systems.
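The pipeline described above, engineered oculomotor features fed through an LSTM that regresses focal depth from a short eye-movement sequence, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature count, hidden size, single-layer architecture, and random weights are all assumptions; a real system would train the weights on labeled gaze data.

```python
import numpy as np

# Hypothetical per-frame oculomotor features (illustrative assumptions):
# e.g. vergence angle, left/right pupil diameter, gaze-direction deltas.
N_FEATURES = 6
HIDDEN = 16  # assumed LSTM hidden size

rng = np.random.default_rng(0)

def init_lstm(n_in, n_hid):
    """Random (untrained) weights for one LSTM layer; gate order i, f, g, o."""
    return {
        "W": rng.standard_normal((4 * n_hid, n_in)) * 0.1,   # input weights
        "U": rng.standard_normal((4 * n_hid, n_hid)) * 0.1,  # recurrent weights
        "b": np.zeros(4 * n_hid),
        "w_out": rng.standard_normal(n_hid) * 0.1,           # linear depth head
        "b_out": 0.0,
    }

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_depth(params, seq):
    """Run the LSTM over a (T, N_FEATURES) sequence and regress one
    scalar focal depth from the final hidden state."""
    h = np.zeros(HIDDEN)
    c = np.zeros(HIDDEN)
    for x in seq:
        z = params["W"] @ x + params["U"] @ h + params["b"]
        i, f, g, o = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # cell-state update
        h = sigmoid(o) * np.tanh(c)                   # hidden-state update
    return params["w_out"] @ h + params["b_out"]

params = init_lstm(N_FEATURES, HIDDEN)
seq = rng.standard_normal((30, N_FEATURES))  # one short gaze sequence
depth = predict_depth(params, seq)           # scalar depth estimate
```

With trained weights, `predict_depth` would map each short sequence directly to a depth estimate with no per-user calibration step.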

📝 Abstract
In an era where personalized technology is increasingly intertwined with daily life, traditional eye-tracking systems and autofocal glasses face a significant challenge: the need for frequent, user-specific calibration, which impedes their practicality. This study introduces a groundbreaking calibration-free method for estimating focal depth, leveraging machine learning techniques to analyze eye movement features within short sequences. Our approach, distinguished by its innovative use of LSTM networks and domain-specific feature engineering, achieves a mean absolute error (MAE) of less than 10 cm, setting a new accuracy standard for focal depth estimation. This advancement promises to enhance the usability of autofocal glasses and pave the way for their seamless integration into extended reality environments, marking a significant leap forward in personalized visual technology.
Problem

Research questions and friction points this paper is trying to address.

Estimates fixation depth without calibration
Addresses subject variability in eye-tracking
Handles limited and noisy gaze data
Innovation

Methods, ideas, or system contributions that make the work stand out.

LSTM networks for spatiotemporal sequence modeling
Subject-invariant feature engineering and normalization
Calibration-free approach with cross-dataset generalization
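One plausible way to realize the "subject-invariant normalization" bullet above is per-sequence z-scoring: removing each sequence's own mean and scale discards subject-specific offsets (such as baseline pupil size or inter-ocular geometry) so the model sees only relative temporal dynamics. The paper's exact normalization scheme is not given here; this sketch only demonstrates the general idea.

```python
import numpy as np

def normalize_sequence(seq):
    """Z-score each feature channel within one (T, F) sequence.
    Subtracting the per-sequence mean and dividing by the per-sequence
    std removes constant subject-specific offsets and scale factors."""
    mu = seq.mean(axis=0, keepdims=True)
    sigma = seq.std(axis=0, keepdims=True) + 1e-8  # avoid divide-by-zero
    return (seq - mu) / sigma

rng = np.random.default_rng(1)
# Two hypothetical "subjects" sharing the same temporal dynamics but
# differing in baseline offset and scale:
dynamics = rng.standard_normal((30, 4))
subj_a = dynamics + 2.0          # subject-specific constant offset
subj_b = dynamics * 1.5 - 5.0    # different offset and scale
a = normalize_sequence(subj_a)
b = normalize_sequence(subj_b)   # both reduce to the same dynamics
```

After normalization, `a` and `b` are (numerically) identical, which is the property a calibration-free, cross-subject model needs from its input features.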