🤖 AI Summary
Mobile eye tracking in real-world and extended reality (XR) environments is highly susceptible to occlusions (e.g., blinks), pupil detection errors, and illumination changes, resulting in substantial gaze data loss that hinders downstream gaze analysis. To address this, we propose HAGI++, the first gaze imputation method that exploits the inherent correlation between head and eye movements using integrated head orientation sensors. HAGI++ employs a transformer-based multi-modal diffusion model to learn cross-modal dependencies between eye and head representations, achieving high-fidelity gaze imputation under partial missingness as well as pure gaze generation when eye-tracking data are entirely absent. The framework readily extends to additional body movements, such as wrist motion captured by commercial wearables. Evaluated on the large-scale Nymeria, Ego-Exo4D, and HOT3D datasets, it consistently outperforms conventional interpolation methods and deep learning-based time-series imputation baselines, and with wrist motion it surpasses prior approaches that rely on full-body motion capture in the extreme 100% missing (pure generation) setting. Statistical analyses further confirm that the imputed gaze velocity distributions closely match real human gaze behaviour, paving the way for more complete and accurate gaze recordings in challenging real-world settings.
📝 Abstract
Mobile eye tracking plays a vital role in capturing human visual attention across both real-world and extended reality (XR) environments, making it an essential tool for applications ranging from behavioural research to human-computer interaction. However, missing values due to blinks, pupil detection errors, or illumination changes pose significant challenges for further gaze data analysis. To address this challenge, we introduce HAGI++, a multi-modal diffusion-based approach for gaze data imputation that, for the first time, uses head orientation from integrated sensors to exploit the inherent correlation between head and eye movements. HAGI++ employs a transformer-based diffusion model to learn cross-modal dependencies between eye and head representations and can be readily extended to incorporate additional body movements. Extensive evaluations on the large-scale Nymeria, Ego-Exo4D, and HOT3D datasets demonstrate that HAGI++ consistently outperforms conventional interpolation methods and deep learning-based time-series imputation baselines in gaze imputation. Furthermore, statistical analyses confirm that HAGI++ produces gaze velocity distributions that closely match actual human gaze behaviour, ensuring more realistic gaze imputations. Moreover, by incorporating wrist motion captured from commercial wearable devices, HAGI++ surpasses prior methods that rely on full-body motion capture in the extreme case of 100% missing gaze data (pure gaze generation). Our method paves the way for more complete and accurate eye gaze recordings in real-world settings and has significant potential for enhancing gaze-based analysis and interaction across various application domains.
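
To make the imputation setup concrete, below is a minimal, hypothetical sketch of the kind of transformer-based diffusion denoiser the abstract describes: gaze samples at missing positions are diffused with noise and denoised conditioned on head pose and on the observed gaze samples. All module names, dimensions, and the training loop are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the HAGI++ code): a transformer denoiser for
# diffusion-based gaze imputation conditioned on head orientation.
import torch
import torch.nn as nn


class GazeDenoiser(nn.Module):
    """Predicts the noise added to missing gaze samples, conditioned on
    head-pose features and the observed (unmasked) gaze samples."""

    def __init__(self, gaze_dim=2, head_dim=3, d_model=128, n_layers=4, n_heads=4):
        super().__init__()
        # Per-frame embedding of [gaze, head pose, observed-mask flag].
        self.in_proj = nn.Linear(gaze_dim + head_dim + 1, d_model)
        # Simple embedding of the diffusion step t.
        self.time_mlp = nn.Sequential(nn.Linear(1, d_model), nn.SiLU(),
                                      nn.Linear(d_model, d_model))
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                               dim_feedforward=4 * d_model,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)
        self.out_proj = nn.Linear(d_model, gaze_dim)

    def forward(self, noisy_gaze, head_pose, observed_mask, t):
        # noisy_gaze: (B, T, 2), head_pose: (B, T, 3),
        # observed_mask: (B, T, 1) with 1 = observed, t: (B,)
        x = self.in_proj(torch.cat([noisy_gaze, head_pose, observed_mask], dim=-1))
        x = x + self.time_mlp(t.float().view(-1, 1, 1) / 1000.0)  # broadcast over time
        return self.out_proj(self.encoder(x))


def training_step(model, gaze, head_pose, observed_mask, n_steps=1000):
    """One DDPM-style step: diffuse only the missing gaze samples and train
    the denoiser to recover the injected noise; observed samples stay clean
    and act as conditioning."""
    b = gaze.shape[0]
    t = torch.randint(0, n_steps, (b,))
    betas = torch.linspace(1e-4, 0.02, n_steps)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t].view(b, 1, 1)
    noise = torch.randn_like(gaze)
    noisy = alpha_bar.sqrt() * gaze + (1 - alpha_bar).sqrt() * noise
    noisy = observed_mask * gaze + (1 - observed_mask) * noisy
    pred = model(noisy, head_pose, observed_mask, t)
    # Loss only on the missing positions.
    return ((pred - noise) ** 2 * (1 - observed_mask)).mean()


if __name__ == "__main__":
    model = GazeDenoiser()
    gaze = torch.randn(8, 60, 2)                  # 2D gaze over 60-frame windows
    head = torch.randn(8, 60, 3)                  # head orientation (e.g., yaw/pitch/roll)
    mask = (torch.rand(8, 60, 1) > 0.3).float()   # ~30% of samples missing
    print(training_step(model, gaze, head, mask))
```

At inference time, the usual conditional-diffusion recipe would run the reverse process from random noise at the missing positions while keeping the observed samples fixed; additional signals such as wrist motion could be conditioned on analogously by concatenating them to the per-frame input.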