ARGaze: Autoregressive Transformers for Online Egocentric Gaze Estimation

📅 2026-02-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of online gaze estimation from first-person video, where no explicit head or eye signals are available, by proposing a visual conditional autoregressive modeling approach. The task is formulated as sequence prediction: a Transformer decoder under causal constraints fuses current visual features with a fixed-length window of historical gaze estimates, enabling low-latency streaming inference. According to the authors, this is the first work to introduce autoregressive mechanisms into online first-person gaze estimation, capturing the temporal continuity inherent in goal-directed behavior. The model achieves state-of-the-art performance under online settings across multiple benchmarks, and ablation studies confirm that both historical gaze context and autoregressive modeling are crucial for robust prediction.

📝 Abstract
Online egocentric gaze estimation predicts where a camera wearer is looking from first-person video using only past and current frames, a task essential for augmented reality and assistive technologies. Unlike third-person gaze estimation, this setting lacks explicit head or eye signals, requiring models to infer current visual attention from sparse, indirect cues such as hand-object interactions and salient scene content. We observe that gaze exhibits strong temporal continuity during goal-directed activities: knowing where a person looked recently provides a powerful prior for predicting where they look next. Inspired by vision-conditioned autoregressive decoding in vision-language models, we propose ARGaze, which reformulates gaze estimation as sequential prediction: at each timestep, a transformer decoder predicts current gaze by conditioning on (i) current visual features and (ii) a fixed-length Gaze Context Window of recent gaze target estimates. This design enforces causality and enables bounded-resource streaming inference. We achieve state-of-the-art performance across multiple egocentric benchmarks under online evaluation, with extensive ablations validating that autoregressive modeling with bounded gaze history is critical for robust prediction. We will release our source code and pre-trained models.
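The abstract's core loop, predicting gaze from current visual features plus a fixed-length window of the model's own recent estimates, can be sketched as a bounded-memory streaming procedure. The sketch below is illustrative only: `predict_gaze` is a hypothetical stand-in for the paper's causal Transformer decoder, the window size of 8 is arbitrary, and the 2D coordinates are normalized gaze targets.

```python
from collections import deque

# Illustrative sketch of the Gaze Context Window streaming loop.
# `predict_gaze` is a placeholder for ARGaze's transformer decoder;
# here it just blends the running gaze history with the current
# frame's (assumed) salient point to show the autoregressive feedback.

WINDOW = 8  # fixed-length Gaze Context Window (size chosen for illustration)

def predict_gaze(frame_features, gaze_history):
    # Cold start: no history yet, fall back to the current frame cue.
    if not gaze_history:
        return frame_features
    hx = sum(g[0] for g in gaze_history) / len(gaze_history)
    hy = sum(g[1] for g in gaze_history) / len(gaze_history)
    fx, fy = frame_features
    # Temporal-continuity prior: weight recent gaze over the new cue.
    return (0.7 * hx + 0.3 * fx, 0.7 * hy + 0.3 * fy)

def stream(frames):
    """Online inference over a frame stream; memory stays O(WINDOW)."""
    history = deque(maxlen=WINDOW)  # oldest estimates evicted automatically
    outputs = []
    for feats in frames:
        gaze = predict_gaze(feats, history)
        history.append(gaze)        # feed the estimate back autoregressively
        outputs.append(gaze)
    return outputs
```

The `deque(maxlen=WINDOW)` gives the bounded-resource property the abstract claims: per-frame cost does not grow with video length, and causality holds because each prediction sees only past estimates and the current frame.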
Problem

Research questions and friction points this paper is trying to address.

egocentric gaze estimation
online prediction
first-person vision
visual attention
temporal continuity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Autoregressive Transformers
Online Gaze Estimation
Egocentric Vision
Gaze Context Window
Causal Streaming Inference