UniGaze: Towards Universal Gaze Estimation via Large-scale Pre-Training

📅 2025-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Gaze estimation models generalize poorly across data domains and rely heavily on large-scale labeled data. To address this, the authors apply large-scale self-supervised pre-training to the geometry-sensitive task of gaze regression: they define a normalized facial input space tailored to gaze modeling and pre-train on heterogeneous in-the-wild facial datasets. They adapt the Masked Autoencoder (MAE) framework to this input space with Vision Transformer (ViT) backbones and evaluate under rigorous cross-dataset protocols (e.g., leave-one-dataset-out). Experiments show substantial gains in cross-domain generalization: the method achieves state-of-the-art transfer performance without fine-tuning, or with only lightweight adaptation, significantly reducing dependence on annotated data while preserving geometric fidelity in gaze prediction.
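The "normalized facial input space" mentioned above typically refers to standard gaze data normalization: a virtual camera is rotated to look straight at the face center, which removes head roll and standardizes the geometry across datasets. The paper's exact procedure is not given here; the sketch below is a minimal, simplified version of that rotation step, with a hypothetical world-up choice for resolving the in-plane axes.

```python
import numpy as np

def normalizing_rotation(face_center):
    """Rotation that points the virtual camera's z-axis at the face center
    and removes roll (simplified normalization step; the paper's exact
    procedure may differ)."""
    z = face_center / np.linalg.norm(face_center)  # new forward axis
    # hypothetical choice: x-axis orthogonal to z and to a fixed world-up y
    x = np.cross(np.array([0.0, 1.0, 0.0]), z)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    return np.stack([x, y, z])  # rows are the new camera axes

# after applying R, the face center lies on the +z axis (x, y components ~ 0)
face_center = np.array([0.1, -0.05, 0.6])
R = normalizing_rotation(face_center)
rotated = R @ face_center
```

The same rotation is applied as a homography to warp the raw camera image, so every dataset presents faces to the model in the same canonical pose space.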

📝 Abstract
Despite decades of research on data collection and model architectures, current gaze estimation models face significant challenges in generalizing across diverse data domains. While recent advances in self-supervised pre-training have shown remarkable potential for improving model generalization in various vision tasks, their effectiveness in gaze estimation remains unexplored due to the geometric nature of the gaze regression task. We propose UniGaze, which leverages large-scale, in-the-wild facial datasets through self-supervised pre-training for gaze estimation. We carefully curate multiple facial datasets that capture diverse variations in identity, lighting, background, and head poses. By directly applying Masked Autoencoder (MAE) pre-training on normalized face images with a Vision Transformer (ViT) backbone, our UniGaze learns appropriate feature representations within the specific input space required by downstream gaze estimation models. Through comprehensive experiments using challenging cross-dataset evaluation and novel protocols, including leave-one-dataset-out and joint-dataset settings, we demonstrate that UniGaze significantly improves generalization across multiple data domains while minimizing reliance on costly labeled data. The source code and pre-trained models will be released upon acceptance.
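The core pre-training step described in the abstract is MAE-style random masking over patch tokens of the normalized face image: the ViT encoder sees only a small visible subset, and a decoder reconstructs the rest. A minimal numpy sketch of the input preparation (patch size, masking ratio, and image size follow common MAE defaults, not values confirmed by the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def patchify(img, p=16):
    """Split an HxWxC image into non-overlapping p x p patches, flattened."""
    H, W, C = img.shape
    gh, gw = H // p, W // p
    x = img[:gh * p, :gw * p].reshape(gh, p, gw, p, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(gh * gw, p * p * C)

def random_mask(patches, mask_ratio=0.75):
    """MAE-style random masking: keep a random subset of patches;
    the encoder processes only these visible tokens."""
    n = patches.shape[0]
    n_keep = int(n * (1 - mask_ratio))
    keep_idx = np.sort(rng.permutation(n)[:n_keep])
    return patches[keep_idx], keep_idx

face = rng.random((224, 224, 3))          # stand-in for a normalized face crop
patches = patchify(face)                  # 14*14 = 196 patches of 16*16*3 values
visible, keep_idx = random_mask(patches)  # 49 visible patches at 75% masking
```

The reconstruction loss on the masked patches requires no gaze labels, which is what lets UniGaze exploit large unlabeled facial datasets before fine-tuning on gaze regression.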
Problem

Research questions and friction points this paper is trying to address.

Improves gaze estimation across diverse domains
Leverages self-supervised pre-training for generalization
Reduces reliance on expensive labeled data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised pre-training
Masked Autoencoder (MAE)
Vision Transformer (ViT)
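The leave-one-dataset-out protocol mentioned in the summary and abstract can be sketched as a simple split generator: each dataset is held out for testing once while the model trains on the others. The dataset names below are illustrative examples of common gaze benchmarks, not a list confirmed by the paper.

```python
def leave_one_dataset_out(datasets):
    """Yield (train, held_out) splits where each dataset is the test
    domain exactly once, so performance measures cross-domain transfer."""
    for held_out in datasets:
        train = [d for d in datasets if d != held_out]
        yield train, held_out

# illustrative benchmark names (not necessarily those used in the paper)
datasets = ["MPIIFaceGaze", "EyeDiap", "Gaze360", "ETH-XGaze"]
splits = list(leave_one_dataset_out(datasets))
# 4 splits; each trains on 3 datasets and tests on the held-out one
```

Because the test domain never appears in training, this protocol directly measures the generalization that the self-supervised pre-training is meant to improve.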