GA3CE: Unconstrained 3D Gaze Estimation with Gaze-Aware 3D Context Encoding

📅 2025-05-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses single-frame 3D gaze direction estimation under unconstrained conditions, such as distant subjects or back-facing camera setups, where close-up eye imagery is unavailable. The proposed GA3CE (Gaze-Aware 3D Context Encoding) represents the subject and scene as 3D poses and object positions, aligns this 3D context in an egocentric space to reduce spatial complexity, and introduces a direction-distance-decomposed (D$^3$) positional encoding that captures the spatial relationship between the 3D context and the gaze direction separately in direction and distance space. Evaluated on multiple benchmarks in single-frame settings, the approach reduces mean angle error by 13%-37% over leading baselines, demonstrating improved robustness and accuracy for real-world 3D gaze estimation under complex, unconstrained conditions.

📝 Abstract
We propose a novel 3D gaze estimation approach that learns spatial relationships between the subject and objects in the scene, and outputs 3D gaze direction. Our method targets unconstrained settings, including cases where close-up views of the subject's eyes are unavailable, such as when the subject is distant or facing away. Previous approaches typically rely on 2D appearance alone or incorporate limited spatial cues using depth maps in a non-learnable post-processing step. Estimating 3D gaze direction from 2D observations in these scenarios is challenging; variations in subject pose, scene layout, and gaze direction, combined with differing camera poses, yield diverse 2D appearances and 3D gaze directions even when targeting the same 3D scene. To address this issue, we propose GA3CE: Gaze-Aware 3D Context Encoding. Our method represents the subject and scene using 3D poses and object positions, treating them as 3D context to learn spatial relationships in 3D space. Inspired by human vision, we align this context in an egocentric space, significantly reducing spatial complexity. Furthermore, we propose D$^3$ (direction-distance-decomposed) positional encoding to better capture the spatial relationship between the 3D context and gaze direction in direction and distance space. Experiments demonstrate substantial improvements, reducing mean angle error by 13%-37% compared to leading baselines on benchmark datasets in single-frame settings.
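
For context, the mean angle error used above is the average angle between the predicted and ground-truth 3D gaze direction vectors. The sketch below shows the standard form of this metric; it is not taken from the paper's code, and the function name is illustrative.

```python
import numpy as np

def mean_angle_error_deg(pred, gt):
    """Mean angle (in degrees) between predicted and ground-truth gaze vectors.

    pred, gt: (N, 3) arrays of 3D gaze directions (need not be unit length).
    """
    pred = pred / np.linalg.norm(pred, axis=-1, keepdims=True)
    gt = gt / np.linalg.norm(gt, axis=-1, keepdims=True)
    cos_sim = np.clip(np.sum(pred * gt, axis=-1), -1.0, 1.0)  # per-sample cosine
    return np.degrees(np.arccos(cos_sim)).mean()
```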
Problem

Research questions and friction points this paper is trying to address.

Estimating 3D gaze direction from 2D observations in unconstrained settings
Learning spatial relationships between subject and scene objects in 3D space
Reducing spatial complexity by aligning context in egocentric space (see the sketch after this list)
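
Egocentric alignment can be pictured as re-expressing all 3D context (pose joints, object positions) in a coordinate frame centered on the subject. The paper's exact transform is not reproduced in this summary, so the following is a minimal sketch under that assumption; `egocentric_align` and its arguments are hypothetical names.

```python
import numpy as np

def egocentric_align(points, head_pos, facing_dir, up=np.array([0.0, 1.0, 0.0])):
    """Re-express world-frame 3D context in a subject-centered frame.

    points:     (N, 3) world-frame positions (pose joints, object centers)
    head_pos:   (3,)   subject head position in world coordinates
    facing_dir: (3,)   approximate forward direction of the subject
    """
    # Orthonormal basis with the z-axis along the subject's forward direction.
    # (Assumes facing_dir is not parallel to the up vector.)
    z = facing_dir / np.linalg.norm(facing_dir)
    x = np.cross(up, z)
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    R = np.stack([x, y, z], axis=0)  # world -> egocentric rotation

    # Center on the head, then rotate into the egocentric frame.
    return (points - head_pos) @ R.T
```

After this change of frame, scenes that differ only by camera pose map to similar egocentric layouts, which is the spatial-complexity reduction the abstract describes.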
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses 3D poses and object positions as context
Aligns context in egocentric space for simplicity
Employs D$^3$ (direction-distance-decomposed) encoding to capture spatial relationships in direction and distance space (a sketch follows below)
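
The D$^3$ idea is to split each relative 3D position into a unit direction and a scalar distance and encode the two separately, rather than embedding raw xyz coordinates. The paper's exact formulation is not given in this summary, so the sketch below assumes a sinusoidal (NeRF-style) embedding for the distance term; `d3_encode` and `num_freqs` are illustrative names.

```python
import numpy as np

def d3_encode(rel_pos, num_freqs=6):
    """Direction-distance-decomposed encoding of relative 3D positions.

    rel_pos: (N, 3) positions relative to the subject (egocentric frame).
    Returns (N, 3 + 2 * num_freqs) features: the unit direction concatenated
    with a sinusoidal embedding of the scalar distance.
    """
    dist = np.linalg.norm(rel_pos, axis=-1, keepdims=True)   # (N, 1)
    direction = rel_pos / np.clip(dist, 1e-8, None)          # (N, 3) unit vectors

    freqs = (2.0 ** np.arange(num_freqs)) * np.pi            # (F,)
    angles = dist * freqs                                    # (N, F) via broadcasting
    dist_feat = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)  # (N, 2F)

    return np.concatenate([direction, dist_feat], axis=-1)
```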