🤖 AI Summary
This work addresses the limited robustness of existing 3D gaze estimation methods under complex contextual variations—such as lighting conditions, head poses, and background clutter—by proposing a semantic-modulated multi-scale Transformer architecture. The approach integrates CLIP-derived semantic priors with a learnable context prototype bank to condition CLIP's global features, while unifying CLIP image patches and high-resolution CNN features within a shared attention space. Furthermore, it introduces the Mixture-of-Experts (MoE) mechanism into gaze estimation for the first time, enhancing conditional representational capacity through a routing-and-sharing strategy. The method achieves state-of-the-art angular errors of 2.49°, 3.22°, 10.16°, and 1.44° on the MPIIFaceGaze, EYEDIAP, Gaze360, and ETH-XGaze benchmarks, respectively, improving by up to 64% over the previous best-performing approaches.
📝 Abstract
We present a semantics-modulated, multi-scale Transformer for 3D gaze estimation. Our model conditions CLIP global features with learnable prototype banks (illumination, head pose, background, direction), fuses these prototype-enriched global vectors with CLIP patch tokens and high-resolution CNN tokens in a unified attention space, and replaces several FFN blocks with routed/shared Mixture-of-Experts layers to increase conditional capacity. Evaluated on MPIIFaceGaze, EYEDIAP, Gaze360, and ETH-XGaze, our model achieves new state-of-the-art angular errors of 2.49°, 3.22°, 10.16°, and 1.44°, demonstrating up to a 64% relative improvement over previously reported results. Ablations attribute the gains to prototype conditioning, cross-scale fusion, MoE layers, and hyperparameter choices. Our code is publicly available at https://github.com/AIPMLab/Gazeformer.
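The routing-and-sharing MoE idea mentioned above can be sketched in a few lines: each token's router scores pick its top-k routed experts, while one shared expert is applied to every token unconditionally. The sketch below is a minimal NumPy illustration of that general pattern; all class/parameter names and shapes are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class MoEFFN:
    """Illustrative routed + shared Mixture-of-Experts FFN (hypothetical names)."""

    def __init__(self, dim, hidden, n_experts=4, top_k=2, seed=0):
        rng = np.random.default_rng(seed)
        self.top_k = top_k
        # Router maps each token to per-expert logits.
        self.router = rng.normal(0, 0.02, (dim, n_experts))
        # Routed experts: independent two-layer ReLU FFNs.
        self.experts = [
            (rng.normal(0, 0.02, (dim, hidden)), rng.normal(0, 0.02, (hidden, dim)))
            for _ in range(n_experts)
        ]
        # Shared expert applied to every token regardless of routing.
        self.shared = (rng.normal(0, 0.02, (dim, hidden)),
                       rng.normal(0, 0.02, (hidden, dim)))

    @staticmethod
    def _ffn(x, weights):
        w1, w2 = weights
        return np.maximum(x @ w1, 0) @ w2  # ReLU FFN

    def __call__(self, tokens):
        # tokens: (n_tokens, dim)
        scores = softmax(tokens @ self.router)             # (n_tokens, n_experts)
        top = np.argsort(-scores, axis=-1)[:, :self.top_k]  # top-k expert ids
        out = self._ffn(tokens, self.shared)               # shared path, always on
        for t in range(tokens.shape[0]):
            gates = scores[t, top[t]]
            gates = gates / gates.sum()                    # renormalize top-k gates
            for g, e in zip(gates, top[t]):
                out[t] += g * self._ffn(tokens[t:t + 1], self.experts[e])[0]
        return out
```

The shared path gives every token a baseline transformation, while the gated top-k sum adds conditional capacity without activating all experts per token.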