GazeQwen: Lightweight Gaze-Conditioned LLM Modulation for Streaming Video Understanding

📅 2026-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing large vision-language models struggle to integrate eye-gaze fixation signals effectively for video understanding. This work proposes a paradigm that dynamically injects lightweight gaze-modulation signals into specific decoder layers of a large language model (LLM) via forward hooks. The approach employs a compact gaze resampling module that fuses video features extracted by V-JEPA 2.1 with positional encodings of gaze fixations, combined with LoRA for parameter-efficient fine-tuning. Evaluated on all ten tasks of the StreamGaze benchmark, the method achieves 63.9% accuracy, surpassing the Qwen2.5-VL-7B baseline by 16.1 percentage points and outperforming GPT-4o, establishing a new state of the art among both open- and closed-source models. These results suggest that precise integration of gaze information is more effective than merely scaling up model size or refining prompts.
📝 Abstract
Current multimodal large language models (MLLMs) cannot effectively utilize eye-gaze information for video understanding, even when gaze cues are supplied via visual overlays or text descriptions. We introduce GazeQwen, a parameter-efficient approach that equips an open-source MLLM with gaze awareness through hidden-state modulation. At its core is a compact gaze resampler (~1–5M trainable parameters) that encodes V-JEPA 2.1 video features together with fixation-derived positional encodings and produces additive residuals injected into selected LLM decoder layers via forward hooks. An optional second training stage adds low-rank adapters (LoRA) to the LLM for tighter integration. Evaluated on all ten tasks of the StreamGaze benchmark, GazeQwen reaches 63.9% accuracy, a +16.1-point gain over the same Qwen2.5-VL-7B backbone with gaze as visual prompts and +10.5 points over GPT-4o, the highest score among all open-source and proprietary models tested. These results suggest that learning where to inject gaze within an LLM is more effective than scaling model size or engineering better prompts. All code and checkpoints are available at https://github.com/phamtrongthang123/gazeqwen.
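The core mechanism described above — a small resampler producing additive residuals that forward hooks add to the hidden states of selected decoder layers — can be sketched in PyTorch roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the toy decoder layers, the `GazeResampler` shape, and the hooked layer indices are all assumptions for demonstration.

```python
import torch
import torch.nn as nn

class ToyDecoderLayer(nn.Module):
    """Stand-in for an LLM decoder layer (illustrative only)."""
    def __init__(self, d_model):
        super().__init__()
        self.ff = nn.Linear(d_model, d_model)

    def forward(self, x):
        return x + self.ff(x)

class GazeResampler(nn.Module):
    """Compact module fusing video features with gaze positional
    encodings into an additive residual (hypothetical architecture)."""
    def __init__(self, d_video, d_model):
        super().__init__()
        self.proj = nn.Linear(d_video + d_model, d_model)

    def forward(self, video_feats, gaze_pos_enc):
        return self.proj(torch.cat([video_feats, gaze_pos_enc], dim=-1))

d_model, d_video = 16, 8
layers = nn.ModuleList([ToyDecoderLayer(d_model) for _ in range(4)])
resampler = GazeResampler(d_video, d_model)

video = torch.randn(1, 3, d_video)   # (batch, tokens, d_video), e.g. V-JEPA features
gaze = torch.randn(1, 3, d_model)    # fixation-derived positional encodings
residual = resampler(video, gaze)    # (1, 3, d_model) additive residual

def make_hook(res):
    # Forward hook: returning a tensor replaces the layer's output,
    # here the original output plus the gaze residual.
    def hook(module, inputs, output):
        return output + res
    return hook

# Inject into selected layers only (indices 1 and 3 are arbitrary here).
handles = [layers[i].register_forward_hook(make_hook(residual)) for i in (1, 3)]

x = torch.randn(1, 3, d_model)
h_gazed = x
for layer in layers:
    h_gazed = layer(h_gazed)

# Remove hooks and rerun to see the unmodulated baseline.
for hd in handles:
    hd.remove()
h_plain = x
for layer in layers:
    h_plain = layer(h_plain)
```

Because the hooks are attached externally, the backbone's weights and code stay untouched; only the resampler (and optionally LoRA adapters) would be trained.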
Problem

Research questions and friction points that this paper addresses.

gaze
video understanding
multimodal large language models
eye-tracking
streaming video
Innovation

Methods, ideas, or system contributions that make the work stand out.

gaze-conditioned modulation
parameter-efficient adaptation
hidden-state injection
multimodal LLM
eye-gaze integration