Unbiased Visual Reasoning with Controlled Visual Inputs

📅 2025-12-19
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Vision-language models (VLMs) often rely on spurious correlations rather than causal visual evidence in visual question answering (VQA), and these shortcut biases are exacerbated by fine-tuning. To address this, the authors propose VISTA, a framework that decouples a frozen visual encoder from a pure-text reasoning module, establishing a controlled visual input interface. VISTA enforces an explicit information bottleneck between perception and reasoning: the frozen VLM sensor answers only short, objective perception queries, while the text-only reasoner plans queries and aggregates the returned facts. This design enables cross-sensor transfer as well as detection of and recovery from perception failures, and it improves reasoning neutrality and evidence grounding. Experiments show a +16.29% robustness gain on SpuriVerse (Qwen2.5-VL-7B) and competitive performance on MMVP and a balanced SeedBench subset; human evaluation confirms more objective reasoning and markedly reduced reliance on spurious attributes.

πŸ“ Abstract
End-to-end Vision-language Models (VLMs) often answer visual questions by exploiting spurious correlations instead of causal visual evidence, and can become more shortcut-prone when fine-tuned. We introduce VISTA (Visual-Information Separation for Text-based Analysis), a modular framework that decouples perception from reasoning via an explicit information bottleneck. A frozen VLM sensor is restricted to short, objective perception queries, while a text-only LLM reasoner decomposes each question, plans queries, and aggregates visual facts in natural language. This controlled interface defines a reward-aligned environment for training unbiased visual reasoning with reinforcement learning. Instantiated with Qwen2.5-VL and Llama3.2-Vision sensors, and trained with GRPO from only 641 curated multi-step questions, VISTA significantly improves robustness to real-world spurious correlations on SpuriVerse (+16.29% with Qwen-2.5-VL-7B and +6.77% with Llama-3.2-Vision-11B), while remaining competitive on MMVP and a balanced SeedBench subset. VISTA transfers robustly across unseen VLM sensors and is able to recognize and recover from VLM perception failures. Human analysis further shows that VISTA's reasoning traces are more neutral, less reliant on spurious attributes, and more explicitly grounded in visual evidence than end-to-end VLM baselines.
Problem

Research questions and friction points this paper is trying to address.

How to prevent vision-language models from exploiting spurious correlations in VQA
How to decouple visual perception from reasoning via an information bottleneck
How to train unbiased visual reasoning over controlled visual inputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modular framework that decouples perception from reasoning
Frozen VLM sensor restricted to short, objective perception queries
Text-only LLM reasoner that plans queries and aggregates visual facts, trained with reinforcement learning (GRPO)
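The sensor–reasoner separation described above can be sketched in a few lines. This is a minimal illustration only, with hypothetical function names and toy stand-ins for the models; the paper does not publish this API. The key property shown is the information bottleneck: the reasoner never sees the image, only natural-language answers to short perception queries.

```python
# Sketch of VISTA's controlled visual-input interface (hypothetical API).
# A frozen VLM "sensor" answers only short, objective perception queries;
# a text-only "reasoner" plans queries and aggregates the returned facts.

def vlm_sensor(image: dict, query: str) -> str:
    """Stand-in for a frozen VLM sensor (e.g. Qwen2.5-VL).
    Here the 'image' is a toy dict of ground-truth visual facts."""
    return image.get(query, "not visible")

def text_reasoner(question: str, image: dict) -> dict:
    """Stand-in for the text-only LLM reasoner: decompose the question
    into perception queries, collect facts, answer from text alone."""
    queries = ["main object", "object color"]  # planned sub-queries
    evidence = {q: vlm_sensor(image, q) for q in queries}
    # Information bottleneck: only `evidence` (text), never pixels,
    # crosses into the reasoning step.
    if "color" in question:
        answer = evidence["object color"]
    else:
        answer = evidence["main object"]
    return {"answer": answer, "evidence": evidence}

# A spurious prior would say bananas are yellow; the controlled
# interface forces the answer to come from perceived evidence.
toy_image = {"main object": "banana", "object color": "green"}
result = text_reasoner("What color is the fruit?", toy_image)
print(result["answer"])  # → green
```

Because the reasoner consumes only textual facts, any VLM that answers the same perception queries can be swapped in, which is the mechanism behind the cross-sensor transfer the paper reports.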