AI Summary
This work addresses the issue of image-irrelevant hallucinations in multimodal large language models, which often arise from relying solely on single-layer features from the visual encoder. To mitigate this, the authors propose the TGIF module, the first approach to enable text-guided, query-adaptive fusion of multi-layer visual features. Treating each layer of the visual encoder as a depth-specific expert, TGIF employs a lightweight external architecture to dynamically predict prompt-conditioned fusion weights, thereby enhancing visual grounding without fine-tuning the visual encoder itself. Integrated into the LLaVA-1.5-7B framework, the method achieves significant performance gains on hallucination, OCR, and VQA benchmarks, while maintaining or surpassing baseline results on comprehensive multimodal tasks such as ScienceQA, GQA, and MMBench.
Abstract
Multimodal large language models (MLLMs) typically rely on a single late-layer feature from a frozen vision encoder, leaving the encoder's rich hierarchy of visual cues under-utilized. As a result, MLLMs still suffer from visually ungrounded hallucinations, often relying on language priors rather than image evidence. While many prior mitigation strategies operate on the text side, they leave the visual representation unchanged and do not exploit the rich hierarchy of features encoded across vision layers. Existing multi-layer fusion methods partially address this limitation but remain static, applying the same layer mixture regardless of the query. In this work, we introduce TGIF (Text-Guided Inter-layer Fusion), a lightweight module that treats encoder layers as depth-wise "experts" and predicts a prompt-dependent fusion of visual features. TGIF follows the principle of direct external fusion, requires no vision-encoder updates, and adds minimal overhead. Integrated into LLaVA-1.5-7B, TGIF provides consistent improvements across hallucination, OCR, and VQA benchmarks, while preserving or improving performance on ScienceQA, GQA, and MMBench. These results suggest that query-conditioned, hierarchy-aware fusion is an effective way to strengthen visual grounding and reduce hallucination in modern MLLMs.
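The fusion idea described above can be illustrated with a minimal sketch. This is not the authors' implementation: the gating network shape, hidden size, and the use of a pooled prompt embedding are assumptions. The sketch shows the core mechanism only: a small external MLP maps the text prompt to a softmax-normalized weight per encoder layer, and the fused visual feature is the resulting convex combination of per-layer features, so the frozen vision encoder itself is never updated.

```python
import numpy as np

def tgif_fuse(layer_feats, text_emb, W1, b1, W2, b2):
    """Sketch of text-guided inter-layer fusion (hypothetical parameterization).

    layer_feats: (num_layers, batch, tokens, vis_dim) outputs of a frozen
                 vision encoder, one feature map per layer ("expert").
    text_emb:    (batch, text_dim) pooled embedding of the prompt.
    W1, b1, W2, b2: weights of a small gating MLP, text_dim -> hidden -> num_layers.
    """
    # Gating MLP: prompt embedding -> one logit per encoder layer.
    h = np.maximum(text_emb @ W1 + b1, 0.0)          # ReLU, (batch, hidden)
    logits = h @ W2 + b2                             # (batch, num_layers)
    # Softmax over layers -> prompt-conditioned fusion weights.
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w = e / e.sum(axis=-1, keepdims=True)            # (batch, num_layers)
    # Convex combination of per-layer features.
    return np.einsum('bl,lbtd->btd', w, layer_feats) # (batch, tokens, vis_dim)
```

Because the weights are softmax-normalized, the fused output is always a convex combination of the layer features, and different prompts can emphasize shallow or deep layers without touching the encoder.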