How Multimodal LLMs Solve Image Tasks: A Lens on Visual Grounding, Task Reasoning, and Answer Decoding

📅 2025-08-27
🤖 AI Summary
The internal vision-language coordination mechanisms of multimodal large language models (MLLMs) remain poorly understood. Method: We propose a lightweight, model-agnostic probing framework that trains linear classifiers on token embeddings layer-wise, combined with controlled prompt perturbations across three dimensions—lexical choice, semantic negation, and output format—to systematically dissect MLLMs’ hierarchical dynamics in visual grounding, task reasoning, and answer generation. Contribution/Results: Experiments across mainstream MLLMs—including LLaVA and Qwen2-VL—reveal a consistent three-stage functional decomposition across architectures. Critically, we identify, for the first time, that substituting the underlying language model induces systematic shifts in the layer positions of these stages. Our findings uncover interpretable, layer-resolved functional specialization in MLLMs, establishing a new paradigm for model diagnosis and controllable architectural editing.

📝 Abstract
Multimodal Large Language Models (MLLMs) have demonstrated strong performance across a wide range of vision-language tasks, yet their internal processing dynamics remain underexplored. In this work, we introduce a probing framework to systematically analyze how MLLMs process visual and textual inputs across layers. We train linear classifiers to predict fine-grained visual categories (e.g., dog breeds) from token embeddings extracted at each layer, using a standardized anchor question. To uncover the functional roles of different layers, we evaluate these probes under three types of controlled prompt variations: (1) lexical variants that test sensitivity to surface-level changes, (2) semantic negation variants that flip the expected answer by modifying the visual concept in the prompt, and (3) output format variants that preserve reasoning but alter the answer format. Applying our framework to LLaVA-1.5, LLaVA-Next-LLaMA-3, and Qwen2-VL, we identify a consistent stage-wise structure in which early layers perform visual grounding, middle layers support lexical integration and semantic reasoning, and final layers prepare task-specific outputs. We further show that while the overall stage-wise structure remains stable across variations in visual tokenization, instruction tuning data, and pretraining corpus, the specific layer allocation to each stage shifts notably with changes in the base LLM architecture. Our findings provide a unified perspective on the layer-wise organization of MLLMs and offer a lightweight, model-agnostic approach for analyzing multimodal representation dynamics.
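The three perturbation axes can be made concrete with a hypothetical anchor question and one variant per axis (these prompts are illustrative; the paper's exact prompts may differ):

```python
# Hypothetical anchor question and controlled variants, following the
# three perturbation axes described in the abstract.
anchor = "What breed is the dog in the image?"

variants = {
    # (1) lexical: surface-level rewording, same meaning and answer
    "lexical": "Which breed does the dog in the picture belong to?",
    # (2) semantic negation: change the queried visual concept so the
    # expected answer flips
    "negation": "What breed is the cat in the image?",
    # (3) output format: same reasoning, different answer format
    "format": "Answer with a single word: what breed is the dog?",
}

for kind, prompt in variants.items():
    print(f"{kind}: {prompt}")
```

Probes trained on the anchor question are then evaluated under each variant; which variants degrade probe accuracy at a given layer indicates that layer's functional role.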
Problem

Research questions and friction points this paper is trying to address.

Analyzing how MLLMs process visual and textual inputs across layers
Investigating functional roles of layers through controlled prompt variations
Identifying stage-wise structure in visual grounding and reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Layer-wise probing of visual grounding and reasoning
Linear classifiers trained on token embeddings at each layer
Controlled prompt variations isolate each layer's functional role
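The core probing step can be sketched as follows. This is a minimal, self-contained illustration, not the authors' code: the synthetic `fake_embeddings` stand in for per-layer token embeddings that would, in practice, be extracted from an MLLM with forward hooks under a fixed anchor question.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Illustrative stand-in for per-layer embeddings: for each layer, a
# (n_samples, hidden_dim) matrix with fine-grained labels (e.g., dog
# breeds). Later layers are made more linearly separable to mimic the
# trend a probe is meant to detect.
n_layers, n_samples, hidden_dim, n_classes = 4, 200, 32, 5
labels = rng.integers(0, n_classes, size=n_samples)

def fake_embeddings(layer):
    signal = np.eye(n_classes)[labels] * (layer + 1)
    emb = rng.normal(size=(n_samples, hidden_dim))
    emb[:, :n_classes] += signal
    return emb

def probe_accuracy(X, y):
    # Train a linear classifier on one layer's embeddings and report
    # accuracy as a simple linear-separability score.
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    return clf.score(X, y)

accs = [probe_accuracy(fake_embeddings(l), labels) for l in range(n_layers)]
for l, a in enumerate(accs):
    print(f"layer {l}: probe accuracy = {a:.2f}")
```

Running the same probes under the lexical, negation, and format prompt variants, and comparing the layer-wise accuracy curves, is what localizes each processing stage.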
Zhuoran Yu, University of Wisconsin-Madison (Computer Vision, Machine Learning)
Yong Jae Lee, University of Wisconsin-Madison