🤖 AI Summary
This study addresses the instability of multimodal large language models (MLLMs) on pixel-level tasks such as semantic segmentation and investigates their poorly understood spatial reasoning mechanisms. Through systematic layer-wise linear probing and attention-intervention analyses, the authors evaluate representational capacity at each stage of the pipeline: visual encoder, adapter, and large language model (LLM). They uncover a previously unknown "degradation–recovery" mechanism: segmentation performance degrades within the adapter but is progressively restored in the LLM via attention. The work further demonstrates that correctly classified image tokens can guide neighboring misclassified tokens toward correction through bidirectional attention, mitigating the limitations imposed by causal attention. These findings provide mechanistic insight and architectural guidance for designing MLLMs with robust segmentation capabilities.
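The layer-wise linear probing described above can be sketched in a few lines: train a linear classifier on the hidden states of each stage and compare accuracies. The snippet below is a self-contained illustration, not the authors' code; the per-stage "hidden states" are synthetic stand-ins whose noise level mimics representation quality at each stage, and all names (`fit_linear_probe`, `probe_accuracy`, the noise values) are illustrative assumptions.

```python
import numpy as np

def fit_linear_probe(feats, labels, n_classes, reg=1e-3):
    """Closed-form ridge-regression probe onto one-hot labels."""
    one_hot = np.eye(n_classes)[labels]                # (N, C) targets
    X = np.hstack([feats, np.ones((len(feats), 1))])   # append bias column
    W = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ one_hot)
    return W

def probe_accuracy(feats, labels, W):
    X = np.hstack([feats, np.ones((len(feats), 1))])
    preds = (X @ W).argmax(axis=1)
    return float((preds == labels).mean())

rng = np.random.default_rng(0)
n_tokens, dim, n_classes = 512, 64, 4
labels = rng.integers(0, n_classes, size=n_tokens)

# Synthetic per-stage hidden states: class-dependent signal plus noise,
# with the noise level varied per stage to mimic the degradation-recovery
# pattern (clean encoder, noisy adapter, recovered late-LLM features).
class_means = rng.normal(size=(n_classes, dim))
accs = {}
for stage, noise in [("encoder", 0.5), ("adapter", 6.0), ("llm_late", 0.3)]:
    feats = class_means[labels] + noise * rng.normal(size=(n_tokens, dim))
    W = fit_linear_probe(feats, labels, n_classes)
    accs[stage] = probe_accuracy(feats, labels, W)

print(accs)  # under these noise levels, the adapter probe should score lowest
```

In the actual study the features would come from real MLLM activations (one probe per layer), but the evaluation logic is the same: a frozen representation is "segmentation-capable" to the extent a linear probe can classify its tokens.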
📝 Abstract
Multimodal Large Language Models (MLLMs) are increasingly applied to pixel-level vision tasks, yet their intrinsic capacity for spatial understanding remains poorly understood. We investigate segmentation capacity through a layer-wise linear probing evaluation across the entire MLLM pipeline: vision encoder, adapter, and LLM. We further conduct an intervention-based attention knockout analysis to test whether cross-token attention progressively refines visual representations, and evaluate the effect of bidirectional attention among image tokens on spatial consistency. Our analysis reveals that the adapter introduces a drop in segmentation representation quality, which LLM layers progressively recover through attention-mediated refinement, in which correctly classified tokens steer misclassified neighbors toward the correct label. At early image token positions, this recovery is bounded by causal attention, a limitation that bidirectional attention among image tokens alleviates. These findings provide a mechanistic account of how MLLMs process visual information for segmentation, informing the design of future segmentation-capable models.
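The attention knockout intervention mentioned in the abstract can be illustrated with a toy single-head attention layer: knocking out an edge means removing one query-to-key attention weight (setting its score to negative infinity before the softmax, which renormalizes the rest) and observing how the target token's output changes. The sketch below is a minimal illustration under that standard formulation, not the paper's implementation; the token indices and tensor shapes are arbitrary.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v, knockout=None):
    """Single-head scaled dot-product attention.

    knockout: optional set of (query_idx, key_idx) edges whose pre-softmax
    score is set to -inf, i.e. that attention path is severed.
    """
    scores = q @ k.T / np.sqrt(q.shape[-1])
    if knockout:
        for qi, ki in knockout:
            scores[qi, ki] = -np.inf
    return softmax(scores, axis=-1) @ v

rng = np.random.default_rng(1)
n_tokens, dim = 6, 8
q, k, v = (rng.normal(size=(n_tokens, dim)) for _ in range(3))

baseline = attention(q, k, v)
# Sever the attention edge from query token 5 to key token 2 and measure
# the effect on token 5's output; all other rows are untouched.
ablated = attention(q, k, v, knockout={(5, 2)})

delta = float(np.linalg.norm(ablated[5] - baseline[5]))
assert np.allclose(ablated[:5], baseline[:5])  # only row 5 is affected
print(f"change at token 5: {delta:.4f}")
```

In the study's setting, comparing segmentation quality with and without such severed edges is what tests whether cross-token attention in the LLM is actually responsible for the observed representation recovery.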