Instruction Anchors: Dissecting the Causal Dynamics of Modality Arbitration

📅 2026-02-03
📈 Citations: 1
✨ Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the unclear mechanisms by which multimodal large language models (MLLMs) selectively leverage multimodal context in response to user instructions, a gap that limits their safety and reliability. From an information-flow perspective, the work shows that instruction tokens act as structural anchors: in shallow attention layers they non-selectively aggregate multimodal inputs, while in deeper layers they drive modality competition aligned with instruction intent; MLP layers, meanwhile, exhibit semantic inertia. Through information-flow analysis, causal intervention, and attention-head ablation, the authors identify a sparse set of specialized attention heads critical to modality arbitration. Remarkably, modulating only 5% of these key heads can raise or lower modality adherence by up to 60%, offering an effective route to MLLM transparency and precise multimodal control.

๐Ÿ“ Abstract
Modality following refers to the capacity of multimodal large language models (MLLMs) to selectively utilize multimodal context based on user instructions, and it is fundamental to safety and reliability in real-world deployments. However, the mechanisms governing this decision-making process remain poorly understood. In this paper, we investigate its working mechanism through an information-flow lens. Our findings reveal that instruction tokens function as structural anchors for modality arbitration: shallow attention layers perform non-selective information transfer, routing multimodal cues to these anchors as a latent buffer; modality competition is resolved within deep attention layers guided by the instruction intent, while MLP layers exhibit semantic inertia, acting as an adversarial force. Furthermore, we identify a sparse set of specialized attention heads that drive this arbitration. Causal interventions demonstrate that manipulating a mere 5% of these critical heads can decrease the modality-following ratio by 60% through blocking, or increase it by 60% through targeted amplification on failed samples. Our work provides a substantial step toward model transparency and offers a principled framework for the orchestration of multimodal information in MLLMs.
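The blocking and amplification interventions described in the abstract amount to scaling the outputs of selected attention heads before they are merged back into the residual stream. The following is a minimal NumPy sketch of that idea on a toy multi-head self-attention layer; all names, shapes, and weights here are illustrative assumptions, not the authors' code or any model's actual architecture.

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_with_head_scaling(x, Wq, Wk, Wv, head_scale):
    """Toy multi-head self-attention with a per-head intervention knob.

    head_scale[h] = 0.0 blocks head h (ablation), 1.0 leaves it
    untouched, and > 1.0 amplifies its contribution -- mirroring the
    blocking / amplification interventions on critical heads.
    """
    n_heads, d_model, d_head = Wq.shape
    outputs = []
    for h in range(n_heads):
        q, k, v = x @ Wq[h], x @ Wk[h], x @ Wv[h]
        attn = softmax(q @ k.T / np.sqrt(d_head))
        # Scale this head's output before concatenation.
        outputs.append(head_scale[h] * (attn @ v))
    return np.concatenate(outputs, axis=-1)

# Illustrative setup: 4 heads, sequence of 5 tokens.
rng = np.random.default_rng(0)
n_heads, d_model, d_head, seq = 4, 8, 2, 5
Wq, Wk, Wv = (rng.normal(size=(n_heads, d_model, d_head)) for _ in range(3))
x = rng.normal(size=(seq, d_model))

baseline = attention_with_head_scaling(x, Wq, Wk, Wv, np.ones(n_heads))
# Block head 0 only; the remaining heads are unaffected.
blocked = attention_with_head_scaling(x, Wq, Wk, Wv, np.array([0.0, 1.0, 1.0, 1.0]))
```

In an actual MLLM this scaling would be applied via forward hooks on the heads identified as critical, and the modality-following ratio would be measured before and after the intervention; the sketch only shows the arithmetic of the intervention itself.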
Problem

Research questions and friction points this paper is trying to address.

modality following
multimodal large language models
modality arbitration
instruction anchors
causal dynamics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Instruction Anchors
Modality Arbitration
Causal Intervention
Multimodal Large Language Models
Attention Heads