Revisiting Multimodal Positional Encoding in Vision-Language Models

📅 2025-10-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal position encoding has lacked systematic investigation, hindering fine-grained vision-language understanding. To address this, the paper distills three principled design guidelines for multimodal positional encoding: positional coherence, full frequency utilization, and preservation of textual priors. Based on these, it introduces two plug-and-play variants built on Rotary Position Embedding (RoPE) that require no architectural changes: Multi-Head RoPE (MHRoPE), which uses multi-head frequency allocation to decouple modality-specific positional signals, and MRoPE-Interleave (MRoPE-I), which aligns modalities via interleaved frequency mapping. Both model inter-modal positional relationships without added inference latency. Extensive experiments show consistent and significant improvements over state-of-the-art methods on diverse multimodal understanding benchmarks (e.g., VQAv2, OK-VQA, TextVQA) and general-purpose vision-language tasks, with particularly notable gains in fine-grained reasoning. Code will be released publicly.

📝 Abstract
Multimodal position encoding is essential for vision-language models, yet it has received little systematic investigation. We conduct a comprehensive analysis of multimodal Rotary Positional Embedding (RoPE) by examining its two core components: position design and frequency allocation. Through extensive experiments, we identify three key guidelines: positional coherence, full frequency utilization, and preservation of textual priors, which together ensure unambiguous layout, rich representation, and faithful transfer from the pre-trained LLM. Based on these insights, we propose Multi-Head RoPE (MHRoPE) and MRoPE-Interleave (MRoPE-I), two simple, plug-and-play variants that require no architectural changes. Our methods consistently outperform existing approaches across diverse benchmarks, with significant improvements in both general and fine-grained multimodal understanding. Code will be available at https://github.com/JJJYmmm/Multimodal-RoPEs.
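The "interleaved frequency mapping" idea can be illustrated with a minimal sketch. The paper itself does not publish this exact code; the round-robin channel assignment, the function names, and the three-axis (temporal/height/width) position tuple below are illustrative assumptions in the spirit of MRoPE-I, contrasted with allocating each axis a contiguous block of frequency channels.

```python
import math

def rope_frequencies(dim, base=10000.0):
    """Standard RoPE inverse frequencies, one per rotated channel pair."""
    return [base ** (-i / dim) for i in range(0, dim, 2)]

def mrope_interleave_angles(pos_t, pos_h, pos_w, dim, base=10000.0):
    """Hypothetical sketch of interleaved frequency allocation (MRoPE-I style).

    Channel pairs are assigned to the temporal/height/width axes round-robin,
    so every axis spans the full frequency band instead of owning one
    contiguous low- or high-frequency chunk. The round-robin rule is an
    illustrative assumption, not necessarily the paper's exact scheme.
    """
    inv_freq = rope_frequencies(dim, base)
    pos = (pos_t, pos_h, pos_w)
    # Rotation angle per channel pair: the position along the axis that owns
    # this pair, scaled by that pair's frequency.
    return [pos[i % 3] * f for i, f in enumerate(inv_freq)]
```

For example, with `dim=8` there are four channel pairs; the temporal axis gets pairs 0 and 3, so it contributes at both the highest and lowest frequencies, which is the "full frequency utilization" property the guidelines ask for.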
Problem

Research questions and friction points this paper is trying to address.

Multimodal position encoding lacks systematic investigation despite its importance to vision-language models
Unclear how position design and frequency allocation should be chosen for multimodal RoPE
Existing encodings risk ambiguous layouts, underused frequency bands, or loss of textual priors from the pre-trained LLM
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes Multi-Head RoPE (MHRoPE) with multi-head frequency allocation
Introduces MRoPE-Interleave (MRoPE-I), a plug-and-play variant using interleaved frequency mapping
Improves positional coherence and frequency utilization without architectural changes