🤖 AI Summary
Multimodal position encoding has received little systematic investigation, which hinders fine-grained vision-language understanding. To address this, we identify three principled design guidelines for multimodal positional encoding: positional coherence, full frequency utilization, and preservation of textual priors. Based on these, we introduce two plug-and-play modules, Multi-Head RoPE (MHRoPE) and MRoPE-Interleave (MRoPE-I), both built upon Rotary Position Embedding (RoPE) and requiring no backbone modification. MHRoPE employs multi-head frequency allocation to decouple modality-specific positional signals, while MRoPE-I achieves cross-modal alignment via an interleaved frequency mapping. Both methods model inter-modal positional relationships without increasing inference latency. Extensive experiments demonstrate consistent and significant improvements over state-of-the-art methods across diverse multimodal understanding benchmarks (e.g., VQAv2, OK-VQA, TextVQA) and general-purpose vision-language tasks, with particularly notable gains in fine-grained reasoning. Our implementation is publicly available.
📝 Abstract
Multimodal position encoding is essential for vision-language models, yet it has received little systematic investigation. We conduct a comprehensive analysis of multimodal Rotary Positional Embedding (RoPE) by examining its two core components: position design and frequency allocation. Through extensive experiments, we identify three key guidelines: positional coherence, full frequency utilization, and preservation of textual priors, which together ensure unambiguous layout, rich representation, and faithful transfer from the pre-trained LLM. Based on these insights, we propose Multi-Head RoPE (MHRoPE) and MRoPE-Interleave (MRoPE-I), two simple, plug-and-play variants that require no architectural changes. Our methods consistently outperform existing approaches across diverse benchmarks, with significant improvements in both general and fine-grained multimodal understanding. Code will be available at https://github.com/JJJYmmm/Multimodal-RoPEs.
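To make the frequency-allocation contrast concrete, here is a minimal sketch (not the authors' implementation; function names, the three-axis split into temporal/height/width, and the dimension sizes are illustrative assumptions). Standard multimodal RoPE typically assigns each positional axis a contiguous chunk of the rotary frequencies, so each axis sees only a narrow frequency band; an interleaved mapping in the spirit of MRoPE-I instead cycles the axes across the spectrum, so every axis touches both high and low frequencies:

```python
def rope_inv_freq(dim, base=10000.0):
    # Standard RoPE inverse frequencies for `dim` rotary pairs:
    # theta_i = base^(-i/dim), spanning high (i=0) to low frequencies.
    return [base ** (-i / dim) for i in range(dim)]

def chunked_axis_map(dim, num_axes=3):
    # Chunked allocation: contiguous blocks per axis (0=t, 1=h, 2=w),
    # so each axis covers only one band of the frequency spectrum.
    assert dim % num_axes == 0
    return [axis for axis in range(num_axes) for _ in range(dim // num_axes)]

def interleaved_axis_map(dim, num_axes=3):
    # Interleaved allocation: axes alternate across the spectrum,
    # so every axis uses the full frequency range.
    return [i % num_axes for i in range(dim)]

print(chunked_axis_map(12))      # [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2]
print(interleaved_axis_map(12))  # [0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2]
```

Here each entry of the map names which positional axis rotates that frequency pair; both variants leave the underlying RoPE rotation unchanged, which is why such a remapping is plug-and-play.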