🤖 AI Summary
This work addresses the limitations of Rotary Position Embedding (RoPE) in existing 3D large multimodal models: RoPE disrupts the spatial continuity of visual features during multimodal processing, and its causal assumption causes long-range attention decay that progressively downweights early visual tokens. To overcome these issues, the authors propose C²RoPE, a positional encoding mechanism that integrates temporal indices with Cartesian spatial coordinates to explicitly model both local spatial continuity and causal dependencies. The method introduces a triplet-based hybrid positional indexing scheme and a Chebyshev distance–based causal mask, preserving the integrity of 3D visual structures while improving attention allocation over long sequences. Experimental results show that C²RoPE outperforms current approaches on tasks such as 3D scene reasoning and 3D visual question answering.
📝 Abstract
Recent advances in 3D Large Multimodal Models (LMMs) built on Large Language Models (LLMs) have established the alignment of 3D visual features with LLM representations as the dominant paradigm. However, the inherited Rotary Position Embedding (RoPE) introduces limitations for multimodal processing. Specifically, applying 1D temporal positional indices disrupts the continuity of visual features along the column dimension, resulting in a loss of spatial locality. Moreover, RoPE follows the prior that temporally closer image tokens are more causally related, leading to long-term decay in attention allocation and causing the model to progressively neglect earlier visual tokens as the sequence length increases. To address these issues, we propose C²RoPE, an improved RoPE that explicitly models local spatial Continuity and spatial Causal relationships for visual processing. C²RoPE introduces a spatio-temporal continuous positional embedding mechanism for visual tokens. It first integrates 1D temporal positions with Cartesian-based spatial coordinates to construct a triplet hybrid positional index, and then employs a frequency allocation strategy to encode spatio-temporal positional information across the three index components. Additionally, we introduce Chebyshev Causal Masking, which determines causal dependencies by computing the Chebyshev distance between image tokens in 2D space. Evaluation results across various benchmarks, including 3D scene reasoning and 3D visual question answering, demonstrate C²RoPE's effectiveness. The code is available at https://github.com/ErikZ719/C2RoPE.
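To make the two core ideas concrete, here is a minimal NumPy sketch of (a) a triplet hybrid positional index that pairs a shared temporal index with Cartesian grid coordinates, and (b) a Chebyshev distance–based attention mask. The index layout, mask rule, and radius below are illustrative assumptions based on the abstract, not the authors' implementation (see the linked repository for that).

```python
import numpy as np

def triplet_indices(t_start, height, width):
    """Assign each image token in an H x W grid a (temporal, x, y) triplet.

    All tokens of one image share a temporal index (t_start here), while
    x and y are Cartesian grid coordinates -- an assumed reading of the
    paper's "triplet hybrid positional index".
    """
    ys, xs = np.meshgrid(np.arange(height), np.arange(width), indexing="ij")
    t = np.full((height, width), t_start)
    return np.stack([t.ravel(), xs.ravel(), ys.ravel()], axis=1)  # (H*W, 3)

def chebyshev_mask(coords, radius=1):
    """Boolean attention mask over image tokens.

    Token i may attend to token j when their Chebyshev (L-infinity)
    distance in the 2D grid is at most `radius`; the radius value and the
    symmetric treatment are illustrative choices, not the paper's spec.
    """
    xy = coords[:, 1:]  # drop the temporal component
    d = np.abs(xy[:, None, :] - xy[None, :, :]).max(axis=-1)  # pairwise L-inf
    return d <= radius

coords = triplet_indices(t_start=0, height=3, width=3)
mask = chebyshev_mask(coords, radius=1)
```

On a 3x3 token grid with radius 1, a corner token attends to its 2x2 neighborhood (4 tokens) while the center token attends to all 9, illustrating how the mask preserves 2D locality that a purely 1D causal mask would break.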