🤖 AI Summary
Existing 3D large vision-language models (LVLMs) employ RoPE positional encoding, which struggles to model three-dimensional spatial structure and neglects angular dependencies, limiting the model's sensitivity to directional variations. To address this limitation, this work proposes Spherical Coordinate-based Positional Embedding (SoPE), which introduces the spherical coordinate system into 3D LVLMs for the first time. SoPE unifies the representation of radial distance and angular direction for point-cloud tokens and incorporates a multi-scale frequency mixing mechanism to enhance both the consistency and richness of geometric representations. Experimental results demonstrate that SoPE significantly improves performance across multiple 3D scene understanding benchmarks and exhibits strong generalization.
📝 Abstract
3D Large Vision-Language Models (3D LVLMs) built upon Large Language Models (LLMs) have achieved remarkable progress across various multimodal tasks. However, their inherited position-dependent modeling mechanism, Rotary Position Embedding (RoPE), remains suboptimal for 3D multimodal understanding. The vanilla RoPE formulation fails to preserve essential three-dimensional spatial structure when encoding 3D tokens, and its relative distance computation overlooks angular dependencies, hindering the model's ability to capture directional variations in visual representations. To overcome these limitations, we introduce Spherical Coordinate-based Positional Embedding (SoPE). Our method maps point-cloud token indices into a 3D spherical coordinate space, enabling unified modeling of spatial locations and directional angles. This formulation preserves the inherent geometric structure of point-cloud data, enhances spatial awareness, and yields more consistent and expressive geometric representations for multimodal learning. In addition, we propose a multi-scale frequency mixing strategy that fuses feature information across different frequency domains. Experimental results on multiple 3D scene benchmarks validate the effectiveness of our approach, while real-world deployment experiments further demonstrate its strong generalization capability.
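To make the core idea concrete, the sketch below shows one plausible reading of the abstract: point coordinates are converted to spherical coordinates (radial distance plus polar and azimuthal angles), each coordinate is turned into RoPE-style (cos, sin) rotation pairs, and angle tables computed at several frequency scales are averaged as a stand-in for the multi-scale frequency mixing. All function names (`to_spherical`, `rotary_angles`, `sope_embedding`), the channel split, and the averaging scheme are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def to_spherical(xyz):
    """Convert Cartesian points (N, 3) to spherical coordinates.

    r     : radial distance from the origin
    theta : polar angle from the +z axis, in [0, pi]
    phi   : azimuthal angle in the x-y plane, in (-pi, pi]
    """
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    r = np.linalg.norm(xyz, axis=1)
    theta = np.arccos(np.clip(z / np.maximum(r, 1e-12), -1.0, 1.0))
    phi = np.arctan2(y, x)
    return r, theta, phi

def rotary_angles(values, dim, base=10000.0):
    """RoPE-style angle table: each scalar position times a geometric
    frequency ladder, as in standard rotary position embeddings."""
    freqs = base ** (-np.arange(0, dim, 2) / dim)   # (dim // 2,)
    return values[:, None] * freqs[None, :]         # (N, dim // 2)

def sope_embedding(xyz, dim, scales=(1.0, 0.5, 0.25)):
    """Hypothetical spherical positional embedding with multi-scale mixing.

    Each of (r, theta, phi) receives a third of the channels; angle tables
    computed at several frequency scales are averaged, then expanded into
    (cos, sin) pairs that could rotate query/key features.
    """
    r, theta, phi = to_spherical(xyz)
    parts = []
    for coord in (r, theta, phi):
        # "multi-scale frequency mixing" stand-in: average scaled angle tables
        ang = np.mean([rotary_angles(coord * s, dim // 3) for s in scales],
                      axis=0)
        parts.append(np.concatenate([np.cos(ang), np.sin(ang)], axis=-1))
    return np.concatenate(parts, axis=-1)           # (N, dim) when 3 | dim
```

Because distance and angles enter through separate channel groups, the relative rotation between two tokens depends on their angular offset as well as their radial offset, which is the directional sensitivity the abstract says vanilla RoPE lacks.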