SoPE: Spherical Coordinate-Based Positional Embedding for Enhancing Spatial Perception of 3D LVLMs

📅 2026-02-26
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing 3D large vision-language models (LVLMs) inherit RoPE positional encoding from their underlying LLMs, which struggles to model three-dimensional spatial structure and ignores angular dependencies, limiting the model's sensitivity to directional variation. To address this, the paper proposes Spherical Coordinate-based Positional Embedding (SoPE), which introduces the spherical coordinate system into 3D LVLMs for the first time. SoPE unifies the representation of radial distance and angular direction for point-cloud tokens and incorporates a multi-scale frequency mixing mechanism to enhance both the consistency and the richness of the geometric representation. Experimental results show that SoPE significantly improves performance across multiple 3D scene understanding benchmarks and exhibits strong generalization.
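The core coordinate change the summary describes can be sketched as follows. This is an illustrative reading, not the paper's implementation: the function name `to_spherical` and the axis conventions (polar angle measured from +z, azimuth in the x-y plane) are our assumptions.

```python
import numpy as np

def to_spherical(xyz):
    """Map Cartesian point-cloud coordinates (N, 3) to spherical (r, theta, phi).

    r is the radial distance, theta the polar angle from the +z axis,
    and phi the azimuthal angle in the x-y plane (conventions assumed here).
    """
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    r = np.linalg.norm(xyz, axis=1)
    # Clip to guard against floating-point values slightly outside [-1, 1],
    # and avoid division by zero at the origin.
    theta = np.arccos(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))
    phi = np.arctan2(y, x)
    return r, theta, phi
```

Encoding positions as (r, θ, φ) rather than raw (x, y, z) is what lets a rotary-style embedding expose distance and direction as separate, explicitly modeled quantities.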

πŸ“ Abstract
3D Large Vision-Language Models (3D LVLMs) built upon Large Language Models (LLMs) have achieved remarkable progress across various multimodal tasks. However, their inherited position-dependent modeling mechanism, Rotary Position Embedding (RoPE), remains suboptimal for 3D multimodal understanding. The vanilla RoPE formulation fails to preserve essential three-dimensional spatial structures when encoding 3D tokens, and its relative distance computation overlooks angular dependencies, hindering the model's ability to capture directional variations in visual representations. To overcome these limitations, we introduce Spherical Coordinate-based Positional Embedding (SoPE). Our method maps point-cloud token indices into a 3D spherical coordinate space, enabling unified modeling of spatial locations and directional angles. This formulation preserves the inherent geometric structure of point-cloud data, enhances spatial awareness, and yields more consistent and expressive geometric representations for multimodal learning. In addition, we introduce a multi-scale frequency mixing strategy to fuse feature information across different frequency domains. Experimental results on multiple 3D scene benchmarks validate the effectiveness of our approach, while real-world deployment experiments further demonstrate its strong generalization capability.
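The abstract's two ideas, rotary angles driven by spherical coordinates and multi-scale frequency mixing, can be combined in a minimal sketch. Everything below is an assumption-laden illustration, not the authors' code: the round-robin assignment of frequency bands to (r, θ, φ), the scale set `(1.0, 0.5, 0.25)`, and averaging as the mixing operator are all hypothetical choices.

```python
import numpy as np

def sope_angles(xyz, dim, scales=(1.0, 0.5, 0.25)):
    """Build RoPE-style rotation angles from spherical coordinates.

    Returns an (N, dim // 2) array; each angle would rotate one 2D feature
    slice, as in standard RoPE. The multi-scale mixing averages the angles
    computed at several frequency scales (one possible reading of the paper's
    'multi-scale frequency mixing').
    """
    # Cartesian (N, 3) -> spherical components (conventions assumed).
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    r = np.linalg.norm(xyz, axis=1)
    theta = np.arccos(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))
    phi = np.arctan2(y, x)

    n_freq = dim // 2
    # Standard RoPE inverse-frequency schedule.
    inv_freq = 1.0 / (10000 ** (np.arange(n_freq) / n_freq))
    # Assign frequency bands to the three spherical components round-robin
    # (an illustrative choice, not necessarily the paper's).
    comps = np.stack([r, theta, phi], axis=1)        # (N, 3)
    band = np.arange(n_freq) % 3
    angles = comps[:, band] * inv_freq[None, :]      # (N, n_freq)
    # Mix coarse-to-fine frequency scales by averaging.
    return np.mean([angles * s for s in scales], axis=0)
```

In a full model, each resulting angle would parameterize a 2×2 rotation applied to a pair of query/key channels, so that attention scores depend on relative radial distance and relative direction between tokens.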
Problem

Research questions and friction points this paper is trying to address.

3D Large Vision-Language Models, Rotary Position Embedding, spatial perception, spherical coordinates, point-cloud representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spherical Coordinate Positional Embedding, 3D LVLMs, Spatial Perception, Frequency Mixing