🤖 AI Summary
To address the challenge of jointly capturing fine-grained local geometry and global semantic context in 3D point clouds for large language models (LLMs), this paper proposes a perception-aware vision-language assistant tailored for point cloud understanding. Methodologically, it introduces Hilbert curve encoding to preserve spatial locality in point cloud representations; designs a hybrid architecture integrating cross-attention and graph neural networks (GNNs) for joint local-global feature fusion; and incorporates multi-scale feature aggregation with a local representation consensus loss to enhance training stability and discriminability. Evaluated on the ScanQA, ScanRefer, and Nr3D benchmarks, the method achieves CIDEr improvements of +1.34, +4.22, and +3.88, respectively, outperforming existing 3D language understanding models. These results empirically validate the effectiveness of co-optimizing local fidelity and contextual modeling for robust 3D visual grounding and language generation.
📝 Abstract
Enabling Large Language Models (LLMs) to understand the 3D physical world is an emerging yet challenging research direction. Current strategies for processing point clouds typically downsample the scene or divide it into smaller parts for separate analysis. However, both approaches risk losing key local details or global contextual information. In this paper, we introduce PerLA, a 3D language assistant designed to be more perceptive to both details and context, making visual representations more informative for the LLM. PerLA captures high-resolution (local) details in parallel from different point cloud areas and integrates them with (global) context obtained from a lower-resolution whole point cloud. We present a novel algorithm that preserves point cloud locality through the Hilbert curve and effectively aggregates local-to-global information via cross-attention and a graph neural network. Lastly, we introduce a novel loss for local representation consensus to promote training stability. PerLA outperforms state-of-the-art 3D language assistants, with gains of up to +1.34 CIDEr on ScanQA for question answering, and +4.22 on ScanRefer and +3.88 on Nr3D for dense captioning. Project page: https://gfmei.github.io/PerLA/
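To make the locality-preservation idea concrete, here is a minimal sketch (not the paper's code) of Hilbert-curve serialization: cells are mapped to 1D indices so that points close along the curve are also close in space, which lets contiguous slices of the serialized sequence act as spatially compact local patches. For brevity this uses the classic 2D curve; PerLA serializes 3D point clouds, and the function name `xy2d` and grid setup are illustrative assumptions.

```python
def xy2d(n: int, x: int, y: int) -> int:
    """Map grid cell (x, y) on an n x n grid (n a power of two) to its
    1D index along the Hilbert curve (classic iterative conversion)."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:  # rotate the quadrant so the recursion lines up
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d


if __name__ == "__main__":
    n = 8
    # Sort all grid cells by Hilbert index. Consecutive cells in this
    # ordering are always grid-adjacent (Manhattan distance 1), so any
    # contiguous slice of the sequence forms a spatially compact patch.
    cells = sorted(((x, y) for x in range(n) for y in range(n)),
                   key=lambda p: xy2d(n, *p))
    assert all(abs(ax - bx) + abs(ay - by) == 1
               for (ax, ay), (bx, by) in zip(cells, cells[1:]))
    print(cells[:4])
```

The adjacency check above is the property that a simple row-major (raster) ordering lacks: a raster scan jumps across the whole grid at each row boundary, whereas the Hilbert ordering never separates consecutive indices spatially, which is why it is a natural choice for partitioning a point cloud into local chunks.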