PerLA: Perceptive 3D Language Assistant

📅 2024-11-29
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address the challenge of jointly capturing fine-grained local geometry and global semantic context in 3D point clouds for large language models (LLMs), this paper proposes a perception-aware vision-language assistant tailored for point cloud understanding. Methodologically, it introduces Hilbert-curve encoding to preserve spatial locality in point cloud representations; designs a hybrid architecture that integrates cross-attention and a graph neural network (GNN) for joint local-global feature fusion; and incorporates multi-scale feature aggregation with a contrastive local representation consistency loss to improve training stability and discriminability. Evaluated on the ScanQA, ScanRefer, and Nr3D benchmarks, PerLA achieves CIDEr improvements of +1.34, +4.22, and +3.88, respectively, outperforming existing 3D language understanding models. These results empirically validate the effectiveness of co-optimizing local fidelity and contextual modeling for robust 3D visual grounding and language generation.
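The locality-preserving ordering mentioned in the summary can be illustrated with the classic iterative Hilbert-curve index. The sketch below is a simplified 2D version (PerLA operates on 3D point clouds, so its encoding uses a 3D Hilbert curve); the function name and the toy points are illustrative, not from the paper. Sorting quantized coordinates by Hilbert distance keeps spatially adjacent cells close together in the resulting 1D sequence, unlike row-major ordering.

```python
def xy2d(n, x, y):
    """Map grid cell (x, y) on an n-by-n grid (n a power of two) to its
    distance d along the Hilbert curve. Standard iterative algorithm."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/reflect the quadrant so the curve pattern repeats recursively.
        if ry == 0:
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

# Order quantized 2D cells by Hilbert distance: consecutive indices
# stay spatially close, which is the property the encoding exploits.
points = [(0, 0), (3, 3), (0, 1), (3, 0), (1, 1), (2, 2)]
ordered = sorted(points, key=lambda p: xy2d(4, *p))
# ordered = [(0, 0), (1, 1), (0, 1), (2, 2), (3, 3), (3, 0)]
```

Extending this to 3D follows the same recurse-and-rotate scheme, just with eight octants per level instead of four quadrants.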

📝 Abstract
Enabling Large Language Models (LLMs) to understand the 3D physical world is an emerging yet challenging research direction. Current strategies for processing point clouds typically downsample the scene or divide it into smaller parts for separate analysis. However, both approaches risk losing key local details or global contextual information. In this paper, we introduce PerLA, a 3D language assistant designed to be more perceptive to both details and context, making visual representations more informative for the LLM. PerLA captures high-resolution (local) details in parallel from different point cloud areas and integrates them with (global) context obtained from a lower-resolution whole point cloud. We present a novel algorithm that preserves point cloud locality through the Hilbert curve and effectively aggregates local-to-global information via cross-attention and a graph neural network. Lastly, we introduce a novel loss for local representation consensus to promote training stability. PerLA outperforms state-of-the-art 3D language assistants, with gains of up to +1.34 CIDEr on ScanQA for question answering, and +4.22 on ScanRefer and +3.88 on Nr3D for dense captioning. https://gfmei.github.io/PerLA/
Problem

Research questions and friction points this paper is trying to address.

Enhancing 3D understanding for Large Language Models (LLMs)
Preserving local details and global context in point clouds
Improving visual representations for 3D language assistants
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parallel high-resolution local detail capture
Hilbert curve preserves point cloud locality
Cross-attention and GNN aggregate local-to-global information
Consensus loss on local representations promotes training stability
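The local-to-global aggregation listed above can be sketched as plain scaled dot-product cross-attention, where local partition features act as queries over global scene features. This is a minimal NumPy illustration under assumed shapes; PerLA's actual module additionally uses a graph neural network and learned projections, which are omitted here, and the names `local_feats`/`global_feats` are my own.

```python
import numpy as np

def cross_attention(local_feats, global_feats):
    """Fuse high-resolution local queries with low-resolution global context.
    local_feats: (n, d) features from one local point-cloud partition.
    global_feats: (m, d) features from the downsampled whole scene.
    Returns (n, d) context-enriched local features."""
    d = local_feats.shape[1]
    scores = local_feats @ global_feats.T / np.sqrt(d)   # (n, m) similarities
    scores -= scores.max(axis=1, keepdims=True)          # softmax stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)        # attention over global tokens
    return weights @ global_feats                        # weighted global context

# Toy usage: 4 local tokens attend over 16 global scene tokens.
local = np.random.default_rng(0).standard_normal((4, 8))
scene = np.random.default_rng(1).standard_normal((16, 8))
fused = cross_attention(local, scene)   # shape (4, 8)
```

Because the attention weights form a convex combination, each fused local feature is a weighted average of global features, which is what lets local partitions inherit scene-level context.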