🤖 AI Summary
Existing interactive point cloud segmentation methods suffer from weak local semantic associations and poor generalization due to the absence of direct cross-modal alignment between 3D point features and textual embeddings. To address this, we propose a two-stage framework: (1) an explicit cross-modal alignment module establishes fine-grained semantic mappings among point clouds, text, and images; (2) a multi-memory architecture—comprising separate memory banks for text, visual, and cross-modal features—enables dynamic feature refinement via self-attention and cross-attention, jointly modeling scene consistency. Our approach is the first to integrate direct 3D-text alignment with a hierarchical memory structure, co-optimizing the 3D point cloud encoder and vision-language model. Evaluated on multiple benchmarks—including 3D instruction-based segmentation, referring segmentation, and semantic segmentation—it achieves state-of-the-art performance, significantly improving segmentation accuracy and interactive robustness.
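The summary above does not specify how the "explicit cross-modal alignment" between 3D point features and textual embeddings is trained. A common way to realize such direct alignment is a symmetric InfoNCE-style contrastive objective over paired point/text embeddings; the sketch below illustrates that idea only and is an assumption, not the paper's actual loss (function name and temperature value are hypothetical).

```python
import numpy as np

def info_nce_alignment(point_feats, text_feats, temperature=0.07):
    """Symmetric contrastive loss aligning N paired point/text embeddings.

    point_feats, text_feats: (N, D) arrays; row i of each is a matched pair.
    Returns a scalar loss (lower = embeddings of matched pairs are closer
    than mismatched ones, i.e. better cross-modal alignment).
    """
    # L2-normalize so the dot product is cosine similarity.
    p = point_feats / np.linalg.norm(point_feats, axis=1, keepdims=True)
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    logits = p @ t.T / temperature          # (N, N) similarity matrix
    idx = np.arange(len(p))                 # diagonal entries are positives

    def xent(lg):
        # Cross-entropy with the matched pair as the target class.
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # Average the point-to-text and text-to-point directions.
    return 0.5 * (xent(logits) + xent(logits.T))
```

Under this kind of objective, identical (perfectly aligned) feature pairs drive the loss toward zero, while unrelated features leave it near log N.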
📝 Abstract
The rapid advancement of 3D vision-language models (VLMs) has spurred significant interest in interactive point cloud processing tasks, particularly for real-world applications. However, existing methods often underperform in point-level tasks such as segmentation because they lack direct 3D-text alignment, which limits their ability to link local 3D features with textual context. To solve this problem, we propose TSDASeg, a Two-Stage model coupled with a Direct cross-modal Alignment module and memory module for interactive point cloud Segmentation. We introduce the direct cross-modal alignment module to establish explicit alignment between 3D point clouds and textual/2D image data. Within the memory module, we employ multiple dedicated memory banks to separately store text features, visual features, and their cross-modal correspondence mappings. These memory banks are dynamically leveraged through self-attention and cross-attention mechanisms to update scene-specific features based on prior stored data, effectively addressing inconsistencies in interactive segmentation results across diverse scenarios. Experiments conducted on multiple 3D instruction-based, referring, and semantic segmentation datasets demonstrate that the proposed method achieves state-of-the-art performance.
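The abstract states that memory banks are "dynamically leveraged through self-attention and cross-attention mechanisms to update scene-specific features," but gives no formula. The sketch below shows one plausible reading of the cross-attention step: current-scene features act as queries over a stored memory bank (keys/values) and receive a residual update. The function name, residual form, and single-head formulation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def memory_cross_attention(scene_feats, memory_bank):
    """Refine scene features by cross-attending over a stored memory bank.

    scene_feats: (N, D) queries -- features of the current scene.
    memory_bank: (M, D) keys/values -- features stored from prior scenes.
    Returns (N, D) refined features via a residual attention update,
    so each scene feature is nudged toward relevant stored memories.
    """
    d = scene_feats.shape[1]
    scale = 1.0 / np.sqrt(d)  # standard scaled dot-product attention
    attn = softmax(scene_feats @ memory_bank.T * scale, axis=1)  # (N, M)
    retrieved = attn @ memory_bank                               # (N, D)
    return scene_feats + retrieved
```

In a full model the queries, keys, and values would pass through learned projections, and a self-attention pass over the scene features would typically precede or follow this memory lookup; the sketch omits both for brevity.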