TSDASeg: A Two-Stage Model with Direct Alignment for Interactive Point Cloud Segmentation

📅 2025-06-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing interactive point cloud segmentation methods suffer from weak local semantic associations and poor generalization due to the absence of direct cross-modal alignment between 3D point features and textual embeddings. To address this, we propose a two-stage framework: (1) an explicit cross-modal alignment module establishes fine-grained semantic mappings among point clouds, text, and images; (2) a multi-memory architecture—comprising separate memory banks for text, visual, and cross-modal features—enables dynamic feature refinement via self-attention and cross-attention, jointly modeling scene consistency. Our approach is the first to integrate direct 3D-text alignment with a hierarchical memory structure, co-optimizing the 3D point cloud encoder and vision-language model. Evaluated on multiple benchmarks—including 3D instruction-based segmentation, referring segmentation, and semantic segmentation—it achieves state-of-the-art performance, significantly improving segmentation accuracy and interactive robustness.

📝 Abstract
The rapid advancement of 3D vision-language models (VLMs) has spurred significant interest in interactive point cloud processing tasks, particularly for real-world applications. However, existing methods often underperform in point-level tasks, such as segmentation, due to missing direct 3D-text alignment, limiting their ability to link local 3D features with textual context. To solve this problem, we propose TSDASeg, a Two-Stage model coupled with a Direct cross-modal Alignment module and memory module for interactive point cloud Segmentation. We introduce the direct cross-modal alignment module to establish explicit alignment between 3D point clouds and textual/2D image data. Within the memory module, we employ multiple dedicated memory banks to separately store text features, visual features, and their cross-modal correspondence mappings. These memory banks are dynamically leveraged through self-attention and cross-attention mechanisms to update scene-specific features based on prior stored data, effectively addressing inconsistencies in interactive segmentation results across diverse scenarios. Experiments conducted on multiple 3D instruction, referring, and semantic segmentation datasets demonstrate that the proposed method achieves state-of-the-art performance.
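The abstract does not specify how the direct 3D-text alignment is trained. A common choice for this kind of explicit cross-modal alignment is a symmetric InfoNCE-style contrastive loss over paired point-level and text features; the sketch below is an illustrative assumption, not the authors' implementation (the function name and temperature value are hypothetical).

```python
import numpy as np

def info_nce_alignment_loss(point_feats, text_feats, temperature=0.07):
    """Symmetric contrastive loss aligning N point-level features with
    their N paired text embeddings (both arrays of shape [N, D])."""
    # L2-normalize so dot products become cosine similarities
    p = point_feats / np.linalg.norm(point_feats, axis=1, keepdims=True)
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    logits = p @ t.T / temperature      # [N, N]; matched pairs on the diagonal
    labels = np.arange(len(p))

    def xent(l):
        # cross-entropy with the diagonal as the target class
        l = l - l.max(axis=1, keepdims=True)            # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # average over both retrieval directions (points->text, text->points)
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimizing this loss pulls each point feature toward its paired text embedding and away from the other captions in the batch, which is one standard way to realize the "explicit alignment between 3D point clouds and textual data" the abstract describes.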
Problem

Research questions and friction points this paper is trying to address.

Lack of direct 3D-text alignment in point cloud segmentation
Difficulty linking local 3D features with textual context
Inconsistencies in interactive segmentation across diverse scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-Stage model for point cloud segmentation
Direct cross-modal alignment module
Memory banks with dynamic attention mechanisms
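The memory mechanism above (stored features retrieved via attention to update scene-specific features) can be sketched as a plain cross-attention lookup with a residual update. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: `refine_with_memory`, the single-head attention, and the residual connection are all hypothetical simplifications.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def refine_with_memory(scene_feats, memory_bank):
    """Cross-attention lookup: scene features ([N, D]) act as queries over a
    stored memory bank ([M, D], keys == values), then get a residual update."""
    d_k = scene_feats.shape[-1]
    attn = softmax(scene_feats @ memory_bank.T / np.sqrt(d_k))  # [N, M]
    retrieved = attn @ memory_bank                              # [N, D]
    return scene_feats + retrieved  # residual update keeps the original signal
```

In the paper's full design, separate banks for text, visual, and cross-modal features would each feed such a lookup, with self-attention refining the result; with an empty (zero) memory this sketch reduces to the identity, so stored scenes only ever add information.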
Chade Li
State Key Laboratory of Multimodal Artificial Intelligence, Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences
Pengju Zhang
University of Bristol
Yihong Wu
State Key Laboratory of Multimodal Artificial Intelligence, Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences