OnlineAnySeg: Online Zero-Shot 3D Segmentation by Visual Foundation Model Guided 2D Mask Merging

📅 2025-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods for online zero-shot open-vocabulary 3D instance segmentation struggle to achieve spatially consistent fusion of 2D masks into 3D voxels under real-time constraints. Method: We propose the first online, zero-shot, and noise-robust 3D instance segmentation framework. It leverages vision foundation models to generate 2D masks, introduces a voxel hashing mechanism that efficiently models spatial overlap and reduces mask-matching complexity from O(n²) to O(n), and integrates 2D mask similarity filtering with online 3D reconstruction for real-time, unified 3D instance fusion in dynamic scenes. Contribution/Results: Our method achieves state-of-the-art performance on ScanNet and SceneNN, simultaneously attaining high accuracy (mAP↑) and real-time inference (>15 FPS). It is the first deployable online 3D instance segmentation framework supporting open-vocabulary recognition, enabling scalable, adaptive 3D understanding without task-specific training.
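To make the O(n²) → O(n) claim concrete, the voxel-hashing idea can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the class name `VoxelMaskIndex`, the default voxel size, and the point format are all assumptions. The key property is that each 3D point maps to an integer voxel key in a hash table, so finding which previously seen masks a new mask overlaps costs one lookup per voxel instead of a pairwise comparison against every stored mask.

```python
from collections import defaultdict

def voxel_key(point, voxel_size=0.05):
    """Quantize a 3D point (x, y, z) to an integer voxel coordinate used as a hash key."""
    return tuple(int(c // voxel_size) for c in point)

class VoxelMaskIndex:
    """Illustrative hash map from voxel keys to the set of mask IDs occupying each voxel."""

    def __init__(self, voxel_size=0.05):
        self.voxel_size = voxel_size
        self.table = defaultdict(set)  # voxel key -> {mask IDs}

    def insert_mask(self, mask_id, points):
        """Register a lifted 2D mask (a set of back-projected 3D points)."""
        for p in points:
            self.table[voxel_key(p, self.voxel_size)].add(mask_id)

    def overlapping_masks(self, points):
        """Count voxel-level overlap with previously inserted masks.

        One hash lookup per voxel of the query mask, so the cost is linear
        in the mask size rather than quadratic in the number of masks."""
        counts = defaultdict(int)
        for p in points:
            for mid in self.table[voxel_key(p, self.voxel_size)]:
                counts[mid] += 1
        return counts
```

A query mask then only needs to inspect the voxels it actually occupies, which is what makes overlap identification feasible at frame rate during online reconstruction.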

📝 Abstract
Online 3D open-vocabulary segmentation of a progressively reconstructed scene is both a critical and challenging task for embodied applications. With the success of visual foundation models (VFMs) in the image domain, leveraging 2D priors to address 3D online segmentation has become a prominent research focus. Since segmentation results provided by 2D priors often require spatial consistency to be lifted into final 3D segmentation, an efficient method for identifying spatial overlap among 2D masks is essential, yet existing methods rarely achieve this in real time, largely limiting them to offline use. To address this, we propose an efficient method that lifts 2D masks generated by VFMs into a unified 3D instance using a hashing technique. By employing voxel hashing for efficient 3D scene querying, our approach reduces the time complexity of costly spatial overlap queries from $O(n^2)$ to $O(n)$. Accurate spatial associations further enable 3D merging of 2D masks through simple similarity-based filtering in a zero-shot manner, making our approach more robust to incomplete and noisy data. Evaluated on the ScanNet and SceneNN benchmarks, our approach achieves state-of-the-art performance in online, open-vocabulary 3D instance segmentation with leading efficiency.
Problem

Research questions and friction points this paper is trying to address.

Online 3D open-vocabulary segmentation for progressive scene reconstruction.
Efficient spatial overlap identification for 2D mask merging in 3D.
Real-time 3D instance segmentation using visual foundation models.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages visual foundation models for 2D mask generation
Uses voxel hashing for efficient 3D scene querying
Implements zero-shot 3D merging via similarity-based filtering
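The similarity-based filtering in the last point can be illustrated with a minimal greedy merge over voxel sets. This is a sketch under assumptions, not the paper's algorithm: the function name `merge_masks`, the IoU threshold value, and the greedy best-match policy are placeholders; the paper may combine geometric overlap with 2D mask similarity cues. The idea shown is that a newly lifted 2D mask is attached to the existing 3D instance it overlaps most, or starts a new instance, with no learned components involved.

```python
def merge_masks(instances, new_mask_voxels, iou_threshold=0.3):
    """Greedy zero-shot merging of a lifted 2D mask into 3D instances.

    instances: list of sets of voxel keys, one set per 3D instance.
    new_mask_voxels: set of voxel keys occupied by the new lifted mask.
    Returns the index of the instance the mask was merged into (or newly created).
    """
    best_iou, best_idx = 0.0, None
    for i, inst in enumerate(instances):
        inter = len(inst & new_mask_voxels)
        union = len(inst | new_mask_voxels)
        iou = inter / union if union else 0.0
        if iou > best_iou:
            best_iou, best_idx = iou, i

    if best_idx is not None and best_iou >= iou_threshold:
        # Sufficient spatial overlap: fuse the mask into the existing instance.
        instances[best_idx] |= new_mask_voxels
        return best_idx

    # No instance passes the filter: the mask founds a new 3D instance.
    instances.append(set(new_mask_voxels))
    return len(instances) - 1
```

Because the filter is a fixed similarity test rather than a trained classifier, the fusion step needs no task-specific training, which is what makes the pipeline zero-shot.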