AGO: Adaptive Grounding for Open World 3D Occupancy Prediction

📅 2025-04-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses open-world 3D semantic occupancy prediction, aiming to generate voxelized 3D representations from multi-view sensor inputs while jointly recognizing both known and unknown objects—thereby overcoming limitations of conventional closed-set label spaces and cross-modal representation misalignment between vision and language. We propose an adaptive grounding mechanism comprising: (i) similarity-driven 3D pseudo-labeling for self-supervised training; (ii) a cross-modal adapter that aligns image embeddings from vision-language models (VLMs) with voxel features; and (iii) integration of text-prompt guidance with self-supervised learning. On Occ3D-nuScenes, our method achieves significant zero-shot and few-shot performance gains for unknown object prediction. In the closed-world setting, it attains state-of-the-art self-supervised performance, improving mean IoU by 4.09 percentage points over prior art.
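The similarity-driven 3D pseudo-labeling step can be sketched as follows. This is a minimal illustration, not the paper's implementation: each voxel feature (assumed already projected into the VLM embedding space) is assigned the class whose text-prompt embedding it is most cosine-similar to, and that assignment serves as a self-supervised pseudo-label. Function names, dimensions, and the toy data are illustrative.

```python
import numpy as np

def cosine_pseudo_labels(voxel_feats: np.ndarray, text_embs: np.ndarray) -> np.ndarray:
    """Assign each voxel the class whose prompt embedding is most similar.

    voxel_feats: (N, D) per-voxel features, assumed to live in the VLM space.
    text_embs:   (C, D) embeddings of class prompts (e.g. "a photo of a car").
    Returns:     (N,) pseudo-label indices in [0, C).
    """
    # L2-normalize both sides so the dot product is cosine similarity.
    v = voxel_feats / (np.linalg.norm(voxel_feats, axis=1, keepdims=True) + 1e-8)
    t = text_embs / (np.linalg.norm(text_embs, axis=1, keepdims=True) + 1e-8)
    sim = v @ t.T  # (N, C) voxel-to-prompt cosine similarities
    return sim.argmax(axis=1)

# Toy example: 4 voxels, 3 class prompts, 8-dim embeddings.
rng = np.random.default_rng(0)
labels = cosine_pseudo_labels(rng.normal(size=(4, 8)), rng.normal(size=(3, 8)))
```

In practice the similarity scores would also be thresholded or calibrated before being used as training targets, since low-confidence voxels are better left unlabeled.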

📝 Abstract
Open-world 3D semantic occupancy prediction aims to generate a voxelized 3D representation from sensor inputs while recognizing both known and unknown objects. Transferring open-vocabulary knowledge from vision-language models (VLMs) offers a promising direction but remains challenging: methods based on VLM-derived 2D pseudo-labels with traditional supervision are limited by a predefined label space and lack general prediction capabilities, while direct alignment with pretrained image embeddings fails to achieve reliable performance due to often inconsistent image and text representations in VLMs. To address these challenges, we propose AGO, a novel 3D occupancy prediction framework with adaptive grounding to handle diverse open-world scenarios. AGO first encodes surrounding images and class prompts into 3D and text embeddings, respectively, leveraging similarity-based grounding training with 3D pseudo-labels. Additionally, a modality adapter maps 3D embeddings into a space aligned with VLM-derived image embeddings, reducing modality gaps. Experiments on Occ3D-nuScenes show that AGO improves unknown object prediction in zero-shot and few-shot transfer while achieving state-of-the-art closed-world self-supervised performance, surpassing prior methods by 4.09 mIoU.
Problem

Research questions and friction points this paper is trying to address.

Open-world 3D semantic occupancy prediction for known and unknown objects
Challenges in transferring open-vocabulary knowledge from vision-language models
Inconsistent image and text representations in VLMs hinder direct alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive grounding for open-world 3D occupancy
Similarity-based training with 3D pseudo-labels
Modality adapter reduces VLM embedding gaps
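The modality adapter named above can be pictured as a small MLP that maps 3D voxel embeddings into the VLM image-embedding space. The sketch below is an assumption-laden illustration, not the paper's architecture: the layer count, dimensions, and random weights are placeholders, and in training the adapter would be fit with a similarity or alignment loss against VLM image embeddings.

```python
import numpy as np

class ModalityAdapter:
    """Illustrative two-layer MLP mapping 3D embeddings (d_in) into a
    VLM image-embedding space (d_out). Weights are random placeholders;
    a real adapter would be trained with an alignment loss."""

    def __init__(self, d_in: int, d_hidden: int, d_out: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(scale=0.02, size=(d_in, d_hidden))
        self.w2 = rng.normal(scale=0.02, size=(d_hidden, d_out))

    def __call__(self, x: np.ndarray) -> np.ndarray:
        h = np.maximum(x @ self.w1, 0.0)  # ReLU hidden layer
        return h @ self.w2                # project into the VLM space

# Map 10 voxel embeddings (dim 64) into an assumed 512-dim VLM space.
adapter = ModalityAdapter(d_in=64, d_hidden=128, d_out=512)
aligned = adapter(np.ones((10, 64)))  # shape (10, 512)
```

Keeping the adapter lightweight is the usual design choice here: the frozen VLM supplies the open-vocabulary structure, and the adapter only bridges the modality gap rather than relearning the embedding space.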