View Selection for 3D Captioning via Diffusion Ranking

📅 2024-04-11
🏛️ European Conference on Computer Vision
📈 Citations: 21
Influential: 0
🤖 AI Summary
To address hallucinated captions in 3D object captioning caused by atypical rendered viewpoints, this paper proposes DiffuRank, an unsupervised view-ranking method based on diffusion models. Specifically, it leverages a pre-trained text-to-3D diffusion model to score the alignment between a 3D object and its 2D rendered views, then feeds the top-ranked views into GPT4-Vision to generate accurate, detailed captions. The method also adapts to pre-trained text-to-image models for a Visual Question Answering task, where it outperforms CLIP. Furthermore, the authors correct 200k erroneous captions in the Cap3D dataset and extend it to 1 million captions across the Objaverse and Objaverse-XL datasets, establishing a high-quality resource for 3D understanding and generation.

📝 Abstract
Scalable annotation approaches are crucial for constructing extensive 3D-text datasets, facilitating a broader range of applications. However, existing methods sometimes lead to the generation of hallucinated captions, compromising caption quality. This paper explores the issue of hallucination in 3D object captioning, with a focus on the Cap3D method, which renders 3D objects into 2D views for captioning using pre-trained models. We pinpoint a major challenge: certain rendered views of 3D objects are atypical, deviating from the training data of standard image captioning models and causing hallucinations. To tackle this, we present DiffuRank, a method that leverages a pre-trained text-to-3D model to assess the alignment between 3D objects and their 2D rendered views, where views with high alignment closely represent the object's characteristics. By ranking all rendered views and feeding the top-ranked ones into GPT4-Vision, we enhance the accuracy and detail of captions, enabling the correction of 200k captions in the Cap3D dataset and extending it to 1 million captions across the Objaverse and Objaverse-XL datasets. Additionally, we showcase the adaptability of DiffuRank by applying it to pre-trained text-to-image models for a Visual Question Answering task, where it outperforms the CLIP model.
Problem

Research questions and friction points this paper is trying to address.

Reducing hallucinated captions in 3D object captioning
Improving alignment between 3D objects and 2D views
Enhancing caption accuracy and detail via view ranking
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses DiffuRank to rank 3D-2D view alignment
Leverages GPT4-Vision for top-ranked view captioning
Extends Cap3D dataset to 1 million captions
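The rank-then-caption step described above can be sketched in a few lines. Here `alignment_score` is a hypothetical stand-in for the diffusion-model alignment estimate (e.g. a negative average denoising error per view); all names and signatures are illustrative, not the paper's actual API:

```python
from typing import Callable, List

def rank_views(views: List[str],
               alignment_score: Callable[[str], float],
               top_k: int = 6) -> List[str]:
    """Return the top_k rendered views, ordered by descending alignment score.

    `alignment_score` is a placeholder for the diffusion-based 3D-2D
    alignment estimate; higher means the view better represents the object.
    """
    return sorted(views, key=alignment_score, reverse=True)[:top_k]

# Toy scores for illustration only (not real model outputs).
toy_scores = {"front": 0.92, "back": 0.81, "top": 0.35, "bottom": 0.28}
best = rank_views(list(toy_scores), toy_scores.get, top_k=2)
print(best)  # ['front', 'back']
```

Selecting several top-ranked views, rather than a single best view, matches the design described in the abstract: multiple well-aligned views are passed together to GPT4-Vision so the caption can cover the object from complementary angles.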