Know3D: Prompting 3D Generation with Knowledge from Vision-Language Models

📅 2026-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing single-view 3D generation methods often produce random or implausible geometry in occluded regions: lacking global structural priors, they fail to align with user intent. This work proposes a hybrid architecture that integrates a vision-language model (VLM) with a diffusion model, using a hidden-state injection mechanism to carry rich semantic knowledge from the VLM into the 3D generation process. This enables text-guided reconstruction of backside structures and, for the first time, harnesses the semantic priors of large multimodal language models for this task. The method thereby overcomes the limitations of conventional approaches, which rely solely on limited 3D training data and lack explicit semantic guidance, improving the geometric plausibility, semantic consistency, and user controllability of the generated 3D shapes.

📝 Abstract
Recent advances in 3D generation have improved the fidelity and geometric detail of synthesized 3D assets. However, due to the inherent ambiguity of single-view observations and the lack of robust global structural priors caused by limited 3D training data, the unseen regions produced by existing models are often stochastic and difficult to control, and may fail to align with user intentions or yield implausible geometries. In this paper, we propose Know3D, a novel framework that incorporates rich knowledge from multimodal large language models into the 3D generative process via latent hidden-state injection, enabling language-controllable generation of back views for 3D assets. We utilize a VLM-diffusion model, in which the VLM is responsible for semantic understanding and guidance, while the diffusion model acts as a bridge that transfers semantic knowledge from the VLM to the 3D generation model. In this way, we bridge the gap between abstract textual instructions and the geometric reconstruction of unobserved regions, transforming the traditionally stochastic back-view hallucination into a semantically controllable process and demonstrating a promising direction for future 3D generation models.
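The abstract does not give implementation details of the latent hidden-state injection. As a rough illustration only, such a mechanism could resemble cross-attention conditioning, where latent tokens from the 3D generator attend to hidden states exported from a VLM and the result is added back residually. All names, shapes, and the use of random (rather than learned) projections below are hypothetical sketches, not taken from the paper:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def inject_vlm_hidden_states(latents, vlm_hidden, d_k=16, seed=0):
    """Hypothetical sketch: condition 3D-generator latents on VLM hidden
    states via cross-attention, then inject the result residually.

    latents:    (N, d) latent tokens of the 3D generator (queries)
    vlm_hidden: (M, d) hidden states taken from a VLM (keys/values)
    """
    rng = np.random.default_rng(seed)
    d = latents.shape[1]
    # In a real model these projections are learned; random here for the sketch.
    Wq = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wk = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wv = rng.standard_normal((d, d)) / np.sqrt(d)
    Q = latents @ Wq          # (N, d_k)
    K = vlm_hidden @ Wk       # (M, d_k)
    V = vlm_hidden @ Wv       # (M, d)
    attn = softmax(Q @ K.T / np.sqrt(d_k))  # (N, M) attention over VLM states
    # Residual injection: semantic context is added to the latent tokens.
    return latents + attn @ V

latents = np.random.default_rng(1).standard_normal((8, 32))
vlm_hidden = np.random.default_rng(2).standard_normal((5, 32))
out = inject_vlm_hidden_states(latents, vlm_hidden)
print(out.shape)  # (8, 32): same shape as the input latents
```

Keeping the output shape identical to the input latents is what makes residual injection drop-in: the downstream 3D decoder is unchanged, and only the latent content is steered by the VLM's semantics.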
Problem

Research questions and friction points this paper is trying to address.

3D generation
single-view ambiguity
unseen region hallucination
geometric plausibility
structural prior
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D generation
vision-language models
semantic control
diffusion models
latent injection