🤖 AI Summary
This work addresses the challenge of simultaneously achieving structural precision, semantic interpretability, and identity controllability in existing 3D/4D scene representations. We propose the "Scene Language," a unified 3D/4D scene representation framework that integrates executable program structures, natural-language semantic tokens, and visual identity embeddings. Using a training-free inference technique, it synthesizes structured scene programs directly from pretrained language models and vision encoders, without fine-tuning, while explicitly modeling hierarchical relationships to support fine-grained editing. The representation is renderer-agnostic, interfacing with traditional, neural, and hybrid renderers to produce high-fidelity images. Experiments demonstrate significant improvements over baselines, including scene graphs, on complex scene generation tasks in fidelity, controllability, and editability.
📝 Abstract
We introduce the Scene Language, a visual scene representation that concisely and precisely describes the structure, semantics, and identity of visual scenes. It represents a scene with three key components: a program that specifies the hierarchical and relational structure of entities in the scene, words in natural language that summarize the semantic class of each entity, and embeddings that capture the visual identity of each entity. This representation can be inferred from pre-trained language models via a training-free inference technique, given text or image inputs. The resulting scene can be rendered into images using traditional, neural, or hybrid graphics renderers. Together, this forms a robust, automated system for high-quality 3D and 4D scene generation. Compared with existing representations like scene graphs, our proposed Scene Language generates complex scenes with higher fidelity, while explicitly modeling the scene structures to enable precise control and editing.
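To make the three components concrete, here is a minimal illustrative sketch in Python. All names (`Entity`, `chair`, `scene`) and the toy 8-dimensional embeddings are hypothetical, invented for illustration; they are not the paper's actual DSL or embedding format. The sketch shows how a program specifies hierarchical structure, a natural-language word labels each entity's semantic class, and an embedding vector carries its visual identity.

```python
# Hypothetical sketch of the Scene Language's three components:
# program structure (nested entities), semantic words, identity embeddings.
from dataclasses import dataclass, field

@dataclass
class Entity:
    word: str                                      # natural-language semantic class
    embedding: list                                # visual identity embedding (toy placeholder)
    children: list = field(default_factory=list)   # hierarchical structure
    transform: tuple = (0.0, 0.0, 0.0)             # pose relative to parent (toy)

def chair(identity_emb):
    """Program fragment: a chair built from reusable sub-entities."""
    leg = lambda x, z: Entity("leg", identity_emb, transform=(x, 0.0, z))
    return Entity("chair", identity_emb, children=[
        Entity("seat", identity_emb, transform=(0.0, 0.5, 0.0)),
        leg(-0.2, -0.2), leg(0.2, -0.2), leg(-0.2, 0.2), leg(0.2, 0.2),
    ])

def scene():
    """Top-level program: a table with four chairs around it."""
    table = Entity("table", [0.1] * 8)
    chairs = [chair([0.3] * 8) for i in range(4)]
    for i, c in enumerate(chairs):
        c.transform = ((-1) ** i * 0.8, 0.0, (-1) ** (i // 2) * 0.8)
    return Entity("scene", [0.0] * 8, children=[table] + chairs)

root = scene()
print(root.word, len(root.children))  # scene 5
```

Because the scene is a program rather than a flat list, edits like "move one chair" or "swap the chair's identity embedding" touch a single node while the hierarchy keeps the rest of the scene consistent, which is the controllability property the summary describes.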