🤖 AI Summary
Current text-to-visual generation models achieve high visual fidelity but struggle with compositional generalization and fine-grained semantic alignment, largely because training data offers limited compositional diversity, substantial annotation noise, and little scalable, high-quality dense supervision. To address this, we propose **Generate Any Scene**, a framework built on *scene graph programming*: a programmable scene graph generation paradigm grounded in a structured taxonomy of visual elements. It enables near-infinite combinatorial composition, controllable synthesis of scenes across realities (realistic to fantastical), and automatic graph-to-text conversion. Leveraging this framework, we construct a structured, scalable multimodal benchmark for evaluating image, video, and 3D generative models. Through DiT/UNet comparative analysis, self-improving training, knowledge distillation, and synthetic adversarial testing, we find that DiT backbones exhibit superior text alignment; we uncover systematic deficits in video and 3D models in dynamic consistency and human preference alignment; and we demonstrate significant gains in model self-enhancement, cross-capability transfer, and content moderation robustness.
📝 Abstract
DALL-E and Sora have gained attention by producing implausible visuals, such as "astronauts riding a horse in space." Despite the proliferation of text-to-vision models that have inundated the internet with synthetic visuals, from images to 3D assets, current benchmarks predominantly evaluate these models on real-world scenes paired with captions. We introduce Generate Any Scene, a framework that systematically enumerates scene graphs representing a vast array of visual scenes, spanning realistic to imaginative compositions. Generate Any Scene leverages 'scene graph programming', a method for dynamically constructing scene graphs of varying complexity from a structured taxonomy of visual elements. This taxonomy includes numerous objects, attributes, and relations, enabling the synthesis of an almost infinite variety of scene graphs. Using these structured representations, Generate Any Scene translates each scene graph into a caption, enabling scalable evaluation of text-to-vision models through standard metrics. We conduct extensive evaluations across multiple text-to-image, text-to-video, and text-to-3D models, presenting key findings on model performance. We find that DiT-backbone text-to-image models align more closely with input captions than UNet-backbone models. Text-to-video models struggle to balance dynamics and consistency, while both text-to-video and text-to-3D models show notable gaps in human preference alignment. We demonstrate the effectiveness of Generate Any Scene through three practical applications leveraging its generated captions: 1) a self-improving framework where models iteratively enhance their performance using generated data, 2) a distillation process to transfer specific strengths from proprietary models to open-source counterparts, and 3) improvements in content moderation by identifying and generating challenging synthetic data.
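To make the scene-graph-programming idea concrete, here is a minimal sketch of the pipeline the abstract describes: sample attributed objects and relations from a taxonomy, assemble them into a scene graph, and flatten the graph into a caption. The tiny taxonomy, function names, and caption template below are all hypothetical illustrations, not the paper's actual implementation.

```python
import random

# Toy taxonomy of visual elements (an illustrative stand-in for the paper's
# full taxonomy of objects, attributes, and relations).
TAXONOMY = {
    "objects": ["astronaut", "horse", "dragon", "bicycle"],
    "attributes": ["glowing", "tiny", "ancient"],
    "relations": ["riding", "next to", "chasing"],
}

def sample_scene_graph(n_objects=2, seed=None):
    """Sample a small scene graph: attributed object nodes plus relation edges."""
    rng = random.Random(seed)
    nodes = []
    for obj in rng.sample(TAXONOMY["objects"], n_objects):
        attrs = rng.sample(TAXONOMY["attributes"], rng.randint(0, 1))
        nodes.append({"object": obj, "attributes": attrs})
    # Chain consecutive objects with a sampled relation.
    edges = [(i, rng.choice(TAXONOMY["relations"]), i + 1)
             for i in range(n_objects - 1)]
    return {"nodes": nodes, "edges": edges}

def graph_to_caption(graph):
    """Flatten the scene graph into a caption with a simple template."""
    def phrase(node):
        return " ".join(node["attributes"] + [node["object"]])
    parts = [
        f"a {phrase(graph['nodes'][i])} {rel} a {phrase(graph['nodes'][j])}"
        for i, rel, j in graph["edges"]
    ]
    return ", ".join(parts)

graph = sample_scene_graph(seed=0)
print(graph_to_caption(graph))
```

Because the sampler is seeded, the same scene graph (and caption) can be regenerated deterministically, which is what makes benchmark captions reproducible at scale; varying `n_objects` is one simple knob for scaling graph complexity.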