🤖 AI Summary
Existing controllable image generation methods often induce global distortions in non-target regions during object-level editing (e.g., recoloring, background replacement). To address this, we propose the Neural Universal Scene Descriptor (Neural USD), a hierarchical, object-centric structured representation that explicitly decouples appearance, geometry, and pose. Built upon the USD standard, our conditional control framework integrates signal-isolation mechanisms and lightweight fine-tuning strategies to ensure attribute independence and local consistency across multi-attribute edits. The method is compatible with mainstream diffusion- and GAN-based generative models, enabling high-fidelity, fine-grained, iterative object-level editing in complex scenes. Experiments demonstrate significant improvements in editing accuracy and controllability. Neural USD establishes a general, scalable paradigm for scene editing in controllable image generation, offering principled structural guidance for disentangled, localized manipulation.
📝 Abstract
Remarkable progress has been made in controllable generative modeling, especially over the last few years, yet challenges remain. One of them is precise, iterative object editing: in many current methods, editing a generated image by changing the conditioning signals (for example, recoloring a particular object in the scene, or replacing the background while keeping other elements unchanged) often leads to unintended global changes. In this work, we take first steps toward addressing these challenges. Taking inspiration from the Universal Scene Description (USD) standard developed in the computer graphics community, we introduce the "Neural Universal Scene Descriptor," or Neural USD. In this framework, scenes and objects are represented in a structured, hierarchical manner. This accommodates diverse conditioning signals, minimizes model-specific constraints, and enables per-object control over appearance, geometry, and pose. We further apply a fine-tuning approach that ensures these control signals remain disentangled from one another. We evaluate several design considerations for our framework and demonstrate how Neural USD enables iterative and incremental editing workflows. More information at: https://escontrela.me/neural_usd .
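To make the "structured, hierarchical" idea concrete, here is a minimal sketch of what a USD-style object-centric scene descriptor with disentangled attributes could look like. This is an illustrative toy, not the paper's actual schema: the class names (`ObjectPrim`, `SceneDescriptor`), the attribute layout, and the `edit_appearance` helper are all assumptions made for this example. The point it demonstrates is that a local edit touches exactly one attribute of one object, leaving geometry, pose, and all other objects untouched.

```python
from dataclasses import dataclass, field

# Hypothetical sketch (not the paper's real data model): each object is a
# "prim" in a hierarchy, carrying separate appearance / geometry / pose slots.

@dataclass
class ObjectPrim:
    name: str
    appearance: dict   # e.g. {"base_color": (r, g, b)} — illustrative only
    geometry: str      # e.g. a mesh or shape identifier
    pose: tuple        # e.g. (x, y, z, yaw) — illustrative only
    children: list = field(default_factory=list)

@dataclass
class SceneDescriptor:
    root: ObjectPrim

    def edit_appearance(self, name, **attrs):
        # Recursively find the named prim and update only its appearance
        # dict; geometry, pose, and every other prim are left unchanged,
        # mirroring the per-object, per-attribute control the paper targets.
        def walk(prim):
            if prim.name == name:
                prim.appearance.update(attrs)
                return True
            return any(walk(c) for c in prim.children)
        return walk(self.root)

scene = SceneDescriptor(
    ObjectPrim("scene", {}, "root", (0, 0, 0, 0), children=[
        ObjectPrim("car", {"base_color": (0.8, 0.1, 0.1)}, "sedan_mesh",
                   (2.0, 0.0, 0.0, 90.0)),
        ObjectPrim("background", {"sky": "overcast"}, "hdri_dome",
                   (0, 0, 0, 0)),
    ]),
)

# Recolor only the car; its pose and the background prim are untouched.
scene.edit_appearance("car", base_color=(0.1, 0.1, 0.8))
```

In the actual framework, such a descriptor would condition a generative model rather than be rendered directly; the sketch only illustrates why an explicit hierarchy makes incremental, localized edits natural to express.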