Neural USD: An object-centric framework for iterative editing and control

📅 2025-10-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing controllable image generation methods often induce global distortions in non-target regions during object-level editing (e.g., recoloring, background replacement). To address this, we propose Neural Universal Scene Descriptor (Neural USD), a hierarchical, object-centric structured representation that explicitly decouples appearance, geometry, and pose. Built upon the USD standard, our conditional control framework integrates signal isolation mechanisms and lightweight fine-tuning strategies to ensure attribute independence and local consistency across multi-attribute edits. The method is compatible with mainstream diffusion- and GAN-based generative models, enabling high-fidelity, incremental, fine-grained, object-level iterative editing in complex scenes. Experiments demonstrate significant improvements in editing accuracy and controllability. Neural USD establishes a general, scalable paradigm for scene editing in controllable image generation, offering principled structural guidance for disentangled, localized manipulation.

Technology Category

Application Category

📝 Abstract
Amazing progress has been made in controllable generative modeling, especially over the last few years. However, some challenges remain. One of them is precise and iterative object editing. In many of the current methods, trying to edit the generated image (for example, changing the color of a particular object in the scene or changing the background while keeping other elements unchanged) by changing the conditioning signals often leads to unintended global changes in the scene. In this work, we take the first steps to address the above challenges. Taking inspiration from the Universal Scene Descriptor (USD) standard developed in the computer graphics community, we introduce the "Neural Universal Scene Descriptor" or Neural USD. In this framework, we represent scenes and objects in a structured, hierarchical manner. This accommodates diverse signals, minimizes model-specific constraints, and enables per-object control over appearance, geometry, and pose. We further apply a fine-tuning approach which ensures that the above control signals are disentangled from one another. We evaluate several design considerations for our framework, demonstrating how Neural USD enables iterative and incremental workflows. More information at: https://escontrela.me/neural_usd .
Problem

Research questions and friction points this paper is trying to address.

Enables precise object editing in generated images without global changes
Provides hierarchical scene representation for per-object appearance and geometry control
Addresses disentanglement of control signals for iterative editing workflows
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical scene representation for object editing
Disentangled control over appearance, geometry, pose
Fine-tuning approach for iterative incremental workflows
🔎 Similar Papers
No similar papers found.