Learning Object-Centric Representations Based on Slots in Real World Scenarios

📅 2025-09-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current diffusion models treat images holistically and rely heavily on text conditioning, limiting their capability for object-level, fine-grained editing. To address this, we propose SlotAdapt and the Invariant Slot Attention framework—the first approach to achieve unsupervised, cross-frame object representation disentanglement while preserving dynamic consistency. Our method integrates slot-based mechanisms with pretrained diffusion models, incorporating slot attention, a Transformer-based temporal aggregator, and registration-token-driven background modeling. It enables joint generation of objects and scenes via lightweight fine-tuning only. Evaluated on unsupervised video object segmentation and image compositing editing, our method achieves state-of-the-art performance. It significantly improves structural coherence and controllability in object removal, insertion, and replacement—overcoming the semantic granularity bottleneck inherent in text-conditioned diffusion generation.

Technology Category

Application Category

📝 Abstract
A central goal in AI is to represent scenes as compositions of discrete objects, enabling fine-grained, controllable image and video generation. Yet leading diffusion models treat images holistically and rely on text conditioning, creating a mismatch for object-level editing. This thesis introduces a framework that adapts powerful pretrained diffusion models for object-centric synthesis while retaining their generative capacity. We identify a core challenge: balancing global scene coherence with disentangled object control. Our method integrates lightweight, slot-based conditioning into pretrained models, preserving their visual priors while providing object-specific manipulation. For images, SlotAdapt augments diffusion models with a register token for background/style and slot-conditioned modules for objects, reducing text-conditioning bias and achieving state-of-the-art results in object discovery, segmentation, compositional editing, and controllable image generation. We further extend the framework to video. Using Invariant Slot Attention (ISA) to separate object identity from pose and a Transformer-based temporal aggregator, our approach maintains consistent object representations and dynamics across frames. This yields new benchmarks in unsupervised video object segmentation and reconstruction, and supports advanced editing tasks such as object removal, replacement, and insertion without explicit supervision. Overall, this work establishes a general and scalable approach to object-centric generative modeling for images and videos. By bridging human object-based perception and machine learning, it expands the design space for interactive, structured, and user-driven generative tools in creative, scientific, and practical domains.
Problem

Research questions and friction points this paper is trying to address.

Adapting diffusion models for object-centric image synthesis
Balancing global coherence with disentangled object control
Extending object-centric representations to video generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Slot-based conditioning integrates lightweight object control
SlotAdapt augments diffusion with register tokens and modules
Invariant Slot Attention separates object identity from pose
🔎 Similar Papers
No similar papers found.