Dreamweaver: Learning Compositional World Representations from Pixels

📅 2025-01-24
🤖 AI Summary
This work addresses a core challenge in self-supervised video representation learning: unsupervised disentanglement of objects and their attributes (e.g., color, shape, motion) from raw pixels, enabling compositional future-frame prediction. To this end, the paper proposes Dreamweaver, built around a Recurrent Block-Slot Unit (RBSU), which achieves hierarchical disentanglement and compositional imagination of videos without any textual, mask-based, or bounding-box supervision. The model is trained with a multi-step future-frame prediction objective, and its disentanglement is evaluated under the Disentanglement, Completeness, and Informativeness (DCI) framework. Experiments show Dreamweaver surpasses state-of-the-art world-modeling approaches across multiple benchmarks. Crucially, it supports cross-object attribute recombination to generate novel videos, validating genuine compositional imagination and strong out-of-distribution generalization.

📝 Abstract
Humans have an innate ability to decompose their perceptions of the world into objects and their attributes, such as colors, shapes, and movement patterns. This cognitive process enables us to imagine novel futures by recombining familiar concepts. However, replicating this ability in artificial intelligence systems has proven challenging, particularly when it comes to modeling videos into compositional concepts and generating unseen, recomposed futures without relying on auxiliary data, such as text, masks, or bounding boxes. In this paper, we propose Dreamweaver, a neural architecture designed to discover hierarchical and compositional representations from raw videos and generate compositional future simulations. Our approach leverages a novel Recurrent Block-Slot Unit (RBSU) to decompose videos into their constituent objects and attributes. In addition, Dreamweaver uses a multi-future-frame prediction objective to capture disentangled representations for dynamic concepts more effectively as well as static concepts. In experiments, we demonstrate our model outperforms current state-of-the-art baselines for world modeling when evaluated under the DCI framework across multiple datasets. Furthermore, we show how the modularized concept representations of our model enable compositional imagination, allowing the generation of novel videos by recombining attributes from different objects.
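The abstract's core ideas, slots split into per-attribute blocks, a recurrent update over frames, a multi-future-frame prediction loss, and recombination of attributes across objects, can be sketched in a few lines. Everything below is a hypothetical stand-in: the dimensions, the attention-based update, the leaky recurrence, and the linear per-horizon decoder are illustrative assumptions, not the paper's actual RBSU design.

```python
import numpy as np

# Minimal sketch of a block-slot recurrent update plus a multi-future-frame
# loss. All sizes and operations are illustrative assumptions.
rng = np.random.default_rng(0)

NUM_SLOTS, NUM_BLOCKS, BLOCK_DIM = 4, 3, 8   # slots ~ objects, blocks ~ attributes
FEAT_DIM = NUM_BLOCKS * BLOCK_DIM            # flattened slot width
N_TOKENS, FRAME_DIM, HORIZON = 16, 12, 3     # frame tokens, pixel dim, prediction steps

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def rbsu_step(slots, frame_feats, W_q, W_k, W_v):
    """One recurrent update: each slot attends over the current frame's
    features, and the result is folded back into per-attribute blocks."""
    flat = slots.reshape(NUM_SLOTS, FEAT_DIM)
    attn = softmax((flat @ W_q) @ (frame_feats @ W_k).T / np.sqrt(FEAT_DIM))
    update = attn @ (frame_feats @ W_v)      # (NUM_SLOTS, FEAT_DIM)
    flat = 0.9 * flat + 0.1 * update         # leaky recurrence (stand-in for a gated unit)
    return flat.reshape(NUM_SLOTS, NUM_BLOCKS, BLOCK_DIM)

def multi_future_loss(slots, future_frames, W_dec):
    """MSE summed over HORIZON future frames, decoded from pooled slots."""
    pooled = slots.reshape(NUM_SLOTS, FEAT_DIM).mean(axis=0)
    return sum(np.mean((pooled @ W_dec[t] - f) ** 2)
               for t, f in enumerate(future_frames))

W_q, W_k, W_v = (0.1 * rng.normal(size=(FEAT_DIM, FEAT_DIM)) for _ in range(3))
W_dec = 0.1 * rng.normal(size=(HORIZON, FEAT_DIM, FRAME_DIM))

slots = rng.normal(size=(NUM_SLOTS, NUM_BLOCKS, BLOCK_DIM))
for frame_feats in rng.normal(size=(5, N_TOKENS, FEAT_DIM)):  # 5 observed frames
    slots = rbsu_step(slots, frame_feats, W_q, W_k, W_v)

loss = multi_future_loss(slots, rng.normal(size=(HORIZON, FRAME_DIM)), W_dec)

# "Compositional imagination" then amounts to recombining blocks across slots,
# e.g. swapping attribute block 0 (say, color) between objects 0 and 1:
recombined = slots.copy()
recombined[[0, 1], 0] = slots[[1, 0], 0]
```

The point of the block structure is that an attribute can be swapped between two objects by exchanging a single block, which is what enables the recomposed futures the abstract describes.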
Problem

Research questions and friction points this paper is trying to address.

Autonomous Learning
Visual Understanding
Prediction Generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dreamweaver
Recurrent Block-Slot Units
Future Frame Prediction