Pixel-level Scene Understanding in One Token: Visual States Need What-is-Where Composition

📅 2026-03-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing self-supervised methods struggle to learn visual state representations from videos that support sequential decision making: they fail to jointly model the semantic identity and spatial location of scene elements, the “what-is-where” structure. To address this limitation, this work proposes CroBo, a framework that compresses a reference image into a single global bottleneck token and uses that token as a contextual prior when reconstructing heavily masked local image patches from sparse visible cues. By combining masked image reconstruction, single-token global compression, and cross-frame perceptual consistency, CroBo encodes “what is where” in dynamic scenes at a fine grain. Experiments show that the method achieves state-of-the-art performance on multiple vision-based robot policy learning benchmarks, compressing scene composition effectively while accurately capturing dynamic changes.
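
To make the mechanism concrete, here is a minimal PyTorch sketch of the global-to-local objective described above, assuming a ViT-style patch pipeline. All module names, dimensions, the attention-pooling choice, and the masking interface are illustrative assumptions, not the authors' implementation; positional embeddings and other details are omitted for brevity.

```python
import torch
import torch.nn as nn

class GlobalToLocalReconstructor(nn.Module):
    """Compress a reference frame into ONE bottleneck token, then use it as
    context to reconstruct a heavily masked local crop of a target frame."""

    def __init__(self, dim=256, patch=16, depth=4):
        super().__init__()
        # Patch embeddings for the full reference frame and the local target crop.
        self.ref_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.tgt_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        # Learnable query that attention-pools the reference into a single token.
        self.bottleneck_query = nn.Parameter(torch.zeros(1, 1, dim))
        self.pool = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        # Shared token standing in for masked (hidden) target patches.
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.to_pixels = nn.Linear(dim, patch * patch * 3)

    def forward(self, reference, target_crop, keep_mask):
        # reference: (B, 3, H, W); target_crop: (B, 3, h, w)
        # keep_mask: (B, N) bool, True where a target patch stays visible.
        ref_tokens = self.ref_embed(reference).flatten(2).transpose(1, 2)
        query = self.bottleneck_query.expand(ref_tokens.size(0), -1, -1)
        # Compress the whole reference frame into one global bottleneck token.
        bottleneck, _ = self.pool(query, ref_tokens, ref_tokens)
        tgt = self.tgt_embed(target_crop).flatten(2).transpose(1, 2)
        # Replace heavily masked patches with the shared mask token.
        tgt = torch.where(keep_mask.unsqueeze(-1), tgt,
                          self.mask_token.expand_as(tgt))
        # Decode the crop with the bottleneck token prepended as context.
        decoded = self.decoder(torch.cat([bottleneck, tgt], dim=1))[:, 1:]
        return self.to_pixels(decoded)  # per-patch pixel predictions
```

The reconstruction loss would then be computed only on the masked target positions, following standard masked-image-modeling practice, so the single bottleneck token is pressured to carry scene-wide identity and location information.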

📝 Abstract
For robotic agents operating in dynamic environments, learning visual state representations from streaming video observations is essential for sequential decision making. Recent self-supervised learning methods have shown strong transferability across vision tasks, but they do not explicitly address what a good visual state should encode. We argue that effective visual states must capture what-is-where by jointly encoding the semantic identities of scene elements and their spatial locations, enabling reliable detection of subtle dynamics across observations. To this end, we propose CroBo, a visual state representation learning framework based on a global-to-local reconstruction objective. Given a reference observation compressed into a compact bottleneck token, CroBo learns to reconstruct heavily masked patches in a local target crop from sparse visible cues, using the global bottleneck token as context. This learning objective encourages the bottleneck token to encode a fine-grained representation of scene-wide semantic entities, including their identities, spatial locations, and configurations. As a result, the learned visual states reveal how scene elements move and interact over time, supporting sequential decision making. We evaluate CroBo on diverse vision-based robot policy learning benchmarks, where it achieves state-of-the-art performance. Reconstruction analyses and perceptual straightness experiments further show that the learned representations preserve pixel-level scene composition and encode what-moves-where across observations.
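
The abstract's perceptual-straightness analysis can be made concrete with a small metric: the mean angle between successive displacements of the per-frame state tokens, where lower curvature means a straighter, more predictable trajectory. The sketch below is a hedged illustration of that idea in the spirit of prior perceptual-straightness work; the function name and the (T, D) tensor layout are assumptions, not the paper's evaluation code.

```python
import torch

def trajectory_curvature(states: torch.Tensor) -> torch.Tensor:
    """Mean angle (radians) between successive displacement vectors.

    states: (T, D) tensor of per-frame visual state tokens for one video.
    Lower curvature means a straighter, more predictable representation
    trajectory across observations.
    """
    diffs = states[1:] - states[:-1]                       # (T-1, D) displacements
    diffs = diffs / diffs.norm(dim=-1, keepdim=True).clamp_min(1e-8)
    cosines = (diffs[:-1] * diffs[1:]).sum(dim=-1).clamp(-1.0, 1.0)
    return torch.arccos(cosines).mean()
```

Applied to bottleneck tokens extracted frame by frame, e.g. trajectory_curvature(torch.stack([encode(f) for f in frames])) with encode as a hypothetical per-frame state extractor, lower values would support the claim that the learned states encode what-moves-where across observations.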
Problem

Research questions and friction points this paper is trying to address.

visual state representation
what-is-where
scene understanding
robotic agents
dynamic environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

visual state representation
what-is-where composition
bottleneck token
global-to-local reconstruction
self-supervised learning