🤖 AI Summary
This work investigates the representational structure of vision encoders in multi-object scenes, formalizing “structured representation” as two properties: binding object-specific information into discrete tokens, and segregating objects into separate sets of tokens to minimize cross-object entanglement. Using object-decoding tasks built on the COCO dataset, the study probes token- and layer-level representations to benchmark encoders pre-trained on classification (ViT), vision-language objectives (CLIP, BLIP, FLAVA), and self-supervision (DINO, DINOv2). Results show that objects are represented unevenly depending on their relevance to the pre-training objective, an effect especially pronounced in the [CLS] token commonly used for downstream tasks, while networks and layers with more structured representations retain better information about individual objects. The authors also propose formal measures quantifying the two properties, intended to guide the selection and adaptation of vision encoders for downstream multi-object tasks.
📝 Abstract
In this work, we interpret the representations of multi-object scenes in vision encoders through the lens of structured representations. Structured representations allow individual objects to be modeled distinctly and used flexibly depending on the task context, for both scene-level and object-specific tasks. These capabilities play a central role in human reasoning and generalization, allowing us to abstract away irrelevant details and focus on relevant information in a compact and usable form. We define structured representations as those that adhere to two specific properties: binding specific object information into discrete representation units, and segregating object representations into separate sets of tokens to minimize cross-object entanglement. Based on these properties, we evaluate and compare image encoders pre-trained on classification (ViT), large vision-language models (CLIP, BLIP, FLAVA), and self-supervised methods (DINO, DINOv2). We examine the token representations by creating object-decoding tasks that measure the ability of specific tokens to capture individual objects in multi-object scenes from the COCO dataset. This analysis provides insights into how object-wise representations are distributed across tokens and layers within these vision encoders. Our findings highlight significant differences in the representation of objects depending on their relevance to the pre-training objective, with this effect particularly pronounced in the [CLS] token (often used for downstream tasks). Meanwhile, networks and layers that exhibit more structured representations retain better information about individual objects. To guide practical applications, we propose formal measures to quantify the two properties of structured representations, aiding in selecting and adapting vision encoders for downstream tasks.
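The token-level object-decoding idea above can be sketched in code. The sketch below is a hypothetical illustration, not the paper's implementation: the ridge-regression probe, the synthetic per-token features, and all function names are assumptions. The intuition it demonstrates is that a token "binds" an object when its features linearly decode that object's class, and representations are "segregated" when mismatched token-object pairs decode at only chance level.

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge_probe(X, Y, lam=1e-2):
    """Closed-form ridge regression: weights mapping features X to one-hot labels Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def decoding_accuracy(X_tr, Y_tr, X_te, Y_te):
    """Fit a linear probe on token features and report test classification accuracy."""
    W = ridge_probe(X_tr, Y_tr)
    pred = (X_te @ W).argmax(axis=1)
    return float((pred == Y_te.argmax(axis=1)).mean())

# Synthetic stand-in for per-token encoder features: tokens assigned to
# object k carry signal along a k-specific direction, plus noise.
n, d, k = 600, 64, 3
labels = rng.integers(0, k, size=n)
Y = np.eye(k)[labels]
directions = rng.normal(size=(k, d))
X = Y @ directions + 0.5 * rng.normal(size=(n, d))

# Binding: how well a token's own features decode its assigned object.
bind = decoding_accuracy(X[:400], Y[:400], X[400:], Y[400:])

# Segregation: decoding an object from tokens assigned to *other* objects
# should sit near chance; here we probe randomly mismatched token-label pairs.
perm = rng.permutation(n)
seg_leak = decoding_accuracy(X[:400], Y[perm][:400], X[400:], Y[perm][400:])

print(f"binding accuracy: {bind:.2f}")        # high when tokens bind objects
print(f"cross-object leakage: {seg_leak:.2f}")  # near chance (1/k) when segregated
```

In this toy setting the binding probe scores near-perfectly while the mismatched probe hovers around chance; applied to real encoder tokens, the gap between the two readings plays the role of a structuredness signal.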