🤖 AI Summary
This work addresses a key limitation in self-supervised Vision Transformers (ViTs): the [CLS] token, optimized for an image-level objective, fails to capture fine-grained local information and therefore localizes objects poorly. The authors make the novel observation that object-centric signals are consistently present in the query (Q), key (K), and value (V) components across all transformer layers. Building on this insight, they propose Object-DINO, a training-free method that computes patch-level attention similarities across layers and clusters attention heads to automatically identify an object-centric cluster. Evaluated on unsupervised object discovery, Object-DINO achieves substantial improvements of 3.6–12.4 percentage points in CorLoc over prior approaches and also mitigates object hallucination in large multimodal models.
📝 Abstract
Self-supervised Vision Transformers (ViTs) like DINO show an emergent ability to discover objects, typically observed in the [CLS] token attention maps of the final layer. However, these maps often contain spurious activations, resulting in poor object localization. This is because the [CLS] token, trained on an image-level objective, summarizes the entire image instead of focusing on objects. This aggregation dilutes the object-centric information present in local, patch-level interactions. We analyze this by computing inter-patch similarity using the patch-level attention components (query, key, and value) across all layers. We find that: (1) Object-centric properties are encoded in the similarity maps derived from all three components ($q, k, v$), unlike prior work that uses only key features or the [CLS] token. (2) This object-centric information is distributed across the network, not confined to the final layer. Based on these insights, we introduce Object-DINO, a training-free method that extracts this distributed object-centric information. Object-DINO clusters attention heads across all layers based on their patch-level similarity maps and automatically identifies the object-centric cluster corresponding to all objects. We demonstrate Object-DINO's effectiveness on two applications: enhancing unsupervised object discovery (+3.6 to +12.4 CorLoc gains) and mitigating object hallucination in Multimodal Large Language Models by providing visual grounding. Our results demonstrate that exploiting this distributed object-centric information improves downstream tasks without additional training.
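To make the two core steps concrete, here is a minimal, illustrative sketch of (1) turning one attention head's patch-level features ($q$, $k$, or $v$) into an inter-patch cosine-similarity map, and (2) clustering heads by those maps with a simple two-way k-means. This is not the authors' implementation; the toy data, the two-cluster choice, and the deterministic k-means initialization are all assumptions made for illustration.

```python
import numpy as np

def patch_similarity_map(feats):
    """Cosine similarity between every pair of patches for one head.
    feats: (num_patches, dim) per-head q, k, or v features."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return f @ f.T  # (P, P) inter-patch similarity map

def cluster_heads(head_maps, n_iter=20):
    """Two-way k-means over flattened per-head similarity maps.
    Deterministic init: head 0 and the head farthest from it."""
    X = head_maps.reshape(len(head_maps), -1)
    c0 = X[0]
    c1 = X[np.argmax(((X - c0) ** 2).sum(1))]
    centers = np.stack([c0, c1])
    for _ in range(n_iter):
        dists = ((X[:, None] - centers[None]) ** 2).sum(-1)
        labels = dists.argmin(1)
        centers = np.stack([X[labels == c].mean(0) for c in (0, 1)])
    return labels

# Toy demo (synthetic, not real ViT features): 3 "object-centric"
# heads whose first 4 patches share one feature (an "object"), and
# 3 "diffuse" heads with purely random patch features.
rng = np.random.default_rng(0)
P, D = 16, 64

def head_feats(object_centric):
    f = rng.normal(size=(P, D))
    if object_centric:
        f[:4] = rng.normal(size=D)  # first 4 patches form an object
    return f

maps = np.stack([patch_similarity_map(head_feats(i < 3)) for i in range(6)])
labels = cluster_heads(maps)  # object-centric vs. diffuse heads
```

Object-centric heads produce a high-similarity block over the object's patches, so their flattened maps sit close together in feature space and fall into one cluster, which is the intuition behind automatically identifying the object-centric group of heads.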