Does Object Binding Naturally Emerge in Large Pretrained Vision Transformers?

📅 2025-10-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
It remains unclear whether vision transformers (ViTs) inherently develop object-binding capability—specifically, the ability to discern whether two image patches originate from the same object (IsSameObject)—during self-supervised pretraining. Method: We design a similarity probe to decode IsSameObject signals from patch embeddings across ViT layers, complemented by low-dimensional subspace analysis and ablation studies of attention-guided mechanisms. Contribution/Results: We find that IsSameObject capability robustly emerges in mid-to-high layers of pretrained ViTs and resides within a separable, low-dimensional semantic subspace. The probe achieves >90% accuracy across multiple ViT variants; removing this subspace significantly degrades downstream segmentation and detection performance. These results demonstrate that symbolic object-level knowledge can spontaneously arise in purely connectionist architectures—challenging the prevailing view that ViTs lack explicit object understanding—and provide novel evidence for structured, interpretable representations in neural networks.
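The similarity probe described above pairs patch embeddings and decides whether both patches come from the same object. A minimal sketch of that idea, using a plain cosine-similarity threshold on synthetic embeddings (the function names, the toy data, and the threshold are illustrative assumptions; the paper trains a learned probe rather than thresholding raw similarity):

```python
import numpy as np

def cosine_similarity_matrix(E):
    """Pairwise cosine similarities between patch embeddings E of shape (n_patches, d)."""
    E_norm = E / np.linalg.norm(E, axis=1, keepdims=True)
    return E_norm @ E_norm.T

def probe_is_same_object(E, threshold=0.9):
    """Predict IsSameObject for every patch pair by thresholding similarity."""
    S = cosine_similarity_matrix(E)
    return S > threshold

# Toy data: patches 0-1 belong to one "object", patches 2-3 to another.
obj_a = np.array([1.0, 0.0, 0.0, 0.0])
obj_b = np.array([0.0, 1.0, 0.0, 0.0])
E = np.stack([obj_a, obj_a + 0.01, obj_b, obj_b + 0.01])

pred = probe_is_same_object(E)
# Same-object pairs have near-identical embeddings, so their
# cosine similarity exceeds the threshold; cross-object pairs do not.
```

In the paper the probe is applied layer by layer, so a curve of probe accuracy over depth reveals where in the network the IsSameObject signal emerges.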

📝 Abstract
Object binding, the brain's ability to bind the many features that collectively represent an object into a coherent whole, is central to human cognition. It groups low-level perceptual features into high-level object representations, stores those objects efficiently and compositionally in memory, and supports human reasoning about individual object instances. While prior work often imposes object-centric attention (e.g., Slot Attention) explicitly to probe these benefits, it remains unclear whether this ability naturally emerges in pre-trained Vision Transformers (ViTs). Intuitively, it could: recognizing which patches belong to the same object should be useful for downstream prediction and thus guide attention. Motivated by the quadratic nature of self-attention, we hypothesize that ViTs represent whether two patches belong to the same object, a property we term IsSameObject. We decode IsSameObject from patch embeddings across ViT layers using a similarity probe, which reaches over 90% accuracy. Crucially, this object-binding capability emerges reliably in self-supervised ViTs (DINO, MAE, CLIP), but is markedly weaker in ImageNet-supervised models, suggesting that binding is not a trivial architectural artifact, but an ability acquired through specific pretraining objectives. We further discover that IsSameObject is encoded in a low-dimensional subspace on top of object features, and that this signal actively guides attention. Ablating IsSameObject from model activations degrades downstream performance and works against the learning objective, implying that emergent object binding naturally serves the pretraining objective. Our findings challenge the view that ViTs lack object binding and highlight how symbolic knowledge of "which parts belong together" emerges naturally in a connectionist system.
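The ablation experiment in the abstract removes the IsSameObject subspace from model activations and measures the downstream impact. Mechanically, such an ablation is a linear projection; a minimal sketch, assuming the subspace is given as an orthonormal basis `U` (in practice `U` would be estimated from the data, e.g. from probe weights or principal directions, which is an assumption here, not the paper's exact procedure):

```python
import numpy as np

def ablate_subspace(E, U):
    """Remove from each embedding its component inside the subspace
    spanned by the orthonormal columns of U (shape (d, k))."""
    return E - (E @ U) @ U.T

# Toy example: ablate a 1-D subspace (the first coordinate axis)
# from a batch of 4-D embeddings.
E = np.array([[1.0, 2.0, 0.0, 0.0],
              [0.5, 1.0, 1.0, 0.0]])
U = np.array([[1.0], [0.0], [0.0], [0.0]])

E_ablated = ablate_subspace(E, U)
# The projected-out direction is zeroed; all orthogonal
# directions pass through unchanged.
```

Because the projection only touches a low-dimensional subspace, any drop in segmentation or detection performance after ablation can be attributed specifically to the removed IsSameObject signal rather than to a wholesale corruption of the features.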
Problem

Research questions and friction points this paper is trying to address.

Investigating whether object binding emerges naturally in pretrained Vision Transformers
Decoding same-object relationships from patch embeddings across ViT layers
Comparing binding capabilities across different pretraining objectives and architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Probing emergent object binding in ViTs
Decoding IsSameObject from patch embeddings
Analyzing binding in self-supervised ViTs