🤖 AI Summary
This work investigates the underlying mechanisms of DINOv2’s visual representations. Addressing the lack of clarity regarding its perceptual principles, we propose the Minkowski Representation Hypothesis (MRH), positing that internal concepts are encoded not via nonlinear sparsity but as convex combinations defined by prototypes, aligning with the geometric properties of multi-head attention. Using sparse autoencoders (SAEs) under the Linear Representation Hypothesis (LRH), we systematically analyze concept organization across classification, segmentation, and monocular depth estimation. We find that classification relies on suppression of “non-target” concepts, segmentation concentrates on boundary-detection subspaces, and depth estimation explicitly encodes three canonical monocular cues. Overall, the representation resides on a low-dimensional, connected manifold. This study establishes the first conceptual-space interpretation framework grounded in Minkowski geometry, moving beyond conventional linear and sparse modeling paradigms.
📝 Abstract
DINOv2 is routinely deployed to recognize objects, scenes, and actions, yet the nature of what it perceives remains unknown. As a working baseline, we adopt the Linear Representation Hypothesis (LRH) and operationalize it with sparse autoencoders (SAEs), producing a 32,000-unit dictionary that serves as the interpretability backbone of our study, which unfolds in three parts.
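The SAE operationalization of the LRH can be sketched in a few lines: each token is encoded into nonnegative concept activations and reconstructed as a linear combination of unit-norm dictionary atoms (directions). This is a minimal toy sketch, not the paper's implementation: the dimensions are shrunk (the actual dictionary has 32,000 units), and the weights here are random stand-ins for parameters that would be learned with a reconstruction loss plus sparsity penalty.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_concepts = 64, 512  # toy sizes; the paper's dictionary has 32,000 units

# Hypothetical (untrained) SAE parameters -- the real ones are learned
# with a reconstruction objective and a sparsity penalty.
W_enc = rng.normal(size=(d_model, n_concepts)) / np.sqrt(d_model)
b_enc = -0.5 * np.ones(n_concepts)  # negative bias pushes many codes to zero
D = rng.normal(size=(n_concepts, d_model))
D /= np.linalg.norm(D, axis=1, keepdims=True)  # unit-norm concept directions (LRH)

def sae(token):
    """Encode a token into nonnegative concept activations, then reconstruct linearly."""
    codes = np.maximum(token @ W_enc + b_enc, 0.0)  # ReLU gate -> sparse-ish codes
    recon = codes @ D                               # linear combination of atoms
    return codes, recon

codes, recon = sae(rng.normal(size=d_model))
```

Under the LRH, each active code selects one interpretable direction, and the token is approximately a sparse linear combination of those directions.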
In the first part, we analyze how different downstream tasks recruit concepts from our learned dictionary, revealing functional specialization: classification exploits "Elsewhere" concepts that fire everywhere except on target objects, implementing learned negations; segmentation relies on boundary detectors forming coherent subspaces; depth estimation draws on three distinct monocular depth cues matching visual neuroscience principles.
Following these functional results, we analyze the geometry and statistics of the concepts learned by the SAE. We find that representations are partly dense rather than strictly sparse; that the dictionary evolves toward greater coherence, departing from maximally orthogonal ideals (Grassmannian frames); and that, within an image, tokens occupy a low-dimensional, locally connected set that persists after positional information is removed. Together, these signatures suggest that representations are organized beyond linear sparsity alone.
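The coherence claim above has a standard quantitative form: the coherence of a dictionary is the maximum absolute cosine similarity between distinct unit-norm atoms, and Grassmannian frames are exactly the frames that attain the Welch lower bound. A small sketch (with a random toy dictionary, not the learned one) showing how one would measure this:

```python
import numpy as np

def coherence(D):
    """Max absolute cosine similarity between distinct unit-norm atoms of D."""
    D = D / np.linalg.norm(D, axis=1, keepdims=True)
    G = D @ D.T                 # Gram matrix of pairwise cosines
    np.fill_diagonal(G, 0.0)    # ignore self-similarity
    return np.abs(G).max()

def welch_bound(n_atoms, dim):
    """Lower bound on coherence; Grassmannian frames attain it exactly."""
    return np.sqrt((n_atoms - dim) / (dim * (n_atoms - 1)))

rng = np.random.default_rng(0)
D = rng.normal(size=(512, 64))  # toy dictionary: 512 atoms in 64 dimensions
mu = coherence(D)
bound = welch_bound(512, 64)
```

A dictionary whose coherence drifts well above the Welch bound during training is, in this sense, moving away from the maximally orthogonal (Grassmannian) ideal.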
Synthesizing these observations, we propose a refined view: tokens are formed as sums of convex mixtures of archetypes (e.g., a rabbit among animals, brown among colors, fluffy among textures). This structure is grounded in Gärdenfors' conceptual spaces and in the model's own mechanism: multi-head attention produces sums of convex mixtures, defining regions bounded by archetypes. We introduce the Minkowski Representation Hypothesis (MRH) and examine its empirical signatures and implications for interpreting vision-transformer representations.
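The mechanistic grounding can be made concrete: softmax attention weights are nonnegative and sum to one, so each head's output is a convex combination of its value vectors, and summing over heads yields a sum of convex mixtures (a Minkowski sum of convex regions bounded by the values). A toy sketch of a single head, with random matrices standing in for learned projections:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
n_tokens, d = 6, 4
q = rng.normal(size=d)              # one query token
K = rng.normal(size=(n_tokens, d))  # keys
V = rng.normal(size=(n_tokens, d))  # values: the "archetypes" in this view

w = softmax(q @ K.T / np.sqrt(d))   # nonnegative weights summing to 1
out = w @ V                         # convex combination: out lies in conv(V)
```

Because `w >= 0` and `w.sum() == 1`, the head output always lies inside the convex hull of its value vectors; stacking heads then sums such convex regions, which is the geometric picture the MRH builds on.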