From Colors to Classes: Emergence of Concepts in Vision Transformers

📅 2025-03-31
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This study investigates the hierarchical evolution of visual representations in Vision Transformers (ViTs): specifically, how low-level features (e.g., color, texture) are progressively abstracted into high-level semantic concepts (e.g., objects, animals). Method: We propose a neuron-annotation-based, layer-wise conceptual quantification framework integrating feature visualization, concept saliency estimation, and cross-model comparative analysis. Contribution/Results: We systematically observe a monotonic increase in concept complexity with network depth in ViTs. Quantitatively, we reveal how pretraining and fine-tuning modulate concept diversity and semantic drift. Experiments confirm that ViTs exhibit CNN-like hierarchical representation: deeper layers host 42% more distinct concepts, with significantly enhanced category specificity. These findings provide theoretical foundations for model interpretability and architectural design.

๐Ÿ“ Abstract
Vision Transformers (ViTs) are increasingly utilized in various computer vision tasks due to their powerful representation capabilities. However, it remains understudied how ViTs process information layer by layer. Numerous studies have shown that convolutional neural networks (CNNs) extract features of increasing complexity throughout their layers, which is crucial for tasks like domain adaptation and transfer learning. ViTs, lacking the same inductive biases as CNNs, can potentially learn global dependencies from the first layers due to their attention mechanisms. Given the increasing importance of ViTs in computer vision, there is a need to improve the layer-wise understanding of ViTs. In this work, we present a novel, layer-wise analysis of concepts encoded in state-of-the-art ViTs using neuron labeling. Our findings reveal that ViTs encode concepts with increasing complexity throughout the network. Early layers primarily encode basic features such as colors and textures, while later layers represent more specific classes, including objects and animals. As the complexity of encoded concepts increases, the number of concepts represented in each layer also rises, reflecting a more diverse and specific set of features. Additionally, different pretraining strategies influence the quantity and category of encoded concepts, with finetuning to specific downstream tasks generally reducing the number of encoded concepts and shifting the concepts to more relevant categories.
Problem

Research questions and friction points this paper is trying to address.

Understanding layer-wise information processing in Vision Transformers
Analyzing concept emergence complexity across ViT layers
Investigating pretraining impact on encoded concept diversity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Layer-wise analysis using neuron labeling
Reveals increasing complexity of concepts
Examines pretraining strategies' impact
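The layer-wise neuron-labeling analysis described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the per-neuron concept labels would in practice come from an automated neuron-annotation tool, and the layer indices, labels, and helper names here are invented for the example.

```python
from collections import Counter

# Hypothetical concept labels assigned to neurons in three ViT layers.
# Early layers tend to get low-level labels (colors, textures), deeper
# layers more specific class-like labels (objects, animals).
neuron_labels = {
    0:  ["red", "blue", "striped", "red"],        # early layer: basic features
    6:  ["fur", "grid", "leaves", "wheel"],       # middle layer: textures/parts
    11: ["dog", "cat", "car", "airplane"],        # late layer: object classes
}

def concepts_per_layer(labels):
    """Number of distinct concepts encoded in each layer."""
    return {layer: len(set(lab)) for layer, lab in labels.items()}

def most_common_concepts(labels, layer, k=2):
    """The k most frequently assigned concept labels in a given layer."""
    return Counter(labels[layer]).most_common(k)

# Comparing these counts across pretraining or fine-tuning variants of the
# same model is the kind of cross-model analysis the paper performs.
print(concepts_per_layer(neuron_labels))
```

With real labels, plotting `concepts_per_layer` against depth would reveal the reported trend of a rising number of distinct, increasingly specific concepts in later layers.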
Teresa Dorszewski
Department of Applied Mathematics and Computer Science, Technical University of Denmark
Lenka Tětková
Department of Applied Mathematics and Computer Science, Technical University of Denmark
Robert Jenssen
Visual Intelligence, UiT The Arctic University of Norway; Norwegian Computing Center; Pioneer Centre for AI, UCPH
Machine learning, information theoretic learning, kernel methods, deep learning, health data analytics
Lars Kai Hansen
Professor, Cognitive Systems, DTU Compute, Technical University of Denmark
Machine learning, AI, neuroimaging, cognitive systems, signal processing
Kristoffer Knutsen Wickstrøm
Department of Physics and Technology, UiT The Arctic University of Norway