🤖 AI Summary
This study investigates the hierarchical evolution of visual representations in Vision Transformers (ViTs): specifically, how low-level features (e.g., color, texture) progressively abstract into high-level semantic concepts (e.g., objects, animals). Method: We propose a neuron-annotation-based, layer-wise conceptual quantification framework integrating feature visualization, concept saliency estimation, and cross-model comparative analysis. Contribution/Results: We provide the first systematic observation of a monotonic increase in concept complexity with network depth in ViTs. Quantitatively, we reveal how pretraining and fine-tuning modulate concept diversity and semantic drift. Experiments confirm that ViTs exhibit CNN-like hierarchical representation: deeper layers host 42% more distinct concepts, with significantly enhanced category specificity. These findings provide theoretical foundations for model interpretability and architectural design.
📝 Abstract
Vision Transformers (ViTs) are increasingly used across computer vision tasks due to their powerful representation capabilities. However, how ViTs process information layer by layer remains understudied. Numerous studies have shown that convolutional neural networks (CNNs) extract features of increasing complexity throughout their layers, a property that is crucial for tasks like domain adaptation and transfer learning. ViTs, lacking the inductive biases of CNNs, can potentially learn global dependencies from the first layers onward thanks to their attention mechanisms. Given the growing importance of ViTs in computer vision, a better layer-wise understanding of ViTs is needed. In this work, we present a novel, layer-wise analysis of the concepts encoded in state-of-the-art ViTs using neuron labeling. Our findings reveal that ViTs encode concepts of increasing complexity throughout the network. Early layers primarily encode basic features such as colors and textures, while later layers represent more specific classes, including objects and animals. As the complexity of encoded concepts increases, the number of concepts represented in each layer also rises, reflecting a more diverse and specific set of features. Additionally, different pretraining strategies influence the quantity and category of encoded concepts, and fine-tuning to specific downstream tasks generally reduces the number of encoded concepts and shifts them toward more relevant categories.
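The layer-wise concept quantification described above can be illustrated with a minimal sketch: given neuron labels (layer index, concept string) as a labeling tool might emit them, tally the distinct concepts per layer. The input format, function name, and toy labels here are hypothetical, not the paper's actual pipeline.

```python
from collections import defaultdict

def concepts_per_layer(neuron_labels):
    """Count distinct concepts encoded at each layer.

    neuron_labels: iterable of (layer_idx, concept) pairs, e.g. as
    produced by a neuron-labeling method (hypothetical input format).
    """
    layers = defaultdict(set)
    for layer, concept in neuron_labels:
        layers[layer].add(concept)
    # Sort by depth so the tally reads shallow-to-deep.
    return {layer: len(cs) for layer, cs in sorted(layers.items())}

# Toy labels mimicking the reported trend: early layers encode a few
# low-level features; deeper layers a larger, more specific concept set.
labels = [
    (0, "red"), (0, "striped"), (0, "red"),
    (5, "dog"), (5, "wheel"), (5, "dog ear"), (5, "terrier"),
]
print(concepts_per_layer(labels))  # {0: 2, 5: 4}
```

Comparing these per-layer tallies across pretraining strategies, or before and after fine-tuning, is one simple way to expose the concept-count shifts the abstract describes.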