Layer by layer, module by module: Choose both for optimal OOD probing of ViT

📅 2026-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the degradation of intermediate-layer representations in Vision Transformers (ViTs) under out-of-distribution (OOD) conditions and identifies optimal probing locations. Through large-scale linear probing experiments, the authors systematically evaluate the representational capacity of different layers and modules in pretrained ViTs across varying degrees of distribution shift. They find that performance deterioration in deeper layers is primarily attributable to distributional shifts. Notably, at the module granularity, the internal activations of the feedforward network and the normalized outputs of multi-head attention emerge as optimal probing points under strong and weak distribution shifts, respectively—challenging the conventional practice of probing only block outputs. Extensive validation on multiple image classification benchmarks demonstrates that this probing strategy significantly improves downstream OOD performance.

📝 Abstract
Recent studies have observed that intermediate layers of foundation models often yield more discriminative representations than the final layer. While initially attributed to autoregressive pretraining, this phenomenon has also been identified in models trained via supervised and discriminative self-supervised objectives. In this paper, we conduct a comprehensive study to analyze the behavior of intermediate layers in pretrained vision transformers. Through extensive linear probing experiments across a diverse set of image classification benchmarks, we find that distribution shift between pretraining and downstream data is the primary cause of performance degradation in deeper layers. Furthermore, we perform a fine-grained analysis at the module level. Our findings reveal that standard probing of transformer block outputs is suboptimal; instead, probing the activation within the feedforward network yields the best performance under significant distribution shift, whereas the normalized output of the multi-head self-attention module is optimal when the shift is weak.
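The module-level probing idea can be made concrete with a toy sketch. The snippet below is a minimal NumPy illustration, not the paper's implementation: it builds a simplified single-head, pre-norm transformer block that exposes the three candidate probing points the abstract names (the FFN's internal activation, the layer-normalized attention output, and the conventional block output), then fits a closed-form ridge-regression probe on mean-pooled tokens. All names (`vit_block`, `linear_probe`, the weight keys) are illustrative assumptions, ReLU stands in for GELU, and a real study would use trained logistic probes on a pretrained ViT.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_norm(x, eps=1e-6):
    # Normalize over the feature (last) dimension.
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def vit_block(x, W):
    """Simplified pre-norm transformer block that also returns
    module-level activations (the candidate probing points)."""
    # Single-head self-attention on the pre-normed input.
    h = layer_norm(x)
    q, k, v = h @ W["q"], h @ W["k"], h @ W["v"]
    a = q @ k.transpose(0, 2, 1) / np.sqrt(q.shape[-1])
    a = np.exp(a - a.max(-1, keepdims=True))
    a /= a.sum(-1, keepdims=True)
    attn_out = (a @ v) @ W["o"]
    x = x + attn_out
    # Feedforward network (ReLU stands in for GELU here).
    h2 = layer_norm(x)
    ffn_hidden = np.maximum(h2 @ W["fc1"], 0.0)
    x = x + ffn_hidden @ W["fc2"]
    return x, {
        "mhsa_normed": layer_norm(attn_out),  # normalized attention output
        "ffn_act": ffn_hidden,                # FFN internal activation
        "block_out": x,                       # conventional probe point
    }

def linear_probe(feats, labels, lam=1e-2):
    """Closed-form ridge-regression probe on mean-pooled token features."""
    X = feats.mean(axis=1)                    # pool over tokens
    Y = np.eye(labels.max() + 1)[labels]      # one-hot targets
    Wp = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
    return (X @ Wp).argmax(-1)

# Tiny demo: 4 "images", 5 tokens each, 16-dim features, 32-dim FFN hidden.
d, hid = 16, 32
W = {k: rng.normal(scale=0.1, size=s) for k, s in {
    "q": (d, d), "k": (d, d), "v": (d, d), "o": (d, d),
    "fc1": (d, hid), "fc2": (hid, d)}.items()}
x = rng.normal(size=(4, 5, d))
out, acts = vit_block(x, W)
labels = np.array([0, 1, 2, 0])
preds = {name: linear_probe(f, labels) for name, f in acts.items()}
```

In a full experiment one would extract these activations from every block of a pretrained ViT (e.g. via forward hooks), fit a probe per location, and compare accuracy as a function of distribution shift; this sketch only shows where the three probe points sit inside a block.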
Problem

Research questions and friction points this paper is trying to address.

out-of-distribution
vision transformer
intermediate layers
distribution shift
representation probing
Innovation

Methods, ideas, or system contributions that make the work stand out.

intermediate layer probing
distribution shift
vision transformer
module-level analysis
out-of-distribution generalization