🤖 AI Summary
This study investigates whether different vision models (CNNs and Transformers, including models trained as classifiers) employ similar intermediate processing steps when converging to comparable final representations. Through representational similarity analysis, cross-model inter-layer distance metrics, and dynamic tracking of processing trajectories, the work systematically compares the evolution of internal representations across model depths. The findings reveal that while representations at matched relative depths exhibit the highest similarity, substantial architectural differences persist in the processing pathways. Notably, classifiers actively discard low-level statistical information in their final layers, whereas Transformers exhibit smoother, more gradual representational transitions. These results uncover heterogeneous processing mechanisms underlying seemingly convergent representations, highlighting the fundamental influence of architectural design on information processing strategies.
📝 Abstract
Recent literature suggests that larger models are more likely to converge to similar, ``universal'' representations, despite different training objectives, datasets, or modalities. While this literature shows that model representations can end up in a similar region, we study here how vision models get to those representations -- in particular, do they also converge to the same intermediate steps and operations? We therefore study the processes that lead to convergent representations in different models. First, we quantify the distance between different models' representations at different stages. We then follow the evolution of these distances throughout processing, identifying the processing steps that differ most between models. We find that while layers at similar relative depths in different models have the most similar representations, strong differences remain. Classifier models, unlike the others, discard information about low-level image statistics in their final layers. CNN- and transformer-based models also behave differently, with transformer models applying smoother changes to representations from one layer to the next. These distinctions clarify the level and nature of convergence between model representations, and enable a more qualitative account of the underlying processes in image models.
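The abstract does not specify which representational distance is used; a common choice for comparing layer activations across models with different feature dimensions is linear CKA (centered kernel alignment). The sketch below is illustrative, not the paper's implementation; `linear_cka` and `cka_matrix` are hypothetical helper names:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA similarity between two activation matrices.

    X, Y: (n_samples, n_features) activations from one layer of each
    model on the same image batch; feature dimensions may differ.
    Returns a value in [0, 1], with 1 for identical geometry.
    """
    # Center features so CKA compares representational geometry, not means.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # CKA(X, Y) = ||X^T Y||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    xty = np.linalg.norm(X.T @ Y, "fro") ** 2
    xtx = np.linalg.norm(X.T @ X, "fro")
    yty = np.linalg.norm(Y.T @ Y, "fro")
    return xty / (xtx * yty)

def cka_matrix(acts_a, acts_b):
    """All-pairs layer similarity between two models.

    acts_a, acts_b: lists of per-layer activation matrices. If layers at
    similar relative depths are most alike, the result is strongest
    near the diagonal.
    """
    return np.array([[linear_cka(a, b) for b in acts_b] for a in acts_a])
```

Tracking how the off-diagonal structure of such a matrix evolves with depth is one way to locate the processing steps where models diverge most.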