Training objective drives the consistency of representational similarity across datasets

📅 2024-11-08
🏛️ arXiv.org
📈 Citations: 5
Influential: 1
🤖 AI Summary
This work investigates whether the cross-dataset consistency of representational similarity between models stems from intrinsic model properties or is confounded by biases in common benchmark datasets. To address this, the authors run systematic representation-comparison experiments across data modalities (image, image-text) and training objectives (self-supervised, classification, image-text contrastive), using Centered Kernel Alignment (CKA) and related linear similarity analyses on diverse, domain-shifted datasets. The results show that the training objective is the dominant factor governing the stability of representational similarity across datasets, outweighing both data modality and network architecture. The paper contributes an evaluation framework explicitly designed for measuring cross-dataset representational consistency, finds that self-supervised vision models' representational similarities generalize best from one dataset to another, and shows that the correlation between representational similarity and task performance is strongest on single-domain benchmarks.
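The summary above mentions Centered Kernel Alignment (CKA) as the similarity measure. As a rough illustration (a minimal sketch of standard linear CKA, not the authors' implementation), the similarity between two models' activations on the same stimuli can be computed as:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between activation matrices X (n x d1) and Y (n x d2),
    where the n rows are the same stimuli passed through two models."""
    X = X - X.mean(axis=0)  # center each feature dimension
    Y = Y - Y.mean(axis=0)
    # HSIC-based formulation: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return num / den

# CKA equals 1 for identical representations and is invariant to
# orthogonal transformation and isotropic scaling of the features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 32))
Q, _ = np.linalg.qr(rng.normal(size=(32, 32)))  # random orthogonal matrix
print(round(linear_cka(X, X), 6))            # 1.0
print(round(linear_cka(X, 2.0 * X @ Q), 6))  # 1.0 (invariance)
```

Because CKA is invariant to rotations and rescaling of the feature space, it compares models with different widths and bases, which is what makes it a common choice for cross-model comparisons like those in this paper.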

📝 Abstract
The Platonic Representation Hypothesis claims that recent foundation models are converging to a shared representation space as a function of their downstream task performance, irrespective of the objectives and data modalities used to train these models. Representational similarity is generally measured for individual datasets and is not necessarily consistent across datasets. Thus, one may wonder whether this convergence of model representations is confounded by the datasets commonly used in machine learning. Here, we propose a systematic way to measure how representational similarity between models varies with the set of stimuli used to construct the representations. We find that the objective function is the most crucial factor in determining the consistency of representational similarities across datasets. Specifically, self-supervised vision models learn representations whose relative pairwise similarities generalize better from one dataset to another compared to those of image classification or image-text models. Moreover, the correspondence between representational similarities and the models' task behavior is dataset-dependent, being most strongly pronounced for single-domain datasets. Our work provides a framework for systematically measuring similarities of model representations across datasets and linking those similarities to differences in task behavior.
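One way to operationalize the cross-dataset consistency the abstract describes — a sketch under assumptions, not the paper's exact protocol — is to compute a pairwise similarity matrix among the same set of models on each dataset, then rank-correlate the unique model pairs between datasets:

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation (no tie handling; illustrative only)."""
    def rank(v):
        r = np.empty(len(v))
        r[np.argsort(v)] = np.arange(len(v))
        return r
    return float(np.corrcoef(rank(a), rank(b))[0, 1])

def cross_dataset_consistency(S_a, S_b):
    """S_a, S_b: pairwise model-similarity matrices (e.g. CKA) among the
    same models, computed on datasets A and B. Returns how well the
    relative ordering of model similarities on A carries over to B."""
    iu = np.triu_indices_from(S_a, k=1)  # unique model pairs
    return spearman(S_a[iu], S_b[iu])

# Identical orderings give consistency 1.0
S = np.array([[1.0, 0.8, 0.3],
              [0.8, 1.0, 0.5],
              [0.3, 0.5, 1.0]])
print(cross_dataset_consistency(S, S))  # 1.0
```

Under this reading, the paper's finding is that this consistency score is high for self-supervised vision models and lower for classification or image-text models; the specific correlation measure here is an assumption for illustration.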
Problem

Research questions and friction points this paper is trying to address.

Does the apparent convergence of model representations hold across datasets, or is it confounded by common benchmarks?
How does the training objective affect the consistency of representational similarities across datasets?
How does the link between representational similarity and task behavior depend on the evaluation dataset?
Innovation

Methods, ideas, or system contributions that make the work stand out.

A systematic framework for measuring representational similarity across diverse datasets
Evidence that self-supervised vision models' representational similarities generalize best across datasets
A dataset-dependent link between representational similarity and differences in task behavior