k-NN as a Simple and Effective Estimator of Transferability

📅 2025-03-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Evaluating transfer learning performance across domains, tasks, and architectures remains challenging due to the poor predictive reliability of existing transferability metrics. Method: We propose a lightweight transferability evaluator based on k-nearest neighbor (k-NN) classification in pretrained feature space—requiring no gradient computation, fine-tuning, or auxiliary model training, and relying solely on pretrained features and a small number of target-domain labels. Contribution/Results: Through systematic evaluation across 42,000+ experiments, 23 baseline metrics, and 16 datasets, we empirically demonstrate that mainstream transferability measures frequently fail; in contrast, our k-NN evaluator achieves an average 27% improvement in prediction correlation over all existing methods. The approach is computationally efficient, broadly applicable, and highly generalizable—establishing a robust, universal paradigm for transfer learning assessment.

Technology Category

Application Category

📝 Abstract
How well can one expect transfer learning to work in a new setting where the domain is shifted, the task is different, and the architecture changes? Many transfer learning metrics have been proposed to answer this question. But how accurate are their predictions in a realistic new setting? We conducted an extensive evaluation involving over 42,000 experiments comparing 23 transferability metrics across 16 different datasets to assess their ability to predict transfer performance. Our findings reveal that none of the existing metrics perform well across the board. However, we find that a simple k-nearest neighbor evaluation -- as is commonly used to evaluate feature quality for self-supervision -- not only surpasses existing metrics, but also offers better computational efficiency and ease of implementation.
Problem

Research questions and friction points this paper is trying to address.

Evaluates transfer learning performance under domain, task, and architecture shifts
Compares 23 transferability metrics across 16 datasets via 42,000 experiments
Proposes k-NN as superior, efficient alternative to existing transferability metrics
Innovation

Methods, ideas, or system contributions that make the work stand out.

k-NN as transferability estimator
Extensive evaluation of 23 metrics
Better efficiency and implementation ease
🔎 Similar Papers
No similar papers found.
M
Moein Sorkhei
KTH Royal Institute of Technology, Stockholm, Sweden; Science for Life Laboratory, Stockholm, Sweden
Christos Matsoukas
Christos Matsoukas
AstraZeneca
Artificial IntelligenceMachine LearningComputer VisionMedical Image Analysis
Johan Fredin Haslum
Johan Fredin Haslum
PhD Student, Machine Learning, KTH - Royal Institute of Technology
K
Kevin Smith
KTH Royal Institute of Technology, Stockholm, Sweden; Science for Life Laboratory, Stockholm, Sweden