🤖 AI Summary
In transfer learning, the choice of metric for pre-trained model transferability estimation (MTE) is highly task-dependent, yet existing approaches lack an automatic, task-aware mechanism for metric selection. This paper proposes MetaRank, the first framework to formulate MTE metric selection as a learning-to-rank problem. It jointly encodes target dataset descriptions and MTE metrics using a pre-trained language model to construct a unified semantic embedding space, and introduces a meta-predictor trained with a listwise ranking loss to learn task-adaptive metric preferences. MetaRank supports offline training and fast online inference. Extensive experiments across 11 pre-trained models and 11 downstream datasets demonstrate that MetaRank significantly outperforms both the average-performance baseline and manually selected metrics, improving transferability-estimation accuracy and cross-task generalization.
📝 Abstract
Selecting an appropriate pre-trained source model is a critical, yet computationally expensive, task in transfer learning. Model Transferability Estimation (MTE) methods address this by providing efficient proxy metrics to rank models without full fine-tuning. In practice, the choice of which MTE metric to use is often ad hoc or guided simply by a metric's average historical performance. However, we observe that the effectiveness of MTE metrics is highly task-dependent and no single metric is universally optimal across all target datasets. To address this gap, we introduce MetaRank, a meta-learning framework for automatic, task-aware MTE metric selection. We formulate metric selection as a learning-to-rank problem. Rather than relying on conventional meta-features, MetaRank encodes textual descriptions of both datasets and MTE metrics using a pre-trained language model, embedding them into a shared semantic space. A meta-predictor is then trained offline on diverse meta-tasks to learn the intricate relationship between dataset characteristics and metric mechanisms, optimized with a listwise objective that prioritizes correctly ranking the top-performing metrics. During the subsequent online phase, MetaRank efficiently ranks the candidate MTE metrics for a new, unseen target dataset based on its textual description, enabling practitioners to select the most appropriate metric a priori. Extensive experiments across 11 pre-trained models and 11 target datasets demonstrate the strong effectiveness of our approach.