Can Pre-training Indicators Reliably Predict Fine-tuning Outcomes of LLMs?

📅 2025-04-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates whether pre-training metrics—such as perplexity—reliably predict the downstream performance of large language models (LLMs) after fine-tuning, aiming to improve model selection efficiency under fixed computational budgets. The authors formulate checkpoint selection as a pairwise classification task and systematically evaluate 50 distinct 1B-parameter LLM variants across diverse downstream tasks, revealing that perplexity is frequently misleading. They propose novel unsupervised and supervised proxy metrics, which reduce the relative performance prediction error rate by over 50% in multi-task supervised fine-tuning (SFT) evaluation. This study is the first to empirically demonstrate a non-monotonic relationship between pre-training metrics and downstream performance. The proposed proxies exhibit strong cross-task generalization and practical utility, offering a trustworthy, task-aware evaluation paradigm for optimizing pre-training strategies toward downstream objectives.

📝 Abstract
While metrics available during pre-training, such as perplexity, correlate well with model performance in scaling-law studies, their predictive capacity at a fixed model size remains unclear, hindering effective model selection and development. To address this gap, we formulate the task of selecting pre-training checkpoints to maximize downstream fine-tuning performance as a pairwise classification problem: predicting which of two LLMs, differing in their pre-training, will perform better after supervised fine-tuning (SFT). We construct a dataset of 50 1B-parameter LLM variants with systematically varied pre-training configurations, e.g., objectives or data, and evaluate them on diverse downstream tasks after SFT. We first demonstrate that conventional perplexity is a misleading indicator. We therefore introduce novel unsupervised and supervised proxy metrics derived from pre-training that reduce the relative performance prediction error rate by over 50%. Despite the inherent complexity of this task, we demonstrate the practical utility of our proposed proxies in specific scenarios, paving the way for more efficient design of pre-training schemes optimized for various downstream tasks.
Problem

Research questions and friction points this paper is trying to address.

Predicting fine-tuning outcomes using pre-training indicators
Evaluating pre-training checkpoints for downstream task performance
Developing new metrics to replace misleading perplexity measures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pairwise classification for pre-training checkpoint selection
Novel unsupervised and supervised proxy metrics
Over 50% reduction in relative performance prediction error rate
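The pairwise formulation above can be sketched in a few lines. This is an illustrative toy example, not the paper's code: `predict_better`, the proxy scores, and the sample pairs are all hypothetical, assuming only that a higher proxy score is predicted to mean better post-SFT performance, and that the evaluation metric is the fraction of checkpoint pairs ranked incorrectly.

```python
# Toy sketch of pairwise checkpoint selection (hypothetical names and data):
# score each pre-training checkpoint with a proxy metric, predict the winner
# of each pair, and measure the error rate against the true post-SFT ranking.

def predict_better(proxy_a: float, proxy_b: float) -> str:
    """Predict which of two checkpoints wins, assuming higher proxy = better."""
    return "a" if proxy_a >= proxy_b else "b"

def pairwise_error_rate(pairs) -> float:
    """Fraction of pairs where the proxy's prediction disagrees with the
    observed post-SFT winner. Each pair: (proxy_a, proxy_b, true_winner)."""
    errors = sum(
        1 for proxy_a, proxy_b, winner in pairs
        if predict_better(proxy_a, proxy_b) != winner
    )
    return errors / len(pairs)

# Hypothetical data: three checkpoint pairs with proxy scores and the
# checkpoint that actually performed better after SFT.
pairs = [
    (0.82, 0.75, "a"),  # proxy agrees with the SFT outcome
    (0.61, 0.64, "b"),  # proxy agrees
    (0.70, 0.68, "b"),  # proxy picks "a", but "b" wins after SFT
]
print(pairwise_error_rate(pairs))  # → 0.3333333333333333
```

Under this framing, comparing proxy metrics reduces to comparing their pairwise error rates on the same set of checkpoint pairs, which is how a claim like "over 50% relative error reduction" versus perplexity can be made concrete.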