Predicting Task Performance with Context-aware Scaling Laws

📅 2025-10-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional scaling laws neglect the impact of context length on downstream task performance. This work incorporates context length into scaling-law modeling, proposing an interpretable two-dimensional performance prediction framework parameterized by training compute and context length. Empirically calibrated on extended-context variants of the Llama-2 family, the framework captures performance trends for arithmetic reasoning, common sense reasoning, and machine translation. Validated on 65,500 evaluation instances, it accurately characterizes in-distribution performance, extrapolates reliably to contexts longer than those seen during fitting, and generalizes across three orders of magnitude in training compute. The framework thus enables principled, context-aware performance forecasting, which is useful for model selection and resource allocation in LLM deployment. All code is publicly released.

📝 Abstract
Scaling laws have transformed our understanding of large language models by linking upstream metrics like cross-entropy loss to design factors such as model size, training data, and compute. However, these conventional laws fail to capture downstream task performance, where context plays a critical role. In this work, we propose a straightforward, interpretable framework that jointly models downstream performance as a function of the training compute and the provided context. We empirically validate our framework by fitting it on the observed downstream performance of extended-context variants of Llama-2-7B and Llama-2-13B across 65,500 unique instances spanning three tasks: arithmetic reasoning, common sense reasoning, and machine translation. Our results demonstrate that our framework accurately models in-distribution downstream performance, generalizes across three orders of magnitude in training compute, and reliably extrapolates performance as the amount of context increases. These findings offer valuable insights into the interplay between training compute and context utilization, providing guidance for designing more efficient long-context LLMs for diverse downstream tasks. Our code is available at https://github.com/wang-research-lab/context-scaling.
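The abstract describes jointly modeling downstream performance as a function of training compute and provided context, fitted to observed task accuracies. The paper's exact functional form is not reproduced here; as a minimal sketch, assume a sigmoidal accuracy surface over log-compute and log-context-length, fitted by least squares. The functional form, parameter names, and synthetic data below are illustrative assumptions, not the authors' actual model.

```python
import math

def predict(log_c, log_n, a, b, d):
    """Assumed 2-D scaling form: accuracy as a sigmoid in
    log training compute (log_c) and log context length (log_n)."""
    z = a * log_c + b * log_n + d
    return 1.0 / (1.0 + math.exp(-z))

def fit(data, steps=20000, lr=0.05):
    """Fit (a, b, d) by full-batch gradient descent on squared error.
    `data` is a list of (log_c, log_n, observed_accuracy) triples."""
    a = b = d = 0.0
    m = len(data)
    for _ in range(steps):
        ga = gb = gd = 0.0
        for log_c, log_n, y in data:
            p = predict(log_c, log_n, a, b, d)
            # d/dz of squared error through the sigmoid
            g = 2.0 * (p - y) * p * (1.0 - p)
            ga += g * log_c
            gb += g * log_n
            gd += g
        a -= lr * ga / m
        b -= lr * gb / m
        d -= lr * gd / m
    return a, b, d

# Synthetic calibration grid (hypothetical parameter values, no noise):
true_a, true_b, true_d = 0.8, 0.5, -2.0
data = [
    (lc, ln, predict(lc, ln, true_a, true_b, true_d))
    for lc in (0.0, 1.0, 2.0, 3.0)
    for ln in (0.0, 1.0, 2.0)
]
a, b, d = fit(data)
```

After fitting, predictions can be evaluated at longer contexts (larger `log_n`) than appear in the grid, mirroring the paper's context-extrapolation setting; whether that extrapolation is trustworthy depends on the chosen functional form, which is exactly what the paper's interpretable parameterization is meant to address.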
Problem

Research questions and friction points this paper is trying to address.

Predicting downstream task performance with context-aware scaling laws
Modeling performance as function of training compute and context
Generalizing scaling laws across diverse reasoning and translation tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Models downstream performance with training compute and context
Empirically validates framework on extended-context Llama-2 variants
Generalizes across compute scales and extrapolates context performance