Inference Scaling Laws: An Empirical Analysis of Compute-Optimal Inference for Problem-Solving with Language Models

📅 2024-08-01
📈 Citations: 65
Influential: 4
🤖 AI Summary
This work addresses computational efficiency optimization during large language model (LLM) inference, systematically investigating the trade-off between model scale and generated token count. We propose a novel tree-search inference algorithm and comprehensively evaluate cost-performance Pareto frontiers under varying compute budgets—employing greedy search, best-of-n sampling, and weighted voting—across Llemma models of multiple scales (7B–34B). Experimental results demonstrate that scaling inference compute yields substantially greater gains than scaling model parameters. Notably, Llemma-7B equipped with our tree-search algorithm consistently outperforms Llemma-34B and all baseline methods on the MATH benchmark, achieving Pareto-optimal “small-model + strong-inference” performance. This work provides both a scalable algorithmic framework and empirical evidence for efficient LLM deployment.

📝 Abstract
While the scaling laws of large language models (LLMs) training have been extensively studied, optimal inference configurations of LLMs remain underexplored. We study inference scaling laws (aka test-time scaling laws) and compute-optimal inference, focusing on the trade-offs between model sizes and generating additional tokens with different inference strategies. As a first step towards understanding and designing compute-optimal inference methods, we studied cost-performance trade-offs for inference strategies such as greedy search, majority voting, best-of-$n$, weighted voting, and two different tree search algorithms, using different model sizes and compute budgets. Our findings suggest that scaling inference compute with inference strategies can be more computationally efficient than scaling model parameters. Additionally, smaller models combined with advanced inference algorithms offer Pareto-optimal trade-offs in cost and performance. For example, the Llemma-7B model, when paired with our novel tree search algorithm, consistently outperforms the Llemma-34B model across all tested inference strategies on the MATH benchmark. We hope these insights contribute to a deeper understanding of inference scaling laws (test-time scaling laws) for LLMs.
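The sampling-based strategies compared in the abstract differ only in how a final answer is selected from $n$ sampled solutions. A minimal sketch of that selection step is below; the sample answers and reward-model scores are hypothetical placeholders, not the paper's implementation or data:

```python
from collections import defaultdict

# Each sample is (final_answer, reward_model_score); scores are assumed
# to come from some verifier/reward model, as in the paper's setup.

def best_of_n(samples):
    # Best-of-n: return the single highest-scoring sample's answer.
    return max(samples, key=lambda s: s[1])[0]

def majority_vote(samples):
    # Majority voting: most frequent answer; scores are ignored.
    counts = defaultdict(int)
    for answer, _ in samples:
        counts[answer] += 1
    return max(counts, key=counts.get)

def weighted_vote(samples):
    # Weighted voting: sum reward scores per distinct answer,
    # return the answer with the largest total.
    totals = defaultdict(float)
    for answer, score in samples:
        totals[answer] += score
    return max(totals, key=totals.get)

# Hypothetical samples for one MATH-style problem.
samples = [("4", 0.95), ("5", 0.4), ("5", 0.3), ("5", 0.2), ("6", 0.1)]
print(best_of_n(samples))      # "4": one very confident sample
print(majority_vote(samples))  # "5": most frequent answer
print(weighted_vote(samples))  # "4": 0.95 vs 0.4+0.3+0.2 = 0.9
```

The toy example is chosen so the three strategies disagree, which is exactly the regime the paper's cost-performance comparison probes: for a fixed compute budget, the aggregation rule (and the search procedure generating the samples) can matter as much as model size.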
Problem

Research questions and friction points this paper is trying to address.

Explores optimal inference configurations for large language models.
Investigates trade-offs between model size and token-generation strategies.
Asks whether smaller models paired with advanced inference algorithms can outperform larger models.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explores compute-optimal inference strategies for LLMs.
Compares model sizes combined with advanced inference algorithms under fixed compute budgets.
Demonstrates that smaller models with tree search can outperform larger ones.
Yangzhen Wu
UC Berkeley
Zhiqing Sun
OpenAI
Machine Learning · Language Modelling · AI Alignment
Shanda Li
Carnegie Mellon University
Machine Learning
S. Welleck
School of Computer Science, Carnegie Mellon University
Yiming Yang
School of Computer Science, Carnegie Mellon University