Cost-of-Pass: An Economic Framework for Evaluating Language Models

📅 2025-04-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the value–cost trade-off of language models in economic applications. Method: We develop an economic evaluation framework grounded in production theory and introduce "cost-of-pass"—the expected monetary cost of generating a correct answer—as a unified metric capturing both accuracy and inference overhead. We further define the "frontier cost-of-pass"—the minimum cost-of-pass achievable across available models—and combine microeconomic modeling, empirical cost accounting, a task taxonomy, and counterfactual frontier analysis to quantify the marginal returns of model classes and inference-time techniques (e.g., majority voting, self-refinement). Contribution/Results: We find that lightweight models are most cost-efficient for basic quantitative tasks, large language models for knowledge-intensive tasks, and reasoning-focused models for complex quantitative tasks. Moreover, the frontier cost-of-pass for complex quantitative tasks has roughly halved every few months over the past year, indicating rapid efficiency gains.
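The core metric above can be stated directly: if attempts are independent, a model that charges some dollar amount per attempt and succeeds with a given probability needs on average 1/accuracy attempts per correct answer. A minimal sketch (all dollar and accuracy figures below are hypothetical, not from the paper):

```python
def cost_of_pass(cost_per_attempt: float, accuracy: float) -> float:
    """Expected monetary cost of obtaining one correct answer.

    With independent attempts that each succeed with probability
    `accuracy`, the expected number of attempts until the first success
    is 1/accuracy, so the expected spend is cost_per_attempt / accuracy.
    """
    if accuracy <= 0:
        return float("inf")  # the model never solves the task
    return cost_per_attempt / accuracy

# Hypothetical numbers: a cheap model at $0.002/attempt with 40% accuracy
# vs. a pricier model at $0.02/attempt with 90% accuracy.
cheap = cost_of_pass(0.002, 0.40)   # $0.005 per correct answer
strong = cost_of_pass(0.020, 0.90)  # ≈ $0.0222 per correct answer
```

Note that a higher per-token price can still yield the lower cost-of-pass once accuracy is folded in, which is the mechanism behind the paper's finding that reasoning models win on complex quantitative tasks despite costing more per attempt.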

📝 Abstract
The widespread adoption of AI systems in the economy hinges on their ability to generate economic value that outweighs their inference costs. Evaluating this tradeoff requires metrics that account for both performance and costs. We propose a framework grounded in production theory for evaluating language models by combining accuracy and inference cost. We introduce "cost-of-pass", the expected monetary cost of generating a correct solution. We then define the "frontier cost-of-pass" as the minimum cost-of-pass achievable across available models or the human-expert baseline, using the approximate cost of hiring an expert. Our analysis reveals distinct economic insights. First, lightweight models are most cost-effective for basic quantitative tasks, large models for knowledge-intensive ones, and reasoning models for complex quantitative problems, despite higher per-token costs. Second, tracking this frontier cost-of-pass over the past year reveals significant progress, particularly for complex quantitative tasks where the cost has roughly halved every few months. Third, to trace key innovations driving this progress, we examine counterfactual frontiers: estimates of cost-efficiency without specific model classes. We find that innovations in lightweight, large, and reasoning models have been essential for pushing the frontier in basic quantitative, knowledge-intensive, and complex quantitative tasks, respectively. Finally, we assess the cost reductions afforded by common inference-time techniques like majority voting and self-refinement, finding that their marginal accuracy gains rarely justify their costs. Our findings underscore that complementary model-level innovations are the primary drivers of cost-efficiency, and our economic framework provides a principled tool for measuring this progress and guiding deployment.
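The frontier and counterfactual-frontier constructions from the abstract reduce to taking minima over candidate model classes. A small sketch under stated assumptions (model names, per-attempt costs, and accuracies are illustrative placeholders, not the paper's measurements):

```python
from math import inf

def cost_of_pass(cost_per_attempt: float, accuracy: float) -> float:
    """Expected cost of one correct answer: cost per attempt / success rate."""
    return cost_per_attempt / accuracy if accuracy > 0 else inf

# Hypothetical (cost_per_attempt_usd, accuracy) per model class on one
# complex quantitative task; "human-expert" uses an assumed hiring cost.
models = {
    "lightweight":  (0.002, 0.02),
    "large":        (0.020, 0.30),
    "reasoning":    (0.050, 0.90),
    "human-expert": (60.00, 1.00),
}

def frontier(candidates: dict) -> float:
    """Frontier cost-of-pass: the cheapest correct answer available."""
    return min(cost_of_pass(c, a) for c, a in candidates.values())

full = frontier(models)
# Counterfactual frontier: remove a model class and recompute; the gap
# measures how much that class contributes to cost-efficiency.
without_reasoning = frontier(
    {k: v for k, v in models.items() if k != "reasoning"}
)
```

With these placeholder numbers the reasoning model sets the frontier, so dropping it raises the counterfactual frontier; that difference is the kind of marginal contribution the paper attributes to each model class per task category.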
Problem

Research questions and friction points this paper is trying to address.

Evaluating the economic value of AI systems against their inference costs
Proposing a cost-of-pass framework that captures the performance–cost tradeoff
Identifying the most cost-effective model types for different task categories
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cost-of-pass metric unifies accuracy and inference cost in a single monetary figure
Lightweight, large, and reasoning models each dominate cost-effectiveness on different task types
Model-level innovations, rather than inference-time techniques, drive cost-efficiency progress