C3PO: Optimized Large Language Model Cascades with Probabilistic Cost Constraints for Reasoning

📅 2025-11-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) incur prohibitively high inference costs on complex reasoning tasks, hindering practical deployment. Method: This paper proposes a label-free cascaded inference optimization framework. Its core innovation is the first integration of conformal prediction into cascade modeling, enabling self-supervised optimization under probabilistic computational cost constraints while providing theoretical guarantees on both an upper bound on inference cost and the generalization error. The method combines self-supervised learning, regret minimization, and multi-granularity cascaded decision-making to allocate computational resources dynamically, using only unlabeled model outputs. Results: Evaluated on reasoning benchmarks including GSM8K and MATH-500, the approach achieves state-of-the-art accuracy at significantly lower average inference cost, striking a superior balance between performance and efficiency and establishing a new paradigm for low-cost, high-reliability LLM reasoning.
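The cascaded decision-making the summary describes can be sketched as a simple two-stage router: a cheap model answers first, and the query escalates to the most powerful model (MPM) only when the cheap model's confidence falls below a threshold. This is a minimal illustration, not C3PO's actual API; all function names and the stubbed model calls are hypothetical.

```python
# Hedged sketch of a two-stage LLM cascade. Both model calls are stubs;
# in a real system they would invoke a small and a large LLM, and the
# confidence score would come from the small model's output (e.g. its
# answer-token probability), not a keyword check as here.

def cheap_model(query):
    # Stub: returns (answer, confidence in [0, 1]).
    return "42", 0.9 if "easy" in query else 0.3

def powerful_model(query):
    # Stub for the expensive most-powerful model (MPM).
    return "42 (verified)"

def cascade(query, threshold=0.5):
    answer, conf = cheap_model(query)
    if conf >= threshold:
        return answer, "cheap"          # handled at low cost
    return powerful_model(query), "MPM" # escalated to the MPM

print(cascade("easy sum"))
print(cascade("hard proof"))
```

The interesting part of the paper is how the threshold is chosen: rather than tuning it on labeled data, C3PO calibrates it so that the probability of exceeding a cost budget is bounded.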

📝 Abstract
Large language models (LLMs) have achieved impressive results on complex reasoning tasks, but their high inference cost remains a major barrier to real-world deployment. A promising solution is to use cascaded inference, where small, cheap models handle easy queries, and only the hardest examples are escalated to more powerful models. However, existing cascade methods typically rely on supervised training with labeled data, offer no theoretical generalization guarantees, and provide limited control over test-time computational cost. We introduce C3PO (Cost Controlled Cascaded Prediction Optimization), a self-supervised framework for optimizing LLM cascades under probabilistic cost constraints. By focusing on minimizing regret with respect to the most powerful model (MPM), C3PO avoids the need for labeled data by constructing a cascade using only unlabeled model outputs. It leverages conformal prediction to bound the probability that inference cost exceeds a user-specified budget. We provide theoretical guarantees on both cost control and generalization error, and show that our optimization procedure is effective even with small calibration sets. Empirically, C3PO achieves state-of-the-art performance across a diverse set of reasoning benchmarks including GSM8K, MATH-500, BigBench-Hard and AIME, outperforming strong LLM cascading baselines in both accuracy and cost-efficiency. Our results demonstrate that principled, label-free cascade optimization can enable scalable LLM deployment.
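The abstract's key label-free trick is measuring regret with respect to the most powerful model (MPM): gold answers are never needed, only disagreement between a cheap model's outputs and the MPM's outputs on unlabeled queries. A minimal sketch of that signal, with illustrative names (not C3PO's actual formulation, which operates over cascade policies):

```python
# Hedged sketch of a label-free regret proxy: treat the MPM's answers
# as the reference and measure the cheap model's disagreement rate on
# unlabeled calibration queries. No ground-truth labels are involved.

def regret_vs_mpm(cheap_answers, mpm_answers):
    assert len(cheap_answers) == len(mpm_answers)
    disagreements = sum(a != b for a, b in zip(cheap_answers, mpm_answers))
    return disagreements / len(cheap_answers)

cheap = ["12", "7", "no", "15"]
mpm   = ["12", "9", "no", "15"]
print(regret_vs_mpm(cheap, mpm))  # 0.25
```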
Problem

Research questions and friction points this paper is trying to address.

Optimizing LLM cascades to reduce inference costs while maintaining accuracy
Providing theoretical guarantees for cost control and generalization in cascades
Enabling label-free cascade optimization using probabilistic budget constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised cascade optimization without labeled data
Probabilistic cost control using conformal prediction
Theoretical guarantees for generalization and cost bounds
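The probabilistic cost control listed above rests on split conformal prediction: from per-query costs observed on a small unlabeled calibration set, one can compute a bound that a fresh query's cost will not exceed with probability at least 1 − δ (assuming exchangeability). A generic sketch of that quantile computation, with hypothetical names and toy costs (this is the standard conformal recipe, not the paper's exact procedure):

```python
import math

def conformal_cost_bound(calib_costs, delta):
    # Split-conformal quantile with the finite-sample correction:
    # with probability >= 1 - delta, a new exchangeable query's cost
    # does not exceed the returned bound.
    n = len(calib_costs)
    k = math.ceil((n + 1) * (1 - delta))
    if k > n:
        # Calibration set too small to certify this delta.
        return float("inf")
    return sorted(calib_costs)[k - 1]

# Toy per-query inference costs from a calibration run (illustrative).
costs = [1.0, 1.2, 0.8, 5.0, 1.1, 0.9, 4.8, 1.3, 1.0, 0.7]
print(conformal_cost_bound(costs, delta=0.2))  # 4.8
```

The ceil((n + 1)(1 − δ)) rank, rather than a plain empirical quantile, is what makes the guarantee hold at finite sample sizes, which is why such methods remain usable with small calibration sets.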