🤖 AI Summary
This work addresses the problem of minimizing computational and API costs of LLM inference while strictly satisfying user-specified reliability constraints. We propose a cost-aware agent orchestration framework in which a lightweight orchestrator model dynamically schedules heterogeneous LLMs and toolchains, jointly optimizing invocation policies and adaptive confidence thresholds. To our knowledge, this is the first approach to integrate conformal prediction (providing theoretically grounded reliability guarantees) with constrained policy optimization, enhanced by off-policy reinforcement learning for efficient, stable policy updates. Evaluated on two multi-hop question-answering benchmarks, the method reduces total cost by up to 30% relative to state-of-the-art cost-aware baselines without compromising reliability. The core contribution is an LLM decision-making framework that simultaneously ensures theoretical calibration, policy optimality under constraints, and online adaptability, enabling rigorous, cost-controllable inference.
📝 Abstract
While large language models (LLMs) have recently made tremendous progress on challenging AI problems, they have done so at increasingly steep computational and API costs. We propose a novel strategy that combines multiple LLMs with varying cost/accuracy tradeoffs in an agentic pipeline: an orchestration model runs models and tools in sequence to minimize cost subject to a user-specified level of reliability, a constraint we formalize using conformal prediction to provide guarantees. To solve this problem, we propose Conformal Constrained Policy Optimization (CCPO), a training paradigm that integrates constrained policy optimization with off-policy reinforcement learning and recent advances in online conformal prediction. CCPO jointly optimizes a cost-aware policy (score function) and an adaptive threshold. Across two multi-hop question-answering benchmarks, CCPO achieves up to a 30% cost reduction compared to other cost-aware baselines and LLM-guided methods without compromising reliability. Our approach provides a principled and practical framework for deploying LLM agents that are significantly more cost-effective while maintaining reliability.
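The adaptive threshold mentioned in the abstract can be illustrated with an adaptive-conformal-inference-style online update. This is a sketch under assumptions, not the paper's implementation: the function name, step size, and toy coverage stream below are ours, and the actual CCPO update is learned jointly with the policy.

```python
# Illustrative sketch of an online conformal threshold update (ACI-style).
# The orchestrator compares a confidence score against a threshold theta to
# decide whether an answer is reliable enough to accept; theta is nudged
# online so long-run miscoverage tracks the target rate alpha.

def update_threshold(theta, covered, alpha=0.1, gamma=0.05):
    """One step of an adaptive conformal threshold update.

    theta   -- current confidence threshold
    covered -- True if the answer accepted at this threshold was reliable
    alpha   -- target miscoverage rate (1 - desired reliability level)
    gamma   -- step size (hypothetical value for illustration)
    """
    err = 0.0 if covered else 1.0
    # Raise the threshold after a reliability miss, lower it after a success,
    # so the empirical miscoverage rate converges toward alpha.
    return theta + gamma * (err - alpha)

# Toy run: a short stream of coverage outcomes drives the threshold.
theta = 0.5
for covered in [True, True, False, True, False, True]:
    theta = update_threshold(theta, covered)
```

In CCPO this kind of threshold adaptation is coupled with the cost-aware policy, so the score function and the acceptance threshold are optimized together rather than in isolation.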