🤖 AI Summary
This paper addresses the challenge of balancing cost and accuracy when processing large text datasets with large language models (LLMs). The authors propose BARGAIN, a framework that jointly leverages LLM output confidence (e.g., log-probabilities), task-specific characteristics, and the data distribution to enable adaptive sampling and statistical estimation, dynamically orchestrating low-cost and high-cost models online. Its key contribution is the first provable theoretical guarantees on fundamental evaluation metrics, including accuracy, precision, and recall, overcoming the reliance on heuristic quality estimation in existing cascade-based approaches. Extensive experiments across eight real-world datasets show that BARGAIN reduces inference cost by up to 86% more than state-of-the-art methods on average, while keeping output quality on par with top-tier models such as GPT-4o and Claude Sonnet.
📝 Abstract
Large Language Models (LLMs) are increasingly used as a building block in data systems to process large text datasets. To do so, LLM providers offer multiple LLMs of different sizes, spanning various cost-quality trade-offs when processing text at scale. Top-of-the-line LLMs (e.g., GPT-4o, Claude Sonnet) operate with high accuracy but are prohibitively expensive when processing many records. To avoid high costs, more affordable but lower-quality LLMs (e.g., GPT-4o-mini, Claude Haiku) can be used to process records, but we need to ensure that the overall accuracy does not deviate substantially from that of the top-of-the-line LLMs. The model cascade framework provides a blueprint to manage this trade-off, by using the confidence of LLMs in their output (e.g., log-probabilities) to decide for which records to use the affordable LLM. However, existing solutions following this framework provide only marginal cost savings and weak theoretical guarantees because of poor estimation of the quality of the affordable LLM's outputs. We present BARGAIN, a method that judiciously uses affordable LLMs in data processing to significantly reduce cost while providing strong theoretical guarantees on the solution quality. BARGAIN employs a novel adaptive sampling strategy and statistical estimation procedure that uses data and task characteristics and builds on recent statistical tools to make accurate estimations with tight theoretical guarantees. Variants of BARGAIN can support guarantees on the accuracy, precision, or recall of the output. Experimental results across 8 real-world datasets show that BARGAIN reduces cost, on average, by up to 86% more than state-of-the-art, while providing stronger theoretical guarantees on the accuracy of the output, with similar gains when guaranteeing a desired level of precision or recall.
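To make the cascade idea concrete, here is a minimal sketch of confidence-based routing between a cheap and an expensive model. The interface (`cheap_llm` returning an answer plus a log-probability, `expensive_llm` returning an answer) and the fixed threshold are hypothetical simplifications; BARGAIN's contribution is precisely to replace such a heuristic threshold with adaptive sampling and statistically guaranteed estimation.

```python
import math

def cascade(records, cheap_llm, expensive_llm, threshold=0.9):
    """Route each record through a two-model cascade: keep the cheap
    model's answer when its confidence (derived from the log-probability
    of its output) clears the threshold, otherwise fall back to the
    expensive model."""
    results = []
    for record in records:
        answer, logprob = cheap_llm(record)        # cheap model's answer and log-prob
        confidence = math.exp(logprob)             # convert log-prob to a probability
        if confidence >= threshold:
            results.append(answer)                 # accept the cheap output
        else:
            results.append(expensive_llm(record))  # defer to the strong model
    return results
```

In this sketch the threshold is fixed up front; choosing it poorly is exactly the failure mode the abstract attributes to prior cascade methods, since the resulting accuracy depends on how well the threshold is calibrated to the data.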