AI Summary
This work studies the problem of selecting an optimal subset of large language models (LLMs) under budget constraints to maximize classification accuracy, a problem for which we provide evidence of NP-hardness. We formally establish that the accuracy function is monotone non-decreasing but not submodular, a key structural insight that precludes direct application of standard submodular optimization techniques. To address this, we propose a dynamic programming algorithm with a provable near-optimal approximation guarantee, integrating response aggregation with budget-aware combinatorial optimization. Extensive experiments on multiple real-world datasets demonstrate that our method consistently outperforms three classes of baselines, achieving average accuracy improvements of 3.2 to 7.8 percentage points under identical budget constraints. The approach thus offers a theoretically grounded, cost-effective solution for LLM selection and ensemble design in resource-constrained settings.
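To make the selection problem concrete, here is a minimal Python sketch of it as exhaustive search; all names (`best_ensemble`, `costs`, `accuracy`) are our own illustration, not the paper's API, and `accuracy` stands in as a black-box estimator of ensemble accuracy (e.g., measured on a validation set). Its exponential cost in the number of candidate models is precisely what motivates the dynamic-programming algorithm with an approximation guarantee described above.

```python
from itertools import combinations

def best_ensemble(models, costs, budget, accuracy):
    """Exhaustively search for the accuracy-maximizing LLM ensemble
    within a cost budget.

    models   : list of candidate model identifiers
    costs    : dict mapping model -> per-query cost
    budget   : total cost budget
    accuracy : black-box callable mapping a tuple of models to an
               estimated ensemble prediction accuracy; per the paper,
               this function is monotone non-decreasing but not
               submodular
    """
    best_set, best_acc = (), 0.0
    for r in range(1, len(models) + 1):
        for subset in combinations(models, r):
            if sum(costs[m] for m in subset) <= budget:
                acc = accuracy(subset)
                if acc > best_acc:
                    best_set, best_acc = subset, acc
    return best_set, best_acc
```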
Abstract
Large language models (LLMs) have demonstrated remarkable capabilities in comprehending and generating natural language content, gaining widespread popularity in both industry and academia in recent years. An increasing number of services now offer LLMs for various tasks via APIs. Different LLMs demonstrate expertise in different domains of queries (e.g., text classification queries). Meanwhile, LLMs of different scale, complexity, and performance are priced differently. Driven by this observation, a growing number of researchers are investigating LLM ensemble strategies with a focus on cost-effectiveness, aiming to reduce overall usage costs while enhancing performance. However, to the best of our knowledge, no existing work addresses the problem of finding an LLM ensemble that maximizes performance subject to a cost budget. In this paper, we formalize the performance of an ensemble of LLMs using a formally defined notion of prediction accuracy. We develop an approach for aggregating responses from multiple LLMs to enhance ensemble performance. Building on this, we formulate the ensemble selection problem as selecting a set of LLMs, subject to a cost budget, that maximizes the overall prediction accuracy. We theoretically establish that the prediction accuracy function is non-decreasing but not submodular, and we provide evidence that the Optimal Ensemble Selection problem is likely NP-hard. We then apply dynamic programming and propose an algorithm called ThriftLLM, which we prove achieves a near-optimal approximation guarantee. In addition, ThriftLLM achieves state-of-the-art query performance on multiple real-world datasets against three competitors in our extensive experimental evaluation, strongly supporting the effectiveness and superiority of our method.
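The abstract does not spell out the aggregation rule. A common instantiation for classification queries is an (optionally weighted) majority vote over the models' predicted labels; the sketch below assumes that instantiation, with weights such as per-model validation accuracies, and its function name is illustrative rather than the paper's.

```python
from collections import Counter

def aggregate_responses(responses, weights=None):
    """Aggregate classification labels from several LLMs by an
    (optionally weighted) majority vote.

    responses : list of predicted labels, one per queried LLM
    weights   : optional per-model vote weights (e.g., validation
                accuracies); defaults to uniform weighting
    """
    if weights is None:
        weights = [1.0] * len(responses)
    tally = Counter()
    for label, w in zip(responses, weights):
        tally[label] += w
    # Return the label with the largest total vote weight.
    return tally.most_common(1)[0][0]

# Example: three LLMs vote on a sentiment label; the majority wins.
print(aggregate_responses(["positive", "negative", "positive"]))  # positive
```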