RouteLLM: Learning to Route LLMs with Preference Data

📅 2024-06-26
🏛️ arXiv.org
📈 Citations: 33
✨ Influential: 4
📄 PDF
🤖 AI Summary
This work addresses the challenge of balancing cost and response quality in multi-LLM serving systems. We propose a lightweight, human-preference-driven dynamic routing method that trains a generalizable LLM router (e.g., a LoRA-finetuned TinyBERT) to automatically select strong or weak base models per input, achieving substantial cost reduction without compromising quality. Our key contributions are: (1) the first zero-shot cross-model routing framework leveraging human preference data, eliminating reliance on fixed model deployments; and (2) integration of synthetic data augmentation to enhance router generalization. Experiments across multiple benchmarks demonstrate over 2× cost savings while matching the response quality of the strongest single LLM. Moreover, the router maintains over 92% accuracy when deployed with unseen underlying LLMs, confirming strong transferability.
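The per-query routing decision described in the summary can be sketched in a few lines. In the sketch below, the scorer is a toy query-length heuristic standing in for the learned preference model, and the model labels and threshold are illustrative assumptions, not the paper's actual implementation:

```python
from dataclasses import dataclass


@dataclass
class Router:
    # Lower threshold -> more queries sent to the strong model (higher cost).
    threshold: float = 0.5

    def win_probability(self, query: str) -> float:
        # Placeholder for the learned router (e.g., a finetuned BERT-style
        # classifier trained on preference data); a length heuristic stands in.
        return min(len(query.split()) / 50.0, 1.0)

    def route(self, query: str) -> str:
        # Use the strong model only when the predicted benefit clears the bar.
        return "strong" if self.win_probability(query) >= self.threshold else "weak"


router = Router(threshold=0.4)
print(router.route("What is 2 + 2?"))  # 5 words -> score 0.1 -> "weak"
```

Sweeping the threshold traces out the cost-quality curve: at one extreme every query goes to the weak model, at the other the router matches the strong model's quality at full cost.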

๐Ÿ“ Abstract
Large language models (LLMs) exhibit impressive capabilities across a wide range of tasks, yet the choice of which model to use often involves a trade-off between performance and cost. More powerful models, though effective, come with higher expenses, while less capable models are more cost-effective. To address this dilemma, we propose several efficient router models that dynamically select between a stronger and a weaker LLM during inference, aiming to optimize the balance between cost and response quality. We develop a training framework for these routers leveraging human preference data and data augmentation techniques to enhance performance. Our evaluation on widely recognized benchmarks shows that our approach significantly reduces costs (by over 2× in certain cases) without compromising the quality of responses. Interestingly, our router models also demonstrate significant transfer learning capabilities, maintaining their performance even when the strong and weak models are changed at test time. This highlights the potential of these routers to provide a cost-effective yet high-performance solution for deploying LLMs.
Problem

Research questions and friction points this paper is trying to address.

Optimize cost-quality trade-off in LLM usage
Dynamic selection between strong and weak LLMs
Leverage human preference data for training routers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic LLM selection router
Training with human preference data
Cost reduction without quality loss
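The cost-reduction claim can be made concrete with a back-of-the-envelope calculation. The per-query prices and routing fraction below are illustrative assumptions, not figures from the paper:

```python
# Illustrative cost model: the strong model costs 10x the weak one per query,
# and the router sends 30% of queries to the strong model.
strong_price, weak_price = 10.0, 1.0  # relative cost per query (assumed)
p_strong = 0.3                        # fraction routed to strong model (assumed)

# Blended per-query cost of the routed system.
blended = p_strong * strong_price + (1 - p_strong) * weak_price

# Cost reduction versus always calling the strong model.
savings = strong_price / blended
print(round(blended, 2), round(savings, 1))
```

Under these assumed numbers the routed system costs 3.7 units per query instead of 10, roughly a 2.7× reduction, which is how routing a minority of queries to the cheap model compounds into the "over 2×" savings reported.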
Authors
Isaac Ong (UC Berkeley)
Amjad Almahairi (Anyscale)
Vincent Wu (UC Berkeley)
Wei-Lin Chiang (UC Berkeley)
Tianhao Wu (UC Berkeley)
Joseph Gonzalez (UC Berkeley)
M. W. Kadous (Canva)
Ion Stoica (UC Berkeley)