Chameleon: a Heterogeneous and Disaggregated Accelerator System for Retrieval-Augmented Language Models

📅 2023-10-15
🏛️ Proceedings of the VLDB Endowment
📈 Citations: 7
Influential: 1
🤖 AI Summary
To address high latency and poor scalability in Retrieval-Augmented Language Model (RALM) services—caused by tight coupling between LLM inference and vector search—this paper proposes a heterogeneous decoupled acceleration architecture. It physically separates GPU-accelerated LLM inference from FPGA-accelerated vector search, with a CPU cluster orchestrating cross-device coordination. The work introduces two key innovations: (1) a cross-device memory-semantic communication mechanism enabling efficient data exchange, and (2) a load-aware collaborative scheduling strategy for dynamic resource allocation. This design supports independent, elastic scaling of both components and hardware-customized acceleration per workload. Experimental evaluation demonstrates up to 2.16× lower end-to-end latency and 3.18× higher throughput compared to conventional CPU–GPU hybrid architectures, significantly enhancing the real-time service capability of RALM systems.
📝 Abstract
A Retrieval-Augmented Language Model (RALM) combines a large language model (LLM) with a vector database to retrieve context-specific knowledge during text generation. This strategy facilitates impressive generation quality even with smaller models, thus reducing computational demands by orders of magnitude. To serve RALMs efficiently and flexibly, we propose Chameleon, a heterogeneous accelerator system integrating both LLM and vector search accelerators in a disaggregated architecture. The heterogeneity ensures efficient serving for both inference and retrieval, while the disaggregation allows independent scaling of LLM and vector search accelerators to fulfill diverse RALM requirements. Our Chameleon prototype implements vector search accelerators on FPGAs and assigns LLM inference to GPUs, with CPUs as cluster coordinators. Evaluated on various RALMs, Chameleon exhibits up to 2.16× reduction in latency and 3.18× speedup in throughput compared to the hybrid CPU-GPU architecture. The promising results pave the way for adopting heterogeneous accelerators for not only LLM inference but also vector search in future RALM systems.
Problem

Research questions and friction points this paper is trying to address.

Efficiently serving Retrieval-Augmented Language Models (RALMs)
Integrating heterogeneous accelerators for LLM and vector search
Reducing latency and increasing throughput in RALM systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Heterogeneous accelerator system for RALMs
Disaggregated architecture for flexible scaling
FPGA vector search and GPU LLM inference