🤖 AI Summary
To address the high computational overhead, heavy hardware resource consumption, and inconsistent generation quality and response latency of long-context reasoning in large language models (LLMs), this paper proposes the first vector database architecture designed natively for LLM inference. The core method decouples KV cache management and attention computation from the inference engine and unifies both as an optimizable vector search problem. On top of this abstraction, a native query optimizer supports adaptive query optimization and service-level objective (SLO)-driven load scheduling, overcoming both the performance and the quality bottlenecks imposed by conventional KV caching and sparse attention mechanisms. Evaluated on industrial workloads and standard benchmarks, the approach reduces hardware resource consumption by 32%–57%, improves generation quality (e.g., BLEU, ROUGE) by 4.2–9.8 points, and cuts first-token latency variance by 41%, significantly enhancing inference efficiency and service stability.
📝 Abstract
AlayaDB is a cutting-edge vector database system natively architected for efficient and effective long-context inference for Large Language Models (LLMs) at AlayaDB AI. Specifically, it decouples the KV cache and attention computation from the LLM inference system and encapsulates them into a novel vector database system. For Model-as-a-Service (MaaS) providers, AlayaDB consumes fewer hardware resources and offers higher generation quality across workloads with different kinds of Service Level Objectives (SLOs), compared with existing alternative solutions (e.g., KV cache disaggregation, retrieval-based sparse attention). The crux of AlayaDB is that it abstracts the attention computation and cache management of LLM inference into a query processing procedure, and optimizes performance via a native query optimizer. In this work, we demonstrate the effectiveness of AlayaDB via (i) three use cases from our industry partners, and (ii) extensive experimental results on LLM inference benchmarks.
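To make the central abstraction concrete, the following is a minimal sketch of what "attention as a query processing procedure" can mean: instead of attending over every cached key, a query vector retrieves only its top-k most similar cached keys (a vector search over the KV cache) and computes softmax attention over that retrieved subset. This is an illustrative, hypothetical implementation; AlayaDB's actual index structures, query optimizer, and API are not described at this level of detail in the summary above, and the function name `sparse_attention` is our own.

```python
import math

def sparse_attention(q, keys, values, k):
    """Approximate one attention step by a top-k vector search over the KV cache.

    q:      query vector (list of floats)
    keys:   cached key vectors, one per past token
    values: cached value vectors, aligned with `keys`
    k:      number of cached entries to retrieve
    """
    # Score every cached key against the query by inner product
    # (the "vector search" part of the abstraction).
    scores = [sum(qi * ki for qi, ki in zip(q, key)) for key in keys]
    # Retrieve the k highest-scoring positions (the search result).
    topk = sorted(range(len(keys)), key=lambda i: scores[i], reverse=True)[:k]
    # Numerically stable softmax over the retrieved scores only.
    m = max(scores[i] for i in topk)
    weights = [math.exp(scores[i] - m) for i in topk]
    z = sum(weights)
    # Weighted sum of the corresponding cached value vectors.
    dim = len(values[0])
    out = [0.0] * dim
    for w, i in zip(weights, topk):
        for d in range(dim):
            out[d] += (w / z) * values[i][d]
    return out
```

With `k` equal to the cache size this reduces to ordinary full attention; shrinking `k` trades a bounded amount of quality for much less compute and memory traffic, which is the trade-off a native query optimizer could tune per request against its SLO.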