🤖 AI Summary
Large language models (LLMs) incur high inference costs, and conventional exact-match caching overlooks semantic similarity between queries, leading to redundant computation. Method: The paper proposes the first theoretically grounded semantic caching framework for LLMs, formulating cache eviction as an online learning problem that jointly optimizes semantic matching error and serving cost. This establishes a theoretical foundation for semantic cache replacement under unknown query distributions and cost parameters. The framework integrates semantic similarity estimation, distribution learning, and a replacement policy with provable performance guarantees, achieving both offline optimality and online adaptability. Results: Evaluation on synthetic data shows that the proposed method reduces redundant inference overhead, matching or outperforming existing baselines in both cache efficiency and response accuracy.
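To make the summarized formulation concrete, here is a minimal sketch of a semantic cache with cost-aware eviction. It is not the paper's algorithm: the class name `SemanticCache`, the cosine-similarity threshold, and the value score (observed hits weighted by the cost a hit saves) are all illustrative assumptions standing in for the learned replacement policy described above.

```python
import numpy as np

class SemanticCache:
    """Toy semantic cache: serve a cached response when an incoming
    query's embedding is close enough to a cached query's embedding;
    otherwise, if the cache is full, evict the entry with the lowest
    estimated value (hits x cost saved). Illustrative only."""

    def __init__(self, capacity, threshold=0.85):
        self.capacity = capacity    # maximum number of cached entries
        self.threshold = threshold  # cosine-similarity hit threshold (assumed)
        self.entries = []           # list of dicts: emb, response, hits, cost

    @staticmethod
    def _cosine(a, b):
        # Cosine similarity between two 1-D embedding vectors.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def get(self, query_emb):
        """Return (response, similarity) for the best semantic match,
        or None if no cached entry clears the threshold (a miss)."""
        best = None
        for entry in self.entries:
            sim = self._cosine(query_emb, entry["emb"])
            if sim >= self.threshold and (best is None or sim > best[1]):
                best = (entry, sim)
        if best is not None:
            best[0]["hits"] += 1  # track usage for the eviction score
            return best[0]["response"], best[1]
        return None

    def put(self, query_emb, response, serve_cost):
        """Insert a new entry after a miss, evicting the least valuable
        entry if the cache is full. The score is a stand-in for the
        learned value estimate in the paper's replacement policy."""
        if len(self.entries) >= self.capacity:
            self.entries.remove(
                min(self.entries, key=lambda e: e["hits"] * e["cost"])
            )
        self.entries.append(
            {"emb": query_emb, "response": response, "hits": 0, "cost": serve_cost}
        )
```

A deployment would replace the fixed threshold and the hits-times-cost score with the estimated semantic matching error and learned cost parameters that the framework optimizes jointly.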
📝 Abstract
Large Language Models (LLMs) are revolutionizing how users interact with information systems, yet their high inference cost poses serious scalability and sustainability challenges. Caching inference responses, allowing them to be retrieved without another forward pass through the LLM, has emerged as one possible solution. Traditional exact-match caching, however, overlooks the semantic similarity between queries, leading to unnecessary recomputation. Semantic caching addresses this by retrieving responses based on semantic similarity, but it introduces a fundamentally different cache eviction problem: one must account for mismatch costs between incoming queries and cached responses. Moreover, key system parameters, such as query arrival probabilities and serving costs, are often unknown and must be learned over time. Existing semantic caching methods are largely ad hoc, lacking theoretical foundations and unable to adapt to real-world uncertainty. In this paper, we present a principled, learning-based framework for semantic cache eviction under unknown query and cost distributions. We formulate both offline optimization and online learning variants of the problem, and develop provably efficient algorithms with state-of-the-art guarantees. We also evaluate our framework on a synthetic dataset, showing that our proposed algorithms match or outperform baseline methods.
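The abstract's online variant hinges on learning the unknown arrival probabilities and serving costs from observations. A minimal sketch of that learning component follows, assuming empirical frequency and cost estimates with a UCB-style optimism bonus; the class `ArrivalEstimator` and the exact bonus form are our illustrative assumptions, not the paper's algorithm.

```python
import math
from collections import defaultdict

class ArrivalEstimator:
    """Toy online estimator: arrival probabilities and serving costs
    are unknown and learned from observed queries. An optimism bonus
    favors retaining entries whose value is still uncertain."""

    def __init__(self):
        self.t = 0                           # total queries observed
        self.counts = defaultdict(int)       # arrivals per query class
        self.cost_sums = defaultdict(float)  # summed observed serving costs

    def observe(self, query_id, cost):
        # Record one arrival of this query class and its serving cost.
        self.t += 1
        self.counts[query_id] += 1
        self.cost_sums[query_id] += cost

    def score(self, query_id):
        """Optimistic estimate of the expected cost saved per step by
        caching this query class: p_hat * c_hat plus an exploration bonus."""
        n = self.counts[query_id]
        if n == 0:
            return float("inf")              # never observed: explore first
        p_hat = n / self.t                   # empirical arrival probability
        c_hat = self.cost_sums[query_id] / n # empirical mean serving cost
        bonus = math.sqrt(2 * math.log(max(self.t, 2)) / n)
        return p_hat * c_hat + bonus
```

An eviction policy built on this estimator would keep the cache entries with the highest scores; as counts grow, the bonus shrinks and the scores converge to the true expected savings, which is the kind of adaptivity the online guarantees concern.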