A Generative Caching System for Large Language Models

📅 2025-03-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address high latency, excessive API costs, and poor responsiveness to long-tail queries in large language model (LLM) inference, this paper proposes the first generative caching system tailored for LLMs. Departing from conventional cache paradigms, our approach leverages semantic embedding similarity for query matching and employs a lightweight indexing structure for efficient retrieval. We introduce a novel generative caching mechanism that synthesizes high-quality responses for unseen queries via multi-response fusion. Furthermore, we design a multi-objective cache replacement and update policy that jointly optimizes semantic similarity, response quality, latency, and cost. Experimental results demonstrate that our system significantly reduces end-to-end latency and API invocation costs compared to GPTCache, while improving coverage of long-tail queries and response consistency—achieving both performance acceleration and effective knowledge reuse.
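The core matching step described above — reusing a cached response when a new query is semantically close enough to a stored one — can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: a real system would use a learned sentence-embedding model and an approximate-nearest-neighbor index, whereas here a bag-of-words embedding and a linear scan stand in for both, and the `SemanticCache` class and its 0.8 threshold are hypothetical.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real sentence-embedding model: bag-of-words token counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    """Hypothetical semantic cache: a hit is any stored query whose
    embedding similarity to the new query exceeds a threshold."""

    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.entries = []  # list of (query_embedding, response)

    def get(self, query: str):
        q = embed(query)
        best, best_sim = None, 0.0
        for emb, response in self.entries:
            sim = cosine(q, emb)
            if sim > best_sim:
                best, best_sim = response, sim
        # Return the closest cached response only if it clears the threshold.
        return best if best_sim >= self.threshold else None

    def put(self, query: str, response: str):
        self.entries.append((embed(query), response))
```

On a miss (`get` returns `None`), the system would fall through to the LLM and `put` the fresh response; the generative-fusion step the summary describes would additionally combine several near-miss entries into a synthesized answer, which this sketch does not attempt.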

📝 Abstract
Caching has the potential to be of significant benefit for accessing large language models (LLMs) due to their high latencies, which typically range from a few seconds to well over a minute. Furthermore, many LLMs charge money for queries; caching thus has a clear monetary benefit. This paper presents a new caching system for improving user experiences with LLMs. In addition to reducing both latencies and monetary costs for accessing LLMs, our system also provides important features that go beyond the performance benefits typically associated with caches. A key feature we provide is generative caching, wherein multiple cached responses can be synthesized to provide answers to queries which have never been seen before. Our generative caches function as repositories of valuable information which can be mined and analyzed. We also improve upon past semantic caching techniques by tailoring the caching algorithms to optimally balance cost and latency reduction with the quality of responses provided. Performance tests indicate that our caches are considerably faster than GPTCache.
Problem

Research questions and friction points this paper is trying to address.

Reducing latency and cost for LLM access
Generating answers from multiple cached responses
Optimizing cache algorithms for response quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative caching synthesizes multiple cached responses
Tailored algorithms balance cost, latency, and response quality
Repositories of information mined for improved user experiences
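One of the contributions above is a replacement policy that balances cost, latency, and response quality rather than recency alone. A minimal sketch of what such a multi-objective eviction score could look like is below; the fields, weights, and `retention_score` function are assumptions for illustration, not the paper's actual policy.

```python
from dataclasses import dataclass

@dataclass
class CacheEntry:
    quality: float          # estimated response quality, 0..1
    hit_rate: float         # recent hit frequency, 0..1
    latency_saved_s: float  # LLM latency avoided per hit, in seconds
    cost_saved_usd: float   # API cost avoided per hit, in dollars

def retention_score(e: CacheEntry,
                    w_quality: float = 1.0, w_hits: float = 1.0,
                    w_latency: float = 0.1, w_cost: float = 10.0) -> float:
    # Weighted sum over the objectives; higher = more valuable to keep.
    return (w_quality * e.quality
            + w_hits * e.hit_rate
            + w_latency * e.latency_saved_s
            + w_cost * e.cost_saved_usd)

def evict_one(entries: list) -> CacheEntry:
    # When the cache is full, drop the entry with the lowest retention score.
    return min(entries, key=retention_score)
```

Under this kind of policy, a frequently hit, high-quality entry that saves an expensive slow API call outscores a stale low-quality one even if the latter was inserted more recently.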