🤖 AI Summary
Existing LLM caching approaches struggle to identify semantically similar queries and to capture context dependencies, resulting in low hit rates and high error rates. This paper proposes MeanCache, the first user-centric semantic caching system for LLMs, designed to address these limitations. MeanCache integrates federated learning to preserve user privacy, introduces context-chain encoding to explicitly model conversational dependencies, and incorporates a lightweight local semantic similarity model for efficient cache matching. Compared to the state-of-the-art method, MeanCache achieves a 17% improvement in F-score, a 20% gain in precision, an 83% reduction in storage overhead, and an 11% speedup in cache hit-or-miss decisions. These advances significantly enhance caching performance for context-sensitive queries, thereby reducing LLM inference costs, service load, and the associated carbon footprint.
📝 Abstract
Large Language Models (LLMs) like ChatGPT and Llama have revolutionized natural language processing and search engine dynamics. However, these models incur exceptionally high computational costs. For instance, GPT-3 has 175 billion parameters, and a single inference demands billions of floating-point operations. Caching is a natural solution to reduce LLM inference costs on repeated queries, which constitute about 31% of all queries. However, existing caching methods can neither identify semantic similarities among LLM queries nor handle contextual queries, leading to unacceptable false hit and false miss rates. This paper introduces MeanCache, a user-centric semantic cache for LLM-based services that identifies semantically similar queries to determine a cache hit or miss. With MeanCache, the response to a user's semantically similar query can be retrieved from a local cache rather than by re-querying the LLM, thus reducing costs, service provider load, and environmental impact. MeanCache leverages Federated Learning (FL) to collaboratively train a query similarity model without violating user privacy. By placing a local cache on each user's device and using FL, MeanCache reduces latency and costs and enhances model performance, resulting in lower false hit rates. MeanCache also encodes a context chain for every cached query, offering a simple yet highly effective mechanism to distinguish responses to contextual queries from responses to standalone ones. Our experiments, benchmarked against the state-of-the-art caching method, reveal that MeanCache attains an approximately 17% higher F-score and a 20% increase in precision on semantic cache hit-or-miss decisions, while performing even better on contextual queries. It also reduces the storage requirement by 83% and accelerates semantic cache hit-or-miss decisions by 11%.
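The hit-or-miss decision described above can be sketched roughly as follows. This is an illustrative stand-in, not the paper's implementation: the bag-of-words embedding substitutes for MeanCache's similarity model trained via federated learning, and the `SemanticCache` class name and `0.8` threshold are assumptions made for the example.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; MeanCache instead uses a compact
    # similarity model trained collaboratively with federated learning.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(v * b.get(t, 0) for t, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    """Local per-user cache deciding hit/miss by semantic similarity."""

    def __init__(self, threshold=0.8):  # assumed threshold
        self.threshold = threshold
        self.entries = []  # (embedding, context_chain, response)

    def store(self, query, response, context_chain=()):
        self.entries.append((embed(query), tuple(context_chain), response))

    def lookup(self, query, context_chain=()):
        q = embed(query)
        best, best_sim = None, 0.0
        for emb, chain, resp in self.entries:
            # Context chains must match, so a follow-up query is never
            # answered with a standalone query's cached response.
            if chain != tuple(context_chain):
                continue
            sim = cosine(q, emb)
            if sim >= self.threshold and sim > best_sim:
                best, best_sim = resp, sim
        return best  # None means a cache miss: forward to the LLM
```

A dissimilar query, or a semantically similar query asked under a different conversational context, falls through to the LLM, which is exactly the false-hit failure mode the context chain is meant to prevent.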