AI Summary
This work addresses the challenge of balancing cost and latency in large language model (LLM) serving under limited cache capacity, where conventional caching strategies relying on recency and frequency signals exhibit unstable performance under real-world workloads. The paper proposes RAC, a semantic-aware online cache eviction policy that, for the first time, incorporates semantic relatedness into LLM caching decisions. RAC employs an online learning framework that integrates topic modeling and graph-structured analysis to dynamically extract two novel signals: "topical prevalence," capturing long-term reuse potential at the thematic level, and "structural importance," reflecting future reuse value within local dependency contexts. Experimental results demonstrate that RAC improves cache hit rates by 20%–30% over the strongest baselines across diverse real-world workloads, while exhibiting strong generalization and stability.
Abstract
The scaling of Large Language Model (LLM) services faces significant cost and latency challenges, making effective caching under tight capacity crucial. Existing cache replacement policies, from heuristics to learning-based methods, predominantly rely on limited-window statistics such as recency and frequency. We show these signals are not robust for real-world LLM workloads, which exhibit long reuse distances and sparse local recurrence.
To address these limitations, we propose Relation-Aware Cache (RAC), an online eviction strategy that leverages semantic relations among requests to guide eviction decisions. RAC synthesizes two relation-aware signals: (1) Topical Prevalence, which aggregates access evidence at the topic level to capture long-horizon reuse; and (2) Structural Importance, which leverages local intra-topic dependency structure to discriminate entries by their future reuse value. Extensive evaluations show that RAC maintains high effectiveness across diverse workloads, consistently surpassing state-of-the-art baselines by 20%–30% in cache hit ratio.
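To make the two signals concrete, below is a minimal Python sketch of a relation-aware eviction score, assuming a simple topic-level hit counter as a stand-in for topical prevalence and a co-access degree count as a stand-in for structural importance. All names and parameters here (e.g., RelationAwareCacheSketch, alpha, related_keys) are hypothetical illustrations, not the paper's actual RAC implementation.

```python
from collections import defaultdict

class RelationAwareCacheSketch:
    """Toy cache whose eviction score blends a topic-level and a structural signal."""

    def __init__(self, capacity, alpha=0.5):
        self.capacity = capacity
        self.alpha = alpha                        # hypothetical weight between the two signals
        self.store = {}                           # key -> cached value
        self.topic_of = {}                        # key -> topic id (e.g., from a topic model)
        self.topic_hits = defaultdict(float)      # topic-level access evidence ("topical prevalence")
        self.intra_topic_degree = defaultdict(float)  # co-access degree ("structural importance" proxy)

    def _score(self, key):
        prevalence = self.topic_hits[self.topic_of[key]]
        importance = self.intra_topic_degree[key]
        return self.alpha * prevalence + (1 - self.alpha) * importance

    def access(self, key, value, topic, related_keys=()):
        """Record an access; return True on a cache hit, False on a miss."""
        self.topic_hits[topic] += 1.0
        # Treat co-accessed same-topic entries as edges of a local dependency graph.
        for other in related_keys:
            if self.topic_of.get(other) == topic:
                self.intra_topic_degree[key] += 1.0
                self.intra_topic_degree[other] += 1.0
        if key in self.store:
            return True
        if len(self.store) >= self.capacity:
            victim = min(self.store, key=self._score)  # evict the lowest-scoring entry
            del self.store[victim]
        self.store[key] = value
        self.topic_of[key] = topic
        return False
```

In this sketch the eviction step simply removes the entry with the lowest combined score; the policy described in the abstract instead learns such signals online via topic modeling and graph-structured analysis.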