🤖 AI Summary
To address declining cache hit ratios caused by dynamic shifts in cache access distributions, this paper proposes AdaptiveClimb, an adaptive cache replacement framework, and its extension, DynamicAdaptiveClimb. Methodologically, it combines lightweight hit/miss feedback with an incremental rank-advancement (CLIMB) mechanism to tune the jump distance online, and it further introduces a cache-capacity self-scaling strategy that requires no complex state maintenance. The key contribution is a lightweight, low-overhead, fully unsupervised dual-adaptation mechanism, adapting both the jump distance and the cache size, which significantly improves responsiveness to working-set fluctuations. Experimental evaluation across 1,067 real-world traces shows that DynamicAdaptiveClimb improves hit ratio by up to 29% over FIFO, outperforms the next-best algorithms, AdaptiveClimb and SIEVE, by roughly 10–15%, and effectively reduces miss penalty.
📝 Abstract
Efficient cache management is critical for optimizing system performance, and numerous caching mechanisms have been proposed, each exploring different insertion and eviction strategies. In this paper, we present AdaptiveClimb and its extension, DynamicAdaptiveClimb, two novel cache replacement policies that leverage lightweight cache adaptation to outperform traditional approaches. Unlike the classic Least Recently Used (LRU) and Incremental Rank Progress (CLIMB) policies, AdaptiveClimb dynamically adjusts the promotion distance (jump) of cached objects based on recent hit and miss patterns, requiring only a single tunable parameter and no per-item statistics. This enables rapid adaptation to changing access distributions while maintaining low overhead. Building on this foundation, DynamicAdaptiveClimb further enhances adaptability by automatically tuning the cache size in response to workload demands. Our comprehensive evaluation across a diverse set of real-world traces, comprising 1,067 traces from 6 datasets, demonstrates that DynamicAdaptiveClimb consistently achieves substantial speedups and higher hit ratios than other state-of-the-art algorithms. In particular, our approach achieves up to a 29% improvement in hit ratio and a substantial reduction in miss penalties compared to the FIFO baseline. Furthermore, it outperforms the next-best contenders, AdaptiveClimb and SIEVE [43], by approximately 10% to 15%, especially in environments with fluctuating working-set sizes. These results highlight the effectiveness of our approach in delivering efficient performance, making it well suited for modern, dynamic caching environments.
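To make the abstract's description concrete, the following is a minimal sketch of an AdaptiveClimb-style policy: an ordered cache in which a hit promotes the object `jump` positions toward the top, a miss evicts the bottom object, and the single `jump` parameter is adjusted from hit/miss feedback. The class name, the initial jump value, and the specific feedback rule (shrink on hit, grow on miss) are illustrative assumptions, not the paper's exact specification.

```python
class AdaptiveClimbSketch:
    """Illustrative AdaptiveClimb-style cache (not the paper's exact algorithm).

    The cache is an ordered list with index 0 as the top of the ranking.
    A hit promotes the object `jump` positions toward the top; a miss
    evicts the bottom object and inserts the new one at the bottom.
    The single tunable parameter, `jump`, is adapted from recent
    hit/miss feedback; the update rule below is an assumed example.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []                 # index 0 = top of the ranking
        self.jump = max(1, capacity // 2)  # assumed initial jump distance

    def access(self, key):
        """Return True on a cache hit, False on a miss."""
        if key in self.items:
            i = self.items.index(key)
            # Hit: promote the object `jump` positions toward the top.
            self.items.insert(max(0, i - self.jump), self.items.pop(i))
            # Assumed feedback rule: hits shrink the jump distance.
            self.jump = max(1, self.jump - 1)
            return True
        # Miss: evict the bottom object if the cache is full.
        if len(self.items) >= self.capacity:
            self.items.pop()
        # Insert the new object at the bottom, as in CLIMB.
        self.items.append(key)
        # Assumed feedback rule: misses grow the jump distance.
        self.jump = min(self.capacity - 1, self.jump + 1)
        return False
```

Because only one counter (`jump`) is maintained rather than per-item statistics, the bookkeeping overhead stays constant regardless of cache size, which is the property the abstract emphasizes.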