AI Summary
In real-world scenarios such as VR, client requests exhibit strong temporal and contextual correlations, which conventional cache replacement policies (e.g., LRU, LFU) ignore, leading to suboptimal hit rates. To address this, we propose a grouped client request model that captures diverse dependency patterns under shared contextual constraints. We further introduce causal inference into edge caching for the first time, designing LFRU: an online adaptive policy that models request groups via an enhanced Independent Reference Model, dynamically infers causal dependencies among content requests through online learning, and jointly leverages LRU- and LFU-inspired features for eviction decisions. Theoretical analysis characterizes how cache capacity governs optimal policy structure. Evaluated on a custom VR dataset, LFRU achieves up to 2.9× and 1.9× higher hit rates than LRU and LFU, respectively, and approaches offline-optimal performance in structured correlation settings.
Abstract
Efficient edge caching reduces latency and alleviates backhaul congestion in modern networks. Traditional caching policies, such as Least Recently Used (LRU) and Least Frequently Used (LFU), perform well under specific request patterns. LRU excels in workloads with strong temporal locality, while LFU is effective when content popularity remains static. However, real-world client requests often exhibit correlations due to shared contexts and coordinated activities. This is particularly evident in Virtual Reality (VR) environments, where groups of clients navigate shared virtual spaces, leading to correlated content requests.
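To make the contrast between the two baselines concrete, here is a minimal sketch of LRU and LFU replacement (standard textbook versions, not tied to any implementation in the paper). The class and method names are illustrative.

```python
from collections import OrderedDict, Counter

class LRUCache:
    """Evicts the least recently used object when the cache is full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # insertion order tracks recency

    def request(self, obj):
        hit = obj in self.store
        if hit:
            self.store.move_to_end(obj)          # refresh recency on a hit
        else:
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)   # evict least recently used
            self.store[obj] = True
        return hit

class LFUCache:
    """Evicts the least frequently used object when the cache is full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = set()
        self.freq = Counter()  # request counts persist across evictions

    def request(self, obj):
        self.freq[obj] += 1
        hit = obj in self.store
        if not hit:
            if len(self.store) >= self.capacity:
                victim = min(self.store, key=self.freq.__getitem__)
                self.store.remove(victim)        # evict least frequently used
            self.store.add(obj)
        return hit
```

Under a request stream with strong temporal locality, LRU keeps recently touched objects resident; under a static popularity distribution, LFU converges to caching the globally most popular objects. Neither exploits cross-client correlation.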
In this paper, we introduce the grouped client request model, a generalization of the Independent Reference Model that explicitly captures different types of request correlations. Our theoretical analysis of LRU under this model reveals that the optimal causal caching policy depends on cache size: LFU is optimal for small to moderate caches, while LRU outperforms it for larger caches. To address the limitations of existing policies, we propose Least Following and Recently Used (LFRU), a novel online caching policy that dynamically infers and adapts to causal relationships in client requests to optimize evictions. LFRU prioritizes objects likely to be requested based on inferred dependencies, achieving near-optimal performance compared to the offline optimal Belady policy in structured correlation settings.
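The following sketch illustrates the general idea of a dependency-aware eviction policy of this kind; it is not the paper's LFRU algorithm. The class name, the co-occurrence counter, and the score weights are all hypothetical choices for illustration: the eviction score combines a recency feature (LRU-like), a frequency feature (LFU-like), and an online estimate of how often an object follows the most recent requests.

```python
from collections import defaultdict, deque

class DependencyAwareCache:
    """Illustrative sketch of dependency-aware eviction (not the paper's
    LFRU): keep objects that are recent, frequent, or likely to follow
    the objects requested most recently."""
    def __init__(self, capacity, history=4):
        self.capacity = capacity
        self.clock = 0
        self.last_use = {}               # recency feature (LRU-like)
        self.freq = defaultdict(int)     # frequency feature (LFU-like)
        self.follows = defaultdict(int)  # follows[(a, b)]: times b appeared soon after a
        self.recent = deque(maxlen=history)
        self.store = set()

    def _score(self, obj):
        # Higher score = more worth keeping; the weights are arbitrary.
        dep = sum(self.follows[(a, obj)] for a in self.recent)
        return self.last_use[obj] + self.freq[obj] + 2 * dep

    def request(self, obj):
        self.clock += 1
        for a in self.recent:
            self.follows[(a, obj)] += 1  # online update of follow counts
        hit = obj in self.store
        self.last_use[obj] = self.clock
        self.freq[obj] += 1
        if not hit:
            if len(self.store) >= self.capacity:
                victim = min(self.store, key=self._score)
                self.store.remove(victim)
            self.store.add(obj)
        self.recent.append(obj)
        return hit
```

The point of the sketch is the eviction rule: when a group of clients repeatedly requests object b shortly after object a, the learned follow counts keep b cached even if b is neither the most recent nor the most frequent object, which is exactly the case where pure LRU and LFU underperform.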
We develop VR-based datasets to evaluate caching policies under realistic correlated requests. Our results show that LFRU consistently performs at least as well as LRU and LFU, outperforming LRU by up to 2.9× and LFU by up to 1.9× in certain settings.