Inferring Causal Relationships to Improve Caching for Clients with Correlated Requests: Applications to VR

📅 2025-12-09
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
In real-world scenarios such as VR, client requests exhibit strong temporal and contextual correlations, which conventional cache replacement policies (e.g., LRU, LFU) ignore, leading to suboptimal hit rates. To address this, we propose a grouped client request model that captures diverse dependency patterns under shared contextual constraints. We further introduce causal inference into edge caching for the first time, designing LFRU: an online adaptive policy that models request groups via an enhanced Independent Reference Model, dynamically infers causal dependencies among content requests through online learning, and jointly leverages LRU- and LFU-inspired features for eviction decisions. Theoretical analysis characterizes how cache capacity governs optimal policy structure. Evaluated on a custom VR dataset, LFRU achieves up to 2.9× and 1.9× higher hit rates than LRU and LFU, respectively, and approaches offline-optimal performance in structured correlation settings.
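The summary describes LFRU as jointly leveraging LRU- and LFU-inspired features for eviction. A minimal toy sketch of what a blended recency-frequency eviction score can look like; the weight `alpha` and the linear scoring rule are illustrative assumptions, not the authors' design, and LFRU's causal-inference component is omitted here:

```python
class RecencyFrequencyCache:
    """Toy cache whose eviction score blends LRU recency with LFU frequency.

    Illustrative sketch only: LFRU additionally infers causal dependencies
    between requests, which this toy omits. `alpha` and the scoring rule
    are assumptions, not the paper's design.
    """

    def __init__(self, capacity, alpha=0.5):
        self.capacity = capacity
        self.alpha = alpha      # 1.0 -> pure recency, 0.0 -> pure frequency
        self.clock = 0          # logical time, advanced on every request
        self.last_used = {}     # object -> last access time (cached objects)
        self.counts = {}        # object -> total request count

    def request(self, obj):
        """Process one request; return True on a cache hit."""
        self.clock += 1
        hit = obj in self.last_used
        self.counts[obj] = self.counts.get(obj, 0) + 1
        if not hit and len(self.last_used) >= self.capacity:
            # Evict the cached object with the lowest blended score.
            victim = min(
                self.last_used,
                key=lambda o: self.alpha * self.last_used[o]
                + (1 - self.alpha) * self.counts[o],
            )
            del self.last_used[victim]
        self.last_used[obj] = self.clock
        return hit
```

With `alpha=1.0` this degenerates to LRU and with `alpha=0.0` to LFU, which matches the summary's framing of the two classical policies as endpoints of one design space.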

๐Ÿ“ Abstract
Efficient edge caching reduces latency and alleviates backhaul congestion in modern networks. Traditional caching policies, such as Least Recently Used (LRU) and Least Frequently Used (LFU), perform well under specific request patterns. LRU excels in workloads with strong temporal locality, while LFU is effective when content popularity remains static. However, real-world client requests often exhibit correlations due to shared contexts and coordinated activities. This is particularly evident in Virtual Reality (VR) environments, where groups of clients navigate shared virtual spaces, leading to correlated content requests. In this paper, we introduce the grouped client request model, a generalization of the Independent Reference Model that explicitly captures different types of request correlations. Our theoretical analysis of LRU under this model reveals that the optimal causal caching policy depends on cache size: LFU is optimal for small to moderate caches, while LRU outperforms it for larger caches. To address the limitations of existing policies, we propose Least Following and Recently Used (LFRU), a novel online caching policy that dynamically infers and adapts to causal relationships in client requests to optimize evictions. LFRU prioritizes objects likely to be requested based on inferred dependencies, achieving near-optimal performance compared to the offline optimal Belady policy in structured correlation settings. We develop VR-based datasets to evaluate caching policies under realistic correlated requests. Our results show that LFRU consistently performs at least as well as LRU and LFU, outperforming LRU by up to 2.9x and LFU by up to 1.9x in certain settings.
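The abstract benchmarks against the offline-optimal Belady policy, which, on a miss, evicts the cached object whose next request lies farthest in the future. A small reference sketch of that baseline (a quadratic-time illustration for short traces, not the paper's evaluation code):

```python
def belady_hits(requests, capacity):
    """Count cache hits under Belady's offline-optimal policy.

    On each miss with a full cache, evict the object whose next use in the
    trace is farthest away (or never). O(n^2) reference implementation for
    small traces; illustrative only.
    """
    cache = set()
    hits = 0
    for i, obj in enumerate(requests):
        if obj in cache:
            hits += 1
            continue
        if len(cache) >= capacity:
            def next_use(o):
                # Position of o's next request after i; inf if never again.
                for j in range(i + 1, len(requests)):
                    if requests[j] == o:
                        return j
                return float("inf")
            cache.remove(max(cache, key=next_use))
        cache.add(obj)
    return hits
```

For the trace `a, b, c, a, b` with capacity 2, Belady evicts `b` when `c` arrives (its next use is farther than `a`'s), yielding one hit, whereas plain LRU gets none; gaps like this are what an online policy such as LFRU tries to close.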
Problem

Research questions and friction points this paper is trying to address.

Traditional caching policies fail with correlated client requests in VR environments
Existing methods lack adaptation to dynamic causal relationships in content requests
Current approaches perform suboptimally under structured correlation patterns
Innovation

Methods, ideas, or system contributions that make the work stand out.

LFRU policy infers causal relationships for eviction decisions
Grouped client model captures various request correlation types
Dynamic adaptation to inferred dependencies optimizes cache performance
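The dependency-inference idea in the bullets above can be illustrated with a hypothetical first-order estimator that counts which objects tend to follow which in a request trace; the paper's actual online estimator may differ from this sketch:

```python
from collections import defaultdict

def follow_probabilities(trace):
    """Estimate empirical P(next = b | current = a) from a request trace.

    Hypothetical first-order sketch of the kind of "follows" dependency
    an online policy could learn; not the paper's estimator.
    """
    pair_counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for a, b in zip(trace, trace[1:]):   # consecutive request pairs
        pair_counts[a][b] += 1
        totals[a] += 1
    return {a: {b: c / totals[a] for b, c in succ.items()}
            for a, succ in pair_counts.items()}
```

A policy could then protect or prefetch objects with a high follow-probability from the object just requested, which is the intuition behind prioritizing "following" objects in eviction decisions.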
Agrim Bari
The University of Texas at Austin, USA
Gustavo de Veciana
Professor of Electrical and Computer Engineering, U.T. Austin
Communication Systems, Networks, Performance
Yuqi Zhou
Purdue University, USA