🤖 AI Summary
To address the challenges of limited cache capacity and dynamic user demand in wireless video caching networks, this paper proposes a privacy-preserving two-stage caching optimization framework. First, a Transformer model is collaboratively trained via federated learning (FL) to jointly capture user-specific behavioral patterns and global content popularity for multi-slot demand forecasting. Second, the long-term system utility—comprising delivery rewards, transmission costs, and cache replacement costs—is formulated as a multi-stage knapsack problem and solved optimally using integer linear programming. The key innovation lies in the first integration of privacy-preserving FL with multi-stage utility-aware caching decisions, moving beyond conventional cache-hit-ratio (CHR)-centric approaches. Experiments demonstrate that the proposed method achieves prediction accuracy comparable to centralized learning, while significantly outperforming existing baselines in both system utility and caching decision quality.
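The multi-stage formulation above can be illustrated with a toy sketch: a dynamic program over cache states in which each slot's utility is the expected delivery reward of the cached videos minus a per-file replacement cost for newly written content. This is an illustrative brute-force stand-in, not the paper's integer linear program; the function and parameter names (`plan_caching`, `replace_cost`) are hypothetical, and files are assumed to be unit-size.

```python
from itertools import combinations

def plan_caching(rewards, capacity, replace_cost):
    """Brute-force DP sketch of a multi-slot caching knapsack.

    rewards[t][v] : expected utility of having video v cached in slot t
                    (delivery reward net of transmission cost).
    capacity      : number of unit-size videos the cache can hold.
    replace_cost  : cost per video newly written into the cache.
    Returns (total utility, per-slot cache sets). The cache starts empty.
    """
    n = len(rewards[0])
    states = [frozenset(c) for c in combinations(range(n), capacity)]
    # Best (utility, plan) reachable for each cache state so far.
    best = {frozenset(): (0.0, [])}
    for r in rewards:
        nxt = {}
        for s in states:
            gain = sum(r[v] for v in s)
            top = None
            for prev, (u, plan) in best.items():
                # Replacement cost is charged only for newly cached files.
                cand = u + gain - replace_cost * len(s - prev)
                if top is None or cand > top[0]:
                    top = (cand, plan + [s])
            nxt[s] = top
        best = nxt
    return max(best.values(), key=lambda x: x[0])

# Demand shifts from video 0 to video 2 across two slots; with a cheap
# replacement cost the optimal plan swaps the cached file mid-horizon.
rewards = [[3, 1, 0], [0, 1, 3]]
util, plan = plan_caching(rewards, capacity=1, replace_cost=1)
# util == 4.0, plan == [{0}, {2}]
```

This exhaustive search is exponential in the catalog size, which is exactly why the paper casts the problem as an integer linear program instead; the sketch only makes the slot-to-slot coupling through replacement costs concrete.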
📝 Abstract
Video caching can significantly improve delivery efficiency and enhance the quality of video streaming, which constitutes the majority of wireless communication traffic. Due to limited cache size, caching strategies must be designed to adapt to dynamic user demand in order to maximize system revenue. The system revenue depends on the benefits of delivering the requested videos and the costs of (a) transporting the files to the users and (b) cache replacement. Since the cache content at any point in time impacts the replacement costs in the future, demand predictions over multiple cache placement slots become an important prerequisite for efficient cache planning. Motivated by this, we introduce a novel two-stage privacy-preserving solution for revenue optimization in wireless video caching networks. First, we train a Transformer using privacy-preserving federated learning (FL) to predict multi-slot future demands. Given that prediction results are never entirely accurate, especially for longer horizons, we further combine global content popularity with per-user prediction results to estimate the content demand distribution. Then, in the second stage, we leverage these estimation results to find caching strategies that maximize the long-term system revenue. This latter problem takes the form of a multi-stage knapsack problem, which we then transform into an integer linear program. Our extensive simulation results demonstrate that (i) our FL solution delivers nearly identical performance to that of the ideal centralized solution and outperforms other existing caching methods, and (ii) our novel revenue optimization approach provides deeper system performance insights than traditional cache hit ratio (CHR)-based optimization approaches.
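The demand-estimation step described above, blending per-user predictions with global content popularity, can be sketched as a simple convex mixture. The mixing weight `alpha` and the function name are assumptions for illustration, not details from the paper; the idea is merely that unreliable long-horizon per-user predictions are shrunk toward the global popularity prior.

```python
def estimate_demand(per_user_probs, global_popularity, alpha=0.7):
    """Blend a user's predicted request distribution with global popularity.

    alpha (hypothetical weight): trust placed in the per-user prediction;
    1 - alpha falls back on the global prior. Output is renormalized to
    sum to one so it remains a valid demand distribution.
    """
    mixed = [alpha * p + (1 - alpha) * g
             for p, g in zip(per_user_probs, global_popularity)]
    total = sum(mixed)
    return [m / total for m in mixed]

# A user strongly predicted to request video 0, moderated by a flatter
# global popularity profile over three videos.
demand = estimate_demand([0.8, 0.2, 0.0], [0.3, 0.3, 0.4], alpha=0.5)
# demand == [0.55, 0.25, 0.2]
```

In a full pipeline this per-user distribution would be aggregated across users to obtain the expected per-video demand that feeds the caching optimization stage.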