AI Summary
This work addresses the challenge of tightly coupled cache decisions among multiple base stations in overlapping wireless coverage regions, where performance is sensitive to network topology and temporal reuse patterns. To this end, it pioneers the use of a large language model (LLM) as the autonomous decision-making engine, converting cache states and request statistics into textual prompts via a text-to-action interface and generating constraint-compliant cache replacement policies. The authors introduce an opportunity-aware reward mechanism and a two-stage alignment training paradigm that combines supervised fine-tuning with Group Relative Policy Optimization. Evaluated in a five-base-station scenario, the approach achieves near-exhaustive-search performance (0.610 vs. 0.617), outperforms LFU by 4.1%, and transfers zero-shot across varying cache capacities, content catalog sizes, and user densities.
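As a rough illustration of the text-to-action loop described above, the sketch below renders a cache state into a prompt and validates the model's reply against feasibility constraints before execution. The state fields, prompt wording, and action grammar (`NOOP` / `REPLACE`) are hypothetical placeholders, not the paper's actual schema.

```python
import json
import re

# Hypothetical per-BS state: cached content IDs plus recent request counts.
state = {
    "bs_id": 3,
    "cache": [12, 47, 88, 101],          # content IDs currently cached
    "capacity": 4,
    "request_counts": {"12": 9, "47": 2, "203": 15, "88": 6},
    "neighbor_caches": {"bs_2": [47, 203], "bs_4": [12, 55]},
}

def render_prompt(state: dict) -> str:
    """Serialize the cache state and request statistics into a textual prompt."""
    return (
        "You control the cache of one base station in a cooperative edge network.\n"
        f"State: {json.dumps(state)}\n"
        "Reply with exactly one action: NOOP, or REPLACE <evict_id> <insert_id>."
    )

def parse_action(reply: str, state: dict):
    """Validate the LLM's reply against feasibility constraints.

    Falls back to NOOP (None) on any malformed or infeasible output, so the
    environment never executes an invalid replacement.
    """
    reply = reply.strip().upper()
    if reply == "NOOP":
        return None
    m = re.fullmatch(r"REPLACE (\d+) (\d+)", reply)
    if m is None:
        return None                       # unparsable reply -> treat as NOOP
    evict, insert = int(m.group(1)), int(m.group(2))
    if evict not in state["cache"] or insert in state["cache"]:
        return None                       # infeasible replacement -> NOOP
    return (evict, insert)
```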
Abstract
Cooperative edge caching in overlapping zones creates intricate coupling among Base Station (BS) decisions, making content replacement highly sensitive to topology and temporal reuse. Heuristics are often myopic, and Deep Reinforcement Learning lacks robustness under dynamics; this paper therefore proposes a Large Language Model (LLM)-based multi-BS orchestrator. The LLM acts as the sole autonomous decision engine, interacting with the environment through a validated text-to-action interface. Each time slot, the system renders environmental states -- including cache inventories and request-frequency statistics -- into prompts and parses the LLM-generated decisions against strict feasibility constraints. We align the model through a two-stage paradigm: Supervised Fine-Tuning on oracle trajectories for syntax and initialization, followed by Group Relative Policy Optimization. The latter employs an "opportunity-aware" reward that prioritizes multi-step cooperative gains relative to a No-Operation baseline. Evaluated on identical request traces, the orchestrator approaches exhaustive-search performance (0.610 vs. 0.617 in a 5-BS scenario), outperforms classical baselines (e.g., +4.1% over Least Frequently Used), and demonstrates robust zero-shot transfer across varying cache capacities, library sizes, and user densities.
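To make the reward design concrete, here is a minimal sketch of how an opportunity-aware reward and GRPO-style group-relative advantages could be computed. The simulator hooks (`env.clone()`, `env.step()` returning a per-slot hit value) and the rollout horizon are assumptions for illustration; the paper's exact formulation may differ.

```python
from statistics import mean, stdev

def rollout_gain(env, first_action, horizon: int) -> float:
    """Cumulative hit value over `horizon` slots after applying one action.

    `env.step(action)` is a hypothetical simulator hook returning that
    slot's cooperative hit value; `None` means No-Operation.
    """
    total = env.step(first_action)        # apply the candidate action once
    for _ in range(horizon - 1):
        total += env.step(None)           # then let the system evolve
    return total

def opportunity_aware_reward(env, action, horizon: int = 5) -> float:
    """Multi-step gain of `action` relative to a No-Operation baseline."""
    gain_act = rollout_gain(env.clone(), action, horizon)
    gain_noop = rollout_gain(env.clone(), None, horizon)
    return gain_act - gain_noop           # positive only if acting beats idling

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """GRPO-style advantage: standardize rewards within one sampled group
    of candidate actions, so updates are relative to the group mean."""
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + 1e-8) for r in rewards]
```

Scoring each action against a No-Op rollout, rather than by raw hit ratio, gives the reward the "opportunity" flavor described above: an action is only rewarded to the extent that acting now beats waiting.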