LLM-Empowered Cooperative Content Caching in Vehicular Fog Caching-Assisted Platoon Networks

📅 2026-02-04
🏛️ IEEE Communications Letters
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of content retrieval latency in vehicular networks caused by uneven content distribution and high vehicle mobility. To this end, the authors propose a three-tier collaborative fog caching architecture that, for the first time, integrates large language models (LLMs) into caching decisions. By leveraging prompt engineering, the system constructs contextual inputs encompassing user profiles, historical requests, and real-time system states, which are then processed through a hierarchical deterministic mapping strategy to enable adaptive cache placement without frequent model retraining. Simulation results show that the proposed approach outperforms existing methods in cache hit rate and content retrieval latency.
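The letter's prompt template is not reproduced in this summary, so the following is only a minimal sketch of how the contextual inputs named above (user profiles, request history, real-time system state) might be assembled into a caching-decision prompt. All field names, example values, and the query_llm helper are hypothetical:

# Illustrative sketch only: builds a caching-decision prompt from the kinds of
# inputs the letter describes. Not the authors' actual prompting framework.
def build_caching_prompt(user_profiles, request_history, system_state, catalog, budget):
    lines = [
        "You are a cache-placement controller for a three-tier vehicular network",
        "(platoon vehicles, vehicular fog caching cluster, cloud server).",
        f"Cache budget per tier (items): {budget}",
        "User profiles: " + "; ".join(user_profiles),
        "Recent requests (most recent last): " + ", ".join(request_history),
        f"System state: {system_state}",
        "Candidate contents: " + ", ".join(catalog),
        "Rank the candidate contents by expected request probability, most likely first.",
        "Return a comma-separated list of content IDs only.",
    ]
    return "\n".join(lines)

prompt = build_caching_prompt(
    user_profiles=["commuter, prefers traffic updates", "tourist, prefers local maps"],
    request_history=["c3", "c7", "c3", "c1"],
    system_state={"platoon_size": 5, "vfc_vehicles": 8, "link_rate_mbps": 54},
    catalog=[f"c{i}" for i in range(1, 11)],
    budget={"platoon": 2, "vfc": 4},
)
# ranking = query_llm(prompt)  # hypothetical LLM call, e.g. returns "c3, c7, c1, ..."

In this reading, the LLM is asked for a popularity ranking rather than a direct placement, which keeps the model output simple and leaves tier assignment to a deterministic post-processing step.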

📝 Abstract
This letter proposes a novel three-tier content caching architecture for Vehicular Fog Caching (VFC)-assisted platoons, where the VFC is formed by vehicles driving near the platoon. The system strategically coordinates storage across local platoon vehicles, dynamic VFC clusters, and a cloud server (CS) to minimize content retrieval latency. To efficiently manage distributed storage, we integrate large language models (LLMs) for real-time, intelligent caching decisions. The proposed approach leverages LLMs' ability to process heterogeneous information, including user profiles, historical data, content characteristics, and dynamic system states. Through a designed prompting framework encoding task objectives and caching constraints, the LLMs formulate caching as a decision-making task, and our hierarchical deterministic caching mapping strategy enables adaptive request prediction and precise content placement across the three tiers without frequent retraining. Simulation results demonstrate the advantages of our proposed caching scheme.
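The hierarchical deterministic caching mapping strategy is not detailed in this card; one plausible interpretation, assuming the LLM returns a popularity ranking as in the sketch above, is to fill the closest tier first: top-ranked contents go to the platoon vehicles, the next group to the VFC cluster, and the remainder stays at the cloud server. The capacities and ranking format below are assumptions, not the letter's exact design:

# Minimal sketch of a hierarchical deterministic mapping over an LLM-produced ranking.
def map_ranking_to_tiers(ranking, platoon_capacity, vfc_capacity):
    # Fill tiers in order of proximity: platoon first, then the VFC cluster;
    # whatever does not fit remains available only at the cloud server.
    platoon = ranking[:platoon_capacity]
    vfc = ranking[platoon_capacity:platoon_capacity + vfc_capacity]
    cloud = ranking[platoon_capacity + vfc_capacity:]
    return {"platoon": platoon, "vfc": vfc, "cloud": cloud}

# Example: ranking parsed from the LLM's comma-separated reply
ranking = ["c3", "c7", "c1", "c5", "c2", "c9", "c4", "c6", "c8", "c10"]
placement = map_ranking_to_tiers(ranking, platoon_capacity=2, vfc_capacity=4)
# -> {'platoon': ['c3', 'c7'], 'vfc': ['c1', 'c5', 'c2', 'c9'], 'cloud': ['c4', 'c6', 'c8', 'c10']}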
Problem

Research questions and friction points this paper is trying to address.

Vehicular Fog Caching
Content Caching
Platoon Networks
Latency Minimization
Distributed Storage
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models (LLMs)
Vehicular Fog Caching
Cooperative Content Caching
Three-tier Caching Architecture
Prompt-based Decision Making