TrimCaching: Parameter-sharing Edge Caching for AI Model Downloading

📅 2024-04-22
🏛️ arXiv.org
📈 Citations: 12
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing edge caching mechanisms for AI model delivery in 5G/6G networks overlook parameter-block reuse (e.g., shared knowledge units across CNNs or LLMs), leading to low storage efficiency and limited cache hit ratios under stringent latency constraints. Method: We propose a parameter-sharing-aware edge model caching framework that, for the first time, formulates parameter-block reuse as a submodular optimization problem. We design a polynomial-time algorithm with a theoretical approximation guarantee for a practical special case, and a greedy algorithm for the general case. The framework jointly optimizes storage efficiency and service latency in multi-edge wireless networks. Results: Simulation results demonstrate that our approach significantly improves the cache hit ratio over conventional content-based caching, validating the effectiveness and practicality of parameter-level sharing for edge AI deployment.

๐Ÿ“ Abstract
Next-generation mobile networks are expected to facilitate fast AI model downloading to end users. By caching models on edge servers, mobile networks can deliver models to end users with low latency, resulting in a paradigm called edge model caching. In this paper, we develop a novel model placement scheme, called parameter-sharing model caching (TrimCaching). TrimCaching exploits the key observation that a wide range of AI models, such as convolutional neural networks or large language models, can share a significant proportion of parameter blocks containing reusable knowledge, thereby improving storage efficiency. To this end, we formulate a parameter-sharing model placement problem to maximize the cache hit ratio in multi-edge wireless networks by balancing the fundamental tradeoff between storage efficiency and service latency. We show that the formulated problem is a submodular maximization problem with submodular constraints, for which no polynomial-time approximation algorithm exists. To overcome this challenge, we study an important special case, where a small fixed number of parameter blocks are shared across models, which often holds in practice. In such a case, a polynomial-time algorithm with $left(1-epsilon ight)/2$-approximation guarantee is developed. Subsequently, we address the original problem for the general case by developing a greedy algorithm. Simulation results demonstrate that the proposed TrimCaching framework significantly improves the cache hit ratio compared with state-of-the-art content caching without exploiting shared parameters in AI models.
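The core idea, caching models as sets of parameter blocks so that shared blocks are stored once, can be sketched as a simple greedy placement. The sketch below is a hypothetical single-server illustration, not the paper's actual algorithm: the `greedy_placement` helper, the model/block names, and the value-per-marginal-cost selection rule are assumptions for illustration.

```python
# Hypothetical sketch of greedy parameter-sharing model placement.
# Models are sets of parameter blocks; caching a model only costs the
# blocks not already stored, so shared blocks make later models cheaper.

def greedy_placement(models, demand, block_size, capacity):
    """Greedily pick models to cache on one edge server.

    models:     dict model_id -> set of parameter-block ids
    demand:     dict model_id -> request probability (hit-ratio weight)
    block_size: dict block id -> storage size
    capacity:   storage budget of the server
    """
    cached_models, cached_blocks = set(), set()
    used = 0
    while True:
        best, best_ratio, best_cost = None, 0.0, 0
        for m, blocks in models.items():
            if m in cached_models:
                continue
            # Marginal storage cost: only blocks not yet on the server.
            new_blocks = blocks - cached_blocks
            cost = sum(block_size[b] for b in new_blocks)
            if used + cost > capacity:
                continue
            ratio = demand[m] / cost if cost > 0 else float("inf")
            if ratio > best_ratio:
                best, best_ratio, best_cost = m, ratio, cost
        if best is None:
            break
        cached_models.add(best)
        cached_blocks |= models[best]
        used += best_cost
    return cached_models


# Model B fits only because it shares block 2 with already-cached model A.
models = {"A": {1, 2}, "B": {2, 3}, "C": {4, 5, 6}}
demand = {"A": 0.5, "B": 0.3, "C": 0.2}
sizes = {1: 1, 2: 1, 3: 1, 4: 2, 5: 2, 6: 2}
print(sorted(greedy_placement(models, demand, sizes, capacity=3)))  # ['A', 'B']
```

With content-level caching (no block sharing), models A and B would need 4 units of storage and B would not fit in a budget of 3; block-level sharing raises the hit ratio under the same budget, which is the tradeoff the paper formalizes.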
Problem

Research questions and friction points this paper is trying to address.

Optimizing AI model caching on edge servers for low latency
Maximizing cache hit ratio by sharing parameter blocks across models
Balancing storage efficiency and service latency in wireless networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parameter-sharing model caching for AI models
Maximizes cache hit ratio in wireless networks
Greedy algorithm with approximation guarantees