Comet: Accelerating Private Inference for Large Language Model by Predicting Activation Sparsity

📅 2025-05-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high communication overhead of secure multi-party computation (MPC) in private large language model (LLM) inference on cloud platforms—where privacy and efficiency are often at odds—this paper proposes the first MPC-based secure inference system leveraging activation sparsity priors. Our method introduces three key innovations: (1) a lightweight neural predictor for fine-grained pre-estimation of activation sparsity patterns; (2) an MPC protocol designed with spatial locality awareness and a customized zero-value skipping mechanism; and (3) dynamic KV-cache replenishment and prefetching optimizations. Evaluated on four mainstream LLMs, our system achieves 1.87–2.63× speedup and reduces communication volume by 1.94–2.64× over six state-of-the-art baselines, significantly alleviating the performance bottleneck in MPC-based private LLM inference.

📝 Abstract
With the growing use of large language models (LLMs) hosted on cloud platforms to offer inference services, privacy concerns about the potential leakage of sensitive information are escalating. Secure multi-party computation (MPC) is a promising solution to protect privacy in LLM inference. However, MPC requires frequent inter-server communication, causing high performance overhead. Inspired by the prevalent activation sparsity of LLMs, where most neurons are not activated after non-linear activation functions, we propose an efficient private inference system, Comet. This system employs an accurate and fast predictor to predict the sparsity distribution of activation function outputs. Additionally, we introduce a new private inference protocol that efficiently and securely avoids computations involving zero values by exploiting the spatial locality of the predicted sparse distribution. While this computation-avoidance approach impacts the spatiotemporal continuity of KV cache entries, we address this challenge with a low-communication-overhead cache refilling strategy that merges miss requests and incorporates a prefetching mechanism. Finally, we evaluate Comet on four common LLMs and compare it with six state-of-the-art private inference systems. Comet achieves a 1.87x-2.63x speedup and a 1.94x-2.64x communication reduction.
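The core observation behind Comet is that when the activation output is mostly zeros, the subsequent down-projection can skip the zero rows entirely, yielding identical results with less computation (and, in the MPC setting, less communication). The following minimal sketch illustrates that idea in plaintext NumPy; it is not Comet's protocol, and the toy ReLU layer here is only about half-sparse, whereas real LLM activations are typically far sparser:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy FFN layer: after a ReLU-like activation, many entries are zero.
x = rng.standard_normal(512)            # input hidden state
w_up = rng.standard_normal((512, 2048))   # up-projection
w_down = rng.standard_normal((2048, 512)) # down-projection

h = np.maximum(x @ w_up, 0.0)           # activation output, partly zero
active = np.nonzero(h)[0]               # indices of activated neurons

# Dense down-projection vs. a sparse one that skips zero rows entirely.
dense = h @ w_down
sparse = h[active] @ w_down[active]     # only rows for active neurons

assert np.allclose(dense, sparse)       # skipping zeros changes nothing
```

In Comet, this skipping happens under MPC, where each avoided multiplication also avoids the inter-server communication it would otherwise require.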
Problem

Research questions and friction points this paper is trying to address.

Accelerating private inference for large language models
Reducing communication overhead in secure multi-party computation
Predicting activation sparsity to avoid zero-value computations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Predicts activation sparsity to accelerate inference
Introduces a new private inference protocol
Uses cache refilling with prefetching for efficiency
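The first innovation, the lightweight sparsity predictor, can be sketched as a cheap low-rank scorer run before the expensive FFN: it guesses which neurons will be nonzero so that only those columns need to be computed inside the protocol. The names (`A`, `B`, `predict_active`) and the low-rank form are illustrative assumptions, not the paper's actual predictor architecture:

```python
import numpy as np

rng = np.random.default_rng(1)
d, d_ff, r = 512, 2048, 64   # hidden size, FFN width, predictor rank (assumed)

# Hypothetical low-rank predictor: scores = (x @ A) @ B, thresholded at 0.
# In practice A and B would be trained to mimic the layer's activation pattern.
A = rng.standard_normal((d, r)) * 0.1
B = rng.standard_normal((r, d_ff)) * 0.1

def predict_active(x, threshold=0.0):
    """Cheaply guess which FFN neurons will be nonzero for input x."""
    scores = (x @ A) @ B         # costs O(d*r + r*d_ff), vs O(d*d_ff) for the FFN
    return scores > threshold    # boolean mask over the d_ff neurons

x = rng.standard_normal(d)
mask = predict_active(x)
# Only the up-projection columns selected by `mask` would then be computed
# (and communicated) inside the MPC protocol; the rest are skipped as zeros.
```

The predictor's cost must stay well below the FFN it gates, which is why a low-rank (or otherwise lightweight) scorer is the natural shape for this component.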
Guang Yan
State Key Laboratory of Cyberspace Security Defense, Institute of Information Engineering, CAS and University of Chinese Academy of Sciences
Yuhui Zhang
Stanford University
Zimu Guo
State Key Laboratory of Cyberspace Security Defense, Institute of Information Engineering, CAS and University of Chinese Academy of Sciences
Lutan Zhao
State Key Laboratory of Cyberspace Security Defense, Institute of Information Engineering, CAS and University of Chinese Academy of Sciences
Xiaojun Chen
State Key Laboratory of Cyberspace Security Defense, Institute of Information Engineering, CAS and University of Chinese Academy of Sciences
Chen Wang
EIRI, NELBDRC, Tsinghua University
Wenhao Wang
State Key Laboratory of Cyberspace Security Defense, Institute of Information Engineering, CAS and University of Chinese Academy of Sciences
Dan Meng
OPPO
Rui Hou
Member of Technical Staff, xAI