🤖 AI Summary
To address the high communication overhead of secure multi-party computation (MPC) in private large language model (LLM) inference on cloud platforms, where privacy and efficiency are often at odds, this paper proposes the first MPC-based secure inference system to leverage activation sparsity priors. Our method introduces three key innovations: (1) a lightweight neural predictor for fine-grained pre-estimation of activation sparsity patterns; (2) an MPC protocol designed with spatial-locality awareness and a customized zero-value-skipping mechanism; and (3) dynamic KV-cache replenishment and prefetching optimizations. Evaluated on four mainstream LLMs, our system achieves a 1.87–2.63× speedup and a 1.94–2.64× reduction in communication volume over six state-of-the-art baselines, significantly alleviating the performance bottleneck of MPC-based private LLM inference.
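The first innovation, the lightweight sparsity predictor, is not specified in detail in this summary. As a minimal sketch, assuming a common low-rank design for such predictors (a small two-layer network that scores each FFN neuron before the expensive secure evaluation runs), it might look like the following; `SparsityPredictor`, the rank, and the threshold are all illustrative stand-ins, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class SparsityPredictor(nn.Module):
    """Hypothetical low-rank predictor of post-activation sparsity.

    A rank-r bottleneck keeps the predictor far cheaper than the
    d_model x d_ffn up-projection whose output sparsity it estimates,
    so it can run ahead of the secure FFN evaluation.
    """
    def __init__(self, d_model: int, d_ffn: int, rank: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, rank, bias=False)  # compress the input
        self.up = nn.Linear(rank, d_ffn, bias=True)       # score each neuron

    def forward(self, x: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
        # Sigmoid score above the threshold => neuron predicted active.
        scores = torch.sigmoid(self.up(self.down(x)))
        return scores > threshold  # boolean mask over the d_ffn neurons

# Usage: the mask tells the protocol which neurons it may skip.
predictor = SparsityPredictor(d_model=4096, d_ffn=11008)
x = torch.randn(1, 4096)
mask = predictor(x)
print(f"predicted active: {int(mask.sum())} / {mask.numel()}")
```

A predictor like this would be trained offline against the true post-activation zeros of the target model; how Comet trains its predictor and where it runs within the MPC deployment are details beyond this summary.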
📝 Abstract
With the growing use of large language models (LLMs) hosted on cloud platforms to offer inference services, privacy concerns about the potential leakage of sensitive information are escalating. Secure multi-party computation (MPC) is a promising solution for protecting privacy in LLM inference. However, MPC requires frequent inter-server communication, incurring high performance overhead. Inspired by the prevalent activation sparsity of LLMs, where most neurons are not activated after non-linear activation functions, we propose an efficient private inference system, Comet. This system employs an accurate and fast predictor to estimate the sparsity distribution of activation-function outputs. Additionally, we introduce a new private inference protocol that efficiently and securely avoids computations involving zero values by exploiting the spatial locality of the predicted sparsity distribution. While this computation-avoidance approach disrupts the spatiotemporal continuity of KV cache entries, we address the challenge with a low-communication-overhead cache-refilling strategy that merges miss requests and incorporates a prefetching mechanism. Finally, we evaluate Comet on four common LLMs and compare it with six state-of-the-art private inference systems. Comet achieves a 1.87x-2.63x speedup and a 1.94x-2.64x communication reduction.
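The zero-value skipping is easiest to see in the clear. The real protocol evaluates the FFN over secret shares between the servers, so skipping a neuron also means skipping the share exchanges its secure multiplications would need; the sketch below only shows the cleartext computation pattern, with `ffn_with_skipping` and the oracle mask as illustrative stand-ins rather than the paper's protocol:

```python
import torch

def ffn_with_skipping(x, w_up, w_down, mask):
    """Evaluate a ReLU FFN touching only predicted-active neurons.

    x: (d_model,), w_up: (d_model, d_ffn), w_down: (d_ffn, d_model),
    mask: (d_ffn,) boolean prediction of post-activation non-zeros.
    """
    idx = mask.nonzero(as_tuple=True)[0]   # predicted-active neuron indices
    h = torch.relu(x @ w_up[:, idx])       # compute only the kept columns
    return h @ w_down[idx, :]              # and the matching down-proj rows

d_model, d_ffn = 8, 32
x = torch.randn(d_model)
w_up, w_down = torch.randn(d_model, d_ffn), torch.randn(d_ffn, d_model)

# Oracle mask taken from the true activations for this demo; Comet would
# instead obtain it from its predictor before the non-linear layer runs.
mask = torch.relu(x @ w_up) > 0

exact = torch.relu(x @ w_up) @ w_down
assert torch.allclose(ffn_with_skipping(x, w_up, w_down, mask), exact, atol=1e-5)
```

The spatial-locality point in the abstract suggests the savings are largest when predicted-active neurons cluster into contiguous index ranges, so share traffic can be batched per range rather than per neuron; that aspect, like the merged-miss KV-cache refilling and prefetching, is not modeled in this cleartext sketch.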