🤖 AI Summary
To address the efficiency bottleneck of secure inference for generative models in privacy-sensitive settings, this paper proposes a fine-grained hierarchical quantization framework that integrates a multi-input lookup table (LUT) protocol with dual secret sharing. The method enables efficient integer-only quantized inference by introducing a 1-bit weight fully connected layer and a LUT-based secure softmax, and it performs precision conversions via lookup tables, eliminating truncation overhead entirely. Through hierarchical quantization, low-precision integer arithmetic, and co-design of the cryptographic protocols, the approach significantly reduces both communication and computational costs. Experiments on BERT-base demonstrate that the method achieves 8×, 9×, and 22× speedups over Lu et al. (NDSS '25), Gupta et al. (PETS '24), and Knott et al. (NeurIPS '21), respectively. This work establishes a scalable paradigm for high-assurance private inference of generative AI models.
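To make the integer-only pipeline concrete, below is a minimal plaintext sketch of a 1-bit weight fully connected layer with layer-wise quantization. Everything here is an illustrative assumption (the function names, the symmetric per-layer scale, the mean-absolute-value weight scaling); the paper's actual protocol runs these operations on secret shares under MPC, where the quantization parameters themselves remain hidden.

```python
import numpy as np

def quantize_per_layer(x, bits=8):
    """Symmetric layer-wise quantization: map floats to signed integers.

    Returns integer values plus the scale needed to dequantize.
    (Cleartext emulation only; in the paper this parameter stays secret.)
    """
    scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1)
    q = np.round(x / scale).astype(np.int32)
    return q, scale

def binarize_weights(w):
    """1-bit weights: sign(w) in {-1, +1} plus one per-layer scaling factor."""
    alpha = np.mean(np.abs(w))
    return np.where(w >= 0, 1, -1).astype(np.int32), alpha

def fc_1bit(x_q, x_scale, w_bin, w_alpha):
    """Integer-only 1-bit FC layer: the matmul reduces to +/- accumulation,
    so no multiplications (and, under MPC, no truncations) are needed."""
    acc = x_q @ w_bin.T                  # pure int32 accumulation
    return acc * (x_scale * w_alpha)     # dequantize once at the end

# Toy usage
x = np.random.randn(4, 16).astype(np.float32)
w = np.random.randn(8, 16).astype(np.float32)
x_q, s = quantize_per_layer(x)
w_b, a = binarize_weights(w)
print(np.abs(fc_1bit(x_q, s, w_b, a) - x @ w.T).max())  # binarization error
```

The point of the sketch is the cost structure: once weights are 1-bit and activations are low-precision integers, the secure inner product needs only additions and subtractions of shares, which is where the communication savings come from.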
📝 Abstract
With the increasing deployment of generative machine learning models in privacy-sensitive domains such as healthcare and personalized services, ensuring secure inference has become a critical challenge. Secure multi-party computation (MPC) enables privacy-preserving model inference but suffers from high communication and computation overhead. The main bottleneck lies in the expensive secure evaluation of floating-point operations. Quantization offers a promising solution by converting floating-point operations into lower-precision integer computations, significantly reducing overhead. However, existing MPC-based quantized inference methods either rely on public quantization parameters, posing privacy risks, or suffer from inefficiencies, particularly in handling nonlinear functions such as activations and softmax. In this work, we propose a fine-grained, layer-wise quantization scheme and support 1-bit weight fully connected layers in a secure setting. We design a multi-input lookup table protocol to evaluate softmax efficiently and securely. Furthermore, we use dual secret sharing schemes and perform precision conversions via lookup tables, eliminating truncation overhead entirely. Experimental evaluation on BERT-base models demonstrates that our approach achieves up to 8× speedup over Lu et al. (NDSS '25), 9× over Gupta et al. (PETS '24), and 22× over Knott et al. (NeurIPS '21).
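The LUT idea behind the secure softmax can be illustrated in the clear. The sketch below tables exp over quantized, non-positive inputs so that softmax needs only integer subtraction, comparison, and table lookups, with no floating-point exp calls. The table size, step size, and names are assumptions for illustration; in the actual protocol each lookup would be performed obliviously on secret-shared indices via the multi-input LUT protocol.

```python
import numpy as np

# Precompute a lookup table for exp(z) on quantized inputs.
# In the secure protocol the entries are fetched obliviously from
# secret-shared indices; here we emulate the table in the clear.
BITS = 8
SCALE = 16.0 / (2 ** BITS)                    # input step size (assumed)
# Softmax is shift-invariant, so tabling exp over non-positive inputs suffices.
LUT = np.exp(-np.arange(2 ** BITS) * SCALE)

def lut_softmax(logits_q):
    """Softmax over quantized integer logits using only integer
    comparisons/subtractions plus table lookups (no float exp calls)."""
    z = logits_q - logits_q.max(axis=-1, keepdims=True)  # all values <= 0
    idx = np.clip(-z, 0, 2 ** BITS - 1)                  # table index
    e = LUT[idx]
    return e / e.sum(axis=-1, keepdims=True)

# Toy usage: agrees closely with a float softmax of the same logits
logits = np.array([[2.0, 0.5, -1.0]])
q = np.round(logits / SCALE).astype(np.int32)
print(lut_softmax(q))
```

Because the exp evaluation becomes a lookup on low-precision indices, the same mechanism can also absorb precision conversions between sharing domains, which is how the truncation step is avoided in the secure setting.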