🤖 AI Summary
To address three key bottlenecks in deep convolutional neural network (CNN) inference under fully homomorphic encryption (FHE)—high convolutional computation overhead, expensive bootstrapping, and excessive multiplicative circuit depth—this paper proposes an efficient privacy-preserving inference framework tailored to the RNS-CKKS scheme. The authors introduce a novel ciphertext batch-packing mechanism and integrate depthwise separable convolutions, coupled with a dot-product fusion matrix that folds batch normalization into the convolutional layer. They further approximate the SiLU activation function with low-degree Legendre polynomials to minimize multiplicative depth. Experiments demonstrate that the approach significantly reduces inference latency and circuit depth—by up to 62%—while keeping the accuracy gap between plaintext and encrypted-domain inference below 0.5%. The framework thus achieves both high efficiency and high accuracy in FHE-based inference without compromising strong privacy guarantees.
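The low-degree polynomial approximation of SiLU mentioned above can be sketched in plaintext as a least-squares fit in a Legendre basis. The interval bound `B = 8` and degree `4` below are illustrative assumptions, not parameters taken from the paper; in an actual RNS-CKKS pipeline the fitted coefficients would then be evaluated homomorphically on ciphertexts.

```python
import numpy as np

def silu(x):
    """SiLU (swish) activation: x * sigmoid(x)."""
    return x / (1.0 + np.exp(-x))

# Assumed approximation interval and degree (illustrative choices).
B = 8.0     # inputs are presumed normalized into [-B, B]
DEG = 4     # low degree keeps multiplicative depth small under FHE

# Least-squares fit of SiLU in the Legendre basis on [-B, B].
xs = np.linspace(-B, B, 2001)
coeffs = np.polynomial.legendre.legfit(xs / B, silu(xs), DEG)

def silu_approx(x):
    """Evaluate the fitted low-degree Legendre series (inputs rescaled to [-1, 1])."""
    return np.polynomial.legendre.legval(np.asarray(x) / B, coeffs)

# Worst-case approximation error over the fitting interval.
max_err = np.max(np.abs(silu_approx(xs) - silu(xs)))
```

A degree-4 series needs only a logarithmic number of ciphertext multiplications to evaluate, which is why low-degree fits are preferred over high-accuracy minimax approximations when multiplicative depth is the scarce resource.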
📝 Abstract
Deep learning (DL) has permeated daily life across many domains, and keeping DL model inference and sample privacy secure in an encrypted environment has become an urgent and increasingly important issue for security-critical applications. To date, several approaches have been proposed based on the Residue Number System variant of the Cheon-Kim-Kim-Song (RNS-CKKS) scheme. However, they all suffer from high latency, which severely limits their application to real-world tasks. Research on encrypted inference in deep CNNs currently confronts three main bottlenecks: i) the time and storage costs of convolution computation; ii) the time overhead of numerous bootstrapping operations; and iii) the consumption of multiplicative circuit depth. To address these three challenges, this paper proposes FastFHE, an efficient and effective mechanism that accelerates model inference while retaining high inference accuracy under fully homomorphic encryption. Concretely, our work makes four contributions. First, we propose a new scalable ciphertext data-packing scheme to reduce time and storage consumption. Second, we design a depthwise-separable convolution scheme to reduce the computational load of convolution. Third, we derive a batch-normalization (BN) dot-product fusion matrix that merges the ciphertext convolutional layer with the BN layer without incurring extra multiplicative depth. Finally, we approximate the smooth nonlinear SiLU activation function with a low-degree Legendre polynomial, guaranteeing a negligible accuracy gap between plaintext and encrypted inference. Extensive experiments verify the efficiency and effectiveness of the proposed approach.
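The BN dot-product fusion idea can be illustrated in plaintext: because batch normalization at inference time is an affine map per output channel, its scale can be folded into the convolution weights and its shift into the bias, so the fused layer computes BN(conv(x)) as a single linear step. The helper below is a hypothetical sketch of this standard folding, not the paper's exact fusion-matrix construction; the tensor layout `(out_ch, in_ch, kh, kw)` is an assumption.

```python
import numpy as np

def fuse_conv_bn(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold inference-time batch-norm parameters into conv weights/bias.

    W: conv weights, shape (out_ch, in_ch, kh, kw)
    b: conv bias, shape (out_ch,)
    gamma, beta, mean, var: BN parameters/statistics, shape (out_ch,)

    Returns (W_fused, b_fused) such that
    conv(x, W_fused) + b_fused == BN(conv(x, W) + b),
    i.e. one linear layer and no extra multiplicative depth under FHE.
    """
    scale = gamma / np.sqrt(var + eps)           # per-channel BN scale
    W_fused = W * scale[:, None, None, None]     # scale each output filter
    b_fused = scale * (b - mean) + beta          # absorb BN shift into bias
    return W_fused, b_fused
```

In the encrypted domain the same folding is what makes the fusion free: the BN scale multiplies plaintext weights before encryption-side evaluation, so no ciphertext-ciphertext multiplication (and thus no additional depth level) is spent on normalization.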