Activate Me!: Designing Efficient Activation Functions for Privacy-Preserving Machine Learning with Fully Homomorphic Encryption

📅 2025-08-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Fully homomorphic encryption (FHE) natively supports only arithmetic operations (addition and multiplication), making the efficient evaluation of nonlinear activation functions — essential components of neural networks — a key bottleneck. To address this, we systematically evaluate the Square activation and two ReLU realizations under the CKKS scheme in OpenFHE: a conventional low-degree polynomial approximation and a novel scheme-switching technique that evaluates ReLU securely without leaving the encrypted domain. Our experiments achieve 99.4% test accuracy on LeNet-5 (128 sec/image) using Square, 83.8% on ResNet-20 (1,145 sec/image) with polynomial ReLU, and 89.8% on ResNet-20 (1,697 sec/image) with the scheme-switching ReLU. The results quantify a central trade-off in FHE-based neural network inference: faster activation functions tend to cost accuracy, while accuracy-preserving ones demand substantially more computation.

📝 Abstract
The growing adoption of machine learning in sensitive areas such as healthcare and defense introduces significant privacy and security challenges. These domains demand robust data protection, as models depend on large volumes of sensitive information for both training and inference. Fully Homomorphic Encryption (FHE) presents a compelling solution by enabling computations directly on encrypted data, maintaining confidentiality across the entire machine learning workflow. However, FHE natively supports only arithmetic operations (addition and multiplication), making it difficult to implement non-linear activation functions, which are essential components of modern neural networks. This work focuses on designing, implementing, and evaluating activation functions tailored for FHE-based machine learning. We investigate two commonly used functions, the Square function and the Rectified Linear Unit (ReLU), using the LeNet-5 and ResNet-20 architectures with the CKKS scheme from the OpenFHE library. For ReLU, we assess two methods: a conventional low-degree polynomial approximation and a novel scheme-switching technique that securely evaluates ReLU under FHE constraints. Our findings show that the Square function performs well in shallow networks like LeNet-5, achieving 99.4% accuracy at 128 seconds per image. In contrast, deeper models like ResNet-20 benefit more from ReLU. The polynomial approximation yields 83.8% accuracy at 1,145 seconds per image, while our scheme-switching method improves accuracy to 89.8%, albeit with a longer inference time of 1,697 seconds. These results underscore a critical trade-off in FHE-based ML: faster activation functions often reduce accuracy, whereas those preserving accuracy demand greater computational resources.
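To make the polynomial-approximation idea concrete: since CKKS can only add and multiply ciphertexts, ReLU must be replaced by a polynomial, while Square is exact (a single ciphertext multiplication). The paper does not state its fitting method or degree here; the sketch below uses a least-squares fit of degree 4 over an assumed input range of [-1, 1] as one common, illustrative choice.

```python
import numpy as np

# CKKS evaluates only additions and multiplications, so a nonlinear
# activation such as ReLU must be approximated by a polynomial.
# Assumption: inputs are normalized to [-1, 1]; degree and fitting
# method are illustrative, not the paper's exact configuration.
xs = np.linspace(-1.0, 1.0, 2001)
relu = np.maximum(xs, 0.0)

# Least-squares fit of a degree-4 polynomial to ReLU on [-1, 1].
coeffs = np.polyfit(xs, relu, deg=4)
approx = np.polyval(coeffs, xs)

# Maximum pointwise error of the approximation on the fitted range.
max_err = np.max(np.abs(approx - relu))
print(f"degree-4 max |error| on [-1, 1]: {max_err:.4f}")

# The Square activation, by contrast, is exact under CKKS:
# it costs one ciphertext-ciphertext multiplication.
square = xs * xs
```

The residual error is why low-degree polynomial ReLU loses accuracy in deep networks like ResNet-20, where approximation errors compound layer by layer, and why the scheme-switching variant, which evaluates ReLU exactly, recovers accuracy at the cost of extra inference time.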
Problem

Research questions and friction points this paper is trying to address.

Designing activation functions for FHE-based machine learning
Evaluating Square and ReLU functions in encrypted neural networks
Balancing accuracy and computational efficiency in FHE-ML
Innovation

Methods, ideas, or system contributions that make the work stand out.

Design and evaluation of FHE-compatible activation functions under the CKKS scheme
Systematic comparison of Square and low-degree polynomial ReLU on LeNet-5 and ResNet-20
A scheme-switching ReLU that raises ResNet-20 accuracy from 83.8% to 89.8%