FHEON: A Configurable Framework for Developing Privacy-Preserving Neural Networks Using Homomorphic Encryption

📅 2025-10-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing approaches to integrating homomorphic encryption (HE) with neural networks suffer from architectural constraints and a lack of general-purpose development frameworks. To address this, we propose the first highly configurable privacy-preserving CNN inference framework supporting arbitrary CNN architectures—including LeNet, VGG, and ResNet—executed efficiently in the ciphertext domain. Our framework introduces the first HE-compatible, configurable designs for convolution, average pooling, ReLU, and fully connected layers. Leveraging algorithm-hardware co-optimization, it achieves 98.5% and 92.2% inference accuracy on MNIST and CIFAR-10, respectively—with ≤1% accuracy loss relative to plaintext baselines—while attaining inference latencies of 13 seconds and 403 seconds and peak memory usage of ≤42.3 GB. This work bridges the practical gap between HE and general-purpose deep learning frameworks, simultaneously ensuring cryptographic security, high model fidelity, and deployment feasibility.

📝 Abstract
The widespread adoption of Machine Learning as a Service raises critical privacy and security concerns, particularly about data confidentiality and trust in both cloud providers and the machine learning models. Homomorphic Encryption (HE) has emerged as a promising solution to these problems, allowing computations on encrypted data without decryption. Despite its potential, existing approaches to integrate HE into neural networks are often limited to specific architectures, leaving a wide gap in providing a framework for easy development of HE-friendly privacy-preserving neural network models similar to what we have in the broader field of machine learning. In this paper, we present FHEON, a configurable framework for developing privacy-preserving convolutional neural network (CNN) models for inference using HE. FHEON introduces optimized and configurable implementations of privacy-preserving CNN layers, including convolutional layers, average pooling layers, ReLU activation functions, and fully connected layers. These layers are configured using parameters like input channels, output channels, kernel size, stride, and padding to support arbitrary CNN architectures. We assess the performance of FHEON using several CNN architectures, including LeNet-5, VGG-11, VGG-16, ResNet-20, and ResNet-34. FHEON maintains encrypted-domain accuracies within +/- 1% of their plaintext counterparts for ResNet-20 and LeNet-5 models. Notably, on a consumer-grade CPU, the models built on FHEON achieved 98.5% accuracy with a latency of 13 seconds on MNIST using LeNet-5, and 92.2% accuracy with a latency of 403 seconds on CIFAR-10 using ResNet-20. Additionally, FHEON operates within a practical memory budget, requiring no more than 42.3 GB for VGG-16.
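To make the abstract's layer-configuration idea concrete, the sketch below shows how a layer spec parameterized by input channels, output channels, kernel size, stride, and padding determines encrypted-domain output dimensions. The class and method names here are hypothetical illustrations, not FHEON's actual API:

```python
from dataclasses import dataclass

@dataclass
class ConvConfig:
    """Hypothetical layer spec mirroring the parameters the paper lists
    (in/out channels, kernel size, stride, padding); not the real FHEON API."""
    in_channels: int
    out_channels: int
    kernel_size: int
    stride: int = 1
    padding: int = 0

    def output_size(self, h: int, w: int) -> tuple[int, int]:
        # Standard convolution output-dimension formula; the ciphertext
        # packing must be sized to match these dimensions.
        oh = (h + 2 * self.padding - self.kernel_size) // self.stride + 1
        ow = (w + 2 * self.padding - self.kernel_size) // self.stride + 1
        return oh, ow

# First convolutional layer of a LeNet-5-style network on a 32x32 input.
cfg = ConvConfig(in_channels=1, out_channels=6, kernel_size=5)
print(cfg.output_size(32, 32))  # -> (28, 28)
```

The same parameterization generalizes to the VGG and ResNet variants the paper evaluates, which is what lets one framework cover arbitrary CNN architectures.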
Problem

Research questions and friction points this paper is trying to address.

Addresses privacy concerns in machine learning services using homomorphic encryption
Overcomes limitations of existing HE frameworks for neural network architectures
Provides configurable privacy-preserving CNN layers for encrypted data inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Configurable framework for privacy-preserving neural networks
Optimized homomorphic encryption layers for CNN architectures
Maintains accuracy within 1% of plaintext models
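A key reason ReLU needs an HE-compatible design is that HE schemes evaluate only additions and multiplications, so the nonlinearity is commonly replaced by a low-degree polynomial approximation. The sketch below fits such an approximation with NumPy; it illustrates the general technique, and the interval and degree are illustrative choices, not FHEON's specific approximation:

```python
import numpy as np

# ReLU cannot be evaluated directly on ciphertexts; fit a low-degree
# polynomial over the range where activations are expected to fall.
xs = np.linspace(-4.0, 4.0, 2001)
relu = np.maximum(xs, 0.0)

# Degree-4 least-squares fit (degree chosen for illustration; higher
# degrees cost more ciphertext multiplications).
coeffs = np.polyfit(xs, relu, deg=4)
approx = np.polyval(coeffs, xs)

max_err = float(np.max(np.abs(approx - relu)))
print(f"max abs error on [-4, 4]: {max_err:.3f}")
```

In an encrypted inference pipeline, only the polynomial's additions and multiplications are carried out homomorphically; the approximation error is one source of the small accuracy gap versus plaintext models.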
Nges Brian Njungle
STAM Center, Ira A. Fulton Schools of Engineering, Arizona State University, Tempe, AZ 85281, USA
Eric Jahns
STAM Center, Ira A. Fulton Schools of Engineering, Arizona State University, Tempe, AZ 85281, USA
Michel A. Kinsy
Associate Professor, Arizona State University
Microelectronics Security · Hardware Security · Secure Computer Architecture · Adaptive Computing · Cryptosystems