Privacy-Preserving CNN Training with Transfer Learning: Two Hidden Layers

📅 2025-04-17
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of training deep neural networks under fully homomorphic encryption (FHE). We propose the first non-interactive, end-to-end trainable four-layer CNN framework supporting both single- and multi-output classification tasks. To mitigate vanishing gradients in deep networks under FHE constraints, we empirically validate, for the first time under FHE, the effectiveness of jointly using the sigmoid activation function and binary cross-entropy (BCE) loss for multi-class classification. Furthermore, we design an enhanced Double Volley Revolver encoding scheme that significantly improves the trade-off between computational efficiency and memory utilization within the CKKS FHE scheme. Implemented in C++ and open-sourced, our system demonstrates markedly improved training stability and model scalability compared to conventional loss functions such as squared loss error (SLE), as confirmed by comprehensive experiments.

๐Ÿ“ Abstract
In this paper, we present the demonstration of training a four-layer neural network entirely using fully homomorphic encryption (FHE), supporting both single-output and multi-output classification tasks in a non-interactive setting. A key contribution of our work is identifying that replacing extit{Softmax} with extit{Sigmoid}, in conjunction with the Binary Cross-Entropy (BCE) loss function, provides an effective and scalable solution for homomorphic classification. Moreover, we show that the BCE loss function, originally designed for multi-output tasks, naturally extends to the multi-class setting, thereby enabling broader applicability. We also highlight the limitations of prior loss functions such as the SLE loss and the one proposed in the 2019 CVPR Workshop, both of which suffer from vanishing gradients as network depth increases. To address the challenges posed by large-scale encrypted data, we further introduce an improved version of the previously proposed data encoding scheme, extit{Double Volley Revolver}, which achieves a better trade-off between computational and memory efficiency, making FHE-based neural network training more practical. The complete, runnable C++ code to implement our work can be found at: href{https://github.com/petitioner/ML.NNtraining}{$ exttt{https://github.com/petitioner/ML.NNtraining}$}.
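The Sigmoid-plus-BCE combination described in the abstract can be illustrated in the clear, outside encryption. The sketch below (plain NumPy, with hypothetical helper names; this is not the paper's C++ implementation) shows why BCE with per-class sigmoids extends naturally to multi-class problems: each logit is treated one-vs-rest against a one-hot label, and the gradient with respect to the logits is simply sigmoid(z) minus the label, which stays well-scaled regardless of the number of classes.

```python
import numpy as np

def sigmoid(z):
    """Elementwise logistic function."""
    return 1.0 / (1.0 + np.exp(-z))

def bce_multiclass(logits, onehot):
    """BCE applied independently per class (one-vs-rest), averaged."""
    p = sigmoid(logits)
    eps = 1e-12  # numerical guard against log(0)
    return -np.mean(onehot * np.log(p + eps) + (1 - onehot) * np.log(1 - p + eps))

def bce_grad(logits, onehot):
    """Gradient of the averaged BCE w.r.t. the logits: (sigmoid(z) - y) / K."""
    return (sigmoid(logits) - onehot) / onehot.size

# Toy 3-class example: the true class is index 1.
logits = np.array([0.5, 2.0, -1.0])
y = np.array([0.0, 1.0, 0.0])
loss = bce_multiclass(logits, y)
grad = bce_grad(logits, y)
```

Note that, unlike Softmax cross-entropy, no normalization couples the classes, so each output can be evaluated and differentiated independently, which is convenient under FHE.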
Problem

Research questions and friction points this paper is trying to address.

Training neural networks with fully homomorphic encryption
Replacing Softmax with Sigmoid for scalable homomorphic classification
Improving data encoding for efficient FHE-based training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses fully homomorphic encryption for CNN training
Replaces Softmax with Sigmoid and BCE loss
Improves data encoding with Double Volley Revolver
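A practical detail behind the contributions above: CKKS supports only additions and multiplications on encrypted values, so a non-polynomial activation like sigmoid cannot be evaluated directly and is typically replaced by a low-degree polynomial fit over a bounded input range. The sketch below assumes a degree-3 least-squares fit on [-8, 8], a common choice in the FHE literature; the paper's exact approximation may differ.

```python
import numpy as np

# Sigmoid must become a polynomial for homomorphic evaluation:
# fit a degree-3 polynomial over a bounded interval by least squares.
xs = np.linspace(-8.0, 8.0, 1000)
sig = 1.0 / (1.0 + np.exp(-xs))
coeffs = np.polyfit(xs, sig, 3)       # [c3, c2, c1, c0]
approx = np.polyval(coeffs, xs)
max_err = np.max(np.abs(approx - sig))
```

Because sigmoid(x) - 0.5 is odd and the sample grid is symmetric, the fitted even-degree coefficients are essentially 0 and 0.5, leaving only two multiplicative terms to evaluate under encryption.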