FicGCN: Unveiling the Homomorphic Encryption Efficiency from Irregular Graph Convolutional Networks

📅 2025-06-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational overhead and poor exploitability of sparsity in homomorphic encryption (HE)-based graph convolutional networks (GCNs), this paper proposes FicGCN, an efficient HE-GCN acceleration framework for privacy-preserving graph learning. The method introduces two key innovations: (1) Sparse Intra-Ciphertext Aggregation (SpIntra-CA), which aggregates neighbor information directly within the ciphertext packing space, eliminating redundant rotation operations; and (2) region-based data reordering driven by local adjacency structure, which jointly optimizes adjacency-aware data rearrangement and low-latency rotations to mitigate the cost of irregular memory access. The framework tightly integrates HE principles, sparse graph representations, and ciphertext-level computation optimizations. Evaluated on multiple benchmark datasets, the proposed approach achieves up to a 4.10× end-to-end inference speedup over state-of-the-art HE-GCN methods, significantly advancing the practicality of encrypted graph neural networks.
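The rotation bottleneck behind SpIntra-CA can be illustrated with a plain-data analogue of slot-packed aggregation. The sketch below is a hypothetical, unencrypted stand-in (the function names and the offset-grouping packing are illustrative, not the paper's actual SpIntra-CA algorithm): in CKKS-style HE, node features live in SIMD slots of a ciphertext, and moving a neighbor's feature to a target slot costs one rotation, so grouping edges by slot offset cuts the rotation count.

```python
def rotate(slots, k):
    """Cyclic left rotation, mimicking an HE slot rotation."""
    k %= len(slots)
    return slots[k:] + slots[:k]

def aggregate_neighbors(slots, edges):
    """Sum each node's neighbor features using slot rotations.

    slots[i] holds node i's (scalar) feature; edges is a list of
    directed (src, dst) pairs. Each distinct offset (src - dst) mod n
    needs one rotation, so grouping edges by offset -- the effect a
    sparsity-aware packing aims for -- reduces the rotation count.
    """
    n = len(slots)
    out = [0] * n
    by_offset = {}
    for src, dst in edges:
        by_offset.setdefault((src - dst) % n, []).append(dst)
    for offset, dsts in by_offset.items():
        rotated = rotate(slots, offset)  # one HE rotation per distinct offset
        for dst in dsts:
            out[dst] += rotated[dst]     # slot-wise add (cheap in HE)
    return out

# Tiny 4-node path graph 0-1-2-3, edges in both directions:
feats = [1.0, 2.0, 3.0, 4.0]
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)]
print(aggregate_neighbors(feats, edges))  # neighbor sums per node
```

On this toy graph the six edges collapse into two distinct offsets, i.e. two rotations instead of six; a naive per-edge scheme would pay one rotation per edge.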

📝 Abstract
Graph Convolutional Neural Networks (GCNs) have gained widespread popularity in various fields like personal healthcare and financial systems, due to their remarkable performance. Despite the growing demand for cloud-based GCN services, privacy concerns over sensitive graph data remain significant. Homomorphic Encryption (HE) facilitates Privacy-Preserving Machine Learning (PPML) by allowing computations to be performed on encrypted data. However, HE introduces substantial computational overhead, particularly for GCN operations that require rotations and multiplications in matrix products. The sparsity of GCNs offers significant performance potential, but their irregularity introduces additional operations that reduce practical gains. In this paper, we propose FicGCN, a HE-based framework specifically designed to harness the sparse characteristics of GCNs and strike a globally optimal balance between aggregation and combination operations. FicGCN employs a latency-aware packing scheme, a Sparse Intra-Ciphertext Aggregation (SpIntra-CA) method to minimize rotation overhead, and a region-based data reordering driven by local adjacency structure. We evaluated FicGCN on several popular datasets, and the results show that FicGCN achieved the best performance across all tested datasets, with up to a 4.10x improvement over the latest design.
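The abstract's point that matrix products require rotations comes from the standard HE reduction pattern: multiplying two packed vectors slot-wise is cheap, but summing across slots takes a logarithmic chain of rotations. A minimal unencrypted sketch of that pattern (illustrative only; real CKKS libraries such as Microsoft SEAL or OpenFHE differ in detail):

```python
def rotate(v, k):
    """Cyclic left rotation, standing in for an HE slot rotation."""
    k %= len(v)
    return v[k:] + v[:k]

def slot_sum(v):
    """Sum all slots into every slot with log2(n) rotate-and-add steps,
    the reduction that makes encrypted inner products rotation-heavy."""
    n = len(v)
    assert n & (n - 1) == 0, "power-of-two slot count assumed"
    step = 1
    while step < n:
        v = [a + b for a, b in zip(v, rotate(v, step))]
        step *= 2
    return v

def inner_product(x, w):
    prod = [a * b for a, b in zip(x, w)]  # one slot-wise multiply
    return slot_sum(prod)[0]              # plus log2(n) rotations

print(inner_product([1, 2, 3, 4], [1, 0, 1, 0]))  # 4
```

Each row of a dense matrix-vector product repeats this rotate-and-sum chain, which is why schemes that exploit sparsity to skip or batch rotations, as FicGCN does, can yield large end-to-end savings.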
Problem

Research questions and friction points this paper is trying to address.

Reducing computational overhead in Homomorphic Encryption for GCNs
Optimizing sparse GCN operations to enhance privacy-preserving performance
Balancing aggregation and combination operations for efficient HE-based GCNs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latency-aware packing scheme for efficiency
Sparse Intra-Ciphertext Aggregation reduces rotation overhead
Region-based data reordering optimizes adjacency structure
Zhaoxuan Kan
State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
Husheng Han
Institute of Computing Technology, Chinese Academy of Sciences
Computer Architecture, Security, DNN, Domain-Specific Accelerator
Shangyi Shi
State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
Tenghui Hua
State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
Hang Lu
State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; Zhongguancun Laboratory, Beijing, China
Xiaowei Li
State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; Zhongguancun Laboratory, Beijing, China
Jianan Mu
Institute of Computing Technology, State Key Laboratory of Processors (SKLP), CAS
Design Automation, Accelerator, Privacy-Preserving Computing
Xing Hu
State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; Zhongguancun Laboratory, Beijing, China; Shanghai Innovation Center for Processor Technologies, Shanghai, China