HoGS: Homophily-Oriented Graph Synthesis for Local Differentially Private GNN Training

📅 2026-02-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the significant performance degradation of graph neural networks (GNNs) under local differential privacy (LDP) by proposing the HoGS framework. HoGS leverages graph homophily to jointly reconstruct both graph structure and node features from LDP-perturbed data, generating high-quality synthetic graphs for GNN training while rigorously preserving the privacy of node attributes and links. According to the authors, this is the first approach to integrate graph homophily modeling with LDP mechanisms, enabling unified protection of these two types of sensitive information and substantially reducing utility loss. Experimental results on three real-world datasets demonstrate that various state-of-the-art GNN models trained on synthetic graphs produced by HoGS achieve significantly higher accuracy than those trained using existing LDP baselines.

📝 Abstract
Graph neural networks (GNNs) have demonstrated remarkable performance in various graph-based machine learning tasks by effectively modeling high-order interactions between nodes. However, training GNNs without protection may leak sensitive personal information in graph data, including links and node features. Local differential privacy (LDP) is an advanced technique for protecting data privacy in decentralized networks. Unfortunately, existing local differentially private GNNs either preserve only link privacy or suffer significant utility loss when preserving both link and node feature privacy. In this paper, we propose an effective LDP framework, called HoGS, which trains GNNs with link and feature protection by generating a synthetic graph. Concretely, HoGS first collects the link and feature information of the graph under LDP, and then exploits the homophily of graph data to reconstruct the graph structure and node features separately, thereby effectively mitigating the negative impact of LDP on downstream GNN training. We theoretically analyze the privacy guarantee of HoGS and conduct experiments using the generated synthetic graph as input to various state-of-the-art GNN architectures. Experimental results on three real-world datasets show that HoGS significantly outperforms baseline methods in the accuracy of the trained GNNs.
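The abstract's first step, collecting link information under LDP, is typically built on a bit-level randomized-response primitive: each user perturbs their own adjacency bits locally before upload, and the server debiases the aggregate. The sketch below illustrates that standard primitive only; the paper's exact perturbation and homophily-based reconstruction mechanisms are not specified here, so the function names and the debiasing step are illustrative assumptions.

```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Perturb one adjacency bit with randomized response (illustrative,
    not HoGS's exact mechanism). The true bit is reported with probability
    e^eps / (e^eps + 1) and flipped otherwise, satisfying eps-LDP per bit."""
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_keep else 1 - bit

def perturb_adj_row(row: list[int], epsilon: float) -> list[int]:
    # Each user applies the mechanism to their own adjacency list locally.
    return [randomized_response(b, epsilon) for b in row]

def unbiased_estimate(noisy_mean: float, epsilon: float) -> float:
    """Server-side debiasing: recover an unbiased estimate of the true
    frequency of 1-bits from the observed noisy frequency."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return (noisy_mean - (1.0 - p)) / (2.0 * p - 1.0)
```

Smaller ε flips bits more often (stronger privacy, noisier structure), which is exactly the utility loss the homophily-based reconstruction step is designed to mitigate.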
Problem

Research questions and friction points this paper is trying to address.

Graph Neural Networks
Local Differential Privacy
Privacy Preservation
Graph Data
Utility Loss
Innovation

Methods, ideas, or system contributions that make the work stand out.

Local Differential Privacy
Graph Neural Networks
Homophily
Graph Synthesis
Privacy-Preserving Machine Learning
👥 Authors

Wen Xu
Georgia Institute of Technology, Systems Security

Zhetao Li
College of Information Science and Technology, Jinan University, Guangzhou 510632, China

Yong Xiao
College of Information Science and Technology, Jinan University, Guangzhou 510632, China

Pengpeng Qiao
Institute of Science Tokyo (formerly Tokyo Tech)

Mianxiong Dong
Department of Sciences and Informatics, Muroran Institute of Technology, Muroran 050-8585, Japan

Kaoru Ota
Department of Sciences and Informatics, Muroran Institute of Technology, Muroran 050-8585, Japan