Community Concealment from Unsupervised Graph Learning-Based Clustering

📅 2026-02-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the risk that graph neural networks (GNNs) may inadvertently leak sensitive community information in unsupervised clustering, thereby compromising group privacy. To mitigate this, the authors propose a utility-preserving obfuscation method that jointly optimizes edge rewiring and node feature perturbation to weaken the community salience upon which GNN message passing relies. Theoretical analysis identifies inter-community connectivity and intra-community feature similarity as key factors governing obfuscation efficacy, informing the design of targeted perturbation strategies. Experimental results demonstrate that, under the same perturbation budget, the proposed approach improves median obfuscation performance by 20%–45% over DICE, effectively balancing community privacy preservation with model utility.
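The DICE baseline against which the paper compares ("Disconnect Internally, Connect Externally") is a well-known heuristic for hiding a community: delete edges inside the target community and add edges from it to outside nodes. The sketch below is an illustrative stdlib-only implementation of that heuristic, not the paper's proposed method; the function name, edge representation, and budget split are assumptions made for this example.

```python
import random

def dice_perturb(edges, community, budget, nodes, seed=0):
    """DICE-style heuristic: spend half the budget deleting edges inside the
    target community and the rest adding edges that cross its boundary."""
    rng = random.Random(seed)
    edges = set(map(frozenset, edges))
    # Disconnect internally: remove up to budget//2 intra-community edges.
    inside = [e for e in edges if e <= community]
    for e in rng.sample(inside, min(budget // 2, len(inside))):
        edges.discard(e)
    # Connect externally: add boundary-crossing edges for the remaining budget.
    outside = [n for n in nodes if n not in community]
    need, added, attempts = budget - budget // 2, 0, 0
    while added < need and attempts < 100 * max(need, 1):
        attempts += 1
        e = frozenset((rng.choice(sorted(community)), rng.choice(outside)))
        if e not in edges:
            edges.add(e)
            added += 1
    return edges
```

For instance, with a budget of 4 on a graph whose target community {0, 1, 2} forms a triangle, the heuristic removes two of the triangle's edges and adds two edges linking the community to outside nodes, leaving the community far less salient to a modularity- or embedding-based detector.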

📝 Abstract
Graph neural networks (GNNs) learn representations from attributed graphs. Such representations are beneficial for unsupervised clustering and community detection. Nonetheless, such inference may reveal sensitive groups, clustered systems, or collective behaviors, raising group-level privacy concerns. In social and critical-infrastructure networks, for example, community attribution can expose coordinated asset groups, operational hierarchies, and system dependencies that could be used for profiling or intelligence gathering. We study a defensive setting in which a data publisher (the defender) seeks to conceal a community of interest while making limited, utility-aware changes to the network. Our analysis indicates that community concealment is strongly influenced by two quantifiable factors: connectivity at the community boundary and feature similarity between the protected community and adjacent communities. Informed by these findings, we present a perturbation strategy that rewires selected edges and modifies node features to reduce the distinctiveness leveraged by GNN message passing. Under identical perturbation budgets, the proposed method outperforms DICE in our experiments on synthetic benchmarks and real-world graphs, achieving median relative concealment improvements of approximately 20-45% across the evaluated settings. These findings demonstrate a mitigation strategy against GNN-based community learning and highlight group-level privacy risks intrinsic to graph learning.
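The two factors the abstract identifies, boundary connectivity and cross-community feature similarity, can each be measured with simple graph statistics. The sketch below computes illustrative proxies for both; it is not the paper's analysis, and the function name, graph representation, and choice of cosine similarity over mean feature vectors are assumptions made for this example.

```python
import math

def concealment_factors(edges, features, target, nodes):
    """Illustrative proxies for the two factors governing concealment:
    (1) boundary connectivity: fraction of the target community's incident
        edges that cross its boundary;
    (2) feature similarity: cosine similarity between the mean feature
        vector of the target community and that of all remaining nodes."""
    touching = [e for e in edges if any(v in target for v in e)]
    crossing = [e for e in touching if not all(v in target for v in e)]
    boundary = len(crossing) / len(touching) if touching else 0.0

    def mean_vec(group):
        vecs = [features[n] for n in group]
        return [sum(col) / len(vecs) for col in zip(*vecs)]

    a = mean_vec(target)
    b = mean_vec([n for n in nodes if n not in target])
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    sim = dot / (na * nb) if na and nb else 0.0
    return boundary, sim
```

Intuitively, a community with few crossing edges and features dissimilar from its neighbors is easy for GNN message passing to isolate, so a concealment strategy wants to raise both quantities under its perturbation budget.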
Problem

Research questions and friction points this paper is trying to address.

community concealment
graph neural networks
group-level privacy
unsupervised clustering
attributed graphs
Innovation

Methods, ideas, or system contributions that make the work stand out.

community concealment
graph neural networks
privacy-preserving perturbation
unsupervised graph learning
group-level privacy