Graph Unlearning Meets Influence-aware Negative Preference Optimization

📅 2025-10-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the sharp utility degradation of graph neural networks (GNNs) during unlearning, this paper proposes INPO, an Influence-aware Negative Preference Optimization framework. INPO first estimates edge influence with a fast removal-based method to identify high-impact edges, then designs an influence-aware message-passing mechanism and introduces a topological entropy loss to mitigate local structural information loss. By replacing gradient ascent on the forget set with negative preference optimization, it slows divergence and improves the robustness of model utility to the unlearning process. Experiments on five real-world graph datasets demonstrate that INPO achieves state-of-the-art performance on all forget quality metrics while preserving model utility. The implementation is publicly available.

📝 Abstract
Recent advancements in graph unlearning models have enhanced model utility by keeping node representations essentially invariant while using gradient ascent on the forget set to achieve unlearning. However, this approach causes a drastic degradation in model utility during the unlearning process due to the rapid divergence of gradient ascent. In this paper, we introduce INPO, an Influence-aware Negative Preference Optimization framework that focuses on slowing the divergence speed and improving the robustness of model utility to the unlearning process. Specifically, we first show that NPO diverges more slowly than gradient ascent, and theoretically propose that unlearning high-influence edges can reduce the impact of unlearning. We design an influence-aware message function to amplify the influence of unlearned edges and mitigate the tight topological coupling between the forget set and the retain set; the influence of each edge is quickly estimated by a removal-based method. Additionally, we propose a topological entropy loss, motivated by topology, to avoid excessive information loss in the local structure during unlearning. Extensive experiments on five real-world datasets demonstrate that the INPO-based model achieves state-of-the-art performance on all forget quality metrics while maintaining model utility. Code is available at https://github.com/sh-qiangchen/INPO.
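The abstract's core contrast is that the NPO forget-set objective is bounded below, so its gradients shrink as the model forgets, whereas gradient ascent on the log-likelihood is unbounded and diverges rapidly. A minimal numpy sketch of that contrast, using the standard NPO loss form; this is an illustration under assumed notation, not the paper's implementation, and the function names are placeholders:

```python
import numpy as np

def npo_forget_loss(logp_theta, logp_ref, beta=1.0):
    """NPO loss on the forget set: (2/beta) * mean(log(1 + (pi_theta/pi_ref)^beta)).

    logp_theta: log-probabilities of the current model on forget examples.
    logp_ref:   log-probabilities of the frozen reference (pre-unlearning) model.
    Minimizing this pushes logp_theta down, but the loss is bounded below by 0,
    so gradient magnitudes decay as forgetting progresses (slower divergence).
    """
    ratio = np.exp(beta * (np.asarray(logp_theta) - np.asarray(logp_ref)))
    return (2.0 / beta) * float(np.mean(np.log1p(ratio)))

def ga_forget_loss(logp_theta):
    """Gradient-ascent baseline: minimize the mean log-probability on the
    forget set. Unbounded below, so it keeps diverging as logp_theta -> -inf."""
    return float(np.mean(np.asarray(logp_theta)))
```

At initialization (logp_theta == logp_ref) the NPO loss equals 2·ln 2/β and decays toward 0 as the model forgets, while the gradient-ascent objective decreases without bound, which is the divergence the abstract attributes to utility collapse.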
Problem

Research questions and friction points this paper is trying to address.

Slowing gradient ascent divergence in graph unlearning
Reducing unlearning impact by targeting high-influence edges
Maintaining model utility while achieving effective forget quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Influence-aware Negative Preference Optimization for graph unlearning
Amplifies influence of unlearned edges with message function
Uses topological entropy loss to preserve local structure
Qiang Chen
Central South University, Changsha, China
Zhongze Wu
Central South University, Changsha, China
Ang He
Shanghai Maritime University, Shanghai, China
Xi Lin
Shanghai Jiao Tong University, Shanghai, China
Shuo Jiang
Tongji University, Shanghai, China
Shan You
SenseTime Research
deep learning, multimodal LLM, edge AI
Chang Xu
University of Sydney, Sydney, Australia
Yi Chen
Hong Kong University of Science and Technology, Hong Kong, China
Xiu Su
Central South University, Changsha, China