Transferable Hypergraph Attack via Injecting Nodes into Pivotal Hyperedges

📅 2025-11-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing hypergraph neural network (HGNN) adversarial attack methods rely on the target model's architecture and overlook a common vulnerability arising from heterogeneous hyperedge importance, thereby limiting transferability and generalization. This paper proposes the first transferable adversarial attack framework targeting hyperedge importance: (1) a hyperedge identifier locates critical hyperedges; (2) a feature inversion mechanism generates malicious nodes that induce maximal semantic shift; and (3) a model-agnostic injection strategy executes the attack without access to the target model's parameters. The core innovation is the first application of hyperedge importance assessment to cross-model transfer attacks, with perturbation generation explicitly driven by semantic divergence maximization. Evaluated on six real-world datasets, the method significantly outperforms state-of-the-art approaches, particularly under cross-model transfer settings, producing stronger performance degradation on victim models and better attack transferability.

📝 Abstract
Recent studies have demonstrated that hypergraph neural networks (HGNNs) are susceptible to adversarial attacks. However, existing methods rely on the specific information mechanisms of target HGNNs, overlooking the common vulnerability caused by the significant differences in hyperedge pivotality along aggregation paths in most HGNNs, thereby limiting the transferability and effectiveness of attacks. In this paper, we present a novel framework, Transferable Hypergraph Attack via Injecting Nodes into Pivotal Hyperedges (TH-Attack), to address these limitations. Specifically, we design a hyperedge recognizer via pivotality assessment to identify pivotal hyperedges within the aggregation paths of HGNNs. Furthermore, we introduce a feature inverter based on pivotal hyperedges, which generates malicious nodes by maximizing the semantic divergence between the generated features and the features of the pivotal hyperedges. Lastly, by injecting these malicious nodes into the pivotal hyperedges, TH-Attack improves the transferability and effectiveness of attacks. Extensive experiments on six real-world datasets validate the effectiveness of TH-Attack and its superiority over state-of-the-art methods.
Problem

Research questions and friction points this paper is trying to address.

Attacking hypergraph neural networks via adversarial node injection
Improving attack transferability across different hypergraph models
Targeting pivotal hyperedges to enhance attack effectiveness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Injecting malicious nodes into pivotal hyperedges
Assessing hyperedge pivotality via recognizer module
Generating features by maximizing semantic divergence
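The three steps above can be sketched in code. This is a minimal illustration, not the paper's actual method: the hypergraph is assumed to be an incidence matrix `H` with node features `X`, pivotality is approximated by a simple degree-based proxy, and semantic divergence is taken to be cosine distance from the mean feature of a hyperedge's member nodes, maximized by gradient descent on the cosine. All function names and hyperparameters here are hypothetical.

```python
import numpy as np

def pivotality(H):
    # Proxy pivotality score: hyperedges whose member nodes carry
    # high aggregate degree sit on many aggregation paths.
    node_deg = H.sum(axis=1)          # degree of each node
    return H.T @ node_deg             # one score per hyperedge

def invert_features(X, H, e, steps=100, lr=0.1):
    # Generate a malicious feature vector by minimizing its cosine
    # similarity to the mean feature of hyperedge e's members
    # (i.e., maximizing semantic divergence).
    members = np.where(H[:, e] > 0)[0]
    c = X[members].mean(axis=0)       # hyperedge semantic centroid
    rng = np.random.default_rng(0)
    v = rng.standard_normal(X.shape[1])
    for _ in range(steps):
        nv, nc = np.linalg.norm(v), np.linalg.norm(c)
        cos = v @ c / (nv * nc + 1e-12)
        # gradient of cos(v, c) w.r.t. v
        grad = c / (nv * nc + 1e-12) - cos * v / (nv**2 + 1e-12)
        v -= lr * grad                # descend cosine => diverge from centroid
    return v

def inject(H, X, budget=1):
    # Model-agnostic injection: pick the most pivotal hyperedges and
    # attach one malicious node to each, without touching model weights.
    targets = np.argsort(pivotality(H))[-budget:]
    for e in targets:
        x_mal = invert_features(X, H, e)
        X = np.vstack([X, x_mal])
        row = np.zeros((1, H.shape[1]))
        row[0, e] = 1                 # new node joins only hyperedge e
        H = np.vstack([H, row])
    return H, X
```

Because the injection step only edits the incidence matrix and feature matrix, the same perturbed hypergraph can be fed to any victim HGNN, which is what makes an attack of this shape transferable in principle.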
Meixia He
School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University
Peican Zhu
School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University
Le Cheng
PhD candidate, Northwestern Polytechnical University
Network science · Source detection · Machine learning
Yangming Guo
School of Cybersecurity, Northwestern Polytechnical University
Manman Yuan
School of Computer Science, Inner Mongolia University
Keke Tang
Full Professor of Cybersecurity, Guangzhou University (always open to cooperation)
AI security · 3D vision · Computer graphics · Robotics