RoBCtrl: Attacking GNN-Based Social Bot Detectors via Reinforced Manipulation of Bots Control Interaction

πŸ“… 2025-10-15
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Attacking GNN-based social bot detectors poses three key challenges: limited controllability over bot accounts, the black-box nature of the detectors, and the heterogeneity of bots. To address them, this paper proposes RoBCtrl, the first multi-agent reinforcement learning (MARL) adversarial framework tailored for social bot control attacks. RoBCtrl employs diffusion models to generate high-fidelity, evolution-aware bot accounts; introduces a structural-entropy-guided hierarchical state abstraction to improve the efficiency of multi-agent policy learning; and enables controllable, coordinated attacks across heterogeneous bot groups. Evaluated on multiple benchmark datasets, RoBCtrl significantly degrades the accuracy of mainstream GNN-based detectors and achieves an average 23.6% higher attack success rate than state-of-the-art methods. This work establishes a novel paradigm for rigorously evaluating and improving the robustness of social bot detection systems.
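The control-attack loop the summary describes, in which each agent controls one category of bots and learns where to attach them from the detector's black-box feedback, can be illustrated with a toy epsilon-greedy bandit. Everything here (the `AttachAgent` class, the target categories, and the stub evasion probabilities) is an illustrative assumption, not the paper's actual implementation:

```python
import random

random.seed(0)

# Illustrative stand-in for black-box detector feedback: the probability
# that a bot evades detection after attaching to a target of each kind.
# Real RoBCtrl queries an actual GNN detector; these numbers are made up.
EVASION_PROB = {"celebrity": 0.2, "mid_influence": 0.5, "ordinary": 0.7}

class AttachAgent:
    """One agent controls one category of bots and learns, via an
    epsilon-greedy bandit, which kind of account to attach them to."""
    def __init__(self, actions, eps=0.1):
        self.q = {a: 0.0 for a in actions}   # running value estimates
        self.n = {a: 0 for a in actions}     # visit counts per action
        self.eps = eps

    def act(self):
        if random.random() < self.eps:      # explore
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)  # exploit best estimate

    def update(self, action, reward):
        self.n[action] += 1
        # incremental mean update of the value estimate
        self.q[action] += (reward - self.q[action]) / self.n[action]

def train(agent, steps=5000):
    for _ in range(steps):
        a = agent.act()
        # Reward 1 if the attached bot evades the (stub) detector.
        r = 1.0 if random.random() < EVASION_PROB[a] else 0.0
        agent.update(a, r)
    return agent

agent = train(AttachAgent(list(EVASION_PROB)))
best = max(agent.q, key=agent.q.get)
print(best, agent.q[best])
```

A bandit ignores graph state entirely; the paper's MARL agents additionally condition on a structural-entropy-based abstraction of the graph, but the explore/attach/observe-reward cycle is the same shape.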

πŸ“ Abstract
Social networks have become a crucial source of real-time information for individuals. The influence of social bots within these platforms has garnered considerable attention from researchers, leading to the development of numerous detection technologies. However, the vulnerability and robustness of these detection methods are still underexplored. Existing adversarial attack methods against Graph Neural Networks (GNNs) cannot be directly applied because of limited control over social agents, the black-box nature of bot detectors, and the heterogeneity of bots. To address these challenges, this paper proposes the first adversarial multi-agent Reinforcement learning framework for social Bot control attacks (RoBCtrl) targeting GNN-based social bot detectors. Specifically, we use a diffusion model to generate high-fidelity bot accounts by reconstructing existing account data with minor modifications, thereby evading detection on social platforms. To the best of our knowledge, this is the first application of diffusion models to effectively mimic the behavior of evolving social bots. We then employ a Multi-Agent Reinforcement Learning (MARL) method to simulate the bots' adversarial behavior: social accounts are categorized by their influence and budget, and a different agent controls the bot accounts in each category, optimizing its attachment strategy through reinforcement learning. Additionally, a hierarchical state abstraction based on structural entropy is designed to accelerate the reinforcement learning. Extensive experiments on social bot detection datasets demonstrate that our framework can effectively undermine the performance of GNN-based detectors.
Problem

Research questions and friction points this paper is trying to address.

Attacking GNN-based social bot detection systems
Evading detection by generating realistic bot accounts
Optimizing adversarial bot control via multi-agent reinforcement learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses diffusion model to generate realistic bot accounts
Employs multi-agent reinforcement learning for adversarial control
Applies hierarchical state abstraction to accelerate training
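The structural-entropy idea behind the hierarchical state abstraction can be illustrated with one-dimensional structural entropy, H(G) = -Σ_v (d_v / 2m) log2(d_v / 2m), a degree-based measure from Li and Pan's structural information theory. This is a simplified sketch of the underlying quantity, not the paper's exact hierarchical encoding:

```python
import math
from collections import Counter

def structural_entropy_1d(edges):
    """One-dimensional structural entropy of an undirected graph:
    H = -sum_v (d_v / 2m) * log2(d_v / 2m), where d_v is the degree
    of node v and m is the number of edges."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    two_m = sum(deg.values())  # equals 2 * number of edges
    return -sum((d / two_m) * math.log2(d / two_m) for d in deg.values())

# A star graph concentrates degree on the hub, giving lower entropy
# than a cycle over the same number of nodes (all degrees equal).
star = [(0, i) for i in range(1, 5)]          # hub 0, leaves 1..4
cycle = [(i, (i + 1) % 5) for i in range(5)]  # 5-cycle, every degree 2
print(structural_entropy_1d(star), structural_entropy_1d(cycle))
```

Lower entropy means a more compressible structure, which is what makes an entropy-minimizing hierarchical partition a natural compact state representation for the agents.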