🤖 AI Summary
Large language model (LLM)-driven multi-agent systems (MAS) face escalating corruption attacks under dynamic communication, while existing static graph-based defenses lack adaptability to evolving threats. Method: This paper proposes a dynamic defense paradigm leveraging graph representation learning and node behavioral assessment to detect anomalous agent interactions in real time and dynamically reconfigure the communication topology to sever malicious connections. Contribution/Results: The core innovation is the first “monitor–assess–reconstruct” closed-loop mechanism, enabling adaptive identification and response to complex, time-varying attacks. Extensive experiments across diverse dynamic scenarios demonstrate that the approach significantly outperforms state-of-the-art defenses, effectively mitigating heterogeneous corruption attacks and substantially enhancing MAS robustness and trustworthiness—establishing a novel, deployable security paradigm for LLM-MAS.
📝 Abstract
Large Language Model (LLM)-based Multi-Agent Systems (MAS) have become a popular paradigm of AI applications. However, trustworthiness issues in MAS remain a critical concern. Unlike challenges in single-agent systems, MAS involve more complex communication processes, making them susceptible to corruption attacks. To mitigate this issue, several defense mechanisms have been developed based on the graph representation of MAS, where agents represent nodes and communications form edges. Nevertheless, these methods predominantly focus on static graph defense, attempting to either detect attacks in a fixed graph structure or optimize a static topology with certain defensive capabilities. To address this limitation, we propose a dynamic defense paradigm for MAS graph structures, which continuously monitors communication within the MAS graph, then dynamically adjusts the graph topology, accurately disrupts malicious communications, and effectively defends against evolving and diverse dynamic attacks. Experimental results in increasingly complex and dynamic MAS environments demonstrate that our method significantly outperforms existing MAS defense mechanisms, contributing an effective guardrail for their trustworthy applications. Our code is available at https://github.com/ChengcanWu/Monitoring-LLM-Based-Multi-Agent-Systems.