🤖 AI Summary
This work addresses two critical challenges in large language model (LLM)-driven multi-agent collaboration: the lack of unified coordination mechanisms and poorly characterized security and privacy risks. To this end, we establish the first systematic taxonomy for LLM-agent collaboration grounded in the "connected intelligence" paradigm. We propose a three-layer architecture for cross-agent coordination (spanning data, computation, and knowledge) and develop a security vulnerability attribution framework alongside a layered defense roadmap. By integrating LLMs, federated learning, differential privacy, and trusted execution environments, we construct a holistic evaluation system for the LLM-agent ecosystem, identifying seven categories of security threats and five privacy leakage pathways. These contributions provide both theoretical foundations and practical guidelines for building robust, auditable, and scalable autonomous agent collaboration systems.
📝 Abstract
With the rapid advancement of large models (LMs), general-purpose intelligent agents powered by LMs have become a reality. It is foreseeable that, in the near future, LM-driven general AI agents will serve as essential tools in production tasks, communicating and collaborating autonomously without human intervention. This paper investigates scenarios involving the autonomous collaboration of future LM agents. We review the current state of LM agents, the key technologies enabling their collaboration, and the security and privacy challenges they face during cooperative operations. Specifically, we first explore the foundational principles of LM agents, including their general architecture, key components, enabling technologies, and modern applications. We then discuss practical collaboration paradigms from the data, computation, and knowledge perspectives for achieving connected intelligence among LM agents. Next, we analyze the security vulnerabilities and privacy risks associated with LM agents, particularly in multi-agent settings, examining their underlying mechanisms and reviewing current and potential countermeasures. Finally, we propose future research directions for building robust and secure LM agent ecosystems.