Large Model Based Agents: State-of-the-Art, Cooperation Paradigms, Security and Privacy, and Future Trends

📅 2024-09-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses critical challenges in large language model (LLM)-driven multi-agent collaboration—specifically, the lack of unified coordination mechanisms and ill-defined security and privacy risks. To this end, we systematically establish the first taxonomy framework for LLM-agent collaboration grounded in the “connected intelligence” paradigm. We propose a three-layer architecture for cross-agent coordination—encompassing data, computation, and knowledge—and develop a security vulnerability attribution framework alongside a layered defense roadmap. Integrating LLMs, federated learning, differential privacy, and trusted execution environments, we construct a holistic evaluation system for the LLM-agent ecosystem, identifying seven categories of security threats and five privacy leakage pathways. Our contributions provide both theoretical foundations and practical guidelines for building robust, auditable, and scalable autonomous agent collaboration systems.

📝 Abstract
With the rapid advancement of large models (LMs), the development of general-purpose intelligent agents powered by LMs has become a reality. It is foreseeable that in the near future, LM-driven general AI agents will serve as essential tools in production tasks, capable of autonomous communication and collaboration without human intervention. This paper investigates scenarios involving the autonomous collaboration of future LM agents. We review the current state of LM agents, the key technologies enabling LM agent collaboration, and the security and privacy challenges they face during cooperative operations. To this end, we first explore the foundational principles of LM agents, including their general architecture, key components, enabling technologies, and modern applications. We then discuss practical collaboration paradigms from data, computation, and knowledge perspectives to achieve connected intelligence among LM agents. After that, we analyze the security vulnerabilities and privacy risks associated with LM agents, particularly in multi-agent settings, examining underlying mechanisms and reviewing current and potential countermeasures. Lastly, we propose future research directions for building robust and secure LM agent ecosystems.
Problem

Research questions and friction points this paper is trying to address.

Large-scale Intelligent Models
Collaborative Robotics
Safety and Privacy Challenges
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-robot Collaboration
Security and Privacy
Intelligent Robot Systems
👥 Authors

Yuntao Wang
Tsinghua University
Human-Computer Interaction · Ubiquitous Computing · Physio-Behavioral Computing

Yanghe Pan
School of Cyber Science and Engineering, Xi’an Jiaotong University, Xi’an, China

Zhou Su
Xi'an Jiaotong University

Yi Deng
School of Cyber Science and Engineering, Xi’an Jiaotong University, Xi’an, China

Quan Zhao
School of Cyber Science and Engineering, Xi’an Jiaotong University, Xi’an, China

L. Du
School of Cyber Science and Engineering, Xi’an Jiaotong University, Xi’an, China

Tom H. Luan
School of Cyber Science and Engineering, Xi’an Jiaotong University, Xi’an, China

Jiawen Kang
School of Automation, Guangdong University of Technology, Guangzhou, China

Dusit Niyato
College of Computing and Data Science, Nanyang Technological University, Singapore