🤖 AI Summary
Multi-LLM agent systems deployed in edge general intelligence scenarios face critical security risks, including insecure inter-LLM communication, expanded attack surfaces, and cross-domain data leakage, which traditional perimeter-based defenses cannot address in the absence of zero-trust principles. To address this, the paper pioneers the systematic integration of zero-trust architecture into multi-LLM coordination, proposing a "never trust, always verify" security framework. Its contributions are threefold: (1) a novel taxonomy of multi-LLM security mechanisms spanning model-level and system-level abstractions; (2) a lightweight, zero-trust-enabled architecture integrating strong identity authentication, context-aware access control, proactive defense, and blockchain-driven governance; and (3) concrete security implementation guidelines and open research directions for multi-LLM collaboration in heterogeneous edge environments. This work establishes both theoretical foundations and deployable technical paradigms for trustworthy edge AI.
📝 Abstract
Agentification serves as a critical enabler of Edge General Intelligence (EGI), transforming massive numbers of edge devices into cognitive agents by integrating Large Language Models (LLMs) with perception, reasoning, and acting modules. These agents collaborate across heterogeneous edge infrastructures, forming multi-LLM agentic AI systems that leverage collective intelligence and specialized capabilities to tackle complex, multi-step tasks. However, the collaborative nature of multi-LLM systems introduces critical security vulnerabilities, including insecure inter-LLM communications, expanded attack surfaces, and cross-domain data leakage, that traditional perimeter-based security cannot adequately address. To this end, this survey introduces zero-trust security for multi-LLM systems in EGI, a paradigmatic shift following the "never trust, always verify" principle. We begin by systematically analyzing the security risks of multi-LLM systems in EGI contexts. Subsequently, we present the vision of a zero-trust multi-LLM framework in EGI. We then survey key technical progress toward zero-trust multi-LLM systems in EGI. In particular, we categorize zero-trust security mechanisms into model-level and system-level approaches: the former includes strong identification and context-aware access control, among others, while the latter includes proactive maintenance and blockchain-based management, among others. Finally, we identify critical research directions. This survey serves as the first systematic treatment of zero trust applied to multi-LLM systems, providing both theoretical foundations and practical strategies.
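The "never trust, always verify" principle for inter-LLM communication can be sketched as a per-message check that combines identity authentication with context-aware access control: every message is authenticated and authorized before an agent acts on it, regardless of where it originated. The sketch below is a minimal illustration of that idea, not the survey's framework; the agent names, shared-key registry, and capability policy are all illustrative assumptions.

```python
import hmac
import hashlib

# Assumed key registry: each agent shares a secret with the verifier.
AGENT_KEYS = {"vision-agent": b"key-1", "planner-agent": b"key-2"}

# Illustrative context-aware policy: which agent may invoke which capability.
ACCESS_POLICY = {
    "vision-agent": {"describe_scene"},
    "planner-agent": {"describe_scene", "plan_route"},
}

def sign(agent_id: str, payload: str) -> str:
    """Sender attaches an HMAC tag binding its identity to the message."""
    return hmac.new(AGENT_KEYS[agent_id], payload.encode(), hashlib.sha256).hexdigest()

def verify_and_authorize(agent_id: str, capability: str, payload: str, tag: str) -> bool:
    """Never trust, always verify: authenticate first, then check the policy."""
    key = AGENT_KEYS.get(agent_id)
    if key is None:
        return False  # unknown identity: reject
    expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        return False  # authentication failed: message tampered or forged
    # Context-aware access control: identity alone is not enough.
    return capability in ACCESS_POLICY.get(agent_id, set())

# Usage: an authenticated agent is still denied capabilities outside its policy.
tag = sign("vision-agent", "frame-001")
verify_and_authorize("vision-agent", "describe_scene", "frame-001", tag)  # allowed
verify_and_authorize("vision-agent", "plan_route", "frame-001", tag)      # denied
```

The key design point mirrored here is that authentication and authorization are evaluated on every message rather than once at a network perimeter, which is what distinguishes a zero-trust flow from perimeter-based security.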