🤖 AI Summary
AI agents in multi-agent collaboration pose fine-grained privacy risks due to information misuse—including unauthorized data sharing, tool invocation, and derivative data generation—rendering conventional binary access control inadequate for role-specific, computation-dependent, and dynamically evolving compliance requirements.
Method: We propose the first four-tier graded encryption communication framework tailored for AI agent collaboration, embedding privacy protection across the entire agent behavior lifecycle. It integrates homomorphic encryption, label-based dynamic access control, encrypted state machines, and secure multi-party computation, implemented end-to-end on LangGraph and Google ADK.
Contribution/Results: The framework enables progressive privacy-strength adjustment—from plaintext interaction to fully homomorphic encrypted computation—and empirically validates secure cross-silo collaboration. We release the first benchmark suite covering multi-level privacy tasks, providing both theoretical foundations and engineering blueprints for auditable, verifiable secure agent systems.
📝 Abstract
As AI agents increasingly operate in real-world, multi-agent environments, ensuring reliable and context-aware privacy in agent communication is critical, especially to comply with evolving regulatory requirements. Traditional access controls are insufficient, as privacy risks often arise after access is granted; agents may use information in ways that compromise privacy, such as messaging humans, sharing context with other agents, making tool calls, persisting data, or generating derived private information. Existing approaches often treat privacy as a binary constraint (data is either shareable or not), overlooking the nuanced, role-specific, and computation-dependent privacy needs essential for regulatory compliance.
Agents, including those based on large language models, are inherently probabilistic and heuristic. There is no formal guarantee of how an agent will behave for any given query, making them ill-suited for security-critical operations. To address this, we introduce AgentCrypt, a four-tiered framework for fine-grained, encrypted agent communication that adds a protection layer atop any AI agent platform. AgentCrypt spans from unrestricted data exchange (Level 1) to fully encrypted computation using techniques such as homomorphic encryption (Level 4). Crucially, it guarantees that the privacy of tagged data is always maintained, prioritizing privacy above correctness.
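The tiered, tag-driven design described above can be sketched in a few lines. This is a minimal illustrative sketch, not AgentCrypt's actual API: the level names, `TaggedMessage` type, and `dispatch` function are hypothetical, and it shows only the core invariant that a message tagged at a given level is released solely to a recipient cleared at or above that level, with refusal (privacy over correctness) as the fallback.

```python
from dataclasses import dataclass
from enum import IntEnum
from typing import Optional

class PrivacyLevel(IntEnum):
    """Hypothetical labels for the four tiers described in the abstract."""
    L1_PLAINTEXT = 1        # unrestricted data exchange
    L2_LABELED = 2          # label-based dynamic access control
    L3_ENCRYPTED_STATE = 3  # encrypted state-machine interaction
    L4_HOMOMORPHIC = 4      # fully encrypted computation

@dataclass(frozen=True)
class TaggedMessage:
    payload: str
    level: PrivacyLevel    # tag attached at creation; never downgraded

def dispatch(msg: TaggedMessage,
             recipient_clearance: PrivacyLevel) -> Optional[str]:
    """Release the payload only if the recipient's clearance meets or
    exceeds the message tag; otherwise refuse. Refusing (returning None)
    rather than degrading the tag encodes privacy-above-correctness."""
    if recipient_clearance >= msg.level:
        return msg.payload
    return None
```

In a real deployment the Level 3 and Level 4 branches would route through an encrypted state machine or a homomorphic-encryption backend rather than returning plaintext, but the gate-before-release invariant is the same.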
AgentCrypt ensures privacy across diverse interactions and enables computation on otherwise inaccessible data, overcoming barriers such as data silos. We implemented and tested it with LangGraph and Google ADK, demonstrating versatility across platforms. We also introduce a benchmark dataset simulating privacy-critical tasks at all privacy levels, enabling systematic evaluation and fostering the development of regulatable machine learning systems for secure agent communication and computation.