🤖 AI Summary
Traditional static access control mechanisms fail to address the dynamic, multi-source, and context-sensitive information flows inherent in LLM-based agent systems. To bridge this gap, the authors propose Agent Access Control (AAC), a novel framework that shifts access control from binary permission assignment to fine-grained information-flow governance. AAC integrates triple-context modeling (relational, situational, and normative) to power a dedicated access control reasoning engine, and it incorporates information rewriting techniques, including redaction, summarization, and paraphrasing, to enable adaptive response generation and real-time policy enforcement. The authors argue that AAC can enhance both the security and semantic fidelity of information flows while preserving human-like nuanced judgment, yielding controllable, interpretable, and scalable AI governance. The work establishes a new methodology and a systematic implementation pathway for designing trustworthy LLM agents.
📝 Abstract
The autonomy and contextual complexity of LLM-based agents render traditional access control (AC) mechanisms insufficient. Static, rule-based systems designed for predictable environments are fundamentally ill-equipped to manage the dynamic information flows inherent in agentic interactions. This position paper argues for a paradigm shift from binary access control to a more sophisticated model of information governance, positing that the core challenge is not merely granting permission but governing the flow of information. We introduce Agent Access Control (AAC), a novel framework that reframes AC as a dynamic, context-aware process of information-flow governance. AAC operates through two core modules: (1) multi-dimensional contextual evaluation, which assesses not just identity but also relationships, scenarios, and norms; and (2) adaptive response formulation, which moves beyond simple allow/deny decisions to shape information through redaction, summarization, and paraphrasing. This vision, powered by a dedicated AC reasoning engine, aims to bridge the gap between human-like nuanced judgment and scalable AI safety, proposing a new conceptual lens for future research in trustworthy agent design.
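To make the two-module design concrete, the pipeline can be sketched in miniature: a contextual-evaluation step maps a (relationship, scenario, norm-sensitivity) triple to an action, and a response-formulation step shapes the message accordingly. This is an illustrative toy, not the paper's implementation; all class names, thresholds, and rewriting rules below are assumptions for the sake of example.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    """Graded outcomes replacing a binary allow/deny decision."""
    ALLOW = auto()
    REDACT = auto()
    SUMMARIZE = auto()
    DENY = auto()


@dataclass
class Context:
    """Hypothetical triple-context input: relational, situational, normative."""
    relationship: str        # e.g. "colleague", "stranger"
    scenario: str            # e.g. "briefing", "casual"
    norm_sensitivity: float  # 0.0 (public) .. 1.0 (highly sensitive)


def decide(ctx: Context) -> Action:
    """Module 1 (toy policy): multi-dimensional contextual evaluation."""
    if ctx.norm_sensitivity > 0.8 and ctx.relationship == "stranger":
        return Action.DENY
    if ctx.norm_sensitivity > 0.5:
        return Action.REDACT
    if ctx.scenario == "briefing":
        return Action.SUMMARIZE
    return Action.ALLOW


def shape(message: str, action: Action) -> str:
    """Module 2 (toy rewriting): adaptive response formulation."""
    if action is Action.DENY:
        return ""
    if action is Action.REDACT:
        # Naive stand-in for PII scrubbing: mask every digit.
        return "".join("#" if c.isdigit() else c for c in message)
    if action is Action.SUMMARIZE:
        # Crude summary: keep only the first sentence.
        return message.split(".")[0] + "."
    return message
```

Run end to end, a moderately sensitive message to an external party is rewritten rather than blocked outright:

```python
msg = "Patient ID 4821 was admitted on 03/14."
ctx = Context(relationship="external_vendor", scenario="support", norm_sensitivity=0.6)
shape(msg, decide(ctx))  # → "Patient ID #### was admitted on ##/##."
```

A real reasoning engine would of course replace these hand-coded thresholds with learned or norm-derived policies; the point of the sketch is only that the decision space is an enum of flow-shaping actions, not a boolean.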