Extending the Formalism and Theoretical Foundations of Cryptography to AI

📅 2026-03-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the absence of a unified formal foundation for systematically evaluating the security of access control and permission mechanisms in language model–based agents. It extends cryptographic formal methods to the domain of AI agents by introducing the AIOracle formal model and a security-game framework encompassing confidentiality, integrity, and availability. The paper further establishes a taxonomy of agent-specific attacks and, through a modular decomposition of helpfulness and harmlessness objectives, constructs provably secure reductions that reveal a fundamental tension between training-data confidentiality and system completeness. By establishing a quantifiable theoretical basis for AI agent security, this study demonstrates the necessity of modular design for achieving provable security and provides a formal verification pathway for future secure agent architectures.

📝 Abstract
Recent progress in (Large) Language Models (LMs) has enabled the development of autonomous LM-based agents capable of executing complex tasks with minimal supervision. These agents are increasingly integrated into systems with significant autonomy and authority, and the security community has begun to study the risks this creates. One emerging direction for mitigating these risks is to constrain agent behaviour via access control and permissioning mechanisms. Existing permissioning proposals, however, remain difficult to compare due to the absence of a shared formal foundation. This work provides such a foundation. We first systematize the landscape by constructing an attack taxonomy tailored to language models, the computational primitives of agentic systems. We then develop a formal treatment of agentic access control by defining an AIOracle algorithmically and introducing a security-game framework that captures completeness (in the absence of an adversary) and adversarial robustness. Our security game unifies confidentiality, integrity, and availability within a single model. Using this framework, we show that existing approaches to confidentiality of training data fundamentally conflict with completeness. Finally, we formalize a modular decomposition of helpfulness and harmlessness objectives and prove its soundness, enabling principled reasoning about the security of agentic system designs. Our results suggest that designing a system with measurable security calls for a modular approach: breaking the problem into sub-problems and completing the design by composing the resulting modules. They further show that this natural approach, paired with the relevant formalism, is what makes security reductions provable.
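The security-game framing the abstract describes follows the standard cryptographic pattern: a challenger hides a secret bit, an adversary interacts with an oracle, and security is measured by the adversary's advantage over random guessing. A minimal sketch of that pattern, in Python, is below; all names here (`confidentiality_game`, `oracle`, `adversary`) are illustrative assumptions, not the paper's actual AIOracle definitions.

```python
import random

def confidentiality_game(oracle, adversary, trials=10_000):
    """Generic left-or-right indistinguishability game (illustrative sketch).

    The challenger flips a secret bit b. The adversary may query a
    restricted interface: it submits a pair (m0, m1) and only ever
    observes oracle(m_b). It wins a round by guessing b. An empirical
    advantage near 0 means the oracle's outputs do not reveal which
    of the two inputs it was actually run on.
    """
    wins = 0
    for _ in range(trials):
        b = random.randrange(2)
        # Adversary sees only oracle(m_b), never b or the unused message.
        guess = adversary(lambda m0, m1: oracle((m0, m1)[b]))
        wins += (guess == b)
    return abs(wins / trials - 0.5)  # empirical advantage over guessing

# Toy instantiation: an oracle that ignores its input leaks nothing,
# so any adversary's advantage should be close to 0.
constant_oracle = lambda m: "refused"
blind_adversary = lambda query: random.randrange(2)
adv = confidentiality_game(constant_oracle, blind_adversary, trials=2_000)
```

In the paper's setting the oracle would be the LM-based agent behind its permissioning layer, and the completeness condition corresponds to the same game run without an adversary, where the agent must still answer legitimate queries; the claimed tension is that driving the advantage to zero for training-data confidentiality can force the oracle toward refusals that violate completeness.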
Problem

Research questions and friction points this paper is trying to address.

AI agents
access control
formal foundations
security
language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

formal security framework
agentic access control
AIOracle
modular decomposition
adversarial robustness