🤖 AI Summary
AI agents and large language model (LLM) applications lack runtime security guarantees in open environments. Method: This paper proposes A2AS, a lightweight runtime security framework that requires no model retraining, no architectural changes, and incurs no latency overhead. Built on the BASIC security model—Behavior certificates, Authenticated prompts, Security boundaries, In-context defenses, and Codified policies—A2AS enables certified behavior enforcement, context window integrity, and model self-defense. Contribution/Results: The framework avoids external dependencies and system refactoring while supporting fine-grained, application-specific security policies. Crucially, this work adapts an HTTPS-inspired defense-in-depth paradigm to the LLM runtime layer, and this first paper in the series explores its potential as the basis for an A2AS industry standard for AI agent and LLM security.
📝 Abstract
The A2AS framework is introduced as a security layer for AI agents and LLM-powered applications, similar to how HTTPS secures HTTP. A2AS enforces certified behavior, activates model self-defense, and ensures context window integrity. It defines security boundaries, authenticates prompts, applies security rules and custom policies, and controls agentic behavior, enabling a defense-in-depth strategy. The A2AS framework avoids latency overhead, external dependencies, architectural changes, model retraining, and operational complexity. The BASIC security model is introduced as the A2AS foundation:

- **(B) Behavior certificates** enable behavior enforcement
- **(A) Authenticated prompts** enable context window integrity
- **(S) Security boundaries** enable untrusted input isolation
- **(I) In-context defenses** enable secure model reasoning
- **(C) Codified policies** enable application-specific rules

This first paper in the series introduces the BASIC security model and the A2AS framework, exploring their potential toward establishing the A2AS industry standard.
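To make the "authenticated prompts" and "security boundaries" ideas concrete, the sketch below shows one plausible shape such a layer could take: a trusted system prompt is tagged with an HMAC so the runtime can verify it was not tampered with, while untrusted user input is wrapped in explicit boundary markers before entering the context window. All names, tag formats, and the use of HMAC here are illustrative assumptions, not the actual A2AS specification.

```python
import hmac
import hashlib

# Hypothetical shared secret between the application and its runtime guard;
# not part of the A2AS spec, used only to illustrate prompt authentication.
SECRET_KEY = b"demo-secret-key"

def authenticate_prompt(system_prompt: str) -> str:
    """Attach an HMAC tag so the runtime can verify the trusted prompt's integrity."""
    tag = hmac.new(SECRET_KEY, system_prompt.encode(), hashlib.sha256).hexdigest()
    return f"<trusted sig={tag}>\n{system_prompt}\n</trusted>"

def verify_prompt(tagged: str) -> bool:
    """Recompute the HMAC over the wrapped body and compare in constant time."""
    header, _, rest = tagged.partition("\n")
    body = rest.rsplit("\n</trusted>", 1)[0]
    claimed = header[len("<trusted sig="):-1]
    expected = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)

def isolate_untrusted(user_input: str) -> str:
    """Wrap untrusted input in explicit boundary markers (security boundaries)."""
    return f"<untrusted>\n{user_input}\n</untrusted>"

def build_context(system_prompt: str, user_input: str) -> str:
    """Assemble a context window from an authenticated prompt and isolated input."""
    return authenticate_prompt(system_prompt) + "\n" + isolate_untrusted(user_input)
```

Under this sketch, in-context defenses and codified policies would live inside the trusted, authenticated region, while anything inside the `<untrusted>` boundary is treated as data rather than instructions.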