AAGATE: A NIST AI RMF-Aligned Governance Platform for Agentic AI

📅 2025-10-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Language-model-driven autonomous agents face critical safety and governance challenges in production environments. Method: This paper introduces AAGATE—the first end-to-end governance platform for autonomous AI agents—grounded in the NIST AI Risk Management Framework (AI RMF). It integrates MAESTRO threat modeling, hybrid AIVSS/SSVC risk scoring, and red-teaming, while introducing three novel components: the Digital Identity Rights Framework (DIRF), defenses against Logic-layer Prompt Control Injection (LPCI), and Quantitative Semantic Adversarial Fidelity (QSAF) monitoring for cognitive degradation. Built on Kubernetes-native infrastructure, a zero-trust service mesh, an explainable policy engine, behavioral analytics, and decentralized accountability hooks, it enables fine-grained, verifiable, and adaptive governance. Contribution/Results: Experiments demonstrate that AAGATE significantly enhances the security, traceability, and regulatory compliance of autonomous AI systems, enabling trustworthy, enterprise-scale AI deployment.

📝 Abstract
This paper introduces the Agentic AI Governance Assurance & Trust Engine (AAGATE), a Kubernetes-native control plane designed to address the unique security and governance challenges posed by autonomous, language-model-driven agents in production. Recognizing the limitations of traditional Application Security (AppSec) tooling for improvisational, machine-speed systems, AAGATE operationalizes the NIST AI Risk Management Framework (AI RMF). It integrates specialized security frameworks for each RMF function: the Agentic AI Threat Modeling MAESTRO framework for Map, a hybrid of OWASP's AIVSS and SEI's SSVC for Measure, and the Cloud Security Alliance's Agentic AI Red Teaming Guide for Manage. By incorporating a zero-trust service mesh, an explainable policy engine, behavioral analytics, and decentralized accountability hooks, AAGATE provides a continuous, verifiable governance solution for agentic AI, enabling safe, accountable, and scalable deployment. The framework is further extended with DIRF for digital identity rights, LPCI defenses for logic-layer injection, and QSAF monitors for cognitive degradation, ensuring governance spans systemic, adversarial, and ethical risks.
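The abstract's hybrid of OWASP's AIVSS (quantitative scoring) and SEI's SSVC (stakeholder-driven decision trees) for the Measure function can be sketched as follows. This is a minimal illustration under stated assumptions: the field names, amplification term, thresholds, and decision labels below are hypothetical, not taken from the paper or from the AIVSS/SSVC specifications.

```python
# Hypothetical sketch of hybrid AIVSS/SSVC risk scoring.
# All metric names, weights, and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Finding:
    """A vulnerability observed in an agent, with AIVSS-style metrics (0.0-1.0)."""
    base_severity: float            # quantitative severity, AIVSS-style
    agentic_amplification: float    # added risk from tool use / autonomy
    exploited_in_wild: bool         # SSVC-style "Exploitation" decision point
    automatable: bool               # SSVC-style "Automatable" decision point
    mission_impact: str             # "low" | "medium" | "high"

def aivss_score(f: Finding) -> float:
    """Fold agentic amplification into a capped 0-10 quantitative score."""
    return round(min(10.0, 10.0 * f.base_severity * (1.0 + f.agentic_amplification)), 1)

def ssvc_decision(f: Finding) -> str:
    """Map SSVC-style decision points to a prioritization outcome."""
    if f.exploited_in_wild and f.mission_impact == "high":
        return "act"      # immediate containment
    if f.exploited_in_wild or (f.automatable and f.mission_impact != "low"):
        return "attend"   # expedited remediation
    return "track"        # monitor at normal cadence

def hybrid_priority(f: Finding) -> tuple[float, str]:
    """Hybrid output: a quantitative score plus a qualitative decision."""
    return aivss_score(f), ssvc_decision(f)

finding = Finding(base_severity=0.7, agentic_amplification=0.4,
                  exploited_in_wild=False, automatable=True,
                  mission_impact="medium")
print(hybrid_priority(finding))  # → (9.8, 'attend')
```

The pairing reflects why a hybrid is useful: the numeric score supports trend dashboards and SLAs, while the decision tree yields an auditable, stakeholder-specific action.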
Problem

Research questions and friction points this paper is trying to address.

Traditional AppSec tooling cannot govern improvisational, machine-speed autonomous agents
How to operationalize the NIST AI Risk Management Framework for agentic systems in production
How to provide continuous, verifiable governance spanning systemic, adversarial, and ethical risks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Kubernetes-native control plane for autonomous AI governance
Operationalizes NIST AI RMF with specialized security frameworks
Integrates zero-trust service mesh and behavioral analytics
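The explainable policy engine and zero-trust posture named above can be illustrated with a small deny-by-default evaluator whose every verdict carries a reason trail. This is a hedged sketch, not AAGATE's actual implementation; the rule names and action fields are invented for illustration.

```python
# Hypothetical sketch of an explainable, deny-by-default policy check.
# Rule names and action fields are illustrative assumptions.

def evaluate(action: dict, rules: list) -> dict:
    """Evaluate an agent action against ordered rules, recording which
    rules were consulted so the decision is auditable after the fact."""
    trail = []
    for rule in rules:
        matched = rule["predicate"](action)
        trail.append({"rule": rule["name"], "matched": matched})
        if matched:
            return {"decision": rule["effect"], "trail": trail}
    # Zero-trust default: anything no rule explicitly allows is denied.
    return {"decision": "deny", "trail": trail}

rules = [
    {"name": "block-untrusted-tools",
     "predicate": lambda a: a["tool"] not in {"search", "calculator"},
     "effect": "deny"},
    {"name": "allow-read-only",
     "predicate": lambda a: a.get("mode") == "read",
     "effect": "allow"},
]

verdict = evaluate({"tool": "search", "mode": "read"}, rules)
print(verdict["decision"])  # → allow, with a trail explaining why
```

Returning the trail alongside the decision is what makes the engine "explainable": auditors can reconstruct exactly which rule permitted or blocked a given agent action.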