🤖 AI Summary
This study addresses the challenge enterprises face in governing and verifying the compliance of AI systems that emerge across teams without formal oversight, a gap that opens between regulatory expectations and operational capabilities. To bridge this gap, the work proposes a continuous, autonomous AI governance architecture that translates compliance requirements into real-time telemetry mechanisms in an always-on operating layer. By leveraging a zero-trust telemetry boundary, ephemeral read-only probes, and AI observability agents that integrate LangSmith and Datadog LLM telemetry, the approach automatically discovers AI systems, collects control assertions, and continuously generates verifiable evidence without accessing source code or sensitive payloads. Evaluated against major regulatory frameworks, including ISO/IEC 42001, the EU AI Act, SOC 2, GDPR, and HIPAA, this method enables a fundamental shift from document-based policy trust to empirically grounded architectural trust.
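To make the discovery step concrete, below is a minimal sketch of how an observability-driven inventory scan might look, assuming a LangSmith-instrumented estate. The `Client.list_projects()` call is a real LangSmith API; the governance inventory (`KNOWN_SYSTEMS`) and the `register_ai_system` hook are hypothetical scaffolding standing in for the paper's registry, and a Datadog LLM telemetry scan would follow the same pattern.

```python
# Minimal sketch of telemetry-driven AI discovery (assumptions labeled below).
from langsmith import Client  # pip install langsmith; reads LANGSMITH_API_KEY from env

# Hypothetical governance inventory: systems that already have owners and controls.
KNOWN_SYSTEMS: set[str] = {"support-copilot", "claims-summarizer"}


def register_ai_system(name: str) -> None:
    """Hypothetical hook into the governance registry (stubbed as a log line)."""
    print(f"[discovery] unregistered AI system found in telemetry: {name}")


def discover_shadow_ai() -> None:
    """Read-only scan: compare tracing projects against the governed inventory.

    Only project names (structural metadata) are inspected; run inputs and
    outputs, which may contain PII, are never fetched.
    """
    client = Client()
    for project in client.list_projects():  # real LangSmith client method
        if project.name not in KNOWN_SYSTEMS:
            register_ai_system(project.name)
            KNOWN_SYSTEMS.add(project.name)


if __name__ == "__main__":
    discover_shadow_ai()
```

The design point the sketch preserves is the one the summary emphasizes: discovery is driven by what telemetry reveals, not by what teams self-report, and nothing payload-level crosses the boundary.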
📝 Abstract
The accelerating adoption of large language models, retrieval-augmented generation pipelines, and multi-agent AI workflows has created a structural governance crisis. Organizations cannot govern what they cannot see, and existing compliance methodologies built for deterministic web applications provide no mechanism for discovering or continuously validating AI systems that emerge across engineering teams without formal oversight. The result is a widening trust gap between what regulators demand as proof of AI governance maturity and what organizations can demonstrate. This paper proposes AI Trust OS, a governance architecture for continuous, autonomous AI observability and zero-trust compliance. AI Trust OS reconceptualizes compliance as an always-on, telemetry-driven operating layer in which AI systems are discovered through observability signals, control assertions are collected by automated probes, and trust artifacts are synthesized continuously. The framework rests on four principles: proactive discovery, telemetry evidence over manual attestation, continuous posture over point-in-time audit, and architecture-backed proof over policy-document trust. It operates through a zero-trust telemetry boundary in which ephemeral read-only probes validate structural metadata without ingesting source code or payload-level PII. An AI Observability Extractor Agent scans LangSmith and Datadog LLM telemetry, automatically registering undocumented AI systems and shifting governance from organizational self-report to empirical machine observation. Evaluating the framework against ISO/IEC 42001, the EU AI Act, SOC 2, GDPR, and HIPAA, the paper argues that telemetry-first AI governance represents a categorical architectural shift in how enterprise trust is produced and demonstrated.
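As a complement to the abstract's description of the zero-trust telemetry boundary, the sketch below illustrates one plausible shape for an ephemeral read-only probe: it sees only structural run metadata, evaluates a single control, and emits a hash-anchored assertion. The field names, the `GDPR-ART32-REDACTION` control identifier, and the assertion schema are illustrative assumptions, not the paper's actual artifact format.

```python
# Sketch of the probe contract implied by the zero-trust telemetry boundary.
# All schema and control names are hypothetical, for illustration only.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import hashlib
import json


@dataclass(frozen=True)
class RunMetadata:
    """Structural metadata only -- no prompts, completions, or payload PII."""
    system_id: str
    model: str
    region: str
    pii_redaction_enabled: bool


@dataclass(frozen=True)
class ControlAssertion:
    control_id: str
    system_id: str
    passed: bool
    observed_at: str
    evidence_digest: str  # hash of the metadata the verdict was derived from


def probe_pii_redaction(meta: RunMetadata) -> ControlAssertion:
    """Example control: a GDPR-style check that redaction is enabled (illustrative)."""
    digest = hashlib.sha256(
        json.dumps(asdict(meta), sort_keys=True).encode()
    ).hexdigest()
    return ControlAssertion(
        control_id="GDPR-ART32-REDACTION",  # hypothetical control identifier
        system_id=meta.system_id,
        passed=meta.pii_redaction_enabled,
        observed_at=datetime.now(timezone.utc).isoformat(),
        evidence_digest=digest,
    )


if __name__ == "__main__":
    meta = RunMetadata("support-copilot", "gpt-4o", "eu-west-1", True)
    print(probe_pii_redaction(meta))
```

Anchoring each assertion to a digest of the metadata it was derived from is one way such evidence could be made verifiable after the fact without retaining the underlying telemetry.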