Trusted AI Agents in the Cloud

📅 2025-12-05
🤖 AI Summary
In multi-tenant cloud environments, AI agents face critical security challenges, including data leakage, tool misuse, and a lack of trust between interacting entities. To address these, this paper proposes Omega: a hardware-assisted confidential computing system built on AMD SEV-SNP and NVIDIA H100 accelerators. Omega integrates confidential virtual machines, confidential GPUs, differential attestation, and a policy enforcement framework to achieve end-to-end isolation, trustworthy cross-entity verification, and auditable governance of external interactions. Its key innovations are (i) the first accelerator-level fine-grained isolation for AI agents; (ii) dynamic cross-entity trust establishment based on differential attestation; and (iii) nested, high-density deployment of trusted agents. Evaluation demonstrates that Omega secures LLM agent state and data flows while complying with GDPR and other regulatory requirements, achieving near-native performance and enabling scalable, policy-consistent multi-agent collaboration in cloud environments.

📝 Abstract
AI agents powered by large language models are increasingly deployed as cloud services that autonomously access sensitive data, invoke external tools, and interact with other agents. However, these agents run within a complex multi-party ecosystem, where untrusted components can lead to data leakage, tampering, or unintended behavior. Existing Confidential Virtual Machines (CVMs) provide only per-binary protection and offer no guarantees for cross-principal trust, accelerator-level isolation, or supervised agent behavior. We present Omega, a system that enables trusted AI agents by enforcing end-to-end isolation, establishing verifiable trust across all contributing principals, and supervising every external interaction with accountable provenance. Omega builds on Confidential VMs and Confidential GPUs to create a Trusted Agent Platform that hosts many agents within a single CVM using nested isolation. It also provides efficient multi-agent orchestration with cross-principal trust establishment via differential attestation, and a policy specification and enforcement framework that governs data access, tool usage, and inter-agent communication for data protection and regulatory compliance. Implemented on AMD SEV-SNP and NVIDIA H100, Omega fully secures agent state across the CVM-GPU boundary, and achieves high performance while enabling high-density, policy-compliant multi-agent deployments at cloud scale.
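The abstract describes differential attestation only at a high level: principals establish mutual trust by comparing attestation evidence, agreeing on shared platform measurements while agent-specific measurements may differ. As a hedged sketch of that idea (not the paper's actual protocol or report format; all field names here are hypothetical):

```python
import hashlib

def measure(blob: bytes) -> str:
    """Hypothetical stand-in for a TEE measurement: a hash of code/config."""
    return hashlib.sha256(blob).hexdigest()

def differential_attest(report_a: dict, report_b: dict, shared: set) -> bool:
    """Trust is established iff both principals report identical
    measurements for every shared platform component; components
    outside `shared` (e.g. each principal's own agent) may differ."""
    return all(
        c in report_a and c in report_b and report_a[c] == report_b[c]
        for c in shared
    )

# Two agents on the same trusted platform, running different agent code.
platform = measure(b"trusted-agent-platform-v1")
report_a = {"platform": platform, "agent": measure(b"agent-A")}
report_b = {"platform": platform, "agent": measure(b"agent-B")}
print(differential_attest(report_a, report_b, {"platform"}))  # True

# A tampered platform measurement breaks trust establishment.
tampered = dict(report_b, platform=measure(b"modified-platform"))
print(differential_attest(report_a, tampered, {"platform"}))  # False
```

The point of the "differential" comparison is that each principal only needs reference values for the components it shares with its peer, not for the peer's private agent code.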
Problem

Research questions and friction points this paper is trying to address.

Securing AI agents in multi-party cloud ecosystems
Ensuring cross-principal trust and accelerator-level isolation
Supervising agent interactions with accountable provenance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Nested isolation for multi-agent trusted platform
Differential attestation for cross-principal trust establishment
Policy framework governing data access and tool usage
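The paper's policy language is not shown on this page. As a minimal illustrative sketch of per-agent policy enforcement with an audit trail for accountable provenance (all class and field names are hypothetical, not the paper's API):

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """Hypothetical policy bound to one agent: whitelists of tools and
    data classes, plus an append-only log of every decision."""
    allowed_tools: set = field(default_factory=set)
    allowed_data_classes: set = field(default_factory=set)
    audit_log: list = field(default_factory=list)

    def authorize_tool_call(self, tool: str, data_class: str) -> bool:
        permitted = (tool in self.allowed_tools
                     and data_class in self.allowed_data_classes)
        # Record every decision, allowed or denied, for later audit.
        verdict = "ALLOW" if permitted else "DENY"
        self.audit_log.append(f"{verdict} {tool} on {data_class}")
        return permitted

policy = AgentPolicy(allowed_tools={"web_search"},
                     allowed_data_classes={"public"})
print(policy.authorize_tool_call("web_search", "public"))   # True
print(policy.authorize_tool_call("db_query", "personal"))   # False
```

In Omega the analogous checks would be enforced by the trusted platform on data access, tool usage, and inter-agent communication, outside the agent's own control.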
Teofil Bodea
Technical University of Munich
Masanori Misono
Technical University of Munich
Operating Systems, Virtualization, System Software
Julian Pritzi
Technical University of Munich
Patrick Sabanic
Technical University of Munich
Thore Sommer
Technical University of Munich
Harshavardhan Unnibhavi
Technical University of Munich
David Schall
Technical University of Munich
Nuno Santos
INESC-ID/Instituto Superior Tecnico, University of Lisbon
Dimitrios Stavrakakis
Postdoctoral researcher @ Technical University of Munich
Confidential Computing, Memory Safety, Operating Systems, Security, Persistent Memory
Pramod Bhatotia
Professor, TU Munich
Computer Systems