TRiSM for Agentic AI: A Review of Trust, Risk, and Security Management in LLM-based Agentic Multi-Agent Systems

📅 2025-06-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
LLM-driven Autonomous Multi-Agent Systems (AMAS) face critical challenges in trust deficits, emergent risks, and security management as their autonomy increases. Method: The paper proposes TRiSM, a Trust, Risk, and Security Management framework tailored for Agentic AI, grounded in four pillars: governance, explainability, ModelOps, and privacy/security. TRiSM introduces a four-dimensional architecture designed for embodied and tool-augmented AMAS and a domain-specific risk taxonomy for Agentic AI, and it integrates human-centered evaluation, compliance-driven practices (e.g., GDPR, the EU AI Act), adversarial robustness defenses, federated encryption, and dynamic ModelOps pipelines. Contribution/Results: The framework delivers a full-lifecycle TRiSM implementation guide, a reusable risk case repository, and an open benchmark challenge, establishing a standardized, actionable pathway toward trustworthy, deployable AMAS.

📝 Abstract
Agentic AI systems, built on large language models (LLMs) and deployed in multi-agent configurations, are redefining intelligent autonomy, collaboration, and decision-making across enterprise and societal domains. This review presents a structured analysis of Trust, Risk, and Security Management (TRiSM) in the context of LLM-based agentic multi-agent systems (AMAS). We begin by examining the conceptual foundations of agentic AI, its architectural differences from traditional AI agents, and the emerging system designs that enable scalable, tool-using autonomy. The TRiSM framework for agentic AI is then detailed through four pillars (governance, explainability, ModelOps, and privacy/security), each contextualized for agentic LLMs. We identify unique threat vectors and introduce a comprehensive risk taxonomy for agentic AI applications, supported by case studies illustrating real-world vulnerabilities. The paper then surveys trust-building mechanisms, transparency and oversight techniques, and state-of-the-art explainability strategies in distributed LLM agent systems. Metrics for evaluating trust, interpretability, and human-centered performance are reviewed alongside open benchmarking challenges. Security and privacy are addressed through encryption, adversarial defense, and compliance with evolving AI regulations. The paper concludes with a roadmap for responsible agentic AI, proposing research directions to align emerging multi-agent systems with robust TRiSM principles for safe, accountable, and transparent deployment.
Problem

Research questions and friction points this paper is trying to address.

Analyzing TRiSM in LLM-based agentic multi-agent systems
Identifying unique threat vectors and risk taxonomy
Proposing trust-building and security mechanisms for agentic AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

TRiSM framework for LLM-based multi-agent systems
Four pillars: governance, explainability, ModelOps, privacy/security
Encryption and adversarial defense for security
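To make the pillar structure and risk taxonomy concrete, the sketch below models a minimal risk case repository in Python. This is an illustrative assumption, not the paper's actual data model: the `Pillar` enum, `RiskEntry` record, and example threat vectors are hypothetical names chosen to mirror the four pillars and threat taxonomy described above.

```python
from dataclasses import dataclass, field
from enum import Enum


class Pillar(Enum):
    """The four TRiSM pillars named in the paper."""
    GOVERNANCE = "governance"
    EXPLAINABILITY = "explainability"
    MODEL_OPS = "modelops"
    PRIVACY_SECURITY = "privacy_security"


@dataclass
class RiskEntry:
    """One entry in a hypothetical agentic-AI risk case repository."""
    threat: str
    pillar: Pillar
    severity: int                      # 1 (low) .. 5 (critical)
    mitigations: list[str] = field(default_factory=list)


# Two illustrative threat vectors of the kind the taxonomy covers.
repository = [
    RiskEntry("prompt injection via tool output", Pillar.PRIVACY_SECURITY, 5,
              ["input sanitization", "adversarial robustness training"]),
    RiskEntry("opaque inter-agent delegation", Pillar.EXPLAINABILITY, 3,
              ["decision-trace logging"]),
]


def by_pillar(entries: list[RiskEntry], pillar: Pillar) -> list[RiskEntry]:
    """Filter the repository by the TRiSM pillar an entry falls under."""
    return [e for e in entries if e.pillar is pillar]


critical = by_pillar(repository, Pillar.PRIVACY_SECURITY)
print(critical[0].threat)  # -> prompt injection via tool output
```

Organizing risk cases by pillar in this way is one plausible reading of how the paper's "reusable risk case repository" could be queried during a TRiSM lifecycle review.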