🤖 AI Summary
To address runtime safety failures in foundation model (FM)-based agents, which arise from the autonomy and non-determinism of multi-step decision-making, goal planning, and tool invocation, this paper proposes a dynamic, hierarchical, cross-phase safety assurance framework. Methodologically, it adapts the Swiss Cheese Model to establish an AI safety reference architecture, introduces a taxonomy of runtime guardrails spanning quality attributes, pipeline stages, and architectural components, and formalizes an AI-safety-by-design software architecture paradigm. Grounded in a systematic literature review, architectural modeling, and multi-layered defense design, the framework enables real-time monitoring of, and intervention over, critical agent artifacts, including goals, plans, and tools. The contributions, a reusable classification system and structured design guidance, collectively support the development of robust, verifiable, and evolvable safety mechanisms for FM-based agents.
📝 Abstract
Foundation Model (FM)-based agents are transforming application development across various domains. However, their rapidly growing capabilities and autonomy have raised significant concerns about AI safety. Researchers are therefore exploring how to design guardrails that keep the runtime behavior of FM-based agents within specified boundaries. Designing effective runtime guardrails is challenging because agents behave autonomously and non-deterministically, and because multiple pipeline stages and agent artifacts, such as goals, plans, and tools, are involved at runtime. Addressing these challenges requires multi-layered guardrails that operate effectively at different levels of the agent architecture. In this paper, based on the results of a systematic literature review, we present a comprehensive taxonomy of runtime guardrails for FM-based agents that identifies key quality attributes and design dimensions for guardrails. Inspired by the Swiss Cheese Model, we also propose a reference architecture for designing multi-layered runtime guardrails for FM-based agents along three dimensions: quality attributes, pipelines, and artifacts. Together, the taxonomy and reference architecture provide concrete guidance for researchers and practitioners to build AI safety by design from a software architecture perspective.
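To make the multi-layered, Swiss-Cheese-style idea concrete, here is a minimal illustrative sketch (not the paper's actual framework) of runtime guardrail layers that each inspect one agent artifact, namely the goal, the plan, and the tool call. All names (`AgentAction`, `run_guardrails`, the example policies) are hypothetical; as in the Swiss Cheese Model, each layer is imperfect on its own, but the layers overlap so that a failure slipping through one "hole" can be caught by another.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

# Hypothetical representation of one step an FM-based agent wants to take.
@dataclass
class AgentAction:
    goal: str
    plan: List[str]
    tool: str
    tool_args: Dict[str, str] = field(default_factory=dict)

# A guardrail layer inspects the action and returns a list of violations
# (empty list means this layer found nothing wrong).
GuardrailLayer = Callable[[AgentAction], List[str]]

def goal_guardrail(action: AgentAction) -> List[str]:
    # Example goal-level policy: block obviously destructive intents.
    banned_phrases = ("exfiltrate", "delete all")
    return [f"goal violates policy: {p}"
            for p in banned_phrases if p in action.goal.lower()]

def plan_guardrail(action: AgentAction) -> List[str]:
    # Example plan-level policy: bound the number of autonomous steps.
    return ["plan exceeds step budget"] if len(action.plan) > 10 else []

def tool_guardrail(action: AgentAction) -> List[str]:
    # Example tool-level policy: only allowlisted tools may be invoked.
    allowlist = {"search", "calculator"}
    return [] if action.tool in allowlist else [f"tool not allowlisted: {action.tool}"]

def run_guardrails(action: AgentAction,
                   layers: List[GuardrailLayer]) -> Tuple[bool, List[str]]:
    # Run every layer and aggregate violations; the action proceeds
    # only if all layers pass (defense in depth).
    violations: List[str] = []
    for layer in layers:
        violations.extend(layer(action))
    return (len(violations) == 0, violations)
```

In a real deployment, each layer would sit at a different point of the agent pipeline and could trigger interventions (blocking, rewriting, or escalating to a human) rather than merely reporting violations; the sketch only shows how independent, artifact-specific checks compose into a multi-layered guardrail.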