🤖 AI Summary
This work addresses the challenge of engineering non-functional requirements (NFRs) such as security, observability, and cost management in agentic AI systems, where they typically manifest as crosscutting concerns that are poorly modularized in practice. The authors propose a systematic approach grounded in i* goal modeling to identify NFR softgoals and map them to reusable, aspect-oriented implementations in Rust. They introduce an NFR pattern language comprising 12 patterns across four categories, including agent-specific aspects such as tool sandboxing, prompt injection detection, token budgeting, and action auditing. They also extend the V-graph model to unify the modeling of functional goals and non-functional softgoals. Validation on an open-source agent framework demonstrates that the method systematically identifies and modularizes crosscutting concerns, improving system reliability and maintainability.
📝 Abstract
Agentic AI systems exhibit numerous crosscutting concerns (security, observability, cost management, fault tolerance) that are poorly modularized in current implementations, contributing to the high failure rate of AI projects in reaching production. The goals-to-aspects methodology proposed at RE 2004 demonstrated that aspects can be systematically discovered from i* goal models by identifying non-functional softgoals that crosscut functional goals. This paper revisits and extends that methodology for the agentic AI domain. We present a pattern language of 12 reusable patterns organized across four NFR categories (security, reliability, observability, cost management), each mapping an i* goal model to a concrete aspect implementation using an AOP framework for Rust. Four patterns address agent-specific crosscutting concerns absent from the traditional AOP literature: tool-scope sandboxing, prompt injection detection, token budget management, and action audit trails. We extend the V-graph model to capture how agent tasks simultaneously contribute to functional goals and non-functional softgoals. We validate the pattern language through a case study of an open-source autonomous agent framework, demonstrating how goal-driven aspect discovery systematically identifies and modularizes crosscutting concerns. The pattern language offers a principled approach to engineering reliable agentic AI systems through early identification of crosscutting concerns.
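To make the idea of an agent-specific aspect concrete, here is a minimal sketch in Rust of one of the four concerns the abstract names, token budget management, modularized as "around" advice that wraps agent actions. The names (`TokenBudget`, `around`) and the closure-based weaving are illustrative assumptions, not the paper's actual framework API:

```rust
// Hypothetical sketch of a token-budget aspect for an agentic system.
// The cost-management concern lives entirely in this module instead of
// being scattered across every model-calling site.
pub struct TokenBudget {
    limit: u64,
    used: u64,
}

impl TokenBudget {
    pub fn new(limit: u64) -> Self {
        Self { limit, used: 0 }
    }

    /// "Around" advice: charge `cost` tokens before running the wrapped
    /// action, refusing the call if it would exceed the budget.
    pub fn around<F, T>(&mut self, cost: u64, action: F) -> Result<T, String>
    where
        F: FnOnce() -> T,
    {
        if self.used + cost > self.limit {
            return Err(format!(
                "token budget exceeded: {} used, {} requested, limit {}",
                self.used, cost, self.limit
            ));
        }
        self.used += cost;
        Ok(action())
    }
}

fn main() {
    let mut budget = TokenBudget::new(100);
    // First call fits within the 100-token budget.
    assert!(budget.around(60, || "summarize document A").is_ok());
    // Second call would overrun the budget and is refused by the aspect,
    // leaving the base agent logic untouched.
    assert!(budget.around(60, || "summarize document B").is_err());
    println!("used {} of {} tokens", budget.used, budget.limit);
}
```

The same wrapping shape generalizes to the other agent-specific patterns the abstract lists (e.g., a sandbox check or audit log entry before and after each tool invocation), which is what makes these concerns candidates for aspect-oriented modularization.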