AgentRx: Diagnosing AI Agent Failures from Execution Trajectories

📅 2026-02-02
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
AI agents often fail on complex tasks due to stochastic behavior, long-horizon dependencies, multi-agent interactions, and tool-induced noise, making root causes difficult to pinpoint. To address this, the work introduces the first cross-domain benchmark for AI agent failures, comprising 115 expert-annotated trajectories, and proposes AgentRx, a novel diagnostic framework. AgentRx establishes a domain-agnostic failure taxonomy derived via grounded theory, synthesizes constraint-guided violation logs with step-by-step verification, and employs a large language model discriminator to automatically identify critical error steps and their failure categories. Experiments across three distinct task domains show that AgentRx significantly outperforms existing methods at both localizing critical steps and attributing failure causes, yielding the first auditable, automated, cross-domain diagnostic system for AI agent failures.

πŸ“ Abstract
AI agents often fail in ways that are difficult to localize because executions are probabilistic, long-horizon, multi-agent, and mediated by noisy tool outputs. We address this gap by manually annotating failed agent runs and releasing a novel benchmark of 115 failed trajectories spanning structured API workflows, incident management, and open-ended web/file tasks. Each trajectory is annotated with a critical failure step and a category from a grounded-theory-derived, cross-domain failure taxonomy. To mitigate the human cost of failure attribution, we present AGENTRX, an automated, domain-agnostic diagnostic framework that pinpoints the critical failure step in a failed agent trajectory. It synthesizes constraints, evaluates them step by step, and produces an auditable validation log of constraint violations with associated evidence; an LLM-based judge uses this log to localize the critical step and its category. Our framework improves step localization and failure attribution over existing baselines across three domains.
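The pipeline the abstract describes, evaluating synthesized constraints at each trajectory step, collecting violations with evidence into an auditable log, and having a judge localize the critical step, can be sketched roughly as follows. This is an illustrative assumption of the data flow, not the paper's actual implementation; all names (`Step`, `Violation`, `build_validation_log`, `judge_critical_step`) are hypothetical, and the LLM-based judge is stubbed with a simple earliest-violation heuristic.

```python
# Hypothetical sketch of an AgentRx-style diagnosis loop.
# Structures and the earliest-violation "judge" are illustrative stand-ins.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Step:
    index: int
    action: str
    observation: str

@dataclass
class Violation:
    step: int
    constraint: str
    evidence: str

def build_validation_log(trajectory: List[Step],
                         constraints: List[Callable[[Step], Optional[str]]]
                         ) -> List[Violation]:
    """Evaluate every synthesized constraint at every step; record each
    violation with its evidence, producing an auditable validation log."""
    log: List[Violation] = []
    for step in trajectory:
        for check in constraints:
            evidence = check(step)  # None means the constraint holds
            if evidence is not None:
                log.append(Violation(step.index, check.__name__, evidence))
    return log

def judge_critical_step(log: List[Violation]) -> Optional[Violation]:
    """Stand-in for the LLM-based judge: pick the earliest violation
    as the critical failure step."""
    return min(log, key=lambda v: v.step) if log else None

# Toy trajectory: a tool call succeeds, then parsing and a retry both fail.
def no_error_tokens(step: Step) -> Optional[str]:
    """Example constraint: the tool observation must not report an error."""
    return step.observation if "Error" in step.observation else None

trajectory = [
    Step(0, "call_api", "ok"),
    Step(1, "parse_response", "Error: missing field"),
    Step(2, "retry_call", "Error: timeout"),
]
log = build_validation_log(trajectory, [no_error_tokens])
crit = judge_critical_step(log)
```

In this toy run the log records violations at steps 1 and 2, and the heuristic judge localizes step 1 as critical; the paper's framework instead hands the full evidence log to an LLM judge, which also assigns a failure category from the taxonomy.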
Problem

Research questions and friction points this paper is trying to address.

AI agent failures
failure localization
execution trajectories
failure attribution
noisy tool outputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

failure diagnosis
agent execution trajectory
constraint synthesis
auditable validation
LLM-based judgment