TRAJEVAL: Decomposing Code Agent Trajectories for Fine-Grained Diagnosis

📅 2026-03-25
🤖 AI Summary
Current evaluations of code-generating agents rely solely on overall pass rates, which obscure the root causes of failures. This work proposes TRAJEVAL, a novel framework that decomposes agent execution trajectories into three interpretable stages—search, read, and edit—and aligns them with reference patches to compute stage-wise precision and recall for fine-grained diagnostic insights. Analysis of 16,758 trajectories reveals both general inefficiencies and model-specific failure patterns. The proposed metrics effectively predict Pass@1 performance with a mean absolute error of 0.87–2.1%. Furthermore, integrating TRAJEVAL with a real-time feedback mechanism improves state-of-the-art model performance by 2.2–4.6 percentage points while reducing inference costs by 20–31%.

📝 Abstract
Code agents can autonomously resolve GitHub issues, yet when they fail, current evaluation provides no visibility into where or why. Metrics such as Pass@1 collapse an entire execution into a single binary outcome, making it difficult to identify where and why the agent went wrong. To address this limitation, we introduce TRAJEVAL, a diagnostic framework that decomposes agent trajectories into three interpretable stages: search (file localization), read (function comprehension), and edit (modification targeting). For each stage, we compute precision and recall by comparing against reference patches. Analyzing 16,758 trajectories across three agent architectures and seven models, we find universal inefficiencies (all agents examine approximately 22x more functions than necessary) yet distinct failure modes: GPT-5 locates relevant code but targets edits incorrectly, while Qwen-32B fails at file discovery entirely. We validate that these diagnostics are predictive, achieving model-level Pass@1 prediction within 0.87-2.1% MAE, and actionable: real-time feedback based on trajectory signals improves two state-of-the-art models by 2.2-4.6 percentage points while reducing costs by 20-31%. These results demonstrate that our framework not only provides a more fine-grained analysis of agent behavior, but also translates diagnostic signals into tangible performance gains. More broadly, TRAJEVAL moves agent evaluation beyond outcome-based benchmarking toward mechanism-driven diagnosis of agent success and failure.
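The stage-wise precision and recall described in the abstract can be sketched as a set comparison between the items an agent touched in a stage and the items implicated by the reference patch. This is a minimal illustration, not the paper's implementation: the function name, data shapes, and example file lists are all assumptions for the sketch.

```python
def stage_precision_recall(touched, reference):
    """Precision/recall for one trajectory stage (search, read, or edit).

    `touched`: items the agent interacted with in that stage, e.g. files
    opened during search, functions inspected during read, or spans
    modified during edit.
    `reference`: the corresponding items implicated by the reference patch.
    """
    touched, reference = set(touched), set(reference)
    tp = len(touched & reference)  # items the agent got right
    precision = tp / len(touched) if touched else 0.0
    recall = tp / len(reference) if reference else 0.0
    return precision, recall


# Hypothetical search stage (file localization) for one trajectory:
searched_files = ["src/core.py", "src/utils.py", "tests/test_core.py"]
reference_files = ["src/core.py"]  # files changed by the reference patch

p, r = stage_precision_recall(searched_files, reference_files)
# Low precision with high recall would match the paper's finding that
# agents examine far more code than necessary while still finding the target.
```

Under this framing, the reported ~22x over-examination of functions corresponds to a read-stage precision on the order of 1/22, even when recall is near 1.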
Problem

Research questions and friction points this paper is trying to address.

code agent evaluation
trajectory decomposition
fine-grained diagnosis
failure analysis
execution trace
Innovation

Methods, ideas, or system contributions that make the work stand out.

trajectory decomposition
fine-grained diagnosis
code agent evaluation
mechanism-driven analysis
real-time feedback