Agent psychometrics: Task-level performance prediction in agentic coding benchmarks

📅 2026-04-01
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Current evaluations of coding agents rely solely on aggregate pass rates, which obscure performance differences at the task level. This work proposes the first performance prediction framework tailored for multi-step coding tasks, innovatively extending Item Response Theory (IRT) to the agent setting by decomposing agent proficiency into two interpretable dimensions: large language model (LLM) capability and scaffolding effectiveness. By integrating task metadata with code semantic features, the framework enables generalizable success probability estimation across heterogeneous benchmark suites, accurately predicting outcomes for both unseen tasks and novel LLM–scaffolding combinations. This approach provides benchmark designers with an efficient, low-cost tool for difficulty calibration.
๐Ÿ“ Abstract
As the focus in LLM-based coding shifts from static single-step code generation to multi-step agentic interaction with tools and environments, understanding which tasks will challenge agents and why becomes increasingly difficult. This is compounded by current practice: agent performance is typically measured by aggregate pass rates on benchmarks, but single-number metrics obscure the diversity of tasks within a benchmark. We present a framework for predicting success or failure on individual tasks tailored to the agentic coding regime. Our approach augments Item Response Theory (IRT) with rich features extracted from tasks, including issue statements, repository contexts, solutions, and test cases, and introduces a novel decomposition of agent ability into LLM and scaffold ability components. This parameterization enables us to aggregate evaluation data across heterogeneous leaderboards and accurately predict task-level performance for unseen benchmarks, as well as unseen LLM-scaffold combinations. Our methods have practical utility for benchmark designers, who can better calibrate the difficulty of their new tasks without running computationally expensive agent evaluations.
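The abstract's core modeling idea, augmenting IRT with a decomposition of agent ability into LLM and scaffold components, can be sketched in a few lines. The parameterization below (additive abilities inside a standard two-parameter logistic item model) is an illustrative assumption, not the authors' exact formulation; all parameter names and values are hypothetical.

```python
import math

def predict_success(llm_ability: float,
                    scaffold_ability: float,
                    task_difficulty: float,
                    discrimination: float = 1.0) -> float:
    """Illustrative IRT-style success probability for an agentic coding task.

    Agent ability is decomposed into an LLM component plus a scaffold
    component; the task contributes a difficulty and a discrimination
    parameter, as in a two-parameter logistic (2PL) item model.
    """
    agent_ability = llm_ability + scaffold_ability
    # 2PL logistic: P(pass) = sigmoid(discrimination * (ability - difficulty))
    return 1.0 / (1.0 + math.exp(-discrimination * (agent_ability - task_difficulty)))

# A stronger LLM or a better scaffold raises the predicted pass probability
# on the same task; hypothetical ability/difficulty values for illustration.
p_weak = predict_success(llm_ability=0.2, scaffold_ability=0.1, task_difficulty=1.0)
p_strong = predict_success(llm_ability=1.5, scaffold_ability=0.5, task_difficulty=1.0)
print(round(p_weak, 3), round(p_strong, 3))  # → 0.332 0.731
```

Because LLM and scaffold abilities enter separately, fitted parameters from heterogeneous leaderboards can in principle be recombined to predict unseen LLM–scaffold pairings, which is the generalization the paper targets.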
Problem

Research questions and friction points this paper is trying to address.

agent psychometrics
task-level performance prediction
agentic coding benchmarks
Item Response Theory
LLM-scaffold combinations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agent psychometrics
Item Response Theory
agentic coding
task-level performance prediction
LLM-scaffold decomposition