ContextBench: A Benchmark for Context Retrieval in Coding Agents

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitation of existing code agent evaluations, which predominantly focus on final task success rates while neglecting how agents retrieve and utilize code context during problem solving. To bridge this gap, we propose ContextBench, the first process-oriented benchmark for contextual retrieval in code repair tasks, comprising 1,136 human-annotated problems with gold-standard contexts across 66 repositories and 8 programming languages. We further develop an automated framework to trace agent execution trajectories, enabling fine-grained assessment of context recall, precision, and efficiency. Our experiments reveal that complex agent architectures yield limited gains in retrieval performance, large language models generally favor high-recall strategies, and a significant discrepancy exists between explored and actually utilized contexts. This study provides a granular evaluation methodology and intermediate supervision signals for advancing code-aware agents.

📝 Abstract
LLM-based coding agents have shown strong performance on automated issue resolution benchmarks, yet existing evaluations largely focus on final task success, providing limited insight into how agents retrieve and use code context during problem solving. We introduce ContextBench, a process-oriented evaluation of context retrieval in coding agents. ContextBench consists of 1,136 issue-resolution tasks from 66 repositories across eight programming languages, each augmented with human-annotated gold contexts. We further implement an automated evaluation framework that tracks agent trajectories and measures context recall, precision, and efficiency throughout issue resolution. Using ContextBench, we evaluate four frontier LLMs and five coding agents. Our results show that sophisticated agent scaffolding yields only marginal gains in context retrieval ("The Bitter Lesson" of coding agents), LLMs consistently favor recall over precision, and substantial gaps exist between explored and utilized context. ContextBench augments existing end-to-end benchmarks with intermediate gold-context metrics that unbox the issue-resolution process. These contexts offer valuable intermediate signals for guiding LLM reasoning in software tasks.
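The recall/precision framing above can be sketched in a few lines. This is an illustrative assumption of how such metrics might be computed over sets of context elements (e.g. file-span identifiers), not the paper's actual implementation; the function names and the efficiency definition are hypothetical.

```python
# Hypothetical sketch of ContextBench-style context metrics.
# Context elements are represented as strings like "path/to/file.py:42-60";
# this identifier format is an assumption for illustration.

def context_recall(gold: set[str], retrieved: set[str]) -> float:
    """Fraction of gold-context elements the agent retrieved."""
    return len(gold & retrieved) / len(gold) if gold else 1.0

def context_precision(gold: set[str], retrieved: set[str]) -> float:
    """Fraction of retrieved context elements that are gold."""
    return len(gold & retrieved) / len(retrieved) if retrieved else 1.0

def context_efficiency(gold: set[str], trajectory: list[str]) -> float:
    """Gold context recovered per retrieval step in the trajectory
    (one plausible efficiency notion; the paper may define it differently)."""
    return context_recall(gold, set(trajectory)) / len(trajectory) if trajectory else 0.0

gold = {"src/parser.py:42-60", "src/lexer.py:10-18"}
steps = ["src/parser.py:42-60", "README.md:1-5", "src/lexer.py:10-18"]
print(context_recall(gold, set(steps)))     # 1.0: all gold context was found
print(context_precision(gold, set(steps)))  # 2/3: one retrieved span was irrelevant
```

Under this toy definition, an agent that reads everything scores perfect recall but poor precision, which mirrors the paper's observation that LLMs consistently favor recall over precision.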
Problem

Research questions and friction points this paper is trying to address.

context retrieval
coding agents
LLM evaluation
issue resolution
code context
Innovation

Methods, ideas, or system contributions that make the work stand out.

context retrieval
coding agents
evaluation benchmark
gold context
LLM reasoning