From Laboratory to Real-World Applications: Benchmarking Agentic Code Reasoning at the Repository Level

📅 2026-01-07
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing evaluation methods struggle to assess the logical consistency and cross-file reasoning capabilities of large language models on real-world, repository-scale code. To address this gap, this work proposes RepoReason, a white-box diagnostic benchmark that enables fine-grained evaluation of a model's capability for abductive assertion verification through an execution-driven mutation framework combined with dynamic program slicing. The approach incorporates an environment-semantics-based oracle to mitigate memorization effects and introduces, for the first time, three orthogonal metrics: ESV (Environment Semantic Volume), MCL (Mocking Call Level), and DFI (Dependency Fusion Index), which respectively quantify reading load, simulation depth, and integration breadth. Empirical results reveal a pervasive aggregation deficit among state-of-the-art models, with integration breadth emerging as the primary cognitive bottleneck, offering a critical direction for optimizing next-generation intelligent software engineering tools.
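The three metrics can be pictured as simple aggregates over a dynamic-slice trace. The sketch below is purely illustrative (the paper's actual definitions are not given here); the `SliceEvent` schema and the exact counting rules are assumptions: ESV is approximated as distinct executed lines (reading load), MCL as the deepest call level reached (simulation depth), and DFI as the number of distinct files whose state must be integrated (integration breadth).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SliceEvent:
    """One record from a dynamic program slice (hypothetical schema)."""
    file: str        # file the executed line lives in
    line: int        # source line number
    call_depth: int  # call-stack depth when the line executed

def slice_metrics(events: list[SliceEvent]) -> dict[str, int]:
    """Toy proxies for the three metrics over a slice trace."""
    return {
        "ESV": len({(e.file, e.line) for e in events}),      # distinct lines read
        "MCL": max((e.call_depth for e in events), default=0),  # deepest simulated call
        "DFI": len({e.file for e in events}),                # files to integrate
    }

trace = [
    SliceEvent("utils.py", 10, 1),
    SliceEvent("utils.py", 11, 1),
    SliceEvent("core.py", 3, 2),
    SliceEvent("core.py", 4, 3),
]
print(slice_metrics(trace))  # {'ESV': 4, 'MCL': 3, 'DFI': 2}
```

On this reading, the reported "aggregation deficit" would show up as accuracy degrading with DFI faster than with ESV or MCL.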

📝 Abstract
As large language models (LLMs) evolve into autonomous agents, evaluating repository-level reasoning (the ability to maintain logical consistency across massive, interdependent, real-world file systems) has become critical. Current benchmarks typically oscillate between isolated code snippets and black-box evaluations. We present RepoReason, a white-box diagnostic benchmark centered on abductive assertion verification. To eliminate memorization while preserving authentic logical depth, we implement an execution-driven mutation framework that utilizes the environment as a semantic oracle to regenerate ground-truth states. Furthermore, we establish a fine-grained diagnostic system using dynamic program slicing, quantifying reasoning via three orthogonal metrics: $ESV$ (reading load), $MCL$ (simulation depth), and $DFI$ (integration width). Comprehensive evaluations of frontier models (e.g., Claude-4.5-Sonnet, DeepSeek-v3.1-Terminus) reveal a prevalent aggregation deficit, where integration width serves as the primary cognitive bottleneck. Our findings provide granular white-box insights for optimizing the next generation of agentic software engineering.
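The anti-memorization idea in the abstract, mutate the code, then let execution (not the original source) define the new ground truth, can be sketched minimally. Everything here is a hypothetical simplification of the paper's framework: the mutation operator (shifting integer literals) and the probe mechanism are assumptions for illustration only.

```python
import ast
import copy

def mutate_constants(tree: ast.Module, delta: int = 7) -> ast.Module:
    """Return a copy of the AST with every integer literal shifted by `delta`,
    yielding a semantically changed variant a model cannot have memorized."""
    new_tree = copy.deepcopy(tree)
    for node in ast.walk(new_tree):
        if isinstance(node, ast.Constant) and isinstance(node.value, int):
            node.value += delta
    return new_tree

def oracle_ground_truth(tree: ast.Module, probe: str):
    """Execute the mutant and read the probe expression from the final state:
    the runtime environment, not the original source, is the semantic oracle."""
    namespace: dict = {}
    exec(compile(ast.fix_missing_locations(tree), "<mutant>", "exec"), namespace)
    return eval(probe, namespace)

src = "x = 3\ny = x * 2\n"
mutant = mutate_constants(ast.parse(src), delta=7)
# Both literals shift: x = 10, y = 10 * 9
truth = oracle_ground_truth(mutant, "y")  # 90
```

A model asserting the memorized value `y == 6` would then be marked wrong against the regenerated ground truth of 90.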
Problem

Research questions and friction points this paper is trying to address.

repository-level reasoning
agentic code reasoning
benchmarking
logical consistency
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

repository-level reasoning
abductive assertion verification
execution-driven mutation
dynamic program slicing
agentic code reasoning