ReCUBE: Evaluating Repository-Level Context Utilization in Code Generation

📅 2026-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing benchmarks struggle to evaluate how effectively large language models leverage repository-level context—including source code, dependencies, and documentation—for code generation. To address this gap, this work introduces ReCUBE, a new benchmark that assesses cross-file code generation through a masked file reconstruction task. It also presents Caller-Centric Exploration (CCE), the first toolkit of its kind, which guides agents to focus on critical caller files. ReCUBE integrates dependency graph–based context modeling with a usage-aware automated testing framework for evaluation. Experiments across eight leading models reveal that even GPT-5 achieves only a 37.57% strict pass rate under full-context conditions, while integrating CCE consistently improves performance across all models, yielding a maximum absolute gain of 7.56% in strict pass rate. These results underscore that effective utilization of repository-level context remains a significant challenge and highlight the necessity and efficacy of the proposed approach.
📝 Abstract
Large Language Models (LLMs) have recently emerged as capable coding assistants that operate over large codebases through either agentic exploration or full-context generation. Existing benchmarks capture a broad range of coding capabilities, such as resolving GitHub issues, but none of them directly isolate and measure how effectively LLMs leverage repository-level context during code generation. To address this, we introduce ReCUBE, a benchmark in which LLMs reconstruct a masked file within a real-world repository, using all remaining source files, dependency specifications, and documentation as their only source of context. ReCUBE evaluates reconstructed code with usage-aware test cases that simulate both internal module logic and external cross-file integration, reflecting real-world software usage patterns. We further propose the Caller-Centric Exploration (CCE) toolkit, a set of dependency graph-based tools that can be integrated into agentic frameworks to guide agents toward the most relevant caller files during repository exploration. Experiments across eight models in four settings show that repository-level context utilization remains highly challenging even for state-of-the-art models, with GPT-5 achieving only a 37.57% strict pass rate in the full-context setting. Agents augmented with our CCE toolkit consistently outperform all baselines across all evaluated models, with improvements of up to 7.56% in strict pass rate. We release our benchmark, code, and evaluation framework as open source for the NLP research community.
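The core idea behind caller-centric exploration, as described in the abstract, is to use a repository's dependency graph to surface the files that *call into* a masked module, since those usage sites constrain what the reconstructed file must provide. The sketch below is a minimal, hypothetical illustration of that idea for Python repositories (it parses import statements only, not attribute-level call sites); it is not the paper's actual CCE implementation, and all function names are our own.

```python
import ast
from pathlib import Path

def build_import_graph(repo_root):
    """Map each .py file in the repo to the set of top-level
    module names it imports. A crude stand-in for a real
    dependency graph (ignores dynamic imports, packages, etc.)."""
    graph = {}
    for path in Path(repo_root).rglob("*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"))
        imports = set()
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                imports.update(a.name.split(".")[0] for a in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                imports.add(node.module.split(".")[0])
        graph[path] = imports
    return graph

def callers_of(graph, masked_module):
    """Files that import the masked module -- the 'caller files'
    a CCE-style agent would read first when reconstructing it."""
    return [p for p, deps in graph.items() if masked_module in deps]
```

An agent could rank these caller files (e.g., by number of references to the masked module) and feed the top few into its context before attempting reconstruction, rather than exploring the repository breadth-first.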
Problem

Research questions and friction points this paper is trying to address.

repository-level context
code generation
large language models
benchmark
context utilization
Innovation

Methods, ideas, or system contributions that make the work stand out.

repository-level context
code generation benchmark
Caller-Centric Exploration
dependency graph
large language models
Jiseung Hong
Language Technologies Institute, Carnegie Mellon University
Benjamin G. Ascoli
Computer Science, Emory University
Jinho D. Choi
Associate Professor, Emory University
Natural Language Processing · Computational Linguistics · Conversational AI