🤖 AI Summary
This work addresses the limited cross-file contextual awareness of code large language models (CodeLLMs) in repository-level code generation. Methodologically, we introduce RepoExec, the first executable and functionally verified repository-level benchmark, and propose Dependency Invocation Rate (DIR), a metric quantifying how accurately generated code invokes cross-file dependencies. We further construct an instruction-tuning dataset that integrates test-driven validation with context-aware dependency modeling. Together these contributions form an evaluation framework spanning context awareness, execution-driven assessment, and cross-file dependency modeling. Experimental results show that instruction tuning markedly improves contextual utilization and debugging capability, whereas pre-trained models exhibit stronger functional correctness.
📝 Abstract
CodeLLMs have gained widespread adoption for code generation tasks, yet their capacity to handle repository-level code generation with complex contextual dependencies remains underexplored. Our work underscores the critical importance of leveraging repository-level contexts to generate executable and functionally correct code. We present RepoExec, a novel benchmark designed to evaluate repository-level code generation, with a focus on three key aspects: executability, functional correctness through comprehensive test case generation, and accurate utilization of cross-file contexts. Our study examines a controlled scenario where developers specify essential code dependencies (contexts), challenging models to integrate them effectively. Additionally, we introduce an instruction-tuned dataset that enhances CodeLLMs' ability to leverage dependencies, along with a new metric, Dependency Invocation Rate (DIR), to quantify context utilization. Experimental results reveal that while pretrained LLMs demonstrate superior performance in terms of correctness, instruction-tuned models excel in context utilization and debugging capabilities. RepoExec offers a comprehensive evaluation framework for assessing code functionality and alignment with developer intent, thereby advancing the development of more reliable CodeLLMs for real-world applications. The dataset and source code are available at https://github.com/FSoft-AI4Code/RepoExec.