Do Not Treat Code as Natural Language: Implications for Repository-Level Code Generation and Beyond

📅 2026-02-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large code models exhibit limited performance in repository-level code generation because they neglect cross-file dependencies and structural context, while conventional NLP-based retrieval-augmented approaches struggle to model the inherent structure of code. To address this, the work proposes Hydra, a framework that treats code as structured entities and introduces a hierarchical code tree index, a dependency-aware retriever (DAR), and a hybrid retrieval mechanism that integrates functional dependencies with semantic similarity. Hydra departs from traditional NLP-style paradigms for code processing and achieves state-of-the-art results on the DevEval and RepoExec benchmarks, surpassing the strongest baseline by over 5% in Pass@1. Notably, smaller models equipped with Hydra match the performance of larger models that use conventional retrievers.
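The hierarchical code tree index described above can be illustrated with a small sketch. The paper does not publish this exact implementation; the snippet below is an assumption-laden approximation using Python's standard `ast` module, representing one file as a tree of its classes, functions, and module-level variables rather than as flat text chunks (`build_code_tree` and its output schema are hypothetical names chosen for illustration):

```python
import ast

def build_code_tree(source: str, path: str) -> dict:
    """Parse one file into a hierarchical index node of its
    classes (with methods), top-level functions, and globals --
    a toy stand-in for Hydra's structure-aware indexing."""
    tree = ast.parse(source)
    node = {"file": path, "classes": [], "functions": [], "variables": []}
    for item in tree.body:
        if isinstance(item, ast.ClassDef):
            node["classes"].append({
                "name": item.name,
                # keep methods nested under their class, preserving structure
                "methods": [m.name for m in item.body
                            if isinstance(m, (ast.FunctionDef, ast.AsyncFunctionDef))],
            })
        elif isinstance(item, (ast.FunctionDef, ast.AsyncFunctionDef)):
            node["functions"].append(item.name)
        elif isinstance(item, ast.Assign):
            # module-level variables (e.g. constants a target function may need)
            node["variables"].extend(t.id for t in item.targets
                                     if isinstance(t, ast.Name))
    return node

src = """
MAX_RETRIES = 3

def helper(x):
    return x + 1

class Client:
    def fetch(self):
        return helper(MAX_RETRIES)
"""
index = build_code_tree(src, "client.py")
```

Unlike fixed-size chunking, this keeps each code unit whole and records which class a method belongs to, so a retriever can return `Client.fetch` together with the `helper` function and `MAX_RETRIES` global it depends on.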

📝 Abstract
Large language models for code (CodeLLMs) have demonstrated remarkable success in standalone code completion and generation, sometimes even surpassing human performance, yet their effectiveness diminishes in repository-level settings where cross-file dependencies and structural context are essential. Existing Retrieval-Augmented Generation (RAG) approaches often borrow strategies from NLP, relying on chunking-based indexing and similarity-based retrieval. Chunking results in the loss of coherence between code units and overlooks structural relationships, while similarity-driven methods frequently miss functionally relevant dependencies such as helper functions, classes, or global variables. To address these limitations, we present Hydra, a repository-level code generation framework that treats code as structured data rather than natural language. Our approach introduces (i) a structure-aware indexing strategy that represents repositories as hierarchical trees of functions, classes, and variables, preserving code structure and dependencies, (ii) a lightweight dependency-aware retriever (DAR) that explicitly identifies and retrieves the true dependencies required by a target function, and (iii) a hybrid retrieval mechanism that combines DAR with similarity-based retrieval to provide both essential building blocks and practical usage examples. Extensive experiments on the challenging DevEval and RepoExec benchmarks, both requiring function implementation from real-world repositories with complex, large repository contexts, show that Hydra achieves state-of-the-art performance across open- and closed-source CodeLLMs. Notably, our method establishes a new state of the art in repository-level code generation, surpassing the strongest baseline by over 5% in Pass@1 and even enabling smaller models to match or exceed the performance of much larger ones that rely on existing retrievers.
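The hybrid retrieval mechanism in (iii) can be sketched in a few lines. This is not the paper's DAR: as labeled assumptions, dependency retrieval is approximated here by collecting the names a target function references via `ast`, and similarity retrieval by token overlap in place of a learned embedding model (`hybrid_retrieve`, `referenced_names`, and the toy corpus are all hypothetical):

```python
import ast

def referenced_names(func_src: str) -> set:
    """Names the target function actually uses -- a crude stand-in
    for dependency-aware retrieval of true building blocks."""
    tree = ast.parse(func_src)
    return {n.id for n in ast.walk(tree) if isinstance(n, ast.Name)}

def token_overlap(query: str, doc: str) -> float:
    """Jaccard overlap of whitespace tokens -- a stand-in for
    embedding-based semantic similarity."""
    q, d = set(query.split()), set(doc.split())
    return len(q & d) / max(len(q | d), 1)

def hybrid_retrieve(target_src: str, corpus: dict, k: int = 3) -> list:
    """Dependency hits first (essential building blocks), then
    similarity hits (practical usage examples), truncated to k."""
    deps = referenced_names(target_src)
    dep_hits = [name for name in corpus if name in deps]
    sim_hits = sorted((n for n in corpus if n not in dep_hits),
                      key=lambda n: token_overlap(target_src, corpus[n]),
                      reverse=True)
    return (dep_hits + sim_hits)[:k]

corpus = {
    "helper": "def helper(x):\n    return x + 1",
    "format_date": "def format_date(d):\n    return d.isoformat()",
    "retry": "def retry(f):\n    return f",
}
target = "def fetch(url):\n    return helper(retry(url))"
hits = hybrid_retrieve(target, corpus, k=2)  # dependencies outrank similar code
```

The design point the sketch captures is the ordering: functional dependencies are retrieved unconditionally and ranked ahead of merely similar snippets, so a helper the target calls is never crowded out of the context window by a look-alike function.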
Problem

Research questions and friction points this paper is trying to address:

- repository-level code generation
- code structure
- cross-file dependencies
- retrieval-augmented generation
- code coherence
Innovation

Methods, ideas, or system contributions that make the work stand out:

- repository-level code generation
- structure-aware indexing
- dependency-aware retrieval
- code as structured data
- hybrid retrieval