🤖 AI Summary
To address the degradation in code generation quality of large language models (LLMs) in dynamic codebases—caused by insufficient contextual accuracy and weak retrieval relevance—this paper proposes a repository-level retrieval-augmented generation (RAG) framework. Our method models the entire code repository as a knowledge graph that jointly encodes structured dependencies (e.g., call graphs, imports) and cross-file semantic relationships. We design a hybrid code retrieval mechanism and introduce a graph neural network (GNN)-enhanced RAG module to enable context-aware, robust retrieval and generation. Furthermore, we integrate fine-grained module-level dependency tracking to preserve repository-wide consistency during updates. Evaluated on the EvoCodeBench benchmark, our approach achieves significant improvements over state-of-the-art baselines: +12.7% absolute gain in functional correctness and marked gains in repository-level consistency, demonstrating superior adaptability to evolving codebases.
📝 Abstract
Recent advancements in Large Language Models (LLMs) have transformed code generation from natural language queries. However, despite their extensive knowledge and ability to produce high-quality code, LLMs often struggle with contextual accuracy, particularly in evolving codebases. Current code search and retrieval methods frequently lack robustness in both the quality and contextual relevance of retrieved results, leading to suboptimal code generation. This paper introduces a novel knowledge graph-based approach to improve code search and retrieval leading to better quality of code generation in the context of repository-level tasks. The proposed approach represents code repositories as graphs, capturing structural and relational information for enhanced context-aware code generation. Our framework employs a hybrid approach for code retrieval to improve contextual relevance, track inter-file modular dependencies, generate more robust code and ensure consistency with the existing codebase. We benchmark the proposed approach on the Evolutionary Code Benchmark (EvoCodeBench) dataset, a repository-level code generation benchmark, and demonstrate that our method significantly outperforms the baseline approach. These findings suggest that knowledge graph based code generation could advance robust, context-sensitive coding assistance tools.