🤖 AI Summary
Existing code completion methods predominantly rely on shallow semantic matching, neglecting structural semantics and cross-granularity dependencies—leading to suboptimal logical consistency and functional accuracy. To address this, we propose CoCo, a framework designed for large-scale codebases that leverages static analysis to extract structural context at function-, file-, and project-level granularities. CoCo introduces a graph neural network–driven module for context filtering and structure-aware re-ranking, explicitly modeling control-flow relationships and deep semantic dependencies. Furthermore, it uniformly converts structured contexts into natural language prompts and integrates them into a retrieval-augmented generation pipeline. Model-agnostic and modularly pluggable, CoCo achieves up to a 20.2% improvement in Exact Match (EM) on CrossCodeEval and RepoEval, significantly enhancing both generation quality and generalization across diverse repositories.
📝 Abstract
As code completion tasks evolve from the function level to the repository level, leveraging contextual information from large-scale codebases becomes a core challenge. However, existing retrieval-augmented generation (RAG) methods typically treat code as plain natural language, relying primarily on shallow semantic matching while overlooking structural semantics and code-specific dependencies. This limits their ability to capture control flow and underlying intent, ultimately constraining the quality of generated code. Therefore, we propose CoCo, a novel framework that enables code Completion by Comprehension of multi-granularity context from large-scale code repositories. CoCo employs static code analysis to extract structured context at the function, file, and project levels, capturing execution logic and semantic dependencies. It then adopts a graph-based multi-granularity context selection mechanism to filter out redundant information and remove noise. The selected information is subsequently converted into natural language in a consistent manner, serving as explicit contextual prompts to guide code completion. Additionally, a structure-aware code re-ranking mechanism ensures alignment at both the semantic and structural levels. Extensive experiments on the CrossCodeEval and RepoEval benchmarks demonstrate that CoCo consistently surpasses state-of-the-art baselines, achieving up to 20.2% gains in EM. Moreover, the framework is model-agnostic and can be seamlessly integrated into existing methods, leading to significant performance gains.
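To make the pipeline concrete, here is a minimal, hypothetical sketch of the first and third CoCo steps: extracting structured context from source code and verbalizing it into a natural-language prompt. The paper's actual system uses full static analysis across three granularities plus a graph-based selector and re-ranker; this toy version only approximates the function- and file-level granularities with Python's `ast` module, and all names (`extract_context`, `build_prompt`, `SOURCE`) are illustrative, not from the paper.

```python
import ast
import textwrap

# Toy repository file standing in for a large-scale codebase.
SOURCE = textwrap.dedent("""
    import math

    def area(r):
        return math.pi * r ** 2

    def circumference(r):
        return 2 * math.pi * r
""")

def extract_context(source):
    """Collect file-level imports and function-level names via static analysis.

    A stand-in for CoCo's multi-granularity extraction; project-level
    context (cross-file dependencies) is omitted in this sketch.
    """
    tree = ast.parse(source)
    imports = [ast.unparse(n) for n in tree.body
               if isinstance(n, (ast.Import, ast.ImportFrom))]
    functions = [n.name for n in ast.walk(tree)
                 if isinstance(n, ast.FunctionDef)]
    return {"file_imports": imports, "function_names": functions}

def build_prompt(context, target_stub):
    """Verbalize the structured context into an explicit contextual prompt."""
    lines = ["# Project context (auto-extracted):"]
    lines += [f"# available import: {imp}" for imp in context["file_imports"]]
    lines += [f"# defined function: {name}" for name in context["function_names"]]
    lines.append(target_stub)
    return "\n".join(lines)

ctx = extract_context(SOURCE)
prompt = build_prompt(ctx, "def diameter(r):")
print(prompt)
```

In the full framework, the extracted context would first pass through the graph-based selection module to drop redundant entries before prompt construction, and candidate completions would be re-ranked with structural signals.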