🤖 AI Summary
This work addresses the challenge that large language models often hallucinate when generating code for unseen codebases, primarily due to the inability of existing data synthesis methods to adequately capture inter-component relationships and usage contexts. To tackle this, the authors propose UCD-Training, a novel framework that introduces code graphs into data synthesis for previously unseen repositories. The approach first performs dependency-preserving continued pretraining based on file dependencies, followed by graph-guided supervised fine-tuning using three types of synthetic data—single-hop relations, composite APIs, and library usage scenarios—each augmented with explicit reasoning traces. Evaluated on the newly constructed UnseenCodeBench benchmark, the method significantly improves code generation performance and effectively reduces hallucinations, demonstrating its efficacy in understanding and reasoning about the structure and usage patterns of unfamiliar codebases.
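To make the "single-hop relations" idea concrete: such relations could be mined directly from parsed source code, e.g. which function calls which. The sketch below is purely illustrative (it is not the authors' pipeline, and the question/answer phrasing is hypothetical); it uses Python's `ast` module to extract caller→callee edges that could seed single-hop training pairs.

```python
import ast

def call_edges(code):
    """Extract (caller, callee) pairs: single-hop call relations."""
    tree = ast.parse(code)
    edges = []
    # find every top-level or nested function definition
    for fn in [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]:
        # collect direct calls to named functions inside its body
        for node in ast.walk(fn):
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                edges.append((fn.name, node.func.id))
    return edges

code = """
def helper(x):
    return x + 1

def main(x):
    return helper(x) * 2
"""
edges = call_edges(code)
# each edge can be turned into a synthetic QA pair, e.g.
# "Which function does `main` call?" -> "helper"
```

A real code graph would also carry inheritance, attribute access, and cross-file edges, but even this one relation type yields verifiable question/answer pairs grounded in the repository.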
📝 Abstract
In the context of newly released software frameworks, large language models (LLMs) often perform poorly and hallucinate at a high rate, as they are not exposed to such environments during training. Inference-time augmentation techniques such as retrieval-augmented generation (RAG) can partially mitigate hallucinations, but knowledge injection through prompting alone is insufficient for models to fully understand the intrinsic relationships among the components of a codebase, or to reason about correct compositions and apply them. Explicit knowledge injection can instead be achieved through post-training; however, unlike public code domains, unseen codebases typically provide only source code and lack large volumes of high-quality, usage-oriented code that can be directly leveraged as training data. Consequently, when restricted to source code alone, existing data synthesis approaches cannot adequately capture the usage scenarios of unseen codebases. To address these challenges, we propose UCD-Training, a two-stage training framework for reasoning-aware data synthesis grounded in a code graph constructed from unseen codebases. UCD-Training first parses the source code to build a code graph, then conducts dependency-preserving continued pretraining (CPT) on file-level dependency data, followed by graph-grounded supervised fine-tuning (SFT) on three types of synthesized data augmented with explicit reasoning traces: (1) single-hop relation reasoning data, (2) compositional API reasoning data, and (3) codebase utilization data. We further introduce UnseenCodeBench, a new benchmark for code generation on unseen codebases, and conduct comprehensive experiments across multiple codebases.
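One way to picture the file-level dependency data behind dependency-preserving CPT: derive an import graph from the sources and order files so that dependencies precede dependents. The following is a minimal sketch under stated assumptions (Python sources keyed by module name, intra-repository `import` edges only), not the paper's implementation.

```python
import ast
from graphlib import TopologicalSorter

def file_dependencies(sources):
    """Map each module to the set of sibling modules it imports."""
    deps = {}
    for name, code in sources.items():
        imported = set()
        for node in ast.walk(ast.parse(code)):
            if isinstance(node, ast.Import):
                imported.update(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                imported.add(node.module)
        # keep only edges pointing inside the repository
        deps[name] = {m for m in imported if m in sources}
    return deps

def dependency_order(deps):
    """Topological order: every file appears after its dependencies."""
    return list(TopologicalSorter(deps).static_order())

# hypothetical three-file repository
sources = {
    "config": "VERSION = '1.0'\n",
    "client": "import config\n\ndef connect():\n    return config.VERSION\n",
    "app": "import client\n\ndef main():\n    return client.connect()\n",
}
order = dependency_order(file_dependencies(sources))
```

Concatenating files in such an order means the pretraining stream always introduces a definition before its uses, which is the kind of ordering a dependency-preserving CPT stage would rely on; cyclic imports would need to be broken or grouped before sorting.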