Unseen-Codebases-Domain Data Synthesis and Training Based on Code Graphs

📅 2026-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that large language models often hallucinate when generating code for unseen codebases, primarily due to the inability of existing data synthesis methods to adequately capture inter-component relationships and usage contexts. To tackle this, the authors propose UCD-Training, a novel framework that introduces code graphs into data synthesis for previously unseen repositories. The approach first performs dependency-preserving continued pretraining based on file dependencies, followed by graph-guided supervised fine-tuning using three types of synthetic data—single-hop relations, composite APIs, and library usage scenarios—each augmented with explicit reasoning traces. Evaluated on the newly constructed UnseenCodeBench benchmark, the method significantly improves code generation performance and effectively reduces hallucinations, demonstrating its efficacy in understanding and reasoning about the structure and usage patterns of unfamiliar codebases.

📝 Abstract
In the context of newly released software frameworks, large language models (LLMs) often exhibit poor performance and a high rate of hallucination, as they are not exposed to such environments during training. Although inference-time augmentation techniques such as retrieval-augmented generation (RAG) can partially mitigate hallucinations, knowledge injection through prompting alone is insufficient to enable models to fully understand the intrinsic relationships among the components of a codebase, or to reason about correct API compositions and their application. Although explicit knowledge injection can be achieved through post-training, unseen codebases, unlike public code domains, typically provide only source code and lack the large volumes of high-quality, usage-oriented code that could be directly leveraged as training data. Consequently, existing data synthesis approaches, restricted to source code alone, fail to adequately capture the usage scenarios of unseen codebases. To address these challenges, we propose UCD-Training, a two-stage training framework for reasoning-aware data synthesis grounded in a code graph constructed from unseen codebases. UCD-Training first parses the source code to build a code graph, then conducts dependency-preserving continued pretraining (CPT) using file-level dependency data, followed by graph-grounded supervised fine-tuning (SFT) on three types of synthesized data augmented with explicit reasoning traces: (1) single-hop relation reasoning data, (2) compositional API reasoning data, and (3) codebase utilization data. We further introduce UnseenCodeBench, a new benchmark for code generation on unseen codebases, and conduct comprehensive experiments across multiple codebases.
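The abstract describes parsing source files into a code graph and ordering training data so that dependencies precede their dependents. The paper does not publish its implementation; the following is a minimal illustrative sketch of that idea for a Python repository, where file-level dependencies are recovered from `import` statements and a topological order is derived for dependency-preserving continued pretraining. All function names here (`local_imports`, `dependency_order`) and the toy repo are assumptions for illustration, not the authors' code.

```python
import ast

def local_imports(source, module_names):
    # Collect project-local modules imported by this source file,
    # ignoring external libraries not present in `module_names`.
    deps = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            deps |= {a.name for a in node.names if a.name in module_names}
        elif isinstance(node, ast.ImportFrom) and node.module in module_names:
            deps.add(node.module)
    return deps

def dependency_order(files):
    # Depth-first topological sort: each module is emitted only after
    # everything it imports, so training data preserves dependencies.
    names = set(files)
    graph = {m: local_imports(src, names) for m, src in files.items()}
    order, seen = [], set()

    def visit(m):
        if m in seen:
            return
        seen.add(m)  # marked before recursion, so cycles cannot loop forever
        for dep in graph[m]:
            visit(dep)
        order.append(m)

    for m in files:
        visit(m)
    return order

# Toy repo: 'app' imports 'utils', so a dependency-preserving
# ordering must place 'utils' before 'app'.
repo = {
    "app": "import utils\n\ndef run():\n    return utils.helper()\n",
    "utils": "def helper():\n    return 1\n",
}
print(dependency_order(repo))  # ['utils', 'app']
```

A real pipeline would also add finer-grained edges (call, inheritance, API-usage relations) to the graph to drive the three SFT data types the abstract lists; this sketch covers only the file-level dependency step.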
Problem

Research questions and friction points this paper is trying to address.

unseen codebases
data synthesis
code graph
hallucination
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

code graph
dependency-preserving pretraining
reasoning-aware data synthesis
unseen codebases
graph-grounded fine-tuning
Guangsheng Ou
Sun Yat-sen University, China
Qiming Zhang
WeChat Pay, Tencent, China
Sirong Chen
WeChat Pay, Tencent, China
Anji Li
Sun Yat-sen University
AI4SE, software testing
Dong Xu
Sun Yat-sen University, China
Tiancheng Luo
The Chinese University of Hong Kong, Shenzhen, China
Dekun Dai
Sun Yat-sen University, China
Cuiyun Gao
The Chinese University of Hong Kong, China
Long Wang
WeChat Pay, Tencent, China
Jun Zhou
WeChat Pay, Tencent, China
Mingwei Liu
Rutgers University
China labor, high performance work systems
Zibin Zheng
IEEE Fellow, Highly Cited Researcher, Sun Yat-sen University, China
Blockchain, Smart Contract, Services Computing, Software Reliability