To See is Not to Master: Teaching LLMs to Use Private Libraries for Code Generation

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the poor performance of large language models (LLMs) in invoking private-library APIs, a challenge inadequately mitigated by retrieval-augmented approaches alone. The authors propose PriCoder, a framework that models private-library code generation as a learnable data synthesis task. By representing API invocation logic as graph structures and alternating between Progressive Graph Evolution and Multidimensional Graph Pruning, PriCoder automatically generates high-quality, diverse training data for fine-tuning LLMs. Evaluated across three mainstream LLMs, this approach yields gains of over 20% in pass@1 on private-API tasks in many settings without compromising general-purpose code generation capability. The study also introduces and open-sources two new evaluation benchmarks alongside the implementation code.

📝 Abstract
Large Language Models (LLMs) have shown strong potential for code generation, yet they remain limited in private-library-oriented code generation, where the goal is to generate code using APIs from private libraries. Existing approaches mainly rely on retrieving private-library API documentation and injecting relevant knowledge into the context at inference time. However, our study shows that this is insufficient: even given accurate required knowledge, LLMs still struggle to invoke private-library APIs effectively. To address this limitation, we propose PriCoder, an approach that teaches LLMs to invoke private-library APIs through automatically synthesized data. Specifically, PriCoder models private-library data synthesis as the construction of a graph, and alternates between two graph operators: (1) Progressive Graph Evolution, which improves data diversity by progressively synthesizing more diverse training samples from basic ones, and (2) Multidimensional Graph Pruning, which improves data quality through a rigorous filtering pipeline. To support rigorous evaluation, we construct two new benchmarks based on recently released libraries that are unfamiliar to the tested models. Experiments on three mainstream LLMs show that PriCoder substantially improves private-library-oriented code generation, yielding gains of over 20% in pass@1 in many settings, while causing negligible impact on general code generation capability. Our code and benchmarks are publicly available at https://github.com/contact-eniacode/PriCoder.
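The abstract describes PriCoder's synthesis pipeline as alternating two graph operators: Progressive Graph Evolution to diversify samples and Multidimensional Graph Pruning to filter them. A minimal sketch of that alternating loop, based only on the abstract — the operator names come from the paper, but their bodies here are hypothetical placeholders, not the authors' implementation:

```python
# Illustrative sketch of PriCoder's alternating synthesis loop.
# Nodes are training samples (private-library API invocation examples);
# the concrete evolution and pruning rules below are assumptions.

def progressive_graph_evolution(samples):
    # Evolution (hypothetical): synthesize more diverse samples from
    # basic ones, here by composing neighboring samples into chains.
    evolved = [f"{a} -> {b}" for a, b in zip(samples, samples[1:])]
    return samples + evolved

def multidimensional_graph_pruning(samples):
    # Pruning (hypothetical): keep only samples that pass several
    # quality checks, standing in for the paper's filtering pipeline.
    checks = [
        lambda s: len(s) < 80,          # e.g. a complexity bound
        lambda s: s.count("->") <= 2,   # e.g. a redundancy bound
    ]
    return [s for s in samples if all(check(s) for check in checks)]

def synthesize(seeds, rounds=2):
    # Alternate the two operators, as the abstract describes.
    samples = list(seeds)
    for _ in range(rounds):
        samples = progressive_graph_evolution(samples)
        samples = multidimensional_graph_pruning(samples)
    return samples

data = synthesize(["lib.connect()", "lib.query()", "lib.close()"])
```

Each round grows the sample pool with more composite invocations while the pruning step discards candidates that fail the quality filters; the resulting corpus would then be used to fine-tune the target LLM.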
Problem

Research questions and friction points this paper is trying to address.

private libraries
code generation
Large Language Models
API invocation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Private-library code generation
Synthetic data
Graph-based data synthesis
Progressive Graph Evolution
Multidimensional Graph Pruning
Yitong Zhang
College of AI, Tsinghua University; Proxseer Inc.
Chengze Li
School of Computer Science, Nanjing University
Ruize Chen
Software Institute, Nanjing University
Guowei Yang
The University of Queensland
Software engineering · Program analysis · Mobile software · AI4SE · SE4AI
Xiaoran Jia
School of Computer Science and Technology, Beijing Institute of Technology
Yijie Ren
School of Computer Science and Engineering, Beihang University
Jia Li
Assistant Professor, College of AI, Tsinghua University
Programming Language Processing · Foundation Model · AI Agent