🤖 AI Summary
This work addresses the poor performance of large language models (LLMs) in invoking private-library APIs, a challenge inadequately mitigated by retrieval-augmented approaches alone. The authors propose PriCoder, the first framework to model private-library code generation as a learnable data synthesis task. By representing API invocation logic as graph structures and alternating Progressive Graph Evolution with Multidimensional Graph Pruning, PriCoder automatically generates high-quality, diverse training data for fine-tuning LLMs. Evaluated across three mainstream LLMs, this approach yields gains of over 20% in pass@1 on private-library tasks in many settings, without compromising general-purpose code generation capability. The study also introduces two new evaluation benchmarks and open-sources them alongside the implementation code.
📝 Abstract
Large Language Models (LLMs) have shown strong potential for code generation, yet they remain limited in private-library-oriented code generation, where the goal is to generate code using APIs from private libraries. Existing approaches mainly rely on retrieving private-library API documentation and injecting the relevant knowledge into the context at inference time. However, our study shows that this is insufficient: even when provided with the exact knowledge required, LLMs still struggle to invoke private-library APIs effectively.
To address this limitation, we propose PriCoder, an approach that teaches LLMs to invoke private-library APIs through automatically synthesized data. Specifically, PriCoder models private-library data synthesis as the construction of a graph, and alternates between two graph operators: (1) Progressive Graph Evolution, which improves data diversity by progressively synthesizing more diverse training samples from basic ones, and (2) Multidimensional Graph Pruning, which improves data quality through a rigorous filtering pipeline. To support rigorous evaluation, we construct two new benchmarks based on recently released libraries that are unfamiliar to the tested models. Experiments on three mainstream LLMs show that PriCoder substantially improves private-library-oriented code generation, yielding gains of over 20% in pass@1 in many settings, while causing negligible impact on general code generation capability. Our code and benchmarks are publicly available at https://github.com/contact-eniacode/PriCoder.
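The alternation between the two graph operators can be pictured as a loop that grows a graph of training samples and then filters it. The sketch below is purely illustrative: the node structure, the quality score, the fixed expansion factor, and the single-threshold pruning are all simplifying assumptions, not the paper's actual implementation of Progressive Graph Evolution or Multidimensional Graph Pruning.

```python
from dataclasses import dataclass, field

@dataclass
class SampleNode:
    """Hypothetical node in the synthesis graph: one training sample."""
    code: str
    quality: float  # assumed quality score in [0, 1]
    children: list = field(default_factory=list)

def evolve(node: SampleNode) -> list[SampleNode]:
    """Progressive evolution: derive more diverse samples from a basic one.
    Diversity is faked here by appending variant suffixes; in practice this
    would be an LLM-driven transformation of the sample."""
    variants = [
        SampleNode(code=f"{node.code}_v{i}", quality=node.quality * 0.9)
        for i in range(2)
    ]
    node.children.extend(variants)
    return variants

def prune(nodes: list[SampleNode], threshold: float = 0.5) -> list[SampleNode]:
    """Multidimensional pruning collapsed to a single quality filter
    for the sake of the sketch."""
    return [n for n in nodes if n.quality >= threshold]

def synthesize(seed_codes: list[str], rounds: int = 2) -> list[str]:
    """Alternate evolution and pruning, accumulating surviving samples."""
    frontier = [SampleNode(code=c, quality=1.0) for c in seed_codes]
    dataset = list(frontier)
    for _ in range(rounds):
        expanded = [child for n in frontier for child in evolve(n)]
        frontier = prune(expanded)
        dataset.extend(frontier)
    return [n.code for n in dataset]

# Two rounds from one seed: 1 seed + 2 variants + 4 second-round variants.
print(synthesize(["api_call"]))
```

In the real system, each round would grow the graph outward from basic samples toward more complex API-invocation patterns, while pruning filters along several quality dimensions rather than one scalar score.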