Case2Code: Scalable Synthetic Data for Code Generation

📅 2024-07-17
🏛️ International Conference on Computational Linguistics
📈 Citations: 0
Influential: 0
🤖 AI Summary
Scaling high-quality synthetic data generation for code large language models remains challenging—existing approaches rely on expensive, powerful teacher models and suffer from limited diversity and correctness. Method: This paper introduces the Case2Code task, the first scalable synthetic paradigm that frames program behavior induction (case-to-code) as an end-to-end pipeline: large language models generate diverse inputs; automated execution yields ground-truth outputs; and dynamic testing rigorously validates functional correctness—eliminating dependence on teacher models while enabling low-cost, high-diversity, high-fidelity code data synthesis. Contribution/Results: Empirical evaluation demonstrates that models trained on Case2Code data achieve significant improvements in both case-to-code generalization and standard benchmarks (HumanEval, MBPP), confirming the effectiveness and transferability of inductive synthetic data for code modeling.

📝 Abstract
Large Language Models (LLMs) have shown outstanding breakthroughs in code generation. Recent work improves code LLMs by training on synthetic data generated by powerful LLMs, which can be challenging to scale due to the dependence on a teacher model and high generation costs. In this paper, we focus on synthesizing code data at scale and propose a Case2Code task by exploiting the expressiveness and correctness of programs. Case2Code is an inductive inference task that aims to infer underlying code implementations by observing input-output examples or program behaviors. By incorporating LLMs to generate program inputs, and executing the program with these inputs to obtain the program outputs, we can synthesize diverse and high-quality Case2Code data at scale for training and evaluating code LLMs. Experimental results show that case-to-code induction is challenging for current representative LLMs if they are untrained. Models trained with Case2Code improve performance not only on in-distribution case-to-code induction but also on various coding-generation tasks, demonstrating the great potential of large-scale synthetic data and inductive learning.
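The synthesis loop described in the abstract (generate inputs, execute the program, collect ground-truth outputs) can be sketched in a few lines. This is a hypothetical, minimal illustration, not the authors' actual pipeline; the function names and the hand-written inputs standing in for LLM-proposed inputs are assumptions:

```python
# Minimal sketch of Case2Code-style data synthesis (hypothetical, not the
# paper's implementation): given a seed program, run it on diverse inputs
# to collect ground-truth (input, output) cases; the cases plus the source
# code form one inductive training example.

def synthesize_cases(func, candidate_inputs):
    """Execute `func` on each candidate input to obtain verified cases."""
    cases = []
    for args in candidate_inputs:
        try:
            output = func(*args)  # execution yields the ground-truth output
        except Exception:
            continue              # discard inputs the program cannot handle
        cases.append((args, output))
    return cases

# Example seed program whose implementation a trained model must induce.
def rotate(s, k):
    k %= len(s)
    return s[k:] + s[:k]

# In the paper an LLM proposes diverse inputs; here they are hand-written.
inputs = [("abcdef", 2), ("hello", 7), ("x", 0)]
cases = synthesize_cases(rotate, inputs)
# A Case2Code example then asks: given `cases`, infer the body of `rotate`.
```

Dynamic execution plays the role of the teacher model here: correctness comes for free from running the program, which is what makes the data cheap to scale.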
Problem

Research questions and friction points this paper is trying to address.

Scalable synthetic data generation
Code generation using LLMs
Inductive inference in programming
Innovation

Methods, ideas, or system contributions that make the work stand out.

Case2Code: scalable code synthesis
LLMs generate diverse program inputs
Inductive learning enhances code generation
Yunfan Shao
Fudan University
Natural Language Processing; Machine Learning
Linyang Li
Shanghai AI Laboratory
Yichuan Ma
Fudan University
LLM; Synthetic Data
Peiji Li
Fudan University
Demin Song
Shanghai AI Laboratory
Qinyuan Cheng
School of Computer Science, Fudan University; Shanghai AI Laboratory
Shimin Li
Fudan University
Large Language Model; Speech Language Model
Xiaonan Li
School of Computer Science, Fudan University
Pengyu Wang
School of Computer Science, Fudan University
Qipeng Guo
Fudan University
Hang Yan
Shanghai AI Laboratory; The Chinese University of Hong Kong
Xipeng Qiu
School of Computer Science, Fudan University
Xuanjing Huang
School of Computer Science, Fudan University
Dahua Lin
The Chinese University of Hong Kong
Computer Vision; Machine Learning; Probabilistic Inference; Bayesian Nonparametrics