What Makes Large Language Models Reason in (Multi-Turn) Code Generation?

📅 2024-10-10
🏛️ arXiv.org
📈 Citations: 8
Influential: 0
🤖 AI Summary
This work investigates the reasoning behavior of large language models (LLMs) in code generation via a multi-turn automatic re-prompting framework. Methodologically, it decomposes prompts into reasoning, instruction, and execution-feedback components, then runs an extensive grid search over these configurations across the Llama 3.0/3.1 family (8B–405B) and GPT-4o on the competitive programming benchmarks CodeContests and TACO. The contributions are threefold: (1) it identifies prompting strategies that consistently improve multi-turn code generation across all models studied, under both small and large sampling budgets; (2) it shows that an optimal prompting configuration can be internalized via finetuning, allowing models to reproduce the induced reasoning process without explicit prompting; and (3) the finetuned models gain in both performance and scalability for multi-turn code generation.
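The multi-turn re-prompting loop described above can be sketched minimally: generate a candidate, execute it against tests, and feed failure details back into the next prompt. This is a hypothetical illustration, not the paper's implementation; `generate` stands in for any LLM completion call, and the `solve`-function convention and `fake_llm` stub are assumptions for the sake of a self-contained example.

```python
def run_candidate(src, arg):
    """Execute a candidate solution string and call its `solve` function."""
    ns = {}
    exec(src, ns)
    return ns["solve"](arg)

def multi_turn_codegen(generate, problem, tests, max_turns=3):
    """Re-prompt the model with execution feedback until all tests pass."""
    prompt = f"Problem: {problem}\nThink step by step, then define solve(x)."
    code = ""
    for turn in range(1, max_turns + 1):
        code = generate(prompt)
        failures = []
        for x, expected in tests:
            try:
                got = run_candidate(code, x)
            except Exception as e:  # treat crashes as failures with the error as feedback
                got = repr(e)
            if got != expected:
                failures.append((x, expected, got))
        if not failures:
            return code, turn  # solved: return code and turns used
        feedback = "\n".join(
            f"solve({x!r}) returned {got!r}, expected {expected!r}"
            for x, expected, got in failures
        )
        # Append the failed attempt and its execution feedback to the prompt.
        prompt = (f"{prompt}\n\nPrevious attempt:\n{code}\n"
                  f"Execution feedback:\n{feedback}\nPlease fix the code.")
    return code, max_turns

# Toy stand-in for an LLM: the first draft has an off-by-one bug,
# the second (after seeing feedback) is correct.
_attempts = iter([
    "def solve(x):\n    return x * x + 1",  # buggy draft
    "def solve(x):\n    return x * x",      # corrected after feedback
])
def fake_llm(prompt):
    return next(_attempts)

code, turns = multi_turn_codegen(fake_llm, "square a number", [(2, 4), (3, 9)])
```

With the stub above, the loop detects the failing tests on turn one and returns the corrected solution on turn two.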

📝 Abstract
Prompting techniques such as chain-of-thought have established themselves as a popular vehicle for improving the outputs of large language models (LLMs). For code generation, however, their exact mechanics and efficacy are under-explored. We thus investigate the effects of a wide range of prompting strategies with a focus on automatic re-prompting over multiple turns and computational requirements. After systematically decomposing reasoning, instruction, and execution feedback prompts, we conduct an extensive grid search on the competitive programming benchmarks CodeContests and TACO for multiple LLM families and sizes (Llama 3.0 and 3.1, 8B, 70B, 405B, and GPT-4o). Our study reveals strategies that consistently improve performance across all models with small and large sampling budgets. We then show how finetuning with such an optimal configuration allows models to internalize the induced reasoning process and obtain improvements in performance and scalability for multi-turn code generation.
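The abstract's final claim, that finetuning on an optimal configuration lets models internalize the induced reasoning process, implies a data-construction step: successful multi-turn trajectories are flattened into training examples. The sketch below is a hypothetical illustration of that step; the trajectory schema (`problem`, `solved`, `turns`, `feedback` keys) is an assumption, not the paper's format.

```python
def trajectories_to_sft(trajectories):
    """Convert solved multi-turn trajectories into prompt/completion pairs,
    so finetuning exposes the model to drafts, feedback, and the final fix."""
    examples = []
    for traj in trajectories:
        if not traj["solved"]:
            continue  # keep only trajectories whose final code passed all tests
        parts = []
        for turn in traj["turns"]:
            parts.append(turn["response"])
            if turn.get("feedback"):
                parts.append(turn["feedback"])
        examples.append({"prompt": traj["problem"],
                         "completion": "\n".join(parts)})
    return examples

# Toy data: one solved two-turn trajectory, one unsolved one to be filtered out.
toy = [
    {"problem": "square a number", "solved": True, "turns": [
        {"response": "def solve(x): return x*x+1",
         "feedback": "solve(2) == 5, expected 4"},
        {"response": "def solve(x): return x*x"},
    ]},
    {"problem": "unsolved task", "solved": False, "turns": []},
]
sft = trajectories_to_sft(toy)
```

Training on such flattened traces is one plausible way a model could learn to emit the whole draft-feedback-repair sequence in a single pass.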
Problem

Research questions and friction points this paper is trying to address.

Investigating prompting strategies for multi-turn code generation
Analyzing reasoning decomposition in LLMs for code generation
Optimizing performance and scalability via finetuning with optimal prompts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Investigates multi-turn automatic re-prompting strategies
Systematically decomposes reasoning and feedback prompts
Finetunes models with optimal prompting configurations
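The decomposition-plus-grid-search idea in the points above can be sketched as an exhaustive search over the Cartesian product of prompt components. The component names and the toy scoring function below are assumptions for illustration; in the paper the score would be solve rate on CodeContests or TACO.

```python
import itertools

# Hypothetical prompt components along the three decomposed axes.
reasoning_prompts   = ["none", "chain_of_thought", "self_reflection"]
instruction_prompts = ["plain", "write_helper_functions"]
feedback_prompts    = ["binary_pass_fail", "failing_tests_verbatim"]

def evaluate(config):
    """Stand-in for benchmark solve rate; returns a toy score per config."""
    reasoning, instruction, feedback = config
    score = 0.0
    if reasoning == "chain_of_thought":
        score += 0.2
    if instruction == "write_helper_functions":
        score += 0.1
    if feedback == "failing_tests_verbatim":
        score += 0.15
    return 0.3 + score  # toy baseline pass rate plus per-component deltas

# Grid search: evaluate every combination and keep the best-scoring one.
best = max(
    itertools.product(reasoning_prompts, instruction_prompts, feedback_prompts),
    key=evaluate,
)
```

The winning configuration is then the one used for both large-scale sampling and the finetuning stage.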