Exploring Parameter-Efficient Fine-Tuning Techniques for Code Generation with Large Language Models

📅 2023-08-21
🏛️ arXiv.org
📈 Citations: 17
Influential: 2
🤖 AI Summary
Efficiently adapting large language models (LLMs) for code generation under resource constraints remains challenging. Method: The paper systematically evaluates parameter-efficient fine-tuning (PEFT) methods, including LoRA, Adapters, and Prefix-Tuning, across LLMs of multiple scales and three Python code-generation benchmarks (Conala, CodeAlpacaPy, APPS). It further combines PEFT with quantization to reduce the memory footprint of fine-tuning while preserving generation quality. Contribution/Results: PEFT significantly outperforms in-context learning (ICL) and retrieval-augmented generation (RAG) on code generation, achieving higher accuracy and stronger generalization. The combined PEFT-quantization strategy reduces memory overhead by up to 60%, enabling efficient fine-tuning of billion-parameter LLMs on a single GPU. Empirical results show substantial performance gains at minimal computational cost, establishing a scalable pathway for deploying lightweight, high-performance code-generation models in resource-constrained environments.
📝 Abstract
Large language models (LLMs) demonstrate impressive capabilities to generate accurate code snippets given natural language intents in a zero-shot manner, i.e., without the need for specific fine-tuning. While prior studies have highlighted the advantages of fine-tuning LLMs, this process incurs high computational costs, making it impractical in resource-scarce environments, particularly for models with billions of parameters. To address these challenges, previous research explored in-context learning (ICL) and retrieval-augmented generation (RAG) as strategies to guide the LLM generative process with task-specific prompt examples. However, ICL and RAG introduce inconveniences, such as the need for designing contextually relevant prompts and the absence of learning task-specific parameters, thereby limiting downstream task performance. In this context, we foresee parameter-efficient fine-tuning (PEFT) as a promising approach to efficiently specialize LLMs to task-specific data while maintaining reasonable resource consumption. In this paper, we deliver a comprehensive study of PEFT techniques for LLMs in the context of automated code generation. Our investigation reveals the superiority and potential of PEFT over ICL and RAG across a diverse set of LLMs and three representative Python code generation datasets: Conala, CodeAlpacaPy, and APPS. Furthermore, our study highlights the potential for tuning larger LLMs and significant reductions in memory usage by combining PEFT with quantization. Therefore, this study opens opportunities for broader applications of PEFT in software engineering scenarios. Our code is available at https://github.com/martin-wey/peft-llm-code/.
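The abstract's core premise, that PEFT adapts an LLM at a fraction of the cost of full fine-tuning, can be made concrete with a parameter count. The sketch below illustrates the general LoRA idea (training two low-rank factors B and A instead of the full weight matrix, so the adapted weight is W + BA). It is illustrative only, not the paper's implementation; the 4096-dimensional layer and rank 8 are assumed values typical of billion-parameter LLMs, not settings reported in the paper.

```python
def lora_trainable_params(d_in: int, d_out: int, r: int) -> int:
    """Trainable parameters for one LoRA-adapted linear layer:
    A is (r x d_in) and B is (d_out x r), so r * (d_in + d_out) in total."""
    return r * (d_in + d_out)

def full_ft_params(d_in: int, d_out: int) -> int:
    """Trainable parameters when fine-tuning the full (d_out x d_in) weight W."""
    return d_in * d_out

# Example: one 4096x4096 projection layer (an assumed size), LoRA rank 8.
d, rank = 4096, 8
full = full_ft_params(d, d)               # 16,777,216 trainable weights
lora = lora_trainable_params(d, d, rank)  # 65,536 trainable weights
print(f"trainable fraction: {lora / full:.4%}")  # prints "trainable fraction: 0.3906%"
```

For this layer, LoRA trains under 0.4% of the weights that full fine-tuning would touch; since the frozen base weights W need no optimizer state and can additionally be stored quantized, this is the mechanism behind the memory reductions the paper reports when combining PEFT with quantization.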
Problem

Research questions and friction points this paper addresses:

- Parameter Optimization
- Large Language Models
- Code Generation

Innovation

Methods, ideas, or system contributions that make the work stand out:

- Parameter-Efficient Fine-Tuning
- Large Language Models
- Code Generation
- M. Weyssow (DIRO, University of Montreal, Canada)
- Xin Zhou (Singapore Management University, Singapore)
- Kisub Kim (Assistant Professor @ DGIST, Korea; interests: AI for Software Engineering, Large Language Models, Software Analytics, Manufacturing AI)
- David Lo (Singapore Management University, Singapore)
- H. Sahraoui (DIRO, University of Montreal, Canada)