🤖 AI Summary
A comprehensive empirical comparison between prompt engineering and supervised fine-tuning remains lacking, particularly for code-related tasks. Method: This work systematically evaluates basic prompting, in-context learning, and task-specific prompting with GPT-4 against 17 fine-tuned models (such as CodeBERT) on code summarization, generation, and translation, using benchmarks including MBPP and CodeXGLUE, complemented by a user study with graduate students and industry practitioners. Contribution/Results: GPT-4 under fully automated prompting does not consistently outperform the best fine-tuned models (e.g., it trails by 28.3 percentage points on MBPP for code generation). Crucially, incorporating real-time human feedback via conversational prompting substantially improves task completion rates, suggesting human-in-the-loop prompting as a promising paradigm. The study confirms that fine-tuning retains advantages in fully automated settings, while human-in-the-loop prompting mitigates the limitations of purely automatic prompting strategies.
📝 Abstract
The rapid advancement of large language models (LLMs) has greatly expanded the potential for automated code-related tasks. Two primary methodologies are used in this domain: prompt engineering and fine-tuning. Prompt engineering involves applying different strategies to query LLMs, such as ChatGPT, while fine-tuning further adapts pre-trained models, such as CodeBERT, by training them on task-specific data. Despite the growth in the area, there remains a lack of comprehensive comparative analysis between the two approaches for code models. In this paper, we evaluate GPT-4 using three prompt engineering strategies -- basic prompting, in-context learning, and task-specific prompting -- and compare it against 17 fine-tuned models across three code-related tasks: code summarization, generation, and translation. Our results indicate that GPT-4 with prompt engineering does not consistently outperform fine-tuned models. For instance, in code generation, GPT-4 is outperformed by fine-tuned models by 28.3 percentage points on the MBPP dataset, and it shows mixed results on code translation tasks. Additionally, we conducted a user study involving 27 graduate students and 10 industry practitioners. The study revealed that GPT-4 with conversational prompts, which incorporate human feedback during interaction, significantly improved performance compared to automated prompting; participants often provided explicit instructions or added context during these interactions. These findings suggest that GPT-4 with conversational prompting holds significant promise for automated code-related tasks, whereas fully automated prompt engineering without human involvement still requires further investigation.
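To make the three automated prompting strategies concrete, the sketch below renders each as a simple prompt-template function for a code summarization input. This is an illustrative Python sketch only; the template wording and function names are hypothetical and do not reproduce the exact prompts used in the study.

```python
# Hypothetical templates illustrating the three automated prompting
# strategies compared in the paper (basic, in-context, task-specific).

def basic_prompt(task: str, code: str) -> str:
    """Basic (zero-shot) prompting: the task instruction plus the input."""
    return f"{task}\n\n{code}"

def in_context_prompt(task, examples, code):
    """In-context learning: prepend a few input/output demonstrations."""
    shots = "\n\n".join(f"Input:\n{x}\nOutput:\n{y}" for x, y in examples)
    return f"{task}\n\n{shots}\n\nInput:\n{code}\nOutput:"

def task_specific_prompt(code: str) -> str:
    """Task-specific prompting: instructions tailored to one task,
    here code summarization with an explicit role and output format."""
    return (
        "You are an expert Python developer. Summarize the following "
        "function in one sentence for API documentation.\n\n" + code
    )

snippet = "def add(a, b):\n    return a + b"
print(basic_prompt("Summarize the function.", snippet))
print(in_context_prompt(
    "Summarize the function.",
    [("def inc(x):\n    return x + 1", "Increments x by one.")],
    snippet,
))
print(task_specific_prompt(snippet))
```

Conversational prompting, by contrast, is not a fixed template: it interleaves model outputs with human feedback turns, which is why it cannot be captured by a single automated function like those above.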