🤖 AI Summary
This study addresses the lack of systematic understanding regarding the impact of system prompts on instruction-tuned language models in code generation tasks, particularly across varying model scales, prompting strategies, and programming languages. To this end, the authors construct a multidimensional evaluation framework encompassing 120 distinct configurations and present the first comprehensive analysis revealing that the effectiveness of system prompts increases with model scale, that few-shot prompting substantially attenuates this effect, and that Java-based tasks exhibit greater sensitivity to prompt variations than Python-based ones. These findings offer critical empirical insights for prompt engineering and deployment practices in large code language models.
📝 Abstract
Instruction-tuned Language Models (ILMs) have become essential components of modern AI systems, demonstrating exceptional versatility across a wide range of natural language and reasoning tasks. Among their most impactful applications is code generation, where ILMs--commonly referred to as Code Language Models (CLMs)--have demonstrated remarkable capability. This strength stems from their defining feature: the use of explicit task instructions during fine-tuning, which enables them to bridge natural language and code by translating human intent into executable code. While much of their progress has been driven by advances in scaling laws and training methodologies, one critical aspect remains underexplored--the impact of system prompts on the performance of both general-purpose ILMs and specialized CLMs when deployed to assist users with code generation tasks. In this study, we take a first step toward bridging this gap by systematically evaluating how system prompts of varying instructional detail, along with model scale, prompting strategy, and programming language, affect ILMs and CLMs in code generation tasks. Our evaluation framework, spanning 120 model configurations, reveals that (1) the influence of system prompts increases with model scale; (2) few-shot prompting reduces this effect compared to zero-shot; and (3) programming language matters, with Java showing greater sensitivity to system prompt variations than Python.
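To make the evaluated dimensions concrete, here is a minimal sketch of how two of them--system-prompt detail and zero-shot vs. few-shot prompting--might be combined when assembling a chat-style code-generation request. All prompt text, example pairs, and function names below are hypothetical illustrations, not taken from the study.

```python
# Hypothetical sketch: building chat messages for a code-generation request,
# varying system-prompt detail and prompting strategy (zero-shot vs. few-shot).

SYSTEM_PROMPTS = {
    "minimal": "You are a coding assistant.",
    "detailed": (
        "You are an expert Python developer. Return only a complete, "
        "runnable function that satisfies the task description."
    ),
}

# Illustrative few-shot example pair (not from the paper).
FEW_SHOT_EXAMPLES = [
    ("Write a function that doubles a number.",
     "def double(x):\n    return 2 * x"),
]

def build_messages(task, system_detail="minimal", few_shot=False):
    """Assemble a message list: system prompt, optional few-shot turns, then the task."""
    messages = [{"role": "system", "content": SYSTEM_PROMPTS[system_detail]}]
    if few_shot:
        for question, answer in FEW_SHOT_EXAMPLES:
            messages.append({"role": "user", "content": question})
            messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": task})
    return messages

# Zero-shot with a minimal system prompt vs. few-shot with a detailed one:
zero_shot = build_messages("Write a function that reverses a string.")
few_shot = build_messages("Write a function that reverses a string.",
                          system_detail="detailed", few_shot=True)
```

Under this framing, the study's finding (2) would correspond to the model's output changing less across `SYSTEM_PROMPTS` variants when the few-shot turns are present than when they are absent.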