Not All Tokens Matter: Data-Centric Optimization for Efficient Code Summarization

📅 2026-01-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the lack of systematic understanding regarding the impact of system prompts on instruction-tuned language models in code generation tasks, particularly across varying model scales, prompting strategies, and programming languages. To this end, the authors construct a multidimensional evaluation framework encompassing 120 distinct configurations and present the first comprehensive analysis revealing that the effectiveness of system prompts increases with model scale, that few-shot prompting substantially attenuates this effect, and that Java-based tasks exhibit greater sensitivity to prompt variations than Python-based ones. These findings offer critical empirical insights for prompt engineering and deployment practices in large code language models.

📝 Abstract
Instruction-tuned Language Models (ILMs) have become essential components of modern AI systems, demonstrating exceptional versatility across a wide range of natural language and reasoning tasks. Among their most impactful applications is code generation, where ILMs, commonly referred to as Code Language Models (CLMs), have demonstrated remarkable capability. This strength stems from their defining feature: the use of explicit task instructions during fine-tuning, which enables them to bridge natural language and code by translating human intent into executable code. While much of their progress has been driven by advances in scaling laws and training methodologies, one critical aspect remains underexplored: the impact of system prompts on the performance of both general-purpose ILMs and specialized CLMs when used to assist users with code generation. In this study, we take a first step toward bridging this gap by systematically evaluating how system prompts of varying instructional detail, along with model scale, prompting strategy, and programming language, affect ILMs and CLMs in code generation tasks. Our evaluation framework, spanning 120 model configurations, reveals that (1) the influence of system prompts increases with model scale; (2) few-shot prompting reduces this effect compared to zero-shot; and (3) programming language matters, with Java showing greater sensitivity to system prompt variations than Python.
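The prompting strategies compared in the abstract can be made concrete with a small sketch. The helper below assembles a chat request with an optional system prompt and optional few-shot demonstrations, using the common OpenAI-style message schema; the function name, message format, and example prompts are illustrative assumptions, not the paper's exact setup.

```python
# Hypothetical sketch of the two prompting strategies studied:
# zero-shot with a system prompt vs. few-shot with demonstrations.
# The message schema follows the common chat convention
# (roles: "system", "user", "assistant"); it is an assumption.

def build_messages(task, system_prompt=None, few_shot=()):
    """Assemble a chat message list for a code-generation query.

    task:          natural-language description of the code to generate
    system_prompt: optional instruction string (the variable under study)
    few_shot:      iterable of (example_task, example_code) demonstrations
    """
    messages = []
    if system_prompt:  # omit for the "no system prompt" condition
        messages.append({"role": "system", "content": system_prompt})
    for ex_task, ex_code in few_shot:  # few-shot demonstrations, in order
        messages.append({"role": "user", "content": ex_task})
        messages.append({"role": "assistant", "content": ex_code})
    messages.append({"role": "user", "content": task})
    return messages


# Zero-shot with a detailed system prompt:
zero_shot = build_messages(
    "Write a Java method that reverses a string.",
    system_prompt="You are an expert Java developer. Return only compilable code.",
)

# Few-shot: per the paper's findings, demonstrations attenuate the
# influence of the system prompt relative to the zero-shot setting.
few_shot = build_messages(
    "Write a Java method that reverses a string.",
    system_prompt="You are an expert Java developer. Return only compilable code.",
    few_shot=[
        ("Write a Java method that adds two ints.",
         "int add(int a, int b) { return a + b; }"),
    ],
)
```

Varying the system prompt's level of instructional detail while holding the task and demonstrations fixed gives one configuration axis; crossing it with model scale, zero- vs. few-shot, and target language yields a grid like the paper's 120 configurations.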
Problem

Research questions and friction points this paper is trying to address.

system prompts
instruction-tuned language models
code generation
code language models
prompting strategy
Innovation

Methods, ideas, or system contributions that make the work stand out.

system prompts
code language models
instruction tuning
prompt sensitivity
code summarization
Saima Afrin
AURA @ Dept. of Computer Science, William & Mary, USA
Zaiyu Cheng
AURA @ Dept. of Computer Science, William & Mary, USA
Tushar Sharma
Assistant Professor, FCS, Dalhousie University
Software engineering, machine learning for software engineering, Green AI
Alexander Serebrenik
Full Professor of Social Software Engineering, Computer Science, Eindhoven University of Technology
Software engineering, human aspects of software engineering, mining software repositories
M. Di Penta
Dept. of Engineering, University of Sannio, IT
A. Mastropaolo
AURA @ Dept. of Computer Science, William & Mary, USA