🤖 AI Summary
Large language models (LLMs) often generate suboptimal or insecure code and lack interpretability in code generation tasks. To address this, we propose a prototype-driven in-context learning (ICL) example sampling method that integrates abstract syntax tree (AST) analysis to identify syntactically and semantically critical regions influencing code generation—thereby jointly optimizing performance and interpretability. Our approach is the first to unify prototype-based clustering with AST-structured awareness, enabling systematic, causal analysis of how ICL example quality affects generation outcomes. Experiments on the MBPP benchmark demonstrate that high-quality prototype examples significantly improve pass@10 (+12.3%), while simultaneously enhancing code readability and security; conversely, low-quality examples degrade performance markedly. This work establishes an interpretable, prototype-guided ICL sampling paradigm for controllable and trustworthy code generation.
📝 Abstract
Since the introduction of Large Language Models (LLMs), they have been widely adopted for various tasks such as text summarization, question answering, speech-to-text translation, and more. In recent times, the use of LLMs for code generation has gained significant attention, with tools such as Cursor and Windsurf demonstrating the ability to analyze massive code repositories and recommend relevant changes. Big tech companies have also acknowledged the growing reliance on LLMs for code generation within their codebases. Although these advances significantly improve developer productivity, increasing reliance on automated code generation can proportionally increase the risk of suboptimal solutions and insecure code. Our work focuses on automatically sampling In-Context Learning (ICL) demonstrations which can improve model performance and enhance the interpretability of the generated code. Using AST-based analysis on outputs from the MBPP test set, we identify regions of code most influenced by the chosen demonstrations. In our experiments, we show that high-quality ICL demonstrations not only make outputs easier to interpret but also yield a positive performance improvement on the pass@10 metric. Conversely, poorly chosen ICL demonstrations affected the LLM performance on the pass@10 metric negatively compared to the base model. Overall, our approach highlights the importance of efficient sampling strategies for ICL, which can affect the performance of the model on any given task.