CL4SE: A Context Learning Benchmark For Software Engineering Tasks

📅 2026-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the lack of a systematic taxonomy and a dedicated benchmark for in-context learning in software engineering, which has hindered quantifying how different kinds of contextual information affect core tasks. To bridge this gap, we propose CL4SE, the first in-context learning benchmark tailored to software engineering. It defines four fine-grained context types (interpretable examples, project-specific context, procedural decision-making context, and mixed positive-and-negative examples) and curates a high-quality dataset spanning code generation, code summarization, code review, and patch correctness assessment. Evaluation on over 13,000 samples shows that mainstream large language models, without any fine-tuning, achieve an average performance gain of 24.7%; notably, procedural context improves code review by up to 33%, and mixed positive-and-negative context boosts patch correctness assessment by 30%. We publicly release the dataset and evaluation framework to foster reproducible research.
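The summary describes test-time context learning: contextual information is prepended to the task input rather than baked in via fine-tuning. As a minimal sketch of that idea, the snippet below assembles a prompt from one of the four context types and a task query. All names and the text layout here are illustrative assumptions; the benchmark's actual prompt format is not specified in this summary.

```python
def build_prompt(task_instruction: str, context_items: list[str], query: str) -> str:
    """Prepend in-context information (e.g. interpretable examples or
    project-specific snippets) to a task query; no fine-tuning involved.

    Hypothetical sketch: section markers and numbering are assumptions,
    not the CL4SE format.
    """
    blocks = ["### Context"]
    for i, item in enumerate(context_items, 1):
        blocks.append(f"[{i}] {item}")  # number each context item
    blocks += ["### Task", task_instruction, "### Input", query]
    return "\n".join(blocks)
```

The same builder would serve all four context types; only the content of `context_items` changes (worked examples, project files, review decisions, or paired correct/incorrect patches).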

📝 Abstract
Context engineering has emerged as a pivotal paradigm for unlocking the potential of Large Language Models (LLMs) in Software Engineering (SE) tasks, enabling performance gains at test time without model fine-tuning. Despite its success, existing research lacks a systematic taxonomy of SE-specific context types and a dedicated benchmark to quantify the heterogeneous effects of different contexts across core SE workflows. To address this gap, we propose CL4SE (Context Learning for Software Engineering), a comprehensive benchmark featuring a fine-grained taxonomy of four SE-oriented context types (interpretable examples, project-specific context, procedural decision-making context, and positive & negative context), each mapped to a representative task (code generation, code summarization, code review, and patch correctness assessment). We construct high-quality datasets comprising over 13,000 samples from more than 30 open-source projects and evaluate five mainstream LLMs across nine metrics. Extensive experiments demonstrate that context learning yields an average performance improvement of 24.7% across all tasks. Specifically, procedural context boosts code review performance by up to 33% (Qwen3-Max), mixed positive-negative context improves patch assessment by 30% (DeepSeek-V3), project-specific context increases code summarization BLEU by 14.78% (GPT-Oss-120B), and interpretable examples enhance code generation PASS@1 by 5.72% (DeepSeek-V3). CL4SE establishes the first standardized evaluation framework for SE context learning, provides actionable empirical insights into task-specific context design, and releases a large-scale dataset to facilitate reproducible research in this domain.
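The abstract reports a PASS@1 gain for code generation among its nine metrics. The paper's exact estimator is not given here; a common choice for this metric is the unbiased pass@k estimator popularized by Chen et al. (2021), sketched below under that assumption: given n sampled solutions of which c pass the tests, it estimates the probability that at least one of k drawn samples is correct.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate from n samples with c correct.

    pass@k = 1 - C(n - c, k) / C(n, k); if fewer than k samples
    are incorrect, every size-k draw contains a correct one.
    """
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with 4 samples of which 2 are correct, `pass_at_k(4, 2, 1)` gives 0.5, matching the intuitive per-sample success rate; the estimator matters when k > 1.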
Problem

Research questions and friction points this paper is trying to address.

context learning
software engineering
benchmark
large language models
context taxonomy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Context Learning
Software Engineering Benchmark
Large Language Models
Context Taxonomy
Empirical Evaluation