Success is in the Details: Evaluate and Enhance Details Sensitivity of Code LLMs through Counterfactuals

📅 2025-05-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current code large language models (Code LLMs) exhibit insufficient sensitivity to subtle semantic variations in problem descriptions—termed “code sensitivity”—and existing benchmarks and instruction datasets lack systematic evaluation of this capability. Method: We propose CTF-Code, the first counterfactual detail-sensitivity benchmark for code generation, and introduce CTF-Instruct, an incremental instruction-tuning framework tailored to enhance such sensitivity. Contribution/Results: Our approach features (1) a counterfactual perturbation–based sensitivity evaluation paradigm; (2) a three-dimensional instruction-data filtering mechanism balancing difficulty, diversity, and sensitivity; and (3) a sensitivity-driven evaluation protocol. Experiments demonstrate that CTF-Instruct achieves over 2% absolute improvement on CTF-Code and over 10% on LiveCodeBench, significantly boosting model robustness to fine-grained semantic changes in programming tasks.

📝 Abstract
Code Sensitivity refers to the ability of Code LLMs to recognize and respond to detail changes in problem descriptions. While current code benchmarks and instruction data focus on difficulty and diversity, sensitivity is overlooked. We first introduce the CTF-Code benchmark, constructed using counterfactual perturbations that minimize input changes while maximizing output changes. Evaluation shows that many LLMs suffer a performance drop of more than 10% compared to the original problems. To fully exploit sensitivity, we propose CTF-Instruct, an incremental instruction fine-tuning framework that extends existing data and uses a selection mechanism to cover the three dimensions of difficulty, diversity, and sensitivity. Experiments show that LLMs fine-tuned with CTF-Instruct data achieve over a 2% improvement on CTF-Code and a more than 10% performance boost on LiveCodeBench, validating the feasibility of enhancing LLMs' sensitivity to improve performance.
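The counterfactual idea in the abstract can be illustrated with a toy perturbation pair (a hypothetical example, not an actual item from the CTF-Code benchmark): one word of the problem description flips, and every expected output changes with it.

```python
# Toy counterfactual pair: the problem text changes minimally
# ("even" -> "odd") while the expected outputs change substantially.
original = {
    "problem": "Return the sum of all even numbers in the list.",
    "solution": lambda xs: sum(x for x in xs if x % 2 == 0),
}
counterfactual = {
    "problem": "Return the sum of all odd numbers in the list.",
    "solution": lambda xs: sum(x for x in xs if x % 2 == 1),
}

# A model that ignores the perturbed detail would reuse the original
# solution and fail every counterfactual test case below:
for xs in ([1, 2, 3, 4], [5, 7], [2, 2]):
    assert original["solution"](xs) != counterfactual["solution"](xs)
```

A detail-sensitive model should produce the second solution when given the perturbed description; a model that pattern-matches on the overall problem shape will not.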
Problem

Research questions and friction points this paper is trying to address.

Assessing Code LLMs' sensitivity to minor input changes
Addressing lack of sensitivity focus in current benchmarks
Improving LLM performance via sensitivity-enhanced fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

CTF-Code benchmark with counterfactual perturbations
Incremental instruction fine-tuning framework CTF-Instruct
Selection mechanism for difficulty, diversity, sensitivity
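The three-dimensional selection mechanism above could be sketched as a weighted top-k filter over scored instruction samples. This is a minimal illustration under assumed field names and weights, not the paper's actual configuration:

```python
# Hypothetical sketch: each instruction sample carries difficulty,
# diversity, and sensitivity scores in [0, 1]; keep the top-k samples
# by a weighted sum of the three dimensions.
def select(samples, k, w=(1.0, 1.0, 1.0)):
    def score(s):
        return (w[0] * s["difficulty"]
                + w[1] * s["diversity"]
                + w[2] * s["sensitivity"])
    return sorted(samples, key=score, reverse=True)[:k]

pool = [
    {"id": 1, "difficulty": 0.9, "diversity": 0.2, "sensitivity": 0.1},
    {"id": 2, "difficulty": 0.4, "diversity": 0.8, "sensitivity": 0.7},
    {"id": 3, "difficulty": 0.3, "diversity": 0.3, "sensitivity": 0.2},
]
print([s["id"] for s in select(pool, k=2)])  # -> [2, 1]
```

Raising the sensitivity weight `w[2]` would bias selection toward counterfactual-style samples, which is the knob the framework's incremental tuning turns.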
Xianzhen Luo
Harbin Institute of Technology
Code Intelligence · Inference Acceleration
Qingfu Zhu
Harbin Institute of Technology
NLP · Code LLM
Zhiming Zhang
Harbin Institute of Technology, Harbin, China
Mingzheng Xu
Harbin Institute of Technology, Harbin, China
Tianhao Cheng
Fudan University
Large Language Model
Yixuan Wang
Harbin Institute of Technology, Harbin, China
Shijie Xuyang
Fudan University, Shanghai, China
Zhiyuan Ma
University of Science and Technology of China, Hefei, China
YuanTao Fan
Beijing University of Posts and Telecommunications, Beijing, China
Wanxiang Che
Professor at Harbin Institute of Technology
Natural Language Processing