🤖 AI Summary
Current code large language models (Code LLMs) exhibit insufficient sensitivity to subtle semantic variations in problem descriptions—termed “code sensitivity”—and existing benchmarks and instruction datasets lack systematic evaluation of this capability.
Method: We propose CTF-Code, the first counterfactual detail-sensitivity benchmark for code generation, and introduce CTF-Instruct, an incremental instruction-tuning framework tailored to enhance such sensitivity.
Contribution/Results: Our approach features (1) a counterfactual perturbation-based sensitivity evaluation paradigm; (2) a three-dimensional instruction-data selection mechanism balancing difficulty, diversity, and sensitivity; and (3) empirical evidence that sensitivity-targeted tuning transfers to general code generation. Experiments show that CTF-Instruct yields over a 2% improvement on CTF-Code and more than a 10% gain on LiveCodeBench, markedly improving model robustness to fine-grained semantic changes in programming tasks; a sketch of the selection idea follows below.
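As a rough illustration of the three-dimensional selection idea (not the paper's actual implementation; the `Sample` fields, scoring functions, and weights here are hypothetical placeholders), a greedy weighted-sum filter over difficulty, diversity, and sensitivity scores might look like this:

```python
from dataclasses import dataclass

@dataclass
class Sample:
    """One instruction-tuning example with precomputed dimension scores in [0, 1]."""
    prompt: str
    difficulty: float   # e.g. estimated from model pass rates
    diversity: float    # e.g. distance to already-covered task types
    sensitivity: float  # e.g. how strongly a counterfactual edit changes the reference solution

def select_ctf_instruct(pool: list[Sample], k: int,
                        weights: tuple[float, float, float] = (1.0, 1.0, 1.0)) -> list[Sample]:
    """Keep the k samples with the highest weighted sum over the three dimensions."""
    w_dif, w_div, w_sen = weights
    ranked = sorted(
        pool,
        key=lambda s: w_dif * s.difficulty + w_div * s.diversity + w_sen * s.sensitivity,
        reverse=True,
    )
    return ranked[:k]

# Usage sketch: retain the 10,000 highest-scoring samples from an existing instruction pool.
# selected = select_ctf_instruct(pool, k=10_000)
```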
📝 Abstract
Code Sensitivity refers to the ability of Code LLMs to recognize and respond to detail changes in problem descriptions. While current code benchmarks and instruction data focus on difficulty and diversity, sensitivity is overlooked. We first introduce the CTF-Code benchmark, constructed with counterfactual perturbations that minimize input changes while maximizing output changes. Evaluation shows that many LLMs suffer a performance drop of more than 10% relative to the original problems. To fully exploit sensitivity, we propose CTF-Instruct, an incremental instruction fine-tuning framework that extends existing data and uses a selection mechanism to cover the three dimensions of difficulty, diversity, and sensitivity. Experiments show that LLMs fine-tuned on CTF-Instruct data achieve over a 2% improvement on CTF-Code and more than a 10% performance boost on LiveCodeBench, validating the feasibility of enhancing LLMs' sensitivity to improve overall performance.
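To make the counterfactual-perturbation idea concrete, here is a minimal, hypothetical sketch (the problem pair and tests are invented for illustration and are not drawn from CTF-Code): a single-detail change in the description flips the expected output, so a model that overlooks the change passes the original problem but fails its counterfactual variant.

```python
# Hypothetical counterfactual pair: one detail changes ("even" -> "odd"),
# so a correct solution must produce a different output on the same input.
original = {
    "problem": "Return the sum of the even numbers in the list.",
    "tests": [([1, 2, 3, 4], 6)],
}
counterfactual = {
    "problem": "Return the sum of the odd numbers in the list.",
    "tests": [([1, 2, 3, 4], 4)],
}

def insensitive_solution(nums):
    """Stand-in for a model that ignores the perturbed detail and keeps solving the original task."""
    return sum(n for n in nums if n % 2 == 0)

def passes(solution, task) -> bool:
    """Check a candidate solution against all test cases of a task."""
    return all(solution(inp) == expected for inp, expected in task["tests"])

print(passes(insensitive_solution, original))        # True
print(passes(insensitive_solution, counterfactual))  # False -> counted as a sensitivity failure
```

Aggregating such failures over many perturbed pairs gives the kind of original-vs-counterfactual accuracy gap the abstract reports (a drop of more than 10% for many LLMs).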