IFEvalCode: Controlled Code Generation

📅 2025-07-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the limited instruction-following capability of code large language models (Code LLMs) under fine-grained constraints—such as coding style, structural conventions, and line-count limits—while preserving functional correctness. Method: the authors propose forward and backward constraint generation to improve instruction following in controlled code generation; construct IFEvalCode, the first bilingual (Chinese–English), multi-language (seven languages) controllable code generation benchmark (1.6K samples); and decouple evaluation into two separate metrics, functional correctness and instruction adherence. Results: extensive experiments across 40+ mainstream models reveal substantial deficits in instruction following: closed-source models consistently outperform open-source counterparts, highlighting a critical gap between current Code LLM capabilities and real-world development constraints.

📝 Abstract
Code large language models (Code LLMs) have made significant progress in code generation by translating natural language descriptions into functional code; however, real-world applications often demand stricter adherence to detailed requirements such as coding style, line count, and structural constraints, beyond mere correctness. To address this, the paper introduces forward and backward constraint generation to improve the instruction-following capabilities of Code LLMs in controlled code generation, ensuring outputs align more closely with human-defined guidelines. The authors further present IFEvalCode, a multilingual benchmark comprising 1.6K test samples across seven programming languages (Python, Java, JavaScript, TypeScript, Shell, C++, and C#), with each sample featuring both Chinese and English queries. Unlike existing benchmarks, IFEvalCode decouples evaluation into two metrics: correctness (Corr.) and instruction-following (Instr.), enabling a more nuanced assessment. Experiments on over 40 LLMs reveal that closed-source models outperform open-source ones in controllable code generation and highlight a significant gap between the models' ability to generate correct code versus code that precisely follows instructions.
Problem

Research questions and friction points this paper is trying to address.

Code LLMs often fail to satisfy strict, fine-grained coding requirements (style, structure, line count) even when their output is functionally correct
Existing benchmarks lack constraint-aware test cases for measuring instruction-following in code generation
Correctness and adherence to human-defined guidelines are conflated in current evaluations and need to be measured separately
Innovation

Methods, ideas, or system contributions that make the work stand out.

Forward and backward constraints generation
IFEvalCode multilingual benchmark
Decouples evaluation into correctness and instruction-following
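The decoupled evaluation idea above can be sketched in a few lines: a candidate solution is scored separately for functional correctness (Corr.: does it pass the reference test?) and instruction-following (Instr.: does it satisfy the fine-grained constraints?). This is a minimal illustrative sketch, not the paper's actual harness; the function names, the sample task, and the specific constraints (a line-count limit and a required function name) are assumptions chosen for illustration.

```python
def check_correctness(code: str) -> bool:
    """Corr.: execute the candidate and run a reference unit test."""
    namespace = {}
    try:
        exec(code, namespace)
        return namespace["add"](2, 3) == 5  # sample reference test
    except Exception:
        return False

def check_instructions(code: str) -> bool:
    """Instr.: verify fine-grained constraints independently of
    correctness, e.g. a line-count limit and a naming convention."""
    lines = [ln for ln in code.splitlines() if ln.strip()]
    return len(lines) <= 3 and "def add(" in code

candidate = "def add(a, b):\n    return a + b\n"
corr = check_correctness(candidate)    # passes the reference test
instr = check_instructions(candidate)  # within 3 lines, named `add`
print(corr, instr)
```

A solution can pass one check and fail the other, which is exactly the gap the benchmark measures: for instance, a correct but verbose solution would score Corr. = True, Instr. = False.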