Vibe Checker: Aligning Code Evaluation with Human Preference

📅 2025-10-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing code generation evaluation heavily relies on functional correctness (e.g., pass@k), neglecting non-functional aspects—such as readability, intent preservation, and instruction adherence—that strongly correlate with human preferences. Method: We propose Vibe Checker, the first framework to treat *instruction following* as a core evaluation dimension. It introduces VeriCode, a taxonomy of 30 verifiable instruction categories, and implements deterministic validators for automated, quantitative assessment. Our composite score jointly measures functional correctness and instruction-following capability. Contribution/Results: The composite score achieves a high correlation (ρ = 0.82) with human judgments. Experiments across 31 state-of-the-art LLMs reveal that even top-performing models exhibit significant deficits in multi-instruction adherence—often accompanied by functional degradation. Instruction-following capability emerges as a critical differentiator of model quality, establishing a more user-aligned evaluation paradigm for code generation.
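The deterministic validators described above can be pictured as simple programmatic checks that take generated code and return pass/fail for one verifiable instruction. The sketch below is illustrative only: the instruction names and check logic are hypothetical stand-ins, not the paper's actual VeriCode verifiers.

```python
import ast

# Hypothetical verifiers in the spirit of VeriCode: each one checks a
# single verifiable instruction against generated Python source code
# and returns True only if the code complies.

def verify_max_function_length(source: str, max_lines: int = 20) -> bool:
    """Instruction: 'Keep every function under max_lines lines.'"""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            if node.end_lineno - node.lineno + 1 > max_lines:
                return False
    return True

def verify_no_global_assignments(source: str) -> bool:
    """Instruction: 'Avoid module-level variable assignments.'"""
    tree = ast.parse(source)
    return not any(isinstance(node, ast.Assign) for node in tree.body)

def verify_all_functions_documented(source: str) -> bool:
    """Instruction: 'Every function must have a docstring.'"""
    tree = ast.parse(source)
    return all(
        ast.get_docstring(node) is not None
        for node in ast.walk(tree)
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
    )

sample = '''
def add(a, b):
    """Return the sum of a and b."""
    return a + b
'''
print(verify_no_global_assignments(sample))     # True
print(verify_all_functions_documented(sample))  # True
```

Because each check is deterministic (a pure function of the source code), adherence can be scored automatically and reproducibly, which is what makes instruction following quantifiable at benchmark scale.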

📝 Abstract
Large Language Models (LLMs) have catalyzed vibe coding, where users leverage LLMs to generate and iteratively refine code through natural language interactions until it passes their vibe check. Vibe check is tied to real-world human preference and goes beyond functionality: the solution should feel right, read cleanly, preserve intent, and remain correct. However, current code evaluation remains anchored to pass@k and captures only functional correctness, overlooking the non-functional instructions that users routinely apply. In this paper, we hypothesize that instruction following is the missing piece underlying vibe check that represents human preference in coding besides functional correctness. To quantify models' code instruction following capabilities with measurable signals, we present VeriCode, a taxonomy of 30 verifiable code instructions together with corresponding deterministic verifiers. We use the taxonomy to augment established evaluation suites, resulting in Vibe Checker, a testbed to assess both code instruction following and functional correctness. Upon evaluating 31 leading LLMs, we show that even the strongest models struggle to comply with multiple instructions and exhibit clear functional regression. Most importantly, a composite score of functional correctness and instruction following correlates the best with human preference, with the latter emerging as the primary differentiator on real-world programming tasks. Our work identifies core factors of the vibe check, providing a concrete path for benchmarking and developing models that better align with user preferences in coding.
Problem

Research questions and friction points this paper is trying to address.

Current code evaluation overlooks non-functional human preferences
Quantifying code instruction following capabilities beyond functional correctness
Aligning LLM code generation with real-world human coding preferences
Innovation

Methods, ideas, or system contributions that make the work stand out.

VeriCode taxonomy classifies 30 verifiable code instructions
Vibe Checker testbed evaluates instruction following and correctness
Composite scoring combines functional correctness with instruction following to track human preference
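The composite-scoring idea can be sketched as a weighted blend of a pass@k-style functional score and an instruction-following rate. The equal weighting below is an illustrative assumption for the sketch, not the paper's actual aggregation formula.

```python
def instruction_following_rate(results: list[bool]) -> float:
    """Fraction of verifiable instructions the generated code satisfies."""
    return sum(results) / len(results) if results else 1.0

def composite_score(functional: float, if_rate: float, alpha: float = 0.5) -> float:
    """Blend functional correctness (e.g. pass@k) with instruction following.

    alpha is a hypothetical weight chosen for this sketch; the paper's
    actual combination of the two signals may differ.
    """
    return alpha * functional + (1 - alpha) * if_rate

# Example: code passes all tests (functional = 1.0) but satisfies
# only 3 of 5 verifiable instructions (rate = 0.6).
score = composite_score(1.0, instruction_following_rate([True, True, True, False, False]))
print(score)  # 0.8
```

Under pass@k alone the example would score a perfect 1.0; the composite score penalizes the missed instructions, which is what lets it track human preference more closely than functional correctness by itself.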