🤖 AI Summary
This paper identifies a systematic reliability failure in large language models (LLMs) on code-requirement alignment verification: LLMs frequently misclassify correct implementations as non-compliant, and more complex prompts, such as those requesting explanations or corrective suggestions, exacerbate this error. Method: We conduct the first systematic identification and quantification of this defect using a unified prompt template, performing comprehensive prompt engineering, attribution analysis, and cross-model benchmarking on mainstream code-requirement alignment datasets. Contribution/Results: We propose two lightweight prompt optimization strategies that significantly reduce false-negative rates, and empirical evaluation demonstrates their practical efficacy in automated code review and task-oriented agent deployment. Our findings provide novel insights into LLM trustworthiness for code understanding and verification tasks, offering actionable, deployable improvements for building reliable LLM-based software engineering tools.
📝 Abstract
Large language models (LLMs) have become essential tools in software development, widely used for requirements engineering, code generation, and code review. Software engineers often rely on LLMs to assess whether a code implementation satisfies its task requirements, thereby improving code robustness and accuracy. However, it remains unclear whether LLMs can reliably determine whether code fully complies with a given task description, which is usually a natural language specification. In this paper, we uncover a systematic failure of LLMs in evaluating whether code aligns with natural language requirements. Specifically, using widely adopted benchmarks, we employ unified prompts to judge code correctness. Our results reveal that LLMs frequently misclassify correct code implementations as either "not satisfying requirements" or containing potential defects. Surprisingly, more complex prompting, especially prompt engineering techniques that request explanations and proposed corrections, leads to higher misjudgment rates, which highlights critical reliability issues in using LLMs as code review assistants. We further analyze the root causes of these misjudgments and propose two improved prompting strategies for mitigation. For the first time, our findings reveal previously unrecognized limitations of LLMs in matching code with requirements. We also offer novel insights and practical guidance for the effective use of LLMs in automated code review and task-oriented agent scenarios.
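To make the setup concrete, the sketch below shows what a "unified prompt" for code-requirement alignment judgment might look like. This is an illustrative assumption, not the paper's actual template: the function name `build_alignment_prompt`, the flag `ask_explanation`, and the exact wording are all hypothetical. The flag contrasts the minimal binary-verdict prompt with the richer explanation-and-correction variant that, per the abstract, tends to increase misjudgments.

```python
# Hypothetical sketch of a unified code-requirement alignment prompt.
# Not the paper's exact template; names and wording are assumptions.

def build_alignment_prompt(requirement: str, code: str,
                           ask_explanation: bool = False) -> str:
    """Compose a judgment prompt for an LLM.

    ask_explanation=False -> minimal binary-verdict prompt.
    ask_explanation=True  -> richer variant requesting reasoning and fixes,
                             the style the paper links to higher error rates.
    """
    prompt = (
        "Task requirement:\n"
        f"{requirement}\n\n"
        "Code implementation:\n"
        f"{code}\n\n"
        "Does the code fully satisfy the requirement? Answer YES or NO."
    )
    if ask_explanation:
        prompt += (
            " Then explain your reasoning and suggest corrections"
            " if the code is non-compliant."
        )
    return prompt

if __name__ == "__main__":
    requirement = "Return the sum of a list of integers."
    code = "def total(xs):\n    return sum(xs)"
    print(build_alignment_prompt(requirement, code))
```

In this framing, the same (requirement, code) pair can be sent through both variants to measure how often the verdict flips from YES to NO once explanations are requested.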