🤖 AI Summary
Current automated benchmarks cannot jointly evaluate both the security and the functional correctness of code generated by large language models (LLMs). To address this gap, the authors propose DUALGAUGE, the first fully automated framework for joint security-functionality assessment, together with DUALGAUGE-BENCH, a high-quality, human-verified test suite covering diverse vulnerability classes and functional scenarios. The framework combines an agent-based sandboxed executor for safe code execution with an LLM-driven dual-criterion evaluator that simultaneously assesses security violations and functional correctness. Evaluation across ten mainstream open- and closed-source LLMs reveals a prevalent security-functionality imbalance: models achieving high functional scores often exhibit elevated vulnerability rates. The complete framework and dataset are publicly released, establishing a reproducible, extensible paradigm for rigorous, holistic evaluation of LLM-generated code.
📝 Abstract
Large language models (LLMs) and autonomous coding agents are increasingly used to generate software across a wide range of domains. Yet a core requirement remains unmet: ensuring that generated code is secure without compromising its functional correctness. Existing benchmarks and evaluations for secure code generation fall short: many measure only vulnerability reduction, disregard correctness preservation, or evaluate security and functionality on separate datasets, violating the fundamental need for simultaneous joint evaluation. We present DUALGAUGE, the first fully automated benchmarking framework designed to rigorously evaluate the security and correctness of LLM-generated code in unison. Given the lack of datasets enabling joint evaluation of secure code generation, we also present DUALGAUGE-BENCH, a curated benchmark suite of diverse coding tasks, each paired with manually validated test suites for both security and functionality, designed for full coverage of specification requirements. At the core of DUALGAUGE is an agentic program executor, which runs a program against given tests in sandboxed environments, and an LLM-based evaluator, which assesses both correctness and vulnerability behavior against expected outcomes. We rigorously validated the quality of DUALGAUGE-BENCH and the accuracy of DUALGAUGE, and applied DUALGAUGE to benchmark ten leading LLMs on DUALGAUGE-BENCH across thousands of test scenarios. Our results reveal critical gaps in correct and secure code generation by these LLMs; our open-source system and datasets help accelerate progress toward closing them through reproducible, scalable, and rigorous evaluation.
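The abstract's central point is that security and functionality must be scored on the *same* generated program rather than on separate datasets. As a minimal illustration of what such a joint metric looks like, here is a hedged Python sketch; the names (`TestResult`, `joint_pass_rate`) are illustrative and not part of the DUALGAUGE API.

```python
# Hypothetical sketch of joint security-functionality scoring.
# A task counts as passing only if the SAME generated program is both
# functionally correct and free of triggered vulnerabilities.
from dataclasses import dataclass


@dataclass
class TestResult:
    task_id: str
    functional_pass: bool  # all functionality tests passed
    secure: bool           # no security test triggered a vulnerability


def joint_pass_rate(results: list[TestResult]) -> float:
    """Fraction of tasks that are both correct and secure.

    Scoring the two criteria on separate datasets (as some prior
    benchmarks do) cannot produce this quantity, which is why joint
    evaluation requires paired test suites per task.
    """
    if not results:
        return 0.0
    joint = sum(r.functional_pass and r.secure for r in results)
    return joint / len(results)


results = [
    TestResult("t1", True, True),    # correct and secure
    TestResult("t2", True, False),   # works, but vulnerable
    TestResult("t3", False, True),   # secure, but broken
]
print(joint_pass_rate(results))  # only t1 passes both criteria
```

Note how a model can look strong on either axis in isolation (2/3 functional, 2/3 secure) while only 1/3 of its programs satisfy both, which is exactly the trade-off the paper reports.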