AutoBaxBuilder: Bootstrapping Code Security Benchmarking

📅 2025-12-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing code security benchmarks suffer from high manual construction costs, training-data contamination, narrow task coverage, and limited difficulty scaling, hindering rigorous evaluation of the security of LLM-generated code. This paper introduces AutoBaxBuilder, an end-to-end automated framework for generating code security benchmarks from scratch. Its pipeline integrates multi-stage prompt engineering, functional equivalence verification, vulnerability triggerability detection, adversarial test case generation, and fine-grained plausibility validation to avoid data contamination and enable dynamic difficulty adjustment. AutoBaxBuilder generates a high-quality security task in under two hours at a cost below USD 10 per task, and the resulting tasks are released as AutoBaxBench. Comparison against expert-crafted tasks shows that the generated benchmark achieves comparable discriminative power, overcoming key bottlenecks of manual benchmark construction while remaining scalable, faithful, and cost-efficient.
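The staged pipeline described above can be sketched as a small orchestration skeleton. This is a minimal, hypothetical illustration: the stage functions, data structures, and the fixed example task are all invented for clarity and stand in for LLM-driven steps in the actual system.

```python
# Hypothetical sketch of an automated benchmark-generation pipeline in the
# spirit of AutoBaxBuilder. Stage names and the Task structure are invented;
# each LLM-driven stage is stubbed with a fixed example.
from dataclasses import dataclass, field

@dataclass
class Task:
    description: str
    reference_impl: str
    functionality_tests: list = field(default_factory=list)
    exploits: list = field(default_factory=list)
    plausible: bool = False

def generate_task_spec() -> Task:
    # Stage 1: prompt an LLM for a fresh task spec plus a reference
    # solution (stubbed here), avoiding reuse of existing benchmark tasks.
    return Task(
        description="Serve files from a fixed directory over HTTP",
        reference_impl="def serve(path): ...",
    )

def add_functionality_tests(task: Task) -> None:
    # Stage 2: derive tests checking functional behaviour, validated
    # against the reference implementation for functional equivalence.
    task.functionality_tests.append("serve('index.html') returns file body")

def add_security_exploits(task: Task) -> None:
    # Stage 3: generate end-to-end exploits for a target weakness class
    # (here: path traversal) and keep only those that demonstrably
    # trigger against a deliberately vulnerable variant.
    task.exploits.append("serve('../../etc/passwd') must be rejected")

def plausibility_check(task: Task) -> bool:
    # Stage 4: fine-grained plausibility validation that the task ships
    # with both consistent functionality tests and triggerable exploits.
    return bool(task.functionality_tests) and bool(task.exploits)

def build_task() -> Task:
    task = generate_task_spec()
    add_functionality_tests(task)
    add_security_exploits(task)
    task.plausible = plausibility_check(task)
    return task
```

Only tasks passing the final plausibility gate would be admitted to the benchmark; rejected tasks would be regenerated.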

📝 Abstract
As LLMs see wide adoption in software engineering, the reliable assessment of the correctness and security of LLM-generated code is crucial. Notably, prior work has demonstrated that security is often overlooked, exposing that LLMs are prone to generating code with security vulnerabilities. These insights were enabled by specialized benchmarks, crafted through significant manual effort by security experts. However, relying on manually crafted benchmarks is insufficient in the long term, because benchmarks (i) naturally end up contaminating training data, (ii) must extend to new tasks to provide a more complete picture, and (iii) must increase in difficulty to challenge more capable LLMs. In this work, we address these challenges and present AutoBaxBuilder, a framework that generates tasks and tests for code security benchmarking from scratch. We introduce a robust pipeline with fine-grained plausibility checks, leveraging the code understanding capabilities of LLMs to construct functionality tests and end-to-end security-probing exploits. To confirm the quality of the generated benchmark, we conduct both a qualitative analysis and perform quantitative experiments, comparing it against tasks constructed by human experts. We use AutoBaxBuilder to construct entirely new tasks and release them to the public as AutoBaxBench, together with a thorough evaluation of the security capabilities of LLMs on these tasks. We find that a new task can be generated in under 2 hours, costing less than USD 10.
Problem

Research questions and friction points this paper is trying to address.

Manual construction of code security benchmarks demands significant expert effort and scales poorly
Published benchmarks inevitably end up contaminating LLM training data
Benchmarks must extend to new tasks and increase in difficulty to challenge more capable LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates code security tasks and tests entirely from scratch, without manual authoring
Leverages LLM code understanding to build functionality tests and end-to-end security-probing exploits
Produces a new task in under 2 hours for less than USD 10, far cheaper than manual construction
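To make the two artifact kinds concrete, the sketch below shows what a generated functionality test and security exploit might look like for a hypothetical path-traversal task. The function and test names are invented; they illustrate how an exploit can confirm a vulnerability is actually triggerable on an insecure candidate solution, as opposed to merely suspected.

```python
# Hypothetical generated artifacts for one task: a functionality test and a
# security-probing exploit, run here against an insecure candidate solution.
import os

def resolve_insecure(base_dir: str, name: str) -> str:
    # Candidate solution under test: joins paths without sanitisation,
    # so "../" sequences in `name` can escape base_dir.
    return os.path.normpath(os.path.join(base_dir, name))

def functionality_test(resolve) -> bool:
    # Functionality check: a benign filename resolves inside base_dir.
    expected = os.path.normpath(os.path.join("notes", "a.txt"))
    return resolve("notes", "a.txt") == expected

def security_exploit(resolve) -> bool:
    # Exploit: returns True when a traversal input escapes base_dir,
    # i.e. the vulnerability is demonstrably triggerable.
    escaped = resolve("notes", os.path.join("..", "secret"))
    inside = escaped == "notes" or escaped.startswith("notes" + os.sep)
    return not inside
```

A solution that passes the functionality test but also "passes" the exploit (as `resolve_insecure` does) would count as functionally correct yet insecure, which is exactly the distinction the benchmark is built to measure.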