DiffSpec: Differential Testing with LLMs using Natural Language Specifications and Code Artifacts

📅 2024-10-05
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses behavioral inconsistencies across multiple implementations of systems such as eBPF runtimes and WebAssembly (Wasm) validators, which arise from ambiguities in natural-language specifications or from divergences between implementations. The authors propose the first LLM-driven prompt-chaining framework that jointly leverages natural-language specification understanding, code-aware semantic analysis, and differential-testing objectives. Given specification documents and source-code artifacts, a multi-stage prompt chain guides large language models to synthesize highly discriminative differential test cases. On eBPF, the approach generated 1,901 differential test cases and uncovered four confirmed bugs, including kernel memory leaks and infinite loops. On Wasm, it produced 299 differentiating test cases that exposed two vulnerabilities, since patched. To the authors' knowledge, this is the first end-to-end automated approach that translates natural-language specifications into executable differential tests, significantly improving cross-implementation bug detection.

📝 Abstract
Differential testing can be an effective way to find bugs in software systems with multiple implementations that conform to the same specification, like compilers, network protocol parsers, or language runtimes. Specifications for such systems are often standardized in natural language documents, like Instruction Set Architecture (ISA) specifications or IETF RFCs. Large Language Models (LLMs) have demonstrated potential in both generating tests and handling large volumes of natural language text, making them well-suited for analyzing artifacts like specification documents, bug reports, and code implementations. In this work, we leverage natural language and code artifacts to guide LLMs to generate targeted tests that highlight meaningful behavioral differences between implementations, including those corresponding to bugs. We introduce DiffSpec, a framework for generating differential tests with LLMs using prompt chaining. We demonstrate DiffSpec's efficacy on two different (extensively tested) systems, eBPF runtimes and Wasm validators. Using DiffSpec, we generated 1,901 differentiating tests, uncovering at least four distinct and confirmed bugs in eBPF, including a kernel memory leak, inconsistent behavior in jump instructions, undefined behavior when using the stack pointer, and tests with infinite loops that hang the verifier in ebpf-for-windows. We also found 299 differentiating tests in Wasm validators pointing to two confirmed and fixed bugs.
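The core idea in the abstract, running the same input through several implementations of one specification and flagging disagreements, can be sketched in a few lines. This is a minimal illustrative harness, not DiffSpec itself: the two toy "validators" and their stack-slot rule are invented stand-ins for real Wasm validators, and a real harness would wrap external binaries instead.

```python
# Minimal differential-testing sketch. The two validators below are toy
# stand-ins (hypothetical acceptance rules), not real Wasm validators.

def validator_a(program):
    # Toy rule: accept programs using at most 4 stack slots.
    return "reject" if program.get("stack_slots", 0) > 4 else "accept"

def validator_b(program):
    # Toy rule with a deliberate off-by-one divergence: at most 3 slots.
    return "reject" if program.get("stack_slots", 0) > 3 else "accept"

def differential_test(test_cases, implementations):
    """Return (case, verdicts) pairs on which implementations disagree."""
    differentiating = []
    for case in test_cases:
        verdicts = {name: impl(case) for name, impl in implementations.items()}
        if len(set(verdicts.values())) > 1:  # any disagreement is a finding
            differentiating.append((case, verdicts))
    return differentiating

cases = [{"stack_slots": n} for n in range(6)]
findings = differential_test(cases, {"A": validator_a, "B": validator_b})
print(findings)  # only the 4-slot program splits the two validators
```

Each differentiating test is then a candidate bug report: at least one implementation deviates from the spec on that input, though deciding which one requires consulting the specification.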
Problem

Research questions and friction points this paper is trying to address.

Generating targeted tests for multiple software implementations
Leveraging LLMs to analyze natural language specifications and code
Finding bugs in eBPF runtimes and Wasm validators
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverage LLMs to analyze natural language specifications
Generate targeted tests via prompt chaining framework
Uncover bugs in eBPF and Wasm via differential testing
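The prompt-chaining idea in the bullets above can be sketched as a staged pipeline: understand the spec, relate it to code, then emit a concrete test. This is an illustrative skeleton under loose assumptions; `call_llm` is a placeholder for any chat-completion API, and the stage prompts are invented, not the paper's actual prompts.

```python
# Sketch of a spec-and-code-driven prompt chain. `call_llm` is a stub;
# in practice it would invoke an LLM API. All prompt wording is illustrative.

def call_llm(prompt: str) -> str:
    # Placeholder standing in for a real model call.
    return f"[model output for: {prompt[:40]}...]"

def prompt_chain(spec_excerpt: str, source_snippet: str) -> str:
    # Stage 1: extract testable behaviors from the natural-language spec.
    behaviors = call_llm(
        "List observable behaviors defined by this specification excerpt:\n"
        + spec_excerpt
    )
    # Stage 2: relate those behaviors to the implementation code and
    # flag spots where implementations might plausibly diverge.
    analysis = call_llm(
        "Given these behaviors:\n" + behaviors
        + "\nand this implementation:\n" + source_snippet
        + "\nidentify behaviors whose handling may differ across implementations."
    )
    # Stage 3: synthesize a concrete program exercising the divergence,
    # to be fed to the differential-testing harness.
    return call_llm(
        "Write a minimal test program exercising this potential divergence:\n"
        + analysis
    )

print(prompt_chain("JMP with offset 0 falls through.", "case BPF_JMP: ..."))
```

Chaining narrow stages like this, rather than asking for a test in one shot, lets each prompt focus the model on one artifact (spec text, then code, then test synthesis), which is the structural idea the summary attributes to the framework.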