Tuning Random Generators: Property-Based Testing as Probabilistic Programming

📅 2025-08-19
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
In property-based testing, manually tuning the weights of random generators limits how expressive the input distribution can be and hinders automation. Method: the paper proposes a target-function-driven, offline auto-tuning approach. Its core contribution is Loaded Dice, a differentiable discrete probabilistic programming system that enables symbolic weight modeling, gradient-guided parameter learning, and joint optimization of diversity and effectiveness. Generators are modeled as probabilistic programs with learnable weights, and end-to-end optimization is achieved by defining objective functions such as coverage or boundary-triggering rate. Results: experiments show that auto-tuned generators significantly improve test-case diversity and defect-detection effectiveness, speeding up average bug detection by 3.1-7.4x. The approach demonstrates both efficacy and generality in automating distribution shaping for property-based testing.

๐Ÿ“ Abstract
Property-based testing (PBT) validates software against an executable specification by evaluating it on randomly generated inputs. The standard way that PBT users generate test inputs is via generators, which describe how to sample inputs through a series of random choices. To achieve a good distribution over test inputs, users must tune their generators, i.e., decide on the weights of these individual random choices. Unfortunately, it is very difficult to understand how to choose individual generator weights in order to achieve a desired distribution, so today this process is tedious and limits the distributions that can be practically achieved. In this paper, we develop techniques for the automatic, offline tuning of generators. Given a generator with undetermined symbolic weights and an objective function, our approach automatically learns values for these weights that optimize for the objective. We describe useful objective functions that allow users to (1) target desired distributions and (2) improve the diversity and validity of their test cases. We have implemented our approach in a novel discrete probabilistic programming system, Loaded Dice, that supports differentiation and parameter learning, and we use it as a language for generators. We empirically demonstrate that our approach is effective at optimizing generator distributions according to the specified objective functions. We also perform a thorough evaluation on PBT benchmarks, demonstrating that, when automatically tuned for diversity and validity, the generators exhibit a 3.1-7.4x speedup in bug finding.
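The core recipe in the abstract (a generator with symbolic weights, an objective function over the induced distribution, and gradient-based learning of the weights) can be illustrated with a toy sketch. This is a minimal illustration in plain Python, not the paper's Loaded Dice language or API; the generator, objective, and closed-form gradient below are all assumptions chosen so the example is exact and self-contained:

```python
import math

# Hypothetical sketch (not Loaded Dice): a generator builds a list by
# repeatedly flipping a weighted coin. With probability p it stops;
# otherwise it appends one more element. The list length is therefore
# geometric with mean (1 - p) / p. We treat the logit w of the stop
# probability as a learnable weight and tune it by gradient descent so
# the expected length matches a target, mirroring the idea of
# optimizing symbolic generator weights against an objective function.

def sigmoid(w):
    return 1.0 / (1.0 + math.exp(-w))

def tune(target, steps=300, lr=0.1):
    """Learn a stop probability whose expected list length hits `target`."""
    w = 0.0  # logit of the stop probability, initially p = 0.5
    for _ in range(steps):
        # For this generator, E[len] = (1 - p) / p = exp(-w), so
        # log E[len] = -w, and the objective (log E[len] - log target)^2
        # has the exact gradient d/dw = 2 * (w + log(target)).
        grad = 2.0 * (w + math.log(target))
        w -= lr * grad
    return sigmoid(w)

p = tune(target=10.0)
print(p)               # learned stop probability, about 1/11
print((1.0 - p) / p)   # expected list length, about 10
```

Optimizing the log of the expected length rather than the length itself makes the objective quadratic in w, so plain gradient descent converges; real systems like the paper's handle arbitrary discrete programs by differentiating the exact symbolic probability instead.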
Problem

Research questions and friction points this paper is trying to address.

Automatically tuning generator weights for desired distributions
Optimizing test input diversity and validity via objective functions
Improving bug finding speed through automated generator tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automatic, offline tuning of generator weights
Symbolic weight optimization driven by objective functions
A discrete probabilistic programming system supporting differentiation and parameter learning