🤖 AI Summary
Weakly relational abstract domains, and the zonotope domain in particular, lack standardized benchmarks, which hinders rigorous evaluation of algorithmic correctness, performance, and reproducibility.
Method: We introduce the first lightweight synthetic benchmark suite designed specifically for the zonotope domain, targeting core operations such as domain closure. Grounded in abstract interpretation theory, the framework automatically generates test programs with controllable complexity and formal semantic guarantees.
Contribution/Results: This work establishes the first standardized micro-benchmarks for zonotope analysis, enabling scalable and formally verifiable experimental evaluation. Empirical evaluation demonstrates that the benchmarks improve fairness, reproducibility, and diagnostic capability in comparative studies across diverse zonotope-based algorithms, filling a critical gap in the evaluation infrastructure for weakly relational abstract domains.
📝 Abstract
We present muRelBench, a suite of synthetic benchmarks for weakly-relational abstract domains and their operations. For example, the benchmarks can support experimental evaluations of proposed algorithms for operations such as domain closure.
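To make the kind of operation such benchmarks exercise concrete, the sketch below shows a classic *domain closure*: the shortest-path (Floyd-Warshall) closure of a difference-bound matrix, a standard weakly relational representation. This is purely illustrative and is not muRelBench's implementation; the generator function and its parameters (`random_dbm`, `density`) are hypothetical stand-ins for the suite's synthetic benchmark generation.

```python
# Illustrative sketch only: a shortest-path closure over a difference-bound
# matrix (DBM), the canonical closure operation for weakly relational
# domains. NOT muRelBench's code; `random_dbm` is a hypothetical generator.
import math
import random


def close(dbm):
    """Floyd-Warshall closure of an n x n DBM, in place."""
    n = len(dbm)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                via = dbm[i][k] + dbm[k][j]
                if via < dbm[i][j]:
                    dbm[i][j] = via
    return dbm


def random_dbm(n, seed=0, density=0.5):
    """Hypothetical synthetic benchmark instance: a random DBM with
    nonnegative constraint bounds (avoids negative cycles)."""
    rng = random.Random(seed)
    m = [[0 if i == j else math.inf for j in range(n)] for i in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j and rng.random() < density:
                m[i][j] = rng.randint(0, 20)
    return m


closed = close(random_dbm(4, seed=42))
# Closure is idempotent: closing an already-closed DBM changes nothing.
assert close([row[:] for row in closed]) == closed
```

After closure, the matrix satisfies the triangle inequality `m[i][j] <= m[i][k] + m[k][j]` for all indices, which is exactly the semantic property a closure benchmark would check on generated instances.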