muRelBench: MicroBenchmarks for Zonotope Domains

📅 2024-04-24
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Weakly relational abstract domains, and the zonotope domain in particular, lack standardized benchmarks, which hinders rigorous evaluation of algorithmic correctness, performance, and reproducibility. Method: we introduce a lightweight synthetic benchmark suite designed for the zonotope domain, targeting core operations such as domain closure. Grounded in abstract interpretation theory and formalized via precise semantic constraints, the framework automatically generates test programs with controllable complexity and formal semantic guarantees. Contribution/Results: this work establishes the first standardized micro-benchmarks for zonotope analysis, enabling scalable and formally verifiable experimental evaluation. Empirical evaluation shows that the benchmarks improve fairness, reproducibility, and diagnostic capability in comparative studies across diverse zonotope-based algorithms, filling a gap in the evaluation infrastructure for weakly relational abstract domains.

📝 Abstract
We present muRelBench, a suite of synthetic benchmarks for weakly-relational abstract domains and their operations. For example, the benchmarks can support experimental evaluation of proposed algorithms such as domain closure.
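To make the running example concrete: for weakly-relational domains, closure propagates implied constraints until the element is in canonical form. The sketch below is not from the paper; it illustrates the idea on a difference-bound-matrix representation (an assumption here) using a Floyd-Warshall pass over the min-plus semiring, deriving x - z <= 7 from x - y <= 3 and y - z <= 4.

```python
INF = float("inf")

def closure(dbm):
    """Close a difference-bound matrix: dbm[i][j] bounds v_i - v_j.
    Floyd-Warshall over (min, +) tightens every entry to the best
    bound derivable by chaining constraints."""
    n = len(dbm)
    m = [row[:] for row in dbm]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if m[i][k] + m[k][j] < m[i][j]:
                    m[i][j] = m[i][k] + m[k][j]
    return m

# Variables (x, y, z) with x - y <= 3 and y - z <= 4.
dbm = [[0, 3, INF],
       [INF, 0, 4],
       [INF, INF, 0]]
closed = closure(dbm)
# closure infers the transitive bound x - z <= 7
```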
Problem

Research questions and friction points this paper is trying to address.

Evaluating algorithms for numerical abstract domains
Quickly prototyping performance improvements in abstract domains
Ensuring correctness in synthetic benchmark operations
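One way to ensure correctness of a benchmarked operation, sketched below under the assumption of a DBM-style representation (names and the oracle design are illustrative, not the paper's): check that a candidate closure's output satisfies the triangle inequality, never loosens a bound, and is idempotent.

```python
import itertools

INF = float("inf")

def fw_closure(m):
    # Reference closure: Floyd-Warshall over the (min, +) semiring.
    n = len(m)
    c = [row[:] for row in m]
    for k, i, j in itertools.product(range(n), repeat=3):
        c[i][j] = min(c[i][j], c[i][k] + c[k][j])
    return c

def check_closure(closure, dbm):
    """Correctness oracle for a candidate closure implementation."""
    n = len(dbm)
    closed = closure(dbm)
    # result is closed: triangle inequality holds everywhere
    assert all(closed[i][j] <= closed[i][k] + closed[k][j]
               for i, k, j in itertools.product(range(n), repeat=3))
    # closure only tightens bounds, never loosens them
    assert all(closed[i][j] <= dbm[i][j]
               for i, j in itertools.product(range(n), repeat=2))
    # closing an already-closed element changes nothing
    assert closure(closed) == closed
    return True
```

Running `check_closure` against a reference implementation on each synthetic input gives a cheap differential test alongside the timing measurements.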
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extensible microbenchmarking framework for abstract domains
Enables quick prototyping of numerical domain algorithms
Includes correctness checks for synthetic benchmarks
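The shape of such a framework can be sketched as follows. This is a minimal illustration, not the paper's implementation: the generator parameters (`n`, `density`, `seed`) stand in for the paper's "controllable complexity," and `bench` is a generic best-of-N timer applicable to any domain operation.

```python
import random
import time

INF = float("inf")

def random_dbm(n, density=0.3, seed=0):
    """Generate a synthetic difference-bound matrix with controllable
    size and constraint density (hypothetical generator)."""
    rng = random.Random(seed)
    m = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j and rng.random() < density:
                m[i][j] = rng.randint(1, 10)
    return m

def bench(op, inputs, reps=3):
    """Best-of-reps wall-clock time of op applied to every input."""
    best = INF
    for _ in range(reps):
        start = time.perf_counter()
        for x in inputs:
            op(x)
        best = min(best, time.perf_counter() - start)
    return best

# Usage: time a candidate operation across a size sweep.
# for n in (8, 16, 32):
#     inputs = [random_dbm(n, seed=s) for s in range(10)]
#     print(n, bench(my_closure, inputs))
```

Because `bench` only assumes a callable, swapping in a new algorithm for comparison is a one-line change, which is what makes quick prototyping cheap.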