ExplainFuzz: Explainable and Constraint-Conditioned Test Generation with Probabilistic Circuits

📅 2026-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing test generation methods struggle to simultaneously capture realistic data distributions and context-sensitive dependencies while remaining interpretable and supporting user-specified constraints. This work proposes the first integration of probabilistic circuits (PCs) into grammar-aware test generation: a context-free grammar–guided PC is constructed that learns structured input distributions from real-world data. The approach enables conditional sampling to generate interpretable test cases that satisfy user-defined constraints. Compared to probabilistic context-free grammars (pCFGs), large language models, and mutation-based fuzzing, the method substantially reduces perplexity and enhances both the realism and diversity of generated inputs. Empirical evaluation demonstrates significant improvements in bug-finding effectiveness: vulnerability-triggering rates increase from 35% to 63% on SQL inputs and from 10% to 100% on XML inputs.
📝 Abstract
Understanding and explaining the structure of generated test inputs is essential for effective software testing and debugging. Existing approaches, including grammar-based fuzzers, probabilistic Context-Free Grammars (pCFGs), and Large Language Models (LLMs), suffer from critical limitations. They frequently produce ill-formed inputs that fail to reflect realistic data distributions, struggle to capture context-sensitive probabilistic dependencies, and lack explainability. We introduce ExplainFuzz, a test generation framework that leverages Probabilistic Circuits (PCs) to learn and query structured distributions over grammar-based test inputs interpretably and controllably. Starting from a Context-Free Grammar (CFG), ExplainFuzz compiles a grammar-aware PC and trains it on existing inputs. New inputs are then generated via sampling. ExplainFuzz utilizes the conditioning capability of PCs to incorporate test-specific constraints (e.g., a query must have GROUP BY), enabling constrained probabilistic sampling to generate inputs satisfying both the grammar and user-provided constraints. Our results show that ExplainFuzz improves the coherence and realism of generated inputs, achieving significant perplexity reduction compared to pCFGs, grammar-unaware PCs, and LLMs. By leveraging its native conditioning capability, ExplainFuzz significantly enhances the diversity of inputs that satisfy a user-provided constraint. Compared to grammar-aware mutational fuzzing, ExplainFuzz increases bug-triggering rates from 35% to 63% in SQL and from 10% to 100% in XML. These results demonstrate the power of a learned input distribution over mutational fuzzing, which is often limited to exploring the local neighborhood of seed inputs. These capabilities highlight the potential of PCs to serve as a foundation for grammar-aware, controllable test generation that captures context-sensitive, probabilistic dependencies.
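To make the idea of constraint-conditioned grammar sampling concrete, here is a minimal illustrative sketch. It is not the paper's PC implementation: the toy grammar, production probabilities, and rejection-sampling loop are all invented stand-ins. A real PC would answer the conditional query exactly in a single pass; rejection sampling is used here only to show what "sample inputs that satisfy the grammar and a user constraint (e.g., the query must have GROUP BY)" means operationally.

```python
import random

# Toy probabilistic grammar over simplified SQL queries (illustrative only;
# rules and probabilities are invented, not taken from the paper).
RULES = {
    "query": [(["SELECT", "cols", "FROM", "t", "suffix"], 1.0)],
    "cols": [(["*"], 0.5), (["a", ",", "b"], 0.5)],
    "suffix": [([], 0.6), (["GROUP", "BY", "a"], 0.4)],
}

def sample(symbol="query", rng=random):
    """Expand a nonterminal by sampling one weighted production,
    recursing on any symbols that are themselves nonterminals."""
    expansions, weights = zip(*RULES[symbol])
    chosen = rng.choices(expansions, weights=weights)[0]
    tokens = []
    for tok in chosen:
        tokens.extend(sample(tok, rng) if tok in RULES else [tok])
    return tokens

def conditional_sample(predicate, max_tries=1000, rng=random):
    """Sample until the user constraint holds. A PC conditions exactly
    instead of rejecting, so rare constraints stay cheap to satisfy."""
    for _ in range(max_tries):
        tokens = sample(rng=rng)
        if predicate(tokens):
            return " ".join(tokens)
    raise RuntimeError("constraint too restrictive for rejection sampling")

# User constraint: the generated query must contain a GROUP BY clause.
q = conditional_sample(lambda toks: "GROUP" in toks)
print(q)
```

Every printed query is grammatical and contains GROUP BY; the contrast the paper draws is that a grammar-aware PC reaches this conditional distribution directly, whereas rejection over a pCFG degrades as constraints get rarer.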
Problem

Research questions and friction points this paper is trying to address.

test generation
explainability
probabilistic dependencies
grammar-based fuzzing
input realism
Innovation

Methods, ideas, or system contributions that make the work stand out.

Probabilistic Circuits
Explainable Test Generation
Constraint-Conditioned Sampling
Grammar-Aware Fuzzing
Context-Sensitive Dependencies