🤖 AI Summary
This work addresses a limitation of traditional probabilistic circuits: their data-agnostic mixture weights make it hard to capture the local geometric structure of data manifolds. To incorporate geometric information explicitly, we introduce Voronoi tessellation into the sum nodes of probabilistic circuits, a novel approach in this domain. We propose a differentiable relaxation mechanism to enable gradient-based learning, design an approximate inference method with provable error bounds, and derive structural constraints under which exact inference is recovered. Experimental results on standard density estimation benchmarks demonstrate that our method effectively balances expressive geometric modeling with tractable inference.
📝 Abstract
Probabilistic circuits (PCs) enable exact and tractable inference but employ data-independent mixture weights that limit their ability to capture the local geometry of the data manifold. We propose Voronoi tessellations (VT) as a natural way to incorporate geometric structure directly into the sum nodes of a PC. However, naïvely introducing such structure breaks tractability. We formalize this incompatibility and develop two complementary solutions: (1) an approximate inference framework that provides guaranteed lower and upper bounds on inference queries, and (2) a structural condition on the VT under which exact tractable inference is recovered. Finally, we introduce a differentiable relaxation of VT that enables gradient-based learning and empirically validate the resulting approach on standard density estimation tasks.
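To make the core idea concrete, the following is a minimal sketch of data-dependent sum-node weights obtained from a softened Voronoi tessellation. The abstract does not specify the relaxation, so this assumes a common choice: a softmax over negative squared distances to a set of anchor points, which recovers the hard Voronoi assignment as the temperature goes to zero. The function names, the temperature parameter `tau`, and the anchor-based parameterization are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def soft_voronoi_weights(x, anchors, tau=1.0):
    """Data-dependent mixture weights from a softened Voronoi tessellation.

    A hard Voronoi tessellation assigns x entirely to its nearest anchor;
    the softmax over negative squared distances relaxes this assignment,
    making the weights differentiable in both x and the anchors.
    As tau -> 0 the hard (one-hot) assignment is recovered.
    """
    d2 = np.sum((anchors - x) ** 2, axis=1)  # squared distance to each anchor
    logits = -d2 / tau
    logits -= logits.max()                   # shift for numerical stability
    w = np.exp(logits)
    return w / w.sum()

def sum_node_density(x, anchors, component_log_densities, tau=1.0):
    """Density of a sum node whose weights depend on x via the tessellation.

    component_log_densities(x) returns one log-density per child component;
    unlike a standard PC sum node, the weights here vary with x.
    """
    w = soft_voronoi_weights(x, anchors, tau)
    return float(np.sum(w * np.exp(component_log_densities(x))))
```

Note that because the weights depend on the query point `x`, marginalizing a variable no longer factorizes the way it does for constant sum-node weights, which is exactly the tractability issue the abstract's bounded approximate inference and structural condition are designed to address.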