Tractable Sharpness-Aware Learning of Probabilistic Circuits

📅 2025-08-07
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Probabilistic circuits (PCs) often converge to sharp optima under data scarcity, degrading generalization performance. Method: We propose sharpness-aware tractable learning, the first method to enable efficient closed-form computation of the Hessian trace of the log-likelihood for PCs. Leveraging this, we design a gradient-norm regularization term and integrate it into both Expectation-Maximization and gradient-based learning frameworks to explicitly steer optimization toward flat minima, without incurring additional Hessian computation overhead. Contribution/Results: Our approach significantly enhances model robustness and generalization. Empirical evaluation across synthetic and multiple real-world datasets demonstrates consistent improvements in test log-likelihood, outperforming state-of-the-art regularization and ensemble baselines. This work establishes the first differentiable, computationally efficient sharpness-aware generalization enhancement framework for PC training.
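To ground the summary, here is a minimal PyTorch sketch of a gradient-norm regularizer of the kind described above, applied to a toy one-sum-node PC (a Gaussian mixture). The names `TinyMixturePC`, `sharpness_aware_loss`, and the penalty weight `lam` are illustrative choices, not the paper's implementation.

```python
import torch
import torch.nn as nn

class TinyMixturePC(nn.Module):
    """Toy stand-in for a PC: one sum node over k Gaussian leaves."""
    def __init__(self, k: int = 4):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(k))    # sum-node weights (log-space)
        self.means = nn.Parameter(torch.randn(k))     # leaf means
        self.log_stds = nn.Parameter(torch.zeros(k))  # leaf log-std-devs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """Per-example log-likelihood; x has shape (n,)."""
        log_w = torch.log_softmax(self.logits, dim=0)
        leaves = torch.distributions.Normal(self.means, self.log_stds.exp())
        comp_ll = leaves.log_prob(x.unsqueeze(-1))    # (n, k)
        return torch.logsumexp(log_w + comp_ll, dim=-1)

def sharpness_aware_loss(model: nn.Module, x: torch.Tensor, lam: float = 0.1):
    """NLL plus lam * ||grad log-likelihood||^2: the gradient-norm penalty
    that, per the summary, stands in for the Hessian trace."""
    ll = model(x).mean()
    grads = torch.autograd.grad(ll, list(model.parameters()), create_graph=True)
    penalty = sum((g ** 2).sum() for g in grads)
    return -ll + lam * penalty

model = TinyMixturePC()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x = torch.randn(256)                                  # toy data
for _ in range(100):
    opt.zero_grad()
    loss = sharpness_aware_loss(model, x)
    loss.backward()                                   # double backward through the penalty
    opt.step()
```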


๐Ÿ“ Abstract
Probabilistic Circuits (PCs) are a class of generative models that allow exact and tractable inference for a wide range of queries. While recent developments have enabled the learning of deep and expressive PCs, this increased capacity can often lead to overfitting, especially when data is limited. We analyze PC overfitting from a log-likelihood-landscape perspective and show that it is often caused by convergence to sharp optima that generalize poorly. Inspired by sharpness aware minimization in neural networks, we propose a Hessian-based regularizer for training PCs. As a key contribution, we show that the trace of the Hessian of the log-likelihood-a sharpness proxy that is typically intractable in deep neural networks-can be computed efficiently for PCs. Minimizing this Hessian trace induces a gradient-norm-based regularizer that yields simple closed-form parameter updates for EM, and integrates seamlessly with gradient based learning methods. Experiments on synthetic and real-world datasets demonstrate that our method consistently guides PCs toward flatter minima, improves generalization performance.
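To make the tractability claim concrete, the following is a derivation sketch (our reconstruction, not quoted from the paper) of why the Hessian trace collapses to a squared gradient norm. It assumes the circuit output p(x; θ) is linear in each individual parameter θ_i, a property of standard sum-weight parameterizations of smooth, decomposable PCs:

```latex
% Diagonal second derivative of the log-likelihood:
%   \partial_{\theta_i}^2 \log p
%     = \frac{\partial_{\theta_i}^2 p}{p}
%       - \left( \frac{\partial_{\theta_i} p}{p} \right)^{\!2}.
% If p(x;\theta) is linear in each \theta_i, then \partial_{\theta_i}^2 p = 0,
% and summing over i gives
\[
  \operatorname{tr}\!\bigl( \nabla_\theta^2 \log p(x;\theta) \bigr)
  = - \sum_i \left( \frac{\partial \log p(x;\theta)}{\partial \theta_i} \right)^{\!2}
  = - \bigl\| \nabla_\theta \log p(x;\theta) \bigr\|_2^2,
\]
% so penalizing the Hessian-trace sharpness proxy amounts to penalizing
% the squared gradient norm of the log-likelihood, with no explicit
% Hessian computation.
```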
Problem

Research questions and friction points this paper is trying to address.

Overfitting in deep Probabilistic Circuits due to sharp optima
Sharpness proxies such as the Hessian trace being typically intractable to compute in deep neural networks
Improving generalization by guiding PCs toward flatter minima
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hessian-based regularizer for training PCs
Efficient closed-form computation of the Hessian trace of the log-likelihood
Closed-form parameter updates for EM (a baseline EM step is sketched below)
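For reference on the EM side, here is a plain, unregularized EM step for the toy mixture from the earlier sketch. The paper's contribution is a regularized variant whose updates remain closed-form; this baseline does not attempt to reproduce those updates.

```python
import torch

def em_step(model: "TinyMixturePC", x: torch.Tensor) -> None:
    """One standard EM step for the toy one-sum-node PC defined above."""
    with torch.no_grad():
        log_w = torch.log_softmax(model.logits, dim=0)
        leaves = torch.distributions.Normal(model.means, model.log_stds.exp())
        log_joint = log_w + leaves.log_prob(x.unsqueeze(-1))  # (n, k)
        resp = torch.softmax(log_joint, dim=-1)               # E-step: responsibilities
        nk = resp.sum(dim=0) + 1e-8                           # soft counts per component
        model.logits.copy_(torch.log(nk / nk.sum()))          # M-step: mixture weights
        mu = (resp * x.unsqueeze(-1)).sum(dim=0) / nk         # M-step: means
        var = (resp * (x.unsqueeze(-1) - mu) ** 2).sum(dim=0) / nk
        model.means.copy_(mu)
        model.log_stds.copy_(0.5 * torch.log(var + 1e-8))
```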
Hrithik Suresh
Mehta Family School of Data Science and Artificial Intelligence, Department of Data Science, Indian Institute of Technology Palakkad, Kerala, India
Sahil Sidheekh
Ph.D. Student - The University of Texas at Dallas. Ex-Verisk AI, Ex-IIT Ropar
Generative Models, Meta-learning, Exact Inference, Tractable Probabilistic Models
Vishnu Shreeram M. P
Mehta Family School of Data Science and Artificial Intelligence, Department of Data Science, Indian Institute of Technology Palakkad, Kerala, India
Sriraam Natarajan
University of Texas at Dallas
Artificial Intelligence, Machine Learning, Statistical Relational Learning, Statistical Relational Artificial Intelligence, Rein
Narayanan C. Krishnan
Mehta Family School of Data Science and Artificial Intelligence, Department of Data Science, Indian Institute of Technology Palakkad, Kerala, India