🤖 AI Summary
Probabilistic circuits (PCs) often converge to sharp optima under data scarcity, degrading generalization performance.
Method: We propose sharpness-aware tractable learning, the first method to enable efficient closed-form computation of the Hessian trace of the log-likelihood for PCs. Leveraging this, we design a gradient-norm regularization term and integrate it into both Expectation-Maximization and gradient-based learning frameworks to explicitly steer optimization toward flat minima, without incurring additional Hessian computation overhead.
Contribution/Results: Our approach significantly enhances model robustness and generalization. Empirical evaluation on synthetic and multiple real-world datasets demonstrates consistent improvements in test log-likelihood, outperforming state-of-the-art regularization and ensemble baselines. This work establishes the first differentiable, computationally efficient sharpness-aware framework for improving generalization in PC training.
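The gradient-norm regularization described above can be sketched on a toy shallow PC (a mixture of two product-of-Bernoulli components). This is a minimal illustration, not the paper's method: the dataset, parameterization, and the finite-difference gradient (standing in for autodiff or EM updates) are all assumptions made here for demonstration.

```python
import math

# Toy "PC": a mixture of two product-of-Bernoulli components over two binary
# variables. theta = [w, p11, p12, p21, p22], where w is the weight of
# component 1. All names and values below are illustrative, not the paper's.

DATA = [(0, 0), (0, 1), (0, 0), (1, 0)]

def log_likelihood(theta, data=DATA):
    w, p11, p12, p21, p22 = theta
    ll = 0.0
    for x1, x2 in data:
        c1 = (p11 if x1 else 1 - p11) * (p12 if x2 else 1 - p12)
        c2 = (p21 if x1 else 1 - p21) * (p22 if x2 else 1 - p22)
        ll += math.log(w * c1 + (1 - w) * c2)
    return ll

def grad(f, theta, eps=1e-6):
    """Central finite-difference gradient (a stand-in for autodiff)."""
    g = []
    for i in range(len(theta)):
        tp, tm = list(theta), list(theta)
        tp[i] += eps
        tm[i] -= eps
        g.append((f(tp) - f(tm)) / (2 * eps))
    return g

def regularized_loss(theta, lam=0.1):
    # Negative log-likelihood plus a squared-gradient-norm sharpness penalty,
    # the regularizer induced by minimizing the Hessian trace.
    g = grad(log_likelihood, theta)
    return -log_likelihood(theta) + lam * sum(gi * gi for gi in g)

theta0 = [0.5, 0.5, 0.5, 0.5, 0.5]
# At the uniform point every data point has likelihood 0.25.
print(log_likelihood(theta0))   # ≈ -5.545 (= 4 * ln 0.25)
print(regularized_loss(theta0))
```

The penalty is always nonnegative, so the regularized loss upper-bounds the plain negative log-likelihood; the weight `lam` trades likelihood fit against flatness of the optimum.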
📝 Abstract
Probabilistic Circuits (PCs) are a class of generative models that allow exact and tractable inference for a wide range of queries. While recent developments have enabled the learning of deep and expressive PCs, this increased capacity can often lead to overfitting, especially when data is limited. We analyze PC overfitting from a log-likelihood-landscape perspective and show that it is often caused by convergence to sharp optima that generalize poorly. Inspired by sharpness-aware minimization in neural networks, we propose a Hessian-based regularizer for training PCs. As a key contribution, we show that the trace of the Hessian of the log-likelihood, a sharpness proxy that is typically intractable in deep neural networks, can be computed efficiently for PCs. Minimizing this Hessian trace induces a gradient-norm-based regularizer that yields simple closed-form parameter updates for EM and integrates seamlessly with gradient-based learning methods. Experiments on synthetic and real-world datasets demonstrate that our method consistently guides PCs toward flatter minima and improves generalization performance.
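One way to see why the Hessian trace reduces to a gradient-norm term for PCs: a sum node's output p(x) = Σᵢ wᵢ fᵢ(x) is linear in each weight, so ∂²p/∂wᵢ² = 0 and hence ∂² log p/∂wᵢ² = −(∂ log p/∂wᵢ)². The sketch below checks this identity numerically on a single two-child sum node; the child values and weight are made-up toy numbers, and this illustrates only the sum-weight diagonal, not the paper's full closed-form computation.

```python
import math

# A single sum node: p = w1 * F1 + (1 - w1) * F2, with fixed child values.
# Since p is linear in w1, the second derivative of log p in w1 is exactly
# minus the squared first derivative. Toy values below are illustrative.
F1, F2 = 0.7, 0.2   # fixed child outputs f_1(x), f_2(x) at some input x

def log_p(w1):
    return math.log(w1 * F1 + (1.0 - w1) * F2)

w1, eps = 0.4, 1e-4
p = w1 * F1 + (1.0 - w1) * F2
g = (F1 - F2) / p   # analytic first derivative of log p w.r.t. w1

# Second derivative via a central finite difference
h = (log_p(w1 + eps) - 2 * log_p(w1) + log_p(w1 - eps)) / eps**2

print(h, -(g * g))  # the two agree up to finite-difference error
```

Because the second derivative of log-likelihood in each sum weight is minus a squared gradient component, penalizing the (negative) Hessian trace and penalizing the squared gradient norm coincide on these parameters, which is what makes the regularizer cheap: it reuses gradients the learner already computes.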