🤖 AI Summary
Doubly intractable problems are Bayesian inference tasks in which both the likelihood and the posterior are available only in unnormalized form, with normalizing constants that cannot be computed analytically, so inference typically requires expensive specialized MCMC sampling. This paper introduces a generalized Bayesian conjugate inference framework for discrete-data exponential family models, yielding the first closed-form posterior approximations in discrete doubly intractable settings, with theoretical guarantees and substantial computational savings. The approach exploits the exponential family structure to design a generalized likelihood that induces a conjugate prior, giving analytical posterior updates that bypass evaluation of the normalizing constant entirely. The method is validated on Conway-Maxwell-Poisson graphical models, discrete autoregressive time series, and Ising/Potts Markov random fields, with empirical speedups of 10–6,000× over state-of-the-art Bayesian methods at comparable accuracy.
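For context, the closed-form conjugate updates the paper seeks to recover in the intractable case are standard when the normalizing constant *is* available. A textbook illustration (not the paper's method) is the Poisson-Gamma pair, where the posterior parameters are a one-line update:

```python
def poisson_gamma_update(counts, alpha, beta):
    """Conjugate update for a Poisson rate with a Gamma(alpha, beta) prior.

    The Poisson likelihood is a tractable exponential family, so the
    posterior is available in closed form: Gamma(alpha + sum of counts,
    beta + number of observations). No MCMC is needed.
    """
    return alpha + sum(counts), beta + len(counts)

# Observing counts [2, 3, 5] under a Gamma(1, 1) prior:
post_alpha, post_beta = poisson_gamma_update([2, 3, 5], 1.0, 1.0)
```

The paper's contribution is a generalized posterior that makes this style of analytical update possible even when the likelihood's normalizing constant cannot be computed.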
📝 Abstract
Doubly intractable problems occur when both the likelihood and the posterior are available only in unnormalised form, with computationally intractable normalisation constants. Bayesian inference then typically requires direct approximation of the posterior through specialised and typically expensive MCMC methods. In this paper, we provide a computationally efficient alternative in the form of a novel generalised Bayesian posterior that allows for conjugate inference within the class of exponential family models for discrete data. We derive theoretical guarantees to characterise the asymptotic behaviour of the generalised posterior, supporting its use for inference. The method is evaluated on a range of challenging intractable exponential family models, including the Conway-Maxwell-Poisson graphical model of multivariate count data, autoregressive discrete time series models, and Markov random fields such as the Ising and Potts models. The computational gains are significant; in our experiments, the method is between 10 and 6000 times faster than state-of-the-art Bayesian computational methods.
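To see why models such as the Ising model are doubly intractable, note that the exact normalising constant is a sum over every spin configuration, so brute-force evaluation scales as 2^n. The sketch below (an illustration of the problem setting, not of the paper's method) makes this explicit for a fully connected Ising model with a single interaction parameter:

```python
import itertools
import numpy as np

def ising_unnormalised(x, theta):
    """Unnormalised Ising likelihood on a fully connected graph:
    exp(theta * sum_{i<j} x_i * x_j), with spins x_i in {-1, +1}."""
    n = len(x)
    pairwise = sum(x[i] * x[j] for i in range(n) for j in range(i + 1, n))
    return np.exp(theta * pairwise)

def ising_normalising_constant(n, theta):
    """Exact Z(theta): a sum over all 2^n spin configurations.

    This exponential enumeration is what makes the likelihood intractable
    for realistic n, and the posterior over theta doubly intractable.
    """
    return sum(ising_unnormalised(x, theta)
               for x in itertools.product([-1, 1], repeat=n))
```

At theta = 0 every configuration has weight 1, so Z reduces to 2^n, which is a convenient sanity check; for n beyond a few dozen spins the enumeration is infeasible, motivating either specialised MCMC or, as in this paper, a generalised posterior that avoids Z entirely.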