Distribution Transformers: Fast Approximate Bayesian Inference With On-The-Fly Prior Adaptation

📅 2025-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Bayesian inference often relies on approximate methods because exact posterior computation is intractable; however, existing approaches are either computationally expensive or require retraining whenever the prior changes, hindering real-time sequential inference. This paper introduces the Distribution Transformer, a novel learnable architecture that maps between arbitrary probability distributions. It is the first to uniformly represent both priors and posteriors as Gaussian Mixture Models (GMMs), and it employs self-attention and cross-attention mechanisms for end-to-end distribution transformation, enabling online prior adaptation without retraining. Experiments demonstrate millisecond-scale inference latency (over 1,000× faster than conventional methods) while achieving log-likelihood performance competitive with or surpassing state-of-the-art baselines. The method is validated across diverse applications: sequential sensor fusion, quantum parameter estimation, and Gaussian process prediction with hyperpriors.

📝 Abstract
While Bayesian inference provides a principled framework for reasoning under uncertainty, its widespread adoption is limited by the intractability of exact posterior computation, necessitating the use of approximate inference. However, existing methods are often computationally expensive, or demand costly retraining when priors change, limiting their utility, particularly in sequential inference problems such as real-time sensor fusion. To address these challenges, we introduce the Distribution Transformer, a novel architecture that can learn arbitrary distribution-to-distribution mappings. Our method can be trained to map a prior to the corresponding posterior, conditioned on some dataset, thus performing approximate Bayesian inference. Our novel architecture represents a prior distribution as a (universally approximating) Gaussian Mixture Model (GMM) and transforms it into a GMM representation of the posterior. The components of the GMM attend to each other via self-attention, and to the datapoints via cross-attention. We demonstrate that Distribution Transformers both maintain the flexibility to vary the prior and significantly reduce computation times (from minutes to milliseconds) while achieving log-likelihood performance on par with or superior to existing approximate inference methods across tasks such as sequential inference, quantum system parameter inference, and Gaussian Process predictive posterior inference with hyperpriors.
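At a high level, the abstract describes: embed each prior-GMM component as a token, let the component tokens self-attend and cross-attend to embedded datapoints, then decode posterior GMM parameters. The PyTorch sketch below is one illustrative reading of that description; all names, dimensions, and layer choices are our assumptions, not the authors' implementation.

```python
# Illustrative sketch of a Distribution-Transformer-style model (PyTorch).
# Hypothetical names and sizes; not the paper's actual code.
import torch
import torch.nn as nn


class ComponentBlock(nn.Module):
    """One layer: GMM-component tokens self-attend, then cross-attend to data."""

    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )
        self.norm1, self.norm2, self.norm3 = (nn.LayerNorm(d_model) for _ in range(3))

    def forward(self, comp, data):
        h, _ = self.self_attn(comp, comp, comp)      # components attend to each other
        x = self.norm1(comp + h)
        h, _ = self.cross_attn(x, data, data)        # ...and to the datapoints
        x = self.norm2(x + h)
        return self.norm3(x + self.ff(x))


class DistributionTransformer(nn.Module):
    """Maps a prior GMM plus a dataset to posterior GMM parameters.

    Attention is size-agnostic, so the same trained weights accept any number
    of mixture components K and datapoints N -- the prior can change online.
    """

    def __init__(self, dim=1, d_model=64, n_layers=2):
        super().__init__()
        self.dim = dim
        # each prior component is a token: (weight, mean, log-variance)
        self.embed_comp = nn.Linear(1 + 2 * dim, d_model)
        self.embed_data = nn.Linear(dim, d_model)
        self.blocks = nn.ModuleList([ComponentBlock(d_model) for _ in range(n_layers)])
        self.head = nn.Linear(d_model, 1 + 2 * dim)  # decode posterior parameters

    def forward(self, prior_params, data):
        # prior_params: (B, K, 1 + 2*dim), data: (B, N, dim)
        c = self.embed_comp(prior_params)
        d = self.embed_data(data)
        for blk in self.blocks:
            c = blk(c, d)
        out = self.head(c)
        weights = torch.softmax(out[..., 0], dim=-1)  # mixture weights sum to 1
        means = out[..., 1:1 + self.dim]
        log_vars = out[..., 1 + self.dim:]
        return weights, means, log_vars
```

Because nothing in the model depends on K, a new prior with a different number of components can be passed at inference time without any retraining, which is the "on-the-fly prior adaptation" property the paper emphasizes.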
Problem

Research questions and friction points this paper is trying to address.

Sidesteps the intractability of exact posterior computation via learned approximate inference.
Reduces computational cost in sequential inference tasks.
Enables on-the-fly adaptation to changing prior distributions.
Innovation

Methods, ideas, or system contributions that make the work stand out.

GMM-to-GMM mapping: both prior and posterior represented as Gaussian Mixture Models
Self-attention among GMM components and cross-attention to datapoints
Real-time (millisecond-scale) prior adaptation without retraining
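Since both the prior and the approximate posterior live in the same GMM family, evaluating the log-likelihoods reported in the paper reduces to scoring a Gaussian mixture. A minimal, numerically stable 1-D sketch (NumPy; the function name is ours, not from the paper):

```python
# Stable log-density of a 1-D Gaussian mixture via log-sum-exp (NumPy sketch).
import numpy as np


def gmm_logpdf(x, weights, means, variances):
    """log sum_k w_k * N(x; mu_k, var_k), vectorised over points x."""
    x = np.asarray(x, dtype=float)[..., None]          # broadcast over K components
    log_norm = -0.5 * np.log(2 * np.pi * variances)    # per-component constant
    log_comp = log_norm - 0.5 * (x - means) ** 2 / variances
    a = np.log(weights) + log_comp                     # (..., K) joint log-terms
    m = a.max(axis=-1, keepdims=True)                  # log-sum-exp for stability
    return (m + np.log(np.exp(a - m).sum(axis=-1, keepdims=True))).squeeze(-1)


# K = 1 recovers a single Gaussian: the standard normal log-density at 0
lp = gmm_logpdf(0.0, np.array([1.0]), np.array([0.0]), np.array([1.0]))
```

The log-sum-exp trick matters here: taking `log(sum(exp(...)))` naively underflows when component log-densities are very negative, which is routine once posteriors concentrate.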