A Computational Framework and Implementation of Implicit Priors in Bayesian Inverse Problems

📅 2025-09-15
🤖 AI Summary
Implicit priors in Bayesian inverse problems lack systematic modeling and a unified computational framework. Method: This paper introduces the first general computational framework that rigorously distinguishes implicit from explicit priors, both conceptually and in implementation. Building on this framework, we extend the open-source CUQIpy platform to integrate representative implicit prior methods, including Plug-and-Play (PnP) priors and Regularized Linear Randomize-then-Optimize (RLRTO), and to support Markov chain Monte Carlo (MCMC) sampling and uncertainty quantification. Contribution/Results: Evaluated on canonical inverse problems, namely image reconstruction and PDE parameter estimation, the framework significantly enhances flexibility in modeling complex priors, improves computational efficiency, and strengthens uncertainty characterization. It establishes a scalable, reproducible, and general-purpose paradigm for data-driven Bayesian inversion with implicit priors.
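For the linear-Gaussian special case, the Randomize-then-Optimize idea underlying RLRTO can be sketched in a few lines: each posterior sample is obtained by solving a least-squares problem in which both the data and the prior mean are randomly perturbed. The NumPy sketch below is illustrative only (problem sizes, noise level `sigma`, and prior precision `delta` are made-up values, and plain NumPy is used rather than CUQIpy's API); it shows classical, unregularized RTO, which is exact for linear-Gaussian models, whereas RLRTO additionally incorporates regularization into the optimization step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small linear forward model y = A x + noise (illustrative sizes)
n, m = 20, 30
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
sigma = 0.1   # noise standard deviation (assumed)
delta = 1.0   # prior precision (assumed)
y = A @ x_true + sigma * rng.standard_normal(m)

def rto_sample():
    """One exact posterior sample in the linear-Gaussian case:
    solve a least-squares problem with perturbed data and prior mean."""
    e = sigma * rng.standard_normal(m)             # perturb the data
    d = rng.standard_normal(n) / np.sqrt(delta)    # perturb the prior mean
    # Stack likelihood and prior terms into one least-squares system
    M = np.vstack([A / sigma, np.sqrt(delta) * np.eye(n)])
    b = np.concatenate([(y + e) / sigma, np.sqrt(delta) * d])
    x, *_ = np.linalg.lstsq(M, b, rcond=None)
    return x

samples = np.array([rto_sample() for _ in range(500)])
x_mean = samples.mean(axis=0)   # posterior mean estimate
x_std = samples.std(axis=0)     # pointwise uncertainty estimate
```

Because each sample only requires an optimization solve, the scheme parallelizes trivially across samples, which is one reason RTO-type methods are attractive alternatives to sequential MCMC.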

📝 Abstract
Solving Bayesian inverse problems typically involves deriving a posterior distribution using Bayes' rule, followed by sampling from this posterior for analysis. Sampling methods, such as general-purpose Markov chain Monte Carlo (MCMC), are commonly used, but they require prior and likelihood densities to be explicitly provided. In cases where expressing the prior explicitly is challenging, implicit priors offer an alternative, encoding prior information indirectly. These priors have gained increased interest in recent years, with methods like Plug-and-Play (PnP) priors and Regularized Linear Randomize-then-Optimize (RLRTO) providing computationally efficient alternatives to standard MCMC algorithms. However, the abstract concept of implicit priors for Bayesian inverse problems has yet to be systematically explored, and little effort has been made to unify different kinds of implicit priors. This paper presents a computational framework for implicit priors and their distinction from explicit priors. We also introduce an implementation of various implicit priors within the CUQIpy Python package for Computational Uncertainty Quantification in Inverse Problems. Using this implementation, we showcase several implicit prior techniques by applying them to a variety of different inverse problems, from image processing to parameter estimation in partial differential equations.
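The Plug-and-Play idea mentioned in the abstract, replacing the proximal operator of an explicit prior with a black-box denoiser, can be sketched with a proximal-gradient iteration. The sketch below is generic, not the paper's implementation: it uses soft-thresholding as a stand-in denoiser (a real PnP method would plug in a learned or off-the-shelf denoiser), and all sizes and step parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear inverse problem y = A x + noise with a sparse ground truth
n, m = 50, 40
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 3.0 * rng.standard_normal(5)
y = A @ x_true + 0.01 * rng.standard_normal(m)

def denoise(v, tau):
    """Stand-in denoiser: soft-thresholding. In PnP, this slot is filled
    by any black-box denoiser, which encodes the prior implicitly."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# Plug-and-Play proximal gradient: a gradient step on the data fit,
# followed by a denoising step in place of an explicit prior's prox.
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(300):
    grad = A.T @ (A @ x - y)                 # gradient of 0.5 * ||A x - y||^2
    x = denoise(x - step * grad, tau=0.01 * step)
```

With the soft-thresholding stand-in, this iteration is exactly ISTA for a Lasso problem; swapping in a different denoiser changes the implicit prior without touching the rest of the algorithm, which is the modularity PnP methods exploit.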
Problem

Research questions and friction points this paper is trying to address.

Developing a computational framework for implicit priors in Bayesian inference
Addressing challenges in explicit prior specification through implicit alternatives
Unifying and implementing diverse implicit prior methods in CUQIpy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Computational framework for implicit priors
Implementation in CUQIpy Python package
Applied to diverse inverse problems
Jasper M. Everink
Technical University of Denmark, Lyngby, Denmark
Chao Zhang
Technical University of Denmark, Lyngby, Denmark
Amal M. A. Alghamdi
Technical University of Denmark, Lyngby, Denmark
Rémi Laumont
EDF R&D
Keywords: Inverse problems, MCMC methods, Optimization, Sampling, Uncertainty Quantification
Nicolai A. B. Riis
Copenhagen Imaging
Jakob S. Jørgensen
Technical University of Denmark, Lyngby, Denmark