AI Summary
This work addresses the computational intractability of maximum a posteriori (MAP) inference in probabilistic graphical models by bringing Probably Approximately Correct (PAC) learning theory to this setting for the first time. The authors propose a PAC-MAP framework that delivers provably near-optimal solutions under a given computational budget, and they characterize its tractability conditions using information-theoretic criteria. Leveraging probabilistic circuits with specific structural properties, they develop an efficient PAC-MAP solver and design a novel randomization strategy that strengthens both the reliability and the theoretical guarantees of existing heuristic algorithms. Experimental results demonstrate that the proposed method not only serves as a high-performance standalone MAP inference tool but also significantly boosts the effectiveness of mainstream heuristics.
Abstract
Computing the conditional mode of a distribution, better known as the $\mathit{maximum\ a\ posteriori}$ (MAP) assignment, is a fundamental task in probabilistic inference. However, MAP estimation is generally intractable, and it remains hard even under many common structural constraints and approximation schemes. We introduce $\mathit{probably\ approximately\ correct}$ (PAC) algorithms for MAP inference that provide provably near-optimal solutions under variable and fixed computational budgets. We characterize tractability conditions for PAC-MAP using information-theoretic measures that can be estimated from finite samples. Our PAC-MAP solvers are efficiently implemented using probabilistic circuits with appropriate architectures. The randomization strategies we develop can be used either as standalone MAP inference techniques or to improve on popular heuristics, fortifying their solutions with rigorous guarantees. Experiments confirm the benefits of our methods on a range of benchmarks.
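To make the flavor of a PAC-style guarantee concrete, here is a minimal, generic sketch (not the paper's actual algorithm) of a quantile-style randomized argument often used for such bounds: draw $n$ i.i.d. samples from a proposal distribution and keep the best-scoring one. If $n \ge \ln(1/\delta)/\ln\!\big(1/(1-\epsilon)\big)$, the probability that all $n$ samples miss the top-$\epsilon$ probability mass region is at most $(1-\epsilon)^n \le \delta$, so the returned assignment is "probably approximately" a mode. The function name `pac_sample_best` and its interface are illustrative assumptions, not from the source.

```python
import math
import random


def pac_sample_best(sampler, score, epsilon=0.05, delta=0.05):
    """Illustrative quantile-style PAC sketch (not the paper's method).

    Draws n i.i.d. samples and returns the best under `score`.
    Choosing n >= ln(1/delta) / ln(1/(1-epsilon)) ensures that, with
    probability >= 1 - delta, the returned sample lies in the
    top-epsilon probability mass region of the proposal distribution,
    because the chance all n samples miss that region is (1-epsilon)^n.
    """
    n = math.ceil(math.log(1.0 / delta) / -math.log(1.0 - epsilon))
    best = max((sampler() for _ in range(n)), key=score)
    return best, n


# Hypothetical usage with a toy proposal distribution.
best, n = pac_sample_best(lambda: random.random(), lambda x: x)
```

With $\epsilon = \delta = 0.05$ this gives $n = 59$ samples, independent of the size of the state space, which is the appeal of budget-style PAC guarantees.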