🤖 AI Summary
Maximum a posteriori (MAP) inference, and in particular Most Probable Explanation (MPE), in probabilistic graphical models (PGMs) is computationally intractable for high-treewidth instances.
Method: This paper introduces L2C, a data-driven neural heuristic framework that learns variable-assignment policies via neural networks modeling conditional utilities. L2C leverages weak supervision derived by backtracking the search trajectories of classical solvers (e.g., branch-and-bound), enabling end-to-end training without ground-truth labels and seamless integration with existing solvers.
Contribution/Results: L2C reduces search-space expansion by 57% on average while preserving solution optimality, and accelerates MPE inference by 1.8–3.2× over state-of-the-art methods, with especially pronounced gains on high-treewidth, large-scale PGMs.
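The weak-supervision idea described above can be sketched as follows. This is an illustrative reconstruction, not the paper's actual pipeline: it assumes a solver trace records, for each branching decision, the best objective reached in its subtree, and labels a decision positive if that subtree attained the final incumbent (i.e., the decision lies on a path to the best solution found).

```python
# Hypothetical sketch: deriving weak labels from a solver's search trace.
# All names (TraceNode, weak_labels) are illustrative, not the paper's API.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TraceNode:
    var: int                 # variable branched on at this node
    value: int               # value assigned to that variable
    subtree_best: float      # best MPE objective found below this node
    children: List["TraceNode"] = field(default_factory=list)

def weak_labels(root: TraceNode, incumbent: float,
                tol: float = 1e-9) -> List[Tuple[int, int, int]]:
    """Label each (var, value) decision 1 if its subtree reached the
    final incumbent objective, else 0; returns (var, value, label)."""
    labels, stack = [], [root]
    while stack:
        node = stack.pop()
        on_best_path = int(abs(node.subtree_best - incumbent) <= tol)
        labels.append((node.var, node.value, on_best_path))
        stack.extend(node.children)
    return labels
```

These triples can then serve as supervision for a network that scores variable-value assignments, with no hand-annotated ground truth required.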
📝 Abstract
We introduce learning to condition (L2C), a scalable, data-driven framework for accelerating Most Probable Explanation (MPE) inference in Probabilistic Graphical Models (PGMs), a fundamentally intractable problem. L2C trains a neural network to score variable-value assignments based on their utility for conditioning, given observed evidence. To facilitate supervised learning, we develop a scalable data generation pipeline that extracts training signals from the search traces of existing MPE solvers. The trained network serves as a heuristic that integrates with search algorithms, acting as a conditioning strategy prior to exact inference or as a branching and node selection policy within branch-and-bound solvers. We evaluate L2C on challenging MPE queries involving high-treewidth PGMs. Experiments show that our learned heuristic significantly reduces the search space while maintaining or improving solution quality over state-of-the-art methods.
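To make the integration concrete, here is a minimal sketch of how a learned scorer can drive node selection in a best-first MPE search. Everything here is an assumption for illustration: `heuristic` stands in for the trained network (treated as an optimistic estimate of achievable log-probability, so pruning stays exact), and `log_score` evaluates a complete assignment; neither is the paper's actual interface.

```python
# Hypothetical sketch: learned-heuristic-guided best-first MPE search.
import heapq
from itertools import count

def guided_mpe_search(variables, domains, log_score, heuristic):
    """Best-first search for the most probable full assignment.
    The frontier is ordered by `heuristic(partial)`, so the learned
    scorer decides which node to expand next. Pruning against the
    incumbent is safe when the heuristic is an upper bound."""
    tie = count()                      # tiebreaker so dicts never compare
    frontier = [(-heuristic({}), next(tie), {})]
    best_val, best_assign = float("-inf"), None
    while frontier:
        neg_h, _, partial = heapq.heappop(frontier)
        if -neg_h <= best_val:         # optimistic bound can't beat incumbent
            continue
        if len(partial) == len(variables):
            val = log_score(partial)
            if val > best_val:
                best_val, best_assign = val, partial
            continue
        var = variables[len(partial)]  # fixed order; the scorer ranks values
        for v in domains[var]:
            child = {**partial, var: v}
            heapq.heappush(frontier, (-heuristic(child), next(tie), child))
    return best_assign, best_val
```

In the paper's setting the same scorer can instead be used once up front, conditioning on the highest-utility assignments to shrink treewidth before handing the residual model to an exact solver.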