Learning to Condition: A Neural Heuristic for Scalable MPE Inference

📅 2025-09-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Maximum a posteriori (MAP) inference—specifically, Most Probable Explanation (MPE)—in probabilistic graphical models (PGMs) becomes computationally intractable for high-treewidth instances. Method: This paper introduces L2C, a data-driven, neural heuristic framework that learns variable assignment policies via neural networks modeling conditional utilities; it leverages weak supervision derived from backtracking the search trajectories of classical solvers (e.g., branch-and-bound), enabling end-to-end training without ground-truth labels and seamless integration with existing solvers. Contribution/Results: L2C significantly reduces search space expansion—by 57% on average—while preserving solution optimality. It accelerates MPE inference by 1.8–3.2× over state-of-the-art methods, with particularly pronounced gains on high-treewidth and large-scale PGMs.

Technology Category

Application Category

📝 Abstract
We introduce learning to condition (L2C), a scalable, data-driven framework for accelerating Most Probable Explanation (MPE) inference in Probabilistic Graphical Models (PGMs), a fundamentally intractable problem. L2C trains a neural network to score variable-value assignments based on their utility for conditioning, given observed evidence. To facilitate supervised learning, we develop a scalable data generation pipeline that extracts training signals from the search traces of existing MPE solvers. The trained network serves as a heuristic that integrates with search algorithms, acting as a conditioning strategy prior to exact inference or as a branching and node selection policy within branch-and-bound solvers. We evaluate L2C on challenging MPE queries involving high-treewidth PGMs. Experiments show that our learned heuristic significantly reduces the search space while maintaining or improving solution quality over state-of-the-art methods.
Problem

Research questions and friction points this paper is trying to address.

Accelerating MPE inference in probabilistic graphical models
Developing neural heuristic for scalable conditioning strategies
Reducing search space while maintaining solution quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural network scores variable assignments for conditioning
Training data generated from existing solver search traces
Learned heuristic integrates with search algorithms to reduce space
🔎 Similar Papers
No similar papers found.