Learning to Guide Local Search for MPE Inference in Probabilistic Graphical Models

📅 2026-02-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the tendency of stochastic local search (SLS) to become trapped in local optima when Most Probable Explanation (MPE) inference is performed repeatedly on a fixed graphical model. To overcome this limitation, the paper introduces the first reusable neural guidance mechanism: an attention network predicts how much each local move contributes toward reducing the Hamming distance to high-quality solutions, allowing the search to balance short-term likelihood gains against long-term promise. By integrating this neural guidance into existing SLS frameworks, the approach transfers knowledge across queries, unlike traditional heuristics, which cannot reuse prior search experience. Empirical results on high-treewidth benchmarks demonstrate significant improvements over both standard SLS and GLS+ in convergence speed and solution quality.
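The neighbor-selection idea described above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: `promise` stands in for the attention network's predicted Hamming-distance reduction, `likelihood_gain` for the immediate log-likelihood change of a move, and `lam` for an assumed trade-off weight.

```python
# Hypothetical sketch of neurally guided move selection in SLS.
# Rank candidate moves by a weighted sum of the immediate log-likelihood
# gain and a learned "promise" score (how much the move is predicted to
# reduce Hamming distance to a near-optimal solution). Names and the
# linear combination are illustrative assumptions, not the paper's spec.

def guided_select(moves, likelihood_gain, promise, lam=1.0):
    """Pick the move maximizing likelihood_gain(m) + lam * promise(m)."""
    return max(moves, key=lambda m: likelihood_gain(m) + lam * promise(m))
```

Setting `lam=0` recovers the myopic best-improvement rule, while a positive `lam` lets a move with low immediate gain but high predicted long-term promise win the selection.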

📝 Abstract
Most Probable Explanation (MPE) inference in Probabilistic Graphical Models (PGMs) is a fundamental yet computationally challenging problem arising in domains such as diagnosis, planning, and structured prediction. In many practical settings, the graphical model remains fixed while inference must be performed repeatedly for varying evidence patterns. Stochastic Local Search (SLS) algorithms scale to large models but rely on a myopic best-improvement rule that prioritizes immediate likelihood gains, and they often stagnate in poor local optima. Heuristics such as Guided Local Search (GLS+) partially alleviate this limitation by modifying the search landscape, but their guidance cannot be reused effectively across multiple inference queries on the same model. We propose a neural amortization framework for improving local search in this repeated-query regime. Exploiting the fixed graph structure, we train an attention-based network to score local moves by predicting their ability to reduce the Hamming distance to a near-optimal solution. Our approach integrates seamlessly with existing local search procedures, using this signal to balance short-term likelihood gains with long-term promise during neighbor selection. We provide theoretical intuition linking distance-reducing move selection to improved convergence behavior, and empirically demonstrate consistent improvements over SLS and GLS+ on challenging high-treewidth benchmarks in the amortized inference setting.
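The myopic best-improvement rule that the abstract criticizes can be made concrete with a toy sketch. Everything here is an illustrative assumption, not code from the paper: a pairwise model over binary variables is given as unary and pairwise log-potentials, and one SLS step flips the single variable with the largest immediate log-score gain, stagnating as soon as no flip improves.

```python
# Toy pairwise model over binary variables: `unaries[v][x]` is the unary
# log-potential of variable v taking value x, and `pairwise[(u, v)]` is a
# 2x2 table of edge log-potentials. Hypothetical setup for illustration.

def log_score(assignment, unaries, pairwise):
    """Log-probability of an assignment, up to the normalizing constant."""
    s = sum(unaries[v][x] for v, x in enumerate(assignment))
    s += sum(pot[assignment[u]][assignment[v]]
             for (u, v), pot in pairwise.items())
    return s

def best_improvement_step(assignment, unaries, pairwise):
    """Flip the single variable that most increases the log-score, in place.

    Returns the flipped index, or None if no flip improves -- a local
    optimum, where this myopic rule stagnates."""
    current = log_score(assignment, unaries, pairwise)
    best_gain, best_var = 0.0, None
    for v in range(len(assignment)):
        assignment[v] ^= 1                     # try flipping variable v
        gain = log_score(assignment, unaries, pairwise) - current
        assignment[v] ^= 1                     # undo the trial flip
        if gain > best_gain:
            best_gain, best_var = gain, v
    if best_var is not None:
        assignment[best_var] ^= 1
    return best_var
```

Each step only looks one flip ahead, which is exactly the short-sightedness the proposed learned promise signal is meant to counteract.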
Problem

Research questions and friction points this paper is trying to address.

MPE inference
Probabilistic Graphical Models
Local Search
Repeated Inference
Amortized Inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

neural amortization
local search guidance
attention-based scoring
MPE inference
Hamming distance prediction