🤖 AI Summary
To address the trade-off between efficiency and accuracy in large-scale Markov Random Field (MRF) inference, this paper proposes a differentiable reparameterization method based on Graph Neural Networks (GNNs), transforming discrete optimization into a parallelizable gradient-descent problem. The core innovation is the first extension of the lifting technique to a non-parametric GNN framework: the neural parameterization smooths the loss landscape, enabling efficient and scalable approximate inference. This sidesteps the complexity bottlenecks of conventional algorithms, achieving time complexity linear in graph size. Experiments show that on medium-scale MRFs the method attains solution quality comparable to the exact solver Toulbar2, while on large-scale instances it consistently outperforms all baselines (both classical and learning-based) in solution quality and runtime.
📝 Abstract
Inference in large-scale Markov Random Fields (MRFs) is a critical yet challenging task, traditionally approached through approximate methods like belief propagation and mean field, or exact methods such as the Toulbar2 solver. These strategies often fail to strike an optimal balance between efficiency and solution quality, particularly as the problem scale increases. This paper introduces NeuroLifting, a novel technique that leverages Graph Neural Networks (GNNs) to reparameterize decision variables in MRFs, facilitating the use of standard gradient descent optimization. By extending traditional lifting techniques into a non-parametric neural network framework, NeuroLifting benefits from the smooth loss landscape of neural networks, enabling efficient and parallelizable optimization. Empirical results demonstrate that, on moderate scales, NeuroLifting achieves solution quality close to that of the exact solver Toulbar2, significantly surpassing existing approximate methods. Notably, on large-scale MRFs, NeuroLifting delivers superior solution quality against all baselines while exhibiting linear computational complexity growth. This work presents a significant advancement in MRF inference, offering a scalable and effective solution for large-scale problems.
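The paper's GNN parameterization is not reproduced here, but the underlying relaxation it builds on can be sketched: replace each discrete variable with a softmax distribution over labels and minimize the expected MRF energy by gradient descent, then round to a discrete assignment. The toy potentials, graph, and numerical gradients below are illustrative assumptions for a minimal sketch, not the paper's implementation (which parameterizes the logits with a GNN and uses autodiff).

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy pairwise MRF: n nodes, k labels, random potentials (illustrative only)
rng = np.random.default_rng(0)
n, k = 6, 3
unary = rng.normal(size=(n, k))
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
pairwise = {e: rng.normal(size=(k, k)) for e in edges}

def expected_energy(theta):
    """Expected energy under independent softmax marginals q_i = softmax(theta_i)."""
    q = softmax(theta)
    E = np.sum(q * unary)                    # expected unary energy
    for (i, j), P in pairwise.items():
        E += q[i] @ P @ q[j]                 # expected pairwise energy
    return E

# Plain gradient descent on the relaxed objective, with numerical gradients
# (the paper would instead backpropagate through a GNN producing theta).
theta = np.zeros((n, k))
eps, lr = 1e-5, 0.1
for step in range(300):
    g = np.zeros_like(theta)
    for idx in np.ndindex(theta.shape):
        t = theta.copy()
        t[idx] += eps
        g[idx] = (expected_energy(t) - expected_energy(theta)) / eps
    theta -= lr * g

labels = softmax(theta).argmax(axis=1)       # round to a discrete assignment
```

The relaxed objective is smooth in `theta`, so descent steadily lowers the expected energy; lifting the logits into a neural network, as NeuroLifting does, further smooths this landscape and lets the whole procedure run in parallel on large graphs.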