Combining Local Symmetry Exploitation and Reinforcement Learning for Optimised Probabilistic Inference -- A Work In Progress

📅 2025-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Optimizing the variable elimination order in graphical models is an NP-hard combinatorial problem. This paper proposes a novel approach that integrates local factor symmetries with reinforcement learning based on proximal policy optimization (PPO). Our key contribution is the first explicit modeling and incorporation of local symmetry structures into the RL reward function: the objective is to minimize the compact encoded size of intermediate tensors, rather than their exponential raw size, thereby enabling an efficient search over contraction sequences. Leveraging the duality between graphical models and tensor networks, we achieve symmetry-driven compression of intermediate representations. Experiments demonstrate that our method significantly reduces intermediate tensor sizes, guides the agent toward superior contraction paths, and enhances scalability and computational efficiency in large-scale probabilistic inference tasks.
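To make the cost being optimized concrete, here is a minimal sketch (not the paper's implementation) of how the raw cost of an elimination order is computed for binary variables: eliminating a variable merges all factors that mention it, and the resulting intermediate has size exponential in its scope. The function name `elimination_cost` and the toy chain model are illustrative assumptions.

```python
def elimination_cost(order, factors):
    """Raw cost of a variable-elimination order (illustrative sketch).

    factors: iterable of variable-name sets (factor scopes).
    Eliminating a variable merges every factor mentioning it into one
    intermediate whose raw size is 2^|scope| for binary variables.
    """
    factors = [frozenset(f) for f in factors]
    total = 0
    for var in order:
        touched = [f for f in factors if var in f]
        merged = frozenset().union(*touched) - {var}
        total += 2 ** len(merged)  # raw size of the intermediate result
        factors = [f for f in factors if var not in f] + [merged]
    return total

# Toy chain model A - B - C with factors over {A,B} and {B,C}:
# eliminating the middle variable first creates a larger intermediate.
fs = [{"A", "B"}, {"B", "C"}]
print(elimination_cost(["A", "C", "B"], fs))  # → 5
print(elimination_cost(["B", "A", "C"], fs))  # → 7
```

The paper's proposal replaces the `2 ** len(merged)` term with the size of a compact, symmetry-aware encoding of the intermediate, so the agent's reward reflects what is actually stored.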

📝 Abstract
Efficient probabilistic inference by variable elimination in graphical models requires an optimal elimination order. However, finding an optimal order is a challenging combinatorial optimisation problem for models with a large number of random variables. Most recently, a reinforcement learning approach has been proposed to find efficient contraction orders in tensor networks. Due to the duality between graphical models and tensor networks, we adapt this approach to probabilistic inference in graphical models. Furthermore, we incorporate structure exploitation into the process of finding an optimal order. Currently, the agent's cost function is formulated in terms of intermediate result sizes which are exponential in the number of indices (i.e., random variables). We show that leveraging specific structures during inference allows for introducing compact encodings of intermediate results which can be significantly smaller. By considering the compact encoding sizes for the cost function instead, we enable the agent to explore more efficient contraction orders. The structure we consider in this work is the presence of local symmetries (i.e., symmetries within a model's factors).
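The local symmetries mentioned in the abstract can be illustrated with a fully exchangeable factor: if a factor over k binary variables is invariant under permuting its arguments, its value depends only on how many arguments are 1, so k + 1 numbers suffice instead of 2^k. A minimal sketch under that assumption (the function `compact_encode` is hypothetical, not from the paper):

```python
from itertools import product

def compact_encode(table, k):
    """Compress a factor over k binary variables that is fully
    symmetric (exchangeable) in its arguments: its value depends only
    on the number of 1s, so k + 1 entries replace 2^k.

    table: dict mapping assignment tuples (0/1) to factor values.
    """
    compact = {}
    for assign, val in table.items():
        ones = sum(assign)
        # A symmetric factor must agree on all assignments with equal counts.
        assert compact.get(ones, val) == val, "factor is not symmetric"
        compact[ones] = val
    return compact

k = 4
table = {a: 0.1 * sum(a) + 1.0 for a in product([0, 1], repeat=k)}
print(len(table), "raw entries vs", len(compact_encode(table, k)), "compact")
# → 16 raw entries vs 5 compact
```

Using such compact sizes in the agent's cost function is what lets it distinguish orders that preserve exploitable symmetry from orders that destroy it.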
Problem

Research questions and friction points this paper is trying to address.

Optimizing probabilistic inference in graphical models.
Finding efficient elimination orders using reinforcement learning.
Exploiting local symmetries to reduce intermediate result sizes.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning optimizes probabilistic inference orders.
Local symmetry exploitation reduces intermediate result sizes.
Compact encodings enhance efficiency in graphical models.
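The innovations above can be sketched as an RL environment whose episodes are elimination orders: the state is the set of remaining factor scopes, an action eliminates one variable, and the reward is the negative intermediate size. This is a hypothetical skeleton, with the compact size left as a plug-in point; the PPO agent itself is omitted.

```python
import random

class EliminationEnv:
    """Minimal environment sketch (assumed interface, not the paper's code).

    Actions eliminate one remaining variable; the reward is the negative
    raw intermediate size. Substituting a compact, symmetry-aware size
    here is what steers the agent toward symmetry-preserving orders.
    """
    def __init__(self, factors):
        self.init_factors = [frozenset(f) for f in factors]
        self.reset()

    def reset(self):
        self.factors = list(self.init_factors)
        self.remaining = set().union(*self.factors)
        return frozenset(self.remaining)

    def step(self, var):
        touched = [f for f in self.factors if var in f]
        merged = frozenset().union(*touched) - {var}
        reward = -(2 ** len(merged))  # plug in a compact size here
        self.factors = [f for f in self.factors if var not in f] + [merged]
        self.remaining.discard(var)
        return frozenset(self.remaining), reward, not self.remaining

# Random-policy rollout on the toy chain A - B - C.
env = EliminationEnv([{"A", "B"}, {"B", "C"}])
state, total = env.reset(), 0
while state:
    state, r, done = env.step(random.choice(sorted(state)))
    total += r
print("episode return:", total)  # -5 for good orders, -7 if B goes first
```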