Optimizing Backward Policies in GFlowNets via Trajectory Likelihood Maximization

📅 2024-10-20
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing GFlowNet training paradigms typically assume a fixed backward policy, which limits modeling flexibility and slows convergence. To address this, the paper proposes a framework that optimizes the backward policy jointly with the forward one by directly maximizing the value function of an entropy-regularized MDP over intermediate rewards, which amounts to maximizing the likelihood of sampled trajectories under the backward policy. Training is end-to-end and remains fully compatible with standard GFlowNet and RL algorithms. Empirically, the method improves the fidelity of multimodal distribution modeling and accelerates convergence. The core contribution is removing the restrictive fixed-backward-policy assumption, yielding a differentiable mechanism for coordinating forward and backward policies and unifying GFlowNet training with principled policy learning under entropy-regularized RL.

📝 Abstract
Generative Flow Networks (GFlowNets) are a family of generative models that learn to sample objects with probabilities proportional to a given reward function. The key concept behind GFlowNets is the use of two stochastic policies: a forward policy, which incrementally constructs compositional objects, and a backward policy, which sequentially deconstructs them. Recent results show a close relationship between GFlowNet training and entropy-regularized reinforcement learning (RL) problems with a particular reward design. However, this connection applies only in the setting of a fixed backward policy, which might be a significant limitation. As a remedy to this problem, we introduce a simple backward policy optimization algorithm that involves direct maximization of the value function in an entropy-regularized Markov Decision Process (MDP) over intermediate rewards. We provide an extensive experimental evaluation of the proposed approach across various benchmarks in combination with both RL and GFlowNet algorithms and demonstrate its faster convergence and mode discovery in complex environments.
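The two objectives the abstract describes can be sketched numerically: a trajectory-balance residual that couples the forward policy, backward policy, and reward, and a trajectory log-likelihood term that the backward policy maximizes. This is a minimal illustrative sketch, not the paper's implementation; the function names and the toy step probabilities below are assumptions for demonstration only.

```python
import math

def trajectory_balance_loss(log_Z, pf_probs, pb_probs, reward):
    """Squared trajectory-balance residual for one trajectory:
    (log Z + sum log PF - log R(x) - sum log PB)^2."""
    log_pf = sum(math.log(p) for p in pf_probs)
    log_pb = sum(math.log(p) for p in pb_probs)
    return (log_Z + log_pf - math.log(reward) - log_pb) ** 2

def backward_log_likelihood(pb_probs):
    """Objective the backward policy maximizes: the log-likelihood
    of the (reversed) sampled trajectory under PB."""
    return sum(math.log(p) for p in pb_probs)

# Toy two-step trajectory with hypothetical step probabilities.
pf = [0.5, 0.25]   # forward step probabilities along the trajectory
pb = [1.0, 0.5]    # backward step probabilities along the same trajectory
loss = trajectory_balance_loss(log_Z=0.0, pf_probs=pf, pb_probs=pb, reward=1.0)
ll = backward_log_likelihood(pb)
```

In a real setting the probabilities come from neural policies and both terms are minimized/maximized by gradient descent; here they are fixed numbers so the arithmetic is easy to check by hand.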
Problem

Research questions and friction points this paper is trying to address.

Optimizing backward policies in GFlowNets for improved sampling fidelity and training efficiency.
Addressing limitations of fixed backward policies in GFlowNet training.
Enhancing convergence and mode discovery in complex environments.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimizes backward policies via trajectory likelihood maximization
Links GFlowNet training to entropy-regularized RL problems
Introduces backward policy optimization in entropy-regularized MDP
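The "value function in an entropy-regularized MDP" that the backward policy optimization maximizes has a soft (log-sum-exp) Bellman form. The sketch below shows only that recursion on toy numbers; the helper name and inputs are illustrative assumptions, not the paper's code.

```python
import math

def soft_value(q_values):
    """Entropy-regularized (soft) state value:
    V(s) = log sum_a exp(r(s, a) + V(s')),
    computed stably by subtracting the max before exponentiating."""
    m = max(q_values)
    return m + math.log(sum(math.exp(q - m) for q in q_values))

# Two actions with hypothetical values r(s, a) + V(s'):
# exp(0) + exp(log 3) = 4, so V(s) = log 4.
v = soft_value([0.0, math.log(3.0)])
```

The log-sum-exp replaces the hard max of standard RL, which is what makes the resulting optimal policy stochastic and differentiable, and hence jointly trainable with the forward policy.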