🤖 AI Summary
Existing GFlowNet training paradigms typically assume a fixed backward policy, which can limit modeling flexibility and slow convergence. To address this, the paper proposes a simple backward policy optimization algorithm that directly maximizes the value function of an entropy-regularized Markov Decision Process (MDP) over intermediate rewards, enabling joint optimization of the forward and backward policies. The method trains end to end via trajectory likelihood maximization combined with entropy-regularized reinforcement learning, while remaining fully compatible with standard GFlowNet and RL algorithms. Empirically, it improves multimodal distribution modeling and accelerates training convergence. The core contribution is removing the fixed-backward-policy restriction, extending the known connection between GFlowNet training and entropy-regularized RL to the setting where both policies are learned.
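To make the forward/backward policy coordination concrete, it helps to recall the standard trajectory balance constraint from the GFlowNet literature (the summary does not state which training objective the paper builds on, so this is background rather than the paper's specific loss). For a complete trajectory $\tau = (s_0 \to s_1 \to \dots \to s_n = x)$ ending in an object $x$ with reward $R(x)$, the forward policy $P_F$, backward policy $P_B$, and partition function $Z$ are trained to satisfy

```latex
Z \prod_{t=0}^{n-1} P_F(s_{t+1} \mid s_t)
  \;=\;
R(x) \prod_{t=0}^{n-1} P_B(s_t \mid s_{t+1}).
```

When $P_B$ is held fixed, prior work shows this training problem is equivalent to an entropy-regularized RL problem with a particular reward design; the paper's point is that $P_B$ can itself be optimized rather than fixed.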
📝 Abstract
Generative Flow Networks (GFlowNets) are a family of generative models that learn to sample objects with probabilities proportional to a given reward function. The key concept behind GFlowNets is the use of two stochastic policies: a forward policy, which incrementally constructs compositional objects, and a backward policy, which sequentially deconstructs them. Recent results show a close relationship between GFlowNet training and entropy-regularized reinforcement learning (RL) problems with a particular reward design. However, this connection applies only in the setting of a fixed backward policy, which might be a significant limitation. As a remedy to this problem, we introduce a simple backward policy optimization algorithm that involves direct maximization of the value function in an entropy-regularized Markov Decision Process (MDP) over intermediate rewards. We provide an extensive experimental evaluation of the proposed approach across various benchmarks in combination with both RL and GFlowNet algorithms and demonstrate its faster convergence and mode discovery in complex environments.
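The abstract's phrase "value function in an entropy-regularized MDP" can be illustrated with soft (log-sum-exp) value iteration on a tiny DAG. The sketch below is purely illustrative and is not the paper's algorithm: the state graph, rewards, and function names are invented for the example. It shows the standard fact that in an entropy-regularized MDP with zero intermediate rewards and terminal reward $\log R(x)$, the soft-optimal forward policy samples terminating trajectories with probability proportional to the reward reachable along them, which is exactly the GFlowNet sampling goal.

```python
import math

# Hypothetical toy DAG: each state lists its children; leaves are
# terminal objects x with reward R(x) > 0. We want P(x) proportional
# to R(x), summed over all trajectories that reach x.
edges = {
    "s0": ["a", "b"],
    "a": ["x1", "x2"],
    "b": ["x2", "x3"],
}
terminal_reward = {"x1": 1.0, "x2": 3.0, "x3": 2.0}

def soft_value(s):
    """Entropy-regularized (soft) value by backward induction:
    V(s) = log sum_{s'} exp(V(s')), with V(x) = log R(x) at leaves."""
    if s in terminal_reward:
        return math.log(terminal_reward[s])
    return math.log(sum(math.exp(soft_value(c)) for c in edges[s]))

def soft_policy(s):
    """Soft-optimal forward policy: pi(s' | s) = exp(V(s') - V(s))."""
    v = soft_value(s)
    return {c: math.exp(soft_value(c) - v) for c in edges[s]}

# exp(V(s0)) plays the role of the total flow Z: here it sums the
# terminal rewards over trajectories (x2 is reachable via two
# trajectories, so it is counted twice): 1 + 3 + 3 + 2 = 9.
Z = math.exp(soft_value("s0"))
```

Following the soft-optimal policy from `s0` reproduces the reward-proportional trajectory distribution, e.g. the branch `a` (which leads to total reward 4) is chosen with probability 4/9. The paper's contribution concerns the backward direction of this picture: rather than fixing how trajectories are deconstructed, the backward policy is optimized by directly maximizing such an entropy-regularized value function over intermediate rewards.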