GFlowNet Training by Policy Gradients

📅 2024-08-12
🏛️ International Conference on Machine Learning
📈 Citations: 3
Influential: 0
📄 PDF
🤖 AI Summary
GFlowNets suffer from low training efficiency and unstable gradient estimation in combinatorial object generation due to strict flow conservation constraints. To address this, we propose the first policy-gradient-based GFlowNet training framework. Our method reformulates flow conservation as a policy optimization objective, enabling joint training of forward and backward policies without explicit flow matching. We provide theoretical convergence guarantees and introduce a coupled update mechanism to reduce gradient variance. Experiments across multiple synthetic and real-world datasets demonstrate that our approach significantly improves sample quality, training stability, and fidelity to the target distribution, with particularly strong robustness under sparse-reward settings.
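As a sketch of the reformulation described above (using the standard trajectory-balance notation from the GFlowNet literature, which may differ from this paper's symbols): for a complete trajectory $\tau = (s_0 \to \dots \to s_n = x)$, flow balance requires

```latex
Z \prod_{t=0}^{n-1} P_F(s_{t+1} \mid s_t) \;=\; R(x) \prod_{t=0}^{n-1} P_B(s_t \mid s_{t+1}).
```

Taking logs and squaring the residual turns this constraint into an expected loss over trajectories sampled from the forward policy, which can then be optimized with REINFORCE-style policy gradients rather than explicit flow matching.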

📝 Abstract
Generative Flow Networks (GFlowNets) have been shown to be effective at generating combinatorial objects with desired properties. Here we propose a new GFlowNet training framework, with policy-dependent rewards, that bridges maintaining flow balance in GFlowNets and optimizing the expected accumulated reward in traditional reinforcement learning (RL). This enables the derivation of new policy-based GFlowNet training methods, in contrast to existing methods that resemble value-based RL. It is known that the design of backward policies in GFlowNet training affects efficiency. We further develop a coupled training strategy that jointly solves GFlowNet forward policy training and backward policy design. We provide a performance analysis with a theoretical guarantee for our policy-based GFlowNet training. Experiments on both simulated and real-world datasets verify that our policy-based strategies provide robust gradient estimation from an RL perspective and improve GFlowNet performance.
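To make the balance-as-objective idea concrete, here is a minimal, self-contained sketch (not the paper's algorithm): a toy two-bit generation task where a forward policy and a learned log-partition value are trained by gradient descent on a trajectory-balance-style squared residual. The state graph is a tree, so each object has a single trajectory and the backward policy is trivially 1. The reward table, parameterization, and numerical gradients are all illustrative assumptions.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy environment: build a 2-bit string left to right.
# Target distribution is P(x) proportional to R(x); here Z = sum(R) = 10.
R = {"00": 1.0, "01": 2.0, "10": 3.0, "11": 4.0}

def log_pf(x, params):
    """Log-probability of generating string x under the forward policy.

    params = [t_first, t_second_if_0, t_second_if_1, log_Z];
    each t is the logit of emitting bit '1' at that decision point.
    """
    t_first, t_sec0, t_sec1, _ = params
    p1 = sigmoid(t_first) if x[0] == "1" else 1.0 - sigmoid(t_first)
    t_sec = t_sec1 if x[0] == "1" else t_sec0
    p2 = sigmoid(t_sec) if x[1] == "1" else 1.0 - sigmoid(t_sec)
    return math.log(p1) + math.log(p2)

def tb_loss(params):
    # Trajectory-balance-style residual, summed over all 4 trajectories.
    # The state graph is a tree, so the backward log-probability is 0.
    log_Z = params[3]
    return sum((log_Z + log_pf(x, params) - math.log(r)) ** 2
               for x, r in R.items())

def num_grad(f, params, eps=1e-5):
    # Central-difference gradient; an autodiff framework would do this
    # analytically in a real implementation.
    g = []
    for i in range(len(params)):
        up = params[:]; up[i] += eps
        dn = params[:]; dn[i] -= eps
        g.append((f(up) - f(dn)) / (2 * eps))
    return g

params = [0.0, 0.0, 0.0, 0.0]
for _ in range(2000):
    g = num_grad(tb_loss, params)
    params = [p - 0.1 * gi for p, gi in zip(params, g)]

# At the optimum the sampler matches the target: P(first bit = 1) = 7/10
# and exp(log_Z) recovers the partition function.
print(round(sigmoid(params[0]), 3))   # ~ 0.7
print(round(math.exp(params[3]), 2))  # ~ 10.0
```

The paper's policy-gradient view replaces this exhaustive squared-residual loss with an expectation over sampled trajectories, which is where variance reduction via the coupled forward/backward updates becomes relevant.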
Problem

Research questions and friction points this paper is trying to address.

How to train GFlowNets with policy-based rather than value-based RL methods
How to design backward policies, which are known to affect training efficiency
How to obtain robust gradient estimates that improve GFlowNet performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Policy-dependent rewards for GFlowNet training
Coupled training of forward and backward policies
Policy-based methods improve gradient estimation
🔎 Similar Papers
No similar papers found.