🤖 AI Summary
This work addresses a fundamental limitation of existing GFlowNet methods: the balance between exploration and exploitation is constrained by the fixed, equal mixture of forward and backward policies that the training objective implicitly imposes. The authors first show that this constraint stems from the reversibility condition of the underlying Markov chain. Building on this insight, they propose the α-GFN framework, which introduces a tunable parameter α to explicitly control the policy mixing ratio. This enables flexible modulation of the exploration–exploitation trade-off while preserving convergence to a unique stationary flow. Theoretical analysis and experiments demonstrate that α-GFN substantially outperforms current approaches, achieving up to a tenfold increase in the number of distinct modes discovered across diverse tasks, including set generation, bit-string optimization, and molecular design.
📝 Abstract
Generative Flow Network (GFlowNet) objectives implicitly fix an equal mixing of forward and backward policies, potentially constraining the exploration-exploitation trade-off during training. By further exploring the link between GFlowNets and Markov chains, we establish an equivalence between GFlowNet objectives and Markov chain reversibility, thereby revealing the origin of such constraints, and provide a framework for adapting Markov chain properties to GFlowNets. Building on these theoretical findings, we propose $\alpha$-GFNs, which generalize the mixing via a tunable parameter $\alpha$. This generalization enables direct control over exploration-exploitation dynamics to enhance mode discovery capabilities, while ensuring convergence to unique flows. Across various benchmarks, including Set, Bit Sequence, and Molecule Generation, $\alpha$-GFN objectives consistently outperform previous GFlowNet objectives, achieving up to a $10\times$ increase in the number of discovered modes.
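To make the "mixing" concrete: the standard GFlowNet detailed-balance condition requires $F(s)\,P_F(s'|s) = F(s')\,P_B(s|s')$, and its squared log-residual penalizes the forward and backward policy terms with equal weight. The sketch below shows that residual and one *hypothetical* way a tunable $\alpha$ could reweight the two policy terms; the function names and the specific $\alpha$-weighting are illustrative assumptions, not the paper's actual $\alpha$-GFN objective, which is not given in this abstract.

```python
def db_residual(log_F_s, log_PF, log_F_sp, log_PB):
    # Standard GFlowNet detailed-balance log-residual:
    #   log F(s) + log P_F(s'|s) - log F(s') - log P_B(s|s')
    # The squared residual is the usual detailed-balance training loss.
    return log_F_s + log_PF - log_F_sp - log_PB

def alpha_db_loss(log_F_s, log_PF, log_F_sp, log_PB, alpha=0.5):
    # Hypothetical alpha-weighted variant (an assumption for illustration):
    # alpha scales the forward-policy term and (1 - alpha) the backward one,
    # so alpha > 0.5 pushes more gradient onto P_F and vice versa.
    # At alpha = 0.5 this recovers the standard equal-mixing residual.
    r = log_F_s + 2 * alpha * log_PF - log_F_sp - 2 * (1 - alpha) * log_PB
    return r ** 2
```

Under this toy parameterization, sweeping $\alpha$ shifts how strongly each policy is adjusted to satisfy the balance condition, which is the kind of exploration-exploitation knob the abstract describes.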