🤖 AI Summary
This work addresses the limitations of traditional autoregressive language models, which are constrained by fixed vocabularies and tree-structured generation, hindering flexible modeling of variable-length text spans. Existing dynamic vocabulary approaches fail to explicitly model the directed acyclic graph (DAG) state space, leading to restricted path exploration and sampling bias. To overcome these issues, we propose Flow of SpanS (FoSS), the first framework to extend Generative Flow Networks (GFlowNets) to the dynamic span level. FoSS leverages retrieval augmentation to construct a dynamic span vocabulary and explicitly models the DAG state space, enabling unbiased and diverse exploration of compositional generation paths. Experiments demonstrate that FoSS achieves up to a 12.5% improvement in MAUVE score on text generation tasks and a 3.5% accuracy gain on knowledge-intensive tasks, with consistent advantages across varying model scales, dataset sizes, and retrieval corpora.
📝 Abstract
Standard autoregressive language models generate text token by token from a fixed vocabulary, inducing a tree-structured state space when token sampling is viewed as an action, which limits flexibility and expressiveness. Recent work introduces a dynamic vocabulary by sampling retrieved text spans but overlooks that the same sentence can be composed of spans of varying lengths, lacking explicit modeling of the directed acyclic graph (DAG) state space. This restricts exploration of compositional paths and biases generation toward the chosen path. Generative Flow Networks (GFlowNets) are well suited to efficiently exploring and generalizing over state spaces, particularly those with a DAG structure. However, prior GFlowNets-based language models operate at the token level and remain confined to tree-structured spaces, limiting their potential. In this work, we propose Flow of SpanS (FoSS), a principled GFlowNets framework for span generation. FoSS constructs a dynamic span vocabulary by flexibly segmenting retrieved text, ensuring a DAG-structured state space that allows GFlowNets to explore diverse compositional paths and improve generalization. With specialized reward models, FoSS generates diverse, high-quality text. Empirically, FoSS improves MAUVE scores by up to 12.5% over a Transformer baseline on text generation and achieves 3.5% gains on knowledge-intensive tasks, consistently outperforming state-of-the-art methods. Scaling experiments further show that FoSS benefits from larger models, more data, and richer retrieval corpora while retaining its advantage over strong baselines.
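To see why span-level generation induces a DAG rather than a tree, note that every way of cutting a sentence into contiguous spans is a distinct generation path that ends in the same final state. The following Python sketch is purely illustrative (it is not the paper's implementation, and the function name is our own); it enumerates all such paths for a short token sequence:

```python
def segmentations(tokens):
    """Yield every way to split `tokens` into contiguous spans.

    Each yielded list is one generation path: a sequence of span
    actions that, concatenated, reproduce the original sentence.
    """
    if not tokens:
        yield []
        return
    # Choose the length of the first span, then recurse on the rest.
    for i in range(1, len(tokens) + 1):
        head = " ".join(tokens[:i])
        for rest in segmentations(tokens[i:]):
            yield [head] + rest

paths = list(segmentations(["the", "cat", "sat"]))
# Multiple distinct paths all terminate in the same sentence,
# so the span-level state space is a DAG, not a tree.
```

For a sequence of n tokens there are 2^(n-1) such paths (here 4), all converging on one final sentence, whereas a token-level model reaches that sentence by exactly one path. Methods that sample a single segmentation without modeling this DAG explore only one of these paths, which is the sampling bias the abstract refers to.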