🤖 AI Summary
Existing multi-step reasoning methods for large language models (LLMs) overly prioritize solution accuracy while neglecting diversity: supervised fine-tuning requires extensive annotated data, and reinforcement learning tends to converge to a single high-reward solution.
Method: We propose a lightweight fine-tuning framework requiring only 15 demonstration examples. For the first time, we integrate Generative Flow Networks (GFlowNets) into LLM reasoning, modeling multi-step inference as a Markovian flow over a directed acyclic graph (DAG), where the sampling probability of each solution path is proportional to its unnormalized reward. This enables joint optimization of solution quality and diversity.
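To make the "sampling probability proportional to reward" objective concrete, the sketch below computes the squared trajectory-balance residual commonly used to train GFlowNets, on one toy reasoning path. The tiny numbers, the helper name, and the uniform backward policy are all hypothetical illustrations, not the paper's implementation; at the optimum of this objective, terminal solutions are sampled with probability proportional to their unnormalized reward.

```python
import math

def trajectory_balance_loss(log_Z, log_pf_steps, log_pb_steps, reward):
    """Squared trajectory-balance residual for one sampled path:
    (log Z + sum log P_F - log R(x) - sum log P_B)^2.
    log_pf_steps: per-step log-probabilities of the forward (sampling) policy.
    log_pb_steps: per-step log-probabilities of the backward policy.
    reward: unnormalized positive reward R(x) of the terminal solution."""
    residual = log_Z + sum(log_pf_steps) - math.log(reward) - sum(log_pb_steps)
    return residual ** 2

# One hypothetical two-step reasoning path: forward step probabilities
# 0.5 and 0.4, backward probabilities 1.0 and 0.5, path reward 1.2.
# With log Z = log 3, flows balance exactly, so the loss is zero.
loss = trajectory_balance_loss(
    log_Z=math.log(3.0),
    log_pf_steps=[math.log(0.5), math.log(0.4)],
    log_pb_steps=[math.log(1.0), math.log(0.5)],
    reward=1.2,
)
print(loss)  # → 0.0 (up to floating-point error)
```

In practice the forward policy is the LLM's per-step token distribution over reasoning actions, and the loss is minimized over many sampled paths so that high-reward paths become proportionally more likely.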
Contribution/Results: Our approach combines reward-driven diversity enhancement with minimal supervision. It achieves state-of-the-art performance across six diverse reasoning benchmarks (including BlocksWorld, Game24, and Rubik's Cube), improving accuracy and solution diversity simultaneously.
📝 Abstract
The ability to generate diverse solutions to a given problem is a hallmark of human creativity. Such divergent reasoning is also crucial for machines, enhancing their robustness and enabling them to assist humans in applications such as scientific discovery. However, existing approaches to multi-step reasoning with large language models (LLMs) have focused mostly on reasoning accuracy, without further discovering more diverse valid solutions. For example, supervised fine-tuning can improve LLM reasoning quality, but it requires extensive supervised data to capture the full range of possible solutions. Reward-maximizing reinforcement learning seeks a limited set of highest-reward solutions while neglecting solution diversity. To fill this gap, we propose Flow of Reasoning (FoR), an efficient, diversity-seeking LLM fine-tuning method that improves reasoning quality and diversity with minimal data. FoR formulates multi-step LLM reasoning as a Markovian flow on a DAG-structured reasoning graph. This formulation allows us to incorporate and adapt principled GFlowNet approaches for fine-tuning LLMs to sample divergent paths with probabilities proportional to the (unnormalized) reward of target problems. Extensive experiments show that, with limited training examples (e.g., 15 examples), FoR enables the discovery of diverse, creative, high-quality solutions, greatly outperforming a wide range of existing inference and training methods across six challenging reasoning tasks, including BlocksWorld (embodied reasoning), Game24 (math puzzle solving), Rubik's Cube (spatial reasoning), 1D-ARC (abstraction reasoning), GSM8k (math reasoning), and ProntoQA (logical reasoning). Code is available at https://github.com/Yu-Fangxu/FoR.