🤖 AI Summary
Existing reinforcement learning fine-tuning approaches for large language models typically rely on costly or low-quality human annotations, model-generated data, or LLM-based verifiers, all of which make it difficult to efficiently improve multi-hop reasoning capabilities. This work proposes a novel paradigm that leverages purely synthetic, rule-generated fictitious data for reinforcement learning fine-tuning, significantly improving model performance on real-world multi-hop question answering tasks without any reliance on ground-truth annotations. For the first time, it demonstrates that synthetic data containing only fabricated knowledge can effectively foster a model's ability to compose and generalize knowledge. The method achieves substantial performance gains on mainstream QA benchmarks, with particularly pronounced advantages on challenging, high-difficulty questions.
📝 Abstract
Reinforcement Learning (RL) has been shown to significantly boost the reasoning capabilities of large language models (LLMs) in math, coding, and multi-hop reasoning tasks. However, RL fine-tuning requires abundant high-quality verifiable data, often sourced from human annotations, generated by frontier LLMs, or scored by LLM-based verifiers. All three have considerable limitations: human-annotated datasets are small and expensive to curate, LLM-generated data is hallucination-prone and costly, and LLM-based verifiers are inaccurate and slow. In this work, we investigate a cheaper alternative: RL fine-tuning on rule-generated synthetic data for multi-hop reasoning tasks. We discover that LLMs fine-tuned on synthetic data perform significantly better on popular real-world question-answering benchmarks, despite the synthetic data containing only fictional knowledge. On stratifying performance by question difficulty, we find that synthetic data teaches LLMs to compose knowledge -- a fundamental and generalizable reasoning skill. Our work highlights rule-generated synthetic reasoning data as a free and scalable resource to improve LLM reasoning capabilities.
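To make the idea of "rule-generated synthetic data containing only fictional knowledge" concrete, the sketch below generates a fictitious two-hop QA example whose answer is determined by construction, so it can serve as a verifiable RL reward signal without any human annotation. This is a minimal illustration only; the entity names, the `mentor_of` relation, and the generator functions are invented for this example and are not the paper's actual data pipeline.

```python
import random


def make_fictional_world(n_entities=6, seed=0):
    """Build a fictitious knowledge base: invented entity names linked by a
    made-up relation. Because every name is fabricated, a model cannot answer
    from memorized real-world facts -- it must reason over the given context."""
    rng = random.Random(seed)

    def name():
        # Random pronounceable consonant-vowel syllables, e.g. "Bakelo".
        return "".join(
            rng.choice("bcdfghjklmnpqrstvwz") + rng.choice("aeiou")
            for _ in range(3)
        ).capitalize()

    entities = []
    while len(entities) < n_entities:  # ensure unique names
        n = name()
        if n not in entities:
            entities.append(n)

    # Chain the entities with a single invented relation: e_i mentor_of e_{i+1}.
    facts = {}  # (subject, relation) -> object
    for i in range(n_entities - 1):
        facts[(entities[i], "mentor_of")] = entities[i + 1]
    return entities, facts


def make_two_hop_question(entities, facts):
    """Compose two single-hop facts into one 2-hop question. The gold answer
    follows from the rules, so correctness is checkable by exact match."""
    subj = entities[0]
    mid = facts[(subj, "mentor_of")]
    answer = facts[(mid, "mentor_of")]
    context = (f"{subj} is the mentor of {mid}. "
               f"{mid} is the mentor of {answer}.")
    question = f"Who is mentored by the person that {subj} mentors?"
    return context, question, answer
```

A generated (context, question, answer) triple can be fed directly into an RL loop: the model answers from the context, and an exact-match check against the rule-derived answer yields the reward.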