Learning from Synthetic Data Improves Multi-hop Reasoning

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing reinforcement learning fine-tuning approaches for large language models often rely on costly or low-quality human annotations, model-generated data, or verification datasets, which struggle to efficiently enhance multi-hop reasoning capabilities. This work proposes a novel paradigm that leverages purely synthetic, rule-generated fictitious data for reinforcement learning fine-tuning, significantly improving model performance on real-world multi-hop question answering tasks without any reliance on ground-truth annotations. For the first time, it demonstrates that synthetic data containing only fabricated knowledge can effectively foster a model’s ability to compose and generalize knowledge. The method achieves substantial performance gains on mainstream QA benchmarks, with particularly pronounced advantages on challenging, high-difficulty questions.

📝 Abstract
Reinforcement Learning (RL) has been shown to significantly boost reasoning capabilities of large language models (LLMs) in math, coding, and multi-hop reasoning tasks. However, RL fine-tuning requires abundant high-quality verifiable data, often sourced from human annotations, generated from frontier LLMs, or scored by LLM-based verifiers. All three have considerable limitations: human-annotated datasets are small and expensive to curate, LLM-generated data is hallucination-prone and costly, and LLM-based verifiers are inaccurate and slow. In this work, we investigate a cheaper alternative: RL fine-tuning on rule-generated synthetic data for multi-hop reasoning tasks. We discover that LLMs fine-tuned on synthetic data perform significantly better on popular real-world question-answering benchmarks, despite the synthetic data containing only fictional knowledge. On stratifying performance by question difficulty, we find that synthetic data teaches LLMs to compose knowledge -- a fundamental and generalizable reasoning skill. Our work highlights rule-generated synthetic reasoning data as a free and scalable resource to improve LLM reasoning capabilities.
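To make the core idea concrete, here is a minimal sketch of what rule-generated, fully fictitious multi-hop QA data could look like. The paper's actual generation rules are not given on this page, so the entity names, relations, and question template below are illustrative assumptions: invent entities and single-hop facts, then chain two facts into one question whose answer is verifiable by construction, with no ground-truth annotation needed.

```python
import random

def make_fictitious_world(n_entities=6, seed=0):
    """Build a small fictional knowledge base of single-hop facts."""
    rng = random.Random(seed)
    people = [f"Zorv-{i}" for i in range(n_entities)]    # made-up names
    cities = [f"Qeltar-{i}" for i in range(n_entities)]  # made-up places
    # Single-hop facts: each person has a mentor and a birthplace.
    mentor = {p: rng.choice([q for q in people if q != p]) for p in people}
    born_in = {p: rng.choice(cities) for p in people}
    return people, mentor, born_in

def make_two_hop_question(person, mentor, born_in):
    """Compose two single-hop facts into one 2-hop QA pair."""
    answer = born_in[mentor[person]]
    question = f"In which city was the mentor of {person} born?"
    context = (f"The mentor of {person} is {mentor[person]}. "
               f"{mentor[person]} was born in {born_in[mentor[person]]}.")
    # The answer is known by construction, so it can serve directly
    # as a verifiable RL reward signal.
    return {"context": context, "question": question, "answer": answer}

people, mentor, born_in = make_fictitious_world()
example = make_two_hop_question(people[0], mentor, born_in)
```

Because every answer is derived from the same rules that generated the facts, reward checking reduces to exact string matching, avoiding the slow, inaccurate LLM-based verifiers the abstract criticizes.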
Problem

Research questions and friction points this paper is trying to address.

Reinforcement Learning
Synthetic Data
Multi-hop Reasoning
Large Language Models
Data Scarcity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synthetic Data
Reinforcement Learning
Multi-hop Reasoning
Rule-based Generation
Knowledge Composition