$π^2$: Structure-Originated Reasoning Data Improves Long-Context Reasoning Ability of Large Language Models

📅 2026-04-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited multi-hop reasoning capability of large language models in long-context scenarios by proposing a novel paradigm that automatically generates verifiable multi-hop question-answer pairs from structured data, such as Wikipedia tables. The approach integrates a dual-path code execution mechanism to verify answer correctness and back-translates structured reasoning traces into natural language to serve as supervision signals for both supervised fine-tuning and self-distillation. Experimental results demonstrate average absolute accuracy gains of 4.3% with gpt-oss-20b and 2.7% with Qwen3-4B-Instruct-2507 across four established long-context reasoning benchmarks and the newly introduced π²-Bench. Furthermore, self-distillation yields an additional performance gain of 4.4%, highlighting the effectiveness of the proposed framework in enhancing complex reasoning over extended contexts.
📝 Abstract
We study a pipeline that curates reasoning data from seed structured data for improving long-context reasoning in large language models (LLMs). Our approach, $π^2$, constructs high-quality reasoning data through rigorous QA curation: 1) extracting and expanding tables from Wikipedia, 2) generating, from the collected tables and relevant context, realistic multi-hop analytical reasoning questions whose answers are automatically determined and verified through dual-path code execution, and 3) back-translating step-by-step structured reasoning traces into solutions for the QA pairs given realistic web-search context. Supervised fine-tuning with \textsc{\small{gpt-oss-20b}} and \textsc{\small{Qwen3-4B-Instruct-2507}} on $π^2$ yields consistent improvements across four long-context reasoning benchmarks and our companion $π^2$-Bench, with average absolute accuracy gains of +4.3% and +2.7% respectively. Notably, our dataset facilitates self-distillation, where \textsc{\small{gpt-oss-20b}} even improves its average performance by +4.4% with its own reasoning traces, demonstrating $π^2$'s usefulness. Our code, data, and models are open-source at https://github.com/vt-pi-squared/pi-squared.
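The "dual-path code execution" verification described above can be sketched as follows: an answer is accepted only when two structurally different programs, executed independently over the same table, agree on the result. This is a minimal illustration of that consistency check; the function names, sample table, and example question are all assumptions of this sketch, not taken from the paper's released code.

```python
# Dual-path verification sketch: two independent programs answer the same
# question over a table; the answer is kept only if both paths agree.

def path_a(table):
    # Path A: filter rows, then aggregate directly.
    rows = [r for r in table if r["region"] == "EU"]
    return max(r["population"] for r in rows)

def path_b(table):
    # Path B: same question via sort-and-pick, a structurally different
    # program that should produce the same value if the answer is correct.
    rows = sorted((r for r in table if r["region"] == "EU"),
                  key=lambda r: r["population"], reverse=True)
    return rows[0]["population"]

def verify(table):
    """Return the answer if both execution paths agree, else None."""
    a, b = path_a(table), path_b(table)
    return a if a == b else None

# Toy table standing in for an extracted Wikipedia table.
table = [
    {"city": "Berlin", "region": "EU",     "population": 3_700_000},
    {"city": "Paris",  "region": "EU",     "population": 2_100_000},
    {"city": "Oslo",   "region": "non-EU", "population": 700_000},
]
print(verify(table))  # both paths agree -> 3700000
```

Disagreement between the two paths flags a question whose auto-derived answer cannot be trusted, so such QA pairs would be discarded rather than used as supervision.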
Problem

Research questions and friction points this paper is trying to address.

long-context reasoning
large language models
reasoning data
structured data
QA curation
Innovation

Methods, ideas, or system contributions that make the work stand out.

structured reasoning
long-context reasoning
multi-hop QA generation
dual-path verification
self-distillation
Quyet V. Do
Virginia Tech, VA 24061, USA
Thinh Pham
Virginia Tech
NLP · LLMs · Agents
Nguyen Nguyen
Virginia Tech, VA 24061, USA
Sha Li
Virginia Tech, VA 24061, USA
Pratibha Zunjare
Virginia Tech, VA 24061, USA
Tu Vu
Research Scientist, Google DeepMind; Assistant Professor, Virginia Tech
Natural Language Processing · Large Language Models · Transfer Learning