Learning to Reason in Structured In-context Environments with Reinforcement Learning

📅 2025-09-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current LLM reasoning environments face three critical bottlenecks: poor scalability due to reliance on expert annotation, weak generalization in gamified settings, and the absence of formal verification mechanisms. To address these, we propose the Structured In-context Environment (SIE) framework—the first to enable automated construction of reasoning environments from large-scale structured data with verifiable generation of domain rules. SIE explicitly models schemas and compositional reasoning chains, supporting exploratory inference under partial observability and multi-step logical deduction. Empirically, SIE achieves significant performance gains on structured reasoning benchmarks. Moreover, it successfully transfers compositional reasoning capabilities to cross-domain tasks—including mathematical theorem proving and formal logic—demonstrating strong generalization and robustness. Crucially, all generated rules are formally verifiable, ensuring correctness and interpretability.

📝 Abstract
Large language models (LLMs) have achieved significant advancements in reasoning capabilities through reinforcement learning (RL) via environmental exploration. As the intrinsic properties of the environment determine the abilities that LLMs can learn, the environment plays an important role in the RL finetuning process. An ideal LLM reasoning environment should possess three core characteristics: scalability, generalizable reasoning, and verifiability. However, existing mathematical and coding environments are difficult to scale due to heavy reliance on expert annotation, while the skills learned in game-based environments are too specialized to generalize. To bridge this gap, we introduce the **S**tructured **I**n-context **E**nvironment (SIE) framework. SIE achieves scalability by automatically constructing reasoning environments from large-scale structured data, where the rich compositional patterns naturally support generalizable reasoning. Moreover, the explicit schemas and reasoning chains in structured data provide a foundation for rule-based verifiability. Experimental results show that the SIE framework not only achieves substantial improvements in in-domain structured reasoning, but also enables the learned compositional reasoning skills to generalize effectively to out-of-domain mathematical and logical reasoning tasks. We further explored learning in information-limited partial SIEs and found that LLMs can infer the missing information by exploring the environment, leading to robust reasoning improvements and generalization performance.
Problem

Research questions and friction points this paper is trying to address.

Creating scalable reasoning environments for LLMs without expert annotation
Enabling generalizable reasoning skills across mathematical and logical tasks
Developing verifiable reasoning chains through structured data exploration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automatically constructs reasoning environments from structured data
Enables scalable reinforcement learning for language models
Supports verifiable reasoning through explicit schemas and chains
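The pipeline the innovations above describe—deriving multi-hop tasks from structured data and checking answers against explicit relation chains—can be sketched in miniature. This is a hypothetical illustration, not the paper's released code: the entities, relations, and the composition rule below are invented for the example.

```python
# Hypothetical sketch (not the paper's code): a tiny verifiable reasoning
# environment built from structured triple data, in the spirit of SIE.
# All entities, relations, and rules here are illustrative assumptions.

FACTS = {
    ("alice", "mother_of"): "bob",
    ("bob", "father_of"): "carol",
}

# Compositional rule: mother_of composed with father_of => grandmother_of
RULES = {("mother_of", "father_of"): "grandmother_of"}

def compose(subject, rel_chain):
    """Follow a chain of relations through the structured data."""
    entity = subject
    for rel in rel_chain:
        entity = FACTS.get((entity, rel))
        if entity is None:
            return None  # chain breaks: the partial-observability case
    return entity

def make_task(subject, rel_chain):
    """Derive a multi-hop question whose gold answer is rule-verifiable."""
    composed = RULES.get(tuple(rel_chain), " then ".join(rel_chain))
    return {
        "question": f"{subject} is the {composed} of whom?",
        "answer": compose(subject, rel_chain),
    }

def reward(task, model_answer):
    """Binary RL reward: 1.0 iff the answer matches the verified chain."""
    return 1.0 if model_answer == task["answer"] else 0.0

task = make_task("alice", ["mother_of", "father_of"])
print(task["question"])       # alice is the grandmother_of of whom?
print(reward(task, "carol"))  # 1.0
```

Because every answer is recomputed by walking the relation chain, the reward is verifiable by construction, and new tasks scale with the size of the structured data rather than with expert annotation.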
Peng Yu
Shanghai Jiao Tong University
Zeyuan Zhao
Shanghai Jiao Tong University
Shao Zhang
PhD Candidate, Shanghai Jiao Tong University
Human-AI Collaboration · Multi-Agent System · Language Agent
Luoyi Fu
Shanghai Jiao Tong University
Xinbing Wang
Shanghai Jiao Tong University
Ying Wen
Associate Professor, Shanghai Jiao Tong University
Multi-Agent Learning · Reinforcement Learning