Generalization of RLVR Using Causal Reasoning as a Testbed

📅 2025-12-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the generalization capability of Reinforcement Learning with Verifiable Rewards (RLVR) in formal causal reasoning tasks, focusing on robustness across associational, interventional, and counterfactual queries under probabilistic causal graph models. Methodologically, we construct a multi-level causal graph query dataset and conduct controlled comparisons between RLVR and Supervised Fine-Tuning (SFT) using Qwen-2.5-Instruct (3B–32B). To our knowledge, this is the first work to systematically evaluate RLVR’s generalization on structured causal reasoning. We find that RLVR’s gains critically depend on the model’s initial reasoning capacity and specifically enhance marginalization strategies and intermediate probability computation. Notably, RLVR significantly outperforms SFT on higher-order counterfactual and complex subgraph queries—achieving substantial accuracy improvements—thereby demonstrating its targeted enhancement of core causal sub-skills.

📝 Abstract
Reinforcement learning with verifiable rewards (RLVR) has emerged as a promising paradigm for post-training large language models (LLMs) on complex reasoning tasks. Yet, the conditions under which RLVR yields robust generalization remain poorly understood. This paper provides an empirical study of RLVR generalization in the setting of probabilistic inference over causal graphical models. This setting offers two natural axes along which to examine generalization: (i) the level of the probabilistic query -- associational, interventional, or counterfactual -- and (ii) the structural complexity of the query, measured by the size of its relevant subgraph. We construct datasets of causal graphs and queries spanning these difficulty axes and fine-tune Qwen-2.5-Instruct models using RLVR or supervised fine-tuning (SFT). We vary both the model scale (3B-32B) and the query level included in training. We find that RLVR yields stronger within-level and across-level generalization than SFT, but only for specific combinations of model size and training query level. Further analysis shows that RLVR's effectiveness depends on the model's initial reasoning competence. With sufficient initial competence, RLVR improves an LLM's marginalization strategy and reduces errors in intermediate probability calculations, producing substantial accuracy gains, particularly on more complex queries. These findings show that RLVR can improve specific causal reasoning subskills, with its benefits emerging only when the model has sufficient initial competence.
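To make the query levels concrete, here is a minimal sketch (not the paper's dataset or code; the graph and probabilities are hypothetical) of the kind of marginalization an LLM must carry out: on a binary chain X → Z → Y, an interventional query P(Y=1 | do(X=x)) is answered by summing out Z, and the associational query P(Y=1) additionally marginalizes over X.

```python
# Hypothetical CPTs for a 3-node chain X -> Z -> Y (illustrative only):
# P(X=1), P(Z=1 | X=x), P(Y=1 | Z=z)
p_x1 = 0.5
p_z1_given_x = {0: 0.2, 1: 0.8}
p_y1_given_z = {0: 0.1, 1: 0.9}

def p_y1_do_x(x):
    """Interventional P(Y=1 | do(X=x)): drop X's own distribution
    (truncated factorization) and marginalize over Z."""
    total = 0.0
    for z in (0, 1):
        p_z = p_z1_given_x[x] if z == 1 else 1 - p_z1_given_x[x]
        total += p_z * p_y1_given_z[z]
    return total

def p_y1_observational():
    """Associational P(Y=1): marginalize over both X and Z.
    In a chain with no confounders, do(x) coincides with conditioning on x."""
    total = 0.0
    for x in (0, 1):
        p_x = p_x1 if x == 1 else 1 - p_x1
        total += p_x * p_y1_do_x(x)
    return total

print(p_y1_do_x(1))          # 0.8*0.9 + 0.2*0.1 = 0.74
print(p_y1_observational())  # 0.5*0.74 + 0.5*0.26 = 0.5
```

The "structural complexity" axis in the paper corresponds to how large the relevant subgraph is, i.e., how many variables must be summed out in loops like the ones above.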
Problem

Research questions and friction points this paper is trying to address.

Examines RLVR generalization in causal reasoning tasks
Tests generalization across query levels and structural complexities
Identifies conditions for RLVR's effectiveness based on model competence
Innovation

Methods, ideas, or system contributions that make the work stand out.

RLVR improves generalization over SFT for causal reasoning
Effectiveness depends on model size and initial reasoning competence
Enhances marginalization strategy and reduces calculation errors