🤖 AI Summary
This study investigates the generalization capability of Reinforcement Learning with Verifiable Rewards (RLVR) on formal causal reasoning tasks, focusing on robustness across associational, interventional, and counterfactual queries over probabilistic causal graph models. Methodologically, we construct a multi-level causal graph query dataset and run controlled comparisons between RLVR and Supervised Fine-Tuning (SFT) on Qwen-2.5-Instruct models (3B–32B). To our knowledge, this is the first work to systematically evaluate RLVR's generalization on structured causal reasoning. We find that RLVR's gains depend critically on the model's initial reasoning competence: when that competence is present, RLVR improves the model's marginalization strategy and reduces errors in intermediate probability calculations. Notably, RLVR substantially outperforms SFT on higher-order counterfactual and complex-subgraph queries, demonstrating targeted enhancement of core causal reasoning subskills.
📝 Abstract
Reinforcement learning with verifiable rewards (RLVR) has emerged as a promising paradigm for post-training large language models (LLMs) on complex reasoning tasks. Yet, the conditions under which RLVR yields robust generalization remain poorly understood. This paper provides an empirical study of RLVR generalization in the setting of probabilistic inference over causal graphical models. This setting offers two natural axes along which to examine generalization: (i) the level of the probabilistic query -- associational, interventional, or counterfactual -- and (ii) the structural complexity of the query, measured by the size of its relevant subgraph. We construct datasets of causal graphs and queries spanning these difficulty axes and fine-tune Qwen-2.5-Instruct models using RLVR or supervised fine-tuning (SFT). We vary both the model scale (3B-32B) and the query level included in training. We find that RLVR yields stronger within-level and across-level generalization than SFT, but only for specific combinations of model size and training query level. Further analysis shows that RLVR's effectiveness depends on the model's initial reasoning competence. With sufficient initial competence, RLVR improves an LLM's marginalization strategy and reduces errors in intermediate probability calculations, producing substantial accuracy gains, particularly on more complex queries. These findings show that RLVR can improve specific causal reasoning subskills, but its benefits emerge only when the model starts with sufficient reasoning competence.
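The three query levels above follow Pearl's causal hierarchy. As an illustrative sketch (not taken from the paper; the graph, mechanisms, and noise probabilities are assumed toy values), the structural causal model below shows how the same graph answers differently at each level: an associational query by conditioning, an interventional query by replacing a mechanism with the do-operator, and a counterfactual query by abduction-action-prediction.

```python
from itertools import product

# Toy SCM over Z -> X, Z -> Y, X -> Y (assumed for illustration):
#   Z <- U_Z,  X <- Z xor U_X,  Y <- (X and Z) xor U_Y
p_uz, p_ux, p_uy = 0.5, 0.2, 0.1  # exogenous noise priors (assumed)

def run(uz, ux, uy, do_x=None):
    """Evaluate the structural equations, optionally intervening on X."""
    z = uz
    x = do_x if do_x is not None else (z ^ ux)
    y = (x & z) ^ uy
    return z, x, y

def prob(u):
    uz, ux, uy = u
    return ((p_uz if uz else 1 - p_uz) *
            (p_ux if ux else 1 - p_ux) *
            (p_uy if uy else 1 - p_uy))

worlds = list(product([0, 1], repeat=3))  # enumerate all noise settings

# Level 1, associational: P(Y=1 | X=1) by conditioning on observed X
num = sum(prob(u) for u in worlds if run(*u)[1] == 1 and run(*u)[2] == 1)
den = sum(prob(u) for u in worlds if run(*u)[1] == 1)
assoc = num / den                                   # 0.74

# Level 2, interventional: P(Y=1 | do(X=1)) by replacing X's mechanism
interv = sum(prob(u) for u in worlds if run(*u, do_x=1)[2] == 1)  # 0.5

# Level 3, counterfactual: P(Y_{X=0}=1 | X=1, Y=1)
# Abduction: posterior over noise given evidence; action+prediction: do(X=0)
post = {u: prob(u) for u in worlds if run(*u)[1] == 1 and run(*u)[2] == 1}
zsum = sum(post.values())
cf = sum(p / zsum for u, p in post.items() if run(*u, do_x=0)[2] == 1)
```

Because Z confounds X and Y, the associational answer (0.74) differs from the interventional one (0.5), which is exactly the gap between the first two levels of the hierarchy that the paper's query dataset probes.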