Towards Generalizable Reasoning: Group Causal Counterfactual Policy Optimization for LLM Reasoning

📅 2026-02-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses a limitation of existing large language models: reward mechanisms that overly prioritize final-answer correctness while neglecting the quality of the reasoning process, thereby constraining generalization. The paper introduces a causal counterfactual framework that treats multi-candidate reasoning as a population-level counterfactual experiment, proposing a novel reward mechanism that jointly optimizes reasoning-process validity and cross-problem transferability. The approach integrates episodic causal counterfactual reward modeling, token-level advantage estimation, and policy-gradient optimization to enable end-to-end training of robust and generalizable reasoning patterns. Extensive experiments across multiple reasoning benchmarks demonstrate significant improvements in generalization performance, validating the effectiveness of the proposed method in fostering coherent and robust reasoning processes.

📝 Abstract
Large language models (LLMs) excel at complex tasks thanks to advances in reasoning capabilities. However, existing reward mechanisms remain tightly coupled to final correctness and pay little attention to the underlying reasoning process: trajectories with sound reasoning but wrong answers receive low credit, while lucky guesses with flawed logic may be highly rewarded, hurting reasoning generalization. From a causal perspective, we interpret multi-candidate reasoning for a fixed question as a family of counterfactual experiments with theoretical support. Building on this, we propose Group Causal Counterfactual Policy Optimization to explicitly train LLMs to learn generalizable reasoning patterns. It introduces an episodic causal counterfactual reward that jointly captures (i) robustness, encouraging the answer distribution induced by a reasoning step to remain stable under counterfactual perturbations; and (ii) effectiveness, enforcing sufficient variability so that the learned reasoning strategy can transfer across questions. We then construct token-level advantages from this reward and optimize the policy, encouraging LLMs to favor reasoning patterns that are process-valid and counterfactually robust. Extensive experiments on diverse benchmarks demonstrate its advantages.
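The abstract describes a reward that balances robustness (answer-distribution stability under counterfactual perturbations) with effectiveness (sufficient variability), and group-relative advantages built from it. The sketch below is a toy illustration of that idea, not the paper's actual estimator: the agreement ratio, entropy term, mixing weight `alpha`, and mean-baseline advantage are all assumptions chosen for simplicity.

```python
import math
from collections import Counter

def counterfactual_reward(answers, alpha=0.5):
    """Toy episodic reward for one reasoning step.

    `answers` holds final answers from counterfactual continuations of
    the same step. Robustness is proxied by agreement with the modal
    answer; effectiveness by normalized entropy of the answer
    distribution. Both proxies and `alpha` are illustrative choices.
    """
    counts = Counter(answers)
    n = len(answers)
    robustness = max(counts.values()) / n            # stability under perturbations
    probs = [c / n for c in counts.values()]
    entropy = -sum(p * math.log(p) for p in probs)
    max_entropy = math.log(n) if n > 1 else 1.0
    effectiveness = entropy / max_entropy            # 0 = degenerate, 1 = maximal spread
    return alpha * robustness + (1 - alpha) * effectiveness

def group_advantages(rewards):
    """Group-relative advantage: each trajectory's reward minus the
    group mean (a common GRPO-style baseline; the paper's token-level
    construction may differ)."""
    mean = sum(rewards) / len(rewards)
    return [r - mean for r in rewards]
```

A fully stable step (all counterfactual continuations agreeing on one answer) maximizes the robustness term, while a degenerate always-same strategy scores zero on effectiveness; the advantages sum to zero within each group, as expected for a mean baseline.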
Problem

Research questions and friction points this paper is trying to address.

reasoning generalization
reward mechanism
causal counterfactual
large language models
reasoning process
Innovation

Methods, ideas, or system contributions that make the work stand out.

Causal Inference
Counterfactual Reasoning
Policy Optimization
Reasoning Generalization
Large Language Models