🤖 AI Summary
Existing group-wise policy optimization methods (e.g., GRPO) treat candidate responses as independent samples, neglecting their implicit semantic interactions, such as complementarity and contradiction. To address this, this work pioneers the integration of causal modeling into targeted post-training of large language models. We propose an optimization framework based on a structural causal model (SCM) that explicitly captures response-level semantic dependencies by (i) constructing a causal dependency graph among responses; (ii) designing a causal reward adjustment mechanism; and (iii) introducing a KL divergence regularization term relative to a causally projected reference distribution. Experiments across multiple reasoning benchmarks demonstrate that our method significantly outperforms GRPO and other baselines in both effectiveness and robustness. These results validate the value of a causal perspective for group-wise policy optimization and establish a principled foundation for modeling inter-response semantics in LLM alignment.
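To give component (i) some shape, here is a minimal Python sketch of one way a response dependency graph and the resulting causal baseline could be built. The summary does not specify the actual construction, so everything here is an assumption: the use of pairwise response-embedding cosine similarity, the threshold `tau`, the equal-weight mixing of own and neighbor rewards, and both function names are hypothetical illustrations, not the paper's method.

```python
import numpy as np

def build_dependency_graph(embeddings: np.ndarray, tau: float = 0.5) -> np.ndarray:
    """Hypothetical construction: connect responses whose embedding
    cosine similarity exceeds a threshold tau (the paper's actual
    criterion is not given in this summary)."""
    # Normalize rows so dot products become cosine similarities.
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / np.clip(norms, 1e-8, None)
    sim = unit @ unit.T
    adj = (sim > tau).astype(float)
    np.fill_diagonal(adj, 0.0)  # drop self-loops
    return adj

def causal_projection(rewards: np.ndarray, adj: np.ndarray) -> np.ndarray:
    """Hypothetical projection: each response's baseline mixes its own
    reward with the mean reward of its graph neighbors."""
    degree = adj.sum(axis=1, keepdims=True)
    neighbor_mean = (adj @ rewards[:, None]) / np.clip(degree, 1.0, None)
    return 0.5 * rewards + 0.5 * neighbor_mean[:, 0]

# Toy usage: 4 candidate responses with random embeddings.
emb = np.random.randn(4, 16)
r = np.array([1.0, 0.8, 0.2, 0.0])
baseline = causal_projection(r, build_dependency_graph(emb))
```

The point of the sketch is only the structural contrast with GRPO: the baseline for each response depends on which other responses it is semantically linked to, not just on the group mean.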
📝 Abstract
Recent advances in large language models (LLMs) have broadened their applicability across diverse tasks, yet specialized domains still require targeted post-training. Among existing methods, Group Relative Policy Optimization (GRPO) stands out for its efficiency, leveraging group-wise relative rewards while avoiding costly value function learning. However, GRPO treats candidate responses as independent, overlooking semantic interactions such as complementarity and contradiction. To address this challenge, we first introduce a Structural Causal Model (SCM) that reveals hidden dependencies among candidate responses: conditioning on a final integrated output forms a collider structure that couples otherwise independent responses. Our causal analysis then yields two insights: (1) projecting responses onto a causally informed subspace improves prediction quality, and (2) this projection yields a better baseline than query-only conditioning. Building on these insights, we propose Group Causal Policy Optimization (GCPO), which integrates causal structure into optimization through two key components: a causally informed reward adjustment and a novel KL regularization term that aligns the policy with a causally projected reference distribution. Comprehensive experiments demonstrate that GCPO consistently surpasses existing methods, including GRPO, across multiple reasoning benchmarks.
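To make the two GCPO components concrete, below is a minimal PyTorch-style sketch of what the resulting objective could look like. This is an illustration under stated assumptions, not the paper's exact formulation: the function name `gcpo_loss`, its signature, the sample-based KL estimate, and the coefficient `beta` are all introduced here for exposition.

```python
import torch

def gcpo_loss(logp_policy: torch.Tensor,
              logp_ref_proj: torch.Tensor,
              rewards: torch.Tensor,
              causal_baseline: torch.Tensor,
              beta: float = 0.1) -> torch.Tensor:
    """Conceptual GCPO-style objective for one group of G responses.

    logp_policy:     (G,) summed log-probs of each response under the
                     current policy
    logp_ref_proj:   (G,) log-probs under the causally projected
                     reference distribution (hypothetical quantity)
    rewards:         (G,) scalar rewards for the group
    causal_baseline: (G,) per-response baseline from the causal
                     projection step
    """
    # Causally informed reward adjustment: advantage against the
    # causal baseline rather than GRPO's plain group mean.
    advantages = rewards - causal_baseline
    pg_term = -(advantages.detach() * logp_policy).mean()
    # Sample-based estimate of KL(policy || projected reference),
    # replacing the usual KL to the raw reference model.
    kl_term = (logp_policy - logp_ref_proj).mean()
    return pg_term + beta * kl_term
```

The design choice this sketch highlights is that both departures from GRPO live in the same loss: the advantage is recentered by the causal projection, and the KL anchor is the projected reference distribution rather than the unmodified reference policy.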