Group Causal Policy Optimization for Post-Training Large Language Models

📅 2025-08-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing group-wise policy optimization methods (e.g., GRPO) treat candidate responses as independent samples, neglecting their implicit semantic interactions, such as complementarity and contradiction. To address this gap, this work pioneers the integration of causal modeling into targeted post-training of large language models. We propose a structural causal model (SCM)-based optimization framework that explicitly captures response-level semantic dependencies by (i) constructing a causal dependency graph among responses, (ii) designing a causal reward adjustment mechanism, and (iii) introducing a KL divergence regularization term relative to a causally projected reference distribution. Experiments across multiple reasoning benchmarks demonstrate that our method significantly outperforms GRPO and other baselines in both effectiveness and robustness. These results validate the critical value of a causal perspective for group-wise policy optimization, establishing a principled foundation for modeling inter-response semantics in LLM alignment.

📝 Abstract
Recent advances in large language models (LLMs) have broadened their applicability across diverse tasks, yet specialized domains still require targeted post-training. Among existing methods, Group Relative Policy Optimization (GRPO) stands out for its efficiency, leveraging group-wise relative rewards while avoiding costly value-function learning. However, GRPO treats candidate responses as independent, overlooking semantic interactions such as complementarity and contradiction. To address this challenge, we first introduce a Structural Causal Model (SCM) that reveals hidden dependencies among candidate responses induced by conditioning on a final integrated output, which forms a collider structure. Our causal analysis then leads to two insights: (1) projecting responses onto a causally informed subspace improves prediction quality, and (2) this projection yields a better baseline than query-only conditioning. Building on these insights, we propose Group Causal Policy Optimization (GCPO), which integrates causal structure into optimization through two key components: a causally informed reward adjustment and a novel KL regularization term that aligns the policy with a causally projected reference distribution. Comprehensive experimental evaluations demonstrate that GCPO consistently surpasses existing methods, including GRPO, across multiple reasoning benchmarks.
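The abstract names GCPO's two components (a causally informed reward adjustment over a response dependency graph, and a KL term against a causally projected reference distribution) but does not give the exact update rule. The toy sketch below illustrates one plausible reading; the dependency-weighting scheme, the `alpha` mixing coefficient, and the linear projection matrix standing in for the causal projection are all illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def causal_group_advantage(rewards, dep_graph, alpha=0.5):
    """Toy causally adjusted group advantage. dep_graph[i][j] = 1 means
    response i causally depends on response j; each response's reward is
    mixed with the mean reward of its dependencies (assumed scheme), then
    the group mean is subtracted, as in GRPO's group-relative baseline."""
    r = np.asarray(rewards, dtype=float)
    G = np.asarray(dep_graph, dtype=float)
    row_sums = G.sum(axis=1, keepdims=True)
    G_norm = G / np.where(row_sums == 0, 1.0, row_sums)  # row-normalize
    adjusted = r + alpha * (G_norm @ r)  # propagate dependency rewards
    return adjusted - adjusted.mean()    # group-relative advantage

def kl_to_projected_reference(policy_probs, ref_probs, projection):
    """KL(policy || P @ ref): regularize the policy toward a *projected*
    reference distribution. The projection matrix P is a stand-in for the
    paper's causal projection, which this summary does not specify."""
    p = np.asarray(policy_probs, dtype=float)
    proj_ref = np.asarray(projection, dtype=float) @ np.asarray(ref_probs, dtype=float)
    proj_ref = proj_ref / proj_ref.sum()  # renormalize after projection
    return float(np.sum(p * np.log(p / proj_ref)))
```

With an identity projection the KL term reduces to ordinary KL against the reference policy, and with `alpha=0` the advantage reduces to GRPO's plain group-relative baseline, which is consistent with the abstract's framing of GCPO as GRPO plus causal structure.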
Problem

Research questions and friction points this paper is trying to address.

Addresses overlooked semantic interactions in candidate responses
Proposes causal model to reveal hidden response dependencies
Improves policy optimization with causally informed adjustments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structural Causal Model reveals hidden dependencies
Causally informed subspace improves prediction quality
KL regularization aligns with causal projection
Ziyin Gu
Institute of Software, Chinese Academy of Sciences, Beijing, China
Jingyao Wang
Institute of Software, Chinese Academy of Sciences, Beijing, China
Ran Zuo
Communication University of China, Beijing, China
Chuxiong Sun
Institute of Software, Chinese Academy of Sciences, Beijing, China
Zeen Song
Institute of Software, Chinese Academy of Sciences
Machine Learning
Changwen Zheng
Institute of Software, Chinese Academy of Sciences
Machine Learning, Computer Simulation
Wenwen Qiang
Institute of Software, Chinese Academy of Sciences
Artificial Intelligence, Machine Learning, Causal Inference, LLM/MLLM