Causal Direct Preference Optimization for Distributionally Robust Generative Recommendation

πŸ“… 2026-03-20
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the vulnerability of conventional Direct Preference Optimization (DPO) to environmental confounders, which amplifies spurious correlations and degrades out-of-distribution (OOD) generalization. To mitigate this, the authors introduce causal invariance learning into the DPO framework for the first time. During preference alignment, they integrate backdoor adjustment with soft clustering to model latent environments, explicitly disentangling users’ stable preferences from environmental noise. An invariance constraint is then imposed to enhance robustness across diverse environments. Evaluated under four distribution shift settings, the proposed method improves recommendation performance by an average of 17.17%, demonstrating significantly stronger OOD generalization capabilities.

πŸ“ Abstract
Direct Preference Optimization (DPO) guides large language models (LLMs) to generate recommendations aligned with user historical behavior distributions by minimizing a preference alignment loss. However, our systematic empirical research and theoretical analysis reveal that DPO tends to amplify spurious correlations caused by environmental confounders during the alignment process, significantly undermining the generalization capability of LLM-based generative recommendation methods in out-of-distribution (OOD) scenarios. To mitigate this issue, we propose CausalDPO, an extension of DPO that incorporates a causal invariance learning mechanism. This method introduces a backdoor adjustment strategy during the preference alignment phase to eliminate interference from environmental confounders, explicitly models the latent environmental distribution using a soft clustering approach, and enhances robust consistency across diverse environments through invariance constraints. Theoretical analysis demonstrates that CausalDPO can effectively capture users' stable preference structures across multiple environments, thereby improving the OOD generalization performance of LLM-based recommendation models. We conduct extensive experiments under four representative distribution shift settings to validate the effectiveness of CausalDPO, achieving an average performance improvement of 17.17% across four evaluation metrics.
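The abstract describes three ingredients: the standard DPO alignment loss, a backdoor adjustment that marginalizes over latent environments, and an invariance constraint across those environments. A minimal sketch of how such an objective could be assembled is below; the paper's exact formulation is not given here, so the function names, the empirical estimate of P(e), and the variance-based invariance penalty are all illustrative assumptions.

```python
import numpy as np

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Per-sample DPO loss: -log sigmoid(beta * preference margin)."""
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    return -np.log(1.0 / (1.0 + np.exp(-beta * margin)))

def causal_dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l,
                    env_weights, lam=1.0, beta=0.1):
    """
    env_weights: (N, K) soft cluster assignments over K latent
    environments, each row summing to 1 (the soft clustering step).

    Backdoor adjustment (illustrative): average per-environment risk
    weighted by an empirical estimate of P(e), rather than letting the
    environment mix of the training data determine the objective.
    Invariance constraint (illustrative): penalize the variance of the
    per-environment risks so the model performs consistently across
    environments.
    """
    per_sample = dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta)
    mass = env_weights.sum(axis=0)                        # soft environment sizes
    env_risk = (env_weights * per_sample[:, None]).sum(axis=0) / mass
    p_env = mass / mass.sum()                             # empirical P(e)
    adjusted = (p_env * env_risk).sum()                   # backdoor-adjusted risk
    invariance = np.var(env_risk)                         # cross-env consistency
    return adjusted + lam * invariance
```

With uniform soft assignments the environment-wise risks coincide, the invariance penalty vanishes, and the objective reduces to the plain mean DPO loss; the causal machinery only changes the objective when the soft clustering actually separates the data into distinct environments.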
Problem

Research questions and friction points this paper is trying to address.

Direct Preference Optimization
spurious correlations
environmental confounders
out-of-distribution generalization
generative recommendation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Causal Invariance Learning
Backdoor Adjustment
Distributionally Robust Recommendation
Out-of-Distribution Generalization
Direct Preference Optimization