🤖 AI Summary
This work addresses the vulnerability of conventional Direct Preference Optimization (DPO) to environmental confounders, which amplify spurious correlations and degrade out-of-distribution (OOD) generalization. To mitigate this, the authors introduce causal invariance learning into the DPO framework for the first time. During preference alignment, they integrate backdoor adjustment with soft clustering to model latent environments, explicitly disentangling users' stable preferences from environmental noise. An invariance constraint is then imposed to enhance robustness across diverse environments. Evaluated under four distribution-shift settings, the proposed method improves recommendation performance by an average of 17.17%, demonstrating significantly stronger OOD generalization.
📝 Abstract
Direct Preference Optimization (DPO) guides large language models (LLMs) to generate recommendations aligned with users' historical behavior distributions by minimizing a preference alignment loss. However, our systematic empirical study and theoretical analysis reveal that DPO tends to amplify spurious correlations caused by environmental confounders during alignment, significantly undermining the generalization of LLM-based generative recommendation methods in out-of-distribution (OOD) scenarios. To mitigate this issue, we propose CausalDPO, an extension of DPO that incorporates a causal invariance learning mechanism. It introduces a backdoor adjustment strategy during the preference alignment phase to eliminate interference from environmental confounders, explicitly models the latent environment distribution with a soft clustering approach, and enforces robust consistency across diverse environments through invariance constraints. Theoretical analysis shows that CausalDPO can effectively capture users' stable preference structures across multiple environments, thereby improving the OOD generalization of LLM-based recommendation models. We conduct extensive experiments under four representative distribution-shift settings to validate the effectiveness of CausalDPO, achieving an average performance improvement of 17.17% across four evaluation metrics.
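To make the described objective concrete, the sketch below combines the standard DPO pairwise loss with soft-clustered latent environments and an invariance penalty, in the spirit of the abstract. This is a minimal illustration, not the authors' implementation: the function name `causal_dpo_loss`, the use of loss variance across environments as the invariance constraint, and the normalization of cluster responsibilities are all assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def causal_dpo_loss(margins, env_resp, beta=0.1, lam=1.0):
    """Hedged sketch of a CausalDPO-style objective (not the paper's exact loss).

    margins:  (N,) DPO log-ratio margins, i.e.
              log pi(y_w|x)/pi_ref(y_w|x) - log pi(y_l|x)/pi_ref(y_l|x)
    env_resp: (N, K) soft-clustering responsibilities over K latent environments
    beta:     DPO temperature
    lam:      weight of the cross-environment invariance penalty (assumed form)
    """
    # Standard DPO loss per preference pair
    per_example = -np.log(sigmoid(beta * margins))
    # Normalize responsibilities within each environment so each column
    # defines a weighted average over examples
    weights = env_resp / env_resp.sum(axis=0, keepdims=True)
    # Per-environment expected loss, (K,)
    env_losses = weights.T @ per_example
    # Invariance penalty: discourage environments from disagreeing on the loss
    invariance = np.var(env_losses)
    return env_losses.mean() + lam * invariance

# Toy usage with random margins and soft cluster assignments
rng = np.random.default_rng(0)
margins = rng.normal(size=8)
resp = rng.random((8, 3))
resp = resp / resp.sum(axis=1, keepdims=True)  # rows sum to 1
loss = causal_dpo_loss(margins, resp)
```

When the responsibilities are uniform, every environment sees the same weighted loss, the variance term vanishes, and the objective reduces to plain DPO; heterogeneous environments pay an extra penalty, pushing the model toward preferences that hold everywhere.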