🤖 AI Summary
In recommender systems, latent confounders obscure the causal relationship between user feedback and item exposure, severely degrading recommendation performance. To address this, we propose LCDR, a novel generative recommendation framework that, for the first time, integrates identifiable variational autoencoders (iVAEs) into causal modeling. LCDR explicitly disentangles latent confounders via a causal representation alignment constraint. Crucially, it requires only weak or noisy proxy variables, avoiding strong assumptions such as the availability of instrumental variables, thereby significantly enhancing the practical feasibility of causal debiasing in real-world settings. Extensive experiments on three real-world datasets demonstrate that LCDR consistently outperforms state-of-the-art methods in Recall@10 and NDCG@10 while effectively mitigating confounding bias. These results validate both the efficacy and the generalizability of causally constrained generative modeling for recommendation.
📝 Abstract
Accurately predicting counterfactual user feedback is essential for building effective recommender systems. However, latent confounding bias can obscure the true causal relationship between user feedback and item exposure, ultimately degrading recommendation performance. Existing causal debiasing approaches often rely on strong assumptions, such as the availability of instrumental variables (IVs) or strong correlations between latent confounders and proxy variables, that are rarely satisfied in real-world scenarios. To address these limitations, we propose a novel generative framework called Latent Causality Constraints for Debiasing representation learning in Recommender Systems (LCDR). Specifically, LCDR leverages an identifiable variational autoencoder (iVAE) as a causal constraint to align the latent representations learned by a standard variational autoencoder (VAE) through a unified loss function. This alignment allows the model to exploit even weak or noisy proxy variables to recover latent confounders effectively. The resulting representations are then used to improve recommendation performance. Extensive experiments on three real-world datasets demonstrate that LCDR consistently outperforms existing methods in both mitigating bias and improving recommendation accuracy.
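The abstract describes a unified loss that combines standard VAE objectives with an alignment constraint pulling the VAE's latents toward those recovered by the iVAE. The paper does not spell out the exact functional form, so the following is only a minimal sketch under assumed choices: a squared-error alignment penalty, a hypothetical weight `lam` trading off alignment against the ELBO terms, and scalar stand-ins for the reconstruction and KL terms.

```python
def unified_loss(recon_err, kl_vae, z_vae, z_ivae, lam=1.0):
    """Hypothetical unified objective, sketched from the abstract:
    standard VAE ELBO terms (reconstruction + KL) plus an alignment
    penalty that pulls the VAE latents z_vae toward the latents
    z_ivae recovered by the iVAE from proxy variables.

    recon_err, kl_vae : precomputed scalar ELBO terms
    z_vae, z_ivae     : latent vectors of equal length
    lam               : assumed trade-off weight (not from the paper)
    """
    # Mean squared distance between the two latent representations.
    align = sum((a - b) ** 2 for a, b in zip(z_vae, z_ivae)) / len(z_vae)
    return recon_err + kl_vae + lam * align

# When the latents already agree, the penalty vanishes and the
# objective reduces to the plain VAE loss.
print(unified_loss(1.0, 0.5, [0.0, 0.0], [0.0, 0.0]))  # → 1.5
print(unified_loss(1.0, 0.5, [1.0, 0.0], [0.0, 0.0], lam=2.0))  # → 2.5
```

In practice the two encoders would be trained jointly, with gradients from the alignment term flowing into the VAE so that even weak proxy information shapes its latent space.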