🤖 AI Summary
In recommender systems, unobserved confounders—such as environmental factors and system-level biases—distort user preference modeling. To address this, we propose a latent-space disentanglement framework that, for the first time under unsupervised settings, enables identifiable causal structure learning of unobserved confounders. Our method constructs a label-free, joint local-global causal graph that explicitly models both confounding effects and the generative process of user preferences. By intervening on learned confounder representations, it supports controllable and diverse recommendation adjustments. Technically, the approach integrates variational autoencoders with structural causal modeling to jointly achieve latent disentanglement and causal inference. Evaluated on nine real-world and one synthetic dataset, it outperforms state-of-the-art baselines by an average of 9.55% in recommendation accuracy. We provide theoretical guarantees for causal graph identifiability and release open-source code to facilitate reproducibility and controlled experimentation.
📝 Abstract
Inferring user preferences from users’ historical feedback is a valuable problem in recommender systems. Conventional approaches often rely on the assumption that user preferences in the feedback data are equivalent to the real user preferences without additional noise, which simplifies the problem modeling. However, there are various confounders during user-item interactions, such as weather and even the recommendation system itself. Therefore, neglecting the influence of confounders will result in inaccurate user preferences and suboptimal performance of the model. Furthermore, the unobservability of confounders poses a challenge in further addressing the problem. Along these lines, we refine the problem and propose a more rational solution to mitigate the influence of unobserved confounders. Specifically, we consider the influence of unobserved confounders, disentangle them from user preferences in the latent space, and employ causal graphs to model their interdependencies without specific labels. By ingeniously combining local and global causal graphs, we capture the user-specific effects of confounders on user preferences. Finally, we propose our model based on Variational Autoencoders, named
C
ausal
S
tructure
A
ware
V
ariational
A
uto
e
ncoders (CSA-VAE) and theoretically demonstrate the identifiability of the obtained causal graph. We conducted extensive experiments on one synthetic dataset and nine real-world datasets with different scales, including three unbiased datasets and six normal datasets, where the average performance boost against several state-of-the-art baselines achieves up to 9.55%, demonstrating the superiority of our model. Furthermore, users can control their recommendation list by manipulating the learned causal representations of confounders, generating potentially more diverse recommendation results. Our code is available at
Code-link
1
.