Causality-Inspired Fair Representation Learning for Multimodal Recommendation

๐Ÿ“… 2023-10-26
๐Ÿ“ˆ Citations: 2
โœจ Influential: 0
๐Ÿค– AI Summary
In multimodal recommendation, inter-modal causal entanglement exacerbates sensitive information leakage, inducing biased user representations; existing fairness-aware methods fail to model the underlying multimodal causal structure, limiting generalizability. This paper pioneers the integration of causal inference into fairness-aware multimodal recommendation, proposing a causally inspired framework for modality disentanglement and relation-aware fairness. Specifically, it decouples modality-specific embeddings to break cross-modal entanglement, and jointly performs counterfactual intervention on sensitive attributes and graph-based relational modeling to achieve counterfactual fairness at the representation level. Evaluated on two public benchmarks, our method significantly outperforms state-of-the-art approaches: it improves recommendation accuracy while reducing correlation with sensitive attributes by 32.7%, thereby jointly enhancing fairness and informativeness.
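To make the disentanglement idea concrete, here is a minimal PyTorch-style sketch. It is not the authors' released code: the module name `ModalDisentangler`, the attacker heads, and the loss weights are illustrative assumptions. One encoder keeps a biased view of a modal embedding from which the sensitive attribute remains predictable, another keeps a filtered view from which it should not be, and an orthogonality penalty keeps the two views from overlapping.

```python
# Minimal sketch of fairness-oriented modal disentanglement (illustrative only;
# ModalDisentangler, the attacker heads, and the 0.1 loss weight are assumptions,
# not the authors' released implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalDisentangler(nn.Module):
    def __init__(self, modal_dim: int, hidden_dim: int, num_sensitive_classes: int):
        super().__init__()
        # Two encoders split one modal embedding into a biased and a filtered view.
        self.biased_enc = nn.Sequential(nn.Linear(modal_dim, hidden_dim), nn.ReLU(),
                                        nn.Linear(hidden_dim, hidden_dim))
        self.filtered_enc = nn.Sequential(nn.Linear(modal_dim, hidden_dim), nn.ReLU(),
                                          nn.Linear(hidden_dim, hidden_dim))
        # Attackers try to recover the sensitive attribute from each view.
        self.biased_attacker = nn.Linear(hidden_dim, num_sensitive_classes)
        self.filtered_attacker = nn.Linear(hidden_dim, num_sensitive_classes)

    def forward(self, modal_emb: torch.Tensor, sensitive_label: torch.Tensor):
        b = self.biased_enc(modal_emb)      # should retain sensitive information
        f = self.filtered_enc(modal_emb)    # should discard sensitive information
        # Biased view: the attacker should succeed (standard cross-entropy).
        loss_biased = F.cross_entropy(self.biased_attacker(b), sensitive_label)
        # Filtered view: push attacker predictions toward uniform; in practice this
        # is trained in an alternating min-max loop or with gradient reversal.
        log_probs = F.log_softmax(self.filtered_attacker(f), dim=-1)
        loss_filtered = -log_probs.mean()
        # Keep the two views carrying non-overlapping information.
        loss_orth = (F.normalize(b, dim=-1) * F.normalize(f, dim=-1)).sum(-1).pow(2).mean()
        return b, f, loss_biased + loss_filtered + 0.1 * loss_orth
```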
๐Ÿ“ Abstract
Recently, multimodal recommendations (MMR) have gained increasing attention for alleviating the data sparsity problem of traditional recommender systems by incorporating modality-based representations. Although MMR exhibits notable improvement in recommendation accuracy, we empirically validate that an increase in the quantity or variety of modalities leads to a higher degree of usersโ€™ sensitive information leakage due to entangled causal relationships, risking fair representation learning. On the other hand, existing fair representation learning approaches are mostly based on the assumption that sensitive information is solely leaked from usersโ€™ interaction data and do not explicitly model the causal relationships introduced by multimodal data, which limits their applicability in multimodal scenarios. To address this limitation, we propose a novel fair multimodal recommendation approach (dubbed FMMRec) through causality-inspired fairness-oriented modal disentanglement and relation-aware fairness learning. Particularly, we disentangle biased and filtered modal embeddings inspired by causal inference techniques, enabling the mining of modality-based unfair and fair user-user relations, thereby enhancing the fairness and informativeness of user representations. By addressing the causal effects of sensitive attributes on user preferences, our approach aims to achieve counterfactual fairness in multimodal recommendations. Experiments on two public datasets demonstrate the superiority of our FMMRec relative to the state-of-the-art baselines. Our source code is available at https://github.com/WeixinChen98/FMMRec.
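The abstract's "mining of modality-based unfair and fair user-user relations" can be pictured as building nearest-neighbour graphs over the two disentangled views. The sketch below is an assumption-laden illustration, not code from the FMMRec repository; `topk_neighbours` and `relation_aware_fairness_loss` are hypothetical names. Neighbours in the biased space expose shared sensitive information (unfair relations to push away from), while neighbours in the filtered space define fair relations to pull toward.

```python
# Illustrative sketch (not the released FMMRec code) of mining user-user
# relations from disentangled modal embeddings and using them as a
# relation-aware fairness regularizer.
import torch
import torch.nn.functional as F

def topk_neighbours(embeddings: torch.Tensor, k: int) -> torch.Tensor:
    """Indices of the k most cosine-similar users for each user."""
    normed = F.normalize(embeddings, dim=-1)
    sims = normed @ normed.T
    sims.fill_diagonal_(-float("inf"))   # exclude self-loops
    return sims.topk(k, dim=-1).indices  # shape: (num_users, k)

def relation_aware_fairness_loss(user_repr, fair_nbrs, unfair_nbrs):
    """Pull representations toward fair neighbours, push away from unfair ones."""
    fair_mean = user_repr[fair_nbrs].mean(dim=1)      # (num_users, dim)
    unfair_mean = user_repr[unfair_nbrs].mean(dim=1)  # (num_users, dim)
    pull = 1.0 - F.cosine_similarity(user_repr, fair_mean, dim=-1)
    push = F.cosine_similarity(user_repr, unfair_mean, dim=-1).clamp(min=0.0)
    return (pull + push).mean()

# Usage (hypothetical tensors):
#   fair_nbrs   = topk_neighbours(filtered_user_modal_emb, k=10)
#   unfair_nbrs = topk_neighbours(biased_user_modal_emb, k=10)
#   loss = relation_aware_fairness_loss(user_repr, fair_nbrs, unfair_nbrs)
```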
Problem

Research questions and friction points this paper is trying to address.

Address sensitive information leakage in multimodal recommendations
Disentangle biased and filtered modal embeddings via causal inference
Achieve counterfactual fairness in user representation learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Causality-inspired fairness-oriented modal disentanglement
Relation-aware fairness learning for multimodal data
Counterfactual fairness in multimodal recommendations (see the sketch after this list)
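For the last point, a representation-level reading of counterfactual fairness is that intervening on the sensitive attribute should leave the learned user representation unchanged. The check below is a hedged sketch under that reading; the `encode(user_features, sensitive_attr)` interface is an assumption for illustration, not the paper's exact procedure.

```python
# Sketch of a representation-level counterfactual fairness check. The `encode`
# interface is assumed, not taken from the paper.
import torch

@torch.no_grad()
def counterfactual_gap(encode, user_features, sensitive_attr, all_attr_values):
    """Average distance between factual and counterfactual user representations.

    A representation is counterfactually fair (at this level) when intervening
    on the sensitive attribute leaves it unchanged, i.e. the gap is close to 0.
    """
    factual = encode(user_features, sensitive_attr)
    gaps = []
    for a in all_attr_values:
        counterfactual = encode(user_features, torch.full_like(sensitive_attr, a))
        gaps.append((factual - counterfactual).norm(dim=-1).mean())
    return torch.stack(gaps).mean()
```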
Weixin Chen
University of Illinois at Urbana-Champaign
Trustworthy Machine Learning · Neuro-Symbolic AI
Li Chen
Hong Kong Baptist University, Hong Kong, China
Yongxin Ni
National University of Singapore
Recommender Systems
Yuhan Zhao
Harbin Engineering University, China and Hong Kong Baptist University, Hong Kong, China