Mitigating Propensity Bias of Large Language Models for Recommender Systems

๐Ÿ“… 2024-09-30
๐Ÿ›๏ธ arXiv.org
๐Ÿ“ˆ Citations: 1
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Large language models (LLMs) in recommender systems face two critical challenges: inherent bias that leads to unfair recommendations, and dimensional collapse during the alignment of side information with collaborative signals, which impairs user preference modeling. To address these, we propose the Counterfactual LLM-based Recommendation framework (CLLMR). CLLMR introduces a novel spectrum-based side-information encoder that implicitly integrates structural patterns from the historical interaction graph, and incorporates a counterfactual reasoning mechanism to disentangle LLM-inherent biases, enabling causal-level co-optimization of fairness and representation diversity. Our method jointly leverages spectral graph encoding, causal embedding, and a bias-correcting loss function. Extensive experiments on multiple benchmark datasets demonstrate that CLLMR consistently outperforms state-of-the-art methods, achieving significant gains in Recall@10 and NDCG@10. Moreover, it effectively mitigates dimensional collapse and enhances discriminative capability for user preferences.

๐Ÿ“ Abstract
The rapid development of Large Language Models (LLMs) creates new opportunities for recommender systems, especially by exploiting the side information (e.g., descriptions and analyses of items) generated by these models. However, aligning this side information with collaborative information from historical interactions poses significant challenges. The inherent biases within LLMs can skew recommendations, resulting in distorted and potentially unfair user experiences. On the other hand, propensity bias causes side information to be aligned in such a way that it often tends to represent all inputs in a low-dimensional subspace, leading to a phenomenon known as dimensional collapse, which severely restricts the recommender system's ability to capture user preferences and behaviours. To address these issues, we introduce a novel framework named Counterfactual LLM Recommendation (CLLMR). Specifically, we propose a spectrum-based side information encoder that implicitly embeds structural information from historical interactions into the side information representation, thereby circumventing the risk of dimension collapse. Furthermore, our CLLMR approach explores the causal relationships inherent in LLM-based recommender systems. By leveraging counterfactual inference, we counteract the biases introduced by LLMs. Extensive experiments demonstrate that our CLLMR approach consistently enhances the performance of various recommender models.
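The abstract's spectrum-based encoder embeds interaction-graph structure into the side-information representation. As a minimal illustration of that idea (not the paper's actual design), the sketch below filters LLM-derived item embeddings through the top-k spectral components of an item-item co-interaction graph; the function name, shapes, and normalization choices are all assumptions for illustration.

```python
import numpy as np

def spectral_side_encoder(interactions, side_emb, k=4):
    """Illustrative spectrum-based encoder: project LLM side-information
    embeddings onto the top-k spectral subspace of the item co-interaction
    graph, implicitly mixing in collaborative structure.
    interactions: (n_users, n_items) binary matrix.
    side_emb: (n_items, d) item embeddings from LLM side information.
    All names and design choices here are assumptions, not the paper's."""
    # Symmetrically normalized item-item co-interaction graph
    co = interactions.T @ interactions            # (n_items, n_items)
    deg = np.maximum(co.sum(axis=1), 1e-12)
    norm = co / np.sqrt(np.outer(deg, deg))
    # Top-k eigenvectors of the (symmetric) graph act as a low-pass filter
    _, vecs = np.linalg.eigh(norm)
    top = vecs[:, -k:]                            # (n_items, k)
    # Project side embeddings into the graph's spectral subspace and back
    return top @ (top.T @ side_emb)

rng = np.random.default_rng(0)
inter = (rng.random((5, 6)) > 0.5).astype(float)
side = rng.standard_normal((6, 3))
encoded = spectral_side_encoder(inter, side, k=2)
```

Because the output lives in the span of graph eigenvectors, items that co-occur in interaction histories receive correlated representations regardless of what the LLM text says about them.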
Problem

Research questions and friction points this paper is trying to address.

Mitigating LLM biases in recommender systems
Addressing dimensional collapse in side information
Aligning LLM outputs with historical interaction data
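The dimensional-collapse problem listed above can be quantified. One common diagnostic (my choice for illustration, not taken from the paper) is the entropy-based effective rank of the embedding matrix: a value far below the ambient dimension signals that the representations have collapsed into a low-dimensional subspace.

```python
import numpy as np

def effective_rank(emb):
    """Entropy-based effective rank of an embedding matrix.
    Values near the ambient dimension indicate a healthy spread;
    values near 1 indicate dimensional collapse."""
    # Singular-value spectrum of the centered embeddings
    s = np.linalg.svd(emb - emb.mean(axis=0), compute_uv=False)
    p = s / s.sum()
    p = p[p > 1e-12]                     # drop numerically-zero modes
    return float(np.exp(-(p * np.log(p)).sum()))

rng = np.random.default_rng(1)
# Collapsed: every row lies on one direction (rank-1 structure)
collapsed = np.outer(rng.standard_normal(100), rng.standard_normal(8))
# Healthy: rows spread across all 8 dimensions
healthy = rng.standard_normal((100, 8))
```

Here `effective_rank(collapsed)` comes out near 1 while `effective_rank(healthy)` approaches 8, which is the kind of gap a collapse-mitigation method aims to close.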
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spectrum-based side information encoder prevents dimension collapse
Counterfactual inference mitigates LLM biases in recommendations
Causal relationship exploration enhances recommender system performance
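The counterfactual-inference bullet above follows a pattern common in debiased recommendation: score the factual input, score a counterfactual input with the user signal ablated (which isolates the model's propensity bias), and subtract. The sketch below shows that subtraction on toy scores; the function, the `alpha` weight, and the numbers are hypothetical, and the paper's exact formulation may differ.

```python
import numpy as np

def counterfactual_debias(factual_scores, counterfactual_scores, alpha=1.0):
    """Counterfactual-inference sketch: remove the score the model assigns
    to a 'blank' counterfactual input (capturing LLM-induced propensity
    bias) from the factual score. alpha is an illustrative trade-off."""
    return factual_scores - alpha * counterfactual_scores

scores = np.array([2.0, 1.8, 1.0])   # factual scores for 3 items
bias = np.array([1.5, 0.2, 0.1])     # scores with user signal ablated
debiased = counterfactual_debias(scores, bias)
```

In this toy case item 0 ranks first only because of its large bias term; after subtraction, item 1 correctly rises to the top of the ranking.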
๐Ÿ”Ž Similar Papers
No similar papers found.
Guixian Zhang
School of Computer Science and Technology, Engineering Research Center of Mine Digitalisation, Artificial Intelligence Research Institute, China University of Mining and Technology, Xuzhou 221116, China
Guan Yuan
School of Computer Science and Technology, Engineering Research Center of Mine Digitalisation, Artificial Intelligence Research Institute, China University of Mining and Technology, Xuzhou 221116, China
Debo Cheng
UniSA STEM, University of South Australia, Adelaide 5095, Australia
Lin Liu
UniSA STEM, University of South Australia, Adelaide 5095, Australia
Jiuyong Li
UniSA STEM, University of South Australia, Adelaide 5095, Australia
Shichao Zhang
Guangxi Normal University