Improving Recommendation Fairness without Sensitive Attributes Using Multi-Persona LLMs

📅 2025-05-26
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the challenge of ensuring fairness in recommender systems when explicit sensitive attributes are unavailable, this paper proposes LLMFOSA, a novel framework for fair recommendation without sensitive labels. First, it employs large language models with multiple personas to implicitly infer users' latent sensitive information, circumventing reliance on explicit annotations. Second, it introduces consensus-driven, confusion-aware representation learning, which decouples sensitive dimensions from recommendation representations via mutual information minimization. This work pioneers two key innovations: (1) multi-role collaborative reasoning for sensitive-attribute inference, and (2) unsupervised confusion modeling for fairness-aware representation learning. Crucially, LLMFOSA achieves substantial fairness improvements without compromising recommendation accuracy: on two public benchmarks, it reduces statistical parity difference (SPD) by 37.2% and equal opportunity difference (EO) by 41.5%.
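The two metrics cited above are standard group-fairness measures: SPD compares positive-recommendation rates across groups, and EO compares true-positive rates. A rough sketch of how they are computed (the helper names and toy data are illustrative, not from the paper):

```python
# Hedged sketch of statistical parity difference (SPD) and equal
# opportunity difference (EO) for a binary decision and a binary group.
# Function names and the toy data are illustrative assumptions.

def rate(preds, cond):
    # Fraction of positive predictions among items where cond holds.
    sel = [p for p, c in zip(preds, cond) if c]
    return sum(sel) / len(sel)

def spd(y_pred, s):
    """|P(yhat=1 | s=0) - P(yhat=1 | s=1)|"""
    return abs(rate(y_pred, [g == 0 for g in s])
               - rate(y_pred, [g == 1 for g in s]))

def eo(y_pred, y_true, s):
    """|P(yhat=1 | y=1, s=0) - P(yhat=1 | y=1, s=1)|: the TPR gap."""
    return abs(
        rate(y_pred, [g == 0 and t == 1 for g, t in zip(s, y_true)])
        - rate(y_pred, [g == 1 and t == 1 for g, t in zip(s, y_true)])
    )

# Toy data: 8 users, binary recommendation decision, binary group label.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
y_true = [1, 1, 0, 1, 1, 0, 1, 0]
s      = [0, 0, 0, 0, 1, 1, 1, 1]
print(spd(y_pred, s))          # group-0 rate 0.75 vs group-1 rate 0.25 -> 0.5
print(eo(y_pred, y_true, s))   # TPR 2/3 vs 1/2 -> 1/6
```

In LLMFOSA's setting the group label `s` is never observed; it is only estimated by the multi-persona LLM inference, so reported reductions are measured against the true held-out attributes.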

๐Ÿ“ Abstract
Despite the success of recommender systems in alleviating information overload, fairness issues have raised concerns in recent years, potentially leading to unequal treatment for certain user groups. While efforts have been made to improve recommendation fairness, they often assume that users' sensitive attributes are available during model training. However, collecting sensitive information can be difficult, especially on platforms that involve no personal information disclosure. Therefore, we aim to improve recommendation fairness without any access to sensitive attributes. However, this is a non-trivial task because uncovering latent sensitive patterns from complicated user behaviors without explicit sensitive attributes can be difficult. Consequently, suboptimal estimates of sensitive distributions can hinder the fairness training process. To address these challenges, leveraging the remarkable reasoning abilities of Large Language Models (LLMs), we propose a novel LLM-enhanced framework for Fair recommendation withOut Sensitive Attributes (LLMFOSA). A Multi-Persona Sensitive Information Inference module employs LLMs with distinct personas that mimic diverse human perceptions to infer and distill sensitive information. Furthermore, a Confusion-Aware Sensitive Representation Learning module incorporates inference results and rationales to develop robust sensitive representations, considering the mislabeling confusion and collective consensus among agents. The model is then optimized by a formulated mutual information objective. Extensive experiments on two public datasets validate the effectiveness of LLMFOSA in improving fairness.
Problem

Research questions and friction points this paper is trying to address.

Improving recommendation fairness without sensitive attributes
Inferring latent sensitive patterns from user behaviors
Enhancing fairness training with LLM-based sensitive information inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-Persona LLMs infer sensitive attributes
Confusion-Aware Learning enhances representation robustness
Mutual information optimizes fairness without sensitive data
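The mutual-information objective above minimizes the dependence I(Z; S) between the learned recommendation representation Z and the inferred sensitive attribute S. A minimal plug-in estimate of that quantity over discrete samples (names and toy data are illustrative assumptions, not the paper's estimator, which operates on continuous representations):

```python
# Hedged sketch: plug-in mutual information I(Z; S) between a discretized
# representation feature Z and an inferred sensitive label S, in nats.
from collections import Counter
from math import log

def mutual_information(zs, ss):
    # I(Z;S) = sum_{z,s} p(z,s) * log( p(z,s) / (p(z) p(s)) )
    n = len(zs)
    pzs = Counter(zip(zs, ss))
    pz, ps = Counter(zs), Counter(ss)
    return sum(
        (c / n) * log((c / n) / ((pz[z] / n) * (ps[s] / n)))
        for (z, s), c in pzs.items()
    )

s = [0, 0, 1, 1, 0, 0, 1, 1]        # inferred sensitive labels
z_leaky = list(s)                   # feature that leaks the label
z_fair  = [0, 1, 0, 1, 0, 1, 0, 1]  # feature independent of the label
print(mutual_information(z_leaky, s))  # ln 2 ~ 0.693: maximal leakage
print(mutual_information(z_fair, s))   # 0.0: fully decoupled
```

Driving this quantity toward zero during training is what "decouples" the sensitive dimensions from the recommendation representations; in practice a variational bound is minimized rather than the plug-in estimate shown here.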
Haoran Xin
HKUST; USTC
Data Mining · Recommender Systems · Personalization
Ying Sun
Thrust of Artificial Intelligence, The Hong Kong University of Science and Technology (Guangzhou)
Chao Wang
School of Artificial Intelligence and Data Science, University of Science and Technology of China
Yanke Yu
Thrust of Artificial Intelligence, The Hong Kong University of Science and Technology (Guangzhou)
Weijia Zhang
Thrust of Artificial Intelligence, The Hong Kong University of Science and Technology (Guangzhou)
Hui Xiong
Senior Scientist, Candela Corporation
Ultrafast dynamics · atomic molecular physics · free electron laser