Towards Fair Large Language Model-based Recommender Systems without Costly Retraining

📅 2026-01-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses fairness challenges in large language model-based recommender systems (LLM-RS), which often inherit biases from their training data. Existing debiasing methods either lack generalizability or require costly retraining. To overcome these limitations, we propose FUDLR, a novel framework that reframes debiasing as an efficient machine unlearning task. FUDLR first identifies bias-relevant samples via a bias-agnostic masking mechanism that handles multiple, possibly coexisting bias types, then precisely estimates and removes their influence on model parameters without fine-tuning or retraining. Experimental results demonstrate that FUDLR significantly enhances recommendation fairness while effectively preserving accuracy, offering a new paradigm for building efficient, generalizable, and responsible LLM-RS.
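The first stage summarized above (selecting which samples to unlearn via a bias-agnostic mask) can be illustrated on a toy problem. The sketch below is an assumption-laden illustration, not FUDLR's implementation: the synthetic data, the group-score-gap fairness metric, the mask-size penalty standing in for accuracy preservation, and the numerical-gradient optimizer are all choices made here for clarity.

```python
import numpy as np

# Toy sketch of a bias-agnostic sample mask (illustrative only; not FUDLR's
# actual formulation). Each sample gets a soft mask m in [0, 1]; we minimize
# a plug-in fairness metric (squared group score gap) computed over the
# samples the mask keeps, plus a mask-size penalty as a crude proxy for
# accuracy preservation. Swapping in a different fairness metric is what
# would make the mask "bias-agnostic".

rng = np.random.default_rng(1)
n = 40
group = rng.integers(0, 2, n)            # sensitive attribute per sample
score = rng.normal(size=n)               # model scores for each sample
biased = np.where(group == 1)[0][:6]     # plant 6 over-scored group-1 samples
score[biased] += 3.0

def objective(logits, alpha=0.3):
    m = 1 / (1 + np.exp(-logits))        # soft mask: 1 = forget this sample
    w = 1 - m                            # weight of samples that are kept
    mean0 = (w * score * (group == 0)).sum() / (w * (group == 0)).sum()
    mean1 = (w * score * (group == 1)).sum() / (w * (group == 1)).sum()
    return (mean0 - mean1) ** 2 + alpha * m.mean()

# Optimize mask logits with a numerical gradient (fine at toy scale).
logits = np.zeros(n)
eps, lr = 1e-4, 5.0
for _ in range(300):
    g = np.zeros(n)
    for i in range(n):
        d = np.zeros(n)
        d[i] = eps
        g[i] = (objective(logits + d) - objective(logits - d)) / (2 * eps)
    logits -= lr * g

# Threshold the soft mask to obtain the forget set.
forget_idx = np.where(1 / (1 + np.exp(-logits)) > 0.5)[0]
keep = np.setdiff1d(np.arange(n), forget_idx)

def group_gap(idx):
    s, g = score[idx], group[idx]
    return abs(s[g == 0].mean() - s[g == 1].mean())
```

Removing the selected samples should shrink the group gap (`group_gap(keep)` versus `group_gap(np.arange(n))`), while the `alpha` penalty keeps the forget set small.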

📝 Abstract
Large Language Models (LLMs) have revolutionized Recommender Systems (RS) through advanced generative user modeling. However, LLM-based RS (LLM-RS) often inadvertently perpetuate biases present in their training data, leading to severe fairness issues. Addressing these fairness problems in LLM-RS faces two significant challenges: 1) existing debiasing methods, designed for specific bias types, lack the generality to handle diverse or emerging biases in real-world applications; 2) debiasing methods that rely on retraining are computationally infeasible given the massive parameter scale of LLMs. To overcome these challenges, we propose FUDLR (Fast Unified Debiasing for LLM-RS). The core idea is to reformulate the debiasing problem as an efficient machine unlearning task with two stages. First, FUDLR identifies the bias-inducing samples to unlearn through a novel bias-agnostic mask, optimized to balance fairness improvement with accuracy preservation. The bias-agnostic design adapts to various or co-existing biases simply by incorporating different fairness metrics. Second, FUDLR performs efficient debiasing by estimating and removing the influence of the identified samples on the model parameters. Extensive experiments demonstrate that FUDLR effectively and efficiently improves fairness while preserving recommendation accuracy, offering a practical path toward socially responsible LLM-RS. The code and data are available at https://github.com/JinLi-i/FUDLR.
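The abstract's second stage (estimating and removing the influence of identified samples without retraining) is in the spirit of influence-function-style removal updates. The following is a hedged toy sketch, not the paper's method: it assumes a small, strongly convex model (regularized logistic regression), where a single Newton step on the remaining data approximates full retraining at a fraction of the cost.

```python
import numpy as np

# Toy sketch of unlearning-by-influence (illustrative; not FUDLR's actual
# procedure). For a strongly convex model, one Newton step on the loss of
# the remaining data moves the trained parameters close to what retraining
# without the forget set would produce.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(theta, X, y, lam):
    # Gradient of mean logistic loss plus L2 regularization.
    return X.T @ (sigmoid(X @ theta) - y) / len(y) + lam * theta

def hessian(theta, X, lam):
    p = sigmoid(X @ theta)
    # Sum of w_i * x_i x_i^T over samples, plus the regularizer.
    return (X.T * (p * (1 - p))) @ X / len(X) + lam * np.eye(X.shape[1])

def train(X, y, lam, steps=500, lr=1.0):
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        theta -= lr * grad(theta, X, y, lam)
    return theta

def unlearn(theta, X, y, lam, forget_idx):
    # One Newton step on the keep-set objective, starting from the model
    # trained on all data: an influence-style removal of the forget set.
    keep = np.setdiff1d(np.arange(len(y)), forget_idx)
    H = hessian(theta, X[keep], lam)
    g = grad(theta, X[keep], y[keep], lam)
    return theta - np.linalg.solve(H, g)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(float)

lam = 0.1
forget = np.arange(20)               # stand-in for "bias-inducing" samples
theta_full = train(X, y, lam)
theta_unlearned = unlearn(theta_full, X, y, lam, forget)
theta_retrained = train(np.delete(X, forget, 0), np.delete(y, forget), lam)
```

The cheap part is the single Hessian solve in place of a full optimization run; at LLM scale this would need Hessian-vector-product approximations rather than an explicit matrix, which is presumably where the efficiency claims of the paper come in.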
Problem

Research questions and friction points this paper is trying to address.

fairness
large language models
recommender systems
bias
machine unlearning
Innovation

Methods, ideas, or system contributions that make the work stand out.

machine unlearning
bias-agnostic debiasing
LLM-based recommender systems
fairness-aware recommendation
efficient debiasing