Can Fairness Be Prompted? Prompt-Based Debiasing Strategies in High-Stakes Recommendations

📅 2026-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the risk that large language model (LLM)-based recommender systems may infer users' sensitive attributes through indirect cues, thereby amplifying group-level biases. Existing debiasing approaches typically require access to model weights and incur high computational costs. To overcome these limitations, this study proposes the first prompt-based debiasing method tailored for LLM-powered recommendation systems, which operates without modifying the model architecture or accessing internal parameters. By crafting fairness-aware prompt templates integrated with three bias-aware strategies, the approach achieves effective debiasing across multiple mainstream LLMs and real-world datasets. Experimental results demonstrate that the method significantly enhances group fairness (by up to 74%) while preserving recommendation performance, offering end users a lightweight and practical solution for fairer recommendations.
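The core idea summarized above, prepending a fairness instruction to the recommendation prompt rather than touching model weights, can be sketched as a small prompt wrapper. The instruction wording, strategy names, and function below are illustrative assumptions for exposition, not the paper's exact templates.

```python
# Sketch of a fairness-aware prompt template for an LLM recommender.
# Strategy names and instruction text are illustrative assumptions,
# not the paper's exact prompts.

def build_fair_prompt(user_history, candidates, strategy="instruct_fair"):
    """Wrap a recommendation request with a bias-aware instruction."""
    strategies = {
        # Directly instruct the model to be fair.
        "instruct_fair": (
            "Do not let inferred sensitive attributes such as gender or age "
            "influence the ranking. Treat all demographic groups equally."
        ),
        # Tell the model to ignore indirect demographic cues.
        "ignore_cues": (
            "Ignore any names, pronouns, or other cues that might reveal "
            "the user's demographic group."
        ),
        # Ask the model to self-check its ranking for group bias.
        "self_check": (
            "After ranking, check that no item was promoted or demoted "
            "because of the user's likely demographic group."
        ),
    }
    return (
        f"{strategies[strategy]}\n\n"
        f"User history: {', '.join(user_history)}\n"
        f"Candidates: {', '.join(candidates)}\n"
        "Rank the candidate items for this user."
    )

prompt = build_fair_prompt(["Titanic", "The Notebook"], ["Alien", "Up"])
```

The wrapper keeps the base model untouched, which is what makes the approach usable by lay users without weight access.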

📝 Abstract
Large Language Models (LLMs) can infer sensitive attributes such as gender or age from indirect cues like names and pronouns, potentially biasing recommendations. While several debiasing methods exist, they require access to the LLMs' weights, are computationally costly, and cannot be used by lay users. To address this gap, we investigate implicit biases in LLM Recommenders (LLMRecs) and explore whether prompt-based strategies can serve as a lightweight and easy-to-use debiasing approach. We contribute three bias-aware prompting strategies for LLMRecs. To our knowledge, this is the first study on prompt-based debiasing approaches in LLMRecs that focuses on group fairness for users. Our experiments with 3 LLMs, 4 prompt templates, 9 sensitive attribute values, and 2 datasets show that our proposed debiasing approach, which instructs an LLM to be fair, can improve fairness by up to 74% while retaining comparable effectiveness, but might overpromote specific demographic groups in some cases.
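The abstract's "improve fairness by up to 74%" refers to a relative reduction in a group-fairness gap. A minimal sketch of such a computation, assuming a max-min gap over mean per-user quality scores (the paper may use a different fairness measure, and the numbers below are toy values):

```python
# Toy group-fairness gap: max difference in mean per-user recommendation
# quality across demographic groups. Metric choice and numbers are
# illustrative assumptions, not the paper's reported setup.

def mean_quality(scores):
    return sum(scores) / len(scores)

def fairness_gap(group_scores):
    """Max absolute difference in mean per-user quality across groups."""
    means = [mean_quality(s) for s in group_scores.values()]
    return max(means) - min(means)

# Per-user quality (e.g. NDCG-like) for two groups, before and after debiasing.
before = {"group_a": [0.60, 0.70], "group_b": [0.30, 0.40]}
after = {"group_a": [0.58, 0.68], "group_b": [0.50, 0.56]}

# Relative fairness improvement: how much of the gap was closed.
improvement = 1 - fairness_gap(after) / fairness_gap(before)
```

A smaller gap after debiasing, with per-group quality largely preserved, is exactly the trade-off the experiments measure.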
Problem

Research questions and friction points this paper is trying to address.

fairness
bias
large language models
recommendation systems
sensitive attributes
Innovation

Methods, ideas, or system contributions that make the work stand out.

prompt-based debiasing
group fairness
LLM recommenders
sensitive attributes
fairness prompting
Mihaela Rotar
University of Copenhagen
Theresia Veronika Rampisela
University of Copenhagen
Maria Maistro
University of Copenhagen
Information Retrieval