Lightweight Fairness for LLM-Based Recommendations via Kernelized Projection and Gated Adapters

📅 2026-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a closed-form, parameter-free debiasing approach for large language model (LLM)-based recommender systems, which are prone to inheriting and amplifying social biases present in their pretraining data, particularly when demographic cues are available. To address the limitations of existing fairness methods, which often require additional trainable parameters or suffer from optimization instability, the authors combine kernelized Iterative Nullspace Projection (Kernelized INLP) with a two-stage Gated Mixture-of-Experts (Gated MoE) adapter. The projection removes one or more sensitive attributes in closed form, while the adapter selectively recovers task-relevant semantic signal. Experiments on two public datasets show that the method substantially reduces leakage of protected variables while maintaining recommendation accuracy competitive with state-of-the-art baselines.

📝 Abstract
Large Language Models (LLMs) have introduced new capabilities to recommender systems, enabling dynamic, context-aware, and conversational recommendations. However, LLM-based recommender systems inherit and may amplify social biases embedded in their pre-training data, especially when demographic cues are present. Existing fairness solutions either require fine-tuning additional parameters or suffer from optimization instability. We propose a lightweight and scalable bias mitigation method that combines a kernelized Iterative Null-space Projection (INLP) with a gated Mixture-of-Experts (MoE) adapter. Our approach estimates a closed-form projection that removes single or multiple sensitive attributes from LLM representations with no additional trainable parameters. To preserve task utility, we introduce a two-level MoE adapter that selectively restores useful signals without reintroducing bias. Experiments on two public datasets show that our method reduces attribute leakage across multiple protected variables while maintaining competitive recommendation accuracy.
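The null-space projection at the heart of the abstract can be sketched as follows. This is a minimal linear INLP illustration (not the paper's kernelized variant): a probe is repeatedly fit to predict the protected attribute from the representations, and the representations are projected onto the probe's null space until little attribute signal remains. Function and variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def inlp_projection(X, y, n_iters=3):
    """Linear INLP sketch (illustrative, not the paper's kernelized method).

    X : (n, d) array of representations
    y : (n,) protected-attribute labels
    Returns the accumulated projection P and the debiased features X @ P.T.
    """
    d = X.shape[1]
    P = np.eye(d)          # accumulated debiasing projection
    Xp = X.copy()
    for _ in range(n_iters):
        # Fit a linear probe for the protected attribute on current features.
        probe = LogisticRegression(max_iter=1000).fit(Xp, y)
        # Orthonormal basis of the probe's weight directions, shape (d, k).
        B = np.linalg.qr(probe.coef_.T)[0]
        # Remove those directions: project onto their orthogonal complement.
        P = (np.eye(d) - B @ B.T) @ P
        Xp = X @ P.T
    return P, Xp
```

On synthetic data where one feature dimension encodes the attribute, a probe retrained on the projected features should drop toward chance accuracy, which is the leakage reduction the abstract describes.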
Problem

Research questions and friction points this paper is trying to address.

LLM-based recommendations
social bias
fairness
demographic cues
bias mitigation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Kernelized INLP
Gated MoE Adapter
Bias Mitigation
LLM-based Recommendation
Lightweight Fairness
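The Gated MoE Adapter listed above can be sketched as a softmax gate that mixes expert transforms of the debiased representation, letting the model restore task-relevant signal selectively. This is a minimal single-level illustration under assumed shapes; the paper's adapter is two-level, and all names here are hypothetical.

```python
import numpy as np

def gated_moe_adapter(h, experts, gate_w):
    """Illustrative gated mixture-of-experts layer (shapes assumed).

    h       : (n, d) debiased representations
    experts : list of (d, d) expert weight matrices
    gate_w  : (d, n_experts) gating weights
    """
    logits = h @ gate_w                            # (n, n_experts)
    g = np.exp(logits - logits.max(axis=1, keepdims=True))
    g /= g.sum(axis=1, keepdims=True)              # softmax gate per example
    outs = np.stack([h @ W for W in experts])      # (n_experts, n, d)
    return np.einsum('ne,end->nd', g, outs)        # gate-weighted mix
```

With zero gating weights the gate is uniform and the output reduces to the plain average of the expert outputs, which makes the mixing behavior easy to check.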