Avoiding Over-Personalization with Rule-Guided Knowledge Graph Adaptation for LLM Recommendations

📅 2025-09-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address filter bubbles and information narrowing caused by excessive personalization in large language model (LLM)-based recommender systems, this paper proposes a lightweight neuro-symbolic framework. During inference, it dynamically reconstructs user-side knowledge graphs and applies rule-guided graph adaptation—including triplet reweighting, hard inversion, and bias removal—to mitigate feature co-occurrence bias. Coupled with structured prompt engineering and client-side customized optimization, the approach jointly enhances recommendation diversity and topic relevance. Its core innovation lies in enabling controllable adjustment of personalization intensity at inference time—without any model fine-tuning—through symbolic graph operations and prompt guidance alone. Evaluated on a recipe recommendation benchmark, the method achieves a +23.6% improvement in novelty while preserving recommendation accuracy, outperforming both global adaptation and naive prompting baselines.

📝 Abstract
We present a lightweight neuro-symbolic framework to mitigate over-personalization in LLM-based recommender systems by adapting user-side Knowledge Graphs (KGs) at inference time. Instead of retraining models or relying on opaque heuristics, our method restructures a user's Personalized Knowledge Graph (PKG) to suppress feature co-occurrence patterns that reinforce Personalized Information Environments (PIEs), i.e., algorithmically induced filter bubbles that constrain content diversity. These adapted PKGs are used to construct structured prompts that steer the language model toward more diverse, Out-PIE recommendations while preserving topical relevance. We introduce a family of symbolic adaptation strategies, including soft reweighting, hard inversion, and targeted removal of biased triples, and a client-side learning algorithm that optimizes their application per user. Experiments on a recipe recommendation benchmark show that personalized PKG adaptations significantly increase content novelty while maintaining recommendation quality, outperforming global adaptation and naive prompt-based methods.
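The three symbolic adaptation strategies described above can be sketched as operations on a weighted triple store. This is a minimal illustrative sketch, not the paper's implementation: the `Triple` representation, the `likes`/`dislikes` relation names, and the reweighting factor are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Triple:
    head: str
    relation: str
    tail: str
    weight: float  # relevance weight consulted when building the prompt

def soft_reweight(triples, biased_tails, factor=0.5):
    """Soft reweighting: down-weight triples whose tail feature
    co-occurs too often in the user's history."""
    return [
        Triple(t.head, t.relation, t.tail,
               t.weight * factor if t.tail in biased_tails else t.weight)
        for t in triples
    ]

def hard_invert(triples, biased_tails):
    """Hard inversion: flip the preference signal for over-represented
    features (assumed relation vocabulary: likes/dislikes)."""
    out = []
    for t in triples:
        rel = t.relation
        if t.tail in biased_tails and rel == "likes":
            rel = "dislikes"
        out.append(Triple(t.head, rel, t.tail, t.weight))
    return out

def targeted_remove(triples, biased_tails):
    """Targeted removal: drop triples encoding the biased co-occurrence."""
    return [t for t in triples if t.tail not in biased_tails]
```

A client-side learner could then select which of these three operators (and which feature sets) to apply per user, as the abstract's per-user optimization suggests.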
Problem

Research questions and friction points this paper is trying to address.

Mitigate over-personalization in LLM recommender systems
Suppress feature co-occurrence patterns reinforcing filter bubbles
Increase content novelty while maintaining recommendation quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight neuro-symbolic framework adapts user knowledge graphs
Symbolic strategies reweight, invert, or remove biased graph triples
Structured prompts steer LLM toward diverse relevant recommendations
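The prompt-construction step can be sketched as serializing the adapted PKG into an instruction for the LLM, so that down-weighted features naturally fall out of the prompt. The template wording, the tuple layout, and the top-k cutoff are illustrative assumptions, not the paper's actual prompt format.

```python
def build_prompt(triples, k=5):
    """Serialize an adapted PKG into a structured recommendation prompt.

    triples: list of (relation, tail, weight) tuples from the adapted
    user-side knowledge graph.
    """
    # Keep only the highest-weight triples, so features suppressed by
    # reweighting or removal no longer steer the model.
    top = sorted(triples, key=lambda t: t[2], reverse=True)[:k]
    facts = "\n".join(f"- user {rel} {tail}" for rel, tail, _ in top)
    return (
        "Known user preferences (adapted for diversity):\n"
        f"{facts}\n"
        "Recommend 5 recipes that stay topically relevant but "
        "introduce novel ingredients or cuisines."
    )
```

Because adaptation happens entirely in this symbolic layer, personalization intensity is adjustable at inference time without touching the model's weights.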