SafeCRS: Personalized Safety Alignment for LLM-Based Conversational Recommender Systems

📅 2026-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the critical yet overlooked issue of personalized safety constraints, such as trauma triggers and phobias, in large language model (LLM)-based conversational recommender systems, which often prioritize recommendation accuracy at the expense of user well-being. The study formally defines this safety alignment problem and introduces SafeCRS, a framework that jointly optimizes recommendation quality and individual safety sensitivity through Safe Supervised Fine-Tuning (Safe-SFT) and Safe Group reward-Decoupled Normalization Policy Optimization (Safe-GDPO). The authors also construct SafeRec, the first benchmark dataset dedicated to personalized safety alignment in conversational recommendation. Experiments show that SafeCRS maintains competitive recommendation performance while reducing safety violation rates by up to 96.5% compared to the strongest baseline.

📝 Abstract
Current LLM-based conversational recommender systems (CRS) primarily optimize recommendation accuracy and user satisfaction. We identify an underexplored vulnerability in which recommendation outputs may negatively impact users by violating personalized safety constraints, when individualized safety sensitivities -- such as trauma triggers, self-harm history, or phobias -- are implicitly inferred from the conversation but not respected during recommendation. We formalize this challenge as personalized CRS safety and introduce SafeRec, a new benchmark dataset designed to systematically evaluate safety risks in LLM-based CRS under user-specific constraints. To further address this problem, we propose SafeCRS, a safety-aware training framework that integrates Safe Supervised Fine-Tuning (Safe-SFT) with Safe Group reward-Decoupled Normalization Policy Optimization (Safe-GDPO) to jointly optimize recommendation quality and personalized safety alignment. Extensive experiments on SafeRec demonstrate that SafeCRS reduces safety violation rates by up to 96.5% relative to the strongest recommendation-quality baseline while maintaining competitive recommendation quality. Warning: This paper contains potentially harmful and offensive content.
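The abstract describes Safe-GDPO as decoupling group-wise reward normalization so that recommendation quality and personalized safety are optimized jointly. As a rough illustration only (this is not the paper's implementation; the function name, the z-score normalization, and the equal-weight combination are all assumptions), a GRPO-style advantage computation with per-objective group normalization might look like:

```python
import numpy as np

def decoupled_group_advantages(quality_rewards, safety_rewards, eps=1e-8):
    """Sketch of a group-wise, reward-decoupled advantage computation.

    For a group of G responses sampled from the same conversation prompt,
    each reward stream (recommendation quality vs. personalized safety)
    is normalized within the group *separately*, then combined -- so a
    large spread in one objective cannot drown out the other's signal.
    """
    q = np.asarray(quality_rewards, dtype=float)
    s = np.asarray(safety_rewards, dtype=float)
    # Per-objective group normalization (GRPO-style z-scores).
    q_adv = (q - q.mean()) / (q.std() + eps)
    s_adv = (s - s.mean()) / (s.std() + eps)
    # Combined advantage used to weight the policy-gradient update.
    return q_adv + s_adv
```

Normalizing each objective within the sampled group before combining is one plausible reading of "reward-decoupled normalization": a safety violation in a group of otherwise high-quality responses still produces a strongly negative advantage, rather than being averaged away by the quality reward.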
Problem

Research questions and friction points this paper is trying to address.

personalized safety · conversational recommender systems · safety constraints · LLM-based CRS · user-specific risks
Innovation

Methods, ideas, or system contributions that make the work stand out.

personalized safety alignment · conversational recommender systems · safety-aware training · Safe-SFT · Safe-GDPO
Haochang Hao (University of Illinois at Chicago)
Yifan Xu (University of Illinois at Urbana-Champaign)
Xinzhuo Li (University of Illinois at Urbana-Champaign)
Yingqiang Ge (Amazon)
Lu Cheng (Assistant Professor, UIC CS)
Socially Responsible AI · Causal Machine Learning · Data Mining · AI for Good