🤖 AI Summary
This work addresses the critical yet overlooked issue of personalized safety constraints -- such as trauma triggers and phobias -- in large language model (LLM)-based conversational recommender systems, which often prioritize recommendation accuracy at the expense of user well-being. To tackle this safety alignment challenge, the study formally defines the problem and introduces SafeCRS, a novel framework that jointly optimizes recommendation quality and individual safety sensitivity through Safe Supervised Fine-Tuning (Safe-SFT) and Safe Group reward-Decoupled Normalization Policy Optimization (Safe-GDPO). The authors also construct SafeRec, the first benchmark dataset dedicated to personalized safety alignment in conversational recommendation. Experimental results show that SafeCRS maintains competitive recommendation performance while reducing safety violation rates by up to 96.5% relative to the strongest recommendation-quality baseline.
📝 Abstract
Current LLM-based conversational recommender systems (CRS) primarily optimize recommendation accuracy and user satisfaction. We identify an underexplored vulnerability: recommendation outputs may harm users by violating personalized safety constraints when individualized safety sensitivities -- such as trauma triggers, self-harm history, or phobias -- are implicitly inferred from the conversation but not respected during recommendation. We formalize this challenge as personalized CRS safety and introduce SafeRec, a new benchmark dataset designed to systematically evaluate safety risks in LLM-based CRS under user-specific constraints. To address this problem, we propose SafeCRS, a safety-aware training framework that integrates Safe Supervised Fine-Tuning (Safe-SFT) with Safe Group reward-Decoupled Normalization Policy Optimization (Safe-GDPO) to jointly optimize recommendation quality and personalized safety alignment. Extensive experiments on SafeRec demonstrate that SafeCRS reduces safety violation rates by up to 96.5% relative to the strongest recommendation-quality baseline while maintaining competitive recommendation quality. Warning: This paper contains potentially harmful and offensive content.