Mirroring Users: Towards Building Preference-aligned User Simulator with User Feedback in Recommendation

📅 2025-08-25
🤖 AI Summary
To address the challenges of ambiguous and noisy user feedback, as well as low simulation efficiency on large-scale data in recommender systems, this paper proposes a two-stage user simulator construction framework. In the first stage, a large language model (LLM) parses raw feedback to generate interpretable cognitive decision paths, integrating uncertainty estimation and behavioral sampling for noise identification and data distillation. In the second stage, a lightweight student model is fine-tuned on the distilled high-quality preference samples. The method significantly improves modeling accuracy of true user preferences and domain reasoning capability, while maintaining computational efficiency. It enhances both interpretability and fidelity of simulated feedback—better reflecting human cognitive processes—thereby providing more reliable and cognitively grounded interaction signals for recommender system evaluation and optimization.
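The distillation step described above can be sketched in code. The following is a minimal illustration, not the paper's implementation: it assumes uncertainty is estimated as the entropy of repeated LLM decisions per sample, and that a sample is kept when it is challenging (nonzero disagreement) yet denoised (the majority decision agrees with the observed feedback). The function names, thresholds, and data layout are all hypothetical.

```python
from collections import Counter
import math

def predictive_entropy(decisions):
    """Shannon entropy (bits) of repeated LLM decisions for one sample."""
    counts = Counter(decisions)
    total = len(decisions)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def distill(samples, low=0.3, high=0.95):
    """Hypothetical filter: keep challenging-but-denoised simulation samples.

    samples: list of (sample_id, sampled_decisions, feedback_label) tuples,
    where sampled_decisions are decisions from multiple LLM generations.
    """
    kept = []
    for sample_id, sampled_decisions, feedback_label in samples:
        majority, _ = Counter(sampled_decisions).most_common(1)[0]
        # Noise identification: drop samples where the LLM's majority
        # decision contradicts the recorded user feedback.
        if majority != feedback_label:
            continue
        # Informativeness: keep samples in a mid-uncertainty band, so
        # trivially easy (entropy ~ 0) samples are filtered out.
        h = predictive_entropy(sampled_decisions)
        if low <= h <= high:
            kept.append(sample_id)
    return kept
```

A sample with unanimous decisions has zero entropy and is filtered as uninformative, while one with a clear-but-contested majority (e.g. 3 of 4 generations agreeing with the feedback) falls in the kept band.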

📝 Abstract
User simulation is increasingly vital to develop and evaluate recommender systems (RSs). While Large Language Models (LLMs) offer promising avenues to simulate user behavior, they often struggle with the absence of the specific domain alignment required for RSs and the efficiency demands of large-scale simulation. A vast yet underutilized resource for enhancing this alignment is the extensive user feedback inherent in RSs. However, directly leveraging such feedback presents two significant challenges. First, user feedback in RSs is often ambiguous and noisy, which negatively impacts effective preference alignment. Second, the massive volume of feedback largely hinders the efficiency of preference alignment, necessitating an efficient filtering mechanism to identify more informative samples. To overcome these hurdles, we introduce a novel data construction framework that leverages user feedback in RSs with advanced LLM capabilities to generate high-quality simulation data. Our framework unfolds in two key phases: (1) employing LLMs to generate cognitive decision-making processes on constructed simulation samples, reducing ambiguity in raw user feedback; (2) data distillation based on uncertainty estimation and behavior sampling to filter challenging yet denoised simulation samples. Accordingly, we fine-tune lightweight LLMs as user simulators on this high-quality dataset with the corresponding decision-making processes. Extensive experiments verify that our framework significantly boosts the alignment with human preferences and the in-domain reasoning capabilities of fine-tuned LLMs, and provides more insightful and interpretable signals when interacting with RSs. We believe our work will advance the RS community and offer valuable insights for broader human-centric AI research.
Problem

Research questions and friction points this paper is trying to address.

Aligning LLMs with user preferences using feedback
Reducing ambiguity and noise in user feedback data
Filtering informative samples for efficient preference alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-generated cognitive processes reduce feedback ambiguity
Uncertainty estimation and behavior sampling filter samples
Fine-tuned lightweight LLMs serve as preference-aligned simulators