Curr-RLCER: Curriculum Reinforcement Learning For Coherence Explainable Recommendation

📅 2026-04-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the coherence issue in recommender systems that arises from misaligned objectives between rating prediction and explanation generation. To resolve it, the authors propose a dynamic alignment framework based on curriculum reinforcement learning. The approach progressively transitions from click-through rate prediction to open-ended explanation generation, coupled with a coherence-driven reward mechanism that enforces alignment between generated explanations and predicted ratings. To evaluate performance comprehensively, the study introduces dedicated metrics for coherence and system stability. Experimental results on three explainable recommendation datasets demonstrate that the proposed method significantly improves both explanation-rating coherence and overall system stability.
📝 Abstract
Explainable recommendation systems (RSs) are designed to explicitly uncover the rationale behind each recommendation, thereby enhancing the transparency and credibility of RSs. Previous methods often jointly predicted ratings and generated explanations, but overlooked the incoherence between these two objectives. To address this issue, we propose Curr-RLCER, a reinforcement learning framework for explanation-coherent recommendation with dynamic rating alignment. It employs curriculum learning, transitioning from basic predictions (i.e., click-through rate (CTR) prediction and selection-based rating) to open-ended recommendation explanation generation. In particular, the rewards of each stage are designed to progressively enhance the stability of RSs. Furthermore, a coherence-driven reward mechanism is proposed to enforce coherence between generated explanations and predicted ratings, supported by a specifically designed evaluation scheme. Extensive experimental results on three explainable recommendation datasets indicate that the proposed framework is effective. Codes and datasets are available at https://github.com/pxcstart/Curr-RLCER.
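The two ingredients named in the abstract — a staged curriculum over objectives and a coherence-driven reward — can be illustrated with a minimal sketch. The function names, the sentiment-based coherence proxy, and the stage boundaries below are illustrative assumptions, not the authors' implementation (see the linked repository for the actual code).

```python
def coherence_reward(predicted_rating, explanation_sentiment, max_rating=5.0):
    """Toy coherence reward: high when the explanation's sentiment
    (assumed pre-scored in [0, 1]) agrees with the normalized rating."""
    norm_rating = predicted_rating / max_rating
    return 1.0 - abs(norm_rating - explanation_sentiment)

def curriculum_stage(step, boundaries=(1000, 3000)):
    """Select the training objective by progress, mirroring the paper's
    easy-to-hard progression: CTR -> selection-based rating -> explanation."""
    if step < boundaries[0]:
        return "ctr_prediction"
    if step < boundaries[1]:
        return "selection_rating"
    return "explanation_generation"
```

For example, a 5/5 rating paired with a fully positive explanation (sentiment 1.0) yields the maximum reward of 1.0, while a 5/5 rating with a negative explanation is penalized; the boundary values would in practice be tuned to the dataset and training schedule.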
Problem

Research questions and friction points this paper is trying to address.

explainable recommendation
coherence
rating-explanation alignment
reinforcement learning
curriculum learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Curriculum Learning
Reinforcement Learning
Explainable Recommendation
Coherence Reward
Dynamic Rating Alignment