AI Summary
This work addresses a limitation of existing long-tail recommendation methods: they often neglect alignment with user preferences, thereby undermining long-term user engagement. To mitigate this issue, the authors propose HRL4PFG, a hierarchical reinforcement learning-based proactive guidance strategy for interactive recommendation. The approach jointly leverages macro-level fairness objectives and micro-level real-time adjustments to progressively steer user preferences toward long-tail items. It formulates fair guidance targets from multi-step user feedback and dynamically integrates evolving user preferences to optimize immediate recommendations. Experimental results demonstrate that the proposed method significantly outperforms state-of-the-art approaches in cumulative interaction rewards and maximum user interaction length, indicating enhanced long-term engagement and recommendation effectiveness.
Abstract
Item-side fairness is crucial for ensuring the fair exposure of long-tail items in interactive recommender systems. Existing approaches promote the exposure of long-tail items by directly inserting them into recommendation results. This causes misalignment between user preferences and the recommended long-tail items, which hinders long-term user engagement and reduces the effectiveness of recommendations. We aim for a proactive fairness-guiding strategy that actively guides user preferences toward long-tail items while preserving user satisfaction during the interactive recommendation process. To this end, we propose HRL4PFG, an interactive recommendation framework that leverages hierarchical reinforcement learning to progressively guide user preferences toward long-tail items. HRL4PFG operates through a macro-level process that generates fairness-guided targets based on multi-step feedback, and a micro-level process that fine-tunes recommendations in real time according to both these targets and evolving user preferences. Extensive experiments show that HRL4PFG improves cumulative interaction rewards and maximum user interaction length by a large margin compared with state-of-the-art methods in interactive recommendation environments.
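The macro/micro control loop described in the abstract can be sketched in a few lines. The following is an illustrative toy, not the paper's actual algorithm: all names (`macro_policy`, `micro_policy`, the catalog, the preference-drift rule, and the numeric thresholds) are assumptions made up for this example. It only shows the structure of a hierarchical loop in which a macro policy periodically sets a fairness-guided long-tail target and a micro policy reconciles that target with current user preferences at every step.

```python
import random

# Illustrative catalog; "long-tail" items are the rarely exposed ones.
CATALOG = ["item_a", "item_b", "item_c", "item_d"]
LONG_TAIL = {"item_c", "item_d"}
MACRO_HORIZON = 3  # macro target is refreshed every 3 interaction steps

def macro_policy(feedback_history):
    """Macro level (assumed): pick a fairness-guided target from
    multi-step feedback. Stubbed here as a random long-tail choice."""
    return random.choice(sorted(LONG_TAIL))

def micro_policy(target, user_pref):
    """Micro level (assumed): recommend the fairness target only once
    the user's preference for it is high enough; otherwise fall back
    to the currently preferred item."""
    if user_pref.get(target, 0.0) >= 0.5:
        return target
    return max(user_pref, key=user_pref.get)

def run_episode(steps=6, seed=0):
    """One simulated interaction episode with guided preference drift:
    repeated, well-timed exposure nudges preference toward the target."""
    random.seed(seed)
    user_pref = {"item_a": 0.9, "item_b": 0.6, "item_c": 0.3, "item_d": 0.2}
    feedback, recs, target = [], [], None
    for t in range(steps):
        if t % MACRO_HORIZON == 0:          # macro process: new target
            target = macro_policy(feedback)
        item = micro_policy(target, user_pref)  # micro process: real-time pick
        recs.append(item)
        feedback.append((item, user_pref[item]))  # simulated reward
        # Toy drift model: guidance gradually raises the target's appeal.
        user_pref[target] = min(1.0, user_pref[target] + 0.15)
    return recs, user_pref

recs, prefs = run_episode()
print(recs)   # early steps follow the user's taste, later ones the target
```

The point of the hierarchy is visible even in this stub: the micro policy never forces a long-tail item on the user immediately (which the abstract identifies as the failure mode of direct insertion); it waits until the macro-driven drift has brought the user's preference close enough to the target.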