🤖 AI Summary
This work addresses the limitations of reinforcement learning (RL)-driven large language model (LLM)-based recommender systems, which often suffer from low data efficiency and high computational costs due to the mismatch between static data selection strategies and RL's dynamic nature. To overcome this, the authors propose MiniRec, a novel framework that integrates reward alignment and optimization trajectory awareness into data selection. MiniRec evaluates sample learnability using reward signals, measures representativeness by aligning sample gradients with the ideal policy update direction, and incorporates diversity constraints alongside a curriculum learning strategy that progresses from easy to hard samples. This approach goes beyond conventional loss- or coverage-based selection methods, significantly reducing training data requirements while preserving recommendation performance. Extensive experiments demonstrate MiniRec's superior data efficiency and effectiveness.
📄 Abstract
The integration of reinforcement learning (RL) into large language models (LLMs) has opened new opportunities for recommender systems by eliciting reasoning and improving user preference modeling. However, RL-based LLM recommendation faces significant efficiency challenges, making full-data training costly. Existing data selection methods define sample value based on learnability or representativeness, yet their loss-, gradient-, or dataset-coverage-driven criteria often misalign with RL learning dynamics, resulting in suboptimal performance. To address this, we propose MiniRec, a data selection framework tailored for RL-based LLM recommendation. MiniRec evaluates sample learnability using key RL signals -- rewards -- pruning samples that are too easy (reward too high) or too difficult (consistently low reward). It assesses representativeness by aligning sample gradients with the approximated "ideal" global RL optimization trajectory, selecting samples that mainly drive model updates, and it also enforces diversity to reduce redundancy. Combined with a curriculum learning strategy that progresses from easy to hard samples, MiniRec significantly reduces training cost while largely preserving performance. Extensive experiments demonstrate MiniRec's effectiveness, highlighting the importance of reward-aligned, trajectory-informed data selection in RL-based LLM recommendation.
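The selection criteria described above can be illustrated with a minimal sketch. This is not the paper's implementation: the function name `select_samples`, the reward thresholds, the use of the mean gradient as the approximated "ideal" update direction, and the descending-reward curriculum ordering are all assumptions for illustration, and the diversity constraint is omitted for brevity.

```python
import numpy as np

def select_samples(rewards, grads, r_low=0.1, r_high=0.9, k=2):
    """Hypothetical sketch of MiniRec-style data selection (assumed names).

    rewards: (N,) mean rollout reward per training sample.
    grads:   (N, D) per-sample policy gradients.
    Returns indices of selected samples, ordered easy to hard.
    """
    rewards = np.asarray(rewards, dtype=float)
    grads = np.asarray(grads, dtype=float)

    # 1) Learnability: prune samples that are too easy (reward too high)
    #    or too difficult (reward too low).
    idx = np.flatnonzero((rewards > r_low) & (rewards < r_high))

    # 2) Representativeness: approximate the "ideal" global update direction
    #    by the mean gradient of the surviving samples, then score each
    #    sample by its cosine alignment with that direction.
    ideal = grads[idx].mean(axis=0)
    ideal /= np.linalg.norm(ideal) + 1e-8
    g_norm = grads[idx] / (np.linalg.norm(grads[idx], axis=1, keepdims=True) + 1e-8)
    align = g_norm @ ideal

    # 3) Keep the k best-aligned samples, then order them from easy to hard
    #    (descending reward) as a simple curriculum schedule.
    top = idx[np.argsort(-align)[:k]]
    return top[np.argsort(-rewards[top])].tolist()
```

In this toy setup, a sample with reward 0.95 is pruned as too easy and one with reward 0.02 as too hard, while the remaining samples are ranked by gradient alignment and emitted in easy-to-hard order.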