MiniRec: Data-Efficient Reinforcement Learning for LLM-based Recommendation

📅 2026-02-04
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the limitations of reinforcement learning (RL)-driven, large language model (LLM)-based recommender systems, which often suffer from low data efficiency and high computational cost because static data selection strategies are mismatched with RL's dynamic learning process. To overcome this, the authors propose MiniRec, a framework that integrates reward alignment and optimization-trajectory awareness into data selection. MiniRec evaluates sample learnability using reward signals, measures representativeness by aligning sample gradients with the approximated ideal policy update direction, and adds diversity constraints together with a curriculum learning strategy that progresses from easy to hard samples. This approach moves beyond conventional loss- or coverage-based selection methods, substantially reducing training data requirements while preserving recommendation performance. Extensive experiments demonstrate MiniRec's superior data efficiency and effectiveness.

πŸ“ Abstract
The integration of reinforcement learning (RL) into large language models (LLMs) has opened new opportunities for recommender systems by eliciting reasoning and improving user preference modeling. However, RL-based LLM recommendation faces significant efficiency challenges, making full-data training costly. Existing data selection methods define sample value based on learnability or representativeness, yet their loss- or gradient-driven or dataset coverage-driven criteria often misalign with RL learning dynamics, resulting in suboptimal performance. To address this, we propose MiniRec, a data selection framework tailored for RL-based LLM recommendation. MiniRec evaluates sample learnability using key RL signals -- rewards -- pruning samples that are too easy (too high reward) or too difficult (consistently low reward). It assesses representativeness by aligning sample gradients with the approximated "ideal" global RL optimization trajectory, selecting samples that mainly drive model updates, and it also enforces diversity to reduce redundancy. Combined with a curriculum learning strategy from easy to hard samples, MiniRec significantly reduces training cost while largely preserving performance. Extensive experiments demonstrate MiniRec's effectiveness, highlighting the importance of reward-aligned, trajectory-informed data selection in RL-based LLM recommendation.
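The selection pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the reward thresholds, the use of the pool's mean gradient as the "ideal" update direction, the greedy diversity penalty and its 0.5 weight, and all function names are illustrative assumptions.

```python
# Hypothetical sketch of MiniRec-style data selection.
# Assumed input: each sample carries per-rollout rewards and a
# per-sample gradient (here a plain Python vector for simplicity).

def learnability(rewards, low=0.1, high=0.9):
    """Keep samples whose mean reward is neither too high (too easy)
    nor too low (too hard). Thresholds are illustrative."""
    mean_r = sum(rewards) / len(rewards)
    return low <= mean_r <= high

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb + 1e-12)

def select(samples, budget):
    # 1) Learnability filter driven by reward signals.
    pool = [s for s in samples if learnability(s["rewards"])]
    # 2) Representativeness: alignment with an approximated "ideal"
    #    update direction (here, the mean gradient over the pool).
    dim = len(pool[0]["grad"])
    mean_grad = [sum(s["grad"][i] for s in pool) / len(pool)
                 for i in range(dim)]
    for s in pool:
        s["align"] = cosine(s["grad"], mean_grad)
    # 3) Greedy selection with a diversity penalty that discounts
    #    samples whose gradients echo already-chosen ones.
    chosen = []
    while pool and len(chosen) < budget:
        def score(s):
            redundancy = max((cosine(s["grad"], c["grad"]) for c in chosen),
                             default=0.0)
            return s["align"] - 0.5 * redundancy
        best = max(pool, key=score)
        chosen.append(best)
        pool.remove(best)
    # 4) Curriculum ordering: train on easy (high-reward) samples first.
    chosen.sort(key=lambda s: -sum(s["rewards"]) / len(s["rewards"]))
    return chosen
```

In practice the gradients would be high-dimensional model gradients (or low-rank sketches of them) and the reward statistics would come from RL rollouts; the structure of the four steps is what this sketch aims to convey.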
Problem

Research questions and friction points this paper is trying to address.

reinforcement learning
large language models
recommendation
data efficiency
data selection
Innovation

Methods, ideas, or system contributions that make the work stand out.

reward-aligned data selection
trajectory-informed sampling
curriculum learning
data-efficient RL
LLM-based recommendation