An Offline Reinforcement Learning Algorithm Customized for Multi-Task Fusion in Large-Scale Recommender Systems

πŸ“… 2024-04-19
πŸ“ˆ Citations: 2
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
In multi-task fusion (MTF) recommendation, offline reinforcement learning (Offline RL) suffers from suboptimal policies due to overly conservative constraints and insufficient exploration mechanisms. To address this, we propose a synergistic framework integrating Offline RL with customized online exploration: (1) a targeted exploration mechanism focusing on high-value state-action pairs to mitigate offline data distribution bias; and (2) a progressive training paradigm to enhance policy convergence and generalization. Evaluated on Tencent News’ short-video feed, the method has been fully deployed across multiple large-scale industrial recommendation systems. It yields statistically significant improvements in key metrics: user satisfaction (+3.2%), click-through rate (+2.8%), and completion rate (+4.1%). This work establishes a scalable, high-impact paradigm for deploying Offline RL in real-world recommendation scenarios.

Technology Category

Application Category

πŸ“ Abstract
As the last critical stage of RSs, Multi-Task Fusion (MTF) is responsible for combining multiple scores outputted by Multi-Task Learning (MTL) into a final score to maximize user satisfaction, which determines the ultimate recommendation results. Recently, to optimize long-term user satisfaction within a recommendation session, Reinforcement Learning (RL) is used for MTF in the industry. However, the offline RL algorithms used for MTF so far have the following severe problems: 1) to avoid out-of-distribution (OOD) problem, their constraints are overly strict, which seriously damage their performance; 2) they are unaware of the exploration policy used for producing training data and never interact with real environment, so only suboptimal policy can be learned; 3) the traditional exploration policies are inefficient and hurt user experience. To solve the above problems, we propose a novel method named IntegratedRL-MTF customized for MTF in large-scale RSs. IntegratedRL-MTF integrates offline RL model with our online exploration policy to relax overstrict and complicated constraints, which significantly improves its performance. We also design an extremely efficient exploration policy, which eliminates low-value exploration space and focuses on exploring potential high-value state-action pairs. Moreover, we adopt progressive training mode to further enhance our model's performance with the help of our exploration policy. We conduct extensive offline and online experiments in the short video channel of Tencent News. The results demonstrate that our model outperforms other models remarkably. IntegratedRL-MTF has been fully deployed in our RS and other large-scale RSs in Tencent, which have achieved significant improvements.
Problem

Research questions and friction points this paper is trying to address.

Offline Reinforcement Learning
Multi-task Fusion Recommendation Systems
Exploration Strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

IntegratedRL-MTF
Multi-Task Fusion
Offline-Online Reinforcement Learning
πŸ”Ž Similar Papers
No similar papers found.
P
Peng Liu
Tencent Inc., Beijing, China
C
Cong Xu
Tencent Inc., Beijing, China
M
Min Zhao
Tencent Inc., Beijing, China
J
Jiawei Zhu
Tencent Inc., Beijing, China
B
Bin Wang
Tencent Inc., Beijing, China
Y
Yi Ren
Tencent Inc., Beijing, China