🤖 AI Summary
Problem: Offline reinforcement learning (RL) for ad recommendation under sparse rewards suffers from overestimation bias, distributional shift, and inadequate modeling of budget constraints. Method: We propose a causal Markov decision process framework tailored to advertising decisions, featuring causal state encoding and conditional sequence modeling. We design a multi-task offline RL architecture that incorporates causal attention mechanisms to jointly optimize channel recommendation and dynamic budget allocation while keeping the two policies decoupled. Contribution/Results: Our method significantly outperforms state-of-the-art baselines in both offline evaluation and large-scale online A/B tests. It effectively mitigates overestimation and distributional shift, yielding consistent improvements in core metrics—including CTR, CVR, and ROI—as well as overall system revenue.
📝 Abstract
Online advertising on recommendation platforms has gained significant attention, with a predominant focus on channel recommendation and budget allocation strategies. However, current offline reinforcement learning (RL) methods face substantial challenges when applied to sparse advertising scenarios, primarily severe overestimation, distributional shift, and neglect of budget constraints. To address these issues, we propose MTORL, a novel multi-task offline RL model that targets two key objectives. First, we establish a Markov Decision Process (MDP) framework specific to the nuances of advertising. Then, we develop a causal state encoder to capture dynamic user interests and temporal dependencies, facilitating offline RL through conditional sequence modeling. Causal attention mechanisms are introduced to enhance user sequence representations by identifying correlations among causal states. We employ multi-task learning to decode actions and rewards, simultaneously addressing channel recommendation and budget allocation. Notably, our framework includes an automated system for integrating these tasks into online advertising. Extensive experiments in offline and online environments demonstrate MTORL's superiority over state-of-the-art methods.
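The causal attention the abstract refers to can be illustrated with a minimal sketch: each position in a user's event sequence attends only to earlier (and current) positions, so the encoder never leaks future information when modeling temporal dependencies. This is a generic single-head illustration, not MTORL's actual encoder; the dimensions, the toy sequence, and the use of the inputs themselves as queries/keys/values are simplifying assumptions.

```python
import numpy as np

def causal_attention(x):
    """Single-head self-attention with a causal mask: position t may only
    attend to positions <= t, preserving the temporal order of user events.
    Illustrative sketch only; not the paper's implementation."""
    T, d = x.shape
    # For simplicity, use the inputs directly as queries, keys, and values.
    scores = x @ x.T / np.sqrt(d)                     # (T, T) similarity scores
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)  # strictly-future positions
    scores[mask] = -np.inf                            # block attention to the future
    # Row-wise softmax over the unmasked (past and current) positions.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x                                # causally mixed representations

# Toy user-interaction sequence: 4 events with 8-dimensional embeddings.
rng = np.random.default_rng(0)
seq = rng.normal(size=(4, 8))
out = causal_attention(seq)
print(out.shape)  # (4, 8)
```

Because position 0 can attend only to itself, its output equals its input exactly; later positions mix in progressively more history, which is the property the causal state encoder relies on.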