AI Summary
This work addresses the challenge of large-scale personalized recommendation, where balancing multiple stakeholder objectives, satisfying complex business constraints, and enabling effective exploration remain difficult. We propose a scalable multi-stakeholder contextual bandit framework that, for the first time, integrates neural Thompson sampling end-to-end with a large-scale linear programming solver capable of handling billions of variables. The former models multi-objective rewards under uncertainty; the latter efficiently performs constrained action selection at serving time and is compatible with arbitrary neural network architectures. Experiments demonstrate that our approach significantly outperforms strong baselines on both public benchmarks and synthetic datasets, and the framework has already delivered substantial business gains in LinkedIn's email marketing system.
Abstract
We present BanditLP, a scalable multi-stakeholder contextual bandit framework that unifies neural Thompson Sampling for learning objective-specific outcomes with a large-scale linear program for constrained action selection at serving time. The methodology is application-agnostic, compatible with arbitrary neural architectures, and deployable at web scale, with an LP solver capable of handling billions of variables. Experiments on public benchmarks and synthetic data show consistent gains over strong baselines. We apply this approach in LinkedIn's email marketing system and demonstrate business wins, illustrating the value of integrated exploration and constrained optimization in production.
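To make the two-stage design concrete, here is a minimal, hypothetical sketch of the serving-time loop the abstract describes: a Thompson-sampling draw of per-action rewards followed by an LP that picks a constrained action distribution. The Gaussian per-action posterior, the cost vector, and the budget are all illustrative stand-ins (the paper's actual system uses neural posteriors and a custom solver scaling to billions of variables); this toy version uses `scipy.optimize.linprog`.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Thompson sampling step: draw one plausible reward per action from a
# posterior. In BanditLP this posterior comes from a neural model; here a
# simple per-action Gaussian is an illustrative stand-in.
mu = np.array([0.30, 0.50, 0.20, 0.40])     # hypothetical posterior means
sigma = np.array([0.05, 0.10, 0.05, 0.08])  # hypothetical posterior std devs
sampled_reward = rng.normal(mu, sigma)      # one posterior draw per action

# Hypothetical business constraint: each action has a cost (e.g., an
# email-fatigue proxy), and the chosen policy must respect a budget.
cost = np.array([1.0, 2.0, 0.5, 1.5])
budget = 1.2

# LP over a probability distribution x on actions:
#   maximize sampled_reward @ x   (linprog minimizes, so negate c)
#   subject to cost @ x <= budget, sum(x) == 1, 0 <= x <= 1
res = linprog(
    c=-sampled_reward,
    A_ub=cost[None, :], b_ub=[budget],
    A_eq=np.ones((1, len(mu))), b_eq=[1.0],
    bounds=[(0.0, 1.0)] * len(mu),
)
policy = res.x  # constrained stochastic policy over actions for this context
```

Sampling fresh rewards per request gives exploration, while the LP enforces the stakeholder constraints on every decision; the production-scale version solves one such program jointly across members and actions rather than per request.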