🤖 AI Summary
This work addresses a core challenge in large-scale multi-task recommendation systems: uniform parameter scaling fails to accommodate the heterogeneous capacity demands of individual tasks, leading to high inference costs and diminishing returns on sparse tasks. To this end, we propose SMES, a framework built on a Sparse Mixture-of-Experts (Sparse MoE) architecture with a hybrid activation mechanism that combines task-shared and task-private experts. SMES further introduces progressive expert routing and a global multi-gate load-balancing regularizer, which jointly enhance task-specific modeling while preserving instance-level sparsity. This design explicitly bounds the number of experts activated per instance and mitigates expert load imbalance. Deployed on Kuaishou's short-video platform, which serves over 400 million daily active users, SMES achieves a 0.29% gain in GAUC and a 0.31% increase in user watch time.
📝 Abstract
Industrial recommender systems typically rely on multi-task learning to estimate diverse user feedback signals and aggregate them for ranking. Recent advances in model scaling have shown promising gains in recommendation. However, naively increasing model capacity imposes prohibitive online inference costs and often yields diminishing returns for sparse tasks with skewed label distributions. This mismatch between uniform parameter scaling and heterogeneous task capacity demands poses a fundamental challenge for scalable multi-task recommendation. In this work, we investigate parameter sparsification as a principled scaling paradigm and identify two critical obstacles when applying sparse Mixture-of-Experts (MoE) to multi-task recommendation: expert-activation explosion, which undermines instance-level sparsity, and expert load skew caused by independent task-wise routing. To address these challenges, we propose SMES, a scalable sparse MoE framework with progressive expert routing. SMES decomposes expert activation into a task-shared expert subset jointly selected across tasks and task-adaptive private experts, explicitly bounding per-instance expert execution while preserving task-specific capacity. In addition, SMES introduces a global multi-gate load-balancing regularizer that stabilizes training by regulating aggregated expert utilization across all tasks. SMES has been deployed in Kuaishou's large-scale short-video services, supporting over 400 million daily active users. Extensive online experiments demonstrate stable improvements, with a GAUC gain of 0.29% and a 0.31% uplift in user watch time.
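To make the two core ideas concrete, here is a minimal sketch of how a hybrid shared/private expert selection and an aggregated load-balancing penalty might look. The abstract gives no equations, so everything below is an assumption: the gating parameterization, the `k_shared`/`k_private` budgets, and the squared-error form of the balance loss are illustrative stand-ins, not the paper's actual formulation.

```python
import numpy as np

def hybrid_expert_routing(x, gate_weights, k_shared=2, k_private=1):
    """Illustrative routing: a task-shared expert subset chosen jointly
    across tasks, plus per-task private experts (hypothetical design).

    x: (d,) instance embedding
    gate_weights: (T, E, d) per-task gating matrices (assumed parameterization)
    """
    T, E, d = gate_weights.shape
    # Per-task gate distributions over experts (softmax of linear gates).
    logits = gate_weights @ x                      # (T, E)
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)

    # Task-shared subset: top experts under scores aggregated across tasks.
    shared = np.argsort(-probs.sum(axis=0))[:k_shared]

    # Task-private experts: each task's best scorers outside the shared set.
    routes = []
    for t in range(T):
        masked = probs[t].copy()
        masked[shared] = -np.inf                   # exclude shared experts
        private = np.argsort(-masked)[:k_private]
        routes.append(np.concatenate([shared, private]))

    # Union of executed experts is bounded by k_shared + T * k_private,
    # which is the instance-level sparsity guarantee the text describes.
    activated = np.unique(np.concatenate(routes))
    return routes, activated, probs

def load_balance_loss(probs):
    """Global multi-gate balance term: penalize deviation of expert
    utilization, aggregated over all task gates, from uniform.
    A simple squared-error surrogate; the paper's exact loss is not given."""
    E = probs.shape[1]
    util = probs.mean(axis=0)                      # (E,), sums to 1
    return float(E * np.sum((util - 1.0 / E) ** 2))
```

The key property to notice is that the shared subset is selected once for the instance rather than per task, so adding tasks grows the executed-expert count only by `k_private` each, and the balance loss operates on utilization summed over all gates rather than per gate.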