SMES: Towards Scalable Multi-Task Recommendation via Expert Sparsity

📅 2026-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge in large-scale multi-task recommendation systems where uniform parameter scaling fails to accommodate the heterogeneous capacity demands of individual tasks, leading to high inference costs and diminishing returns for sparse tasks. To this end, we propose SMES, a framework built upon a Sparse Mixture-of-Experts (Sparse MoE) architecture that integrates a hybrid activation mechanism combining task-shared and task-private experts. SMES further introduces progressive expert routing and a global multi-gate load-balancing regularizer, which jointly enhance task-specific modeling while preserving instance-level sparsity. This design effectively constrains the number of activated experts and mitigates load imbalance. Deployed on Kuaishou’s short-video platform—serving over 400 million daily active users—the method achieves a 0.29% gain in GAUC and a 0.31% increase in user watch time.

📝 Abstract
Industrial recommender systems typically rely on multi-task learning to estimate diverse user feedback signals and aggregate them for ranking. Recent advances in model scaling have shown promising gains in recommendation. However, naively increasing model capacity imposes prohibitive online inference costs and often yields diminishing returns for sparse tasks with skewed label distributions. This mismatch between uniform parameter scaling and heterogeneous task capacity demands poses a fundamental challenge for scalable multi-task recommendation. In this work, we investigate parameter sparsification as a principled scaling paradigm and identify two critical obstacles when applying sparse Mixture-of-Experts (MoE) to multi-task recommendation: exploded expert activation that undermines instance-level sparsity, and expert load skew caused by independent task-wise routing. To address these challenges, we propose SMES, a scalable sparse MoE framework with progressive expert routing. SMES decomposes expert activation into a task-shared expert subset jointly selected across tasks and task-adaptive private experts, explicitly bounding per-instance expert execution while preserving task-specific capacity. In addition, SMES introduces a global multi-gate load-balancing regularizer that stabilizes training by regulating aggregated expert utilization across all tasks. SMES has been deployed in Kuaishou's large-scale short-video services, supporting over 400 million daily active users. Extensive online experiments demonstrate stable improvements, with a GAUC gain of 0.29% and a 0.31% uplift in user watch time.
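The hybrid activation scheme described in the abstract can be sketched very loosely as follows. Everything here is an illustrative assumption rather than the paper's formulation: the dimensions, the rule for picking the shared subset (gate mass summed over tasks), the one-private-expert-per-task choice, and the squared-utilization balancing penalty are all placeholders for the actual progressive routing and regularizer in SMES.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): 3 tasks, 8 experts,
# 2 shared experts selected jointly, 1 private expert per task.
NUM_TASKS, NUM_EXPERTS, K_SHARED = 3, 8, 2

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def route(gate_logits):
    """gate_logits: (NUM_TASKS, NUM_EXPERTS) per-task gate scores for
    one instance. Returns the activated expert set and a load-balancing
    penalty aggregated over all task gates."""
    probs = softmax(gate_logits, axis=-1)  # task-wise gate distributions
    # Task-shared subset: experts with the most gate mass summed over tasks.
    shared = set(np.argsort(-probs.sum(axis=0))[:K_SHARED].tolist())
    # Task-private expert: each task's top-scoring expert outside the shared set.
    private = {}
    for t in range(NUM_TASKS):
        order = np.argsort(-probs[t])
        private[t] = next(int(e) for e in order if int(e) not in shared)
    activated = shared | set(private.values())
    # Global multi-gate load-balancing term: penalize deviation of the
    # utilization aggregated across all task gates from uniform (a common
    # MoE-style formulation; the paper's exact regularizer may differ).
    util = probs.mean(axis=0)
    balance_loss = float(NUM_EXPERTS * np.sum(util * util))  # >= 1, == 1 iff uniform
    return activated, balance_loss

logits = rng.normal(size=(NUM_TASKS, NUM_EXPERTS))
active, loss = route(logits)
# Per-instance activation is explicitly bounded, as the abstract claims:
assert K_SHARED + 1 <= len(active) <= K_SHARED + NUM_TASKS
```

The point of the bound is that, unlike independent task-wise top-k routing (which can activate up to NUM_TASKS × k distinct experts per instance), the shared-plus-private decomposition caps execution at K_SHARED + NUM_TASKS experts regardless of how the gates disagree.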
Problem

Research questions and friction points this paper is trying to address.

multi-task recommendation
model scaling
expert sparsity
sparse tasks
parameter efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse Mixture-of-Experts
Multi-Task Recommendation
Expert Sparsity
Load Balancing
Progressive Routing
Yukun Zhang
Harbin Institute of Technology (Shenzhen)
Computer Science, AI
Si Dong
Kuaishou Technology Co., Ltd.
Xu Wang
Kuaishou Technology Co., Ltd.
Bo Chen
Kuaishou Technology Co., Ltd.
Qinglin Jia
Kuaishou Technology Co., Ltd.
Shengzhe Wang
Kuaishou Technology Co., Ltd.
Jinlong Jiao
Kuaishou Technology Co., Ltd.
Runhan Li
Kuaishou Technology Co., Ltd.
Jiaqing Liu
Renmin University of China
Natural Language Processing, Deep Learning, Machine Learning, Finance
Chaoyi Ma
University of Florida
Data Science, Big Data, Network Traffic Measurement, Data Streaming Summary
Ruiming Tang
Kuaishou Technology Co., Ltd.
Guorui Zhou
Unknown affiliation
Recommender System, Advertising, Artificial Intelligence, Machine Learning, NLP
Han Li
Kuaishou Technology Co., Ltd.
Kun Gai
Senior Director & Researcher, Alibaba Group
Machine Learning, Computational Advertising