A Parameter Update Balancing Algorithm for Multi-task Ranking Models in Recommendation Systems

📅 2024-10-08
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses task conflict in multi-task ranking models for recommender systems, caused by misalignment between gradients and actual parameter updates under momentum-based optimizers (e.g., Adam). The authors propose Parameter Update Balancing (PUB), the first multi-task optimization (MTO) method to achieve task balance explicitly at the **parameter update level**. Unlike conventional gradient- or loss-based balancing paradigms, PUB decomposes, normalizes, and reweights each task's actual contribution to shared-parameter updates, incorporating a task-sensitive scaling mechanism. PUB is architecture-agnostic and integrates seamlessly with mainstream multi-task architectures such as ESMM and PLE. It achieves significant improvements over state-of-the-art methods across multiple recommendation benchmarks, demonstrates generalizability to computer-vision multi-task datasets, and has been successfully deployed on Huawei's AppGallery industrial platform, delivering consistent gains in primary ranking performance.
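The update-level balancing idea described above can be sketched in a few lines: compute each task's *actual* optimizer update on the shared parameters, unit-normalize it, then reweight and combine. This is an illustrative sketch under our own simplifying assumptions (uniform normalization, user-supplied weights), not the exact PUB decomposition or its task-sensitive scaling, which are defined in the paper.

```python
import numpy as np

def adam_update(grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam step for a single task's gradient on the shared parameters.

    Returns the actual parameter update, which can deviate in direction
    from the raw gradient once momentum state accumulates.
    """
    m = b1 * m + (1 - b1) * grad          # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - b1 ** t)             # bias correction
    v_hat = v / (1 - b2 ** t)
    update = -lr * m_hat / (np.sqrt(v_hat) + eps)
    return update, m, v

def balanced_update(per_task_updates, task_weights):
    """Unit-normalize each task's candidate update, then reweight and sum.

    A minimal stand-in for update-level balancing: no single task's update
    magnitude can dominate the shared parameters.
    """
    balanced = np.zeros_like(per_task_updates[0])
    for u, w in zip(per_task_updates, task_weights):
        norm = np.linalg.norm(u)
        if norm > 0:
            balanced += w * (u / norm)    # normalize, then reweight
    return balanced
```

With weights summing to one, the combined step's norm is bounded by one regardless of how imbalanced the per-task gradients are, which is the property update-level balancing is after.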

📝 Abstract
Multi-task ranking models have become essential for modern real-world recommendation systems. While most recommendation research focuses on designing sophisticated models for specific scenarios, achieving performance improvements for multi-task ranking models across various scenarios remains a significant challenge. Training all tasks naively can result in inconsistent learning, highlighting the need for multi-task optimization (MTO) methods to tackle this challenge. Conventional methods assume that the optimal joint gradient on shared parameters leads to optimal parameter updates. However, the actual update to model parameters may deviate significantly from the gradients when using momentum-based optimizers such as Adam, and we design and execute statistical experiments to support this observation. In this paper, we propose a novel Parameter Update Balancing algorithm for multi-task optimization, denoted PUB. In contrast to traditional MTO methods, which are based on gradient-level or loss-level task fusion, PUB is the first work to optimize multiple tasks through parameter update balancing. Comprehensive experiments on benchmark multi-task ranking datasets demonstrate that PUB consistently improves several multi-task backbones and achieves state-of-the-art performance. Additionally, experiments on benchmark computer vision datasets show the great potential of PUB in various multi-task learning scenarios. Furthermore, we deployed our method for an industrial evaluation on a real-world commercial platform, HUAWEI AppGallery, where PUB significantly enhances the online multi-task ranking model, efficiently managing the primary traffic of a crucial channel.
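The gradient/update deviation the abstract points to is easy to reproduce: once Adam's momentum state has accumulated from earlier steps, the step it actually takes is no longer anti-parallel to the current gradient. A minimal illustration with hand-picked values of our own (not the paper's statistical experiments):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

b1, b2, lr, eps = 0.9, 0.999, 1e-3, 1e-8

# Adam state accumulated from an earlier gradient pointing along x.
prev_grad = np.array([1.0, 0.0])
m = (1 - b1) * prev_grad
v = (1 - b2) * prev_grad ** 2

# The current gradient points along y instead.
grad = np.array([0.0, 1.0])
t = 2
m = b1 * m + (1 - b1) * grad
v = b2 * v + (1 - b2) * grad ** 2
m_hat = m / (1 - b1 ** t)
v_hat = v / (1 - b2 ** t)
update = -lr * m_hat / (np.sqrt(v_hat) + eps)

# Plain gradient descent would step along -grad; Adam's actual step does not:
print(cosine(update, -grad))  # noticeably below 1.0
```

Since the joint gradient and the joint update can thus point in different directions, balancing tasks at the gradient level does not guarantee balance in the updates the shared parameters actually receive, which is the gap PUB targets.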
Problem

Research questions and friction points this paper is trying to address.

Task conflict in multi-task ranking models
Misalignment between gradients and actual parameter updates under momentum-based optimizers
Performance improvement across scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parameter Update Balancing (PUB) algorithm
Optimizes multiple tasks at the parameter update level
Enhances multi-task ranking models
Jun Yuan
Huawei Technologies Co., Ltd., Shenzhen, China
Guohao Cai
Huawei Noah's Ark Lab, Shenzhen, China
Zhenhua Dong
Noah's Ark Lab, Huawei Technologies Co., Ltd.
Recommender system · causal inference · counterfactual learning · trustworthy AI · machine learning