🤖 AI Summary
In billion-scale multi-task recommendation, existing methods suffer performance degradation because graph structures are heterogeneous across tasks and macro-level graph topology is ignored.
Method: This paper proposes the first multi-task learning framework to incorporate macro-graph structural embeddings. It introduces (1) a Macro Graph Bottom that unifies cross-task expert associations and captures global graph topological features; and (2) a Macro Prediction Tower that enables task-specific macro-graph representation learning and adaptive cross-task knowledge fusion. The method integrates graph neural networks, multi-task expert networks, and a dynamic weight ensemble mechanism.
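The combination of a shared graph-aware bottom, per-task expert gating, and a dynamic weight ensemble can be sketched as a single forward pass. This is a minimal illustrative sketch, not the paper's implementation: the aggregation scheme, all weight names (`W_experts`, `W_gates`, `W_towers`), and the shapes are assumptions chosen for clarity.

```python
# Hedged sketch of an MGOE-style forward pass. All names, shapes, and the
# macro-aggregation scheme are illustrative assumptions, not the paper's
# exact formulation.
import numpy as np

rng = np.random.default_rng(0)

D = 16   # embedding dimension
E = 3    # number of shared experts
T = 2    # number of tasks (e.g. CTR, session duration)
M = 5    # macro nodes adjacent to the user in the macro graph

user_emb   = rng.standard_normal(D)
item_emb   = rng.standard_normal(D)
macro_embs = rng.standard_normal((M, D))   # macro-node embeddings

def macro_bottom(user, item, macro, attn_logits):
    """Macro Graph Bottom (assumed form): task-aware softmax attention
    over macro-node embeddings, concatenated with user/item features."""
    w = np.exp(attn_logits - attn_logits.max())
    w = w / w.sum()
    macro_feat = w @ macro                 # weighted macro aggregation
    return np.concatenate([user, item, macro_feat])

W_experts = rng.standard_normal((E, 3 * D, D))  # shared expert weights
W_gates   = rng.standard_normal((T, 3 * D, E))  # per-task gating weights
W_towers  = rng.standard_normal((T, D))         # per-task prediction towers
task_attn = rng.standard_normal((T, M))         # per-task macro attention

preds = []
for t in range(T):
    x = macro_bottom(user_emb, item_emb, macro_embs, task_attn[t])
    expert_out = np.tanh(x @ W_experts)         # (E, D) expert outputs
    gate = np.exp(x @ W_gates[t])
    gate /= gate.sum()                          # softmax gate over experts
    fused = gate @ expert_out                   # dynamic weight ensemble
    preds.append(1.0 / (1.0 + np.exp(-fused @ W_towers[t])))  # task score
```

Each task thus receives its own macro-graph view via `task_attn` while sharing the expert pool, which is the intuition behind combining a graph bottom with a mixture-of-experts head.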
Results: Our approach achieves significant improvements over state-of-the-art methods on three public benchmarks. Deployed in the homepage recommendation system of a leading industrial platform, online A/B tests demonstrate a +2.1% lift in CTR and a +3.4% increase in user session duration.
📝 Abstract
Graph-based multi-task learning at billion-scale presents a significant challenge, as different tasks correspond to distinct billion-scale graphs. Traditional multi-task learning methods often neglect these graph structures, relying solely on individual user and item embeddings; disregarding them forgoes substantial potential performance gains. In this paper, we introduce the Macro Graph of Expert (MGOE) framework, the first approach capable of leveraging macro graph embeddings to capture task-specific macro features while modeling the correlations between task-specific experts. Specifically, we propose the concept of a Macro Graph Bottom, which, for the first time, enables multi-task learning models to incorporate graph information effectively, and we design the Macro Prediction Tower to dynamically integrate macro knowledge across tasks. MGOE has been deployed at scale, powering multi-task learning for the homepage of a leading billion-scale recommender system. Extensive offline experiments on three public benchmark datasets demonstrate its superiority over state-of-the-art multi-task learning methods, and online A/B tests further confirm its effectiveness in billion-scale recommender systems.