🤖 AI Summary
To address information redundancy and noise interference in multi-scenario, multi-task recommendation (MSMTR), this paper proposes a lightweight, automated information-flow selection framework. Unlike existing methods that rely on complex mixture-of-experts architectures, our approach introduces the first learnable information-flow filtering mechanism, which decouples the four types of information units via low-rank adaptation (LoRA) to enable flexible fusion and precise pruning of ineffective flows at minimal parameter overhead. A performance-feedback-driven dynamic pruning function further suppresses noise propagation. On two public benchmarks and in online A/B tests, our method achieves a 2.1% lift in CVR while reducing model parameters by 37% and training cost by 29%, demonstrating superior efficiency and practicality.
📝 Abstract
Multi-scenario multi-task recommendation (MSMTR) systems must address recommendation demands across diverse scenarios while simultaneously optimizing multiple objectives, such as click-through rate and conversion rate. Existing MSMTR models typically consist of four information units: scenario-shared, scenario-specific, task-shared, and task-specific networks. These units interact to generate four types of relationship information flows, directed from scenario-shared or scenario-specific networks to task-shared or task-specific networks. However, these models face two main limitations: 1) They often rely on complex architectures, such as mixture-of-experts (MoE) networks, which increase the complexity of information fusion, the model size, and the training cost. 2) They extract all available information flows without filtering out irrelevant or even harmful content, introducing potential noise. To address these challenges, we propose a lightweight Automated Information Flow Selection (AutoIFS) framework for MSMTR. To tackle the first issue, AutoIFS incorporates low-rank adaptation (LoRA) to decouple the four information units, enabling more flexible and efficient information fusion with minimal parameter overhead. To address the second issue, AutoIFS introduces an information flow selection network that automatically filters out invalid scenario-task information flows based on model performance feedback. It employs a simple yet effective pruning function to eliminate useless information flows, thereby enhancing the impact of key relationships and improving model performance. Finally, we evaluate AutoIFS and confirm its effectiveness through extensive experiments on two public benchmark datasets and an online A/B test.
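The two ideas above can be sketched in a few lines of PyTorch. This is a hypothetical illustration, not the authors' implementation: each of the four scenario-to-task information flows is modeled as a rank-`r` LoRA-style adapter, and a learnable per-flow gate is hard-pruned to zero when it falls below a threshold `tau` (all names, dimensions, and the sigmoid-gate form are assumptions for the sketch; the paper's actual pruning function is driven by performance feedback).

```python
import torch
import torch.nn as nn

# Assumed flow names: scenario-{shared,specific} -> task-{shared,specific}.
FLOWS = ["ss_ts", "ss_tp", "sp_ts", "sp_tp"]

class LoRAFlow(nn.Module):
    """One information flow parameterized as a rank-r update x @ A^T @ B^T (r << d)."""
    def __init__(self, d, r=4):
        super().__init__()
        self.A = nn.Parameter(torch.randn(r, d) * 0.01)
        self.B = nn.Parameter(torch.zeros(d, r))  # zero-init: flow contributes nothing at start

    def forward(self, x):  # x: (batch, d)
        return x @ self.A.t() @ self.B.t()

class AutoIFSSketch(nn.Module):
    """Gated fusion of the four LoRA flows into one task representation."""
    def __init__(self, d=32, r=4, tau=0.05):
        super().__init__()
        self.flows = nn.ModuleDict({name: LoRAFlow(d, r) for name in FLOWS})
        self.gate_logits = nn.Parameter(torch.zeros(len(FLOWS)))  # one learnable gate per flow
        self.tau = tau

    def pruned_gates(self):
        g = torch.sigmoid(self.gate_logits)
        # Hard pruning: flows whose gate drops below tau are zeroed out entirely.
        return torch.where(g > self.tau, g, torch.zeros_like(g))

    def forward(self, scen_shared, scen_spec):
        # Prefix "ss"/"sp" selects the source: scenario-shared vs. scenario-specific.
        src = {"ss": scen_shared, "sp": scen_spec}
        g = self.pruned_gates()
        out = torch.zeros_like(scen_shared)
        for i, name in enumerate(FLOWS):
            out = out + g[i] * self.flows[name](src[name[:2]])
        return out
```

In a sketch like this, gates that the optimizer drives toward zero cut their flow out of the fusion completely, so noise from an unhelpful scenario-task pairing stops propagating, while the LoRA parameterization keeps the per-flow overhead at `2 * d * r` parameters instead of a full `d * d` projection.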