🤖 AI Summary
In real-world online advertising, advertisers report only a subset of conversions, leading to incomplete labels and severe distribution skew in multi-objective CVR prediction. To address these challenges, we propose KAML, a fine-grained knowledge transfer framework. KAML introduces an attribution-driven masking (ADM) strategy to mitigate label-scarcity bias, and a hierarchical knowledge extraction (HKE) mechanism to suppress noise interference. It jointly integrates attribution modeling, dynamic masking, hierarchical tower-based knowledge distillation, and ranking-aware loss optimization. Evaluated on industrial offline datasets and large-scale online A/B tests, KAML consistently outperforms state-of-the-art multi-task learning methods, achieving an average +1.23% improvement in CVR AUC, along with notably better generalization and robustness. KAML establishes a unified CVR modeling paradigm for scenarios with incomplete multi-label supervision.
📝 Abstract
In most real-world online advertising systems, advertisers have diverse customer acquisition goals. A common solution is to use multi-task learning (MTL) to train a unified model on post-click data to estimate the conversion rate (CVR) for these diverse targets. In practice, CVR prediction often suffers from missing conversion data, as many advertisers submit only a subset of user conversion actions due to privacy or other constraints, making the labels of multi-task data incomplete. If the model is trained on all available samples where advertisers submit user conversion actions, it may struggle when deployed to serve a subset of advertisers targeting specific conversion actions, because the training and deployment data distributions are mismatched. Despite considerable MTL efforts, a long-standing challenge remains: how to effectively train a unified model on incomplete and skewed multi-label data. In this paper, we propose a fine-grained Knowledge transfer framework for Asymmetric Multi-Label data (KAML). We introduce an attribution-driven masking (ADM) strategy to better utilize asymmetric multi-label data in training. However, the more relaxed masking in ADM is a double-edged sword: it provides additional training signals but also introduces noise due to skewed data. To address this, we propose a hierarchical knowledge extraction (HKE) mechanism to model the sample discrepancy within the target task tower. Finally, to maximize the utility of unlabeled samples, we incorporate a ranking-loss strategy to further enhance our model. The effectiveness of KAML is demonstrated through comprehensive evaluations on offline industry datasets and online A/B tests, which show significant performance improvements over existing MTL baselines.
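The core difficulty the abstract describes is that an unreported conversion label is *missing*, not negative: naively training on all (sample, task) entries would punish the model for conversions the advertiser simply never submitted. A minimal sketch of the masked multi-task objective this implies is below; the function name, array shapes, and the specific masking scheme are illustrative assumptions, not the paper's actual ADM formulation, which additionally drives the mask from attribution signals.

```python
import math

def masked_multitask_bce(preds, labels, mask, eps=1e-7):
    """Binary cross-entropy averaged over *reported* (sample, task) entries only.

    preds, labels, mask: nested lists of shape [batch][num_tasks].
    mask[i][t] = 1 if the advertiser submitted conversion label t for
    sample i; 0 means the label is missing (not a true negative).
    All names/shapes here are illustrative, not the paper's API.
    """
    total, count = 0.0, 0
    for p_row, y_row, m_row in zip(preds, labels, mask):
        for p, y, m in zip(p_row, y_row, m_row):
            if m:  # skip unreported labels instead of treating them as negatives
                p = min(max(p, eps), 1.0 - eps)  # clip for numerical stability
                total += -(y * math.log(p) + (1.0 - y) * math.log(1.0 - p))
                count += 1
    return total / max(count, 1)  # average over labeled entries only
```

Under this framing, ADM can be read as a policy for *relaxing* which entries of `mask` are set to 1 (admitting more training signal at the cost of noise), while HKE and the ranking loss compensate for the noise and the still-unlabeled entries respectively.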