Distillation-Accelerated Uncertainty Modeling for Multi-Objective RTA Interception

📅 2025-11-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Real-Time Auction (RTA) interception must reject low-quality traffic efficiently to protect downstream data quality, yet it faces two key challenges: insufficient accuracy in traffic-quality assessment and unreliable model confidence estimates; moreover, conventional uncertainty modeling suffers from poor latency because it requires repeated inference. This paper proposes a unified framework integrating multi-objective learning with lightweight uncertainty modeling. The authors introduce knowledge distillation into Bayesian uncertainty estimation for the first time, enabling accurate prediction and well-calibrated confidence while drastically reducing computational overhead. The approach jointly optimizes CTR, CVR, and traffic-quality scoring; models predictive uncertainty via variational inference; and compresses the uncertainty-inference pathway through teacher-student distillation. Evaluated on the JD.com advertising dataset, the distilled model achieves a 10x speedup in inference latency alongside simultaneous improvements in AUC and expected calibration error (ECE), thereby co-optimizing efficiency and reliability.
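As a rough illustration of the latency trade-off the summary describes, the toy sketch below (not the paper's code; the sigmoid base scorer, Gaussian noise model, and linear student heads are stand-ins) contrasts a Bayesian-style teacher that needs K stochastic forward passes with a distilled student that emits mean and variance in a single deterministic pass:

```python
import math
import random
import statistics

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def teacher_predict(x, k=50, noise=0.05, rng=None):
    """Bayesian-style teacher: k stochastic passes (a stand-in for MC dropout
    or variational sampling) yield a predictive mean and variance.
    Serving cost grows linearly with k."""
    rng = rng or random.Random(0)
    samples = [sigmoid(x + rng.gauss(0.0, noise)) for _ in range(k)]
    return statistics.fmean(samples), statistics.pvariance(samples)

def fit_linear(xs, ys):
    """Least-squares line fit; stands in for training a student head."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return lambda x, a=my - slope * mx, b=slope: a + b * x

# "Distillation": fit cheap student heads to mimic the teacher's
# Monte-Carlo mean and variance across a grid of inputs.
xs = [i / 10 for i in range(-20, 21)]
targets = [teacher_predict(x, rng=random.Random(i)) for i, x in enumerate(xs)]
mean_head = fit_linear(xs, [t[0] for t in targets])
var_head = fit_linear(xs, [t[1] for t in targets])

def student_predict(x):
    """Distilled student: one pass returns (mean, variance) directly."""
    return mean_head(x), max(var_head(x), 0.0)
```

With k = 50 teacher passes replaced by one student pass, the serving-time saving is roughly the sampling factor, which is the mechanism behind the reported 10x latency speedup (the exact factor depends on the teacher's sampling budget and the student's size).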

📝 Abstract
Real-Time Auction (RTA) Interception aims to filter out invalid or irrelevant traffic to enhance the integrity and reliability of downstream data. However, two key challenges remain: (i) the need for accurate estimation of traffic quality together with sufficiently high confidence in the model's predictions, typically addressed through uncertainty modeling, and (ii) the efficiency bottlenecks that such uncertainty modeling introduces in real-time applications due to repeated inference. To address these challenges, we propose DAUM, a joint modeling framework that integrates multi-objective learning with uncertainty modeling, yielding both traffic quality predictions and reliable confidence estimates. Building on DAUM, we further apply knowledge distillation to reduce the computational overhead of uncertainty modeling, while largely preserving predictive accuracy and retaining the benefits of uncertainty estimation. Experiments on the JD advertisement dataset demonstrate that DAUM consistently improves predictive performance, with the distilled model delivering a tenfold increase in inference speed.
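The abstract's claim of "reliable confidence estimates" is typically quantified with expected calibration error (ECE), the metric the summary reports improvements on. A minimal, generic equal-width-binning implementation (an illustration of the metric, not the paper's exact binning scheme):

```python
def ece(probs, labels, n_bins=10):
    """Expected calibration error: bin predictions by confidence, then
    average the |accuracy - confidence| gap, weighted by bin size."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[idx].append((p, y))
    total, n = 0.0, len(probs)
    for b in bins:
        if b:
            conf = sum(p for p, _ in b) / len(b)   # mean predicted probability
            acc = sum(y for _, y in b) / len(b)    # empirical positive rate
            total += len(b) / n * abs(acc - conf)
    return total
```

A perfectly calibrated model scores 0; a model that is always fully confident and always wrong scores 1.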
Problem

Research questions and friction points this paper is trying to address.

Accurate traffic quality estimation with reliable confidence
Efficient uncertainty modeling for real-time applications
Computational overhead of uncertainty modeling, to be reduced via knowledge distillation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Joint modeling integrates multi-objective learning with uncertainty
Knowledge distillation reduces computational overhead of uncertainty modeling
Distilled model achieves tenfold inference speed increase while preserving accuracy
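The joint-modeling contribution above can be sketched as a weighted sum of per-task losses over the three heads the summary names (CTR, CVR, traffic-quality). The uniform weights and the plain binary cross-entropy form are illustrative assumptions, not the paper's published objective:

```python
import math

def bce(p, y, eps=1e-7):
    """Binary cross-entropy for one (prediction, label) pair."""
    p = min(max(p, eps), 1.0 - eps)  # clip to avoid log(0)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def joint_loss(preds, labels, weights=(1.0, 1.0, 1.0)):
    """Weighted sum over the three heads: CTR, CVR, traffic-quality score.
    Task weights here are hypothetical placeholders."""
    return sum(w * bce(p, y) for w, p, y in zip(weights, preds, labels))

# One impression: predicted (ctr, cvr, quality) vs. observed labels.
loss = joint_loss((0.1, 0.02, 0.8), (0, 0, 1))
```

In the paper's setup, this multi-objective loss would additionally be paired with the variational uncertainty terms; the sketch only shows the task-sharing structure.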
Gaoxiang Zhao
Department of Applied Mathematics, Harbin Institute of Technology (Weihai), Weihai, China
Ruina Qiu
JD.COM, Beijing, China
Pengpeng Zhao
JD.COM, Beijing, China
Rongjin Wang
JD.COM, Beijing, China
Zhangang Lin
JD.COM, Beijing, China
Xiaoqiang Wang
Florida State University