🤖 AI Summary
Traditional A/B testing for search ranking evaluation on online platforms suffers from low statistical power and long iteration cycles, while offline evaluation exhibits poor accuracy and limited effectiveness in candidate model selection. To address these challenges, this paper proposes a novel online evaluation framework that integrates interleaving with counterfactual evaluation. Leveraging real user interaction data and grounded in causal inference principles, the framework enables rapid, high-sensitivity, and low-overhead validation of ranking policies. Experiments demonstrate that it achieves up to 100× higher statistical power than conventional A/B testing, substantially reducing evaluation latency and improving the efficiency of identifying high-performing models. Deployed in Airbnb’s production environment, the framework has successfully driven multiple ranking iterations and delivered significant business impact.
📝 Abstract
Evaluation plays a crucial role in the development of ranking algorithms for search and recommender systems. It enables online platforms to build user-friendly features that drive commercial success steadily and effectively. The online environment is particularly conducive to causal inference techniques such as randomized controlled experiments (known as A/B tests), which are often harder to implement in fields like medicine and public policy. However, businesses face unique challenges in running A/B tests effectively. In particular, achieving sufficient statistical power for conversion-based metrics can be time-consuming, especially for significant purchases such as booking accommodations. While offline evaluations are quicker and more cost-effective, they often lack accuracy and are inadequate for selecting candidates for A/B tests. To address these challenges, we developed interleaving and counterfactual evaluation methods that enable rapid online assessments to identify the most promising candidates for A/B tests. Our approach not only increased experiment sensitivity by a factor of up to 100 (depending on the approach and metric) compared with traditional A/B testing, but also streamlined the experimental process. The practical insights gained from production use can also benefit organizations with similar interests.
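The abstract mentions interleaving without spelling out the mechanics. A widely used variant is team-draft interleaving: two rankers take alternating "draft picks" to build a single result list, and later clicks are credited to whichever ranker contributed the clicked slot. The sketch below is a minimal illustration of that general technique; the function names and click-credit scheme are assumptions for exposition, not the paper's exact production implementation.

```python
import random

def _next_unused(ranking, used):
    """Return the highest-ranked item not yet placed, or None if exhausted."""
    for item in ranking:
        if item not in used:
            return item
    return None

def team_draft_interleave(ranking_a, ranking_b, rng=random):
    """Merge two rankings into one list, recording which ranker
    'owns' each slot so clicks can be credited later."""
    interleaved, owners, used = [], [], set()
    picks = {"A": 0, "B": 0}
    while True:
        # the team with fewer picks goes next; ties broken by coin flip
        if picks["A"] < picks["B"]:
            turn = "A"
        elif picks["B"] < picks["A"]:
            turn = "B"
        else:
            turn = rng.choice(["A", "B"])
        ranking = ranking_a if turn == "A" else ranking_b
        item = _next_unused(ranking, used)
        if item is None:
            # this team is exhausted; let the other team pick, else stop
            turn = "B" if turn == "A" else "A"
            other = ranking_a if turn == "A" else ranking_b
            item = _next_unused(other, used)
            if item is None:
                break
        interleaved.append(item)
        owners.append(turn)
        used.add(item)
        picks[turn] += 1
    return interleaved, owners

def credit_clicks(owners, clicked_positions):
    """Per-impression preference signal: which team earned more clicks."""
    wins = {"A": 0, "B": 0}
    for pos in clicked_positions:
        wins[owners[pos]] += 1
    return wins
```

Because every impression compares both rankers head to head on the same user, each session yields a paired preference signal, which is where the large sensitivity gain over between-user A/B splits comes from.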
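For the counterfactual side, the standard building block is inverse propensity scoring (IPS): logged rewards are reweighted by the ratio of the candidate policy's action probability to the logging policy's, estimating how the candidate would have performed without serving it live. The sketch below shows the generic estimator under an assumed log format of (context, action, reward, logging propensity); it is an illustration of the technique, not Airbnb's specific estimator.

```python
def ips_estimate(logs, target_policy):
    """Inverse-propensity-scored estimate of the average reward a new
    (target) ranking policy would have earned on logged traffic.

    logs: iterable of (context, action, reward, p_log) tuples, where
          p_log is the logging policy's probability of the logged action.
    target_policy: callable (context, action) -> probability under the
          candidate policy being evaluated offline.
    """
    total = 0.0
    n = 0
    for context, action, reward, p_log in logs:
        p_new = target_policy(context, action)
        # reweight: actions the target favors count more, and vice versa
        total += reward * p_new / p_log
        n += 1
    return total / n
```

In practice such estimators are usually clipped or self-normalized to control the variance of large propensity ratios.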