Harnessing the Power of Interleaving and Counterfactual Evaluation for Airbnb Search Ranking

📅 2025-08-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional A/B testing for search ranking evaluation on online platforms suffers from low statistical power and long iteration cycles, while offline evaluation exhibits poor accuracy and limited effectiveness in candidate model selection. To address these challenges, this paper proposes a novel online evaluation framework that integrates interleaving with counterfactual evaluation. Leveraging real user interaction data and grounded in causal inference principles, the framework enables rapid, high-sensitivity, and low-overhead validation of ranking policies. Experiments demonstrate that it achieves up to 100× higher statistical power than conventional A/B testing, substantially reducing evaluation latency and improving the efficiency of identifying high-performing models. Deployed in Airbnb’s production environment, the framework has successfully driven multiple ranking iterations and delivered significant business impact.

📝 Abstract
Evaluation plays a crucial role in the development of ranking algorithms for search and recommender systems. It enables online platforms to build user-friendly features that drive commercial success in a steady and effective manner. The online environment is particularly conducive to applying causal inference techniques, such as randomized controlled experiments (known as A/B tests), which are often more challenging to implement in fields like medicine and public policy. However, businesses face unique challenges when it comes to effective A/B testing. Specifically, achieving sufficient statistical power for conversion-based metrics can be time-consuming, especially for significant purchases like booking accommodations. While offline evaluations are quicker and more cost-effective, they often lack accuracy and are inadequate for selecting candidates for A/B tests. To address these challenges, we developed interleaving and counterfactual evaluation methods to facilitate rapid online assessments that identify the most promising candidates for A/B tests. Our approach not only increased the sensitivity of experiments by a factor of up to 100 (depending on the approach and metrics) compared to traditional A/B testing but also streamlined the experimental process. The practical insights gained from production usage can also benefit organizations with similar interests.
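The abstract names interleaving as the core online-evaluation primitive but does not specify which variant the authors use. A common variant in the literature is team-draft interleaving: the two rankers alternately "draft" their top unplaced result into a single merged list, each slot is tagged with the model that contributed it, and clicks are then credited to the contributing model. The sketch below illustrates that general idea only; all function and variable names are hypothetical, not from the paper.

```python
import random

def team_draft_interleave(rank_a, rank_b, rng=None):
    """Team-draft interleaving (a common variant; not necessarily Airbnb's).

    Merges two rankings into one list shown to the user, tagging each slot
    with the model ("A" or "B") that contributed it so that clicks can later
    be attributed to a model.
    """
    rng = rng or random.Random()
    interleaved = []
    team = {}                      # item -> contributing model ("A" or "B")
    count_a = count_b = 0

    def remaining(source):
        # Highest-ranked item from `source` not yet placed in the merged list.
        return next((x for x in source if x not in team), None)

    while remaining(rank_a) is not None or remaining(rank_b) is not None:
        # The model with fewer picks drafts next; ties broken by coin flip.
        a_turn = count_a < count_b or (count_a == count_b and rng.random() < 0.5)
        item = remaining(rank_a) if a_turn else remaining(rank_b)
        if item is None:           # that list is exhausted; draft from the other
            a_turn = not a_turn
            item = remaining(rank_a) if a_turn else remaining(rank_b)
        interleaved.append(item)
        team[item] = "A" if a_turn else "B"
        if a_turn:
            count_a += 1
        else:
            count_b += 1
    return interleaved, team
```

In a live session, a click on an item credits its team; the model whose items attract more clicks across sessions is preferred. Because every user sees both rankers' results in the same session, this within-session comparison is far more sensitive than splitting traffic between two arms.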
Problem

Research questions and friction points this paper is trying to address.

Improving A/B test efficiency for search ranking algorithms
Enhancing sensitivity of online evaluation methods
Accelerating identification of promising ranking candidates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interleaving for rapid online evaluation
Counterfactual evaluation for enhanced sensitivity
Significant streamlining of the A/B testing process
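The summary does not detail the counterfactual estimator the authors deploy. A standard grounding for counterfactual evaluation of a candidate policy from logged interactions is inverse propensity scoring (IPS): reweight each logged reward by the ratio of the candidate policy's action probability to the logging policy's. The sketch below shows the generic estimator under that assumption; the function and argument names are illustrative, not from the paper.

```python
def ips_estimate(logs, target_policy):
    """Generic inverse-propensity-scored (IPS) value estimate for a new policy.

    logs: list of (context, action, reward, logging_prob) tuples collected
          under the production (logging) policy, where logging_prob is the
          probability the logging policy assigned to the logged action.
    target_policy: function(context, action) -> probability the candidate
          policy would choose `action` in `context`.
    """
    total = 0.0
    for context, action, reward, logging_prob in logs:
        # Importance weight corrects for the mismatch between the policy
        # that collected the data and the policy being evaluated.
        weight = target_policy(context, action) / logging_prob
        total += weight * reward
    return total / len(logs)
```

This lets a candidate ranker be scored on existing production logs, with no new traffic, before it earns an A/B slot; in practice, variance-control refinements (clipping, self-normalization) are usually layered on top of this plain estimator.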