🤖 AI Summary
Existing feature attribution methods for learning-to-rank (LTR) lack ranking-aware theoretical foundations, often yielding contradictory or counterintuitive results that undermine interpretability.
Method: This paper introduces the first game-theoretic, axiomatic framework for ranking, formally specifying ranking-specific axioms (including ranking consistency and efficiency) and deriving RankSHAP, the first axiomatic extension of Shapley values to ranking tasks.
Contribution/Results: RankSHAP is evaluated on MSLR-WEB30K and Istella with state-of-the-art LTR models (e.g., LambdaMART, DeepRank) and validated through a user study. Results show significant improvements in attribution consistency and alignment with human judgment. An axiomatic analysis further reveals that most existing attribution methods violate fundamental ranking axioms. This work establishes the first rigorous, axiom-based foundation for explainable LTR.
📝 Abstract
Numerous works propose post-hoc, model-agnostic explanations for learning to rank, focusing on ordering entities by their relevance to a query through feature attribution methods. However, these attributions often correlate weakly or contradict each other, confusing end users. We adopt an axiomatic game-theoretic approach, popular in the feature attribution community, to identify a set of fundamental axioms that every ranking-based feature attribution method should satisfy. We then introduce RankSHAP, extending classical Shapley values to ranking. We evaluate the RankSHAP framework through extensive experiments on two datasets, multiple ranking methods, and several evaluation metrics. Additionally, a user study confirms RankSHAP's alignment with human intuition. We also perform an axiomatic analysis of existing rank attribution algorithms to determine their compliance with our proposed axioms. Ultimately, we aim to equip practitioners with a set of axiomatically grounded feature attribution methods for studying IR ranking models that ensure both generality and consistency.
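To make the Shapley construction concrete, here is a minimal, illustrative sketch of exact Shapley value computation paired with a toy ranking-flavored coalition value function. The documents, weights, and the value function (score of the top-ranked document under a coalition of features) are invented for illustration only; the paper's actual ranking-specific value function and the RankSHAP estimator are defined in the paper itself.

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    # Exact Shapley values by enumerating all coalitions.
    # Feasible only for small n; practical methods sample coalitions.
    phi = [0.0] * n_features
    players = list(range(n_features))
    for i in players:
        others = [j for j in players if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Standard Shapley weight: |S|! (n - |S| - 1)! / n!
                w = (factorial(len(S)) * factorial(n_features - len(S) - 1)
                     / factorial(n_features))
                phi[i] += w * (value_fn(set(S) | {i}) - value_fn(set(S)))
    return phi

# Toy setup (hypothetical numbers): three documents as feature vectors
# and a linear scorer. A coalition's value is the score of the
# top-ranked document when only the coalition's features are active —
# a simplistic stand-in for a ranking-aware value function.
docs = [[0.9, 0.1, 0.0], [0.2, 0.8, 0.3], [0.4, 0.4, 0.4]]
weights = [1.0, 0.5, 0.25]

def coalition_value(S):
    scores = [sum(weights[j] * d[j] for j in S) for d in docs]
    return max(scores)  # score of the document ranked first

phi = shapley_values(coalition_value, 3)
```

By the efficiency axiom, the attributions `phi` sum to the value of the full feature set minus the value of the empty set, which is one of the properties the paper's axiomatic analysis checks for existing attribution methods.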