🤖 AI Summary
This paper addresses an explainability challenge in information retrieval: “Why was a given document excluded from the top-K ranked results?” We introduce, for the first time, a word-level counterfactual explanation task: identifying critical missing terms whose insertion would substantially improve a document’s ranking score. Our method couples gradient-guided term-importance estimation with controllable perturbation optimization, and is compatible with both classical lexical models (e.g., BM25) and neural retrievers (e.g., DRMM, DSSM, ColBERT, MonoT5). Evaluation across multiple benchmark datasets shows that the generated counterfactual suggestions are more relevant and actionable than those of existing relevance-explanation paradigms, while the approach remains model-agnostic. All code is publicly available.
📝 Abstract
Explainability has become a crucial concern in today's world, aiming to enhance transparency in machine learning and deep learning models. Information retrieval is no exception to this trend. In the existing literature on explainability in information retrieval, the emphasis has predominantly been on illustrating the concept of relevance with respect to a retrieval model. The questions addressed include why a document is relevant to a query, why one document exhibits higher relevance than another, or why a specific set of documents is deemed relevant for a query. However, limited attention has been given to understanding why a particular document is not favored (e.g., not within the top-K) with respect to a query and a retrieval model. To address this gap, our work focuses on the question of which terms need to be added to a document to improve its ranking. This in turn answers the question of which absent words caused the document not to be favored by a retrieval model for a particular query. We use a counterfactual framework to solve this research problem. To the best of our knowledge, ours is the first attempt to tackle this specific counterfactual problem (i.e., examining the absence of which words can affect the ranking of a document). Our experiments show the effectiveness of our proposed approach in predicting counterfactuals for both statistical models (e.g., BM25) and deep-learning-based models (e.g., DRMM, DSSM, ColBERT, MonoT5). The code implementation of our proposed approach is available at https://anonymous.4open.science/r/CfIR-v2.
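To make the counterfactual question concrete, the sketch below shows a greedy term-insertion baseline for a BM25-style ranker: given a query and a document, it picks the absent query terms whose insertion most increases the document's score. This is only an illustration of the task, not the paper's proposed approach (which uses a gradient-guided counterfactual framework); the `bm25_score` and `counterfactual_terms` functions, parameter defaults, and toy data are all illustrative assumptions.

```python
import math
from collections import Counter

def bm25_score(query, doc, corpus, k1=1.5, b=0.75):
    """Okapi BM25 score of a tokenized `doc` for a tokenized `query`,
    with document-frequency statistics taken from `corpus`."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    tf = Counter(doc)
    score = 0.0
    for term in query:
        df = sum(1 for d in corpus if term in d)
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)
        denom = tf[term] + k1 * (1 - b + b * len(doc) / avgdl)
        if denom > 0:
            score += idf * tf[term] * (k1 + 1) / denom
    return score

def counterfactual_terms(query, doc, corpus, budget=2):
    """Greedily select up to `budget` query terms absent from `doc`
    whose insertion yields the largest BM25 score gain."""
    added, current = [], list(doc)
    for _ in range(budget):
        base = bm25_score(query, current, corpus)
        gains = {t: bm25_score(query, current + [t], corpus) - base
                 for t in set(query) if t not in current}
        if not gains:
            break
        best = max(gains, key=gains.get)
        added.append(best)
        current.append(best)
    return added

# Toy example: the document lacks the query term "counterfactual".
corpus = [["neural", "ranking", "models"],
          ["counterfactual", "explanations"],
          ["bm25", "ranking"]]
query = ["counterfactual", "ranking"]
doc = ["neural", "ranking", "models"]
print(counterfactual_terms(query, doc, corpus))
```

For neural rankers such as ColBERT or MonoT5, exhaustive greedy search over the vocabulary is infeasible, which is why gradient-based term-importance signals (as in the paper) are needed to narrow the candidate set.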