🤖 AI Summary
Existing Explainable Search approaches are hindered by their reliance on either opaque black-box models or domain-specific expert knowledge, which limits their generality and applicability.
Method: We propose a knowledge-agnostic, search-process-driven explanation paradigm grounded in Monte Carlo Tree Search (MCTS). For the first time, we systematically analyze how core MCTS components—including PUCT variants, rollout policy optimization, and visit-count attribution—contribute intrinsically to interpretability, formalizing two explanation types: *path saliency* and *policy divergence*.
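The *path saliency* notion above can be illustrated with a minimal sketch. This is a hypothetical construction, not the paper's exact formalization: we assume a saliency score for each step is simply that move's share of its siblings' visit counts, read directly off the MCTS execution trace with no model internals or domain priors.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """An MCTS tree node recording only search statistics."""
    move: str
    visits: int
    children: list = field(default_factory=list)

def path_saliency(root):
    """Follow the most-visited child at each depth and report each
    step's share of its siblings' visits as a saliency score."""
    explanation = []
    node = root
    while node.children:
        total = sum(c.visits for c in node.children)
        best = max(node.children, key=lambda c: c.visits)
        explanation.append((best.move, best.visits / total))
        node = best
    return explanation

# Toy tree: the search spent 80 of 100 simulations under move "a3",
# and 60 of those 80 under the reply "b7".
root = Node("root", 100, [
    Node("a3", 80, [Node("b7", 60), Node("b2", 20)]),
    Node("c5", 20),
])
print(path_saliency(root))  # [('a3', 0.8), ('b7', 0.75)]
```

Because the explanation is computed purely from visit counts, the same routine applies to any domain the search is run on.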
Contribution/Results: Through a proof-of-concept system evaluated across multiple benchmark tasks, we demonstrate that high-quality, diverse explanations can be generated solely from MCTS execution traces—without accessing model internals or any domain priors. This work breaks the conventional dependence of eXplainable AI (XAI) on model transparency or human expertise, establishing a foundation for universal, process-based explainable search.
📝 Abstract
Research on Explainable Artificial Intelligence (XAI) typically focuses on black-box models that encode a general policy for a known, specific domain. This paper advocates for knowledge-agnostic explainability in the subfield of XAI called Explainable Search, which focuses on explaining the choices made by intelligent search techniques. It proposes Monte-Carlo Tree Search (MCTS) enhancements as a means of obtaining additional data and producing higher-quality explanations while remaining knowledge-free, and analyzes the most popular enhancements in terms of the specific types of explainability they introduce. To our knowledge, no prior research has considered the explainability of MCTS enhancements. We present a proof-of-concept that demonstrates the advantages of utilizing these enhancements.
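The knowledge-free character of MCTS that the abstract relies on can be seen in its selection rule. As a sketch (plain UCB1, not any specific enhancement from the paper), note that every quantity involved is a search statistic, so any explanation derived from these values needs no domain expertise:

```python
import math

def ucb1(parent_visits, child_visits, child_value_sum, c=1.4):
    """UCB1 score used by plain MCTS selection: mean reward
    (exploitation) plus an exploration bonus. It consumes only
    visit counts and accumulated values -- no domain knowledge."""
    if child_visits == 0:
        return float("inf")  # unvisited children are tried first
    mean = child_value_sum / child_visits
    explore = c * math.sqrt(math.log(parent_visits) / child_visits)
    return mean + explore

# Two children after 100 parent visits: a well-explored strong move
# vs. a rarely tried weaker one; the bonus keeps the latter competitive.
scores = {
    "a": ucb1(100, 90, 54.0),  # mean 0.6, small exploration bonus
    "b": ucb1(100, 10, 4.0),   # mean 0.4, large exploration bonus
}
```

Logging these per-child scores during search is one way such a system could expose why a move was explored, without consulting the underlying model or a human expert.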