🤖 AI Summary
Current interpretability research lacks an operational definition, leaving it with weak theoretical foundations and little general guidance for model design. Method: We propose the first unified, formal, and actionable definition of interpretability, systematically characterizing its essential properties, core assumptions, design principles, and architectural constraints; building on this, we develop a generic blueprint for constructing interpretable models and introduce novel, interpretability-native data structures and computational workflows. Contribution/Results: We implement these advances in XAI-Struct, an open-source software library. This work establishes the first systematic theoretical framework for explainable AI (XAI), closing the loop from definition to theory to tooling. It enables rigorous, standardized, and engineering-oriented development of interpretable models, advancing the field toward scientific maturity and practical deployability.
📝 Abstract
We argue that existing definitions of interpretability are not actionable: they fail to inform users about general, sound, and robust interpretable model design, which makes current interpretability research fundamentally ill-posed. To address this, we propose a definition of interpretability that is general, simple, and subsumes existing informal notions within the interpretable-AI community. We show that our definition is actionable in that it directly reveals the foundational properties, underlying assumptions, principles, data structures, and architectural features necessary for designing interpretable models. Building on this, we propose a general blueprint for designing interpretable models and introduce the first open-source library with native support for interpretable data structures and processes.