Foundations of Interpretable Models

📅 2025-08-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current interpretability research lacks an operational definition of interpretability, leaving it with weak theoretical foundations and little general guidance for model design. Method: We propose the first unified, formal, and actionable definition of interpretability, systematically characterizing its essential properties, core assumptions, design principles, and architectural constraints; on this basis, we develop a generic blueprint for constructing interpretable models and introduce novel, interpretability-native data structures and computational workflows. Contribution/Results: We implement these advances in XAI-Struct, an open-source software library. This work establishes the first systematic theoretical framework for explainable AI (XAI), closing the loop from definition to theory to tooling. It enables rigorous, standardized, and engineering-oriented development of interpretable models, moving the field toward scientific maturity and practical deployability.

📝 Abstract
We argue that existing definitions of interpretability are not actionable in that they fail to inform users about general, sound, and robust interpretable model design. This makes current interpretability research fundamentally ill-posed. To address this issue, we propose a definition of interpretability that is general, simple, and subsumes existing informal notions within the interpretable AI community. We show that our definition is actionable, as it directly reveals the foundational properties, underlying assumptions, principles, data structures, and architectural features necessary for designing interpretable models. Building on this, we propose a general blueprint for designing interpretable models and introduce the first open-sourced library with native support for interpretable data structures and processes.
Problem

Research questions and friction points this paper is trying to address.

Lack of actionable definitions for interpretable model design
Current interpretability research is fundamentally ill-posed
Need foundational properties for designing interpretable models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes a new interpretability definition
Introduces interpretable model blueprint
Develops open-sourced interpretability library
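The paper's library is described as having "native support for interpretable data structures and processes." The following is a minimal hypothetical sketch of what such a structure could look like: a value that carries a trace of the operations that produced it, so any output can report its own derivation. All names here (`TracedValue`, `apply`, `explain`) are illustrative assumptions, not the actual XAI-Struct API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an "interpretability-native" data structure:
# a value that records every transformation applied to it, so the final
# output can explain how it was computed. Not the paper's actual API.

@dataclass
class TracedValue:
    value: float
    trace: list = field(default_factory=list)  # human-readable derivation steps

    def apply(self, name: str, fn):
        """Apply a named transformation and append it to the trace."""
        out = fn(self.value)
        step = f"{name}({self.value}) = {out}"
        return TracedValue(out, self.trace + [step])

    def explain(self) -> str:
        """Return the full derivation as a readable chain."""
        return " -> ".join(self.trace) if self.trace else "input"

# Example: a two-step "model" whose prediction carries its own explanation.
x = TracedValue(3.0)
y = x.apply("scale", lambda v: 2 * v).apply("shift", lambda v: v + 1)
print(y.value)      # 7.0
print(y.explain())  # scale(3.0) = 6.0 -> shift(6.0) = 7.0
```

The design choice being illustrated: instead of explaining a model post hoc, the explanation is built into the data structure itself, which is one plausible reading of "interpretability-native."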