🤖 AI Summary
Problem: Formal verification of interpretability in deep learning models suffers from high computational complexity and lacks rigorous theoretical guarantees.
Method: This paper proposes a novel modeling paradigm that couples *inference equivariance* with the *Markov property*. It introduces inference equivariance, a notion previously unexplored in interpretability modeling; formally defines *Markov interpretability*; and implements efficient, scalable symbolic-neural co-execution via neural reparameterization.
Contributions: (1) Breaks the traditional exponential verification bottleneck, enabling scalable and formally verifiable causal-level explanations; (2) Constructs the first neural interpretable reasoner that simultaneously achieves high expressive power and formal transparency; (3) Establishes a new “neural generation + interpretable execution” paradigm, providing both theoretical foundations and a concrete system implementation pathway for trustworthy AI.
📝 Abstract
We formalize a novel modeling framework for achieving interpretability in deep learning, anchored in the principle of inference equivariance. While direct verification of interpretability scales exponentially with the number of variables in the system, we show that this complexity can be mitigated by treating interpretability as a Markovian property and employing neural reparameterization techniques. Building on these insights, we propose a new modeling paradigm -- neural generation and interpretable execution -- that enables scalable verification of equivariance. This paradigm provides a general approach for designing Neural Interpretable Reasoners that are not only expressive but also transparent.
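The complexity claim above can be illustrated with a toy example (a minimal sketch, not from the paper; `local_rule`, `infer`, and `shift` are hypothetical names). For a reasoner whose output at each position depends only on a fixed-size local window (a Markov-style locality assumption), shift-equivariance of inference holds whenever the same local rule is applied uniformly, so verification reduces from checking all 2^n global inputs to inspecting the 2^3 local cases:

```python
from itertools import product

def local_rule(left, center, right):
    # Toy local factor: majority vote over a 3-variable window.
    return int(left + center + right >= 2)

def infer(x):
    # Markov-style inference: output i depends only on a local window
    # around position i (circular boundary for simplicity).
    n = len(x)
    return [local_rule(x[(i - 1) % n], x[i], x[(i + 1) % n]) for i in range(n)]

def shift(x, k=1):
    # Cyclic shift of a list by k positions.
    return x[-k:] + x[:-k]

def naive_equivariance_check(n):
    # Direct verification: test infer(shift(x)) == shift(infer(x))
    # over all 2^n inputs -- exponential in n.
    return all(infer(shift(list(x))) == shift(infer(list(x)))
               for x in product([0, 1], repeat=n))

# Naive check: 2^8 = 256 full inference passes.
assert naive_equivariance_check(8)

# Under the locality assumption, equivariance holds by construction:
# both sides reduce to local_rule applied to the same shifted window,
# so only the 2^3 = 8 local configurations ever need inspection.
local_cases = list(product([0, 1], repeat=3))
assert len(local_cases) == 8
```

The point of the sketch is the gap between the two checks: the naive route enumerates exponentially many global states, while the Markov structure localizes the obligation to a constant-size window, which is the kind of reduction the abstract attributes to treating interpretability as a Markovian property.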