A Robust Prototype-Based Network with Interpretable RBF Classifier Foundations

📅 2024-12-20
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing deep prototype-based networks (e.g., CBC) achieve strong performance but suffer from interpretability contradictions, insufficient robustness, and a theoretical disconnect between shallow and deep models. Method: This paper proposes a unified robust and interpretable prototype classification framework. First, it establishes a rigorous equivalence between deep prototype networks and radial basis function (RBF) classifiers, bridging the theoretical gap. Second, it introduces the first prototype learning mechanism with provable ℓ₂-robustness guarantees. Third, it jointly employs probabilistic modeling and robust optimization to eliminate interpretability conflicts. Contributions/Results: The deep variant achieves state-of-the-art accuracy across multiple benchmarks; the shallow variant retains full interpretability while significantly outperforming comparable methods and providing certified ℓ₂-robustness, thereby unifying performance, interpretability, and robustness within a single principled framework.

๐Ÿ“ Abstract
Prototype-based classification learning methods are known to be inherently interpretable. However, this paradigm suffers from major limitations compared to deep models, such as lower performance. This led to the development of the so-called deep Prototype-Based Networks (PBNs), also known as prototypical parts models. In this work, we analyze these models with respect to different properties, including interpretability. In particular, we focus on the Classification-by-Components (CBC) approach, which uses a probabilistic model to ensure interpretability and can be used as a shallow or deep architecture. We show that this model has several shortcomings, like creating contradicting explanations. Based on these findings, we propose an extension of CBC that solves these issues. Moreover, we prove that this extension has robustness guarantees and derive a loss that optimizes robustness. Additionally, our analysis shows that most (deep) PBNs are related to (deep) RBF classifiers, which implies that our robustness guarantees generalize to shallow RBF classifiers. The empirical evaluation demonstrates that our deep PBN yields state-of-the-art classification accuracy on different benchmarks while resolving the interpretability shortcomings of other approaches. Further, our shallow PBN variant outperforms other shallow PBNs while being inherently interpretable and exhibiting provable robustness guarantees.
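To make the prototype/RBF connection in the abstract concrete, the following is a minimal sketch of a shallow prototype-based classifier scored with Gaussian (RBF) similarities. It is a generic illustration of the paradigm, not the paper's CBC model; the prototypes, labels, and `gamma` value are hypothetical.

```python
import numpy as np

def rbf_scores(x, prototypes, labels, n_classes, gamma=1.0):
    """Score each class by the summed RBF similarity of x to that class's prototypes."""
    dists = np.linalg.norm(prototypes - x, axis=1)  # ℓ₂ distances to all prototypes
    sims = np.exp(-gamma * dists ** 2)              # Gaussian RBF similarities
    scores = np.zeros(n_classes)
    for c in range(n_classes):
        scores[c] = sims[labels == c].sum()         # aggregate per class
    return scores

# Two hypothetical prototypes per class in 2-D
prototypes = np.array([[0.0, 0.0], [0.2, 0.1],   # class 0
                       [3.0, 3.0], [2.8, 3.2]])  # class 1
labels = np.array([0, 0, 1, 1])

x = np.array([0.1, 0.0])
pred = int(np.argmax(rbf_scores(x, prototypes, labels, n_classes=2)))
print(pred)  # → 0: a point near the class-0 prototypes is assigned class 0
```

Because the decision depends only on distances to learned prototypes, the explanation for a prediction is simply "which prototypes the input resembles"; this is the interpretability property the paper analyzes and extends.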
Problem

Research questions and friction points this paper is trying to address.

Deep Prototype-Based Networks
Interpretability Issues
Model Robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Improved Classification-by-Components
Deep Prototype-Based Networks
Enhanced Robustness and Interpretability
🔎 Similar Papers
No similar papers found.
S. Saralajew
NEC Laboratories Europe, Germany
Ashish Rana
NEC Laboratories Europe, Germany
Thomas Villmann
University of Applied Sciences Mittweida, Germany
Ammar Shaker
NEC Laboratories Europe GmbH
Artificial Intelligence · Data Mining & Machine Learning