🤖 AI Summary
This work addresses the limitations of prototype-based models in large-scale image classification—namely, their weak generalization, reliance on costly fine-tuning, and susceptibility to prototype drift—by introducing vector quantization into the latent space. The proposed approach employs a discrete, learnable codebook to constrain prototype representations, enabling stable, data-anchored, and interpretable prototype modeling without requiring fine-tuning of the backbone network. Evaluated on benchmark datasets including ImageNet, CUB-200, and Cars-196, the method achieves competitive classification accuracy while significantly enhancing model interpretability and prototype consistency.
📝 Abstract
Prototypical parts-based models offer a "this looks like that" paradigm for intrinsic interpretability, yet they typically struggle with ImageNet-scale generalization and often require computationally expensive backbone fine-tuning. Furthermore, existing methods frequently suffer from "prototype drift," where learned prototypes lack tangible grounding in the training distribution and change their activations under small perturbations. We present ProtoQuant, a novel architecture that achieves prototype stability and grounded interpretability through latent vector quantization. By constraining prototypes to a discrete learned codebook within the latent space, we ensure they remain faithful representations of the training data without the need to update the backbone. This design allows ProtoQuant to function as an efficient, interpretable head that scales to large datasets. We evaluate ProtoQuant on ImageNet and several fine-grained benchmarks (CUB-200, Cars-196). Our results demonstrate that ProtoQuant achieves competitive classification accuracy, generalizes to ImageNet, and attains interpretability metrics comparable to other prototypical-parts-based methods.
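The core idea, constraining each prototype to a discrete learned codebook entry in latent space, can be sketched as a nearest-neighbor lookup. This is a minimal illustration of the quantization step only, not the authors' implementation; the function and variable names (`quantize`, `codebook`) are hypothetical, and the straight-through gradient trick typically used to train such codebooks is omitted.

```python
import numpy as np

def quantize(prototypes: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Snap each prototype to its nearest codebook vector (L2 distance).

    prototypes: (P, D) candidate prototype vectors in latent space
    codebook:   (K, D) discrete learned codes
    returns:    (P, D) quantized prototypes, each an exact codebook row
    """
    # Pairwise squared distances between prototypes and codes: shape (P, K)
    dists = ((prototypes[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    nearest = dists.argmin(axis=1)  # index of the closest code per prototype
    return codebook[nearest]        # quantized, data-anchored prototypes

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 8))    # K=16 codes of dimension 8 (toy sizes)
prototypes = rng.normal(size=(4, 8))   # 4 candidate prototypes
q = quantize(prototypes, codebook)
```

Because every quantized prototype is literally a codebook row, each one stays anchored to a fixed discrete representation rather than drifting freely in latent space, which is what makes the prototypes stable under small input perturbations.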