🤖 AI Summary
Existing binary Vision Transformer (ViT) methods suffer from significant performance degradation or rely on full-precision components. This paper proposes DIDB-ViT, a high-fidelity binarization framework that preserves the original ViT architecture and computational efficiency. The approach introduces three key innovations: (1) a differentiable, information-driven attention mechanism that mitigates the information loss caused by binarization; (2) discrete Haar wavelet frequency decomposition that preserves the fidelity of query-key similarity computation; and (3) an improved RPReLU activation function that strengthens binary feature representation. Evaluated across multiple ViT architectures, DIDB-ViT consistently outperforms state-of-the-art binary and quantized ViT methods, improving Top-1 accuracy on ImageNet classification by 3.2% and mIoU on ADE20K semantic segmentation by 2.8%. The method enables efficient edge deployment without architectural modification or full-precision fallbacks.
📝 Abstract
The binarization of vision transformers (ViTs) offers a promising way to reconcile their high computational and storage demands with the constraints of edge-device deployment. However, existing binary ViT methods often suffer from severe performance degradation or rely heavily on full-precision modules. To address these issues, we propose DIDB-ViT, a novel binary ViT that is highly informative while maintaining the original ViT architecture and computational efficiency. Specifically, we design an informative attention module incorporating differential information to mitigate the information loss caused by binarization and enhance high-frequency retention. To preserve the fidelity of the similarity calculations between binary Q and K tensors, we apply frequency decomposition using the discrete Haar wavelet and integrate similarities across different frequencies. Additionally, we introduce an improved RPReLU activation function to restructure the activation distribution, expanding the model's representational capacity. Experimental results demonstrate that our DIDB-ViT significantly outperforms state-of-the-art network quantization methods across multiple ViT architectures, achieving superior image classification and segmentation performance.
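The abstract names two mechanisms that can be sketched concretely: Haar frequency decomposition of the binary Q/K similarity, and an RPReLU-style activation. The NumPy sketch below illustrates the general shape of both ideas; the sign binarizer, the band weights `w_low`/`w_high`, and all function names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def binarize(x):
    # Sign binarization to {-1, +1}; a generic stand-in for the
    # paper's binarizer, which is not specified in the abstract.
    return np.where(x >= 0, 1.0, -1.0)

def haar_decompose(x):
    # One level of the discrete Haar transform along the last axis:
    # low band = scaled pairwise sums, high band = scaled pairwise differences.
    even, odd = x[..., 0::2], x[..., 1::2]
    low = (even + odd) / np.sqrt(2.0)
    high = (even - odd) / np.sqrt(2.0)
    return low, high

def frequency_aware_similarity(Q, K, w_low=1.0, w_high=1.0):
    # Binarize Q and K, split each into low-/high-frequency Haar bands,
    # and integrate the per-band similarity matrices. The band weights
    # are hypothetical; with both set to 1, the orthonormality of the
    # Haar transform makes the sum recover binarize(Q) @ binarize(K).T.
    Qb, Kb = binarize(Q), binarize(K)
    q_low, q_high = haar_decompose(Qb)
    k_low, k_high = haar_decompose(Kb)
    return w_low * (q_low @ k_low.T) + w_high * (q_high @ k_high.T)

def rprelu(x, gamma, zeta, beta):
    # RPReLU as introduced in ReActNet: a learnable input shift (gamma),
    # output shift (zeta), and negative-branch slope (beta). The paper's
    # "improved" variant presumably refines this basic form.
    shifted = x - gamma
    return np.where(shifted > 0, shifted, beta * shifted) + zeta
```

Because the Haar transform is orthonormal, summing the per-band similarities with unit weights reproduces the plain binary similarity exactly; the value of the decomposition comes from treating the bands differently when integrating them.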