🤖 AI Summary
This work addresses the fundamental challenge of unifying linguistic and visual feature representations within spiking neural networks (SNNs) for multimodal understanding. To this end, we propose SpikeCLIP, the first SNN-based framework specifically designed for this setting. Methodologically, we introduce a novel two-stage paradigm of alignment pre-training followed by dual-loss fine-tuning, and design a spike-domain cross-modal contrastive learning architecture in which vision and text encoders are jointly optimized with a spike-based cross-modal alignment module. This enables, for the first time, effective cross-modal co-representation learning and zero-shot generalization in SNNs. Experiments demonstrate that our framework matches state-of-the-art artificial neural networks (ANNs) on major multimodal benchmarks while reducing energy consumption by over 60%. It also remains robust in image classification and supports open-vocabulary recognition.
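Neither the summary nor the abstract includes reference code, so the following is only a minimal illustrative sketch of what the stage-one spike-domain contrastive alignment could look like in PyTorch. The `LIFNeuron` and `SpikeEncoder` classes, the rate coding over `T=4` time steps, and the sigmoid surrogate gradient are all assumptions made for illustration, not the authors' implementation; only the CLIP-style symmetric InfoNCE objective is standard.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LIFNeuron(nn.Module):
    """Simplified leaky integrate-and-fire unit with a straight-through
    surrogate gradient for the non-differentiable spike threshold."""
    def __init__(self, tau=2.0, v_th=1.0, alpha=4.0):
        super().__init__()
        self.tau, self.v_th, self.alpha = tau, v_th, alpha

    def forward(self, current, v):
        v = v + (current - v) / self.tau               # leaky integration
        hard = (v >= self.v_th).float()                # binary spikes (forward)
        soft = torch.sigmoid(self.alpha * (v - self.v_th))
        spikes = hard.detach() + soft - soft.detach()  # surrogate grad (backward)
        v = v * (1.0 - spikes)                         # reset membrane on spike
        return spikes, v

class SpikeEncoder(nn.Module):
    """Toy spiking encoder: drives an LIF layer with the same input
    current for T steps and reads out the mean firing rate."""
    def __init__(self, in_dim, embed_dim, T=4):
        super().__init__()
        self.fc = nn.Linear(in_dim, embed_dim)
        self.lif = LIFNeuron()
        self.T = T

    def forward(self, x):
        current = self.fc(x)
        v = torch.zeros_like(current)
        rate = torch.zeros_like(current)
        for _ in range(self.T):                        # unroll over time steps
            spikes, v = self.lif(current, v)
            rate = rate + spikes
        return rate / self.T                           # rate-coded embedding

def clip_style_alignment_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE over matched image/text pairs (CLIP-style)."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(logits.size(0))
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage: align 8 random image/text feature pairs.
vision_enc = SpikeEncoder(in_dim=512, embed_dim=128)
text_enc = SpikeEncoder(in_dim=300, embed_dim=128)
loss = clip_style_alignment_loss(vision_enc(torch.rand(8, 512)),
                                 text_enc(torch.rand(8, 300)))
loss.backward()  # gradients flow through the sigmoid surrogate
```

The straight-through trick keeps binary spikes in the forward pass while routing gradients through a smooth sigmoid in the backward pass, which is the usual way contrastive objectives are backpropagated through spiking layers.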
📝 Abstract
Spiking Neural Networks (SNNs) have emerged as a promising alternative to conventional Artificial Neural Networks (ANNs), demonstrating comparable performance on both visual and linguistic tasks while offering improved energy efficiency. Despite these advances, integrating linguistic and visual features into a unified representation via spike trains remains a significant challenge, and the application of SNNs to multimodal scenarios is largely unexplored. This paper presents SpikeCLIP, a novel framework designed to bridge the modality gap in spike-based computation. Our approach follows a two-step recipe: an "alignment pre-training" stage that aligns features across modalities, followed by a "dual-loss fine-tuning" stage that refines the model for downstream tasks. Extensive experiments reveal that SpikeCLIP achieves results on par with its ANN counterparts while substantially reducing energy consumption across datasets commonly used for multimodal model evaluation. Furthermore, SpikeCLIP maintains robust image classification even for classes outside its predefined label set. This study marks a significant step toward energy-efficient and biologically plausible multimodal learning systems.
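The abstract does not spell out which two losses make up the "dual-loss fine-tuning". One plausible reading, sketched below purely as an assumption, combines a downstream task loss with the stage-one alignment loss so that fine-tuning does not erase the learned image-text alignment; the `dual_loss` function, the weight `lam`, and the additive weighting scheme are all hypothetical.

```python
import torch
import torch.nn.functional as F

def alignment_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE over matched pairs (same form as stage one)."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(logits.size(0))
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def dual_loss(task_logits, labels, img_emb, txt_emb, lam=0.5):
    """Hypothetical dual objective: downstream cross-entropy plus a
    weighted alignment term, so task fine-tuning preserves the
    cross-modal alignment learned during pre-training."""
    return F.cross_entropy(task_logits, labels) + \
           lam * alignment_loss(img_emb, txt_emb)

# Toy usage with random tensors (batch of 8, 10 classes, 128-d embeddings).
loss = dual_loss(torch.randn(8, 10), torch.randint(0, 10, (8,)),
                 torch.randn(8, 128), torch.randn(8, 128))
```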