SpikeCLIP: A Contrastive Language-Image Pretrained Spiking Neural Network

📅 2023-10-10
🏛️ Neural Networks
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the fundamental challenge of unifying linguistic and visual feature representations within spiking neural networks (SNNs). To this end, we propose the first SNN-based framework specifically designed for multimodal understanding. Methodologically, we introduce a novel two-stage paradigm, "alignment pretraining followed by dual-loss fine-tuning", and design a spike-domain cross-modal contrastive learning architecture comprising vision and text encoders jointly optimized with a spike-based cross-modal alignment module. This enables, for the first time, effective cross-modal co-representation learning and zero-shot generalization in SNNs. Experiments demonstrate that our framework achieves performance on par with state-of-the-art artificial neural networks (ANNs) across major multimodal benchmarks, while reducing energy consumption by over 60%. Moreover, it maintains strong robustness in image classification and supports open-vocabulary recognition.
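The alignment-pretraining stage described above is a CLIP-style symmetric contrastive objective applied to spike-based representations. The sketch below is illustrative only, not the authors' implementation: `rate_code` and `clip_loss` are hypothetical names, and spike trains are collapsed to rate-coded embeddings via a simple time average, one common (assumed) readout for SNN outputs.

```python
import numpy as np

def rate_code(spike_trains):
    """Collapse binary spike trains (batch, T, d) to rate-coded
    embeddings (batch, d) by averaging over the time dimension."""
    return spike_trains.mean(axis=1)

def clip_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss: matched image/text pairs (the diagonal
    of the similarity matrix) are pulled together, mismatched pairs
    pushed apart."""
    # L2-normalize so the dot product is cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # (batch, batch)
    diag = np.arange(len(logits))

    def xent(l):
        # cross-entropy with the matched pair as the target class
        l = l - l.max(axis=1, keepdims=True)    # numerical stability
        log_p = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_p[diag, diag].mean()

    # average the image->text and text->image directions
    return 0.5 * (xent(logits) + xent(logits.T))

# Toy example: 4 pairs, 8-dim embeddings from 20-step spike trains
rng = np.random.default_rng(0)
img_spikes = rng.integers(0, 2, size=(4, 20, 8)).astype(float)
txt_spikes = rng.integers(0, 2, size=(4, 20, 8)).astype(float)
loss = clip_loss(rate_code(img_spikes), rate_code(txt_spikes))
print(loss)
```

In a full pipeline the two encoders would be trained so that this loss decreases, i.e. each image's spike embedding moves toward its paired caption's embedding.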
📝 Abstract
Spiking Neural Networks (SNNs) have emerged as a promising alternative to conventional Artificial Neural Networks (ANNs), demonstrating comparable performance in both visual and linguistic tasks while offering the advantage of improved energy efficiency. Despite these advancements, the integration of linguistic and visual features into a unified representation through spike trains poses a significant challenge, and the application of SNNs to multimodal scenarios remains largely unexplored. This paper presents SpikeCLIP, a novel framework designed to bridge the modality gap in spike-based computation. Our approach employs a two-step recipe: an "alignment pre-training" to align features across modalities, followed by a "dual-loss fine-tuning" to refine the model's performance. Extensive experiments reveal that SNNs achieve results on par with ANNs while substantially reducing energy consumption across various datasets commonly used for multimodal model evaluation. Furthermore, SpikeCLIP maintains robust image classification capabilities, even when dealing with classes that fall outside predefined categories. This study marks a significant advancement in the development of energy-efficient and biologically plausible multimodal learning systems.
Problem

Research questions and friction points this paper is trying to address.

Bridging modality gap in spike-based computation
Integrating linguistic and visual features via SNNs
Achieving energy-efficient multimodal learning with SNNs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Aligns visual and linguistic features via spike trains
Uses dual-loss fine-tuning to enhance performance
Achieves ANN-level accuracy with lower energy use
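The "dual-loss fine-tuning" named in the bullets above can be read as a weighted combination of the contrastive alignment term and a supervised task loss. The following is a minimal sketch under that assumption; `dual_loss`, the weighting parameter `lam`, and the fixed temperature are hypothetical choices, not details confirmed by the paper.

```python
import numpy as np

def softmax_xent(logits, labels):
    """Standard cross-entropy over class logits."""
    z = logits - logits.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(labels)), labels].mean()

def dual_loss(img_emb, txt_emb, class_logits, class_labels, lam=0.5):
    """Weighted sum of a contrastive alignment term and a supervised
    classification term; lam balances the two objectives."""
    # cosine-similarity logits between image/text embeddings
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    sim = img @ txt.T / 0.07                     # assumed temperature
    align = softmax_xent(sim, np.arange(len(sim)))  # matched pairs on diagonal
    task = softmax_xent(class_logits, class_labels)
    return lam * align + (1 - lam) * task

# Toy example: 4 samples, 8-dim embeddings, 10 classes
rng = np.random.default_rng(1)
img_emb = rng.normal(size=(4, 8))
txt_emb = rng.normal(size=(4, 8))
class_logits = rng.normal(size=(4, 10))
class_labels = rng.integers(0, 10, size=4)
total = dual_loss(img_emb, txt_emb, class_logits, class_labels)
print(total)
```

Setting `lam=1.0` recovers pure alignment training and `lam=0.0` recovers pure supervised fine-tuning, which makes the trade-off between cross-modal consistency and task accuracy explicit.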
Tianlong Li
School of Computer Science, Fudan University, Shanghai 200433, China, Shanghai Key Laboratory of Intelligent Information Processing
Wenhao Liu
School of Computer Science, Fudan University, Shanghai 200433, China, Shanghai Key Laboratory of Intelligent Information Processing
Changze Lv
School of Computer Science, Fudan University, Shanghai 200433, China, Shanghai Key Laboratory of Intelligent Information Processing
Jianhan Xu
Fudan University
Natural Language Processing
Cenyuan Zhang
School of Computer Science, Fudan University, Shanghai 200433, China, Shanghai Key Laboratory of Intelligent Information Processing
Muling Wu
Fudan University
Xiaoqing Zheng
Fudan University
Natural Language Processing and Machine Learning
Xuanjing Huang
School of Computer Science, Fudan University, Shanghai 200433, China, Shanghai Key Laboratory of Intelligent Information Processing