🤖 AI Summary
To address modality imbalance and temporal misalignment in multimodal Spiking Neural Networks (SNNs), this paper proposes the Temporal Attention-guided Adaptive Fusion (TAAF) framework. TAAF integrates temporal attention, learnable time-warping, and a modality-aware temporal balancing fusion loss to enable timestep-wise dynamic weight allocation and coordinated convergence across modalities, mimicking multisensory integration in biological cortex. The entire architecture operates under event-driven computation, ensuring compatibility with neuromorphic hardware. Evaluated on the CREMA-D, AVE, and EAD benchmarks, TAAF achieves state-of-the-art accuracies of 77.55%, 70.65%, and 97.5%, respectively, outperforming existing SNN baselines. Moreover, it accelerates convergence by 23% and reduces inference energy consumption by 31%. This work establishes a novel paradigm for energy-efficient, biologically interpretable multimodal brain-inspired computing.
📝 Abstract
Multimodal spiking neural networks (SNNs) hold significant potential for energy-efficient sensory processing but face critical challenges in modality imbalance and temporal misalignment. Current approaches suffer from uncoordinated convergence speeds across modalities and static fusion mechanisms that ignore time-varying cross-modal interactions. We propose a temporal attention-guided adaptive fusion framework for multimodal SNNs with two synergistic innovations: 1) the Temporal Attention-guided Adaptive Fusion (TAAF) module, which dynamically assigns importance scores to fused spiking features at each timestep, enabling hierarchical integration of temporally heterogeneous spike-based features; 2) the temporal adaptive balanced fusion loss, which modulates learning rates per modality based on these attention scores, preventing dominant modalities from monopolizing optimization. The proposed framework implements adaptive fusion, especially along the temporal dimension, and alleviates modality imbalance during multimodal learning, mimicking cortical multisensory integration principles. Evaluations on the CREMA-D, AVE, and EAD datasets demonstrate state-of-the-art performance (77.55%, 70.65%, and 97.5% accuracy, respectively) with high energy efficiency. The system resolves temporal misalignment through learnable time-warping operations and coordinates modality convergence faster than baseline SNNs. This work establishes a new paradigm for temporally coherent multimodal learning in neuromorphic systems, bridging the gap between biological sensory processing and efficient machine intelligence.
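To make the two innovations concrete, the following is a minimal NumPy sketch of the ideas described above: per-timestep attention scores over two modalities' spike features, and an attention-driven per-modality learning-rate rescaling. The function names, the scalar-logit attention parameterization, and the rescaling heuristic are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def temporal_attention_fusion(spikes_a, spikes_b, w_att):
    """Fuse two modality spike trains with timestep-wise attention.

    spikes_a, spikes_b: (T, D) binary spike-feature arrays.
    w_att: (2*D,) projection scoring each modality per timestep
           (a toy parameterization, assumed for illustration).
    Returns fused features (T, D) and attention scores (T, 2).
    """
    T, D = spikes_a.shape
    # One scalar logit per modality per timestep, softmax-normalized
    # so that each timestep's modality weights sum to 1.
    logits = np.stack([spikes_a @ w_att[:D],
                       spikes_b @ w_att[D:]], axis=1)            # (T, 2)
    scores = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    fused = scores[:, :1] * spikes_a + scores[:, 1:] * spikes_b  # (T, D)
    return fused, scores

def modality_lr_scales(scores, base_lr=1e-3):
    """Balancing heuristic (assumed): shrink the learning rate of the
    modality that dominates the time-averaged attention, so the weaker
    modality is not starved during optimization."""
    mean_att = scores.mean(axis=0)                               # (2,)
    return base_lr * (1.0 - mean_att + mean_att.min())
```

In this sketch, a modality that captures most of the attention mass receives a proportionally smaller learning-rate scale, which is one simple way to realize the "prevent dominant modalities from monopolizing optimization" behavior the loss is described as providing.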