🤖 AI Summary
This work addresses the challenge of high-fidelity reconstruction of dynamic visual content from low-temporal-resolution fMRI signals, a task hindered by existing methods' inability to jointly model semantic, structural, and motion features, as well as by hallucination biases introduced by video generation models. We propose an fMRI–vision–language tri-modal contrastive learning framework incorporating sparse causal attention, enabling multi-frame-consistent motion prediction and disentangled feature representation under the constraint of a single fMRI frame. To rule out motion hallucinated by external video generation models, we further introduce a next-frame-prediction objective for motion decoding and a lightweight inflated Stable Diffusion synthesis module. Our method achieves state-of-the-art performance across multiple public video–fMRI benchmark datasets. Visualization and ablation analyses confirm strong neuroscientific interpretability, with significant improvements in semantic accuracy, structural fidelity, and temporal consistency.
📝 Abstract
Reconstructing human dynamic vision from brain activity is a challenging task of great scientific significance. Although prior video reconstruction methods have made substantial progress, they still suffer from several limitations: (1) difficulty in simultaneously reconciling semantic (e.g., categorical descriptions), structural (e.g., size and color), and consistent motion information (e.g., frame order); (2) the low temporal resolution of fMRI, which makes it challenging to decode multiple frames of video dynamics from a single fMRI frame; (3) reliance on video generation models, which introduces ambiguity as to whether the dynamics observed in the reconstructed videos genuinely derive from fMRI data or are hallucinations of the generative model. To overcome these limitations, we propose a two-stage model named Mind-Animator. In the fMRI-to-feature stage, we decouple semantic, structural, and motion features from fMRI. Specifically, we employ fMRI-vision-language tri-modal contrastive learning to decode semantic features from fMRI, and design a sparse causal attention mechanism to decode multi-frame video motion features through a next-frame-prediction task. In the feature-to-video stage, these features are integrated into videos using an inflated Stable Diffusion model, effectively eliminating interference from external video data. Extensive experiments on multiple video-fMRI datasets demonstrate that our model achieves state-of-the-art performance. Comprehensive visualization analyses further elucidate the interpretability of our model from a neurobiological perspective. Project page: https://mind-animator-design.github.io/.
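The abstract names a sparse causal attention mechanism for decoding multi-frame motion, but does not spell out the masking pattern. As an illustration only, the following NumPy sketch shows one common sparse causal variant (each frame attends solely to the first frame and the immediately preceding frame, rather than to all earlier frames); the shapes, sparsity pattern, and function name here are assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sparse_causal_attention(frames):
    """Hypothetical sparse causal attention over a frame sequence.

    frames: array of shape (T, N, d) -- T frames, N tokens per frame,
            d channels. Frame t attends only to frames {0, t-1},
            a sparse subset of its causal past.
    """
    T, N, d = frames.shape
    out = np.empty_like(frames)
    for t in range(T):
        # Sparse key/value set: first frame + previous frame (frame 0 for t=0).
        kv = frames[[0, max(t - 1, 0)]].reshape(-1, d)   # (2N, d)
        q = frames[t]                                    # (N, d)
        attn = softmax(q @ kv.T / np.sqrt(d), axis=-1)   # (N, 2N)
        out[t] = attn @ kv
    return out
```

Because each query frame touches a constant number of past frames, the cost grows linearly in T instead of quadratically, while the causal restriction keeps later frames from leaking into earlier predictions in a next-frame-prediction task.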