🤖 AI Summary
This study investigates the alignment between human brain representations and those of multimodal large language models (MLLMs) during the processing of narrative videos across varying temporal scales, and how task prompts modulate this alignment. Using fMRI data collected while participants viewed full-length movies, the authors systematically compare dynamic alignment patterns between MLLMs, unimodal video models, and cortical regions across 3–12 second video segments. They report a novel finding: extended narrative context significantly enhances alignment between MLLMs and higher-order cortical integration areas, an effect absent in unimodal models. Shorter windows primarily align with perceptual and early language regions, whereas longer windows engage higher-order semantic integration areas. Furthermore, task prompts induce region-specific tuning of neural representations, revealing the dynamic, hierarchical nature of cortex-to-model mapping.
📝 Abstract
Understanding how humans and artificial intelligence systems process complex narrative videos is a fundamental challenge at the intersection of neuroscience and machine learning. This study investigates how the temporal context length of video clips (3–12 s) and narrative-task prompting shape brain-model alignment during naturalistic movie watching. Using fMRI recordings from participants viewing full-length movies, we examine how brain regions sensitive to narrative context dynamically represent information over varying timescales and how these neural patterns align with model-derived features. We find that increasing clip duration substantially improves brain alignment for multimodal large language models (MLLMs), whereas unimodal video models show little to no gain. Further, shorter temporal windows align with perceptual and early language regions, while longer windows preferentially align with higher-order integrative regions, mirrored by a layer-to-cortex hierarchy in MLLMs. Finally, narrative-task prompts (multi-scene summary, narrative summary, character motivation, and event boundary detection) elicit task-specific, region-dependent brain alignment patterns and context-dependent shifts in clip-level tuning in higher-order regions. Together, our results position long-form narrative movies as a principled testbed for probing biologically relevant temporal integration and interpretable representations in long-context MLLMs.
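The abstract does not specify the alignment metric, but a common way to quantify brain-model alignment in this kind of study is representational similarity analysis (RSA): correlate the clip-by-clip dissimilarity structure of model features with that of fMRI responses. A minimal sketch, assuming RSA is the metric; the function name and the synthetic data below are illustrative, not from the paper:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_alignment(model_feats: np.ndarray, brain_resps: np.ndarray) -> float:
    """Spearman correlation between the clip-by-clip dissimilarity
    structures of model features and fMRI responses.
    Rows index clips; columns index feature dims (model) or voxels (brain)."""
    rdm_model = pdist(model_feats, metric="correlation")  # condensed RDM
    rdm_brain = pdist(brain_resps, metric="correlation")
    rho, _ = spearmanr(rdm_model, rdm_brain)
    return float(rho)

# Toy check: brain responses that are a noisy linear readout of the
# model features should show strong representational alignment.
rng = np.random.default_rng(0)
feats = rng.standard_normal((20, 64))            # 20 clips x 64 model dims
readout = rng.standard_normal((64, 500))         # hypothetical voxel mapping
brain = feats @ readout + 0.1 * rng.standard_normal((20, 500))
print(rsa_alignment(feats, brain))
```

Repeating this per cortical region and per clip duration (3–12 s) would yield the kind of region-by-timescale alignment profile the study reports.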