How does longer temporal context enhance multimodal narrative video processing in the brain?

📅 2026-02-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the alignment between human brain representations and those of multimodal large language models (MLLMs) during the processing of narrative videos across varying temporal scales, and how task prompts modulate this alignment. Using fMRI data collected while participants viewed full-length movies, the authors systematically compare dynamic alignment patterns between MLLMs, unimodal video models, and cortical regions across 3–12 second video segments. They report a novel finding: extended narrative context significantly enhances alignment between MLLMs and high-order cortical integration areas—an effect absent in unimodal models. Shorter windows primarily align with perceptual and early language regions, whereas longer windows engage higher-order semantic integration areas. Furthermore, task prompts induce region-specific tuning of neural representations, revealing the dynamic, hierarchical nature of cortex-to-model mapping.

📝 Abstract
Understanding how humans and artificial intelligence systems process complex narrative videos is a fundamental challenge at the intersection of neuroscience and machine learning. This study investigates how the temporal context length of video clips (3–12 s) and narrative-task prompting shape brain-model alignment during naturalistic movie watching. Using fMRI recordings from participants viewing full-length movies, we examine how brain regions sensitive to narrative context dynamically represent information over varying timescales and how these neural patterns align with model-derived features. We find that increasing clip duration substantially improves brain alignment for multimodal large language models (MLLMs), whereas unimodal video models show little to no gain. Further, shorter temporal windows align with perceptual and early language regions, while longer windows preferentially align with higher-order integrative regions, mirrored by a layer-to-cortex hierarchy in MLLMs. Finally, narrative-task prompts (multi-scene summary, narrative summary, character motivation, and event boundary detection) elicit task-specific, region-dependent brain alignment patterns and context-dependent shifts in clip-level tuning in higher-order regions. Together, our results position long-form narrative movies as a principled testbed for probing biologically relevant temporal integration and interpretable representations in long-context MLLMs.
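The brain-model alignment described in the abstract is typically computed with a ridge-regression encoding model: model-derived clip features are mapped to voxel responses, and alignment is the held-out correlation between predicted and observed activity. The sketch below illustrates that standard recipe on synthetic data; it is a minimal illustration, not the authors' exact pipeline, and all names (`brain_model_alignment`) and data shapes are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def brain_model_alignment(features, voxels, alpha=1.0, seed=0):
    """Fit a ridge encoding model from model features (clips x dims)
    to voxel responses (clips x voxels) and return the mean held-out
    Pearson correlation, a common brain-model alignment score."""
    X_tr, X_te, Y_tr, Y_te = train_test_split(
        features, voxels, test_size=0.25, random_state=seed)
    model = Ridge(alpha=alpha).fit(X_tr, Y_tr)
    Y_pred = model.predict(X_te)
    # Per-voxel Pearson r between predicted and observed responses
    Yp = (Y_pred - Y_pred.mean(0)) / (Y_pred.std(0) + 1e-8)
    Yo = (Y_te - Y_te.mean(0)) / (Y_te.std(0) + 1e-8)
    return float((Yp * Yo).mean(0).mean())

# Synthetic demo: voxel responses linearly driven by features plus noise
rng = np.random.default_rng(0)
feats = rng.standard_normal((200, 50))   # 200 clips x 50 model features
W = rng.standard_normal((50, 30))        # hypothetical feature-to-voxel weights
vox = feats @ W + 0.5 * rng.standard_normal((200, 30))
score = brain_model_alignment(feats, vox)
```

In the study's setting, one would rerun this per clip duration (3–12 s) and per cortical region to obtain the duration-by-region alignment profiles the paper reports.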
Problem

Research questions and friction points this paper is trying to address.

temporal context
multimodal narrative video
brain-model alignment
fMRI
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

temporal context
multimodal large language models
brain-model alignment
narrative processing
fMRI
Prachi Jindal
IIT Delhi, India
Anant Khandelwal
Microsoft Research, Bangalore, India
Manish Gupta
Bing, Microsoft
Deep Learning, Natural Language Processing, Web Mining, Data Mining, Neuroscience
Bapi S. Raju
IIIT-Hyderabad, India
Subba Reddy Oota
Technische Universität Berlin, Germany
Tanmoy Chakraborty
Associate Professor, IIT Delhi, India
Natural Language Processing, Large Language Models, Social Computing