Correlating instruction-tuning (in multimodal models) with vision-language processing (in the brain)

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether instruction-tuned multimodal large language models (MLLMs) achieve improved alignment with human brain representations during visual–linguistic processing under natural-language instructions. Using fMRI-based neural response prediction and cross-modal representational alignment analysis, we evaluate how well text embeddings from MLLMs’ instruction responses predict brain activity across ten natural instruction types. Our key contributions are: (1) MLLM instruction responses exhibit significant, strong correlations with neural activity in the occipitotemporal cortex; (2) instructions dynamically modulate model encoding of task-relevant visual concepts (e.g., counting, object identification), and distinct instructions share most of the explainable neural variance; (3) MLLMs’ brain alignment performance matches that of CLIP and substantially surpasses pure vision baselines. These findings reveal that instructions function not merely as output controllers but as critical intervention variables that actively shape model internal representations to better align with human cognitive processing.
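The neural response prediction described above is typically a ridge regression from per-image feature vectors to voxel responses, scored by held-out correlation. The sketch below illustrates that setup; the embedding source, array shapes, and regularization grid are illustrative assumptions, not the paper's released code.

```python
# Minimal sketch of an fMRI encoding analysis of the kind described above.
# The feature matrix would hold one MLLM response embedding per image
# (e.g., the model's answer to "Describe the image.", passed through any
# sentence encoder); shapes and values here are placeholders.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

def encoding_score(features, voxels, alphas=(1.0, 10.0, 100.0, 1000.0)):
    """Fit a ridge encoding model from stimulus features to fMRI voxels and
    return the per-voxel Pearson correlation on held-out stimuli."""
    X_tr, X_te, Y_tr, Y_te = train_test_split(
        features, voxels, test_size=0.2, random_state=0)
    model = RidgeCV(alphas=alphas).fit(X_tr, Y_tr)
    pred = model.predict(X_te)
    # Pearson r per voxel between predicted and measured responses
    pred_z = (pred - pred.mean(0)) / pred.std(0)
    true_z = (Y_te - Y_te.mean(0)) / Y_te.std(0)
    return (pred_z * true_z).mean(0)

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 768))  # placeholder MLLM response embeddings
voxels = rng.normal(size=(1000, 500))    # placeholder occipitotemporal voxels
print(encoding_score(features, voxels).mean())
```

In this framing, "brain alignment" of a model under a given instruction is simply the average held-out correlation achieved by its response embeddings; swapping in CLIP image embeddings or a vision-only backbone's features gives the baselines the summary compares against.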

📝 Abstract
Transformer-based language models, though not explicitly trained to mimic brain recordings, have demonstrated surprising alignment with brain activity. Progress in these models, through increased size, instruction-tuning, and multimodality, has led to better representational alignment with neural data. Recently, a new class of instruction-tuned multimodal LLMs (MLLMs) has emerged, showing remarkable zero-shot capabilities in open-ended multimodal vision tasks. However, it is unknown whether MLLMs, when prompted with natural instructions, lead to better brain alignment and effectively capture instruction-specific representations. To address this, we first investigate brain alignment, i.e., how well text response embeddings from MLLMs predict neural visual activity recorded while participants view natural scenes. Experiments with 10 different instructions show that MLLMs exhibit significantly better brain alignment than vision-only models and perform comparably to non-instruction-tuned multimodal models like CLIP. We also find that while these MLLMs are effective at generating high-quality responses suited to the task-specific instructions, not all instructions are relevant for brain alignment. Further, by varying instructions, we make the MLLMs encode instruction-specific visual concepts related to the input image. This analysis shows that MLLMs effectively capture count-related and recognition-related concepts, demonstrating strong alignment with brain activity. Notably, the majority of the explained variance of the brain encoding models is shared between MLLM embeddings for image captioning and those for other instructions. These results suggest that enhancing MLLMs' ability to capture task-specific information could lead to better differentiation between various types of instructions and thereby improve their precision in predicting brain responses.
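The shared-explained-variance claim in the abstract is the kind of quantity usually obtained by variance partitioning over nested encoding models: fit each instruction's embeddings alone and concatenated, then decompose the cross-validated R². The sketch below assumes that standard procedure; the paper's exact method may differ, and all variable names and shapes are placeholders.

```python
# Variance partitioning between two instructions' embeddings (illustrative).
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

def r2(features, voxels):
    """Cross-validated R^2 per voxel for a ridge encoding model."""
    pred = cross_val_predict(RidgeCV(alphas=(1, 10, 100)), features, voxels, cv=5)
    ss_res = ((voxels - pred) ** 2).sum(0)
    ss_tot = ((voxels - voxels.mean(0)) ** 2).sum(0)
    return 1 - ss_res / ss_tot

def variance_partition(feat_caption, feat_other, voxels):
    """Split explained variance into shared and instruction-unique parts."""
    r2_a = r2(feat_caption, voxels)
    r2_b = r2(feat_other, voxels)
    r2_ab = r2(np.hstack([feat_caption, feat_other]), voxels)
    shared = r2_a + r2_b - r2_ab    # variance both instructions explain
    unique_a = r2_ab - r2_b         # unique to the captioning instruction
    unique_b = r2_ab - r2_a         # unique to the other instruction
    return shared, unique_a, unique_b
```

A large `shared` component relative to the unique terms is what the abstract's finding about image captioning versus other instructions would look like under this decomposition.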
Problem

Research questions and friction points this paper is trying to address.

Investigates brain alignment of instruction-tuned multimodal LLMs with neural visual activity
Compares MLLMs' brain alignment against vision-only and non-instruction-tuned models
Explores instruction-specific visual concept encoding in MLLMs for brain response prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Instruction-tuned MLLMs reach brain alignment comparable to CLIP and well above vision-only baselines
MLLMs encode instruction-specific visual concepts such as counting and object recognition (see the probe sketch after this list)
Strengthening task-specific information capture could further improve brain response prediction
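One simple way to examine the instruction-specific concept claim is a linear probe from response embeddings to a concept label such as per-image object count, compared across instructions. The sketch below is a hypothetical illustration: model names, data, and shapes are all placeholders rather than the authors' setup.

```python
# Hypothetical probe: does a counting instruction make the MLLM's response
# embeddings more predictive of object count than a captioning instruction?
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

def concept_probe_score(embeddings, object_counts):
    """Cross-validated R^2 of a ridge probe from embeddings to object counts."""
    return cross_val_score(RidgeCV(alphas=(1, 10, 100)), embeddings,
                           object_counts, cv=5, scoring="r2").mean()

rng = np.random.default_rng(0)
emb_count_instr = rng.normal(size=(1000, 768))    # embeddings under "How many ...?"
emb_caption_instr = rng.normal(size=(1000, 768))  # embeddings under "Describe ..."
counts = rng.integers(0, 10, size=1000).astype(float)

print("count instruction:  ", concept_probe_score(emb_count_instr, counts))
print("caption instruction:", concept_probe_score(emb_caption_instr, counts))
```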