🤖 AI Summary
Current artificial neural networks (ANNs) lack a clear correspondence with the brain's multimodal processing mechanisms: unimodal models ignore cross-modal integration, while multimodal models focus predominantly on high-level outputs and lack neuron-level interpretability. Method: We propose the first neuron-level multimodal analysis framework, integrating fMRI voxel encoding, cross-modal voxel mapping, and artificial neuron activation modeling to systematically compare the hierarchical neural correspondence of CLIP and METER in joint vision-language representation. Contribution/Results: We find that artificial neurons exhibit significant brain-like properties, including functional network affiliation, structural redundancy, and activation polarity patterns, and we demonstrate for the first time that architectural differences critically determine this neurobiological plausibility. Our framework successfully predicts activity across multiple brain functional networks, validating that vision-language models implement brain-like hierarchical representations and bidirectional information flow.
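As a rough illustration of the voxel-encoding step described above, the sketch below regresses fMRI voxel responses on artificial-neuron activations from one VLM layer and scores held-out prediction accuracy per voxel. It is a minimal sketch, not the authors' pipeline: the array shapes, the synthetic placeholder data, and the choice of a cross-validated ridge regressor are all assumptions for illustration.

```python
# Minimal voxel-wise encoding sketch: predict fMRI voxel responses (BNs)
# from artificial-neuron (AN) activations of one VLM layer.
# All data below are random placeholders standing in for real stimuli/recordings.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_stimuli, n_ans, n_voxels = 1000, 512, 200            # placeholder sizes
an_acts = rng.standard_normal((n_stimuli, n_ans))       # AN activations per stimulus
bold = rng.standard_normal((n_stimuli, n_voxels))        # fMRI voxel responses

X_tr, X_te, Y_tr, Y_te = train_test_split(an_acts, bold, test_size=0.2, random_state=0)

# One ridge model with multi-output targets: each voxel gets its own weight vector.
encoder = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_tr, Y_tr)
Y_hat = encoder.predict(X_te)

# Per-voxel Pearson r between predicted and measured responses on held-out stimuli.
r = np.array([np.corrcoef(Y_hat[:, v], Y_te[:, v])[0, 1] for v in range(n_voxels)])
print(f"median held-out voxel correlation: {np.median(r):.3f}")
```

Repeating this fit per VLM layer (and grouping the per-voxel scores by functional network) gives the kind of hierarchical correspondence profile the summary refers to.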
📝 Abstract
While brain-inspired artificial intelligence (AI) has demonstrated promising results, current understanding of the parallels between artificial neural networks (ANNs) and human brain processing remains limited: (1) unimodal ANN studies fail to capture the brain's inherently multimodal processing, and (2) multimodal ANN research focuses primarily on high-level model outputs, neglecting the crucial role of individual neurons. To address these limitations, we propose a novel neuron-level analysis framework that investigates the multimodal information processing mechanisms of vision-language models (VLMs) through the lens of human brain activity. Our approach uniquely combines fine-grained artificial neuron (AN) analysis with fMRI-based voxel encoding to examine two architecturally distinct VLMs: CLIP and METER. Our analysis reveals four key findings: (1) ANs successfully predict the activities of biological neurons (BNs) across multiple functional networks (including language, vision, attention, and default mode), demonstrating shared representational mechanisms; (2) both ANs and BNs exhibit functional redundancy through overlapping neural representations, mirroring the brain's fault-tolerant and collaborative information processing; (3) ANs exhibit polarity patterns that parallel those of BNs, with oppositely activated BNs showing mirrored activation trends across VLM layers, reflecting the complexity and bidirectional nature of neural information processing; (4) the architectures of CLIP and METER drive distinct BN responses: CLIP's independent branches show modality-specific specialization, whereas METER's cross-modal design yields unified cross-modal activation, highlighting the influence of architecture on an ANN's brain-like properties. These results provide compelling evidence for brain-like hierarchical processing in VLMs at the neuronal level.
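Finding (3) lends itself to a simple quantitative check. The sketch below is one hedged way such mirrored trends could be measured: split voxels by response polarity, trace each group's mean response profile across VLM layers, and test whether the two depth profiles are anti-correlated. The data are synthetic placeholders and the sign-based grouping rule is an assumption for illustration, not the authors' exact procedure.

```python
# Sketch: quantify "mirrored activation trends across VLM layers" for
# oppositely activated voxel groups. profile[l, v] stands in for voxel v's
# response measure (e.g. encoding beta or fit score) at VLM layer l.
import numpy as np

rng = np.random.default_rng(1)
n_layers, n_voxels = 12, 300
profile = rng.standard_normal((n_layers, n_voxels))   # placeholder layer-by-voxel profile

# Assumed grouping rule: polarity from the sign of each voxel's mean response.
polarity = np.sign(profile.mean(axis=0))
pos_trend = profile[:, polarity > 0].mean(axis=1)     # depth profile of "positive" voxels
neg_trend = profile[:, polarity < 0].mean(axis=1)     # depth profile of "negative" voxels

# Mirrored trends would appear as a strong negative correlation across layers.
r = np.corrcoef(pos_trend, neg_trend)[0, 1]
print(f"correlation between polarity-group depth profiles: {r:.3f}")
```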