🤖 AI Summary
This work investigates whether large language models (LLMs) spontaneously develop a unified, cross-lingual and cross-modal semantic representation space—spanning text, code, images, audio, and arithmetic—and empirically tests the “semantic hub” hypothesis: that intermediate model layers form functionally shared representations analogous to the human brain’s cross-modal semantic hubs.
Method: Drawing inspiration from neuroscience’s “hub-and-spoke” model, we integrate logit lens interpretability analysis, cross-modal and cross-lingual embedding similarity metrics, controlled representational interventions, and intermediate-layer probing.
Contribution/Results: We find strong semantic alignment across diverse inputs at intermediate layers; moreover, interventions on representations from one modality predictably alter model outputs in other modalities—demonstrating that this space is not merely a training byproduct but is actively leveraged during inference. These results uncover intrinsic mechanisms underlying multilingual and multimodal semantic alignment, pointing toward a unified paradigm for universal representation modeling.
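The cross-lingual embedding-similarity analysis named above can be sketched as follows. This is a minimal illustration with synthetic vectors, not the paper's pipeline: in the actual experiments the hidden states would come from a real model's intermediate layers, and all dimensions, seeds, and variable names here are hypothetical.

```python
import numpy as np

def cosine_sim(u, v):
    """Cosine similarity between two representation vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy stand-ins for intermediate-layer hidden states. Semantically
# equivalent inputs in two languages are modeled as a shared semantic
# component plus small language-specific noise (an assumption for
# illustration only).
rng = np.random.default_rng(1)
shared = rng.normal(size=64)               # shared semantic component
h_en = shared + 0.1 * rng.normal(size=64)  # e.g. English input
h_zh = shared + 0.1 * rng.normal(size=64)  # e.g. Chinese translation
h_other = rng.normal(size=64)              # unrelated input

print(cosine_sim(h_en, h_zh))     # high: semantically equivalent pair
print(cosine_sim(h_en, h_other))  # much lower: unrelated pair
```

In the paper's setting, a "semantic hub" corresponds to the equivalent-pair similarity peaking at intermediate layers rather than at the input or output layers.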
📝 Abstract
Modern language models can process inputs across diverse languages and modalities. We hypothesize that models acquire this capability through learning a shared representation space across heterogeneous data types (e.g., different languages and modalities), which places semantically similar inputs near one another, even if they are from different modalities/languages. We term this the semantic hub hypothesis, following the hub-and-spoke model from neuroscience (Patterson et al., 2007), which posits that semantic knowledge in the human brain is organized through a transmodal semantic "hub" which integrates information from various modality-specific "spokes" regions. We first show that model representations for semantically equivalent inputs in different languages are similar in the intermediate layers, and that this space can be interpreted using the model's dominant pretraining language via the logit lens. This tendency extends to other data types, including arithmetic expressions, code, and visual/audio inputs. Interventions in the shared representation space in one data type also predictably affect model outputs in other data types, suggesting that this shared representation space is not simply a vestigial byproduct of large-scale training on broad data, but something that is actively utilized by the model during input processing.
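The logit lens used to interpret intermediate layers can be sketched as below: an intermediate hidden state is passed directly through the model's final normalization and unembedding head, yielding a vocabulary distribution for that layer. This is a toy sketch with random weights standing in for a trained model; the dimensions, seed, and helper names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, vocab = 16, 50  # toy dimensions (hypothetical)

# Stand-ins for a trained model's final LayerNorm parameters and
# unembedding (output-projection) matrix.
W_U = rng.normal(size=(d_model, vocab))
gamma, beta = np.ones(d_model), np.zeros(d_model)

def layer_norm(h, eps=1e-5):
    mu, var = h.mean(-1, keepdims=True), h.var(-1, keepdims=True)
    return gamma * (h - mu) / np.sqrt(var + eps) + beta

def logit_lens(h_layer, k=5):
    """Decode an intermediate hidden state through the output head,
    returning the indices of the k highest-scoring vocabulary tokens."""
    logits = layer_norm(h_layer) @ W_U
    return np.argsort(logits)[::-1][:k]

h_mid = rng.normal(size=d_model)  # pretend intermediate-layer hidden state
print(logit_lens(h_mid))
```

Applied per layer to a real model, this is how one can observe that intermediate representations of non-English or non-text inputs decode preferentially into tokens of the dominant pretraining language.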