🤖 AI Summary
This work addresses the opacity of implicit language representations in closed-source multimodal systems. We propose a black-box analytical framework based on multi-round "telephone games," leveraging the concept preference biases observed in image–text–image cyclic generation. By jointly modeling concept co-occurrence statistics with reasoning-capable large language models (Reasoning-LLMs), we construct a global concept association graph. Evaluated on the Telescope dataset (>10,000 concept pairs), our approach achieves the first quantitative characterization of implicit linguistic structure, uncovering deep semantic relationships that transcend surface-level textual and visual similarity. It further identifies systematic preference biases inherited from training data and discovers more stable pathways for fragile concept connections. These advances significantly enhance both the interpretability and controllability of multimodal systems.
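To make the graph-construction step concrete, below is a minimal sketch of aggregating pair-wise co-occurrence counts into a weighted concept association graph. The names `build_concept_graph`, `pair_counts`, and `total_rounds` are hypothetical, and normalizing counts by the number of rounds is one plausible definition of connection strength, not necessarily the paper's.

```python
# A minimal sketch, assuming `pair_counts` maps unordered concept pairs to
# the number of telephone-game rounds in which both concepts co-occurred.
# All names here are illustrative; the paper's exact formulation may differ.
import networkx as nx

def build_concept_graph(pair_counts: dict, total_rounds: int) -> nx.Graph:
    """Aggregate co-occurrence counts into a weighted concept association graph."""
    G = nx.Graph()
    for (a, b), count in pair_counts.items():
        # Edge weight approximates connection strength: the fraction of
        # rounds in which the system kept both concepts together.
        G.add_edge(a, b, weight=count / total_rounds)
    return G
```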
📝 Abstract
Recent closed-source multimodal systems have made great advances, but the hidden language they use to understand the world remains opaque due to their black-box architectures. In this paper, we use the systems' preference bias to study their hidden language: when compressing input images (typically containing multiple concepts) into texts and then reconstructing them into images, the systems' inherent preference bias introduces specific shifts in the outputs, disrupting the original input concept co-occurrence. We employ a multi-round "telephone game" to strategically leverage this bias. By observing the co-occurrence frequencies of concepts across telephone games, we quantitatively investigate the strength of concept connections in multimodal systems' understanding, i.e., their "hidden language." We also contribute Telescope, a dataset of 10,000+ concept pairs, as the database for our telephone game framework. Our telephone game is test-time scalable: by iteratively running telephone games, we can construct a global map of concept connections in multimodal systems' understanding. With this map, we can identify preference biases inherited from training, assess advances in generalization capability, and discover more stable pathways for fragile concept connections. Furthermore, we use Reasoning-LLMs to uncover unexpected concept relationships that transcend textual and visual similarity, inferring how multimodal systems understand and simulate the world. This study offers a new perspective on the hidden language of multimodal systems and lays a foundation for future research on their interpretability and controllability.
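To illustrate the cyclic-generation protocol, here is a minimal sketch of one multi-round telephone game. The callables `image_to_text`, `text_to_image`, and `extract_concepts` are assumed wrappers around a closed-source multimodal system and a concept tagger; none of these names come from the paper.

```python
# A minimal sketch of a multi-round "telephone game", assuming hypothetical
# wrappers around a closed-source multimodal system:
#   image_to_text(image)   -> caption
#   text_to_image(caption) -> image
#   extract_concepts(image) -> iterable of concept labels
from collections import Counter

def telephone_game(seed_image, rounds, image_to_text, text_to_image, extract_concepts):
    """Run image -> text -> image cycles, counting surviving concept pairs."""
    pair_counts = Counter()
    image = seed_image
    for _ in range(rounds):
        caption = image_to_text(image)   # compress the image into text
        image = text_to_image(caption)   # reconstruct an image from the text
        concepts = sorted(set(extract_concepts(image)))
        # Each unordered pair of concepts observed together in this round
        # contributes one co-occurrence count.
        for i, a in enumerate(concepts):
            for b in concepts[i + 1:]:
                pair_counts[(a, b)] += 1
    return pair_counts
```

Running many such games from different seed images and feeding the aggregated counts into a graph builder like the one sketched above would yield the kind of global concept map the abstract describes.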