LatentLens: Revealing Highly Interpretable Visual Tokens in LLMs

📅 2026-01-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing approaches struggle to uncover the semantic content of visual tokens across different layers of large language models (LLMs), hindering a deeper understanding of vision–language representation alignment. This work proposes LatentLens, a method that maps visual tokens into a large-scale contextualized text embedding space and leverages k-nearest neighbor retrieval to generate human-readable natural language descriptions. For the first time, LatentLens enables high-granularity interpretation of visual tokens across all model layers and multiple vision–language models (VLMs). Experiments demonstrate that LatentLens significantly outperforms baseline methods such as LogitLens, revealing that the vast majority of visual tokens maintain clear and fine-grained semantics throughout all layers in ten diverse VLMs, thereby providing strong evidence for the high degree of alignment in multimodal representations.

📝 Abstract
Transforming a large language model (LLM) into a Vision-Language Model (VLM) can be achieved by mapping the visual tokens from a vision encoder into the embedding space of an LLM. Intriguingly, this mapping can be as simple as a shallow MLP transformation. To understand why LLMs can so readily process visual tokens, we need interpretability methods that reveal what is encoded in the visual token representations at every layer of LLM processing. In this work, we introduce LatentLens, a novel approach for mapping latent representations to descriptions in natural language. LatentLens works by encoding a large text corpus and storing contextualized representations for each token in that corpus. Visual token representations are then compared to these contextualized textual representations, with the top-k nearest neighbors providing descriptions of the visual token. We evaluate this method on 10 different VLMs, showing that commonly used methods, such as LogitLens, substantially underestimate the interpretability of visual tokens. With LatentLens, by contrast, the majority of visual tokens are interpretable across all studied models and all layers. Qualitatively, we show that the descriptions produced by LatentLens are semantically meaningful and provide more fine-grained interpretations for humans than individual tokens do. More broadly, our findings contribute new evidence on the alignment between vision and language representations, opening up new directions for analyzing latent representations.
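The retrieval step described in the abstract can be sketched as a nearest-neighbor lookup over a bank of contextualized text-token embeddings. The sketch below is a minimal illustration, not the paper's implementation: the function name `latentlens_knn` and the toy embedding bank are hypothetical, and it assumes the visual token and the text bank come from the same layer of the same model.

```python
import numpy as np

def latentlens_knn(visual_token, text_bank, text_tokens, k=5):
    """Return the k nearest contextualized text tokens for one visual token.

    visual_token: (d,) hidden state of a visual token at some LLM layer.
    text_bank:    (N, d) contextualized embeddings of corpus tokens,
                  assumed to come from the same layer and model.
    text_tokens:  list of N strings, the corpus tokens.
    """
    # Cosine similarity between the visual token and every corpus token.
    v = visual_token / np.linalg.norm(visual_token)
    bank = text_bank / np.linalg.norm(text_bank, axis=1, keepdims=True)
    sims = bank @ v
    # Indices of the k most similar corpus tokens, best first.
    top = np.argsort(-sims)[:k]
    return [(text_tokens[i], float(sims[i])) for i in top]

# Toy example: three corpus tokens embedded in a 4-dim space.
bank = np.array([[1.0, 0.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0, 0.0],
                 [0.9, 0.1, 0.0, 0.0]])
tokens = ["cat", "sky", "kitten"]
neighbors = latentlens_knn(np.array([1.0, 0.05, 0.0, 0.0]), bank, tokens, k=2)
```

In practice the text bank would hold millions of token embeddings, so an approximate nearest-neighbor index would replace the brute-force matrix product; the retrieved neighbors, read in their corpus context, serve as the natural-language description of the visual token.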
Problem

Research questions and friction points this paper is trying to address.

interpretability
visual tokens
large language models
vision-language models
latent representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

LatentLens
visual tokens
interpretability
vision-language models
contextualized representations