🤖 AI Summary
This study investigates the emergence mechanism of “text readability”—the capability to recognize textual content within images—during vision-language model (VLM) training, challenging the implicit assumption that multimodal capabilities evolve synchronously. Method: Leveraging a contrastive learning framework, we systematically analyze multi-stage capability evolution on standard VLM architectures, evaluating performance across diverse vision-language tasks, including those involving rendered text (e.g., screenshots, posters). Contribution/Results: We find that text recognition capability emerges abruptly in the mid-to-late training phase, whereas semantic understanding develops gradually from the early stages. Tasks requiring alignment of images containing rendered text exhibit the slowest convergence. This is the first empirical demonstration of a staged developmental dissociation between symbolic processing (text detection/reading) and semantic processing in multimodal representations. The findings provide theoretical grounding and empirical evidence for optimizing training strategies and enhancing robustness in text-aware visual understanding.
📝 Abstract
We investigate how the ability to recognize textual content within images emerges during the training of Vision-Language Models (VLMs). Our analysis reveals a critical phenomenon: the ability to read textual information in a given image (**text readability**) emerges abruptly after substantial training iterations, in contrast to semantic content understanding, which develops gradually from the early stages of training. This delayed emergence may reflect how contrastive learning tends to initially prioritize general semantic understanding, with text-specific symbolic processing developing later. Interestingly, the ability to match images with rendered text develops even more slowly, indicating a deeper need for semantic integration. These findings highlight the need for tailored training strategies to accelerate robust text comprehension in VLMs, laying the groundwork for future research on optimizing multimodal learning.
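For readers unfamiliar with the training objective the abstract refers to, the contrastive framework in question is the standard CLIP-style symmetric InfoNCE loss, which pulls matched image/text embedding pairs together and pushes mismatched pairs apart. The sketch below is illustrative only (the function name, batch size, and temperature value are our assumptions, not details from the paper):

```python
import numpy as np

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    image_emb, text_emb: (N, D) arrays; row i of each is a matched pair.
    A sketch of the standard CLIP objective, not the paper's exact setup.
    """
    # L2-normalize so dot products are cosine similarities
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (N, N) similarity matrix

    def xent(l):
        # Cross-entropy with the diagonal (matched pairs) as targets
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # Average the image->text and text->image directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

Because this loss only rewards matching a whole image to a whole caption, nothing forces the model to decode rendered glyphs early in training, which is consistent with the delayed emergence of text readability reported above.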