🤖 AI Summary
This work investigates how the timing of visual data introduction during vision-language model (VLM) pre-training affects multi-task performance. We propose a systematic experimental framework comparing early multimodal pretraining, in which visual tokens are injected partway through language-only pretraining, against the conventional two-stage paradigm. On a 1B-parameter model, we demonstrate that injecting visual tokens at the 80% pretraining checkpoint yields a 2% average improvement across six vision-language and pure-text benchmarks, outperforming late-fusion alternatives. Our methodology incorporates multiple model scales, varied image-text ratios, and unified cross-task evaluation. The key contribution is revealing the temporal sensitivity of joint vision-language pretraining: early visual token injection enhances visual understanding without compromising pure-language capabilities, pointing toward a more efficient VLM training paradigm.
📝 Abstract
Pre-trained LLMs that are further trained with image data perform well on vision-language tasks. While adding images during a second training phase effectively unlocks this capability, it is unclear how much gain or loss this two-step pipeline incurs relative to VLMs that integrate images earlier in training. To investigate this, we train models spanning various datasets, scales, image-text ratios, and amounts of pre-training done before introducing vision tokens. We then fine-tune these models and evaluate their downstream performance on a suite of vision-language and text-only tasks. We find that pre-training with a mixture of image and text data allows models to perform better on vision-language tasks while maintaining strong performance on text-only evaluations. Averaged over 6 diverse tasks, we find that for a 1B model, introducing visual tokens 80% of the way through pre-training results in a 2% average improvement over introducing visual tokens to a fully pre-trained model.
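The scheduling idea described above (text-only pre-training up to a chosen fraction of total steps, then mixing in image-text data at a fixed ratio) can be sketched as a simple batch-composition function. This is an illustrative sketch, not the paper's implementation; the function name, the 80% injection point, and the 50/50 image-text ratio are assumptions chosen to mirror the described setup.

```python
def image_text_fraction(step: int, total_steps: int,
                        inject_frac: float = 0.8,
                        image_ratio: float = 0.5) -> float:
    """Fraction of image-text examples in the batch at a given step.

    Before the injection point (inject_frac * total_steps) the model
    sees text-only data; afterward a fixed image-text ratio is mixed in.
    All names and default values here are illustrative assumptions.
    """
    inject_step = int(inject_frac * total_steps)
    return image_ratio if step >= inject_step else 0.0


# Example over a hypothetical 100k-step pre-training run:
print(image_text_fraction(50_000, 100_000))   # text-only phase -> 0.0
print(image_text_fraction(80_000, 100_000))   # visual tokens injected -> 0.5
```

In this framing, the conventional two-stage pipeline corresponds to `inject_frac = 1.0` (images only after full pre-training), while the paper's best-performing setting corresponds to injecting at 80% of the run.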