OpenVision 2: A Family of Generative Pretrained Visual Encoders for Multimodal Learning

📅 2025-09-01
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the low training efficiency and high resource consumption of OpenVision, this paper introduces OpenVision 2: a purely generative pretraining paradigm that eliminates the text encoder and employs image captioning as the sole supervisory signal for end-to-end, efficient visual encoder training. Built upon the ViT architecture, the model interfaces seamlessly with generative language models. Compared to the original OpenVision, OpenVision 2 achieves comparable performance on multimodal benchmarks (e.g., MMBench, SEED-Bench), reduces training time by 1.5×, decreases GPU memory consumption by 1.8×, and scales the maximum batch size to 8k, significantly alleviating scalability bottlenecks. Its core innovation lies in decoupling multimodal alignment from textual understanding, enabling highly efficient visual representation learning through a minimalist loss design.
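The training recipe the summary describes, dropping the text encoder and contrastive loss so that next-token cross-entropy on captions is the only signal reaching the vision encoder, can be sketched as below. This is a minimal illustrative stand-in, not the authors' code: module sizes, names, and the toy patch dimension are all assumptions.

```python
# Minimal sketch of caption-only (generative) pretraining: a vision encoder
# produces patch features, a text decoder predicts caption tokens, and
# next-token cross-entropy is the SOLE loss (no text encoder, no contrastive
# term). All dimensions and module names here are illustrative assumptions.
import torch
import torch.nn as nn

class CaptionOnlyModel(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64, patch_dim=32):
        super().__init__()
        # Stand-in for a ViT encoder: one transformer layer over patch embeddings.
        self.patch_embed = nn.Linear(patch_dim, d_model)
        enc = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=1)
        # Stand-in for the generative text decoder that replaces the text encoder.
        self.tok_embed = nn.Embedding(vocab_size, d_model)
        dec = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec, num_layers=1)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, patches, caption_ids):
        # Encode image patches into visual features ("memory" for the decoder).
        memory = self.encoder(self.patch_embed(patches))
        # Teacher forcing: predict token t from tokens < t plus visual features.
        tgt = self.tok_embed(caption_ids[:, :-1])
        causal = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
        out = self.decoder(tgt, memory, tgt_mask=causal)
        logits = self.lm_head(out)
        # Captioning cross-entropy is the only training signal.
        return nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            caption_ids[:, 1:].reshape(-1),
        )

model = CaptionOnlyModel()
patches = torch.randn(2, 16, 32)          # (batch, num_patches, patch_dim)
caption = torch.randint(0, 1000, (2, 8))  # (batch, caption_length)
loss = model(patches, caption)
loss.backward()  # end-to-end: gradients flow back into the vision encoder
```

Because the loss involves only one forward pass through a single encoder-decoder pair (no second text tower, no large contrastive similarity matrix), per-step memory drops, which is consistent with the larger maximum batch sizes reported above.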

๐Ÿ“ Abstract
This paper simplifies OpenVision's architecture and loss design to enhance its training efficiency. Following the prior vision-language pretraining works CapPa and AIMv2, as well as modern multimodal designs like LLaVA, our changes are straightforward: we remove the text encoder (and therefore the contrastive loss), retaining only the captioning loss as a purely generative training signal. We name this new version OpenVision 2. The initial results are promising: despite this simplification, OpenVision 2 competitively matches the original model's performance on a broad set of multimodal benchmarks while substantially cutting both training time and memory consumption. For example, with ViT-L/14, it reduces training time by about 1.5x (from 83h to 57h) and memory usage by about 1.8x (from 24.5GB to 13.8GB, equivalently allowing the maximum batch size to grow from 2k to 8k). This superior training efficiency also allows us to scale far beyond the largest vision encoder used in OpenVision, reaching more than 1 billion parameters. We believe this lightweight, generative-only paradigm is compelling for future vision encoder development in multimodal foundation models.
Problem

Research questions and friction points this paper is trying to address.

OpenVision's dual-encoder contrastive training has low efficiency and high resource consumption
Whether a text encoder and contrastive loss are necessary for strong multimodal performance
How to cut training time and GPU memory without sacrificing benchmark accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Removes the text encoder and contrastive loss
Trains with a purely generative captioning loss as the sole supervisory signal
Improves training efficiency and scales vision encoders beyond 1B parameters