From Pixels to Words -- Towards Native Vision-Language Primitives at Scale

📅 2025-10-16
🤖 AI Summary
Native vision-language models (VLMs) lag behind modular VLMs in practice: they must overcome multimodal misalignment and data inefficiency without the pre-trained vision and language components that modular designs stitch together, which hinders their broader adoption and reproducible research. Method: The paper introduces NEO, a family of native VLMs built on a dense, monolithic architecture that maps pixels directly to tokens and encodes both modalities in a unified semantic space, enabling end-to-end joint learning of visual perception and linguistic understanding. Trained on 390 million image-text pairs, NEO mitigates vision-language conflicts and establishes core design principles for native VLMs. Contribution/Results: NEO matches state-of-the-art modular VLMs across diverse cross-modal benchmarks while requiring significantly less training data. All code and models are fully open-sourced, enabling reproducible, scalable, and democratized research on native VLMs. This work advances both the theoretical foundations and the practical accessibility of end-to-end multimodal learning.

📝 Abstract
The edifice of native Vision-Language Models (VLMs) has emerged as a rising contender to typical modular VLMs, shaped by evolving model architectures and training paradigms. Yet, two lingering clouds cast shadows over its widespread exploration and promotion: (-) What fundamental constraints set native VLMs apart from modular ones, and to what extent can these barriers be overcome? (-) How can research in native VLMs be made more accessible and democratized, thereby accelerating progress in the field? In this paper, we clarify these challenges and outline guiding principles for constructing native VLMs. Specifically, one native VLM primitive should: (i) effectively align pixel and word representations within a shared semantic space; (ii) seamlessly integrate the strengths of formerly separate vision and language modules; (iii) inherently embody various cross-modal properties that support unified vision-language encoding, aligning, and reasoning. Hence, we launch NEO, a novel family of native VLMs built from first principles, capable of rivaling top-tier modular counterparts across diverse real-world scenarios. With only 390M image-text examples, NEO efficiently develops visual perception from scratch while mitigating vision-language conflicts inside a dense and monolithic model crafted from our elaborate primitives. We position NEO as a cornerstone for scalable and powerful native VLMs, paired with a rich set of reusable components that foster a cost-effective and extensible ecosystem. Our code and models are publicly available at: https://github.com/EvolvingLMMs-Lab/NEO.
Problem

Research questions and friction points this paper is trying to address.

Identifying fundamental constraints distinguishing native from modular vision-language models
Developing accessible and democratized research approaches for native VLMs
Creating unified vision-language primitives for seamless cross-modal integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Aligns pixel and word representations in shared space
Integrates separate vision and language modules seamlessly
Embodies unified cross-modal encoding, aligning, and reasoning
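The core primitive — mapping pixels directly into the same token space as words so one dense backbone can process both — can be sketched as follows. This is a minimal illustration of the general idea, not NEO's actual implementation; the patch size, dimensions, and function names are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration (not taken from the paper).
PATCH, D_MODEL, VOCAB = 16, 64, 1000

def pixels_to_tokens(image, w_patch):
    """Direct pixel-to-token mapping: cut the image into PATCH x PATCH
    tiles and linearly project each flattened tile into the shared
    semantic space, with no separate pre-trained vision encoder."""
    H, W, C = image.shape
    patches = (
        image.reshape(H // PATCH, PATCH, W // PATCH, PATCH, C)
             .transpose(0, 2, 1, 3, 4)
             .reshape(-1, PATCH * PATCH * C)
    )
    return patches @ w_patch            # (num_patches, D_MODEL)

def words_to_tokens(token_ids, embed_table):
    """Word tokens come from an ordinary embedding lookup into the
    same D_MODEL-dimensional space."""
    return embed_table[token_ids]       # (seq_len, D_MODEL)

# Randomly initialized parameters; in a native VLM both are learned
# jointly, end to end, by the single monolithic model.
w_patch = rng.normal(0.0, 0.02, (PATCH * PATCH * 3, D_MODEL))
embed_table = rng.normal(0.0, 0.02, (VOCAB, D_MODEL))

image = rng.random((64, 64, 3))         # a 64x64 RGB image -> 16 patches
text_ids = np.array([5, 42, 7])         # three word-piece ids

vis = pixels_to_tokens(image, w_patch)
txt = words_to_tokens(text_ids, embed_table)

# Unified encoding: one concatenated sequence for one dense backbone,
# so vision-language alignment happens inside the model rather than
# through a bolted-on projector between two frozen modules.
sequence = np.concatenate([vis, txt], axis=0)
print(sequence.shape)                   # 16 visual + 3 word tokens
```

Because both modalities land in one sequence of `D_MODEL`-dimensional tokens, the same attention layers can encode, align, and reason over pixels and words jointly — the property the bullets above describe.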