🤖 AI Summary
To address data inefficiency and model redundancy in vision-language model (VLM) training, this work proposes the “Data Metabolism” paradigm—a data-centric, full-lifecycle VLM development framework. Methodologically, it integrates multi-stage data cleaning and augmentation, task-aware data composition, user-driven data flywheel feedback loops, and lightweight architecture fine-tuning with rigorous evaluation. Its core contribution lies in elevating data governance to a dynamic, self-optimizing system that continuously evolves through data curation, iterative refinement, and customized feedback. Experiments demonstrate that the released model, Capybara-VL—at only about one tenth the parameter count of leading closed-source VLMs—achieves competitive performance on visual question answering, scientific reasoning, and text-intensive tasks, significantly improving training efficiency and deployment feasibility without sacrificing capability.
📝 Abstract
Data curation plays a crucial role in training powerful Visual Language Models (VLMs). In this work, we introduce the concept of Data Metabolism and present our data-centric framework for building VLMs throughout the development lifecycle. Starting from a standard model architecture, we discuss and provide insights into two crucial development steps: data curation and iteration, which together form a closed-loop system that continuously improves model performance. We present a detailed codebook on how to process existing massive datasets and build a user-specific data flywheel. As a demonstration, we release a VLM, named Capybara-VL, which excels in typical multimodal tasks (e.g., visual question answering, scientific reasoning, and text-rich tasks). Despite its relatively compact size, Capybara-VL surpasses several open-source models that are up to 10 times larger. Moreover, it achieves results on par with those of several leading proprietary models, demonstrating its remarkable competitiveness. These results highlight the power of our data-centric framework and the potential of training smaller and more efficient VLMs.