🤖 AI Summary
This work addresses the challenge of developing models that simultaneously exhibit strong scientific reasoning and general multimodal capabilities without relying on massive domain-specific datasets. The authors propose a data-efficient, transparent, and fully reproducible end-to-end training paradigm comprising high-quality scientific data curation, supervised fine-tuning, and reinforcement learning. Using fewer than five million samples, the resulting model significantly reduces dependence on large-scale pretraining data while achieving performance on scientific reasoning tasks comparable to that of much larger models. Moreover, it remains competitive on standard vision and multimodal benchmarks, demonstrating that scientific intelligence and general-purpose capabilities can effectively coexist within a single architecture.
📝 Abstract
We present Innovator-VL, a scientific multimodal large language model designed to advance understanding and reasoning across diverse scientific domains while maintaining strong performance on general vision tasks. Contrary to the trend of relying on massive domain-specific pretraining and opaque pipelines, our work demonstrates that principled training design and transparent methodology can yield strong scientific intelligence with substantially reduced data requirements. (i) We provide a fully transparent, end-to-end reproducible training pipeline covering data collection, cleaning, preprocessing, supervised fine-tuning, reinforcement learning, and evaluation, along with detailed optimization recipes, which facilitates systematic extension by the community. (ii) Innovator-VL exhibits remarkable data efficiency, achieving competitive performance on a range of scientific tasks with fewer than five million curated samples and no large-scale pretraining; this highlights that effective reasoning can be achieved through principled data selection rather than indiscriminate scaling. (iii) Innovator-VL generalizes strongly, remaining competitive on general vision, multimodal reasoning, and scientific benchmarks, which indicates that scientific alignment can be integrated into a unified model without compromising general-purpose capabilities. Our practice suggests that efficient, reproducible, and high-performing scientific multimodal models can be built without large-scale data, providing a practical foundation for future research.