HoneyBee: Data Recipes for Vision-Language Reasoners

📅 2025-10-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current vision-language reasoning datasets lack systematic curation principles, which hinders the development of complex reasoning capabilities in models. Method: The authors propose a principled data curation methodology that (i) analyzes the influence of context (image-question pair) sources, (ii) applies targeted interventions such as auxiliary signals from image captions and text-only reasoning data, and (iii) scales data along multiple dimensions, including questions per image and chain-of-thought (CoT) solutions per image-question pair. Contribution/Results: Using this framework, they construct HoneyBee, a 2.5M-example vision-language reasoning dataset. On MathVerse, a 3B-parameter model trained on HoneyBee outperforms the prior state of the art by 7.8% absolute accuracy, and a proposed test-time scaling strategy reduces decoding cost by 73% without sacrificing accuracy, showing that high-quality reasoning data can deliver substantial gains even for small models.

📝 Abstract
Recent advances in vision-language models (VLMs) have made them highly effective at reasoning tasks. However, the principles underlying the construction of performant VL reasoning training datasets remain poorly understood. In this work, we introduce several data curation approaches and study their impacts on VL reasoning capabilities by carefully controlling training and evaluation setups. We analyze the effects of context (image and question pair) sources, implement targeted data interventions, and explore scaling up images, questions, and chain-of-thought (CoT) solutions. Our findings reveal that (a) context source strategies significantly affect VLM performance, (b) interventions such as auxiliary signals from image captions and the inclusion of text-only reasoning yield substantial gains, and (c) scaling all data dimensions (e.g., unique questions per image and unique CoTs per image-question pair) consistently improves reasoning capability. Motivated by these insights, we introduce HoneyBee, a large-scale, high-quality CoT reasoning dataset with 2.5M examples consisting of 350K image-question pairs. VLMs trained with HoneyBee outperform state-of-the-art models across model sizes. For instance, a HoneyBee-trained VLM with 3B parameters outperforms the SOTA model and the base model by 7.8% and 24.8%, respectively, on MathVerse. Furthermore, we propose a test-time scaling strategy that reduces decoding cost by 73% without sacrificing accuracy. Overall, this work presents improved strategies for VL reasoning dataset curation research.
Problem

Research questions and friction points this paper is trying to address.

Understanding principles for constructing effective vision-language reasoning datasets
Studying impacts of data curation approaches on VL reasoning capabilities
Developing improved strategies for vision-language reasoning dataset curation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Curating diverse image-question pairs for reasoning
Adding auxiliary signals and text-only reasoning interventions
Scaling images, questions, and chain-of-thought solutions
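The scaling dimensions above can be made concrete with a small sketch. The schema below is hypothetical (the paper does not publish a data format here), but the arithmetic uses the stated totals: 2.5M CoT examples over 350K image-question pairs imply roughly 7 CoTs per pair on average.

```python
from dataclasses import dataclass, field

# Hypothetical record layout: each image can carry multiple questions,
# and each image-question pair can carry multiple CoT solutions.
@dataclass
class ImageQuestionPair:
    image_id: str
    question: str
    cot_solutions: list[str] = field(default_factory=list)

# Totals reported for HoneyBee.
TOTAL_COT_EXAMPLES = 2_500_000   # unique CoT training examples
TOTAL_IQ_PAIRS = 350_000         # unique image-question pairs

avg_cots_per_pair = TOTAL_COT_EXAMPLES / TOTAL_IQ_PAIRS
print(f"average CoTs per image-question pair: {avg_cots_per_pair:.1f}")  # ~7.1
```

Under this reading, "scaling all data dimensions" means independently growing the image pool, the questions per image, and the CoTs per pair, each of which multiplies the final example count.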