🤖 AI Summary
Current vision-language models (VLMs) face dual bottlenecks in chemical image understanding: excessive background content inflates the computational overhead of visual tokens, while task granularity remains confined to the molecular level, hindering reaction-level chemical reasoning. To address this, we propose an efficient visual token reduction mechanism and introduce ChemRxn-V, the first reaction-level multimodal benchmark for chemistry, covering molecular structure recognition and reaction product prediction. By jointly optimizing architecture and task design on large-scale chemical image-text pairs, our model achieves state-of-the-art performance on both molecular and reaction tasks with only 4B parameters and 1/16 of the original visual tokens, significantly outperforming ChemVLM while attaining faster training and inference speeds. Our core contributions are: (1) a lightweight, computationally efficient visual representation; (2) a novel reaction-level task paradigm; and (3) an open-source multimodal chemical benchmark enabling diverse downstream applications.
📝 Abstract
While Vision Language Models (VLMs) have demonstrated remarkable capabilities in general visual understanding, their application in the chemical domain has been limited, with previous works predominantly focusing on text and thus overlooking critical visual information, such as molecular structures. Current approaches that directly adopt standard VLMs for chemical tasks suffer from two primary issues: (i) computational inefficiency from processing entire chemical images with non-informative backgrounds, and (ii) a narrow scope on molecular-level tasks that restricts progress in chemical reasoning. In this work, we propose **TinyChemVL**, an efficient and powerful chemical VLM that leverages visual token reduction and reaction-level tasks to improve model efficiency and reasoning capacity. We also propose **ChemRxn-V**, a reaction-level benchmark for assessing vision-based reaction recognition and prediction tasks. Directly predicting reaction products from molecular images poses a non-trivial challenge, as it requires models to integrate both recognition and reasoning capacities. Our results demonstrate that with only 4B parameters, TinyChemVL achieves superior performance on both molecular and reaction tasks while demonstrating faster inference and training speeds compared to existing models. Notably, TinyChemVL outperforms ChemVLM while utilizing only 1/16th of the visual tokens. This work builds efficient yet powerful VLMs for chemical domains by co-designing model architecture and task complexity.
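To make the visual-token-reduction idea concrete, here is a minimal, hypothetical sketch of score-based token pruning: patch embeddings with low activation norms (a common proxy for uninformative background, e.g. the white space around a drawn molecule) are dropped before the language model sees them. The function name, the norm-based saliency proxy, and the `keep_ratio` parameter are illustrative assumptions, not the paper's actual mechanism.

```python
import numpy as np

def prune_background_tokens(tokens: np.ndarray, keep_ratio: float = 1 / 16):
    """Keep only the most salient visual tokens.

    tokens: (N, D) array of patch embeddings from a vision encoder.
    keep_ratio: fraction of tokens to retain (1/16 mirrors the paper's
    reported token budget; the scoring rule here is a stand-in).
    """
    n_keep = max(1, int(len(tokens) * keep_ratio))
    # Proxy saliency score: background patches tend to have low-norm embeddings.
    scores = np.linalg.norm(tokens, axis=1)
    # Keep the top-k tokens by score, preserving their original spatial order.
    keep_idx = np.sort(np.argsort(scores)[-n_keep:])
    return tokens[keep_idx], keep_idx

# Example: 16 patch tokens, only one carries strong (molecule) content.
patches = np.zeros((16, 4))
patches[3] = 10.0
pruned, kept = prune_background_tokens(patches)  # retains 1 of 16 tokens
```

With a 1/16 keep ratio the attention cost over visual tokens, which grows quadratically with sequence length, shrinks by roughly 256x for that portion of the input, which is the intuition behind the reported speedups.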