TinyChemVL: Advancing Chemical Vision-Language Models via Efficient Visual Token Reduction and Complex Reaction Tasks

📅 2025-11-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current vision-language models (VLMs) face dual bottlenecks in chemical image understanding: excessive background content inflates visual token computation overhead, while task granularity remains confined to the molecular level, hindering reaction-level chemical reasoning. To address this, we propose an efficient visual token reduction mechanism and introduce ChemRxn-V—the first reaction-level multimodal benchmark for chemistry, covering molecular structure recognition and reaction product prediction. By jointly optimizing architecture and task design on large-scale chemical image-text pairs, our model achieves state-of-the-art performance on both molecular and reaction tasks using only 4B parameters and 1/16 of the original visual tokens—outperforming ChemVLM significantly. It also attains faster training and inference speeds. Our core contributions are: (1) a lightweight, computationally efficient visual representation; (2) a novel reaction-level task paradigm; and (3) an open-source, multimodal chemical benchmark enabling diverse downstream applications.

📝 Abstract
While Vision Language Models (VLMs) have demonstrated remarkable capabilities in general visual understanding, their application in the chemical domain has been limited, with previous works predominantly focusing on text and thus overlooking critical visual information, such as molecular structures. Current approaches that directly adopt standard VLMs for chemical tasks suffer from two primary issues: (i) computational inefficiency from processing entire chemical images with non-informative backgrounds, and (ii) a narrow scope on molecular-level tasks that restricts progress in chemical reasoning. In this work, we propose **TinyChemVL**, an efficient and powerful chemical VLM that leverages visual token reduction and reaction-level tasks to improve model efficiency and reasoning capacity. We also propose **ChemRxn-V**, a reaction-level benchmark for assessing vision-based reaction recognition and prediction tasks. Directly predicting reaction products from molecular images poses a non-trivial challenge, as it requires models to integrate both recognition and reasoning capacities. Our results demonstrate that with only 4B parameters, TinyChemVL achieves superior performance on both molecular and reaction tasks while demonstrating faster inference and training speeds compared to existing models. Notably, TinyChemVL outperforms ChemVLM while utilizing only 1/16th of the visual tokens. This work builds efficient yet powerful VLMs for chemical domains by co-designing model architecture and task complexity.
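The abstract attributes the efficiency gain to visual token reduction: chemical structure drawings are mostly blank background, so most image patches carry no information and need not become visual tokens. The paper does not spell out its mechanism here, but a minimal illustrative sketch, assuming a simple variance-based patch filter (the patch size and threshold are assumptions, not the authors' method), looks like this:

```python
import numpy as np

def reduce_visual_tokens(image, patch=32, var_thresh=1e-4):
    """Illustrative token reduction: keep only image patches whose pixel
    variance exceeds a threshold, dropping near-uniform background.

    NOTE: this is a hypothetical stand-in for TinyChemVL's mechanism,
    shown only to make the "1/16 of the visual tokens" idea concrete.
    Returns the grid coordinates of kept patches and the original count.
    """
    h, w = image.shape[:2]
    kept = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            p = image[i:i + patch, j:j + patch]
            if p.var() > var_thresh:        # background patches have ~0 variance
                kept.append((i // patch, j // patch))
    total = (h // patch) * (w // patch)
    return kept, total

# A mostly-white 256x256 "structure drawing" with one dark stroke:
img = np.ones((256, 256), dtype=np.float32)
img[40:50, 40:50] = 0.0
kept, total = reduce_visual_tokens(img)
```

On this toy input only the single patch containing the stroke survives out of 64, so the downstream encoder sees far fewer tokens; the real model would apply its own learned or structural criterion rather than raw variance.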
Problem

Research questions and friction points this paper is trying to address.

Reduces computational inefficiency in chemical image processing
Expands scope beyond molecular-level to complex reaction tasks
Improves vision-based reaction recognition and prediction capabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Efficient visual token reduction for chemical images
Reaction-level tasks for enhanced reasoning capacity
Co-designing model architecture with task complexity
Xuanle Zhao
The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences
Shuxin Zeng
The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences
Yinyuan Cai
The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences
Xiang Cheng
The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences
Duzhen Zhang
Institute of Automation, Chinese Academy of Sciences
Natural Language Processing, Multimodal, Large Language Models, Continual Learning, AI4Science
Xiuyi Chen
Baidu (previously CASIA)
RAG, Multimodal, Dialogue
Bo Xu
The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences