ChemVLR: Prioritizing Reasoning in Perception for Chemical Vision-Language Understanding

📅 2026-04-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited reasoning depth and poor interpretability of existing chemical vision-language models, which are predominantly confined to end-to-end question answering without explicit modeling of reaction mechanisms. To overcome these limitations, we propose ChemVLR, the first framework that integrates fine-grained chemical perception—such as functional group recognition—with explicit reasoning path generation. We introduce a cross-modal reverse engineering strategy to construct a large-scale, high-quality dataset of chemical reasoning traces and employ a three-stage training paradigm to systematically enhance model capabilities. Evaluated on molecular and reaction understanding benchmarks, ChemVLR achieves state-of-the-art performance, significantly outperforming both leading open-source and closed-source models. Ablation studies further confirm the effectiveness of our data construction methodology and training framework.
📝 Abstract
While Vision-Language Models (VLMs) have demonstrated significant potential in chemical visual understanding, current models are predominantly optimized for direct visual question-answering tasks. This paradigm often results in "black-box" systems that fail to utilize the inherent capability of Large Language Models (LLMs) to infer underlying reaction mechanisms. In this work, we introduce ChemVLR, a chemical VLM designed to prioritize reasoning within the perception process. Unlike conventional chemical VLMs, ChemVLR analyzes visual inputs in a fine-grained manner by explicitly identifying granular chemical descriptors, such as functional groups, prior to generating answers. This approach ensures the production of explicit and interpretable reasoning paths for complex visual chemical problems. To facilitate this methodology, we implement a cross-modality reverse-engineering strategy, combined with a rigorous filtering pipeline, to curate a large-scale reasoning-and-captioning dataset comprising 760k high-quality samples across molecular and reaction tasks. Furthermore, we adopt a three-stage training framework that systematically builds model perception and reasoning capacity. Experiments demonstrate that ChemVLR achieves state-of-the-art (SOTA) performance, surpassing both leading proprietary models and domain-specific open-source baselines. We also provide comprehensive ablation studies to validate our training strategy and data generation designs. Code and model weights will be available at https://github.com/xxlllz/ChemVLR.
Problem

Research questions and friction points this paper is trying to address.

Vision-Language Models
Chemical Understanding
Reasoning
Interpretability
Reaction Mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

chemical vision-language model
reasoning-in-perception
fine-grained chemical descriptors
cross-modality reverse-engineering
interpretable reasoning
Xuanle Zhao
The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences
Xinyuan Cai
The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences
Xiang Cheng
The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences
Xiuyi Chen
Baidu (formerly Institute of Automation, Chinese Academy of Sciences)
RAG · Multi-Modal · Dialogue
Bo Xu
The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences