🤖 AI Summary
Multimodal large language models (MLLMs) lack systematic evaluation of their ability to comprehend the dense graphical language of reactions—such as reaction mechanisms and molecular structure identification—in chemical literature. Method: We introduce RxnBench, the first multimodal benchmark tailored to chemical literature, featuring two tasks—single-figure question answering (SF-QA) and full-document question answering (FD-QA)—and propose a hierarchical evaluation framework to assess cross-modal figure-text-table integration, reaction-logic reasoning, and precise structural perception. Contribution/Results: Experiments reveal that state-of-the-art MLLMs achieve below 50% accuracy on FD-QA; inference-time reasoning techniques such as chain-of-thought prompting improve performance significantly but do not close the gap. Our analysis uncovers critical limitations of general-purpose vision encoders and underscores the urgent need for domain-specific visual representations and chemistry-aware reasoning modules. RxnBench establishes a rigorous standard for evaluating chemical AI, offering both a demanding benchmark and foundational insights into multimodal scientific understanding.
📝 Abstract
The integration of Multimodal Large Language Models (MLLMs) into chemistry promises to revolutionize scientific discovery, yet their ability to comprehend the dense, graphical language of reactions in authentic literature remains underexplored. Here, we introduce RxnBench, a multi-tiered benchmark designed to rigorously evaluate MLLMs on chemical reaction understanding from scientific PDFs. RxnBench comprises two tasks: Single-Figure QA (SF-QA), which tests fine-grained visual perception and mechanistic reasoning using 1,525 questions derived from 305 curated reaction schemes, and Full-Document QA (FD-QA), which challenges models to synthesize information from 108 articles, requiring cross-modal integration of text, schemes, and tables. Our evaluation of MLLMs reveals a critical capability gap: while models excel at extracting explicit text, they struggle with deep chemical logic and precise structural recognition. Notably, models with inference-time reasoning significantly outperform standard architectures, yet none achieves 50% accuracy on FD-QA. These findings underscore the urgent need for domain-specific visual encoders and stronger reasoning engines to advance autonomous AI chemists.