🤖 AI Summary
Existing benchmarks overlook the multimodality and domain-specific complexity of chemical tables, hindering the application of multimodal large language models (MLLMs) to scientific understanding in chemistry. To address this, we introduce ChemTable, the first dedicated multimodal benchmark for chemical experimental tables, curated from real scientific literature and encompassing symbolic notation, structured variables, and embedded molecular graphics. ChemTable supports two core tasks: table recognition (structure parsing and content extraction) and table understanding (descriptive and reasoning-based question answering). We systematically define a multimodal semantic hierarchy for chemical tables (e.g., reagents, catalysts, yields, molecular structures) and provide expert-annotated cell polygons, logical layouts, and domain-specific labels. We comprehensively evaluate open- and closed-source MLLMs, covering approaches based on OCR, structured image parsing, multimodal fusion, and chemistry-knowledge enhancement. Experiments reveal substantial gaps between models and human experts in chemical reasoning, as well as pronounced disparities across model families, establishing ChemTable as a reproducible, high-fidelity evaluation standard for scientific AI.
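To make the annotation schema concrete, here is a minimal sketch of what a single ChemTable record could look like. The field names (`cell_polygons`, `logical_layout`, `semantic_labels`, and so on) are illustrative assumptions based on the summary above, not the benchmark's actual release format.

```python
# Hypothetical sketch of a single ChemTable annotation record.
# All field names are illustrative assumptions, not the real schema.
example_record = {
    "table_id": "chemtable_00042",
    "image_path": "images/chemtable_00042.png",
    # Expert-annotated cell polygons: one (x, y) vertex list per cell.
    "cell_polygons": [
        {"cell_id": 0, "polygon": [(12, 8), (140, 8), (140, 36), (12, 36)]},
    ],
    # Logical layout: row/column position and span information per cell.
    "logical_layout": [
        {"cell_id": 0, "row": 0, "col": 0, "row_span": 1, "col_span": 1},
    ],
    # Domain-specific labels drawn from the semantic hierarchy,
    # e.g. reagent, catalyst, yield, molecular_structure.
    "semantic_labels": {0: "catalyst"},
    # Embedded molecular graphics are referenced per cell.
    "molecular_graphics": [
        {"cell_id": 7, "crop_path": "crops/chemtable_00042_cell7.png"},
    ],
}
```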
📝 Abstract
Chemical tables encode complex experimental knowledge through symbolic expressions, structured variables, and embedded molecular graphics. Existing benchmarks largely overlook this multimodal and domain-specific complexity, limiting the ability of multimodal large language models to support scientific understanding in chemistry. In this work, we introduce ChemTable, a large-scale benchmark of real-world chemical tables curated from the experimental sections of the chemical literature. ChemTable includes expert-annotated cell polygons, logical layouts, and domain-specific labels such as reagents, catalysts, yields, and graphical components, and it supports two core tasks: (1) Table Recognition, covering structure parsing and content extraction; and (2) Table Understanding, encompassing both descriptive and reasoning-oriented question answering grounded in table structure and domain semantics. We evaluate a range of representative multimodal models, both open-source and closed-source, on ChemTable and report a series of findings with practical and conceptual insights. Although models show reasonable performance on basic layout parsing, they fall well short of human performance on both descriptive and inferential QA tasks, and we observe significant performance gaps between open-source and closed-source models across multiple dimensions. These results underscore the challenges of chemistry-aware table understanding and position ChemTable as a rigorous and realistic benchmark for advancing scientific reasoning.
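As a rough illustration of how the two question-answering tracks might be scored, the sketch below loops over (table image, question, answer) triples and computes exact-match accuracy per QA type. `query_mllm` is a hypothetical stand-in for whichever open- or closed-source model is under test, and the answer normalization is an assumption rather than the paper's official metric.

```python
# Minimal sketch of a table-understanding QA evaluation loop.
# `query_mllm` is a hypothetical stand-in for the model under test;
# exact-match scoring here is an assumption, not the official metric.
from typing import Callable


def normalize(answer: str) -> str:
    """Lowercase and strip whitespace so '92%' and ' 92% ' compare equal."""
    return answer.strip().lower()


def evaluate_qa(
    examples: list[dict],  # each: {"image_path", "question", "answer", "qa_type"}
    query_mllm: Callable[[str, str], str],  # (image_path, question) -> answer
) -> dict[str, float]:
    """Return exact-match accuracy per QA type (descriptive vs. reasoning)."""
    correct: dict[str, int] = {}
    total: dict[str, int] = {}
    for ex in examples:
        qa_type = ex["qa_type"]  # e.g. "descriptive" or "reasoning"
        pred = query_mllm(ex["image_path"], ex["question"])
        total[qa_type] = total.get(qa_type, 0) + 1
        if normalize(pred) == normalize(ex["answer"]):
            correct[qa_type] = correct.get(qa_type, 0) + 1
    return {t: correct.get(t, 0) / total[t] for t in total}
```

Reporting accuracy separately per QA type mirrors the benchmark's split between descriptive and reasoning-oriented questions, which is where the paper observes the largest model-versus-human gaps.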