SPM-Bench: Benchmarking Large Language Models for Scanning Probe Microscopy

📅 2026-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the lack of high-quality, uncontaminated, and low-cost evaluation benchmarks for large language models in specialized scientific domains such as scanning probe microscopy (SPM). To this end, the authors introduce SPM-Bench, the first doctoral-level multimodal SPM benchmark, built on an automated data synthesis pipeline that efficiently generates authoritative image-text pairs. Methodologically, they propose the Anchor-Gated Sieve (AGS) technique combined with a hybrid cloud-local architecture to enable low-cost, high-purity data extraction. They also design a novel metric, Strict Imperfection Penalty F1 (SIP-F1), which for the first time quantifies a model's reasoning "temperament" and capability boundaries in complex physical scenarios. This work not only establishes a generalizable paradigm for scientific data synthesis but also enables fine-grained evaluation of large models' domain-specific scientific behaviors.
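
The hybrid cloud-local step can be pictured with the minimal sketch below. It is illustrative only, not the paper's pipeline code: the helper name `query_vlm_for_bbox` and the normalized-coordinate convention are assumptions. The point it shows is that the cloud VLM returns only box coordinates, while the high-fidelity crop is taken locally from the full-resolution original, so very few tokens are spent per image.

```python
# Illustrative sketch (assumptions, not the paper's implementation): the cloud VLM
# is asked only for the bounding box of the relevant SPM panel; the crop itself is
# done locally at full resolution, so only coordinates cross the network boundary.
from PIL import Image

def crop_with_cloud_bbox(image_path: str, query_vlm_for_bbox) -> Image.Image:
    """query_vlm_for_bbox(path) -> (x0, y0, x1, y1) in normalized [0, 1] coordinates
    (hypothetical helper standing in for the cloud VLM call)."""
    x0, y0, x1, y1 = query_vlm_for_bbox(image_path)  # cloud returns coordinates only
    img = Image.open(image_path)                     # full-resolution source stays local
    w, h = img.size
    box = (int(x0 * w), int(y0 * h), int(x1 * w), int(y1 * h))
    return img.crop(box)                             # local, lossless, high-fidelity crop
```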

📝 Abstract
As LLMs achieve breakthroughs in general reasoning, evaluating their proficiency in specialized scientific domains exposes pronounced gaps in existing benchmarks: data contamination, insufficient complexity, and prohibitive human-labor costs. Here we present SPM-Bench, an original, PhD-level multimodal benchmark designed specifically for scanning probe microscopy (SPM). We propose a fully automated data synthesis pipeline that ensures both high authority and low cost. Using Anchor-Gated Sieve (AGS) technology, we efficiently extract high-value image-text pairs from arXiv and journal papers published between 2023 and 2025. Through a hybrid cloud-local architecture in which VLMs return only spatial coordinates ("llbox") for local high-fidelity cropping, our pipeline achieves extreme token savings while maintaining high dataset purity. To evaluate LLM performance accurately and objectively, we introduce the Strict Imperfection Penalty F1 (SIP-F1) score. This metric not only establishes a rigorous capability hierarchy but also, for the first time, quantifies model "personalities" (Conservative, Aggressive, Gambler, or Wise). By correlating these results with model-reported confidence and perceived difficulty, we expose the true reasoning boundaries of current AI in complex physical scenarios. These insights establish SPM-Bench as a generalizable paradigm for automated scientific data synthesis.
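
The abstract names the SIP-F1 score but does not reproduce its formula here. The sketch below shows one illustrative way a strict imperfection penalty could be folded into an F1-style score, assuming that any imperfect (partially correct) attempt is counted fully as an error; the `is_perfect` predicate and the abstention convention are assumptions, not the paper's definition.

```python
# Purely illustrative reading of a "strict imperfection penalty" F1: imperfect
# attempts get no partial credit and are penalized as errors. Not the paper's formula.
def sip_f1(answers, golds, is_perfect):
    """answers: model outputs (None = abstained); golds: references;
    is_perfect(ans, gold) -> bool is a strict exact-correctness predicate."""
    attempted = [(a, g) for a, g in zip(answers, golds) if a is not None]
    tp = sum(1 for a, g in attempted if is_perfect(a, g))
    fp = len(attempted) - tp            # every imperfect attempt counts as a false positive
    fn = len(golds) - tp                # abstentions and wrong attempts count as misses
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0
```
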
Problem

Research questions and friction points this paper is trying to address.

large language models
scanning probe microscopy
scientific benchmark
data contamination
evaluation metrics
Innovation

Methods, ideas, or system contributions that make the work stand out.

SPM-Bench
Anchor-Gated Sieve
automated data synthesis
SIP-F1 score
multimodal benchmark
👥 Authors
Peiyao Xiao
Ph.D. candidate at University at Buffalo
Xiaogang Li
Alibaba Group
Chengliang Xu
Alibaba Group
Jiayi Wang
Skylenage
Ben Wang
University of Oklahoma
Zichao Chen
Alibaba Group
Zeyu Wang
Alibaba Group
Kejun Yu
Skylenage
Yueqian Chen
Skylenage
Xulin Liu
Skylenage
Wende Xiao
Skylenage
Bing Zhao
SRI International
Hu Wei
Alibaba Group