🤖 AI Summary
Handwritten mathematical expression recognition (HMER) has long been hindered by the scarcity of high-quality annotated data. To address this, the authors introduce the first scalable synthetic data engine that generates diverse, high-fidelity handwritten-style formula images directly from LaTeX source code, yielding Tex80M, the largest HMER dataset to date with 80 million samples. Using a hybrid training strategy that combines Tex80M with a small amount of real handwritten data, they train TexTeller, an end-to-end HMER model that achieves state-of-the-art performance across major benchmarks, including CROHME and HME100K, with substantial improvements in generalization and robustness. The Tex80M dataset, the TexTeller model, and the full training code are publicly released, establishing infrastructure for large-scale HMER research.
📝 Abstract
Large foundation models have achieved significant performance gains through scalable training on massive datasets. However, the field of **H**andwritten **M**athematical **E**xpression **R**ecognition (HMER) has been impeded by data scarcity, primarily due to the arduous and costly process of manual annotation. To bridge this gap, we propose a novel method that integrates limited handwritten formulas with large-scale LaTeX-rendered formulas by developing a scalable data engine that generates complex and consistent LaTeX sequences. With this engine, we build the largest formula dataset to date, termed `Tex80M`, comprising over 80 million high-quality training instances. We then propose `TexTeller`, the first HMER model trained at scale, by mix-training `Tex80M` with a relatively small HME dataset. The expansive training dataset and our refined pipeline equip `TexTeller` with state-of-the-art (SOTA) performance across nearly all benchmarks. To advance the field, we will openly release our complete model, entire dataset, and full codebase, enabling further research building upon our contributions.