Towards Scalable Training for Handwritten Mathematical Expression Recognition

📅 2025-08-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Handwritten mathematical expression recognition (HMER) has long been hindered by the scarcity of high-quality annotated data. To address this, we introduce the first scalable synthetic data engine that generates high-fidelity, diverse handwritten-style formula images directly from LaTeX source code, yielding Tex80M—the largest HMER dataset to date, comprising 80 million samples. Leveraging a hybrid training strategy that combines Tex80M with a small amount of real handwritten data, we train TexTeller, an end-to-end HMER model. TexTeller achieves state-of-the-art performance across all major benchmarks—including CROHME and HME100K—demonstrating substantial improvements in generalization and robustness. We publicly release Tex80M, the TexTeller model, and full training code, establishing critical infrastructure to advance large-scale HMER research and development.

Technology Category

Application Category

📝 Abstract
Large foundation models have achieved significant performance gains through scalable training on massive datasets. However, the field of extbf{H}andwritten extbf{M}athematical extbf{E}xpression extbf{R}ecognition (HMER) has been impeded by the scarcity of data, primarily due to the arduous and costly process of manual annotation. To bridge this gap, we propose a novel method integrating limited handwritten formulas with large-scale LaTeX-rendered formulas by developing a scalable data engine to generate complex and consistent LaTeX sequences. With this engine, we built the largest formula dataset to date, termed exttt{Tex80M}, comprising over 80 million high-quality training instances. Then we propose exttt{TexTeller}, the first HMER model trained at scale, by mix-training exttt{Tex80M} with a relatively small HME dataset. The expansive training dataset and our refined pipeline have equipped exttt{TexTeller} with state-of-the-art (SOTA) performance across nearly all benchmarks. To advance the field, we will openly release our complete model, entire dataset, and full codebase, enabling further research building upon our contributions.
Problem

Research questions and friction points this paper is trying to address.

Addressing data scarcity in Handwritten Mathematical Expression Recognition
Integrating handwritten and LaTeX formulas for scalable training
Achieving SOTA performance with large-scale dataset and model
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scalable data engine generates LaTeX sequences
Largest formula dataset Tex80M with 80M instances
TexTeller model mix-trained for SOTA performance
🔎 Similar Papers
No similar papers found.
H
Haoyang Li
Beijing University of Posts and Telecommunications
J
Jiaqing Li
Beijing University of Posts and Telecommunications
Jialun Cao
Jialun Cao
The Hong Kong University of Science and Technology
SE for AIAI for SE
Z
Zongyuan Yang
Beijing University of Posts and Telecommunications
Y
Yongping Xiong
Beijing University of Posts and Telecommunications