🤖 AI Summary
Handwritten Mathematical Expression Recognition (HMER) faces persistent challenges in structural modeling and symbol ambiguity due to unconstrained symbol layout and highly variable handwriting styles. This paper presents the first full-parameter fine-tuning of a vision-language model (VLM) for HMER without architectural modification, enabling a unified multi-task learning framework. We propose three synergistic auxiliary tasks: (1) Tree-Aware Chain-of-Thought, which performs structured spatial reasoning over expression trees; (2) Error-Driven Learning, dynamically correcting predictions for visually similar symbols via error feedback; and (3) Symbol Counting, enforcing symbol-level consistency in long expressions. Leveraging data-driven task design and joint optimization, our method achieves new state-of-the-art results on CROHME and HME100K, outperforming the lightweight specialized model SSAN by 16.31% and surpassing zero-shot Gemini 2.5 Flash by 24.42%.
📝 Abstract
Handwritten Mathematical Expression Recognition (HMER) remains a persistent challenge in Optical Character Recognition (OCR) due to the inherent freedom of symbol layout and the variability of handwriting styles. Prior methods have hit performance bottlenecks because their isolated architectural modifications are difficult to integrate coherently into a unified framework. Meanwhile, recent advances in pretrained vision-language models (VLMs) have demonstrated strong cross-task generalization, offering a promising foundation for unified solutions. In this paper, we introduce Uni-MuMER, which fully fine-tunes a VLM for the HMER task without modifying its architecture, effectively injecting domain-specific knowledge into a generalist framework. Our method integrates three data-driven tasks: Tree-Aware Chain-of-Thought (Tree-CoT) for structured spatial reasoning, Error-Driven Learning (EDL) for reducing confusion among visually similar characters, and Symbol Counting (SC) for improving recognition consistency in long expressions. Experiments on the CROHME and HME100K datasets show that Uni-MuMER achieves new state-of-the-art performance, surpassing the best lightweight specialized model SSAN by 16.31% and the top-performing VLM Gemini 2.5 Flash by 24.42% in the zero-shot setting. Our datasets, models, and code are open-sourced at: https://github.com/BFlameSwift/Uni-MuMER
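Because the three auxiliary tasks are data-driven rather than architectural, each can be viewed as a recipe for turning a ground-truth LaTeX string into an extra (prompt, target) training pair. The sketch below illustrates this idea for Symbol Counting and a toy Tree-CoT; all function names, prompt wordings, and the simplified structure descriptions are illustrative assumptions, not the paper's released code.

```python
import re
from collections import Counter

def tokenize_latex(expr: str) -> list:
    """Split a LaTeX string into symbol-level tokens: commands like \\frac,
    plus individual characters (braces and whitespace are dropped)."""
    return re.findall(r"\\[a-zA-Z]+|[^\s{}]", expr)

def symbol_counting_sample(expr: str) -> dict:
    """Symbol Counting (SC) sample: the target enumerates how often each
    symbol occurs, pushing the model toward symbol-level consistency."""
    counts = Counter(tokenize_latex(expr))
    target = ", ".join(f"{tok}: {n}" for tok, n in sorted(counts.items()))
    return {"prompt": "Count every symbol in the expression image.",
            "target": target}

def tree_cot_sample(expr: str) -> dict:
    """Toy Tree-CoT sample: describe the spatial structure (fractions,
    super-/subscripts) before emitting the final LaTeX answer."""
    steps = []
    if "\\frac" in expr:
        steps.append("A fraction: numerator above the bar, denominator below.")
    if "^" in expr:
        steps.append("A superscript: the exponent sits above the baseline.")
    if "_" in expr:
        steps.append("A subscript: the index sits below the baseline.")
    reasoning = " ".join(steps) or "A flat left-to-right sequence of symbols."
    return {"prompt": "Reason over the expression tree, then output LaTeX.",
            "target": f"{reasoning} Final answer: {expr}"}

# Example: build both auxiliary samples for one ground-truth expression.
expr = r"\frac{x^{2}}{b}"
sc = symbol_counting_sample(expr)
cot = tree_cot_sample(expr)
```

In such a setup, these auxiliary pairs would be mixed with the primary image-to-LaTeX pairs during fine-tuning, so a single VLM learns all tasks jointly without any change to its architecture.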