🤖 AI Summary
Vision-language models (VLMs) exhibit significant limitations in detecting and correcting errors in handwritten mathematical solutions. Method: We introduce FERMAT, the first multi-dimensional benchmark for handwritten mathematics, covering four error types (computational, conceptual, notational, and presentation) and comprising 2,200+ authentic handwritten solutions to problems from grades 7–12, augmented with controllably introduced perturbations. We propose a multi-granularity evaluation framework spanning error detection, localization, and correction, and systematically assess nine state-of-the-art VLMs, including Gemini-1.5-Pro. Contribution/Results: Experiments reveal a substantial performance drop on handwritten input; the accuracy of several models improves markedly when inputs are converted to printed text, confirming handwriting understanding as a critical bottleneck. Gemini-1.5-Pro achieves the highest error correction rate at 77%. This work uncovers structural weaknesses of VLMs in handwritten mathematical reasoning, establishing a foundational benchmark and guiding directions for robust modeling and educational AI applications.
📝 Abstract
Recent advances in Vision-Language Models (VLMs) have opened new possibilities for the automatic grading of handwritten student responses, particularly in mathematics. However, a comprehensive study of the ability of VLMs to evaluate and reason over handwritten content remains absent. To address this gap, we introduce FERMAT, a benchmark designed to assess the ability of VLMs to detect, localize, and correct errors in handwritten mathematical content. FERMAT spans four key error dimensions (computational, conceptual, notational, and presentation) and comprises over 2,200 handwritten math solutions derived from 609 manually curated problems from grades 7-12 with intentionally introduced perturbations. Using FERMAT, we benchmark nine VLMs across three tasks: error detection, localization, and correction. Our results reveal significant shortcomings in the ability of current VLMs to reason over handwritten text, with Gemini-1.5-Pro achieving the highest error correction rate (77%). We also observe that some models struggle to process handwritten content, as their accuracy improves when handwritten inputs are replaced with printed text or images. These findings highlight the limitations of current VLMs and reveal new avenues for improvement. We release FERMAT and all associated resources as open source to drive further research.