Toward an Artificial General Teacher: Procedural Geometry Data Generation and Visual Grounding with Vision-Language Models

📅 2026-04-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the poor performance of existing visual referring segmentation models on geometric diagrams, primarily due to the substantial domain shift between natural images and abstract geometric figures, as well as the lack of annotated data. The authors formulate visual explanations in geometry education as a referring segmentation task and introduce a novel programmatic synthesis engine that automatically generates over 200,000 geometric diagrams with pixel-level masks and diverse linguistic descriptions—without requiring manual annotation. They fine-tune vision-language models such as Florence-2 on this synthetic dataset and propose Buffered IoU, a new evaluation metric better suited for fine-grained structural alignment. Experiments show that the fine-tuned models achieve 49% IoU and 85% Buffered IoU, vastly outperforming zero-shot baselines (<1% IoU), thereby laying the groundwork for building general-purpose AI tutors capable of step-by-step visual explanation.
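The summary describes a programmatic synthesis engine that pairs diagrams with pixel-level masks and referring expressions, but the engine itself is not shown here. As a minimal sketch of the idea, the snippet below samples a random triangle, rasterizes its sides into a diagram mask, and emits one (diagram, expression, target-mask) training triple. The vertex naming, expression templates, and rasterizer are illustrative assumptions, not the paper's actual engine.

```python
import random
import numpy as np

def draw_segment(mask, p0, p1):
    """Rasterize a thin segment into a boolean mask by densely sampling
    points along the line and rounding to the nearest pixel."""
    n = 2 * int(max(abs(p1[0] - p0[0]), abs(p1[1] - p0[1]))) + 1
    for t in np.linspace(0.0, 1.0, n):
        x = int(round(p0[0] + t * (p1[0] - p0[0])))
        y = int(round(p0[1] + t * (p1[1] - p0[1])))
        mask[y, x] = True
    return mask

def make_sample(size=64, seed=0):
    """Return one (diagram, referring expression, target mask) triple
    for a random triangle ABC. Templates and shape vocabulary are
    invented for illustration; the paper's engine covers far more
    shapes and phrasings."""
    rng = random.Random(seed)
    pts = {v: (rng.randint(4, size - 5), rng.randint(4, size - 5))
           for v in "ABC"}
    diagram = np.zeros((size, size), dtype=bool)
    sides = [("A", "B"), ("B", "C"), ("C", "A")]
    for a, b in sides:
        draw_segment(diagram, pts[a], pts[b])
    a, b = rng.choice(sides)  # the element the expression refers to
    target = draw_segment(np.zeros_like(diagram), pts[a], pts[b])
    phrase = rng.choice(["side {}{}", "the segment from {} to {}",
                         "edge {}{} of the triangle"]).format(a, b)
    return diagram, phrase, target
```

Because the target mask is rendered with the same deterministic rasterizer as the diagram, its pixels are guaranteed to be a subset of the diagram's pixels, which is what makes the supervision "pixel-perfect" without any manual annotation.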
📝 Abstract
We study visual explanation in geometry education as a Referring Image Segmentation (RIS) problem: given a diagram and a natural language description, the task is to produce a pixel-level mask for the referred geometric element. However, existing RIS models trained on natural-image benchmarks such as RefCOCO fail catastrophically on geometric diagrams due to the fundamental domain shift between photographic scenes and abstract, textureless schematics. To address the absence of suitable training data, we present a fully automated procedural data engine that generates over 200,000 synthetic geometry diagrams with pixel-perfect segmentation masks and linguistically diverse referring expressions, requiring zero manual annotation. We introduce Buffered IoU (BIoU), a geometry-aware evaluation metric that accounts for thin-structure localization, and show that it better reflects true segmentation quality than standard IoU. We further propose domain-specific fine-tuning of vision-language models (VLMs), demonstrating that a fine-tuned Florence-2 achieves 49% IoU and 85% BIoU, compared to <1% IoU in zero-shot settings. Our results establish a foundation for building Artificial General Teachers (AGTs) capable of providing visually grounded, step-by-step explanations of geometry problems.
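The abstract does not spell out how Buffered IoU is computed. A plausible formulation, sketched below under that assumption, dilates both masks by a small pixel tolerance before computing standard IoU, so a thin stroke predicted a few pixels off its true location still scores overlap. The `buffer_px` value and the exact definition are assumptions, not taken from the paper.

```python
import numpy as np

def dilate(mask, r):
    """Dilate a boolean mask with a (2r+1)x(2r+1) square structuring
    element, implemented by OR-ing shifted copies of the mask."""
    out = np.zeros_like(mask)
    h, w = mask.shape
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] |= \
                mask[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    return out

def iou(pred, gt):
    """Standard intersection-over-union of two boolean masks."""
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 0.0

def buffered_iou(pred, gt, buffer_px=2):
    """IoU after buffering both masks by a pixel tolerance.
    Illustrative sketch of the idea, not the paper's exact metric."""
    return iou(dilate(pred, buffer_px), dilate(gt, buffer_px))
```

On two parallel one-pixel lines one pixel apart, standard IoU is exactly 0 while the buffered variant is well above 0.5, which illustrates why a thin-structure-aware metric can diverge so sharply from plain IoU on schematic diagrams.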
Problem

Research questions and friction points this paper is trying to address.

referring image segmentation
geometry education
domain shift
visual grounding
vision-language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

procedural data generation
referring image segmentation
vision-language models
geometry education
Buffered IoU