🤖 AI Summary
This work addresses the semantic gap between pixel-level vision models and human symbolic understanding in symbolic visual learning. Methodologically, it proposes a self-supervised symbolic autoencoder framework that (1) parses diagrams into geometric primitives (points, lines, and shapes) and their structural relations, mapping them to an interpretable latent symbolic space; (2) introduces a hierarchical process-reward modeling mechanism with point-line-shape consistency constraints and stabilized exploration, enabling end-to-end reconstruction via an executable engine; and (3) refines the model through fine-tuning into a neuro-symbolic system driven by reasoning-grounded visual reward signals. Experiments demonstrate substantial improvements: a 98.2% reduction in MSE for geometric diagram reconstruction; chart reconstruction accuracy 0.6% above GPT-4o with only a 7B model; a 13% gain on the MathGlance perception benchmark; and 3% absolute gains on both the MathVerse and GeoQA reasoning benchmarks. The approach advances interpretable, executable symbolic visual understanding.
📝 Abstract
Symbolic computer vision represents diagrams through explicit logical rules and structured representations, enabling interpretable understanding in machine vision. This requires a fundamentally different learning paradigm from pixel-based visual models: symbolic visual learners parse diagrams into geometric primitives (points, lines, and shapes), whereas pixel-based learners operate on textures and colors. We propose a novel self-supervised symbolic autoencoder that encodes diagrams into structured primitives and their interrelationships within the latent space, and decodes them through our executable engine to reconstruct the input diagrams. Central to this architecture is Symbolic Hierarchical Process Reward Modeling, which applies hierarchical step-level parsing rewards to enforce point-on-line, line-on-shape, and shape-on-relation consistency. Because vanilla reinforcement learning exhibits poor exploration of the policy space during diagram reconstruction, we introduce stabilization mechanisms to balance exploration and exploitation. We fine-tune our symbolic encoder on downstream tasks, developing a neuro-symbolic system that integrates the reasoning capabilities of neural networks with the interpretability of symbolic models through reasoning-grounded visual rewards. Evaluations across reconstruction, perception, and reasoning tasks demonstrate the effectiveness of our approach: a 98.2% reduction in MSE for geometric diagram reconstruction, chart reconstruction that surpasses GPT-4o by 0.6% with a 7B model, a +13% improvement on the MathGlance perception benchmark, and +3% on the MathVerse and GeoQA reasoning benchmarks.
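The abstract's core loop — encode a diagram into symbolic primitives, decode through an executable engine, and score both pixel reconstruction and symbolic consistency — can be illustrated with a deliberately toy sketch. Everything below is an assumption for illustration: the `Point`/`Line` latent schema, the `render` engine, and the `point_on_line_reward` consistency check are hypothetical simplifications, not the paper's actual architecture or API.

```python
from dataclasses import dataclass

# Toy symbolic latent space: a diagram as named points plus lines that
# reference those points by name (illustrative schema, not the paper's).

@dataclass(frozen=True)
class Point:
    name: str
    x: float
    y: float

@dataclass(frozen=True)
class Line:
    a: str  # endpoint name, must resolve to a parsed Point
    b: str

def render(points, lines, size=8):
    """Toy 'executable engine': rasterize line segments onto a grid."""
    grid = [[0.0] * size for _ in range(size)]
    index = {p.name: p for p in points}
    for ln in lines:
        p, q = index[ln.a], index[ln.b]
        for t in range(size * 4 + 1):  # sample densely along the segment
            s = t / (size * 4)
            x = p.x + s * (q.x - p.x)
            y = p.y + s * (q.y - p.y)
            grid[min(size - 1, round(y))][min(size - 1, round(x))] = 1.0
    return grid

def mse(a, b):
    """Pixel-level reconstruction error between two rasters."""
    n = len(a) * len(a[0])
    return sum((a[i][j] - b[i][j]) ** 2
               for i in range(len(a)) for j in range(len(a[0]))) / n

def point_on_line_reward(points, lines):
    """One level of a hierarchical step reward: fraction of line endpoints
    that resolve to parsed points (symbolic consistency, not pixel match)."""
    names = {p.name for p in points}
    refs = [n for ln in lines for n in (ln.a, ln.b)]
    return sum(n in names for n in refs) / max(1, len(refs))

# Round trip: given latent symbols, execute the engine and score the decode.
pts = [Point("A", 0, 0), Point("B", 7, 7), Point("C", 7, 0)]
lns = [Line("A", "B"), Line("B", "C")]
target = render(pts, lns)
recon = render(pts, lns)  # a perfect decode of the latent symbols
print(mse(target, recon))              # → 0.0
print(point_on_line_reward(pts, lns))  # → 1.0
```

The separation matters: the MSE term only measures whether the executed drawing matches the input pixels, while the consistency reward penalizes structurally invalid parses (e.g. a line referencing a point that was never emitted) even when they happen to render plausibly — which is the motivation the abstract gives for step-level rather than outcome-only rewards.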