🤖 AI Summary
Existing large vision-language models exhibit limited performance in analyzing the evolution of ancient Chinese character forms, primarily due to insufficient modeling of glyph recognition and evolutionary reasoning. To address this gap, this work introduces GEVO, a glyph-driven fine-tuning framework that explicitly enforces glyph evolution consistency to enhance the model's understanding of ancient character transformation patterns. The study also presents the first multimodal benchmark for this domain, comprising over 130,000 samples across 11 distinct tasks. Experimental results demonstrate that even a 2-billion-parameter model fine-tuned with GEVO achieves consistent and significant performance gains across all tasks. Both the benchmark dataset and the trained models are publicly released to facilitate further research.
📝 Abstract
In recent years, rapid advances in Multimodal Large Language Models (MLLMs) have increasingly stimulated research on ancient Chinese scripts. As the evolution of written characters constitutes a fundamental pathway for understanding cultural transformation and historical continuity, how MLLMs can be systematically leveraged to support and advance script evolution analysis remains an open and largely underexplored problem. To bridge this gap, we construct a comprehensive benchmark comprising 11 tasks and over 130,000 instances, specifically designed to evaluate the capability of MLLMs in analyzing the evolution of ancient Chinese scripts. We conduct extensive evaluations across multiple widely used MLLMs and observe that, while existing models demonstrate some ability in glyph-level comparison, their performance on core tasks, such as character recognition and evolutionary reasoning, remains substantially constrained. Motivated by these findings, we propose a glyph-driven fine-tuning framework (GEVO) that explicitly encourages models to capture evolutionary consistency in glyph transformations and enhances their understanding of script evolution. Experimental results show that even models at the 2B scale achieve consistent and comprehensive performance improvements across all evaluated tasks. To facilitate future research, we publicly release both the benchmark and the trained models (https://github.com/songruiecho/GEVO).