🤖 AI Summary
Automated formalization of natural language statements lacks reliable, efficient, and semantically sensitive evaluation metrics. To address this, we propose GTED, a novel framework that (1) standardizes formalized statements, (2) constructs operator trees capturing their logical structure, and (3) computes a Generalized Tree Edit Distance via dynamic programming to quantify semantic similarity. GTED is the first structure-aware tree edit distance metric tailored to formalization quality assessment, overcoming key limitations of prior approaches: weak semantic understanding, high computational overhead, and dependence on theorem provers. On the miniF2F and ProofNet benchmarks, GTED outperforms all baseline metrics, achieving the highest accuracy and Cohen's Kappa and offering substantial gains in evaluation fidelity and practical utility.
📝 Abstract
Statement autoformalization, the automated translation of statements from natural language into formal languages, has become a subject of extensive research, yet the development of robust automated evaluation metrics remains limited. Existing evaluation methods often lack semantic understanding, incur high computational costs, and are constrained by the current progress of automated theorem proving. To address these issues, we propose GTED (Generalized Tree Edit Distance), a novel evaluation framework that first standardizes formal statements and converts them into operator trees, then measures semantic similarity using the eponymous GTED metric. On the miniF2F and ProofNet benchmarks, GTED outperforms all baseline metrics, achieving the highest accuracy and Kappa scores, thus providing the community with a more faithful metric for automated evaluation. The code and experimental results are available at https://github.com/XiaoyangLiu-sjtu/GTED.
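To make the core idea concrete, here is a minimal sketch of a tree edit distance computed by dynamic programming over operator trees. This is an illustration only: the node representation, costs, and the simplified top-down recursion are assumptions for this sketch (it is not the paper's actual GTED algorithm, nor the full Zhang-Shasha tree edit distance), but it shows how structural differences between two formal statements translate into an edit cost.

```python
from functools import lru_cache

# Hypothetical operator-tree encoding: a node is (label, (child, child, ...)).
# Unit costs are assumed: 1 per relabel, 1 per inserted or deleted node.

@lru_cache(maxsize=None)
def size(t):
    """Number of nodes in a subtree (cost of inserting/deleting it wholesale)."""
    return 1 + sum(size(c) for c in t[1])

@lru_cache(maxsize=None)
def tree_dist(a, b):
    """Edit distance between two ordered trees (simplified top-down variant):
    relabel the roots if needed, then optimally align the child sequences."""
    relabel = 0 if a[0] == b[0] else 1
    return relabel + seq_dist(a[1], b[1])

@lru_cache(maxsize=None)
def seq_dist(xs, ys):
    """Levenshtein-style alignment of two child sequences, where matching a
    pair of children recursively costs their tree distance."""
    if not xs:
        return sum(size(y) for y in ys)   # insert all remaining subtrees
    if not ys:
        return sum(size(x) for x in xs)   # delete all remaining subtrees
    return min(
        tree_dist(xs[0], ys[0]) + seq_dist(xs[1:], ys[1:]),  # match/relabel
        size(xs[0]) + seq_dist(xs[1:], ys),                  # delete subtree
        size(ys[0]) + seq_dist(xs, ys[1:]),                  # insert subtree
    )

# Illustrative operator trees for "a + b" and "b + a":
t1 = ('add', (('a', ()), ('b', ())))
t2 = ('add', (('b', ()), ('a', ())))
print(tree_dist(t1, t1), tree_dist(t1, t2))  # 0 2
```

Note that because the trees are ordered, `a + b` and `b + a` are two edits apart; a production metric would need standardization (as GTED's first step performs) or commutativity-aware costs to treat such pairs as equivalent.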