🤖 AI Summary
Evaluation of large language models (LLMs) currently lacks objective, unsupervised, and overfitting-resistant methods for truth-defined tasks, such as machine translation and logical reasoning, where ground-truth labels are unambiguous yet costly or infeasible to obtain manually.
Method: This paper introduces the first self-extensible evaluation benchmark, a fully automated paradigm that combines formal logical modeling, programmatic task synthesis, and verifiable ground-truth generation. Through automated task generation, deterministic truth derivation, and stochastic dataset construction, it achieves adaptive difficulty scaling, zero human annotation, and strong generalization.
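The core loop of programmatic task synthesis with verifiable ground truth can be illustrated with a minimal sketch: randomly generate propositional formulas at a chosen depth (the difficulty knob), then derive the label deterministically by brute-force truth-table evaluation. This is an illustrative toy, not the paper's actual implementation; all function names here are hypothetical.

```python
import itertools
import random

def random_formula(depth, variables):
    """Recursively build a random propositional formula as a nested tuple.

    Depth acts as a difficulty parameter: deeper formulas are harder tasks.
    """
    if depth == 0:
        return random.choice(variables)
    op = random.choice(["and", "or", "not"])
    if op == "not":
        return ("not", random_formula(depth - 1, variables))
    return (op,
            random_formula(depth - 1, variables),
            random_formula(depth - 1, variables))

def evaluate(formula, assignment):
    """Deterministically evaluate a formula under a truth assignment."""
    if isinstance(formula, str):
        return assignment[formula]
    if formula[0] == "not":
        return not evaluate(formula[1], assignment)
    if formula[0] == "and":
        return evaluate(formula[1], assignment) and evaluate(formula[2], assignment)
    return evaluate(formula[1], assignment) or evaluate(formula[2], assignment)

def ground_truth(formula, variables):
    """Derive a verifiable label by exhaustive truth-table enumeration."""
    results = [
        evaluate(formula, dict(zip(variables, values)))
        for values in itertools.product([True, False], repeat=len(variables))
    ]
    if all(results):
        return "tautology"
    if not any(results):
        return "contradiction"
    return "contingent"
```

Because the label is computed, not annotated, every freshly sampled formula comes with guaranteed-correct ground truth, and resampling yields a new randomized dataset each run.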
Contribution/Results: (1) It overcomes the limitations of static benchmarks by enabling continuous, dynamic evaluation; (2) Empirical validation shows high correlation (average Spearman’s ρ > 0.92) between its scores and those of established translation and reasoning benchmarks, significantly improving assessment efficiency and generalizability—particularly in low-resource and unsupervised settings.
📝 Abstract
This paper presents $\forall$uto$\exists\lor\!\land$L, a novel benchmark for scaling Large Language Model (LLM) assessment in formal tasks with clear notions of correctness, such as truth maintenance in translation and logical reasoning. $\forall$uto$\exists\lor\!\land$L is the first benchmarking paradigm that offers several key advantages necessary for scaling objective evaluation of LLMs without human labeling: (a) ability to evaluate LLMs of increasing sophistication by auto-generating tasks at different levels of difficulty; (b) auto-generation of ground truth that eliminates dependence on expensive and time-consuming human annotation; (c) the use of automatically generated, randomized datasets that mitigate the ability of successive LLMs to overfit to static datasets used in many contemporary benchmarks. Empirical analysis shows that an LLM's performance on $\forall$uto$\exists\lor\!\land$L is highly indicative of its performance on a diverse array of other benchmarks focusing on translation and reasoning tasks, making it a valuable autonomous evaluation paradigm in settings where hand-curated datasets can be hard to obtain and/or update.
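The "truth maintenance" check described above can be grounded automatically: if a model translates a formula to natural language and back, the round trip is correct exactly when the original and reconstructed formulas are logically equivalent, which is decidable by comparing them under every truth assignment. The sketch below (hypothetical names, nested-tuple formula encoding assumed, not the paper's code) shows such a verifier:

```python
from itertools import product

def evaluate(formula, env):
    """Evaluate a nested-tuple propositional formula under an assignment."""
    if isinstance(formula, str):
        return env[formula]
    op, *args = formula
    if op == "not":
        return not evaluate(args[0], env)
    values = [evaluate(arg, env) for arg in args]
    return all(values) if op == "and" else any(values)

def truth_maintained(original, reconstructed, variables):
    """True iff the round-trip translation preserved meaning, i.e. the two
    formulas agree under every possible truth assignment."""
    return all(
        evaluate(original, dict(zip(variables, values)))
        == evaluate(reconstructed, dict(zip(variables, values)))
        for values in product([True, False], repeat=len(variables))
    )
```

Scoring reduces to counting how often `truth_maintained` holds over sampled formulas, requiring no human judgment of the intermediate natural-language text.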