🤖 AI Summary
Current AGI evaluation lacks a unified cross-modal framework that jointly characterizes task difficulty and model (or human) capability, hindering systematic analysis of capability gaps and long-tail challenges across vision, language, and action domains. To address this, we propose the first Elo-based dynamic rating system for joint cross-modal (vision–language–action) assessment, moving beyond one-dimensional accuracy metrics. Our method integrates multi-source benchmarks, including VQA, RLBench, and BIG-bench, via Bayesian modeling and an iterative adversarial scoring algorithm, enabling fine-grained, difficulty-aware, bidirectional evaluation of both tasks and models. The system produces interpretable difficulty–capability distribution maps and quantifies the gap between current models and full task mastery. Extensive validation across diverse AGI scenarios demonstrates robustness and strong generalization.
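To give a rough sense of what a joint model–task rating update of this kind can look like, here is a minimal Python sketch. The constants (`K_FACTOR`, `SCALE`), the function names, and the symmetric logistic update rule are illustrative assumptions borrowed from classic Elo; they are not the paper's exact algorithm, which additionally relies on Bayesian modeling and iterative adversarial scoring.

```python
import math

# Hypothetical logistic Elo parameters (illustrative, not from the paper).
K_FACTOR = 16.0   # step size of each rating update
SCALE = 400.0     # logistic scale, as in classic Elo

def expected_score(model_rating: float, task_rating: float) -> float:
    """Probability that the model solves the task, given both ratings."""
    return 1.0 / (1.0 + 10 ** ((task_rating - model_rating) / SCALE))

def update_ratings(model_rating: float, task_rating: float, solved: bool):
    """One 'match' between a model and a single test case.

    The model gains rating when it solves a difficult task; the task gains
    rating (i.e., estimated difficulty) when it defeats a strong model.
    """
    expected = expected_score(model_rating, task_rating)
    outcome = 1.0 if solved else 0.0
    model_rating += K_FACTOR * (outcome - expected)
    task_rating += K_FACTOR * (expected - outcome)
    return model_rating, task_rating

# Example: a 1500-rated model fails a 1400-rated VQA item,
# so the model's rating drops and the item's difficulty rating rises.
m, t = update_ratings(1500.0, 1400.0, solved=False)
```

Running such updates over many model–task interactions yields the two rating populations (model competency and task difficulty) that the system compares.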
📝 Abstract
As the field progresses toward Artificial General Intelligence (AGI), there is a pressing need for more comprehensive and insightful evaluation frameworks that go beyond aggregate performance metrics. This paper introduces a unified rating system that jointly models the difficulty of individual test cases and the competency of AI models (or humans) across vision, language, and action domains. Unlike existing metrics that focus solely on models, our approach allows for fine-grained, difficulty-aware evaluations through competitive interactions between models and tasks, capturing both the long-tail distribution of real-world challenges and the competency gap between current models and full task mastery. We validate the generalizability and robustness of our system through extensive experiments on multiple established datasets and models across distinct AGI domains. The resulting rating distributions offer novel perspectives and interpretable insights into task difficulty, model progression, and the outstanding challenges that remain on the path to achieving full AGI task mastery.
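To make the notion of a competency gap to full task mastery more concrete, the sketch below shows one plausible way such a gap could be read off from fitted ratings: a model's expected failure rate over the empirical task-difficulty distribution. The `mastery_gap` function, the logistic Elo link, and the sampled long-tailed difficulty distribution are hypothetical illustrations, not the paper's own definition or data.

```python
import numpy as np

def mastery_gap(model_rating: float, task_ratings: np.ndarray, scale: float = 400.0) -> float:
    """Illustrative gap measure: the model's expected failure rate over the
    empirical task-difficulty distribution (1.0 = masters nothing,
    0.0 = full task mastery), using a logistic Elo-style link."""
    p_solve = 1.0 / (1.0 + 10 ** ((task_ratings - model_rating) / scale))
    return float(1.0 - p_solve.mean())

# Example with hypothetical ratings: a long-tailed (Gumbel) difficulty
# distribution and a model rated above the typical task.
rng = np.random.default_rng(0)
task_ratings = rng.gumbel(loc=1400.0, scale=150.0, size=10_000)
print(mastery_gap(1650.0, task_ratings))
```

Plotting the two fitted rating distributions on a shared axis gives the difficulty–capability maps described above, with the residual overlap quantifying how far current models remain from full task mastery.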