🤖 AI Summary
Current claims regarding Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI) lack rigorous evaluation frameworks that resist benchmark contamination. Method: We propose SuperARC, an open-ended intelligence test grounded in recursion theory and the first principles of algorithmic probability. It introduces a novel intelligence metric based on Kolmogorov complexity (rather than statistical compression) and a unified evaluation framework that combines model abstraction with optimal Bayesian planning, emphasising compositional reasoning and inverse problem solving (e.g., inferring generative models from observations). Contribution/Results: Empirical evaluation reveals no convergence trend among mainstream large language models (LLMs) on core AGI/ASI dimensions; progress across versions is fragile and incremental, with newer versions sometimes performing worse than older ones. In contrast, a theoretically grounded neurosymbolic method significantly outperforms LLMs in a proof-of-concept on short binary sequences. SuperARC thus provides a falsifiable, contamination-resistant paradigm for assessing the fundamental nature of intelligence.
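To make the distinction between statistical compression and Kolmogorov complexity concrete, here is a minimal, self-contained illustration (our own sketch, not part of the SuperARC benchmark): the binary expansion of √2 is generated by a one-line program, so its Kolmogorov complexity is tiny, yet a Shannon-style compressor such as zlib (DEFLATE, the algorithm behind GZIP) treats it as nearly incompressible because it is statistically patternless.

```python
import math
import os
import zlib

def pack_bits(bits: str) -> bytes:
    """Pack a '0'/'1' string into raw bytes so the compressor sees bits, not ASCII."""
    return int(bits, 2).to_bytes((len(bits) + 7) // 8, "big")

n = 4096  # number of bits per test string

periodic = "01" * (n // 2)  # trivially regular: low entropy AND low complexity
# First n bits of sqrt(2): produced by a tiny program (low Kolmogorov
# complexity) yet statistically patternless (near-maximal entropy).
sqrt2_bits = bin(math.isqrt(2 << (2 * n)))[2:][:n]
random_bits = format(int.from_bytes(os.urandom(n // 8), "big"), f"0{n}b")

for name, s in [("periodic", periodic), ("sqrt(2)", sqrt2_bits), ("random", random_bits)]:
    raw = pack_bits(s)
    print(f"{name:>8}: raw {len(raw)} bytes -> zlib {len(zlib.compress(raw, 9))} bytes")
```

On a typical run the periodic string compresses to a few dozen bytes, while both the √2 bits and the random bits stay near their raw size: the compressor tracks statistical regularity (entropy), not program-size complexity, which is why the test avoids compression-based proxies.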
📝 Abstract
We introduce an open-ended test grounded in algorithmic probability that can avoid benchmark contamination in the quantitative evaluation of frontier models in the context of their Artificial General Intelligence (AGI) and Superintelligence (ASI) claims. Unlike other tests, this test does not rely on statistical compression methods (such as GZIP or LZW), which are more closely related to Shannon entropy than to Kolmogorov complexity. The test challenges fundamental features of intelligence, such as synthesis and model creation, in the context of inverse problems (generating new knowledge from observation). We argue that metrics based on model abstraction and optimal Bayesian inference for planning can provide a robust framework for testing intelligence, including natural intelligence (human and animal), narrow AI, AGI, and ASI. Our results show no clear evidence of LLM convergence towards a defined level of intelligence, particularly AGI or ASI. We found that LLM versions tend to be fragile and incremental: new versions may perform worse than older ones, with progress largely driven by the size of the training data. The results were compared with a hybrid neurosymbolic approach that theoretically guarantees model convergence from optimal inference based on the principles of algorithmic probability and Kolmogorov complexity; this method outperforms LLMs in a proof-of-concept on short binary sequences. Our findings confirm suspicions regarding the fundamental limitations of LLMs, exposing them as systems optimised for the perception of mastery over human language. Progress among different LLM versions from the same developers was found to be inconsistent and limited, particularly in the absence of a solid symbolic counterpart.
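As a toy illustration of the inverse problem (inferring a generative model from an observed sequence) and of algorithmic-probability-based prediction, the sketch below enumerates a small, deliberately non-universal program class, order-k binary recurrences in which the next bit is a table lookup of the previous k bits, and lets every program that reproduces the observation vote on the next bit with weight 2^-(description length). This is our own minimal sketch of the Solomonoff-style idea, not the paper's neurosymbolic method; the program class and names such as `predict_next` are illustrative assumptions.

```python
from itertools import product

def run_program(seed, table, k, n):
    """Generate n bits: start from the k-bit seed, then each next bit is
    table[last k bits] (the truth table of an order-k binary recurrence)."""
    bits = list(seed)
    while len(bits) < n:
        context = int("".join(map(str, bits[-k:])), 2)
        bits.append(table[context])
    return bits[:n]

def predict_next(observed, max_k=3):
    """Toy Solomonoff-style prediction: every program (seed, table) whose
    description length is k + 2**k bits and which reproduces the observed
    prefix votes on the next bit with weight 2**-(description length)."""
    votes = [0.0, 0.0]
    for k in range(1, max_k + 1):
        weight = 2.0 ** -(k + 2 ** k)
        for seed in product((0, 1), repeat=k):
            for table in product((0, 1), repeat=2 ** k):
                out = run_program(seed, table, k, len(observed) + 1)
                if out[: len(observed)] == observed:
                    votes[out[-1]] += weight
    total = votes[0] + votes[1]
    return None if total == 0 else votes[1] / total  # P(next bit = 1)

obs = [0, 1, 1, 0, 1, 1, 0, 1]  # prefix of the period-3 pattern 011 011 ...
print(f"P(next=1 | {obs}) = {predict_next(obs)}")
```

For this period-3 observation all surviving programs agree on the continuation, so the predictor assigns probability 1 to the next bit being 1: it has effectively recovered a short generative model of the data rather than a statistical regularity, which is the sense of "model creation" the test is designed to probe.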