🤖 AI Summary
This work addresses the conceptual ambiguity and limited practical guidance in current notions of Artificial General Intelligence (AGI). It proposes a new paradigm, Superhuman Adaptable Intelligence (SAI), which shifts the focus away from pursuing ill-defined generality toward achieving superhuman performance on specific tasks while compensating for human cognitive blind spots. Drawing on insights from cognitive science and AI theory, the paper establishes a conceptual framework and evaluation criteria for SAI that are independent of any particular algorithmic implementation. This reorientation reframes the objectives of AI development, clarifies key confusions in the AGI discourse, and offers a clear, actionable pathway for future research and policy formulation.
📝 Abstract
Everyone from AI executives and researchers to doomsayers, politicians, and activists is talking about Artificial General Intelligence (AGI). Yet they often don't seem to agree on its exact definition. One common definition of AGI is an AI that can do everything a human can do, but are humans truly general? In this paper, we address what is wrong with our conception of AGI and why, even in its most coherent formulation, it is a flawed concept for describing the future of AI. We examine whether the most widely accepted definitions are plausible, useful, and truly general. We argue that AI should embrace specialization rather than strive for generality, and within that specialization aim for superhuman performance; to this end we introduce Superhuman Adaptable Intelligence (SAI). SAI is defined as intelligence that can learn to exceed humans at anything important that we can do, and that can fill in the skill gaps where humans fall short. We then lay out how SAI can sharpen a discussion of AI that has been blurred by an overloaded definition of AGI, and extrapolate the implications of using it as a guide for the future.