🤖 AI Summary
This work investigates scalable oversight—the setting in which a weaker AI supervises a stronger one—under capability asymmetry. Method: oversight is modeled as a game between capability-mismatched players whose oversight-specific and deception-specific Elo scores are piecewise-linear functions of their general intelligence, with two plateaus corresponding to task incompetence and task saturation; the authors further introduce Nested Scalable Oversight (NSO) theory to characterize optimal hierarchical oversight structures and their success conditions. Contribution/Results: after validating the framework on a modified version of Nim, the authors fit scaling laws in four oversight games (Mafia, Debate, Backdoor Code, and Wargames) relating domain performance to general capability, using Chatbot Arena Elo as a proxy. In their numerical examples, NSO success falls below 52% when the overseen system is 400 Elo points stronger than the baseline overseer, and declines further as the capability gap grows. Combining game-theoretic analysis, numerical optimization, and analytical derivation, the framework provides a quantitative, theoretically grounded foundation for scalable AI oversight.
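The two ingredients of the model can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the parameter names (`floor`, `ceiling`, `slope`, `intercept`) and their values are hypothetical; the paper fits game-specific piecewise-linear curves. The win probability is the standard Elo expected-score formula.

```python
def domain_elo(general_elo, floor=800.0, ceiling=1600.0, slope=1.0, intercept=0.0):
    """Map general capability (e.g. Chatbot Arena Elo) to a domain-specific
    (oversight or deception) Elo: linear in the middle, clamped to a lower
    plateau (task incompetence) and an upper plateau (task saturation)."""
    return max(floor, min(ceiling, slope * general_elo + intercept))

def oversight_success_prob(overseer_elo, overseen_elo):
    """Standard Elo win probability: chance the overseer wins the game."""
    return 1.0 / (1.0 + 10 ** ((overseen_elo - overseer_elo) / 400.0))
```

With this formula, equal domain Elos give a 50% success rate, and a 400-point deficit drops the overseer to about 9%.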
📝 Abstract
Scalable oversight, the process by which weaker AI systems supervise stronger ones, has been proposed as a key strategy to control future superintelligent systems. However, it is still unclear how scalable oversight itself scales. To address this gap, we propose a framework that quantifies the probability of successful oversight as a function of the capabilities of the overseer and the system being overseen. Specifically, our framework models oversight as a game between capability-mismatched players; the players have oversight-specific and deception-specific Elo scores that are a piecewise-linear function of their general intelligence, with two plateaus corresponding to task incompetence and task saturation. We validate our framework with a modified version of the game Nim and then apply it to four oversight games: "Mafia", "Debate", "Backdoor Code", and "Wargames". For each game, we find scaling laws that approximate how domain performance depends on general AI system capability (using Chatbot Arena Elo as a proxy for general capability). We then build on our findings in a theoretical study of Nested Scalable Oversight (NSO), a process in which trusted models oversee untrusted stronger models, which then become the trusted models in the next step. We identify conditions under which NSO succeeds and derive numerically (and in some cases analytically) the optimal number of oversight levels to maximize the probability of oversight success. In our numerical examples, the NSO success rate is below 52% when overseeing systems that are 400 Elo points stronger than the baseline overseer, and it declines further for overseeing even stronger systems.
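The NSO optimization described above—choosing how many bootstrapping levels to insert between the trusted baseline and the final overseen model—can be illustrated with a toy model. Everything here is an assumption for illustration: the per-step success uses the plain Elo formula plus a hypothetical fixed `overseer_bonus` (a stand-in for the overseer's game-specific advantage; the paper instead uses its fitted piecewise-linear curves), and the total Elo gap is split into equal steps.

```python
def step_success(elo_gap, overseer_bonus=0.0):
    """Per-step oversight success from the standard Elo win formula.
    `overseer_bonus` is a hypothetical fixed domain-Elo edge for the
    overseer at each step (illustrative, not from the paper)."""
    return 1.0 / (1.0 + 10 ** ((elo_gap - overseer_bonus) / 400.0))

def nso_success(total_gap, n_steps, overseer_bonus=0.0):
    """Probability that all n equal bootstrapping steps succeed,
    assuming independent steps."""
    return step_success(total_gap / n_steps, overseer_bonus) ** n_steps

# With no per-step edge, one step is optimal (splitting only multiplies
# sub-even odds); with an edge, an intermediate number of levels wins.
best_n = max(range(1, 21), key=lambda n: nso_success(800.0, n, overseer_bonus=400.0))
```

In this toy setting, jumping an 800-point gap in one step succeeds about 9% of the time, while the optimal split into several levels does markedly better, mirroring the paper's finding that the best number of oversight levels depends on the capability gap.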