🤖 AI Summary
This paper addresses the "AI oversight" challenge, the difficulty humans face in evaluating and supervising high-capability language models (LMs), by showing that increasing model capability induces error convergence, which undermines the reliability of LLM-as-a-judge evaluation. The authors propose a probabilistic similarity metric based on overlap in model errors and systematically analyze error distributions across more than 100 model pairs on diverse tasks. Their analysis identifies similarity between the judge and the evaluated model as a driver of supervision bias, generalizing recent self-preference results. They also provide empirical evidence that knowledge complementarity between weaker supervisors and stronger student models underpins gains from weak-to-strong generalization. Beyond diagnosing the origin of self-preference bias, the work argues for reporting and correcting for model similarity, advancing both the theoretical foundations and practical methodology of trustworthy AI oversight.
📝 Abstract
As Language Model (LM) capabilities advance, evaluating and supervising them at scale is getting harder for humans. There is hope that other language models can automate both these tasks, which we refer to as "AI Oversight". We study how model similarity affects both aspects of AI oversight by proposing a probabilistic metric for LM similarity based on overlap in model mistakes. Using this metric, we first show that LLM-as-a-judge scores favor models similar to the judge, generalizing recent self-preference results. Then, we study training on LM annotations, and find that complementary knowledge between the weak supervisor and strong student model plays a crucial role in gains from "weak-to-strong generalization". As model capabilities increase, it becomes harder to find their mistakes, and we might defer more to AI oversight. However, we observe a concerning trend -- model mistakes are becoming more similar with increasing capabilities, pointing to risks from correlated failures. Our work underscores the importance of reporting and correcting for model similarity, especially in the emerging paradigm of AI oversight.
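The abstract's core idea, a similarity metric based on overlap in model mistakes, can be illustrated with a small sketch. The paper's exact probabilistic metric is not reproduced here; the version below is a Cohen's-kappa-style, chance-adjusted agreement score over per-example correctness, which captures the same intuition: two models are "similar" when they agree (especially on errors) more often than their accuracies alone would predict. The function name and interface are illustrative assumptions, not the paper's implementation.

```python
def error_overlap_similarity(correct_a, correct_b):
    """Chance-adjusted agreement between two models' per-example correctness.

    correct_a, correct_b: equal-length sequences of booleans, one entry per
    evaluation example (True = model answered correctly).

    Returns a kappa-style score: 0 means the models agree no more than
    expected from their accuracies alone; 1 means identical error patterns.
    NOTE: this is an illustrative sketch, not the paper's actual metric.
    """
    assert len(correct_a) == len(correct_b) and len(correct_a) > 0
    n = len(correct_a)
    # Observed agreement: both right or both wrong on the same example.
    observed = sum(a == b for a, b in zip(correct_a, correct_b)) / n
    # Expected agreement by chance, from each model's marginal accuracy.
    acc_a = sum(correct_a) / n
    acc_b = sum(correct_b) / n
    expected = acc_a * acc_b + (1 - acc_a) * (1 - acc_b)
    if expected == 1.0:  # degenerate case: agreement is guaranteed
        return 1.0
    return (observed - expected) / (1 - expected)
```

Under this sketch, a judge and a candidate model with a high score share error patterns beyond what their accuracies predict, which is exactly the situation in which LLM-as-a-judge evaluation risks rewarding similarity rather than quality.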