🤖 AI Summary
Existing LLM evaluation suffers from fairness issues, poor scalability, and data contamination. To address these challenges, we propose Teach2Eval, an indirect evaluation framework grounded in pedagogical capability: it prompts LLMs to act as "teachers" that instruct weaker student models on target tasks, then automatically converts the resulting teaching feedback into standardized multiple-choice questions (MCQs). This yields dynamic, contamination-resistant, and scalable automated assessment. By using teaching efficacy as a proxy for cognitive ability, Teach2Eval overcomes the limitations of static benchmarks while remaining fair and interpretable, and it captures cognitive dimensions orthogonal to current benchmarks. Evaluated on 26 mainstream LLMs, Teach2Eval shows strong rank correlation with existing human and model-based dynamic rankings. It also provides fine-grained feedback for training, substantially improving the guidance value and interpretability of LLM evaluation.
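For intuition, here is a minimal sketch of the teach-then-test loop, with the three components left as abstract callables. The names (`teacher`, `student`, `to_mcq`, `teach2eval_gain`) are hypothetical stand-ins, and scoring the student's accuracy gain from the teacher's guidance is one plausible simplification of the framework, not the paper's actual multi-dimensional implementation.

```python
from typing import Callable, Dict, List

def ask_mcq(student: Callable[[str], str], mcq: Dict, guidance: str = "") -> bool:
    """Pose one MCQ to the student, optionally prefixed with teacher guidance."""
    prompt = f"Guidance from your teacher:\n{guidance}\n\n" if guidance else ""
    prompt += (
        f"Question: {mcq['question']}\n"
        + "\n".join(f"{label}. {opt}" for label, opt in zip("ABCD", mcq["options"]))
        + "\nAnswer with a single letter."
    )
    return student(prompt).strip().upper().startswith(mcq["answer"])

def teach2eval_gain(
    teacher: Callable[[str], str],       # teacher LLM: task -> teaching feedback
    student: Callable[[str], str],       # weaker student LLM: prompt -> answer
    to_mcq: Callable[[str, str], Dict],  # task + feedback -> MCQ dict
    tasks: List[str],
) -> float:
    """Score a teacher by the student's accuracy gain from its guidance."""
    baseline = taught = 0
    for task in tasks:
        guidance = teacher(f"Explain how to solve this task:\n{task}")
        mcq = to_mcq(task, guidance)  # e.g. {'question': ..., 'options': [...], 'answer': 'B'}
        baseline += ask_mcq(student, mcq)            # student alone
        taught += ask_mcq(student, mcq, guidance)    # student with teaching
    return (taught - baseline) / len(tasks)
```

Because the score depends on the student's improvement rather than on a fixed answer key, new MCQs can be generated per run, which is what makes the assessment dynamic and resistant to contamination.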
📝 Abstract
Recent progress in large language models (LLMs) has outpaced the development of effective evaluation methods. Traditional benchmarks rely on task-specific metrics and static datasets, which often suffer from fairness issues, limited scalability, and contamination risks. In this paper, we introduce Teach2Eval, an indirect evaluation framework inspired by the Feynman Technique. Instead of directly testing LLMs on predefined tasks, our method evaluates multiple abilities of a model through how effectively it teaches weaker student models to perform those tasks. By converting open-ended tasks into standardized multiple-choice questions (MCQs) through teacher-generated feedback, Teach2Eval enables scalable, automated, and multi-dimensional assessment. Our approach not only avoids data leakage and memorization but also captures a broad range of cognitive abilities that are orthogonal to current benchmarks. Experimental results across 26 leading LLMs show strong alignment with existing human and model-based dynamic rankings, while offering additional interpretability for training guidance.