Teach2Eval: An Indirect Evaluation Method for LLM by Judging How It Teaches

📅 2025-05-18
📈 Citations: 0 · Influential: 0
🤖 AI Summary
Existing LLM evaluation suffers from fairness issues, poor scalability, and data contamination. To address these challenges, we propose Teach2Eval, an indirect evaluation framework grounded in pedagogical capability: it prompts an LLM to act as a "teacher" instructing weaker student models on target tasks, then automatically converts the teaching outputs into standardized multiple-choice questions (MCQs), enabling dynamic, contamination-resistant, and scalable automated assessment. Teach2Eval uses teaching efficacy as a proxy for cognitive ability, sidestepping the limitations of static benchmarks while capturing abilities orthogonal to those measured by current benchmarks. Evaluated on 26 mainstream LLMs, it shows strong rank correlation with existing human and model-based dynamic rankings, and it additionally provides fine-grained training feedback, improving the interpretability and guidance value of LLM evaluation.
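To make the framework concrete, here is a minimal sketch of the teach-then-test loop in Python. Everything here is an assumption for illustration: `teacher` and `student` are hypothetical text-in/text-out callables, and scoring the teacher by the student's MCQ accuracy gain is one plausible reading of "teaching efficacy as a proxy metric", not the paper's exact formula.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class MCQ:
    question: str
    options: List[str]
    answer: str  # correct option letter, e.g. "B"

def teach2eval_score(teacher: Callable[[str], str],
                     student: Callable[[str], str],
                     items: List[MCQ]) -> float:
    """Teaching efficacy: the student's MCQ accuracy gain from the lesson."""
    baseline = taught = 0
    for mcq in items:
        # 1. The teacher LLM writes instructional feedback for the task.
        lesson = teacher(f"Teach a weaker model how to solve:\n{mcq.question}")
        opts = "\n".join(f"{chr(65 + i)}. {o}" for i, o in enumerate(mcq.options))
        # 2. The student answers the standardized MCQ without the lesson...
        baseline += int(student(f"{mcq.question}\n{opts}\nAnswer:")
                        .strip().startswith(mcq.answer))
        # 3. ...and again with the teacher's lesson prepended.
        taught += int(student(f"Lesson:\n{lesson}\n\n{mcq.question}\n{opts}\nAnswer:")
                      .strip().startswith(mcq.answer))
    # 4. Score the teacher by the accuracy the lesson adds.
    return (taught - baseline) / len(items)
```

Note that the paper reports multi-dimensional assessment across cognitive abilities, whereas this sketch collapses everything into a single gain score.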

📝 Abstract
Recent progress in large language models (LLMs) has outpaced the development of effective evaluation methods. Traditional benchmarks rely on task-specific metrics and static datasets, which often suffer from fairness issues, limited scalability, and contamination risks. In this paper, we introduce Teach2Eval, an indirect evaluation framework inspired by the Feynman Technique. Instead of directly testing LLMs on predefined tasks, our method evaluates a model's multiple abilities to teach weaker student models to perform tasks effectively. By converting open-ended tasks into standardized multiple-choice questions (MCQs) through teacher-generated feedback, Teach2Eval enables scalable, automated, and multi-dimensional assessment. Our approach not only avoids data leakage and memorization but also captures a broad range of cognitive abilities that are orthogonal to current benchmarks. Experimental results across 26 leading LLMs show strong alignment with existing human and model-based dynamic rankings, while offering additional interpretability for training guidance.
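The reported alignment with existing rankings can be checked with a standard rank-correlation test. The sketch below uses SciPy's Spearman correlation on made-up per-model scores; the numbers and the choice of reference leaderboard are illustrative, not the paper's data.

```python
from scipy.stats import spearmanr

# Per-model scores; the five numbers are made up for illustration.
teach2eval_scores = [0.41, 0.35, 0.52, 0.18, 0.47]  # hypothetical Teach2Eval scores
leaderboard_elo   = [1210, 1180, 1260, 1080, 1235]  # e.g. a dynamic arena ranking

rho, p_value = spearmanr(teach2eval_scores, leaderboard_elo)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3g})")
```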
Problem

Research questions and friction points this paper is trying to address.

Indirect evaluation of LLMs via teaching weaker models
Addresses fairness, scalability, and contamination in benchmarks
Converts open-ended tasks into MCQs for multi-dimensional assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Indirect evaluation via teaching weaker student models
Converts tasks into MCQs with teacher feedback (sketched after this list)
Scalable, automated, multi-dimensional assessment framework
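The MCQ-conversion step is only described at a high level, so the following is one plausible sketch in which a converter LLM rewrites an open-ended task plus the teacher's feedback into a four-option question. The `converter` callable, prompt wording, and JSON schema are all assumptions, not the authors' actual design.

```python
import json
from typing import Callable

def task_to_mcq(converter: Callable[[str], str], task: str, feedback: str) -> dict:
    """Ask a converter LLM to turn an open-ended task plus teacher feedback
    into a standardized four-option MCQ (prompt and schema are assumptions)."""
    prompt = (
        "Using the task and the teacher's feedback below, write one multiple-"
        "choice question with four options labeled A-D and exactly one correct "
        'answer. Respond as JSON: {"question": "...", "options": ["...", "...", '
        '"...", "..."], "answer": "A"}.\n\n'
        f"Task: {task}\n\nTeacher feedback: {feedback}"
    )
    mcq = json.loads(converter(prompt))
    # Basic sanity checks before the MCQ enters the evaluation pool.
    assert len(mcq["options"]) == 4 and mcq["answer"] in "ABCD"
    return mcq
```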
👥 Authors
Yuhang Zhou
School of Computer Science, Fudan University; Shanghai Innovation Institute
Xutian Chen
School of Computer Science, Fudan University; Shanghai Innovation Institute
Yixin Cao
School of Computer Science, Fudan University
Yuchen Ni
Fudan University
Yu He
School of Computer Science, Fudan University; Shanghai Innovation Institute
Siyu Tian
School of Computer Science, Fudan University
Xiang Liu
Computer Science Department, NYU Shanghai
Jian Zhang
DataGrand Inc.
Chuanjun Ji
DataGrand Inc.
Guangnan Ye
Fudan University
Xipeng Qiu
School of Computer Science, Fudan University