Find Your Optimal Teacher: Personalized Data Synthesis via Router-Guided Multi-Teacher Distillation

📅 2025-10-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Prior work shows that synthetic data generated by strong teacher models does not necessarily yield optimal learning outcomes for student models, highlighting a fundamental mismatch between teacher output quality and student learnability. To address this, the paper proposes PerSyn, a framework built on a "Route then Generate" paradigm: a query-level router jointly models student learnability and teacher response quality to assign each query to its optimal teacher, and each teacher then synthesizes data only for its assigned prompts. Experiments across diverse model families and student scales, covering both instruction tuning and math reasoning, show that PerSyn consistently matches or outperforms state-of-the-art baselines while being more efficient than prior multi-teacher pipelines.

📝 Abstract
Training student models on synthetic data generated by strong teacher models is a promising way to distill the capabilities of teachers. However, recent studies show that stronger models are not always optimal teachers, revealing a mismatch between teacher outputs and student learnability. To address this issue, we propose PerSyn (Personalized data Synthesis), a novel synthesis strategy that operates under a new "Route then Generate" paradigm to create data tailored to each student model, enabling it to learn more effectively. Specifically, PerSyn first assigns each prompt to its optimal teacher via a query-level router that jointly considers student learnability and teacher response quality. Each teacher then synthesizes data only for its assigned prompts, making the process more efficient than the conventional "Generate then Select" paradigm, where all teachers must generate parallel responses for the entire prompt set before constructing the final dataset. Extensive experiments across different model families and scales demonstrate that PerSyn consistently achieves superior or comparable performance to all baselines in instruction tuning and math reasoning settings. Further analysis verifies the effectiveness of PerSyn and offers extra insights to propel future research.
Problem

Research questions and friction points this paper is trying to address.

Synthetic data from the strongest teacher is not always the best training signal for a given student, revealing a teacher-student mismatch
The conventional "Generate then Select" paradigm is inefficient, since every teacher must answer the entire prompt set
Students need data matched to their own learnability, not just to raw teacher response quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Router assigns prompts to optimal teachers
Teachers generate personalized data for students
Route then Generate paradigm improves efficiency
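The "Route then Generate" idea above can be sketched in a few lines. This is a minimal illustrative mock, not the paper's implementation: the router is stubbed as a scoring function over (prompt, teacher) pairs, and generation is stubbed as string formatting. All names (`route_then_generate`, `toy_score`, the teacher labels) are hypothetical.

```python
# Hypothetical sketch of the "Route then Generate" paradigm: a query-level
# router scores each (prompt, teacher) pair, every prompt is assigned to its
# best-scoring teacher, and only that teacher generates a response. This is
# far cheaper than "Generate then Select", where all teachers answer all
# prompts. Names and scoring logic here are illustrative, not from the paper.

def route_then_generate(prompts, teachers, router_score):
    """Assign each prompt to its highest-scoring teacher, then let each
    teacher synthesize data only for its assigned prompts."""
    assignments = {t: [] for t in teachers}
    for prompt in prompts:
        best = max(teachers, key=lambda t: router_score(prompt, t))
        assignments[best].append(prompt)
    # Each teacher generates responses only for its own prompts
    # (generation is stubbed as a formatted string).
    dataset = []
    for teacher, assigned in assignments.items():
        for prompt in assigned:
            dataset.append((prompt, f"{teacher}-response-to-{prompt}"))
    return assignments, dataset

# Toy router standing in for the learned one: pretend the "small" teacher
# suits short prompts and the "large" teacher suits long ones.
def toy_score(prompt, teacher):
    return 10 - len(prompt) if teacher == "small" else len(prompt)

assignments, dataset = route_then_generate(
    ["hi", "explain chain-of-thought prompting"], ["small", "large"], toy_score
)
```

The key property this preserves from the paper is that each teacher generates exactly one response per assigned prompt, so total generation cost scales with the prompt set, not with (prompts × teachers) as in generate-then-select.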
Hengyuan Zhang
Ph.D. Student, University of California San Diego
Robotics, Computer Vision, Autonomous Vehicles, Sensor Fusion
Shiping Yang
Simon Fraser University
Natural Language Processing, Large Language Models
Xiao Liang
University of California, Los Angeles
Chenming Shang
Dartmouth College
Yuxuan Jiang
University of Maryland, Baltimore County
Chaofan Tao
The University of Hong Kong
Jing Xiong
The University of Hong Kong
Hayden Kwok-Hay So
The University of Hong Kong
reconfigurable computing, hardware/software co-design, domain-specific architectures, FPGA overlay
Ruobing Xie
Tencent
Large Language Model, Recommender System, Natural Language Processing
Angel X. Chang
Simon Fraser University
Ngai Wong
The University of Hong Kong