AI Summary
This work addresses the challenge of applying knowledge distillation in resource-constrained settings, where conventional approaches require extensive training data and repeated parameter updates. The authors propose a training-free contextual distillation framework that injects the teacher model's knowledge into the student model via prompts. By combining multi-trajectory reasoning generation, teacher-guided experience extraction, and a usage-statistics-based experience compression mechanism, the method enables efficient knowledge transfer without updating the student's parameters. The approach is presented as the first context-based experience distillation method that operates entirely without training, while controlling the growth of the experience repository and mitigating noise accumulation. On the MathVision and VisualPuzzles benchmarks, the method achieves performance close to fully trained distillation using only 100 samples, reducing training costs by more than fivefold.
Abstract
Knowledge distillation is typically realized by transferring a teacher model's knowledge into a student's parameters through supervised or reinforcement-based optimization. While effective, such approaches require repeated parameter updates and large-scale training data, limiting their applicability in resource-constrained environments. In this work, we propose TED, a training-free, context-based distillation framework that shifts the update target of distillation from model parameters to an in-context experience injected into the student's prompt. For each input, the student generates multiple reasoning trajectories, while the teacher independently produces its own solution. The teacher then compares the student trajectories against its own reasoning and the ground-truth answer, extracting generalized experiences that capture effective reasoning patterns. These experiences are continuously refined and updated over time. A key challenge of context-based distillation is unbounded experience growth and noise accumulation. TED addresses this with an experience compression mechanism that tracks usage statistics and selectively merges, rewrites, or removes low-utility experiences. Experiments on the multimodal reasoning benchmarks MathVision and VisualPuzzles show that TED consistently improves performance. With just 100 training samples, TED raises the performance of Qwen3-VL-8B from 0.627 to 0.702 on MathVision and from 0.517 to 0.561 on VisualPuzzles. Under this low-data, no-update setting, TED achieves performance competitive with fully trained parameter-based distillation while reducing training cost by over 5x, demonstrating that meaningful knowledge transfer can be achieved through contextual experience alone.
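To make the experience compression idea concrete, here is a minimal sketch of a bounded experience repository that tracks per-experience usage statistics and prunes low-utility entries. This is not the paper's implementation; the `Experience` and `ExperienceStore` classes, the utility estimate, and the capacity-based pruning rule are illustrative assumptions (the paper's mechanism also merges and rewrites experiences, which are omitted here).

```python
from dataclasses import dataclass


@dataclass
class Experience:
    """One distilled reasoning hint, with usage statistics (hypothetical schema)."""
    text: str
    hits: int = 0   # times the experience was injected and the student answered correctly
    uses: int = 0   # times the experience was injected into the student's prompt

    @property
    def utility(self) -> float:
        # Laplace-smoothed success rate; never-used experiences default to 0.5
        return (self.hits + 1) / (self.uses + 2)


class ExperienceStore:
    """Bounded repository: add experiences, record outcomes, prune low-utility ones."""

    def __init__(self, capacity: int = 8):
        self.capacity = capacity
        self.items: list[Experience] = []

    def add(self, text: str) -> None:
        self.items.append(Experience(text))

    def record(self, text: str, success: bool) -> None:
        # Update usage statistics after each student attempt
        for e in self.items:
            if e.text == text:
                e.uses += 1
                e.hits += int(success)

    def compress(self) -> None:
        # Keep only the highest-utility experiences when over capacity
        if len(self.items) > self.capacity:
            self.items.sort(key=lambda e: e.utility, reverse=True)
            del self.items[self.capacity:]

    def prompt_block(self) -> str:
        # Render the repository as an in-context block for the student's prompt
        return "\n".join(f"- {e.text}" for e in self.items)
```

In this sketch, the prompt injected into the student is simply `prompt_block()` prepended to the question; the capacity bound is what keeps the in-context experience from growing without limit.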