Privileged Information Distillation for Language Models

📅 2026-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of transferring capabilities learned with privileged information (PI) in multi-agent environments when the PI is unavailable at inference time and only action trajectories are observable. The authors propose π-Distill, a joint training framework, together with an On-Policy Self-Distillation (OPSD) reinforcement learning strategy; both transfer knowledge from a PI-equipped teacher model to a PI-free student model using only action trajectories as supervision. This approach departs from conventional distillation paradigms that rely on full chain-of-thought supervision and yields significant gains over standard supervised fine-tuning followed by reinforcement learning across multiple agent benchmarks, improving the reasoning capabilities of models that operate without access to privileged information.

📝 Abstract
Training-time privileged information (PI) can enable language models to succeed on tasks they would otherwise fail, making it a powerful tool for reinforcement learning in hard, long-horizon settings. However, transferring capabilities learned with PI to policies that must act without it at inference time remains a fundamental challenge. We study this problem in the context of distilling frontier models for multi-turn agentic environments, which typically hide their internal reasoning and expose only action trajectories. This breaks standard distillation pipelines, since successful behavior is observable but the reasoning process is not. To address this, we introduce π-Distill, a joint teacher-student objective that trains a PI-conditioned teacher and an unconditioned student simultaneously using the same model. We also introduce On-Policy Self-Distillation (OPSD), an alternative approach that trains with Reinforcement Learning (RL) under a reverse KL-penalty between the student and the PI-conditioned teacher. We show that both algorithms effectively distill frontier agents using action-only PI. Specifically, we find that π-Distill and, in some cases, OPSD outperform industry-standard practice (supervised fine-tuning followed by RL), which assumes access to full Chain-of-Thought supervision, across multiple agentic benchmarks, models, and forms of PI. We complement our results with extensive analysis of the factors that enable effective learning with PI, focusing primarily on π-Distill and characterizing when OPSD is competitive.
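The abstract describes OPSD as RL with a reverse KL-penalty between the PI-free student and the PI-conditioned teacher. A minimal sketch of that shaped reward is below; the function names, shapes, and the mixing coefficient `beta` are illustrative assumptions, not the paper's actual API or hyperparameters.

```python
import math

def softmax(logits):
    """Convert a list of logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def reverse_kl(student_logits, teacher_logits):
    """Reverse KL(student || teacher) over the vocabulary at one step.

    The student is evaluated without PI; the teacher's logits come from
    the same model conditioned on PI in its context.
    """
    p_s = softmax(student_logits)
    p_t = softmax(teacher_logits)
    return sum(ps * math.log(ps / pt) for ps, pt in zip(p_s, p_t))

def opsd_reward(task_reward, student_logits, teacher_logits, beta=0.1):
    """Shaped RL reward: environment reward minus a KL pull toward the teacher."""
    return task_reward - beta * reverse_kl(student_logits, teacher_logits)
```

Using reverse (rather than forward) KL makes the penalty mode-seeking: the student is pushed to concentrate mass where the PI-conditioned teacher places it, instead of spreading probability over everything the teacher allows.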
Problem

Research questions and friction points this paper is trying to address.

Privileged Information
Knowledge Distillation
Language Models
Reinforcement Learning
Agentic Environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Privileged Information Distillation
π-Distill
On-Policy Self-Distillation
Action-only Supervision
Reinforcement Learning
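The π-Distill contribution listed above is a joint objective in which one model is scored twice on the same action tokens, once with PI in the prompt (teacher pass) and once without (student pass). A toy sketch follows; the mixing weight `alpha` and all function names are assumptions for illustration, not the paper's formulation.

```python
import math

def nll(probs, target_idx):
    """Negative log-likelihood of the target token under a distribution."""
    return -math.log(probs[target_idx])

def pi_distill_loss(teacher_probs, student_probs, action_token, alpha=0.5):
    """Joint teacher-student loss on a single action token.

    teacher_probs: the model's token distribution with PI in the context.
    student_probs: the same model's distribution without PI.
    Both passes share weights, so minimizing this loss trains the
    PI-conditioned teacher and the PI-free student simultaneously.
    """
    return (alpha * nll(teacher_probs, action_token)
            + (1 - alpha) * nll(student_probs, action_token))
```

Because only action tokens are supervised, this fits the action-only setting in the abstract: no chain-of-thought from the frontier model is required.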