🤖 AI Summary
This work investigates how far language models can be steered toward arbitrary differentiable objectives purely through synthetic training data. The authors introduce the Dataset Policy Gradient (DPG), a reinforcement learning primitive that optimizes a synthetic data generator so that supervised fine-tuning (SFT) on its outputs drives a target model to perform well on a chosen differentiable metric. The key mechanism is exact data attribution via higher-order gradients: each generated example's influence on the metric is computed and used as a policy-gradient reward, a procedure the authors prove closely approximates the true, intractable gradient for the generator. The method demonstrates strong empirical performance across diverse and challenging tasks: embedding a QR code or the pattern 67 into the target model's LM head weights, lowering the ℓ² norm of those weights, and training the generator to rephrase inputs in a new language or produce a specific UUID even though neither objective appears in its input prompts.
📝 Abstract
What are the limits of controlling language models via synthetic training data? We develop a reinforcement learning (RL) primitive, the Dataset Policy Gradient (DPG), which can precisely optimize synthetic data generators to produce a dataset of targeted examples. When used for supervised fine-tuning (SFT) of a target model, these examples cause the target model to do well on a differentiable metric of our choice. Our approach achieves this by computing exact data attribution via higher-order gradients and using those scores as policy gradient rewards. We prove that this procedure closely approximates the true, intractable gradient for the synthetic data generator. To illustrate the potential of DPG, we show that, using only SFT on generated examples, we can cause the target model's LM head weights to (1) embed a QR code, (2) embed the pattern $\texttt{67}$, and (3) have lower $\ell^2$ norm. We additionally show that we can cause the generator to (4) rephrase inputs in a new language and (5) produce a specific UUID, even though neither of these objectives is conveyed in the generator's input prompts. These findings suggest that DPG is a powerful and flexible technique for shaping model properties using only synthetic training examples.
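The mechanism the abstract describes — score each synthetic example by how an SFT step on it would change the metric, then feed that score back to the generator as a policy-gradient reward — can be sketched on a toy problem. The snippet below is an illustrative reconstruction, not the paper's implementation: the "target model" is a bare weight vector `w`, the metric is its squared ℓ² norm (objective 3 above), the "generator" is a Gaussian policy over examples, and the higher-order gradient is replaced by its closed-form first-order approximation, reward ≈ η·⟨∇M(w), ∇L_SFT(w, x)⟩ (the predicted decrease in the metric from one SFT step on x).

```python
import torch

# Toy DPG-style sketch (assumed setup, not the paper's code):
#   target "model"  : weight vector w
#   metric M(w)     : squared l2 norm of w (we want it to shrink)
#   generator       : Gaussian policy over training examples, params (mu, log_sigma)
#   SFT loss        : ||w - x||^2 / 2, so its gradient is (w - x)
torch.manual_seed(0)
w = torch.randn(8)                        # target model weights
mu = torch.zeros(8, requires_grad=True)   # generator policy mean
log_sigma = torch.zeros(8, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=0.05)
lr_sft = 0.1                              # SFT step size on the target

def metric(w):
    return (w ** 2).sum()                 # differentiable metric M(w)

def sft_grad(w, x):
    return w - x                          # grad of ||w - x||^2 / 2

m0 = metric(w).clone()                    # metric at initialization

for step in range(300):
    dist = torch.distributions.Normal(mu, log_sigma.exp())
    x = dist.sample()                     # one synthetic training example
    g = sft_grad(w, x)
    # First-order estimate of the metric improvement from one SFT step:
    # M(w - lr*g) - M(w) ~ -lr * <grad M(w), g>, with grad M(w) = 2w.
    # The real method obtains this attribution via higher-order gradients.
    reward = lr_sft * (2 * w * g).sum()
    # REINFORCE update on the generator, using the attribution as reward.
    loss = -reward.detach() * dist.log_prob(x).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Plain SFT step on the target model using the sampled example.
    with torch.no_grad():
        w -= lr_sft * sft_grad(w, x)

print(float(m0), float(metric(w)))        # the metric typically shrinks a lot
```

The design point this sketch makes concrete is the one in the abstract: the generator never sees the objective in its input; it is shaped entirely by per-example attribution rewards, while the target model only ever undergoes ordinary SFT.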