🤖 AI Summary
Addressing the challenge of generating synthetic text that is simultaneously human-readable, privacy-preserving, and backed by theoretical guarantees for LLM fine-tuning, particularly when real data is scarce or privacy-sensitive, this paper proposes the first theoretically rigorous gradient-matching framework for synthetic text generation. The method employs the Alternating Direction Method of Multipliers (ADMM) to optimize synthetic token embeddings so that their gradients closely approximate those computed on real data, thereby guaranteeing that fine-tuning converges to a neighborhood of the true solution. Coupled with low-perplexity decoding, the approach yields semantically coherent, human-readable text. Experiments on multi-class classification tasks show that LLMs fine-tuned solely on synthetic data achieve performance comparable to models trained on real data, while improving training efficiency and privacy protection. The core contribution is the first principled integration of gradient matching into synthetic text generation, unifying readability, privacy, and provable convergence.
📝 Abstract
Synthetic data has the potential to improve performance and training efficiency while protecting the privacy of real training examples. Nevertheless, existing approaches to synthetic text generation are mostly heuristic: they cannot generate human-readable text without compromising the privacy of real data, and they provide no performance guarantees for training Large Language Models (LLMs). In this work, we propose the first theoretically rigorous approach for generating synthetic human-readable text that guarantees the convergence and performance of LLMs during fine-tuning on a target task. To do so, we leverage the Alternating Direction Method of Multipliers (ADMM), which iteratively optimizes the embeddings of synthetic examples to match the gradient of the target training or validation data, and maps them to a sequence of text tokens with low perplexity. The generated synthetic text thereby guarantees convergence of the model to a close neighborhood of the solution obtained by fine-tuning on real data. Experiments on various classification tasks confirm the effectiveness of our proposed approach.
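To make the gradient-matching objective concrete, the following is a minimal sketch on a toy linear regression model rather than an LLM. All sizes and variable names here are illustrative assumptions, and plain gradient descent with backtracking stands in for the paper's ADMM solver and low-perplexity decoding step; only the objective, driving the gradient of synthetic examples toward the gradient of real data, mirrors the method described above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 64, 8, 5                      # real examples, synthetic examples, feature dim

# "Real" data whose gradient the synthetic examples should reproduce.
X_real = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y_real = X_real @ w_true + 0.1 * rng.normal(size=n)
w = rng.normal(size=d)                  # current model parameters

def grad_w(X, y):
    """Gradient of the mean squared loss wrt the model weights w."""
    return X.T @ (X @ w - y) / len(y)

g_real = grad_w(X_real, y_real)         # target gradient from real data

# Synthetic inputs (standing in for token embeddings) and fixed labels.
X_syn = rng.normal(size=(m, d))
y_syn = rng.normal(size=m)

def match_loss(X):
    diff = grad_w(X, y_syn) - g_real
    return float(diff @ diff)

def match_grad(X):
    # Analytic gradient of ||grad_w(X, y_syn) - g_real||^2 wrt X:
    # with r = X w - y_syn, dL/dX = (2/m) * (outer(r, diff) + outer(X diff, w)).
    r = X @ w - y_syn
    diff = grad_w(X, y_syn) - g_real
    return (2.0 / m) * (np.outer(r, diff) + np.outer(X @ diff, w))

lr = 0.05
init_loss = loss = match_loss(X_syn)
for _ in range(2000):
    step = X_syn - lr * match_grad(X_syn)
    new_loss = match_loss(step)
    if new_loss < loss:                 # backtracking keeps the descent monotone
        X_syn, loss = step, new_loss
    else:
        lr *= 0.5

print(f"match loss: {init_loss:.4f} -> {loss:.6f}")
```

After optimization, a model step taken on the synthetic batch moves the weights almost exactly as a step on the real batch would, which is the mechanism behind the convergence guarantee; the paper additionally maps the optimized embeddings back to discrete, low-perplexity token sequences.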