AI Summary
This work addresses the problem of reverse-engineering input prompts that induce a given language model to generate a specified target output sequence. The authors treat the language model as a differentiable function operating on sequences of token distributions and relax discrete input prompts into continuous distributional representations. This relaxation enables end-to-end differentiable optimization while keeping the model parameters frozen. Leveraging gradient-based optimization, the method efficiently searches for optimal prompts and successfully recovers prompts of lengths 10 and 80 that reliably reproduce target output sequences of length 20 across multiple white-box language models. The approach significantly advances the feasibility and efficiency of prompt inversion, demonstrating a practical pathway for understanding and controlling model behavior through input reconstruction.
Abstract
Despite emerging research on Language Models (LMs), few approaches analyse the invertibility of LMs. That is, given an LM and a desired target output sequence of tokens, determining which input prompts would yield that target output remains an open problem. We formulate this as a classical gradient-based optimisation problem. First, we propose a simple algorithm to achieve end-to-end differentiability of a given (frozen) LM, and then find optimised prompts via gradient descent. Our central insight is to view LMs as functions operating on sequences of distributions over tokens (rather than the traditional view as functions on sequences of tokens). Our experiments and ablations demonstrate that our DLM-powered inversion can reliably and efficiently optimise prompts of lengths $10$ and $80$ for targets of length $20$, for several white-box LMs (out-of-the-box).
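The core idea of the relaxation can be sketched on a toy model. This is a hypothetical illustration, not the paper's actual implementation: the frozen "LM" here is just a linear map over mean-pooled prompt embeddings, and all names (`E`, `W`, `theta`) and dimensions are assumptions. The key mechanics match the abstract's insight, though: the prompt is represented as a sequence of *distributions* over tokens (softmax of continuous parameters), the model parameters stay frozen, and only the prompt parameters receive gradients.

```python
import numpy as np

# Toy sketch of prompt inversion via continuous relaxation.
# The "LM" below is a hypothetical frozen linear model, used only
# to illustrate the optimisation loop; E, W, theta are made-up names.
rng = np.random.default_rng(0)
V, d, P = 8, 4, 3                 # vocab size, embedding dim, prompt length
E = rng.normal(size=(V, d))       # frozen input-embedding table
W = rng.normal(size=(V, d))       # frozen output head
target = 2                        # desired next-token id

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def loss_and_grad(theta):
    """Cross-entropy to the target token, differentiated with respect
    to the continuous prompt parameters theta (one logit row per position)."""
    p = softmax(theta)                       # (P, V): distributions over tokens
    h = (p @ E).mean(axis=0)                 # expected embeddings, mean-pooled
    s = softmax(W @ h)                       # next-token distribution
    loss = -np.log(s[target])
    dz = s.copy(); dz[target] -= 1.0         # dL/d(output logits)
    dh = W.T @ dz                            # dL/dh
    dp = np.tile((E @ dh) / P, (P, 1))       # dL/dp, identical per position here
    # backprop through each softmax row: dL/dtheta = p * (dp - <p, dp>)
    dtheta = p * (dp - (p * dp).sum(axis=1, keepdims=True))
    return loss, dtheta

theta = np.zeros((P, V))                     # relaxed prompt, initially uniform
losses = []
for _ in range(300):                         # plain gradient descent; model frozen
    loss, g = loss_and_grad(theta)
    losses.append(loss)
    theta -= 0.5 * g

hard_prompt = theta.argmax(axis=1)           # decode distributions back to tokens
print(losses[0], losses[-1], hard_prompt)
```

In a real LM the same recipe would replace `p @ E` with expected embeddings fed into the frozen transformer and use autodiff instead of the hand-derived gradients, but the structure of the loop (relax, forward, backprop to the prompt, decode) is what the abstract describes.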