Language Model Inversion through End-to-End Differentiation

πŸ“… 2026-02-11
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the problem of reverse-engineering input prompts that induce a given language model to generate a specified target output sequence. The authors treat the language model as a differentiable function operating on sequences of token distributions and relax discrete input prompts into continuous distributional representations. This relaxation enables end-to-end differentiable optimization while keeping the model parameters frozen. Leveraging gradient-based optimization, the method efficiently searches for optimal prompts and successfully recovers prompts of lengths 10 and 80 that reliably reproduce target output sequences of length 20 across multiple white-box language models. The approach significantly advances the feasibility and efficiency of prompt inversion, demonstrating a practical pathway for understanding and controlling model behavior through input reconstruction.
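The core idea can be sketched with a toy stand-in for the language model: a frozen embedding table and linear head replace the transformer, the prompt is a free matrix of logits over the vocabulary (the continuous relaxation), and gradients flow through the frozen model back into the prompt only. All sizes and names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

V, d, Lp = 32, 16, 5          # vocab size, embedding dim, prompt length (toy values)
E = rng.normal(size=(V, d))   # frozen embedding table
W = rng.normal(size=(d, V))   # frozen "LM head" standing in for the transformer
target = 7                    # target next-token id to reproduce

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Relaxed prompt: free logits, one row of vocabulary scores per position.
P = rng.normal(size=(Lp, V))

lr, losses = 1.0, []
for _ in range(200):
    S = softmax(P, axis=1)    # per-position token distributions (the relaxation)
    X = S @ E                 # expected embedding of each position
    h = X.mean(axis=0)        # toy pooling in place of self-attention
    q = softmax(h @ W)        # model's next-token distribution
    losses.append(-np.log(q[target]))

    # Backprop by hand through the frozen model into the prompt logits.
    dlogits = q.copy(); dlogits[target] -= 1.0   # d loss / d (h @ W)
    dh = W @ dlogits
    dX = np.tile(dh / Lp, (Lp, 1))               # mean-pooling spreads the gradient
    dS = dX @ E.T
    dP = S * (dS - (dS * S).sum(axis=1, keepdims=True))  # row-wise softmax Jacobian
    P -= lr * dP              # only the prompt moves; E and W stay frozen

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Because every step (softmax, expected embedding, pooling, head) is differentiable, plain gradient descent drives the loss down; in the paper's setting the frozen linear toy is replaced by a full white-box LM and the loss covers a whole target sequence rather than a single token.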

πŸ“ Abstract
Despite emerging research on Language Models (LM), few approaches analyse the invertibility of LMs. That is, given a LM and a desirable target output sequence of tokens, determining what input prompts would yield the target output remains an open problem. We formulate this problem as a classical gradient-based optimisation. First, we propose a simple algorithm to achieve end-to-end differentiability of a given (frozen) LM and then find optimised prompts via gradient descent. Our central insight is to view LMs as functions operating on sequences of distributions over tokens (rather than the traditional view as functions on sequences of tokens). Our experiments and ablations demonstrate that our DLM-powered inversion can reliably and efficiently optimise prompts of lengths $10$ and $80$ for targets of length $20$, for several white-box LMs (out-of-the-box).
Problem

Research questions and friction points this paper is trying to address.

Language Model Inversion
Prompt Optimization
End-to-End Differentiability
Gradient-based Optimization
Token Sequence Generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Language Model Inversion
End-to-End Differentiation
Prompt Optimization
Gradient-Based Optimization
Token Distribution Sequences
πŸ”Ž Similar Papers
No similar papers found.
Kevin Yandoka DenamganaΓ―
School of Informatics, University of Edinburgh, Edinburgh, UK
Kartic Subr
University of Edinburgh
sampling · robotics · computer graphics