🤖 AI Summary
Language Model Inversion (LMI) recovers sensitive prompts from model outputs, posing a serious threat to user privacy and system security. To address this, we propose a novel LMI framework grounded in latent-space invariance. First, we introduce the Invariant Latent Space Hypothesis (ILSH), identifying source invariance and cyclic invariance as fundamental principles underlying prompt inversion. Second, we treat the target LLM itself as an invariant decoder and learn only a lightweight inverse encoder that maps outputs to a denoised pseudo-representation; when multiple outputs are available, they are sparsely concatenated at the representation level to increase information density. Third, the encoder is trained in two stages, contrastive alignment for source invariance and supervised reinforcement for cyclic invariance, without requiring large-scale inversion corpora, and an optional training-free neighborhood search further refines local performance. Evaluated on nine benchmarks, our method achieves an average BLEU gain of 4.77% while drastically reducing reliance on inversion data. Empirical analysis further reveals the limited efficacy of existing defenses, underscoring both the necessity and the advance of our approach.
📝 Abstract
Language model inversion (LMI), i.e., recovering hidden prompts from outputs, has emerged as a concrete threat to user privacy and system security. We recast LMI as reusing the LLM's own latent space and propose the Invariant Latent Space Hypothesis (ILSH): (1) diverse outputs from the same source prompt should preserve consistent semantics (source invariance), and (2) input<->output cyclic mappings should be self-consistent within a shared latent space (cyclic invariance). Accordingly, we present Inv^2A, which treats the LLM as an invariant decoder and learns only a lightweight inverse encoder that maps outputs to a denoised pseudo-representation. When multiple outputs are available, they are sparsely concatenated at the representation layer to increase information density. Training proceeds in two stages: contrastive alignment (source invariance) and supervised reinforcement (cyclic invariance). An optional training-free neighborhood search can further refine local performance. Across 9 datasets covering user- and system-prompt scenarios, Inv^2A outperforms baselines by an average of 4.77% in BLEU score while reducing dependence on large inverse corpora. Our analysis further shows that prevalent defenses provide limited protection, underscoring the need for stronger strategies. The source code and data involved in this paper can be found at https://github.com/yyy01/Invariant_Attacker.
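The abstract does not spell out the contrastive-alignment objective, but the source-invariance idea (different outputs of the same prompt should map to nearby latent representations) can be sketched with a standard InfoNCE-style loss. The function names, embedding dimensions, and toy "encoder outputs" below are hypothetical illustrations, not the paper's actual implementation:

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss: anchors[i] should match positives[i] among all candidates.

    Rows are latent representations; matching rows encode outputs that
    originate from the same source prompt (source invariance).
    """
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # (n, n) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # diagonal = positive pairs

# Toy check with synthetic latents: two noisy "output views" per prompt.
rng = np.random.default_rng(0)
prompt_reps = rng.normal(size=(4, 8))                  # one latent per prompt
out_a = prompt_reps + 0.05 * rng.normal(size=(4, 8))   # output view 1
out_b = prompt_reps + 0.05 * rng.normal(size=(4, 8))   # output view 2

aligned = info_nce(out_a, out_b)                 # correctly paired views
shuffled = info_nce(out_a, out_b[::-1].copy())   # mismatched pairing
```

Minimizing this loss pulls representations of same-prompt outputs together while pushing apart those of different prompts, which is the behavior source invariance demands of the inverse encoder; on the toy data, the aligned pairing yields a lower loss than the shuffled one.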