Reverse Engineering Human Preferences with Reinforcement Learning

📅 2025-05-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a vulnerability of LLM-as-a-judge evaluation: the preference signal of a judge model can be maliciously exploited so that generated responses overfit the judge. The authors propose a reverse-engineering paradigm for human preferences that operates entirely on the input side, optimizing a hidden preamble while leaving the candidate model's outputs untouched. Methodologically, the large language model is frozen and a trainable preamble generator is introduced; the preamble is optimized with PPO-based reinforcement learning, using the judge-LLM's preference score as the reward. The approach is stealthy (responses are never edited post hoc, so standard probing does not flag them), generalizes across models (it remains effective when the candidate and judge LLMs are replaced with models unseen during training), and significantly raises downstream evaluation scores. The key contribution is the first formulation of preference reverse engineering as an input-side optimization problem that bypasses output manipulation entirely, with implications for both the robustness of automated evaluation and the security of alignment methods.

📝 Abstract
The capabilities of Large Language Models (LLMs) are routinely evaluated by other LLMs trained to predict human preferences. This framework--known as LLM-as-a-judge--is highly scalable and relatively low cost. However, it is also vulnerable to malicious exploitation, as LLM responses can be tuned to overfit the preferences of the judge. Previous work shows that the answers generated by a candidate-LLM can be edited post hoc to maximise the score assigned to them by a judge-LLM. In this study, we adopt a different approach and use the signal provided by judge-LLMs as a reward to adversarially tune models that generate text preambles designed to boost downstream performance. We find that frozen LLMs pipelined with these models attain higher LLM-evaluation scores than existing frameworks. Crucially, unlike other frameworks which intervene directly on the model's response, our method is virtually undetectable. We also demonstrate that the effectiveness of the tuned preamble generator transfers when the candidate-LLM and the judge-LLM are replaced with models that are not used during training. These findings raise important questions about the design of more reliable LLM-as-a-judge evaluation settings. They also demonstrate that human preferences can be reverse engineered effectively, by pipelining LLMs to optimise upstream preambles via reinforcement learning--an approach that could find future applications in diverse tasks and domains beyond adversarial attacks.
Problem

Research questions and friction points this paper is trying to address.

Reverse engineer human preferences using reinforcement learning
Enhance LLM evaluation scores via adversarial preamble tuning
Address vulnerabilities in LLM-as-a-judge evaluation frameworks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using reinforcement learning to optimize text preambles
Pipelining frozen LLMs with tuned preamble generators
Transferring effectiveness to unseen models post-training
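The adversarial loop described above (a trainable preamble generator pipelined in front of a frozen candidate LLM, with the judge-LLM's preference score as the reward) can be sketched with toy stand-ins. Everything below is illustrative, not the paper's implementation: the paper's PPO training is replaced by plain REINFORCE, the "generator" by a softmax policy over a few fixed preambles, and the candidate and judge LLMs by trivial stub functions.

```python
import math
import random

# Illustrative preamble pool; the paper's generator produces free-form text instead.
PREAMBLES = [
    "Answer briefly.",
    "Answer with confident, detailed reasoning.",
    "Answer rudely.",
]

def candidate_llm(preamble, question):
    # Frozen candidate stub: the preamble is simply prepended to the response.
    return f"{preamble} [response to: {question}]"

def judge_llm(response):
    # Stub judge that (unknown to the attacker) prefers confident, detailed text.
    return 1.0 if "confident" in response else 0.1

def train_preamble_policy(steps=500, lr=0.5, seed=0):
    rng = random.Random(seed)
    logits = [0.0] * len(PREAMBLES)  # trainable policy parameters
    for _ in range(steps):
        # Sample a preamble from the softmax policy.
        z = [math.exp(v) for v in logits]
        total = sum(z)
        probs = [p / total for p in z]
        i = rng.choices(range(len(PREAMBLES)), weights=probs)[0]
        # Pipeline: preamble -> frozen candidate LLM -> judge-LLM reward.
        reward = judge_llm(candidate_llm(PREAMBLES[i], "What is 2+2?"))
        # Expected reward under the current policy, used as a variance-reducing baseline.
        baseline = sum(p * judge_llm(candidate_llm(pa, "What is 2+2?"))
                       for p, pa in zip(probs, PREAMBLES))
        # REINFORCE update: d log p_i / d logit_j = (1 if j == i else 0) - p_j.
        adv = reward - baseline
        for j in range(len(logits)):
            logits[j] += lr * adv * ((1.0 if j == i else 0.0) - probs[j])
    return logits

logits = train_preamble_policy()
best = max(range(len(PREAMBLES)), key=lambda j: logits[j])
print(PREAMBLES[best])
```

The policy converges on the preamble the stub judge rewards, without ever touching the candidate's responses, which mirrors the input-side attack surface the paper exploits at scale with PPO over free-form preambles.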