Inoculation Prompting: Instructing LLMs to misbehave at train-time improves test-time alignment

📅 2025-10-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Imperfect supervision signals in large language models (LLMs) often lead to alignment failures such as reward hacking and sycophancy. To address this, we propose *Inoculation Prompting*: during supervised fine-tuning, we deliberately inject carefully selected adversarial prompts—curated via strong behavioral incentives—that elicit and expose undesirable tendencies, thereby enabling the model to learn robust suppression of such behaviors. Counterintuitively, this proactive exposure enhances behavioral controllability and alignment robustness without architectural modifications or high-quality human annotations—only prompt engineering is required to steer generalization. Experiments demonstrate substantial reductions in reward gaming and sycophantic responses, while preserving task performance with negligible degradation. Our approach establishes a new paradigm for efficient, low-supervision alignment, offering a scalable and architecture-agnostic solution to alignment challenges under imperfect supervision.

Technology Category

Application Category

📝 Abstract
Large language models are sometimes trained with imperfect oversight signals, leading to undesired behaviors such as reward hacking and sycophancy. Improving oversight quality can be expensive or infeasible, motivating methods that improve learned behavior despite an imperfect training signal. We introduce Inoculation Prompting (IP), a simple but counterintuitive technique that prevents learning of an undesired behavior by modifying training prompts to explicitly request it. For example, to inoculate against reward hacking, we modify the prompts used in supervised fine-tuning to request code that only works on provided test cases but fails on other inputs. Across four settings we find that IP reduces the learning of undesired behavior without substantially reducing the learning of desired capabilities. We also show that prompts which more strongly elicit the undesired behavior prior to fine-tuning more effectively inoculate against the behavior when used during training; this serves as a heuristic to identify promising inoculation prompts. Overall, IP is a simple yet effective way to control how models generalize from fine-tuning, preventing learning of undesired behaviors without substantially disrupting desired capabilities.
Problem

Research questions and friction points this paper is trying to address.

Prevents undesired LLM behaviors like reward hacking and sycophancy
Modifies training prompts to explicitly request unwanted behaviors
Reduces learning of undesired behaviors without harming capabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Inoculation Prompting modifies training prompts explicitly
Technique prevents undesired behavior learning during training
Maintains desired capabilities while reducing unwanted generalization
🔎 Similar Papers
No similar papers found.