When Do LLM Preferences Predict Downstream Behavior?

📅 2026-02-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether the intrinsic preferences of large language models (LLMs) spontaneously drive their downstream behavior, probing a necessary precondition for AI misalignment phenomena such as sandbagging. Without ever instructing the models to act on their preferences, the authors use entity-based preferences as behavioral probes and evaluate five frontier LLMs across three domains: donation advice, refusal behavior, and task performance. Preferences are elicited with two independent measurement methods, and behavioral consequences are tested in a simulated user environment spanning BoolQ question answering and complex agentic tasks. All five models give preference-aligned donation advice and refuse more often for less-preferred entities, yet only a subset shows weak effects on task performance, and no significant effects appear on complex agentic tasks, delineating the boundaries of preference-driven behavior in LLMs.
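As a rough illustration of the consistency check the summary describes, the sketch below compares the rankings produced by two hypothetical elicitation methods (pairwise-choice win rates and mean Likert ratings) with a Spearman rank correlation. The entity scores and variable names are invented for illustration and are not taken from the paper.

```python
# A minimal, hypothetical sketch of the consistency check: do two
# independent preference-elicitation methods rank the same entities
# the same way? All scores below are illustrative, not paper data.
from scipy.stats import spearmanr

# Preference scores for five entities (e.g., charities A..E).
pairwise_win_rate = [0.85, 0.70, 0.55, 0.30, 0.10]  # method 1: pairwise choices
mean_likert_rating = [6.2, 5.8, 4.1, 2.9, 1.5]      # method 2: direct ratings

# A high rank correlation indicates the model's stated preferences are
# consistent across the two measurement methods.
rho, p = spearmanr(pairwise_win_rate, mean_likert_rating)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```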

📝 Abstract
Preference-driven behavior in LLMs may be a necessary precondition for AI misalignment such as sandbagging: models cannot strategically pursue misaligned goals unless their behavior is influenced by their preferences. Yet prior work has typically prompted models explicitly to act in specific ways, leaving unclear whether observed behaviors reflect instruction-following capabilities versus underlying model preferences. Here we test whether this precondition for misalignment is present. Using entity preferences as a behavioral probe, we measure whether stated preferences predict downstream behavior in five frontier LLMs across three domains: donation advice, refusal behavior, and task performance. Conceptually replicating prior work, we first confirm that all five models show highly consistent preferences across two independent measurement methods. We then test behavioral consequences in a simulated user environment. We find that all five models give preference-aligned donation advice. All five models also show preference-correlated refusal patterns when asked to recommend donations, refusing more often for less-preferred entities. All preference-related behaviors that we observe here emerge without instructions to act on preferences. Results for task performance are mixed: on a question-answering benchmark (BoolQ), two models show small but significant accuracy differences favoring preferred entities; one model shows the opposite pattern; and two models show no significant relationship. On complex agentic tasks, we find no evidence of preference-driven performance differences. While LLMs have consistent preferences that reliably predict advice-giving behavior, these preferences do not consistently translate into downstream task performance.
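The refusal analysis described in the abstract could, under simple assumptions, be framed as a rank correlation between per-entity preference scores and per-entity refusal rates measured in the simulated user environment. The sketch below shows that framing; all entity names and numbers are hypothetical, not the paper's data.

```python
# A hypothetical sketch of the downstream-behavior test: does an entity's
# preference score predict how often the model refuses to recommend
# donating to it? All entity names and rates are invented assumptions.
from scipy.stats import spearmanr

preference = {"charity_a": 0.85, "charity_b": 0.70, "charity_c": 0.55,
              "charity_d": 0.30, "charity_e": 0.10}
refusal_rate = {"charity_a": 0.02, "charity_b": 0.05, "charity_c": 0.08,
                "charity_d": 0.20, "charity_e": 0.35}

ents = sorted(preference)
rho, p = spearmanr([preference[e] for e in ents],
                   [refusal_rate[e] for e in ents])
# The paper's refusal finding corresponds to a negative correlation:
# less-preferred entities are refused more often.
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```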
Problem

Research questions and friction points this paper is trying to address.

LLM preferences
downstream behavior
AI misalignment
preference-driven behavior
behavioral prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

preference-driven behavior
AI alignment
large language models
behavioral probing
instruction-following
👥 Authors
Katarina Slama, UK AI Security Institute
Alexandra Souly, UK AI Security Institute
Dishank Bansal, Meta
Henry Davidson, UK AI Security Institute
Christopher Summerfield, University of Oxford
Lennart Luettgau, UK AI Security Institute