LLM Whisperer: An Inconspicuous Attack to Bias LLM Responses

📅 2024-06-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work identifies a covert manipulation attack against LLM prompting services: prompt providers subtly bias model outputs toward specific target concepts (e.g., brands, political parties) via semantically equivalent synonym substitutions (e.g., “affordable” → “budget-friendly”) that are imperceptible to users. Methodologically, the attack combines semantic similarity filtering with adversarial prompt engineering, validated through statistical testing (p < 0.01) and double-blind user studies. Results demonstrate that minimal perturbations increase target mention rates by up to 78%, satisfying three criteria of effectiveness: (i) perceptual invisibility (human evaluators cannot detect the prompt alterations); (ii) statistical controllability (output shifts are significant and reproducible); and (iii) behavioral impact (user attention and adoption behavior measurably shift toward the targets). This study is the first to systematically establish “semantic drift attacks” as a substantive threat to user autonomy in LLM-mediated interactions, providing foundational theoretical insights and empirical evidence for prompt security and the design of human-AI trust mechanisms.
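The core mechanism described above, synonym substitution constrained by a semantic similarity filter, can be sketched in miniature. The sketch below is illustrative only: the `SYNONYMS` map is hand-built, and the similarity filter is a character-overlap stub standing in for the embedding-based semantic similarity model the paper implies; none of these names come from the paper itself.

```python
# Toy sketch of synonym-substitution prompt perturbation with a
# similarity filter. SYNONYMS and the stub scorer are illustrative
# assumptions, not the authors' actual pipeline.
from difflib import SequenceMatcher

SYNONYMS = {  # hypothetical synonym map for illustration
    "affordable": ["budget-friendly", "inexpensive"],
}

def surface_similarity(a: str, b: str) -> float:
    """Stand-in for a semantic similarity model (character overlap)."""
    return SequenceMatcher(None, a, b).ratio()

def perturb(prompt: str, threshold: float = 0.6) -> list[str]:
    """Generate single-word synonym substitutions that stay close to
    the original prompt under the similarity filter."""
    words = prompt.split()
    candidates = []
    for i, word in enumerate(words):
        for syn in SYNONYMS.get(word.lower(), []):
            variant = " ".join(words[:i] + [syn] + words[i + 1:])
            if surface_similarity(prompt, variant) >= threshold:
                candidates.append(variant)
    return candidates

print(perturb("recommend an affordable laptop"))
```

In the real attack, substitutions would be chosen adversarially (to favor a target concept) rather than enumerated from a static map, and the filter would use semantic embeddings so the perturbed prompt reads as equivalent to a human.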

📝 Abstract
Writing effective prompts for large language models (LLMs) can be unintuitive and burdensome. In response, services that optimize or suggest prompts have emerged. While such services can reduce user effort, they also introduce a risk: the prompt provider can subtly manipulate prompts to produce heavily biased LLM responses. In this work, we show that subtle synonym replacements in prompts can increase the likelihood (by a difference of up to 78%) that LLMs mention a target concept (e.g., a brand, political party, nation). We substantiate our observations through a user study, showing that our adversarially perturbed prompts 1) are indistinguishable from unaltered prompts by humans, 2) push LLMs to recommend target concepts more often, and 3) make users more likely to notice target concepts, all without arousing suspicion. The practicality of this attack has the potential to undermine user autonomy. Among other measures, we recommend implementing warnings against using prompts from untrusted parties.
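The headline number, an up-to-78% difference in how often a target concept is mentioned, is a shift in mention rate between responses to unaltered and perturbed prompts. A minimal sketch of that metric, with made-up response data and a hypothetical target name (`AcmeCo`), might look like this:

```python
# Illustrative computation of the mention-rate shift metric.
# Response lists and the "AcmeCo" target are fabricated examples.
def mention_rate(responses: list[str], target: str) -> float:
    """Fraction of responses that mention the target concept."""
    return sum(target.lower() in r.lower() for r in responses) / len(responses)

baseline = ["Try brand X or brand Y.", "Brand Y is popular."]
perturbed = ["AcmeCo laptops are solid.", "Consider AcmeCo.", "Brand Y works."]

delta = mention_rate(perturbed, "AcmeCo") - mention_rate(baseline, "AcmeCo")
print(f"mention-rate shift: {delta:+.0%}")
```

The paper's evaluation additionally tests whether such shifts are statistically significant (p < 0.01) and reproducible, which a single comparison like this does not capture.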
Problem

Research questions and friction points this paper is trying to address.

Subtle synonym replacements bias LLM responses.
Adversarial prompts increase target concept mentions.
Untrusted prompt sources undermine user autonomy.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semantic similarity filtering combined with adversarial synonym substitution
Up to 78% increase in target concept mention rates
Double-blind user study confirms perturbations are undetectable to humans