StealthRank: LLM Ranking Manipulation via Stealthy Prompt Optimization

📅 2025-04-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work introduces a stealthy ranking manipulation attack against LLM-driven product recommendation systems: adversarial text sequences are embedded into item descriptions to elevate target items’ rankings while evading anomaly detection. Methodologically, we propose the first implicit prompt optimization framework grounded in energy-based modeling and Langevin dynamics, generating StealthRank Prompts (SRPs) that jointly optimize attack efficacy and textual stealth—ensuring high fluency and low detectability. Experiments across multiple state-of-the-art LLMs demonstrate significant improvements in target-item ranking positions, with attack success rates surpassing existing SOTA baselines; notably, the stealth metric improves by up to 42%, effectively bypassing current textual anomaly detection mechanisms.

📝 Abstract
The integration of large language models (LLMs) into information retrieval systems introduces new attack surfaces, particularly for adversarial ranking manipulations. We present StealthRank, a novel adversarial ranking attack that manipulates LLM-driven product recommendation systems while maintaining textual fluency and stealth. Unlike existing methods that often introduce detectable anomalies, StealthRank employs an energy-based optimization framework combined with Langevin dynamics to generate StealthRank Prompts (SRPs): adversarial text sequences embedded within product descriptions that subtly yet effectively influence LLM ranking mechanisms. We evaluate StealthRank across multiple LLMs, demonstrating its ability to covertly boost the ranking of target products while avoiding explicit manipulation traces that can be easily detected. Our results show that StealthRank consistently outperforms state-of-the-art adversarial ranking baselines in both effectiveness and stealth, highlighting critical vulnerabilities in LLM-driven recommendation systems.
Problem

Research questions and friction points this paper is trying to address.

Manipulating LLM-driven product recommendation systems stealthily
Generating adversarial text sequences to influence ranking mechanisms
Exposing vulnerabilities in LLM-based information retrieval systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Energy-based optimization for stealthy prompts
Langevin dynamics to generate adversarial sequences
Covert ranking manipulation without detectable traces
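The energy-plus-Langevin recipe above can be illustrated with a toy sketch. Everything here is an illustrative assumption rather than the paper's implementation: the energy combines a stand-in "ranking reward" with a KL-style fluency penalty against a uniform toy language model, and the gradient is computed by finite differences instead of backpropagation. The point is the Langevin update rule itself: a gradient step on the energy plus scaled Gaussian noise.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, SEQ_LEN = 50, 8  # toy vocabulary size and prompt length

def softmax(x):
    z = x - x.max(axis=1, keepdims=True)  # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def energy(logits, target_emb, fluency_lm):
    """Hypothetical energy: low when the relaxed prompt both promotes the
    target item (ranking term) and stays close to a fluency model's
    token distribution (stealth term)."""
    probs = softmax(logits)
    rank_term = -np.sum(probs @ target_emb)  # stand-in ranking reward
    fluency_term = np.sum(probs * (np.log(probs + 1e-9) - np.log(fluency_lm)))
    return rank_term + 0.5 * fluency_term

def grad_energy(logits, target_emb, fluency_lm, eps=1e-4):
    """Finite-difference gradient; a real implementation would backprop
    through the ranking LLM and the fluency model."""
    g = np.zeros_like(logits)
    base = energy(logits, target_emb, fluency_lm)
    for idx in np.ndindex(logits.shape):
        bump = logits.copy()
        bump[idx] += eps
        g[idx] = (energy(bump, target_emb, fluency_lm) - base) / eps
    return g

# Langevin dynamics over a continuous relaxation of the prompt:
# theta <- theta - step * grad E(theta) + sqrt(2 * step) * noise
logits = rng.normal(size=(SEQ_LEN, VOCAB))
target_emb = rng.normal(size=VOCAB)
fluency_lm = np.full(VOCAB, 1.0 / VOCAB)  # uniform toy LM
step = 0.1
for _ in range(50):
    noise = rng.normal(size=logits.shape)
    logits = logits - step * grad_energy(logits, target_emb, fluency_lm) \
             + np.sqrt(2 * step) * noise

prompt_tokens = logits.argmax(axis=1)  # discretize the relaxed prompt
```

The noise term is what distinguishes Langevin dynamics from plain gradient descent: it turns the optimization into sampling from a distribution concentrated on low-energy (effective yet fluent) prompts, rather than collapsing to a single, potentially detectable minimum.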
👥 Authors
Yiming Tang (University of Southern California)
Yi Fan (University of Southern California)
Chenxiao Yu (University of Southern California)
Tiankai Yang (University of Southern California)
Yue Zhao (University of Southern California)
Xiyang Hu (PhD, Carnegie Mellon University)
Machine Learning · Trustworthy · Human-AI · Generative AI · Out of Distribution