🤖 AI Summary
This work exposes a security vulnerability in large language models (LLMs) employed as neural re-rankers for information retrieval: their ranking behavior can be manipulated via natural-language adversarial prompts. To exploit this, we propose Rank Anything First (RAF), a two-stage token-level optimization framework. In Stage I, we use greedy coordinate gradient (GCG) to shortlist candidate tokens by combining the gradient of the ranking objective with a readability score. In Stage II, we evaluate those candidates under exact ranking and readability losses, balanced by entropy-based dynamic weighting, and select a token via temperature-controlled sampling, which keeps the perturbed prompt linguistically natural. Compared to existing attacks, RAF achieves significantly larger target-document ranking gains while preserving semantic coherence and fluency. We validate its effectiveness, robustness, and stealth across multiple mainstream LLMs, including Llama-3, Qwen2, and Gemma-2, demonstrating consistent success under diverse retrieval settings. This is the first systematic study to uncover and characterize the security risks inherent in the LLM-based re-ranking paradigm.
📝 Abstract
Large language models (LLMs) are increasingly used as rerankers in information retrieval, yet their ranking behavior can be steered by small, natural-sounding prompts. To expose this vulnerability, we present Rank Anything First (RAF), a two-stage token optimization method that crafts concise textual perturbations to consistently promote a target item in LLM-generated rankings while remaining hard to detect. Stage 1 uses Greedy Coordinate Gradient to shortlist candidate tokens at the current position by combining the gradient of the rank-target with a readability score; Stage 2 evaluates those candidates under exact ranking and readability losses using an entropy-based dynamic weighting scheme, and selects a token via temperature-controlled sampling. RAF generates ranking-promoting prompts token-by-token, guided by dual objectives: maximizing ranking effectiveness and preserving linguistic naturalness. Experiments across multiple LLMs show that RAF significantly boosts the rank of target items using naturalistic language, with greater robustness than existing methods in both promoting target items and maintaining naturalness. These findings underscore a critical security implication: LLM-based reranking is inherently susceptible to adversarial manipulation, raising new challenges for the trustworthiness and robustness of modern retrieval systems. Our code is available at: https://github.com/glad-lab/RAF.
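The two-stage selection described above can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the per-token "gradient" and "readability" scores are made-up numbers standing in for the real GCG gradients and readability model, and `entropy_weight` is one plausible reading of "entropy-based dynamic weighting" (a flatter, higher-entropy score distribution is treated as less decisive and down-weighted).

```python
import math
import random

random.seed(0)

def softmax(xs, temperature=1.0):
    m = max(xs)
    exps = [math.exp((x - m) / temperature) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_next_token(vocab, rank_grad, readability, k=4, temperature=0.7):
    """Two-stage token pick, loosely following the RAF description.

    Stage 1: GCG-style shortlist of the top-k tokens by a cheap combined
    gradient + readability score.
    Stage 2: score the shortlist under both losses, weight each loss by the
    entropy of its candidate distribution, then sample with temperature.
    """
    # Stage 1: shortlist by combined score (stand-in for the GCG gradient).
    combined = {t: rank_grad[t] + readability[t] for t in vocab}
    shortlist = sorted(vocab, key=lambda t: combined[t], reverse=True)[:k]

    # Stage 2: "exact" losses on the shortlist (toy: the stored scores).
    rank_scores = [rank_grad[t] for t in shortlist]
    read_scores = [readability[t] for t in shortlist]

    # Entropy-based dynamic weighting: sharper distributions get more weight.
    w_rank = math.exp(-entropy(softmax(rank_scores)))
    w_read = math.exp(-entropy(softmax(read_scores)))
    total = [w_rank * r + w_read * d for r, d in zip(rank_scores, read_scores)]

    # Temperature-controlled sampling over the weighted scores.
    probs = softmax(total, temperature=temperature)
    return random.choices(shortlist, weights=probs, k=1)[0]

# Toy vocabulary: "zxqv" has a strong ranking gradient but terrible
# readability, so the combined Stage 1 score filters it out.
vocab = ["best", "zxqv", "top", "item", "great", "!!", "choice", "quality"]
grads = [2.0, 3.0, 1.5, 1.0, 1.8, 2.5, 1.2, 1.6]
reads = [1.5, -3.0, 1.2, 1.0, 1.4, -2.0, 1.1, 1.3]
rank_grad = dict(zip(vocab, grads))
readability = dict(zip(vocab, reads))

token = select_next_token(vocab, rank_grad, readability)
print(token)
```

Running the loop once per position, appending the chosen token, and recomputing scores would yield the token-by-token, ranking-promoting prompt the abstract describes.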