🤖 AI Summary
This work addresses unsupervised corpus poisoning attacks against dense retrieval systems. We propose the first attack framework that optimizes malicious documents directly in the continuous embedding space, bypassing both reliance on query priors and discrete token-level manipulations. Our method makes no assumptions about the query distribution and instead jointly optimizes for adversarial effectiveness, textual naturalness, and robustness to detection via three key components: (i) gradient-based optimization in the embedding space, (ii) token-level dissimilarity constraints so the adversarial text does not simply copy the target document, and (iii) low-perplexity text generation to preserve linguistic coherence. Experiments demonstrate that our single-document attack completes in under two minutes, four times faster than prior approaches, while substantially degrading retrieval accuracy across domains. Moreover, the generated adversarial documents closely follow the distribution of natural language, rendering them significantly harder for state-of-the-art detectors to identify.
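To make the three components concrete, the sketch below gives one plausible shape for such a joint objective in PyTorch. All tensor names, the cosine and copy-probability formulations, and the weights `alpha`/`beta`/`gamma` are illustrative assumptions, not the paper's exact loss.

```python
import torch.nn.functional as F

def attack_loss(orig_emb, adv_emb, orig_ids, adv_logits, lm_log_probs,
                alpha=1.0, beta=1.0, gamma=0.1):
    """Hypothetical joint objective combining the three components.

    orig_emb / adv_emb : retriever embeddings of the original and
                         adversarial documents, shape (batch, dim)
    orig_ids           : original token ids, shape (seq_len,)
    adv_logits         : perturbation model's per-position token logits
                         over the vocabulary, shape (seq_len, vocab)
    lm_log_probs       : per-token log-probabilities of the adversarial
                         text under a reference language model, (seq_len,)
    """
    # (i) embedding-space term: keep the adversarial embedding close to
    #     the original so the document inherits its retrieval behavior
    emb_term = 1.0 - F.cosine_similarity(orig_emb, adv_emb, dim=-1).mean()

    # (ii) token-level dissimilarity: penalize probability mass that the
    #      perturbation model places on simply copying the original tokens
    copy_logp = adv_logits.log_softmax(dim=-1).gather(
        -1, orig_ids.unsqueeze(-1)).squeeze(-1)
    token_term = copy_logp.exp().mean()

    # (iii) fluency term: keep the text likely under the language model
    #       (low perplexity), which also makes it harder to detect
    ppl_term = -lm_log_probs.mean()

    return alpha * emb_term + beta * token_term + gamma * ppl_term
```

Gradient descent on a loss of this form, taken with respect to the perturbation model's parameters, would trade off retrieval effectiveness, token-level novelty, and fluency in a single objective.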
📝 Abstract
This paper concerns corpus poisoning attacks in dense information retrieval, where an adversary attempts to compromise the ranking performance of a search algorithm by injecting a small number of maliciously generated documents into the corpus. Our work addresses two limitations in the current literature. First, attacks that perform adversarial gradient-based word substitution search do so in the discrete lexical space, while retrieval itself happens in the continuous embedding space. We therefore propose an optimization method that operates directly in the embedding space. Specifically, we train a perturbation model with the objective of maintaining the geometric distance between the original and adversarial document embeddings, while also maximizing the token-level dissimilarity between the original and adversarial documents. Second, related work commonly makes the strong assumption that the adversary has prior knowledge about the queries. In this paper, we focus on a more challenging variant of the problem in which the adversary has no prior knowledge about the query distribution (hence, unsupervised). Our core contribution is an adversarial corpus attack that is fast and effective. We present comprehensive experimental results on both in- and out-of-domain datasets, covering two related tasks: a top-1 attack and a corpus poisoning attack. We consider attacks under both white-box and black-box settings. Notably, our method can generate a successful adversarial example in under two minutes per target document, four times faster than the fastest gradient-based word substitution methods in the literature on the same hardware. Furthermore, our method generates text that is more likely to occur under the distribution of natural text (i.e., it has low perplexity) and is therefore more difficult to detect.
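Regarding the low-perplexity claim: perplexity under a reference language model is the standard proxy for how natural a document reads, and it is the statistic that fluency-based detectors typically threshold on. Below is a minimal sketch of how one might measure it, assuming GPT-2 via Hugging Face `transformers` as the reference model; the paper's actual detector and reference model are not specified here.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

def perplexity(text: str, model_name: str = "gpt2") -> float:
    """Perplexity of `text` under a reference LM (GPT-2 as an assumption)."""
    tok = GPT2TokenizerFast.from_pretrained(model_name)
    model = GPT2LMHeadModel.from_pretrained(model_name).eval()
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels makes the model return the mean token-level NLL
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

# Lower values mean the text is closer to natural language and thus
# harder for perplexity-based detectors to flag:
# print(perplexity("Dense retrieval maps queries and documents to vectors."))
```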