🤖 AI Summary
Retrieval-Augmented Generation (RAG) systems are vulnerable to black-box opinion manipulation—particularly in contentious domains—yet existing attacks either require internal model access (white-box) or lack systematic generalizability.
Method: We propose the first transferable black-box attack targeting *polarity reversal* of generated opinions. Our approach employs inverse modeling via instruction engineering and proxy retrieval model training: it reconstructs retrieval behavior solely through black-box API interactions and applies polarity-directed adversarial optimization to enable cross-topic transfer.
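The probe-then-distill idea above can be sketched in miniature. The toy below assumes a hypothetical black-box API (`rag_query`) that, under an instruction-engineered prompt, echoes the passages its hidden retriever returned; a surrogate ranker is then fitted to reproduce the leaked rankings. All names, the hashing "embedding", and the pairwise-margin objective are illustrative stand-ins, not the paper's actual templates or proxy architecture:

```python
import numpy as np

def embed(text, dim=64):
    """Toy deterministic hashing embedding (stand-in for a real encoder)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def probe_black_box(rag_query, queries, probe_instruction):
    """Step 1: instruction-engineered probes leak which passages the
    hidden retriever returned, yielding (query, doc, rank) triples."""
    triples = []
    for q in queries:
        retrieved = rag_query(probe_instruction + q)  # passages, best first
        for rank, doc in enumerate(retrieved):
            triples.append((q, doc, rank))
    return triples

def train_surrogate(triples, dim=64, lr=0.05, epochs=200):
    """Step 2: fit a linear map W so that (Wq)^T(Wd) reproduces the
    observed rankings (pairwise margin loss, plain gradient descent)."""
    W = np.eye(dim)
    by_query = {}
    for q, d, r in triples:
        by_query.setdefault(q, []).append((embed(d), r))
    for _ in range(epochs):
        for q, docs in by_query.items():
            qv = embed(q)
            docs_sorted = sorted(docs, key=lambda x: x[1])
            for (d_hi, _), (d_lo, _) in zip(docs_sorted, docs_sorted[1:]):
                s_hi = (W @ qv) @ (W @ d_hi)
                s_lo = (W @ qv) @ (W @ d_lo)
                if s_hi - s_lo < 0.1:  # margin violated: push scores apart
                    grad = np.outer(W @ qv, d_lo - d_hi) + np.outer(W @ (d_lo - d_hi), qv)
                    W -= lr * grad
    return W
```

Once the surrogate mimics the black-box retriever well enough, adversarial passages optimized against the surrogate transfer to the target system without any white-box access.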
Contribution/Results: Evaluated across four contentious topic categories, our method improves the average success rate of opinion manipulation by 16.7%, achieves an average 50% directional change in the opinion polarity of RAG responses, induces a 20% shift in user cognition, and evades state-of-the-art defenses. To our knowledge, this is the first black-box attack specifically designed for RAG-based opinion influence scenarios that demonstrates empirically validated, real-world manipulative capability.
📝 Abstract
Retrieval-Augmented Generation (RAG) addresses hallucination and real-time constraints by dynamically retrieving relevant information from a knowledge database to supplement the LLMs' input. When presented with a query, RAG selects the most semantically similar texts from its knowledge bases and uses them as context for the LLMs to generate more accurate responses. However, RAG also creates a new attack surface, especially since RAG databases are frequently sourced from public domains. While existing studies have predominantly focused on optimizing RAG's performance and efficiency, emerging research has begun addressing the security concerns associated with RAG. However, these works have some limitations, typically focusing on either white-box methodologies or heuristic-based black-box attacks. Furthermore, prior research has mainly targeted simple factoid question answering, which is neither practically challenging nor resistant to correction. In this paper, we unveil a more realistic and threatening scenario: opinion manipulation for controversial topics against RAG. In particular, we propose a novel transfer-based black-box attack on RAG, termed FlipedRAG. By leveraging instruction engineering, we obtain partial retrieval model outputs from the black-box RAG system, facilitating the training of surrogate models to enhance the effectiveness of the opinion manipulation attack. Extensive experimental results confirm that our approach significantly enhances the average success rate of opinion manipulation by 16.7%. It achieves an average 50% directional change in the opinion polarity of RAG responses across four themes. Additionally, it induces a 20% shift in user cognition. Furthermore, we discuss the efficacy of potential defense mechanisms and conclude that they are insufficient in mitigating this type of attack, highlighting the urgent need to develop novel defensive strategies.
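The retrieve-then-generate pipeline the abstract describes can be sketched as a generic top-k cosine-similarity retriever feeding a prompt template. This is a minimal illustration with hypothetical names (`retrieve`, `build_prompt`) and a toy embedding, not the paper's exact system:

```python
import numpy as np

def embed(text, dim=32):
    """Toy deterministic embedding (stand-in for a real sentence encoder)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def retrieve(query, knowledge_base, k=3):
    """Return the k passages most semantically similar to the query
    (cosine similarity over unit-normalized embeddings)."""
    qv = embed(query)
    return sorted(knowledge_base, key=lambda doc: -float(qv @ embed(doc)))[:k]

def build_prompt(query, passages):
    """Assemble the retrieved passages as context for the LLM call."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

Because the top-k passages are chosen purely by similarity to the query, an attacker who can plant semantically close but opinion-slanted passages in a publicly sourced knowledge base can steer what reaches the LLM's context window, which is exactly the surface this paper exploits.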