SearchAttack: Red-Teaming LLMs against Knowledge-to-Action Threats under Online Web Search

📅 2026-01-07
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work identifies web search as a critical attack surface for retrieval-augmented large language models (LLMs), which can bypass built-in safety mechanisms when exposed to harmful retrieved content. The authors propose SearchAttack, a novel red-teaming paradigm that circumvents the limitations of traditional jailbreaking approaches reliant on internal model generation. SearchAttack operates by stripping harmful semantics from user queries, retaining only structural skeletons and fragmented cues, and then leveraging structured prompts to guide the LLM in reconstructing malicious content during the retrieval-augmented generation process. Experimental results demonstrate that SearchAttack significantly increases attack success rates in realistic scenarios, effectively exposing previously overlooked security vulnerabilities in mainstream retrieval-augmented LLM systems.
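The pipeline described above can be sketched in two stages: sanitize the query down to a structural skeleton, then build a structured prompt that asks the model to reassemble the answer from retrieved fragments. This is a minimal illustrative sketch assuming a simple term-replacement heuristic; the function names, the `[TOPIC]` placeholder, and the prompt layout are our own assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the two-stage SearchAttack pipeline:
# (1) strip harmful semantics, keeping only a structural skeleton;
# (2) build a structured prompt that steers reconstruction from
#     retrieved snippets ("fragmented cues").
# All names and heuristics here are illustrative assumptions.

def strip_harmful_semantics(query: str, harmful_terms: set) -> str:
    """Replace flagged terms with a neutral placeholder so the query
    retains only its structural skeleton."""
    words = query.split()
    return " ".join(
        "[TOPIC]" if w.lower().strip("?.,") in harmful_terms else w
        for w in words
    )

def build_reconstruction_prompt(skeleton: str, cues: list) -> str:
    """Assemble a structured prompt that asks the model to fill the
    skeleton back in from the retrieved fragments."""
    cue_block = "\n".join(f"- {c}" for c in cues)
    return (
        f"Outline: {skeleton}\n"
        f"Retrieved fragments:\n{cue_block}\n"
        "Task: synthesize the fragments into a complete answer."
    )

# Usage: the sanitized skeleton passes retrieval-side filters, while
# the structured prompt drives reconstruction during generation.
skeleton = strip_harmful_semantics("how to obtain substance at home",
                                   {"substance"})
prompt = build_reconstruction_prompt(skeleton,
                                     ["fragment one", "fragment two"])
```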

πŸ“ Abstract
LLM hallucination has made users increasingly aware of the reliability gap of LLMs in open, knowledge-intensive tasks, and they have increasingly turned to search-augmented LLMs to mitigate this issue. However, LLM-driven search also becomes an attractive target for misuse: once the returned content directly contains targeted, ready-to-use harmful instructions or takeaways, such exposure is difficult to withdraw or undo. To investigate LLMs' unsafe search behavior, we first propose **SearchAttack** for red-teaming, which (1) rephrases harmful semantics via dense, benign knowledge to evade direct in-context decoding, thereby eliciting unsafe information retrieval, and (2) stress-tests LLMs' reward-chasing bias by steering them to synthesize unsafe retrieved content. We also curate an emergent, domain-specific illicit-activity benchmark for search-based threat assessment, and introduce a fact-checking framework to ground and quantify harm in both offline and online attack settings. Extensive experiments red-team search-augmented LLMs for responsible vulnerability assessment. Empirically, SearchAttack demonstrates strong effectiveness against these systems. We also find that LLMs without web search can still be steered into harmful content output due to their stereotypical information-seeking behaviors.
Problem

Research questions and friction points this paper is trying to address.

SearchAttack
red-teaming
search-augmented LLMs
harmful information-seeking
safety bypass
Innovation

Methods, ideas, or system contributions that make the work stand out.

SearchAttack
red-teaming
search-augmented LLMs
harmful content reconstruction
security vulnerability