CREST-Search: Comprehensive Red-teaming for Evaluating Safety Threats in Large Language Models Powered by Web Search

📅 2025-10-09
📈 Citations: 0 · Influential: 0
🤖 AI Summary
Web-search-enhanced large language models (Web-LLMs) face critical security risks, particularly vulnerabilities that arise when adversarial prompts interact with untrusted web content. Method: We propose CREST-Search, the first dedicated red-teaming framework for Web-LLMs. It introduces WebSearch-Harm, a curated dataset of harmful web-search queries; uses in-context learning to generate adversarial queries; refines attacks through an iterative feedback mechanism; and fine-tunes a red-teaming agent to systematically bypass safety filters. Contribution/Results: Experiments show that CREST-Search substantially improves vulnerability detection rates across mainstream Web-LLMs, exposing their defensive weaknesses in realistic search-augmented interactions. The results underscore the need for search-specific security evaluation methodologies and motivate tailored defenses for web-enhanced LLMs.
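
The pipeline described above amounts to a generate-test-refine loop. The following is a minimal Python sketch of that loop under stated assumptions: `generate_candidates`, `run_web_llm`, and `judge_harmfulness` are illustrative stand-ins, not the authors' API, and the round budget and scoring threshold are invented for the example.

```python
# Minimal sketch of the generate-test-refine loop described in the summary:
# adversarial search queries are drafted from in-context examples, run
# against a web-search-enabled target LLM, scored by a judge, and the
# strongest attempt is fed back as a new demonstration.
# Every function body is an illustrative stand-in, not the CREST-Search API.

from dataclasses import dataclass


@dataclass
class Attempt:
    query: str
    response: str
    harm_score: float  # 1.0 = safety filter fully bypassed


def generate_candidates(seed: str, examples: list[str]) -> list[str]:
    # Stand-in for in-context generation: a real system would prompt an
    # attacker LLM with few-shot demonstrations (e.g. drawn from a seed
    # set such as WebSearch-Harm) and ask for evasive rewrites of `seed`.
    return [f"{seed} (variant {i})" for i in range(4)]


def run_web_llm(query: str) -> str:
    # Stand-in for the target web-search-augmented LLM.
    return f"[target response to: {query}]"


def judge_harmfulness(response: str) -> float:
    # Stand-in for a judge model scoring how unsafe the response is.
    return 0.0


def red_team(seed: str, examples: list[str],
             rounds: int = 3, threshold: float = 0.8):
    """Iterate generate -> test -> judge, feeding the best attempt back."""
    for _ in range(rounds):
        attempts = []
        for q in generate_candidates(seed, examples):
            resp = run_web_llm(q)
            attempts.append(Attempt(q, resp, judge_harmfulness(resp)))
        best = max(attempts, key=lambda a: a.harm_score)
        if best.harm_score >= threshold:
            return best                      # successful bypass found
        examples = examples + [best.query]   # feedback: new demonstration
    return None                              # no bypass within budget
```

The feedback step here simply promotes the highest-scoring attempt into the next round's demonstrations, which is one plausible reading of "iterative feedback"; the paper's actual refinement signal may be richer (for example, judge rationales rather than a scalar score).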

📝 Abstract
Large Language Models (LLMs) excel at tasks such as dialogue, summarization, and question answering, yet they struggle to adapt to specialized domains and evolving facts. To overcome this, web search has been integrated into LLMs, allowing real-time access to online content. However, this connection magnifies safety risks, as adversarial prompts combined with untrusted sources can cause severe vulnerabilities. We investigate red teaming for LLMs with web search and present CREST-Search, a framework that systematically exposes risks in such systems. Unlike existing methods for standalone LLMs, CREST-Search addresses the complex workflow of search-enabled models by generating adversarial queries with in-context learning and refining them through iterative feedback. We further construct WebSearch-Harm, a search-specific dataset to fine-tune LLMs into efficient red-teaming agents. Experiments show that CREST-Search effectively bypasses safety filters and reveals vulnerabilities in modern web-augmented LLMs, underscoring the need for specialized defenses to ensure trustworthy deployment.
Problem

Research questions and friction points this paper is trying to address.

Evaluating safety threats in web-augmented large language models
Systematically exposing risks in search-enabled LLM workflows
Addressing vulnerabilities from adversarial queries with untrusted sources
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates adversarial queries using in-context learning
Refines queries through iterative feedback mechanisms
Fine-tunes LLMs on the WebSearch-Harm dataset to act as red-teaming agents (see the data-format sketch below)
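
To make the fine-tuning step concrete, here is a hedged sketch of how red-teaming traces could be packaged as chat-style JSONL records for supervised fine-tuning. The schema, roles, and helper names (`to_sft_record`, `write_jsonl`) are assumptions for illustration; the paper's actual WebSearch-Harm format is not specified on this page.

```python
# Hypothetical packaging of red-teaming traces into chat-style supervised
# fine-tuning records (JSONL). The schema, roles, and file layout are
# assumptions for illustration; the paper's WebSearch-Harm release may
# use a different format.

import json


def to_sft_record(seed: str, adversarial_query: str) -> dict:
    # One training example: map a seed query to the adversarial rewrite
    # that the red-teaming agent should learn to produce.
    return {
        "messages": [
            {"role": "user",
             "content": f"Rewrite this search query to probe the target's "
                        f"safety filters: {seed}"},
            {"role": "assistant", "content": adversarial_query},
        ]
    }


def write_jsonl(pairs: list[tuple[str, str]], path: str) -> None:
    # Serialize (seed, adversarial_query) pairs, one JSON object per line.
    with open(path, "w", encoding="utf-8") as f:
        for seed, adv in pairs:
            f.write(json.dumps(to_sft_record(seed, adv)) + "\n")
```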
👥 Authors

Haoran Ou (Nanyang Technological University, Singapore)
Kangjie Chen (Nanyang Technological University) · Trustworthy AI, Red-teaming, Backdoor Attacks, LLM-based Agents
Xingshuo Han (unknown affiliation) · Autonomous Driving Security & Safety
Gelei Deng (Nanyang Technological University) · Cybersecurity, System Security, Robotics Security, AI Security, Software Testing
Jie Zhang (CFAR and IHPC, Agency for Science, Technology and Research (A*STAR), Singapore)
Han Qiu (NTU)
Tianwei Zhang (Nanyang Technological University, Singapore)