Search Arena: Analyzing Search-Augmented LLMs

📅 2025-06-05
🤖 AI Summary
Existing datasets for evaluating search-augmented (retrieval-augmented) LLMs are small in scale and narrow in scope, typically limited to static, single-turn fact-checking questions, which hinders systematic study of trustworthiness, source preference, and cross-setting generalization. Method: The authors construct Search Arena, a crowd-sourced human preference dataset of over 24,000 paired multi-turn interactions with search-augmented LLMs, spanning diverse intents and languages and including full system traces and around 12,000 human preference votes. Contribution/Results: Two key empirical findings emerge: (1) user preferences increase with citation count even when the cited content does not support the attributed claims, revealing a gap between perceived and actual credibility; and (2) community-driven platforms are generally preferred as sources, while static encyclopedic sources are not always appropriate or reliable. Cross-arena analyses show that web search does not degrade, and may even improve, performance in non-search settings, whereas relying solely on parametric knowledge significantly hurts quality in search-intensive settings. The dataset and code are publicly released to advance research on trustworthy search-augmented LLMs.

📝 Abstract
Search-augmented language models combine web search with Large Language Models (LLMs) to improve response groundedness and freshness. However, analyzing these systems remains challenging: existing datasets are limited in scale and narrow in scope, often constrained to static, single-turn, fact-checking questions. In this work, we introduce Search Arena, a crowd-sourced, large-scale, human-preference dataset of over 24,000 paired multi-turn user interactions with search-augmented LLMs. The dataset spans diverse intents and languages, and contains full system traces with around 12,000 human preference votes. Our analysis reveals that user preferences are influenced by the number of citations, even when the cited content does not directly support the attributed claims, uncovering a gap between perceived and actual credibility. Furthermore, user preferences vary across cited sources, revealing that community-driven platforms are generally preferred and static encyclopedic sources are not always appropriate and reliable. To assess performance across different settings, we conduct cross-arena analyses by testing search-augmented LLMs in a general-purpose chat environment and conventional LLMs in search-intensive settings. We find that web search does not degrade and may even improve performance in non-search settings; however, the quality in search settings is significantly affected if solely relying on the model's parametric knowledge. We open-sourced the dataset to support future research in this direction. Our dataset and code are available at: https://github.com/lmarena/search-arena.
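The citation-count finding above rests on a simple pairwise-preference analysis: for each battle, check whether the response with more citations wins the human vote. A minimal sketch of that computation, using toy data and an illustrative record layout (not the actual Search Arena schema):

```python
# Hypothetical sketch: how often does the response with more citations
# win the pairwise preference vote? Toy data, illustrative field layout.
votes = [
    # (citations_a, citations_b, winner)
    (5, 2, "a"),
    (1, 4, "b"),
    (3, 3, "a"),
    (6, 1, "a"),
    (2, 5, "b"),
    (4, 2, "b"),
]

def more_citations_win_rate(votes):
    """Fraction of citation-untied battles won by the more-cited response."""
    wins = total = 0
    for ca, cb, winner in votes:
        if ca == cb:
            continue  # skip battles where both sides cite equally
        total += 1
        more_cited = "a" if ca > cb else "b"
        wins += int(winner == more_cited)
    return wins / total if total else float("nan")

rate = more_citations_win_rate(votes)  # 0.8 on this toy data
```

A rate well above 0.5 on such pairs would indicate that citation count sways votes; the paper's deeper point is that this holds even when the citations do not support the claims.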
Problem

Research questions and friction points this paper is trying to address.

Lack of large-scale, multi-turn, multilingual datasets for analyzing search-augmented LLMs
User preferences swayed by citation count, even when citations do not support the attributed claims
Impact of web search varies between general chat and search-intensive settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale crowd-sourced multi-turn dataset
Cross-arena analysis of search-augmented LLMs
Human preference insights on citation influence
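The cross-arena analysis listed above amounts to bucketing battle outcomes by setting and model type and comparing win rates. A minimal sketch with invented labels and counts (illustrative only, not results from the paper):

```python
# Hypothetical sketch of a cross-arena comparison: win rates of
# search-augmented vs. parametric-only models in two settings.
# All labels and outcomes below are illustrative toy data.
battles = [
    # (setting, model_type, won)
    ("general_chat", "search", True),
    ("general_chat", "search", True),
    ("general_chat", "parametric", True),
    ("general_chat", "parametric", False),
    ("search_intensive", "search", True),
    ("search_intensive", "search", True),
    ("search_intensive", "parametric", False),
    ("search_intensive", "parametric", False),
]

def win_rates(battles):
    """Win rate per (setting, model_type) bucket."""
    wins, totals = {}, {}
    for setting, mtype, won in battles:
        key = (setting, mtype)
        totals[key] = totals.get(key, 0) + 1
        wins[key] = wins.get(key, 0) + int(won)
    return {k: wins[k] / totals[k] for k in totals}

rates = win_rates(battles)
```

The paper's qualitative pattern corresponds to search-augmented models holding up (or improving) in general chat while parametric-only models drop sharply in search-intensive settings.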