🤖 AI Summary
To address the challenges of ambiguous queries and cross-merchant multi-hop reasoning in local life services, this paper introduces LocalSearchBench, the first vertical-domain agentic search benchmark, comprising over 150,000 real-world, multi-city, multi-vertical entries and 300 scenario-based multi-hop question-answering tasks. It proposes an evaluation paradigm for fuzzy intent understanding and multi-hop reasoning, and releases LocalPlayground, a unified interactive environment, to fill the gap in vertical retrieval evaluation. Leveraging large reasoning models (LRMs), the authors design a multi-step tool-calling and retrieval-coordination framework grounded in real merchant and product knowledge. Experiments show that even the best-performing model (DeepSeek-V3.1) achieves only 34.34% answer correctness, and most models fall short on completeness (average 77.33%) and faithfulness (average 61.99%). These results underscore LocalSearchBench's value in advancing intelligent agent research for vertical domains.
📝 Abstract
Recent advances in large reasoning models (LRMs) have enabled agentic search systems to perform complex multi-step reasoning across multiple sources. However, most studies focus on general information retrieval and rarely explore vertical domains with unique challenges. In this work, we focus on local life services and introduce LocalSearchBench, which encompasses diverse and complex business scenarios. Real-world queries in this domain are often ambiguous and require multi-hop reasoning across merchants and products, a challenge that remains largely unaddressed. As the first comprehensive benchmark for agentic search in local life services, LocalSearchBench includes over 150,000 high-quality entries spanning various cities and business types. We construct 300 multi-hop QA tasks based on real user queries, challenging agents to understand questions and retrieve information over multiple steps. We also develop LocalPlayground, a unified environment integrating multiple tools for agent interaction. Experiments show that even state-of-the-art LRMs struggle on LocalSearchBench: the best model (DeepSeek-V3.1) achieves only 34.34% correctness, and most models have issues with completeness (average 77.33%) and faithfulness (average 61.99%). This highlights the need for specialized benchmarks and domain-specific agent training in local life services. Code, benchmark, and leaderboard are available at localsearchbench.github.io.