CricBench: A Multilingual Benchmark for Evaluating LLMs in Cricket Analytics

📅 2025-12-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing large language models (LLMs) face three key challenges in cricket analytics: inadequate modeling of domain-specific nuances, limited comprehension of complex schema variations, and insufficient multilingual (particularly English–Hindi code-mixed) SQL generation capability; these gaps are compounded by the absence of a dedicated evaluation benchmark. To address them, we introduce CricBench, the first multilingual (English/Hindi) Text-to-SQL benchmark tailored for cricket analytics, comprising expert-crafted gold-standard SQL queries and a rigorous logical-equivalence evaluation protocol. Key findings include: (1) Hindi–English code-mixed prompts achieve performance on par with or better than English-only prompts for domain-specific SQL generation; (2) general-purpose models (e.g., GPT-4o) suffer over 20% accuracy degradation on CricBench compared to the BIRD benchmark, demonstrating a significant decoupling between general and domain-specific capabilities; and (3) DeepSeek-R1 achieves state-of-the-art accuracy at 50.6%.
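The summary mentions a logical-equivalence evaluation protocol without detailing it. A common baseline for judging Text-to-SQL output is execution-accuracy matching: run the predicted and gold queries against the same database and compare result sets. The sketch below illustrates that idea with a hypothetical toy cricket schema; the table, columns, and values are illustrative assumptions, not taken from CricBench.

```python
import sqlite3

def execution_match(pred_sql: str, gold_sql: str, setup_sql: str) -> bool:
    """Return True if both queries yield the same multiset of rows.

    Row order is treated as insignificant, a common relaxation when the
    gold query's ORDER BY is not part of the question's intent.
    """
    conn = sqlite3.connect(":memory:")
    try:
        conn.executescript(setup_sql)
        pred_rows = conn.execute(pred_sql).fetchall()
        gold_rows = conn.execute(gold_sql).fetchall()
    except sqlite3.Error:
        return False  # a query that fails to execute cannot match
    finally:
        conn.close()
    return sorted(map(repr, pred_rows)) == sorted(map(repr, gold_rows))

# Hypothetical toy schema for illustration only.
SETUP = """
CREATE TABLE batting (player TEXT, runs INTEGER, year INTEGER);
INSERT INTO batting VALUES ('Kohli', 973, 2016), ('Rohit', 489, 2016),
                           ('Kohli', 308, 2017);
"""

gold = "SELECT player, SUM(runs) FROM batting GROUP BY player"
pred = "SELECT player, SUM(runs) AS total FROM batting GROUP BY player ORDER BY total DESC"
print(execution_match(pred, gold, SETUP))  # True: same rows despite different surface form
```

Execution matching is weaker than true logical equivalence (two queries can coincide on one database instance yet differ on another), which is presumably why the paper pairs expert-authored gold queries with a stricter protocol.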

📝 Abstract
Cricket is the second most popular sport in the world, commanding a massive following of over 2.5 billion fans globally. Enthusiasts and analysts frequently seek advanced statistical insights, such as long-term historical performance trends or complex player comparisons, that are often unavailable through standard web searches. While Large Language Models (LLMs) have advanced significantly in Text-to-SQL tasks, their capability to handle the domain-specific nuances, complex schema variations, and multilingual requirements inherent to sports analytics remains under-explored. To investigate this potential capability gap, we present CricBench, a comprehensive benchmark suite for evaluating LLMs on specialized cricket data. To curate a "Gold Standard" dataset, we collaborate with domain experts in cricket and SQL to manually author complex queries, ensuring logical correctness. Recognizing linguistic diversity, we construct the benchmark in both English and Hindi, establishing a framework that is open to further extension to other regional languages. We evaluate six state-of-the-art models, including GPT-4o, Claude 3.7 Sonnet, and open-source models, using a strict evaluation protocol. Our results reveal that high performance on general benchmarks does not guarantee success in specialized domains. While the open-weights reasoning model DeepSeek-R1 achieves state-of-the-art performance (50.6%), surpassing proprietary giants like Claude 3.7 Sonnet (47.7%) and GPT-4o (33.7%), it still exhibits a significant accuracy drop when moving from general benchmarks (BIRD) to CricBench. Furthermore, we observe that code-mixed Hindi queries frequently yield parity or higher accuracy compared to English, challenging the assumption that English is the optimal prompt language for specialized SQL tasks.
Problem

Research questions and friction points this paper is trying to address.

Evaluates LLMs' ability to handle cricket-specific analytics queries
Assesses performance on multilingual and complex SQL schema variations
Investigates accuracy gap between general and specialized domain benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual benchmark for cricket analytics evaluation
Gold standard dataset with expert-authored complex queries
Evaluation reveals domain-specific performance gaps in LLMs
Vaibhav Devraj
Birla Institute of Technology and Science (BITS), Pilani
Dhruv Kumar
Birla Institute of Technology and Science (BITS), Pilani
Jagat Sesh Challa
Assistant Professor, Department of Computer Science & Information Systems, BITS Pilani
Big Data Analytics · Computer Vision · Federated Learning · Materials Informatics · HCI