MiLQ: Benchmarking IR Models for Bilingual Web Search with Mixed Language Queries

📅 2025-05-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Bilingual users frequently issue Chinese–English mixed queries, yet existing information retrieval (IR) research lacks systematic evaluation of such queries. Method: We introduce MiLQ, the first publicly available benchmark for mixed-language queries, constructed from real user search behavior and covering high-preference, naturally occurring code-switched queries. Our approach incorporates code-switching data augmentation, multi-model comparison (e.g., mBERT, ColBERT), token-level matching analysis, and user preference surveys. Contributions/Results: (1) MiLQ fills a critical gap in cross-lingual IR by enabling rigorous evaluation of mixed-query performance; (2) code-switching–aware training significantly improves model robustness to linguistic heterogeneity; (3) deliberate insertion of English terms enhances recall of English documents—yielding an average 18.7% MRR gain on MiLQ—demonstrating that mixed queries provide substantial, measurable benefits for cross-lingual retrieval.

Technology Category

Application Category

📝 Abstract
Despite bilingual speakers frequently using mixed-language queries in web searches, Information Retrieval (IR) research on them remains scarce. To address this, we introduce MiLQ,Mixed-Language Query test set, the first public benchmark of mixed-language queries, confirmed as realistic and highly preferred. Experiments show that multilingual IR models perform moderately on MiLQ and inconsistently across native, English, and mixed-language queries, also suggesting code-switched training data's potential for robust IR models handling such queries. Meanwhile, intentional English mixing in queries proves an effective strategy for bilinguals searching English documents, which our analysis attributes to enhanced token matching compared to native queries.
Problem

Research questions and friction points this paper is trying to address.

Lack of research on mixed-language queries in IR
Need for robust IR models handling mixed-language queries
Effectiveness of English mixing in bilingual document searches
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces first public mixed-language query benchmark
Tests multilingual IR models on diverse queries
Proposes code-switched training for robust IR
🔎 Similar Papers
No similar papers found.
J
Jonghwi Kim
Graduate School of Artificial Intelligence, POSTECH, Republic of Korea
D
Deokhyung Kang
Graduate School of Artificial Intelligence, POSTECH, Republic of Korea
S
Seonjeong Hwang
Graduate School of Artificial Intelligence, POSTECH, Republic of Korea
Yunsu Kim
Yunsu Kim
aiXplain, Inc.
Natural Language ProcessingMachine TranslationMachine Learning
Jungseul Ok
Jungseul Ok
Associate Professor, CSE/AI, POSTECH
Reinforcement LearningMachine Learning
Gary Lee
Gary Lee
Graduate School of Artificial Intelligence, POSTECH, Republic of Korea; Department of Computer Science and Engineering, POSTECH, Republic of Korea