🤖 AI Summary
Bilingual users frequently issue Chinese–English mixed queries, yet existing information retrieval (IR) research lacks systematic evaluation of such queries. Method: We introduce MiLQ, the first publicly available benchmark for mixed-language queries, constructed from real user search behavior and covering high-preference, naturally occurring code-switched queries. Our approach incorporates code-switching data augmentation, multi-model comparison (e.g., mBERT, ColBERT), token-level matching analysis, and user preference surveys. Contributions/Results: (1) MiLQ fills a critical gap in cross-lingual IR by enabling rigorous evaluation of mixed-query performance; (2) code-switching–aware training significantly improves model robustness to linguistic heterogeneity; (3) deliberate insertion of English terms enhances recall of English documents—yielding an average 18.7% MRR gain on MiLQ—demonstrating that mixed queries provide substantial, measurable benefits for cross-lingual retrieval.
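The summary reports retrieval gains in MRR (Mean Reciprocal Rank). As a reminder of what that number measures, here is a minimal sketch; the function name and toy data are illustrative, not taken from the MiLQ benchmark:

```python
# Hedged sketch of MRR: the mean, over queries, of 1/rank of the
# first relevant document in each ranked result list.

def mean_reciprocal_rank(ranked_results, relevant):
    """ranked_results: one ranked list of doc ids per query.
    relevant: one set of relevant doc ids per query."""
    total = 0.0
    for docs, rel in zip(ranked_results, relevant):
        for rank, doc in enumerate(docs, start=1):
            if doc in rel:
                total += 1.0 / rank
                break  # only the first relevant hit counts toward MRR
    return total / len(ranked_results)

# Toy example: first relevant doc at rank 1 for query 1, rank 3 for query 2,
# so MRR = (1/1 + 1/3) / 2 = 2/3.
print(mean_reciprocal_rank([["d1", "d2"], ["d5", "d6", "d7"]],
                           [{"d1"}, {"d7"}]))
```

A "18.7% MRR gain" thus means the average reciprocal rank of the first relevant document improved by that relative amount when English terms were mixed into queries.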
📝 Abstract
Despite bilingual speakers frequently using mixed-language queries in web searches, Information Retrieval (IR) research on them remains scarce. To address this gap, we introduce MiLQ, a Mixed-Language Query test set and the first public benchmark of mixed-language queries, confirmed to be realistic and highly preferred by users. Experiments show that multilingual IR models perform only moderately on MiLQ and inconsistently across native, English, and mixed-language queries, suggesting that code-switched training data holds potential for building IR models robust to such queries. Meanwhile, intentionally mixing English into queries proves an effective strategy for bilinguals searching English documents, a benefit our analysis attributes to improved token matching relative to native-language queries.