HLTCOE at TREC 2024 NeuCLIR Track

šŸ“… 2025-09-30
šŸ“ˆ Citations: 0
✨ Influential: 0
šŸ¤– AI Summary
This work addresses cross-lingual/multilingual information retrieval (CLIR/MLIR) and cross-lingual report generation. Methodologically, the team (1) trains PLAID retrieval models with three strategies, Translate Distill (TD), Generate Distill (GD), and Multilingual Translate-Distill (MTD), and experiments with how multilingual training data is batched; (2) builds LLM-based report generators that decompose topics into sub-questions, extract and group facts across retrieved documents, verify and combine candidate summaries, and attach source documents as citations; and (3) combines mT5, GPT-4o, GPT-3.5-Turbo, and Claude-3.5-Sonnet components via reranking, score fusion, and system combination. Runs were submitted to all NeuCLIR tasks: the news CLIR and MLIR tasks, the technical-documents task, and the report generation task.

šŸ“ Abstract
The HLTCOE team applied PLAID, an mT5 reranker, a GPT-4 reranker, score fusion, and document translation to the TREC 2024 NeuCLIR track. For PLAID we included a variety of models and training techniques: Translate Distill (TD), Generate Distill (GD), and Multilingual Translate-Distill (MTD). TD uses scores from the mT5 model over English MS MARCO query-document pairs to learn how to score query-document pairs where the documents are translated to match the CLIR setting. GD follows TD but uses passages from the collection and queries generated by an LLM as training examples. MTD uses MS MARCO translated into multiple languages, allowing experiments on how to batch the data during training. Finally, for report generation we experimented with system combination over different runs. One family of systems used either GPT-4o or Claude-3.5-Sonnet to summarize the retrieved results from a series of decomposed sub-questions; another system took the output from those two models and verified/combined them with Claude-3.5-Sonnet. The other family used GPT-4o and GPT-3.5-Turbo to extract and group relevant facts from the retrieved documents based on the decomposed queries. The resulting submissions directly concatenate the grouped facts to form the report, citing the documents they came from. The team submitted runs to all NeuCLIR tasks: the CLIR and MLIR news tasks as well as the technical-documents task and the report generation task.
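Translate Distill, as described above, is a per-query distillation: a teacher (mT5) scores English query-passage pairs, and a CLIR student is trained to match the teacher's score distribution on the translated passages. A minimal pure-Python sketch of such a loss, assuming a KL-divergence objective (the function names are illustrative, not from the paper):

```python
import math

def softmax(scores):
    """Convert raw relevance scores to a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def translate_distill_loss(teacher_scores, student_scores):
    """KL(teacher || student) over one query's candidate passages.

    teacher_scores: mT5 scores for the English query / English passages.
    student_scores: CLIR student scores for the same query paired with
                    the translated passages (assumed same order).
    """
    p = softmax(teacher_scores)
    q = softmax(student_scores)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

When the student reproduces the teacher's ranking distribution exactly, the loss is zero; any divergence yields a positive penalty, so minimizing it pushes the student toward the teacher's relevance judgments without needing relevance labels in the document language.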
Problem

Research questions and friction points this paper is trying to address.

Improving cross-language information retrieval for multilingual documents
Developing training techniques for neural reranking in CLIR settings
Generating automated reports from retrieved multilingual search results
Innovation

Methods, ideas, or system contributions that make the work stand out.

Trained PLAID retrieval models with Translate Distill, Generate Distill, and Multilingual Translate-Distill, reranking with mT5
Used GPT-4o and Claude-3.5-Sonnet to summarize retrieved results for cross-lingual reports
Extracted and grouped relevant facts from retrieved documents using GPT models, citing source documents
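The abstract also mentions score fusion over multiple runs. The exact fusion method is not specified here; one standard, rank-based recipe (an assumption for illustration, not the paper's confirmed method) is reciprocal rank fusion:

```python
def reciprocal_rank_fusion(runs, k=60):
    """Fuse several ranked lists of doc ids into one list.

    Each run is a list of doc ids ordered best-first; a document's fused
    score is the sum of 1/(k + rank) over every run it appears in.
    """
    fused = {}
    for run in runs:
        for rank, doc in enumerate(run, start=1):
            fused[doc] = fused.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(fused, key=fused.get, reverse=True)
```

Because it uses only ranks, this kind of fusion sidesteps the score-calibration differences between heterogeneous systems such as PLAID and LLM rerankers; fusing raw scores instead would typically require per-run normalization first.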