IR2: Information Regularization for Information Retrieval

📅 2024-02-25
🏛️ International Conference on Language Resources and Evaluation
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
To address overfitting of synthetic data in complex query information retrieval under few-shot settings, this paper proposes the first IR-oriented multi-stage information regularization framework. Methodologically, it introduces regularization into large language model–driven synthetic query generation for the first time, designing plug-and-play regularization strategies across three stages: input (perturbation), prompt (constraint), and output (calibration), integrated with task-adaptive loss control. The contributions are threefold: (1) it establishes the first data generation paradigm explicitly optimized for complex-query IR; (2) it achieves significant improvements over state-of-the-art synthetic methods on three major benchmarks—DORIS-MAE, ArguAna, and WhatsThatBook—with substantial gains in average retrieval effectiveness; and (3) it reduces synthetic data generation cost by 50%.

Technology Category

Application Category

📝 Abstract
Effective information retrieval (IR) in settings with limited training data, particularly for complex queries, remains a challenging task. This paper introduces IR2, Information Regularization for Information Retrieval, a technique for reducing overfitting during synthetic data generation. This approach, representing a novel application of regularization techniques in synthetic data creation for IR, is tested on three recent IR tasks characterized by complex queries: DORIS-MAE, ArguAna, and WhatsThatBook. Experimental results indicate that our regularization techniques not only outperform previous synthetic query generation methods on the tasks considered but also reduce cost by up to 50%. Furthermore, this paper categorizes and explores three regularization methods at different stages of the query synthesis pipeline—input, prompt, and output—each offering varying degrees of performance improvement compared to models where no regularization is applied. This provides a systematic approach for optimizing synthetic data generation in data-limited, complex-query IR scenarios. All code, prompts and synthetic data are available at https://github.com/Info-Regularization/Information-Regularization.
Problem

Research questions and friction points this paper is trying to address.

Addresses limited training data in complex query information retrieval
Reduces overfitting during synthetic data generation for IR
Optimizes synthetic data generation in data-limited IR scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Regularization reduces overfitting in synthetic data
Three-stage regularization improves query synthesis pipeline
Cost-effective synthetic data generation for complex queries
🔎 Similar Papers
No similar papers found.
J
Jianyou Wang
Laboratory for Emerging Intelligence, University of California, San Diego
K
Kaicheng Wang
Laboratory for Emerging Intelligence, University of California, San Diego
X
Xiaoyue Wang
Laboratory for Emerging Intelligence, University of California, San Diego
W
Weili Cao
Laboratory for Emerging Intelligence, University of California, San Diego
R
R. Paturi
Laboratory for Emerging Intelligence, University of California, San Diego
Leon Bergen
Leon Bergen
Associate Professor, UCSD
Computational Linguistics