AdaRewriter: Unleashing the Power of Prompting-based Conversational Query Reformulation via Test-Time Adaptation

📅 2025-06-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In conversational search, users' ambiguous queries pose significant challenges for precise reformulation into standalone search queries. To address this, we propose a test-time adaptive, prompt-driven query rewriting framework that operates via black-box API calls, requiring no access to large language model (LLM) parameters and no fine-tuning. The method dynamically optimizes rewriting outputs during inference. Key contributions include: (1) a lightweight reward model trained with a contrastive ranking loss; and (2) an end-to-end, tuning-free test-time adaptation strategy that combines outcome-supervised reward modeling with Best-of-N re-ranking to select the optimal rewrite. Extensive experiments across five conversational search benchmarks demonstrate substantial improvements over state-of-the-art methods. Moreover, the approach exhibits strong robustness and generalization, achieving consistent performance on unseen domains and across diverse mainstream commercial LLM APIs.
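The contrastive ranking loss mentioned above can be illustrated with a toy pairwise margin formulation. This is a hedged sketch, not the paper's exact objective: the pairwise structure, the `margin` value, and the use of outcome-based ranks as supervision are assumptions for illustration.

```python
def contrastive_ranking_loss(scores, ranks, margin=1.0):
    """Average pairwise margin loss over candidate rewrites.

    scores[i] -- reward model's score for candidate i
    ranks[i]  -- outcome-based rank of candidate i (lower = better
                 downstream retrieval), supplying the supervision signal
    """
    loss, pairs = 0.0, 0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if ranks[i] < ranks[j]:  # candidate i should outscore candidate j
                loss += max(0.0, margin - (scores[i] - scores[j]))
                pairs += 1
    return loss / max(pairs, 1)
```

When the reward model already orders candidates consistently with retrieval outcomes (and by at least the margin), the loss is zero; misordered pairs are penalized in proportion to how badly they are inverted.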

📝 Abstract
Prompting-based conversational query reformulation has emerged as a powerful approach for conversational search, refining ambiguous user queries into standalone search queries. Best-of-N reformulation over the candidates generated via prompting shows impressive scaling potential. However, neither previous tuning methods (training time) nor adaptation approaches (test time) can fully unleash these benefits. In this paper, we propose AdaRewriter, a novel framework for query reformulation using an outcome-supervised reward model via test-time adaptation. By training a lightweight reward model with a contrastive ranking loss, AdaRewriter selects the most promising reformulation during inference. Notably, it can operate effectively in black-box systems, including commercial LLM APIs. Experiments on five conversational search datasets show that AdaRewriter significantly outperforms existing methods across most settings, demonstrating the potential of test-time adaptation for conversational query reformulation.
Problem

Research questions and friction points this paper is trying to address.

Reformulating ambiguous user queries into standalone search queries
Improving query reformulation via test-time adaptation
Selecting optimal reformulations using a lightweight reward model
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses test-time adaptation for query reformulation
Employs contrastive ranking loss for reward model
Operates effectively in black-box LLM systems
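The Best-of-N selection step above can be sketched as follows. The scoring function here is a hypothetical stand-in for the paper's trained reward model (which scores rewrites by predicted retrieval outcome); this toy version merely counts conversation terms the rewrite makes explicit, so only the selection mechanism is faithful.

```python
import re

def _terms(text):
    """Lowercased word set with punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def score_rewrite(conversation, rewrite):
    """Toy proxy for the reward model: count conversation terms
    the candidate rewrite resolves explicitly (illustration only)."""
    return len(_terms(" ".join(conversation)) & _terms(rewrite))

def best_of_n(conversation, candidates):
    """Best-of-N re-ranking: return the highest-scoring candidate rewrite."""
    return max(candidates, key=lambda r: score_rewrite(conversation, r))
```

For example, given the turns "Who directed Inception?" / "Christopher Nolan." and the candidates "When was it released?" and "When was Inception released?", the second candidate scores higher because it resolves the pronoun, so `best_of_n` selects it. Because only candidate scores are needed, this selection works even when the rewriter is a black-box commercial LLM API.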
Yilong Lai
Southeast University
Natural Language Processing · Large Language Model
Jialong Wu
School of Computer Science and Engineering, Key Laboratory of Computer Network and Information Integration, Ministry of Education, Southeast University, China
Zhenglin Wang
Southeast University
Natural Language Processing · Efficient NLP
Deyu Zhou
Professor, School of Computer Science and Engineering, SEU
Natural Language Processing