Clarifying Ambiguities: on the Role of Ambiguity Types in Prompting Methods for Clarification Generation

📅 2025-04-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Ambiguous user queries in information retrieval often lead to inefficient clarification, as existing methods lack systematic modeling of ambiguity types and their corresponding clarification strategies. Method: This paper proposes AT-CoT, a novel prompting paradigm that integrates ambiguity type identification—distinguishing among entity, intent, and contextual ambiguity—as a prerequisite step in chain-of-thought (CoT) reasoning, thereby guiding large language models to generate goal-directed clarification questions. Contribution/Results: AT-CoT is the first approach to explicitly model the mapping between query ambiguity types and targeted clarification strategies, yielding interpretable reasoning paths and objective-oriented clarification. Empirical evaluation on multiple datasets with human-annotated clarification questions demonstrates that AT-CoT significantly outperforms standard CoT and few-shot baselines. Further, user simulation experiments confirm its effectiveness in reducing information need ambiguity.
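The two-step scheme described above (first label the query's ambiguity type, then generate a type-directed clarifying question) can be sketched as a prompt template. This is an illustrative mock-up, not the paper's actual prompt: the type descriptions and wording are assumptions based on the summary's entity/intent/contextual taxonomy.

```python
# Hypothetical sketch of an AT-CoT-style prompt. The type descriptions and
# template wording are illustrative assumptions, not the paper's exact prompt.

AMBIGUITY_TYPES = {
    "entity": "a name in the query refers to several distinct entities",
    "intent": "the user's underlying goal or task is unclear",
    "contextual": "missing context (time, place, constraints) changes the answer",
}

def build_at_cot_prompt(query: str) -> str:
    """Build a prompt that asks the LLM to reason in two constrained steps:
    (1) identify applicable ambiguity types, (2) generate clarifications."""
    type_lines = "\n".join(f"- {name}: {desc}" for name, desc in AMBIGUITY_TYPES.items())
    return (
        "You are a search assistant that clarifies ambiguous queries.\n"
        "Step 1: Identify which ambiguity types apply to the query, choosing from:\n"
        f"{type_lines}\n"
        "Step 2: For each identified type, write one targeted clarifying question.\n\n"
        f'Query: "{query}"\n'
        "Reasoning:"
    )

prompt = build_at_cot_prompt("jaguar price")
print(prompt)
```

The point of the constraint is that the predicted type acts as an intermediate instruction: a query labeled "entity"-ambiguous should yield a disambiguation question (e.g. car brand vs. animal), rather than a generic "can you tell me more?".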

📝 Abstract
In information retrieval (IR), providing appropriate clarifications to better understand users' information needs is crucial for building a proactive search-oriented dialogue system. Due to the strong in-context learning ability of large language models (LLMs), recent studies investigate prompting methods to generate clarifications using few-shot or Chain of Thought (CoT) prompts. However, vanilla CoT prompting does not distinguish the characteristics of different information needs, making it difficult to understand how LLMs resolve ambiguities in user queries. In this work, we focus on the concept of ambiguity for clarification, seeking to model and integrate ambiguities into the clarification process. To this end, we comprehensively study the impact of prompting schemes based on reasoning and ambiguity for clarification. The idea is to enhance the reasoning abilities of LLMs by constraining CoT to first predict ambiguity types, which can be interpreted as instructions to clarify, and then generate clarifications accordingly. We name this new prompting scheme Ambiguity Type-Chain of Thought (AT-CoT). Experiments are conducted on various datasets containing human-annotated clarifying questions to compare AT-CoT with multiple baselines. We also perform user simulations to implicitly measure the quality of generated clarifications under various IR scenarios.
Problem

Research questions and friction points this paper is trying to address.

Study prompting methods for generating query clarifications in IR
Model ambiguity types to enhance LLM reasoning for clarifications
Propose AT-CoT to improve clarification quality in search dialogues
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates ambiguity types into Chain of Thought prompting
Enhances LLMs' reasoning by predicting ambiguity types first
Proposes AT-CoT for clarification generation in IR systems
Anfu Tang
Sorbonne Université, CNRS, ISIR, F-75005 Paris, France
Laure Soulier
Associate Professor, Sorbonne université - ISIR, Paris (France)
Information retrieval · natural language processing · machine learning
Vincent Guigue
AgroParisTech, UMR MIA-PS, Palaiseau, France