Building Open-Retrieval Conversational Question Answering Systems by Generating Synthetic Data and Decontextualizing User Questions

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Open-retrieval conversational question answering (OR-CONVQA) suffers from scarce domain-specific annotated data, hindering system development. Method: This paper proposes an end-to-end, annotation-free framework: (1) an automated pipeline for generating high-fidelity, multi-turn synthetic dialogue data; (2) joint modeling of question decontextualization and rewriting to decouple user queries from dialogue history, enabling direct reuse of off-the-shelf, dialog-unaware retrievers; and (3) an LLM-based, interpretable response generation module that integrates retrieved evidence with dialogue state. Contribution/Results: Experiments demonstrate substantial improvements in question rewriting quality and retrieval accuracy across multiple domains. The approach delivers high-quality conversational QA with minimal reliance on human annotation, establishing a scalable technical paradigm for OR-CONVQA in data-scarce settings.

📝 Abstract
We consider open-retrieval conversational question answering (OR-CONVQA), an extension of question answering where system responses need to be (i) aware of dialog history and (ii) grounded in documents (or document fragments) retrieved per question. Domain-specific OR-CONVQA training datasets are crucial for real-world applications, but hard to obtain. We propose a pipeline that capitalizes on the abundance of plain text documents in organizations (e.g., product documentation) to automatically produce realistic OR-CONVQA dialogs with annotations. Similarly to real-world human-annotated OR-CONVQA datasets, we generate in-dialog question-answer pairs, self-contained (decontextualized, e.g., no referring expressions) versions of user questions, and propositions (sentences expressing prominent information from the documents) the system responses are grounded in. We show how the synthetic dialogs can be used to train efficient question rewriters that decontextualize user questions, allowing existing dialog-unaware retrievers to be utilized. The retrieved information and the decontextualized question are then passed on to an LLM that generates the system's response.
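The rewrite-then-retrieve-then-generate flow described in the abstract can be sketched as below. This is a minimal, illustrative skeleton, not the paper's implementation: all three components here are hypothetical toy stand-ins (a pronoun-substitution heuristic for the trained question rewriter, word-overlap ranking for the dense retriever, and evidence echoing for the LLM response generator).

```python
# Toy sketch of the OR-CONVQA pipeline: decontextualize the user question,
# retrieve grounding propositions with a dialog-unaware retriever, then
# generate a grounded response. Every function body is a hypothetical
# stand-in for a learned component.

def rewrite_question(question: str, history: list[tuple[str, str]]) -> str:
    """Naive decontextualization: substitute a referring expression with
    the last word of the previous user question (toy heuristic; the paper
    trains a question rewriter on synthetic dialogs instead)."""
    if not history:
        return question
    prev_question, _ = history[-1]
    referent = prev_question.rstrip("?").split()[-1]
    words = []
    for w in question.split():
        core = w.rstrip("?.,!")
        if core.lower() in ("it", "this", "that"):
            w = referent + w[len(core):]  # keep trailing punctuation
        words.append(w)
    return " ".join(words)

def retrieve(query: str, propositions: list[str], k: int = 1) -> list[str]:
    """Dialog-unaware retrieval: rank propositions by word overlap with
    the self-contained (rewritten) query."""
    q_words = set(query.lower().replace("?", "").split())
    ranked = sorted(
        propositions,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def generate_response(query: str, evidence: list[str]) -> str:
    """Stand-in for the LLM generator: echo the top retrieved evidence."""
    return evidence[0] if evidence else "I could not find an answer."

# Example: the follow-up question "it" is resolved to "WPA3" before retrieval.
propositions = [
    "WPA3 is a Wi-Fi security protocol with stronger encryption.",
    "Firmware updates are released quarterly.",
]
history = [("What is WPA3?", "WPA3 is a Wi-Fi security protocol.")]
q = rewrite_question("Does the router support it?", history)
response = generate_response(q, retrieve(q, propositions))
```

With this toy heuristic, the follow-up "Does the router support it?" is rewritten to "Does the router support WPA3?", which an off-the-shelf retriever can match against the proposition store without seeing the dialog history.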
Problem

Research questions and friction points this paper is trying to address.

Generating synthetic data for OR-CONVQA training
Decontextualizing user questions for better retrieval
Training efficient question rewriters using synthetic dialogs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generating synthetic OR-CONVQA dialogs automatically
Decontextualizing user questions for retrieval compatibility
Leveraging LLMs for grounded system responses
Christos Vlachos
Department of Informatics, Athens University of Economics and Business, Greece; Omilia - Conversational Intelligence
Nikolaos Stylianou
Omilia - Conversational Intelligence
Alexandra Fiotaki
Omilia - Conversational Intelligence
Spiros Methenitis
Omilia - Conversational Intelligence
Elisavet Palogiannidi
NCSR Demokritos, Greece
Themos Stafylakis
Assoc. Prof. at Athens Univ. of Economics and Business | Omilia | Archimedes/Athena R.C.
Voice Biometrics, Speaker Recognition, Audiovisual ASR, NLP, Machine Learning
Ion Androutsopoulos
Professor, Department of Informatics, Athens University of Economics and Business
Artificial Intelligence, Natural Language Processing, Computational Linguistics