DQA: Diagnostic Question Answering for IT Support

📅 2026-04-06
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses a critical limitation in existing multi-turn question-answering systems for enterprise IT support: the absence of explicit diagnostic state representation, which hinders effective evidence accumulation and differentiation among competing root cause hypotheses. To overcome this, the authors propose the Diagnostic Question Answering (DQA) framework, which explicitly models diagnostic state by centering on root causes to aggregate historically retrieved cases. The approach integrates dialogue query rewriting with state-conditioned response generation, enabling systematic troubleshooting. By moving beyond conventional multi-turn retrieval-augmented generation that relies solely on document-level retrieval, DQA achieves substantial performance gains: it raises trajectory-level success rates from 41.3% to 78.7% and reduces average dialogue turns from 8.4 to 3.9 across 150 real-world support scenarios.
๐Ÿ“ Abstract
Enterprise IT support interactions are fundamentally diagnostic: effective resolution requires iterative evidence gathering from ambiguous user reports to identify an underlying root cause. While retrieval-augmented generation (RAG) provides grounding through historical cases, standard multi-turn RAG systems lack explicit diagnostic state and therefore struggle to accumulate evidence and resolve competing hypotheses across turns. We introduce DQA, a diagnostic question-answering framework that maintains persistent diagnostic state and aggregates retrieved cases at the level of root causes rather than individual documents. DQA combines conversational query rewriting, retrieval aggregation, and state-conditioned response generation to support systematic troubleshooting under enterprise latency and context constraints. We evaluate DQA on 150 anonymized enterprise IT support scenarios using a replay-based protocol. Averaged over three independent runs, DQA achieves a 78.7% success rate under a trajectory-level success criterion, compared to 41.3% for a multi-turn RAG baseline, while reducing average turns from 8.4 to 3.9.
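The abstract's central idea, aggregating retrieved cases at the root-cause level rather than per document, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names, the `(root_cause, score)` case representation, and the score-summing aggregation rule are all assumptions.

```python
def aggregate_by_root_cause(state, retrieved_cases):
    """Merge one turn's retrieved cases into the persistent diagnostic state.

    state: dict mapping root-cause label -> accumulated evidence score
    retrieved_cases: list of (root_cause, score) pairs from the retriever

    Scores are summed per root cause, so evidence accumulates across turns
    instead of being tracked per individual document.
    """
    for root_cause, score in retrieved_cases:
        state[root_cause] = state.get(root_cause, 0.0) + score
    # Rank competing hypotheses by accumulated evidence.
    return sorted(state.items(), key=lambda kv: kv[1], reverse=True)

# Example: two turns of retrieval converging on one hypothesis
# (root-cause labels here are hypothetical).
state = {}
aggregate_by_root_cause(state, [("dns_misconfig", 0.6), ("vpn_cert_expired", 0.5)])
ranking = aggregate_by_root_cause(state, [("vpn_cert_expired", 0.7)])
# "vpn_cert_expired" now leads (1.2) over "dns_misconfig" (0.6)
```

Keeping this per-root-cause state is what lets a state-conditioned generator ask the question that best discriminates between the top-ranked hypotheses, rather than re-deriving context from raw documents each turn.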
Problem

Research questions and friction points this paper is trying to address.

Diagnostic Question Answering
IT Support
Root Cause Identification
Multi-turn RAG
Evidence Accumulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diagnostic Question Answering
Retrieval-Augmented Generation
Root Cause Aggregation
Diagnostic State Tracking
Enterprise IT Support