Interaction Configurations and Prompt Guidance in Conversational AI for Question Answering in Human-AI Teams

📅 2025-05-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the problem of inefficient interaction design and insufficient prompt guidance in human-AI collaborative question answering, which leads to unstable response quality. To tackle this, we propose two interaction paradigms, Nudging (AI-suggested candidate responses) and Highlight (highlighting key passages in reference documents), forming a dual-configuration prompt-guidance mechanism. Through two controlled experiments (N = 31 and N = 106), combining quantitative evaluation and qualitative analysis, we find that merely combining human and AI capabilities does not guarantee performance gains; rather, the interaction configuration critically determines collaboration quality. In particular, the Nudging configuration improved response quality over AI-only output. Our work reveals the decisive role of interaction design in collaborative efficacy and distills reusable design principles for collaborative QA systems. These findings provide theoretical grounding and practical frameworks for prompt engineering and human-AI interface optimization.

📝 Abstract
Understanding the dynamics of human-AI interaction in question answering is crucial for enhancing collaborative efficiency. Extending from our initial formative study, which revealed challenges in human utilization of conversational AI support, we designed two configurations for prompt guidance: a Nudging approach, where the AI suggests potential responses for human agents, and a Highlight strategy, emphasizing crucial parts of reference documents to aid human responses. Through two controlled experiments, the first involving 31 participants and the second involving 106 participants, we compared these configurations against traditional human-only approaches, both with and without AI assistance. Our findings suggest that effective human-AI collaboration can enhance response quality, though merely combining human and AI efforts does not ensure improved outcomes. In particular, the Nudging configuration was shown to help improve the quality of the output when compared to AI alone. This paper delves into the development of these prompt guidance paradigms, offering insights for refining human-AI collaborations in conversational question-answering contexts and contributing to a broader understanding of human perceptions and expectations in AI partnerships.
Problem

Research questions and friction points this paper is trying to address.

Enhancing human-AI collaboration efficiency in question answering
Comparing the Nudging and Highlight prompt-guidance strategies
Improving response quality through effective interaction configurations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Nudging approach suggests potential AI responses
Highlight strategy emphasizes key document parts
Controlled experiments compare human-AI configurations
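The two configurations can be read as alternative points in the interaction loop at which the AI intervenes: Nudging drafts an answer for the human to review, while Highlight surfaces evidence for the human to write from. A minimal, hypothetical sketch of that distinction (function names and data are illustrative, not from the paper's system):

```python
def nudging(question, ai_suggest, human_review):
    """Nudging: the AI drafts a candidate response; the human agent
    accepts, edits, or replaces it before it is sent."""
    draft = ai_suggest(question)          # AI produces a suggested answer
    return human_review(question, draft)  # human has final editorial control

def highlight(question, document, ai_rank_spans, human_compose, k=3):
    """Highlight: the AI marks the k most relevant spans of the reference
    document; the human composes the answer from that evidence."""
    spans = ai_rank_spans(question, document)[:k]  # AI selects evidence only
    return human_compose(question, spans)          # human writes the answer
```

The design difference this makes concrete: in Nudging the AI's output is a candidate final answer, so quality hinges on the human's willingness to revise it; in Highlight the AI never drafts prose, only filters evidence, leaving composition entirely to the human.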