Testing Question Answering Software with Context-Driven Question Generation

📅 2025-11-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing QA system testing methods suffer from two key limitations: (1) synthetically generated questions lack naturalness and fail to trigger real-world defects, and (2) reliance on static datasets restricts question diversity and contextual relevance. To address these, we propose CQ²A, a context-driven question generation framework that integrates large language models (LLMs) with semantic context modeling. CQ²A first extracts entities and relations from input contexts to construct realistic answers, then prompts an LLM to generate natural, contextually grounded test questions. It further incorporates consistency verification and constraint checking to ensure high-quality output. Extensive experiments across three benchmark datasets demonstrate that CQ²A significantly improves defect detection rate, question naturalness, and context coverage. Moreover, fine-tuning QA systems with CQ²A-generated test cases substantially reduces error rates, validating its practical utility in robustness evaluation and model improvement.

📝 Abstract
Question-answering software is becoming increasingly integrated into our daily lives, with prominent examples including Apple Siri and Amazon Alexa. Ensuring the quality of such systems is critical, as incorrect answers could lead to significant harm. Current state-of-the-art testing approaches apply metamorphic relations to existing test datasets, generating test questions based on these relations. However, these methods have two key limitations. First, they often produce unnatural questions that humans are unlikely to ask, reducing the effectiveness of the generated questions in identifying bugs that might occur in real-world scenarios. Second, these questions are generated from pre-existing test datasets, ignoring the broader context and thus limiting the diversity and relevance of the generated questions. In this work, we introduce CQ^2A, a context-driven question generation approach for testing question-answering systems. Specifically, CQ^2A extracts entities and relationships from the context to form ground-truth answers, and utilizes large language models to generate questions based on these ground-truth answers and the surrounding context. We also propose consistency verification and constraint checking to increase the reliability of the LLM's outputs. Experiments conducted on three datasets demonstrate that CQ^2A outperforms state-of-the-art approaches in bug detection capability, the naturalness of the generated questions, and the coverage of the context. Moreover, the test cases generated by CQ^2A reduce the error rate when used to fine-tune the QA software under test.
Problem

Research questions and friction points this paper is trying to address.

Generating unnatural questions reduces bug detection effectiveness
Existing methods ignore the broader context, limiting question diversity
Current testing approaches lack reliable coverage of real-world scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates questions using entities and relationships from context
Uses large language models to create natural test questions
Improves reliability with consistency verification and constraint checking
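The pipeline the bullets above describe (extract entities from the context, generate a question per ground-truth answer, keep only consistent test cases) can be sketched as follows. This is a simplified stand-in, not the paper's implementation: `extract_entities` uses a naive capitalized-span heuristic instead of real entity/relation extraction, and `generate_question` is a placeholder for the LLM prompt.

```python
# Minimal sketch of a context-driven QA test-generation pipeline in the
# spirit of CQ^2A. All three helpers are simplified stand-ins for the
# paper's actual extraction, LLM prompting, and verification steps.
import re

def extract_entities(context: str) -> list[str]:
    # Naive stand-in for entity/relation extraction:
    # runs of capitalized words act as candidate ground-truth answers.
    return re.findall(r"\b[A-Z][a-zA-Z]+(?: [A-Z][a-zA-Z]+)*", context)

def generate_question(answer: str, context: str) -> str:
    # Placeholder for the LLM call; a real pipeline would prompt a model
    # with both the context and the ground-truth answer.
    return f"Which system mentioned in the passage refers to {answer}?"

def is_consistent(question: str, answer: str, context: str) -> bool:
    # Consistency-verification stand-in: discard a generated test case
    # whose ground-truth answer is not actually grounded in the context.
    return answer in context

def generate_test_cases(context: str) -> list[tuple[str, str]]:
    cases = []
    for answer in extract_entities(context):
        question = generate_question(answer, context)
        if is_consistent(question, answer, context):
            cases.append((question, answer))
    return cases

if __name__ == "__main__":
    context = "Apple Siri and Amazon Alexa answer spoken questions."
    for question, answer in generate_test_cases(context):
        print(answer)
```

Each kept pair `(question, answer)` would then be fed to the QA software under test, with the extracted answer serving as the oracle.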
Shuang Liu
Renmin University of China
Zhirun Zhang
Tianjin University
Jinhao Dong
Peking University
SE Augments AI · Trustworthy Software Development · Pre-training · Code Generation
Zan Wang
Tianjin University
Qingchao Shen
Tianjin University
Compiler Testing · AI Testing · SE4AI · Deep Learning Testing
Junjie Chen
Tianjin University
Wei Lu
Renmin University of China
Xiaoyong Du
Renmin University of China