🤖 AI Summary
Existing QA system testing methods suffer from two key limitations: (1) synthetically generated questions lack naturalness and fail to trigger real-world defects, and (2) reliance on static datasets restricts question diversity and contextual relevance. To address these, we propose CQ²A, a context-driven question generation framework that combines large language models (LLMs) with entity- and relation-level context modeling. CQ²A first extracts entities and relations from input contexts to construct ground-truth answers, then prompts an LLM to generate natural, contextually grounded test questions. It further incorporates consistency verification and constraint checking to ensure high-quality output. Extensive experiments across three benchmark datasets demonstrate that CQ²A significantly improves defect detection rate, question naturalness, and context coverage. Moreover, fine-tuning QA systems with CQ²A-generated test cases substantially reduces error rates, validating its practical utility in robustness evaluation and model improvement.
📝 Abstract
Question-answering software is becoming increasingly integrated into our daily lives, with prominent examples including Apple Siri and Amazon Alexa. Ensuring the quality of such systems is critical, as incorrect answers could lead to significant harm. Current state-of-the-art testing approaches apply metamorphic relations to existing test datasets, generating test questions based on these relations. However, these methods have two key limitations. First, they often produce unnatural questions that humans are unlikely to ask, reducing the effectiveness of the generated questions in identifying bugs that might occur in real-world scenarios. Second, these questions are generated from pre-existing test datasets, ignoring the broader context and thus limiting the diversity and relevance of the generated questions. In this work, we introduce CQ²A, a context-driven question generation approach for testing question-answering systems. Specifically, CQ²A extracts entities and relationships from the context to form ground-truth answers, and utilizes large language models to generate questions based on these ground-truth answers and the surrounding context. We also propose consistency verification and constraint checking to increase the reliability of the LLM's outputs. Experiments conducted on three datasets demonstrate that CQ²A outperforms state-of-the-art approaches in bug detection capability, naturalness of the generated questions, and coverage of the context. Moreover, the test cases generated by CQ²A reduce the error rate when used to fine-tune the QA software under test.
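
To make the pipeline described in the abstract concrete, below is a minimal, hypothetical Python sketch of the CQ²A workflow: extract entity-relation triples from the context, turn them into ground-truth answers, prompt an LLM for a natural question, then filter with constraint checks and consistency verification. The names (`extract_entity_relations`, `generate_question`, `verify_consistency`, `check_constraints`, and the `llm` callable) and the prompt wording are illustrative assumptions, not the authors' actual implementation; the paper's extraction, prompting, and verification steps are more involved.

```python
from dataclasses import dataclass
from typing import Callable, List

# A hypothetical (subject, relation, object) triple extracted from the context.
@dataclass
class Triple:
    subject: str
    relation: str
    obj: str

def extract_entity_relations(context: str) -> List[Triple]:
    """Placeholder for entity/relation extraction over the input context.

    CQ²A builds ground-truth answers from such triples; a real implementation
    would use an information-extraction model rather than this stub."""
    return [Triple("Apple Siri", "is an example of", "question-answering software")]

def generate_question(llm: Callable[[str], str], context: str, answer: str) -> str:
    """Prompt an LLM for a natural question grounded in `context` whose answer
    is `answer` (hypothetical prompt wording)."""
    prompt = (
        "Given the context below, write one natural question a person might ask "
        f"whose answer is exactly: {answer}\n\nContext:\n{context}"
    )
    return llm(prompt)

def verify_consistency(llm: Callable[[str], str], context: str,
                       question: str, answer: str) -> bool:
    """Consistency verification (sketch): re-ask the LLM the generated question
    against the context and keep the test case only if it recovers the answer."""
    reply = llm(f"Answer from the context only.\nContext:\n{context}\nQuestion: {question}")
    return answer.lower() in reply.lower()

def check_constraints(question: str, answer: str) -> bool:
    """Constraint checking (sketch): cheap well-formedness filters, e.g. the
    question must look like a question and must not leak the answer verbatim."""
    return question.strip().endswith("?") and answer.lower() not in question.lower()

def cq2a_generate(llm: Callable[[str], str], context: str) -> List[dict]:
    """End-to-end sketch: triples -> answers -> LLM questions -> filtered test cases."""
    test_cases = []
    for triple in extract_entity_relations(context):
        answer = triple.obj  # use the triple's object as the ground-truth answer
        question = generate_question(llm, context, answer)
        if check_constraints(question, answer) and verify_consistency(llm, context, question, answer):
            test_cases.append({"question": question, "answer": answer, "context": context})
    return test_cases
```

Each retained (question, answer) pair is then posed to the QA system under test; a mismatch between the system's answer and the ground-truth answer flags a potential bug, and the same test cases can later be reused to fine-tune the system.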