ConQuer: A Framework for Concept-Based Quiz Generation

📅 2025-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address prevalent issues in AI-generated quizzes, such as incomplete conceptual coverage, limited cognitive diversity (e.g., insufficient representation across Bloom's taxonomy levels), and poor pedagogical alignment, this paper proposes a quiz generation framework guided by a subject-domain concept graph and enhanced with external knowledge. Methodologically, it introduces a concept-driven, multi-source knowledge fusion paradigm that integrates knowledge graph retrieval, multi-stage concept-constrained decoding, and prompt engineering optimization. A novel LLM-as-a-judge mechanism provides interpretable, automated assessment of conceptual coverage, Bloom's cognitive-level distribution, and alignment with instructional objectives. Empirical evaluation demonstrates a 4.8% improvement in overall multi-dimensional scoring and a 77.52% pairwise win rate against baselines; ablation studies confirm the substantial contribution of each component.
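The LLM-as-a-judge pairwise evaluation mentioned above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: `judge` is a hypothetical stand-in for a real LLM call, stubbed here with a deterministic heuristic so the sketch runs end to end.

```python
def judge(quiz_a: str, quiz_b: str) -> str:
    """Stub judge: prefers the quiz naming more distinct terms.
    A real implementation would prompt an LLM to compare the two quizzes
    on conceptual coverage, Bloom's-level diversity, and pedagogical
    alignment, returning "A", "B", or "tie"."""
    score = lambda q: len(set(q.lower().split()))
    if score(quiz_a) > score(quiz_b):
        return "A"
    if score(quiz_b) > score(quiz_a):
        return "B"
    return "tie"

def pairwise_win_rate(ours: list[str], baseline: list[str]) -> float:
    """Fraction of pairwise comparisons our quizzes win (ties count half)."""
    wins = 0.0
    for a, b in zip(ours, baseline):
        verdict = judge(a, b)
        wins += 1.0 if verdict == "A" else 0.5 if verdict == "tie" else 0.0
    return wins / len(ours)

ours = ["Define recursion and contrast it with iteration, giving one example.",
        "Analyze how gradient descent step size affects convergence."]
base = ["What is recursion?", "What is gradient descent?"]
print(f"win rate: {pairwise_win_rate(ours, base):.2f}")  # prints "win rate: 1.00"
```

Counting ties as half a win keeps the metric symmetric; a reported figure like the 77.52% win rate would be this quantity aggregated over many quiz pairs.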

📝 Abstract
Quizzes play a crucial role in education by reinforcing students' understanding of key concepts and encouraging self-directed exploration. However, compiling high-quality quizzes can be challenging and requires deep expertise and insight into specific subject matter. Although LLMs have greatly enhanced the efficiency of quiz generation, concerns remain regarding the quality of these AI-generated quizzes and their educational impact on students. To address these issues, we introduce ConQuer, a concept-based quiz generation framework that leverages external knowledge sources. We employ comprehensive evaluation dimensions to assess the quality of the generated quizzes, using LLMs as judges. Our experiment results demonstrate a 4.8% improvement in evaluation scores and a 77.52% win rate in pairwise comparisons against baseline quiz sets. Ablation studies further underscore the effectiveness of each component in our framework. Code available at https://github.com/sofyc/ConQuer.
Problem

Research questions and friction points this paper is trying to address.

Challenges in generating high-quality educational quizzes
Concerns about AI-generated quiz quality and impact
Need for concept-based quiz generation using external knowledge
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages external knowledge sources for quiz generation
Uses LLMs as judges for quiz quality assessment
Improves evaluation scores by 4.8% over baselines
Yicheng Fu, Stanford University (Natural Language Processing; Large Language Model)
Zikui Wang, Stanford University, CA, USA
Liuxin Yang, EECS, Stanford University (NLP)
Meiqing Huo, Stanford University, CA, USA
Zhongdongming Dai, University of California San Diego, CA, USA