🤖 AI Summary
Designing high-quality multiple-choice questions (MCQs) for visualization literacy is challenging: instructors must coordinate multimodal elements, cover diverse visualization tasks, and accommodate learner variability, yet existing tools offer limited support for iterative, personalized question design. This work presents VizQStudio, an instructor-centered visual analytics system that introduces multimodal large language model (MLLM)-driven simulated students with configurable learner profiles. This capability enables instructors to anticipate misconceptions, calibrate item difficulty, and refine question components before classroom deployment. Integrating MLLMs, user modeling, and interactive visualization, VizQStudio supports exploratory analysis of simulated students' reasoning and responses. Through expert interviews, case studies, a classroom deployment, and a large-scale online study, the evaluation surfaces both the value and the limitations of design-time student simulation, yielding insights for responsible, instructor-centered uses of AI in educational assessment design.
📝 Abstract
Multiple-choice questions (MCQs) are a widely used educational tool, particularly in domains such as visualization literacy that require broad conceptual coverage and support diverse real-world applications. However, designing high-quality visualization literacy MCQs remains challenging, as instructors must coordinate multimodal elements (e.g., charts, question stems, and distractors), address diverse visualization tasks, and accommodate learners with heterogeneous backgrounds. Existing visualization literacy assessments primarily rely on standardized, fixed item banks, offering limited support for iterative question design that adapts to differences in learners' abilities, backgrounds, and reasoning strategies. To address these challenges, we present VizQStudio, a visual analytics system that supports instructors in iteratively designing and refining visualization literacy MCQs using simulated students powered by multimodal large language models (MLLMs). Instructors can specify diverse student profiles spanning demographics, knowledge levels, and learning-related traits. The system then visualizes how simulated students reason about and respond to different question components, helping instructors explore potential misconceptions, difficulty calibration, and design trade-offs prior to classroom deployment. We investigate VizQStudio through a mixed-method evaluation, including expert interviews, case studies, a classroom deployment, and a large-scale online study. Overall, this work reframes MLLM-based student simulation in assessment authoring as a design-time, exploratory aid. By examining both its value and its limitations in realistic instructional settings, we surface design insights that inform how future systems can support instructor-centered, iterative, and responsible uses of AI for multimodal assessment design in visualization literacy and related domains.