Designing and Evaluating Hint Generation Systems for Science Education

📅 2025-10-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the pedagogical problem that directly providing answers in science education undermines conceptual understanding and critical thinking. To mitigate this, the authors propose an automatic hint generation system oriented toward active learning. Methodologically, they leverage large language models to construct two complementary hint-chain paradigms—static pre-generated and dynamic adaptive—designed to scaffold students' knowledge construction without revealing answers. A user study with 41 participants was conducted for quantitative evaluation. The contributions are threefold: (1) a systematic comparison of static versus dynamic hinting in science learning, revealing a stronger preference for dynamic hints among certain learners; (2) empirical evidence that conventional automated evaluation metrics fail to capture the nuanced impact of hinting strategies on the learning experience; and (3) a scalable, learner-centered hint generation framework for intelligent tutoring systems, substantiated by empirical findings.

📝 Abstract
Large language models are influencing the education landscape, with students relying on them in their learning process. Often implemented using general-purpose models, these systems are likely to give away the answers, which could hinder conceptual understanding and critical thinking. We study the role of automatic hint generation as a pedagogical strategy to promote active engagement with the learning content, while guiding learners toward the answers. Focusing on scientific topics at the secondary education level, we explore the potential of large language models to generate chains of hints that scaffold learners without revealing answers. We compare two distinct hinting strategies: static hints, pre-generated for each problem, and dynamic hints, adapted to learners' progress. Through a quantitative study with 41 participants, we uncover different preferences among learners with respect to hinting strategies, and identify the limitations of automatic evaluation metrics in capturing them. Our findings highlight key design considerations for future research on hint generation and intelligent tutoring systems that seek to develop learner-centered educational technologies.
Problem

Research questions and friction points this paper is trying to address.

Developing hint generation systems to promote active learning in science education
Comparing static and dynamic hinting strategies for scaffolding student progress
Evaluating limitations of automatic metrics in capturing learner preferences
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates scaffolded hint chains using large language models
Compares static pre-generated versus dynamic adaptive hints
Focuses on promoting active learning without revealing answers
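The two strategies above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `generate` is a hypothetical placeholder for an LLM completion call, and the prompt wording is assumed.

```python
from typing import List

def generate(prompt: str) -> str:
    """Placeholder for a real LLM completion call (hypothetical).

    Returns a canned string here so the sketch is runnable.
    """
    return f"[hint derived from a prompt of {len(prompt)} characters]"

def static_hint_chain(problem: str, n_hints: int = 3) -> List[str]:
    """Static strategy: pre-generate a fixed chain of increasingly
    specific hints for a problem, before any learner interaction."""
    hints: List[str] = []
    for level in range(1, n_hints + 1):
        prompt = (
            f"Problem: {problem}\n"
            f"Write hint {level} of {n_hints}, more specific than the "
            f"previous hints, without revealing the answer.\n"
            f"Previous hints: {hints}"
        )
        hints.append(generate(prompt))
    return hints

def dynamic_hint(problem: str, student_attempt: str) -> str:
    """Dynamic strategy: generate one hint adapted to the learner's
    latest attempt, again without giving away the answer."""
    prompt = (
        f"Problem: {problem}\n"
        f"Student attempt: {student_attempt}\n"
        "Give one hint that addresses the misconception in the attempt "
        "without revealing the answer."
    )
    return generate(prompt)
```

The key design difference is where learner state enters: the static chain is computed once per problem, while the dynamic variant conditions each hint on the student's ongoing work.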