🤖 AI Summary
This study addresses the challenges teachers face in assessment design, particularly difficulties in authoring and a lack of tools supporting iterative development. Through a seven-month co-design process with 13 educators, the authors introduce a conceptual model that characterizes the dual processes of assessment creation and requirement iteration. Building on this model, they developed Ripplet, a web-based tool leveraging large language models (LLMs) to support assessment design. Ripplet incorporates multi-level reusable interaction mechanisms that facilitate a shift from generative to curatorial practices, thereby encouraging teachers' reflective engagement with assessment quality. A user study with 15 teachers demonstrated that using Ripplet led to higher-quality formative assessments, more meaningful investment of effort, and the successful completion of assessment tasks previously deemed infeasible.
📄 Abstract
Assessments are critical in education, but creating them can be difficult. To address this challenge in a grounded way, we partnered with 13 teachers in a seven-month co-design process. We developed a conceptual model that characterizes the iterative dual process in which teachers develop assessments while simultaneously refining requirements. To enact this model in practice, we built Ripplet, a web-based tool with multi-level reusable interactions to support assessment authoring. The extended co-design revealed that Ripplet enabled teachers to create formative assessments they would not otherwise have made, shifted their practices from generation to curation, and helped them reflect more on assessment quality. In a user study with 15 additional teachers, participants felt that, compared to their current practices, the results were more worth their effort and that assessment quality improved.