🤖 AI Summary
This work addresses the limitations of large language models (LLMs) in repairing programs written in block-based languages like Scratch. In this setting, nested structures, event-driven concurrency, and tight coupling between code and multimedia assets often lead LLMs to produce semantically incorrect fixes. To tackle this, the authors construct the first executable benchmark for Scratch program repair, comprising 100 structurally and semantically complex projects, each accompanied by an executable test suite, minimal repair constraints, and complete multimedia resources. They further introduce a multimodal, executable evaluation framework tailored to block-based programming, featuring a three-tier protocol: virtual-machine execution validation, block-level edit distance combined with behavioral trajectory comparison, and structured interpretability scoring. This framework enables fine-grained, reproducible assessment of LLM repair capabilities and lays a foundation for future model training and evaluation in the block-based programming domain.
📝 Abstract
LLMs have achieved strong performance on text-based programming tasks, yet they remain unreliable for block-based languages such as Scratch. Scratch programs exhibit deeply nested, non-linear structures, event-driven concurrency across multiple sprites, and tight coupling between code and multimedia assets, properties that differ fundamentally from textual code. As a result, LLMs often misinterpret Scratch semantics and generate large, invasive edits that are syntactically valid but semantically incorrect when repairing buggy programs. We introduce ScratchEval, the first executable benchmark designed to evaluate LLM-based repair for Scratch programs, covering program understanding, debugging, analysis, and repair. The benchmark contains 100 curated Scratch projects from a public repository, selected for structural and semantic complexity. Each project is paired with executable test suites, bug descriptions with corresponding fixes, block-level edit constraints defining minimal semantically correct repairs, and required multimedia assets. The benchmark is constructed through a human-in-the-loop pipeline combining automated project mining with expert validation of trigger-outcome semantics and representative bug patterns, with emphasis on event ordering, concurrency, and state management. To enable rigorous and reproducible evaluation, we propose a three-layer executable protocol measuring functional correctness via VM-level execution, repair quality using block-level edit distance and behavioral trajectory comparisons, and explanation quality via structured rubrics assessing alignment between model reasoning and generated patches. Using ScratchEval, we study domain-specific fine-tuning, training data effectiveness, and model generalization to unseen bug types. ScratchEval provides a reproducible foundation for evaluating and post-training LLMs on block-based programming tasks.
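To make the second evaluation layer concrete, here is a minimal sketch of how a block-level edit distance could be computed: each Scratch script is flattened into a sequence of block opcodes (following the Scratch 3 `project.json` opcode naming), and a standard Levenshtein distance is taken over those sequences. Both the flattening and the distance definition are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch of a "block-level edit distance" metric for Scratch
# repairs: Levenshtein distance over flattened sequences of block opcodes.
# Opcode names follow the Scratch 3 project.json convention; the metric
# definition here is an assumption for illustration only.

def block_edit_distance(blocks_a, blocks_b):
    """Levenshtein distance between two opcode sequences (one-row DP)."""
    m, n = len(blocks_a), len(blocks_b)
    dp = list(range(n + 1))          # dp[j] = distance for prefixes (0, j)
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i       # prev holds dp[i-1][j-1]
        for j in range(1, n + 1):
            cur = dp[j]              # dp[i-1][j] before overwrite
            cost = 0 if blocks_a[i - 1] == blocks_b[j - 1] else 1
            dp[j] = min(dp[j] + 1,       # delete from blocks_a
                        dp[j - 1] + 1,   # insert into blocks_a
                        prev + cost)     # substitute (or match)
            prev = cur
    return dp[n]

# A minimal repair should stay close to the ground-truth fix:
gold  = ["event_whenflagclicked", "control_forever", "motion_movesteps"]
model = ["event_whenflagclicked", "control_repeat",  "motion_movesteps"]
print(block_edit_distance(gold, model))  # → 1 (one substituted block)
```

A distance of 0 would indicate an exact match with the reference patch, while large distances flag the invasive, over-broad edits the benchmark is designed to penalize.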