🤖 AI Summary
To address the lack of systematic tools for assessing decomposition and abstraction skills in introductory programming courses, this study proposes a three-dimensional quantitative framework for measuring task complexity in procedural programming, capturing repetitiveness, the explicitness or implicitness of code patterns, and the strength of data dependencies. Based on this framework, the authors design an education-oriented complexity grading model, develop an interactive Decomposition and Abstraction (DA) exercise tool supporting parameterized problem generation and visual exploration, and construct a curated task repository spanning multiple difficulty levels. The framework is intended to make task difficulty predictable and scaffolding systematic, fostering higher-order structural thinking in students' computational reasoning. It also aligns with emerging pedagogical demands in the generative AI era, particularly the need to strengthen code comprehension, reasoning, and design capabilities.
📝 Abstract
Decomposition and abstraction (DA) is an essential component of computational thinking, yet it is not always emphasized in introductory programming courses. Moreover, as generative AI reduces the focus on syntax and raises the importance of higher-level code reasoning, there is a renewed opportunity to teach DA explicitly. In this paper, we introduce a framework for systematically assessing the complexity of code structuring tasks, in which students must identify and separate meaningful abstractions within existing, unstructured code. The framework defines three dimensions of task complexity, each with multiple levels: repetition, code pattern, and data dependency. To support practical use, we provide example tasks mapped to these levels and offer an interactive tool for generating and exploring DA problems. The framework is designed to support the development of educational tasks that build students' skills with DA in the procedural paradigm.
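To make the kind of task concrete, here is a minimal sketch of what a DA exercise along the repetition dimension might look like. The code and names are illustrative assumptions, not taken from the paper: a student receives the unstructured version, in which the same compute-and-report pattern appears explicitly three times with no dependency between repetitions, and is asked to extract the repeated pattern into a parameterized abstraction.

```python
# Hypothetical "before" code a student might receive in a DA exercise:
# the same total/average pattern is repeated explicitly for each group.
def report_unstructured():
    lines = []
    total_a = sum([3, 1, 4])
    lines.append(f"A: total={total_a}, avg={total_a / 3:.1f}")
    total_b = sum([1, 5, 9])
    lines.append(f"B: total={total_b}, avg={total_b / 3:.1f}")
    total_c = sum([2, 6, 5])
    lines.append(f"C: total={total_c}, avg={total_c / 3:.1f}")
    return lines

# One possible decomposed solution: the repeated pattern becomes a
# parameterized helper, and the repetition becomes a loop over data.
def summarize(label, values):
    total = sum(values)
    return f"{label}: total={total}, avg={total / len(values):.1f}"

def report_structured():
    groups = [("A", [3, 1, 4]), ("B", [1, 5, 9]), ("C", [2, 6, 5])]
    return [summarize(label, values) for label, values in groups]
```

In the framework's terms, this would sit at a low complexity level: the repetition is verbatim, the code pattern is explicit, and there are no data dependencies between repetitions; harder tasks would vary the pattern between occurrences or thread data from one occurrence into the next.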