🤖 AI Summary
This study addresses the prevailing overemphasis on form generation and efficiency—rather than pedagogical potential—in AI adoption within architectural design studios. It pioneers the systematic integration of large language models (LLMs) into design pedagogy as a scaffold for student autonomy, collaborative discourse, and metacognitive reflection. Grounded in Bloom's taxonomy, we develop a quantifiable, multi-level framework for assessing learning outcomes. We propose three intelligent intervention mechanisms—personalized feedback delivery, collaborative process orchestration, and cognitive scaffolding—implemented via interactive case analysis, hypothetical design scenario generation, and natural language understanding/generation. Empirical evaluation demonstrates that the approach effectively mitigates key challenges: difficulties in self-regulated learning, tensions in peer feedback, and imbalances between knowledge transmission and creative development. Results show significant improvements in higher-order learning outcomes, including conceptual comprehension, analytical application, synthesis, and evaluation.
📝 Abstract
The study explores the role of large language models (LLMs) in the context of the architectural design studio, understood as the pedagogical core of architectural education. Traditionally, the studio has functioned as an experiential learning space where students tackle design problems through reflective practice, peer critique, and faculty guidance. However, the integration of artificial intelligence (AI) in this environment has largely focused on form generation, automation, and representational efficiency, neglecting its potential as a pedagogical tool to strengthen student autonomy, collaboration, and self-reflection. The objectives of this research were: (1) to identify pedagogical challenges in self-directed, peer-to-peer, and teacher-guided learning processes in architecture studios; (2) to propose AI interventions, particularly through LLMs, that contribute to overcoming these challenges; and (3) to align these interventions with measurable learning outcomes using Bloom's taxonomy. The findings show that the main challenges include managing student autonomy, tensions in peer feedback, and the difficulty of balancing the transmission of technical knowledge with the stimulation of creativity in teaching. In response, LLMs emerge as complementary agents capable of generating personalized feedback, organizing collaborative interactions, and offering adaptive cognitive scaffolding. Furthermore, their implementation can be linked to the cognitive levels of Bloom's taxonomy: facilitating the recall and understanding of architectural concepts, supporting application and analysis through interactive case studies, and encouraging synthesis and evaluation through hypothetical design scenarios.