🤖 AI Summary
This study addresses the risk that generative artificial intelligence (GenAI), when poorly designed, may foster user over-reliance and undermine groups’ capacity for self-regulation in collaborative settings involving socially shared metacognition. To mitigate this risk, the authors propose an initial set of design principles that integrate Group Awareness Tools (GATs) with GenAI, aiming to subtly scaffold the development of socially shared metacognition without excessive AI intervention. Key mechanisms include visualizing cognitive discrepancies among group members and stimulating productive cognitive conflict, thereby promoting group autonomy through implicit rather than explicit guidance. Grounded in an initial literature review, the proposed framework positions GenAI as an enabler rather than a director of collaborative processes, offering a theoretical foundation for future empirical investigation and system development.
📝 Abstract
Socially shared metacognition (SSM) refers to the collective monitoring and regulation of joint cognitive processes during collaborative problem-solving and is essential for effective knowledge work and learning. Generative AI (GenAI)-based systems offer new opportunities to support SSM, but emerging evidence suggests that poorly designed systems can encourage over-reliance on AI-generated explicit instruction and erode groups' capacity to develop autonomous regulatory processes. Group awareness tools (GATs) address this challenge through established design principles: making social and cognitive awareness information visible, highlighting differences between group members to create cognitive conflict, and triggering autonomous elaboration and discussion, thereby implicitly guiding the emergence of autonomous SSM. Based on an initial literature search, this paper explores the design of GenAI-augmented GATs to support autonomous SSM in collaborative work and learning, presenting preliminary design principles for discussion.