🤖 AI Summary
This study addresses the risk that generative artificial intelligence (GenAI) may foster excessive user reliance during collaborative sensemaking, thereby undermining autonomous thinking. To counter this, the work proposes integrating GenAI with Group Awareness Tools (GATs), which externalize collaboration data and visualize cognitive discrepancies among participants. This approach implicitly triggers cognitive conflict, guiding groups toward self-directed sensemaking without explicit directives. Emphasizing implicit scaffolding over direct instruction, the research formulates preliminary design principles for GenAI-augmented GATs. In doing so, it offers both theoretical grounding and practical pathways for mitigating AI dependency while fostering autonomous collaboration and deeper learning.
📝 Abstract
Sensemaking in collaborative work and learning is increasingly supported by GenAI systems; however, emerging evidence suggests that poorly designed GenAI systems tend to provide explicit instruction that groups passively follow, fostering over-reliance and eroding autonomous sensemaking. Group awareness tools (GATs) address this challenge through implicit guidance: rather than instructing groups on what to do, GATs externalize observable collaboration data through visualizations that reveal differences between group members. These differences create cognitive conflict that triggers autonomous elaboration and discussion, thereby implicitly guiding the emergence of autonomous sensemaking. Drawing on an initial literature search of existing GAT systems, this paper explores the design of GenAI-augmented GATs to support autonomous sensemaking in collaborative work and learning, and presents preliminary design principles for discussion.