🤖 AI Summary
This study investigates how an AI teammate's implicit role (supportive versus adversarial) in human-AI collaboration affects learners' agency, discourse patterns, and collaborative experience. Using a randomized controlled experiment with three conditions (human-only, supportive-AI, adversarial-AI) in a creative writing task, we integrated transition network analysis, sequential pattern mining, Gaussian mixture clustering, embedding-based creativity assessment, and multidimensional experience scales. Our work is the first to empirically demonstrate how AI personification reconfigures the “creative–regulative” tension. Results show that adversarial AI significantly increases reflective discourse and challenge-oriented interaction but systematically reduces psychological safety and teamwork satisfaction, whereas supportive AI accelerates consensus formation; even when the AI assumes the challenger role, reflective regulation remains predominantly human-led. The findings uncover a critical design trade-off between cognitive gain and affective safety, offering theoretical grounding and empirical evidence for ethically balanced, effective implicit AI collaboration.
📝 Abstract
Generative AI is increasingly embedded in collaborative learning, yet little is known about how AI personas shape learner agency when AI teammates are present but not disclosed. This mechanism-focused study examines how supportive and contrarian AI personas reconfigure emergent learner agency, discourse patterns, and experiences in implicit human-AI creative collaboration. A total of 224 university students were randomly assigned to 97 online triads in one of three conditions: human-only control, hybrid teams with a supportive AI, or hybrid teams with a contrarian AI. Participants completed an individual-group-individual movie-plot writing task, and the 10-minute group chat was coded using a creative-regulatory framework. We combined transition network analysis, theory-driven sequential pattern mining, and Gaussian mixture clustering to model structural, temporal, and profile-level manifestations of agency, and linked these to cognitive load, psychological safety, teamwork satisfaction, and embedding-based creative performance. Contrarian AI produced challenge- and reflection-rich discourse structures and motifs indicative of productive friction, whereas supportive AI fostered agreement-centred trajectories and smoother convergence. Clustering showed that AI agents concentrated in challenger profiles, while reflective regulation remained uniquely human. Although no systematic differences emerged in cognitive load or creative gains, contrarian AI consistently reduced teamwork satisfaction and psychological safety. The findings reveal a design tension between leveraging cognitive conflict and maintaining affective safety and ownership in hybrid human-AI teams.
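To make the first analytic layer concrete: transition network analysis models how likely one coded discourse move is to follow another in the chat. The sketch below is a minimal, hypothetical illustration of that idea only; the code labels, the transcript, and the function are invented for this example and are not the study's actual coding scheme or implementation.

```python
from collections import Counter, defaultdict

def transition_matrix(codes):
    """Estimate first-order transition probabilities between discourse codes.

    codes: list of code labels in chat order (illustrative labels, not the
    study's scheme). Returns {source_code: {target_code: probability}}.
    """
    pair_counts = Counter(zip(codes, codes[1:]))  # count adjacent code pairs
    totals = Counter(codes[:-1])                  # outgoing transitions per code
    matrix = defaultdict(dict)
    for (src, dst), n in pair_counts.items():
        matrix[src][dst] = n / totals[src]
    return dict(matrix)

# Synthetic coded transcript (codes invented for illustration):
chat = ["idea", "challenge", "reflection", "idea", "agreement",
        "idea", "challenge", "reflection", "agreement"]
probs = transition_matrix(chat)
print(probs["challenge"])  # → {'reflection': 1.0}
```

In an analysis like the one described, such matrices would be estimated per condition and compared, e.g. whether challenge→reflection transitions are denser in contrarian-AI teams.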