When Machines Join the Moral Circle: The Persona Effect of Generative AI Agents in Collaborative Reasoning

📅 2025-11-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates how generative AI, acting as a moral discussion partner in collaborative learning, influences the ethical reasoning process—not final decisions. We experimentally manipulated AI personas (supportive vs. contrarian) and employed multimodal analytical methods: Moral Foundations Dictionary scoring, argument coding, BERTopic-based thematic modeling, dynamic time warping for semantic trajectory analysis, and epistemic network modeling. These techniques enabled quantitative assessment of impacts on moral framework distribution, cross-framework connectivity, and argument quality. Results show that both AI types significantly reduced topic drift and enhanced discourse focus. The supportive AI strengthened integrative reasoning across the Care and Fairness foundations, whereas the contrarian AI broadened moral perspective diversity. Our core contribution is the first empirical demonstration that AI can structurally reshape the dynamic organization of moral discourse—thereby fostering more reflective and pluralistic collaborative ethical reasoning.
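The Moral Foundations Dictionary scoring mentioned above is a standard dictionary-based technique: each utterance is scored by how often its tokens match per-foundation word lists. A minimal sketch, using a tiny hypothetical lexicon (real analyses use the full dictionary, with hundreds of entries and stemmed wildcards per foundation):

```python
# Illustrative dictionary-based moral foundations scoring.
# MINI_LEXICON is a hypothetical toy lexicon, not the real
# Moral Foundations Dictionary.
import re

MINI_LEXICON = {
    "care":      {"harm", "hurt", "safe", "protect", "suffer"},
    "fairness":  {"fair", "unfair", "equal", "justice", "rights"},
    "loyalty":   {"loyal", "betray", "group", "together"},
    "authority": {"law", "obey", "order", "duty"},
    "sanctity":  {"pure", "disgust", "sacred"},
}

def foundation_scores(utterance: str) -> dict:
    """Return per-foundation hit rates (lexicon matches per token)."""
    tokens = re.findall(r"[a-z']+", utterance.lower())
    n = max(len(tokens), 1)
    return {
        foundation: sum(t in words for t in tokens) / n
        for foundation, words in MINI_LEXICON.items()
    }

scores = foundation_scores(
    "It is unfair to harm pedestrians to protect the passenger."
)
# Care ("harm", "protect") and Fairness ("unfair") both register;
# the other foundations score zero for this utterance.
```

Aggregating such scores over a discussion yields the moral framework distribution the study compares across conditions.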

📝 Abstract
Generative AI is increasingly positioned as a peer in collaborative learning, yet its effects on ethical deliberation remain unclear. We report a between-subjects experiment with university students (N=217) who discussed an autonomous-vehicle dilemma in triads under three conditions: human-only control, supportive AI teammate, or contrarian AI teammate. Using moral foundations lexicons, argumentative coding from the argumentative knowledge construction framework, semantic trajectory modeling with BERTopic and dynamic time warping, and epistemic network analysis, we traced how AI personas reshape moral discourse. Supportive AIs increased grounded/qualified claims relative to control, consolidating integrative reasoning around care/fairness, while contrarian AIs modestly broadened moral framing and sustained value pluralism. Both AI conditions reduced thematic drift compared with human-only groups, indicating more stable topical focus. Post-discussion justification complexity was only weakly predicted by moral framing and reasoning quality, and shifts in final moral decisions were driven primarily by participants' initial stance rather than condition. Overall, AI teammates altered the process (the distribution and connection of moral frames, and argument quality) more than the outcome of moral choice, highlighting the potential of generative AI agents as teammates for eliciting reflective, pluralistic moral reasoning in collaborative learning.
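The epistemic network analysis named in the abstract rests on a co-occurrence step: counting how often pairs of discourse codes (here, moral frames) appear together within a moving window of utterances, which yields the "cross-framework connectivity" the paper measures. A toy sketch of that step, with hypothetical codes and window size:

```python
# Toy co-occurrence counting behind epistemic network analysis.
# Codes and window size are illustrative assumptions, not the
# paper's actual coding scheme.
from collections import Counter
from itertools import combinations

def cooccurrence(coded_utterances, window=3):
    """coded_utterances: list of sets of codes, one set per utterance.
    Returns counts of code pairs co-occurring within the window."""
    pairs = Counter()
    for start in range(len(coded_utterances)):
        # union of all codes active in the current window
        seen = set().union(*coded_utterances[start:start + window])
        for pair in combinations(sorted(seen), 2):
            pairs[pair] += 1
    return pairs

talk = [{"care"}, {"fairness"}, {"care", "authority"}, {"fairness"}]
net = cooccurrence(talk, window=2)
```

In full ENA these pair counts are normalized per group and projected into a low-dimensional space so that conditions can be compared; the sketch shows only the counting.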
Problem

Research questions and friction points this paper is trying to address.

Examining AI personas' impact on moral discourse dynamics
Analyzing how supportive versus contrarian AI alters reasoning patterns
Investigating AI's role in stabilizing topical focus during deliberation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Used moral foundations lexicons for discourse analysis
Applied BERTopic modeling to track semantic trajectories
Employed epistemic network analysis on moral frames
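The semantic-trajectory idea behind the second bullet can be made concrete: each utterance is represented as a topic vector (the paper derives these with BERTopic), and dynamic time warping aligns two discussions' vector sequences to quantify how similarly their themes evolve. A minimal pure-Python DTW sketch with hypothetical toy vectors:

```python
# Minimal dynamic time warping over sequences of per-utterance
# topic vectors. The toy three-topic vectors are hypothetical;
# the paper obtains topic representations via BERTopic.
import math

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW with Euclidean local cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    # dp[i][j] = minimal cumulative cost aligning a[:i] with b[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # insertion
                                  dp[i][j - 1],      # deletion
                                  dp[i - 1][j - 1])  # match
    return dp[n][m]

# Two toy trajectories over three topics: one stays on topic,
# the other drifts across topics over time.
stable   = [(0.8, 0.1, 0.1), (0.7, 0.2, 0.1), (0.8, 0.1, 0.1)]
drifting = [(0.8, 0.1, 0.1), (0.2, 0.7, 0.1), (0.1, 0.1, 0.8)]
d = dtw_distance(stable, drifting)
```

A larger DTW distance from a reference trajectory indicates more thematic drift, which is how reduced drift in the AI conditions can be quantified.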