🤖 AI Summary
This study investigates how personality traits expressed through language by large language model-driven conversational agents influence users' perceptions and decisions in charitable donation contexts. Through a crowdsourced experiment (N=360), the research systematically manipulates three linguistic personality dimensions (attitude, authority, and reasoning style) and employs structural equation modeling to analyze their effects on affect, trust, and donation behavior. The work provides the first multidimensional deconstruction of linguistic personality in conversational agents, revealing that while personality traits do not directly determine donations, they indirectly shape behavior through trust, perceived competence, and situational empathy. Notably, participants interacting with pessimistic agents, despite reporting lower mood and giving lower evaluations, tended to donate more, an unexpected finding that underscores potential manipulative risks and ethical challenges in agent design.
📝 Abstract
Large Language Model-powered conversational agents (CAs) are increasingly capable of projecting sophisticated personalities through language, but how these projections affect users remains unclear. We therefore examine how linguistically expressed CA personalities affect user decisions and perceptions in the context of charitable giving. In a crowdsourced study, 360 participants interacted with one of eight CAs, each projecting a personality composed of three linguistic aspects: attitude (optimistic/pessimistic), authority (authoritative/submissive), and reasoning (emotional/rational). While a CA's composite personality did not affect participants' decisions, it did affect their perceptions and emotional responses. In particular, participants interacting with pessimistic CAs reported lower emotional states and lower affinity toward the cause, and perceived the CA as less trustworthy and less competent, yet tended to donate more to the charity. Perceptions of trust, competence, and situational empathy significantly predicted donation decisions. Our findings highlight the risks CAs pose as instruments of manipulation, subtly influencing user perceptions and decisions.