🤖 AI Summary
This study investigates how the level of detail and expressed confidence of AI-generated responses influence users' beliefs. In a preregistered online experiment (N = 304), the authors apply multilevel statistical modeling while controlling for confounds including prior conviction, task type, and perceived stance agreement. They introduce a two-measure analysis framework—"belief switch" (a categorical reversal of the user's initial stance) and "belief shift" (a continuous adjustment of conviction toward or away from the AI's stance)—to disentangle distinct mechanisms of belief change. Results indicate that detailed responses delivered with medium confidence elicit the largest overall belief change, whereas highly confident responses tend to produce belief shifts but fewer stance reversals. These findings reveal a nonlinear relationship between AI discourse features and human belief dynamics, challenging the assumption that confidence acts monotonically, and they provide empirical evidence and a theoretical framework to guide the design of trustworthy, psychologically informed AI interfaces.
📝 Abstract
The growing use of AI-generated responses in everyday tools raises concerns about how subtle features, such as supporting detail or a confident tone, may shape people's beliefs. To understand this, we conducted a pre-registered online experiment (N = 304) investigating how the detail and confidence of AI-generated responses influence belief change. We introduce an analysis framework with two targeted measures: belief switch and belief shift. The former captures users reversing their initial stance after AI input; the latter captures how far they adjust their conviction toward or away from the AI's stance, quantifying not only categorical changes but also subtler, continuous adjustments in belief strength that reinforce or weaken existing beliefs. Using this framework, we find that detailed responses with medium confidence are associated with the largest overall belief changes, whereas highly confident messages tend to elicit belief shifts but induce fewer stance reversals. Our results also show that task type (fact-checking versus opinion evaluation), prior conviction, and perceived stance agreement further modulate the extent and direction of belief change. These findings illustrate how properties of AI responses interact with user beliefs in subtle but potentially consequential ways, raising practical as well as ethical considerations for the design of LLM-powered systems.