🤖 AI Summary
This study investigates how AI-generated emotional framing impairs human detection of logical fallacies. Eight large language models were benchmarked on injecting emotionally charged language into fallacious arguments while preserving their underlying logical structure; the best-performing models were then used to generate stimuli for controlled human behavioral experiments. The experiments reveal, for the first time, a systematic interaction between emotional framing and logical fallacies: emotional wording significantly reduces fallacy detection performance (a mean F1 decline of 14.5%) while increasing subjective persuasiveness, with heterogeneous effects across emotion types. The contributions are twofold: (1) establishing emotional framing as a powerful enhancer of fallacy concealment, and (2) providing empirical evidence and a methodological foundation for mitigating cognitive biases in AI-mediated public communication.
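As an illustration of the stimulus-generation step, the sketch below shows one way such an emotion-injection prompt could be implemented. The `call_llm` helper, the prompt wording, and the example argument are hypothetical, assumed for illustration; the paper's actual prompts and model choices are not reproduced here.

```python
# A minimal sketch of the stimulus-generation step, assuming a generic
# call_llm(prompt) -> str helper (hypothetical; any chat-completion API
# could stand in). The prompt wording below is illustrative only.

def inject_emotion(argument: str, emotion: str, call_llm) -> str:
    """Rewrite a fallacious argument with a target emotional appeal,
    asking the model to leave the logical structure untouched."""
    prompt = (
        f"Rewrite the following argument so that it conveys {emotion}, "
        "using emotionally charged wording only. Do NOT change the claim, "
        "the premises, or the logical structure of the argument.\n\n"
        f"Argument: {argument}\n\nRewritten argument:"
    )
    return call_llm(prompt)

if __name__ == "__main__":
    # Stub in place of a real model call, so the sketch runs as-is.
    stub = lambda p: "<emotionally reframed argument>"
    bandwagon = "Everyone is buying this phone, so it must be the best one."
    print(inject_emotion(bandwagon, "fear", stub))
```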
📝 Abstract
Logical fallacies are common in public communication and can mislead audiences; because convincingness is inherently subjective, fallacious arguments may still appear convincing despite lacking soundness. We present the first computational study of how emotional framing interacts with fallacies and convincingness, using large language models (LLMs) to systematically vary emotional appeals in fallacious arguments. We benchmark eight LLMs on injecting emotional appeals into fallacious arguments while preserving their logical structure, then use the best models to generate stimuli for a human study. Our results show that LLM-driven emotional framing reduces human fallacy-detection F1 by 14.5% on average. Humans detect fallacies better when perceiving enjoyment than when perceiving fear or sadness, yet all three emotions correlate with significantly higher convincingness than neutral or other emotional states. Our work has implications for AI-driven emotional manipulation in the context of fallacious argumentation.
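For concreteness, the sketch below shows how a per-condition F1 comparison of this kind could be computed. The response labels are invented for illustration and do not reproduce the study's data or its exact 14.5% figure.

```python
# Illustrative only: computing a per-condition F1 drop of the kind the
# paper reports (a 14.5% average decline). All labels below are invented.
from sklearn.metrics import f1_score

# 1 = participant flagged the argument as fallacious, 0 = did not.
gold = [1, 1, 1, 0, 1, 0, 1, 1]            # ground-truth fallacy labels (hypothetical)
pred_neutral = [1, 1, 1, 0, 1, 0, 0, 1]    # responses to neutrally worded arguments
pred_emotional = [1, 0, 1, 0, 0, 0, 0, 1]  # responses to emotionally framed versions

f1_neutral = f1_score(gold, pred_neutral)
f1_emotional = f1_score(gold, pred_emotional)
decline = (f1_neutral - f1_emotional) / f1_neutral * 100
print(f"F1 neutral={f1_neutral:.2f}, emotional={f1_emotional:.2f}, "
      f"relative decline={decline:.1f}%")
```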