Disagreements in Reasoning: How a Model's Thinking Process Dictates Persuasion in Multi-Agent Systems

📅 2025-09-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study challenges the prevailing assumption that model scale predominantly determines persuasive efficacy, instead investigating the cognitive foundations of persuasion dynamics between large language models (LLMs) and large reasoning models (LRMs) in multi-agent systems (MAS). The authors develop a controlled multi-agent experimental platform to systematically analyze how chain-of-thought sharing and reasoning depth influence persuasion outcomes. The key contribution is the "Persuasion Duality": explicit reasoning simultaneously enhances a model's persuasiveness and strengthens its resistance to being persuaded, leading to attenuation of influence across multi-hop interactions. Empirical results show that sharing transparent reasoning significantly improves persuasion efficacy, while more advanced reasoning models exhibit heightened cognitive rigidity in group interactions. These findings uncover a fundamental trade-off between reasoning capability and persuasion robustness, offering a novel paradigm and empirical grounding for designing safe, interpretable MAS.

📝 Abstract
The rapid proliferation of Multi-Agent Systems (MAS), in which Large Language Models (LLMs) and Large Reasoning Models (LRMs) collaborate to solve complex problems, necessitates a deep understanding of the persuasion dynamics that govern their interactions. This paper challenges the prevailing hypothesis that persuasive efficacy is primarily a function of model scale. We propose instead that these dynamics are fundamentally dictated by a model's underlying cognitive process, especially its capacity for explicit reasoning. Through a series of multi-agent persuasion experiments, we uncover a fundamental trade-off we term the Persuasion Duality. Our findings reveal that the reasoning process in LRMs exhibits significantly greater resistance to persuasion, maintaining initial beliefs more robustly. Conversely, making this reasoning process transparent by sharing the "thinking content" dramatically increases their ability to persuade others. We further examine more complex transmission scenarios, revealing the dynamics of influence propagation and decay in multi-hop persuasion across networks of agents. This research provides systematic evidence linking a model's internal processing architecture to its external persuasive behavior, offering a novel explanation for the susceptibility of advanced models and highlighting critical implications for the safety, robustness, and design of future MAS.
Problem

Research questions and friction points this paper is trying to address.

Understanding persuasion dynamics in multi-agent systems with LLMs and LRMs
Challenging the hypothesis that persuasion depends primarily on model scale
Investigating how reasoning processes affect belief resistance and persuasive ability
Innovation

Methods, ideas, or system contributions that make the work stand out.

A model's cognitive process, not its scale, dictates persuasion dynamics
Sharing the reasoning process ("thinking content") dramatically increases persuasive ability
Systematic evidence linking internal processing architecture to external persuasive behavior
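The Persuasion Duality and multi-hop decay described above can be illustrated with a toy model. This is not the paper's method, only a sketch: the function names (`flip_probability`, `multi_hop_influence`) and all parameter values (`transparency_boost`, `reasoner_resistance`, `decay_per_hop`) are hypothetical, chosen to encode the two qualitative claims (transparent reasoning helps persuade; explicit reasoners resist persuasion; influence attenuates per hop).

```python
def flip_probability(base, persuader_shares_reasoning, target_is_reasoner,
                     transparency_boost=0.2, reasoner_resistance=0.3):
    """Toy probability that a target agent flips its belief.

    Encodes the Persuasion Duality: sharing "thinking content" raises
    persuasive power, while the target's own explicit reasoning raises
    its resistance. All numbers are illustrative, not from the paper.
    """
    p = base
    if persuader_shares_reasoning:
        p += transparency_boost   # transparency boosts persuasion
    if target_is_reasoner:
        p -= reasoner_resistance  # explicit reasoners resist more
    return max(0.0, min(1.0, p))  # clamp to a valid probability


def multi_hop_influence(initial_strength, hops, decay_per_hop=0.4):
    """Toy model of influence attenuating across a multi-hop chain.

    Returns the influence strength after each hop, shrinking
    geometrically as the message passes between agents.
    """
    trace = [initial_strength]
    for _ in range(hops):
        trace.append(trace[-1] * (1.0 - decay_per_hop))
    return trace
```

For example, `flip_probability(0.5, True, False)` exceeds `flip_probability(0.5, False, True)`, capturing the asymmetry between persuading and being persuaded, and `multi_hop_influence(1.0, 3)` yields a strictly decreasing trace, mirroring the reported attenuation across agent networks.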