Persuade Me if You Can: A Framework for Evaluating Persuasion Effectiveness and Susceptibility Among Large Language Models

📅 2025-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically evaluates the persuasive capabilities of large language models (LLMs) and their susceptibility to misinformation, exposing risks of misuse and gaps in ethical alignment. The authors propose PMIYC, a multi-agent interactive framework that performs automated, scalable evaluation along two dimensions, measuring both a model's persuasive effectiveness and its robustness to being persuaded, via multi-turn dialogues between Persuader and Persuadee LLM agents in subjective-judgment and misinformation scenarios. The results reveal a notable asymmetry: Llama-3.3-70B and GPT-4o achieve comparable persuasion effectiveness, both outperforming Claude 3 Haiku by roughly 30%, yet GPT-4o shows over 50% greater resistance to misinformation-based persuasion than Llama-3.3-70B. PMIYC thus jointly benchmarks persuasive efficacy and resistance to persuasion within a single evaluation paradigm.

📝 Abstract
Large Language Models (LLMs) demonstrate persuasive capabilities that rival human-level persuasion. While these capabilities can be used for social good, they also present risks of potential misuse. Moreover, LLMs' susceptibility to persuasion raises concerns about alignment with ethical principles. To study these dynamics, we introduce Persuade Me If You Can (PMIYC), an automated framework for evaluating persuasion through multi-agent interactions. Here, Persuader agents engage in multi-turn conversations with the Persuadee agents, allowing us to measure LLMs' persuasive effectiveness and their susceptibility to persuasion. We conduct comprehensive evaluations across diverse LLMs, ensuring each model is assessed against others in both subjective and misinformation contexts. We validate the efficacy of our framework through human evaluations and show alignment with prior work. PMIYC offers a scalable alternative to human annotation for studying persuasion in LLMs. Through PMIYC, we find that Llama-3.3-70B and GPT-4o exhibit similar persuasive effectiveness, outperforming Claude 3 Haiku by 30%. However, GPT-4o demonstrates over 50% greater resistance to persuasion for misinformation compared to Llama-3.3-70B. These findings provide empirical insights into the persuasive dynamics of LLMs and contribute to the development of safer AI systems.
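The multi-turn Persuader/Persuadee protocol described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the agent internals are stubbed (a real Persuadee would be an LLM re-rating its stance each turn), and the names, 1-5 agreement scale, and nudge rule are assumptions for demonstration only.

```python
# Minimal sketch of a PMIYC-style multi-turn persuasion loop.
# All agent logic is a stand-in; in the actual framework both roles
# are LLMs exchanging free-text arguments.

from dataclasses import dataclass

@dataclass
class Persuadee:
    """Holds a claim and a 1-5 agreement score that arguments may shift."""
    claim: str
    agreement: int = 2  # initial stance toward the claim

    def respond(self, argument: str) -> int:
        # Stub: nudge agreement up by 1 whenever the argument mentions
        # the claim, capped at 5. A real agent would reason over the text.
        if self.claim.lower() in argument.lower():
            self.agreement = min(5, self.agreement + 1)
        return self.agreement

def run_dialogue(persuader_turns: list[str], persuadee: Persuadee) -> dict:
    """Run a multi-turn exchange and report the stance shift."""
    initial = persuadee.agreement
    final = initial
    for argument in persuader_turns:
        final = persuadee.respond(argument)
    # A positive shift means the Persuader moved the Persuadee's stance;
    # averaging shifts across many claims yields an effectiveness score,
    # while a Persuadee's average shift measures its susceptibility.
    return {"initial": initial, "final": final, "shift": final - initial}

claim = "remote work boosts productivity"
turns = [f"Studies suggest {claim}.", f"Surveys also indicate {claim}."]
result = run_dialogue(turns, Persuadee(claim=claim))
print(result)  # {'initial': 2, 'final': 4, 'shift': 2}
```

Scoring the same model once as Persuader and once as Persuadee is what lets the framework expose asymmetries like the GPT-4o result above: strong at persuading, yet comparatively resistant to being persuaded.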
Problem

Research questions and friction points this paper is trying to address.

Evaluate the persuasion effectiveness of Large Language Models
Assess LLMs' susceptibility to persuasion, including in misinformation contexts
Develop a scalable framework for analyzing multi-agent persuasive interactions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated multi-agent interaction framework (PMIYC)
Joint evaluation of LLMs' persuasive effectiveness and susceptibility to persuasion
Scalable alternative to human annotation for studying persuasion