Make an Offer They Can't Refuse: Grounding Bayesian Persuasion in Real-World Dialogues without Pre-Commitment

📅 2025-10-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM-based persuasion research overlooks strategic information design under information asymmetry and often relies on unrealistic pre-commitment assumptions. Method: This paper introduces Bayesian Persuasion (BP) into natural-language, single-turn dialogues for the first time, proposing a commitment-communication mechanism that operates without pre-commitment. It develops two BP frameworks, Semi-Formal-Natural-Language (SFNL) and Fully-Natural-Language (FNL), that guide the persuadee's belief updating via controllable information structures. The approach integrates BP theory, natural language generation, and supervised fine-tuning, with multi-dimensional evaluation on both LLMs and human participants. Contribution/Results: BP-based strategies significantly improve persuasion success rates; fine-tuned small models attain performance comparable to large models; SFNL excels in logical rigor, while FNL achieves stronger emotional resonance. This work establishes an interpretable, scalable paradigm for strategic persuasion in LLMs.

📝 Abstract
Persuasion, a fundamental social capability for humans, remains a challenge for AI systems such as large language models (LLMs). Current studies often overlook the strategic use of information asymmetry in message design or rely on strong assumptions regarding pre-commitment. In this work, we explore the application of Bayesian Persuasion (BP) in natural language within single-turn dialogue settings, to enhance the strategic persuasion capabilities of LLMs. Our framework incorporates a commitment-communication mechanism, where the persuader explicitly outlines an information schema by narrating their potential types (e.g., honest or dishonest), thereby guiding the persuadee in performing the intended Bayesian belief update. We evaluate two variants of our approach: Semi-Formal-Natural-Language (SFNL) BP and Fully-Natural-Language (FNL) BP, benchmarking them against both naive and strong non-BP (NBP) baselines within a comprehensive evaluation framework. This framework covers a diverse set of persuadees (including LLM instances with varying prompts and fine-tuning, and human participants) across tasks ranging from specially designed persuasion scenarios to general everyday situations. Experimental results on LLM-based agents reveal three main findings: (1) LLMs guided by BP strategies consistently achieve higher persuasion success rates than NBP baselines; (2) SFNL exhibits greater credibility and logical coherence, while FNL shows stronger emotional resonance and robustness in naturalistic conversations; (3) with supervised fine-tuning, smaller models can attain BP performance comparable to that of larger models.
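The Bayesian belief update at the heart of the commitment-communication mechanism can be sketched as follows. This is an illustrative toy example, not code from the paper: the persuader declares an information schema over its possible types (honest or dishonest), and the persuadee applies Bayes' rule to that declared schema upon observing a message. All numbers and names here are hypothetical.

```python
def posterior(prior, schema, signal):
    """Bayes update: P(type | signal) over a discrete type space,
    given a declared signaling schema P(signal | type)."""
    unnorm = {t: prior[t] * schema[t][signal] for t in prior}
    total = sum(unnorm.values())
    return {t: p / total for t, p in unnorm.items()}

# Hypothetical setup: persuader is "honest" or "dishonest" with equal prior.
# The declared schema says an honest type sends "claim" with probability 1.0,
# a dishonest type with probability 0.6.
prior = {"honest": 0.5, "dishonest": 0.5}
schema = {
    "honest": {"claim": 1.0, "silent": 0.0},
    "dishonest": {"claim": 0.6, "silent": 0.4},
}

belief = posterior(prior, schema, "claim")
# P(honest | claim) = (1.0 * 0.5) / (1.0 * 0.5 + 0.6 * 0.5) = 0.625
```

The point of the mechanism is that the schema itself is communicated in natural language within the single turn, so the persuadee can perform this update without any externally enforced pre-commitment.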
Problem

Research questions and friction points this paper is trying to address.

Enhancing strategic persuasion in AI dialogues without pre-commitment assumptions
Applying Bayesian Persuasion in natural language for single-turn interactions
Addressing information asymmetry through commitment-communication mechanisms in LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian Persuasion framework enhances LLM strategic dialogue
Commitment-communication mechanism outlines information schema explicitly
Semi-Formal and Fully-Natural-Language variants improve persuasion robustness
Buwei He
Beijing University of Posts and Telecommunications; Beijing Institute for General Artificial Intelligence, Beijing, China
Yang Liu
Beijing Institute for General Artificial Intelligence, Beijing, China
Zhaowei Zhang
Peking University
AI Governance, AI Alignment, Game Theory, Human-AI Collaboration
Zixia Jia
BigAI
NLP
Huijia Wu
Beijing University of Posts and Telecommunications, Beijing, China
Zhaofeng He
Beijing University of Posts and Telecommunications, Beijing, China
Zilong Zheng
Beijing Institute for General Artificial Intelligence, Beijing, China
Yipeng Kang
BIGAI
Natural language processing