🤖 AI Summary
This work addresses key limitations in large language model (LLM)-generated persuasive dialogue—namely, insufficient fluency, shallow logical reasoning, and heavy reliance on human annotation. To this end, we propose a multi-LLM role-based collaborative framework grounded in prompt engineering, which establishes a multi-agent communication architecture. By simulating structured interactions—including debate, critical questioning, and iterative refinement—the framework integrates role-playing, feedback-driven reinforcement, and diversity constraints. Its core contribution is the first scalable multi-LLM collaboration paradigm for persuasive dialogue generation, which substantially enhances dialogue naturalness, lexical and syntactic diversity, and the efficacy of persuasive strategies. Experimental results demonstrate that the generated dialogues exhibit high robustness and generalization across both conventional and socially sensitive domains—including taboo topics—while significantly reducing the need for manual curation or annotation.
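The debate/critique/refine cycle described above can be sketched as a simple loop between two role-conditioned agents. This is a minimal illustration, not the paper's implementation: `call_llm` is a hypothetical placeholder for any chat-completion API, stubbed here so the example runs offline, and the role names and prompts are assumptions.

```python
def call_llm(role: str, prompt: str) -> str:
    """Placeholder for a real LLM call conditioned on a role-specific
    system prompt; stubbed so this sketch is self-contained."""
    return f"[{role}] response to: {prompt[:40]}"

def generate_persuasive_dialogue(topic: str, rounds: int = 3) -> list[str]:
    """One persuader drafts an argument; a critic questions it; the
    persuader refines - the structured interaction the framework simulates."""
    dialogue = []
    draft = call_llm("persuader", f"Argue for: {topic}")
    for _ in range(rounds):
        # Critical questioning: the critic probes logical gaps in the draft.
        critique = call_llm("critic", f"Find logical gaps in: {draft}")
        # Iterative refinement: the persuader revises given the feedback.
        draft = call_llm("persuader", f"Refine given this critique: {critique}")
        dialogue.append(draft)
    return dialogue
```

In the full framework, additional constraints (e.g., diversity-promoting prompts) would shape each turn; here they are omitted for brevity.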
📝 Abstract
Large Language Models (LLMs) have shown proficiency in generating persuasive dialogue, yet concerns about the fluency and sophistication of their outputs persist. This paper presents a multi-LLM communication framework designed to enhance the generation of persuasive data automatically. This framework facilitates the efficient production of high-quality, diverse linguistic content with minimal human oversight. Through extensive evaluations, we demonstrate that the generated data excels in naturalness, linguistic diversity, and the strategic use of persuasion, even in complex scenarios involving social taboos. The framework also proves adept at generalizing across novel contexts. Our results highlight the framework's potential to significantly advance research in both computational and social science domains concerning persuasive communication.