🤖 AI Summary
This study investigates how large language models (LLMs) internally conceptualize “persuasiveness” in public speaking—a dimension poorly understood in current LLM research.
Method: Leveraging authentic transcripts from the French "Ma Thèse en 180 secondes" (three-minute thesis) speech competition, we design controlled prompt-based tasks to enhance or attenuate persuasiveness, using GPT-4o as the test model. We introduce an interpretable feature set integrating rhetorical devices (e.g., rhetorical questions, parallelism) and discourse markers (e.g., "admittedly," "the crux lies in"), coupled with fine-grained linguistic analysis.
Contribution/Results: We find that GPT-4o modulates persuasiveness systematically, relying not on human-like logical argumentation or credibility construction but primarily on affective lexical polarity, syntactic choices (e.g., ratios of interrogative and exclamative clauses), and prosodic cues (e.g., pause placement). This work provides the first attributable, quantifiable, and interpretable analysis of LLM behavior along the persuasiveness dimension, establishing a novel methodology for LLM stylistic modeling and human–AI rhetorical collaboration.
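The kinds of surface features the study attributes shifts to (interrogative/exclamative clause ratios, discourse-marker frequency) can be illustrated with a minimal sketch. This is a hypothetical mini-extractor, not the authors' actual pipeline; the marker list and sentence-splitting heuristic are assumptions for illustration only.

```python
import re

# Example discourse markers mentioned in the summary; a real feature set
# would be far larger and curated (this list is illustrative only).
DISCOURSE_MARKERS = ["admittedly", "the crux lies in"]

def rhetorical_features(text: str) -> dict:
    # Naive sentence split on terminal punctuation, keeping the delimiter.
    sentences = [s.strip() for s in re.findall(r"[^.!?]+[.!?]", text)]
    n = len(sentences) or 1
    lower = text.lower()
    return {
        "interrogative_ratio": sum(s.endswith("?") for s in sentences) / n,
        "exclamative_ratio": sum(s.endswith("!") for s in sentences) / n,
        "discourse_markers": sum(lower.count(m) for m in DISCOURSE_MARKERS),
    }

# Toy before/after pair mimicking an "enhance persuasiveness" rewrite.
original = "My thesis studies proteins. They fold in milliseconds."
enhanced = ("Have you ever wondered how proteins fold? "
            "Admittedly, it happens in milliseconds!")

print(rhetorical_features(original))
# → {'interrogative_ratio': 0.0, 'exclamative_ratio': 0.0, 'discourse_markers': 0}
print(rhetorical_features(enhanced))
# → {'interrogative_ratio': 0.5, 'exclamative_ratio': 0.5, 'discourse_markers': 1}
```

Comparing such feature vectors between original and GPT-4o-rewritten transcripts is what makes the observed stylistic shifts attributable and quantifiable.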
📝 Abstract
This study examines how large language models understand the concept of persuasiveness in public speaking by modifying speech transcripts from PhD candidates in the "Ma Thèse en 180 secondes" competition, using the 3MT French dataset. Our contributions include a novel methodology and an interpretable textual feature set integrating rhetorical devices and discourse markers. We prompt GPT-4o to enhance or diminish persuasiveness and analyze linguistic shifts between the original and generated speeches in terms of these features. Results indicate that GPT-4o applies systematic stylistic modifications rather than optimizing persuasiveness in a human-like manner. Notably, it manipulates the emotional lexicon and syntactic structures (such as interrogative and exclamatory clauses) to amplify rhetorical impact.