🤖 AI Summary
This work addresses the challenge of explainable persuasive-argument assessment. The authors propose AutoPersuade, a three-stage interpretable framework: (1) curating a large collection of arguments with human persuasiveness ratings; (2) fitting a novel topic model that identifies the argument features driving persuasion; and (3) using the fitted model both to predict the persuasiveness of new arguments and to estimate the causal effects of individual argument components, yielding component-level explanations. Unlike conventional black-box evaluation paradigms, the framework supports fine-grained causal attribution. An experimental study on arguments for veganism demonstrates strong agreement with human judgments (85.2%) and high out-of-sample predictive accuracy (79.3%), outperforming existing baselines. By grounding explanations in causal inference rather than mere correlation, the framework advances interpretability in argument mining and offers transparent, actionable insight into what makes an argument effective.
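To make the three-stage pipeline concrete, here is a minimal sketch of how stages (1) and (2) could be wired together. This is not the authors' code: the arguments and ratings are fabricated toy data, and scikit-learn's off-the-shelf NMF stands in for the paper's novel topic model, with a plain linear regression relating topic loadings to persuasiveness.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.linear_model import LinearRegression

# Stage 1 (toy stand-in): argument texts paired with human persuasiveness ratings.
arguments = [
    "A vegan diet greatly reduces animal suffering.",
    "Plant-based eating lowers greenhouse gas emissions.",
    "Vegan meals can be cheaper than meat-based ones.",
    "Many athletes thrive on plant-based diets.",
]
ratings = np.array([4.2, 3.8, 3.1, 3.5])  # e.g., mean 1-5 persuasiveness scores

# Stage 2: represent each argument as a mixture of latent topics.
# NMF is a generic substitute for the paper's novel topic model.
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(arguments)
topic_model = NMF(n_components=2, random_state=0)
loadings = topic_model.fit_transform(X)  # document-by-topic weights

# Relate topic prevalence to observed persuasiveness.
outcome_model = LinearRegression().fit(loadings, ratings)
print("Topic coefficients on persuasiveness:", outcome_model.coef_)
```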
📝 Abstract
We introduce a three-part framework for constructing persuasive messages, AutoPersuade. First, we curate a large collection of arguments and gather human evaluations of their persuasiveness. Next, we introduce a novel topic model to identify the features of these arguments that influence persuasion. Finally, we use the model to predict the persuasiveness of new arguments and to assess the causal effects of argument components, offering an explanation of the results. We demonstrate the effectiveness of AutoPersuade in an experimental study on arguments for veganism, validating our findings through human studies and out-of-sample predictions.
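Stage three, prediction plus component-level explanation, could then look roughly as follows, continuing the hypothetical sketch above. A new argument is scored, and each topic's contribution is estimated by zeroing out its loading and observing the change in the prediction; this simple ablation is only a crude proxy for the paper's causal-effect estimates.

```python
# Predict persuasiveness of a new (hypothetical) argument.
new_argument = ["Going vegan is an easy way to cut your carbon footprint."]
new_loadings = topic_model.transform(tfidf.transform(new_argument))
baseline = outcome_model.predict(new_loadings)[0]
print(f"Predicted persuasiveness: {baseline:.2f}")

# Counterfactual-style attribution: remove one topic component at a time
# and measure the drop in predicted persuasiveness.
for k in range(new_loadings.shape[1]):
    ablated = new_loadings.copy()
    ablated[:, k] = 0.0
    effect = baseline - outcome_model.predict(ablated)[0]
    print(f"Topic {k}: contribution {effect:+.2f}")
```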