🤖 AI Summary
Online moderation strategies have long lacked large-scale empirical evaluation, primarily because of the high cost of involving human discussants, moderators, and evaluators in repeated experiments. This paper introduces SynDisco, a fully LLM-driven framework for simulating synthetic online discussions, enabling hundreds of discussions to be run with LLM user-agents and moderators and bypassing the need for human participation. Methodologically, the paper proposes a novel moderation strategy inspired by a Reinforcement Learning (RL) formulation of the problem, and finds that smaller, less intensively instruction-tuned LLMs produce more varied discussions than larger models. Key contributions include: (1) releasing the Virtual Moderation Dataset (VMD), a large dataset of LLM-generated and LLM-annotated discussions produced by three families of open-source LLMs; (2) demonstrating that the proposed strategy significantly outperforms established human moderation guidelines as well as out-of-the-box LLM moderation; and (3) releasing SynDisco as an efficient, open-source Python framework for scalable, fully autonomous evaluation of moderation strategies.
📝 Abstract
Despite the ever-growing importance of online moderation, there has been no large-scale study evaluating the effectiveness of alternative moderation strategies. This is largely due to the lack of appropriate datasets and the difficulty of involving human discussants, moderators, and evaluators in multiple experiments. In this paper, we propose a methodology that leverages synthetic experiments performed exclusively by Large Language Models (LLMs) to initially bypass the need for human participation in experiments involving online moderation. We evaluate six LLM moderation configurations: two real-life moderation strategies currently in use (guidelines issued to human moderators for online moderation, and real-life facilitation), two baseline strategies (guidelines elicited for LLM alignment work, and LLM moderation with minimal prompting), a baseline with no moderator at all, and our own proposed strategy, inspired by a Reinforcement Learning (RL) formulation of the problem. We find that our moderation strategy significantly outperforms established moderation guidelines, as well as out-of-the-box LLM moderation. We also find that smaller LLMs, with less intensive instruction-tuning, can create more varied discussions than larger models. To run these experiments, we create and release an efficient, purpose-built, open-source Python framework, dubbed "SynDisco", to easily simulate hundreds of discussions using LLM user-agents and moderators. Additionally, we release the Virtual Moderation Dataset (VMD), a large dataset of LLM-generated and LLM-annotated discussions, generated by three families of open-source LLMs and accompanied by an exploratory analysis of the dataset.