Evaluating Moderation in Online Social Network

📅 2025-12-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Toxic content propagation on online social platforms demands governance mechanisms that balance theoretical rigor with practical feasibility. This paper proposes a toxicity propagation simulation framework based on an extended SEIZ (Susceptible–Exposed–Infected–Skeptic) epidemic model. It introduces, for the first time in content moderation simulation, user-level modeling of the Dark Triad personality traits—narcissism, Machiavellianism, and psychopathy—as key determinants of susceptibility and transmission behavior. The authors design a threshold-driven, configurable, and interpretable personalized moderator that dynamically adjusts intervention intensity according to individual psychological profiles, departing from conventional uniform-intervention paradigms. Experimental results show that the proposed intelligent moderator significantly suppresses toxicity diffusion: average propagation rate and duration decrease by 47% relative to a baseline moderator. These findings support the advantages of personality-aware moderation strategies in improving both intervention efficacy and interpretability.
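The compartment dynamics underlying the framework can be illustrated with the classical SEIZ rumor model, where susceptible users (S) who contact infected users (I) either become infected or enter an exposed/undecided state (E), and contact with skeptics (Z) can recruit users into skepticism. The sketch below is a minimal Euler-integration implementation of the standard SEIZ equations; all parameter names and values are illustrative assumptions, not the paper's calibrated configuration, and the paper's Dark Triad extension is not included here.

```python
# Minimal Euler-integration sketch of the classical SEIZ compartments.
# Parameters (beta, b, p, l, rho, eps) are illustrative assumptions,
# not values reported in the paper.
def simulate_seiz(S, E, I, Z, steps=200, dt=0.1,
                  beta=0.9, b=0.2, p=0.3, l=0.4, rho=0.5, eps=0.1):
    """Return the (S, E, I, Z) trajectory over `steps` Euler steps."""
    N = S + E + I + Z  # total population is conserved
    history = []
    for _ in range(steps):
        s_i = beta * S * I / N   # susceptible-infected contact flow
        s_z = b * S * Z / N      # susceptible-skeptic contact flow
        dS = -s_i - s_z
        dE = ((1 - p) * s_i + (1 - l) * s_z
              - rho * E * I / N - eps * E)     # undecided users
        dI = p * s_i + rho * E * I / N + eps * E  # toxic (infected) users
        dZ = l * s_z                              # skeptics
        S, E, I, Z = S + dS * dt, E + dE * dt, I + dI * dt, Z + dZ * dt
        history.append((S, E, I, Z))
    return history
```

Because the four derivatives sum to zero, the total population stays constant across the simulation, which is a quick sanity check when extending the model with moderation terms.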

📝 Abstract
The spread of toxic content on online platforms presents complex challenges that call for both theoretical insight and practical tools to test intervention strategies. We introduce a simulation-based framework that extends the classical SEIZ (Susceptible-Exposed-Infected-Skeptic) epidemic model to capture the dynamics of toxic message propagation. Our simulator incorporates active moderation through two distinct variants: a basic moderator, which implements uniform, non-personalized interventions, and a smart moderator, which leverages user-specific psychological profiles based on Dark Triad traits to apply personalized, threshold-driven moderation. By varying parameter configurations, the simulator allows systematic exploration of how different moderation strategies influence user state transitions over time. Simulation results demonstrate that while generic interventions can curb toxicity under certain conditions, profile-aware moderation is significantly more effective in limiting both the spread and persistence of toxic behavior. This simulation framework offers a flexible and extensible tool for studying and designing adaptive moderation strategies in complex online social systems.
Problem

Research questions and friction points this paper is trying to address.

Simulating toxic content spread in online social networks
Comparing generic versus personalized moderation strategies
Developing a flexible tool for adaptive moderation design
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extended SEIZ model for toxic content dynamics
Smart moderator uses Dark Triad psychological profiles
Simulation framework tests adaptive moderation strategies
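The threshold-driven, profile-aware moderator described above can be sketched as a function that maps a user's Dark Triad scores to an intervention intensity. The trait weights, threshold, and scaling rule below are illustrative assumptions for the sake of the sketch, not the paper's configuration; the key idea it demonstrates is that intervention strength varies per user rather than being uniform.

```python
# Hedged sketch of a threshold-driven, personality-aware moderator.
# Weights and threshold are illustrative assumptions, not the paper's values.
def intervention_intensity(profile, base_threshold=0.5,
                           weights=(0.3, 0.3, 0.4)):
    """profile: dict with 'narcissism', 'machiavellianism', and
    'psychopathy' scores in [0, 1]. Returns 0.0 (no intervention)
    or an intensity in (0, 1]."""
    # weighted toxicity-risk score from the Dark Triad profile
    risk = (weights[0] * profile["narcissism"]
            + weights[1] * profile["machiavellianism"]
            + weights[2] * profile["psychopathy"])
    if risk < base_threshold:
        return 0.0  # below threshold: user is left unmoderated
    # intensity grows with how far the risk exceeds the threshold
    return min(1.0, (risk - base_threshold) / (1 - base_threshold) + 0.1)
```

A uniform baseline moderator corresponds to ignoring `profile` and applying one fixed intensity to everyone, which is the contrast the paper's experiments evaluate.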