🤖 AI Summary
This study investigates how user-initiated blocking on the decentralized social platform Bluesky balances community safety with individual autonomy. Leveraging three months of real-world behavioral data, we construct an 86-dimensional, multi-source behavioral profile and present the first systematic empirical model of self-moderation in decentralized systems. We propose an interpretable blocking-risk prediction framework—logistic regression augmented with SHAP—to quantify and explain blocking likelihood. Our analysis shows that blocking is highly predictable (AUC = 0.89) and identifies three primary drivers: content incitement, interaction anomaly, and account novelty. The findings reveal dynamic trade-offs between individual agency and collective security, offering empirical grounding and methodological guidance for designing lightweight, trustworthy, transparent, and accountable governance mechanisms on decentralized platforms.
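The prediction framework pairs logistic regression with SHAP attributions. As a minimal sketch of that idea (the data, feature count, and weights below are synthetic stand-ins, not the study's 86 real behavioral features), one can fit a plain logistic regression and compute exact SHAP values in closed form, since for a linear model the interventional SHAP value of feature *i* is `w_i * (x_i - E[x_i])` on the log-odds scale:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for behavioral profiles (illustrative only;
# the study uses 86 multi-source features, not these 5).
n, d = 500, 5
X = rng.normal(size=(n, d))
true_w = np.array([1.5, -2.0, 0.0, 0.8, 0.3])
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ true_w))))

def fit_logreg(X, y, lr=0.1, steps=2000):
    """Plain logistic regression via batch gradient descent."""
    Xb = np.hstack([np.ones((len(X), 1)), X])  # prepend bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        preds = 1 / (1 + np.exp(-(Xb @ w)))
        w -= lr * Xb.T @ (preds - y) / len(y)
    return w[0], w[1:]

b, w = fit_logreg(X, y)

# Exact (interventional) SHAP values for a linear model:
# phi_i = w_i * (x_i - E[x_i]), each feature's contribution to the
# log-odds relative to the dataset average.
base_value = b + w @ X.mean(axis=0)
shap_values = w * (X - X.mean(axis=0))  # shape (n, d)

# Additivity check: base value + per-feature SHAP values recover
# each example's log-odds score.
logits = b + X @ w
assert np.allclose(base_value + shap_values.sum(axis=1), logits)
```

In practice one would use `scikit-learn` for the model and the `shap` library for attributions; the closed-form version above just makes the additivity property of SHAP for linear models explicit.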
📝 Abstract
Moderation and blocking behavior, both closely tied to mitigating abuse and misinformation on social platforms, are fundamental mechanisms for maintaining healthy online communities. However, while centralized platforms typically employ top-down moderation, decentralized networks rely on users to self-regulate through mechanisms such as blocking to safeguard their online experience. Given the novelty of the decentralized paradigm, studying self-moderation is critical to understanding how community safety and user autonomy can be effectively balanced. This study examines user blocking on Bluesky, a decentralized social networking platform, providing a comprehensive analysis of over three months of user activity through the lens of blocking behavior. We define profiles based on 86 features describing user activity, content characteristics, and network interactions, and address two primary questions: (1) Is the likelihood of a user being blocked inferable from their online behavior? and (2) What behavioral features are associated with an increased likelihood of being blocked? Our findings offer valuable insights and contribute a robust analytical framework to advance research on moderation in decentralized social networks.