🤖 AI Summary
Existing computational models for identifying “good behavior” in online communities exhibit significant limitations: they overemphasize prosociality while neglecting the diverse, community-specific values users genuinely endorse.
Method: Leveraging 16,000 highly upvoted comments from 80 Reddit subreddits (2016 & 2022), we treat upvotes as proxies for behavioral acceptability and introduce the first temporally grounded, multi-level (macro–meso–micro) value extraction framework, integrating large language models with frequency-based clustering.
Contribution/Results: State-of-the-art prosociality models miss, on average, 82% of empirically observed community values. Our approach identifies 64 (2016) and 72 (2022) empirically grounded value items, replicating established qualitative dimensions (e.g., empathy, fairness) while uncovering high-frequency emergent implicit norms (e.g., “precise questioning,” “restrained rebuttal”). This work establishes a scalable, data-driven paradigm for value discovery to inform equitable, evidence-based community governance.
📝 Abstract
A major task for moderators of online spaces is norm-setting, essentially creating shared norms for user behavior in their communities. Platform design principles emphasize the importance of highlighting norm-adhering examples and explicitly stating community norms. However, norms and values vary between communities and go beyond content-level attributes, making it challenging for platforms and researchers to provide automated ways to identify desirable behavior to be highlighted. Current automated approaches to detect desirability are limited to measures of prosocial behavior, but we do not know whether these measures fully capture the spectrum of what communities value. In this paper, we use upvotes, which express community approval, as a proxy for desirability and examine 16,000 highly upvoted comments across 80 popular sub-communities on Reddit. Using a large language model, we extract values from these comments across two years (2016 and 2022) and compile 64 and 72 *macro*, *meso*, and *micro* values for 2016 and 2022 respectively, based on their frequency across communities. Furthermore, we find that existing computational models for measuring prosociality fail to capture, on average, 82% of the values we extracted. Finally, we show that our approach can not only extract most of the qualitatively identified values from prior taxonomies, but also uncover new values that are actually encouraged in practice. Our findings highlight the need for nuanced models of desirability that go beyond preexisting prosocial measures. This work has implications for improving moderator understanding of their community values and provides a framework that can supplement qualitative approaches with larger-scale content analyses.
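The frequency-based aggregation step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the community names, extracted value phrases, and tier thresholds (`macro_frac`, `meso_frac`) are all hypothetical stand-ins for the LLM-extraction output and the paper's actual cutoffs.

```python
from collections import Counter

# Hypothetical output of the LLM extraction step: for each community,
# the set of value phrases extracted from its highly upvoted comments.
extracted = {
    "r/askscience":  {"empathy", "precise questioning", "fairness"},
    "r/politics":    {"fairness", "restrained rebuttal"},
    "r/aww":         {"empathy"},
    "r/programming": {"precise questioning", "fairness"},
}

def tier_values(extracted, macro_frac=0.75, meso_frac=0.5):
    """Assign each value to a macro/meso/micro tier by the fraction
    of communities in which it appears (thresholds are illustrative)."""
    n = len(extracted)
    counts = Counter(v for values in extracted.values() for v in values)
    tiers = {"macro": [], "meso": [], "micro": []}
    for value, c in counts.most_common():
        frac = c / n
        if frac >= macro_frac:
            tiers["macro"].append(value)   # widely shared across communities
        elif frac >= meso_frac:
            tiers["meso"].append(value)    # shared by a subset of communities
        else:
            tiers["micro"].append(value)   # community-specific
    return tiers

tiers = tier_values(extracted)
```

Here a value endorsed in most communities (e.g., "fairness") surfaces as a macro value, while one seen in a single community (e.g., "restrained rebuttal") stays micro; the real framework operates over 80 subreddits and two time slices.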