The Language of Approval: Identifying the Drivers of Positive Feedback Online

📅 2025-09-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Despite widespread interest in the linguistic drivers of positive feedback (e.g., upvotes, awards) in online communities, causal evidence remains scarce, particularly regarding heterogeneity across authors and communities. Method: Leveraging 11 million posts, the authors integrate quasi-experimental causal inference (controlling for confounders such as author reputation, temporal trends, and community context) with interpretable machine learning to isolate the text-level mechanisms that influence upvote likelihood. Contribution/Results: Linguistic complexity, hedging (tentative expression), and toxicity significantly reduce upvote probability, revealing a "policy-practice gap" between formal community guidelines and actual user feedback behavior. The predictive model detects highly upvoted posts with high AUC, offering an empirically grounded foundation for designing reward-aligned community guidelines, user onboarding interventions, and moderation policies. The work introduces a novel analytical framework bridging causal linguistics and platform governance.

📝 Abstract
Positive feedback via likes and awards is central to online governance, yet which attributes of users' posts elicit rewards, and how these vary across authors and communities, remains unclear. To examine this, we combine quasi-experimental causal inference with predictive modeling on 11M posts from 100 subreddits. We identify linguistic patterns and stylistic attributes causally linked to rewards, controlling for author reputation, timing, and community context. For example, overtly complicated language, tentative style, and toxicity reduce rewards. We use our set of curated features to train models that detect highly upvoted posts with high AUC. Our audit of community guidelines highlights a "policy-practice gap": most rules focus primarily on civility and formatting requirements, with little emphasis on the attributes identified to drive positive feedback. These results inform the design of community guidelines, of support interfaces that teach users how to craft desirable contributions, and of moderation workflows that emphasize positive reinforcement over purely punitive enforcement.
Problem

Research questions and friction points this paper is trying to address.

Identifying linguistic patterns driving online positive feedback
Examining variation in reward mechanisms across communities
Addressing policy-practice gap in community guideline design
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combined causal inference with predictive modeling
Identified linguistic patterns linked to rewards
Trained models that detect highly upvoted posts
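The feature-based prediction step above can be sketched in miniature. This is a hypothetical illustration, not the paper's actual pipeline: the three features (complexity, hedging rate, toxicity), their effect directions, and the data are synthetic, and the paper's real feature set and model are not reproduced here.

```python
# Minimal sketch: predict whether a post is highly upvoted from
# text-level features. All data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000

# Columns (standardized, synthetic): linguistic complexity,
# hedging rate, toxicity score.
X = rng.normal(size=(n, 3))

# Assumed effect directions, following the paper's finding that all
# three attributes reduce upvote probability (magnitudes are made up).
logits = -0.8 * X[:, 0] - 0.5 * X[:, 1] - 1.2 * X[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

# Fit a simple classifier on the curated features and score it by AUC,
# the metric the abstract reports.
model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"in-sample AUC: {auc:.2f}")
```

With synthetic effects this strong, the fitted coefficients recover the assumed negative directions and the AUC is well above chance; the paper's reported AUC on real posts is, of course, a separate empirical result.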