Beyond Likes: How Normative Feedback Complements Engagement Signals on Social Media

📅 2025-05-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Online platforms’ reliance on engagement signals (e.g., likes) amplifies toxic or low-inclusivity content, exacerbating the “popularity-as-validity” bias. To address this, we propose a structured prosocial feedback mechanism with positive psychology—specifically, individual well-being, constructive technology use, and character strengths—as its normative foundation. Leveraging large language models, we generate expert-level psychological scale scores for user comments, supplementing conventional engagement metrics. Validated through a preregistered user experiment and an ML-driven feedback system, our approach significantly increases user preference for high-quality content, attenuates conformity to highly liked but normatively poor content, reduces community toxicity, and improves alignment between platform evaluations and expert judgments. This work constitutes the first integration of psychometrically grounded normative scoring into social feedback systems, enabling joint optimization of participatory engagement and value-aligned content curation.

📝 Abstract
Many online platforms incorporate engagement signals--such as likes and upvotes--into their content ranking systems and interface design. These signals are designed to boost user engagement, but they can unintentionally elevate content that is less inclusive and may not support normatively desirable behavior. This issue becomes especially concerning when toxic content correlates strongly with popularity indicators such as likes and upvotes. In this study, we propose structured prosocial feedback as a complementary signal to likes and upvotes--one that highlights content quality based on normative criteria to help address the limitations of conventional engagement signals. We begin by designing and implementing a machine learning feedback system powered by a large language model (LLM), which evaluates user comments based on principles of positive psychology, such as individual well-being, constructive social media use, and character strengths. We then conduct a pre-registered user study to examine how existing peer-based feedback and the new expert-based feedback interact to shape users' selection of comments in a social media setting. Results show that peer feedback increases conformity to popularity cues, while expert feedback shifts preferences toward normatively higher-quality content. Moreover, incorporating expert feedback alongside peer evaluations improves alignment with expert assessments and contributes to a less toxic community environment. This illustrates the added value of normative cues--such as expert scores generated by LLMs using psychological rubrics--and underscores the potential benefits of incorporating such signals into platform feedback systems to foster healthier online environments.
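The abstract describes blending two signals per comment: a conventional engagement count (likes/upvotes) and an LLM-generated normative score based on a positive-psychology rubric. The sketch below illustrates that blending in Python. It is a minimal, hypothetical reconstruction, not the paper's implementation: the `normative_score` function is a trivial offline heuristic standing in for the LLM rubric scorer, and the rubric names, 0-5 scale, and blend `weight` are assumptions for illustration.

```python
from dataclasses import dataclass

# Hypothetical rubric dimensions, echoing the positive-psychology principles
# named in the abstract (labels assumed, not taken from the paper).
RUBRIC = ("well_being", "constructive_use", "character_strengths")


@dataclass
class Comment:
    text: str
    likes: int


def normative_score(comment: Comment) -> float:
    """Stand-in for the paper's LLM scorer.

    In the real system an LLM rates each comment against psychological
    rubrics; here a toy heuristic (penalize toxic markers, reward substance)
    substitutes so the sketch runs offline. Returns a score in [0, 5].
    """
    toxic_markers = ("idiot", "stupid", "hate")
    penalty = sum(word in comment.text.lower() for word in toxic_markers)
    base = min(len(comment.text.split()) / 10, 1.0) * 5.0
    return max(base - 2.0 * penalty, 0.0)


def rank(comments: list[Comment], weight: float = 0.5) -> list[Comment]:
    """Rank comments by a blend of normalized likes and the normative score.

    weight=0 reproduces pure engagement ranking; weight=1 ranks purely on
    the normative signal.
    """
    max_likes = max((c.likes for c in comments), default=1) or 1

    def blended(c: Comment) -> float:
        return (1 - weight) * (c.likes / max_likes) + weight * (
            normative_score(c) / 5.0
        )

    return sorted(comments, key=blended, reverse=True)
```

With an even blend, a highly liked but toxic comment can fall below a constructive low-engagement one, which is the qualitative effect the study reports for expert feedback.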
Problem

Research questions and friction points this paper is trying to address.

Addressing limitations of engagement signals like likes and upvotes
Reducing toxic content linked to popularity indicators
Incorporating normative feedback to improve content quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-powered feedback system evaluates comments
Combines peer and expert feedback signals
Reduces toxicity with normative quality cues
Yuchen Wu
Tsinghua University, China
Mingduo Zhao
University of California, Berkeley, USA
John Canny
University of California, Berkeley
HCI · Ubicomp · ICTD · Data Mining · Health Technologies