Will I Get Hate Speech? Predicting the Volume of Abusive Replies before Posting in Social Media

📅 2025-03-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study introduces the novel task of *pre-publication abusive reply volume prediction*, aiming to estimate the number of abusive replies a tweet will elicit before it is posted—enabling proactive risk mitigation. We propose a multi-source fusion regression model integrating four feature categories: textual content, text-level metadata, tweet-level metadata, and account-level attributes, augmented with interpretability analysis. Experimental results reveal that content features dominate predictive performance (89% contribution), whereas user identity features exhibit negligible predictive utility—providing the first empirical evidence that online abusive responses are fundamentally *content-driven*, not *identity-driven*. Notably, the model achieves high accuracy using only the to-be-posted text (MAE = 0.32), establishing a deployable, content-centric paradigm for safety interventions.

📝 Abstract
Despite the growing body of research tackling offensive language in social media, this research is predominantly reactive, determining whether content already posted on social media is abusive. There is a gap in predictive approaches, which we address in our study by enabling prediction of the volume of abusive replies a tweet will receive after being posted. We formulate the problem from the perspective of a social media user asking: "if I post a certain message on social media, is it possible to predict the volume of abusive replies it might receive?" We look at four types of features, namely text, text metadata, tweet metadata, and account features, which also help us understand the extent to which the user or the content helps predict the number of abusive replies. This, in turn, helps us develop a model to support social media users in finding the best way to post content. One of our objectives is also to determine the extent to which the volume of abusive replies that a tweet will get is motivated by the content of the tweet or by the identity of the user posting it. Our study finds that one can build a model that performs competitively by developing a comprehensive set of features derived from the content of the message that is going to be posted. In addition, our study suggests that features derived from the user's identity do not impact model performance, suggesting that it is primarily the content of a post that triggers abusive replies rather than who the user is.
Problem

Research questions and friction points this paper is trying to address.

Predict volume of abusive replies before posting on social media.
Identify features influencing abusive replies: content vs. user identity.
Develop predictive model using text, metadata, and account features.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Predicts abusive replies before posting
Uses text, metadata, and account features
Focuses on content, not user identity
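The fusion idea above can be sketched as a small regression: feature vectors from several sources (text content, tweet metadata, account metadata) are concatenated and fed to one regressor. This is a minimal illustrative sketch, not the paper's actual model; the feature names, toy data, and the use of ordinary least squares are all assumptions made for demonstration.

```python
# Hypothetical sketch of multi-source feature fusion for abusive-reply
# volume regression. All feature names and data below are illustrative,
# not taken from the paper.
import numpy as np

def text_features(text):
    """Toy content features: length, uppercase ratio, mention count."""
    n = max(len(text), 1)
    return [len(text), sum(c.isupper() for c in text) / n, text.count("@")]

def fuse(text, tweet_meta, account_meta):
    """Concatenate the feature groups into a single vector."""
    return np.array(text_features(text) + tweet_meta + account_meta,
                    dtype=float)

# Toy training set: (text, [hour, has_media], [log followers], abusive replies)
rows = [
    ("nice day",                 [9, 0],  [2.1], 0.0),
    ("YOU ARE ALL WRONG @x",     [23, 0], [3.5], 4.0),
    ("check my photo",           [12, 1], [2.8], 1.0),
    ("THIS IS OUTRAGEOUS @y @z", [22, 0], [4.0], 6.0),
]
X = np.stack([fuse(t, m, a) for t, m, a, _ in rows])
y = np.array([r[-1] for r in rows])

# Ordinary least squares (with bias column) as a stand-in regressor
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)

def predict(text, tweet_meta, account_meta):
    """Predicted abusive-reply volume for a not-yet-posted tweet."""
    v = np.append(fuse(text, tweet_meta, account_meta), 1.0)
    return float(v @ w)
```

A real system would replace the toy features with the paper's four feature categories and a stronger regressor, but the fusion step (concatenate, then regress) stays the same shape.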
🔎 Similar Papers
No similar papers found.