🤖 AI Summary
This study addresses the prevalence of hostile interactions in replies to posts by mainstream media and government accounts on social media, which often degrade the quality of public discourse. It presents the first systematic quantification of such interactions and introduces an automated natural language processing approach to distinguish constructive (good-faith) replies from non-constructive (hostile or off-topic) ones. Analysis of a large dataset reveals that 68.3% of replies are non-constructive; among replies from verified accounts, the figure rises to 91.7%. Because verified accounts are algorithmically amplified, this suggests that amplification may exacerbate discourse degradation even in conversations anchored by authoritative sources. The work provides a scalable technical framework and empirical foundation for assessing and mitigating low-quality public conversations online.
📝 Abstract
The quality of a user's social media experience is determined both by the content they see and by the quality of the conversation and interaction around it. In this paper, we look at replies to tweets from mainstream media outlets and official government agencies and assess whether they are in good faith, engaging honestly and constructively with the original post, or in bad faith, attacking the author or derailing the conversation. We evaluate automated approaches that may help in making this determination and then show that, within our dataset of replies to mainstream media outlets and government agencies, bad-faith interactions constitute 68.3% of all replies we studied, raising concerns about the quality of discourse in these conversational contexts. This is particularly true of replies from verified accounts, 91.7% of which were bad faith. Given that verified accounts are algorithmically amplified, we discuss the implications of our work for understanding the user experience on social media.
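The automated determination described above amounts to a supervised text-classification task. As a minimal sketch, the toy example below trains a hand-rolled multinomial Naive Bayes classifier over bag-of-words features to separate good-faith from bad-faith replies. The example replies, labels, and model are illustrative assumptions only; the paper's actual dataset and NLP approach may differ substantially.

```python
# Minimal sketch: good-faith vs. bad-faith reply classification with a
# hand-rolled multinomial Naive Bayes over bag-of-words features.
# The example replies and labels are invented for illustration; they are
# NOT the paper's dataset, and the paper's actual method may differ.
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) pairs.
    Returns per-label word counts and per-label document counts."""
    counts = defaultdict(Counter)
    label_totals = Counter()
    for text, label in examples:
        counts[label].update(tokenize(text))
        label_totals[label] += 1
    return counts, label_totals

def predict(counts, label_totals, text):
    """Score each label as log prior + sum of smoothed log likelihoods."""
    vocab = {w for c in counts.values() for w in c}
    n_docs = sum(label_totals.values())
    best_label, best_score = None, float("-inf")
    for label, word_counts in counts.items():
        score = math.log(label_totals[label] / n_docs)
        total = sum(word_counts.values())
        for word in tokenize(text):
            # Add-one (Laplace) smoothing so unseen words don't zero out.
            score += math.log((word_counts[word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

examples = [
    ("thanks for reporting this the source is linked", "good_faith"),
    ("interesting point but the sample size seems small", "good_faith"),
    ("you people are liars and everyone knows it", "bad_faith"),
    ("typical garbage from this account liars all of them", "bad_faith"),
]
counts, label_totals = train(examples)
print(predict(counts, label_totals, "you are all liars"))  # → bad_faith
```

At a realistic scale this simple model would be replaced by a stronger classifier, but the structure of the task (labeled replies in, a good-faith/bad-faith decision out) is the same.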