🤖 AI Summary
Social media platforms face critical challenges in addressing online harassment, including detection latency, inefficient response mechanisms, and the marginalization of victims, with these harms felt acutely by minority student populations. Method: Drawing on 230 surveys and 15 in-depth interviews with students at a minority-serving institution, this study adopts a victim-centered design paradigm grounded in victims' reported experiences and self-defense strategies. It proposes ARI, a system blueprint that integrates user-driven crowdsourced abuse detection (Awareness), economic accountability for perpetrators (Repercussion), and personalized, transparent intervention (Intervention). The design draws on mixed-methods research, needs-driven prototyping, incentive modeling, and a privacy-enhancing architecture. Contribution/Results: The work delivers a deployable system specification that reconciles privacy preservation, anonymity, and attributable accountability, establishing a fair, sustainable, and technically feasible anti-abuse mechanism for platform deployment.
📝 Abstract
Online abuse, a persistent aspect of social platform interactions, impacts user well-being and exposes flaws in platform designs, including insufficient detection efforts and inadequate victim protection measures. Ensuring safety in platform interactions requires integrating victim perspectives into the design of abuse detection and response systems. In this paper, we conduct surveys (n = 230) and semi-structured interviews (n = 15) with students at a minority-serving institution in the US to explore their experiences with abuse across a variety of social platforms, their defense strategies, and their recommendations for how social platforms can improve abuse responses. We build on the study findings to propose design requirements for abuse defense systems and discuss the role of privacy, anonymity, and abuse attribution requirements in their implementation. We introduce ARI, a blueprint for a unified, transparent, and personalized abuse response system for social platforms that sustainably detects abuse by leveraging the expertise of platform users, incentivized with proceeds obtained from abusers.
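To make ARI's incentive loop concrete, the following is a minimal Python sketch of the flow the abstract describes: users crowdsource detection by filing reports (Awareness), confirmed abusers are fined (Repercussion), and the proceeds fund rewards for the reporters whose flags were validated, feeding back into user notifications (Intervention). The class names (AbuseReport, AriIncentivePool), the 0.8 reward share, and the penalty amounts are illustrative assumptions, not the paper's specification.

```python
from dataclasses import dataclass, field

@dataclass
class AbuseReport:
    reporter_id: str         # user who flagged the content (Awareness)
    abuser_id: str           # account the report is filed against
    content_id: str
    confirmed: bool = False  # set once moderation validates the report

@dataclass
class AriIncentivePool:
    reward_share: float = 0.8  # fraction of penalties paid to reporters (assumed)
    balance: float = 0.0       # penalty proceeds not yet distributed
    pending: list = field(default_factory=list)

    def file_report(self, report: AbuseReport) -> None:
        """Users crowdsource abuse detection by filing reports."""
        self.pending.append(report)

    def resolve(self, content_id: str, confirmed: bool, penalty: float) -> dict:
        """Moderation outcome: if abuse is confirmed, fine the abuser
        (Repercussion) and split the reward pool among confirmed reporters."""
        reports = [r for r in self.pending if r.content_id == content_id]
        self.pending = [r for r in self.pending if r.content_id != content_id]
        payouts: dict = {}
        if confirmed and reports:
            self.balance += penalty
            pool = self.balance * self.reward_share
            per_reporter = pool / len(reports)
            for r in reports:
                r.confirmed = True
                payouts[r.reporter_id] = payouts.get(r.reporter_id, 0.0) + per_reporter
            self.balance -= pool
        return payouts  # payout notices feed personalized intervention (Intervention)

if __name__ == "__main__":
    ari = AriIncentivePool()
    ari.file_report(AbuseReport("alice", "troll42", "post-17"))
    ari.file_report(AbuseReport("bob", "troll42", "post-17"))
    print(ari.resolve("post-17", confirmed=True, penalty=10.0))
    # {'alice': 4.0, 'bob': 4.0} -- 80% of the 10.0 penalty split between reporters
```

This sketch deliberately keeps the moderation decision external to the pool: who confirms a report, and how privacy and abuse attribution are balanced in that process, are exactly the design requirements the paper derives from the victim studies.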