🤖 AI Summary
Current centralized content moderation systems overlook the subjectivity of harm and lean heavily on blunt removal, discarding content that still holds value. This paper proposes a new paradigm of personalized content transformation: DIY-MOD, a browser extension that lets users define what counts as "harmful" in the context of their own lived experiences and applies real-time visual transformations (e.g., blurring, artistic stylization) in the frontend instead of removing content, preserving informational value while improving perceived safety. Informed by formative user interviews and built on lightweight real-time processing, the system realizes a customizable two-module architecture, pairing an adaptive detection component with a configurable transformation component, backed by a multi-strategy transformation engine and an intuitive rule-configuration interface. A user study demonstrates that this approach significantly improves users' sense of control, subjective safety, and community engagement, establishing the first individual-centered, fine-grained, and reversible framework for content governance.
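The two-module flow described above can be sketched as a tiny pipeline: a detection step matches user-defined rules against content, and a transformation step applies each rule's chosen treatment in place of removal. This is a minimal illustrative sketch, not DIY-MOD's actual API; all names (`UserRule`, `detect`, `transform`) and the text-tagging stand-in for visual effects are assumptions.

```typescript
// Illustrative sketch of a two-module moderation pipeline (hypothetical
// names, not DIY-MOD's real interfaces).

type Treatment = "blur" | "stylize" | "overlay";

interface UserRule {
  trigger: string;      // user-defined sensitive element, e.g. "spider"
  treatment: Treatment; // how matched content should be transformed
}

interface Match {
  rule: UserRule;
  index: number; // where the trigger was found in the content
}

// Module 1: adaptive detection — decide which user rules fire on content.
function detect(content: string, rules: UserRule[]): Match[] {
  const matches: Match[] = [];
  for (const rule of rules) {
    const index = content.toLowerCase().indexOf(rule.trigger.toLowerCase());
    if (index !== -1) matches.push({ rule, index });
  }
  return matches;
}

// Module 2: configurable transformation — rewrite matched spans instead of
// suppressing the whole item. A real frontend would apply a CSS filter or
// image edit; here we just tag each span with its chosen treatment.
function transform(content: string, matches: Match[]): string {
  let result = content;
  for (const { rule } of matches) {
    const re = new RegExp(rule.trigger, "gi");
    result = result.replace(re, (m) => `[${rule.treatment}:${m}]`);
  }
  return result;
}

// Example: one rule blurring mentions of "spider".
const rules: UserRule[] = [{ trigger: "spider", treatment: "blur" }];
const post = "A spider crawled by.";
const shown = transform(post, detect(post, rules));
// shown === "A [blur:spider] crawled by."
```

The key design point mirrored here is that the transformation is reversible and per-rule configurable: the original content is never discarded, only re-rendered according to the user's own definition of harm.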
📝 Abstract
The centralized content moderation paradigm both falls short and over-reaches: 1) it fails to account for the subjective nature of harm, and 2) it responds to content deemed harmful with blunt suppression, even when such content can be salvaged. We first investigate this through formative interviews, documenting how seemingly benign content becomes harmful due to individual life experiences. Based on these insights, we developed DIY-MOD, a browser extension that operationalizes a new paradigm: personalized content transformation. Operating on a user's own definition of harm, DIY-MOD transforms sensitive elements within content in real time instead of suppressing the content itself. The system selects the most appropriate transformation for a piece of content from a diverse palette, from obfuscation to artistic stylization, to match the user's specific needs while preserving the content's informational value. Our two-session user study demonstrates that this approach increases users' sense of agency and safety, enabling them to engage with content and communities they previously needed to avoid.