🤖 AI Summary
Social media users with disabilities frequently encounter ableist hate and microaggressions, yet platform moderation often fails to remove such content, leaving users exposed to harm with little control over what they see. Method: Through interviews and focus groups with 23 disabled social media users, the authors used design probes to explore personalized moderation of ableist speech, covering both filter configuration (e.g., intensity and type of ableism) and presentation of filtered content (e.g., AI rephrasing, content warnings). Results: Participants preferred configuring filters by type of ableist speech rather than intensity, and favored content warnings as the least intrusive intervention; AI rephrasing was seen as feasible only with careful implementation. Participants also expressed distrust of AI-based moderation, skepticism about its accuracy, and varied tolerances for viewing ableist hate. The findings yield design recommendations that prioritize user agency, harm mitigation, and safety, challenging one-size-fits-all moderation paradigms.
📝 Abstract
Disabled people on social media often experience ableist hate and microaggressions. Prior work has shown that platform moderation often fails to remove ableist hate, leaving disabled users exposed to harmful content. This paper examines how personalized moderation can safeguard users from viewing ableist comments. During interviews and focus groups with 23 disabled social media users, we presented design probes to elicit their perceptions of configuring filters for ableist speech (e.g., intensity of ableism and types of ableism) and customizing the presentation of ableist speech to mitigate its harm (e.g., AI rephrasing of comments and content warnings). We found that participants preferred configuring their filters by type of ableist speech and favored content warnings. We surface participants' distrust of AI-based moderation, skepticism about AI's accuracy, and varied tolerances for viewing ableist hate. Finally, we share design recommendations to support users' agency, mitigate harm from hate, and promote safety.