Can Interpretability Layouts Influence Human Perception of Offensive Sentences?

📅 2024-03-01
🏛️ EXTRAAMAS
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether explainability layouts—specifically three visualization formats—impact human identification and judgment of gender- and race-based hate speech. Method: A mixed within- and between-subjects user study was conducted, analyzed via generalized additive models (GAMs) and qualitative coding. Contribution/Results: While layout type did not significantly alter users’ hate-speech ratings (p > 0.05), it significantly increased the frequency of corrective feedback—such as flagging misclassifications or questioning model rationale—and enabled deeper diagnostic reasoning about model trustworthiness and bias origins beyond accuracy alone. This is the first empirical demonstration that explainability layouts, though not improving inter-rater consistency in detection, actively foster participatory verification and critical model reflection. The findings establish a novel design paradigm for explainable AI in high-stakes content moderation contexts, emphasizing user agency and interpretive engagement over mere classification fidelity.

📝 Abstract
This paper conducts a user study to assess whether three machine learning (ML) interpretability layouts can influence participants' views when evaluating sentences containing hate speech, focusing on the "Misogyny" and "Racism" classes. Given the existence of divergent conclusions in the literature, we provide empirical evidence on using ML interpretability in online communities through statistical and qualitative analyses of questionnaire responses. The Generalized Additive Model estimates participants' ratings, incorporating within-subject and between-subject designs. While our statistical analysis indicates that none of the interpretability layouts significantly influences participants' views, our qualitative analysis demonstrates the advantages of ML interpretability: 1) triggering participants to provide corrective feedback in case of discrepancies between their views and the model, and 2) providing insights to evaluate a model's behavior beyond traditional performance metrics.
Problem

Research questions and friction points this paper is trying to address.

Assess whether interpretability layouts affect hate-speech perception
Evaluate ML interpretability's role in online community moderation
Compare statistical and qualitative impacts of interpretability on feedback
Innovation

Methods, ideas, or system contributions that make the work stand out.

User study with three ML interpretability layouts
Generalized Additive Model for rating estimation
Qualitative analysis of ML interpretability benefits
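The additive-model idea behind the rating analysis can be illustrated with a minimal sketch. This is not the paper's actual specification: the data are synthetic, the between-subject factor (`layout`) and covariate (`x`, e.g. a per-sentence severity score) are hypothetical, and a polynomial basis stands in for the spline smoothers a real GAM package would use. It only shows the core mechanic: categorical dummies for the design factor plus a flexible basis for a smooth term, fitted jointly by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300

# Hypothetical data: each participant sees one of 3 interpretability
# layouts (between-subject) and rates sentences with covariate x.
layout = rng.integers(0, 3, n)
x = rng.uniform(0.0, 1.0, n)
true_layout_effect = np.array([0.0, 0.1, -0.1])   # small layout offsets
y = (3.0 + true_layout_effect[layout]
     + 0.5 * np.sin(np.pi * x)                     # smooth nonlinear effect
     + rng.normal(0.0, 0.1, n))                    # rating noise

# Design matrix: one dummy column per layout (no separate intercept)
# plus a polynomial basis standing in for the GAM's smooth term.
D = np.column_stack(
    [(layout == k).astype(float) for k in range(3)]
    + [x, x**2, x**3, x**4]
)

# Fit all terms jointly by ordinary least squares.
beta, *_ = np.linalg.lstsq(D, y, rcond=None)
fitted = D @ beta
resid_sd = np.std(y - fitted)
print(resid_sd)
```

With the smooth term absorbed by the basis, the residual spread drops to roughly the noise level, and the first three coefficients estimate the per-layout means; comparing them (e.g. with an F-test, as a GAM package would report) is what grounds a "no significant layout effect" conclusion like the paper's.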