🤖 AI Summary
This study investigates implicit toxicity in bug report discussions within open-source software: interactions that are not overtly abusive yet are severely detrimental to collaborative efficiency and issue resolution. Through qualitative content analysis, thematic coding, and cross-case comparison of 203 GitHub bug threads (including 81 toxic instances), we identify three root causes: misaligned perceptions of bug severity, tool-induced frustration, and the absence of professional communication norms. Moving beyond conventional toxicity detection, which focuses solely on overtly aggressive language, we propose a context-sensitive toxicity attribution model and empirically demonstrate that toxicity significantly reduces the likelihood that an issue is linked to a pull request (PR). Our contribution includes an intervention framework centered on “perceptual alignment” and “structured communication,” along with six actionable best practices that have seen preliminary adoption and validation across multiple mainstream open-source projects.
📝 Abstract
Toxicity in bug report discussions poses significant challenges to the collaborative dynamics of open-source software development. Bug reports are crucial for identifying and resolving defects, yet their inherently problem-focused nature and emotionally charged context make them susceptible to toxic interactions. This study explores toxicity in GitHub bug reports through a qualitative analysis of 203 bug threads, including 81 toxic ones. Our findings reveal that toxicity frequently arises from misaligned perceptions of bug severity and priority, unresolved frustrations with tools, and lapses in professional communication. These toxic interactions not only derail productive discussions but also reduce the likelihood of actionable outcomes, such as linking issues to pull requests. Our preliminary findings offer actionable recommendations for improving bug resolution by mitigating toxicity.