"Over-the-Hood" AI Inclusivity Bugs and How 3 AI Product Teams Found and Fixed Them

📅 2025-10-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses "over-the-hood" inclusivity bias in AI products: barriers in user-facing AI features that implicitly exclude users with certain problem-solving approaches. Through a field study with three AI product teams, the authors identify six AI-specific types of inclusivity bugs occurring 83 times across the teams' products, of which 47 instances were fixed. The work also introduces GenderMag-for-AI, a variation of the GenderMag inclusive design method adapted to AI's unique characteristics and found to be especially effective at detecting certain kinds of AI inclusivity bugs. By bridging theory and practice, it offers an actionable methodology for inclusive AI engineering that helps teams recognize and respond to users' diverse cognitive styles and interaction strategies.

📝 Abstract
While much research has shown the presence of AI's "under-the-hood" biases (e.g., algorithmic, training data, etc.), what about "over-the-hood" inclusivity biases: barriers in user-facing AI products that disproportionately exclude users with certain problem-solving approaches? Recent research has begun to report the existence of such biases -- but what do they look like, how prevalent are they, and how can developers find and fix them? To find out, we conducted a field study with 3 AI product teams, to investigate what kinds of AI inclusivity bugs exist uniquely in user-facing AI products, and whether/how AI product teams might harness an existing (non-AI-oriented) inclusive design method to find and fix them. The teams' work resulted in identifying 6 types of AI inclusivity bugs arising 83 times, fixes covering 47 of these bug instances, and a new variation of the GenderMag inclusive design method, GenderMag-for-AI, that is especially effective at detecting certain kinds of AI inclusivity bugs.
Problem

Research questions and friction points this paper is trying to address.

Identifies user-facing AI product barriers excluding certain problem-solving approaches
Investigates prevalence and characteristics of over-the-hood AI inclusivity biases
Develops GenderMag-for-AI method to detect and fix AI inclusivity bugs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Applied the existing GenderMag inclusive design method within three AI product teams
Developed GenderMag-for-AI, a new variation especially effective at detecting certain kinds of AI inclusivity bugs
Identified 83 AI inclusivity bug instances and fixed 47 of them