🤖 AI Summary
This study addresses the occupational risks faced by crowd workers, such as annotators, content moderators, and red-teamers, who routinely encounter potentially harmful content in responsible AI (RAI) content work. Method: Using a participatory co-design approach, the authors conducted sessions with 29 task designers, workers, and platform representatives to integrate diverse stakeholder perspectives and requirements. Contribution/Results: They identify core design tensions in risk disclosure, including efficiency versus worker protection and transparency versus usability, and map the sociotechnical tradeoffs that shape disclosure practices. Building on this analysis, they contribute design recommendations and feature concepts for risk disclosure mechanisms embedded in crowdsourcing platforms, grounding responsible AI labor practice in multi-stakeholder input.
📝 Abstract
Responsible AI (RAI) content work, such as annotation, moderation, or red teaming for AI safety, often exposes crowd workers to potentially harmful content. While prior work has underscored the importance of communicating well-being risks to employed content moderators, designing effective disclosure mechanisms for crowd workers, while balancing worker protection with the needs of task designers and platforms, remains largely unexamined. To address this gap, we conducted co-design sessions with 29 task designers, workers, and platform representatives. We investigated task designer preferences for support in disclosing task risks, worker preferences for receiving risk disclosure warnings, and how platform stakeholders envision their role in shaping risk disclosure practices. We identify design tensions and map the sociotechnical tradeoffs that shape disclosure practices. We contribute design recommendations and feature concepts for risk disclosure mechanisms in the context of RAI content work.