Ask before you Build: Rethinking AI-for-Good in Human Trafficking Interventions

📅 2025-06-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
AI-for-Good initiatives addressing complex sociotechnical challenges—such as human trafficking—frequently exhibit technological determinism, oversimplifying exploitative structures, exacerbating power asymmetries, and inadvertently harming affected communities. This paper introduces *Radical Questioning* (RQ), a five-step pre-design ethical assessment framework centered on marginalized populations, designed to rigorously interrogate the necessity, legitimacy, and potential harms of AI intervention *before* technical development begins. Distinct from conventional ethics guidelines, RQ is grounded in critical technical practice and employs context-sensitive qualitative analysis—drawing on real-world anti-trafficking interventions—to surface sociocultural dynamics obscured by mainstream technical solutions. Empirical application demonstrates that RQ successfully disrupts surveillance-oriented design pathways, shifting AI practice from risk mitigation toward survivor-centered empowerment and sustainable support. Its core contribution lies in establishing a structural, pre-algorithmic ethical intervention mechanism.

📝 Abstract
AI-for-good initiatives often rely on the assumption that technical interventions can resolve complex social problems. In the context of human trafficking (HT), such techno-solutionism risks oversimplifying exploitation, reinforcing power imbalances, and causing harm to the very communities AI claims to support. In this paper, we introduce the Radical Questioning (RQ) framework as a five-step, pre-project ethical assessment tool to critically evaluate whether AI should be built at all, especially in domains involving marginalized populations and entrenched systemic injustice. RQ does not replace principles-based ethics but precedes it, offering an upstream, deliberative space to confront assumptions, map power, and consider harms before design. Using a case study in AI for HT, we demonstrate how RQ reveals overlooked sociocultural complexities and guides us away from surveillance-based interventions toward survivor empowerment tools. While developed in the context of HT, RQ's five-step structure can generalize to other domains, though the specific questions must be contextual. This paper situates RQ within a broader AI ethics philosophy that challenges instrumentalist norms and centers relational, reflexive responsibility.
Problem

Research questions and friction points this paper is trying to address.

Evaluating the ethical risks of AI in human trafficking interventions
Challenging techno-solutionism in contexts of complex social injustice
Proposing a pre-development framework to prevent harm to marginalized groups
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces the Radical Questioning framework for pre-design ethical AI assessment
Focuses on pre-project evaluation of whether AI should be built at all, to avoid harmful interventions
Shifts design away from surveillance toward survivor empowerment tools