🤖 AI Summary
This study examines how entrenched U.S. municipal procurement systems shape local AI governance: legacy purchasing rules and norms determine which AI tools cities buy and who holds decision-making authority over them, creating conditions that can amplify algorithmic harms. Drawing on semi-structured interviews with 19 procurement officials across seven U.S. cities, the study characterizes the steps cities are already taking to adapt their purchasing practices and identifies three key challenges that motivate, but are not fully addressed by, existing AI procurement reform initiatives. The findings point to concrete opportunities for the FAccT (Fairness, Accountability, and Transparency) community to help cities foresee and prevent AI harms throughout the public procurement process.
📝 Abstract
Most AI tools adopted by governments are not developed internally but are instead acquired from third-party vendors through a process called public procurement. In this paper, we conduct the first empirical study of how United States cities' procurement practices shape critical decisions surrounding public-sector AI. We conducted semi-structured interviews with 19 city employees who oversee AI procurement across seven U.S. cities. We found that cities' legacy procurement practices, shaped by decades-old laws and norms, establish infrastructure that determines which AI is purchased and which actors hold decision-making power over procured AI. We characterize the emerging actions cities have taken to adapt their purchasing practices to address algorithmic harms. From employees' reflections on real-world AI procurements, we identify three key challenges that motivate, but are not fully addressed by, existing AI procurement reform initiatives. Based on these findings, we discuss implications and opportunities for the FAccT community to support cities in foreseeing and preventing AI harms throughout the public procurement process.