🤖 AI Summary
Contemporary government AI systems are prone to biased decision-making and insufficient transparency: technocentric transparency practices fail to match what publics can comprehend and act on, and overlook the socio-material contexts of algorithmic decisions. This paper introduces the concept of “meaningful transparency,” which goes beyond conventional algorithmic explainability by grounding transparency in public understanding, capacity for intervention, and specific governance contexts. Drawing on socio-technical systems theory and human-centered AI principles, the paper employs qualitative methods (including participatory design, situated analysis, and multi-stakeholder deliberation) to develop a transparency framework that enables citizens and civil servants alike to collaboratively reflect on, critically interrogate, and adaptively adjust AI decisions. The framework aims to enhance public comprehension of AI governance, strengthen civic agency, and offer an actionable pathway toward equitable, accountable, and democratically legitimate public-sector AI systems.
📝 Abstract
Artificial intelligence has become part of the provision of governmental services, from making decisions about benefits to issuing fines for parking violations. However, AI systems rarely live up to the promise of neutral optimisation: they produce biased or incorrect outputs and reduce the agency of both citizens and civic workers to shape how decisions are made. Transparency is a principle that can help subjects both understand decisions made about them and shape the processes behind those decisions. However, transparency as practised around AI systems tends to focus on producing technical objects that represent the algorithmic aspects of decision making. These objects are often difficult for publics to understand, do not connect to potential for action, and do not give insight into the wider socio-material context of decision making. In this paper, we build on existing approaches that take a human-centric view of AI transparency, combined with a socio-technical systems view, to develop the concept of meaningful transparency for civic AI systems: transparencies that allow publics to engage with the AI systems that affect their lives, connecting understanding with potential for action.