🤖 AI Summary
Public-sector AI deployment risks ambiguous accountability and opacity, producing "ethics sinks" that dissipate responsibility and undermine bureaucratic legitimacy and governance continuity. To address this, the paper proposes a framework for institutional–technical co-design grounded in moral agency theory and Weberian bureaucracy, articulating a three-point Moral Agency Framework: (1) maintain clear and just human lines of accountability; (2) ensure that humans whose work is augmented by AI can verify the systems are functioning correctly; and (3) restrict AI deployment to contexts where it does not inhibit the bureaucracy's twin aims of faithful implementation of legislation and stable, long-term governance. On this account, AI can serve as an instrument of organizational transparency and legibility, continuing the Weberian rationalization begun by earlier waves of digitalization. The conceptual analysis argues that AI integrated on these terms not only avoids the diffusion of responsibility but can also enhance administrative transparency and institutional legitimacy, offering an ethically rigorous and organizationally adaptive approach to public AI governance.
📝 Abstract
Public-sector bureaucracies seek to reap the benefits of artificial intelligence (AI), but face important concerns about accountability and transparency when using AI systems. These concerns center on threats to the twin aims of bureaucracy: legitimate and faithful implementation of legislation, and the provision of stable, long-term governance. Both aims are threatened when AI systems are misattributed as either mere tools or moral subjects, a framing error that creates ethics sinks: constructs that facilitate the dissipation of responsibility by obscuring clear lines of human moral agency. Here, we reject the notion that such outcomes are inevitable. Rather, where they appear, they are the product of structural design decisions across both the technology and the institution deploying it. We support this claim via a systematic application of conceptions of moral agency in AI ethics to Weberian bureaucracy. We establish that it is both desirable and feasible to render AI systems as tools for the generation of organizational transparency and legibility, continuing the processes of Weberian rationalization initiated by previous waves of digitalization. We present a three-point Moral Agency Framework for the legitimate integration of AI in bureaucratic structures: (a) maintain clear and just human lines of accountability, (b) ensure that humans whose work is augmented by AI systems can verify the systems are functioning correctly, and (c) introduce AI only where it does not inhibit the capacity of bureaucracies to pursue either of their twin aims of legitimacy and stewardship. We suggest that AI introduced within this framework can not only improve efficiency and productivity while avoiding ethics sinks, but also improve the transparency and even the legitimacy of a bureaucracy.