🤖 AI Summary
The inherent non-attributability of generative AI—arising from its opaque, stochastic generation process—undermines traditional transparency-based accountability mechanisms, rendering outputs difficult to trace to specific actors or decisions.
Method: Drawing on the metaphor of "artificially created nature," this paper reconceptualizes opaque AI systems as unpredictable natural phenomena, thereby advocating a shift from retrospective liability to prospective, precautionary governance. Through conceptual analysis and socio-technical systems theory, it argues that transparency is neither necessary nor sufficient for accountability. It then proposes a platformized governance architecture that embeds structured civic participation, emphasizing multi-stakeholder collaboration in anticipatory risk identification and adaptive response.
Contribution: This study pioneers the systematic application of the precautionary principle to generative AI accountability research. It advances a novel governance paradigm that transcends the “transparency fallacy,” offering a foundational framework for institutionally robust, forward-looking AI regulation.
📝 Abstract
The rapid development of generative artificial intelligence (AI) technologies raises concerns about the accountability of sociotechnical systems. Current generative AI systems rely on mechanisms so complex that even experts cannot fully trace the reasons behind their outputs. This paper first examines existing research on AI transparency and accountability and argues that transparency is not a sufficient condition for accountability, although it can contribute to its improvement. We then argue that if generative AI cannot be made transparent, it becomes "artificially created nature" in a metaphorical sense, and suggest applying the precautionary principle to the assessment of AI risks. Finally, we propose that a platform for citizen participation is needed to address the risks of generative AI.