🤖 AI Summary
This paper addresses the tension between AI and democratic values: AI may undermine autonomy, fairness, and trust, yet it can also enhance transparency, civic participation, and evidence-based policymaking. Methodologically, it introduces a novel "Democracy–Trustworthy AI Two-Dimensional Taxonomy," systematically mapping both AI's democratic risks (AIRD) and democratic gains (AIPD) onto the EU's seven requirements for trustworthy AI. Grounded in democratic political theory and EU AI regulatory frameworks, the study develops targeted mitigation strategies through normative analysis, interdisciplinary institutional design, and policy contextualization, without relying on empirical data or algorithmic modeling. The resulting framework provides an actionable analytical lens for scholars assessing AI's democratic impact, policymakers conducting ethical impact assessments, and engineers integrating democratic values into AI system design, thereby advancing inclusive, accountable, and resilient democratic governance in the algorithmic age.
📝 Abstract
Artificial Intelligence (AI) presents both significant risks and valuable opportunities for democratic governance. This paper introduces a dual taxonomy to evaluate AI's complex relationship with democracy: the AI Risks to Democracy (AIRD) taxonomy, which identifies how AI can undermine core democratic principles such as autonomy, fairness, and trust; and the AI's Positive Contributions to Democracy (AIPD) taxonomy, which highlights AI's potential to enhance transparency, participation, efficiency, and evidence-based policymaking. Grounded in the European Union's approach to ethical AI governance, and in particular the seven Trustworthy AI requirements proposed by the European Commission's High-Level Expert Group on AI, each identified risk is paired with mitigation strategies based on EU regulatory and normative frameworks. Our analysis underscores the transversal importance of transparency and societal well-being across all risk categories and offers a structured lens for aligning AI systems with democratic values. By integrating democratic theory with practical governance tools, this paper provides a normative and actionable framework to guide research, regulation, and institutional design in support of trustworthy, democratic AI. It gives scholars a conceptual foundation for evaluating the democratic implications of AI, equips policymakers with structured criteria for ethical oversight, and helps technologists align system design with democratic principles. In doing so, it bridges the gap between ethical aspirations and operational realities, laying the groundwork for more inclusive, accountable, and resilient democratic systems in the algorithmic age.