Aligning Trustworthy AI with Democracy: A Dual Taxonomy of Opportunities and Risks

📅 2025-05-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the tension between AI and democratic values: AI may undermine autonomy, fairness, and trust, yet it can also enhance transparency, civic participation, and evidence-based policymaking. Methodologically, it introduces a novel "Democracy–Trustworthy AI Two-Dimensional Taxonomy," systematically mapping both AI's democratic risks (AIRD) and democratic gains (AIPD) onto the EU's seven requirements for trustworthy AI. Grounded in democratic political theory and EU AI regulatory frameworks, the study develops targeted mitigation strategies through normative analysis, interdisciplinary institutional design, and policy contextualization, without relying on empirical data or algorithmic modeling. The resulting framework provides an actionable analytical lens for scholars assessing AI's democratic impact, policymakers conducting ethical impact assessments, and engineers integrating democratic values into AI system design, thereby advancing inclusive, accountable, and resilient democratic governance in the algorithmic age.

📝 Abstract
Artificial Intelligence (AI) poses both significant risks and valuable opportunities for democratic governance. This paper introduces a dual taxonomy to evaluate AI's complex relationship with democracy: the AI Risks to Democracy (AIRD) taxonomy, which identifies how AI can undermine core democratic principles such as autonomy, fairness, and trust; and the AI's Positive Contributions to Democracy (AIPD) taxonomy, which highlights AI's potential to enhance transparency, participation, efficiency, and evidence-based policymaking. Grounded in the European Union's approach to ethical AI governance, and particularly the seven Trustworthy AI requirements proposed by the European Commission's High-Level Expert Group on AI, each identified risk is aligned with mitigation strategies based on EU regulatory and normative frameworks. Our analysis underscores the transversal importance of transparency and societal well-being across all risk categories and offers a structured lens for aligning AI systems with democratic values. By integrating democratic theory with practical governance tools, this paper offers a normative and actionable framework to guide research, regulation, and institutional design to support trustworthy, democratic AI. It provides scholars with a conceptual foundation to evaluate the democratic implications of AI, equips policymakers with structured criteria for ethical oversight, and helps technologists align system design with democratic principles. In doing so, it bridges the gap between ethical aspirations and operational realities, laying the groundwork for more inclusive, accountable, and resilient democratic systems in the algorithmic age.
Problem

Research questions and friction points this paper is trying to address.

Evaluating AI's risks and opportunities for democratic governance
Aligning AI systems with democratic values through EU frameworks
Bridging ethical AI aspirations with practical governance tools
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual taxonomy for AI risks and contributions
EU Trustworthy AI requirements alignment
Normative framework for democratic AI governance
👥 Authors
Oier Mentxaka
Dept. of Computer Science and Artificial Intelligence, DaSCI, University of Granada, Spain
Natalia Díaz-Rodríguez
Dept. of Computer Science and Artificial Intelligence, DaSCI, University of Granada, Spain
Mark Coeckelbergh
Professor of Philosophy of Media and Technology, University of Vienna
philosophy of technology, ethics
Marcos López de Prado
School of Engineering, Cornell University, Ithaca, NY, United States; Dept. of Mathematics, Khalifa University of Science and Technology, Abu Dhabi, UAE; ADIA Lab, Al Maryah Island, Abu Dhabi, UAE
Emilia Gómez
Joint Research Centre, European Commission, Seville, Spain
David Fernández Llorca
European Commission - Joint Research Centre and University of Alcalá
Artificial Intelligence, Trustworthy AI, Autonomous Vehicles, Intelligent Transportation Systems
E. Herrera-Viedma
Dept. of Computer Science and Artificial Intelligence, DaSCI, University of Granada, Spain
Francisco Herrera
Professor Computer Science and AI, DaSCI Research Institute, Granada University, Spain
Artificial Intelligence, Computational Intelligence, Data Science, Trustworthy AI