Position Paper: If Innovation in AI Systematically Violates Fundamental Rights, Is It Innovation at All?

📅 2025-10-26
🤖 AI Summary
AI’s deep integration into critical systems poses significant societal, economic, and democratic risks when failures occur. Challenging the prevailing “regulation stifles innovation” paradigm, this paper argues that the absence of robust, rights-based governance has already exacerbated disinformation, algorithmic bias, and accountability deficits. Drawing on the EU AI Act and analogous regulatory models (e.g., aviation, pharmaceuticals), we propose a “risk-tiered, responsibility-driven” governance framework. It centers on a mandatory Fundamental Rights Impact Assessment (FRIA) as a pre-deployment requirement, integrated with regulatory sandboxes, real-world testing, AI literacy initiatives, and transparent accountability mechanisms. Our core contribution redefines technological innovation: responsible regulation is not an impediment but a foundational enabler, enhancing legal certainty, consumer trust, and ethical competitiveness. Ultimately, the framework advances synergistic alignment between technical progress and democratic values.

📝 Abstract
Artificial intelligence (AI) now permeates critical infrastructures and decision-making systems where failures produce social, economic, and democratic harm. This position paper challenges the entrenched belief that regulation and innovation are opposites. As analogies from aviation, pharmaceuticals, and welfare systems and recent cases of synthetic misinformation, bias, and unaccountable decision-making show, the absence of well-designed regulation has already caused immeasurable damage. Regulation, when thoughtful and adaptive, is not a brake on innovation; it is its foundation. The paper examines the EU AI Act as a model of risk-based, responsibility-driven regulation that addresses the Collingridge Dilemma: acting early enough to prevent harm, yet flexibly enough to sustain innovation. Its adaptive mechanisms, including regulatory sandboxes, support for small and medium enterprises (SMEs), real-world testing, and the fundamental rights impact assessment (FRIA), demonstrate how regulation can responsibly accelerate, rather than delay, technological progress. The paper summarises how governance tools transform perceived burdens into tangible advantages: legal certainty, consumer trust, and ethical competitiveness. Ultimately, the paper reframes progress: innovation and regulation advance together. By embedding transparency, impact assessments, accountability, and AI literacy into design and deployment, the EU framework defines what responsible innovation truly means: technological ambition disciplined by democratic values and fundamental rights.
Problem

Research questions and friction points this paper is trying to address.

Challenges the belief that regulation opposes innovation in AI systems
Examines how the EU AI Act prevents harm while sustaining technological progress
Demonstrates how governance tools transform regulatory burdens into competitive advantages
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive regulation enables responsible AI innovation
EU AI Act uses risk-based governance tools
Regulatory sandboxes and impact assessments support progress
Josu Eguiluz Castañeira
Adevinta ServicesCo S.L.U.; Pompeu Fabra University (UPF)
Axel Brando
Research Group Leader TAIES / HPES Lab / BSC-CNS. Former Industrial Ph.D. at BBVA and UB
Trustworthy AI · Ethical AI · Uncertainty modelling · Computer Scientist · Mathematician
Migle Laukyte
Pompeu Fabra University (UPF)
Marc Serra-Vidal
Kleinanzeigen.de GmbH (Adevinta)