Will Power Return to the Clouds? From Divine Authority to GenAI Authority

📅 2025-11-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper identifies generative AI (GenAI) as an emerging "arbiter of truth" whose centralized authority, exceeding historical religious institutions in scale and societal penetration, poses critical ethical risks: algorithmic opacity, linguistic inequity, bias feedback loops, and synthetic misinformation. Methodologically, it integrates Foucault's power/knowledge framework, Weber's typology of authority, and Floridi's Dataism in a cross-historical comparative analysis grounded in primary sources (Inquisition records, platform transparency reports) and recent empirical studies of AI trust. The study makes two key contributions. First, it introduces a dual-authority paradigm, "rational-technical" and "agentic-technical," to characterize GenAI's legitimacy structures. Second, it proposes a four-pillar governance framework comprising (1) an international model registry, (2) regional AI observatories, (3) public critical-AI literacy education, and (4) community-based data trusts. Together, these measures aim to mitigate epistemic monopolies, narrow the trust-reliance gap, and forestall the entrenchment of a digital orthodoxy.

📝 Abstract
Generative AI systems now mediate newsfeeds, search rankings, and creative content for hundreds of millions of users, positioning a handful of private firms as de facto arbiters of truth. Drawing on a comparative-historical lens, this article juxtaposes the Galileo Affair, a touchstone of clerical knowledge control, with contemporary Big-Tech content moderation. We integrate Foucault's power/knowledge thesis, Weber's authority types (extended to a rational-technical and an emerging agentic-technical modality), and Floridi's Dataism to analyze five recurrent dimensions: disciplinary power, authority modality, data pluralism, trust versus reliance, and resistance pathways. Primary sources (Inquisition records; platform transparency reports) and recent empirical studies on AI trust provide the evidentiary base. Findings show strong structural convergences: highly centralized gatekeeping, legitimacy claims couched in transcendent principles, and systematic exclusion of marginal voices. Divergences lie in temporal velocity, global scale, and the widening gap between public reliance on and trust in AI systems. Ethical challenges cluster around algorithmic opacity, linguistic inequity, bias feedback loops, and synthetic misinformation. We propose a four-pillar governance blueprint: (1) a mandatory international model registry with versioned policy logs, (2) representation quotas and regional observatories to de-center English-language hegemony, (3) mass critical-AI literacy initiatives, and (4) public-private support for community-led data trusts. Taken together, these measures aim to narrow the trust-reliance gap and prevent GenAI from hardcoding a twenty-first-century digital orthodoxy.
Problem

Research questions and friction points this paper is trying to address.

Analyzes GenAI's role as a new authority shaping truth and information control.
Compares historical clerical power with modern tech firms' content moderation practices.
Addresses ethical issues like algorithmic opacity, bias, and synthetic misinformation in AI.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Comparative-historical analysis of AI governance using theories of power and authority.
Four-pillar blueprint for international AI regulation and transparency.
Proposes data trusts and critical-AI literacy to counter algorithmic centralization.