Architecting Trust in Artificial Epistemic Agents

📅 2026-03-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work proposes a trust-centered framework for AI epistemic agents, addressing growing concerns about the unreliability of large language models in knowledge production and their misalignment with human epistemic norms, failings that risk cognitive deskilling and epistemic drift. Integrating insights from epistemology, AI alignment, and sociotechnical infrastructure, the framework introduces concepts such as "knowledge sanctuaries" and incorporates mechanisms for technical provenance, falsifiability, and alignment with epistemic norms. It thereby offers a normative roadmap for cultivating a robust and inclusive human-AI knowledge ecosystem, enhancing human resilience, mitigating epistemic risks, and strengthening collective decision-making capacities.

📝 Abstract
Large language models increasingly function as epistemic agents -- entities that can 1) autonomously pursue epistemic goals and 2) actively shape our shared knowledge environment. They curate the information we receive, often supplanting traditional search-based methods, and are frequently used to generate both personal and deeply specialized advice. How they perform these functions, including whether they are reliable and properly calibrated to both individual and collective epistemic norms, is therefore highly consequential for the choices we make. We argue that the potential impact of epistemic AI agents on practices of knowledge creation, curation, and synthesis, particularly in the context of complex multi-agent interactions, creates new informational interdependencies that necessitate a fundamental shift in the evaluation and governance of AI. While a well-calibrated ecosystem could augment human judgment and collective decision-making, poorly aligned agents risk causing cognitive deskilling and epistemic drift, making the calibration of these models to human norms a high-stakes necessity. To ensure a beneficial human-AI knowledge ecosystem, we propose a framework centered on building and cultivating the trustworthiness of epistemic AI agents; aligning these agents with human epistemic goals; and reinforcing the surrounding socio-epistemic infrastructure. In this context, trustworthy AI agents must demonstrate epistemic competence, robust falsifiability, and epistemically virtuous behaviors, supported by technical provenance systems and "knowledge sanctuaries" designed to protect human resilience. This normative roadmap provides a path toward ensuring that future AI systems act as reliable partners in a robust and inclusive knowledge ecosystem.
Problem

Research questions and friction points this paper addresses.

epistemic agents
trustworthiness
knowledge ecosystem
epistemic norms
AI governance
Innovation

Methods, ideas, or system contributions that make the work stand out.

epistemic agents
trustworthiness
knowledge sanctuaries
technical provenance
epistemic alignment