Position: Epistemic Artificial Intelligence is Essential for Machine Learning Models to Know When They Do Not Know

📅 2025-05-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current AI models exhibit weak uncertainty modeling under out-of-distribution (OOD), unknown, and adversarial conditions, which compromises decision robustness in high-stakes autonomous systems such as self-driving vehicles. To address this, the authors propose the "Epistemic Artificial Intelligence" paradigm, presented as the first systematic framework advocating "learning from ignorance," in which uncertainty modeling is embedded as a core model capability rather than added as a post-hoc refinement. Methodologically, the approach integrates Bayesian inference, confidence calibration, metacognitive monitoring, and interpretable uncertainty quantification to realize intelligent agents with self-monitoring and active rejection capabilities. The authors report that the framework significantly improves safety response rates on OOD inputs and enhances generalization reliability, delivering a verifiable, interpretable uncertainty-aware decision-making mechanism for high-risk autonomous systems.
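The paper itself does not include code; as a rough illustration of the "active rejection" capability the summary describes, a model can average several stochastic forward passes (e.g. Monte Carlo dropout) and abstain when the predictive entropy is too high. The function names, the threshold, and the toy probability arrays below are all hypothetical, not taken from the paper:

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy (in nats) of a predictive distribution."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def predict_or_reject(mc_probs, threshold=0.5):
    """Average T stochastic forward passes and abstain on high uncertainty.

    mc_probs: array of shape (T, num_classes), each row a softmax output.
    Returns (class_index, entropy), or (None, entropy) when the model
    "knows it does not know" and defers to a fallback (e.g. a human).
    """
    mean_p = mc_probs.mean(axis=0)
    h = predictive_entropy(mean_p)
    if h > threshold:
        return None, h  # reject: uncertainty too high to act on
    return int(mean_p.argmax()), h

# Confident in-distribution case: the passes agree.
agree = np.array([[0.97, 0.02, 0.01]] * 10)
# OOD-like case: the passes disagree, so the average is nearly flat.
disagree = np.array([[0.9, 0.05, 0.05],
                     [0.05, 0.9, 0.05],
                     [0.05, 0.05, 0.9]] * 4)

print(predict_or_reject(agree))     # accepts class 0 (low entropy)
print(predict_or_reject(disagree))  # rejects (entropy near ln 3)
```

The rejection threshold is a deployment choice: lowering it trades coverage for safety, which is the kind of verifiable operating point the summary attributes to the framework.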

📝 Abstract
Despite the impressive achievements of AI, including advancements in generative models and large language models, there remains a significant gap in the ability of AI to handle uncertainty and generalize beyond the training data. We argue that AI models, especially in autonomous systems, fail to make robust predictions when faced with unfamiliar or adversarial data, as evidenced by incidents with autonomous vehicles. Traditional machine learning approaches struggle to address these issues due to an overemphasis on data fitting and domain adaptation. This position paper posits a paradigm shift towards epistemic artificial intelligence, emphasizing the need for models to learn not only from what they know but also from their ignorance. This approach, which focuses on recognizing and managing uncertainty, offers a potential solution to improve the resilience and robustness of AI systems, ensuring that they can better handle unpredictable real-world environments.
Problem

Research questions and friction points this paper is trying to address.

AI lacks the ability to handle uncertainty and to generalize beyond its training data
Autonomous systems fail to make robust predictions on unfamiliar or adversarial data
Traditional ML overemphasizes data fitting and domain adaptation, neglecting uncertainty management
Innovation

Methods, ideas, or system contributions that make the work stand out.

Epistemic AI focuses on uncertainty recognition
Models learn from both knowledge and ignorance
Enhances resilience in unpredictable environments
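One concrete ingredient behind these ideas, confidence calibration, is commonly implemented as temperature scaling: dividing the logits by a temperature fitted on held-out data so that reported confidences match observed accuracy. A minimal sketch, with hypothetical logits and temperature:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; T > 1 flattens overconfident outputs."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 1.0, 0.5]                 # hypothetical network outputs
raw = softmax(logits)                    # overconfident: top prob ~0.93
calibrated = softmax(logits, temperature=2.5)  # softened: top prob ~0.65

print(max(raw), max(calibrated))
```

Temperature scaling changes only the reported confidence, never the predicted class, so it pairs naturally with the rejection-style mechanisms this paper advocates.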