🤖 AI Summary
Current AI models exhibit weak uncertainty modeling under out-of-distribution (OOD), unknown, and adversarial conditions, compromising decision robustness in high-stakes autonomous systems such as self-driving vehicles. To address this, the paper proposes the "epistemic artificial intelligence" paradigm: a framework advocating "learning from ignorance," in which uncertainty modeling is embedded as a core model capability rather than a post-hoc refinement. Methodologically, the approach integrates Bayesian inference, confidence calibration, metacognitive monitoring, and interpretable uncertainty quantification to realize intelligent agents with self-monitoring and active rejection capabilities. The authors argue that this uncertainty-aware, verifiable, and interpretable decision-making mechanism can improve safety response rates on OOD inputs and enhance generalization reliability in high-risk autonomous systems.
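The "active rejection" capability described above corresponds to selective prediction: a model abstains when its own uncertainty estimate is too high rather than forcing a decision. The following is a minimal illustrative sketch (not the paper's implementation), using predictive entropy over hypothetical softmax outputs as the uncertainty signal and a hypothetical threshold expressed as a fraction of the maximum entropy log(K):

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a discrete probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def predict_or_reject(probs, max_entropy_frac=0.5):
    """Return the argmax class index, or None (abstain) when predictive
    entropy exceeds a fraction of the maximum possible entropy log(K).
    The 0.5 threshold is an illustrative choice, not from the paper."""
    threshold = max_entropy_frac * math.log(len(probs))
    if entropy(probs) > threshold:
        return None  # abstain: defer to a human or fallback controller
    return max(range(len(probs)), key=lambda i: probs[i])

confident = [0.95, 0.03, 0.02]  # in-distribution-like output: low entropy
uncertain = [0.40, 0.35, 0.25]  # OOD-like, near-uniform output: high entropy

print(predict_or_reject(confident))  # -> 0 (accept, predict class 0)
print(predict_or_reject(uncertain))  # -> None (reject)
```

In practice the uncertainty signal would come from the calibrated or Bayesian machinery the summary names (e.g. posterior predictive variance), but the accept/reject decision structure is the same.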
📝 Abstract
Despite the impressive achievements of AI, including advancements in generative models and large language models, there remains a significant gap in the ability of AI to handle uncertainty and generalize beyond the training data. We argue that AI models, especially in autonomous systems, fail to make robust predictions when faced with unfamiliar or adversarial data, as evidenced by incidents with autonomous vehicles. Traditional machine learning approaches struggle to address these issues due to an overemphasis on data fitting and domain adaptation. This position paper posits a paradigm shift towards epistemic artificial intelligence, emphasizing the need for models to learn not only from what they know but also from their ignorance. This approach, which focuses on recognizing and managing uncertainty, offers a potential solution to improve the resilience and robustness of AI systems, ensuring that they can better handle unpredictable real-world environments.