🤖 AI Summary
Contemporary AI systems suffer from deficiencies in reliability, explainability, and auditability. Method: This work proposes a trustworthy AI framework grounded in s(CASP), a goal-directed, constraint-based Answer Set Programming system, and systematically maps s(CASP) to Lenat and Marcus's 16 requirements for trustworthy AI for the first time, extending the list with two further requirements: inconsistency detection and alternative-world assumptions. The framework integrates goal-directed reasoning, nonmonotonic commonsense modeling, and counterfactual semantic extensions to balance explainability with inferential flexibility. Contribution/Results: Validated empirically on dialogue agents and virtually embodied reasoners, the framework enables full-chain reasoning-trace generation, real-time conflict identification, and transparent decision auditing, bridging the gap between the unauditability of pure LLMs and the rigidity of traditional symbolic systems.
📝 Abstract
Current advances in AI and its applications have highlighted the need to ensure its trustworthiness for legal, ethical, and even commercial reasons. Sub-symbolic machine learning systems, such as LLMs, simulate reasoning but hallucinate, and their decisions cannot be explained or audited, both crucial aspects of trustworthiness. On the other hand, rule-based reasoners, such as Cyc, can provide the chain of reasoning steps behind a conclusion, but they are complex and rely on a large number of specialized reasoners. We propose a middle ground using s(CASP), a goal-directed, constraint-based answer set programming reasoner that employs a small number of mechanisms to emulate reliable and explainable human-style commonsense reasoning. In this paper, we explain how s(CASP) supports the 16 desiderata for trustworthy AI introduced by Doug Lenat and Gary Marcus (2023), plus two additional ones: inconsistency detection and the assumption of alternative worlds. To illustrate the feasibility and synergies of s(CASP), we present a range of diverse applications, including a conversational chatbot and a virtually embodied reasoner.
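To make the flavor of goal-directed, nonmonotonic commonsense reasoning concrete, the sketch below is a toy Python illustration (not s(CASP) itself, and not the paper's implementation): a query-driven evaluator for the classic "birds fly unless abnormal" default, where the facts, predicate names, and the `holds` function are all hypothetical choices made for this example.

```python
# Toy goal-directed evaluation of defaults with negation as failure,
# in the spirit of (but much simpler than) s(CASP).
# Logic program being emulated:
#   flies(X)    :- bird(X), not abnormal(X).
#   abnormal(X) :- penguin(X).
FACTS = {("bird", "tweety"), ("bird", "sam"), ("penguin", "sam")}

def holds(pred: str, x: str) -> bool:
    """Query-driven proof of pred(x) over FACTS and the two rules above."""
    if (pred, x) in FACTS:
        return True
    if pred == "abnormal":
        # abnormal(X) :- penguin(X).
        return holds("penguin", x)
    if pred == "flies":
        # Default negation: flies unless abnormality is provable.
        return holds("bird", x) and not holds("abnormal", x)
    return False

print(holds("flies", "tweety"))  # True: bird, no evidence of abnormality
print(holds("flies", "sam"))     # False: sam is a penguin, hence abnormal
```

Adding the fact `("penguin", "tweety")` would retract the earlier conclusion that Tweety flies, which is the nonmonotonic behavior that classical logic lacks; s(CASP) additionally returns the full justification tree for each answer, which this toy sketch does not model.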