Building Trustworthy AI by Addressing its 16+2 Desiderata with Goal-Directed Commonsense Reasoning

📅 2025-06-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Contemporary AI systems suffer from deficiencies in reliability, explainability, and auditability. Method: This work proposes a trustworthy AI framework grounded in s(CASP), a goal-directed, constraint-based Answer Set Programming system, systematically mapping s(CASP) to Lenat and Marcus's 16 requirements for trustworthy AI for the first time, while extending them with two further desiderata: inconsistency detection and alternative-world assumptions. The framework integrates goal-directed reasoning, nonmonotonic commonsense modeling, and counterfactual semantic extensions to balance explainability with inferential flexibility. Contribution/Results: Validated on applications including a conversational chatbot and a virtually embodied reasoner, the framework enables full-chain reasoning-trace generation, real-time conflict identification, and transparent decision auditing, bridging the gap between the unauditability of pure LLMs and the rigidity of traditional symbolic systems.

📝 Abstract
Current advances in AI and its applicability have highlighted the need to ensure its trustworthiness for legal, ethical, and even commercial reasons. Sub-symbolic machine learning algorithms, such as LLMs, simulate reasoning but hallucinate, and their decisions cannot be explained or audited (crucial aspects for trustworthiness). On the other hand, rule-based reasoners, such as Cyc, are able to provide the chain of reasoning steps but are complex and use a large number of reasoners. We propose a middle ground using s(CASP), a goal-directed constraint-based answer set programming reasoner that employs a small number of mechanisms to emulate reliable and explainable human-style commonsense reasoning. In this paper, we explain how s(CASP) supports the 16 desiderata for trustworthy AI introduced by Doug Lenat and Gary Marcus (2023), and two additional ones: inconsistency detection and the assumption of alternative worlds. To illustrate the feasibility and synergies of s(CASP), we present a range of diverse applications, including a conversational chatbot and a virtually embodied reasoner.
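As a concrete illustration of the kind of commonsense reasoning the abstract describes, the classic nonmonotonic "birds fly" default can be written directly in s(CASP). This sketch is not taken from the paper; the predicate and constant names are illustrative. A goal-directed query explores only the rules relevant to the goal, and s(CASP) can render the resulting proof as a human-readable justification tree (e.g., via the `--tree` flag of the SWI-Prolog `scasp` port; check your installation for the exact option).

```
% Default rule: birds fly unless they are known to be abnormal.
flies(X) :- bird(X), not ab(X).

% Penguins are abnormal birds.
ab(X) :- penguin(X).
bird(X) :- penguin(X).

bird(tweety).
penguin(sam).

% Goal-directed queries:
?- flies(tweety).   % succeeds, with a justification tree
?- flies(sam).      % fails: sam is a penguin, hence abnormal
```

Here `not` is default negation: `flies(tweety)` holds because `ab(tweety)` cannot be proved, and adding `penguin(tweety)` would retract the conclusion, which is exactly the nonmonotonic behavior rule-based commonsense reasoning requires.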
Problem

Research questions and friction points this paper is trying to address.

Ensuring AI trustworthiness via explainable reasoning
Bridging sub-symbolic and rule-based reasoning limitations
Addressing 16+2 desiderata for reliable commonsense AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Goal-directed constraint-based answer set programming
Emulates reliable human-style commonsense reasoning
Supports 16+2 desiderata for trustworthy AI
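The two added desiderata can likewise be sketched in s(CASP); the following is an illustrative example under assumed predicate names, not code from the paper. A headless rule acts as a global constraint that detects inconsistency, while the `#abducible` directive lets the reasoner assume alternative worlds and enumerate the answer sets in which each assumption holds.

```
% Inconsistency detection: a global constraint rules out any
% world in which a device is simultaneously on and off.
:- on(Dev), off(Dev).

% Alternative worlds: the reasoner may assume either diagnosis.
#abducible faulty(sensor).
#abducible faulty(wiring).

alarm :- faulty(sensor).
alarm :- faulty(wiring).

% ?- alarm.  yields one answer set per consistent assumption,
% each accompanied by its own justification.
```

Querying `?- alarm.` makes the system hypothesize each abducible in turn, so every consistent "world" that explains the alarm is returned and can be audited separately.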
Authors
Alexis R. Tudor
University of Texas at Dallas, Texas, USA
Yankai Zeng
University of Texas at Dallas
Huaduo Wang
University of Texas at Dallas, Texas, USA
Joaquin Arias
CETINIA, Universidad Rey Juan Carlos, Madrid, Spain
Gopal Gupta
Professor of Computer Science, The University of Texas at Dallas
Programming Languages · Logic Programming · Artificial Intelligence