🤖 AI Summary
This paper addresses the quantitative assessment of the degree of inconsistency in propositional knowledge bases, focusing on six inconsistency measures whose decision problems are NP-decidable, including the contension and hitting-set measures. It systematically compares the computational performance of SAT-based solving (using MiniSat) against answer set programming (ASP, using Clingo), built on uniform propositional encodings and complexity-theoretic analysis. Experimental results show that ASP consistently outperforms both the SAT-based approaches and naive baselines across all six measures, with particularly pronounced advantages on large knowledge bases. The primary contribution lies in demonstrating ASP's suitability for symbolic reasoning tasks involving structural constraints, such as minimal-model enumeration and set cover, thereby establishing a more efficient and scalable paradigm for inconsistency measurement.
📝 Abstract
We present algorithms based on satisfiability (SAT) solving, as well as answer set programming (ASP), for determining inconsistency degrees in propositional knowledge bases. We consider six inconsistency measures whose respective decision problems lie on the first level of the polynomial hierarchy, namely the contension, forgetting-based, hitting set, max-distance, sum-distance, and hit-distance inconsistency measures. In an extensive experimental analysis, we compare the SAT-based and ASP-based approaches with each other, as well as with a set of naive baseline algorithms. Our results demonstrate that, overall, both the SAT-based and the ASP-based approaches clearly outperform the naive baselines in terms of runtime. The results further show that the proposed ASP-based approaches outperform the SAT-based ones with regard to all six inconsistency measures considered in this work. Moreover, we conduct additional experiments to explain these results in greater detail.
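To make the kind of problem concrete, consider the contension measure, one of the six measures above. It is standardly defined via Priest's three-valued logic LP, in which every atom is assigned true, false, or both, and it equals the minimum number of atoms assigned "both" over all three-valued models of the knowledge base. The following brute-force sketch is illustrative only: the paper's actual algorithms use SAT and ASP encodings rather than enumeration, and the tuple-based formula representation here is an assumption made for this example.

```python
from itertools import product

# Assumed formula syntax: ('atom', name), ('not', f), ('and', f, g), ('or', f, g).
# Truth values in Priest's three-valued logic LP: 1.0 = true, 0.5 = both, 0.0 = false.

def lp_value(formula, interp):
    """Evaluate a formula under a three-valued interpretation."""
    op = formula[0]
    if op == 'atom':
        return interp[formula[1]]
    if op == 'not':
        return 1.0 - lp_value(formula[1], interp)  # negation swaps true/false, fixes 'both'
    if op == 'and':
        return min(lp_value(formula[1], interp), lp_value(formula[2], interp))
    if op == 'or':
        return max(lp_value(formula[1], interp), lp_value(formula[2], interp))
    raise ValueError(f"unknown connective: {op}")

def collect_atoms(formula):
    """Yield every atom name occurring in a formula."""
    if formula[0] == 'atom':
        yield formula[1]
    else:
        for sub in formula[1:]:
            yield from collect_atoms(sub)

def contension(kb):
    """Minimum number of atoms assigned 'both' over all LP models of kb."""
    atoms = sorted({a for f in kb for a in collect_atoms(f)})
    best = None
    for values in product([1.0, 0.5, 0.0], repeat=len(atoms)):
        interp = dict(zip(atoms, values))
        # A formula is satisfied in LP if its value is designated (>= 0.5).
        if all(lp_value(f, interp) >= 0.5 for f in kb):
            n_both = sum(1 for v in values if v == 0.5)
            best = n_both if best is None else min(best, n_both)
    return best  # None means no LP model exists (cannot happen for atom-based KBs)

# Example: {a, ¬a, b} forces a to take the value 'both', so the measure is 1.
kb = [('atom', 'a'), ('not', ('atom', 'a')), ('atom', 'b')]
print(contension(kb))  # → 1
```

This exhaustive search runs in O(3^n) time for n atoms, which is exactly why the paper resorts to SAT and ASP solvers: both can express the "minimize the number of atoms valued both" objective declaratively and scale far beyond what enumeration allows.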