Honey, I shrunk the hypothesis space (through logical preprocessing)

📅 2025-06-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
In inductive logic programming (ILP), the vast hypothesis space severely impedes search efficiency. This paper proposes a background-knowledge-driven logical preprocessing mechanism that contracts the hypothesis space semantically without using any training examples: answer set programming (ASP) automatically identifies and prunes rules that are logically inconsistent with the background knowledge (e.g., "an even number cannot be odd"), drastically reducing the search space before learning begins. Integrated into a constraint-based ILP system and evaluated on diverse tasks, including visual reasoning and game playing, the method needs only 10 seconds of preprocessing to cut learning time from over 10 hours to just 2 seconds, with no loss in predictive accuracy. The core contribution is a semantics-driven, example-agnostic framework for hypothesis space contraction that significantly improves the scalability and practical applicability of ILP.

📝 Abstract
Inductive logic programming (ILP) is a form of logical machine learning. The goal is to search a hypothesis space for a hypothesis that generalises training examples and background knowledge. We introduce an approach that 'shrinks' the hypothesis space before an ILP system searches it. Our approach uses background knowledge to find rules that cannot be in an optimal hypothesis regardless of the training examples. For instance, our approach discovers relationships such as "even numbers cannot be odd" and "prime numbers greater than 2 are odd". It then removes violating rules from the hypothesis space. We implement our approach using answer set programming and use it to shrink the hypothesis space of a constraint-based ILP system. Our experiments on multiple domains, including visual reasoning and game playing, show that our approach can substantially reduce learning times whilst maintaining predictive accuracies. For instance, given just 10 seconds of preprocessing time, our approach can reduce learning times from over 10 hours to only 2 seconds.
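The pruning idea in the abstract can be sketched in plain Python: a candidate rule whose body literals can never be jointly satisfied under the background knowledge (e.g., requiring a number to be both even and odd) cannot appear in an optimal hypothesis, so it is discarded before search. This is only an illustrative toy, not the paper's ASP implementation; the literal names, `MUTUALLY_EXCLUSIVE`, `IMPLIED`, and the helper functions are all hypothetical.

```python
from itertools import combinations

# Hypothetical background knowledge, expressed as two relationships the
# abstract mentions: mutually exclusive literals and implied literals.
MUTUALLY_EXCLUSIVE = {
    frozenset({"even(X)", "odd(X)"}),  # an even number cannot be odd
}
IMPLIED = {
    frozenset({"prime(X)", "gt2(X)"}): "odd(X)",  # primes > 2 are odd
}

def is_unsatisfiable(body):
    """True if the rule body can never hold under the background knowledge."""
    literals = set(body)
    # Saturate the body with literals implied by the background knowledge.
    changed = True
    while changed:
        changed = False
        for premises, conclusion in IMPLIED.items():
            if premises <= literals and conclusion not in literals:
                literals.add(conclusion)
                changed = True
    # A body containing two mutually exclusive literals is unsatisfiable.
    return any(frozenset(pair) in MUTUALLY_EXCLUSIVE
               for pair in combinations(literals, 2))

def shrink(hypothesis_space):
    """Drop rules with unsatisfiable bodies; the rest is searched as usual."""
    return [body for body in hypothesis_space if not is_unsatisfiable(body)]

candidates = [
    ["even(X)", "odd(X)"],             # pruned: directly contradictory
    ["prime(X)", "gt2(X)", "even(X)"], # pruned: implies odd(X), contradiction
    ["even(X)"],                       # kept
]
kept = shrink(candidates)
print(kept)  # [['even(X)']]
```

The key property, which the paper obtains via ASP rather than this ad hoc saturation, is that pruning depends only on the background knowledge, never on the training examples, so the same shrunken space is reusable across learning tasks over that background knowledge.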
Problem

Research questions and friction points this paper is trying to address.

Reduces hypothesis space size using logical preprocessing
Identifies and removes non-optimal rules before learning
Improves ILP efficiency while maintaining accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Preprocess hypothesis space using background knowledge
Remove invalid rules via answer set programming
Maintain accuracy while drastically cutting learning time