Learning Logical Rules using Minimum Message Length

📅 2025-08-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Unifying probabilistic and logical learning remains a foundational challenge in artificial intelligence. This paper introduces a Bayesian inductive logic programming framework that applies the Minimum Message Length (MML) principle to learn logic programs from noisy data. It combines a prior that explicitly favours more general programs with a likelihood that favours accurate programs, trading off hypothesis complexity against data fit and enabling rule induction from exclusively positive examples. The approach is data-efficient and insensitive to example balance. Empirically, it significantly outperforms previous methods, notably those that learn minimum description length (MDL) programs, on domains including game playing and drug design. The results indicate that probabilistic and logical paradigms can be combined without sacrificing interpretability (human-readable logic rules), statistical rigor (principled Bayesian inference), or practical generalization.

📝 Abstract
Unifying probabilistic and logical learning is a key challenge in AI. We introduce a Bayesian inductive logic programming approach that learns minimum message length programs from noisy data. Our approach balances hypothesis complexity and data fit through priors, which explicitly favour more general programs, and a likelihood that favours accurate programs. Our experiments on several domains, including game playing and drug design, show that our method significantly outperforms previous methods, notably those that learn minimum description length programs. Our results also show that our approach is data-efficient and insensitive to example balance, including the ability to learn from exclusively positive examples.
Problem

Research questions and friction points this paper is trying to address.

Unifying probabilistic and logical learning in AI
Learning minimum message length programs from noisy data
Balancing hypothesis complexity and data fit effectively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian inductive logic programming approach
Balances hypothesis complexity and data fit
Learns from exclusively positive examples
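The complexity/fit trade-off above can be made concrete with a toy two-part MML score: the total message length is the bits needed to encode the hypothesis plus the bits needed to encode the data given the hypothesis. This is a minimal sketch with hypothetical encoding costs and error counts, not the paper's actual encoding scheme; the candidate names and bit values are illustrative assumptions.

```python
import math

def message_length(hypothesis_bits, n_examples, n_errors):
    """Two-part MML score: hypothesis codelength + data codelength.

    The data part uses a simple noisy-channel model: each example is
    predicted correctly with probability p, estimated from the error rate.
    (This encoding is an illustrative assumption, not the paper's.)
    """
    p = max(1e-9, min(1 - 1e-9, 1 - n_errors / n_examples))
    data_bits = -(n_examples - n_errors) * math.log2(p) - n_errors * math.log2(1 - p)
    return hypothesis_bits + data_bits

# Hypothetical candidate logic programs:
# (name, encoding cost in bits, errors on 100 training examples)
candidates = [
    ("short_general_rule", 12.0, 5),   # cheap to encode, slightly noisy fit
    ("long_specific_rule", 40.0, 1),   # near-perfect fit, expensive to encode
    ("trivial_rule", 4.0, 30),         # very cheap, poor fit
]

# MML selects the hypothesis minimising the total message length.
best = min(candidates, key=lambda c: message_length(c[1], 100, c[2]))
```

Here the short, general rule wins: its small extra data cost is outweighed by the longer rule's encoding cost, which mirrors the paper's preference for general programs over overly specific ones.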