🤖 AI Summary
Unifying probabilistic and logical learning remains a foundational challenge in artificial intelligence. This paper introduces Bayesian Inductive Logic Programming (BILP), a framework that applies the Minimum Message Length (MML) principle to learn logic programs from noisy data. Specifically, it combines priors that explicitly favour more general programs with a likelihood that favours accurate programs, balancing hypothesis complexity against data fit. The approach is data-efficient, insensitive to example balance, and able to learn from exclusively positive examples. Empirically, BILP significantly outperforms previous methods, notably those based on the Minimum Description Length (MDL) principle, on tasks including game playing and drug design. The result is a framework that jointly offers interpretability (human-readable logic rules), statistical rigour (principled Bayesian inference), and strong practical generalisation, demonstrating that the probabilistic and logical paradigms can be coherently unified without sacrificing any of these desiderata.
📝 Abstract
Unifying probabilistic and logical learning is a key challenge in AI. We introduce a Bayesian inductive logic programming approach that learns minimum message length programs from noisy data. Our approach balances hypothesis complexity and data fit through priors, which explicitly favour more general programs, and a likelihood that favours accurate programs. Our experiments on several domains, including game playing and drug design, show that our method significantly outperforms previous methods, notably those that learn minimum description length programs. Our results also show that our approach is data-efficient and insensitive to example balance, including the ability to learn from exclusively positive examples.
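The complexity/fit trade-off described above can be illustrated with a toy two-part message-length score. This is a minimal sketch, not the paper's actual encoding: the bit costs, the noise rate, and the functions `hypothesis_cost` and `data_cost` are all illustrative assumptions. The idea it demonstrates is that a short program that misclassifies a few examples can still have a lower total message length than a large program that fits the data exactly.

```python
import math

# Two-part MML-style score (illustrative assumption, not the paper's encoding):
#   total = cost(hypothesis) + cost(data | hypothesis)
# The first term penalises program complexity; the second penalises errors.

def hypothesis_cost(num_literals, bits_per_literal=4.0):
    """Prior term: bits to describe the program itself (assumed flat cost per literal)."""
    return num_literals * bits_per_literal

def data_cost(num_errors, num_examples, noise=0.1):
    """Likelihood term: bits to encode the examples given the program,
    under an assumed per-example noise rate."""
    correct = num_examples - num_errors
    return -(correct * math.log2(1 - noise) + num_errors * math.log2(noise))

def message_length(num_literals, num_errors, num_examples):
    return hypothesis_cost(num_literals) + data_cost(num_errors, num_examples)

# A compact, slightly noisy program vs. a large program that fits perfectly:
short_noisy = message_length(num_literals=3, num_errors=2, num_examples=100)
long_exact = message_length(num_literals=40, num_errors=0, num_examples=100)
best = "short_noisy" if short_noisy < long_exact else "long_exact"
```

Here the 3-literal program wins despite its two errors, because the 40-literal program pays far more in description length than it saves in error coding. This mirrors the abstract's point that the priors and likelihood together trade off generality against accuracy.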