🤖 AI Summary
Traditional Inductive Logic Programming (ILP) systems typically produce a single hypothesis per training run, while ensemble methods, though beneficial for generalization, incur prohibitive computational overhead from repeated training. To address this, we propose Symbolic Snapshot Ensemble (SSE): an efficient ensemble framework that saves multiple intermediate hypotheses during a *single* ILP training run and fuses them via Minimum Description Length (MDL)-based weighting. SSE avoids redundant training, preserves full symbolic interpretability, and makes ensembling computationally lightweight. Evaluated on multiple standard ILP benchmarks, SSE achieves an average 4% improvement in predictive accuracy with less than 1% increase in computational cost. Our key contribution is the first principled use of intermediate training states as ensemble members, significantly improving the efficiency-performance trade-off in logical machine learning without sacrificing transparency or logical fidelity.
📝 Abstract
Inductive logic programming (ILP) is a form of logical machine learning. Most ILP algorithms learn a single hypothesis from a single training run. Ensemble methods train an ILP algorithm multiple times to learn multiple hypotheses. In this paper, we train an ILP algorithm only once and save intermediate hypotheses. We then combine the hypotheses using a minimum description length weighting scheme. Our experiments on multiple benchmarks, including game playing and visual reasoning, show that our approach improves predictive accuracy by 4% with less than 1% computational overhead.
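The combination step described above can be sketched in a few lines. The code below is a minimal illustration, not the paper's implementation: it assumes each saved hypothesis is a callable classifier, that its description length is measured in bits, and that weights follow the standard MDL-style prior of `2^-L` before normalization; the `0.5` decision threshold is also an assumption.

```python
import math

def mdl_weights(description_lengths):
    """Convert MDL scores (in bits) into normalized ensemble weights.
    Shorter hypotheses (lower description length) receive exponentially
    larger weight, following the 2^-L prior over descriptions."""
    raw = [2.0 ** -length for length in description_lengths]
    total = sum(raw)
    return [r / total for r in raw]

def ensemble_predict(hypotheses, weights, example):
    """Weighted vote over the intermediate hypotheses saved during a
    single training run. Each hypothesis maps an example to True/False;
    the ensemble predicts True if the weighted mass of agreeing
    hypotheses reaches the (assumed) 0.5 threshold."""
    score = sum(w for h, w in zip(hypotheses, weights) if h(example))
    return score >= 0.5

# Toy usage: three snapshot hypotheses with increasing description length.
hypotheses = [lambda x: x > 0, lambda x: True, lambda x: False]
weights = mdl_weights([1.0, 2.0, 3.0])   # -> [4/7, 2/7, 1/7]
print(ensemble_predict(hypotheses, weights, 1))
```

In the toy run, the two hypotheses accepting the example `1` carry 6/7 of the weight, so the ensemble predicts True; the single training run supplies all the snapshots, so no retraining is needed.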