Symbolic Snapshot Ensembles

📅 2025-10-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional Inductive Logic Programming (ILP) typically produces a single hypothesis per training run, while ensemble methods, though beneficial for generalization, incur prohibitive computational overhead. To address this, we propose the Symbolic Snapshot Ensemble (SSE): an efficient framework that saves multiple intermediate hypotheses during a *single* ILP training run and fuses them via Minimum Description Length (MDL) weighting. SSE avoids redundant training, preserves full symbolic interpretability, and keeps ensembling computationally lightweight. Evaluated on multiple standard ILP benchmarks, SSE achieves an average 4.0% improvement in predictive accuracy with less than a 1% increase in computational cost. Our key contribution is the first principled use of intermediate training states as ensemble resources, significantly improving the efficiency–performance trade-off in logical machine learning without sacrificing transparency or logical fidelity.
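The MDL-based fusion can be illustrated with a minimal sketch. The cost model below (one unit per literal in the hypothesis plus one per misclassified example, with weights proportional to 2^-DL) is an assumption for illustration, not the paper's exact encoding scheme.

```python
# Hypothetical MDL weighting sketch -- not the authors' exact scheme.
def description_length(num_literals: int, num_misclassified: int,
                       literal_cost: float = 1.0, error_cost: float = 1.0) -> float:
    """DL = cost of the program plus cost of the examples it fails to cover."""
    return literal_cost * num_literals + error_cost * num_misclassified

def mdl_weights(desc_lengths):
    """Shorter descriptions receive exponentially larger normalized weights."""
    raw = [2.0 ** -dl for dl in desc_lengths]
    total = sum(raw)
    return [r / total for r in raw]

# Three snapshot hypotheses: (literals, misclassified examples)
weights = mdl_weights([description_length(5, 2),   # DL = 7
                       description_length(8, 1),   # DL = 9
                       description_length(12, 0)]) # DL = 12
```

Under this toy encoding, the compact-but-imperfect first hypothesis dominates the vote, reflecting the MDL trade-off between program size and coverage errors.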

📝 Abstract
Inductive logic programming (ILP) is a form of logical machine learning. Most ILP algorithms learn a single hypothesis from a single training run. Ensemble methods train an ILP algorithm multiple times to learn multiple hypotheses. In this paper, we train an ILP algorithm only once and save intermediate hypotheses. We then combine the hypotheses using a minimum description length weighting scheme. Our experiments on multiple benchmarks, including game playing and visual reasoning, show that our approach improves predictive accuracy by 4% with less than 1% computational overhead.
Problem

Research questions and friction points this paper is trying to address.

Improving predictive accuracy in inductive logic programming
Reducing computational overhead for ensemble methods
Combining intermediate hypotheses with MDL weighting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Saves intermediate hypotheses from a single training run
Combines hypotheses using MDL weighting scheme
Improves accuracy with minimal computational overhead
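The two mechanisms above can be sketched together: collect intermediate hypotheses during one search, then predict by weighted vote. The function names, the snapshot interval, and the use of callables as hypotheses are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of snapshot collection and MDL-weighted voting.
def train_with_snapshots(candidate_stream, score_fn, every=10):
    """Track the best hypothesis seen so far (lower score = shorter DL)
    during a single run, saving a snapshot every `every` steps."""
    snapshots, best = [], None
    for step, hyp in enumerate(candidate_stream):
        if best is None or score_fn(hyp) < score_fn(best):
            best = hyp
        if step % every == 0:
            snapshots.append(best)
    return snapshots

def ensemble_predict(snapshots, weights, example):
    """Weighted majority vote over the boolean predictions of each snapshot."""
    vote = sum(w for h, w in zip(snapshots, weights) if h(example))
    return vote >= 0.5
```

Because the snapshots come from one run, the only extra cost is storing and evaluating the saved hypotheses, which is consistent with the reported sub-1% overhead.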