🤖 AI Summary
This work addresses the lifelong learning challenge in neuro-symbolic AI, specifically focusing on knowledge reuse in Inductive Logic Programming (ILP) under continuous task sequences. We propose the first lifelong ILP framework that explicitly supports composability and transferability of logical rules. Our method integrates differentiable logic reasoning, knowledge distillation, and parameter regularization to enable continual rule extraction, compression, and cross-task transfer. The key contribution is the first explicit modeling of structural rule composability within neuro-symbolic systems—allowing models to acquire new rules efficiently without catastrophic forgetting of prior knowledge. Evaluated on multi-task ILP benchmarks, our approach achieves significant improvements in learning efficiency, generalization, and scalability. It establishes a novel paradigm for continual learning in neuro-symbolic AI, advancing both theoretical foundations and practical applicability of interpretable, adaptive reasoning systems.
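The core idea of reusing logic rules across a task sequence can be illustrated with a toy sketch. This is purely illustrative and not the paper's actual framework (which relies on differentiable logic reasoning); the predicates `parent`, `grandparent`, and `ancestor` and the fact base are hypothetical stand-ins for rules "learned" in successive tasks:

```python
# Toy sketch: a rule library accumulates predicates from earlier tasks,
# and later tasks compose them instead of relearning from raw facts.
facts = {("parent", "ann", "bob"), ("parent", "bob", "cal")}
entities = {e for (_, a, b) in facts for e in (a, b)}

def parent(x, y):
    # Predicate acquired in task 1 (here just a fact lookup).
    return ("parent", x, y) in facts

def grandparent(x, z):
    # Task 2 composes the task-1 rule: grandparent(X,Z) :- parent(X,Y), parent(Y,Z).
    return any(parent(x, y) and parent(y, z) for y in entities)

def ancestor(x, z):
    # Task 3 reuses parent/2 recursively: ancestor(X,Z) :- parent(X,Z);
    #                                     ancestor(X,Z) :- parent(X,Y), ancestor(Y,Z).
    if parent(x, z):
        return True
    return any(parent(x, y) and ancestor(y, z) for y in entities)
```

In this spirit, each new task only has to learn the thin compositional layer on top of the existing rule library, which is what makes reuse cheaper than learning every rule from scratch.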
📝 Abstract
Solving Inductive Logic Programming (ILP) problems with neural networks is a key challenge in Neural-Symbolic Artificial Intelligence (AI). While most research has focused on designing novel network architectures for individual problems, less effort has been devoted to exploring new learning paradigms involving a sequence of problems. In this work, we investigate lifelong learning ILP, which leverages the compositional and transferable nature of logic rules for efficient learning of new problems. We introduce a compositional framework, demonstrating how logic rules acquired from earlier tasks can be efficiently reused in subsequent ones, leading to improved scalability and performance. We formalize our approach and empirically evaluate it on sequences of tasks. Experimental results validate the feasibility and advantages of this paradigm, opening new directions for continual learning in Neural-Symbolic AI.