Curriculum Abductive Learning

📅 2025-05-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Problem: Abductive Learning suffers from combinatorial explosion of the abductive search space and training instability when the knowledge base is large. Method: This paper proposes a knowledge-base-structure-aware progressive curriculum learning framework. It explicitly leverages the intrinsic hierarchical structure of knowledge bases to design a layered curriculum strategy, incrementally and smoothly integrating logical knowledge into training, departing from the conventional treatment of the knowledge base as a static black box. The approach unifies symbolic abductive reasoning, iterative model optimization, and structured decomposition of the knowledge base. Contribution/Results: Experiments demonstrate significantly improved training stability and faster convergence. The method achieves superior final accuracy on multi-task benchmarks, especially in complex, knowledge-intensive scenarios, supporting a more interpretable and robust approach to neuro-symbolic learning.

📝 Abstract
Abductive Learning (ABL) integrates machine learning with logical reasoning in a loop: a learning model predicts symbolic concept labels from raw inputs, which are revised through abduction using domain knowledge and then fed back for retraining. However, due to the nondeterminism of abduction, the training process often suffers from instability, especially when the knowledge base is large and complex, resulting in a prohibitively large abduction space. While prior works focus on improving candidate selection within this space, they typically treat the knowledge base as a static black box. In this work, we propose Curriculum Abductive Learning (C-ABL), a method that explicitly leverages the internal structure of the knowledge base to address these training challenges. C-ABL partitions the knowledge base into a sequence of sub-bases, progressively introduced during training. This reduces the abduction space throughout training and enables the model to incorporate logic in a stepwise, smooth way. Experiments across multiple tasks show that C-ABL outperforms previous ABL implementations and significantly improves training stability, convergence speed, and final accuracy, especially under complex knowledge settings.
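The abduction step in the loop above can be sketched in miniature: given an inconsistent model prediction, search the label space for the candidate that satisfies the knowledge base while differing least from the prediction. This is a generic illustration, not the paper's implementation; the `consistent` rule, the alphabet, and the minimal-revision criterion are all hypothetical.

```python
from itertools import product

# Toy knowledge base (hypothetical rule): the two symbols must sum
# to an even number.
def consistent(labels):
    return sum(labels) % 2 == 0

def abduce(predicted, alphabet, consistent):
    """Return a labeling consistent with the knowledge base that
    differs least from the model's prediction (Hamming distance).
    Enumerates the full abduction space -- the exponential cost that
    C-ABL's curriculum is designed to tame."""
    candidates = [c for c in product(alphabet, repeat=len(predicted))
                  if consistent(c)]
    return min(candidates,
               key=lambda c: sum(a != b for a, b in zip(c, predicted)))

# The model predicted (1, 2), which violates the rule (sum 3 is odd);
# abduction revises it minimally before the labels are fed back for
# retraining.
revised = abduce((1, 2), alphabet=(0, 1, 2), consistent=consistent)
```

Exhaustive enumeration is only viable for toy alphabets; for realistic knowledge bases the candidate set explodes, which is the instability the abstract describes.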
Problem

Research questions and friction points this paper is trying to address.

Addresses instability in the Abductive Learning training process
Reduces large abduction space via structured knowledge partitioning
Improves training stability, convergence speed, and final accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates machine learning with logical reasoning in a closed loop
Partitions knowledge base into progressive sub-bases
Reduces abduction space for stable stepwise training
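The partition-and-grow idea in these bullets can be sketched as follows. The rules, their layering, and the toy label space are all invented for illustration; the point is only that introducing sub-bases stage by stage keeps the set of abducible candidates small early in training.

```python
from itertools import product

# Hypothetical layered knowledge base: each "rule" is a predicate over
# a candidate labeling, ordered from basic to advanced layers.
layers = [
    [lambda c: c[0] <= c[1]],        # stage 1: ordering rule
    [lambda c: sum(c) % 2 == 0],     # stage 2: parity rule
    [lambda c: c[-1] != 0],          # stage 3: boundary rule
]

def abduction_space(active_rules, alphabet, length):
    """All labelings consistent with the rules introduced so far."""
    return [c for c in product(alphabet, repeat=length)
            if all(r(c) for r in active_rules)]

# Curriculum: progressively activate sub-bases and record how the
# abduction space shrinks at each stage.
active, sizes = [], []
for layer in layers:
    active.extend(layer)
    sizes.append(len(abduction_space(active, (0, 1, 2), 2)))
```

Here the candidate count drops at each stage, so early training abduces over a small, stable space and later stages add constraints smoothly instead of confronting the full knowledge base at once.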