Robust Hypothesis Generation: LLM-Automated Language Bias for Inductive Logic Programming

📅 2025-05-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Robust hypothesis generation in open environments remains challenging: traditional Inductive Logic Programming (ILP) relies on manually defined symbolic structures, while pure large language model (LLM) approaches are sensitive to noise. Method: the paper introduces the first fully automated, data-driven framework for constructing language bias. A multi-LLM collaborative agent architecture automatically extracts a predicate vocabulary from raw text, generates relational templates, and performs end-to-end translation from text to logical facts (thereby achieving symbol grounding); the resulting facts then drive an ILP solver to learn interpretable, formally verifiable logical rules. Contribution/Results: the core innovation is eliminating ILP's dependence on prior symbolic structure, combining the generalization ability of LLMs with the formal verifiability of ILP. Experiments show significant gains over both classical ILP and state-of-the-art LLM-only baselines across diverse, complex scenarios, with high accuracy, strong robustness to noise, and superior generalization.
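The three-stage pipeline described above (predicate extraction, relational templates, fact grounding, then ILP) can be sketched as follows. This is a minimal, hypothetical illustration: the LLM agent calls are stubbed with canned outputs for a family-relations example, and all function names are illustrative, not the paper's.

```python
# Minimal sketch of the described pipeline. LLM agent calls are stubbed
# with canned outputs; names are illustrative, not from the paper.

def agent_extract_predicates(text: str) -> list:
    """Agent 1: propose a predicate vocabulary from raw text (stubbed)."""
    return ["parent/2", "grandparent/2"]

def agent_generate_templates(predicates: list) -> list:
    """Agent 2: emit relational templates (the language bias) that
    constrain which literals may appear in rule heads and bodies."""
    return [
        "head: grandparent(+person, -person)",
        "body: parent(+person, -person)",
    ]

def agent_ground_facts(text: str) -> list:
    """Agent 3: translate text into logical facts (symbol grounding, stubbed)."""
    return [
        ("parent", "alice", "bob"),
        ("parent", "bob", "carol"),
    ]

def induce_rule(facts, templates):
    """Stand-in for the ILP solver: evaluate the candidate rule
    grandparent(X,Z) :- parent(X,Y), parent(Y,Z) against the facts."""
    parents = {(a, b) for rel, a, b in facts if rel == "parent"}
    derived = {(a, c) for a, b in parents for b2, c in parents if b == b2}
    return "grandparent(X,Z) :- parent(X,Y), parent(Y,Z)", derived

text = "Alice is Bob's mother. Bob is Carol's father."
preds = agent_extract_predicates(text)
bias = agent_generate_templates(preds)
facts = agent_ground_facts(text)
rule, derived = induce_rule(facts, bias)
print(rule)     # the learned, interpretable rule
print(derived)  # {('alice', 'carol')}
```

The point of the sketch is the division of labor: the agents supply the symbolic scaffolding automatically, while the (here trivially stubbed) ILP step stays fully verifiable.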

📝 Abstract
Automating robust hypothesis generation in open environments is pivotal for AI cognition. We introduce a novel framework integrating a multi-agent system, powered by Large Language Models (LLMs), with Inductive Logic Programming (ILP). Our system's LLM agents autonomously define a structured symbolic vocabulary (predicates) and relational templates, i.e., language bias, directly from raw textual data. This automated symbolic grounding (the construction of the language bias), traditionally an expert-driven bottleneck for ILP, then guides the transformation of text into facts for an ILP solver, which inductively learns interpretable rules. This approach overcomes traditional ILP's reliance on predefined symbolic structures and the noise-sensitivity of pure LLM methods. Extensive experiments in diverse, challenging scenarios validate superior performance, paving a new path for automated, explainable, and verifiable hypothesis generation.
Problem

Research questions and friction points this paper is trying to address.

Automating robust hypothesis generation in open AI environments
Overcoming ILP's reliance on predefined symbolic structures
Reducing noise-sensitivity in pure LLM methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent LLM system automates symbolic vocabulary definition
Automated language bias construction for ILP
Combines LLMs and ILP for robust hypothesis generation
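The innovations above hinge on "language bias", which in classical ILP is hand-written (e.g. mode declarations in Aleph-style systems) and which this paper's agents construct automatically. A toy generate-and-test loop shows why the bias matters: it fixes the space of candidate rule bodies the solver searches. The bias format here is a deliberate simplification of my own, not the paper's.

```python
from itertools import product

# Toy illustration of language bias in ILP: the bias fixes which
# predicates may appear in a rule body, i.e. the hypothesis space.
# The format is simplified and hypothetical, not the paper's.

facts = {
    ("parent", "alice", "bob"),
    ("parent", "bob", "carol"),
}
positives = {("alice", "carol")}   # grandparent examples to cover
negatives = {("alice", "bob")}     # examples the rule must NOT cover

bias = ["parent"]                  # body predicates the solver may use

def covers(body, x, z):
    """Does the chained body p1(X,Y), p2(Y,Z) derive (x, z) from the facts?"""
    p1, p2 = body
    constants = {a for _, a, _ in facts} | {b for _, _, b in facts}
    return any((p1, x, y) in facts and (p2, y, z) in facts for y in constants)

def induce():
    """Generate-and-test over two-literal bodies allowed by the bias."""
    for body in product(bias, repeat=2):
        if all(covers(body, *e) for e in positives) and \
           not any(covers(body, *e) for e in negatives):
            p1, p2 = body
            return f"grandparent(X,Z) :- {p1}(X,Y), {p2}(Y,Z)"
    return None

print(induce())  # grandparent(X,Z) :- parent(X,Y), parent(Y,Z)
```

With a hand-written bias this search space is small and sound by construction; the paper's contribution is having LLM agents produce an equivalent bias from raw text, so no expert has to enumerate the predicates up front.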