Lifted Relational Probabilistic Inference via Implicit Learning

📅 2026-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of answering first-order probabilistic queries from partial and noisy observations by proposing a novel approach that integrates inductive learning with deductive reasoning. It introduces a dual lifting mechanism—grounding-lift and world-lift—that, for the first time, enables simultaneous polynomial-time lifting over both individuals and possible worlds. By incorporating a bounded-degree sum-of-squares (SOS) hierarchy, the method implicitly combines first-order logical axioms with observed data, circumventing the need for explicit model construction. This framework establishes the first system capable of implicit learning in first-order probabilistic logic while supporting efficient lifted inference, thereby enabling scalable, uncertainty-aware querying in large relational domains.

📝 Abstract
Reconciling the tension between inductive learning and deductive reasoning in first-order relational domains is a longstanding challenge in AI. We study the problem of answering queries in a first-order relational probabilistic logic through a joint effort of learning and reasoning, without ever constructing an explicit model. Traditional lifted inference assumes access to a complete model and exploits symmetry to evaluate probabilistic queries; however, learning such models from partial, noisy observations is intractable in general. We reconcile these two challenges through implicit learning to reason and first-order relational probabilistic inference techniques. More specifically, we merge incomplete first-order axioms with independently sampled, partially observed examples into a bounded-degree fragment of the sum-of-squares (SOS) hierarchy in polynomial time. Our algorithm performs two lifts simultaneously: (i) grounding-lift, where renaming-equivalent ground moments share one variable, collapsing the domain of individuals; and (ii) world-lift, where all pseudo-models (partial world assignments) are enforced in parallel, producing a global bound that holds across all worlds consistent with the learned constraints. These innovations yield the first polynomial-time framework that implicitly learns a first-order probabilistic logic and performs lifted inference over both individuals and worlds.
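The grounding-lift described above — letting renaming-equivalent ground moments share a single variable — can be illustrated with a toy sketch (this is an illustration of the symmetry idea, not the paper's implementation): for one unary predicate P over four individuals, the degree-2 ground moments E[P(x)·P(y)] collapse into just two equivalence classes under renaming of individuals (x = y and x ≠ y), regardless of domain size.

```python
from itertools import combinations_with_replacement, permutations

def canonical(monomial, domain):
    # A monomial is a sorted tuple of ground atoms (predicate, individual).
    # Two monomials are renaming-equivalent if some permutation of the
    # domain maps one onto the other; the lexicographically smallest
    # image serves as the canonical representative of the orbit.
    best = None
    for perm in permutations(domain):
        relabel = dict(zip(domain, perm))
        image = tuple(sorted((pred, relabel[ind]) for pred, ind in monomial))
        if best is None or image < best:
            best = image
    return best

domain = ["a", "b", "c", "d"]
atoms = [("P", x) for x in domain]

# Degree-2 ground moments E[P(x) * P(y)]: quadratically many before lifting.
moments = list(combinations_with_replacement(atoms, 2))
classes = {canonical(m, domain) for m in moments}

print(len(moments))  # 10 ground moments for a domain of size 4
print(len(classes))  # 2 lifted variables: the x == y and x != y orbits
```

The same collapse applies at every degree of the SOS moment matrix, which is what makes the number of lifted variables polynomial in the degree bound rather than in the domain size.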
Problem

Research questions and friction points this paper is trying to address.

lifted inference
relational probabilistic logic
implicit learning
first-order logic
probabilistic reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

lifted inference
implicit learning
first-order probabilistic logic
sum-of-squares hierarchy
relational reasoning
Luise Ge
Washington University in St Louis
learning and reasoning, algorithms and complexity, value alignment, computational social choice
Brendan Juba
Associate Professor, Washington University in St. Louis
Theoretical Computer Science, Artificial Intelligence
Kris Nilsson
Computer Science & Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
Alison Shao
Computer Science & Engineering, Washington University in St. Louis, St. Louis, Missouri, USA