🤖 AI Summary
This work addresses the challenge of answering first-order probabilistic queries from partial, noisy observations by integrating inductive learning with deductive reasoning. It introduces a dual lifting mechanism (grounding-lift and world-lift) that enables simultaneous polynomial-time lifting over both individuals and possible worlds. By merging first-order logical axioms and observed data into a bounded-degree sum-of-squares (SOS) hierarchy, the method answers queries without ever constructing an explicit model. The result is the first framework that performs implicit learning in first-order probabilistic logic while supporting efficient lifted inference, enabling scalable, uncertainty-aware querying in large relational domains.
📝 Abstract
Reconciling the tension between inductive learning and deductive reasoning in first-order relational domains is a longstanding challenge in AI. We study the problem of answering queries in a first-order relational probabilistic logic through the joint use of learning and reasoning, without ever constructing an explicit model. Traditional lifted inference assumes access to a complete model and exploits symmetry to evaluate probabilistic queries efficiently; however, learning such models from partial, noisy observations is intractable in general. We reconcile these two challenges by combining implicit learning-to-reason with first-order relational probabilistic inference techniques. More specifically, we merge incomplete first-order axioms and independently sampled, partially observed examples into a bounded-degree fragment of the sum-of-squares (SOS) hierarchy in polynomial time. Our algorithm performs two lifts simultaneously: (i) a grounding-lift, in which renaming-equivalent ground moments share a single variable, collapsing the domain of individuals; and (ii) a world-lift, in which all pseudo-models (partial world assignments) are enforced in parallel, producing a global bound that holds across every world consistent with the learned constraints. Together, these yield the first polynomial-time framework that implicitly learns a first-order probabilistic logic and performs lifted inference over both individuals and worlds.
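The grounding-lift exploits a standard symmetry from lifted inference: ground atoms related by a renaming (permutation) of domain individuals are interchangeable, so each equivalence class needs only one variable. The sketch below is a hypothetical illustration of that idea, not the paper's algorithm; it shows that the number of equivalence classes of ground atoms for a single predicate depends only on the equality pattern among arguments, not on the domain size.

```python
from itertools import product

def lifted_classes(domain, arity):
    """Equivalence classes of ground atoms of one predicate under
    renamings of `domain`: each class is identified by the equality
    pattern among the atom's arguments (first-occurrence indices)."""
    classes = set()
    for args in product(domain, repeat=arity):
        seen, pattern = {}, []
        for a in args:
            seen.setdefault(a, len(seen))
            pattern.append(seen[a])
        classes.add(tuple(pattern))
    return classes

# A binary predicate yields the same two lifted classes, e.g. R(x, x)
# and R(x, y) with x != y, whether the domain has 3 or 100 individuals.
small = lifted_classes(range(3), 2)
large = lifted_classes(range(100), 2)
assert small == large == {(0, 0), (0, 1)}
```

In this toy setting the number of lifted variables for an arity-k predicate is the number of partitions of k argument slots, independent of the domain, which is what makes polynomial-time lifting over individuals plausible.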
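For the SOS side, a minimal sketch may help: at the lowest level of the hierarchy, a univariate quadratic p(x) = a·x² + b·x + c is a sum of squares iff its Gram matrix in the monomial basis [1, x] is positive semidefinite. This is a generic textbook check, assumed here only to illustrate what "bounded-degree SOS" tests; the paper's construction operates on moments of relational pseudo-models, not on this toy polynomial.

```python
import numpy as np

def is_sos_degree2(a, b, c):
    """Check whether a*x^2 + b*x + c is a sum of squares by testing
    positive semidefiniteness of its Gram matrix in the basis [1, x]."""
    Q = np.array([[c, b / 2.0],
                  [b / 2.0, a]])
    return bool(np.all(np.linalg.eigvalsh(Q) >= -1e-9))

assert is_sos_degree2(1, 2, 1)      # x^2 + 2x + 1 = (x + 1)^2
assert not is_sos_degree2(1, 3, 1)  # negative at x = -1, so not SOS
```

Higher levels of the hierarchy repeat this test with larger monomial bases, which is why bounding the degree keeps the relaxation solvable in polynomial time.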