OrLog: Resolving Complex Queries with LLMs and Probabilistic Reasoning

📅 2026-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current retrieval systems struggle with complex queries that carry logical constraints (e.g., AND, OR, NOT): they either ignore these constraints or approximate them through generative reasoning that can be inconsistent and unreliable. This work proposes OrLog, a framework that decouples predicate-level plausibility estimation from probabilistic logical inference, without requiring generation. Specifically, OrLog uses a single decoding-free forward pass of a large language model to score the plausibility of atomic predicates, then applies a probabilistic inference engine to compute the posterior probability that a query is satisfied. By dropping the assumptions of unambiguous queries and complete evidence, common requirements in traditional neuro-symbolic systems, OrLog substantially improves top-1 accuracy, particularly on disjunctive queries, while reducing average token consumption by roughly 90%.
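The inference step described above can be illustrated with a small sketch. This is not the authors' code: it assumes each atomic predicate already has an LLM-estimated plausibility score and that predicates are treated as independent, and it combines them recursively over a query tree (product for AND, complement-product for OR, complement for NOT). The query structure and score values below are hypothetical.

```python
def query_probability(node, scores):
    """Posterior probability that a query tree is satisfied.

    node: ("pred", name) | ("and", children) | ("or", children) | ("not", child)
    scores: dict mapping predicate name -> LLM-estimated plausibility in [0, 1]
    Assumes predicate independence (a simplifying assumption for this sketch).
    """
    op = node[0]
    if op == "pred":
        return scores[node[1]]                       # atomic predicate plausibility
    if op == "and":                                  # product under independence
        p = 1.0
        for child in node[1]:
            p *= query_probability(child, scores)
        return p
    if op == "or":                                   # 1 - P(all disjuncts fail)
        p = 1.0
        for child in node[1]:
            p *= 1.0 - query_probability(child, scores)
        return 1.0 - p
    if op == "not":
        return 1.0 - query_probability(node[1], scores)
    raise ValueError(f"unknown operator: {op}")

# Hypothetical example: "directed by Nolan AND (sci-fi OR thriller)"
scores = {"directed_by_nolan": 0.9, "is_scifi": 0.6, "is_thriller": 0.3}
query = ("and", [("pred", "directed_by_nolan"),
                 ("or", [("pred", "is_scifi"), ("pred", "is_thriller")])])
print(round(query_probability(query, scores), 4))  # prints 0.648
```

Ranking entities by this posterior is what lets disjunctive queries benefit: a candidate that strongly satisfies only one disjunct still scores highly, whereas embedding similarity tends to average constraints away.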

📝 Abstract
Resolving complex information needs that come with multiple constraints should consider enforcing the logical operators encoded in the query (i.e., conjunction, disjunction, negation) on the candidate answer set. Current retrieval systems either ignore these constraints in neural embeddings or approximate them in a generative reasoning process that can be inconsistent and unreliable. Although well-suited to structured reasoning, existing neuro-symbolic approaches remain confined to formal logic or mathematics problems as they often assume unambiguous queries and access to complete evidence, conditions rarely met in information retrieval. To bridge this gap, we introduce OrLog, a neuro-symbolic retrieval framework that decouples predicate-level plausibility estimation from logical reasoning: a large language model (LLM) provides plausibility scores for atomic predicates in one decoding-free forward pass, from which a probabilistic reasoning engine derives the posterior probability of query satisfaction. We evaluate OrLog across multiple backbone LLMs, varying levels of access to external knowledge, and a range of logical constraints, and compare it against base retrievers and LLM-as-reasoner methods. Provided with entity descriptions, OrLog can significantly boost top-rank precision compared to LLM reasoning with larger gains on disjunctive queries. OrLog is also more efficient, cutting mean tokens by ~90% per query-entity pair. These results demonstrate that generation-free predicate plausibility estimation combined with probabilistic reasoning enables constraint-aware retrieval that outperforms monolithic reasoning while using far fewer tokens.
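The abstract's "decoding-free forward pass" plausibly refers to reading the model's next-token distribution for verification tokens (e.g., "yes" vs. "no" after a predicate-checking prompt) instead of generating text. A minimal sketch of that scoring step, with hypothetical logit values standing in for a real model's output (the function name and prompt framing are assumptions, not the paper's API):

```python
import math

def plausibility_from_logits(logit_yes: float, logit_no: float) -> float:
    """Turn the next-token logits for 'yes'/'no' into a plausibility in [0, 1].

    A single forward pass yields logits for every vocabulary token; restricting
    the softmax to the two answer tokens gives a score without any decoding.
    The max-subtraction keeps exp() numerically stable for large logits.
    """
    m = max(logit_yes, logit_no)
    e_yes = math.exp(logit_yes - m)
    e_no = math.exp(logit_no - m)
    return e_yes / (e_yes + e_no)

# Hypothetical logits for the predicate "Is this entity a thriller?"
print(round(plausibility_from_logits(2.0, 0.0), 3))  # sigmoid(2.0) ~ 0.881
```

Because only one forward pass per query-entity pair is needed (rather than a full generated chain of reasoning), this is consistent with the reported ~90% reduction in mean tokens.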
Problem

Research questions and friction points this paper is trying to address.

complex queries
logical constraints
information retrieval
neuro-symbolic reasoning
probabilistic reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

neuro-symbolic retrieval
probabilistic reasoning
predicate plausibility estimation
decoding-free LLM inference
logical constraint satisfaction
Mohanna Hoveyda
Radboud University, Nijmegen, The Netherlands
Jelle Piepenbrock
Eindhoven University of Technology
machine learning · automated theorem proving
A. D. Vries
Radboud University, Nijmegen, The Netherlands
M. D. Rijke
University of Amsterdam, Amsterdam, The Netherlands
Faegheh Hasibi
Assistant Professor, Radboud University
Information retrieval · Natural language processing · Conversational AI