A Neuro-Symbolic Framework for Reasoning under Perceptual Uncertainty: Bridging Continuous Perception and Discrete Symbolic Planning

📅 2025-11-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the semantic and uncertainty gap between continuous perception and discrete symbolic reasoning, with the aim of enabling AI systems to plan robustly under perceptual uncertainty. We propose a novel neuro-symbolic architecture: a front-end Transformer-GNN hybrid that extracts probabilistic symbolic predicates from visual inputs, and a back-end uncertainty-aware symbolic planner that, uniquely, integrates perceptual confidence calibration and active information gathering into symbolic reasoning while establishing a theoretical link between perceptual uncertainty and planning convergence. Evaluated across more than 10,000 PyBullet scenarios, the method achieves a predicate-extraction F1 score of 0.68 and a 90.7% success rate on multi-object manipulation tasks, surpassing the best POMDP baseline by 10–14 percentage points, with sub-15 ms planning steps. The framework thus combines high efficiency, interpretability, and generalization.

📝 Abstract
Bridging continuous perceptual signals and discrete symbolic reasoning is a fundamental challenge in AI systems that must operate under uncertainty. We present a neuro-symbolic framework that explicitly models and propagates uncertainty from perception to planning, providing a principled connection between these two abstraction levels. Our approach couples a transformer-based perceptual front-end with graph neural network (GNN) relational reasoning to extract probabilistic symbolic states from visual observations, and an uncertainty-aware symbolic planner that actively gathers information when confidence is low. We demonstrate the framework's effectiveness on tabletop robotic manipulation as a concrete application: the translator processes 10,047 PyBullet-generated scenes (3–10 objects) and outputs probabilistic predicates with calibrated confidences (overall F1=0.68). When embedded in the planner, the system achieves 94%/90%/88% success on Simple Stack, Deep Stack, and Clear+Stack benchmarks (90.7% average), exceeding the strongest POMDP baseline by 10–14 points while planning within 15 ms. A probabilistic graphical-model analysis establishes a quantitative link between calibrated uncertainty and planning convergence, providing theoretical guarantees that are validated empirically. The framework is general-purpose and can be applied to any domain requiring uncertainty-aware reasoning from perceptual input to symbolic planning.
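The abstract's idea of "probabilistic predicates with calibrated confidences" can be illustrated with a minimal sketch. The paper does not publish its calibration method; the snippet below assumes post-hoc temperature scaling (a common calibration technique) applied to per-predicate logits, with the temperature value and all predicate names being hypothetical:

```python
import math

def calibrate(logit, temperature=1.5):
    # Hypothetical temperature scaling: soften the logit before the
    # sigmoid so confidences better match empirical accuracy. The
    # temperature T=1.5 here is an assumption, not a paper value.
    return 1.0 / (1.0 + math.exp(-logit / temperature))

def extract_probabilistic_state(raw_logits, temperature=1.5):
    """Map raw per-predicate logits (e.g. from a perception head)
    to a probabilistic symbolic state: {predicate: confidence}."""
    return {pred: calibrate(z, temperature) for pred, z in raw_logits.items()}

# Toy scene with grounded predicates over objects A and B.
logits = {"on(A, B)": 2.4, "clear(A)": 3.1, "on(B, A)": -2.8}
state = extract_probabilistic_state(logits)
for pred, p in sorted(state.items()):
    print(f"{pred}: {p:.2f}")
```

The planner then consumes this dictionary of calibrated probabilities rather than hard true/false predicates, which is what allows uncertainty to propagate from perception into planning.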
Problem

Research questions and friction points this paper is trying to address.

Bridging continuous perception and discrete symbolic reasoning under uncertainty
Modeling uncertainty propagation from perception to planning systems
Developing probabilistic symbolic states from visual observations for planning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer-based perception extracts probabilistic symbolic states
Uncertainty-aware planner actively gathers information when needed
Probabilistic graphical model links calibrated uncertainty to planning
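The second innovation, a planner that "actively gathers information when needed," can be sketched as a simple decision rule. This is an illustrative assumption about how such a planner might behave, not the paper's algorithm: act only when every precondition is believed with confidence above a threshold, otherwise observe the most uncertain predicate (the one nearest p=0.5, i.e. maximum Bernoulli entropy). The threshold and predicate names are hypothetical:

```python
def plan_step(belief, preconditions, threshold=0.7):
    """Return ("act", None) when all preconditions clear the confidence
    threshold, else ("observe", predicate) targeting the most uncertain
    precondition so an information-gathering action can resolve it."""
    uncertain = [p for p in preconditions if belief.get(p, 0.0) < threshold]
    if not uncertain:
        return ("act", None)
    # Observe the predicate closest to p=0.5 (highest entropy first).
    target = min(uncertain, key=lambda p: abs(belief.get(p, 0.0) - 0.5))
    return ("observe", target)

belief = {"clear(A)": 0.92, "on(A, B)": 0.55}
print(plan_step(belief, ["clear(A)", "on(A, B)"]))  # → ('observe', 'on(A, B)')
```

Under this kind of rule, low perceptual confidence directly triggers extra sensing actions, which is the mechanism the paper's theoretical analysis connects to planning convergence.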
Jiahao Wu
The Chinese University of Hong Kong
Medical Robots · Robot-assisted Microsurgery · Motion Planning
Shengwen Yu
Guangzhou College of Commerce, Guangzhou, Guangdong, China