🤖 AI Summary
Autonomous agents struggle to make safe and compliant decisions in uncertain environments. Method: This paper proposes the Constitutional Controller (CoCo), a neuro-symbolic framework built on deep probabilistic logic programming that realizes a perception–reasoning–control closed loop. Its core innovation is a self-doubt mechanism that models real-time features such as velocity and sensor status as confidence-modulating signals, enabling dynamic rule validation under Bayesian inference and adaptive correction of reinforcement learning policies. Results: Evaluated in realistic air-traffic scenarios, CoCo strengthens agents' capacity for principled suspicion of anomalous states, improves rule adherence and decision safety, and remains reliable under noisy data and time-varying operational constraints.
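A minimal sketch of how such a doubt signal could modulate a learned policy, assuming a hand-set logistic model over velocity and sensor health (the weights, threshold, and action names are illustrative assumptions; CoCo instead learns a probability density over these doubt features):

```python
import math

def self_doubt(velocity: float, sensor_health: float) -> float:
    """Map doubt features to a doubt probability with a logistic model.

    The weights are hand-picked for illustration; CoCo learns a density
    over such features rather than fixing one.
    """
    z = 1.5 * velocity - 3.0 * sensor_health  # faster + degraded sensors -> more doubt
    return 1.0 / (1.0 + math.exp(-z))

def gated_action(policy_action: str, fallback_action: str,
                 velocity: float, sensor_health: float,
                 threshold: float = 0.5) -> str:
    """Fall back to a conservative action whenever doubt exceeds the threshold."""
    if self_doubt(velocity, sensor_health) > threshold:
        return fallback_action
    return policy_action

# Fast flight on a degraded sensor triggers the conservative fallback.
print(gated_action("continue", "hold_position", velocity=2.0, sensor_health=0.2))
```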
📝 Abstract
Ensuring reliable and rule-compliant behavior of autonomous agents in uncertain environments remains a fundamental challenge in modern robotics. Our work shows how neuro-symbolic systems, which integrate probabilistic, symbolic white-box reasoning models with deep learning methods, offer a powerful solution to this challenge. Such systems enable the simultaneous consideration of explicit rules and neural models trained on noisy data, combining the strengths of structured reasoning with flexible representations. To this end, we introduce the Constitutional Controller (CoCo), a novel framework designed to enhance the safety and reliability of agents by reasoning over deep probabilistic logic programs representing constraints such as those found in shared traffic spaces. Furthermore, we propose the concept of self-doubt, implemented as a probability density conditioned on doubt features such as travel velocity, employed sensors, or health factors. In a real-world aerial mobility study, we demonstrate how CoCo enables intelligent autonomous systems to learn appropriate doubts and to navigate complex, uncertain environments safely and compliantly.
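To make reasoning over probabilistic logic programs concrete, the sketch below encodes two toy traffic constraints with the ProbLog Python package. The rules, probabilities, and predicate names are our own illustration, not the constitution from the paper; a deep variant (in the spirit of DeepProbLog) would additionally let neural networks supply the fact probabilities:

```python
from problog import get_evaluatable
from problog.program import PrologString

# Toy constitution: proceeding is compliant only if the sensors are trusted
# and the current velocity respects the shared-airspace speed limit.
# (Hypothetical predicates and probabilities, for illustration only.)
model = PrologString("""
0.9::sensor_ok.
0.8::within_speed_limit.
safe_to_proceed :- sensor_ok, within_speed_limit.
query(safe_to_proceed).
""")

# Compile the program and evaluate the query probability exactly.
result = get_evaluatable().create_from(model).evaluate()
print(result)  # {safe_to_proceed: 0.72}, i.e. 0.9 * 0.8 for independent facts
```

An agent can then act on such query probabilities, with a learned self-doubt density shifting the fact probabilities as operating conditions change.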