🤖 AI Summary
Existing relational reinforcement learning (RRL) relies on strong structural assumptions, limiting its applicability to both structured and unstructured tasks, and fails to actively incorporate human expert knowledge. To address this, we propose a novel framework integrating RRL with object-centric representations, enabling human-in-the-loop active querying via explicit modeling of policy uncertainty—thereby dynamically acquiring high-value guidance in environments with unknown structure. Our key contributions are twofold: (i) the first coupling of RRL with object-level representations, and (ii) the introduction of an uncertainty-driven active learning mechanism for expert interaction. Experiments across diverse complex tasks demonstrate substantial improvements in generalization and sample efficiency, reducing required human interactions by 37%–52%. The framework establishes a new paradigm for human-AI collaborative RL in hybrid structured–unstructured settings.
📝 Abstract
Reinforcement learning (RL) has experienced a second wind in the past decade. While incredibly successful in images and videos, these systems still operate within the realm of propositional tasks ignoring the inherent structure that exists in the problem. Consequently, relational extensions (RRL) have been developed for such structured problems that allow for effective generalization to arbitrary number of objects. However, they inherently make strong assumptions about the problem structure. We introduce a novel framework that combines RRL with object-centric representation to handle both structured and unstructured data. We enhance learning by allowing the system to actively query the human expert for guidance by explicitly modeling the uncertainty over the policy. Our empirical evaluation demonstrates the effectiveness and efficiency of our proposed approach.