Human-Allied Relational Reinforcement Learning

📅 2025-10-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing relational reinforcement learning (RRL) relies on strong structural assumptions, limiting its applicability to both structured and unstructured tasks, and fails to actively incorporate human expert knowledge. To address this, we propose a novel framework integrating RRL with object-centric representations, enabling human-in-the-loop active querying via explicit modeling of policy uncertainty—thereby dynamically acquiring high-value guidance in environments with unknown structure. Our key contributions are twofold: (i) the first coupling of RRL with object-level representations, and (ii) the introduction of an uncertainty-driven active learning mechanism for expert interaction. Experiments across diverse complex tasks demonstrate substantial improvements in generalization and sample efficiency, reducing required human interactions by 37%–52%. The framework establishes a new paradigm for human-AI collaborative RL in hybrid structured–unstructured settings.

Technology Category

Application Category

📝 Abstract
Reinforcement learning (RL) has experienced a second wind in the past decade. While incredibly successful in images and videos, these systems still operate within the realm of propositional tasks ignoring the inherent structure that exists in the problem. Consequently, relational extensions (RRL) have been developed for such structured problems that allow for effective generalization to arbitrary number of objects. However, they inherently make strong assumptions about the problem structure. We introduce a novel framework that combines RRL with object-centric representation to handle both structured and unstructured data. We enhance learning by allowing the system to actively query the human expert for guidance by explicitly modeling the uncertainty over the policy. Our empirical evaluation demonstrates the effectiveness and efficiency of our proposed approach.
Problem

Research questions and friction points this paper is trying to address.

Addresses limitations of relational reinforcement learning in structured problems
Combines object-centric representations with relational learning for diverse data types
Incorporates human expert guidance through explicit policy uncertainty modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines relational reinforcement learning with object-centric representation
Enhances learning by querying human experts for guidance
Models policy uncertainty to improve decision-making efficiency
🔎 Similar Papers
No similar papers found.