In-situ Value-aligned Human-Robot Interactions with Physical Constraints

📅 2025-08-11
🤖 AI Summary
This work addresses the challenge of enabling human-centered, LLM-driven robots to continuously learn and generalize human preferences while respecting physical constraints. We propose ICLHF—a framework that integrates explicit and implicit human feedback via in-context learning (ICL) to dynamically align learned values with physical feasibility. ICLHF tightly couples large language models, multi-source human feedback mechanisms, and task planning, enabling preference-aware, robust decision-making in domestic environments. Experiments demonstrate that ICLHF improves human-preference compliance in task planning by +23.6% and maintains a 92.4% execution success rate under multiple physical constraints. Crucially, it achieves, for the first time, continuous preference learning from everyday human feedback and cross-task preference transfer—establishing a scalable, lightweight paradigm for value-aligned robotics.

📝 Abstract
Equipped with Large Language Models (LLMs), human-centered robots are now capable of performing a wide range of tasks that were previously deemed challenging or unattainable. However, merely completing tasks is insufficient for cognitive robots, which should learn human preferences and apply them to future scenarios. In this work, we propose a framework that combines human preferences with physical constraints, requiring robots to complete tasks while accounting for both. First, we developed a benchmark of everyday household activities, which are often evaluated against specific preferences. We then introduced In-Context Learning from Human Feedback (ICLHF), where human feedback comes from direct instructions and from adjustments made, intentionally or unintentionally, in daily life. Extensive experiments, in which ICLHF generates task plans and balances physical constraints against preferences, demonstrate the effectiveness of our approach.
Problem

Research questions and friction points this paper is trying to address.

Aligning robot actions with human preferences during interactions
Integrating physical constraints with human feedback in robotics
Developing benchmarks for household tasks with preference-based evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines human preferences with physical constraints
Uses In-Context Learning from Human Feedback
Benchmarks everyday household activities
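The core idea behind these contributions can be illustrated with a minimal sketch: prior human-feedback episodes are packed into an in-context prompt for an LLM planner, and candidate plan steps are filtered against a physical-feasibility check. All names here (`FeedbackExample`, `build_icl_prompt`, `filter_feasible`, the predicate) are hypothetical illustrations, not the paper's actual API.

```python
from dataclasses import dataclass


@dataclass
class FeedbackExample:
    """One feedback episode (hypothetical structure): a task, the plan the
    robot proposed, and the human's correction, whether an explicit
    instruction or an adjustment observed in daily life."""
    task: str
    proposed_plan: list
    correction: str


def build_icl_prompt(examples, new_task, constraints):
    """Assemble an in-context prompt: past feedback episodes serve as
    demonstrations, followed by the new task and the physical constraints
    the generated plan must respect."""
    lines = ["You are a household robot task planner."]
    for ex in examples:
        lines.append(f"Task: {ex.task}")
        lines.append(f"Plan: {'; '.join(ex.proposed_plan)}")
        lines.append(f"Human feedback: {ex.correction}")
    lines.append(f"New task: {new_task}")
    lines.append("Physical constraints: " + "; ".join(constraints))
    lines.append("Produce a plan consistent with the feedback and constraints.")
    return "\n".join(lines)


def filter_feasible(plan_steps, is_feasible):
    """Keep only plan steps that pass a physical-feasibility predicate
    (e.g. a reachability or payload check from a motion planner)."""
    return [step for step in plan_steps if is_feasible(step)]


# Usage: one feedback episode conditions the prompt for a related new task.
prompt = build_icl_prompt(
    [FeedbackExample("set the table",
                     ["place fork on the left"],
                     "I prefer the fork on the right")],
    "set the table for two",
    ["max payload 1 kg", "keep objects within arm reach"],
)
plan = filter_feasible(["lift the sofa", "move the cup"],
                       lambda s: "sofa" not in s)  # sofa exceeds payload
```

The sketch separates preference alignment (the prompt) from physical feasibility (the filter), mirroring the paper's coupling of LLM planning with constraint checking, though the real system would close the loop by re-planning when steps are rejected.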
Hongtao Li
College of Artificial Intelligence and Automation, Hohai University, Changzhou, China
Ziyuan Jiao
UCLA
Robotics · Task and Motion Planning · Mobile Manipulation · Robotic Manipulation
Xiaofeng Liu
College of Artificial Intelligence and Automation, Hohai University, Changzhou, China
Hangxin Liu
Beijing Institute for General Artificial Intelligence (BIGAI)
Robotics · Localization · Sensors
Zilong Zheng
State Key Laboratory of General Artificial Intelligence, BIGAI, Beijing, China