Understanding Physical Properties of Unseen Deformable Objects by Leveraging Large Language Models and Robot Actions

📅 2025-06-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of understanding physical properties of unknown deformable objects. Conventional approaches, constrained by closed-world assumptions, exhibit poor generalization to unseen objects. To overcome this limitation, we propose a large language model (LLM)-driven embodied probing framework that tightly integrates LLM-based reasoning with robotic interaction—establishing a closed loop of *action generation → physical feedback → symbolic reasoning*. Specifically, the LLM generates exploratory actions (e.g., folding, bending) grounded in prior knowledge; a robot executes them and observes resultant deformations; the system then symbolically models intrinsic properties—such as foldability and bendability—and leverages these representations for attribute-driven task planning (e.g., bin-packing). Experiments demonstrate high-accuracy physical property identification across diverse unseen deformable objects, significantly improving downstream task success rates and cross-object generalization capability. To our knowledge, this is the first work to deeply integrate LLMs with embodied robotic probing for open-world physical property understanding.

📝 Abstract
In this paper, we consider the problem of understanding the physical properties of unseen objects through interactions between the objects and a robot. Handling unseen objects with special properties such as deformability is challenging for traditional task and motion planning approaches, as they often rely on the closed-world assumption. Recent results in Large Language Model (LLM)-based task planning have shown the ability to reason about unseen objects. However, most studies assume rigid objects, overlooking their physical properties. We propose an LLM-based method for probing the physical properties of unseen deformable objects for the purpose of task planning. For a given set of object properties (e.g., foldability, bendability), our method uses robot actions to determine the properties by interacting with the objects. Based on the properties examined by the LLM and robot actions, the LLM generates a task plan for a specific domain such as object packing. In the experiment, we show that the proposed method can identify properties of deformable objects, which are further used for a bin-packing task where the properties play a crucial role in success.
Problem

Research questions and friction points this paper is trying to address.

Understanding physical properties of unseen deformable objects using LLMs and robot interactions
Overcoming traditional planning limitations for deformable objects via LLM-based property probing
Generating task plans (e.g., bin-packing) by identifying object properties like foldability
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based probing of deformable object properties
Robot actions determine object physical traits
LLM generates task plans using examined properties
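The closed loop described above (LLM-proposed probing action → robot execution → symbolic property → attribute-driven plan) can be sketched as follows. This is a minimal illustrative sketch only: the function names, the toy object model, and the "pack foldable items first" heuristic are assumptions for demonstration, not the authors' actual system or API.

```python
# Hypothetical sketch of the probing loop from the paper's summary.
# An LLM proposes an exploratory action per property, a robot executes it,
# and the observed deformation is recorded symbolically for planning.
# All names and the toy object model are illustrative assumptions.

PROPERTIES = ["foldability", "bendability"]

def llm_propose_action(prop):
    # Stand-in for an LLM call mapping a property to a probing action
    # (e.g., foldability -> "fold"), grounded in prior knowledge.
    return {"foldability": "fold", "bendability": "bend"}[prop]

def robot_execute(action, obj):
    # Stand-in for real robot execution and deformation observation;
    # here we simply look the outcome up in a toy object model.
    return obj["responds_to"].get(action, False)

def probe_object(obj):
    """Return a symbolic property dict for one unseen object."""
    symbolic = {}
    for prop in PROPERTIES:
        action = llm_propose_action(prop)
        symbolic[prop] = robot_execute(action, obj)
    return symbolic

def plan_packing(objects):
    """Toy attribute-driven planner: pack foldable items first."""
    probed = [(o["name"], probe_object(o)) for o in objects]
    order = sorted(probed, key=lambda p: not p[1]["foldability"])
    return [name for name, _ in order]

towel = {"name": "towel", "responds_to": {"fold": True, "bend": True}}
wire = {"name": "wire", "responds_to": {"bend": True}}
print(plan_packing([towel, wire]))  # foldable towel is packed first
```

In the real system, `robot_execute` would involve physical manipulation and perception of the resulting deformation, and the planner would be the LLM itself reasoning over the probed symbolic properties.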
Changmin Park
Dept. of Electronic Engineering, Sogang University, Baekbeom-ro, Mapo-gu, Seoul, 04107, South Korea
Beomjoon Lee
Sogang University
Robotics
Haechan Jung
Dept. of Electronic Engineering, Sogang University, Baekbeom-ro, Mapo-gu, Seoul, 04107, South Korea
Haejin Jung
Dept. of Mechanical Engineering, Korea Aerospace University, Hanggongdaehak-ro, Deogyang-gu, Goyang, 10540, South Korea
Changjoo Nam
Associate Professor, Sogang University
Multi-Robot Systems · Task and Motion Planning · Manipulation