ACTLLM: Action Consistency Tuned Large Language Model

πŸ“… 2025-06-26
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Traditional vision-based robotic systems struggle to simultaneously execute tasks and perform spatial reasoning in dynamic environments, which limits their adaptability. This paper proposes a vision-language collaborative framework for dynamic manipulation tasks. First, it leverages large language models to generate structured, scene-level linguistic descriptions that jointly support spatial understanding and action planning. Second, it reformulates the Markov decision process as a multi-turn visual dialogue, enabling long-horizon decision-making with context drawn from the task's execution history. Third, it introduces an action consistency constraint that aligns perception with behavior via joint vision-language representation learning and action regularization. Experiments on diverse, challenging visual manipulation benchmarks show substantial gains in generalization and contextual coherence, demonstrating the framework's adaptability and robustness in dynamic environments.
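The multi-turn reformulation can be made concrete with a short sketch. Everything below is illustrative: the gym-style env interface and the helper names describe_scene, policy_llm, and parse_action are assumptions, not the paper's actual API.

```python
def run_episode(env, policy_llm, describe_scene, parse_action, max_turns=20):
    """Cast one manipulation episode as a multi-turn visual dialogue.

    Each MDP step becomes a dialogue turn; the accumulated transcript of
    (scene description, action) pairs gives the LLM long-horizon context.
    """
    dialogue = []                                  # running multi-turn history
    obs = env.reset()
    for _ in range(max_turns):
        scene_text = describe_scene(obs)           # structured scene descriptor
        dialogue.append({"role": "user", "content": scene_text})
        reply = policy_llm(dialogue)               # policy conditions on full history
        dialogue.append({"role": "assistant", "content": reply})
        action = parse_action(reply)               # text -> executable action
        obs, reward, done, info = env.step(action)
        if done:
            break
    return dialogue
```

Because the transcript persists across turns, earlier observations and actions remain available when the model chooses the next action, which is what the summary credits for the improved contextual coherence.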

πŸ“ Abstract
This paper introduces ACTLLM (Action Consistency Tuned Large Language Model), a novel approach for robot manipulation in dynamic environments. Traditional vision-based systems often struggle to learn visual representations that excel in both task execution and spatial reasoning, thereby limiting their adaptability in dynamic environments. ACTLLM addresses these challenges by harnessing language to craft structured scene descriptors, providing a uniform interface for both spatial understanding and task performance through flexible language instructions. Moreover, we introduce a novel action consistency constraint that aligns visual perception with corresponding actions, thereby enhancing the learning of actionable visual representations. Additionally, we have reformulated the Markov decision process for manipulation tasks into a multi-turn visual dialogue framework. This approach enables the modeling of long-term task execution with enhanced contextual relevance derived from the history of task execution. During our evaluation, ACTLLM excels in diverse scenarios, proving its effectiveness on challenging vision-based robot manipulation tasks.
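The abstract does not spell out the exact form of the action consistency constraint, so the following is only one plausible reading, sketched under stated assumptions: visual features of consecutive observations, conditioned on the executed action, are regularized to predict the post-action features, keeping perception aligned with behavior. The modules visual_encoder and dynamics_head are hypothetical.

```python
import torch.nn.functional as F

def action_consistency_loss(visual_encoder, dynamics_head, obs_t, obs_t1, action):
    """Hypothetical consistency term: action-conditioned feature prediction."""
    z_t = visual_encoder(obs_t)            # features before the action
    z_t1 = visual_encoder(obs_t1)          # features after the action
    z_pred = dynamics_head(z_t, action)    # predict the effect of the action
    # Stop-gradient on the target so the constraint shapes z_t rather than
    # letting both sides collapse to a trivial solution.
    return F.mse_loss(z_pred, z_t1.detach())
```

In practice such a term would be added to the main training objective with a weighting coefficient; whether ACTLLM uses feature prediction, contrastive alignment, or another regularizer is not stated in this listing.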
Problem

Research questions and friction points this paper is trying to address.

Improving robot manipulation in dynamic environments
Aligning visual perception with actionable representations
Enhancing long-term task execution via visual dialogue
Innovation

Methods, ideas, or system contributions that make the work stand out.

Language-based structured scene descriptors (see the sketch after this list)
Action consistency constraint for perception
Multi-turn visual dialogue framework
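To illustrate the first item above, a structured, scene-level descriptor could take a form like the following; the schema is an assumption for illustration, since the listing does not specify the paper's actual format.

```python
# Hypothetical structured scene descriptor emitted by the language model.
scene_descriptor = {
    "objects": [
        {"name": "red_cube",  "position": [0.32, 0.10, 0.02], "state": "on_table"},
        {"name": "blue_bowl", "position": [0.45, -0.05, 0.02], "state": "empty"},
    ],
    "spatial_relations": ["red_cube left_of blue_bowl"],
    "goal": "place red_cube inside blue_bowl",
    "next_action": "pick(red_cube)",
}
```

A single textual schema like this provides one uniform interface: the same description supports spatial queries ("what is left of the bowl?") and action planning ("pick, then place").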