Confusion-Aware In-Context-Learning for Vision-Language Models in Robotic Manipulation

πŸ“… 2026-03-16
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the susceptibility of vision-language models in robotic manipulation to shortcut learning, which often leads to confusion between visually similar objects, undermining robustness and causing unpredictable failures. To mitigate this issue, the authors propose Confusion-Aware In-Context Learning (CAICL), a novel approach that integrates confusion analysis with in-context learning for the first time. CAICL identifies sources of confusion, dissects error-prone features, and incorporates these insights into prompt design to steer the model toward discriminative cues. The method establishes a confusion-aware learning framework tailored for robotic manipulation, achieving an 85.5% success rate on VIMA-Bench. It consistently alleviates shortcut learning across tasks of varying generalization difficulty, significantly enhancing the model’s ability to distinguish confusable objects and improving operational stability.

πŸ“ Abstract
Vision-language models (VLMs) have significantly improved the generalization capabilities of robotic manipulation. However, VLM-based systems often lack robustness, leading to unpredictable errors, particularly in scenarios involving confusable objects. Our preliminary analysis reveals that these failures are mainly caused by the shortcut learning problem inherent in VLMs, which limits their ability to accurately distinguish between confusable features. To this end, we propose Confusion-Aware In-Context Learning (CAICL), a method that enhances VLM performance in confusable scenarios for robotic manipulation. The approach begins with confusion localization and analysis, identifying potential sources of confusion. This information is then used as a prompt that directs the VLM to focus on the features most likely to cause misidentification. Extensive experiments on VIMA-Bench show that CAICL effectively addresses the shortcut learning issue, achieving an 85.5% success rate and maintaining stability across tasks with different degrees of generalization.
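The pipeline the abstract describes (locate confusable objects, then fold that analysis into the prompt) can be sketched as below. This is an illustrative toy, not the authors' implementation: the embedding values, the cosine-similarity threshold, and the function names are all assumptions made for the example.

```python
# Toy sketch of confusion-aware prompt construction (assumed design,
# not the paper's code): flag visually similar object pairs, then
# prepend an in-context hint steering the VLM toward discriminative cues.
from itertools import combinations


def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(b * b for b in v) ** 0.5
    return dot / (norm_u * norm_v)


def find_confusable_pairs(object_embeddings, threshold=0.9):
    """Confusion localization: flag object pairs whose visual
    embeddings are nearly identical (hypothetical heuristic)."""
    pairs = []
    for (name_a, emb_a), (name_b, emb_b) in combinations(
        object_embeddings.items(), 2
    ):
        if cosine(emb_a, emb_b) >= threshold:
            pairs.append((name_a, name_b))
    return pairs


def build_confusion_aware_prompt(instruction, confusable_pairs):
    """Prompt construction: prepend hints that point the VLM at
    discriminative features rather than shortcut cues."""
    if not confusable_pairs:
        return instruction
    hints = "; ".join(
        f"'{a}' and '{b}' look alike, so check shape and texture, not just color"
        for a, b in confusable_pairs
    )
    return f"Caution: {hints}.\nTask: {instruction}"


# Toy embeddings: the two mugs are nearly identical, the block is distinct.
embs = {
    "red mug": [0.9, 0.1, 0.0],
    "pink mug": [0.88, 0.15, 0.0],
    "blue block": [0.0, 0.2, 0.95],
}
pairs = find_confusable_pairs(embs)
print(build_confusion_aware_prompt("put the red mug on the blue block", pairs))
```

In this sketch only the mug pair crosses the similarity threshold, so the final prompt warns about that pair before stating the task; the real method derives such hints from its confusion localization and analysis stage rather than from raw embedding similarity.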
Problem

Research questions and friction points this paper is trying to address.

vision-language models
robotic manipulation
confusable objects
shortcut learning
robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Confusion-Aware In-Context Learning
Vision-Language Models
Shortcut Learning
Robotic Manipulation
Prompt Engineering
πŸ”Ž Similar Papers
No similar papers found.
Yayun He
Ping An Technology (Shenzhen) Co., Ltd., Shenzhen, China
Zuheng Kang
Ping An Technology (Shenzhen) Co., Ltd., Shenzhen, China
Botao Zhao
Ping An Technology (Shenzhen) Co., Ltd., Shenzhen, China
Zhouyin Wu
Shenzhen Bao’an Middle School, Shenzhen, China
Junqing Peng
Ping An Technology (Shenzhen) Co., Ltd., Shenzhen, China
Jianzong Wang
Postdoctoral Researcher, Department of Electrical and Computer Engineering, University of Florida
Big Data · Storage System · Cloud Computing