AI Summary
This work addresses the susceptibility of vision-language models in robotic manipulation to shortcut learning, which often leads to confusion between visually similar objects, undermining robustness and causing unpredictable failures. To mitigate this issue, the authors propose Confusion-Aware In-Context Learning (CAICL), a novel approach that integrates confusion analysis with in-context learning for the first time. CAICL identifies sources of confusion, dissects error-prone features, and incorporates these insights into prompt design to steer the model toward discriminative cues. The method establishes a confusion-aware learning framework tailored for robotic manipulation, achieving an 85.5% success rate on VIMA-Bench. It consistently alleviates shortcut learning across tasks of varying generalization difficulty, significantly enhancing the model's ability to distinguish confusable objects and improving operational stability.
Abstract
Vision-language models (VLMs) have significantly improved the generalization capabilities of robotic manipulation. However, VLM-based systems often suffer from a lack of robustness, leading to unpredictable errors, particularly in scenarios involving confusable objects. Our preliminary analysis reveals that these failures are mainly caused by the shortcut learning problem inherent in VLMs, which limits their ability to accurately distinguish between confusable features. To address this, we propose Confusion-Aware In-Context Learning (CAICL), a method that enhances VLM performance in confusable scenarios for robotic manipulation. The approach begins with confusion localization and analysis, identifying potential sources of confusion. This information is then used as a prompt for the VLM to focus on the features most likely to cause misidentification. Extensive experiments on VIMA-Bench show that CAICL effectively addresses the shortcut learning issue, achieving an 85.5% success rate and maintaining stability across tasks with different degrees of generalization.
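The abstract's pipeline, localize sources of confusion, then inject that analysis into the prompt so the VLM attends to discriminative rather than shortcut features, can be illustrated with a minimal sketch. This is not the authors' implementation; the names (`ConfusionRecord`, `build_caicl_prompt`) and the prompt wording are hypothetical, and the confusion records are assumed to come from a prior confusion-analysis stage.

```python
# Illustrative sketch of confusion-aware prompt construction (hypothetical
# names; not the authors' code). A ConfusionRecord captures one output of
# the confusion-localization step described in the abstract.

from dataclasses import dataclass


@dataclass
class ConfusionRecord:
    """One identified source of confusion between two similar objects."""
    target: str              # object the instruction refers to
    distractor: str          # visually similar object the model confuses it with
    shortcut_feature: str    # feature the VLM over-relies on (e.g., "color")
    discriminative_cue: str  # feature that actually separates the two


def build_caicl_prompt(instruction: str, records: list[ConfusionRecord]) -> str:
    """Prepend confusion-analysis hints to the task instruction so the VLM
    is steered toward discriminative cues instead of shortcut features."""
    hints = [
        f"- '{r.target}' and '{r.distractor}' look alike; do not rely on "
        f"{r.shortcut_feature}, distinguish them by {r.discriminative_cue}."
        for r in records
    ]
    return "Known confusions:\n" + "\n".join(hints) + f"\n\nTask: {instruction}"


prompt = build_caicl_prompt(
    "Put the red mug on the tray.",
    [ConfusionRecord("red mug", "red bowl", "color alone", "handle shape")],
)
print(prompt)
```

In a full system, the returned string would form the in-context portion of the VLM's input alongside the scene observation.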