OmniVIC: A Self-Improving Variable Impedance Controller with Vision-Language In-Context Learning for Safe Robotic Manipulation

📅 2025-10-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Conventional variable impedance controllers (VICs) generalize poorly in unstructured contact tasks and cannot guarantee safe physical interaction. Method: This paper proposes a vision-language-model-driven universal variable impedance control framework, the first to integrate retrieval-augmented generation (RAG) and in-context learning (ICL) into VIC, enabling semantic-aware, online generation of adaptive impedance parameters. By coupling a structured experience memory bank with real-time force/torque feedback, the framework forms a closed-loop "perception–reasoning–control" architecture. Results: Evaluated on both simulated and real robotic platforms, the method raises the average task success rate from 27% to 61.4% and significantly reduces force violations, demonstrating marked improvements in cross-task generalization, safety, and robustness.

📝 Abstract
We present OmniVIC, a universal variable impedance controller (VIC) enhanced by a vision language model (VLM), which improves safety and adaptation in contact-rich robotic manipulation tasks. Traditional VICs have shown advantages when the robot physically interacts with the environment, but lack generalization to unseen, complex, and unstructured interactions in task scenarios involving contact or uncertainty. To this end, the proposed OmniVIC interprets task context by reasoning over images and natural language and generates adaptive impedance parameters for a VIC controller. Specifically, the core of OmniVIC is a self-improving Retrieval-Augmented Generation (RAG) and in-context learning (ICL) pipeline: RAG retrieves relevant prior experiences from a structured memory bank to inform the controller about similar past tasks, and ICL combines these retrieved examples with the prompt for the current task to query the VLM for context-aware, adaptive impedance parameters for the current manipulation scenario. Together, the self-improving RAG and ICL enable OmniVIC to operate across diverse task scenarios. The impedance parameter regulation is further informed by real-time force/torque feedback to ensure interaction forces remain within safe thresholds. We demonstrate that our method outperforms baselines on a suite of complex contact-rich tasks, both in simulation and on real robotic platforms, with improved success rates and reduced force violations. OmniVIC takes a step towards bridging high-level semantic reasoning and low-level compliant control, enabling safer and more generalizable manipulation. Overall, the average success rate increases from 27% (baseline) to 61.4% (OmniVIC).
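The abstract describes a VIC whose commanded wrench follows a stiffness/damping law, with gains regulated online and softened when force/torque feedback approaches a safety threshold. A minimal sketch of that idea is below; the function name, gain values, and the specific softening rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def variable_impedance_step(x, x_dot, x_des, xd_dot, K, D, f_ext,
                            f_max=15.0, soften=0.8):
    """One step of a Cartesian variable impedance law (illustrative sketch).

    Computes the commanded wrench F = K (x_des - x) + D (xd_dot - x_dot)
    and, mimicking a force-feedback safeguard, softens the stiffness
    whenever the measured external force exceeds a safety threshold.
    All gains and thresholds are hypothetical placeholders.
    """
    K = np.asarray(K, dtype=float)
    D = np.asarray(D, dtype=float)
    if np.linalg.norm(f_ext) > f_max:
        K = soften * K  # reduce stiffness to keep interaction forces safe
    F = K @ (x_des - x) + D @ (xd_dot - x_dot)
    return F, K
```

In a full controller, a VLM would supply the next K and D rather than a fixed schedule; this sketch only shows the low-level safeguard layer.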
Problem

Research questions and friction points this paper is trying to address.

Developing adaptive impedance control for safe robotic manipulation
Enhancing generalization in unseen contact-rich task scenarios
Bridging semantic reasoning with low-level compliant control
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-language model interprets task context for impedance control
Self-improving RAG retrieves prior experiences from memory bank
In-context learning generates adaptive impedance parameters using VLM
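The bullets above describe a retrieve-then-prompt loop: RAG pulls similar past experiences from a memory bank, and ICL packs them into a VLM query. A minimal sketch of that pipeline follows; the `Experience` record, the cosine-similarity retrieval, and the prompt format are all hypothetical assumptions, not the paper's actual memory schema or prompt.

```python
from dataclasses import dataclass

@dataclass
class Experience:
    task: str
    embedding: list   # task-description embedding (placeholder: raw vectors)
    stiffness: list   # impedance gains that worked for this task
    outcome: str

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def retrieve(memory, query_emb, k=2):
    """RAG step: return the k past experiences most similar to the query."""
    return sorted(memory, key=lambda e: cosine(e.embedding, query_emb),
                  reverse=True)[:k]

def build_icl_prompt(task_desc, retrieved):
    """ICL step: assemble a prompt with retrieved examples for the VLM."""
    lines = ["You tune impedance gains for a compliant manipulator."]
    for e in retrieved:
        lines.append(f"Example: task={e.task!r} gains={e.stiffness} "
                     f"outcome={e.outcome}")
    lines.append(f"Current task: {task_desc}. Propose stiffness gains.")
    return "\n".join(lines)
```

The self-improving aspect would correspond to appending each new (task, gains, outcome) tuple back into `memory` after execution, so later retrievals benefit from it.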