SELF-VLA: A Skill Enhanced Agentic Vision-Language-Action Framework for Contact-Rich Disassembly

📅 2026-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing robotic disassembly approaches rely heavily on task-specific data and struggle to handle the high variability, strong contact interactions, and long-horizon sequential operations inherent in end-of-life electronics disassembly, resulting in poor generalization and deployment challenges. This work proposes a vision–language–action (VLA) agent framework that integrates explicit disassembly skills, marking the first effort to embed structured disassembly knowledge into a VLA model. By incorporating modular skill design, contact-aware control, and sequential decision-making mechanisms, the approach overcomes the limitations of end-to-end VLA models in complex industrial disassembly tasks. Experiments demonstrate that the proposed method significantly outperforms state-of-the-art VLA models on two contact-intensive disassembly tasks, achieving substantial improvements in both generalization capability and practical applicability.
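The summary above describes a planner that sequences explicit, modular disassembly skills with contact-aware gating rather than emitting raw low-level actions end-to-end. The paper does not publish its implementation here, so the following is only a minimal illustrative sketch of that skill-dispatch pattern; all names (`SkillLibrary`, `unscrew`, `plan`, the force threshold) are hypothetical stand-ins, not the authors' API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

# Illustrative sketch (not the paper's code): a high-level planner maps a
# scene to a sequence of named skills; each skill checks contact state
# before acting, and the loop stops on failure for replanning.

@dataclass
class Observation:
    image_desc: str       # stand-in for camera input
    contact_force: float  # stand-in for a wrist force/torque reading

@dataclass
class SkillResult:
    success: bool
    log: str

SkillFn = Callable[[Observation], SkillResult]

class SkillLibrary:
    """Registry of modular skills the agent can invoke by name."""
    def __init__(self) -> None:
        self._skills: Dict[str, SkillFn] = {}

    def register(self, name: str, fn: SkillFn) -> None:
        self._skills[name] = fn

    def run(self, name: str, obs: Observation) -> SkillResult:
        return self._skills[name](obs)

def unscrew(obs: Observation) -> SkillResult:
    # Contact-aware gate: only turn once the tool is seated (hypothetical 1.0 N threshold).
    if obs.contact_force < 1.0:
        return SkillResult(False, "no contact; re-approach")
    return SkillResult(True, "screw removed")

def pry_open(obs: Observation) -> SkillResult:
    return SkillResult(True, "cover pried open")

def plan(obs: Observation) -> List[str]:
    # Stand-in for the VLA planner: choose a skill sequence from the scene description.
    if "screw" in obs.image_desc:
        return ["unscrew", "pry_open"]
    return ["pry_open"]

def run_episode(lib: SkillLibrary, obs: Observation) -> List[Tuple[str, bool, str]]:
    logs = []
    for skill_name in plan(obs):
        result = lib.run(skill_name, obs)
        logs.append((skill_name, result.success, result.log))
        if not result.success:
            break  # sequential decision-making: halt and replan on failure
    return logs
```

The point of the sketch is the structure, not the specific skills: long-horizon tasks become short skill sequences, and contact feedback gates each contact-rich step instead of being folded into one end-to-end policy.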

📝 Abstract
Disassembly automation has long been pursued to address the growing demand for efficient and proper recovery of valuable components from end-of-life (EoL) electronic products. Existing approaches have demonstrated promising, well-structured performance by decomposing the disassembly process into subtasks. However, each subtask typically requires extensive data preparation, model training, and system management. Moreover, these approaches are often task- and component-specific, making them poorly suited to the variability and uncertainty of EoL products and limiting their generalization. All of these factors restrict the practical deployment of current robotic disassembly systems and leave them heavily reliant on human labor. With the recent development of foundation models in robotics, vision-language-action (VLA) models have shown impressive performance on standard robotic manipulation tasks, but their applicability to complex, contact-rich, and long-horizon industrial tasks such as disassembly, which demand sequential and precise manipulation, remains limited. To address this challenge, we propose SELF-VLA, an agentic VLA framework that integrates explicit disassembly skills. Experimental studies demonstrate that our framework significantly outperforms current state-of-the-art end-to-end VLA models on two contact-rich disassembly tasks. A video illustration is available at https://zh.engr.tamu.edu/wp-content/uploads/sites/310/2026/03/IROS-VLA-Video.mp4.
Problem

Research questions and friction points this paper is trying to address.

disassembly automation
contact-rich manipulation
vision-language-action
generalization
end-of-life electronics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-Language-Action
Skill Integration
Contact-Rich Disassembly
Agentic Framework
Foundation Models