AtomicVLA: Unlocking the Potential of Atomic Skill Learning in Robots

📅 2026-03-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-language-action (VLA) models struggle with long-horizon, multi-step robotic tasks because they rely on a single monolithic action decoder, which limits scalability and continual acquisition of new skills. This work proposes AtomicVLA, a framework that jointly models task planning, atomic skill abstraction, and fine-grained action generation to build an extensible library of atomic skills for decomposing and executing complex tasks. Its core innovations are a skill-guided mixture-of-experts (SG-MoE) mechanism and an auto-routing encoder, which together enable dynamic expert allocation and support continual learning and composition of atomic skills. Experiments show that AtomicVLA significantly outperforms baseline methods on the LIBERO, LIBERO-LONG, and CALVIN simulation benchmarks, and surpasses baselines by 18.3% and 21% in real-world long-horizon tasks and continual learning scenarios, respectively.

📝 Abstract
Recent advances in Visual-Language-Action (VLA) models have shown promising potential for robotic manipulation tasks. However, real-world robotic tasks often involve long-horizon, multi-step problem-solving and require generalization for continual skill acquisition, extending beyond single actions or skills. These challenges present significant barriers for existing VLA models, which use monolithic action decoders trained on aggregated data, resulting in poor scalability. To address these challenges, we propose AtomicVLA, a unified planning-and-execution framework that jointly generates task-level plans, atomic skill abstractions, and fine-grained actions. AtomicVLA constructs a scalable atomic skill library through a Skill-Guided Mixture-of-Experts (SG-MoE), where each expert specializes in mastering generic yet precise atomic skills. Furthermore, we introduce a flexible routing encoder that automatically assigns dedicated atomic experts to new skills, enabling continual learning. We validate our approach through extensive experiments. In simulation, AtomicVLA outperforms π0 by 2.4% on LIBERO, 10% on LIBERO-LONG, and outperforms π0 and π0.5 by 0.22 and 0.25 in average task length on CALVIN. Additionally, our AtomicVLA consistently surpasses baselines by 18.3% and 21% in real-world long-horizon tasks and continual learning. These results highlight the effectiveness of atomic skill abstraction and dynamic expert composition for long-horizon and lifelong robotic tasks. The project page is at https://zhanglk9.github.io/atomicvla-web/.
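The abstract's two key mechanisms — a skill-guided mixture-of-experts over atomic skills, and a routing encoder that assigns a dedicated expert to each new skill — can be illustrated schematically. The sketch below is a toy NumPy illustration of the general idea, not the paper's implementation: the class name, the linear expert heads, and the key-matching router are all assumptions made for the example.

```python
import numpy as np

class SkillGuidedMoE:
    """Toy sketch of a skill-guided mixture-of-experts (assumed structure):
    a routing encoder scores a skill embedding against per-expert keys,
    and actions are a weighted mixture of expert outputs."""

    def __init__(self, dim: int, n_experts: int):
        rng = np.random.default_rng(0)
        self.dim = dim
        # Each expert is a simple linear "action head" (placeholder for a
        # real action decoder specialized to one atomic skill).
        self.experts = [rng.standard_normal((dim, dim)) for _ in range(n_experts)]
        # Routing keys: one learned vector per expert.
        self.keys = rng.standard_normal((n_experts, dim))

    def route(self, skill_emb: np.ndarray) -> np.ndarray:
        # Softmax over expert scores -> mixture weights.
        logits = self.keys @ skill_emb
        w = np.exp(logits - logits.max())
        return w / w.sum()

    def act(self, skill_emb: np.ndarray, obs: np.ndarray) -> np.ndarray:
        # Fine-grained action = weighted combination of expert outputs.
        w = self.route(skill_emb)
        return sum(wi * (E @ obs) for wi, E in zip(w, self.experts))

    def add_expert(self, key: np.ndarray) -> None:
        """Continual learning: grow the skill library by allocating a
        dedicated expert (and routing key) for a newly acquired skill."""
        rng = np.random.default_rng(len(self.experts))
        self.experts.append(rng.standard_normal((self.dim, self.dim)))
        self.keys = np.vstack([self.keys, key])

moe = SkillGuidedMoE(dim=4, n_experts=3)
action = moe.act(np.ones(4), np.ones(4))   # mixture action for one skill
moe.add_expert(np.ones(4))                 # new skill -> new dedicated expert
```

The design choice illustrated here is that experts are added rather than retrained, which is how an MoE library can keep acquiring skills without overwriting old ones; the paper's actual expert architecture and routing objective are not specified in this excerpt.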
Problem

Research questions and friction points this paper is trying to address.

Visual-Language-Action models
long-horizon tasks
continual skill acquisition
scalability
robotic manipulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Atomic Skill Abstraction
Mixture-of-Experts
Continual Learning
Long-horizon Robotic Tasks
Visual-Language-Action Models