CORAL: Scalable Multi-Task Robot Learning via LoRA Experts

📅 2026-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of negative transfer, performance degradation due to gradient conflicts, and high storage overhead from deploying separate models in multitask robotic learning. The authors propose a fully parameter-isolated multitask architecture that freezes a pretrained vision–language–action (VLA) backbone and equips each task with a lightweight LoRA expert module. A dynamic inference engine, termed the CORAL Manager, routes inputs to the appropriate expert in real time based on task instructions, enabling zero-overhead task switching during inference. This approach eliminates inter-task interference and catastrophic forgetting, supports continual task expansion without retraining the backbone, and is inherently suited for lifelong learning. Evaluated on the real-world Galaxea R1 platform and three simulation benchmarks—LIBERO, WidowX, and Google Robot—the method significantly outperforms joint training baselines and effectively resolves instruction ambiguity, enabling scalable multitask robot learning.
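The core mechanism described above — a frozen shared backbone, one low-rank adapter per task, and an instruction-driven router — can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the class names (`LoRAExpert`, `CoralManager`), the keyword-matching router, and the linear-layer backbone are all assumptions standing in for details the summary does not give.

```python
import numpy as np

class LoRAExpert:
    """Per-task low-rank adapter: delta_W = B @ A with rank r << d.
    B is zero-initialized, so a freshly added expert leaves the
    frozen backbone's output unchanged (standard LoRA convention)."""
    def __init__(self, d_in, d_out, rank=4, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.normal(scale=0.01, size=(rank, d_in))  # down-projection
        self.B = np.zeros((d_out, rank))                    # up-projection

    def delta(self, x):
        return self.B @ (self.A @ x)

class CoralManager:
    """Routes each instruction to one expert over a frozen backbone.
    Routing here is naive substring matching -- a stand-in for the
    paper's instruction-based routing, whose details are not given."""
    def __init__(self, W_backbone):
        self.W = W_backbone       # frozen pretrained weight (never updated)
        self.experts = {}

    def add_task(self, name, expert):
        # Continual expansion: new experts are added without touching
        # W or any existing expert, so prior tasks cannot be overwritten.
        self.experts[name] = expert

    def forward(self, instruction, x):
        task = next((t for t in self.experts if t in instruction), None)
        y = self.W @ x            # frozen backbone path
        if task is not None:
            y = y + self.experts[task].delta(x)  # additive LoRA update
        return task, y

# Hypothetical usage: two tasks, strict parameter isolation between them.
mgr = CoralManager(np.eye(3))
mgr.add_task("pick", LoRAExpert(3, 3))
mgr.add_task("wipe", LoRAExpert(3, 3))
task, y = mgr.forward("pick up the red cup", np.ones(3))
```

Because each task owns disjoint parameters, "task switching" is just a dictionary lookup of the active adapter, which is the sense in which routing adds no inference overhead.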

📝 Abstract
Deploying Vision-Language-Action (VLA) models in real-world robotics exposes a core challenge: task interference in multi-task learning. When multiple tasks are jointly fine-tuned in a single stage, gradients from different tasks can conflict, causing negative transfer and reducing per-task performance. Yet maintaining a separate full checkpoint per task is often storage- and deployment-prohibitive. To address this dilemma, we present CORAL, a backbone- and embodiment-agnostic framework designed primarily to mitigate multi-task interference while remaining naturally extensible to a continuous stream of new tasks. CORAL freezes a single pre-trained VLA backbone and attaches one lightweight Low-Rank Adaptation (LoRA) expert per task; at runtime, a dynamic inference engine (the CORAL Manager) routes language instructions to the appropriate expert and swaps experts on the fly with zero inference overhead. This strict parameter isolation avoids complex gating networks and prevents parameter-level cross-task interference by construction; as an added capability, it also enables sequentially introducing new tasks without the parameter overwriting that causes catastrophic forgetting. We validate CORAL on a real-world Galaxea R1 dual-arm mobile manipulator and three simulation benchmarks (LIBERO, WidowX, Google Robot), where CORAL overcomes fine-grained instructional ambiguity and substantially outperforms joint training, yielding a practical and scalable system for lifelong multi-task robot learning. Website: https://frontierrobo.github.io/CORAL
Problem

Research questions and friction points this paper is trying to address.

multi-task learning
task interference
robot learning
negative transfer
catastrophic forgetting
Innovation

Methods, ideas, or system contributions that make the work stand out.

LoRA Experts
Multi-Task Robot Learning
Parameter Isolation
Dynamic Inference Routing
Lifelong Learning