BagelVLA: Enhancing Long-Horizon Manipulation via Interleaved Vision-Language-Action Generation

📅 2026-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-language-action (VLA) models struggle to integrate language-based planning with visual prediction in complex, long-horizon manipulation tasks, which limits their action-generation performance. This work proposes BagelVLA, which, for the first time, interleaves language reasoning and visual prediction within the action generation loop, and introduces a Residual Flow Guidance (RFG) mechanism for low-latency, high-precision multimodal coordination. Built on a unified pre-trained understanding-generation architecture, the framework pairs a multimodal interleaved generation strategy with single-step-denoising visual feature extraction. BagelVLA significantly outperforms current methods across multiple simulated and real-world long-horizon manipulation benchmarks, with particularly strong gains on multi-stage reasoning tasks.
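To make the interleaving concrete, here is a minimal Python sketch of what such a control loop could look like. This is not the paper's implementation: the method names (`reason`, `predict_visual`, `generate_actions`) and the Gym-style `env` interface are hypothetical placeholders for the three capabilities the summary describes.

```python
from typing import Protocol

import torch


class BagelVLALike(Protocol):
    """Assumed interface; the method names are illustrative, not from the paper."""

    def reason(self, obs: torch.Tensor, instruction: str) -> str: ...
    def predict_visual(self, obs: torch.Tensor, subgoal: str) -> torch.Tensor: ...
    def generate_actions(self, obs: torch.Tensor, goal_feats: torch.Tensor) -> torch.Tensor: ...


def interleaved_rollout(model: BagelVLALike, env, instruction: str, max_stages: int = 8):
    """Alternate textual reasoning, visual prediction, and action generation
    within a single execution loop, re-planning at every stage boundary."""
    obs = env.reset()                                      # Gym-style env (assumed)
    for _ in range(max_stages):
        subgoal = model.reason(obs, instruction)           # language planning step
        goal_feats = model.predict_visual(obs, subgoal)    # visual forecast of the subgoal
        actions = model.generate_actions(obs, goal_feats)  # action chunk conditioned on both
        obs, done = env.step(actions)                      # execute in sim or on the robot
        if done:
            break
    return obs
```

The point of the sketch is the ordering: language reasoning and visual prediction sit inside the action loop rather than running once up front, which is what distinguishes interleaved generation from pipeline-style plan-then-act baselines.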

📝 Abstract
Equipping embodied agents with the ability to reason about tasks, foresee physical outcomes, and generate precise actions is essential for general-purpose manipulation. While recent Vision-Language-Action (VLA) models have leveraged pre-trained foundation models, they typically focus on either linguistic planning or visual forecasting in isolation. These methods rarely integrate both capabilities simultaneously to guide action generation, leading to suboptimal performance in complex, long-horizon manipulation tasks. To bridge this gap, we propose BagelVLA, a unified model that integrates linguistic planning, visual forecasting, and action generation within a single framework. Initialized from a pre-trained unified understanding and generation model, BagelVLA is trained to interleave textual reasoning and visual prediction directly within the action execution loop. To efficiently couple these modalities, we introduce Residual Flow Guidance (RFG), which initializes from the current observation and leverages single-step denoising to extract predictive visual features, guiding action generation with minimal latency. Extensive experiments demonstrate that BagelVLA outperforms existing baselines by a significant margin on multiple simulated and real-world benchmarks, particularly in tasks requiring multi-stage reasoning.
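Read literally, the abstract suggests RFG replaces an iterative denoising loop with a single velocity evaluation that starts from the observation latent. Below is a minimal sketch under that assumption; the `velocity_net` module and the flow parameterization (observation at t=0, predicted future at t=1) are assumptions, not details from the paper.

```python
import torch
import torch.nn as nn


class ResidualFlowGuidance(nn.Module):
    """Sketch of single-step residual flow denoising: the flow is initialized
    from the current observation latent rather than pure noise, so one Euler
    step can yield predictive visual features for the action head."""

    def __init__(self, velocity_net: nn.Module):
        super().__init__()
        self.velocity_net = velocity_net  # assumed signature: velocity_net(z, t) -> v

    @torch.no_grad()
    def forward(self, obs_latent: torch.Tensor) -> torch.Tensor:
        # Assumed flow path: observation at t=0, predicted future frame at t=1,
        # so a single Euler step of size 1 traverses the whole path.
        t = torch.zeros(obs_latent.size(0), device=obs_latent.device)
        velocity = self.velocity_net(obs_latent, t)  # the only network evaluation
        return obs_latent + velocity                 # predictive visual features
```

Under this reading, one forward pass stands in for a multi-step ODE solve, which is what would deliver the "minimal latency" the abstract claims.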
Problem

Research questions and friction points this paper is trying to address.

long-horizon manipulation
Vision-Language-Action
linguistic planning
visual forecasting
embodied agents
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-Language-Action
Interleaved Generation
Residual Flow Guidance
Long-Horizon Manipulation
Unified Embodied Agent