DiG-Flow: Discrepancy-Guided Flow Matching for Robust VLA Models

📅 2025-12-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
VLA models exhibit insufficient robustness under distributional shift and complex multi-step tasks, primarily due to inadequate semantic representation alignment under flow-matching training. To address this, we propose a geometric regularization-based representation optimization method: leveraging distributional discrepancies between observation and action embeddings, we construct a geometry-guided signal; representation-level intervention is then achieved via monotonic weight modulation and residual embedding updates—without altering the flow-matching trajectory. Theoretically, our method guarantees convergence and ensures inference stability. It incurs negligible computational overhead and is plug-and-play compatible with existing VLA architectures. Experiments demonstrate substantial improvements in task success rates under multi-step manipulation and data-constrained settings, achieving an average +12.3% gain on the Ravens and RealRobot benchmarks.

Technology Category

Application Category

📝 Abstract
Vision-Language-Action (VLA) models trained with flow matching have demonstrated impressive capabilities on robotic manipulation tasks. However, their performance often degrades under distribution shift and on complex multi-step tasks, suggesting that the learned representations may not robustly capture task-relevant semantics. We introduce DiG-Flow, a principled framework that enhances VLA robustness through geometric regularization. Our key insight is that the distributional discrepancy between observation and action embeddings provides a meaningful geometric signal: lower transport cost indicates compatible representations, while higher cost suggests potential misalignment. DiG-Flow computes a discrepancy measure between empirical distributions of observation and action embeddings, maps it to a modulation weight via a monotone function, and applies residual updates to the observation embeddings before flow matching. Crucially, this intervention operates at the representation level without modifying the flow matching path or target vector field. We provide theoretical guarantees showing that discrepancy-guided training provably decreases the training objective, and that guided inference refinement converges with contraction. Empirically, DiG-Flow integrates into existing VLA architectures with negligible overhead and consistently improves performance, with particularly pronounced gains on complex multi-step tasks and under limited training data.
Problem

Research questions and friction points this paper is trying to address.

Enhances VLA robustness via geometric regularization
Addresses performance degradation under distribution shifts
Improves complex multi-step task handling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Geometric regularization enhances VLA robustness
Discrepancy measure modulates observation embeddings before flow matching
Residual updates at representation level without modifying flow path
🔎 Similar Papers
No similar papers found.