🤖 AI Summary
Diffusion- and flow-based models for robot control typically require large-scale interaction data and incur high training costs. Method: The paper proposes a test-time policy-enhancement paradigm that requires no additional training: it convexly combines distribution-level score functions from multiple pre-trained diffusion or flow-matching models, fusing policies directly in distribution space. Contribution/Results: The authors prove that, for suitable convex weights, the composed policy can outperform any individual base policy, and they design a general plug-and-play framework for heterogeneous policy integration that supports cross-modal fusion (e.g., vision-language-action with vision-action policies). Extensive evaluations on Robomimic, PushT, RoboTwin, and real-robot tasks demonstrate significant improvements in both policy performance and generalization.
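To make the composition rule concrete, the following sketch writes out the convex combination of base scores and the shape of the theoretical claim. The notation ($s_i$, $w_i$, $K$) is ours, chosen for exposition; it is not taken from the paper.

```latex
% Sketch of distribution-level score composition (our notation, not the paper's).
% Each base policy i contributes a score s_i(x,t) = \nabla_x \log p_i(x,t);
% GPC samples actions from the convex combination of these scores:
\[
  s_{\mathrm{comp}}(x, t) \;=\; \sum_{i=1}^{K} w_i \, s_i(x, t),
  \qquad w_i \ge 0, \quad \sum_{i=1}^{K} w_i = 1.
\]
% Claim (per the abstract): for some choice of weights w, the one-step
% functional objective under s_comp improves on that of every individual
% s_i, and a Gronwall-type bound propagates this per-step improvement
% along the entire reverse-time generation trajectory.
```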
📝 Abstract
Diffusion-based models for robotic control, including vision-language-action (VLA) and vision-action (VA) policies, have demonstrated significant capabilities. Yet their advancement is constrained by the high cost of acquiring large-scale interaction datasets. This work introduces an alternative paradigm for enhancing policy performance without additional model training. Perhaps surprisingly, we demonstrate that the composed policies can exceed the performance of either parent policy. Our contribution is threefold. First, we establish a theoretical foundation showing that the convex composition of distributional scores from multiple diffusion models can yield a superior one-step functional objective compared to any individual score. A Grönwall-type bound is then used to show that this single-step improvement propagates through entire generation trajectories, leading to systemic performance gains. Second, motivated by these results, we propose General Policy Composition (GPC), a training-free method that enhances performance by combining the distributional scores of multiple pre-trained policies via a convex combination and test-time search. GPC is versatile, allowing for the plug-and-play composition of heterogeneous policies, including VA and VLA models, as well as those based on diffusion or flow-matching, irrespective of their input visual modalities. Third, we provide extensive empirical validation. Experiments on Robomimic, PushT, and RoboTwin benchmarks, alongside real-world robotic evaluations, confirm that GPC consistently improves performance and adaptability across a diverse set of tasks. Further analysis of alternative composition operators and weighting strategies offers insights into the mechanisms underlying the success of GPC. These results establish GPC as a simple yet effective method for improving control performance by leveraging existing policies.
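To illustrate the test-time recipe, here is a minimal, hedged Python sketch: two pre-trained score functions are composed via a convex weight, actions are drawn with a toy probability-flow-ODE sampler, and the weight is chosen by grid search against a task-level evaluator. All names here (`score_a`, `score_b`, `evaluate`, `sample_action`) are illustrative placeholders under assumed diffusion-sampler conventions, not the authors' released implementation.

```python
# Illustrative GPC-style composition; stand-in names, not the paper's API.
import numpy as np

ACTION_DIM = 7  # e.g., a 7-DoF arm action; purely illustrative

def compose_scores(score_a, score_b, alpha):
    """Convex combination alpha * s_a(x, t) + (1 - alpha) * s_b(x, t)."""
    return lambda x, t: alpha * score_a(x, t) + (1.0 - alpha) * score_b(x, t)

def sample_action(score, rng, steps=50):
    """Toy Euler sampler: integrate the score from noise (t=1) toward t=0.

    In practice each base policy's own diffusion / flow-matching sampler
    would be reused; GPC only swaps in the composed score.
    """
    x = rng.standard_normal(ACTION_DIM)
    dt = 1.0 / steps
    for k in range(steps):
        t = 1.0 - k * dt
        x = x + dt * score(x, t)
    return x

def test_time_search(score_a, score_b, evaluate, alphas=np.linspace(0, 1, 11)):
    """Grid search over the convex weight, keeping the best evaluation.

    `evaluate` maps a zero-argument action sampler to a scalar (e.g., a
    rollout success rate); its implementation is task-specific and omitted.
    """
    rng = np.random.default_rng(0)
    best_alpha, best_val = None, -np.inf
    for alpha in alphas:
        score = compose_scores(score_a, score_b, alpha)
        val = evaluate(lambda: sample_action(score, rng))
        if val > best_val:
            best_alpha, best_val = alpha, val
    return best_alpha, best_val
```

The simple grid search above is only meant to show the control flow; the paper additionally analyzes alternative composition operators and weighting strategies, which would slot in where `compose_scores` and the `alphas` grid appear here.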