ManiDreams: An Open-Source Library for Robust Object Manipulation via Uncertainty-aware Task-specific Intuitive Physics

📅 2026-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited robustness of robotic manipulation under perceptual, parametric, and structural uncertainty. While existing approaches primarily minimize prediction error, they rarely model or constrain the underlying sources of uncertainty explicitly. To bridge this gap, we propose a modular planning framework that, for the first time, unifies all three uncertainty types within a closed-loop manipulation pipeline. By leveraging distributional state representations, backend-agnostic dynamics prediction, and declarative constraint optimization, our method explicitly constrains uncertainty during planning. The framework provides composable abstraction interfaces that enhance the robustness of any base policy without requiring retraining. Evaluated on the ManiSkill benchmark, our approach significantly outperforms reinforcement learning baselines, maintains stable performance under diverse perturbations, and demonstrates broad applicability across pushing, grasping, and insertion tasks, as well as in real-world deployment.

📝 Abstract
Dynamics models, whether simulators or learned world models, have long been central to robotic manipulation, but most focus on minimizing prediction error rather than confronting a more fundamental challenge: real-world manipulation is inherently uncertain. We argue that robust manipulation under uncertainty is fundamentally an integration problem: uncertainties must be represented, propagated, and constrained within the planning loop, not merely suppressed during training. We present and open-source ManiDreams, a modular framework for uncertainty-aware manipulation planning over intuitive physics models. It realizes this integration through composable abstractions for distributional state representation, backend-agnostic dynamics prediction, and declarative constraint specification for action optimization. The framework explicitly addresses three sources of uncertainty: perceptual, parametric, and structural. It wraps any base policy with a sample-predict-constrain loop that evaluates candidate actions against distributional outcomes, adding robustness without retraining. Experiments on ManiSkill tasks show that ManiDreams maintains robust performance under various perturbations where the RL baseline degrades significantly. Runnable examples on pushing, picking, catching, and real-world deployment demonstrate flexibility across different policies, optimizers, physics backends, and executors. The framework is publicly available at https://github.com/Rice-RobotPI-Lab/ManiDreams.
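The sample-predict-constrain loop described in the abstract can be sketched as a toy in plain NumPy. This is an illustrative reconstruction under stated assumptions, not the ManiDreams API: the function `sample_predict_constrain` and its arguments are hypothetical names, the dynamics here are a trivial single-step point model, and the "distributional state" is a particle set standing in for perceptual uncertainty.

```python
import numpy as np

def sample_predict_constrain(base_action, state_particles, dynamics, cost, constraint,
                             n_candidates=32, noise=0.1, seed=0):
    """One planning step of a sample-predict-constrain loop (illustrative sketch,
    not the ManiDreams API).

    state_particles: (P, D) array of state samples encoding perceptual uncertainty.
    dynamics:   fn(states, action) -> (P, D) next states; any backend could stand in.
    cost:       fn(next_states) -> scalar expected task cost.
    constraint: fn(next_states) -> per-particle violation (> 0 means violated).
    """
    rng = np.random.default_rng(seed)
    # Sample: perturb the base policy's proposal to get candidate actions.
    candidates = base_action + noise * rng.standard_normal((n_candidates, base_action.shape[0]))
    best_action, best_score = base_action, np.inf
    for a in candidates:
        # Predict: propagate every state particle through the dynamics model.
        nxt = dynamics(state_particles, a)
        # Constrain: penalize the fraction of particles that violate the constraint.
        violation = float(np.mean(constraint(nxt) > 0.0))
        score = cost(nxt) + 100.0 * violation
        if score < best_score:
            best_action, best_score = a, score
    return best_action

# Toy usage: push a point mass toward the origin while keeping x >= -1 (a wall).
particles = np.random.default_rng(1).normal([2.0, 0.0], 0.1, size=(64, 2))
dyn = lambda s, a: s + a                     # trivial single-step dynamics
cost = lambda s: float(np.mean(np.sum(s**2, axis=1)))
con = lambda s: -1.0 - s[:, 0]               # violated when x < -1
act = sample_predict_constrain(np.array([-0.5, 0.0]), particles, dyn, cost, con)
```

The design point the sketch tries to capture is that the base policy's action is never executed blindly: candidates are scored against the whole particle distribution, so an action that looks good for the mean state but violates a constraint for some particles is rejected.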
Problem

Research questions and friction points this paper is trying to address.

robust manipulation
uncertainty
intuitive physics
robotic manipulation
object manipulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

uncertainty-aware manipulation
intuitive physics
distributional planning
composable abstractions
robust robotic control