🤖 AI Summary
This work addresses constrained optimization problems in robotic perception, planning, and manipulation, where uncertainty is inherent and it is often valuable to recover multiple high-quality, feasible solutions. The authors present two frameworks that bring principles of constrained optimization to the variational inference algorithm Stein variational gradient descent (SVGD), supporting multiple types of constrained optimizers and arbitrary constraints. The resulting methods learn to approximate solution distributions without violating constraints, demonstrated on collision-free robot motion planning, robot-arm configurations on the SE(3) manifold with exact table-placement constraints, and object pose estimation from point clouds with table-placement constraints.
📝 Abstract
Many core problems in robotics can be framed as constrained optimization problems. In these problems, the robotic system often faces uncertainty, or it would be advantageous to identify multiple high-quality feasible solutions. To enable this, we present two novel frameworks for applying principles of constrained optimization to the recent variational inference algorithm Stein variational gradient descent (SVGD). Our general framework supports multiple types of constrained optimizers and can handle arbitrary constraints. We demonstrate on a variety of problems that we are able to learn to approximate distributions without violating constraints. Specifically, we show that we can build distributions of: robot motion plans that exactly avoid collisions, robot arm joint angles on the SE(3) manifold with exact table placement constraints, and object poses from point clouds with table placement constraints.
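To make the underlying machinery concrete, the sketch below shows a standard (unconstrained) SVGD particle update combined with a simple projection step onto a toy constraint set. This is an illustrative assumption, not the paper's method: the target density, box constraint, and projection-by-clipping are all hypothetical stand-ins for the geometric and kinematic constraints handled by the actual frameworks.

```python
import numpy as np

def svgd_step(X, grad_logp, h=0.5, eps=0.1):
    """One SVGD update: the kernel term attracts particles toward high
    target density while the kernel-gradient term repels them apart."""
    n = X.shape[0]
    diff = X[:, None, :] - X[None, :, :]                 # diff[j, i] = x_j - x_i
    K = np.exp(-np.sum(diff**2, axis=-1) / (2 * h**2))   # RBF kernel matrix K[j, i]
    grad_K = -diff / h**2 * K[..., None]                 # d k(x_j, x_i) / d x_j
    phi = (K.T @ grad_logp(X) + grad_K.sum(axis=0)) / n  # Stein variational direction
    return X + eps * phi

# Hypothetical toy problem: standard Gaussian target with the feasibility
# constraint x >= 0.5, enforced here by projection (clipping) after each step.
rng = np.random.default_rng(0)
X = rng.uniform(0.5, 5.0, size=(50, 1))   # feasible initial particles
grad_logp = lambda x: -x                  # score of N(0, I)
for _ in range(300):
    X = svgd_step(X, grad_logp)
    X = np.clip(X, 0.5, None)             # project back onto the constraint set
```

Every iterate stays feasible by construction, while the particle set spreads out to approximate the (truncated) target distribution rather than collapsing to a single optimum.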