A research scientist at the Robotics and AI Institute.
Developed fast optimization algorithms for simulation, planning, and control of robotic systems.
Designed and implemented differentiable physics tools for trajectory tracking, planning, and reinforcement learning tasks in robotic locomotion and manipulation.
Developed optimization algorithms that enable game-theoretic reasoning for autonomous vehicles.
Combined reinforcement learning with sampling-based algorithms to solve contact-rich manipulation tasks.
Unified collision detection and contact dynamics into a single optimization problem.
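For intuition only (a hypothetical one-dimensional toy, not the unified formulation referenced above): a single contact can be resolved by a small complementarity condition, requiring the contact impulse lam >= 0 and the next-step gap x_next >= 0 with lam * x_next = 0. For one point mass over a ground plane this has a closed form:

```python
# Illustrative sketch (assumed model): a 1D point mass above a ground plane
# at x = 0, stepped with semi-implicit Euler plus an inelastic contact
# impulse lam solved from the complementarity condition
#   0 <= lam  ⊥  x_next >= 0.

def contact_step(x, v, h=0.01, g=-9.81, m=1.0):
    """One time step; returns (x_next, v_next)."""
    v_free = v + h * g                     # velocity ignoring contact
    # Smallest impulse that prevents penetration, clamped at zero:
    lam = max(0.0, m * (-x / h - v_free))
    v_next = v_free + lam / m
    x_next = x + h * v_next                # >= 0 by construction
    return x_next, v_next

# Drop a ball from x = 1 m: it falls, makes contact, and comes to rest.
x, v = 1.0, 0.0
for _ in range(100):
    x, v = contact_step(x, v)
    assert x >= -1e-12                     # non-penetration holds every step
```

The clamp on `lam` is what couples detection and dynamics in one solve: the same expression decides whether contact is active and, if so, how strong the impulse must be.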
Took a physics- and optimization-first approach to address some of the limitations of current physics engines.
Proposed a simple approach to neural object contact simulation.
Leveraged differentiable contact simulation for control through contact.
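A minimal sketch of the idea (an assumed toy setup, not the system referenced above): roll out contact dynamics, define a loss on the final state, and improve the control by gradient descent through the rollout. A differentiable simulator would supply the gradient analytically; here a central finite difference stands in for it to keep the sketch dependency-free:

```python
# Illustrative sketch (assumed setup): choose the launch velocity v0 of a
# 1D point mass so that it reaches a target height after 0.5 s, by gradient
# descent on a rollout loss.  The finite difference below is a stand-in for
# the analytic gradient a differentiable contact simulator would provide.

def rollout(v0, steps=50, h=0.01, g=-9.81):
    x, v = 1.0, v0
    for _ in range(steps):
        v = v + h * g
        v = max(v, -x / h)        # inelastic ground contact at x = 0
        x = x + h * v
    return x

target = 0.5
loss = lambda v0: (rollout(v0) - target) ** 2

v0, eps, lr = 2.0, 1e-5, 1.0
for _ in range(200):
    grad = (loss(v0 + eps) - loss(v0 - eps)) / (2 * eps)
    v0 -= lr * grad

# After optimization, rollout(v0) lands close to the target height.
```

The contact clamp makes the rollout only piecewise differentiable, which is exactly why purpose-built differentiable contact simulators are useful for control through contact.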
Coupled an online estimation technique with a fast dynamic game solver, enabling an autonomous vehicle to optimize its trajectory while learning the objective functions of the surrounding cars.
Developed fast solvers for constrained dynamic games and applied them to complex autonomous driving scenarios.
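As a toy stand-in (assumed quadratic costs, not the constrained game formulation above): in a two-player game where each cost is quadratic in that player's own decision, each best response is linear, and alternating best responses converges to a Nash equilibrium when the coupling is weak:

```python
# Toy sketch of game-theoretic reasoning (assumed costs): two players with
# scalar decisions u1, u2 and costs
#   J_i(u_i, u_j) = (u_i - r_i)**2 + c * u_i * u_j.
# Setting dJ_i/du_i = 0 gives the linear best response
#   u_i = r_i - c * u_j / 2,
# and alternating best responses is a contraction for |c| < 2.

def nash_by_best_response(r1, r2, c, iters=100):
    u1 = u2 = 0.0
    for _ in range(iters):
        u1 = r1 - c * u2 / 2.0   # player 1 best-responds to u2
        u2 = r2 - c * u1 / 2.0   # player 2 best-responds to u1
    return u1, u2

u1, u2 = nash_by_best_response(r1=1.0, r2=2.0, c=1.0)
# At the fixed point neither player can improve unilaterally:
# u1 = 1 - u2/2 and u2 = 2 - u1/2 both hold (here u1 ≈ 0, u2 ≈ 2).
```

Real driving games replace the scalars with trajectories and add constraints, which is what makes fast specialized solvers necessary.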
Background
Research interests include machine learning, optimization, and computer vision, applied to improving robot manipulation and locomotion. The goal is to bridge theoretical foundations with practical algorithms, enabling robots to achieve human-level whole-body manipulation in real-world environments.
Miscellany
Personal interests include robot learning and optimization.