1. Developed a safety shield providing provably safe human-robot collaboration; 2. Formulated a theoretical framework for integrating formal safety guarantees into reinforcement learning (RL) training, showing that such guarantees can substantially improve policy performance; 3. Introduced a method for aligning robot behavior with user preferences from a single instruction, reducing the need for extensive user feedback; 4. Deployed these methods on multiple robot platforms through international collaborations with academic and industrial partners.
Research Experience
Postdoctoral Researcher at the Autonomous Systems Lab, focusing on: 1. Developing a safety shield for provably safe human-robot collaboration; 2. Formulating a theoretical framework for integrating formal safety guarantees into RL training; 3. Introducing a method for preference alignment from a single user instruction, enabling personalized robot behavior.
Education
Ph.D.: Technical University of Munich, Computer Engineering, Advisor: Matthias Althoff; Postdoctoral Scholar: Stanford University, Autonomous Systems Lab, Advisor: Marco Pavone.
Background
Research Interests: Teaching robots to work safely with humans. Professional Field: Computer Engineering. Summary: Aiming to develop robots that support human workers in sectors such as manufacturing, healthcare, and geriatric care by taking over tedious, strenuous, and dangerous tasks.
Miscellany
Vision: To deploy autonomous robots that support humans in everyday tasks across industrial, household, and geriatric-care settings.