FrontierNet: Learning Visual Cues to Explore (RA-L 2025) – proposes an RGB-only frontier-based system for efficient exploration
VidBot: Learning Generalizable 3D Actions from In-the-Wild 2D Human Videos for Zero-Shot Robotic Manipulation (CVPR 2025, Oral at EgoVis Workshop) – enables zero-shot robotic manipulation from in-the-wild human videos
FuncGrasp: Learning Object-Centric Neural Grasp Functions from Single Annotated Example Object (ICRA 2024) – infers continuous grasp functions for unseen objects using one annotated example
Anthropomorphic Grasping with Neural Object Shape Completion (RA-L 2023, co-first author) – grasps unseen objects using single-view visual input
TexPose: Neural Texture Learning for Self-Supervised 6D Object Pose Estimation (CVPR 2023) – reformulates pose estimation as joint texture and pose optimization