Yan Di
Google Scholar ID: HSlGGvwAAAAJ
Harbin Institute of Technology, Shenzhen
Research interests: pose estimation
Citations & Impact (all-time)
  • Citations: 1,084
  • H-index: 16
  • i10-index: 22
  • Publications: 20
  • Co-authors: 9
Academic Achievements
  • 2024: Five papers accepted to CVPR 2024: KP-RED and ShapeMaker (joint shape canonicalization, segmentation, retrieval, and deformation); HiPose (near state-of-the-art performance on instance-level pose estimation at high speed); SecondPose (state-of-the-art on category-level pose estimation); MOHO (synthetic-to-real hand-held object reconstruction with a new synthetic dataset)
  • 2024: SG-Bot (scene-graph-based object rearrangement) accepted to ICRA 2024
  • 2023: DDF-HO (hand-held object reconstruction) and CommonScenes (scene generation from scene graph) accepted to NeurIPS 2023
  • 2023: U-RED (unsupervised shape retrieval and deformation in indoor scenes) accepted to ICCV 2023
  • 2023: SST (neural reconstruction from RGB sequences) accepted to ICME 2023
  • 2023: IPCC-TP (trajectory prediction in traffic scenes) accepted to CVPR 2023
  • 2023: Self-supervised category-level pose estimation paper accepted to IEEE Robotics and Automation Letters (RAL 2023)
  • 2023: Robotic grasping paper MonoGraspNet accepted to ICRA 2023
  • 2023: 3D object detection method OPA-3D (category-level pose estimation in traffic scenes) accepted to RAL 2023
  • 2022: ZebraPoseSAT won ‘Overall Best Segmentation Method’ and ‘Best BlenderProc-Trained Segmentation Method’ at BOP Challenge, ECCV 2022; ranked second in RGB-only pose estimation (partial code contribution)
  • 2022: Category-level pose estimation works GPV-Pose, RBP-Pose, and SSP-Pose accepted to CVPR 2022, ECCV 2022, and IROS 2022 respectively
  • 2021: Instance-level pose estimation work SO-Pose accepted to ICCV 2021
  • 2019–2020: Dynamic reconstruction works accepted to ICCV 2019 and ICRA 2020