
Chen-Yi Lu

Google Scholar ID: hmsjcJwAAAAJ
Purdue University
Machine Learning · Computer Vision · Generative Modeling
Citations & Impact (all-time)
  • Citations: 259
  • h-index: 6
  • i10-index: 6
  • Publications: 10
  • Co-authors: 0
Resume
Academic Achievements
  • Selected Publications:
    1. ICCV 2025: SKALD: Learning-Based Shot Assembly for Coherent Multi-Shot Video Creation
    2. CVPR 2025: Improving Semi-supervised Semantic Segmentation with Sliced-Wasserstein Feature Alignment and Uniformity
    3. ECCV 2024: ReCon: Training-Free Acceleration for Text-to-Image Synthesis with Retrieval of Concept Prompt Trajectories
    4. Biosystems Engineering 2021: Online semi-supervised learning applied to an automated insect pest monitoring system
    5. Pest Management Science 2022: Towards intelligent and integrated pest management through an AIoT-based monitoring system
    6. Computers and Electronics in Agriculture 2023: Edge-based wireless imaging system for continuous monitoring of insect pests in a remote outdoor mango orchard
    7. IFAC-PapersOnLine 2019: Generative adversarial network based image augmentation for insect pest classification enhancement
Research Experience
  • Interned at Adobe Research in the summers of 2023 and 2024, collaborating with Mehrab Tanjim on retrieval-augmented diffusion methods for improving inference efficiency and on a learned metric for multi-shot video coherence.
  • Before joining Purdue, worked on precision agriculture at NTU, building embedded systems and machine learning algorithms for interpretable decision support, robust edge deployment, and real-world agricultural monitoring.
Education
  • Ph.D. student at Purdue University, advised by Prof. Somali Chaterji; previously a Research Assistant at National Taiwan University (NTU) with Prof. Ta-Te Lin.
Background
  • A third-year Ph.D. student at Purdue University, focusing on adversarially robust and data-efficient learning algorithms for computer vision and multimodal tasks, with broader interests in generative modeling and robust multimodal learning.