Can Yaras
Google Scholar ID: KmjObzwAAAAJ
PhD Student, University of Michigan
Deep Learning · Optimization
Citations & Impact
All-time
  • Citations: 220
  • H-index: 8
  • i10-index: 7
  • Publications: 16
  • Co-authors: 3
Academic Achievements
  • Published several papers, including:
    - MonarchAttention: Zero-Shot Conversion to Fast, Hardware-Aware Structured Attention (NeurIPS'25)
    - Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation (ICML'24)
    - Understanding Deep Representation Learning via Layerwise Feature Compression and Discrimination (JMLR)
    - Zero-Shot Conversion to Monarch-Structured Attention (ICML'25, ES-FoMo III Workshop)
    - Explaining and Mitigating the Modality Gap in Contrastive Multimodal Learning (CPAL'25)
    - Invariant Low-Dimensional Subspaces in Gradient Descent for Learning Deep Matrix Factorizations (NeurIPS'23, M3L Workshop)
    - Neural Collapse with Normalized Features: A Geometric Analysis over the Riemannian Manifold (NeurIPS'22)
    - Linear Convergence Analysis of Neural Collapse with Unconstrained Features (NeurIPS'22, OPT Workshop)
    - Miniaturizing a Chip-Scale Spectrometer Using Local Strain Engineering and Total-Variation Regularized Reconstruction (Nano Letters)
    - Randomized Histogram Matching: A Simple Augmentation for Unsupervised Domain Adaptation in Overhead Imagery (IEEE J-STARS)
Research Experience
  • Conducting doctoral research at the University of Michigan, focusing on hardware-aware design of efficient machine learning algorithms.
Education
  • Earned an undergraduate degree from Duke University in Electrical and Computer Engineering, with a second major in Mathematics and a minor in Computer Science; currently pursuing a PhD at the University of Michigan, advised by Qing Qu and Laura Balzano.
Background
  • Currently a final-year PhD candidate in Electrical and Computer Engineering at the University of Michigan, with research interests in the hardware-aware design of efficient machine learning algorithms that exploit low-dimensional structure.