Scholar
Milad Hashemi
Google Scholar ID: LiqLdKYAAAAJ
Google
Computer Architecture
Machine Learning
Systems
Homepage
Google Scholar
Citations & Impact
All-time
Citations
1,853
H-index
17
i10-index
18
Publications
20
Co-authors
0
Contact
GitHub
Publications
2 items
SWE-fficiency: Can Language Models Optimize Real-World Repositories on Real Workloads?
2025
Cited
0
ECO: An LLM-Driven Efficient Code Optimizer for Warehouse Scale Computers
2025
Cited
0
Resume (English only)
Academic Achievements
- Paper: 'Data-Driven Offline Optimization for Architecting Hardware Accelerators', ICLR 2021
- Paper: 'Oops I Took A Gradient: Scalable Sampling for Discrete Distributions', ICML 2021 (Outstanding Paper Award Honorable Mention)
- Paper: 'No MCMC for me: Amortized sampling for fast and stable training of energy-based models', ICLR 2021
- Paper: 'A Hierarchical Neural Model of Data Prefetching', ASPLOS 2021 (IEEE Micro Top Picks Honorable Mention)
- Paper: 'Learned Hardware/Software Co-Design of Neural Accelerators'
- Paper: 'Neural Execution Engines: Learning to Execute Subroutines', NeurIPS 2020
- Paper: 'An Imitation Learning Approach to Cache Replacement', ICML 2020
- Paper: 'Learning Execution through Neural Code Fusion', ICLR 2020 and ML for Systems Workshop @ ISCA-2019
- Paper: 'Learning Memory Access Patterns', ICML 2018
- Paper: 'Continuous Runahead: Transparent Hardware Acceleration for Memory Intensive Workloads', MICRO 2016 (Nominated for the Best Paper Award)
- Paper: 'Efficient Execution of Bursty Applications', CAL 2015 (Best of CAL 2016)
- Paper: 'Accelerating Dependent Cache Misses with an Enhanced Memory Controller', ISCA 2016
- Paper: 'Filtered Runahead Execution with a Runahead Buffer', MICRO 2015
- Paper: 'MorphCore: An Energy-Efficient Microarchitecture for High Performance ILP and High Throughput TLP', MICRO 2012 (Best Paper Award)
Research Experience
Worked in the HPS research group; published papers at multiple international conferences and participated in various research projects.
Education
2011-2016: Ph.D. in Electrical and Computer Engineering from The University of Texas at Austin, advised by Professor Yale Patt.
Background
Currently a Principal Scientist at Google DeepMind. Research interests include computer architecture, machine learning, and systems.
Miscellany
Professional service:
- Co-Editor, IEEE Micro Special Issue on Machine Learning for Systems, September 2020
- Co-organizer, Graph Representation Learning and Beyond, co-located with ICML 2020
- Co-founder/steering committee, ML for Systems Workshop, co-located with NeurIPS 2018-2023
- Co-founder/organizer, ML for Computer Architecture and Systems Workshop, co-located with ISCA 2019-2022