Xuan Yang
Google Scholar ID: HaA2AWIAAAAJ
Research interests: computer architecture, deep learning, machine learning, computer vision
Citations & Impact (all-time)
Citations: 5,550
H-index: 15
i10-index: 19
Publications: 20
Co-authors: 0
Publications
20 items
Resume
Academic Achievements
  • “DNN Dataflow Choice is Overrated”, arXiv, September 2018
  • “Optimized Coarse-Grained Dataflow for Scalable NN Accelerators”, ASPLOS 2019 (to appear)
  • “TETRIS: Scalable and Efficient Neural Network Acceleration with 3D Memory”, ASPLOS 2017
  • “Programming Heterogeneous Systems from an Image Processing DSL”, ACM TACO, Vol. 14, Issue 3, September 2017
  • “A Systematic Approach to Blocking Convolutional Neural Networks”, arXiv, June 2016
Research Experience
  • Working with Prof. Mark Horowitz in the Stanford VLSI Research Group
  • Jun. 2018–Sep. 2018: Intern at Google AI Perception, Mountain View, CA
  • Jun. 2016–Sep. 2016: Intern at Apple, Cupertino, CA
  • Jun. 2014–Sep. 2014: Intern at Samsung Research America, San Jose, CA
  • Fall 2013: Teaching Assistant for EE 271 Introduction to VLSI Systems at Stanford
Background
  • Ph.D. student in the Department of Electrical Engineering, Stanford University
  • Research interests in computer architecture and systems
  • Focus on energy-efficient and high-performance acceleration for deep learning and computer vision applications
  • Designed a systematic framework to analyze the design space of Deep Neural Network (DNN) accelerators, covering dataflow, loop transformation, and resource allocation
  • Developed an automatic hardware generation toolchain that generates DNN accelerators from the Halide domain-specific language (DSL)
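To illustrate the kind of design-space analysis described above, here is a minimal, self-contained sketch of tiling exploration for one convolutional layer: enumerate candidate tile sizes, keep only those whose working set fits an assumed on-chip buffer, and pick the tiling with the lowest estimated off-chip traffic. All layer dimensions, the buffer size, and the cost model are made-up example values for illustration, not taken from the author's framework.

```python
from itertools import product

# Hypothetical example layer: K output channels, C input channels,
# H x W output spatial size, R x R filters. Values are illustrative.
K, C, H, W, R = 64, 64, 56, 56, 3
BUFFER_WORDS = 32 * 1024  # assumed on-chip buffer capacity, in words

def footprint(tk, th):
    """Buffer words needed for one (tk x th x W) output tile, the input
    region (with halo) that produces it, and weights for tk channels."""
    out = tk * th * W
    inp = C * (th + R - 1) * (W + R - 1)
    wts = tk * C * R * R
    return out + inp + wts

def dram_traffic(tk, th):
    """Rough off-chip traffic model: inputs are re-read once per
    output-channel tile, weights once per row tile, outputs written once."""
    n_k = (K + tk - 1) // tk   # number of output-channel tiles
    n_h = (H + th - 1) // th   # number of row tiles
    inputs  = n_k * C * H * W
    weights = n_h * K * C * R * R
    outputs = K * H * W
    return inputs + weights + outputs

# Exhaustively search a small tiling space for the legal point
# (fits in the buffer) with minimum estimated DRAM traffic.
best = min(
    ((tk, th) for tk, th in product([1, 2, 4, 8, 16, 32, 64],
                                    [1, 2, 4, 7, 8, 14, 28, 56])
     if footprint(tk, th) <= BUFFER_WORDS),
    key=lambda t: dram_traffic(*t),
)
print("best tiling:", best, "traffic (words):", dram_traffic(*best))
```

A real framework would also enumerate dataflows (which loops map to parallel hardware) and loop orders, but the same fits-in-buffer / minimize-traffic structure applies.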