Chao Fang
Google Scholar ID: 3wg-QTgAAAAJ
Shanghai Qi Zhi Institute
Research interests: efficient ML, AI accelerators, hardware-software co-design, precision-scalable computing, RISC-V
Citations & Impact
All-time
Citations: 401
H-index: 11
i10-index: 11
Publications: 20
Co-authors: 23
Contact
No contact links provided.
Publications
CD-PIM: A High-Bandwidth and Compute-Efficient LPDDR5-Based PIM for Low-Batch LLM Acceleration on Edge-Device · 2026 · Cited: 0
A Scheduling Framework for Efficient MoE Inference on Edge GPU-NDP Systems · arXiv.org, 2026 · Cited: 0
P3-LLM: An Integrated NPU-PIM Accelerator for LLM Inference Using Hybrid Numerical Formats · 2025 · Cited: 0
Precision-Scalable Microscaling Datapaths with Optimized Reduction Tree for Efficient NPU Integration · 2025 · Cited: 0
SnipSnap: A Joint Compression Format and Dataflow Co-Optimization Framework for Efficient Sparse LLM Accelerator Design · 2025 · Cited: 0
APT-LLM: Exploiting Arbitrary-Precision Tensor Core Computing for LLM Acceleration · 2025 · Cited: 0
Efficient Precision-Scalable Hardware for Microscaling (MX) Processing in Robotics Learning · 2025 · Cited: 0
Enable Lightweight and Precision-Scalable Posit/IEEE-754 Arithmetic in RISC-V Cores for Transprecision Computing · 2025 · Cited: 0
Resume (English only)
Co-authors
23 total
Zhongfeng Wang · Nanjing University
Aojun Zhou · The Chinese University of Hong Kong
Jinming Lu · University of California, Santa Barbara
Haonan Wang · University of Southern California, ISI
Marian Verhelst · Micas - ESAT - KU Leuven, Belgium