Runming Yang
Tsinghua University
Google Scholar ID: HhTh7rQAAAAJ
Research interests: LLM, Distillation
Links: Homepage, Google Scholar
Citations & Impact (all-time)
Citations: 57
H-index: 3
i10-index: 1
Publications: 7
Co-authors: 0
Contact
Email: yrm22@mails.tsinghua.edu.cn
GitHub
Publications (9 items)
1. Can We Trust LLMs on Memristors? Diving into Reasoning Ability under Non-Ideality (2026). Citations: 0
2. ProFit: Leveraging High-Value Signals in SFT via Probability-Guided Token Selection (2026). Citations: 3
3. Revisiting Model Interpolation for Efficient Reasoning (2025). Citations: 0
4. Timber: Training-free Instruct Model Refining with Base via Effective Rank (2025). Citations: 0
5. PTQTP: Post-Training Quantization to Trit-Planes for Large Language Models (2025). Citations: 0
6. Shadow-FT: Tuning Instruct via Base (2025). Citations: 0
7. InfiJanice: Joint Analysis and In-situ Correction Engine for Quantization-Induced Math Degradation in Large Language Models (2025). Citations: 0
8. Quantization Meets Reasoning: Exploring LLM Low-Bit Quantization Degradation for Mathematical Reasoning (2025). Citations: 0