Qiyang Min
Google Scholar ID: gDc5LbUAAAAJ
Affiliation: ByteDance
Research interest: Large Language Models
Citations & Impact (all-time)
Citations: 19
H-index: 3
i10-index: 0
Publications: 8
Co-authors: 0
Contact
No contact links provided.
Publications (9 items)
ConceptMoE: Adaptive Token-to-Concept Compression for Implicit Compute Allocation (2026) · Cited by 0
Scaling Latent Reasoning via Looped Language Models (2025) · Cited by 0
SeeDNorm: Self-Rescaled Dynamic Normalization (2025) · Cited by 0
UltraMemV2: Memory Networks Scaling to 120B Parameters with Superior Long-Context Learning (2025) · Cited by 0
Expert Race: A Flexible Routing Strategy for Scaling Diffusion Transformer with Mixture of Experts (2025) · Cited by 0
Frac-Connections: Fractional Extension of Hyper-Connections (2025) · Cited by 0
Over-Tokenized Transformer: Vocabulary is Generally Worth Scaling (2025) · Cited by 0
Ultra-Sparse Memory Network (arXiv.org, 2024) · Cited by 0
Resume (English only)
Co-authors: 0 (list not available)